xuenay: (Default)

I feel like my progress with my academy game has been frustratingly slow. Lots of natural language, little code. Over Christmas leave I finally put together a simple Bayes net and belief propagation implementation, but when I was about to move on to start actually implementing the game side of things, I realized that some of the things that I had planned didn’t quite work after all. Then I got stuck for a while, trying to decide what would work in the first place. But then I fortunately started having new ideas, and finally came up with a core gameplay mechanic that seems promising.

For example, suppose that the player wants to persuade an NPC to dance with her. Below is a representation of what the player character knows of the situation, somewhat similar to how it might be shown in the game.

Some of you will recognize this as resembling an influence diagram (but note that it doesn’t follow all the conventions of one, so if you do know influence diagrams, don’t try to read it literally as one). The rectangles, which would ordinarily be called decision nodes, are two possible dialogue options that the player has. She could try to take a flattering approach and say “I heard you’re a good dancer” in an attempt to appeal to the other’s vanity, or she could rather try to focus on his kindness and nurturing instinct by admitting that she herself has never danced before.

The ovals – uncertainty nodes – represent things that influence the way that the NPC will react to the different dialogue options. A character who’s confident of his dancing skills is going to react to flattery differently than someone who’s insecure about them. The ovals form a standard Bayesian network (or several of them), meaning that the player can have differing amounts of knowledge about the different uncertainty nodes. Maybe she knows that the NPC has high self-esteem, and that this makes it more likely that he’s confident about his dancing skills, but she doesn’t know how good he actually is at dancing, which also influences his confidence about his skills.

Finally, the diamonds – usually called value nodes, but I call these reaction nodes – represent the different ways that the NPC might react to the various dialogue options. Depending on the values of the uncertainty nodes, flattery may please him, intimidate him, or make him react in an indifferent manner. In order to get the NPC to dance with her, the player has to elicit some number of positive reactions from the NPC, and avoid causing any bad reactions. A simple counter sums positive reactions minus negative reactions that have been caused while discussing the topic of dance: if it exceeds some threshold, he agrees, and if it falls below another threshold, he’ll refuse to even consider the thought anymore. The thresholds are partially determined by how big of a favor the player is asking for, partially by how much the NPC likes/trusts the player character.
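To make the counter concrete, here’s a minimal sketch in Python of how the topic counter might work. The threshold values and reaction scores are purely illustrative placeholders, not actual design numbers:

```python
# Sketch of the reaction counter for a single conversation topic.
# Thresholds and reaction scores are illustrative, not real design values.

AGREE_THRESHOLD = 3    # at or above this, the NPC agrees to the request
REFUSE_THRESHOLD = -2  # at or below this, he refuses to consider it further

def resolve_topic(reactions):
    """Sum reactions in order (+1 pleased, 0 indifferent, -1 negative)
    and return the outcome of the conversation topic."""
    total = 0
    for r in reactions:
        total += r
        if total >= AGREE_THRESHOLD:
            return "agrees"
        if total <= REFUSE_THRESHOLD:
            return "refuses"
    return "undecided"
```

For example, three pleased reactions in a row (`[1, 1, 1]`) would cross the agreement threshold, while two negative ones (`[-1, -1]`) would end the attempt.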

Suppose the player doesn’t know much about this NPC, but she also doesn’t want to risk things by just saying things that might or might not upset him. Now, in addition to picking one of the dialogue options, she can also experiment with the network to see which variables would influence his decision the most. Suppose that she is considering the “appeal to kindness” route, and has determined that he really needs to be kind in order for this strategy to work. Now she can select the kindness node, and bring up a different set of dialogue options which are aimed at revealing how kind he is. But here too she needs to be careful, because another counter also tracks the NPC’s overall interest in the conversation. If she gets too many indifferent or negative reactions while talking about his kindness, he may just end the conversation and wander off, before she has ever had a chance to get back to the topic of dance. So there’s also an element of risk there.

Alternatively, if the player doesn’t think that getting to dance is that important after all, she can just try her luck with one of the original dialogue options. Maybe those will reveal something about the NPC that’s useful for something more important. After the conversation is over, the NPC’s basic liking for the player character may be adjusted depending on the total positive and negative reactions that she got out of him.

It’s also easy to add further complications or advanced options: e.g. an option to pressure the NPC into something, which, if successful, contributes points to the goal counter, but reduces their liking for you afterwards.

So that’s the basic mechanic. Notice that this basic structure is a very flexible one: instead of talking to an NPC, the player character could have broken into his room in order to look for her stolen necklace, for instance. In that case the decision nodes could represent different places where the necklace might be, the chance nodes might offer chances to look for clues of where he might have hidden the necklace, and the “hit point” counters might represent time before someone shows up, or before the player character loses her nerve and decides to get out. So the same mechanic can be recycled for many different situations.

As for the educational content, the fact that you’ll need to read the Bayes network involved with each variable in order to decide what to do should teach a bunch of useful things, like a basic ability to read the network, evaluate the value of information related to each chance node, and notice and understand d-separation. Not to mention coming to understand how many different situations can all be seen through the same probabilistic framework. And it ought to all relate to the game content in a meaningful manner.

(I also have lots of ideas for other core mechanics. I also should really get the first version of this game done eventually. So I’m making a note of all those other ideas, but for now I’ll focus on milking this core mechanic for all that it’s worth.)

One interesting question is: should the results of the decision nodes be deterministic or random? In other words, suppose the player knows the values of every node that influences e.g. the outcome of the flattery dialogue option. Does that mean that they know what the outcome of choosing that option will be? Or do they only know the odds of each outcome?

I’m inclined to go with the deterministic route: if you know everything that influences the outcome, then you know the final result. I’m not sure that having a random factor in there would be all that realistic, and more importantly, if you’ve spent your time and resources on discovering the value of every parent node of an option, then you deserve to know exactly what outcome your choice will have, rather than getting screwed over by a 10% chance for an unexpected result. And if you don’t know the value of each parent, then there will effectively already be a random factor involved in terms of what you don’t know.
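As a sketch of what the deterministic route might look like in code – the parent variables and the outcome table here are invented for illustration – an outcome becomes certain exactly when every parent is known:

```python
from itertools import product

# Sketch of the deterministic route: the flattery outcome is a pure
# function of its parent nodes. Parent names and the outcome table
# are made up for illustration.

FLATTERY_OUTCOME = {
    # (confident_dancer, high_self_esteem) -> reaction
    (True, True): "pleased",
    (True, False): "pleased",
    (False, True): "indifferent",
    (False, False): "intimidated",
}

def outcome_distribution(beliefs):
    """Given the player's beliefs P(parent=True), return a distribution
    over outcomes. Assumes independent parent beliefs for brevity."""
    dist = {}
    for confident, esteem in product([True, False], repeat=2):
        p = ((beliefs["confident"] if confident else 1 - beliefs["confident"])
             * (beliefs["esteem"] if esteem else 1 - beliefs["esteem"]))
        outcome = FLATTERY_OUTCOME[(confident, esteem)]
        dist[outcome] = dist.get(outcome, 0.0) + p
    return dist
```

When the player knows both parents for certain (beliefs of 0 or 1), the distribution collapses to a single guaranteed outcome; otherwise her remaining ignorance shows up as odds over the possible reactions.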

This should hopefully be my last post before I actually have a playable prototype done, even if a very short one.

Originally published at Kaj Sotala. You can comment here or there.


So far I have spoken about the possibility of edugames being good, sketched out the basic idea of an edugame built around Bayesian networks, and outlined some design constraints. Now it’s finally the time to get to the actual meat of the matter – the game mechanics.

Note that everything here is subject to change. I’m aiming to outline things to a sufficient level of detail that I have a reasonable clue of how to start implementing the first prototype. Then, when I do have an early prototype together, I’ll keep experimenting with it and throw away anything that doesn’t seem fun or useful. Still, this post should hopefully give some idea of what the final product will be like.

(This post might not be enough for anyone else to start implementing an actual prototype, since there are a lot of missing pieces still. But I think I have a lot of those missing pieces in my head, it just wouldn’t be useful to try to write all of it down.)

To make things concrete enough to start implementing the game, I need to define a concrete goal for (a part of) the game, some of the concrete ways of achieving that goal, as well as the choices involved in achieving the goal. And of course I need to tie all that together with the educational goals.

Goal. Let’s say that, in the first part of the game, you are trying to get yourself voted into the Student Council as the representative of the first-year students. This requires you to first gain the favor of at least three other first-year students, so that you will be nominated to the position. After that, you need to gain the favor of the majority of the most influential students, so that you will actually be the winning candidate.

From that will follow the second part of the game, where you need to also persuade others in the council to support your agenda (whatever your agenda is), either by persuading them or having them replaced with people who are more supportive of you. But for now I will just focus on the first part.

I’m actually starting to think that I should possibly make this into more of a sandbox game, with no set goals and letting you freely choose your own goals. But I’ll go with this for the first prototype.

How to achieve the goals. The game keeps track of your relationship to the different characters. You achieve the nomination if at least three other characters like you enough. To be voted to the council, a majority of the characters have to both like and trust you more than the other candidates.

This seems like a good time to talk more about relationships, which are a rather crucial element of any social drama. Relationships are one of the main types of resource in the game, the others being your public image, personal skills, and time. I’ll cover those shortly.

Relationships. Most games that model your relationships with other characters do so by assigning each relationship a single numerical score, with different actions giving bonuses or penalties to that score. So if you give someone a gift you might get +10 to the relationship and if you insult them you might get -20, and your total relationship is the sum of all these factors.

This is a little boring and feels rather game-ish, so I would like to make it feel a little more like you’re actually interacting with real people. To keep things simple, there will still be an overall “relationship meter” (or actually several, if I can make them distinct and interesting enough), but affecting its value shouldn’t feel like just a mechanical process of giving your friends gifts until they love you.

Borrowing from The Ferrett’s three relationship meters, the ones I’m initially considering are like, trust, and infatuation/love.

Like measures the general extent to which someone, well, considers you likeable, even if they don’t necessarily trust you. A relationship that’s high on like but low on trust is the one that you might have with a nice acquaintance with whom you have intellectual conversations online, or that co-worker who’s friendly enough but who you haven’t really done any major projects with or interacted outside work. A high like makes people more inclined to help you out in ways that don’t involve any risk to them.

Things that affect liking include:

  • Having a public image that includes personality traits that other people like. Some traits are almost universally liked or reviled. Others are neutral, but people tend to like those who they feel are similar to them – or, with some traits, dissimilar.
  • Acting in the interests of others, or contrary to them.
  • People having a crush on you.
  • Other people that you’re friends with.
  • Various random events.

Trust measures the extent to which people are willing to rely on you, and the extent to which they think you’re not going to stab them in the back. A high trust may make it easier to repair damage to the other meters, since it makes people more inclined to give you another chance. It also makes people more inclined to help you out in ways that involve a risk to themselves, to vote for you, to confide in you, and to ask you for help.

Things that build up trust include:

  • Telling people information about yourself, indicating that you trust them. If you tell someone a secret you haven’t told anyone else, this will impress them more than if you tell them something that’s common knowledge. (Just don’t jump to revealing all your deepest secrets on the first meeting, or you’ll come across as a weirdo.)
  • Making and keeping promises.
  • Acting in the interests of others, or contrary to them.
  • People having a crush on you.
  • Other people that you’re friends with.
  • Various random events.

Different characters have different kinds of ideals that they are attracted to. If your public image happens to match their ideal, they may develop a crush on you and build up infatuation. The infatuation will grow in strength over time, assuming that your public image continues to match their ideal. If you then also build up their like and trust, the infatuation may turn into love.

The fact that infatuation also increases their like and trust towards you makes it easier to convert their feelings to love, but the feelings may also come crashing down very quickly if you demonstrate untrustworthiness. An infatuated character is more likely to ignore minor flaws, but anything that produces a major negative modifier to any relationship meter may cause them to become completely disillusioned and start wondering what they ever saw in you in the first place. (Love is more stable, and makes it easier to take advantage of your lovers without them leaving you. But you’d never stoop that low, would you?)

A sufficiently high love makes people willing to do almost anything for you.

Besides the things that were already mentioned, love may be affected by:

  • The amount of trust and like that the person has towards you.
  • You committing yourself to a romantic relationship with them.
  • Whether you have any other lovers (some characters are fine with sharing you, others are not).
  • You choosing to genuinely fall in love with them as well – they’ll sense this and receive a considerable boost to their love meter, but you will also end up permanently prevented from ever taking certain negative actions towards them.
  • Various random events.

Every modifier to any of the relationship meters is associated with some source, which you may try to influence. For example, suppose that you promise your friend to do something by a certain time, and then fail to do so. This will produce a negative modifier to their trust rating. You can then try to talk to them and apologize, and possibly offer some compensation for the misdeed. If you play your cards right, you may be able to erase the penalty, or even turn it into a bonus.
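A rough sketch of how such sourced modifiers might be represented – the class and event names here are my own placeholders:

```python
# Sketch: each relationship modifier is tagged with its source, so a
# specific grievance can later be addressed. All names are placeholders.

class RelationshipMeter:
    def __init__(self):
        self.modifiers = {}  # source -> value

    def add(self, source, value):
        self.modifiers[source] = value

    def amend(self, source, new_value):
        """A successful apology or compensation rewrites the modifier."""
        if source in self.modifiers:
            self.modifiers[source] = new_value

    def total(self):
        return sum(self.modifiers.values())

trust = RelationshipMeter()
trust.add("kept_promise_essay", +2)
trust.add("broke_promise_party", -3)    # the failed promise from the example
trust.amend("broke_promise_party", +1)  # apology and compensation went well
```

Because the broken promise stays attached to its source, a well-played apology can rewrite just that one entry instead of fiddling with an opaque total.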

Your public image. I have already mentioned your public image a few times, when I mentioned that your perceived personality traits influence the extent to which others like you, as well as the chance of someone developing a crush on you.

Basically, there’s a set of different personality traits that any character may or may not have. Some, like being kind or being cruel, are mutually exclusive. In the beginning, you don’t know anyone’s personality traits, nor does anyone know yours. If you act kindly, you will develop an image as a kind person, and if you act in a cruel way you’ll develop a reputation as a cruel person. And of course, not everything depends directly on your actions – your rivals may try to spread negative rumors about you.

I haven’t yet determined how exactly knowledge about your actions spreads. I don’t want all of the player’s actions to magically become common knowledge the moment the action is made, but neither do I want to keep track of every separate piece of information. And I do want to offer the player the option to try to keep some of their doings secret. My current compromise would be to keep track of who knows what for as long as the number of people who knew a particular piece of information remained under a certain limit. So if the limit was 6, then the game would keep track of any piece of information that was known to at most six people: once the seventh person found out, it would be assumed to have become common knowledge.
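In code, that compromise might look something like this (a sketch, with the limit of 6 taken from the example above):

```python
# Sketch of the "who knows what" compromise described above.
# The limit of 6 is taken from the example in the text.

COMMON_KNOWLEDGE_LIMIT = 6
COMMON = "common knowledge"

def learn(knowledge, fact, person):
    """Record that `person` now knows `fact`. Once more than
    COMMON_KNOWLEDGE_LIMIT people know it, individual tracking stops."""
    knowers = knowledge.get(fact, set())
    if knowers == COMMON:
        return  # already common knowledge, nothing left to track
    knowers = knowers | {person}
    if len(knowers) > COMMON_KNOWLEDGE_LIMIT:
        knowledge[fact] = COMMON
    else:
        knowledge[fact] = knowers
```

Up to six knowers, the game can still answer "does this particular character know?"; the seventh discovery irreversibly collapses the fact into common knowledge.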

People who like you are less inclined to pass on negative rumors about you, as well as more inclined to pass on positive ones. You can also try to spread negative rumors about your rivals yourself – but this risks developing a reputation as a lying gossip.

Personal skills. The best way of acting in any particular situation depends on what you know of the people involved. If you cultivate a certain kind of image, who will end up liking you more as a result of it, and who will end up liking you less? Is the concession you are offering to your offended friend sufficient to make them forgive you? How much should you trust your new lover or friend? Who might be spreading those nasty rumors about you?

There are several ways by which you could find this out. First, empathy skills can be learned by study: if you have a high relevant empathy skill, you can make a good guess of what someone is like, or how they might react in some situation, just based on your skill. Learning the skills takes considerable time that could be spent on other things, however.

Also, many personality traits correlate with each other, and various actions correlate with different personality traits. It’s almost as if their connections formed a… wait for it… Bayesian network! But in the beginning of the game, you only have a very rough idea of what the structure of the network is like, or what the relevant conditional probabilities are. So you have to figure this out yourself.

Ideally – and I’m not yet sure of how well I’ll get this to work – the game will give you a tool which you can use to build up your own model of what the underlying network structure might be like: a Bayesian network that you can try to play around with. In the beginning, nearly all of the nodes in the model will be unconnected with each other, though some of the most obvious nodes start out connected. For instance, you’ll know that cruel people are more likely to insult others than kind people are, though your estimate of the exact conditional probability is likely to be off.

If you suspect that two nodes in the underlying graph might be linked, you can try joining them together in your model and test how well your model now fits your observations. You can adjust both the conditional probability tables and linkages in order to create a model that most closely represents what you have seen, with the game automatically offering suggestions for the probabilities based on your experiences so far. (Just be careful to avoid overfitting.)
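One simple way the game could score a candidate model against your observations is by log-likelihood. A toy sketch, reusing the cruel-people-insult-more link with made-up data and probabilities:

```python
import math

# Sketch: score a candidate model against observations by log-likelihood.
# The observations and probabilities here are invented for illustration.

def log_likelihood(p_insult_if_cruel, p_insult_if_kind, observations):
    """observations: list of (is_cruel, did_insult) boolean pairs."""
    ll = 0.0
    for is_cruel, did_insult in observations:
        p = p_insult_if_cruel if is_cruel else p_insult_if_kind
        ll += math.log(p if did_insult else 1 - p)
    return ll

obs = [(True, True), (True, True), (False, False), (False, True)]
linked = log_likelihood(0.8, 0.2, obs)    # model where cruelty matters
unlinked = log_likelihood(0.5, 0.5, obs)  # model where it doesn't
```

Here the linked model fits the observations better (a higher, i.e. less negative, log-likelihood), which is the kind of feedback the in-game suggestion tool could surface when you join two nodes together.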

In addition, you can study various knowledge skills. Knowing psychology, for instance, may reveal some of the links in the network and their associated probabilities to you with certainty, which will automatically update your model.

Time. Everything you do takes time, and you can only be in a single place at once. Your lovers and friends will expect you to hang out with them regularly, studying skills takes time, and your plans may be completely interrupted by the friend of yours who has a mental breakdown and needs you there to comfort them RIGHT NOW. (But if you have a sufficiently good reason why you can’t make it, maybe they’ll understand. Perhaps.)

What choices do you need to make? The above discussion should have suggested some choices that you might run across in the game, such as:

  • Who do you make friends with?
  • Do you try to build a small group of strong friendships, or a large group of weak friendships?
  • Who do you trust with information that could hurt you?
  • How much time do you spend on learning the various skills?
  • How much time do you spend on chasing down the source of various rumors?
  • Do you want to spread any nasty rumors yourself?
  • Do you get romantically involved with someone?
  • If so, who will it be? One lover or many?
  • Do you risk doing things that would damage your reputation if people found out?
  • Do you generally side with the people who are your closest allies, or the ones who have actually been wronged?

And others. As you might notice, the choice you’ll want to make in most of these depends on what you’ve figured out about others… which should help us fulfill our goal of making the player genuinely interested in how to figure these things out, via the formalisms that the game employs and attempts to teach.

Feel free to suggest more in the comments!



My work on my Master’s thesis and the Bayesian academy game was temporarily interrupted when I had to focus on finishing the work I had piled up for another course. Now I’m slowly trying to get back into the flow, so here’s a post on some of the things that I’ll be trying to keep in mind while creating the game, and which should help shape its design. This post is still somewhat abstract: more concrete material should hopefully follow in the next post.

I’ve also started on the actual programming, putting together what should hopefully be a flexible Bayes net framework. (I looked at some existing libraries, but it seemed easier to just put together my own implementation.) Mostly model level stuff so far, though, so not much to show yet. I’ll probably put a copy up on Github or somewhere eventually, once it stops looking terribly embarrassing.

Constraints

“Design is the successive application of constraints until only a unique product is left.” — Donald Norman, The Design of Everyday Things

Having some constraints is nice. They help narrow down the space of possible designs and give an idea for what you need to do. So let’s establish some constraints for our game design.

Has to teach useful things about Bayesian networks / probabilistic reasoning. Kinda obvious, but still worth stating explicitly. At the same time, the game has to be fun even if you had no intrinsic interest in Bayesian networks. Balancing these two gets tough, since you can attract gamers by various story elements and interesting game mechanics, but then these elements might easily become ones that do nothing to teach the actual subject matter. My general approach for solving this is to build the mechanics so that they are all tied to various pieces of hidden information, with Bayes nets being your tool for uncovering that hidden information. More on that later.

Every choice should be interesting. Even if you manage to figure out exactly who knows what, that shouldn’t dictate the right option to pick, just as in XCOM, where correctly figuring out the probability of killing your opponent given a certain tactic isn’t enough to dictate the choice of the best tactic. Rather, correctly using the skills that the game tries to teach you should be something that better informs you of the benefits and drawbacks of the different choices you make. If there’s only one obvious option to pick, that’s not interesting.

Of course, eventually somebody will figure out some optimal strategy for playing the game which dictates an ideal decision for various situations, but that’s fine. If we design the game right, figuring out the perfect strategy should take a while.

Must not be ruined by “save-scumming”. In my previous article, I gave an example of a choice involving Bayesian networks: you are given several options for how to act, with the best option depending on exactly which character knew what. Now, what does your average video game player do when they need to make a choice based on incomplete information, and they find out the true state of affairs soon afterwards? They reload an earlier save and choose differently, that’s what.

Constantly reloading an earlier save in order to pick a better decision isn’t much fun, and really ruins the point of the whole game. But if that’s the optimal way to play, people will feel the temptation to do it, even if they know that it will ruin their fun. I would like to give people the freedom to play the game the way they like the most, but I’m worried that in this case, too much freedom would make the experience unfun.

Other games that rely on randomness try to combat save-scumming by saving the random seed, so reloading an earlier save doesn’t change the outcome of any die rolls. We could try doing the opposite: re-randomizing any hidden states once the player reloads the game. But this could turn out to be tricky. After all, if we have a large network of characters whose states we keep track of and who influence each other, setting up the game in such a way that their states can be re-randomized at will seems rather challenging. So I’m inclined to go for the roguelike approach, with only a single, constantly updating save slot per game, with no ability to go back to earlier saves. That gives us a new constraint:

Each individual game should be short. A constantly updating save means that if you lose the game, you have to start all over. This is fine with a game like Faster Than Light, where a single pass through the game only takes a couple of hours. It would be less fine in a huge epic game that took 50 hours to beat. Based on my own gut feel, I’m guessing that FTL’s couple of hours is quite close to the sweet spot – long enough that you get depth to a single game, short enough that your reaction to failure will be to start a new game rather than to quit the whole thing in disgust. So I will be aiming for a game that can be finished in, say, three hours.

This constraint is probably also a good one since my natural inclination would be to make a huge epic sprawling game. Better to go for something easier to handle at first – a limited-duration game is also easier to extensively playtest and debug. I can always expand this after finishing my thesis, making the game so far the first chapter or whatever.

One could also allow reloading but restrict where you are allowed to save. Recently I have been playing Desktop Dungeons, where your kingdom grows as you go out on quests that are about 10 minutes long each. You can’t save your progress while on a quest, but you can save it between quests, and the temptation to try just one more quest makes for an addictive experience in the same way that FTL’s short length also makes for an addictive experience. But I’m not sure of whether my current design allows for any natural “units” that could be made into no-save regions in the same way as the quests in Desktop Dungeons can.

Another consequence of having constant saves is that

Failures should be interesting.

Always make failure entertaining. In fact, failure should be more entertaining than success. Success is its own reward. If you succeed at solving a puzzle, you feel good, you barely need any confirmation from the game. Just a simple ding will be satisfying. But if you’re struggling, if the game sort of is in on the joke with you and laughs about it, and gives you a funny animation, it’s actually sort of saying, yeah we want — it’s okay to be here. You can have fun while you’re here. You’ll get there eventually — you’ll get to the other end eventually, but while you’re here you can enjoy yourself; you can relax. And that’s really something I learned sort of from doing the game, but that’s really become an ongoing principle of ours in design is to make the failure — keep failure interesting. — Scot Osterweil

Even a game being short doesn’t help if a couple of bad decisions mean that you’re stuck and would be better off restarting rather than playing to the end. FTL solves this by quickly killing you off when you’re starting to do badly: this is interesting, since you can almost always trace your failure back to some specific decision you made, and can restart with the intention not to make that mistake again. I’m not sure how well this would work in this game, though I have been thinking about splitting it into a number of substages, each with a limit on the number of actions that you are allowed to do. (First stage: become elected as the Student Council representative of the first-year students by winning them over before the elections. Etc.) Failing to achieve the objective before the time limit would lead to a game over, thus killing you quickly once you started doing too badly to recover. Possibly each stage could also be made into a single no-save unit, allowing saves between them.

Another option would be to take the Princess Maker approach: in these games, you are trying to raise your daughter to become a princess, but even if you fail in that goal, there are a variety of other interesting endings based on the choices that you made in the game. A third option would be to ensure that even a sub-optimal choice opens some new paths through the game – but it could be difficult to ensure that you never ended up in a hopelessly unwinnable state.

I’m still not entirely sure of the exact form of this constraint and the previous one: I’ll just have to try things out and see what seems to be the most fun.

The next update should either be about some concrete game mechanics (finally!) and the ways that relationships are handled in this game, or about the way that the information is represented to the player.



In my previous article, I argued that educational games could be good if they implemented their educational content in a meaningful way. This means making the player actually use the educational material to predict the possible consequences of different choices within the game, in such a manner that the choices will have both short- and long-term consequences. More specifically, I talked about my MSc thesis project, which would attempt to use these ideas to construct a learning game about Bayesian networks.

However, I said little about how exactly one would do this, or how I was intending to do it. This article will take a preliminary stab at answering that question – though of course, game design on paper only goes so far, and I will soon have to stop theorizing and start implementing prototypes to see if they’re any fun. So the final design might be something completely unlike what I’m outlining here. But advance planning should be valuable nonetheless, and perhaps this will allow others to chime in and offer suggestions or criticism.

So, let’s get started. In order to figure out the best way of meaningfully integrating something into a game, we need to know what it is and what we can do with it. What are Bayesian networks?

Bayesian networks in a nutshell

Formally, a Bayesian network is a directed acyclic graph describing a joint probability distribution over n random variables. But that definition isn’t very helpful in answering our question, so let’s try again with less jargon: a Bayesian network is a way of reasoning about situations where you would like to know a thing X, which you can’t observe directly, but you can instead observe a thing Y, which is somehow connected to X.

For example, suppose that you want to know whether or not Alice is a good person, and you believe that being a good person means caring about others. You can’t read her thoughts, so you can’t directly determine whether or not she cares about others. But you can observe the way she talks about other people, and the way that she acts towards them, and whether she keeps up with those behaviors even when it’s inconvenient for her. Combining that information will give you a pretty good estimate of whether or not she does genuinely care about others.

A Bayesian network expresses this intuitive idea in terms of probabilities: if Alice does care about people, then there’s some probability that she will exhibit these behaviors, and some probability that she does not. Likewise, if she doesn’t care about them, there’s still some other probability that she will exhibit these behaviors – she might be selfish, but still want to appear as caring, so that others would like her more. If you have an idea of what these different probabilities are like, then you can observe her actions and ask the reverse question: given these actions, does Alice care about other people (or, what is the probability of her caring)?
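To make the reverse question concrete, here’s the Alice example worked through Bayes’ rule, with made-up probabilities:

```python
# The Alice example with invented numbers: a prior on her caring, and
# how likely caring vs. non-caring people are to behave warmly.

p_caring = 0.5            # prior: no idea either way
p_warm_if_caring = 0.9    # caring people usually act warmly
p_warm_if_not = 0.3       # a selfish person may still fake warmth

def posterior_caring(prior, p_obs_if_caring, p_obs_if_not):
    """Bayes' rule: P(caring | observed warm behavior)."""
    joint_caring = prior * p_obs_if_caring
    joint_not = (1 - prior) * p_obs_if_not
    return joint_caring / (joint_caring + joint_not)

posterior = posterior_caring(p_caring, p_warm_if_caring, p_warm_if_not)
# 0.45 / (0.45 + 0.15) = 0.75: observing warmth raises P(caring) from 0.5
```

Each additional observation – her talk, her actions, her behavior when it’s inconvenient – can be chained in the same way, with the posterior from one step becoming the prior for the next.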

Towards gamifying Bayesian networks

Now how does one turn this into a game?

Possibly the simplest possible game that we can use as an example is that of Rock-Paper-Scissors. You’re playing somebody you don’t know, it’s the second round, and on the first round you both played Rock. From this fact, you need to guess what he intends to play next, and use that information to pick a move that will beat him. The observable behavior is the last round’s move, and the thing that you’re trying to predict is your opponent’s next move. (The pedants out there will correctly point out that the actual network you’d want to build for predicting RPS moves would look quite a bit different and more complicated than this one, but let’s disregard that for now.) Similarly, many games involve trying to guess what your opponent is likely to do next – based on the state of the game, your previous actions, and what you know of your opponent – and then choosing a move that most effectively counters theirs. This is particularly obvious with games such as Poker or Diplomacy.

In the previous article, we had a very simple probabilistic network involving Alice, Bob, and Charlie. From the fact that Charlie knew something, we concluded something about the probability of either Alice or Bob knowing something about it; and from the added fact that Bob knew something about it, we could refine our probability of Alice knowing something about it.

Suppose that the piece of knowledge in question was some dark secret of yours that you didn’t want anyone else to know. By revealing that secret, anyone could hurt you.

Now let’s give you some options for dealing with the situation. First, you could preemptively reveal your dark secret to people. That would hurt most people’s opinion about you, but you could put the best possible spin on it, so you wouldn’t be as badly hurt as you would if somebody else revealed it.

Second, you could try to obtain blackmail material on the people who you thought knew, in order to stop them from revealing your secret. But obtaining that material could be risky, might make them resent you in case they had never intended to reveal the secret in the first place, or might encourage them to dig up your secret if they didn’t actually know about it already. Or if you didn’t figure out everyone who knew, you might end up blackmailing only some of them, leaving you helpless against the ones you missed.

Third, you might elect to just ignore the whole situation, and hope that nobody who found out had a grudge towards you. This wouldn’t have the costs involved with the previous two alternatives, but you would risk becoming the target of blackmail or of your secret being revealed. Furthermore, from now on you would have to be extra careful about not annoying the people who you thought knew.

Fourth, you could try to improve your relationship with the people you thought knew, in the hopes that this would improve the odds of them remaining quiet. This could be a wasted effort if they were already friendly with you or hadn’t actually heard about the secret, but on the other hand, their friendship could still be useful in some other situation. Those possible future benefits might cause you to pick this option even if you weren’t entirely sure it was necessary.

Now to choose between those options. To predict the consequences of each choice, and thus to pick the best one, you would want to know 1) who exactly knew about your secret and 2) what they currently thought of you. As for 1, the example in our previous article was about just that – figuring out the probability of somebody knowing your secret, given some information about who else knew. As for 2, you can’t directly observe someone’s opinion of you, but you can observe the kinds of people they seem to hang out with, the way they act towards you, and so on… making this, too, a perfect example of something that you could use probabilistic reasoning to figure out.
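In game terms, this “predict, then choose” step is an expected-value calculation over the options. A toy sketch of how the game might compare two of them – every name, probability, and payoff here is invented for illustration:

```python
# Hypothetical beliefs produced by the network inference:
# for each person, the probability that they know the secret,
# and the probability that they're hostile towards you.
p_knows = {"Bob": 0.66, "Dana": 0.3}
p_hostile = {"Bob": 0.4, "Dana": 0.2}

REVEAL_LOSS = 10.0   # damage if someone else reveals the secret
SPIN_LOSS = 4.0      # smaller damage if you reveal it yourself, with spin

def expected_loss_ignore():
    # You get hurt if at least one person both knows and holds a grudge.
    p_safe = 1.0
    for person in p_knows:
        p_safe *= 1 - p_knows[person] * p_hostile[person]
    return REVEAL_LOSS * (1 - p_safe)

def expected_loss_reveal():
    return SPIN_LOSS  # certain, but limited, damage

options = {"ignore": expected_loss_ignore(), "reveal": expected_loss_reveal()}
best = min(options, key=options.get)
```

With these particular numbers, ignoring the situation has the lower expected loss (about 3.1 versus 4) – but shift the probabilities a little and the preemptive reveal wins instead.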

You may also notice that your choices here have both short- and long-term consequences. Choosing to ignore the situation means that you’ll wish to be extra careful about pissing off the people who you think know, for example. That buys you the option of focusing on something more urgent now, at the cost of narrowing your options later on. One could also imagine different overall strategies: you could try to always be as honest and open as possible about all your secrets, so that nobody could ever blackmail you. Or you could try to obtain blackmail material on everyone and keep everybody terrified of ever pissing you off, and so on. This is starting to sound like a game! (Also like your stereotypical high school, which is nice, since that was our chosen setting.)

Still, so far we have only described an isolated choice, and some consequences that might follow from it. That’s still a long way from having specified a game. After all, we haven’t answered questions like: what exactly are the player’s goals? What are the things that they could be doing, besides blackmailing or befriending these people? The player clearly wants other people to like them – so what goal does it serve to be liked? What can they do with that?

We still need to specify the “big picture” of the game in more detail. More about that in the next article.

Interlude: figuring out the learning objectives

Ideally, the designer of an edugame should have some specific learning objectives in mind, build a game around those, and then use their knowledge about the learning objectives to come up with tests that measure whether the students have learned those things. I, too, would ideally state some such learning goals and then use them to guide the way I designed the “big picture” of the game.

Now my plans for this game have one problem (if you can call it that): the more I think of it, the more it seems like many natural ways of structuring the overall game would teach various ways of applying the basic concepts of the math in question and seeing its implications. For example, it could teach the player the importance of considering various alternative interpretations about an event, instead of jumping to the most obvious-seeming conclusion, or it might effectively demonstrate the way that “echo chambers” of similar-minded people who mostly only talk with each other are likely to reach distorted conclusions. And while those are obviously valuable lessons, they are not necessarily very compatible with my initial plan of “take some existing exam on Bayes networks and figure out whether the players have learned anything by having them do the exam”. The lessons that I described primarily teach critical thinking skills that take advantage of math skills, rather than primarily teaching math skills. And critical thinking skills are notoriously hard to measure.

Also, a game which teaches those lessons does not necessarily need to teach very many different concepts relating to Bayes nets – rather, it might give a thorough understanding of a small number of basic concepts and a few somewhat more advanced ones. A typical math course, in contrast, attempts to cover a much larger set of concepts.

Of course, this isn’t necessarily a bad thing either – a firm grounding in the basic concepts may get people interested in learning the more advanced ones on their own. For example, David Shaffer’s Escher’s World was a workshop in which players became computer-aided designers working with various geometric shapes. While the game did not directly teach much that might come up on a math test, it allowed the students to see why various geometric concepts might be interesting and useful to know. As a result, the students’ grades improved both in their mathematics and art classes, as the game had made the content of those classes meaningful in a new way. The content of math classes no longer felt like just abstract nonsense, but rather something whose relevance to interesting concepts felt obvious.

Again, our previous article mentioned that choices in a game become meaningful if they have both short- and long-term consequences. Ideally, we would want to make the actions of the players meaningful even beyond the game – show them that the mathematics of probabilistic reasoning is interesting because it’s very much the kind of reasoning that we use all the time in our daily lives.

So I will leave the exact specification of the learning objectives until later – but I do know that one important design objective will be to make the game enjoyable and compelling enough that people will be motivated to play it voluntarily, even in their spare time. And so that, ideally, they would find in it meaning that went beyond it being just a game.

For the next article, I’ll try to come up with a big-picture description of the game that seems fun – and if you people think that it does, I’ll finally get to work on the prototyping and see how much of that I can actually implement in an enjoyable way.

Next post in series: Bayesian academy game – constraints.

Originally published at Kaj Sotala. You can comment here or there.


As a part of my Master’s thesis in Computer Science, I am designing a game which seeks to teach its players a subfield of math known as Bayesian networks, hopefully in a fun and enjoyable way. This post explains some of the basic design and educational philosophy behind the game, and will hopefully also convince you that educational games don’t have to suck.

I will start by discussing a simple-but-rather-abstract math problem and look at some ways by which people have tried to make math problems more interesting. Then I will consider some of the reasons why the most commonly used ways of making them interesting are failures, look at the things that make the problems in entertainment games interesting and the problems in most edutainment games uninteresting, and finally talk about how to actually make a good educational game. I’ll also talk a bit about how I’ll try to make the math concerning Bayesian networks relevant and interesting in my game, while a later post will elaborate more on the design of the game.

So as an example of the kinds of things that I’d like my game to teach, here’s an early graph from the Coursera course on Probabilistic Graphical Models. For somewhat mathy people, it doesn’t represent anything complicated: there’s a deterministic OR gate Y that takes as input two binary random variables, X1 and X2. For non-mathy people, that sentence was probably just some incomprehensible gibberish. (If you’re one of those people, don’t worry, just keep reading.)

I’m not going to go through the whole example here, but the idea is to explain why observing the state of X1 might sometimes give you information about X2. (If the following makes your eyes glaze over, again don’t worry – you can just skip ahead to the next paragraph.) Briefly, if you know that Y is true, then either X1 or X2 or both must be true, and in two out of three of those possible cases, X2 is true. But if you find out that X1 is true, then that eliminates the case where X1 was false and X2 was true, so the probability of X2 being true goes down from ⅔ to ½. In the course, the explanation of this simple case is then used to build up an understanding of more complicated probabilistic networks and how observing one variable may give you information about other variables.
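Since the network is tiny, the whole argument can be checked by brute-force enumeration. A sketch, assuming X1 and X2 are independent and 50/50, so that all the worlds consistent with the evidence are equally likely:

```python
from itertools import product

# All four possible worlds for the two binary variables X1 and X2,
# assumed independent and 50/50, so each world is equally likely.
worlds = list(product([False, True], repeat=2))

def p_x2_given(evidence):
    """P(X2 = true | evidence), by counting equally likely worlds."""
    consistent = [w for w in worlds if evidence(w)]
    return sum(1 for (x1, x2) in consistent if x2) / len(consistent)

# Y = X1 OR X2. Knowing Y is true leaves three worlds; X2 holds in two.
p_after_y = p_x2_given(lambda w: w[0] or w[1])                    # 2/3
# Also learning X1 is true leaves only two worlds; X2 holds in one.
p_after_y_and_x1 = p_x2_given(lambda w: (w[0] or w[1]) and w[0])  # 1/2
```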

For mathy types the full explanation is probably relatively easy to follow, at least if you put in a little bit of thought. But for someone who is unfamiliar with math – or worse, scared of it – it might not be. So the question is, how to convert that explanation into a form that is somewhat easier to understand?

The traditional school math approach would be to convert the abstract explanation into a concrete “real-life” case. Let’s say that the variables are people. X1 becomes Alice, X2 becomes Bob, and Y becomes Charlie. A variable being true means that the person in question has heard about some piece of information – say, that Lord Deathfist the Terrible is on a rampage again. If one takes the lines to mean “Alice tells Charlie stuff and Bob tells Charlie stuff (but Alice and Bob don’t talk with each other)”, the “OR gate” thing becomes relatively easy to understand. It means simply that Charlie knows about the rampage if either Alice or Bob, or both, know about it and have told Charlie.

Now we could try to explain it in common-sense terms like this: “Suppose that Charlie knows about Lord Deathfist. That means that either Alice, or Bob, or both, know about it, and have told him. Out of those three possibilities, Alice knows about it in two cases (the one where only Alice knows, and the one where both Alice and Bob know), and there’s one case where she does not (the one where only Bob knows), so the chance of Alice knowing is ⅔. But if we are also told that Bob knows, that only leaves the possibilities where 1) only Bob knows and 2) both Alice and Bob know. That’s one possibility out of two for Alice knowing it, so the chance of Alice knowing goes down from ⅔ to ½.”

This is… slightly better. Maybe. We still have several problems. For one, it’s still easy to lose track of what exactly the possible scenarios are, though we might be able to solve that particular problem by adding animated illustrations and stuff.

But still, the explanation takes some effort to follow, and you still need to be motivated to do so. And if we merely dress up this abstract math problem with some imaginary context, that still doesn’t make it particularly interesting. Who the heck are these people, and why should anyone care about what they know? If we are not already familiar with them, “Alice” and “Bob” aren’t much better than X1 or X2 – they are still equally meaningless.

We could try to fix that by picking names we were already familiar with – like Y was Luke Skywalker, and X1 and X2 were Han Solo and Princess Leia, and Luke would know about the Empire’s new secret plan if either Han or Leia had also found out about it, and we wanted to know the chance of all of them already knowing this important piece of information.

But we’d still be quite aware of the fact that the whole Star Wars gimmick was just coating for something we weren’t really interested in. Not to mention that the whole problem is more than a little artificial – if Leia tells Luke, why wouldn’t Luke just tell Han? And even if we understood the explanation, we couldn’t do anything interesting with it. Like, knowing the logic wouldn’t allow us to blow up the Death Star, or anything.

So some games try to provide that kind of significance for the task: work through an arithmetic problem, and you get to see the Death Star blown up as a reward. But while this might make it somewhat more motivating to play, we’d rather play an action game where we could spend all of our time shooting at the Death Star and not waste any time doing arithmetic problems. Additionally, the action game would also allow us to shoot at other things, like TIE Fighters, and that would be more fun.

Another way of putting this would be that we don’t actually find the math task itself meaningful. It’s artificial and disconnected from the things that we are actually interested in.

Let’s take a moment to contrast this to the way that one uses math in commercial entertainment games. If I’m playing XCOM: Enemy Unknown, for instance, I might see that my enemy has five hit points, while my grenade does three points of damage. Calculating the difference, I see that throwing the grenade would leave my enemy with two hit points left, enough to shoot back on his turn. Fortunately I have another guy nearby, and he hasn’t used his grenade either – but I also know that there are at least six more enemies left on the battlefield. Do I really want to use both of my remaining grenades, just to take out one enemy? Maybe I should just try shooting him… both of my guys have a 50% chance to hit him with their guns, and they’d do an average of three points of damage on a hit, so that’s an expected three points of damage if both take the shot, or – calculating it differently – a 25% chance of killing the alien dead… which aren’t very good odds, so maybe the other guy should throw the grenade and the other shoot, and since grenades magically never miss in this game, I’d then have a 50% chance of killing the alien.
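The arithmetic running through that paragraph can be spelled out explicitly. A sketch using the numbers from the text, treating each successful shot as exactly three damage (rather than an average) to keep the kill-chance figures simple:

```python
enemy_hp = 5
hit_chance = 0.5      # each soldier's chance to hit with a gun
shot_damage = 3       # damage per successful shot (treated as fixed here)
grenade_damage = 3    # grenades never miss in this game

# Expected damage if both soldiers shoot:
expected_damage = 2 * hit_chance * shot_damage        # 3.0

# Killing with shots alone needs both to hit (3 + 3 >= 5):
p_kill_two_shots = hit_chance ** 2                    # 0.25

# Grenade first: the enemy drops to 2 HP, which one hit finishes,
# so the kill chance of "grenade + shot" is just the hit chance.
remaining_hp = enemy_hp - grenade_damage              # 2
p_kill_grenade_and_shot = hit_chance                  # 0.5
```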

So as I play XCOM, I keep running arithmetic calculations through my head. But unlike in the “solve five arithmetic problems, then you get to see the Death Star blowing up” example, these calculations aren’t just glued-on. In fact, while playing, I never actually think that I am solving a set of arithmetic and probability problems in order to be rewarded with the sight of the enemies dying and my soldiers surviving. I think that I’m out killing aliens and doing my best to keep my guys alive. (How many of you had realized that XCOM is an educational game that, among other things, drills you on arithmetic problems? Well, it is!)

This can be a bad thing in some senses – it means that I’m engaging in “stealth learning”, learning a skill without realizing it. Not realizing it means that I can’t consciously reflect and introspect on my learning, and I may have difficulties transferring the skill to other domains, since my unawareness of what I’m doing makes it harder to notice if I happen to run across a problem that employs the same principles but looks superficially different. But it does also mean that the calculations are very much meaningful, and that I don’t view them as an unnecessary annoyance that I’d rather skip and move on to the good parts.

The game scholars Katie Salen and Eric Zimmerman write:

Another component of meaningful play requires that the relationship between action and outcome is integrated into the larger context of the game. This means that an action a player takes not only has immediate significance in the game, but also affects the play experience at a later point in the game. Chess is a deep and meaningful game because the delicate opening moves directly result in the complex trajectories of the middle game – and the middle game grows into the spare and powerful encounters of the end game. Any action taken at one moment will affect possible actions at later moments.

The calculations in XCOM are meaningful because they let me predict the immediate consequences of my choices. Those immediate consequences will influence the outcome of the rest of the current battle, and the end result of the battle will influence my options when I return to the strategic layer of the game, where my choices will influence how well I will do in future battles…

In contrast, the arithmetic exercises in a simple edutainment game aren’t very meaningful: maybe they let you see the Death Star blowing up, but you don’t care about the end result of the calculations themselves, because they don’t inform any choices that you need to make. Of course, there can still be other ways by which the arithmetic “game” becomes meaningful – maybe you get scored based on how quickly you solve the problems, and then you end up wanting to maximize your score, either in competition with yourself or others. Meaning can also emerge from the way that the game fits into a broader social context, as the competition example shows. But of course, that still doesn’t make most edutainment games very fun.

So if we wish people to actually be motivated to solve problems relating to Bayesian networks, we need to embed them in a context that makes them meaningful. In principle, we could just make them into multistage puzzles: DragonBox is fantastic in the way that it turns algebraic equations into puzzles, where you need to make the right choices in the early steps of the problem in order to solve it in the most efficient manner. But while that is good for teaching abstract mathematics, it doesn’t teach much about how to apply the math. And at least I personally find games with a story to be more compelling than pure puzzle games – and also more fun to design.

So I’ll want to design a game in which our original question of “does Bob also know about this” becomes meaningful, because that knowledge will inform our choices, and because there will be long-term consequences that are either beneficial or detrimental, depending on whether or not we correctly predicted the probability of Bob knowing something.

My preliminary design for such a game is set in an academy that’s inspired both by Harry Potter’s Hogwarts (to be more specific, the Hogwarts in the fanfic Harry Potter and the Methods of Rationality) and Revolutionary Girl Utena’s Ohtori Academy. Studying both physical combat and magic, the students of the academy are a scheming lot, ruled over by an iron-fisted student council made up of seven members… And figuring out things like exactly which student is cheating on their partner and who else knows about it, may turn out to be crucial for a first-year student seeking to place herself and her chosen allies in control of the council. If only she can find out which students are trustworthy enough to become her allies… misreading the evidence about someone’s nature may come to cost her dearly later.

In my next post, I will elaborate more on the preliminary design of the game, and of the ways in which it will teach its players the mathematics of Bayesian networks.

