Can AI tools monitor online multiplayer environments as well as humans?
Description
[Introduction] Have you ever been harmed by verbal harassment in online multiplayer games? Is your game experience affected by overly strict keyword filtering? Your troubles will be solved by AI in the future! In this episode, we're going to talk about AI tools that monitor online multiplayer games.
Host: Aiqi Xu, student journalist at the University of Melbourne
Guest: Lucy Sparrow, Lecturer at the University of Melbourne in the School of Computing and Information Systems, specializing in human-computer interaction.
[Transcript]
Aiqi Xu: Welcome to Level Up, where you can pick up new knowledge about games and level up as a gamer. This is Aiqi Xu. Today we are going to talk about AI moderation tools in online multiplayer games, and we are excited to have Lucy here; she is an expert in digital ethics.
Lucy Sparrow: Thank you so much for having me on. I am Lucy Sparrow, a lecturer here at the University of Melbourne. My work is primarily in the field of human-computer interaction.
Aiqi Xu: I'm curious about what made you decide to research multiplayer gaming environments.
Lucy Sparrow: Yeah, so the main reason is basically that I love playing games.
Aiqi Xu: Yeah, I do.
Lucy Sparrow: You do?
[LAUGH]
Lucy Sparrow: I've always loved playing games, ever since I was a really small kid, and I also really love research. So I thought, "Why don't I just combine two things that I really, really like for my career?" And that's what I did, basically. Yeah, and I think they're just super fascinating worlds, because I have this really strong belief that play is a really integral and important part of being human, of being alive. I think it's something that every single human being on Earth can relate to.
Aiqi Xu: We know you are working on AI tools that monitor game environments.
Lucy Sparrow: Yes.
Aiqi Xu: I'm wondering about something: we know the strictness of moderation systems differs quite a lot between games. As a gamer, I find that we usually rely on keyword filtering, manual monitoring, or player reporting to protect the game environment. But for AI tools, how can we determine how strict they should be?
Lucy Sparrow: Yeah, okay, so it gets quite complicated. Recently I presented a paper that I wrote with my team on AI moderation in multiplayer games, and particularly the ethics around it. And we found that people had quite different ways of understanding and viewing AI moderation. For some people, it was a really, really good way to make the game safer, because one of the benefits of AI over typical keyword-based automated moderation is that it at least claims to be better at detecting the context of a word. So it's not just about whether you say some kind of curse word, right? It might identify that curse word as problematic, but then look at it in the context of the sentence. That could mean, say, if someone uses a word that's contained within another word, or they use it in a positive kind of way, the AI is technically better able to detect that than traditional keyword filters, which would just say, "Hey, you used a bad word, and therefore..."
Aiqi Xu: So it can figure out the context of their conversation?
Lucy Sparrow: That's the claim, yes, and that's what it's working towards. It's not necessarily always good at doing that yet, because they're quite new tools, but the goal is that they get to that point.
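To make the contrast Lucy describes concrete, here is a minimal illustrative sketch (not part of the episode): the tiny word list, sample messages, and keyword_filter function are all made up, and they only show why plain keyword matching misfires on words hidden inside other words, which is the gap a context-aware model is claimed to close.

```python
# Minimal illustrative sketch (not from the episode): a naive keyword filter
# flags any message containing a listed word, even when the word only appears
# inside another, harmless word. This is the over-blocking behaviour that a
# context-aware moderation model is claimed to avoid by scoring whole sentences.

BANNED_WORDS = {"ass"}  # tiny made-up word list, purely for illustration


def keyword_filter(message: str) -> bool:
    """Flag a message if any banned word occurs anywhere in it (substring match)."""
    text = message.lower()
    return any(word in text for word in BANNED_WORDS)


messages = [
    "nice pass, well played!",  # harmless, but "ass" hides inside "pass"
    "you absolute ass",         # genuinely abusive use of the same word
]

for m in messages:
    print(f"{m!r} -> flagged: {keyword_filter(m)}")
# Both messages get flagged, so the filter cannot tell the friendly one apart;
# a context-aware model would instead judge each sentence as a whole.
```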
Aiqi Xu: But what AI faces is also human, you know; there are some bad actors. They still try to produce toxic content, and they're always updating their words.
Lucy Sparrow: Yes, so that is, as you say, a really, really common thing in a lot of games. Gamers, players in particular, are really, really well known for always trying to find a loophole; no matter what the rule is, they will always try to find a way around it, right?
Aiqi Xu: It's kind of like homophones. They use certain other words that are pronounced similarly to toxic words, like "beach", or something close to that F word.
Lucy Sparrow: Yes, yes.
Aiqi Xu: So how do you feel about actions like this, attempts to avoid being identified by AI tools?
Lucy Sparrow: So interestingly, when I was speaking to some AI moderation developers, they said that they're aware of this issue, and it's an ongoing problem, right? Language is always updating so quickly, and people are always finding new ways to insult each other, basically. So what they actually do is hire sociolinguists, people with a linguistics background, to do research on the terms and the ways people are using those terms online.
Aiqi Xu: So they try to predict it?
Lucy Sparrow: Yes, so they try to predict it. They try to be up to date constantly on the new words, and then they can feed the AI that information.
Aiqi Xu: I never thought about that.
Lucy Sparrow: Yeah, it would be great. Yeah, I know. I was really surprised to learn that this profession suddenly has this new flavor.
Aiqi Xu: We look forward to seeing AI tools actually being used in games all over the world. Thank you so much.
Lucy Sparrow: Thank you so much.
————————————————
[Background music]:
8-Bit Music On. Created by moodmode. CC-BY.
[Cover image]:
Created by Aiqi Xu.
Information
Author | 徐艾琪 |
Organization | 徐艾琪 |
Website | - |
Tags |