Iran Daily, No. 7,252, 06 March 2023, Page 6

Georgia Institute of Technology scholar Amy S. Bruckman:

Death of Twitter is a positive step towards improving our information space

Let’s talk about the next part of your book. Why and in what ways do you think the Internet has changed the way we think?
Well, the fundamental ways that knowledge is formed are different now. It’s interesting to think about the fact that echo chambers are not always bad. The example I use in my class is from back in the Usenet days. There were two newsgroups about feminism, one moderated and one unmoderated. The moderated feminism newsgroup declared that you had to accept the principles of feminism to post there, and anything that contradicted them would be deleted. The unmoderated one said you could write whatever you wanted.
In the moderated group, we would start with principles like everyone deserves equal rights regardless of their gender, and then we could have a good conversation about what comes next once you accept those facts. The unmoderated group was more like 4chan. It was a flame fest, where you couldn’t have any conversations of a serious nature. The moderated newsgroup was an echo chamber, since you had to agree to a certain ideology to participate. A lot of times echo chambers are bad, but they can be good if it’s a group of people starting from correct assumptions.
An example of a bad echo chamber would be the one reflected in a paper that my student, Sijia Xiao, and I did. We studied people who believe in the “chemtrail conspiracy,” which is the idea that the condensation trails visible behind airplanes are deliberately sprayed for evil purposes. If you get a group of people together to say, “We believe people should be treated equally, now let’s talk about what comes next,” that’s good. If you get a group of people together to say, “Chemtrails are destroying the world, now what’s next?” that’s bad. But this ability to form groups that have strong sets of shared assumptions is really novel.
Maybe the Internet facilitated it to an extent that was not possible before, but we had the same phenomenon before on a smaller scale.
Oh, of course. Absolutely.

So, are we seeing more conspiracy theories emerge because of the Internet?
People have always believed in crazy things. There’s an empirical question of whether more people believe in more crazy things than they used to. I don’t have the data to answer that. It’s certainly the case that the speed with which a non-standard belief can spread has increased. You could look at, for example, the QAnon conspiracy. It was created relatively quickly and spread to a relatively large number of people. The Internet absolutely played a role in that. But, of course, people believing in crazy things is as old as time.
Is it coincidental that we are, according to some scholars at least, in a post-truth society that emerged after the invention of the Internet?
Well, I don’t know what you mean when you say we’re in a post-truth society. There’s a lot of truth everywhere and there’s a lot of craziness everywhere. Are there more non-standard beliefs than there used to be? I don’t know. If you look back a few hundred years, lots of people believed all kinds of things that are objectively wrong. If you look at 18th-century medicine, it’s all insane. Are we more insane than we used to be? I don’t know. How do we measure? You have to define your constructs.
So, you do not buy into the whole concept of a post-truth society. Is that correct?
I do think we have some problems with our current information space that could be significantly improved. And I do think that the death of Twitter, as it is currently unfolding, is a very positive step towards improving our information space. One of the things I argue in my book is that a for-profit company can never do the right thing for individuals or communities. When Twitter is driven by capitalist priorities and puts good quarterly earnings above the needs of individuals and communities, it can never do the right thing for people.
There are a lot of people moving to the nonprofit platform Mastodon as a result of the Twitter controversy. I hope we look back on this moment as one where nonprofit social media gained a real foothold with real people. Now, the fact that my academic friends are all using it doesn’t mean it’s going to catch on more broadly. There are also some problems with Mastodon and the fediverse that still need solving. There’s no question about that. But I think there’s enough potential that we’ll look back on this moment as an important one where things began changing for the better.
That was interesting. How do you see the role of artificial intelligence in all of this? Some scholars argue that it is taking away our agency as humans.
Artificial intelligence is going to do both harm and good. The burden is on technologists and politicians to steer us toward it doing more good. There’s a lot of potential harm. It’s crazy. There are people making decisions based on black-box algorithms that they don’t understand.
Think about the example of Amazon trying to use an AI system to pick who to hire. They hire a lot of people, so going through all the resumes is a lot of work. They took a database of the people they had hired in the past and trained a machine learning model on it to assess whether a new job candidate would be like their former successful employees, to help them decide whether they should hire that candidate.
They found that the algorithm discriminated against women and minorities. So, they went back through their dataset and tried to remove gender, which didn’t fix it. Then, they went back and removed any implicit associations with gender. For instance, if you play softball in the US, you’re most likely a woman, but if you play baseball, you’re most likely a man. They took out things like softball and baseball, which inadvertently identify someone’s gender. It still didn’t help, because the truth is that their historical hiring practices were so biased that any tool trained on that dataset is going to just recreate that bias. So, they finally said, “Forget it, we’re not going to do this,” which was the responsible thing to do.
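For readers who want to see the mechanism Bruckman describes, the following is a minimal sketch, not Amazon’s actual system. It uses synthetic data and hypothetical feature names (a “played softball” proxy and a generic “skill” score) to show how a model trained on biased historical hiring decisions can reproduce that bias even after the explicit gender column is removed.

```python
# A minimal, illustrative sketch (not Amazon's system): a proxy feature
# correlated with gender lets a model recreate bias baked into the labels,
# even though gender itself is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical hiring" data: 1 = woman, 0 = man.
gender = rng.integers(0, 2, n)

# Hypothetical proxy feature strongly correlated with gender
# (the "played softball" example from the interview).
softball = (((gender == 1) & (rng.random(n) < 0.9)) |
            ((gender == 0) & (rng.random(n) < 0.05))).astype(int)

# A genuinely job-relevant signal.
skill = rng.normal(0, 1, n)

# Biased historical decisions: equally skilled women were hired less often.
hired = (skill + 1.5 * (gender == 0) + rng.normal(0, 1, n)) > 1.0

# Train WITHOUT the explicit gender column -- only skill and the proxy.
X = np.column_stack([skill, softball])
model = LogisticRegression().fit(X, hired)

# The model still recommends men at a higher rate, because the proxy
# lets it reconstruct the bias present in the historical labels.
pred = model.predict(X)
print("Predicted hire rate, men:  ", pred[gender == 0].mean())
print("Predicted hire rate, women:", pred[gender == 1].mean())
```

Running the sketch shows a noticeably higher predicted hire rate for men, which is the point of the example: removing the protected attribute does not remove the bias when correlated proxies remain.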
There are all kinds of very worrisome things that are happening as a result of AI’s increased use in society. Computer vision is another one. Will we have any right to privacy if they perfect face recognition? Can I ever go to the mall and not have people try to market to me based on which store I paused at or which billboard I looked at? So, the right to privacy is very much contested.
Even more important than our privacy rights as consumers in a democratic society are the rights of people in authoritarian regimes, which are an extreme concern. So, yes, it’s a little annoying if I end up getting lots of ads for the Gap brand because I spent 10 minutes at the Gap store at the mall and a camera saw me. Yes, that’s slightly annoying, but the way that this AI technology is being used in less free societies is egregious.

In the fifth chapter, you talk rather extensively about identification, anonymity, and the importance of using pseudonyms. There is an argument that if people are not recognized by their actual identity on forums and social media, they won’t feel the need to behave responsibly. So, they would say and do whatever they want. How would you respond to that criticism?
People care about the reputation of their pseudonyms. To back up a second, I teach my students that we are all always pseudonymous. It’s not two buckets, anonymous or identified; there’s also pseudonymous. It’s a multi-dimensional space with different degrees of identifiability.
At one end of the spectrum, if you committed a horrible crime and posted evidence of it on an anonymous account, most likely state powers could find you. It would take a lot of work, but they could find you. So, “truly anonymous” is not anonymous. At the other end of the spectrum, let’s say you’re posting under your real name. Is it possible that someone posting under their real name is actually someone pretending to be someone else and they fooled everyone? Of course, it is. So, no one is ever truly anonymous, and no one is ever truly identified. We’re all always somewhere in the middle.
When you use any kind of name to refer to yourself, you continually leak little details about who you are, and it becomes closer and closer to identifying you over time. I have an identified Reddit account with my real name, and I have another Reddit account that is personal. But at some point, I mentioned on that account that I’m a professor. At some point, I mentioned that I live in Atlanta. I was walking through the park, saw a duck doing something funny, and posted a photo of the duck. By the time you add together a professor who lives in Atlanta and watches a particular TV show, all of a sudden it becomes clear who that really is. So, we’re all always somewhere in a complicated space between anonymous and identified.
Now, I guess your question was about how that shapes how people behave. If people care about the reputation of their pseudonyms, which they often do, then there’s not that much difference. Is it true that having a throwaway account makes it easier for people to behave truly badly? Yes, it does. And it could be that some platforms may want to disallow that. Others may have good reason to allow it. For instance, if you are interested in radical political speech and believe that there’s a constructive function to political speech, you had better let people be closer to anonymity.

TO BE CONTINUED
