The Quietus - A new rock music and pop culture website

Books

Technology Can’t Fight Back: Untangling The Web With Aleks Krotoski
Emily Bick, August 4th, 2013 06:12

Emily Bick speaks to the writer, broadcaster and social psychologist about her new book, Untangling the Web, the perceived influence of the internet on our lives and, perhaps more importantly, vice versa

Of all the futurologists and prognosticators at the Future Now event at Yoko Ono's Meltdown, social psychologist Dr. Aleks Krotoski's talk about her new book, Untangling the Web: What the Internet Is Doing to You, was one of the most engaging and believable – at least in part because, instead of assuming that trends in technology and behaviour would lead to any specific future scenario, Krotoski emphasised human agency over technological determinism. The web she writes about is made of people and relationships, and of how these intersect with digital structures – and these intersections are where things become interesting.

Untangling the Web examines everything we do online, from finding love and building communities to porn, death, religion and political organisation. Krotoski writes in a friendly, chatty style, similar to the one she uses to present BBC Radio 4's technology series, The Digital Human, with several (not always flattering) personal anecdotes used to ease readers into explanations of complex topics. That said, each chapter traces whole histories of research, and gives detailed scientific evidence to debunk some of the wilder hyperbolic claims out there about the social effects of the web. There is criticism and scepticism here, too, where warranted – but on the whole, this book is a humane and patient guide to understanding social behaviour online, and one that gives people the benefit of the doubt.

When you gave your talk at Meltdown, you said that your book's subtitle 'What the internet is doing to you' isn't really accurate because the web isn't doing anything to us, because it is us. Could you explain a bit more about what you meant by that?

Aleks Krotoski: I think, more than anything else, my great frustration with these very determinist subtitles you get at the end of business books is that they're there to sell copies. And I think the reason why we put that on this book is that there are so many polemic arguments: some people say that it's going to totally change everything for the better, and other people say that it's going to totally change everything for the worse. I find that ridiculously frustrating, because can we say that something like the telephone has changed us, overwhelmingly? Can we say that things like water and electricity have changed us fundamentally as human beings? No. They've changed our behaviour in some ways, but they certainly haven't changed us fundamentally as human beings. And I think that the accusation that a technology can do something to us is giving the technology far, far more value than it actually has.

You put that into context with examples of older technologies; you reference Tom Standage's book The Victorian Internet and use his example of a couple that gets married by telegraph to discuss online dating, and later you use a lot of other research to take down some of the arguments that extend into moral panic. Your porn chapter was good at using evidence to take down some of the panic about porn. Could you talk a little bit about that?

AK: It's interesting that the general dialogue you get about any sexual content, in any context, at any point in time, is that as soon as you're exposed to it, you will lose all moral fibre. That I find very frustrating. The idea that the moment of looking at pornography is a slippery slope that will take you immediately into paedophilia is actually offensive to the human race, and to the social world in which we operate and exist.

There is definitely evidence to suggest that people who have problematic consumption of pornography will indeed go down that slippery slope, and they will look for greater and greater hits. But that's for people who have problematic use. The simple act of looking at pornography, in any context, does not instantly turn you into someone with problematic use. This is a dialogue we hear again and again when it comes to the web: that just by turning on the computer, that's all you'll see, and you'll lose the sense of community, because you'll no longer want to go outside and be with human beings. And if you go onto a site that spouts hatred and terrorism et cetera, or some kind of horrible violence-promoting organisation, then instantly you will become somebody who espouses that. Sadly – well, in some ways gladly, but sadly – that's not the case. It's not doing anything to us.

There was a great research report specifically about radicalisation several years ago, for the Home Office, in 2012. In there, they say that extremist websites are really bad at recruitment, at creating new little terrorists. They're great for people who are already vulnerable to that kind of thing, and even as psychologists, we have no idea what vulnerability is, what it means, and how it happens. So it's difficult to predict which individual, and when, would tick over and be taken down that path. But the bare fact of it is that it's incredibly rare. And to create a kind of fetish around it – in some ways a fetish around fetish – problematises it even more, makes it really appealing, and itself causes problems. It's also something to do with the idea that human beings, in their basic humanness, are somehow morally corrupt, and I find that very offensive.

One of the things that you discuss in your chapter on terrorism is that the online terrorist wannabes end up infighting about who is the most righteous, instead of doing much of anything.

AK: Absolutely, and that ends up happening all the time in online communities, whether it's communities of environmentalists, or just people hanging out, or radical communities. New people are welcomed with caution, and there are very clear, almost coming-of-age rituals that happen to individuals when they're in online communities. At first, they're warmly greeted, but people are kind of sceptical – you know so little about the person. Then over time, you develop the trust; these people, through contributions and things that are considered valuable, are welcomed even further into the fold, until such time as they become regulars, and occasionally they even ascend to become administrators, moderators and all that kind of thing. So yeah, it's quite natural for that to be happening amongst the radical communities.

If we could just go back to talking about porn for a minute, what are your thoughts on Cameron's plans for an 'opt-in' porn switch on all internet connections, because it seems that porn and terrorism are the two big bugbears of the internet that are used to stamp down on personal freedom and control of what people search for.

AK: Ownership, as well, is another thing that keeps coming up – it's all in the name of national security, whether it's protecting ourselves from ourselves, or protecting ourselves from outsiders, or protecting the economy. So it's unsurprising, again, that those are the flashpoints. Yeah, the announcement today is – I think he doesn't really know! I'd love to speak with his advisers and figure out what on earth they're telling him and how he's been interpreting that for the public. It could also very easily have to do with how it's being reported; we do have to take that into consideration as well. It is impossible, frankly. It is impossible to do such a thing.

To create those kinds of filters suggests that you would then create them for everybody, but that in itself is a form of censorship, first and foremost, which of course this country doesn't like to think of itself as promoting. And second, if you do such a thing, then it's going to push people underground. People will find anonymous routers – and the fact of the matter is, as my partner says, that what will happen is that ISPs, or ultimately the government, will have a list of people who have registered their interest in pornography. Which I don't really see as valuable to the government, in the first instance – but also, why would people want to do that, you know? I don't know. It's just a knee-jerk response to an issue that is clearly very, very important to tackle. But often with these things, it's kind of like picking on the weakest kid in the room. Technology can't fight back, right? And in many ways, the issue is less with technology and more with how we deal with ourselves and between ourselves, to regulate that kind of thing amongst ourselves.

In your book there is a lot that's really pro-anonymity; you explain how virtual identities can give people a safe space to test new identities, and to practise and play out roles that they might want to try on, without knowing yet if they work. Then later on you write about anonymity and disinhibition, and all the effects that lead people to troll and behave a bit badly. And you're kind of brave, because you confess your own past trollish behaviour, which is great, because most of us have done something like that at some point but won't admit it – so how do we create civil spaces in communities and preserve the good side of being anonymous at the same time?

AK: It's really difficult, isn't it? Because to celebrate the anonymity means that we will continue to evolve in a really interesting way, and the web will be part of that evolution, right? If we don't make allowances for those types of things – the anonymity, or the idea that life can take a different turn from who you were as a teenager, as you grow into adulthood – and if we don't allow people the freedom with which to explore their developing identities, then the web will no longer be that place [for exploration], and it will turn into something completely different, which will probably be an information thing, like it was in the early days of the web. So obviously, I think that aspect of anonymity is really great and wonderful and a really important part of our human development. But at the same time, on the other hand, there's the thing that everybody talks about, as you mentioned: this kind of antisocial element, this non-community behaviour.

I think in some ways, what we're going to have to do – and what has been done for two or three decades now – is just recognise that there are sometimes idiots out there, right, and learn how to deal with them, rather than panicking about them. It's not fun to be threatened online. It's really uncomfortable. I have been threatened, many times, in my life, and it's not great. But there are ways to deal with it, and there are ways to gain support. We certainly don't want the web to be just a festering pool of trollish behaviour. And thankfully it's not. There are places that people know and that people engage in, knowing full well that they can be anonymous and they can be in a safe place. And if we don't have those safe spaces, then the web is not going to be as valuable to us as it is now.

So how do you create those kinds of community environments? I don't know. In some ways, it's about allowing people to be idiots. Just let them be idiots, let them blow off their steam, and then move on, because ultimately the community is the most important part of this. If somebody comes in and tries to disrupt that community, on the one hand you can ignore them; on the other hand – and I spoke recently with a woman who did her PhD looking at different kinds of trolling behaviours and how to deal with them – another method that apparently works is going straight back at the troll, calling them on it and confronting them, as a way of getting rid of the problem person. But it's about maintaining that cohesion of community, keeping the relationships that exist in the online community strong. And also, to the degree that having an enemy within the group binds people together, sometimes having a troll or an idiot is a good thing. Because they can say, 'You're not part of the group, we are part of this group'. How to do it? It's a bit like doing it in the real world, you know? What happens is that you get somebody who's in a group who commits some sort of infraction, and they're kicked out of the group. It's pretty similar. And there are ways and means of doing that in virtual environments as well.

Aren't there some problems with forcing one online identity all the time? You can have one identity in one social group online, but you may have another one somewhere else – there's an in-between state between being anonymous and having one uniform identity. A uniform identity is good for marketing, but not for users maintaining different social selves, or a presence in different communities. A lot of sites will only let you comment with a certain plugin that makes you declare your identity – it seems to work better for marketers than for actual people. How should we think about building communities for people when so many things are monetised through applications that just want all our data in one place? What if we want to use different data for different roles in different places on the web?

AK: It is a really big challenge, isn't it? Because you honestly can't blame organisations for doing that, because that's how they're making their money. And if they're not making their money, then – there are no other business models at the moment, which is quite frustrating. If somebody comes up with an alternative that's not based on advertising, then wow, good luck with that – I'd love, love, love to see what comes out of it. But that is the dominant way that people are funding these incredible services that we are able to use, that are allowing us to speak across continents and oceans and connect with other people in really interesting ways.

I think, therefore, rather than a technological solution – because technological solutions tend to be very rigid; they're a bit like regulation, they tend to be on/off, very binary and inflexible, despite all the discussions about development using agile processes, publishing as you go, not thinking that what you publish lasts forever, and evolving things through user experience, at least within a technology developer framework – we have to develop a kind of social recognition that, even though we may be under one identity in the virtual world as it stands now, that one identity actually represents a whole person on the other side of the screen. So rather than assume that everything that comes up in a Google search represents that individual right this second, I think we need to come up with a social understanding [of the individual] – this is our responsibility; this goes back to the technology not doing anything to us. It's a social responsibility to turn around and say, hold on a second, this is how we socially understand this person, who at the age of 15 did one thing and at the age of 25 did another, and now that they're 36, they're doing something else. That's how we're going to get around the one identity.

I honestly think that's how we're going to get around it, you know, especially as there are people who are new to the internet – I hate using the expressions digital natives and, what are they called, digital immigrants – but there is an element of the fact that I am able to have multiple identities, because not everything has been captured from my life. I don't know how many years I had before I started using the web – I think I was in college before I started using it, and it was a new and exciting technology, et cetera, et cetera. But that's a whole bunch of life that I had before. And as soon as people like me die out, right, and you do have something from birth until death, then we are really going to have to think about this. I've got stupid things I did ten years ago, fifteen years ago, that are online and that won't go away, right? But through experiences of saying, that was ten years ago, that was fifteen years ago, I've done all this stuff since, you know, I've grown and changed, I've lived three or four lifetimes in that period – it's through those kinds of dialogues and conversations, the kind of disgruntledness we may experience, the dissonance we may experience when a piece of our past comes up and disrupts our present, that we're going to start recognising that we need to have these social conversations ourselves about how we cope with the fact that the technology keeps serving up a single data point that is us, at every point. And that is an interesting philosophical conundrum! Ultimately, who are we when everything just comes up as now? But a technological solution is not the way to go.

What about the right to be forgotten – like if you wanted to start over and declare data bankruptcy, wipe it all?

AK: I think that if you do that, then people are going to be, 'Hmmmm, where were you?' Like in job interviews, if you've got a blank: 'What were you doing in that time?' And if you've got blanks, our character is to think the worst of people, and we're going to be like, 'What have you got to hide?' instead of, 'Oh, there you were, taking a digital detox', or there you were, just not online, or you didn't want something to be remembered right away. Also, there's the problem of being fragmented. You can own your own data, but you don't own the things that are about you, as it were, that were generated by other people. So that's problematic as well.

Ultimately, it's about the long view, and the reputation of the individual. Reputation is truly the only currency in the virtual world. It's what generates trust, it's what shapes how we view one another, it's the Google search you do when you're looking for a date and you find somebody on match.com or whatever it is. If you do have giant holes in your past, then people will be weirded out by that, and I think they will make an assumption that you are disingenuous, that you have something to hide – which will undermine the trust that you hoped to build by getting rid of stuff. Now, on the other hand, if you face up to something and say, 'Ah-ha, now I did that, and that was stupid', or, 'I was involved in that, but I have been rehabilitated', or whatever it is, then we start to have the conversations again, which are social constructs more than anything else.

What about the dark side of reputation – the institutional reputation, the data about you that's being traded behind your back by companies? Data about what we've done, data about our connections, network data that Facebook and Google and who knows who else will have, all being shared and sold and used to make all kinds of decisions about what we'll see, what kinds of financial deals we're offered, all kinds of things. And after all the PRISM disclosures came out, who knows what this data is being used for by governments? But at your talk, you said something about people being more aware recently, and about wanting rights to know what data is out there and what people are able to find out from their information – right now there is a lot of data that we give out freely because we may not know what big data is capable of mining it for.

AK: Oh yeah, I know. It's exciting, isn't it? I find it fascinating. I think that one of the lessons we need to learn – and I don't know if this is something that's going to happen – is that big data doesn't give us all the answers. When I was doing my PhD research, I was doing big data analysis. I didn't know it, because it wasn't called that at the time. And it was amazing, absolutely extraordinary, to see all these insights, and what was on offer, and really see people's behaviours. And then, through a series of great, difficult conversations with my supervisors, and also reading a lot about social network analysis – which is what I did my PhD on, and which was the basis of my analysis – I really realised just how little big data actually captures. You can make certain assumptions, but you can't make all the assumptions. That's kind of scary, actually, when you think of how much of the world is coasting on the world of data.

I thought that the whole privacy thing was going to erupt three years ago. I don't know why I thought it was going to be three years ago, but I was like, this is the year that privacy is absolutely going to get up people's noses. And they're going to be freaked out about how much data is captured and collected. And I remember saying to friends, 'Don't put your photos on Facebook, because that means Facebook owns them', and they're like, 'I don't care, man!' I honestly think we keep seeing this kind of thing again and again – six or eight months ago, I think I was talking about the Google privacy updates, and people were going, 'This is it, everyone's going to be freaked out about privacy', and then we had PRISM and the NSA, and people are kind of moving on.

Is this just lock-in, like it's too hard for people to opt out, and these things can just ratchet up?

AK: I honestly think that people – they don't actually care. And it astounds me. Absolutely astounds me. Like, why aren't people up in arms? Why is the average person on the street not freaking out about the fact that our phone calls are being hacked into, on the one hand, by organisations, and on the other hand, being tracked by organisations? And then you've got one argument saying, 'Well, I've done nothing, so I don't care', and you've got the other side saying, 'Yeah, but wait a minute, this data may actually show two or three degrees of separation, and puts you, just going by the data, in some sort of guilty position'. It's really, really, really difficult to know.

Data is a fascinating thing. Data is the modern statistics, and people are just believing it without being critical of it, and saying, OK, yeah, that's fine. That, I think, is one of the things that we're going to be learning about the web. It goes back to something that I've been talking about a lot – in fact, I've just been writing up the research report on it – about being critical of the people behind the scenes. Because the people behind the scenes are the ones creating the systems through which we interact with the world and each other. And it's through these portals that these people generate their USP, which is relevance and value. That's the reason why Facebook is successful, that's the reason why Google is successful: they give us relevance and value for what it is they do. Brilliant, fantastic! But to do that – to try to actually deliver that in a way that conceptualises human beings, with all of our crazy messy difficulties and all of the things that we go back on, and all our inconsistencies and contradictions – is to have a particular view about what a human being is.

I had a really funny conversation with a woman who's a social psychologist; we were talking about her research, and she's working with a whole bunch of computer scientists. I can't remember the specific research question, but on this year-and-a-half-long project, it took the first nine months to convince the computer scientists that identity is fluid, prismatic, multiplicitous and flexible.

And so the construction of the human being as the data point, or as something that can be fed relevance and value which means x and y – and this is all based upon this big data stuff – is something that I think is going to conflict with some people, and they're going to stop just accepting whatever comes out of the computer, because they're going to be like, 'Hold on a second, who came up with this idea? And is that who I actually think I am, and who I think human beings are?' So hopefully, maybe, that will be the catalyst for people getting all freaked out about privacy.

In the final chapter of your book, you go into a really impassioned explanation of how no technology is neutral, and how every technology contains embedded value judgements. What are the next steps that you see for anyone who's interested in being able to critique new technologies? What do people need to know to evaluate the culture behind design choices, to contextualise the technologies that they use – especially people who are interested in the social side of technology but might not have a technical background?

AK: That last chapter is based on some work that I did with two organisations: the first is the Nominet Trust, who are fantastic, and the second is Google, of all people. The main research question is: what are the assumptions behind the big services that we use? And specifically, it's about relevance and value. What do relevance and value mean to Google, which is basically trying to help you look for information? And to Facebook, which is essentially trying to help you be social in the easiest way that it can see?

So I've been thinking a lot about this, and about what it is that people can do. What I tend to do, which is slightly ridiculous, is physicalise technology. I built this thing called the Serendipity Engine, based on a passing comment from Eric Schmidt, the chairman at Google, who at the time was the CEO, in order to try and understand what is actually necessary to deliver something that has relevance and value. In many ways, it's about so many things: it's about the location you're in, it's about the filters you see the world through, it's about your political stance, whether you voted for the winners of the last election in your country, it's about how you feel psychologically and how you feel physically – all kinds of different elements that are important to understanding how human beings are, how we form attitudes and ideas, how we come to believe something is going to be relevant and valuable. And more than anything, I think the easiest answer is to recognise the lenses through which you see the world anyway.

And you can do that at country level, you know – what does your country stand for? – and you can do that almost at an individual level: what does my family stand for? What does my religious organisation stand for? Who am I? And then have a kind of moment of postmodern realisation that that's not how everybody else thinks, and then recognise that that's how technological organisations create their systems – that the decisions they make about what sits at the top of the search results, or about how friendships and relationships should be mediated, and what matters in them, are themselves social constructions, driven by the time that we're in, by the materials that are available to produce these things, by the political attitudes and the social networks of the people who decided to create them.

There are so many different elements of the problem that go into this that you can actually understand why the human being was constructed in a particular way for a technology, because it's almost impossible to recreate the actual human without having the technology and a kind of essential understanding of what a human being is. Difficult to say – and I'm really sorry, I wish I could give you a list of things to look out for, for people who want to be critical of a technology – but more than anything, I think it's about considering who the person who created the system is, where they're from, and how that might be reflected in what it is that they've developed.

Untangling the Web: What the Internet Is Doing to You is out now, published by Faber & Faber