
Whilst I’ve been unemployed during the pandemic, the excess free time has afforded me opportunities to think more deeply about certain issues that impact my life, especially considering 2020 was an election year and we saw how social media impacted the 2016 election. Social media has been my primary way of connecting with friends for years, and not just my online friends, but even those I initially met in person. I’ve been on Facebook since July 2007, the same year my oldest kid was born, and on Instagram and Twitter since April 2012 (although I don’t use them as consistently as Facebook). But social media has become especially important to me since we haven’t been able to connect in person since early 2020. I’ve also been connecting in different and more personal ways with some people who had been more acquaintances than close friends pre-pandemic, which has been a nice experience for me. We’ve all been experiencing the pandemic differently: some people may have had to use social media less, perhaps because of Zoom fatigue, whilst others have needed it more to help them feel connected to the world whilst physically isolated from it.
One friend I started connecting with more is Matt, an improviser and software developer I met at an improv festival in 2016. Earlier this year, he mentioned he wanted to start having philosophical debates with people, and with all my reading about human and social psychology in the last year or so, I pretty much jumped at the opportunity.
One of the topics I was specifically interested in discussing is the ethics of social engineering on social media. I hadn’t yet watched Netflix’s The Social Dilemma documentary (though I have now, since Matt and I had that philosophical discussion over Facebook Messenger video chat); it was just a topic I wanted to explore, because I’ve been wondering how to combat conspiracy theories gaining traction and changing the way people think. Is it ethical to use AI algorithms on social media to try to get people to behave more kindly towards those they disagree with? Then I read about how social media algorithms were spreading fear of coronavirus vaccines. I think about it even more now as someone who studied data analytics and the basics of machine learning in a boot camp, and as I consider what I want to be doing with those skills. I’ve been manipulated against my will or without my awareness, and not just through social media, so I don’t think I’d feel very comfortable doing the same to others. At the same time, I really want to hope for positive changes in the world, where people are able to act genuinely kinder to each other (as in, they feel like it’s the right thing to do, and they’re not being kind as a way to manipulate people into the outcome they want). How do people choose to be genuinely kind without being manipulated into it?
Through our discussion, Matt and I sort of came up with this idea of choosing options on our social media: “Which algorithm do we want to use to view our feed at this moment?” I guess it’s similar to how Twitter allows you to mute words and phrases from showing up in your feed, but maybe less user-defined. Matt was lamenting that he couldn’t just view his Facebook feed as “Most recent posts” any more. Meanwhile, I’m bothered that the algorithms force us to live in our own little bubbles of people who think like we do, and I’d love to see a feed option for “content that is completely outside what I usually agree with, so I have a chance to see other perspectives,” or an “opposing viewpoints” feed for short. There needs to be an easier way to see those different opinions so we can foster healthy debate and find middle ground again, instead of just fighting each other, name-calling, and accusing the other side of being evil until we’re all defensive all the time. Of course, we still have to fight misinformation and conspiracy theories in all that, too, otherwise more people could potentially fall down the rabbit hole (honestly, I was surprised and embarrassed by how easy it was to see how people could fall for the Wayfair conspiracy on Twitter in July last year, as it happened at a time when I was having difficulty sleeping). Watching The Social Dilemma on Netflix just showed me how the algorithms led unsuspecting people down more and more conspiracy theory holes. Social media is designed for engagement, and conflict and fear usually drive higher engagement. Abusive behaviour is rewarded because it increases engagement.
So how do we foster more positive viewpoints instead? Well, that’s where I’d suggest an option for a social media feed algorithm just for “positive feelings”: the ability to switch on a machine learning algorithm that shows us only the positive experiences our friends and the people we’re following are sharing. If we had the option to tailor our feeds in this way, choosing between “positive feelings,” “opposing viewpoints,” “most recent posts,” “tailored engagement” (i.e. what our main social media feeds currently do), and maybe some other options like “potentially triggering content” or “just the news,” then it could give users back some semblance of control over their mental health and their ideas about the world. Or maybe it’ll just create more bubbles of the way we think. I don’t want to pretend I think I have the best solution. But we’re already being constantly experimented on, and I don’t want to leave social media, because I’ve had a lot of positive experiences as a result of being on it. I’d just like to see us have more choice to tailor what we’re exposed to at any given time. Implementing that on the social media sites people already use is probably much easier than starting a brand-new, transparent social media company where you opt in to being experimentally socially engineered into trying to look at things more positively, or treating people more kindly, which was another idea Matt and I floated during our discussion.
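To make the feed-option idea a bit more concrete, here’s a minimal sketch of what user-selectable feed algorithms could look like, written in Python since that’s what I learned in my boot camp. To be clear, this is purely illustrative and not any platform’s actual code: the `Post` fields, the mode names, and the scoring signals are all hypothetical stand-ins for what would really be full machine learning models.

```python
from dataclasses import dataclass
from enum import Enum

class FeedMode(Enum):
    MOST_RECENT = "most recent posts"
    TAILORED_ENGAGEMENT = "tailored engagement"   # roughly what feeds do today
    POSITIVE_FEELINGS = "positive feelings"
    OPPOSING_VIEWPOINTS = "opposing viewpoints"

@dataclass
class Post:
    author: str
    text: str
    timestamp: float              # seconds since epoch
    predicted_engagement: float   # stand-in for an engagement-prediction model
    sentiment: float              # stand-in for a sentiment model, -1.0 to 1.0
    agreement: float              # stand-in for "how close to my views", 0.0 to 1.0

def rank_feed(posts: list[Post], mode: FeedMode) -> list[Post]:
    """Order a feed according to whichever algorithm the user has switched on."""
    if mode is FeedMode.MOST_RECENT:
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)
    if mode is FeedMode.TAILORED_ENGAGEMENT:
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
    if mode is FeedMode.POSITIVE_FEELINGS:
        return sorted(posts, key=lambda p: p.sentiment, reverse=True)
    if mode is FeedMode.OPPOSING_VIEWPOINTS:
        # Lowest agreement first: content furthest outside what I usually agree with.
        return sorted(posts, key=lambda p: p.agreement)
    raise ValueError(f"unknown feed mode: {mode}")
```

The hard part, obviously, is everything hidden behind those stand-in scores, but the point of the sketch is that the user, not the platform, gets to pick which objective ranks their feed at any given moment.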

Another idea I had (and I have a feeling it may already be in an experimental stage on Facebook, based on seeing a friend get flagged and prevented from posting something she wanted to write) is something I first became aware was possible when researching co-parenting apps to use with my ex-husband. One of them (not the one we use, so I haven’t seen it in action), OurFamilyWizard, has a feature that checks what you’ve typed before you send your message, and advises you to perhaps change your tone or words if it may be coming across as abusive. I assume this is one of the reasons OurFamilyWizard is the co-parenting app a lot of lawyers and courts recommend in domestic violence cases where the parents still have to share custody. I wondered: how much abuse could be reduced on social media if the apps themselves previewed what we wrote before we hit send, scanned it for offensive terms or name-calling, advised us that some of the language we used may be abusive, and asked us, “Are you sure you want to publish this? Perhaps reconsider your phrasing, because x reason.” I’m not suggesting they stop people from publishing that content entirely (I can also see this feature having negative blowback, being used against marginalised communities rather than protecting them, or turning into tone policing), but considering a decent amount of abuse is a reactionary and retaliatory defence mechanism based on our fight-or-flight response, having that prompt could disrupt the reactive state and make us think more about the consequences of our words before we share anything. I guess it’s also similar to the apps that prevent people from doing things like drunk-dialling their ex. We know what situations can get us into trouble, and sometimes we need help to stop ourselves from following through on our initial impulses.
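As a toy example of the kind of pre-send check I’m imagining, here’s a short Python sketch. Everything in it is hypothetical: a real version would score the draft with a trained toxicity model rather than a hand-made word list, and the terms, prompts, and function names are all mine. Importantly (given the tone-policing concern above), it only asks; it never blocks.

```python
# Hypothetical pre-send check. A real implementation would score the draft
# with a trained toxicity model; this word list is just a stand-in.
ABUSIVE_TERMS = {"idiot", "moron", "pathetic"}  # illustrative only

def flag_abusive_language(draft: str) -> list[str]:
    """Return any terms in the draft that look like name-calling."""
    words = {word.strip(".,!?;:").lower() for word in draft.split()}
    return sorted(words & ABUSIVE_TERMS)

def confirm_before_posting(draft: str) -> bool:
    """Preview the draft and ask the user to reconsider, without blocking them."""
    flagged = flag_abusive_language(draft)
    if not flagged:
        return True  # nothing flagged, publish as normal
    print(f"Some of your language ({', '.join(flagged)}) may come across as abusive.")
    answer = input("Are you sure you want to publish this? (y/N) ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    draft = input("Write your post: ")
    if confirm_before_posting(draft):
        print("Published.")  # stand-in for actually posting
    else:
        print("Okay, your draft is saved but not published.")
```

The design choice that matters to me is in `confirm_before_posting`: the flagged draft still publishes if the user confirms, so it works as a speed bump against reactive posting rather than as censorship.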
It’s a messy and complicated issue. I can’t imagine my life without social media these days, but at the same time, I also recognise the harm it can inflict. The Social Dilemma discussed the increase in mental health issues and suicide amongst younger users. That side of it makes me glad that my kids (ages 14 and 10) don’t have their own phones (I didn’t get a mobile phone until I was 16, so I don’t see a reason my kids should have one before that age either), and don’t use social media themselves beyond YouTube. My 14-year-old sees its addictive qualities, which is why they’re so adamantly against it. Sometimes it makes me wonder if that idea came from watching me and their dad on social media on our phones. They learned to read proficiently when they were only 4 years old, in part because, at the time, it was sometimes easier to communicate with us by typing messages on Skype than it was to have a verbal conversation. What does that say about us as parents? My kids hate memes, which are most often transmitted through social media. I’m pretty sure both of my kids will grow up to be better people than I am, and less of a slave to what they engage with online.
I find this topic endlessly fascinating and could probably find a lot more to say about it, but I also know my blog posts sometimes wind up being incredibly long, so I’m going to leave things here for today. Perhaps discussing this topic publicly will limit my options for work opportunities, if my opinions work against the model of increasing engagement for profitability, but I’d rather work for a company that cares about the ethics that go into its machine learning models and software development. I’d love to hear other people’s thoughts on any of the issues I’ve discussed in this post, so feel free to engage in the comments or on social media. After all, that’s probably how you found this post, right? Let’s show it’s possible to engage respectfully and positively, without devolving into reactionary abuse.