Examination Of The Ethical Use Of AI In Social Media


In the ever-expanding digital universe, we find ourselves navigating the complex terrain of social media. From image-sharing to blog posting and thought-provoking discussions, social media platforms have become a dominant virtual town square. However, behind the scenes, a plethora of algorithms and artificial intelligence (AI) systems work tirelessly to shape our online experience on these platforms. With this in mind, should the use of AI by social media be banned?

In this text, I will delve into the psychological and cognitive effects that social media inflicts on its users and make a case for why the deployment of AI on these platforms should either be banned outright or subjected to rigorous regulation. Moreover, I will examine how this constitutes an AI fairness problem.


First, I will start with a definition of AI, followed by a definition of social media.

AI can be defined as “the capability of computer systems or algorithms to imitate human behavior” (Merriam-Webster, n.d.). To imitate human behavior, the computer system can simulate parts of human intelligence or cognitive skills to perform tasks such as problem-solving, learning, reasoning, perception, and language understanding. Typically, AI systems use algorithms, data, and machine learning to make autonomous decisions and adapt to changing situations.

Social media refers to online platforms and websites that enable users to create, share and interact with content and other users. Users can post text, images, videos, and engage with others through comments and likes. Social media facilitates communication and networking on a global scale, making it a prominent aspect of modern digital culture. (Merriam-Webster, n.d.)

With this in mind, and putting the concept of AI aside for a minute, there has been some research on the negative psychological and cognitive effects social media has on its users. The following findings were compiled by the Center for Humane Technology in their “Ledger of Harms” (Center for Humane Technology, 2022). I will proceed to list a few of them.

  • Cyberbullying significantly raises the risk of suicide ideation in children, making them three times more likely to contemplate suicide than their peers. The online bullying experience is particularly distressing, likely due to the victim’s awareness of a larger public audience (van Geel, Vedder, & Tanilon, 2014).

  • Excessive screen-based media use in preschoolers, over an hour a day, hampers core brain regions responsible for language and literacy. As screen time increases, language skills decrease, and vital brain regions suffer structural integrity loss. This study highlights concerns about screen use on young children’s brain development (Hutton, Dudley, & Horowitz-Kraus, 2019).

  • Prolonged screen time in early childhood leads to developmental delays in language, problem-solving, and social interaction, persisting for over a year. Excessive screen exposure during these formative years can significantly hinder a child’s optimal development (Madigan, Browne, Racine, & Mori, 2019).

  • Increasing social media usage correlates with higher depression levels in teenagers. For every additional hour spent on social media, there’s a 2% increase in depressive symptoms (Boers, Afzali, Newton, & Conrod, 2019).

  • Merely having a smartphone around diverts attention, even when off and face down. An experiment showed a drop in memory and problem-solving when phones were nearby but off. Surprisingly, phone-dependent people improved memory and intelligence when phones were in another room. Phones are “high-priority stimuli,” sapping attention, even when ignored (Ward, Duke, Gneezy, & Bos, 2017).

  • Soon after starting smartphone use, mental math declines, attention weakens, and conformity rises. Brain scans show reduced activity in the right prefrontal cortex, seen in ADHD (Hadar, Hadas, Lazarovits, Alyagon, Eliraz, & Zargen, 2017).

  • Memory favors social text over complex text. People recall comments on news more than the article or headline. They remember Facebook posts better than book sentences or faces (Mickes, Darby, Hwe, Bajic, Warker, Harris, & Christenfeld, 2013).

  • Media channel switching harms working and long-term memory. The Extractive Attention Economy and many social platforms threaten human memory (Uncapher and Wagner, 2018).

Keeping these documented effects in mind, I will proceed by attempting to answer the initial question.

AI is used in different facets of social media, but the most relevant are its use in recommendation algorithms, behavior-based marketing, and facial recognition. If we take into account the negative psychological and cognitive effects social media has on adolescents and adults alike, the use of AI in these features should not be taken lightly. To me, there seem to be three main aspects that could explain these negative effects.

  1. Using AI to keep people active on the platform longer
  2. Using AI to create targeted ads based on the user’s activity and data
  3. Using AI to distort people’s view of others and themselves

Using AI to keep people active on the platform longer

Recommendation algorithms, powered by AI, are arguably what drives personalized content feeds. These algorithms analyze user behavior, preferences, and engagement patterns to suggest posts, videos, or products that are likely to resonate with the user. This enhances the user experience and keeps them engaged, driving user retention and platform usage (Fayyaz et al., 2020).

This, we could say, is a case of unsupervised automated decision-making: the AI decides what should show up in the user’s content feed without any human in the loop. A machine learning approach to assembling a list of content to expose to the user results in a system that only recommends content the algorithm deems relevant to the user. I would say this is illegitimate, since there is no consideration of whether the user benefits from seeing the content; only the decision that the user should see it, due to its level of “relevance”.

The content is deemed relevant because the AI estimates a higher chance that the user will interact with it in some way. If the user is consistently shown content they want to interact with, they stay active longer on the platform. The longer the user stays active, the more ads they are exposed to, which in turn increases the platform’s revenue.
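To make this mechanism concrete, the ranking step can be sketched in a few lines. This is a minimal, hypothetical illustration, not any platform’s actual system: the post names and probabilities are invented, and `p_interact` stands in for whatever interaction score a real model would produce.

```python
# Minimal sketch of engagement-driven ranking: each candidate post
# carries a model-estimated probability that the user will interact
# with it, and the feed is simply the candidates sorted by that score.
# Post names and probabilities here are hypothetical illustrations.

def rank_feed(candidates, top_k=3):
    """Return the top_k post ids ordered by predicted interaction probability."""
    ranked = sorted(candidates, key=lambda post: post["p_interact"], reverse=True)
    return [post["id"] for post in ranked[:top_k]]

candidates = [
    {"id": "news_article",   "p_interact": 0.12},
    {"id": "friend_photo",   "p_interact": 0.55},
    {"id": "outrage_post",   "p_interact": 0.81},
    {"id": "tutorial_video", "p_interact": 0.34},
]

print(rank_feed(candidates))  # highest predicted engagement first
```

Note that the only criterion in this sketch is predicted interaction; nothing in the ranking asks whether the user benefits from seeing the content, which is precisely the objection raised above.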

In other words, these social media platforms use AI to create a list of content the user has a high probability of interacting with, preying on the user’s cognitive mechanisms to optimize ad revenue. In my opinion, it is unethical to exploit people’s cognitive mechanisms and pitfalls for capital gain, especially since these people are most likely unaware of how their cognitive mechanisms work, or even what they are.

Using AI to create targeted ads based on the user’s activity and data

Behavioral advertising is the practice of tracking user interactions and preferences for the purpose of delivering highly targeted advertisements (Boerman et al., 2017). This results in increased ad effectiveness and, in turn, a better return on investment for businesses. Considering that the Norwegian Data Protection Authority has put in effect a temporary ban on behavioral advertising, which affects Meta’s business practices (Judin, 2023), I think it is safe to assume that Facebook and Instagram have adopted this practice.

The use of AI to track and analyze user behavior can border on invasive, as it delves into individuals’ online activities and personal preferences. The hyper-targeted content delivered through behavioral marketing can create echo chambers and reinforce biases, limiting exposure to diverse viewpoints and information. This unethical surveillance and tracking of individuals’ online behavior can be seen as manipulation.
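As a rough illustration of the tracking-to-targeting pipeline described above, the sketch below aggregates tracked events into an interest profile and serves the ad matching the user’s dominant interest. The event topics and ad names are invented stand-ins; real systems use far richer signals and models.

```python
from collections import Counter

# Hypothetical sketch of behavioral targeting: tracked interactions are
# aggregated into an interest profile, and the ad whose topic matches
# the user's most frequent interest is selected.

def build_profile(events):
    """Count how often each topic appears in the user's tracked activity."""
    return Counter(event["topic"] for event in events)

def pick_ad(profile, ads):
    """Serve the ad targeting the user's most frequent topic, if any."""
    top_topic, _count = profile.most_common(1)[0]
    return next((ad for ad in ads if ad["topic"] == top_topic), None)

events = [
    {"topic": "fitness"}, {"topic": "travel"},
    {"topic": "fitness"}, {"topic": "fitness"},
]
ads = [{"id": "gym_membership", "topic": "fitness"},
       {"id": "flight_deals",   "topic": "travel"}]

print(pick_ad(build_profile(events), ads)["id"])  # the ad aligned with tracked behavior
```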

Using AI to distort people’s view of others and themselves

Facial recognition is used in social media for various purposes, but for the purpose of this text, I want to focus on “beauty filters”. By beauty filters, I mean filters that enhance a person’s appearance to look more conventionally attractive and/or smooth out imperfections.
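The “smooth out imperfections” idea can be illustrated with a toy example: at its core, smoothing is local averaging, which flattens fine detail. The one-dimensional signal below is a hypothetical stand-in for a row of pixel brightness values; real beauty filters operate on 2-D images with far more sophisticated, often learned, models.

```python
# Crude sketch of the "smoothing" idea behind beauty filters: replacing
# each value with the mean of its neighborhood removes fine detail.
# The signal is a hypothetical row of pixel brightness values.

def smooth(signal, radius=1):
    """Moving average: each sample becomes the mean of its neighborhood."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

row = [10, 10, 80, 10, 10]  # a sharp local feature (an "imperfection")
print(smooth(row))          # the spike is flattened into its neighbors
```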

A study by Ozimek et al. found that “(…) a significant negative correlation was found between photo editing behaviour and self-perceived attractiveness in terms of appearance” (Ozimek et al., 2023, p. 8). AI-powered beauty filters on, for example, Instagram facilitate this photo-editing behaviour. A concern the same study raised is that these AI-powered beauty filters might remove or gloss over features of a person’s appearance that they themselves might not deem unattractive. The study also notes that “Numerous studies indicated a positive correlation between self-perceived attractiveness and self-esteem (…)” (Ozimek et al., 2023, p. 5).

In pursuit of adhering to societal beauty norms, these filters might encourage the homogenization of beauty, where individuality and unique features are diminished. This can lead to a potential loss of self-identity and a sense of disconnection from one’s authentic self. As a result, individuals might unknowingly become accustomed to an altered version of themselves, making it harder to distinguish between their real and filtered self.

How is this a fairness issue?

The use of AI in social media raises fairness issues across its various applications. In recommendation algorithms, AI-driven personalization can enhance the user experience, but may inadvertently contribute to filter bubbles. These bubbles can isolate users within their existing beliefs and preferences, limiting exposure to diverse viewpoints and reinforcing biases.

Behavioral marketing, another facet of AI in social media, presents privacy concerns. While it optimizes ad effectiveness by delivering highly targeted content, it surveils user behavior and preferences. This raises ethical issues surrounding privacy rights and the potential for discriminatory practices. Hyper-targeted ads may perpetuate stereotypes, as well as biases embedded in the historical data they draw on.

Additionally, AI’s involvement in beauty filters and self-perception on social media can have far-reaching implications. These filters often promote conventional beauty standards, which can distort users’ self-image and prompt an increased focus on conforming to these ideals. This AI-driven distortion of self-perception can disproportionately impact individuals who do not fit within these beauty standards, potentially leading to feelings of inadequacy and lower self-esteem. As a result, these beauty filters might perpetuate societal biases as they pertain to appearance.

In essence, AI fairness issues in social media concern algorithmic biases, privacy concerns, and perpetuation of biases of appearance. Addressing these issues would require a careful balance of ethical AI practices, adequate regulation, and a more user-centered design to ensure that AI-driven systems promote a safe digital environment where social media users are not exploited for profit.


The integration of AI into social media has ignited discussions surrounding its consequences and ethical dimensions. AI plays a significant role in social media, with AI-powered content curation being one of its core components. This approach uses recommendation algorithms to keep users engaged by serving personalized content. However, ethical concerns arise regarding the potential exploitation of user data for financial gain.

In tandem with content curation, targeted advertising employs behavioral marketing to track and analyze user behavior and deliver personalized ads. This enhances ad effectiveness but can also contribute to the creation of echo chambers, where users are exposed only to information that aligns with their existing beliefs.

The influence of AI on self-image is another compelling dimension. AI-driven beauty filters can alter the users’ self-perceived attractiveness, in turn affecting self-esteem and encouraging conformity to conventional beauty standards.

The discussion about AI’s role in social media extends to fairness concerns, including algorithmic biases, privacy issues, and the perpetuation of appearance-related biases. Finding a balance in the regulation of AI-powered beauty filters is important to combat these concerns and foster a safer digital environment.

While challenges are apparent, there seems to be a need for more stringent regulation of AI in social media: mainly to prevent the exploitation of users for profit, but also to keep these platforms from exacerbating psychological and cognitive problems. Hopefully, with growing awareness and evolving technology, there will be an opportunity to mitigate AI’s potential risks through robust ethical guidelines and vigilant oversight. This might help to offer enriching user experiences without compromising psychological and cognitive health.


  1. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org. http://www.fairmlbook.org

  2. Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online Behavioral Advertising: A Literature Review and Research Agenda. Journal of Advertising, 46(3), 363-376. https://doi.org/10.1080/00913367.2017.1339368

  3. Boers, E., Afzali, M. H., & Conrod, P. (2020). Social media use and alcohol consumption in teens. Preventive Medicine. https://www.sciencedirect.com/science/article/pii/S0091743520300165

  4. Boers, E., Afzali, M. H., Newton, N., & Conrod, P. (2019). Social media usage and depression in adolescents. JAMA Pediatrics. https://jamanetwork.com/journals/jamapediatrics/article-abstract/2737909

  5. Center for Humane Technology. (2022). Ledger of Harms. https://ledger.humanetech.com/

  6. Fayyaz, Z., Ebrahimian, M., Nawara, D., Ibrahim, A., & Kashef, R. (2020). Recommendation Systems: Algorithms, Challenges, Metrics, and Business Opportunities. Applied Sciences, 10(21), 7748. https://doi.org/10.3390/app10217748

  7. Hadar, A., Hadas, I., Lazarovits, A., Alyagon, U., Eliraz, D., & Zargen, A. (2017). Screen time and mental arithmetic in smartphone users. PLoS One. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0180094

  8. Hutton, J. S., Dudley, J., & Horowitz-Kraus, T. (2019). Screen-based media and children’s brain development. JAMA Pediatrics. https://jamanetwork.com/journals/jamapediatrics/fullarticle/2754101

  9. Judin, T. (2023, July 17). Midlertidig forbud mot adferdsbasert markedsføring på Facebook og Instagram [Temporary ban on behavior-based marketing on Facebook and Instagram]. Datatilsynet. https://www.datatilsynet.no/aktuelt/aktuelle-nyheter-2023/midlertidig-forbud-mot-adferdsbasert-markedsforing-pa-facebook-og-instagram/

  10. Lemola, A., Perkinson-Gloor, N., Brand, S., & Dewald-Kaufman, J. (2014). Electronic media use at night and depressive symptoms. Journal of Youth and Adolescence. http://dx.doi.org/10.1007/s10964-014-0176-x

  11. Madigan, S., Browne, D. T., Racine, N., & Mori, C. (2019). A longitudinal study of screen time in children. JAMA Pediatrics. https://jamanetwork.com/journals/jamapediatrics/fullarticle/2722666

  12. Merriam-Webster. (n.d.). Artificial intelligence. In Merriam-Webster.com dictionary. Retrieved October 15, 2023, from https://www.merriam-webster.com/dictionary/artificial%20intelligence

  13. Merriam-Webster. (n.d.). Social media. In Merriam-Webster.com dictionary. Retrieved October 15, 2023, from https://www.merriam-webster.com/dictionary/social%20media

  14. Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., & Christenfeld, N. J. S. (2013). Major memory for microblogs: What makes a message worth remembering? Memory & Cognition. http://dx.doi.org/10.3758/s13421-012-0281-6

  15. Ozimek, P., Lainas, S., Bierhoff, H.-W., et al. (2023). How photo editing in social media shapes self-perceived attractiveness and self-esteem via self-objectification and physical appearance comparisons. BMC Psychology, 11(99), 1-14. https://doi.org/10.1186/s40359-023-01143-0

  16. Uncapher, M. R., & Wagner, A. D. (2018). Media multitasking and cognitive abilities. Proceedings of the National Academy of Sciences, 115(40), 9889-9894. https://www.pnas.org/content/115/40/9889

  17. van Geel, M., Vedder, P., & Tanilon, J. (2014). Cyberbullying and adolescent mental health: Systematic review. JAMA Pediatrics. https://jamanetwork.com/journals/jamapediatrics/fullarticle/1840250

  18. Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2). https://www.journals.uchicago.edu/doi/abs/10.1086/691462