20 Matching Annotations
  1. Mar 2024
    1. 21.3.1. As a Social Media User# As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate. We hope with this you can be a more informed user of social media, better able to participate, protect yourself, and make it a valuable experience for you and others you interact with. For example, you can hopefully recognize when someone is intentionally posting something bad or offensive (like the bad cooking videos we mentioned in the Virality chapter, or an intentionally offensive statement) in an attempt to get people to respond and spread their content. Then you can decide how you want to engage (if at all) given how they are trying to spread their content.

      In my opinion, being an informed social media user means understanding platform mechanics, recognizing their impact on emotions and mental well-being, and being mindful of how our data is used. It lets users recognize content posted deliberately for shock value or offense and make informed choices about whether and how to engage. By staying alert to manipulation tactics, users can help foster a healthier online environment while also protecting themselves from harm.

    1. Many people like to believe (or at least convince others) that they are doing something to make the world a better place, as in this parody clip from the Silicon Valley show (the one Kumail Nanjiani was on, though not in this clip):

      Unintended effects are common in innovation, as seen with Eli Whitney's cotton gin, which entrenched slavery, and Alfred Nobel's dynamite, which contributed to devastating warfare. Albert Einstein's remorse over nuclear weapons illustrates the ethical issues of dual-use innovations, while Aza Raskin's regret over infinite scroll shows how seemingly innocuous features can have unintended harmful consequences. These examples serve as cautionary tales about the complex and unpredictable repercussions of innovation on society.

    1. 19.3.2. Privacy Concerns# Another source of responses to Meta (and similar social media sites), is concern around privacy (especially in relation to surveillance capitalism). The European Union passed the General Data Protection Regulation (GDPR) law, which forces companies to protect user information in certain ways and give users a “right to be forgotten” online. Apple also is concerned about privacy, so it introduced app tracking transparency in 2021. In response, Facebook says Apple iOS privacy change will result in $10 billion revenue hit this year. Note that Apple can afford to be concerned with privacy like this because it does not make much money off of behavioral data. Instead, Apple’s profits are mostly from hardware (e.g., iPhone) and services (e.g., iCloud, Apple Music, Apple TV+).

      The discussion of privacy concerns, particularly around surveillance capitalism, reflects growing public awareness of data protection. The European Union's passage of the GDPR is part of a broader trend of regulating how user information is collected and secured, including a right to be forgotten online. The dispute between Facebook and Apple over app tracking transparency highlights the financial impact privacy changes can have on companies that rely heavily on behavioral data, in contrast to Apple's business model, which is built on hardware and services rather than data-driven advertising revenue.

    1. So, what Meta does to make money (that is, how shareholders get profits), is that they collect data on their users to make predictions about them (e.g., demographics, interests, etc.). Then they sell advertisements, giving advertisers a large list of categories that they can target for their ads. The way that Meta can fulfill their fiduciary duty in maximizing profits is to try to get:
       - More users: If Meta has more users, it can offer advertisers more people to advertise to.
       - More user time: If Meta’s users spend more time on Meta, then it has more opportunities to show ads to each user, so it can sell more ads.
       - More personal data: The more personal data Meta collects, the more predictions about users it can make. It can get more data by getting more users, and more user time, as well as finding more things to track about users.
       - Reduce competition: If Meta can become the only social media company that people use, then they will have cornered the market on access to those users. This means advertisers won’t have any alternative to reach those users, and Meta can increase the prices of their ads.

      Meta's revenue model is built on user growth, engagement, and extensive data collection for targeted advertising. The emphasis on attracting more users, increasing the time they spend on Meta platforms, and improving predictive analytics shows the company's focus on maximizing ad exposure and ad targeting. By reducing competition and pursuing market dominance, Meta aims to cement its position as the default social media platform, giving advertisers unrivaled access to a large user base and allowing higher ad prices. This approach aligns with Meta's fiduciary duty to shareholders: a multi-pronged strategy that turns user data and attention into advertising revenue, as the toy model below illustrates.
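
      As a rough illustration of how the levers in this passage combine (a toy back-of-the-envelope model with invented numbers, not Meta's actual figures or formula), daily ad revenue can be treated as users × hours per user × ads per hour × price per ad:

      ```python
      # Toy model with invented numbers (not Meta's actual figures), just to
      # show how each lever from the text multiplies into total ad revenue.
      # Prices are in cents so the arithmetic stays in whole numbers.
      def ad_revenue_cents(users, hours_per_user, ads_per_hour, price_per_ad_cents):
          return users * hours_per_user * ads_per_hour * price_per_ad_cents

      baseline  = ad_revenue_cents(1_000_000, 1, 30, 1)  # 30,000,000 cents/day
      more_time = ad_revenue_cents(1_000_000, 2, 30, 1)  # more user time doubles revenue
      pricier   = ad_revenue_cents(1_000_000, 1, 30, 2)  # better targeting or less
                                                          # competition -> higher ad prices

      print(baseline, more_time, pricier)  # 30000000 60000000 60000000
      ```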

  2. Feb 2024
    1. What do you consider to be the most important factors in making an instance of public shaming bad?

      The debate over the ethics of public shaming reveals a range of perspectives. Jennifer Jacquet views shame as a morally powerful tool for the weak against the strong, emphasizing how it can scale to address societal concerns. Others, however, raise concerns about schadenfreude and the harm inflicted on "normal" people in the age of social media, prompting philosophers Paul Billingham and Thomas Parr to ask what conditions and constraints make public shaming morally appropriate.

    1. Shame is the feeling that “I am bad,” and the natural response to shame is for the individual to hide, or the community to ostracize the person. Guilt is the feeling that “This specific action I did was bad.” The natural response to feeling guilt is for the guilty person to want to repair the harm of their action.

      In my view, shame is a deep sense of psychological inadequacy in which people perceive themselves as inherently flawed or unworthy. The natural response to shame is often withdrawal or isolation, as the individual tries to conceal perceived flaws to avoid rejection. Guilt, on the other hand, stems from recognizing a specific harmful action, leading to a constructive response centered on accountability, learning, and the desire to repair the consequences of that behavior.

    1. Gamergate# Gamergate was a harassment campaign in 2014-2015 that targeted non-men in gaming: Zoë Quinn, Brianna Wu, and Anita Sarkeesian. The harassment was justified by various false claims (e.g., journalistic malpractice), but mostly motivated by either outright misogyny or feeling threatened by critiques of games/gaming culture from a not straight-white-male viewpoint. The video below talks about how two factions within gamergate fed off each other (you can watch the whole gamergate series here)

      The harassment of Zoë Quinn, Brianna Wu, and Anita Sarkeesian, and the toxic motives behind it, highlights the need for a more inclusive and welcoming gaming community. It underscores the importance of standing up to misogyny and creating an environment where diverse voices are valued, so that everyone can enjoy and contribute to gaming without fear of harassment.

    1. This can be done privately through things like:
       - Bullying: like sending mean messages through DMs
       - Cyberstalking: Continually finding the account of someone, and creating new accounts to continue following them. Or possibly researching the person’s physical location.
       - Hacking: Hacking into an account or device to discover secrets, or make threats.
       - Tracking: An abuser might track the social media use of their partner or child to prevent them from making outside friends. They may even install spy software on their victim’s phone.
       - Death threats / rape threats
       - Etc.

      Engaging in cyberstalking, hacking, and sending threats over social media raises ethical issues of privacy invasion, online harassment, and digital security. These behaviors violate personal boundaries, cause psychological distress, and put victims' safety at risk. Respecting privacy, refraining from harassment, and safeguarding digital security are critical ethical obligations in online interactions.

    1. 16.3. Ad-hoc Crowdsourcing Examples# Crowdsourcing isn’t always pre-planned or designed for. Sometimes a crowd stumbles into crowd tasks in an unplanned, ad hoc manner. Like identifying someone and sharing the news in this scene from the movie Crazy Rich Asians:

      Crowdsourcing on social media can accelerate the spread of rumors and misinformation, particularly in times of crisis. This can have major effects, such as distorting public perception, inciting fear, or even influencing political events. Whether spread deliberately or accidentally, false information can shape public opinion and erode trust in information sources.

    1. Some online platforms are specifically created for crowdsourcing. For example:
       - Wikipedia: An online encyclopedia whose content is crowdsourced. Anyone can contribute, just go to an unlocked Wikipedia page and press the edit button. Institutions don’t get special permissions (e.g., it was a scandal when US congressional staff edited Wikipedia pages), and the expectation that editors do not have outside institutional support is intended to encourage more people to contribute.
       - Quora: A crowdsourced question-and-answer site.
       - Stack Overflow: A crowdsourced question-and-answer site specifically for programming questions.
       - Amazon Mechanical Turk: A site where you can pay for crowdsourcing small tasks (e.g., pay a small amount for each task, and then let a crowd of people choose to do the tasks and get paid).
       - Upwork: A site that lets people find and contract work with freelancers (generally larger and more specialized tasks than Amazon Mechanical Turk).
       - Project Sidewalk: Crowdsourcing sidewalk information for mobility needs (e.g., wheelchair users).

      Another real-life example I can think of is Duolingo: while primarily a language-learning app, it has used crowdsourcing to improve and expand its language courses, with users contributing by suggesting translations and reporting errors.

    1. What dangers are posed with languages that have limited or no content moderation?

      Limited or no content moderation in particular languages on Facebook poses several risks, including the spread of hate speech, misinformation, and harmful content within those linguistic communities. With menus and prompts available in 111 languages but the Community Standards officially translated into only 41, inappropriate content can easily go unnoticed in unsupported languages. Furthermore, automated hate-speech detection exists for only about 30 languages, which makes maintaining a safe and well-moderated environment across diverse linguistic groups even harder.

    1. 14.1.3. Safety# Another concern is for the safety of the users on the social media platform (or at least the users that the platform cares about). Users who don’t feel safe will leave the platform, so social media companies are incentivized to help their users feel safe. So this often means moderation to stop trolling and harassment.

      Online harassment and cyberbullying are serious safety problems on social media, with damaging psychological consequences for those targeted by abusive messages and threats. Public figures such as celebrities and influencers, for example, are frequently subjected to relentless online harassment, creating a toxic environment not just for them but also for onlookers. In response, social media companies adopt safety measures such as content moderation, reporting systems, and anti-harassment policies, but these efforts face the challenge of balancing user safety with concerns about censorship and free speech.

    1. First let’s consider that, while social media use is often talked of as an “addiction” or as “junk food,” there might be better ways to think about social media use, as a place where you might enjoy, connect with others, learn new things, and express yourself.

      Participate in positive online interactions by sharing affirmations, gratitude, and uplifting content, which helps create a healthier digital atmosphere. Follow accounts that promote mental resilience and overall well-being for a more positive online experience. Diversify your perspective by connecting with people from different backgrounds and learning about other cultures and languages, for example through language-exchange groups that let you improve your language skills while sharing cultural insights. Incorporating these habits into your social media use can make your online presence more enriching for yourself and others.

    1. Self-Bullying# One form of digital self-harm is self-bullying, where people set up fake alternate accounts which they then use to post bullying messages at themselves.

      While self-bullying through fake alternate accounts is not a well-known issue, it is important to recognize that engaging in it can have serious effects on one's mental health and well-being. Individuals struggling with low self-esteem, mental health concerns, or a need for attention may create these accounts and post hurtful comments about themselves.

    1. Citation and giving credit# Fig. 12.13 The “This is fine” meme image by K.C. Green# On the 10th anniversary of the webcomic by K.C. Green where the “This is fine” meme came from, he reflected on his feelings about how those frames from his comic became a viral meme: When a work gets as big as this has, is it still yours? Not talking about copyright and legal stuff. It says something larger that everyone can feel and relate to. […] I’ve been forced time and time again with these 6 panels, to be the party pooper, gate-keeper, girlboss, etc and just to get people to recognize there are artists behind these drawings online. These memes we share. […] So I do what I can and try to keep in good humor and be thankful that I can still do what I do for a living. Given the community activities on social media of copying, remixing, cultural appropriation, and cultural exchange: How do you think attribution should work when copying and reusing content on social media (like if you post a meme or gif on social media)? When is it ok to not cite sources for content? When should sources be cited, and how should they be cited? How can you participate in cultural exchange without harmful cultural appropriation?

      Proper citation is essential for ethical content sharing. It honors the original creators' effort and talent and gives them recognition in the digital realm. Ethical sharing helps combat problems such as plagiarism, exploitation, and the erasure of creators' contributions, and it promotes a more respectful and inclusive online environment.

    1. Intentionally bad or offensive content# Users can also create intentionally bad or offensive content in an attempt to make it go viral (which is a form of trolling). So when criticism of this content goes viral, that is in fact aligned with the original purpose. For example, this cooking video contains an unusual recipe (SpaghettiOs as a pie filling) and unusual cooking methods (like using forearms to spread butter). In the comments, people post their horrified reactions, and the original poster responds naively (e.g., viewer reaction: “When she started mashing her forearms into the butter and garlic my soul left my body.” Video creator reply: “in a good way, right? haha”). The video continued to spread as people tried the recipes themselves (link 1, link 2). It turns out that this video and other similar cooking videos are intentionally made to be bad videos and intended to produce a reaction (see article: Your Least Favorite Gross Viral Food Videos Are All Connected to This Guy). Saying and doing provocative, shocking, and offensive things can also be an effective political strategy, and getting viral attention through others’ negative reactions has been seen as a key component of Donald Trump’s political successes.

      Deliberately creating content designed to elicit negative reactions for the sake of virality is a form of intentional deception. This raises questions about a creator's responsibility to produce honest, genuine content that fosters audience trust.

    1. Knowing that there is a recommendation algorithm, users of the platform will try to do things to make the recommendation algorithm amplify their content. This is particularly important for people who make their money from social media content.

      Users may stuff popular keywords or trendy terms into their content to mislead the algorithm into associating it with current trends. For video content, a creator may pack the description with trending keywords, even ones only weakly related to the actual content, to boost exposure in search and recommendation results, as the toy scoring sketch below illustrates.
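
      As a minimal, hypothetical illustration (not any real platform's algorithm), here is a naive keyword-overlap scorer showing why padding a description with trending terms can inflate a post's visibility score:

      ```python
      # Minimal, hypothetical sketch of a naive keyword-overlap scorer
      # (not any real platform's algorithm). It only shows why stuffing
      # trending terms into a description can inflate a post's score.
      TRENDING = {"ai", "crypto", "giveaway", "viral"}

      def keyword_score(description, trending):
          """Count how many trending keywords appear in the description."""
          words = {word.strip("#.,!?").lower() for word in description.split()}
          return len(words & trending)

      honest = "My cat knocked a plant off the shelf again"
      stuffed = "My cat knocked a plant off the shelf #ai #crypto #giveaway #viral"

      print(keyword_score(honest, TRENDING))   # 0
      print(keyword_score(stuffed, TRENDING))  # 4 -- same video, higher score
      ```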

    2. How recommendations can go well or poorly# Friends or Follows:# Recommendations for friends or people to follow can go well when the algorithm finds you people you want to connect with. Recommendations can go poorly when they do something like recommend an ex or an abuser because they share many connections with you.

      Recommendation algorithms need continuous monitoring and tuning to keep up with changing user needs and potential harms. Social media platforms frequently update their algorithms to improve user experience and safety; user feedback mechanisms and models that account for emotional context can help refine recommendations over time. The simple mutual-connections heuristic sketched below shows how, without such safeguards, an ex or abuser who shares many connections with you can end up at the top of the suggestion list.
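
      Here is a minimal, hypothetical sketch (with made-up accounts, not any platform's actual algorithm) of the common friends-of-friends heuristic: rank accounts you don't already follow by how many mutual connections you share.

      ```python
      # Hypothetical friends-of-friends recommender (made-up data, not any
      # platform's actual algorithm): rank accounts the user doesn't follow
      # by how many mutual connections they share with the user.
      from collections import Counter

      follows = {
          "you":   {"alice", "bob", "carol"},
          "alice": {"you", "bob", "dana", "ex"},
          "bob":   {"you", "alice", "ex"},
          "carol": {"you", "ex"},
          "dana":  {"alice"},
          "ex":    {"alice", "bob", "carol"},
      }

      def recommend(user, follows, top_n=3):
          """Suggest accounts ranked by number of mutual connections."""
          mine = follows[user]
          mutual_counts = Counter()
          for friend in mine:
              for candidate in follows.get(friend, set()):
                  if candidate != user and candidate not in mine:
                      mutual_counts[candidate] += 1
          return mutual_counts.most_common(top_n)

      print(recommend("you", follows))
      # [('ex', 3), ('dana', 1)] -- the shared connections push "ex" to the
      # top, which is exactly the painful recommendation the text describes.
      ```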

    1. Design Justice# We mentioned Design Justice earlier, but it is worth reiterating again here that design justice includes considering which groups get to be part of the design process itself.

      Design Justice feels a lot like hosting a grand creative bash where everyone's ideas aren't just accepted but are downright necessary. It's not about creating things for people; it's about co-crafting them, especially with those voices that often get overlooked in the design scene.

    1. 10.2.3. Making an environment work for all# Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.

      Numerous real-life examples illustrate this point. Many websites and applications are built with accessibility features such as screen-reader support, alternative text for images, and keyboard navigation, along with adjustable text sizes and color contrast options for people with visual impairments.