18 Matching Annotations
  1. Last 7 days
    1. Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox, where people take some form of a break from social media platforms and digital devices. While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter.

      Rather than seeing the internet as the problem itself, it might be more productive to think critically about platform design, algorithms, and personal habits. The goal probably shouldn’t be total withdrawal, but learning how to engage more intentionally and sustainably.

    2. Many have anecdotal experiences with their own mental health and with the mental health of those they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance: “People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat.” (from “Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image”)

      Overall, this phenomenon highlights how platform design and digital tools can shape mental health in subtle but powerful ways. It raises important ethical questions about responsibility—both for users and for the companies that create and promote these technologies.

  2. Feb 2026
    1. Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place, though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

      This section makes an important distinction between individual analysis and systemic analysis, which is crucial when thinking about recommendation algorithms.

      At the individual level, users are responsible for their behavior: what they like, comment on, share, or search. For example, if someone repeatedly engages with extreme content, the algorithm may interpret that engagement as interest. From this perspective, it may seem reasonable to say users are “training” the algorithm through their actions.
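      To make that feedback loop concrete, here is a minimal, hypothetical sketch (my own toy model, not any platform’s actual code) of an engagement-driven recommender: every click raises the weight of a topic, so a few extra clicks on extreme content can gradually crowd out everything else.

      ```python
      import random
      from collections import Counter

      # Toy model (hypothetical): the platform counts how often a user engages
      # with each topic and recommends topics in proportion to past engagement.
      engagement = Counter({"news": 1, "sports": 1, "extreme": 1})  # start neutral

      def recommend() -> str:
          topics, weights = zip(*engagement.items())
          return random.choices(topics, weights=weights)[0]

      # Simulate a user who is somewhat more likely to click "extreme" posts when shown them.
      for _ in range(200):
          topic = recommend()
          clicked = random.random() < (0.9 if topic == "extreme" else 0.3)
          if clicked:
              engagement[topic] += 1  # each click "trains" the recommender

      print(engagement)  # "extreme" usually ends up dominating the simulated feed
      ```

      Even in this toy version the responsibility is split: the user supplies the clicks, but the weighting rule that amplifies them is a design choice made by the platform.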

    2. Sometimes, though, individuals are still blamed for systemic problems. For example, Elon Musk, who has the power to change Twitter’s recommendation algorithm, blames the users for the results:

      This section makes an important distinction between individual analysis and systemic analysis, which is crucial when thinking about recommendation algorithms.

      At the individual level, users are responsible for their behavior: what they like, comment on, share, or search. For example, if someone repeatedly engages with extreme content, the algorithm may interpret that engagement as interest. From this perspective, it may seem reasonable to say users are “training” the algorithm through their actions.

    3. Though even modifying a recommendation algorithm has limits in what it can do, as social groups and human behavior may be able to overcome the recommendation algorithm’s influence.

      This section makes an important distinction between individual analysis and systemic analysis, which is crucial when thinking about recommendation algorithms.

      At the individual level, users are responsible for their behavior: what they like, comment on, share, or search. For example, if someone repeatedly engages with extreme content, the algorithm may interpret that engagement as interest. From this perspective, it may seem reasonable to say users are “training” the algorithm through their actions.

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

      Recommendation algorithms shape almost everything I see on social media, and I’ve experienced both surprisingly accurate and deeply frustrating recommendations.
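      As a purely illustrative sketch of “a series of steps or rules” (the scoring formula below is invented for this example, not any platform’s real ranking), a recommendation algorithm can be as simple as scoring each candidate post and showing the highest-scoring ones first.

      ```python
      # Hypothetical scoring rule: recency plus engagement plus a follow bonus.
      posts = [
          {"id": 1, "hours_old": 2,  "likes": 40,  "from_followed_account": True},
          {"id": 2, "hours_old": 30, "likes": 900, "from_followed_account": False},
          {"id": 3, "hours_old": 1,  "likes": 5,   "from_followed_account": True},
      ]

      def score(post: dict) -> float:
          recency = 1 / (1 + post["hours_old"])      # newer posts score higher
          popularity = post["likes"] ** 0.5          # diminishing returns on likes
          follow_bonus = 5 if post["from_followed_account"] else 0
          return recency * 10 + popularity + follow_bonus

      feed = sorted(posts, key=score, reverse=True)  # the "recommended" order
      print([p["id"] for p in feed])                 # e.g. [2, 1, 3]: the very popular post wins here
      ```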

    1. For each setting you see, try to come up with what disabilities that setting would be beneficial for (there may be multiple).

      Accessible design recognizes that disability is often created by design assumptions. Rather than placing the burden on individuals to adapt or be “fixed,” approaches like universal design aim to build environments, technologies, and settings that work for as many people as possible from the start.

    1. A disability is an ability that a person doesn’t have, but that their society expects them to have. For example:

      Disability is not just about individual limitations, but about the assumptions society makes when designing spaces, technologies, and systems.

    1. Benefits of Functions: There are several advantages to creating and using functions in computer programs, such as:

      - Reusing code instead of repeating code: When we find ourselves repeating a set of actions in our program, we end up writing (or copying) the same code multiple times. If we put that repeated code in a function, then we only have to write it once and then use that function in all the places we were repeating the code.
      - Single, standardized definitions: Let’s say we made code that takes a name and tries to split it into a first name and last name, and we have that code copied in several places in our program. Then we realize that our code isn’t handling some last names correctly, like “O’Reilly” and “Del Toro.” If we fix this bug in one of the places the code is copied, it will still be broken elsewhere, so we have to find all the places and fix it there. If, on the other hand, we had the code to split names in a function, and used that function everywhere else, then we only have to fix the bug inside that one function and our code everywhere is fixed.
      - Code organization: Making functions can also help us organize our code. It lets us give a name to a block of code, and when we use it, those function names can help make the code more understandable. Making code as functions also helps in letting us put those pieces of code in other files or in code libraries, so the file we are working on is smaller and easier to manage.

      This explanation clearly shows how functions improve efficiency and clarity in programming by reducing repetition, standardizing logic, and making code easier to read and manage.
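      A short sketch of the name-splitting example from this passage (split_name is a hypothetical helper, and the splitting rule is deliberately simplistic): because the logic lives in one function, a fix for names like “Del Toro” or “O’Reilly” only has to be made in one place.

      ```python
      def split_name(full_name: str) -> tuple[str, str]:
          """Split a full name into (first, last) -- deliberately simplistic."""
          parts = full_name.split(" ")
          # Naive rule: first word is the first name, everything else is the last name,
          # which at least keeps multi-word last names like "Del Toro" together.
          return parts[0], " ".join(parts[1:])

      # Every caller uses the same function, so fixing a bug here fixes it everywhere.
      print(split_name("Guillermo Del Toro"))  # ('Guillermo', 'Del Toro')
      print(split_name("Tim O'Reilly"))        # ('Tim', "O'Reilly")
      ```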

    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore, if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time). But while that is the proper security practice for storing passwords, social media companies don’t always follow it. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users.

      This example illustrates a gap between users’ expectations of privacy and the actual practices of social media companies. Although users consent to sharing data, that consent assumes responsible stewardship, which is violated when companies fail to implement basic security measures.
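      The passage describes this loosely as a “special individual encryption process”; in practice it is usually salted password hashing. Here is a minimal sketch using Python’s standard library (the key-derivation parameters are chosen arbitrarily for the example):

      ```python
      import os
      import hmac
      import hashlib

      def hash_password(password: str) -> tuple[bytes, bytes]:
          """Hash a password with a per-user random salt (PBKDF2-HMAC-SHA256)."""
          salt = os.urandom(16)  # unique salt per password
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return salt, digest    # store these, never the plain-text password

      def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
          """Re-hash the guess with the stored salt and compare in constant time."""
          guess = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return hmac.compare_digest(guess, stored_digest)

      salt, digest = hash_password("correct horse battery staple")
      print(verify_password("correct horse battery staple", salt, digest))  # True
      print(verify_password("wrong guess", salt, digest))                   # False
      ```

      With this scheme, even someone who copies the whole database only gets salts and digests, so recovering a password still requires brute-force guessing; storing passwords in plain text, as in the Facebook and Adobe examples, skips all of that protection.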

    1. Some governments and laws protect the privacy of individuals (using a Natural Rights ethical framing). These include the European Union’s General Data Protection Regulation (GDPR), which includes a “right to be forgotten,” and in the United States, the Supreme Court has at times inferred a constitutional right to privacy.

      This example effectively shows how a Natural Rights ethical framework is reflected in real-world laws, linking abstract ethical principles to concrete legal protections of privacy.

    1. Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits.

      Social media data is used to maximize user engagement and profit, primarily through targeted advertising, but this same system can be exploited to manipulate vulnerable populations and undermine democratic processes.

    1. People in the antiwork subreddit found the website where Kellogg’s posted their job listing to replace the workers. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions.

      Data poisoning highlights that data is never neutral. Whether through unintentional bias or intentional sabotage, poisoned data can undermine research, distort automated decision-making, and be used as a form of resistance or political action in data-driven systems.

  3. Jan 2026
    1. Making a bot that is troll proof is very difficult! You either need to severely limit how your bot engages with people, or do a ton of work trying to prevent trolling and fix problems when people find a new way of trolling you.

      This example shows that even simple automated reply bots can be easily exploited to repeat harmful or abusive language. Adding basic rules or restrictions does not fully prevent trolling, which highlights how difficult it is to design automated systems that are both interactive and safe.
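      A tiny illustration of the point (a hypothetical echo-style bot, no real platform API): a basic blocklist filter stops the exact phrases it anticipates, but misspellings and messages it never anticipated slip right through.

      ```python
      BANNED_WORDS = {"insult", "slur"}  # hypothetical, obviously incomplete blocklist

      def bot_reply(message: str) -> str:
          """Echo-style bot: repeats the user's message back unless it looks 'bad'."""
          if any(word in message.lower() for word in BANNED_WORDS):
              return "Sorry, I can't repeat that."
          return f"You said: {message}"

      print(bot_reply("this is an insult"))          # blocked by the word filter
      print(bot_reply("this is an 1nsult"))          # slips through: spelling trick
      print(bot_reply("please repeat this rumor"))   # slips through: not on the list at all
      ```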

    1. 2003 saw the launch of several popular social networking services: Friendster, Myspace, and LinkedIn. These were websites where the primary purpose was to build personal profiles and create a network of connections with other people, and communicate with them. Facebook was launched in 2004 and soon put most of its competitors out of business, while YouTube, launched in 2005, became a different sort of social networking site built around video.

      This section discusses the early history of social networking services, noting the launch of platforms like Friendster, MySpace, LinkedIn, and later Facebook and YouTube, and how their purposes evolved over time.

    1. Age

      For age, I would store it as an integer with range constraints (e.g., 0–120) and allow users to opt out (“prefer not to say”). I would avoid storing birthdate unless absolutely necessary because it increases privacy risk. Even with constraints, age data can be inaccurate due to misreporting and can enable profiling or harm to minors, so in many cases an age range is a safer representation than an exact age.
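      A minimal sketch of that representation (the Profile class and field names are mine, not from the text): age is an optional integer with a range check, so users can opt out and out-of-range values are rejected.

      ```python
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Profile:
          username: str
          age: Optional[int] = None  # None means "prefer not to say"

          def __post_init__(self):
              if self.age is not None and not (0 <= self.age <= 120):
                  raise ValueError(f"age must be between 0 and 120, got {self.age}")

      print(Profile("alice", age=29))
      print(Profile("bob"))             # opts out of sharing an age
      # Profile("mallory", age=-5)      # would raise ValueError
      ```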

    1. So, for example, when Twitter tells me that the tweet was posted on Feb 10, 2020, does it mean Feb 10 for me? Or for the person who posted it? Those might not be the same. Or if I want to see for a given account, how much they tweeted “yesterday,” what do I mean by “yesterday?” We might be in different time zones and have different start and end times for what we each call “yesterday.”

      Images, sounds, videos, and dates require complex representations that simplify reality; choices such as compression and time zone definitions shape what data we see and how we interpret social media activity, raising ethical concerns about accuracy, context, and fairness.
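      A small sketch of the time-zone point using Python’s standard zoneinfo module: the same stored moment falls on Feb 10 for a reader in Los Angeles but Feb 11 for a reader in Tokyo, so “yesterday” depends on whose calendar we use.

      ```python
      from datetime import datetime, timedelta, timezone
      from zoneinfo import ZoneInfo

      # One single moment in time, stored in UTC (as platforms typically do internally).
      posted_at = datetime(2020, 2, 10, 23, 30, tzinfo=timezone.utc)

      for tz_name in ["America/Los_Angeles", "Asia/Tokyo"]:
          local = posted_at.astimezone(ZoneInfo(tz_name))
          yesterday = local.date() - timedelta(days=1)
          print(f"{tz_name}: posted on {local.date()}, their 'yesterday' was {yesterday}")

      # America/Los_Angeles sees the post on 2020-02-10, Asia/Tokyo on 2020-02-11:
      # the "same" tweet lands on different calendar dates for different readers.
      ```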

    1. Justine lost her job at IAC, apologized, and was later rehired by IAC.

      From a utilitarian perspective, IAC’s decision to dismiss Justine can be understood as an attempt to minimize overall harm and protect the company’s public image. Firing her helped calm public outrage and maintain consumer trust, which benefits a larger group of stakeholders. However, this case also shows a limitation of utilitarianism: the severe consequences for one individual may be justified too quickly in the name of collective benefit, especially when online outrage escalates rapidly.