21 Matching Annotations
  1. Last 7 days
    1. Larger efforts at trying to determine emotions or mental health through things like social media use, or iPhone or iWatch use, have had very questionable results, and any claims of being able to detect emotions reliably are probably false.

      This is interesting because people today are trying to assess mental health through social media and devices like the iPhone or Apple Watch, but those efforts have produced iffy results, and any claims of reliably detecting emotions are probably false. Social media itself can affect moods in misleading ways, so we can’t assume everything we see there reflects how someone actually feels.
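
      To see why reliable detection is so hard, here is a minimal sketch of the kind of naive keyword-based "emotion detector" such claims often reduce to; the word list and example posts are hypothetical, not any real system:

      ```python
      # A naive keyword "emotion detector" (hypothetical; real systems are
      # more complex, but they face the same ambiguity problem).
      SAD_WORDS = {"sad", "tired", "alone"}

      def detect_sadness(post: str) -> bool:
          """Flag a post as 'sad' if it contains any sad keyword."""
          return any(word in post.lower().split() for word in SAD_WORDS)

      print(detect_sadness("I'm so tired of winning, best day ever!"))  # True (a false alarm)
      print(detect_sadness("Nothing feels worth doing lately."))        # False (a miss)
      ```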

    1. People with various illnesses often find support online, and even form online communities. It is often easier to fake an illness in an online community than in an in-person community, so many have done so (like the fake @Sciencing_Bi account fake-dying of COVID).

      I noticed this a lot in a Film as Literature class I took in high school, where this topic was taken seriously, and I have seen it before on Instagram, where someone pretended to have cancer. I thought, "They shouldn't be doing this; this could get them in legal trouble." I don't know why this is growing; people need to realize that this is really a way of asking for money, and it should stop, because it is not ethically right.

  2. Feb 2026
    1. where people get filtered into groups and the recommendation algorithm only gives people content that reinforces and doesn’t challenge their interests or beliefs

      At times, algorithms can be worrisome because they often place people into groups that reinforce beliefs, limiting exposure to differing perspectives. This filtering can be dangerous, as people may come to trust the content and people they are shown without realizing how biased the recommendations have become.
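
      To make the filtering concrete, here is a minimal sketch of a recommender that only surfaces items matching a user’s existing interests; the data and matching rule are hypothetical, since the real algorithms are secret (as the next annotation notes):

      ```python
      # A minimal filter-bubble sketch with hypothetical data;
      # this is not any platform's actual recommendation algorithm.
      user_interests = {"cats", "gardening"}

      catalog = [
          {"title": "Top 10 cat videos", "tags": {"cats"}},
          {"title": "Urban gardening tips", "tags": {"gardening"}},
          {"title": "A challenging opposing viewpoint", "tags": {"politics"}},
      ]

      def recommend(interests: set, items: list) -> list:
          """Return only items whose tags overlap the user's interests."""
          return [item for item in items if item["tags"] & interests]

      # Anything outside the user's current interests is never shown,
      # so existing beliefs get reinforced rather than challenged.
      for item in recommend(user_interests, catalog):
          print(item["title"])
      ```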

    1. Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons:

      Social media algorithms are interesting because most users don’t realize how much their online behavior is tracked and analyzed. While the exact workings are secret, it’s clear these systems are designed to influence what we see and how we interact in subtle, complex ways.

    1. When designers and programmers don’t think to take into account different groups of people, then they might make designs that don’t work for everyone. This problem often shows up in how designs do or do not work for people with disabilities. But it also shows up in other areas as well.

      When designers default to a narrow idea of the average user, their work can unintentionally exclude people whose experiences differ. That exclusion isn’t limited to disability; it shows how design choices quietly decide who gets to participate easily and who doesn’t.

    1. In comparison to tetrachromats, trichromats (the majority of people) lack the ability to see some colors. But our society doesn’t build things for tetrachromats, so their extra ability to see color doesn’t help them much. And trichromats’ relative reduction in seeing color doesn’t cause them difficulty, so being a trichromat isn’t considered to be a disability.

      Ability only matters in relation to the environment: tetrachromats may see more, but because the world isn’t built for that perception, it has little value. What we call a disability often reflects societal norms, not an objective lack of ability.

    1. Hackers can target individuals with attacks like: password reuse attacks, where if they find out your password from one site, they try that password on many other sites; and hackers tricking a computer into thinking they are another site (for example, the US NSA impersonated Google).

      Hacking can be very dangerous because many people reuse passwords or create predictable patterns that attackers can exploit. When hackers discover one password, they can use it across multiple sites, increasing the risk of data loss and unauthorized access.
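
      One mitigation this points toward is a strong, unique password for every site, so a password leaked from one site can’t be replayed against the others. A minimal sketch using Python’s standard secrets module (the site list is hypothetical):

      ```python
      import secrets
      import string

      def generate_password(length: int = 16) -> str:
          """Generate a random password from letters, digits, and punctuation."""
          alphabet = string.ascii_letters + string.digits + string.punctuation
          return "".join(secrets.choice(alphabet) for _ in range(length))

      # One unique password per site (hypothetical site list), so a breach
      # at one site cannot be reused in a password reuse attack elsewhere.
      sites = ["mail.example.com", "bank.example.com", "social.example.com"]
      vault = {site: generate_password() for site in sites}

      for site, password in vault.items():
          print(site, password)
      ```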

    1. We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private

      This part confirms my thinking that sharing passwords is a bad idea, because it gives away exactly the privacy that protects us. We keep information like phone passwords private to prevent others from stealing our identities or gaining unauthorized access to our accounts and other information. Sharing passwords removes that protection and increases the risk of theft, financial loss, and misuse of personal information.

    1. Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence.

      I have noticed a lot of these on social media; quizzes and posts like to ask about these kinds of things, and we have to be careful, because AI nowadays gives old ideas a new sense of accuracy even though the science behind them is still wrong. It also shows how technology can reinforce bias instead of eliminating it if it’s not used carefully.

    1. Some data is directly provided to the platform by the users. Platforms may ask users for information like: email address, name, profile picture, interests, friends.

      This is interesting because people often share this data freely, which gives platforms detailed insight into users’ identities and behaviors. That information can be used to personalize experiences, but it also raises privacy and data-use concerns.
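
      As a rough illustration, here is what a record of this directly provided data could look like; the field names mirror the passage’s list, but the structure is hypothetical and not any real platform’s schema:

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class UserProfile:
          # Fields the passage lists as directly provided by users.
          email: str
          name: str
          profile_picture_url: str
          interests: list = field(default_factory=list)
          friends: list = field(default_factory=list)

      profile = UserProfile(
          email="amy@example.com",
          name="Amy",
          profile_picture_url="https://example.com/amy.jpg",
          interests=["hiking", "photography"],
          friends=["ben", "cat"],
      )
      print(profile)
      ```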

  3. Jan 2026
    1. Catfishing: Create a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone

      What’s interesting here is how catfishing exploits our natural tendency to trust social signals, showing that deception can manipulate emotions and behavior in even the smallest ways. It also reveals that even the smallest lies can have a big impact once they’re uncovered, because humans are so sensitive to being misled.

    1. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal only to then have it revealed that the signal was untrustworthy is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are even more heightened when we find we have been duped simply for someone else’s amusement at having done so.

      This part is interesting because an untrustworthy signal shows everyone that the material is fake, the people behind it can’t be identified, and the sources might not be credible either, which angers me when something like this happens in my feed. What’s interesting here is how deeply our sense of trust is tied to predictability and reliability: when a signal we rely on turns out to be deceptive, it not only surprises us but also triggers a strong emotional reaction.

    1. Some social media sites only allow reciprocal connections, like being “friends” on Facebook. Some social media sites offer one-way connections, like following someone on Twitter or subscribing to a YouTube channel. There are, of course, many variations and nuances besides what we mentioned above, but we wanted to get you started thinking about some different options.

      There are many social media connection pathways out there that we don’t always perceive as dangerous, but they can be. For example, even one-way connections like following someone on Instagram or subscribing to a YouTube channel can expose personal information or allow strangers to influence our opinions without us realizing it.
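
      The reciprocal versus one-way distinction maps neatly onto undirected versus directed graphs. A minimal sketch with hypothetical users:

      ```python
      # Reciprocal ("friend") connections act like an undirected graph:
      # the edge exists for both people or not at all.
      friends = set()

      def befriend(a: str, b: str) -> None:
          friends.add(frozenset((a, b)))

      # One-way ("follow") connections act like a directed graph:
      # an edge from follower to followee, with no edge back required.
      follows = {}

      def follow(follower: str, followee: str) -> None:
          follows.setdefault(follower, set()).add(followee)

      befriend("alice", "bob")
      follow("carol", "alice")  # carol follows alice...

      print(frozenset(("bob", "alice")) in friends)   # True: friendship is mutual
      print("carol" in follows.get("alice", set()))   # False: ...alice doesn't follow back
      ```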

    1. One of the early ways of social communication across the internet was with Email, which originated in the 1960s and 1970s. These allowed people to send messages to each other, and look up if any new messages had been sent to them.

      I am very surprised; I didn't know people were using email that far back. I assumed people in the 1960s-1970s communicated mostly through hand-written letters, and I'm interested in how email caught on in that era, long before laptops, when it ran on shared mainframe and time-sharing computers.

    1. When looking at real-life data claims and datasets, you will likely run into many different problems and pitfalls in using that data. Any dataset you find might have: missing data, erroneous data (e.g., mislabeled, typos), biased data, manipulated data.

      It’s interesting to see that data can be misleading, and we have to be careful because no dataset is 100% accurate. Data can have missing values, errors, or biases, so when analyzing it we have to think carefully about whether what we’re seeing is true or whether it is an artifact of an erroneous, biased, or incomplete dataset.
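
      A few quick checks can surface these pitfalls before any analysis. A minimal sketch using pandas on a small hypothetical dataset:

      ```python
      import pandas as pd

      # A hypothetical dataset with the pitfalls the passage lists:
      # a missing value, inconsistent labels, and an implausible outlier.
      df = pd.DataFrame({
          "user": ["amy", "ben", "cat", "dan"],
          "country": ["US", "U.S.", None, "US"],  # inconsistent labels + missing data
          "age": [29, 34, 27, 290],               # 290 is likely a typo
      })

      print(df.isna().sum())               # count missing values per column
      print(df["country"].value_counts())  # spot inconsistent labeling
      print(df[df["age"] > 120])           # flag implausible outliers
      ```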

    1. If we download information about a set of tweets (text, user, time, etc.) to analyze later, we might consider that set of information as the main data, and our metadata might be information about our download process, such as when we collected the tweet information, which search term we used to find it, etc.

      I never realized how powerful metadata can be. It’s interesting that it’s not just about the content of the tweets, but also about information like when and how we collected them. That extra layer can really change how we understand and analyze data: it can reveal when someone does things, along with trends and behaviors we don’t otherwise see behind the scenes.
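
      A minimal sketch of keeping the main data and the collection metadata side by side (all records and field names hypothetical):

      ```python
      from datetime import datetime, timezone

      # Main data: the tweets themselves (hypothetical, simplified records).
      tweets = [
          {"text": "hello world", "user": "amy", "time": "2026-01-05T12:00:00Z"},
          {"text": "good morning", "user": "ben", "time": "2026-01-05T12:01:00Z"},
      ]

      # Metadata: information about our download process, as the passage describes.
      collection_metadata = {
          "collected_at": datetime.now(timezone.utc).isoformat(),
          "search_term": "morning",  # which query found these tweets
          "tweet_count": len(tweets),
      }

      print(collection_metadata)
      ```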

    1. Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers.

      I think this is really interesting, because bots are not independent; they are written by real people, and bots written by different people can mean very different things. This helps explain why online content from bots can sometimes seem unpredictable or inconsistent.
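
      To make the split between who writes a bot and who runs it concrete, here is a toy sketch; post_message is a hypothetical stand-in, since posting to a real platform would require its API and credentials:

      ```python
      import time
      from datetime import datetime

      def post_message(text: str) -> None:
          # Hypothetical stand-in for a real platform API call.
          print(f"[{datetime.now():%H:%M:%S}] posted: {text}")

      # The bot's "intentions" live entirely in instructions like these,
      # written by one person but possibly run or scheduled by someone else.
      messages = ["Good morning!", "Remember to hydrate.", "Good night!"]

      for text in messages:
          post_message(text)
          time.sleep(1)  # a real bot might wait hours between posts
      ```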

    1. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans, but are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:

      I think it’s very important that people understand what bots actually do, because they are often misconceived, especially in this day of ChatGPT, Microsoft Copilot, Google Gemini, and DuckAI. What’s interesting is that not all accounts called ‘bots’ are truly automated. I think this distinction is important because it changes how we understand online content, whether it’s influenced by algorithms or by paid human campaigns.

    1. Ubuntu

      I found this very interesting. At my high school, a long-time teacher who has been there for 25+ years, and who is also the head football coach, Steve Valach, emphasizes the word "Ubuntu" at the kickoff assembly each year during the first week of school. When I heard it for the first time nearly 4.5 years ago, he made it memorable because it means "I am because we are." That is the key part of Ubuntu: the connectedness it creates and the team aspect of it. It is harmonious, as the text says, because it is a feeling of unity. In football, when the game is really close, say 28-27, I sometimes see a huddle deciding how the team is going to win a nail-biting game, and "Ubuntu" comes to mind: the offense needs receivers catching and good route-running, special teams making it harder for the other team to score, and the defense stepping up to the plate. When the clock hits 0, it really has that feeling of "We won the game, everyone contributed," which is, in my mind, the feeling of Ubuntu: everyone pitched in, nobody hindered anyone else's capabilities, and everyone was capable. This idea also connects to virtue ethics, because it emphasizes developing good character through cooperation, respect, and helping others succeed.

    1. What you do not want done to yourself, do not do to others.’”

      I think this quote shows that we should treat others respectfully at all times and always think twice. Older people often say "Think twice," and it applies here: think twice before posting. That's critical, because we don't want to say something mean online that we normally would not say to someone in person; it creates a negative feeling that does not feel good, and we want to avoid that. The only way to do so is to be kind, be respectful, and understand that respectful communication is a two-way street. What one person says has a direct impact on the other, and we humans often don't realize that, so I think it's time to wake up from this denial and face the truth: what we say has a direct impact on others.

    1. Word spread, and Justine’s tweet went viral. Twitter users found other recent offensive tweets by Justine about countries she was traveling in. IAC (Justine’s employer) called the tweet “outrageous, offensive” but “Unfortunately, the employee in question is unreachable on an international flight.” Twitter users, now knowing that Justine is on a flight, started the hashtag #hasjustinelanedyet, which started trending on Twitter (including some celebrities tweeting about it).

      I'm kind of shocked by this. It's alarming how tweets spread; retweeting is kind of like a fungus that multiplies: once you repost, others do too, and when people repost on Facebook, Instagram, or X, even more people see it. This is the problem in today's world: we don't see the power of retweeting, and it isn't always good, like what happened here, which is what people often don't realize. Social media needs to do a better job with reposts, and people should think hard before reposting at all.