  1. Aug 2021
    1. If we cannot afford real, diverse, and independent assessment, we will not realize the promise of middleware.
    2. Facebook deploys tens of thousands of people to moderate user content in dozens of languages. It relies on proprietary machine-learning and other automated tools, developed at enormous cost. We cannot expect comparable investment from a diverse ecosystem of middleware providers. And while most providers presumably will not handle as much content as Facebook does, they will still need to respond swiftly to novel and unpredictable material from unexpected sources. Unless middleware services can do this, the value they provide will be limited, as will users' incentives to choose them over curation by the platforms themselves.

      Does heavy curation even need to exist? If a social media company simply pushed a linear, chronological feed of content to people without algorithmically forcing engagement, then smaller, fringe material wouldn't have the reach. The majority of the problem would be solved immediately with this single feature.
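
      As a minimal sketch of that idea (the Post structure here is hypothetical and not from the article), a strictly chronological feed simply ignores the engagement signal that ranking algorithms optimize for:

      ```python
      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class Post:
          author: str
          text: str
          published: datetime
          engagement_score: float  # clicks, reactions, dwell time, etc.

      def chronological_feed(posts: list[Post]) -> list[Post]:
          # A linear feed: newest first, with no engagement weighting at all.
          return sorted(posts, key=lambda p: p.published, reverse=True)

      def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
          # The algorithmic alternative: whatever provokes the most
          # engagement floats to the top, regardless of recency.
          return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
      ```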

    3. Second, how is everyone going to get paid? Without a profit motive for middleware providers, the magic will not happen, or it will not happen at large enough scale. Something about business models—or, at a minimum, the distribution of ads and ad revenue—will have to change. That leaves the two thorny issues I do know a fair amount about: curation costs and user privacy.
    4. First, how technologically feasible is it for competitors to remotely process massive quantities of platform data? Can newcomers really offer a level of service on par with incumbents?

      Do they really need to process all the data?

    5. The First Amendment precludes lawmakers from forcing platforms to take down many kinds of dangerous user speech, including medical and political misinformation.

      Compare social media with the newspaper business from this perspective.

      People joined social media not knowing the end effects, and now, after the fact, they don't have a meaningful choice of platform. Social platforms accelerate disinformation with their algorithms.

      Because there is choice among newspapers, people can easily move; if they had subscribed to a racist fringe newspaper, they could simply end their subscription and go somewhere else. This is patently not the case for any social media. There's a high hidden personal cost to connectivity that isn't taken into account. The government needs to regulate this, not the speech itself.

      Social media should be classified as a common carrier and regulated as such. Forcing this was an easier and more logical process for telephone, electricity, and other utilities because the cost of implementation for them was orders of magnitude higher. The data formats and storage for social media should be standardized (potentially even in three or more formats), and that standard is what the common-carrier requirement should impose. Would this properly skirt the First Amendment issues?
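
      As a rough illustration of what a standardized interchange format could look like, here is a hypothetical post expressed in the shape of a W3C ActivityStreams 2.0 "Note" (the vocabulary the fediverse already builds on); the URLs and values are made up:

      ```python
      import json

      # A hypothetical post serialized in the shape of an ActivityStreams 2.0 "Note".
      # Any platform that can read and write this shape could, in principle,
      # exchange posts with any other, which is roughly what a common-carrier-style
      # data-format mandate would require.
      post = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Note",
          "id": "https://example.social/users/alice/posts/1",    # hypothetical URL
          "attributedTo": "https://example.social/users/alice",  # hypothetical author
          "published": "2021-08-15T12:00:00Z",
          "content": "Hello from a portable, standardized post.",
          "to": ["https://www.w3.org/ns/activitystreams#Public"],
      }

      print(json.dumps(post, indent=2))
      ```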

    6. Fukuyama's work, which draws on both competition analysis and an assessment of threats to democracy, joins a growing body of proposals that also includes Mike Masnick's "protocols not platforms," Cory Doctorow's "adversarial interoperability," my own "Magic APIs," and Twitter CEO Jack Dorsey's "algorithmic choice."

      Nice overview of current work on fixing the monopoly problem in the social media space. I hadn't heard about Fukuyama's or Daphne Keller's versions before.

      I'm not sure Dorsey's is actually a thing. I suspect it has been vaporware from the word go.

      IndieWeb has been working slowly at the problem as well.

    7. Francis Fukuyama has called "middleware": content-curation services that could give users more control over the material they see on internet platforms such as Facebook or Twitter.
  2. Jul 2021
    1. Platforms of the Facebook walled-factory type are unsuited to the work of building community, whether globally or locally, because such platforms are unresponsive to their users, and unresponsive by design (design that is driven by a desire to be universal in scope). It is virtually impossible to contact anyone at Google, Facebook, Twitter, or Instagram, and that is so that those platforms can train us to do what they want us to do, rather than be accountable to our desires and needs

      This is one of the biggest underlying problems that centralized platforms often have. It's also a solid reason why EdTech platforms are pernicious.

    2. As Astra Taylor explains in her vital book The People’s Platform, this process has often been celebrated by advocates of new platforms.

      Worth taking a look at?

    3. It is common to refer to universally popular social media sites like Facebook, Instagram, Snapchat, and Pinterest as “walled gardens.” But they are not gardens; they are walled industrial sites, within which users, for no financial compensation, produce data which the owners of the factories sift and then sell. Some of these factories (Twitter, Tumblr, and more recently Instagram) have transparent walls, by which I mean that you need an account to post anything but can view what has been posted on the open Web; others (Facebook, Snapchat) keep their walls mostly or wholly opaque.

      Would it be useful to differentiate the silos based on their level of access? Some are transparent silos while others are not?

      Could we define a spectrum from silo to open? Perhaps axes based on audience or access? Privacy to fully open? How many axes might there be?

    1. What motivated my newsletter reading habits normally? In large part, affection and light voyeurism. I subscribed to the newsletters of people I knew, who treated the form the way they had once treated personal blogs. I skimmed the dadlike suggestions of Sam Sifton in the New York Times’ Cooking newsletter (skillet chicken and Lana Del Rey’s “Chemtrails Over the Country Club” — sure, okay). I subscribed briefly to Alison Roman’s recipe newsletter before deciding that the ratio of Alison Roman to recipes was much too high. On a colleague’s recommendation, I subscribed to Emily Atkin’s climate newsletter and soon felt guilty because it was so long and came so often that I let it pile up unread. But in order to write about newsletters, I binged. I went about subscribing in a way no sentient reader was likely to do — omnivorously, promiscuously, heedless of redundancy, completely open to hate-reading. I had not expected to like everything I received. Still, as the flood continued, I experienced a response I did not expect. I was bored.

      The question of motivation about newsletter subscriptions is an important one. Some of the thoughts here mirror some of my feelings about social media in general.

      Why?

    2. Early on, circa 2015, there was a while when every first-person writer who might once have written a Tumblr began writing a TinyLetter. At the time, the writer Lyz Lenz observed that newsletters seemed to create a new kind of safe space. A newsletter’s self-selecting audience was part of its appeal, especially for women writers who had experienced harassment elsewhere online.

      What sort of spaces do newsletters create based upon their modes of delivery? What makes them "safer" for marginalized groups? Is there a mitigation of algorithmic speed and reach that helps? Is it a more tacit building of community and conversation? How can these benefits be built into an IndieWeb space?

      How can a platform provide "reach" while simultaneously creating negative feedback for trolls and bad actors?

    1. Offline we exist by default; online we have to post our way into selfhood.
    2. A platform like Twitter makes our asynchronous posts feel like real-time interaction by delivering them in such rapid succession, and that illusion begets another more powerful one, that we’re all actually present within the feed.

      This same sort of illusion also occurs in email, where we're assumed to be constantly available to others.

      via Alan Jacobs in re-setting my mental clock – Snakes and Ladders (07/01/2021 14:58:05)

  3. Jun 2021
    1. Professor, interested in plagues, and politics. Re-locking my twitter acct when is 70% fully vaccinated.

      Example of a professor/researcher who has apparently made his Tweets public, but intends to re-lock them once the majority of the threat is over.

  4. May 2021
    1. Charlotte Jee recently wrote a lovely fictional intro to a piece on a “feminist Internet” that crystallized something I can’t quite believe I never saw before; if girls, women and non-binary people really got to choose where they spent their time online, we would never choose to be corralled into the hostile, dangerous spaces that endanger us and make us feel so, so bad. It’s obvious when you think about it. The current platforms are perfectly designed for misogyny and drive literally countless women from public life, or dissuade them from entering it. Online abuse, doxing, blue-tick dogpiling, pro-stalking and rape-enabling ‘features’ (like Strava broadcasting runners’ names and routes, or Slack’s recent direct-messaging fiasco) only happen because we are herded into a quasi-public sphere where we don’t make the rules and have literally nowhere else to go.

      A strong list of toxic behaviors that are meant to keep people from having a voice in the online commons. We definitely need to design these features out of our social software.

    1. In 1962, a book called Silent Spring by Rachel Carson documenting the widespread ecological harms caused by synthetic pesticides went off like a metaphorical bomb in the nascent environmental movement.

      Where is the Silent Spring in the data, privacy, and social media space?

    2. Amidst the global pandemic, this might sound not dissimilar to public health. When I decide whether to wear a mask in public, that’s partially about how much the mask will protect me from airborne droplets. But it’s also—perhaps more significantly—about protecting everyone else from me. People who refuse to wear a mask because they’re willing to risk getting Covid are often only thinking about their bodies as a thing to defend, whose sanctity depends on the strength of their individual immune system. They’re not thinking about their bodies as a thing that can also attack, that can be the conduit that kills someone else. People who are careless about their own data because they think they’ve done nothing wrong are only thinking of the harms that they might experience, not the harms that they can cause.

      What lessons might we draw from public health and epidemiology to improve our privacy lives in an online world? How might we wear social media "masks" to protect our friends and loved ones from our own posts?

    1. “For one of the most heavily guarded individuals in the world, a publicly available Venmo account and friend list is a massive security hole. Even a small friend list is still enough to paint a pretty reliable picture of someone's habits, routines, and social circles,” Gebhart said.

      Massive how? He's such a public figure that most of these connections are already widely reported in the media or easily guessable by a private investigator. The bigger issue is the related transaction data, which might open them up to other abuses or potential leverage, as in the other examples.

    1. Social media platforms work on a sort of flywheel of engagement. View->Engage->Comment->Create->View. Paywalls inhibit that flywheel, and so I think any hope for a return to the glory days of the blogosphere will result in disappointment.

      The analogy of social media as a flywheel of engagement is an apt one, and it also plays into the idea of "flow": if you're bored with one post, the next might be better.

    1. Yet apart from a few megastar “influencers”, most creators receive no reward beyond the thrill of notching up “likes”.

      But what are these people really making? Besides one or two of the highest paid, what is a fair-to-middling influencer really making?

    1. A strong and cogent argument for why we should not be listening to the overly loud cries from Tristan Harris and the Center for Humane Technology. The boundary of criticism they're setting is not extreme enough to make the situation significantly better.

      It's also a strong argument about whom to allow at the table when making decisions and evaluating criticism.

    2. These companies do not mean well, and we should stop pretending that they do.
    3. But “humane technology” is precisely the sort of pleasant sounding but ultimately meaningless idea that we must be watchful for at all times. To be clear, Harris is hardly the first critic to argue for some alternative type of technology, past critics have argued for: “democratic technics,” “appropriate technology,” “convivial tools,” “liberatory technology,” “holistic technology,” and the list could go on.

      A reasonable summary list of alternatives. Note how dreadful and unmemorable most of these names are. Most noticeable in this list is that I don't think anyone actually built tools that accomplish any of these theoretical things.

      It also makes it more noticeable that the Center for Humane Technology seems to be arguing theoretically against something rather than for something.

    4. Big tech can patiently sit through some zingers about their business model, as long as the person delivering those one-liners comes around to repeating big tech’s latest Sinophobic talking point while repeating the “they meant well” myth.
    5. Thus, these companies have launched a new strategy to reinvigorate their all American status: engage in some heavy-handed techno-nationalism by attacking China. And this Sinophobic, and often flagrantly racist, shift serves to distract from the misdeeds of the tech companies by creating the looming menace of a big foreign other. This is a move that has been made by many of the tech companies, it is one that has been happily parroted by many elected officials, and it is a move which Harris makes as well.

      Perhaps the better move is to frame these companies as behemoths on the scale of foreign countries, but ones which have far more power and should be scrutinized more heavily than even China itself. What if the enemy is already within and its name is Facebook or Google?

    1. Darren Dahly. (2021, February 24). @SciBeh One thought is that we generally don’t ‘press’ strangers or even colleagues in face to face conversations, and when we do, it’s usually perceived as pretty aggressive. Not sure why anyone would expect it to work better on twitter. Https://t.co/r94i22mP9Q [Tweet]. @statsepi. https://twitter.com/statsepi/status/1364482411803906048

    1. Ira, still wearing a mask, Hyman. (2020, November 26). @SciBeh @Quayle @STWorg @jayvanbavel @UlliEcker @philipplenz6 @AnaSKozyreva @johnfocook Some might argue the moral dilemma is between choosing what is seen as good for society (limiting spread of disinformation that harms people) and allowing people freedom of choice to say and see what they want. I’m on the side of making good for society decisions. [Tweet]. @ira_hyman. https://twitter.com/ira_hyman/status/1331992594130235393

    1. build and maintain a sense of professional community. Educator and TikTok user Jeremy Winkle outlines four ways teachers can do this: provide encouragement, share resources, provide quick professional development, and ask a question of the day (Winkler).

      I love all of these ideas. It's all-around edifying!

  5. Apr 2021
    1. Others are asking questions about the politics of weblogs – if it’s a democratic medium, they ask, why are there so many inequalities in traffic and linkage?

      This still exists in the social media space, but has gotten even worse with the rise of algorithmic feeds.

    1. I managed to do half the work. But that’s exactly it: It’s work. It’s designed that way. It requires a thankless amount of mental and emotional energy, just like some relationships.

      This is a great example of how services like Facebook can be like the abusive significant other you can never leave.

    2. I realized it was foolish of me to think the internet would ever pause just because I had. The internet is clever, but it’s not always smart. It’s personalized, but not personal. It lures you in with a timeline, then fucks with your concept of time. It doesn’t know or care whether you actually had a miscarriage, got married, moved out, or bought the sneakers. It takes those sneakers and runs with whatever signals you’ve given it, and good luck catching up.
    3. Pinterest doesn’t know when the wedding never happens, or when the baby isn’t born. It doesn’t know you no longer need the nursery. Pinterest doesn’t even know if the vacation you created a collage for has ended. It’s not interested in your temporal experience. This problem was one of the top five complaints of Pinterest users.
    4. So on a blindingly sunny day in October 2019, I met with Omar Seyal, who runs Pinterest’s core product. I said, in a polite way, that Pinterest had become the bane of my online existence. “We call this the miscarriage problem,” Seyal said, almost as soon as I sat down and cracked open my laptop. I may have flinched. Seyal’s role at Pinterest doesn’t encompass ads, but he attempted to explain why the internet kept showing me wedding content. “I view this as a version of the bias-of-the-majority problem. Most people who start wedding planning are buying expensive things, so there are a lot of expensive ad bids coming in for them. And most people who start wedding planning finish it,” he said. Similarly, most Pinterest users who use the app to search for nursery decor end up using the nursery. When you have a negative experience, you’re part of the minority, Seyal said.

      What a gruesome name for an all-too-frequent internet problem: the "miscarriage problem."

    5. To hear technologists describe it, digital memories are all about surfacing those archival smiles. But they’re also designed to increase engagement, the holy grail for ad-based business models.

      It would be far better to have apps focus on better reasons for "on this day" features. I'd love to have something focused on spaced repetition for building up my memory of other things. Reminders at a week, a month, three months, and six months would be useful for some posts.
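
      A minimal sketch of such a resurfacing schedule, using the intervals suggested above (the function and its use here are hypothetical, not an existing app feature):

      ```python
      from datetime import date, timedelta

      # Expanding intervals roughly matching the suggestion above:
      # one week, one month, three months, six months.
      REVIEW_INTERVALS = [timedelta(days=7), timedelta(days=30),
                          timedelta(days=90), timedelta(days=180)]

      def review_dates(published: date) -> list[date]:
          """Dates on which a post could be resurfaced for spaced repetition."""
          return [published + interval for interval in REVIEW_INTERVALS]

      # A post published on 1 January 2021 would resurface on
      # 8 Jan, 31 Jan, 1 Apr, and 30 Jun.
      print(review_dates(date(2021, 1, 1)))
      ```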

    6. Our smartphones pulse with memories now. In normal times, we may strain to remember things for practical reasons—where we parked the car—or we may stumble into surprise associations between the present and the past, like when a whiff of something reminds me of Sunday family dinners. Now that our memories are digital, though, they are incessant, haphazard, intrusive.
    7. I still have a photograph of the breakfast I made the morning I ended an eight-year relationship and canceled a wedding. It was an unremarkable breakfast—a fried egg—but it is now digitally fossilized in a floral dish we moved with us when we left New York and headed west. I don’t know why I took the photo, except, well, I do: I had fallen into the reflexive habit of taking photos of everything. Not long ago, the egg popped up as a “memory” in a photo app. The time stamp jolted my actual memory.

      Example of unwanted spaced repetition via social media.

    1. This year’s Slow Art Day — April 10 — comes at a time when museums find themselves in vastly different circumstances.

      Idea: Implement a slow web week for the IndieWeb, perhaps to coincide with the summit at the end of the week.

      People eschew reading material from social media and only consume from websites and personal blogs for a week. The tough part is how to actually implement this. Many people would have a tough time finding interesting reading material on short notice. What are good discovery endpoints for that? WordPress.com's reader? Perhaps support from the feed reader community?

  6. Mar 2021
    1. There's a reasonably good overview here of some ideas about fixing the harms social media is doing to democracy, and it's well framed by history.

      Much of it appears to be a synopsis from the perspective of one who's only managed to attend Pariser and Stroud's recent Civic Signals/New_Public Festival.

      It could have touched on other research in the social space, including work in the Activity Streams and IndieWeb communities, to provide some alternate viewpoints.

    2. Tang has sponsored the use of software called Polis, invented in Seattle. This is a platform that lets people make tweet-like, 140-character statements, and lets others vote on them. There is no “reply” function, and thus no trolling or personal attacks. As statements are made, the system identifies those that generate the most agreement among different groups. Instead of favoring outrageous or shocking views, the Polis algorithm highlights consensus. Polis is often used to produce recommendations for government action.

      An example of social media for proactive government action.
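
      A toy sketch of the consensus-highlighting idea described above (not Polis's actual implementation, which also clusters voters into opinion groups): score each statement by its agreement in the group that agrees with it least, so only statements supported across groups rise to the top. The statements and numbers are invented:

      ```python
      # votes[statement][group] = fraction of that group voting "agree"
      votes = {
          "Fund more public parks":        {"group_a": 0.9, "group_b": 0.8},
          "Ban cars from the city center": {"group_a": 0.9, "group_b": 0.2},
      }

      def consensus_score(by_group: dict[str, float]) -> float:
          # A statement is only as consensual as its least-agreeing group.
          return min(by_group.values())

      ranked = sorted(votes, key=lambda s: consensus_score(votes[s]), reverse=True)
      print(ranked)  # the cross-group statement ranks first
      ```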

    3. Matias has his own lab, the Citizens and Technology Lab at Cornell, dedicated to making digital technologies that serve the public and not just private companies.

      [[J. Nathan Matias]] Citizens and Technology Lab

      I recall having looked at some of this research and not thinking it was as strong as is indicated here. I also seem to recall he had a connection with Tristan Harris?

    4. What Fukuyama and a team of thinkers at Stanford have proposed instead is a means of introducing competition into the system through “middleware,” software that allows people to choose an algorithm that, say, prioritizes content from news sites with high editorial standards.

      This is the second reference I've seen recently (Jack Dorsey mentioning a version was the first) to a marketplace for algorithms.

      Does this help introduce enough noise into the system to confound the drive to the extremes for the average person? What should we suppose from the perspective of probability theory?

    5. One person writing a tweet would still qualify for free-speech protections—but a million bot accounts pretending to be real people and distorting debate in the public square would not.

      Do bots have or deserve the right to not only free speech, but free reach?

    6. The scholars Nick Couldry and Ulises Mejias have called it “data colonialism,” a term that reflects our inability to stop our data from being unwittingly extracted.

      I've not run across data colonialism before.

    1. What I’d like more of is a social web that sits between these two extremes, something with a small town feel. So you can see people are around, and you can give directions and a friendly nod, but there’s no need to stop and chat, and it’s not in your face. It’s what I’ve talked about before as social peripheral vision (that post is about why it should be built into the OS).

      I love the idea of social peripheral vision online.

    2. A status emoji will appear in the top right corner of your browser. If it’s smiling, there are other people on the site right now too.

      This is pretty cool looking. I'll have to add it as an example to my list: Social Reading User Interface for Discovery.

      We definitely need more things like this on the web.

      It makes me wish the Reading.am indicator were there without needing to click on it.

      I wonder how this sort of activity might be built into social readers as well?
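
      A minimal server-side sketch of how such a presence signal could work (purely an assumption about the mechanism, not how this particular site is built): each visitor's open page pings periodically, and anyone seen within the last minute counts as "around".

      ```python
      import time

      PRESENCE_WINDOW_SECONDS = 60        # anyone seen in the last minute counts
      last_seen: dict[str, float] = {}    # anonymous visitor id -> last ping time

      def heartbeat(visitor_id: str) -> None:
          """Record a ping from a visitor's open page."""
          last_seen[visitor_id] = time.time()

      def visitors_around() -> int:
          """How many visitors have pinged recently?"""
          cutoff = time.time() - PRESENCE_WINDOW_SECONDS
          return sum(1 for t in last_seen.values() if t >= cutoff)

      # The page could then show a smiling status emoji whenever
      # visitors_around() is greater than one.
      ```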

    3. If somebody else selects some text, it’ll be highlighted for you.

      Suddenly social annotation has taken an interesting twist. @Hypothes_is better watch out! ;)

    1. ReconfigBehSci. (2020, December 5). As everyone’s focus turns to vaccine hesitancy, we will need to take a close look not just at social media but at Amazon- the “top” recommendations I get when typing in ‘vaccine’ are all anti-vaxx https://t.co/ug5QAcKT9Q [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1335181088818388992

    1. So Substack has an editorial policy, but no accountability. And they have terms of service, but no enforcement.

      This is also the case for many other toxic online social media platforms. A fantastic framing.

    1. Q: So, this means you don’t value hearing from readers? A: Not at all. We engage with readers every day, and we are constantly looking for ways to hear and share the diversity of voices across New Jersey. We have built strong communities on social platforms, and readers inform our journalism daily through letters to the editor. We encourage readers to reach out to us, and our contact information is available on this How To Reach Us page.

      We have built strong communities on social platforms

      They have? Really?! I think it's more likely that the social platforms have built strong communities which happen to be talking about and sharing the paper's content. The paper doesn't have any content moderation or control capabilities on any of these platforms.

      Now it may be the case that there is a broader diversity of voices on those platforms than in their own comments sections. This means that a small proportion of potential trolls won't drown out the signal with noise, as may happen in their comments sections online.

      If the paper is really listening on the other platforms, how are they doing it? Isn't reading some or all of it a large portion of content moderation? How do they get notifications of people mentioning them (is it only direct @mentions)?

      Couldn't/wouldn't an IndieWeb version of this help them or work better?

    1. A question on CSS or accessibility or even content management is a rare thing indeed. This isn't a community centred on helping people build their own websites, as I had first imagined[5]. Instead, it's a community attempting to shift the power in online socialising away from Big Tech and back towards people[6].

      There is more of the latter than the former to be certain, but I don't think it's by design.

      Many of the people there are already experts in some of these sub-areas, so there aren't as many questions on those fronts. Often there are other resources that are also better for these issues and links to them can be found within the wiki.

      The social portions are far more difficult, so this is where folks are a bit more focused.

      I think when the community grows, we'll see more of these questions about CSS, HTML, and accessibility. (In fact, I wish more people were concerned about accessibility and why it's important.)

    1. In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”

      Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose stated mission is to connect people, while its actual mission is to make money by seemingly radicalizing people to the extremes of our political spectrum.

      This highlights the fact that while many may view content moderation on platforms like Facebook (removing voices, or deplatforming people like Donald J. Trump or Alex Jones) as an anti-democratic move, in fact it is not. Because of Facebook's active move to accelerate extreme ideas by pushing them algorithmically, it is Facebook that is actively being un-democratic. Democratic behavior on Facebook would look like one voice, one account, and reach only commensurate with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.

    1. ReconfigBehSci. (2020, December 8). I’ve been pondering failed predictions today. A spectacular error of mine: In the early media rush to listen to scientists and doctors, I actually thought Western societies might be seeing the end of the “influencer” and a renewed interest in people who did stuff 1/2 [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1336383952232308736

    1. "So capitalism created social media. Literally social life, but mediated by ad sellers." https://briefs.video/videos/why-the-indieweb/

      Definition of social media: social life, but mediated by capitalistic ad sellers online.

  7. Feb 2021
    1. Dispo is an invite-only social photo app with a twist: you can’t see any photos you take with the app until 24 hours after you take them. (The app sends you a push notification to open them every day at 9AM local time: among other things, a nice hack to boost daily usage.) Founded by David Dobrik, one of the world’s most popular YouTubers, Dispo has been around as a basic utility for a year.

      This is the first reference to Dispo I've come across.

    1. Small world of annotation enthusiasts, but hopefully getting bigger!

      I've always wished that Hypothes.is had some additional social features built in for discovering and following others, but they do have just enough for those who are diligent.

      I've written a bit about how to follow folks and tags using a feed reader.

      And if you want some quick links or even an OPML feed of people and material I'm following on Hypothesis: https://boffosocko.com/about/following/#Hypothesis%20Feeds

    1. Sharpe claims that Englishmen “were able to…constitute themselves as political agents” by reading, whether or not they read about state affairs; for politics was “a type of consciousness” and the psyche “a text of politics.” “The Civil War itself became a contested text.” So reading was everything: “We are what we read.”

      The argument here is that much of the English Civil War was waged in reading and writing. Compare this with today's similar political civil war between the right and the left, which is being waged instead on social media in sound bites, video clips, and tweets that encourage visceral gut reactions rather than longer, better-thought-out arguments and well-tempered responses.

      Instead of moving forward on the axis of thought and rationality, we're descending into the primordial and visceral reactions of our "reptilian brains."

    1. A view into communities, identity, and how smaller communities might be built in new ways and with new business models that aren't as centralized or ad driven as Facebook, Twitter, et al.

    2. But the inverse trajectory, from which this essay takes its name, is now equally viable: “come for the network, pay for the tool.” Just as built-in social networks are a moat for information products, customized tooling is a moat for social networks.1 This entrenchment effect provides a realistic business case for bespoke social networks. Running a bespoke social network means you’re basically in the same business as Slack, but for a focused community and with tailored features. This is a great business to be in for the same reasons Slack is: low customer acquisition costs and long lifetime value. The more tools, content, and social space are tied together, the more they take on the qualities of being infrastructure for one’s life.

      An interesting value proposition and way of looking at the space that isn't advertising specific.

    1. I can even imagine a distant future where governments might sponsor e.g. social networking as a social service. I know many people don’t trust their governments, but when it comes down to it they’re more likely to be working in people’s interests than a group of unelected tech barons responsible only to their shareholders at best, or themselves in the cases where they have dual class stock with unequal voting rights, or even their families for 100s of years.

      Someone suggesting government-run social media. There are potential problems, but I'm definitely in favor of public libraries doing this sort of work/hosting/maintenance.

    1. We’ve always used the term ‘social networking’ to refer to the process of finding and connecting with those people. And that process has always depended on a fabric of trust woven most easily in the context of local communities and face-to-face interaction.

      Too much of modern social networking lacks this fabric of trust and suffers from rampant context collapse. How can we improve on these looking forward?

    1. Glad to have you back Ben!

      Interesting to hear the results of the experiment. Knowing that it made you only $10 on their platform is a useful data point.

      I can't wait to see what you come up with on the community front. Healthier competitors to Facebook's pages/communities is a problem we need more work on.

    1. But while we can all agree that tech has a moderation problem, there's a lot less consensus on what to do about it. Broadly speaking, there are two broad approaches: the first is to fix the tech giants and the second is to fix the Internet.

      There is another approach (or two or more). The IndieWeb approach is another framing which isn't included in the two listed here, though it does have a few hints of "fixing the Internet" since they have created some new web recommendations through the W3C.

      Circling back to this: his definition of "fixing the Internet" describes the IndieWeb almost exactly.

    2. Economists call this a "network effect": the more people there are on Twitter, the more reason there is to be on Twitter and the harder it is to leave. But technologists have another name for this: "lock in." The more you pour into Twitter, the more it costs you to leave. Economists have a name for that cost: the "switching cost."
    1. Technology can help us look at the world in new ways. It is therefore more than a tool: it is the connection between people and the world around them. “Technology mediates between humans and the world,” Verbeek concludes.

      Technology is an interface that, as McLuhan already noted, offers possibilities for seeing the world differently. Not less 'real' or 'true to nature', for that matter. We have long been accustomed to perceiving reality in mediated form (see Cooley), and have long been able to speak of a symbolic society (see Elchardus).