- Jun 2024
-
disruptedjournal.postdigitalcultures.org
-
In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”
Comment by chrisaldrich: Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose (stated) mission is to connect people, while its actual mission is to make money by seemingly radicalizing people to the extremes of our political spectrum.
This highlights the fact that while many may view content moderation on platforms like Facebook, including the deplatforming of people such as Donald J. Trump or Alex Jones, as removing voices in an anti-democratic move, in fact it is not. Because of Facebook's active move to accelerate extreme ideas by pushing them algorithmically, the platform is actively being un-democratic. Democratic behavior on Facebook would look like one voice, one account, and reach only commensurate with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.
-
- Sep 2023
-
www.wired.com
-
Zeynep Tufekci recently wrote in the Times: “YouTube may be one of the most powerful radicalizing instruments of the 21st century.”
-
“Even the creators don’t always understand why it recommends one video instead of another,” says Guillaume Chaslot, an ex-YouTube engineer who worked on the site’s algorithm.
-
According to YouTube chief product officer Neal Mohan, 70 percent of views on YouTube are from recommendations—so the site’s algorithms are largely responsible for amplifying RT’s propaganda hundreds of millions of times.
-
These algorithms are invisible, but they have an outsized impact on shaping individuals’ experience online and society at large.
-
- Mar 2023
-
-
he gained popularity, particularly among young men, by promoting what he presented as a hyper-masculine, ultra-luxurious lifestyle.
Andrew Tate, a former kickboxer and Big Brother (17, UK) housemate, has gained popularity among young men for promoting a "hyper-masculine, ultra-luxurious lifestyle".
Where does Tate fit into the pantheon of the prosperity gospel? Is he touching on it or extending it to the nth degree? How much of his audience overlaps with the religious right that would internalize such a viewpoint?
-
- Feb 2023
-
www.washingtonpost.com
-
TikTok offers an online resource center for creators seeking to learn more about its recommendation systems, and has opened multiple transparency and accountability centers where guests can learn how the app’s algorithm operates.
There seem to be a number of issues with the positive and negative feedback systems these social media companies are trying to create. What are they really measuring? They either aren't measuring well or aren't designing well (or both?)...
-
Is algorithmic content moderation creating a new sort of cancel culture online?
-
Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers doesn’t guarantee people will see your content. This shift has led average users to tailor their videos primarily toward the algorithm, rather than a following, which means abiding by content moderation rules is more crucial than ever.
Social media has slowly moved away from communication between people who know each other toward communication between people who are farther apart in social spaces. Increasingly from 2021 onward, platforms like TikTok have acted as distribution platforms and ignored explicit social connections like follower/followee in favor of algorithmic-only feeds that distribute content based on a variety of criteria, including the popularity of the content and the readers' interests.
-
- Jan 2023
-
chat.indieweb.org
-
[Rose] I trained instagram into thinking I have cats. I get a lot of cat adverts which are just cute pictures of cats now, 10/10 would recommend!
-
- Dec 2022
-
pluralistic.net
-
Alas, lawmakers are way behind the curve on this, demanding new "online safety" rules that require firms to break E2E and block third-party de-enshittification tools: https://www.openrightsgroup.org/blog/online-safety-made-dangerous/
The online free speech debate is stupid because it has all the wrong focuses:
- Focusing on improving algorithms, not whether you can even get a feed of things you asked to see;
- Focusing on whether unsolicited messages are delivered, not whether solicited messages reach their readers;
- Focusing on algorithmic transparency, not whether you can opt out of the behavioral tracking that produces training data for algorithms;
- Focusing on whether platforms are policing their users well enough, not whether we can leave a platform without losing our important social, professional and personal ties;
- Focusing on whether the limits on our speech violate the First Amendment, rather than whether they are unfair: https://doctorow.medium.com/yes-its-censorship-2026c9edc0fd
This list is particularly good.
Proper regulation of end-to-end services would encourage the creation of filtering and other tools which would tend to benefit users rather than the rent-seeking of the corporations which own the pipes.
-
But there's another side to this playlistification of feeds: playlists and other recommendation algorithms are chokepoints: they are a way to durably interpose a company between a creator and their audience. Where you have chokepoints, you get chokepoint capitalism: https://chokepointcapitalism.com/
Massive social media networks use algorithmic feeds and other programmatic and centralizing methods to interpose themselves between people trying to reach each other, often in ways which allow them to extract additional value from the participants. They become necessary platforms which create chokepoints for flows of information, which Cory Doctorow and Rebecca Giblin call "chokepoint capitalism".
-
- Jul 2022
-
herman.bearblog.dev
-
https://herman.bearblog.dev/a-better-ranking-algorithm/
-
I removed the in-feed upvote button, making posts only up-votable at the bottom of the post itself. This increases the vote quality (if not the quantity).
Putting upvoting at the bottom of a post is a better indicator of quality than at the top, where the post is less likely to have been read and a vote is more of a knee-jerk reaction, particularly for the punch-the-monkey crowd.
Similar to how I use read, listen, and watch posts.
-
use the number of views as a factor in determining the score of a post. Essentially using the “upvote rate” instead of pure upvote number.
Score = (Upvotes / Views) / (Time in hours + 4) ^ Gravity
Or removing time from consideration entirely, decaying posts the more they are read:
Score = Upvotes / Views ^ Gravity
These, however, have a bias against longer posts (although this is the case with all these algorithms in general, but exacerbated here). Since longer posts (by virtue of them being long) may take time to read or are saved to read later, and may get a lot of views initially which actually degrade the score despite people finding the content valuable. It’s an interesting idea though and could be used for platforms where all posts are of a similar digestibility.
On platforms where the content takes roughly the same amount of time to consume, one can factor in the number of views versus upvotes as a quality indicator. One needs to be more careful with longer-form content though, as length will tend to decrease readership and clicks and potentially push people to "bookmark" things to read later. How should one account for these effects?
What are the variables in this overall problem?
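A minimal Python sketch of the two view-normalized scores above; the function names, the zero-view guard, and the 1.7 gravity default (borrowed from the Hacker News outline below) are my assumptions, not the post's:

```python
def upvote_rate_score(upvotes: int, views: int, age_hours: float,
                      gravity: float = 1.7) -> float:
    """Upvote rate ("upvotes per view") decayed by the age of the post."""
    rate = upvotes / max(views, 1)  # guard against zero views (my addition)
    return rate / (age_hours + 4) ** gravity


def view_decay_score(upvotes: int, views: int, gravity: float = 1.7) -> float:
    """Time-free variant: the score decays the more the post is viewed."""
    return upvotes / max(views, 1) ** gravity
```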
-
I dislike the separation of Trending and Newest. This is one of the main reasons for false negatives as new articles don’t receive many (if any) views. I’m thinking about randomly interspersing new articles in the trending feed to give them the potential of getting their first few votes. This (as ever) has an effect on quality, so has to be done with care.
Introducing some randomness for new unranked articles is an interesting and likely useful tactic.
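A sketch of how that interspersion might work, assuming a fixed slot interval (the interval, like the function name, is my own invention):

```python
import random


def intersperse_new(trending: list, fresh: list, every: int = 5) -> list:
    """Slot a randomly chosen unranked article after every nth trending
    post, giving new articles a chance to collect their first votes."""
    pool = fresh[:]
    random.shuffle(pool)
    feed = []
    for i, post in enumerate(trending, start=1):
        feed.append(post)
        if i % every == 0 and pool:
            feed.append(pool.pop())
    return feed
```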
-
The Hacker News algorithm
This algorithm is fairly straightforward (although there’s some magic going on under the surface when it comes to moderation, shadow banning, and post pinning). This is slightly simplified, but in a nutshell:
Score = Upvotes / Time since submission ^ Gravity
Where Gravity = 1.7
As upvotes accumulate the score rises but is counterbalanced by the time since submission. Gravity makes it exponentially more difficult to rank as time goes by.
short outline of the Hacker News algorithm.
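The same outline in code; the floor on age is my own guard against division by zero for brand-new posts:

```python
def hacker_news_score(upvotes: int, age_hours: float,
                      gravity: float = 1.7) -> float:
    """Score rises with upvotes but is counterbalanced exponentially
    by the time since submission."""
    return upvotes / max(age_hours, 0.1) ** gravity


# e.g. a post with 100 upvotes at 6 hours old: 100 / 6 ** 1.7 ≈ 4.8
```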
-
Once a post goes viral on Twitter, Hacker News, Reddit, or anywhere else off-platform, it has the potential to form a “Katamari ball” where it gets upvotes because it has upvotes (which means it gets more upvotes, because it has more upvotes, which means…well…you get it). This is also known as "the network effect", but I feel a Katamari ball better illustrates it.
Network effects can describe a broad variety of phenomena. Is a Katamari ball a better descriptor of this specific phenomenon?
How does one prioritize the richer quality Lindy library material that may be even more beneficial than things which are simply new?
-
The most common way is to log the number of upvotes (or likes/downvotes/angry-faces/retweets/poop-emojis/etc) and algorithmically determine the quality of a post by consensus.
When thinking about algorithmic feeds, one probably ought not to include simple likes/favorites/bookmarks, as they're such low-hanging fruit. Better indicators are interactions which take time, effort, and work to post.
Using various forms of Webmention as indicators could be interesting, as one can parse responses and make an actual comment worth more than a dozen "likes", for example.
Curating people (who respond) as well as curating the responses themselves could be useful.
Time-windowing the curation of people and curators could be a useful metric.
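A sketch of what such effort-weighted scoring might look like; the interaction types and weights are purely illustrative assumptions:

```python
# Effortful responses (a written reply or webmention) count for more
# than one-click reactions; the specific numbers are placeholders.
INTERACTION_WEIGHTS = {
    "like": 1,
    "bookmark": 1,
    "repost": 2,
    "reply": 12,  # "an actual comment worth more than a dozen likes"
}


def engagement_score(counts: dict) -> int:
    """Sum interactions, weighting those that take time and work."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0) * n
               for kind, n in counts.items())


# engagement_score({"like": 30, "reply": 3}) -> 66
```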
Attempting to be "democratic" in these processes may often lead to the Harry and Mary Beercan effect and the gaming issues seen in spaces like Digg or Twitter, and can have dramatic consequences for the broader readership and community. Democracy in these spaces is more likely to get you cat videos and vitriol with a soupçon of listicles and clickbait.
-
- May 2022
-
www.thecut.com
-
This came in the context of weighing what she stood to gain and lose in leaving a staff job at BuzzFeed. She knew the worth of what editors, fact-checkers, designers, and other colleagues brought to a piece of writing. At the same time, she was tired of working around the “imperatives of social media sharing.” Clarity and concision are not metrics imposed by the Facebook algorithm, of course — but perhaps such concerns lose some of their urgency when readers have already pledged their support.
Continuing with the idea above about the shift of Sunday morning talk shows and the influence of Hard Copy, is social media exerting a negative influence on mainstream content and conversation as a result of its algorithmic gut-reaction pressure? How can we fight this effect?
-
-
thenewstack.io
-
“It was 2017, I would say, when Twitter started really cracking down on bots in a way that they hadn’t before — taking down a lot of bad bots, but also taking down a lot of good bots too. There was an appeals process [but] it was very laborious, and it just became very difficult to maintain stuff. And then they also changed all their API’s, which are the programmatic interface for how a bot talks to Twitter. So they changed those without really any warning, and everything broke.
Just like chilling actions by political actors, social media corporations can use changes in policy and APIs to stifle and chill speech online.
This doesn't mean that there aren't bad actors building bots to actively cause harm, but there is a class of potentially helpful and useful bots (tools) that can make a social space better or more interesting.
How does one regulate this sort of speech? Perhaps the answer is simply not to algorithmically amplify these bots and their speech over that of humans.
More and more I think that the answer is to make online social interactions more like in-person interactions. Too much of social media gives an even bigger bullhorn to the crazy preacher on the corner of Main Street who shouted at the crowds that simply ignored him. Social media has made it easier for us to shout such voices back down, and in doing so we only make them heard by more people. We need a negative feedback mechanism to dampen these effects the same way they would have been dampened in person.
-
He and his fellow bot creators had been asking themselves over the years, “what do we do when the platform [Twitter] becomes unfriendly for bots?”
There's some odd irony in this quote. Kazemi indicates that Twitter was unfriendly to bots, but he should be specific that it's unfriendly to non-corporately owned bots. One could argue that much of the interaction on Twitter is spurred by the primary bot on the service: the algorithmic feed, which spurs people to like, retweet, and interact with more content, thus keeping them on the platform longer.
-
- Apr 2022
-
winnielim.org
-
Since most of our feeds rely on either machine algorithms or human curation, there is very little control over what we actually want to see.
While algorithmic feeds and "artificial intelligences" might control large swaths of what we see in our passive acquisition modes, we can and certainly should spend more of our time in active search modes which don't employ these tools or methods.
How might we better blend our passive and active modes of search and discovery while still having and maintaining the value of serendipity in our workflows?
Consider the loss of library stacks in our research workflows. We've lost some of the serendipity of seeing the book titles on the shelf adjacent to the one we're looking for. What about the books just above and below it? How do we replicate that sort of serendipity in our digital world?
How do we help prevent shiny object syndrome? How can we stay on task rather than move on to the next pretty thing or topic presented to us by an algorithmic feed, so that we can accomplish the task we set out to do? Certainly bookmarking a thing or a topic for later follow-up can be useful so we don't go too far afield, but what other methods might we use? How can we optimize our random walks through life and a sea of information to tie disparate parts of everything together? Do we need to rely only on doing it as a broader species? Can smaller subgroups accomplish this if carefully planned, or is exploring the problem space only possible at mass scale? And even then we may be undershooting the goal by an order of magnitude (or ten).
-
We have to endlessly scroll and parse a ton of images and headlines before we can find something interesting to read.
The randomness of interesting tidbits in a social media scroll helps to put us in a state of flow. We get small hits of dopamine from finding interesting posts that fill in the gaps between the boring bits, and suddenly find we've lost the day. As a result, an endless scroll of varying quality can make one feel productive when in fact a reasonably large proportion of one's time is spent on useless and uninteresting content.
This effect may be amplified when the scroll is curated algorithmically and the dopamine hits become more frequent. Potentially worse, the depth of insight found in most social feeds is very shallow and rarely deep. One is almost never invited to delve further to find new insights.
How might a social media stream of content be leveraged to help people read more interesting and complex content? Could putting Jacques Derrida's texts into a social media-like framing create this? Then one could reply to the text by sentence or paragraph with their own notes. This is similar to the user interface of Hypothes.is, but Hypothes.is has a more traditional reading interface compared to the social media space. What if one interspersed multiple authors in short threads? What other methods might work to "trick" the human mind into having more fun and finding flow in their deeper and more engaged reading states?
Link this to the idea of fun in Sönke Ahrens' How to Take Smart Notes.
-
…and they are typically sorted:
- chronologically: newest items are displayed first
- through data: most popular, trending, votes
- algorithmically: the system determines what you see through your consumption patterns and what it wants you to see
- by curation: humans determine what you see
- by taxonomy: content is displayed within buckets of categories, like Wikipedia
Most media entities employ a combination of the above.
For reading richer, denser texts what is the best way of ordering and sorting it?
Algorithmically sorting with a pseudo-chronological sort is the best method for social media content, but what is the most efficient method for journal articles? for books?
Tags
- focus
- Jacques Derrida
- discovery
- curation
- Hypothes.is
- sort orders
- insight
- deep reading
- taxonomies
- flow
- active acquisition
- research workflows
- passive acquisition
- artificial intelligence
- control
- problem spaces
- controlled sloppiness
- active reading
- digital social reading
- libraries
- social media
- chronological order
- serendipity
- sorting
- algorithmic feeds
- Mihaly Csikszentmihalyi
- Sönke Ahrens
- user interfaces
- filtering
- library stacks
- social annotation
-
- Mar 2022
-
-
First is that it actually lowers paid acquisition costs. It lowers them because the Facebook Ads algorithm rewards engaging advertisements with lower CPMs and lots of distribution. Facebook does this because engaging advertisements are just like engaging posts: they keep people on Facebook.
Engaging advertisements on Facebook benefit from lower acquisition costs because the Facebook algorithm rewards more interesting advertisements with lower CPMs and wider distribution. This is done, as with all things driven by surveillance capitalism, to keep eyeballs on Facebook.
This isn't too dissimilar to large cable networks that provide free high quality advertising to mass manufacturers in late night slots. The network generally can't sell all of their advertising inventory, particularly in low viewing hours, so they'll offer free or incredibly cheap commercial rates to their bigger buyers (like Coca-Cola or McDonalds, for example) to fill space and have more professional looking advertisements between the low quality advertisements from local mom and pop stores and the "as seen on TV" spots. These higher quality commercials help keep the audience engaged and prevent viewers from changing the channel.
-
-
www.cs.umd.edu
-
Posting a new algorithm, poem, or video on the web makes it available, but unless appropriate recipients notice it, the originator has little chance to influence them.
An early statement of the problem of distribution which has been widely solved by many social media algorithmic feeds. Sadly pushing ideas to people interested in them (or not) doesn't seem to have improved humanity. Perhaps too much of the problem space with respect to the idea of "influence" has been devoted to marketing and commerce or to fringe misinformation spaces? How might we create more value to the "middle" of the populace while minimizing misinformation and polarization?
-
The current mass media such as television, books, and magazines are one-directional, and are produced by a centralized process. This can be positive, since respected editors can filter material to ensure consistency and high quality, but more widely accessible narrowcasting to specific audiences could enable livelier decentralized discussions. Democratic processes for presenting opposing views, caucusing within factions, and finding satisfactory compromises are productive for legislative, commercial, and scholarly pursuits.
Social media has to some extent democratized access to media; however, there are not nearly enough processes for creating the negative feedback needed to dampen ideas which shouldn't or wouldn't have gained footholds in a mass society.
We need more friction in some portions of the social media space to prevent the dissemination of un-useful, negative, and destructive ideas swamping out the positive ones. The accelerative force that algorithmic feeds give to the most extreme ideas in particular is one of the most caustic developments of the last quarter century.
-
- Feb 2022
-
dancohen.org
-
https://dancohen.org/2019/07/23/engagement-is-the-enemy-of-serendipity/
Dan Cohen talks about a design change in the New York Times app that actively discourages exploration and discovery by serendipity.
This is similar to pulling up digital copies of the books you're looking for instead of going to the library, tracking the book down on the shelf, and in the process seeing and experiencing the nearby books on the shelf, or even the book that catches your eye from across the aisle, one that wasn't in your sphere of search or interest, but you pick it up anyway.
How can we bring this sort of design back to digital experiences?
It's not just the algorithmic feeds which are narrowing our interests and exposure, but the design of our digital spaces as well.
-
- Jan 2022
-
soatok.blog
-
https://soatok.blog/2022/01/12/dont-dunk-the-gunk/
Clever way of making the internet a nicer place.
-
- Oct 2021
-
www.theatlantic.com
-
Adrienne LaFrance outlines the reasons we need to either abandon Facebook or push for far more extreme regulation of it and how it operates.
While she outlines the ills, she doesn't make a specific plea about how to solve the problem. There's definitely a raging fire in the theater, but no one seems to know what to do about it. We're just sitting here watching the structure burn down around us. We need clearer plans for what must be done to solve this problem.
-
An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.
Meaningful social interactions don't need algorithmic help.
-
At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
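As a worked illustration of that weighting (the excerpt doesn't give the multiplier, so the 5x figure below is an assumption, not a reported fact):

```python
# Assumed weights: every reaction other than "like" counts five-fold.
REACTION_WEIGHTS = {"like": 1, "love": 5, "haha": 5, "wow": 5,
                    "sad": 5, "angry": 5}


def feed_rank_signal(reactions: dict) -> int:
    """Under these weights, a post drawing 'angry' reactions outranks
    an equally 'liked' one, giving anger-inducing content an edge."""
    return sum(REACTION_WEIGHTS.get(r, 0) * n for r, n in reactions.items())


# feed_rank_signal({"like": 100}) -> 100
# feed_rank_signal({"angry": 100}) -> 500
```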
-
Facebook has dismissed the concerns of its employees in manifold ways. One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me.
- Share opinions
- Opinions viewed as "fact"
- "Facts" spread as news.
- Platform accelerates "news".
- Bad things happen
- Profit
-
- Sep 2021
-
www.youtube.com
-
Kevin Marks talks about the bridging of new people into one's in-group by Twitter's retweet functionality from a positive perspective.
He doesn't foresee the deleterious effects of engagement algorithms doing just the opposite: increasing the volume of noise as one's in-group hates on and interacts with "bad" content from the other direction. Some of these effects may also be bad from a slow brainwashing perspective if not protected against.
-
- Aug 2021
-
-
Fukuyama's answer is no. Middleware providers will not see privately shared content from a user's friends. This is a good answer if our priority is privacy. It lets my cousin decide which companies to trust with her sensitive personal information. But it hobbles middleware as a tool for responding to her claims about vaccines. And it makes middleware providers far less competitive, since they will not be able to see much of the content we want them to curate.
Is it alright to let this sort of thing go on at the smaller-scale, personally shared level? I would suggest that the issue is not this small-scale conversation, which can happen linearly; we need to focus on the larger-scale amplification of misinformation by sources. Get rid of the algorithmic amplification of the fringe bits, which is polarizing and toxic. Only allow the amplification of more broadly accepted, fact-based, edited, and curated information.
-
Facebook deploys tens of thousands of people to moderate user content in dozens of languages. It relies on proprietary machine-learning and other automated tools, developed at enormous cost. We cannot expect [End Page 169] comparable investment from a diverse ecosystem of middleware providers. And while most providers presumably will not handle as much content as Facebook does, they will still need to respond swiftly to novel and unpredictable material from unexpected sources. Unless middleware services can do this, the value they provide will be limited, as will users' incentives to choose them over curation by the platforms themselves.
Does heavy curation even need to exist? If a social company were able to push a linear feed of content to people without the algorithmically forced engagement, then the smaller, fringe material wouldn't have the reach. The majority of the problem would be immediately solved with this single feature.
-
The First Amendment precludes lawmakers from forcing platforms to take down many kinds of dangerous user speech, including medical and political misinformation.
Compare social media with the newspaper business from this perspective.
People joined social media not knowing the end effects, but now don't have a choice of platform after the fact. Social platforms accelerate disinformation using algorithms.
Because there is choice amongst newspapers, people can easily move and if they'd subscribed to a racist fringe newspaper, they could easily end their subscription and go somewhere else. This is patently not the case for any social media. There's a high hidden personal cost for connectivity that isn't taken into account. The government needs to regulate this and not the speech portion.
Social media should be considered a common carrier and regulated as such. It was an easier and more logical process to force this in telephony, electricity, and other areas, as their costs of implementation were orders of magnitude higher. The data formats and storage for social media should be standardized (potentially even in three or more formats), and that standard should be what common carriage imposes. Would this properly skirt the First Amendment issues?
-
Fukuyama's work, which draws on both competition analysis and an assessment of threats to democracy, joins a growing body of proposals that also includes Mike Masnick's "protocols not platforms," Cory Doctorow's "adversarial interoperability," my own "Magic APIs," and Twitter CEO Jack Dorsey's "algorithmic choice."
Nice overview of work in the space of fixing monopoly in social media at the moment. I hadn't heard about Fukuyama's or Daphne Keller's versions before.
I'm not sure I think Dorsey's is actually a thing. I suspect it is actually vaporware from the word go.
IndieWeb has been working slowly at the problem as well.
Tags
- logarithmic amplification
- First Amendment
- algorithmic choice
- middleware
- democracy
- economics
- common carrier
- Mike Masnick
- platforms
- free speech
- journalism
- monopolies
- Magic APIs
- social media machine guns
- social media
- algorithmic feeds
- Francis Fukuyama
- adversarial interoperability
- algorithmic amplification
-
- Jul 2021
-
delong.typepad.com
-
One of the reasons for this situation is that the very media we have mentioned are so designed as to make thinking seem unnecessary (though this is only an appearance). The packaging of intellectual positions and views is one of the most active enterprises of some of the best minds of our day. The viewer of television, the listener to radio, the reader of magazines, is presented with a whole complex of elements-all the way from ingenious rhetoric to carefully selected data and statistics-to make it easy for him to "make up his own mind" with the minimum of difficulty and effort. But the packaging is often done so effectively that the viewer, listener, or reader does not make up his own mind at all. Instead, he inserts a packaged opinion into his mind, somewhat like inserting a cassette into a cassette player. He then pushes a button and "plays back" the opinion whenever it seems appropriate to do so. He has performed acceptably without having had to think.
This is an incredibly important fact. It's gone even further with additional advances in advertising and social media, not to mention the slow-drip mental programming provided by algorithmic feeds, which tend to polarize their readers.
People simply aren't actively reading their content, comparing, contrasting, or even fact-checking it.
I suspect that this book could use an additional overhaul to cover many of these aspects.
-
- Apr 2021
-
plasticbag.org
-
Others are asking questions about the politics of weblogs – if it’s a democratic medium, they ask, why are there so many inequalities in traffic and linkage?
This still exists in the social media space, but has gotten even worse with the rise of algorithmic feeds.
-
- Mar 2021
-
www.theatlantic.com
-
One person writing a tweet would still qualify for free-speech protections—but a million bot accounts pretending to be real people and distorting debate in the public square would not.
Do bots have or deserve the right to not only free speech, but free reach?
-
-
journal.disruptivemedia.org.uk
-
In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”
Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose (stated) mission is to connect people, while its actual mission is to make money by seemingly radicalizing people to the extremes of our political spectrum.
This highlights the fact that while many may view content moderation on platforms like Facebook, including the deplatforming of people such as Donald J. Trump or Alex Jones, as removing voices in an anti-democratic move, in fact it is not. Because of Facebook's active move to accelerate extreme ideas by pushing them algorithmically, the platform is actively being un-democratic. Democratic behavior on Facebook would look like one voice, one account, and reach only commensurate with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.
-
-
graphics.wsj.com
-
A Wall Street Journal experiment to see a liberal version and a conservative version of Facebook side by side.
-
-
www.technologyreview.com
-
-
In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded. Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.
-
“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.
-
-
themarkup.org
-
This is a fascinating view into algorithmic feeds.
-
-
themarkup.org
-
Reminiscent of the Wall Street Journal's Red Feed/Blue Feed: https://graphics.wsj.com/blue-feed-red-feed/
-
- Dec 2020
-
www.theatlantic.com
-
The few people who are willing to defend these sites unconditionally do so from a position of free-speech absolutism. That argument is worthy of consideration. But there’s something architectural about the site that merits attention, too: There are no algorithms on 8kun, only a community of users who post what they want. People use 8kun to publish abhorrent ideas, but at least the community isn’t pretending to be something it’s not. The biggest social platforms claim to be similarly neutral and pro–free speech when in fact no two people see the same feed. Algorithmically tweaked environments feed on user data and manipulate user experience, and not ultimately for the purpose of serving the user. Evidence of real-world violence can be easily traced back to both Facebook and 8kun. But 8kun doesn’t manipulate its users or the informational environment they’re in. Both sites are harmful. But Facebook might actually be worse for humanity.
-
Every time you click a reaction button on Facebook, an algorithm records it, and sharpens its portrait of who you are.
It might be argued that the design is not creating a portrait of who you are, but of who Facebook wants you to become. The real question is: Who does Facebook want you to be, and are you comfortable with being that?
-
- Oct 2020
-
knightcolumbia.org
-
Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.[3] Others have complained about how they’ve been used to spread disinformation and propaganda.[4] Some have charged that the platforms are just too powerful.[5] Others have called attention to inappropriate account and content takedowns,[6] while some have argued that the attempts to moderate discriminate against certain political viewpoints.
[3] A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they've kicked this firm off their site, but I think they've got a lot of explaining to do.”).
[4] Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).
[5] Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.)) (“Curious why I think FB has too much power? Let's start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn't dominated by a single censor. #BreakUpBigTech.”).
[6] Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.)) (“What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).
Most of these problems fall under the subheading of what happens when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it's inflammatory the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from engagement for advertising purposes, but the community and the commons are irreparably harmed.
If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the "masses" (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning's coffee or today's lunch marginalized.
To analogize it, we've provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.
If all ideas and content were given the same linear, non-promoted treatment, we would all be much better off, and we wouldn't need as much human curation.
-
-
daily.jstor.org
-
I literally couldn’t remember when I’d last looked at my RSS subscriptions. On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucial, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation, and manipulation, and giving up my power of self-determination.
-
-
people.well.com (Metacrap)
-
Schemas aren't neutral
This section highlights why relying on algorithmic feeds in social media platforms like Facebook and Twitter can be toxic. Your feed is full of what they think you'll like and click on instead of giving you the choice.
-
-
www.roughtype.com
-
Third, content collapse puts all types of information into direct competition. The various producers and providers of content, from journalists to influencers to politicians to propagandists, all need to tailor their content and its presentation to the algorithms that determine what people see. The algorithms don’t make formal or qualitative distinctions; they judge everything by the same criteria. And those criteria tend to promote oversimplification, emotionalism, tendentiousness, tribalism — the qualities that make a piece of information stand out, at least momentarily, from the screen’s blur.
This is a terrifically painful and harmful thing. How can we redesign a system that doesn't function this way?
-
- Feb 2020
-
www.makeuseof.com
-
The biggest drawback of algorithmic feeds is that you might be looking at irrelevant content. When you see something on your timeline and want to comment, you will have to check the timestamp to see if your comment is still relevant or not.
-
- Dec 2019
-
collect.readwriterespond.com
-
Alexandra Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.
Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!
Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.
-
- Apr 2019
-
www.eugenewei.com
-
As people start following more and more accounts on a social network, they reach a point where the number of candidate stories exceeds their capacity to see them all. Even before that point, the sheer signal-to-noise ratio may decline to the point that it affects engagement. Almost any network that hits this inflection point turns to the same solution: an algorithmic feed.
-
- Feb 2019
-
larrysanger.org
-
The social media browser plugins. Here’s the killer feature. Create at least one (could be many competing) browser plugins that enable you to (a) select feeds and then (b) display them alongside a user’s Twitter, Facebook, etc., feeds. (This could be an adaptation of Greasemonkey.) In other words, once this feature were available, you could tell your friends: “I’m not on Twitter. But if you want to see my Tweet-like posts appear in your Twitter feed, then simply install this plugin and input my feed address. You’ll see my posts pop up just as if they were on Twitter. But they’re not! And we can do this because you can control how any website appears to you from your own browser. It’s totally legal and it’s actually a really good idea.” In this way, while you might never look at Twitter or Facebook, you can stay in contact with your friends who are still there—but on your own terms.
This is an intriguing idea. In particular, it would be cool if I could input my OPML file of people I'm following and have a plugin like this work with other social readers.
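A sketch of the feed-merging half of this idea (not Sanger's plugin itself), assuming the third-party feedparser package; the function names are mine, and the browser-overlay half is left out:

```python
import time
import xml.etree.ElementTree as ET

import feedparser  # third-party: pip install feedparser


def feeds_from_opml(path: str) -> list:
    """Pull feed URLs out of an OPML subscription list."""
    tree = ET.parse(path)
    return [o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")]


def merged_timeline(opml_path: str) -> list:
    """Merge all subscribed feeds' entries, newest first, into the kind
    of reverse-chronological timeline a plugin could display."""
    entries = []
    for url in feeds_from_opml(opml_path):
        entries.extend(feedparser.parse(url).entries)
    epoch = time.gmtime(0)  # entries without dates sort last
    return sorted(entries, key=lambda e: e.get("published_parsed") or epoch,
                  reverse=True)
```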
-
- Nov 2018
-
www.nytimes.com
-
While the NTK Network does not have a large audience of its own, its content is frequently picked up by popular conservative outlets, including Breitbart.
One wonders if they're seeding it and spreading it falsely on Facebook? Why not use the problem as a feature?!
-
- Sep 2018
-
www.kickscondor.com
-
My relationship is a lot healthier with blogs that I visit when I please. This is another criticism I have with RSS as well—I don’t want my favorite music blog sending me updates every day, always in my face. I just want to go there when I am ready to listen to something new. (I also hope readers to my blog just stop by when they feel like obsessing over the Web with me.)
Amen!
-