- Mar 2024
-
thebaffler.com
-
Ongweso Jr., Edward. “The Miseducation of Kara Swisher: Soul-Searching with the Tech ‘Journalist.’” The Baffler, March 29, 2024. https://thebaffler.com/latest/the-miseducation-of-kara-swisher-ongweso.
ᔥ[[Pete Brown]] in Exploding Comma
Tags
- technology and the military
- surveillance capitalism
- toxic technology
- Satya Nadella
- Sheryl Sandberg
- techno-utopianism
- access journalism
- diversity equity and inclusion
- read
- Travis Kalanick (Uber)
- acceleration
- bad technology
- attention economy
- Kara Swisher
- social media machine guns
- Tony West
- Microsoft
- Sundar Pichai
-
- Sep 2023
-
www.wired.com
-
DiResta, Renee. “Free Speech Is Not the Same As Free Reach.” Wired, August 30, 2018. https://www.wired.com/story/free-speech-is-not-the-same-as-free-reach/.
-
- Aug 2023
-
Local file
-
T9 (text prediction) : generative AI :: handgun : machine gun
-
- Feb 2023
-
www.washingtonpost.com
-
Could it be the shift from person-to-person communication (known in both directions) to massive broadcast that is driving issues with content moderation? When it's person to person, one can simply choose not to interact and put the other person beyond their individual pale. This sort of shunning is much harder to do with larger mass publics at scale in broadcast mode.
How can bringing content moderation back down to the neighborhood scale help in the broadcast model?
-
- Nov 2022
-
www.theatlantic.com
-
As part of the Election Integrity Partnership, my team at the Stanford Internet Observatory studies online rumors, and how they spread across the internet in real time.
-
- Nov 2021
-
www.theatlantic.com
-
In America, of course, we don’t have that kind of state coercion. There are currently no laws that shape what academics or journalists can say; there is no government censor, no ruling-party censor. But fear of the internet mob, the office mob, or the peer-group mob is producing some similar outcomes. How many American manuscripts now remain in desk drawers—or unwritten altogether—because their authors fear a similarly arbitrary judgment? How much intellectual life is now stifled because of fear of what a poorly worded comment would look like if taken out of context and spread on Twitter?
Fear of cancel culture and social repercussions prevents people from speaking and communicating as they might otherwise.
Compare this with the right to reach, particularly for those who lack editors or filtering, and who haven't built a platform or learned how to use it responsibly.
-
- Aug 2021
-
-
Fukuyama's answer is no. Middleware providers will not see privately shared content from a user's friends. This is a good answer if our priority is privacy. It lets my cousin decide which companies to trust with her sensitive personal information. But it hobbles middleware as a tool for responding to her claims about vaccines. And it makes middleware providers far less competitive, since they will not be able to see much of the content we want them to curate.
Is it alright to let this sort of thing go on at the smaller, personally shared scale? I would suggest that the issue is not this small-scale conversation, which can happen linearly; we need to focus instead on the larger-scale amplification of misinformation by sources. Get rid of the algorithmic amplification of the fringe bits, which is polarizing and toxic. Only allow the amplification of the more broadly accepted, fact-based, edited, and curated information.
-
Facebook deploys tens of thousands of people to moderate user content in dozens of languages. It relies on proprietary machine-learning and other automated tools, developed at enormous cost. We cannot expect comparable investment from a diverse ecosystem of middleware providers. And while most providers presumably will not handle as much content as Facebook does, they will still need to respond swiftly to novel and unpredictable material from unexpected sources. Unless middleware services can do this, the value they provide will be limited, as will users' incentives to choose them over curation by the platforms themselves.
Does heavy curation even need to exist? If a social company were able to push a linear feed of content to people without the algorithmic forced engagement, then the smaller, fringe material wouldn't have the reach. The majority of the problem would be immediately solved with this single feature.
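The linear-feed idea in this note is simple enough to express in code. Below is a minimal sketch, with entirely hypothetical post fields and an invented engagement score (no platform's actual ranking is being reproduced), contrasting a chronological timeline with an engagement-ranked one:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int      # seconds since epoch (hypothetical)
    engagement: float   # likes + shares + comments (hypothetical score)

def linear_feed(posts):
    """Chronological order: reach depends only on who you follow."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts):
    """Engagement-ranked order: inflammatory outliers float to the top."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

posts = [
    Post("friend_a", 100, 3.0),
    Post("fringe_account", 50, 900.0),  # older post, but inflammatory
    Post("friend_b", 200, 5.0),
]

# In the linear feed the fringe post ages out at the bottom;
# in the engagement feed it leads, regardless of when it was posted.
print([p.author for p in linear_feed(posts)])
print([p.author for p in engagement_feed(posts)])
```

The point of the contrast: the only difference between the two feeds is the sort key, which is why the note can claim a single feature change would do most of the work.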
-
- Oct 2020
-
knightcolumbia.org
-
Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.[3] Others have complained about how they've been used to spread disinformation and propaganda.[4] Some have charged that the platforms are just too powerful.[5] Others have called attention to inappropriate account and content takedowns,[6] while some have argued that the attempts to moderate discriminate against certain political viewpoints.
- [3] A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they've kicked this firm off their site, but I think they've got a lot of explaining to do.”).
- [4] Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).
- [5] Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.): “Curious why I think FB has too much power? Let's start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn't dominated by a single censor. #BreakUpBigTech.”).
- [6] Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.): “What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).
Most of these problems fall under the subheading of what results when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it's inflammatory, the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from engagement for advertising purposes, but the community and the commons are irreparably harmed.
If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the "masses" (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning's coffee or today's lunch marginalized.
To analogize it, we've provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.
If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn't need as much human curation.
-
It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.
But platforms are making huge decisions about who is allowed to speak. While they're generally allowing everyone to have a voice, they're also very subtly privileging many voices over others. While they're providing space for even the least among us to have a voice, they're making far too many of the worst and most powerful among us logarithmically louder.
It's not broadly obvious, but their algorithms are plainly handing massive megaphones to people whom society broadly thinks shouldn't have a voice at all. These megaphones take the form of algorithmic amplification of fringe ideas, which accelerates them into the broader public discourse so that the platforms can get more engagement, and therefore more eyeballs, for their advertising and surveillance-capitalism ends.
The issue we ought to be looking at is the dynamic range between people and the messages they're able to send through social platforms.
We could also analogize this to the voting situation in the United States. When we disadvantage the poor, disabled, differently abled, or marginalized people from voting while simultaneously giving the uber-rich outsized influence because of what they're able to buy, we're imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make their harms more obvious.
If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I'd have to actively search that content out for it to cause me that sort of harm.
-
-
buzzmachine.com
-
As an American and a staunch defender of the First Amendment, I’m allergic to the notion of forbidden speech. But if government is going to forbid it, it damned well better clearly define what is forbidden or else the penumbra of prohibition will cast a shadow and chill on much more speech.
Perhaps it's not what people are saying so much as platforms are accelerating it algorithmically? It's one thing for someone to foment sedition, praise Hitler, or yell their religious screed on the public street corner. The problem comes when powerful interests in the form of governments, corporations, or others provide them with megaphones and tacitly force audiences to listen to it.
When Facebook or YouTube optimize for clicks keyed on social and psychological constructs using fringe content, we're essentially saying that machines, bots, and extreme fringe elements are not only people, but that they have free speech rights and can be prioritized with the reach and exposure of major national newspapers and national television in the media model of the 1980s.
I highly suspect that if real people's social media reach were linear and unaccelerated by algorithms we wouldn't be in the morass we're generally seeing on many platforms.
-
Many of the book’s essayists defend freedom of expression over freedom from obscenity. Says Rabbi Arthur Lelyveld (father of Joseph, who would become executive editor of The New York Times): “Freedom of expression, if it is to be meaningful at all, must include freedom for ‘that which we loathe,’ for it is obvious that it is no great virtue and presents no great difficulty for one to accord freedom to what we approve or to that to which we are indifferent.” I hear too few voices today defending speech of which they disapprove.
I might take issue with this statement and possibly a piece of Jarvis's argument here. I agree that it's moral panic to claim there could be such a thing as "too much speech," because humans have a hard limit on how much they can individually consume.
The issue I see is that while anyone can say almost anything, the problem comes when a handful of monopolistic players like Facebook or YouTube can use algorithms to programmatically entice people to click on and consume fringe content in mass quantities, which subtly but assuredly nudges the populace and electorate in an unnatural direction. Most of the history of human society and interaction has tended toward a centralizing consensus within which we can manage to cohere. The large-scale effects of algorithm-based companies putting a heavy hand on the scales are sure to create unintended consequences, and they're able to do it at scales that the Johnson and Nixon administrations could only wish they'd had access to.
If we use the evolution of weaponry as an analogy, I might suggest we've just passed the border from single-shot handguns into the era of machine guns. What is society to do when the next evolution ushers in the era of social media atomic weapons?
-
- Jul 2019
-
www.buzzfeednews.com
-
Another solution might be to limit the number of times a tweet can be retweeted.
This isn't too dissimilar to an idea I've been mulling over and which Robin Sloan wrote about on the same day this story was released: https://platforms.fyi/
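The retweet-cap idea reduces to a single gate in the share path. A hypothetical sketch (the threshold and function name are invented for illustration; neither the article nor platforms.fyi proposes a specific number):

```python
MAX_RETWEETS = 1000  # hypothetical cap; not a figure from the article

def can_retweet(current_retweet_count: int) -> bool:
    """Refuse further amplification once a tweet hits the cap.

    Anyone can still link to or quote the tweet; only the one-click
    broadcast mechanism saturates, which bounds the feedback loop
    without removing or censoring the underlying speech.
    """
    return current_retweet_count < MAX_RETWEETS

print(can_retweet(999))   # True: still under the cap
print(can_retweet(1000))  # False: amplification saturated
```

The design choice worth noting: this limits reach rather than speech, which maps directly onto DiResta's "free speech is not the same as free reach" framing above.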
-