- Nov 2024
-
-
The paradigm of AI – interestingly characterised by the popular AI critic Cathy O’Neil in Weapons of Math Destruction (2016) as ‘project[ing] the past into the future’ – simply doesn’t work for fields that change or evolve.
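O'Neil's line is easy to make concrete. A toy sketch (entirely synthetic data, my own illustration, not from the book): a classifier fit to historical data keeps projecting the past and quietly degrades once the field it models drifts.

```python
# Toy illustration of "projecting the past into the future" (synthetic data).
import random

random.seed(0)

def sample(drifted: bool):
    """One (feature, label) pair; the true cutoff moves when the field evolves."""
    x = random.random()
    cutoff = 0.8 if drifted else 0.5
    return x, int(x > cutoff)

past = [sample(drifted=False) for _ in range(10_000)]
# "Training": estimate the cutoff as the smallest x that was labeled positive.
learned_cutoff = min(x for x, y in past if y == 1)

def accuracy(data):
    return sum((x > learned_cutoff) == bool(y) for x, y in data) / len(data)

print(f"on the past:   {accuracy([sample(False) for _ in range(10_000)]):.2f}")  # ~1.00
print(f"after a drift: {accuracy([sample(True) for _ in range(10_000)]):.2f}")   # ~0.70
```

The model isn't "wrong" about the past; it simply has no way to notice that the field changed underneath it.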
-
- Mar 2024
-
thebaffler.com
-
We need a better catch-all term for the ills perpetrated on humanity and society by technology companies' extractive practices and general blindness to their own effects while they become rich. It should have a terrifically pejorative tone.
Something which subsumes the crazy bound up in some of the following:

- social media machine guns
- toxic technology
- mass produced toxicity
- attention economy
- bad technology
- surveillance capitalism
- technology and the military
- weapons of math destruction

It should be the polar opposite of:

- techno-utopianism
-
- Jan 2024
-
Local file
-
Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.
Is this the coinage of the phrase "knowledge-enabled mass destruction"?
-
- Dec 2023
-
www.youtube.com
-
https://www.youtube.com/watch?v=7xRXYJ355Tg The AI Bias Before Christmas by Casey Fiesler
-
- Feb 2023
-
wordcraft-writers-workshop.appspot.com
-
Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.
Examples of artificial intelligence pushing writers toward the pre-existing biases in its training data sets.
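One plausible mechanism (my gloss, not the workshop's): models trained to maximize likelihood favor the most frequent continuations, so prompts about rare scenarios get pulled toward whatever trope dominated the training data. A toy greedy decoder over invented counts:

```python
# Toy sketch (invented counts): a decoder that always picks the most probable
# continuation reproduces whatever dominated its training data, i.e. the cliché.
training_counts = {
    "the hero": {"drew a sword": 900, "unrolled a map": 30, "laid a brick": 10},
}

def greedy_continue(prompt: str) -> str:
    options = training_counts[prompt]
    return max(options, key=options.get)  # highest count wins: the trope

print("the hero", greedy_continue("the hero"))  # "the hero drew a sword"
```

Sampling with some randomness softens this, but the mass of the distribution still sits on the well-worn continuations.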
-
- May 2022
-
branded.substack.com
-
We don’t know how many media outlets have been run out of existence because of brand safety technology, nor how many media outlets will never be able to monetize critical news coverage because the issues important to their communities are marked as “unsafe.”
-
-
-
With Alphabet Inc.’s Google, and Facebook Inc. and its WhatsApp messaging service used by hundreds of millions of Indians, India is examining methods China has used to protect domestic startups and take control of citizens’ data.
Governments owning citizens' data directly?? Why not have the government empower citizens to own their own data?
-
- Mar 2022
-
www.cs.umd.edu
-
The current mass media such as television, books, and magazines are one-directional, and are produced by a centralized process. This can be positive, since respected editors can filter material to ensure consistency and high quality, but more widely accessible narrowcasting to specific audiences could enable livelier decentralized discussions. Democratic processes for presenting opposing views, caucusing within factions, and finding satisfactory compromises are productive for legislative, commercial, and scholarly pursuits.
Social media has to some extent democratized access to media; however, there are not nearly enough processes for creating the negative feedback needed to dampen ideas which shouldn't or wouldn't have gained footholds in a mass society.
We need more friction in some portions of the social media space to keep un-useful, negative, and destructive ideas from swamping out the positive ones (a toy sketch of such a dampening term follows below). The accelerative force of algorithmic feeds for the most extreme ideas in particular has been one of the most caustic developments of the last quarter century.
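To make "friction" concrete, a minimal sketch (my own toy scoring function, not any platform's actual algorithm) of a dampening term that counteracts pure engagement ranking:

```python
# Hypothetical ranking scores (illustrative only, not any platform's formula):
# pure engagement ranking rewards outrage; a "friction" term penalizes
# abnormally fast spread so extreme posts stop swamping the feed.
from dataclasses import dataclass

@dataclass
class Post:
    label: str
    engagement: float   # likes/shares per hour
    spread_rate: float  # how fast it's jumping between communities

def engagement_score(post: Post) -> float:
    return post.engagement

def dampened_score(post: Post, friction: float = 2.0) -> float:
    # Negative feedback: the faster something spreads, the harder we slow it.
    return post.engagement / (1.0 + friction * post.spread_rate)

posts = [Post("calm", 50, 0.1), Post("outrage", 80, 3.0)]
for score in (engagement_score, dampened_score):
    top = max(posts, key=score)
    print(f"{score.__name__} ranks first: {top.label}")
# engagement_score ranks first: outrage
# dampened_score ranks first: calm
```

The point isn't this particular formula; it's that a feed can be built with negative feedback in it at all.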
-
Since any powerful tool, such as a genex, can be used for destructive purposes, the cautions are discussed in Section 5.
Given the propensity of technologists in the late '90s and early '00s to wear rose-colored glasses with respect to their technologies, it's nice to see at least some nod to potential misuses and bad actors within the design of future tools.
-
- Feb 2022
-
twitter.com
-
Hmm...this page doesn’t exist. Try searching for something else.
Apparently Perusall was embarrassed about their pro-surveillance-capitalism stance, though perhaps not so much about their lack of kindness and care for the basic humanity of students.
Sad that they haven't explained or apologized for their misstep.
https://web.archive.org/web/20220222022208/https://twitter.com/perusall/status/1495945680002719751
Additional context: https://twitter.com/search?q=(%40perusall)%20until%3A2022-02-23%20since%3A2022-02-21&src=typed_query
-
- Oct 2021
-
www.theatlantic.com
-
Adrienne LaFrance outlines the reasons we need to either abandon Facebook or subject it, and how it operates, to far more stringent regulation.
While she outlines the ills, she doesn't make a specific plea about how to solve the problem. There's definitely a raging fire in the theater, but no one seems to know what to do about it; we're just sitting here watching the structure burn down around us. We need clearer plans for what must be done to solve this problem.
-
- Dec 2020
-
www.theatlantic.com
-
The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning. The rise of QAnon, for example, is one of the social web’s logical conclusions. That’s because Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences. Facebook is an agent of government propaganda, targeted harassment, terrorist recruitment, emotional manipulation, and genocide—a world-historic weapon that lives not underground, but in a Disneyland-inspired campus in Menlo Park, California.
The original goal, with a bit of moderation, may have worked. Regression to the mean forces it to a bad place, but when you algorithmically accelerate things toward our basest desires, you make it orders of magnitude worse.
This should be thought of as pure social capitalism. We need the moderating force of government regulation to dampen our worst instincts, much the way the United States' mixed economy works (or at least used to work, as it seems that raw capitalism is destroying the United States too).
-
- Oct 2020
-
theintercept.com
-
But these lookalike audiences aren’t just potential new customers — they can also be used to exclude unwanted customers in the future, creating a sort of ad targeting demographic blacklist.
-
How consumers would be expected to navigate this invisible, unofficial credit-scoring process, given that they’re never informed of its existence, remains an open question.
-
“It sure smells like the prescreening provisions of the FCRA,” Reidenberg told The Intercept. “From a functional point of view, what they’re doing is filtering Facebook users on creditworthiness criteria and potentially escaping the application of the FCRA.”
-
In an initial conversation with a Facebook spokesperson, they stated that the company does “not provide creditworthiness services, nor is that a feature of Actionable Insights.” When asked if Actionable Insights facilitates the targeting of ads on the basis of creditworthiness, the spokesperson replied, “No, there isn’t an instance where this is used.” It’s difficult to reconcile this claim with the fact that Facebook’s own promotional materials tout how Actionable Insights can enable a company to do exactly this. Asked about this apparent inconsistency between what Facebook tells advertising partners and what it told The Intercept, the company declined to discuss the matter on the record,
-
-
www.bloomberg.com
-
YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention.
Talk radio has used this formula for years; it almost had to, to drive any listenership at all as people left radio for television and other media.
I can still remember the different "loudness" level of talk between Bill O'Reilly's primetime show on Fox News and the louder level on his radio show.
-
A 2015 clip about vaccination from iHealthTube.com, a “natural health” YouTube channel, is one of the videos that now sports a small gray box.
Does this box appear on the video itself? Apparently not: screengrabs attached to the original annotation showed the small gray box on the YouTube watch page but nothing on the embedded version of the same video.
-
When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters. “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal.
Obviously they took the easy route. You may need to measure what matters, but reaching that goal by any means necessary, or via indefensible shortcuts, is the fallacy here. They could have kept that North Star; it's the means by which they reached it that were wrong.
This is another great example of tech ignoring basic ethics to get to a monetary goal. (Another good one is Mark Zuckerberg's "connecting people" mantra, when what it should be is "connecting people for good" or "creating positive connections.")
-
The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.
This is a great summation of the issue.
-
Somewhere along the last decade, he added, YouTube prioritized chasing profits over the safety of its users. “We may have been hemorrhaging money,” he said. “But at least dogs riding skateboards never killed anyone.”
-
-
dancohen.org
-
A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential.
In some sense these people can also be viewed as aggregators and curators of sorts. How can their work be aggregated and used to compete with the poor algorithms of social media?
-
-
techcrunch.com
-
Meta co-founder and CEO Sam Molyneux writes that “Going forward, our intent is not to profit from Meta’s data and capabilities; instead we aim to ensure they get to those who need them most, across sectors and as quickly as possible, for the benefit of the world.”
Odd statement from a company that was just acquired by the Facebook founder's CZI (Chan Zuckerberg Initiative).
-
-
knightcolumbia.org
-
Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.[3] Others have complained about how they’ve been used to spread disinformation and propaganda.[4] Some have charged that the platforms are just too powerful.[5] Others have called attention to inappropriate account and content takedowns,[6] while some have argued that the attempts to moderate discriminate against certain political viewpoints.

[3] A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they've kicked this firm off their site, but I think they've got a lot of explaining to do.”).

[4] Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).

[5] Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.)) (“Curious why I think FB has too much power? Let's start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn't dominated by a single censor. #BreakUpBigTech.”).

[6] Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.)) (“What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).
Most of these problems fall under the subheading of what happens when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it's inflammatory, the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from engagement for advertising purposes, but the community and the commons are irreparably harmed.
If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the "masses" (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning's coffee or today's lunch marginalized.
To analogize it, we've provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.
If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn't need nearly as much human curation. (A toy sketch of how the promotion loop compounds follows below.)
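A toy sketch of the feedback loop described above (invented dynamics, my own illustration): each round the feed promotes posts in proportion to engagement, and promotion buys more engagement, so an initially equal inflammatory post pulls away exponentially.

```python
# Toy positive-feedback loop (invented dynamics): promotion earns engagement,
# which earns more promotion. The inflammatory post compounds away.
inflammatory, mundane = 100.0, 100.0   # equal starting engagement
for _ in range(10):
    inflammatory *= 1.20  # the silo boosts it 20% per round because it "engages"
    mundane *= 1.02       # ordinary content gets ordinary reach
print(f"after 10 rounds: inflammatory={inflammatory:.0f}, mundane={mundane:.0f}")
# after 10 rounds: inflammatory=619, mundane=122
```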
-
It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.
But platforms are making huge decisions about who is allowed to speak. While they're generally allowing everyone to have a voice, they're also very subtly privileging many voices over others. While they're providing space for even the least among us to have a voice, they're making far too many of the worst and most powerful among us algorithmically louder.
It's not broadly obvious, but their algorithms are plainly handing massive megaphones to people who society broadly thinks shouldn't have a voice at all. These megaphones come in the algorithmic amplification of fringe ideas which accelerate them into the broader public discourse toward the aim of these platforms getting more engagement and therefore more eyeballs for their advertising and surveillance capitalism ends.
The issue we ought to be looking at is the dynamic range between people and the messages they're able to send through social platforms.
We could also analogize this to the voting situation in the United States. When we hinder the poor, disabled, differently abled, or marginalized from voting while simultaneously giving the uber-rich outsized influence because of what they're able to buy, we're creating the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make the harms more obvious.
If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I'd have to actively search that content out for it to cause me that sort of harm.
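The 5,000-follow example is easy to simulate (my own sketch; every number is invented):

```python
# Toy feed simulation (all numbers invented): among 5,000 followed accounts,
# one posts extreme, high-engagement content. A linear/chronological feed
# drowns it out; an engagement-ranked feed pushes it into the top slots.
import random

random.seed(1)

accounts = [(f"user{i}", random.uniform(1, 10)) for i in range(4_999)]
accounts.append(("extreme_account", 500.0))  # outrage drives engagement

def top_of_feed(rank_by_engagement: bool, slots: int = 10):
    posts = accounts[:]               # assume one fresh post per account
    if rank_by_engagement:
        posts.sort(key=lambda a: a[1], reverse=True)
    else:
        random.shuffle(posts)         # chronological order is effectively random here
    return [name for name, _ in posts[:slots]]

print("extreme in chronological top 10:", "extreme_account" in top_of_feed(False))  # almost never
print("extreme in ranked top 10:       ", "extreme_account" in top_of_feed(True))   # always
```

In the linear feed the odds of that one account landing in the top ten are about 10 in 5,000; in the ranked feed it is guaranteed, every time it posts.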
-
-
www.newyorker.com
-
A spokeswoman for Summit said in an e-mail, “We only use information for educational purposes. There are no exceptions to this.” She added, “Facebook plays no role in the Summit Learning Program and has no access to any student data.”
As if Facebook needed it. The statement papers over the possibility that Summit itself could do something just as nefarious with the data as Facebook might, or worse.
-
-
www.buzzfeed.com
-
Having low scores posted for all coworkers to see was “very embarrassing,” said Steph Buja, who recently left her job as a server at a Chili’s in Massachusetts. But that’s not the only way customers — perhaps inadvertently — use the tablets to humiliate waitstaff. One diner at Buja’s Chili’s used Ziosk to comment, “our waitress has small boobs.” According to other servers working in Ziosk environments, this isn’t a rare occurrence.
This is outright sexual harassment and appears to be actively creating a hostile work environment. I could easily see a class action against large chains and/or against the app maker themselves. Aggregating the data and using it in a smart way is fine, but I suspect no one in the chain is actively thinking about what they're doing; they're just selling an idea down the line.
The maker of the app should be doing a far better job of filtering this kind of crap out, aggregating the data in a smarter way, and providing better output, since the major chains they're selling it to don't seem capable of processing and disseminating what they're collecting.
-
Systems like Ziosk and Presto allow customers to channel frustrations that would otherwise end up on public platforms like Yelp — which can make or break a restaurant — into a closed system that the restaurant controls.
I like that they're trying to own and control their own data, but it seems like they've relied on a third-party company to do most of the thinking for them, and they're not actually using the data they're gathering in the proper ways. This is just painfully deplorable.
-
-
daily.jstor.org
-
I literally couldn’t remember when I’d last looked at my RSS subscriptions. On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucial, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation, and manipulation, and giving up my power of self-determination.
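As a concrete contrast, self-curation via RSS takes only a few lines (a sketch using the feedparser library; the subscription URLs are just examples):

```python
# A minimal self-curated "feed": you choose the sources; no ranking algorithm
# decides what gets your attention. Requires: pip install feedparser.
import feedparser

subscriptions = [
    "https://daily.jstor.org/feed/",   # example URLs; substitute your own
    "https://dancohen.org/feed/",
]

for url in subscriptions:
    feed = feedparser.parse(url)
    for entry in feed.entries[:3]:     # the newest few from each source
        print(f"{feed.feed.get('title', url)}: {entry.title}\n  {entry.link}")
```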
-
-
bookbook.pubpub.org
-
Safiya Noble, Algorithms of Oppression (New York: New York University Press, 2018). See also Mozilla’s 2019 Internet Health Report at https://internethealthreport.org/2019/lets-ask-more-of-ai/.
-
-
www.economist.com
-
eight years after release, men are 43% more likely to be taken back under arrest than women; African-Americans are 42% more likely than whites, and high-school dropouts are three times more likely to be rearrested than college graduates.
But are these disparities possibly the result of external factors (like racism)?
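The question can be made concrete with a toy simulation (entirely synthetic numbers, my own sketch): if an external factor such as policing intensity alone drives rearrest, and that factor correlates with group membership, the raw group gap appears even when underlying behavior is identical.

```python
# Synthetic illustration (numbers invented): rearrest depends only on exposure
# (how heavily someone is policed), yet exposure correlates with group, so the
# raw rates differ by group with no behavioral difference at all.
import random

random.seed(2)

def rearrested(group: str) -> bool:
    heavily_policed = random.random() < (0.6 if group == "A" else 0.3)
    p_rearrest = 0.5 if heavily_policed else 0.2  # same for everyone
    return random.random() < p_rearrest

def rate(group: str, n: int = 100_000) -> float:
    return sum(rearrested(group) for _ in range(n)) / n

ra, rb = rate("A"), rate("B")
print(f"group A: {ra:.3f}  group B: {rb:.3f}  ratio: {ra / rb:.2f}")
# group A: ~0.380  group B: ~0.290  ratio: ~1.31
```

A model trained on these outcomes would "learn" that group A is riskier, when the real driver is the unmeasured external factor.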
-