- Nov 2024
-
katinamagazine.org
-
However, these rankings rely on indicators that cannot be fully implemented in Indonesia and other similar countries, such as utilizing English as the main academic publishing language, thereby perpetuating the dominance of traditional Western ranking metrics.
Language of publication is such an important attribute, and it is not mentioned enough. I wonder if AI translations will start to change the bias towards English?
-
- Jan 2024
-
www.technologyreview.com
-
- for: progress trap - AI, carbon footprint - AI, progress trap - AI - bias, progress trap - AI - situatedness
-
- Jul 2023
-
arxiv.org
-
In traditional artforms characterized by direct manipulation [32] of a material (e.g., painting, tattoo, or sculpture), the creator has a direct hand in creating the final output, and therefore it is relatively straightforward to identify the creator’s intentions and style in the output. Indeed, previous research has shown the relative importance of “intention guessing” in the artistic viewing experience [33, 34], as well as the increased creative value afforded to an artwork if elements of the human process (e.g., brushstrokes) are visible [35]. However, generative techniques have strong aesthetics themselves [36]; for instance, it has become apparent that certain generative tools are built to be as “realistic” as possible, resulting in a hyperrealistic aesthetic style. As these aesthetics propagate through visual culture, it can be difficult for a casual viewer to identify the creator’s intention and individuality within the outputs. Indeed, some creators have spoken about the challenges of getting generative AI models to produce images in new, different, or unique aesthetic styles [36, 37].
Traditional artforms (direct manipulation) versus AI (tools have a built-in aesthetic)
Some authors speak of having to wrest control of the AI output away from its trained style, making it challenging to create unique aesthetic styles. The artist influences the output only indirectly, by selecting training data and manipulating prompts.
As use of the technology becomes more diverse—as consumer photography did over the last century, the authors point out—how will biases and decisions by the owners of the AI tools influence what creators are able to make?
To a limited extent, this is already happening in photography. Smartphones run algorithms on image sensor data to construct the picture, and those algorithmic choices are a source of controversy; see Why Dark and Light is Complicated in Photographs | Aaron Hertzmann’s blog and Putting Google Pixel's Real Tone to the test against other phone cameras - The Washington Post.
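As a rough illustration of how much latitude those algorithms have, here is a minimal, hypothetical tone-mapping sketch in Python. Real phone pipelines (HDR stacking, local tone mapping, face-aware exposure such as Pixel's Real Tone) are far more elaborate; the `gamma` and `exposure` parameters below are just stand-ins for the kinds of choices a camera maker bakes in.

```python
import numpy as np

def tone_map(raw, gamma=2.2, exposure=1.0):
    """Map linear sensor values in [0, 1] to display values in [0, 1].

    `gamma` and `exposure` stand in for design choices made by the
    camera maker; changing them brightens or darkens shadows and
    midtones, which is exactly where skin-tone rendering is decided.
    """
    linear = np.clip(raw * exposure, 0.0, 1.0)
    return linear ** (1.0 / gamma)

# The same sensor data produces visibly different pictures
# depending on the parameters the manufacturer chose.
sensor = np.array([0.02, 0.10, 0.30, 0.70])  # hypothetical pixel values
print(tone_map(sensor, gamma=2.2, exposure=1.0))
print(tone_map(sensor, gamma=2.2, exposure=1.8))
```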
-
- May 2023
-
ourworldindata.org
-
A book is defined as a published title with more than 49 pages.
[24] AI - Bias in Training Materials
-
-
www.technologyreview.com
-
An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.
[21] AI Nuances
-
- Apr 2023
- Dec 2022
-
digitalcredentials.mit.edu digitalcredentials.mit.edu
-
Many HRMS providers point to AI approaches for processing unstructured data as the best currently available approach to dealing with validation. Currently these approaches suffer from insufficient accuracy. Improving them requires development of large and high-quality reference datasets to better train the models.
Historical labor data will be full of bias. AI approaches must correct for bias in training sets, lest we build very sophisticated and intelligent systems that excel at perpetuating the bias they were taught.
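One common, if crude, mitigation is to reweight training examples so that over-represented groups in the historical record do not dominate the loss. The sketch below is a hypothetical illustration (the group labels and data are invented), and it only addresses representation imbalance, not bias encoded in the labels themselves.

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Weight each example inversely to its group's frequency so that
    over-represented groups in historical data do not dominate training.
    This does not fix bias baked into the labels themselves."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    return [total / (n_groups * counts[g]) for g in group_labels]

# Hypothetical historical hiring records, skewed toward one group.
groups = ["A", "A", "A", "A", "A", "A", "B", "B"]
print(inverse_frequency_weights(groups))
# Examples from the under-represented group B receive larger weights.
```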
-
- Mar 2021
-
twitter.com
-
ReconfigBehSci. (2020, November 9). Session 2: The policy interface followed with a really helpful presentation by Lindsey Pike, from Bristol, and then panel discussion with Mirjam Jenny (Robert Koch Institute), Paulina Lang (UK Cabinet Office), Rachel McCloy (Reading Uni.), and Rene van Bavel (European Commission) [Tweet]. @SciBeh. https://twitter.com/SciBeh/status/1325795286065815552
-
- Jan 2021
-
www.smithsonianmag.com
-
Artificial Intelligence Is Now Used to Predict Crime.
Artificial Intelligence
-