9 Matching Annotations
  1. Nov 2024
    1. However, these rankings rely on indicators that cannot be fully implemented in Indonesia and other similar countries, such as utilizing English as the main academic publishing language, thereby perpetuating the dominance of traditional Western ranking metrics.

      Language of publication is such an important attribute, and it is not mentioned enough. I wonder if AI translations will start to change the bias towards English?

  2. Jan 2024
  3. Jul 2023
    1. In traditional artforms characterized by direct manipulation [32] of a material (e.g., painting, tattoo, or sculpture), the creator has a direct hand in creating the final output, and therefore it is relatively straightforward to identify the creator’s intentions and style in the output. Indeed, previous research has shown the relative importance of “intention guessing” in the artistic viewing experience [33, 34], as well as the increased creative value afforded to an artwork if elements of the human process (e.g., brushstrokes) are visible [35]. However, generative techniques have strong aesthetics themselves [36]; for instance, it has become apparent that certain generative tools are built to be as “realistic” as possible, resulting in a hyperrealistic aesthetic style. As these aesthetics propagate through visual culture, it can be difficult for a casual viewer to identify the creator’s intention and individuality within the outputs. Indeed, some creators have spoken about the challenges of getting generative AI models to produce images in new, different, or unique aesthetic styles [36, 37].

      Traditional artforms (direct manipulation) versus AI (tools have a built-in aesthetic)

      Some creators speak of having to wrest control of the AI output away from its trained style, which makes it challenging to create unique aesthetic styles. The artist influences the output only indirectly, by selecting training data and manipulating prompts.

      As use of the technology becomes more diverse (much as consumer photography did over the last century, the authors point out), how will biases and decisions by the owners of the AI tools influence what creators are able to make?

      To a limited extent, this is already happening in photography. Smartphones run algorithms on image sensor data to construct the picture, and this has already become a source of controversy; see Why Dark and Light is Complicated in Photographs (Aaron Hertzmann’s blog) and Putting Google Pixel's Real Tone to the test against other phone cameras (The Washington Post).

  4. May 2023
    1. An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

      [21] AI Nuances

  5. Apr 2023
  6. Dec 2022
    1. Many HRMS providers point to AI approaches for processing unstructured data as the best currently available approach to dealing with validation. Currently these approaches suffer from insufficient accuracy. Improving them requires development of large and high-quality reference datasets to better train the models.

      Historical labor data will be full of bias. AI approaches must correct for bias in training sets, lest we build very sophisticated and intelligent systems that excel at perpetuating the bias they were taught.
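
      To make the "correct for bias" point concrete, here is a minimal sketch (my own illustration, not something from the report) of one common mitigation: reweighting training examples by inverse group frequency so that groups under-represented in the historical data are not drowned out. The group labels and records below are hypothetical.

      ```python
      # Minimal illustration (not from the report): inverse-frequency reweighting
      # so that groups under-represented in historical labor data are not drowned
      # out during training. Group labels are hypothetical.
      from collections import Counter

      def inverse_frequency_weights(groups):
          """One weight per example, inversely proportional to how often that
          example's group appears in the training data (mean weight is 1.0)."""
          counts = Counter(groups)
          return [len(groups) / (len(counts) * counts[g]) for g in groups]

      # Hypothetical records: group "A" is heavily over-represented.
      groups = ["A", "A", "A", "B"]
      print(inverse_frequency_weights(groups))  # approx. [0.67, 0.67, 0.67, 2.0]
      ```

      Many training APIs accept per-example weights of this kind (e.g., a sample_weight argument), though reweighting alone does not remove bias that is already encoded in the labels themselves.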

  7. Mar 2021
  8. Jan 2021