4 Matching Annotations
  1. Nov 2023
    1. [[Rolf Kleef]] [[Aldo de Moor]]

      OpenAI call "Democratic Inputs to AI": 10 projects, $100k each, 3 months

      [[Rolf Aldo Common Ground AI consensus]] #2023/11/01

    2. Firstly, we must continually question the underlying assumptions, potential pitfalls, risks, and possible unintended adverse effects of introducing AI into democratic processes, not least by always checking and refining LLM outputs with real people, or we risk falling into the fallacies and risks of democracy in silico.

      What is the role of AI here? And does it matter equally in each of its roles?
      - moderation of the conversation
      - synthesising new statements (this one in particular?)
      - transcripts
      - summarising opinions
      - determining which statements have statistically stronger support
      - the import of minority statements? E.g. all may have an opinion, but maybe not all opinions carry the same weight in a given case (democratically built bridges may fall, compared to one built by engineering teams) --> this points to curating the issues to discuss, and to ensuring all voices are indeed weighed, not just outvoted, so that groups aren't marginalised.

    3. Secondly, our process is inherently, and somewhat intimately, social. While this is by design, we observe that a significant portion of the population self-selects out of such explicitly social interactions with strangers. Similar issues are faced by in-person citizens' assemblies, where a small portion of participants may need repeated encouragement before they share their opinions and gain confidence. While human facilitators were on call during the experiment, looking into active facilitation, coaching, and aftercare for more sensitive participants may be crucial when deploying Common Ground.

      or mix with solitary interaction like in pol.is?

    4. Common Ground can be conceptualised as a multi-player variant of Pol.is. Instead of voting on Statements in isolation, we match participants into small groups of three people where they are encouraged to deliberate over the Statements they vote on, and where an AI moderator powered by GPT-4 synthesises new Statements from the content of their discussion.
      • The synthesising of new statements is interesting. Are these checked with the group of three?
      • Is the voting like in Pol.is, where you have an increasing 'cost' of voting / spreading attention?
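As a rough sketch of the mechanism the quoted passage describes (triads of three, per-group voting on Statements, a moderator model synthesising a new Statement from the discussion): everything below is my own illustration, not the authors' implementation — the greedy matching scheme, all names, and the pluggable `collect_vote`/`synthesise` callbacks are assumptions, with `synthesise` standing in for the GPT-4 moderator call.

```python
from dataclasses import dataclass, field
from itertools import islice
from typing import Callable

@dataclass
class Statement:
    text: str
    votes: dict[str, int] = field(default_factory=dict)  # participant -> -1 / 0 / +1

def match_into_triads(participants: list[str]) -> list[list[str]]:
    """Greedy matching into groups of three (simplest possible scheme;
    the paper does not specify how matching is actually done)."""
    it = iter(participants)
    return [g for g in iter(lambda: list(islice(it, 3)), [])]

def deliberation_round(
    group: list[str],
    statements: list[Statement],
    collect_vote: Callable[[str, Statement], int],
    synthesise: Callable[[list[str], list[Statement]], str],
) -> Statement:
    """One round: the triad votes on the existing Statements, then the
    moderator model synthesises a new Statement from the discussion."""
    for s in statements:
        for p in group:
            s.votes[p] = collect_vote(p, s)
    # `synthesise` is a stand-in for the GPT-4 moderator described above.
    new = Statement(text=synthesise(group, statements))
    statements.append(new)
    return new
```

Whether the synthesised Statement should go back to the triad for approval before entering the shared pool (the question raised above) would be a single extra confirmation step between synthesis and `statements.append`.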