391 Matching Annotations
  1. Last 7 days
    1. For us personally, this means that we no longer use generative AI – neither for private nor professional purposes.

      The authors avoid the use of generative AI. But they realise that this is difficult for most to do, and as such is a privileged, tech-capable position to take.

    2. To Gen or Not To Gen: The Ethical Use of Generative AI

       33 minute read

This blog entry started out as a translation of an article that my colleague Jakob and I wrote for a German magazine. After that we added more material and enriched it with additional references and sources. We aim to give an overview of many - but not all - aspects of GenAI that we have learned about and that we consider relevant for an informed ethical opinion. As for the depth of information, we are just scratching the surface; hopefully, the many references can lead you to dive in deeper wherever you want. Since we are both software developers, our views are biased and distorted. Also keep in mind that any writing about a “hot” topic like this is nothing but a snapshot of what we think we know today. By the time you read it, the authors’ knowledge and opinions will already have changed.

Last Update: December 8, 2025.

Table of Contents

- Abstract
- About us
  - Johannes Link
  - Jakob Schnell
- Introduction
  - Ethics, what does that even mean?
  - Clarification of terms
- Basics
  - Can LLMs think?
  - What LLMs are good at
- GenAI as a knowledge source
- GenAI in software development
- Actual vs. promised benefits
- Harmful aspects of GenAI
  - GenAI is an ecological disaster
    - Power
    - Water
    - Electronic Waste
  - GenAI threatens education and science
  - GenAI is destroying the free internet.
  - GenAI is a danger to democracy
  - GenAI versus human creativity
  - Digital colonialism
- Political aspects
- Conclusion
- Can there be ethical GenAI?
- How to act ethically

Abstract

ChatGPT, Gemini, Copilot. The number of generative AI applications (GenAI) and models is growing every day. In the field of software development in particular, code generation, coding assistants and vibe coding are on everyone’s lips. Like any technology, GenAI has two sides.
The great promises are offset by numerous disadvantages: immense energy consumption, mountains of electronic waste, the proliferation of misinformation on the internet and the dubious handling of intellectual property are just a few of the many negative aspects. Ethically responsible behaviour requires us to look at all the advantages, disadvantages and collateral damage of a technology before we use it or recommend its use to others. In this article, we examine both sides and eventually arrive at our personal and naturally subjective answer to whether and how GenAI can be used in an ethical manner.

About us

Johannes Link

… has been programming for over 40 years, 30 of them professionally. Since the end of the last century, extreme programming and other human-centred software development approaches have been at the heart of his work. Shaping his private and professional life in a meaningful and ethical way has been his driving force for years. He has been involved with GenAI since the early days of OpenAI’s GPT language models. More about Johannes can be found at https://johanneslink.net.

Jakob Schnell

… studied mathematics and computer science and has been working as a software developer for 5 years. He works as a lecturer and course director in university and non-university settings. As a youth leader, he also comes into regular contact with the lives of children and young people. In all these environments, he observes the growing use of GenAI and its impact on people.

Introduction

Ethics, what does that even mean?

Ethical behaviour sounds like the title of a boring university seminar. However, if you look at the Wikipedia article on the term 1, you will find that ‘how individuals behave when confronted with ethical dilemmas’ is at the heart of the definition. So it’s about us as humans taking responsibility and weighing up whether and how we do or don’t do certain things based on our values.
We have to consider ethical questions in our work because all the technologies we use and promote have an impact on us and on others. Therefore, they are neither neutral nor without alternative. It is about weighing up the advantages and potential against the damage and risks; and that applies to everyone, not just us personally. Because often those who benefit from a development are different from those who suffer the consequences. As individuals and as a society, we have the right to decide whether and how we want to use technologies. Ideally, this should be in a way that benefits us all; but under no circumstances should it be in a way that benefits a small group and harms the majority.

The crux of the matter is that ethical behaviour does not come for free. Ethics are neither efficient nor do they enhance your economic profit. That means that by acting according to your values you will, at some point, have to give something up. If you’re not willing to do that, you don’t have values - just opinions.

Clarification of terms

When we write ‘generative AI’ (GenAI), we are referring to a very specific subset of the many techniques and approaches that fall under the term ‘artificial intelligence’. Strictly speaking, these are a variety of very different approaches that range from symbolic logic through automated planning to the broad field of machine learning (ML). Nowadays most effort, hype and money goes into deep learning (DL): a subfield of ML that uses multi-layered artificial neural networks to discover statistical correlations (aka patterns) in very large amounts of training data in order to reproduce those patterns later. Large language models (LLMs) and related methods for generating images, videos and speech now make it possible to apply this idea to completely unstructured data. While traditional ML methods often got by with a few dozen parameters, these models now work with several trillion (10^12) parameters.
In order for this to produce the desired results, both the amount of training data and the training duration must be increased by several orders of magnitude. This brings us to the definition of what we mean by ‘GenAI’ in this article: hyperscaled models that can only be developed, trained and deployed by a handful of companies in the world. These are primarily the GenAI services provided by OpenAI, Anthropic, Google and Microsoft, or services based on them. We also focus primarily on language models; the generation of images, videos, speech and music plays only a minor role in this article.

Our focus on hyperscale services does not mean that other ML methods are free of ethical problems; however, we are dealing with a completely different order of magnitude of damage and risk here. For example, there do exist variations of GenAI that use the same or similar techniques, but on a much smaller scale and in restricted domains (e.g. AlphaFold 2). These approaches tend to bring more value with fewer downsides.

Basics

GenAI models are designed to interpolate and extrapolate 3, i.e. to fill in the gaps between training data and speculate beyond the limits of the training data. Together with the stochastic nature of the training data, this results in some interesting properties:

- GenAI models ‘invent’ answers; with LLMs, we like to refer to this as ‘hallucinations’.
- GenAI models do not know what is true or false, good or bad, efficient or effective - only what is statistically probable or improbable in relation to training data, context and query (aka prompt).
- GenAI models cannot explain their output; they have no capability of introspection. What is sold as introspection is just more output, with the previous output re-injected.
- GenAI models do not learn from you; they only draw from their training material. The learning experience is faked by reinjecting prior input into a conversation’s context 4.

The context, i.e.
the set of input parameters provided, is decisive for the accuracy of the generated result, but can also steer the model in the wrong direction. Increasing the context window makes a query much more computation-intensive - likely in a quadratic way. Therefore, the promised increase of the ‘maximum context window’ of many models is mostly fake 5. The reliability of LLMs cannot be fundamentally increased by even greater scaling 6.

Can LLMs think?

Proponents of the language-of-thought hypothesis 7 believe it is possible for purely language-based models to acquire the capabilities of the human brain – reasoning, modelling, abstraction and much more. Some enthusiasts even claim that today’s models have already acquired this capability. However, recent studies 8 9 show that today’s models are neither capable of genuine reasoning nor do they build internal models of the world. Moreover, “…according to current neuroscience, human thinking is largely independent of human language 10” and there is fundamental scientific doubt that human cognition can be achieved through computation in practice, let alone by scaling up the training of deep networks 11.

An example of this lack of understanding of the world is the prompt ‘Give me a random number between 0 and 50’. The typical GenAI response is ‘27’, and it occurs significantly more often than true randomness would allow. (If you don’t believe it, just try it out!) This is because 27 is the most likely answer given the GenAI training data – and not because the model understands what ‘random’ means.

‘Chain of Thought (CoT)’ approaches and ‘reasoning models’ attempt to improve reasoning by breaking down a prompt, the query to the model, into individual (logical) steps and then delegating these individual steps back to the LLM. This allows some well-known reasoning benchmarks to be met, but it also multiplies the necessary computational effort by a factor between 30 and 700 12.
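The ‘27’ experiment can be turned into a measurable test. The sketch below mocks an LLM with a deliberately skewed sampler (the favourite values and the bias are invented for illustration, not measured from any real model) and tallies its answers against a genuinely uniform random source:

```python
import random
from collections import Counter

def mock_llm_random(rng):
    """Hypothetical stand-in for asking an LLM for a 'random number
    between 0 and 50'. The favourites and the 60% bias are made up
    to mimic the reported skew - no real model is queried here."""
    favourites = [27, 37, 42]
    if rng.random() < 0.6:          # heavy bias toward a few 'random-looking' values
        return rng.choice(favourites)
    return rng.randint(0, 50)

rng = random.Random(0)
trials = 10_000

llm_counts = Counter(mock_llm_random(rng) for _ in range(trials))
fair_counts = Counter(rng.randint(0, 50) for _ in range(trials))

# A uniform sampler spreads roughly 10000/51 = 196 hits over each of the
# 51 possible values; the skewed 'LLM' piles most of its mass on favourites.
print(llm_counts.most_common(3))
print(fair_counts.most_common(3))
```

Running a similar tally against a real chatbot, over many fresh sessions, is the honest version of the ‘just try it out’ suggestion above.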
In addition, multistep reasoning lets individual errors compound into large ones. And yet, CoT models do not seem to possess any real reasoning abilities 13 14 and improve the overall accuracy of LLMs only marginally 15.

The following thought experiment from 16 underscores the lack of real “thinking” capabilities: LLMs have simultaneous access to significantly more knowledge than humans. Together with the postulated ability of LLMs to think logically and draw conclusions, new insights should just fall from the sky. But they don’t. Getting new insights from LLMs would require these insights to be already encoded in the existing training material, and to be decoded and extracted by purely statistical means.

What LLMs are good at

Undoubtedly, LLMs represent a major qualitative advance when it comes to extracting information from texts, generating texts in natural and artificial languages, and machine translation. But even here, the error rate, and above all the type of error (‘hallucinations’), is so high that autonomous, unsupervised use in serious applications must be considered highly negligent.

GenAI as a knowledge source

As we have pointed out above, LLMs cannot differentiate between true and false - regardless of the training material. An LLM does not answer the question “What is XYZ?” but the question “What would an answer to the question ‘What is XYZ?’ look like?”. Nevertheless, many people claim that the answers that ChatGPT and the like provide for typical what-how-when-who queries are good enough and often better than what a “normal” web search would have given us. Arguably, this is the most prevalent use case for “AI” bots today. The problem is that most of the time we will never learn about the inaccuracies, omissions, distortions and biases that the answer contained - unless we re-check everything, which defeats the whole purpose of speeding up knowledge retrieval.
The less we already know, the better the “AI’s” answer looks to us - and the less equipped we are to spot its problems. A recent study by the BBC and 22 public service media organisations shows that 45% of all “AI” assistants’ answers to questions about news and current affairs have significant errors 17. Moreover, LLMs are easy prey for manipulation - either by the organisation providing the service or by third parties. A recent study claims that even multi-billion-parameter models can be “poisoned” by injecting just a few corrupted documents 18. So, if anything is at stake, all output from LLMs must be carefully validated. Doing that, however, would contradict the whole point of using “AI” to speed up knowledge acquisition.

GenAI in software development

The creation and modification of computer programmes is considered a prime domain for the use of LLMs. This is partly because programming languages have less linguistic variance and ambiguity than natural languages. Moreover, there are many methods for automatically checking generated source code, such as compiling, static code analysis and automated testing. This simplifies the validation of generated code and thereby creates an additional feeling of trust.

Nevertheless, individual reports on the success of coding assistants such as Copilot, Cursor, etc. vary greatly. They range from ‘completely replacing me as a developer’ to ‘significantly hindering my work’. Some argue that coding agents considerably reduce the time they have to invest in “boilerplate” work, like writing tests, creating data transfer objects or connecting their domain code to external libraries. Others counter by pointing out that delegating these drudgeries to GenAI makes you miss opportunities to get rid of them, e.g. by introducing a new abstraction or automating parts of your pipeline, and to learn about the intricacies and failure modes of the external library.
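Part of why these reports vary: an LLM samples its output from a probability distribution, so the same prompt can yield different code on different runs. A toy sketch of temperature-based sampling, the mechanism behind that behaviour, makes this concrete; the logits are invented for illustration and no real model or API is involved:

```python
import math
import random

def sample_token(logits, rng, temperature=1.0):
    """Draw one token index from a softmax distribution over `logits`.
    With temperature > 0 the draw is stochastic: repeated runs over
    identical input may yield different tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Made-up logits standing in for a model's scores over 4 candidate tokens.
logits = [2.0, 1.9, 0.5, 0.1]

# Two 'runs' over the same prompt, differing only in their random state -
# much as two invocations of a hosted model effectively do.
rng_a = random.Random(1)
run_a = [sample_token(logits, rng_a) for _ in range(20)]
rng_b = random.Random(2)
run_b = [sample_token(logits, rng_b) for _ in range(20)]

print(run_a)
print(run_b)  # almost certainly a different sequence
```

Note that the two top tokens here are nearly tied, so even small sampling noise flips between them - which is why prompting, unlike calling a library function, gives no repeatability guarantee.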
Unlike old-school code generation or code libraries, prompting a coding agent is not “just another layer of abstraction”. It misses out on several crucial aspects of a useful abstraction:

- Its output is not deterministic. You cannot rely on any agent producing the same code the next time you feed it the same prompt.
- The agent does not hide the implementation details, nor does it allow you to reliably change those details if the previous implementation turns out to be inadequate. Code that is output by an LLM, even if it is generated “for free”, has to be read and maintained each time you touch the related logic or feature.
- The agent does not tell you if the amount of detail you give in your prompt is sufficient for figuring out an adequate implementation. On the contrary, the LLM will always fill the specification holes with some statistically derived assumptions.

Sadly, serious studies on the actual benefits of GenAI in software development are rare. The randomised trial by Metr 19 provides an initial indication, measuring a decline in development speed for experienced developers. An informal study by ThoughtWorks estimates the potential productivity gain from using GenAI in software development at around 5-15% 20. If “AI coding” were increasing programmers’ productivity by any big number, we would see a measurable growth of new software in app stores and OSS repositories. But we don’t; the numbers are flat at best 21 22.

But even if we assume a productivity increase in coding through GenAI, there are still two points that further diminish this postulated efficiency gain: Firstly, the results of the generation must still be cross-checked by human developers. However, it is well known that humans are poor checkers and lose both attention and enjoyment in the process. Secondly, software development is only to a small extent about writing and changing code.
The most important part is discovering solutions and learning about the use of these solutions in their context. Peter Naur calls this ‘programming as theory building’ 23. Even the perfect coding assistant can therefore only take over the coding part of software development. For the essential rest, we still need humans.

If we now also consider the finding that using AI can relatively quickly lead to a loss of problem-solving skills 24 - or that these skills are not acquired at all - then the overall benefit of using GenAI in professional software development is more than questionable. As long as programming - and every technicality that comes with it - is not fully replaced by some kind of AI, we will still need expert developers who can program, maintain and debug code down to the finest level of detail. Where, we wonder, will those senior developers come from when companies replace their junior staff with coding agents?

Actual vs. promised benefits

If you read testimonials about uses of GenAI that people perceive as successful, you will mostly encounter scenarios in which ‘AI’ makes tasks that are perceived as boring, unnecessarily time-consuming or actually pointless faster or more pleasant. So it’s mainly about personal convenience and perceived efficiency. Entertainment also plays a major role: the poem for Grandma’s birthday, the funny song for the company anniversary or the humorous image for the presentation are quickly and supposedly inexpensively generated by ‘AI’.

However, the promises made by the dominant GenAI companies are quite different: solving the climate crisis, providing the best medical advice for everyone, revolutionising science, ‘democratising’ education and much more.
GPT-5, for example, is touted by Sam Altman, CEO of OpenAI, as follows: ‘With GPT-5, it’s now like talking to an expert — a legitimate PhD-level expert in any area you need […] they can help you with whatever your goals are.’ 25 However, to date, there is still no actual use case that provides a real qualitative benefit for humanity, or at least for larger groups. The question ‘What significant problem (for us as a society) does GenAI solve?’ remains unanswered. On the contrary: while machine learning and deep learning methods certainly have useful applications, the most profitable area of application for ‘AI’ at present is the discovery and development of new oil and gas fields 26.

Harmful aspects of GenAI

But regardless of how one assesses the benefits of this technology, we must also consider the downsides, because only then can we ultimately make an informed and fair assessment. In fact, the range of negative effects of hyperscaled generative AI that can already be observed is vast. Added to this are numerous risks that have the potential to cause great social harm. Let’s take a look at what we consider to be the biggest threats:

GenAI is an ecological disaster

Power

The data centres required for training and operating large generative models 27 far exceed today’s dimensions in terms of both number and size. Data centre energy demand in the USA is projected to grow from 4.4% of total electricity consumption in 2023 to 22% in 2028 28. In addition, the typical data centre electricity mix is more CO2-intensive than the average mix. Estimates see a rise of ~11 percent in coal-generated electricity in the US, as well as tripled greenhouse gas emissions worldwide by 2030 - compared to the scenario without GenAI technology 29. Just recently, Sam Altman of OpenAI blogged some numbers about the energy and water usage of ChatGPT for “the average query” 30.
On the one hand, an average is rather meaningless when a distribution is heavily skewed; the numbers for queries with large contexts or “chain of reasoning” computations would be orders of magnitude higher. Thus, the potential efficiency gains from more economical language models are more than offset by the proliferation of use, e.g. through CoT approaches and ‘agent systems’. On the other hand, big tech’s disclosure of energy consumption (e.g. by Google 31) is intentionally selective. Ketan Joshi goes into quite some detail about why experts think that the AI industry is hiding the full picture 32.

Since building new power plants - even coal- or gas-fuelled ones - takes a lot of time, data centre companies are even reviving old jet engines to power their new hyperscalers 33. Be aware that those engines are not only much noisier than other power plants but also pump out nitrogen oxides, among the main chemicals responsible for acid rain 34.

Water

Another problem is the immensely high water consumption of these data centres 35. After all, cooling requires clean water of drinking quality in order not to contaminate or clog the cooling pipes and pumps. Already today, new data centre locations are competing with human consumption of drinking water. According to Bloomberg News, about two-thirds of the data centres built or developed in 2022 are located in areas that are already under “water stress” 36. In the US alone, “AI servers […] could generate an annual water footprint ranging from 731 to 1,125 million m3” 37. It’s not only an American problem, though. In other areas of the world, water-thirsty data centres also compete with the drinking water supply for humans 38.

Electronic Waste

Another ecological problem is being noticeably exacerbated by ‘AI’: the amount of electronic waste (e-waste) that we ship mainly to “Third World” countries and which is responsible for soil contamination there.
Efficient training and querying of very large neural networks requires very large quantities of specialised chips (GPUs). These chips often have to be replaced and disposed of within two years. The typical data centre might not last longer than 3 to 5 years before it has to be rebuilt in large part 39.

In summary, it can be said that GenAI is at least an accelerator of the ecological catastrophe that threatens the earth. And it has become the argument for Google, Amazon and Microsoft to completely abandon their zero-CO2 targets 40 and replace them with investments of several hundred billion dollars in new data centres.

GenAI threatens education and science

People often try to use GenAI in areas where they feel overloaded and overwhelmed: training, studying, nursing, psychotherapeutic care, etc. The fields of application for ‘AI’ are therefore a good indication of socially neglected and underfunded areas. The fact that LLMs are very good at conveying the impression of genuine knowledge and competence makes their use particularly attractive in these areas. A teacher under the simultaneous pressure of lesson preparation, corrections and covering for sick colleagues turns to ChatGPT to quickly create an exercise sheet. A student under pressure to get good grades has their English essay corrected by ‘AI’. The researcher under pressure to publish will ‘save’ research time by reading the AI-generated summary of relevant papers – even if it is completely wrong in terms of content 41.

Tech companies like OpenAI and Microsoft play on that situation by offering their ‘AI’ for free or for little money to students and universities. The goal is obvious: students who get hooked on outsourcing some of their “tedious” tasks to a service will continue to use - and eventually buy - this service after graduation.
What falls by the wayside are problem-solving skills, engagement with complex sources, and the generation of knowledge through understanding and supplementing existing knowledge. Some even argue that AI is destroying critical education and learning itself 42: ‘Students aren’t just learning less; their brains are learning not to learn.’ The training cycle of schools and universities is fast. Teachers are already reporting that pupils and students have acquired noticeably less competence in recent years, and have instead become dependent on unreliable ‘tools’ 43. The real problem with using GenAI to do assignments is not cheating, but that students “are not just undermining their ability to learn, but to someday lead.” 44

GenAI is destroying the free internet.

The fight against bots on the internet is almost as old as the internet itself – and has been quite successful so far. Multifactor authentication, reCAPTCHA, honeypots and browser fingerprinting are just a few of the tools that help protect against automated abuse. However, GenAI takes this problem to a new level – in two ways.

To make ‘the internet’ usable as the main source for training LLMs, AI companies use so-called ‘crawlers’. These essentially behave like DDoS attackers: they send tens of thousands of requests at once, from several hundred IPs, in a very short time. robots.txt files are ignored; instead, the source IP and user agent are obscured 45. These practices have massive disadvantages for providers of genuine content:

- Costs for additional bandwidth.
- Lost advertising revenue, as search engines now offer LLM-generated summaries instead of links to the sources. This threatens the existence of the remaining independent journalism in particular 46.
- Misuse of their own content for AI-supported competition.
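For contrast with the crawler behaviour just described: honouring robots.txt is technically trivial. A minimal sketch using only Python’s standard library, with made-up example rules (not any real site’s policy):

```python
from urllib import robotparser

# Made-up robots.txt rules for illustration only.
rules = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A well-behaved crawler checks permission before every fetch...
print(rp.can_fetch("MyCrawler", "https://example.org/articles/post.html"))  # True
print(rp.can_fetch("MyCrawler", "https://example.org/private/data.html"))   # False

# ...and throttles itself to the site's requested delay between requests.
print(rp.crawl_delay("MyCrawler"))  # 10 (seconds)
```

That such a check fits in a dozen lines underlines the article’s point: ignoring robots.txt is a choice, not a technical limitation.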
If the place where knowledge is generated is separated from the place where it is consumed, and if this separation makes the work of generation even more invisible than before, the motivation to continue generating knowledge also declines. For projects such as Wikipedia, this means fewer donors and fewer contributors. Open communities often have no other option but to shut themselves off.

Another aspect is the flooding of the internet with generated content that cannot be automatically distinguished from non-generated content. This content overwhelms the maintainers of open source software or portals such as Wikipedia 47. If this content is then also entered by humans – often in the belief that they are doing good – it is no longer possible to take action against the practice. In the long run, this means that less and less authentic training material will lead to increasingly poor results from the models.

Last but not least, autonomously acting agents make the already dire state of internet security much worse 48. Think of handing all your personal data and credentials to a robot that

- is distributing and using that data across the web, wherever and whenever it deems necessary for reaching some goal.
- is controlled by LLMs, which are vulnerable to all kinds of prompt injection attacks 49.
- is controlled by and reporting to companies that do not have your best interest in mind.
- has no awareness or knowledge of the implications of its actions.
- is acting on your behalf, thereby making you accountable.

GenAI is a danger to democracy

The manipulation of public opinion through social media predates the arrival of LLMs. However, this technology gives the manipulators much more leverage. By flooding the web with fake news, fake videos and fake everything, undemocratic (or just criminal) parties make it harder and harder for any serious media and journalism to get the attention of the public.
People no longer have a common factual basis, which is necessary for all social negotiations. If you don’t agree on at least some basic facts, arguing about policies and measures to take is pointless. Without negotiations, democracy will die; in many parts of the world it already is dying.

GenAI versus human creativity

Art and creativity are also threatened by generative AI. The impact on artists’ incomes is obvious now that logos, images and illustrations can be created easily and quickly from AI prompts. A similar effect can also be observed in other areas. Studies show that poems written by LLMs are indistinguishable from those written by humans and that generative AI products are often rated more highly 50. This can be explained by a trend towards the middle and the average, which can also be observed in the music and film scenes: due to its basic function, GenAI cannot create anything fundamentally new, but replicates familiar patterns, which is precisely why it is so well received by the public.

Ironically, ‘AI’ draws its ‘creativity’ from the content of those it seeks to replace. Much of this content was used as training material against the will of the rights holders. Whether this constitutes copyright infringement has not yet been decided; morally, the situation seems clear. The creative community is the first to be seriously threatened by GenAI in its livelihood 51.

It’s not a coincidence that a big part of GenAI efforts is targeted at “democratizing art”. This framing is completely upside down. Art has been one of the most democratic activities for a very long time. Everybody can do it; but not everybody wants to put in the effort, the practice time and the soul. Real art is not about the product but about the process, which requires real humans. Generating art without the friction is about getting rid of the humans in the loop - and still making money.
Digital colonialism

The huge amount of data required by hyperscaled AI approaches makes it impossible to completely curate the learning content. And yet, one would like to avoid the reproduction of racist, inhuman and criminal content. Attempts are being made to get the problem under control by subsequently adapting the models to human preferences and local laws through additional ‘reinforcement learning from human feedback’ (RLHF) 52. The cheap labour for this very costly process is found in the Global South. There, people in poorly paid jobs are exposed to hours of hate speech, child abuse, domestic violence and other horrific scenarios in order to filter them out of the training material of large AI companies 53. Many emerge from these activities traumatised.

However, it is not only people who are exploited in the less developed regions of the world, but also nature: the poisoning of the soil with chemicals during the extraction of raw materials for digital chips, as well as the contamination caused by our electronic waste and its improper disposal, are collateral damage that we willingly accept and whose long-term consequences are currently extremely difficult to assess. Here, too, the “developed” world profits, whereas the negative aspects are outsourced to the former colonies and other poor regions of the world.

Political aspects

As software developers, we would like to ‘leave politics out of it’ and instead focus entirely on the cool tech. However, this is impossible when the advocates of this technology pursue strong political and ideological goals. In the case of GenAI, we can clearly see that the US corporations behind it (OpenAI, Google, Meta, Microsoft, etc.) have no problem with the current authoritarian – some say fascist – US government 54.
In concrete terms, this means, among other things, that the models are explicitly manipulated to be less liberal, or simply not to generate any output that could upset the CEO or the president 55. Even more serious is the fact that many of the leading minds behind these corporations and their financiers adhere to beliefs that can be broadly described as digital fascism. These include Peter Thiel, Marc Andreessen, Alex Karp, JD Vance, Elon Musk and many others on “The Authoritarian Stack” 56. Their ideologies, disguised as rational theories, are called longtermism and effective altruism. What they have in common is that they consider democracy and the state to be obsolete models, compassion to be ‘woke’, and the current problems of humanity to be insignificant, as our future supposedly lies in the colonisation of space and the merging of humans with artificial superintelligence 57.

Do we want to give people who adhere to these ideologies (even) more power, money and influence by using and paying for their products? Do we want to feed their computer systems with our data? Do we really want to expose ourselves and our children to the answers from chatbots which they have manipulated?

Not quite as abstruse, but similarly misanthropic, is the imminent displacement of many jobs by AI, as postulated by the same corporations, which use this claim to put pressure on employees. Demanding a large salary? Insisting on your legal rights? Complaining about too much workload? Doubts about the company’s goals? Then we’ll just replace you with cheap and uncomplaining AI!

Whichever way you look at it, AI and GenAI are already being used politically. If we go along without resistance, we are endorsing this approach and supporting it with our time, our attention and our money.

Conclusion

Ideally, we would like to quantify our assessment by adding up the advantages, adding up the disadvantages and finally checking whether the balance is positive or negative.
Unfortunately, in our specific case neither the benefits nor the harms are easily quantifiable; we must therefore consult our social and personal values. Discussions about GenAI usually revolve purely around its benefits. Often, the capabilities of all 'AI' technologies (e.g. protein folding with AlphaFold 2) are lumped together, even though they have little in common with hyperscaling GenAI. However, if we consider the consequences and do not ignore the problems this technology entails – i.e. if we consider both sides in terms of ethics – the assessment changes. Convenience, speed and entertainment are then weighed against numerous damages and risks to the environment, the state and humanity. In this sense, the ethical use and further expansion of GenAI in its current form is not possible.

Can there be ethical GenAI?Permalink

If the use of GenAI is not ethical today, what would have to change? Which negative effects of GenAI would have to disappear, or at least be greatly reduced, in order to tip the balance between benefits and harms in the other direction?

- The models would have to be trained exclusively with publicly known content whose original creators consent to its use in training AI models.
- The environmental damage would have to be reduced to such an extent that it does not further fuel the climate crisis.
- Society would have to get full access to the training and operation of the models in order to rule out manipulation by third parties and restrict their use to beneficial purposes. This would require democratic processes, good regulation, and oversight through judges and courts.
- The misuse and harming of others, e.g. through copyright theft or digital colonialism, would have to be prevented.

Is such a change conceivable? Perhaps. Is it likely, given the interest groups and political aspects involved? Probably not.

      All these factors are achievable, I think, or will be soon-ish: smaller models, better-sourced data sets, niche models, etc. But not with the current actors, as mentioned at the end.

    1. would take seriously the fact that intelligence is now being scaled and distributed through organizations long before it is unified or fully understood

      There's no other way: understanding comes from using it and having stuff go wrong. The scandals around algos are important in this. Scale and distribution are different beasts. Distribution does not need scale (though a network effect helps) in order to work. The need for scale in digital is an outcome of the financing structure and chosen business model, and is essentially the power grab. #openvraag How do you put more focus on distribution as a counterforce against the actors' hunger for scale?

    2. Empirical grounding. In 2015, scaling laws, emergent capabilities, and deployment‑driven feedback loops were speculative. Today, they are measurable. That shift changes the nature of responsibility, governance, and urgency in ways that were difficult to justify rigorously at the time.

      States that, in contrast to a decade ago, we can now measure scaling, emergent capabilities and feedback loops. Interesting. - [ ] #30mins #ai-ethics Work this out in more detail: what would you measure, and what could that look like? How does it compare with various assessment mechanisms?
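      As a concrete illustration of what "measuring scaling" can mean: empirical scaling laws are typically power-law fits of loss against compute or parameter count. A minimal sketch below, on synthetic data with illustrative constants (the values `a`, `b` and `floor` are assumptions for the example, not measurements from any real model):

```python
import numpy as np

# Synthetic "loss vs. parameter count" data following a power law
# L(N) = a * N**(-b) + floor, where floor is an irreducible loss.
rng = np.random.default_rng(0)
N = np.logspace(6, 10, 20)             # model sizes: 1e6 .. 1e10 parameters
a, b, floor = 400.0, 0.08, 1.7         # assumed, illustrative constants
loss = a * N ** (-b) + floor
loss = loss * (1 + rng.normal(0, 0.01, N.shape))  # 1% measurement noise

# Fit the power-law part in log-log space:
# log(L - floor) = log(a) - b * log(N), a straight line.
slope, intercept = np.polyfit(np.log(N), np.log(loss - floor), 1)
b_hat, a_hat = -slope, np.exp(intercept)

print(f"fitted exponent b = {b_hat:.3f}, coefficient a = {a_hat:.1f}")
```

      The point of such a fit is that the exponent is an empirically checkable quantity: different model families, data regimes, or training setups can be compared by their fitted exponents, which is one way the "speculative in 2015, measurable today" claim cashes out.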

    3. Political economy and power. The book largely brackets capital concentration, platform dynamics, and geopolitical competition. Today, these are central to any serious discussion of AI, not because the technology changed direction, but because it scaled fast enough to collide with real institutions and entrenched interests.

      geopolitics, whether in shape of capital, tech or politics has become key, which he overlooked in 2015/8

    4. Alignment as an operational problem. The book assumes that sufficiently advanced intelligences would recognize the value of cooperation, pluralism, and shared goals. A decade of observing misaligned incentives in human institutions amplified by algorithmic systems makes it clear that this assumption requires far more rigorous treatment. Alignment is not a philosophical preference. It is an engineering, economic, and institutional problem.

      The book did not address alignment; it assumed alignment would sort itself out (in contrast to [[AI begincondities en evolutie 20190715140742]] on how starting conditions might influence it). David recognises how algos are also used to make differences worse.

    5. what it feels like to live through an intelligence transition that does not arrive as a single rupture, but as a rolling transformation, unevenly distributed across institutions, regions, and social strata.

      A more detailed formulation of Gibson's "the future is already here, just not evenly distributed". Add sectors/domains. There's more here to tease out with respect to my change management work. - [ ] #30mins #ai-ethics Fill in with more concrete examples of how this quote takes shape.

    6. As a result, the debate shifted. The central question is no longer “Can we build this?” but “What does this do to power, incentives, legitimacy, and trust?”

      David posits questions that are all on the application side: what is the impact of using AI? There are also questions on the design side: how do we shape the tools with respect to those concepts? Compare [[AI begincondities en evolutie 20190715140742]], e.g. different outcomes if you start from military AI parameters or from civil aviation (much stricter), in reference to [[Novacene by James Lovelock]].

  2. Dec 2025
  3. Nov 2025
    1. The tunnel far below represented Nevada’s latest salvo in a simmering water war: the construction of a $1.4 billion drainage hole to ensure that if the lake ever ran dry, Las Vegas could get the very last drop

      Deep Concept: Modern America is mostly corrupted by its own creation of wealth. Wealth is power, power corrupts, and absolute power corrupts absolutely! Money and wealth have completely changed the underlying foundation of America. Modern America is the corrupted result of wealth. Morality and ethics in modern America have been reshaped to "fit" European aristocracy, ironically the same European aristocracy America fled in the Revolutionary War.

      Billions and billions of taxpayer money are spent on projects that could never pass rigorous examination and best public-ROI use. Political authoritative conditions rule public tax money for the benefit of a few at the expense of the many. The public, "cult-like" sheep, have no clue how they are being abused.

      The authoritative abusers (politicians) follow the "mostly" corrupt American (fuck-you) form of government and individual power tactics that have been conveniently embedded in corrupt modern morality and ethics, used by corrupted lawyers and judges to codify the fundamental moral code that underpins the original American Constitution.

    1. Digital archaeology resists the (digital) neo-colonialism of Google, Facebook, and similar tech giants that typically promote disciplinary silos and closed code and data repositories.

      This quote highlights the important ethical dimension of digital archaeology, showing how open access and collaborative tools in digital archaeology challenge corporate control over knowledge. It aligns with the open GIS platforms and the open-data policy that I will use for my projects.

  4. Sep 2025
    1. Joy, Bill. “Why the Future Doesn’t Need Us.” Wired, April 1, 2000. https://www.wired.com/2000/04/joy-2/.

      Annotation url: urn:x-pdf:753822a812c861180bef23232a806ec0

      Annotations: https://jonudell.info/h/facet/?user=chrisaldrich&url=urn%3Ax-pdf%3A753822a812c861180bef23232a806ec0&max=100&exactTagSearch=true&expanded=true

      Reprints available at: - Joy, Bill. "Why the Future Doesn't Need Us." 2000. AAAS Science and Technology Policy Yearbook 2001, edited by Albert H. Teich et al., Amer Assn for the Advancement of Science, 2002, pp. 47–75. Google Books, https://www.google.com/books/edition/Integrity_in_Scientific_Research/0X-1g8YElcsC. - Joy, Bill. "Why the Future Doesn't Need Us." 2000. Emerging Technologies: Ethics, Law and Governance, by Gary E. Marchant and Wendell Wallach, edited by Gary E. Marchant and Wendell Wallach, 1st ed., Routledge, 2020, pp. 65–71.

  5. Aug 2025
    1. Mechanisms of Techno-Moral Change: A Taxonomy and Overview John Danaher & Henrik Skaug Sætra 2023

      The idea that technologies can change moral beliefs and practices is an old one. But how, exactly, does this happen? This paper builds on an emerging field of inquiry by developing a synoptic taxonomy of the mechanisms of techno-moral change. It argues that technology affects moral beliefs and practices in three main domains: decisional (how we make morally loaded decisions), relational (how we relate to others) and perceptual (how we perceive situations). It argues that across these three domains there are six primary mechanisms of techno-moral change: (i) adding options; (ii) changing decision-making costs; (iii) enabling new relationships; (iv) changing the burdens and expectations within relationships; (v) changing the balance of power in relationships; and (vi) changing perception (information, mental models and metaphors). The paper also discusses the layered, interactive and second-order effects of these mechanisms.

      DOI 10.1007/s10677-023-10397-x

      Mechanisms of Techno-moral Change in Zotero PDF

  6. Jul 2025
  7. Jun 2025
  8. May 2025
  9. Mar 2025
  10. Feb 2025
  11. Jan 2025
    1. [[Moral Progress by Philip Kitcher]] reads, based on both their blurbs, like a continuation of [[The Ethical Project by Philip Kitcher]], exploring how actual ethical changes have occurred in time. Moves towards method rather than big-T truth (in line w social evolution perspective of the Ethical Project) and strengthens the pragmatism in pragmatic naturalism. 10yrs between the 2 books. Blurb mentions progressing away from something rather than to something, which chimes neatly with the evolutionary perspective of moving away from being hindered by a selective pressure. The attention to moral progress as method brings it closer to my notion of [[Ethics As A Practice (EaaP) 20200819161530]]

    1. Philip Kitcher makes a provocative proposal: Instead of conceiving ethical commands as divine revelations or as the discoveries of brilliant thinkers, we should see our ethical practices as evolving over tens of thousands of years, as members of our species have worked out how to live together and prosper. Elaborating this radical new vision, Kitcher shows how the limited altruistic tendencies of our ancestors enabled a fragile social life, how our forebears learned to regulate their interactions with one another, and how human societies eventually grew into forms of previously unimaginable complexity. The most successful of the many millennia-old experiments in how to live, he contends, survive in our values today.

      Pushes virtue ethics and natural law ethics aside for a more evolutionary view of ethics enabling societal conviviality, it seems. I sense a link to Dennett's culture-as-evolution and speeding up evolution, and to networked agency. Perhaps also Latour's ANT? Link with the relational-ethics-in-AI work C did?

    2. Our human values, Kitcher shows, can be understood not as a final system but as a project-the ethical project-in which our species has engaged for most of its history, and which has been central to who we are.

      The book title [[The Ethical Project by Philip Kitcher]] reflects the ongoing nature of our evolving human values. It is not a final system, but a collective project that continually defines our sociality and, through it, our humanity.

    3. an approach he calls "pragmatic naturalism," Kitcher reveals the power of an evolving ethics built around a few core principles

      The author calls this ethics, which makes evolutionary social groupings work, 'pragmatic naturalism'. I get those terms at first glance, but if the functioning of social structures is its aim, a term closer to relationship-focused ethics and evolution might be more telling, next to the clearly involved pragmatism. As it stands, the term sounds like a fork of natural law ethics, which it doesn't seem to be, rather than indicating its early origins and evolutionary past. Evolutionaryrelationalethics?

    1. Philip Kitcher, The Ethical Project This is a very substantial book which attempts to re-cast the nature and history of ethics as a form of "social technology", aimed at remedying "altruism failures", and generally moving humanity beyond the kind of social life endured by other primates — nasty, poor, and brutish, but not solitary. (Though he doesn't mention it, this is almost an inversion of Brecht's line "grub first, then ethics".) The guiding stars are Dewey (especially Human Nature and Conduct), John Stuart Mill (especially On Liberty and The Subjection of Women), and modern work on the evolution of cooperation. Kitcher builds from here to an examination of what counts as ethical progress, appropriate method and substance for meta-ethics, and appropriate method and recommendations for actual substantive ethics at the present day. The latter are strongly egalitarian, and not just founded on the "expanding circle" of empathy notion.

      https://web.archive.org/web/20250103085440/http://bactra.org/weblog/algae-2011-11.html#kitcher

      [[The Ethical Project by Philip Kitcher]] is here said to posit ethics as 'social technology' in response to 'altruism failures' (? responses showing it was undeserved), allowing a social life different from that of other primates / animal groups. An ethics for networked agency perhaps? http://www.columbia.edu/~psk16/

  12. Dec 2024
    1. I think the Paleolithic ethical framework is simply—I mean, the hunter-gatherers—having no separation between themselves, no radical distinction between human and nonhuman—thought everything else was kindred. Literally, they thought if you went out to hunt and you’re hunting a deer, the deer is your sister or your brother, or maybe your ancestor, or maybe, more precisely, past/future forms of yourself. Because I think the ethic was you hunted with sort of prayers and sacrifice and humility. You’re asking a deer—a brother or a sister or an ancestor—to give its life for you.

      for - food is sacred - why we say prayer for the living being that died so that we may live - samsara - kill others so that we may live - hunting and killing other - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton

    2. in Vermont, Native Americans lived here—well, like everywhere in North America—they lived here in Vermont for over ten thousand years. The ecosystem was basically intact, and that’s because they had that ethical system built into their fundamental cultural assumptions—the assumptions that guided their lives. They didn’t think about them. They didn’t question them. They were simply the assumptions, the unthought assumptions.

      for - philosophy matters! - biodiversity crisis - 10,000 years of preservation vs 100 years of clearcut - David Hinton - comparison - polycrisis - climate crisis - two unthought assumptions - philosophical differences - Indigenous people of Vermont vs European settlers - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton

      comparison - polycrisis - climate crisis - biodiversity crisis - Indigenous people of Vermont - vs European settlers - unthought assumptions - unthought assumptions of Indigenous people took care of forests for 10,000 years - unthought assumptions of European settlers clear-cut all the forests in 100 years - These are philosophical differences - PHILOSOPHY MATTERS!

    3. We think of ourselves as this little bubble of obsessions and memories going on in our head that’s detached from everything else. That’s the wound.

      for - summary - polycrisis - requires a shift in stories - from little self - to big self - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton

      summary - polycrisis - requires a shift in stories - from little self - to big self - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton - We think of ourselves as this little bubble of obsessions and memories going on in our head detached from everything else - THAT'S THE WOUND! - That sounds and IS FELT as bleak, isn't it? - The scientific story of the cosmos is that there are countless solar systems in our universe, countless suns and planets over vast time scales - Our planet evolved life billions of years ago - Some of those life forms became multicellular animals, like us - Some of them developed eyes, nose, ears, skin and a brain and central nervous system - When we look out into the world, it is the cosmos distilled in us looking out at itself - Hence, we are intertwingled and woven into the fabric of everything - the cosmos in human form experiencing the cosmos itself - When we think about our extinction, it is also the cosmos thinking about extinction - When we feel ANYTHING, that's the cosmos feeling it - And WHEN WE DIE that is the cosmos in this human form dying to itself

    4. I sort of trace out these parallel developments

      for - history - connection stories that challenge the Genesis control story- begin with indigenous peoples of North America - then ping pong back and forth between Europe and North America - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton

      history - connection stories that challenge the Genesis control story - Indigenous elders of North America share stories with some Westerners in the United States and Canada - These are shared in Europe and become popular, especially amongst intellectuals - It was refreshing to hear an account of nature that wasn't considered evil and that had to be tamed and brought into God's order - Alexander von Humboldt wrote some of these and was widely read - Thoreau, Whitman and Rousseau read Humboldt - British and German Romantics such as Wordsworth, Shelley and Coleridge are also influenced by it and see the rediscovery of the wonder of nature as an antidote to the alienation of the industrial age - Completing the circle, American intellectuals Thoreau and Emerson read the Romantics, in turn influencing Whitman and John Muir

    5. The Greeks took that material change and they mythologized it into the soul. And then, of course, Genesis—the creation of the world in Christianity—says, the world is here for humans. It was created for humans to use, to dominate, to exploit, you know, in their trial here to see if they’re righteous or not.

      for - key insight - roots of anthropomorphism - Greek and Christian narratives - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton - adjacency - existential polycrisis - roots of anthropomorphism in the written language - Deep Humanity BEing journeys that explore how language constructs our reality

      key insight / summary - roots of anthropomorphism - Greek and Christian narratives - The Greeks defined the soul - The Genesis story established that we were the chosen species and all others are subservient to us - From that story, domination of nature becomes the social norm, leading all the way to the existential polycrisis / metacrisis we are now facing - This underscores the critical salience of Deep Humanity to the existential polycrisis - exploring the roots of language and how it changes our perceptions of reality - showing us how we construct our narratives at the most fundamental level, then buy into them

    6. But once you can write things down, then that mental realm suddenly starts looking timeless and radically different from the world around us. And I think that’s what really created this sense of an interior, what became, with the Greeks and the Christians, a kind of soul; this thing that’s actually made of different stuff. It’s made of spirit stuff instead of matter

      for - new insight - second cause of human separation - after settling down, it was WRITING! intriguing! - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton - adjacency - sense of separation - first - settling down - human place - second - writing - from - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton

      adjacency - between - sense of separation - first - settling down - human place - second - transition from oral to written language - adjacency relationship - Interesting that I was just reading an article on language and perception from the General Semantics organization: General Semantics and non-verbal awareness - The claim is that the transition from oral language to written language created the feeling of interiority and of a separate "soul". - This is definitely worth exploring!

      explore claim - the transition from oral language traditions to writing led us to form the sense of interiority and of a "soul" separate from the body - This claim, if we can validate it, can have profound implications - Writing definitely led us to create much more complex words, and we were able to do much more efficient timebinding - transmitting knowledge from one generation to the next. - We didn't have to depend on just a few elders to pass the knowledge on. With the invention of the printing press, written language got an exponential acceleration in intergenerational knowledge transmission. - This had a huge feedback effect on the oral language itself, increasing the number of words and meanings exponentially. - There are complex recipes for everything, and written words allow us to capture complex recipes or instructions in ways that would overwhelm oral traditions.

      to - article - General Semantics and Non-Verbal Awareness - https://hyp.is/BePQhLvTEe-wYD_MPM9N3Q/www.time-binding.org/Article-Database

    7. the sense we have now began when Paleolithic hunter-gatherers started settling into Neolithic agricultural villages. And then at that point, there was a separate human space—it’s the village and the cultivated fields around it. Hunter-gatherers didn’t have that, they’re just wandering through “the wild,” “wilderness.” Of course, that idea would make no sense to them, because there’s no separation.

      for - adjacency - paleolithic hunter-gatherer - to neolithic agricultural village - dawn of agriculture - village - cultivated fields around it - created a human space - the village - thus began the - great separation - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton

      adjacency - between - paleolithic hunter-gatherer - to neolithic agricultural village - dawn of agriculture village - cultivated fields around it - settling down - birth of the human space - the village - thus began - the great separation - adjacency relationship - He connects two important ideas together, the transition from - always-moving, never settling down paleolithic hunter-gatherer to - settled-down neolithic agricultural farmers - The key connection is that this transition from moving around and mobile to stationary is the beginning of our separation from nature - John Ikerd talks about the same thing in his article on the "three great separations". He identifies agriculture as the first of three major cultural separation events that led to our modern form of alienation - The development of a human place had humble beginnings but today, these places are "human-made worlds" that are foreign to any other species. - The act of settling down in one fixed space gave us a place we can continually build upon, accrue and most importantly, begin and continue timebinding - After all, a library is a fixed place, it doesn't move. It would be very difficult to maintain were it always moving.

      to - article - In These Times - The Three "Great Separations" that Unravelled Our Connection to Earth and Each Other - John Ikerd - https://hyp.is/CEzS6Bd_Ee6l6KswKZEGkw/inthesetimes.com/article/industrial-agricultural-revolution-planet-earth-david-korten - timebinding - Alfred Korzybski

    8. You describe how foundational stories of our Western, Christian paradigm are based on this idea of “a self-enclosed human realm separate from everything else,” and that this paradigm is a wound—one “so complete we can’t see it anymore, for it defines the very nature of what we assume ourselves to be.”

      for - human bubble, alienated from nature, human world so different from natural world - nice meme - self-enclosed human realm separate from everything else - Emergence Magazine - interview - An Ethics of Wild Mind - David Hinton


  13. Nov 2024
    1. gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm

      The author moved from mitigating the harm of algorithmic systems to the moral standpoint that actively resisting, sabotaging, and ending AI and its attached political projects are valid reactions to harm. So he's moving from monster adaptation / cultural-category adaptation to monster slaying, cf. [[Monstertheorie 20030725114320]]. I empathise, but because of the mention of attached political projects / structures I also wonder about polarisation in response, with monster embracers (there are plenty) shifting the [[Overton window 20201024155353]] towards them.

    1. Decolonizing AI is a multilayered endeavor, requiring a reaction against the philosophy of ‘universal computing’—an approach that is broad, universalistic, and often overrides the local. We must counteract this with varied and localized approaches, focusing on labor, ecological impact, bodies and embodiment, feminist frameworks of consent, and the inherent violence of the digital divide. This holistic thinking should connect the military use of AI-powered technologies with their seemingly innocent, everyday applications in apps and platforms. By exploring and unveiling the inner bond between these uses, we can understand how the normalization of day-to-day AI applications sometimes legitimizes more extreme and military employment of these technologies.There are normalized paths and routine ways to violence embedded in the very infrastructure of AI, such as the way prompts (text inputs, editor's note) are rendered into actual imagery. This process can contribute to dehumanizing people, making them legitimate targets by rendering them invisible.

      Ameera Kawash (artist, researcher) def of decolonizing AI.

  14. Oct 2024
  15. Sep 2024
  16. Aug 2024
    1. In the meantime, we're all missing out on the benefits of (a) capturing images in wider gamuts today so we can view them in years to come, and (b) having systems and displays today that can be somewhere on the journey between the very limited sRGB/Rec709 standards of the previous century, and the inevitable landing spot of achieving full Rec2100 style displays everywhere into the future. Rejecting HDR now because we can only make it half way between the current spec and that goal misses the entire point of gradual improvement, and especially the benefit of capturing content today to view it on better displays in years to come.
    1. CHINESE AMBASSADOR Exactly. But you have always taught us that liberty is the same thing as capitalism, as if life, liberty and the pursuit of happiness cannot be crushed by greed. Your American dream is financial, not ethical.

      West Wing S7 E11 "Internal Displacement"
      http://www.westwingtranscripts.com/search.php?flag=getTranscript&id=145
      written by Aaron Sorkin & Bradley Whitford

      A powerful quote about what really matters in America

  17. Jul 2024
  18. Jun 2024
  19. Feb 2024
    1. T. Herlau, "Moral Reinforcement Learning Using Actual Causation," 2022 2nd International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 2022, pp. 179-185, doi: 10.1109/ICCCR54399.2022.9790262. keywords: {Digital control;Ethics;Costs;Philosophical considerations;Toy manufacturing industry;Reinforcement learning;Forestry;Causality;Reinforcement learning;Actual Causation;Ethical reinforcement learning}

  20. Jan 2024
    1. the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
      • for: canonical unit, collaborative commons - missing part - open learning commons, question - process trap - natural capital

      • comment

        • In this context, the Indyweb and Indranet are not the canonical unit; but then the model seems to be fundamentally missing the functionality provided by the Indyweb and Indranet, which is an open learning system.
        • Without such an open learning system that captures the essence of how humans learn, the activity of problem-solving cannot be properly contextualised, along with all the limitations leading to progress traps.
        • The entire approach of posing a problem, then solving it is inherently limited due to the fractal intertwingularity of reality.
      • question: progress trap - natural capital

        • It is important to be aware that there is a real potential for a progress trap to emerge here, as any metric is liable to be abused
      • for: elephants in the room - financial industry at the heart of the polycrisis, polycrisis - key role of finance industry, Marjorie Kelly, Capitalism crisis, Laura Flanders show, book - Wealth Supremacy - how the Extractive Economy and the Biased Rules of Captialism Drive Today's Crises

      • Summary

        • This talk really emphasizes the need for the Stop Reset Go / Deep Humanity Wealth to Wellth program
        • Interviewee Marjorie Kelly started Business Ethics magazine in 1987 to show the positive side of business. After 30 years, she found that it was still tinkering at the edges. Why? Because it wasn't addressing the fundamental issue.
        • The reason there hasn't been noticeable change in spite of all these progressive efforts is that we avoided questioning the fundamental assumption that maximizing returns to shareholders and gains to shareholder portfolios is good for people and planet. It turns out that it isn't. It's fundamentally bad for civilization and has played a major role in shaping today's polycrisis.
        • Why wealth supremacy is entangled with white supremacy
        • Financial assets are the subject
          • Equity and bonds used to be equal to GDP in the 1950s.
          • Now they are 5 times as much.
        • Financial assets extract too much from common people
        • Question: Families are swimming in debt. Who owns all this financial debt? ...The financial elites do.
      • meme

        • wealth supremacy and white supremacy are entangled
  21. Dec 2023


  22. Nov 2023
    1. I am even more attuned to creative rights. We can address algorithms of exploitation by establishing creative rights that uphold the four C’s: consent, compensation, control, and credit. Artists should be paid fairly for their valuable content and control whether or how their work is used from the beginning, not as an afterthought.

      Consent, compensation, control, and credit for creators whose content is used in AI models

  23. Oct 2023
    1. https://web.archive.org/web/20231019053547/https://www.careful.industries/a-thousand-cassandras

      "Despite being written 18 months ago, it lays out many of the patterns and behaviours that have led to industry capture of 'AI Safety'", says co-author Rachel Coldicutt (with Anna Williams and Mallory Knodel, for Open Society Foundations).

      For Open Society Foundations, by 'careful industries', a UK-based research/consultancy founded in 2019. Subscribed to the 2 authors on M, and to the blog.

      A Thousand Cassandras in Zotero.

    1. Foster international collaboration on PPDSA through promotion of partnerships and an international policy environment that furthers the development and adoption of PPDSA technologies and supports common values while protecting national and economic security

      Another cross-over from paper with Anna

    2. Elevate and promote foundational and use-inspired research through investments in multidisciplinary research that will advance practical deployment of PPDSA approaches and exploratory research to develop the next generation of PPDSA technologies.
    3. Advance governance and responsible adoption through the establishment of a multi-partner steering group to help develop and maintain a healthy PPDSA ecosystem, greater clarity on the use of PPDSA technologies within the statutory and regulatory environments, and proactive risk mitigation measures.

      Some implications for the Research Harmonization paper with Anna

    4. PPDSA technologies will be created and used in a manner that stimulates responsible scientific research and innovation, and enables individuals and society to benefit equitably from the value derived from data sharing and analytics

      Data Sharing and Ethics

  24. Sep 2023
      • for: bio-buddhism, buddhism - AI, care as the driver of intelligence, Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, care drive, care light cone, multiscale competency architecture of life, nonduality, no-self, self - illusion, self - constructed, self - deconstruction, Bodhisattva vow
      • title: Biology, Buddhism, and AI: Care as the Driver of Intelligence
      • author: Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane
      • date: May 16, 2022
      • source: https://www.mdpi.com/1099-4300/24/5/710/htm

      • summary

        • a trans-disciplinary attempt to develop a framework to deal with a diversity of emerging non-traditional intelligence from new bio-engineered species to AI based on the Buddhist conception of care and compassion for the other.
        • very thought-provoking and some of the explanations and comparisons to evolution actually help to cast a new light on old Buddhist ideas.
        • this is a trans-disciplinary paper synthesizing Buddhist concepts with evolutionary biology
    1. synthetic bioengineering provides a really astronomically large option space for new bodies and new minds that don't have standard evolutionary backstories
      • for: cultural evolution, cumulative cultural evolution, CCE, bioengineering, novel life form, culturally evolved life, bioethics, progress trap, progress trap - bioengineering, progress trap - genetic engineering
      • comment
        • cultural evolution, which itself emerges from biological evolution, is acting upon itself to create new life forms that have no evolutionary backstory
        • this is tantamount to playing God
        • progress traps often emerge out of the large speed mismatch between cultural and biological/genetic evolution.
        • Nowhere is this more profound than in bioengineering of new forms of life with no evolutionary history
        • This presents profound ethical challenges
  25. Aug 2023
    1. One of the most common examples was in the field of criminal justice, where recent revelations have shown that an algorithm used by the United States criminal justice system had falsely predicted future criminality among African-Americans at twice the rate as it predicted for white people

      holy shit....bad!!!!!

    2. The idea that AI algorithms are free from biases is wrong since the assumption that the data injected into the models are unbiased is wrong

      Computational != objective! Common idea rests on lots of assumptions

  26. Jun 2023
    1. Overview of how tech changes work moral changes. Seems to me a detailing of [[Monstertheorie 20030725114320]] diving into a specific part of it, where cultural categories are adapted to fit new tech in. #openvraag are the sources containing refs to either Monster theory by Smits or the anthropological work of Mary Douglas. Checked: it doesn't, but does cite refs by PP Verbeek and Marianne Boenink, so no wonder there's a parallel here.

      The first example mentioned points in this direction too: the 70s redefinition of death as brain death, where it used to be the heart stopping (now heart failure is a cause of death), was a redefinition of cultural concepts to assimilate tech change. The third example is a direct parallel to my [[Empathie verschuift door Infrastructuur 20080627201224]] [[Hyperconnected individuen en empathie 20100420223511]]

      Where Monster theory is a tool to understand and diagnose discussions of new tech, wherein the assimilation part (both cultural categories and tech get adapted) is the pragmatic route (where the mediation theory of PP Verbeek is located), it doesn't as such provide ways to act / intervene. Does this taxonomy provide agency?

      Or is this another way to locate where moral effects might take place, while the various types of responses to Monsters may still determine the moral effect?

      Zotero antilib Mechanisms of Techno-moral Change

      Via Stephen Downes https://www.downes.ca/post/75320

    1. As the EU heads toward significant AI regulation, Altman recently suggested such regulation might force his company to pull out of Europe. The proposed EU regulation, of course, is focused on copyright protection, privacy rights, and suggests a ban on certain uses of AI, particularly in policing — all concerns of the present day. That reality turns out to be much harder for AI proponents to confront than some speculative future

      While wrongly describing the EU regulation on AI, the author rightly points to the geopolitical reality it is creating for the AI sector. The AIR is focused on market regulation, risk mitigation wrt protection of civic rights and critical infrastructure, and monopoly-busting / a level playing field. Threatening to pull out of the EU is an admission that you don't want to be responsible for your tech at all, and it thus belies the ethical concerns voiced through proximate futurising. Also, the AIR is just one piece of that geopolitical construct, next to the GDPR, DMA, DSA, DGA, DA and ODD, which all consistently do the same things for different parts of the digital world.

    2. In 2010, Paul Dourish and Genevieve Bell wrote a book about tech innovation that described the way technologists fixate on the “proximate future” — a future that exists “just around the corner.” The authors, one a computer scientist, and the other a tech industry veteran, were examining emerging tech developments in “ubiquitous computing,” which promised that the sensors, mobile devices, and tiny computers embedded in our surroundings would lead to ease, efficiency, and general quality of life. Dourish and Bell argue that this future focus distracts us from the present while also absolving technologists of responsibility for the here and now.

      Proximate Future is a future that is 'nearly here' but never quite gets here. Ref posits this is a way to distract from issues around a tech now and thus lets technologists dodge responsibility and accountability for the now, as everyone debates the issues of a tech in the near future. It allows the technologists to set the narrative around the tech they develop. Ref: [[Divining a Digital Future by Paul Dourish Genevieve Bell]] 2010

      Cf. the suspicious call for reflection and pause wrt AI by OpenAI's people and other key players. It's a form of [[Ethics futurising dark pattern 20190529071000]]

      It may not be a fully intentional bait and switch all the time though: tech predictions, including the Gartner hype cycle, put future key events a steady 10 yrs into the future. I've noticed the same when it comes to open data readiness, and before that knowledge management (present vs desired) [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]. It simply seems the human capacity to project ourselves into the future has a horizon of about 10 yrs.

      Contrast with: adjacent possible which is how you make your path through [[Evolutionair vlak van mogelijkheden 20200826185412]]. Proximate Future skips actual adjacent possibles to hypothetical ones a bit further out.

    1. Technology is valuable and empowering, but at what direct cost? Consumers don't have available data for the actual costs of the options they're choosing in many contexts.

      What if that reprocessing costs the equivalent of three glasses of water? Is it worth it for our environment, especially when the direct costs to the "consumer" are hidden in advertising models?

      (via Brenna)

  27. May 2023
    1. the Carthusian monks decided in 2019 to limit Chartreuse production to 1.6 million bottles per year, citing the environmental impacts of production, and the monks' desire to focus on solitude and prayer.[10] The combination of fixed production and increased demand has resulted in shortages of Chartreuse across the world.

      In 2019, Carthusian monks went back to their values and decided to scale back their production of Chartreuse.

    1. must have an alignment property

      It is unclear what form the "alignment property" would take, and most importantly how such a property would be evaluated especially if there's an arbitrary divide between "dangerous" and "pre-dangerous" levels of capabilities and alignment of the "dangerous" levels cannot actually be measured.

    1. study done this past December to get a sense of how possible this is: Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers" – Catherine Gao, et al. (2022). Blinded human reviewers were given a mix of real paper abstracts and ChatGPT-generated abstracts for submission to 5 of the highest-impact medical journals.

      I think these types of tests can only result in showing human failing at them. Because the test is reduced to judging only the single artefact as a thing in itself, no context etc. That's the basic element of all cons: make you focus narrowly on something, where the facade is, and not where you would find out it's fake. Turing isn't about whether something's human, but whether we can be made to believe it is human. And humans can be made to believe a lot. Turing needs to keep you from looking behind the curtain / in the room to make the test work, even in its shape as a thought experiment. The study (judging by the sentences here) is a Turing test in the real world. Why would you not look behind the curtain? This is the equivalent of MIT's tedious trolley problem fixation and calling it ethics of technology, without ever realising that the way out of their false dilemma's is acknowledging nothing is ever a di-lemma but always a multi-lemma, there are always myriad options to go for.

  28. Apr 2023
    1. just than the State

      I think this is yet to be seen. Although it is true that the computer always gives the same output given the same input code, a biased network with oppressive ideologies could simply transform, instead of change, our current human judiciary enforcement of the law.

    1. In other words, the currently popular AI bots are ‘transparent’ intellectually and morally — they provide the “wisdom of crowds” of the humans whose data they were trained with, as well as the biases and dangers of human individuals and groups, including, among other things, a tendency to oversimplify, a tendency for groupthink, and a confirmation bias that resists novel and controversial explanations

      Not just trained with, also trained by. Is it fully transparent though? Perhaps from the trainers'/tools' standpoint, but users are likely to fall for the tool abstracting its origins away, ELIZA style, and project agency and thus morality on it.

    1. Recommended Source

      Under the "More on Philosophies of Copyright" section, I recommended adding the scholarly article by Chinese scholar Peter K. Yu that explains how the Chinese philosophy of Yin-Yang can address the contradictions in effecting or eliminating intellectual property laws. One of the contradictions is in intellectual property laws protecting individual rights while challenging sustainability efforts for future generations (as climate change destroys more natural resources).

      Yu, Peter K., Intellectual Property, Asian Philosophy and the Yin-Yang School (November 19, 2015). WIPO Journal, Vol. 7, pp. 1-15, 2015, Texas A&M University School of Law Legal Studies Research Paper No. 16-70, Available at SSRN: https://ssrn.com/abstract=2693420

      Below is a short excerpt from the article that details Chinese philosophical thought on IP and sustainability:

      "Another area of intellectual property law and policy that has made intergenerational equity questions salient concerns the debates involving intellectual property and sustainable development. Although this mode of development did not garner major international attention until after the 1992 Earth Summit in Rio de Janeiro, the Yin-Yang school of philosophy—which “offers a normative model with balance, harmony, and sustainability as ideals”—provides important insight into sustainable development."

    1. But I also don’t think that a company that creates harmful technology should be excused simply because they’re bad at it.

      Being crap at doing harm doesn't allow you to claim innocence of doing harm.

  29. Mar 2023
    1. Ganguli, Deep, Askell, Amanda, Schiefer, Nicholas, Liao, Thomas I., Lukošiūtė, Kamilė, Chen, Anna, Goldie, Anna et al. "The Capacity for Moral Self-Correction in Large Language Models." arXiv, (2023). https://arxiv.org/abs/2302.07459v2.

      Abstract

      We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing harmful outputs -- if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.

    1. A.I. Is Mastering Language. Should We Trust What It Says? by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.


      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing, papering over the problems and bringing the bad things too easily to pass?

    1. For open educators, this runs counter to the very reason we use OER in the first place. Many open educators choose OER because there are legal permissions that allow for the ethical reuse of other people’s material — material the creators have generously and freely made available through the application of open licenses to it. The thought of using work that has not been freely gifted to the commons by the creator feels wrong for many open educators and is antithetical to the generosity inherent in the OER community.
    1. “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”

      Synthetic human behavior as AI bright line

      Quote from Bender

  30. Feb 2023
    1. In my most recent field (software), the engineer is placated with the delusion that the purpose is to give the customer what ey wants, whether that solves the customer’s problem or not. This is a lazy sort of self-imposed servitude that entirely avoids the actual purpose of engineering.

      One reason why software engineering isn't "real" engineering: no ethical obligation to "the public good".

    1. https://web.archive.org/web//https://www.binnenlandsbestuur.nl/bestuur-en-organisatie/io-research/morele-vragen-rijksambtenaren-vaak-onvoldoende-opgevolgd

      #nieuw reason:: Different kinds of moral dilemmas among (central) government civil servants according to I&O. Note: only around the policy process, not digital / data matters. #openvraag is there a qualitative difference between those 2 kinds of questions, or are they different manifestations of the same questions? How do the questions relate to the 7 norms of public governance? Why isn't that the classification then? #2022/06/30

  31. Jan 2023
    1. Students working together in a group trying to make meaning out of their own data could find themselves in a similar situation. Your lack of imagination about your own data may not result in a lack of imagination by others about what they think of you. Putting students in these situations without preparing them about assumptions they might make of others could lead to embarrassment and misunderstanding. 

      This is a really interesting point about all kinds of self-disclosure in the classroom, but especially disclosing what third-parties think about you.

    1. Environmentalists say bulldozing the village to expand the Garzweiler mine would result in huge amounts of greenhouse gas emissions. The government and utility company RWE argue the coal is needed to ensure Germany's energy security. The regional and national governments, both of which include the environmentalist Green party, reached a deal with RWE last year allowing it to destroy the abandoned village in return for ending coal use by 2030, rather than 2038. Some speakers at Saturday's demonstration assailed the Greens, whose leaders argue that the deal fulfils many of the environmentalists' demands and saved five other villages from demolition. "It's very weird to see the German government, including the Green party, make deals and compromise with companies like RWE, with fossil fuel companies, when they should rather be held accountable for all the damage and destruction they have caused," Thunberg said. "My message to the German government is that they should stop what's happening here immediately, stop the destruction, and ensure climate justice for everyone."

      Assuming the facts are correct and complete here, it's surprisingly naive of Thunberg to take this view. One unknown is whether the displaced villagers were suitably compensated for being evicted. Still, taking 8 years off the deadline to end coal use - that's a pretty massive win and could set the stage for even more in the future.

    1. Excellent article on the complex nature of rape. The key point for me is that too many people think it's always a black-and-white matter. In fact, the boundary between rape and not-rape is not that crisp. There is a boundary layer here. I think that if more people realized every boundary is really a boundary layer, there would be fewer conflicts about such matters.

  32. Dec 2022
    1. https://shkspr.mobi/blog/2022/12/the-ethics-of-syndicating-comments-using-webmentions/

      Not an answer to the dilemma, though I generally take the position of keeping everything unless someone asks me to take it down or unless I know that it's been otherwise deleted. Often I choose not to delete my copy, but simply make it private and only viewable to me.

      On the deadnaming and related issues, it would be interesting to create a webmention mechanism for the h-card portions so that users might update these across networks. To some extent Automattic's Gravatar system does this in a centralized manner, but it would be interesting to see it separately. Certainly not as big an issue as deadnaming, but there's a similar problem on some platforms like Twitter where people will change their display name regularly for either holidays, or lately because they're indicating they'd rather be found on Mastodon or other websites.

      The webmention spec does contain details for both editing/deleting content and resending webmentions to edit and/or remove the original. Ideally this would be more broadly adopted and used in the future to eliminate the need for making these choices by leaving the choice up to the original publisher.
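      As a minimal sketch of the mechanism described above (with hypothetical URLs, and endpoint discovery omitted): per the W3C Webmention spec, the notification after publishing, editing, or deleting a page is the same form-encoded POST of `source` and `target` to the target's webmention endpoint, so "resending" to signal an edit or delete is just repeating the request after the source has changed.

      ```python
      from urllib.parse import urlencode
      from urllib.request import Request

      def build_webmention(source: str, target: str, endpoint: str) -> Request:
          # A webmention is a form-encoded POST of source and target to the
          # target's advertised endpoint. Re-sending the same notification
          # after editing or deleting the source page asks the receiver to
          # update or remove its stored copy (a deleted source responding
          # with HTTP 410 signals removal).
          data = urlencode({"source": source, "target": target}).encode()
          return Request(
              endpoint,
              data=data,
              headers={"Content-Type": "application/x-www-form-urlencoded"},
              method="POST",
          )
      ```

      Dispatching it is then a single `urllib.request.urlopen(req)` call; a real sender would first discover the endpoint from the target page's `Link: rel="webmention"` header or markup rather than hard-coding it.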

      Beyond this, often on platforms that don't have character limits (Reddit for example), I'll post at the bottom of my syndicated copy of content that it was originally published on my site (along with the permalink) and explicitly state that I aggregate the replies from various locations, which also helps to let people know that they might find additional context or conversation at the original post should they be interested. Doing this on Twitter, Mastodon, et al is much harder due to space requirements obviously.

      While most responses I send would fall under fair use for copying, I also have a Creative Commons license on my text in an effort to help others feel more comfortable with having copies of my content on their sites.

      Another ethical layer to this is interactions between sites which both have webmentions enabled. To some extent this creates an implicit bi-directional relationship which says, I'm aware that this sort of communication exists and approve of your parsing and displaying my responses.

      The public norms and ethics in this area will undoubtedly evolve over time, so it's also worth revisiting and re-evaluating the issue over time.

  33. Nov 2022

  34. Sep 2022
    1. Whereas consequentialists will define virtues as traits that yield good consequences and deontologists will define them as traits possessed by those who reliably fulfil their duties, virtue ethicists will resist the attempt to define virtues in terms of some other concept that is taken to be more fundamental. Rather, virtues and vices will be foundational for virtue ethical theories and other normative notions will be grounded in them.
    1. humanity as not only the source and context for technology and its use, but its ultimate yardstick for the constructive use and impact of technology. This may sound obvious (it certainly does to me), but in practice it needs to be repeated to ensure it is used as such a yardstick from the very first design stage of any new technology.

      Cf. [[Networked Agency 20160818213155]] wrt having a specific issue to address that is shared by the user group wielding a tech / tool, in their own context.

      Cf. [[Open data begint buiten 20200808162905]] wrt how the only yardstick for open data stems from its role as a policy instrument: impact achieved outside, in the aimed-for policy domains, through the increased agency of the open data users.

      Tech impact is not to be measured in eyeballs, usage, revenue etc. That's (understandably) the corporation's singular and limited view, the rest of us should not adopt it as the only possible one.

  35. inst-fs-iad-prod.inscloudgate.net
    1. The critical writing on data science has taken the paradoxical position of insisting that normative issues pervade all work with data while leaving unaddressed the issue of data scientists' ethical agency. Critics need to consider how data scientists learn to think about and handle these trade-offs, while practicing data scientists need to be more forthcoming about all of the small choices that shape their decisions and systems.

      In my opinion, data science is a huge field with a lot of unresolved issues. I believe that critics of data science must be able to understand all the analytics that data scientists use in order to have a truly valid critique. I also believe that critical writing about data science doesn't do as much justice as actually understanding the analytics would. How would analytics play a role in critical writing?

  36. Aug 2022
    1. We should all transition from thinking about logic as a field of great dead white men and as a field of “geniuses”, to recognizing those men for the flawed creatures they were, whose “genius” relied on the subjugation of many women and BIPOC around them, and ensuring that the Wikipedia, SEP, etc., pages for these logicians acknowledge that.

      This is the wrong approach, because it imposes modern norms on past times. It's illogical and superficial.

      It would be appropriate, though, to carefully review the histories of past logicians and to document more fully the roles that others played in their work, with a clinical and factual dispassion, and with the intention of being accurate and attributing progress to whoever actually did the work.

  37. Jul 2022
  38. May 2022
  39. Apr 2022
  40. Mar 2022
    1. Importantly, we had a human in the loop with a firm moral and ethical ‘don’t-go-there’ voice to intervene.

      The human-in-the-loop was a key breakpoint between the model's findings as concepts and the physical instantiation of the model's findings. As the article goes on to say, unwanted outcomes come from both taking the human out of the loop and replacing the human in the loop with someone with a different moral or ethical driver.

  41. Feb 2022
  42. inst-fs-iad-prod.inscloudgate.net
    1. The gaps between data scientists and critics are wide, but critique divorced from practice only increases them.

      In my opinion this struggle needs to play out. Data science can always be streamlined to remove the bias that everyone has. The conflict this article speaks of is just the natural progression of what has been happening already. The only way this will get better is through constant struggle, just like everything else.

    1. The paper did not seem to have consent from participants for: (a) agreeing to participate in the different types of interventions (which have the potential of hurting the health of citizens and could even lead to their death); (b) using their personal health data to publish a research paper.

      Given that the authors are government actors who can easily access millions of citizens and put them in the study groups they desire, without even telling citizens that they are in a study, I worry that this practice will be popularized by governments in Latin America and put citizens in danger.

      I also want to note that this is a new type of disinformation where government actors can post content on these repositories and, given that most citizens do NOT know it is NOT peer reviewed, it can help the government actors to validate their public policy. The research paper becomes political propaganda and the international repository helps to promote the propaganda.

  43. Jan 2022
    1. Indeed, according to the Nobel laureate Indian economist Amartya Sen, generosity too is a disguised hypocrisy, a self-interested altruism, because if you are good to people because it makes you feel better, then your compassion is self-interested and not selfless.

      Instead of admitting that selflessness is a long-term strategy of self-interest, he cynically abolishes the concept and keeps only self-interest, so that it fits neoliberalism - a circular argument that impoverishes both the word and the mind.

  44. Dec 2021
    1. people end up being told their needs are not important, and their lives have no intrinsic worth. The last, we are supposed to believe, is just the inevitable effect of inequality; and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society. Presumably it will always be with us. It's just a matter of degree.

      People being told they don't matter and don't have intrinsic worth is a hallmark of colonialism. It's also been an ethical issue in the study of anthropology for the past 150 years.

      Anthropologist Tim Ingold in Anthropology: Why It Matters touches on some of this issue of comparing one group of people with another rather than looking at and appreciating the value of each separately.

  45. Nov 2021
    1. The dopamine reward system has also been shown to be stimulated by most drugs of abuse and plays an important role in addiction [33]. An important question is whether jhana meditators are subject to addiction and tolerance effects that can result from stimulation of the dopamine reward system.

      The question of potential addiction to self-induced states that activate the dopamine (and/or other neurochemical) reward system(s) is important. From a more philosophical angle, should we welcome beneficial addictions that, if cultivated, might significantly improve individual and group quality of life? Isn't this related to our high regard for replacing detrimental with positive habits? Habit formation and maintenance also depends on activation of neural reward systems (see Nir Eyal's book, Hooked).

    1. Not that everyone really wants an apology. One former journalist told me that his ex-colleagues “don’t want to endorse the process of mistake/apology/understanding/forgiveness—they don’t want to forgive.” Instead, he said, they want “to punish and purify.” But the knowledge that whatever you say will never be enough is debilitating. “If you make an apology and you know in advance that your apology will not be accepted—that it is going to be considered a move in a psychological or cultural or political game—then the integrity of your introspection is being mocked and you feel permanently marooned in a world of unforgivingness,” one person told me. “And that is a truly unethical world.”

      How can restorative justice work in a broader sense when public apologies aren't more carefully considered by the public-at-large? If the accuser accepts an apology, shouldn't that be enough? Society-at-large can still be leery of the person and watch their behavior, but do we need to continue ostracizing them?

      An interesting example to look at is that of Monica Lewinsky who in producing a version of her story 20+ years later is finally able to get her own story and framing out. Surely there will be political adherents who will fault her, but has she finally gotten some sort of justice and reprieve from a society that utterly shunned her for far too long for an indiscretion which happens nearly every day in our society? Compare her with Hester Prynne.

      Are we moving into a realm in which everyone is a public figure on a national if not international stage? How do we as a society handle these cases? What are the third and higher order effects besides the potential for authoritarianism which Applebaum mentions?

  46. Oct 2021
    1. This is a nice introduction to some issues of concern to me. For instance, the absence of pain is good - but why is it good? The empirical reason for this is that it satisfies evolved instinct. So again, what is good tracks to what is natural. But the naturalistic fallacy undermines that. And most importantly, there is no known scientific connection between evolution and instinct on the one hand, and "good" on the other. My answer is: morality is not natural, it is an artifice of humanity. And since it's an artifice, we can make it whatever we want.

    1. UX is now "user exploitation."

      As a student in the new generation of UX designers, I've become more sensitive to these matters, but I do see signs of alternative behaviors emerging (the right to repair, for example) which could benefit hugely from our discipline's methods and learnings.

    1. The real conspiracies are hiding in plain sight.

      The big difference between the paranoiac's conspiracy theories and the real ones is that in the fake ones the conspirators are "in it together" and form a like-minded group. In reality, the billionaires would be very happy to throw each other under the bus if they could.

      So it's not so much that there are real conspiracies as there are a known set of methods and tools - known to everyone, everywhere - that allow this gross power imbalance to be created. These methods and tools are known to all but can only be used by the rich because they are themselves very costly.

  47. Sep 2021

    Tags

    Annotators

    1. should we do research that is not consciousness raising for the participants? Is such research an oppressive process that of necessity exploits the subject?

      Do researchers have any responsibility to safeguard the wellbeing of their interviewees when research into difficult topics and discussions can be emotionally challenging or retraumatizing for the participants?

    2. We were pushed to develop our analysis further by women in the study whom we asked to read the manuscript. They were hesitant about being negative, but were clearly critical.

      Justice: allowing the women to read and critique the manuscript. Result: researcher–participant design changes; the participants asked the researchers for a sociological interpretation. They were looking for insights.

    Tags

    Annotators

    1. liberty of conscience

      "Liberty of conscience" is a phrase Roger Williams uses in a religious context to denote the freedom to follow one's own religious or ethical beliefs. It is an idea that refers to conscience-based thought and individualism: each person has the right to their own conscience. It is rooted in the idea that all people are created equal and that no culture is better than another.

      This idea is strongly tied to: freedom from coercion of conscience (own thoughts and ideas), equality of rights, respect and toleration. It is a fundamental element of what has come to be the "American idea of religious liberty". Williams spoke of liberty of conscience in reference to a religious sense. This concept of individualism and free belief was later extrapolated in a general sense. He believed that government involvement ended when it came to divine beliefs.

      Citation: Eberle, Edward J. "Roger Williams on Liberty of Conscience." Roger Williams University Law Review, vol. 10, iss. 2, article 2, pp. 288-311. http://docs.rwu.edu/rwu_LR/vol10/iss2/2. Accessed 8 Sept. 2021.

  48. Aug 2021
  49. Jul 2021
    1. Well, no. I oppose capital punishment, just as (in my view) any ethical person should oppose capital punishment. Not because innocent people might be executed (though that is an entirely foreseeable consequence) but because, if we allow for capital punishment, then what makes murder wrong isn't the fact that you killed someone, it's that you killed someone without the proper paperwork. And I refuse to accept that it's morally acceptable to kill someone just because you've been given permission to do so.

      Most murders are system 1-based and spur-of-the-moment.

      System 2-based murders are even more deplorable because, in most ethical systems, they mean the person actively spent time planning how to carry the murder out. This second category includes premeditated murder and murder-for-hire, as well as all forms of capital punishment.

    1. The book does not wholly succeed, but Jonas's central idea is powerful and has not been given the attention it deserves. That idea arises from one governing insight: Under technocratic modernity, "the altered nature of human action, with the magnitude and novelty of its works and their impact on man's global future, raises moral issues for which past ethics, geared to the dealings of man with his fellow-men within narrow horizons of space and time, has left us unprepared." Although Heidegger found it necessary, in his attempt to rethink metaphysics, to go back to the insights of the pre-Socratic philosophers, Jonas does not believe that any earlier thinkers hold the key to the ethical challenge posed by technocratic modernity, because no previous society possessed powers that could extend its reach so far in both space and time. A wholly new ethics is required, and is required simply because of the scope of our technologies.

      Hans Jonas, a student of Martin Heidegger, argues in The Imperative of Responsibility, that modern technology requires a new ethical framework because no previous societies possessed the technical powers to extend their reach so far in time and space as ours currently do.

  50. Jun 2021
  51. May 2021
    1. O’Connor, D. B., Aggleton, J. P., Chakrabarti, B., Cooper, C. L., Creswell, C., Dunsmuir, S., Fiske, S. T., Gathercole, S., Gough, B., Ireland, J. L., Jones, M. V., Jowett, A., Kagan, C., Karanika‐Murray, M., Kaye, L. K., Kumari, V., Lewandowsky, S., Lightman, S., Malpass, D., … Armitage, C. J. (2020). Research priorities for the COVID‐19 pandemic and beyond: A call to action for psychological science. British Journal of Psychology. https://doi.org/10.1111/bjop.12468

  52. Apr 2021
    1. People can take the conversations with willing co-workers to Signal, Whatsapp, or even a personal Basecamp account, but it can't happen where the work happens anymore.

      Do note that two of the three systems Fried uses as examples are private. In other words, only the people you explicitly want to see what you're writing will see it.

      This goes against his previous actions somewhat, e.g. https://twitter.com/jasonfried/status/1168986962704982016

    2. Sensitivities are at 11, and every discussion remotely related to politics, advocacy, or society at large quickly spins away from pleasant. You shouldn't have to wonder if staying out of it means you're complicit, or wading into it means you're a target.

      This is something that even pre-Socratic philosophers discussed. Not saying something is also saying something.

      Most of what is done by and in a capitalist company is supported by a certain rationale: to make as much money as possible for your shareholders.

      If you care about more than making money, you speak out against injustices, whether those injustices are logical, moral, ethical, or a mixture.

      The phrase "it's become too much" is a bit vague coming from Fried, who has written books that advocate speaking out, e.g. It Doesn't Have To Be Crazy At Work.

  53. Mar 2021
    1. the community is both endlessly creative and genuinely interested in solving big issues in meaningful ways. Whether it's their commitment to careful (and caring) community stewardship or their particular strain of techno-ethics, I have been consistently (and pleasantly) surprised at what I've seen during the last twelve months. I don't always see eye-to-eye with their decisions and I don't think that the community is perfect, but it's consistently (and deliberately) striving to be better, and that's a fairly rare thing, online or off.
    1. But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

      If the company can't regulate itself using some sort of moral compass, it's imperative that government or other outside regulators do so.

  54. Feb 2021
    1. What is the relationship between design, power, and social justice? “Design justice” is an approach to design that is led by marginalized communities and that aims explicitly to challenge, rather than reproduce, structural inequalities. It has emerged from a growing community of designers in various fields who work closely with social movements and community-based organizations around the world.

      Everything that is not disciplined and structured by the laws of natural science is bound up with human creativity and human needs. From the layout of cities to the organization of society, we are dealing with design activities. The relationship between that design and the behavior of the users for whom it is intended is fairly complex. Or as Churchill once put it (1943):

      “We shape our buildings, thereafter they shape us.”

      Not much later (1967), a similar statement was (incorrectly) attributed to McLuhan:

      "We shape our tools, and thereafter our tools shape us."

      The person who actually made this statement, John Culkin, illustrated it with the advent of the automobile:

      Once we have created a car, for example, our society evolves to make the car normal, and our behavior adapts to accommodate this new normal.

      The reciprocal influence (performativity) of everything humans create (ranging from buildings and devices to "smart cities" and algorithms) is important for understanding that a design is more than a feature that promotes use. Design features evidently have a reciprocal effect on human behavior. They do not merely prompt the behavior that is intended and triggered by the design's affordances: the unique relationship between the characteristics of a "thing" and its user, which influences how that thing is used. It is a relationship that goes beyond a one-way perception-action coupling.

      With regard to social media, for example, we can speak of "transactional media effects":

      "... outcomes of media use also influence media use. Transactional media-effects models consider media use and media effects as parts of a reciprocal over-time influence process, in which the media effect is also the cause of its change (Früh & Schönbach, 1982)."

      The fact that designers often have only the positive experience of users in mind is, according to Danah Abdulla, not constructive:

      "...optimism in design is not always constructive. In fact, it hinders the politicization of designers. If design is going to contribute to tools that can change the world positively, it must begin to embrace pessimism."

    1. Most business writing lacks a meaningful engagement with the question of whether the strategies, tactics, and trends on offer are good, in a larger and longer term sense. It is negligent not to address these questions.

      Very few businesses consider the long-term effects of their work...