185 Matching Annotations
  1. Jul 2022
    1. While we currently do not see a need for legislation at this stage, we cannot rule out that legislation may be required as part of making sure our regulators are able to implement the framework.

      The UK is not currently planning on introducing specific AI regulation legislation.

    2. we will acknowledge that AI is a dynamic, general purpose technology and that the risks arising from it depend principally on the context of its application.

      Suggests a focus more on object-level accident and misuse risks, rather than structural risks or misalignment risks that are less dependent on context (e.g. power seeking behaviour).

    1. Instead of giving responsibility for AI governance to a central regulatory body, as the EU is doing through its AI Act, the government’s proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings.

      The UK is taking a sectoral approach to AI regulation

    1. Nuclear energy, he argued, is inherently centralising because of the way a nuclear power station has to be designed, built, and maintained, and the way its power is distributed. Solar power by contrast is decentralising. It’s easier, if expensive, to install solar power at a small scale, and solar reduces people’s reliance on a centralised grid. Solar power is therefore a more democratic technology.

      I don't necessarily agree that decentralising is the same thing as being more democratic/enabling more democracy. Perhaps the centralised nature of nuclear technology means it requires contestation and cooperation over its use, encouraging people to take more interest in democratic processes to govern it. Whereas decentralised solar allows people to opt out when they disagree, and thus disengage from democratic processes with fellow citizens and become more atomised and individualised.

  2. Jan 2022
    1. Finally, we couldn’t let people borrow against, sell, or otherwise pledge their future Fund distributions, or we won’t really solve the problem of fairly distributing wealth over time. The government can simply make such transactions unenforceable.

      Ah okay, this does address some of the concerns above about reconcentration of wealth.

    2. In a world where everyone benefits from capitalism as an owner

      In the scheme above, it is a world where Americans benefit from American capitalism via American firms; not universal global benefits, and presumably there would still be significant inequality between countries. It would be interesting to understand how this would operate on multinational firms such as Amazon, in a world where multiple countries adopt these kinds of schemes (and without a global coordinating government).

    3. A tax payable in company shares will align incentives between companies, investors, and citizens, whereas a tax on profits does not–incentives are superpowers, and this is a critical difference.

      Also agree with this, although it would be interesting to see whether, if shares are tradable, they end up reconcentrated in the hands of a few institutions. Investigate what happened after the Thatcher-era privatisations?

    4. We should therefore focus on taxing capital rather than labor, and we should use these taxes as an opportunity to directly distribute ownership and wealth to citizens

      Definitely agree with this part.

    5. If robots can build a house on land you already own from natural resources mined and refined onsite, using solar power, the cost of building that house is close to the cost to rent the robots.

      Feels like the important assumption here is "land you already own", which presumably is not the case for many people, especially younger and poorer people in high-income countries. This could just exacerbate existing wealth inequality (although with more of the wealth concentrated in land ownership; a return to a more feudal era...)

  3. Dec 2021
    1. Unfortunately, these measures are more like cleaning sewage out of rivers than stopping water companies spewing it in in the first place.

      Perhaps an alternative analogy is that it's trying to get water companies to build more sewage treatment plants and hold them accountable for treating that sewage; rather than trying to prevent sewage in the first place.

      Some sewage is avoidable, e.g. more efficient cleaning and irrigation systems, but ultimately a lot of it is an unavoidable byproduct of human life, e.g. eating, washing ourselves and our clothes etc. and the remedy might be worse than the disease if we target the upstream production rather than trying to efficiently filter the byproducts.

  4. Aug 2021
    1. The Multimedia Principle states that people learn better from words and pictures rather than words or pictures alone, as people can process information through both a visual channel and auditory channel simultaneously.

      Something to keep in mind when drafting reports. Always think about how information can be represented visually as well as textually.

  5. Jul 2021
    1. The price system can accomplish so much with so little precisely because economic actors do not need to reach for a manual or consult their therapist to know what to do when prices change

      Not entirely sure what he means here or elsewhere in this paragraph. Prices work because people know what to expect? Is the argument here that if some significant proportion of the population stop respecting prices or firms react in a non-profit maximising way, then prices are no longer a useful signal about value?

    2. A more fitting title for Hayek’s famous essay would be ‘The Non-Use of Knowledge in Society’, for he insists that the price system works so well precisely because economic actors do not need to know much about the world to act effectively in it.18 Prices do not convey knowledge, at least not from one end of the market to the other. Nor do they have to: as long as one economic actor discovers a set of facts that changes their evaluation of a commodity, the effects of that revaluation propagate throughout the system—driving the commodity’s price up or down—without anyone else needing to know what the new facts actually are. If the price system conveys anything, it’s the current positions—many of them based on erroneous perceptions of the present and the future—of all economic actors with regard to one another

      In Hayek's view, the role of prices in markets is to aggregate and convey information about evaluations of the value of a particular good or service by others (in the same market), whether or not those evaluations depend upon solid facts or wild guessing.

      Are prices in this way a kind of 'privacy-preserving' method of aggregating information about the value of things? They allow individuals to convey information about the value of something to others, without needing to share any information about how that valuation was determined, or even to know who the others interested in the good or service are.

    3. an entire field, known as ‘information economics’, which studies how various asymmetries of information—that between sellers and buyers of used cars being the most famous example—undermine market efficiency. Once those asymmetries are resolved, through public policy or private contracting, the existing inefficiencies should fade away, bringing competition closer to its ‘perfect’ equilibrium condition.

      Is privacy (or perhaps, a lack of transparency of relevant information; what is the difference between a lack of privacy and transparency?) identified as a cause of economic inefficiency in information economics?

    4. In what follows, I will revisit—and, I hope, revitalize—the Socialist Calculation Debate, exploring some of the ways in which the participants conceived the relations between knowledge, price and social coordination, and how their referents may have changed in the age of big data. I will go on to suggest ways in which the development of digital ‘feedback infrastructure’ offers opportunities for the left to propose better processes of discovery, better solutions for the hyper-complexity of social organization in fast-changing environments, and better matches of production and consumption than Hayek’s solution—market competition and the price system—could provide.

      Summary of the thesis of the article

  6. astralcodexten.substack.com astralcodexten.substack.com
    1. the job of a developing country government is to try to get everyone to ignore profits in favor of the industrial learning process. "Ignore profits" doesn't actually mean the companies shouldn't be profitable. All else being equal, higher profits are a sign that the company is learning its industry better. But it means that there are many short-term profit opportunities that shouldn't be taken because nobody will learn anything from them. And lots of things that will spend decades unprofitable should be done anyway, for educational value.

      Protectionism unfairly favouring domestic industries isn't necessarily just corruption and/or poor economic policy which makes consumers face unnecessarily high prices. It can also be a method of allowing companies to learn, develop, acquire technological expertise etc. which will allow them to compete internationally in future.

      Like sending a child to school first: a controlled environment in which they can learn, fail with minimal consequences, and do poor work initially, in order to eventually enter the labour market.

  7. Mar 2021
    1. the ontology implicit in the system’s design may not correspond perfectly to its target. This is evident in how the system models users. In the system’s perspective, each user entry is a separate individual, however this may often not be the case in the real world. Several users may share a single account (think, for example, members of a family sharing a Netflix account), or a single user may have more than one account, using each in different circumstances. In both cases, a single user entry in the recommender system’s perspective does not map to a physical individual. This sort of fragmentation and mismatch between the RS’s implied ontology and the target system causes issues for the ethical evaluation of RSs. Without a reliable way to identify the actual stakeholders in a recommendation and match them with what is represented by the system, it is difficult to give a reliable evaluation of the impacts of a RS.

      Online user identities do not necessarily form a one-to-one correspondence with physical human beings. A single user account may represent multiple individuals, or a single individual may have multiple accounts even on a single service (perhaps to intentionally compartmentalise their activities, or simply because they have forgotten or lost access to a previous account).

    2. System. This captures the interests of the platform on which the recommendations are generated.

      The operator of a market, whose interests may not be aligned with either buyers or sellers.

    1. a majority of the most commercially successful recommender systems are based on hybrid or collaborative filtering techniques, and work by constructing models of their users to generate personalised recommendations.

      What are hybrid or collaborative filtering techniques?
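
      Briefly: content-based filtering recommends items whose features resemble things a user already liked; collaborative filtering recommends items that users with similar histories liked; hybrid systems combine the two. A minimal user-based collaborative filtering sketch (invented ratings, purely illustrative):

      ```python
      # Minimal user-based collaborative filtering: predict a user's rating of an
      # item from similar users' ratings. Data and names are invented.
      import numpy as np

      # Rows = users, columns = items; entries are ratings (0 = unrated).
      ratings = np.array([
          [5, 4, 0, 1],
          [4, 5, 1, 0],
          [1, 0, 5, 4],
      ], dtype=float)

      def cosine_sim(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

      def predict(user, item):
          # Weight other users' ratings of `item` by their similarity to `user`.
          raters = [u for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
          if not raters:
              return 0.0
          sims = np.array([cosine_sim(ratings[user], ratings[u]) for u in raters])
          vals = np.array([ratings[u, item] for u in raters])
          return float(sims @ vals / sims.sum())

      print(predict(user=0, item=2))  # user 0's predicted rating for item 2
      ```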

    2. users’ choice of parameters can reveal sensitive information about the users themselves. For example, adding a filter to exclude some kind of content gives away the information that the user may find this content distressing, irrelevant, or in other ways unacceptable.

      The ways in which you choose to filter a recommendation system may themselves lead to sharing information about your preferences that you may not want to disclose (but find yourself forced to in order to avoid content you don't wish to see). For example, your sexuality, or trauma relating to miscarriage, the death of a family member, sexual assault or similar.

    3. we shall not consider systems that approach the recommendation problem using different techniques, such as, for instance, knowledge-based systems.

      This paper focuses on algorithmic/machine-learning based recommender systems, not knowledge-based systems, i.e. focuses on situations where the recommendations are not explicitly programmed.

    4. recommender systems to be a class of algorithms that address the recommendation problem using a content-based or collaborative filtering approach, or a combination thereof.

      One definition of recommender systems

  8. Feb 2021
    1. Practically, this also suggests digital health credentialing solutions should offer various means for users to present their credentials, such as through a print-out, QR code, or SMS text. Solutions should also meet accessibility standards and be offered in multiple languages. 

      Multiple forms of inclusion, including digital vs non-digital and across languages.

    1. The purpose of use for vaccination proofs supported by these guidelines is following the conclusions of the European Council3. Vaccination certificates are to be used primarily as a standardised and interoperable form of proof of vaccination for medical purposes. Other purposes for which proofs of vaccination could be used, may be decided by Member States, with the reservation to ongoing scientific, ethical, legal, and societal discussions.

      eHealth Network reserving judgement on the use of vaccination certificates beyond 'medical purposes', e.g. someone needing to get multiple vaccine doses in different countries, making a new healthcare provider aware of their vaccine status, or reporting side effects.

  9. Jan 2021
    1. One of the most infuriating parts of Weyl's essay is where he talks about how technocracy is bad because it can incorporate subtly racist assumptions into its equations - as if asking random people to make subjective decisions is safer from that failure mode!

      Racist assumptions exist in both formalised and informal decision-making processes. You can argue that formal mechanisms magnify the scope and scale of bias, but their legibility and centrality of decision-making can also make it easier to identify and remove that bias if so motivated.

    1. Participation in making each decision, therefore, generally needs to be limited to those involved in and affected by each decision being made, with only decisions that concern everybody being brought to society as a whole.

      The nature of externalities and ripple effects, especially in the longer-term, surely means that everyone's decisions affect everyone else all the time. If nothing else, the logic of markets recognises this action at a distance in the price mechanism.

    2. They will weigh their need for energy to heat their homes and power their workplaces against values of ecological sustainability and intergenerational justice.

      If this system excludes the voices of future generations in making those plans, and they can only rely on the moral value of intergenerational justice of current people, is it really a democratic planning procedure? What do we do about the marginalised groups who are unable to represent themselves in these decision making processes?

    3. But to distill all such relevant indicators to one unit of account suggests a degree of commensurability between goals that is exactly what socialists would want to overcome.

      Is that really what socialists want to overcome? That seems like a metaethical question about the comparability or ability to weigh different values against each other. If you're a utilitarian socialist, it seems pretty straightforward that you do want commensurability.

    4. Given these constraints, the most advanced computer on the planet still could not determine the correct production plan because the different choices are rooted in competing values and visions of the good—in other words, they are political choices. 

      The answer to this seems to be inverse reinforcement learning? Rather than try to boil down all values into one single parameter, or even generate a complex set of rules that try to account for all these factors, instead design a system which optimises based on human feedback about its decisions and their outcomes.

      That way it can intuitively learn the preferences each and every human has about the allocation of resources, and weigh those against each other, and therefore implicitly take into account people's moral inclinations towards justice, dignity, sustainability etc.
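
      A toy sketch of the kind of preference learning gestured at here: fitting a scoring function to pairwise human judgements (Bradley-Terry style) rather than hand-coding a single objective. The features, feedback and weights below are all invented for illustration:

      ```python
      # Toy preference learning: fit a linear "value" score from pairwise human
      # choices between allocation options (Bradley-Terry style). Illustrative only.
      import numpy as np

      def score(w, x):
          return w @ x

      # Invented "human" feedback: in each pair, option a was preferred to option b.
      # Features might stand for e.g. (output, emissions, inequality).
      comparisons = [
          (np.array([1.0, 0.2, 0.1]), np.array([1.2, 0.9, 0.5])),
          (np.array([0.8, 0.1, 0.2]), np.array([1.0, 0.7, 0.6])),
          (np.array([0.9, 0.3, 0.1]), np.array([0.9, 0.3, 0.8])),
      ]

      w = np.zeros(3)
      lr = 0.5
      for _ in range(200):
          for a, b in comparisons:
              # Probability the model assigns to the human's actual choice.
              p = 1.0 / (1.0 + np.exp(score(w, b) - score(w, a)))
              # Gradient ascent on the log-likelihood of the observed preference.
              w += lr * (1 - p) * (a - b)

      print(w)  # learned weights implicitly encode the trade-offs people expressed
      ```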

    5. It is possible that one could formalize all of this knowledge into explicit rules that a computer could execute. However, the difficulties involved in articulating such rules across all workplaces, in all sectors, are simply staggering.

      This is the appeal of machine learning though right? That you don't need to articulate and formalise explicit rules across a vast range of factors, and instead let the system learn those parameters and rules for itself.

      Machine learning isn't magic, but reinforcement learning systems seem to be strong evidence that machine learning systems can learn directly from feedback in their environment, rather than have any rules or strategies hardcoded into them. So this critique seems based on a view of algorithms that isn't quite reflective of modern systems?

    6. A capitalist economy is organized through the interaction of prices and markets. A socialist economy, by contrast, would be “consciously regulated… in accordance with a settled plan,”

      I don't think this is quite true. Markets and planning (centralised or decentralised) are a kind of technology in the form of institutions and mechanisms. Whereas capitalism versus socialism, at least under some understandings, is more like whether property should be primarily private or collective and more generally about distribution and deservingness of wealth.

  10. Dec 2020
    1. Constitute a governance body that can learn, adapt, and be a repository for institutional knowledge

      Ethics Committees of all sorts can be a focal point for the development and retention of knowledge about past successes, failures, and good practice, rather than ethical decisions needing to be made ad hoc without a clear process for appeals, deliberation, etc.

    2. Ethics committees should be considered as a component of building a data and AI ethics ecosystem within and between organizations.

      AI Ethics Committees are a part of a wider system of 'ethics', accountability, scrutiny etc. of research and products. They should be thought of in terms of how they fit into that context and what purpose/objectives they fulfill within that system.

      • Do the ethics committees serve a purpose not already filled elsewhere?
      • Could their intended objective be achieved better by alternative means?
      • Do they have a net marginal benefit even if they are overlapping with other interventions?
    3. A committee-based oversight model has been used effectively in several contexts

      Interestingly, they don't mention Institutional Biosafety Committees, which Sara Jordan identifies as the best source of inspiration for an AI Ethics Committee in her work on the topic. Perhaps because IBCs focus more on risks and broader harms to those beyond the research subjects and to wider society, and would place AI in the category of risky subjects rather than under the quite general remit of IRBs.

    1. by taking control of the ethics review, a government intent on seeing biomedical research as an economic driver18,21 will be in a good position to ensure that such committees do not raise difficult ethical barriers to such research.

      Do governments and companies have perverse incentives to establish ethics committees as a PR exercise to try to build trust for projects primarily motivated by profits/economic growth/political gain? Failure of independent corporate ethics boards, e.g. Google, seem to point towards yes?

    2. the Nuremberg Code and the subsequent Declarations of Helsinki were attempts by the judges and the medical profession to develop a framework to protect research subjects against the untoward actions of states.

      What are the equivalents in AI research? The Asilomar principles and the proliferation of principles that has followed, e.g. from the OECD?

    3. Committees too are expected to understand a raft of new regulatory legislation that provides enhanced legal protection for individual research subjects—for example the Data Protection Act 1998, the Human Tissue Act 2004, as well as the Clinical Trials Regulations 2004. The framing of this legislation means that legal duties are more likely to be placed on organisations, NHS trusts or universities rather than individual doctors or the profession. Such legislation has the effect of requiring organisation to produce policy based on “legal” rules to replace the “old” ethical norms of a profession.22,23 It is no longer easy to draw a clear dividing line between the responsibilities of institutions and their associated but independent research ethics committees.

      Do legal duties placed on organisations such as companies and universities make many of the questions an 'ethics' committee might pose more clear-cut legal issues? Would it be better to separate out 'compliance' with legal duties from the 'ethics' of the project and have separate processes for both?

    4. with over 100 local committees in the UK and no centralised body with a responsibility for coordination, inconsistency in decision making was inevitable. The latter becomes a considerable problem when an increasing number of studies involve national or international collaboration.

      Any set of AIRECs will probably need a coordinating body from the start. Possibly at both national and international levels, given how international much AI research is, both in academia and industry.

    5. A further strength was that the committee could operate efficiently by issuing guidance about the types of study that it was essential for the committee to review.

      No reason this benefit couldn't be replicated in a centralised or decentralised model, regulated or not

    6. Investigators could sound the committee out in advance and hence incorporate ethical requirement in their research design.

      This seems like an advantage AIRECs would want to replicate. Rejection rate doesn't seem like a good target/metric for a committee. If no projects are rejected because unethical projects get modified or not brought forward at all, that seems like the mechanism is successful in achieving its aims, though indirectly as a deterrent.

    7. Local committees had knowledge of how best to promote research ethics in a local research community.

      Presented as an advantage of self-regulation but not obviously contingent as a benefit on self-regulation. Seems more like a benefit of decentralisation; and it does seem broadly true that a more local committee, especially in a niche field, might have a better understanding of how to assess research and work with researchers to ensure ethical standards are met.

    8. The era of self regulation ended in May 2004. Now any research ethics committee considering clinical trials which fall under the European Union clinical trials directive must be constituted and operate under directive rules.

      Has this changed in the post-Brexit era? What are the broader implications of Brexit for the governance of research ethics committees in the UK? Are they going to be constrained by the EU trade deal and further participation in Horizon?

    9. the Department of Health took little interest in the activities of such committees until 1991, when limited guidance was issued about their constitution and operation.

      The beginning of oversight of research ethics committees by a government body in the UK?

    10. the 1975 World Medical Association’s amendment to the Declaration of Helsinki which advocated the establishment of research ethics committees.

      The pivotal moment in the creation of research ethics committees globally?

    11. In 1964, the Royal College of Physicians published a statement recommending that all human research subjects should undergo ethical review.

      Is this the beginning of ethics review committees in the UK? Or the beginning of them in medical practice in the UK? Where does the idea for ethical review come from in this report? (Probably the Nuremberg Code)

    1. The purpose of this forum is to: agree UK-wide policy on research governance and ethics review; act as the United Kingdom Ethics Committee Authority (UKECA)

      The Health Research Authority's Four Nations Policy Leads Group.

      Highlights a seeming split between health and non-health research ethics systems in the UK. Also flagged elsewhere in the notable discussion of NHS and non-NHS Research Ethics Committees.

    1. Recommendation: A coalition of stakeholders should create a task force to research options for conducting and funding third party auditing of AI systems.

      Link to the Ada work on auditing and algorithm inspection

    2. licensing system could be implemented in which auditors undergo a standard training process in order to become a licensed AI system auditor. However, given the variety of methods and applications in the field of AI, it is not obvious whether auditor licensing is a feasible option for the industry: perhaps a narrower form of licensing would be helpful (e.g., a subset of AI such as adversarial machine learning).

      Generalised and specialist auditing qualifications? How to ensure that auditors keep up to speed with the latest technology and don't get captured by the large tech companies which currently monopolise the technical talent that might be required for auditing?

    3. Auditing could take at least four quite different forms, and likely further variations are possible: auditing by an independent body with government-backed policing and sanctioning power; auditing that occurs entirely within the context of a government, though with multiple agencies involved[37]; auditing by a private expert organization or some ensemble of such organizations; and internal auditing followed by public disclosure of (some subset of) the results.

      The UK approach to Online Harms appears to rely primarily on approach 4), with recourse to approach 1) if there is sufficient evidence to suspect harm has been occurring and not been proportionately addressed.

    4. Policies (whether governmental or organizational) that help ensure safe channels for expressing concerns are thus key foundations for verifying claims about AI development being conducted responsibly.

      What protections do whistleblowers have in the UK? Who do UK-based tech workers blow the whistle to? Should OFCOM, for example, have a whistleblowing reporting function under its Online Harms remit?

    1. the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.

      UK avoiding participating in a ban on lethal autonomous weapons by holding a definition of autonomy not shared by other countries. House of Lords committee questioning that decision.

    1. The CDEI should establish and publish national standards for the ethical development and deployment of AI. National standards will provide an ingrained approach to ethical AI, and ensure consistency and clarity on the practical standards expected for the companies developing AI, the businesses applying AI, and the consumers using AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses.

      Focus on moving from ethics to practical guidance for the development and deployment of AI systems.

      Separating out ethics in development from ethics in deployment. Seems implicitly to focus on technical solutions/accuracy concerns in development, by focusing on prejudice and bias rather than on the development of technology which could be used in a dangerous manner.

    1. Robust safeguards will be included in the online harms legislation to govern when the regulator can require the use of automated technology. The regulator will only be able to require the use of tools that are highly accurate in identifying only illegal content, minimising the inadvertent flagging of legal content (‘false positives’) for human review. The regulator will advise the government on the accuracy of tools and make operational decisions regarding whether or not a specific company should be required to use them. However, before the regulator can use the power it will need to seek approval from Ministers on the basis that sufficiently accurate tools exist.

      Does OFCOM have the technical capability to do this assessment? If not, what resources and plans are there to skill-up the regulator?

    2. Category 1 services will be determined through a three-step process. First, the primary legislation will set out high level factors which lead to significant risk of harm occurring to adults through legal but harmful content.

      Based on number of users and functionality.

    3. The regulator will set out the steps that companies should take to address the risk posed by their services, and ultimately will have the power to assess whether the steps taken are sufficient to fulfil the company’s regulatory requirements.

      How will OFCOM assess whether the steps are sufficient? Auditing powers?

    4. All companies in scope will have a specific legal duty to have effective and accessible reporting and redress mechanisms. This will cover harmful content and activity, infringement of rights (such as over-takedown), or broader concerns about a company’s compliance with its regulatory duties.

      Ada thoughts on redress for algorithmic decision making?

    5. A safety by design approach can apply from the conception stage of a new business onwards. User safety must be considered when designing the functionality of an online product or service, but also applies to setting in place an organisation’s objectives and culture to fully support a safety by design approach.

      Very individualistic framing of safety by design, focusing on user safety rather than the safety of workers and non-users, or the systemic safety of e.g. information ecosystems.

    6. Deliver a new £2.6m project to prototype how better use of data around online harms can lead to improved Artificial Intelligence systems, and deliver better outcomes for citizens

      Who is going to deliver this? OFCOM, CDEI, Both, outsourced (to Faculty AI most likely)?

    1. given only the ability to query a pre-trained language model, it is possible to extract specific pieces of training data that the model has memorized. As such, training data extraction attacks are realistic threats on state-of-the-art large language models.

      Machine learning models can leak training data that they have memorised through training simply by responding to queries tailored to explicitly extract that information (and thus via the minimum viable public interface for a model)
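
      A rough sketch of the query-only attack pattern described here (not the paper's exact pipeline; the model name and prompt are placeholders): sample many continuations and treat unusually low-perplexity generations as candidate memorised text.

      ```python
      # Sketch of query-only training-data extraction: sample from a public LM and
      # rank generations by perplexity, on the assumption that memorised sequences
      # tend to be reproduced with unusually high confidence. Illustrative only;
      # the model name and prompt are placeholders, not the paper's setup.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      prompt = "Contact details: "  # hypothetical prefix that might precede memorised text
      inputs = tok(prompt, return_tensors="pt")

      candidates = []
      with torch.no_grad():
          for _ in range(20):
              out = model.generate(**inputs, do_sample=True, top_k=40,
                                   max_new_tokens=64, pad_token_id=tok.eos_token_id)
              text = tok.decode(out[0], skip_special_tokens=True)
              # Perplexity of the generated sequence under the same model.
              ids = tok(text, return_tensors="pt").input_ids
              loss = model(ids, labels=ids).loss
              candidates.append((float(torch.exp(loss)), text))

      # The lowest-perplexity samples are the most likely memorisation candidates.
      for ppl, text in sorted(candidates)[:3]:
          print(f"{ppl:.1f}  {text!r}")
      ```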

    1. In ethics we use the term “pro tanto”, meaning “to that extent”, to refer to things that have some bearing on what we ought to do but that can be outweighed.

      Pro tanto - a factor that is relevant to decision-making, but not in and of itself overwhelming of any other considerations.

    2. it would be a mistake to read an article about a harm caused by an AI system and conclude that we shouldn’t be using that AI system. Similarly, it would be a mistake to read an article about a benefit caused by an AI system and conclude that it’s fine for us to use that system. Drawing conclusions about what we have all things considered reasons to do from pro tanto arguments discourages us from carrying out work that is essential to AI ethics. It discourages us from exploring alternative ways of deploying systems, evaluating the benefits of those systems, or assessing the harms of the existing institutions and systems that they could replace.

      Just because an AI system causes harm, that doesn't mean we shouldn't use it if other considerations outweigh that harm. And just because we can address that harm, it doesn't mean we should if all the alternatives to that harm cause greater harms themselves. The converse is also true with benefits.

    1. Recommendation 36: Transparency reporting should also include information about the human resources behind the content moderation decisions, including what training they have had and what support they are offered.

      A recognition of the invisible work that goes into content moderation.

    2. Recommendation 35: Transparency reports should include, where appropriate, information on the use of algorithms and automated processes in content moderation. However, given the commercial sensitivities and safety implications associated with publishing certain information, further discussion on this topic is needed.

      Can Ada contribute useful insight on safe algorithmic disclosure in relation to content moderation?

    3. Recommendation 13: The regulator should be equipped with other information gathering and investigation powers so it can understand whether companies are fulfilling the duty of care and hold them to account.

      Link to algorithmic auditing work by Ada?

    1. If short HLMI timelines (less than 10‐15 years) are expected, the lengthy period to negotiate and create such a body would be a critical weakness. If longer timelines are expected, there should be sufficient time to develop a centralised institution.

      Whether centralisation of global governance of high level machine intelligence is a good idea may partly hinge on how far away you think HLMI is.

    2. A poorly executed attempt at centralisation could lock‐in a fate worse than fragmentation.

      Centralisation seems like it has high upsides but also probably a bigger downside risk as well, in terms of path dependence and lock-in of underperforming institutions.

    3. barriers may not limit all non‐state actors from engaging in multiple fora. Indeed, those with sufficient resources may be able to pursue strategies to their advantage

      A fragmented global AI governance regime will make it difficult for academics and smaller companies to take part in all the diverse groups and negotiations it involves. Thus, only large tech companies, governments and extremely well resourced lobby groups or elite universities will have the capacity to fully engage across all the domains. This risks a decentralised governance regime actually centralising power in the hands of fewer, more elite, actors.

    4. It seems unlikely that powerful vested economic and military interests in AI will be steered by a plethora of small bodies better than a single, well‐resourced and empowered institution.

      Due to market concentration in the industries that are at the forefront of developing AI technologies, i.e. search, social media, cloud computing etc., and the international nature of most of these tech companies, these monopolies will need a monopolist regulator in order to prevent races to the bottom in regulation and compliance, along with the ability to make credible threats to these companies' interests.

    1. qualitative research on understanding responsible AI has concentrated on interviews with AI practitioners working across a variety of companies and industries; yet AI development and research is often a team endeavour. While the research above has been illustrative for understanding how practitioners conceptualise the scope of their jobs, these studies remain compartmentalised and insulated from practices that happen in day-to-day research. To better understand how ongoing collaborations and contestation yields responsible innovation, other forms of qualitative research–participant observation, document studies–can complement interviews to shed more light on how the community is coming to terms with the normative implications of their work

      When thinking about auditing AI, perhaps we need to not just interview individual developers to understand their motivations and goals in creating AI systems, but also undertake ethnographic observation of how the development team functions as a whole.

      This would probably be very resource intensive, but may still be very valuable for highly impactful systems that have significant social effects, e.g. Google Search or YouTube recommendations. Further, these socially significant systems are also likely to be the most distributed in their development, and so the least amenable to understanding through one-to-one interviews.

    1. While epistemic uncertainty describes limitations in confidence about existing knowledge [11], aleatory uncertainty describes uncertainty “due to the fundamental indeterminacy or randomness in the world”

      Epistemic uncertainty is uncertainty we can reduce by gaining more information about the world (although new information is not guaranteed to reduce uncertainty) whereas aleatory uncertainty is uncertainty that cannot be reduced by additional information.
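
      One common way this distinction is operationalised in ML (a standard convention, not something from this paper) is with a model ensemble: disagreement between members' mean predictions stands in for epistemic uncertainty, while the average predicted noise stands in for aleatory uncertainty.

      ```python
      # One way to make the split concrete with an ensemble: spread between members'
      # mean predictions ~ epistemic (shrinks with more data / better models),
      # average predicted noise ~ aleatory (irreducible). Numbers are invented.
      import numpy as np

      # Each ensemble member predicts a mean and a variance for the same input.
      member_means = np.array([2.9, 3.1, 3.4, 2.7])  # disagreement between models
      member_vars = np.array([0.8, 0.9, 0.7, 0.8])   # each model's estimated noise

      epistemic = member_means.var()   # variance of the means across members
      aleatory = member_vars.mean()    # average predicted observation noise
      total = epistemic + aleatory     # law-of-total-variance style decomposition

      print(epistemic, aleatory, total)
      ```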

    1. even a seemingly ethically neutral problem where one might expect total freedom of choice, such as choosing a project’s programming language, generates ethical considerations. These may include considerations of the suitability of the language to the task, your personal efficiency in the language, the inherent computational burden of the language, and wider community accessibility.

      Hadn't thought about the environmental, sustainability and accessibility impacts of language choice. Presumably extends to whether you choose TensorFlow or PyTorch...

    2. we encounter problems where all realistic solutions have considerable trade-offs and there are fewer precedents to follow. As a result, discussions surrounding these kinds of problems often end inconclusively.

      Emphasises the importance of having clear case studies and perhaps an archive/case law of ethical decisions taken in research, with follow-up on the consequences of those decisions, and so what precedent arises from those.

    1. an algorithm that uses racial data is more vulnerable to racial bias than an algorithm with anonymized data that does not include race or other proxies to race

      This might be true, but it seems very difficult to determine whether some factors will be a proxy for race. Names and geography seem like easy proxies to identify, but what about consumption habits, writing style, idiosyncratic interests etc.?

    1. How humans incorporate algorithmic output can also contribute to the failure of facial recognition systems. This institutional shift comes from the fact that the same system may be utilized in different ways by different companies and agencies.

      Two different actors may choose to use the same output of an AI system in different ways, leading to different outcomes even when the technology is the same. Points to the importance of considering the wider techno-social context in which the tool is used, not just the accuracy of the tool itself.

    2. The context in which accuracy is tested is often vastly different from the context in which the actual program is applied. FRT vendors may train their systems with clear, well-lit images, but when deployed in law-enforcement applications, for example, officers might use FRT on live footage from body cameras of far lower quality. Computer science research has established that this “domain shift” can significantly degrade model performance.

      Domain shift - training and testing AI on datasets that don't reflect the kind of data the model will encounter when deployed, leading to much lower performance in practical application than in testing. May lead to misleading claims about accuracy in e.g. facial recognition.
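
      A toy numerical illustration of the effect (synthetic data, nothing to do with any real FRT system): a classifier that looks accurate on clean held-out data degrades sharply when the test inputs are degraded to mimic deployment conditions.

      ```python
      # Toy domain shift: a classifier fit on clean data is evaluated on clean
      # held-out data and on a degraded version standing in for low-quality
      # deployment inputs. Entirely synthetic data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      n, d = 2000, 20
      X = rng.normal(size=(n, d))
      w_true = rng.normal(size=d)
      y = (X @ w_true > 0).astype(int)

      X_train, y_train = X[:1500], y[:1500]
      X_test, y_test = X[1500:], y[1500:]

      clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      # "Deployment" data: same underlying signal, but attenuated and noisy.
      X_shifted = 0.4 * X_test + rng.normal(scale=2.0, size=X_test.shape)

      print("clean test accuracy:  ", accuracy_score(y_test, clf.predict(X_test)))
      print("shifted test accuracy:", accuracy_score(y_test, clf.predict(X_shifted)))
      ```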

    1. Would immunity-based discrimination be akin to unacceptable discrimination based on genetics or disability? Or would it be acceptable the way we find it acceptable to limit drivers licenses for those who are visually impaired, because their biological condition poses risks to others?

      Immunity and vaccine certification as analogous to a driving license? Okay to restrict the ability to travel of those who cannot safely travel by that mode.

    1. many countries could be left without the ability to eliminate severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). If that happens, the next few years could see a total state-shift in global connectedness, as some countries become essentially impossible to travel to or from. This situation will be compounded if countries require vaccination certificates for entry without parallel equitable global vaccine access

      International vaccination certificates could cut some countries off from the rest of the world if there isn't sufficient global vaccine access

    1. An ethic of liberal individualism prioritises the autonomy of independent individuals over the wellbeing of the community. By stark contrast, a communitarian ethic prioritises the health of the community and, by extension, the individuals who make up that community, with special attention to those at greater risk of harm.

      Rebecca Brown et al come down in favour of immunity passports because they prioritise liberal individualism, whereas Francoise Baylis et al reject immunity certification because they prioritise communitarian public health ethics.

  11. Nov 2020
    1. Lady Harding told NHS bosses at an event organised by the Health Service Journal that at the same time at-home lateral flow tests were being rolled out, her staff and the vaccine staff were looking at how to combine test results and vaccine status into the app.

      Dido Harding looking to integrate vaccine and test results into the NHS Test and Trace app

    1. We will publish an evaluation of the Isle of Wight and Newham findings in due course.

      Helen Whately, Minister of State (Minister for Care) at the Department of Health and Social Care, promises that the government will publish an evaluation of the NHS contact tracing app pilots on the Isle of Wight and in Newham.

    1. The type of antibodies most closely associated with protection are neutralising antibodies (these are not currently measured by commercial tests).

      Commercial antibody tests do not measure the antibodies with the highest correlation to protection.

    1. Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals.

      Link to algorithm registers and Google's model cards?

    2. there are also circumstances where using algorithms to make life-affecting decisions can be seen as unfair by failing to consider an individual’s circumstances, or depriving them of personal agency.

      Both humans and algorithms can ignore context and take away agency from those they make decisions about.

    3. the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

      Important to compare the bias of automated decision-making systems with the bias of human decision-making. An automated system is likely to have different biases and be opaque in different ways, but it may still lead to less biased results overall. In this case, it would still be desirable to use a 'biased' algorithmic system.

    4. As our work has progressed it has become clear that we cannot separate the question of algorithmic bias from the question of biased decision-making more broadly.

      Good that the report recognises that algorithmic bias is not just a technical problem, but part of a broader decision-making system in which automated decision-making is just one, though sometimes quite important, part.

    1. AWSs like slaughterbots are ideal tools of assassination and terror, hence deeply politically destabilizing. The usual obstacles to one individual killing another – technical difficulty, fear of being caught, physical risk during execution, and innate moral aversion – are all lowered or eliminated using a programmable autonomous weapon. All else being equal, if lethal AWSs proliferate, this will make both political assassinations and acts of terror inevitably more possible, and dramatically so if the current rate is limited by any of the above obstacles. Our sociopolitical systems react very strongly to both types of violence, and the consequences are unpredictable but could be very large-scale.

      Lethal autonomous weapons make targeted assassinations and surprise terror attacks much easier and more accessible, even to terrorist organisations not just states.

    2. One crucial difference between AWs and other WMDs is that the former’s ability to discriminate among potential targets is much better, and this capability should increase with time. A second is that Autonomous WMD would, unlike other WMDs, leave the targeted territory relatively undamaged and quickly inhabitable.

      Lethal autonomous weapons as a potential WMD.

      The ability to select and discriminate between targets, plus leaving infrastructure, houses etc. in place, seems to make them a particularly well-suited tool for genocide, especially in ethnically diverse places.

    1. It seems harder to find a description at the same level of granularity for neural nets beyond something like "it learns a high-dimensional manifold that maps onto the input data". Even though we can often get succinct summaries of what an ML model is doing, only some of them are informative from a simulatable perspective.

      Succinct summaries of what an algorithm is doing do not necessarily make it interpretable.

    2. Lipton et al. point out that this desideratum can be less about the specific type of model and more about its size. A decision tree with a billion nodes, for example, may still be challenging to understand. Understanding is also about holding most of the model in your mind

      Whether a human can 'simulate' an algorithm in their mind, i.e. whether they can go through each step and check it is reasonable, depends both on the type of algorithm and on its scale: you need both understandable decisions and few enough distinct decisions that a human can comprehend them all at once.

      Some algorithms can involve billions of decisions, but if it is simply the same decision over and over, e.g. is X > Y in sorting, we can still easily understand the whole process.

    1. A Washington state law passed in March, supported by Microsoft and introduced by a state senator who works for the company, illustrates some of the divisions.Washington’s law requires government agencies to disclose information about their use of face-recognition technology and that technology’s accuracy on different demographics. It also requires “meaningful human review” when the technology is used for major decisions, and it prohibits law enforcement from using face algorithms on live videofeeds except in emergencies.Microsoft called that law an “important model,” but it is more permissive than the outright bans on government use of face recognition passed in more than a dozen cities, including Boston and San Francisco.

      Tech companies realise that the current position of facial recognition is untenable and that there will be some kind of regulation. It is thus in their interests to advance regulation that curbs the worst excesses of the use of facial recognition by law enforcement while still allowing them to sell those products. The focus on disclosure of accuracy is clearly good, but also likely to favour the large tech companies who have the resources to develop more accurate software.

      Once regulation is in place, some of the impetus for change will dissipate and lawmaker attention will turn to other areas, creating at least a medium term equilibrium. Lawmakers are unlikely to want to revisit the same topic as it makes them look bad/look like they failed the first time, so this more permissive regulation might buy the tech companies 4-8 years of stability.

    1. The ambitious digital package negotiated in the UK-Japan Comprehensive Economic Partnership Agreement (CEPA) includes protections against the forced transfer of source code and coded algorithms. This means that UK businesses will not be forced to share their source code as a condition of entering the Japanese market and serves to protect companies’ trade secrets. As a consequence, these companies can be confident they will retain any competitive advantage that their source code provides. However the agreement also ensures that the government is still able to access the source code and algorithms when needed in order to monitor adherence to, and enforce, laws and regulations.

      The government believes that 'protections against the forced transfer of source code and coded algorithms' in Japan trade deal does not constitute a barrier to algorithm inspection and algorithm audit

    1. There are legitimate reasons for holding different measures of the same thing. A high profile example is the challenge of defining the number of deaths from COVID: various different definitions may be appropriate depending on the use of the statistics.

      When trying to quantify some factor of interest into a variable, there may be several legitimate ways to measure that underlying subject of interest. Different definitions may be more or less useful for different purposes, e.g. speed of collection versus accuracy. Further, multiple different measures may help give a better idea of the ground truth, as biases and errors in data collection and methodology can be smoothed out across measures.

    2. The ultimate goal is not to deliver a platform but to enable the organisation to derive more value from its data.

      Need to remember that the goal of a data aggregation and analysis platform project is not the creation of a specific platform, but creating value for users and society by extracting more value from that data.

    1. I can take my lived experience hat on and off

      Is this actually true? Even in an 'academic' or 'professional' context, your decisions, intentions and beliefs are informed by your experience, which you can't turn off.

      The nature of science being a social process requires acknowledging that scientists in themselves cannot be truly objective and will always be informed by the culture they find themselves in. While this often serves to highlight the biases in expert judgement that exclude those with lived experience, it must surely cut both ways in that experts with lived experience can't stop having that experience.

    1. Participation as citizen control occurs, in Arnstein’s words, when “participants or residents can govern a program or an institution, be in full charge of policy and managerial aspects, and be able to negotiate the conditions under which ‘outsiders’ may change them.”

      This presumably can manifest and fulfill extremely reactionary outcomes.

      In practice, NIMBYism can be used to prevent 'outsiders' moving into a community, reducing available housing and opportunity for those who need it, in order to preserve often racially and income-segregated neighbourhoods.

      It also seems to manifest in the US in the form of trying to spin off parts of cities and counties so that rich, white areas can split their educational systems off from poorer, more ethnically diverse communities and cut their own taxes rather than redistribute to those in need.

      Very important to determine the bounds of 'local control'.

    1. The proposed Bluetooth contact tracing card, sometimes called the CovidCard, is now being trialled in Ngongotahā, just outside Rotorua.Between 500 and 1,500 members of the Ngongotahā community will participate in the trial. Registration began on 30 October, and the trial will finish on 15 November.Participants will be asked to wear the cards as they go about their daily activities and attend community events. The cards use Bluetooth to exchange “digital handshakes” and will keep an anonymised record of participants’ close contact with each other.The trial will help the Ministry of Health understand how well the contact tracing cards perform in a real-world scenario, whether they work with our contact tracing systems, and if people will accept and use them.The results of the trial will help the Government decide if we should use contact tracing cards alongside the NZ COVID Tracer app to support contact tracing.

      New Zealand trialling 'wearable' non-smartphone based bluetooth digital contact tracing system.

      Actually running a pilot in collaboration with local community groups, and explicitly saying the trial will help determine whether this system is used in conjunction with other contact tracing systems, based on real-world performance and public acceptance.

    1. Singapore pupils aged over seven must use the city state's contact-tracing app or wearable device from December. Both use Bluetooth signals to log any contact with other users' devices.Pupils do not always have access to their phones but free tokens that can be worn on a lanyard or carried are being given away at community centres.

      Singapore leading in the development of digital contact tracing wearables.

    1. 5: REVOKED. In v1.5, REVOKED is not used. In v1.6 and higher, REVOKED eliminates exposures associated with that key from the detected exposures.

      Presumably allows for exposure risk to be retroactively removed. For example, someone self-reports infection, sends out alerts, then gets a negative PCR test, and then those people with alerts receive a second alert giving them the all clear.

    2. 4: RECURSIVE. This value is reserved for future use.

      Future versions of the Google Apple Exposure Notification API may allow for recursive contact tracing systems. Something to watch/discuss.

    1. app-based approaches allow for “recursive” contact tracing, whereby contacts of contacts can be traced to an arbitrary recursive depth, at no additional cost.

      Recursive contact tracing is the idea of tracing contacts beyond those of the original confirmed case, e.g. the person with the positive COVID-19 test.

      For example, in 2-level recursive contact tracing, you would ask the contacts of the contacts of a positive case to self-isolate or otherwise alert them. In 3-level recursive contact tracing, you would then ask those people's contacts to also take some action.

      App-based contact tracing allows this to happen very quickly, by allowing 'risk' to ripple out from a positive case to their contacts, and their contacts' contacts, with minimal additional cost.
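
      Computationally, k-level recursive tracing is just a breadth-first walk over the contact graph, cut off at the chosen depth; the contact data below is invented for illustration.

      ```python
      # k-level recursive tracing as breadth-first search over a contact graph.
      # Contact data is invented for illustration.
      from collections import deque

      contacts = {
          "case0": ["a", "b"],
          "a": ["c", "d"],
          "b": ["e"],
          "c": ["f"],
      }

      def trace(index_case, depth):
          """Return everyone to notify within `depth` hops of the index case."""
          to_notify, seen = [], {index_case}
          queue = deque([(index_case, 0)])
          while queue:
              person, level = queue.popleft()
              if level == depth:
                  continue
              for contact in contacts.get(person, []):
                  if contact not in seen:
                      seen.add(contact)
                      to_notify.append((contact, level + 1))
                      queue.append((contact, level + 1))
          return to_notify

      print(trace("case0", depth=1))  # standard tracing: direct contacts only
      print(trace("case0", depth=2))  # 2-level recursive: contacts of contacts too
      ```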

    1. James Bethell 2020-04-20 Phone call with IBM Meeting to discuss COVID-19 Immunity Certification

      James Bethell apparently also met with IBM to discuss COVID-19 immunity certificates.

    2. James Bethell 2020-04-13 Phone call with Yoti Ltd Phone call to discuss COVID-19 Certification

      Yoti met with James Bethell, Minister of Innovation at the Department of Health and Social Care, to discuss COVID-19 immunity certificates in mid-April

    1. ‘We didn’t sign up to develop weapons,’” Stephens said, explaining, “That’s literally the opposite of Anduril. We will tell candidates when they walk in the door, ‘You are signing up to build weapons.’”

      In contrast to Google, Apple, Microsoft, and to an extent Amazon, companies like Anduril are developing a workforce and culture explicitly okay with developing military technology. This makes internal pushback against problematic uses of their technology much less likely, unlike cases such as Project Maven.

      Question is then whether top AI researchers/engineers are willing to work in that environment in the first place? Or are there sufficient American nationalists with sufficient skills that Anduril et al can still develop a strong AI capability? Will there be stigma against those who work at Anduril, Palantir etc. in joining other tech companies?

      If they can acquire Google and other major tech company services to integrate into their technology, then do they need really high-quality staff or just competent engineers?

  12. Oct 2020
    1. The BBC has learned that the Test and Trace team has gathered further data that indicates the app is out-performing other countries' efforts in terms of how many people have disclosed a positive test to trigger the automated contact-tracing process.However, it is not ready to publicly disclose the information at this time.Officials also declined to reveal whether they had logged the number of people who had uninstalled the app, which would give a better indication of the number of active users.

      More people disclosing a positive test to trigger automated contact tracing might simply be a product of having more cases in the first place? Or is it measured as a percentage of the number of positive tests? Or as a percentage of users compared to the estimated prevalence of COVID in the community?

    1. The threshold was due to move from 900 to 180, but because we have a new statistical algorithm taking advantage of improved distance estimation, we are now lowering it to 120.

      The risk-scoring algorithm seems to have changed, but it looks like notifications are going to be roughly 7.5 times more likely (the threshold falling from 900 to 120), assuming people generate risk at roughly the same rate. With better distance calculation and infectiousness scoring, the aim is to catch all cases while accepting some unneeded self-isolation, i.e. prioritising the avoidance of false negatives over the avoidance of false positives.

    2. The NHS COVID-19 app uses Bluetooth Low Energy to understand the distance, over time, between people who have downloaded the app. If someone tests positive for coronavirus, the app’s risk scoring algorithm uses this data, along with the infectiousness of the individual testing positive, to make calculations about risk, and work out who should be sent an alert.

      Did not realise they were also using an 'infectiousness' variable in the risk scoring algorithm. Has this been mentioned publicly before?
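
      A rough sketch of how such a score might be assembled. The weighting function and numbers below are illustrative assumptions rather than the published NHS algorithm; the only anchored figure is the notification threshold of 120 quoted above:

      ```python
      def contact_risk_score(windows, infectiousness_factor):
          """Illustrative risk score: contact duration weighted by proximity and infectiousness.

          windows: list of (duration_minutes, estimated_distance_metres) Bluetooth scan windows.
          infectiousness_factor: assumed multiplier reflecting how infectious the index case
          likely was around the time of contact (value purely illustrative).
          """
          score = 0.0
          for duration_minutes, distance_m in windows:
              proximity_weight = 1.0 if distance_m <= 2.0 else 0.3  # closer contact counts more
              score += duration_minutes * proximity_weight * infectiousness_factor
          return score


      THRESHOLD = 120  # the lowered notification threshold mentioned above

      windows = [(15, 1.5), (30, 3.0)]  # e.g. 15 minutes close by, 30 minutes further away
      if contact_risk_score(windows, infectiousness_factor=8.0) >= THRESHOLD:
          print("Send self-isolation alert")
      ```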

    1. The members also commented on the lack of any standardised definitions for key terms, such as ethics, fairness and transparency, and the lack of any standardised measurements for such principles.

      Echoes what was said in the Data & Society report on ethics owners, particularly the lack of standardised measurements for 'ethical' objectives. Presumably a coordination role government and regulators can play here?

    2. Lots of AI-governance principles have been produced around the world, both for financial services and other sectors. However, the members said these can be difficult to execute without more appropriate and practical internal governance frameworks that provide minimum standards or best practice guidelines. Members also said that further clarification on how best to apply principles would be welcome.

      Public-private forum on AI in financial services highlighting that principles need to be translated into practical guidelines that provide standards to be met or best practice guidance, so that these principles can be put into action by engineers, technicians, users etc.

    1. “practical necessity” is, in some sense, the more fundamental issue. The problem still exists even in the case of ideal governance, even in the case where everyone has good intentions and we get the design of institutions just right. 

      Even if all actors were well-motivated, privacy might still necessarily be invaded by the normal operation of services. Perhaps whether this is important depends on whether you view privacy as instrumentally or intrinsically good? If everyone has altruistic intentions, does it matter that they know a lot about you? (Not sure those are the same question... Informational hazards or incomplete knowledge about you might both be dangerous even with good intentions and well-designed institutions...)

    2. We can ask: “When there are trade-offs between social and institutional privacy, what do I tend to choose?” It seems to me like opportunities to trade one form of privacy off against the other actually come up pretty frequently in modern life. You can ask a friend a personally revealing question, for example, or you can ask Google.

      Does information acquisition almost always require some loss of privacy? (or is it just that any action that involves interaction inherently reduces some of your privacy through the act of creating information and revealing preferences?).

      Asking a friend or Google reveals information to one or the other. So would asking a library or Amazon to find a relevant book on your general topic of interest. But the latter might at least be more privacy-preserving...

    3. many small examples of improvements in information technology leading to improvements in social privacy. One recent case is the introduction of e-readers. There’s some evidence that this has actually changed people’s reading habits, by allowing people to read books without necessarily alerting the people around them as to what they’re reading.

      Hadn't really considered this before, but obviously an e-book library is in many ways more private than a physical set of books (which are harder to physically conceal and can't be encrypted...). Does this kind of privacy allow for greater self-expression and less norm-conforming reading?

    1. the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

      Would seem to favour the development of neurosymbolic AI and meta-concept learner systems in particular, where you can leverage computational resources to create knowledge bases and understanding about the world within the AI, rather than trying to impose or impart our own knowledge and categorisations of the world.

    2. One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.

      The bitter lesson argues that the best way to achieve performance improvements in artificial intelligence is to focus on more general and abstract methods that scale with increasing computation. Thus, we should focus on meta-methods such as search and learning, rather than trying to embed existing human knowledge into systems.

      This is somewhat contingent on a continually increasing amount of computation resources being available, primarily through Moore's Law (though it could also be propped up by increasing algorithmic and programming efficiency, which essentially increases the computation available from given hardware.)

      This seems to be the approach taken by OpenAI in the creation of GPT, GPT-2 and GPT-3. Given the increasing success and seeming lack of diminishing marginal returns in the creation of hundred-billion-parameter models, OpenAI seems to be validating The Bitter Lesson.

    1. Pre-training refers to training a model to perform some task and then using the learned parameters as an initialization to learn a related task.

      Pre-training: training a model for one task and then fine-tuning that model on a related task, e.g. training a cat-classifying model and retraining it on foxes.
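
      A minimal sketch of the pattern, assuming PyTorch and torchvision (0.13+) are available; the fox example and class count are illustrative, and downloading the ImageNet weights requires a network connection:

      ```python
      import torch
      import torch.nn as nn
      from torchvision import models

      # Pre-training/fine-tuning sketch: start from a model pre-trained on ImageNet,
      # then fit only a new classification head for a related task (e.g. fox classes).
      model = models.resnet18(weights="IMAGENET1K_V1")  # parameters learned on the original task

      for param in model.parameters():  # optionally freeze the pre-trained backbone
          param.requires_grad = False

      num_new_classes = 3  # illustrative: three fox classes
      model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # fresh, trainable head

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      # One illustrative training step on a dummy batch of images.
      images = torch.randn(8, 3, 224, 224)
      labels = torch.randint(0, num_new_classes, (8,))
      optimizer.zero_grad()
      loss = criterion(model(images), labels)
      loss.backward()
      optimizer.step()
      ```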

    2. The Adam optimizer proposed to use the first and second moments of the gradients to automatically adapt the learning rate. The result turned out to be quite robust and less sensitive to hyperparameter choices. In other words, Adam often just works and did not require the same extensive tuning as other optimizers

      The Adam optimiser automatically adapts the learning rate and generally provides effective results without much tuning of its own hyperparameters, reducing the barrier to entry for optimising the learning of networks.
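
      A small sketch of what 'using the first and second moments of the gradients' means in practice; this is the standard Adam update written with NumPy, using the usual default hyperparameters:

      ```python
      import numpy as np

      def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
          """One Adam update: exponential moving averages of the gradient (first moment)
          and of the squared gradient (second moment) scale the step size per parameter."""
          m = beta1 * m + (1 - beta1) * grad       # first moment estimate
          v = beta2 * v + (1 - beta2) * grad ** 2  # second moment estimate
          m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
          v_hat = v / (1 - beta2 ** t)
          theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
          return theta, m, v

      # Example: a single step on a three-parameter vector.
      theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
      theta, m, v = adam_step(theta, grad=np.array([0.1, -0.2, 0.05]), m=m, v=v, t=1)
      ```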

    1. This area is one for which additional evidence would greatly help to judge risk and how this risk can be traded off against the benefits of immunity passports.

      Assessing the scale of the moral hazard (in terms of people's relative incentive to become intentionally infected with COVID-19, thereby becoming a burden on the public health system and potentially further spreading COVID-19) associated with the implementation of immunity certificates could be a crucial factor in determining their use.

      There is also a need to set out the counterfactuals of use versus non-use, in public health and economic terms.

    2. By refusing to formalise the permissibility of such actions, inevitable low-risk behaviour is classified as rule breaking, and could even subject people to fines and punishments that do not correspond to the harm their behaviour causes.

      Not creating a formal immunity certificate controlled by the government could lead to the proliferation of private immunity certification schemes and informal assessments of immunity based on antibody tests and previous infection that may be poorly grounded in science, leading people to misjudge their own risk to others in the absence of formal guidance and permissibility of some actions.

      It could also lead to authorities being overly punitive towards those whose actions were not actually likely to be particularly harmful. This would likely be unfair and could also lead to more serious rule-breaking, if the assumption is that what they were doing was already illegal or transgressive. For example, hosting a large house party if hosting a small gathering of friends with confirmed antibodies was already deemed illegal.

    3. Basing immunity passports on a vaccine has advantages: the stimulus is uniform and is therefore likely to have a more predictable pattern and duration of immunity than is infection, and vaccination makes immunity potentially available to the whole population. The ethical issue then becomes one of timely access to vaccination for everyone.

      Vaccine-based immunity certificates have less scientific uncertainty associated as the pattern and efficacy of immunity conferred by vaccines will have been measured during the vaccine trials across a diverse range of people. Also, vaccines provide a consistent stimulus of immunity (within a given vaccine) so less uncertainty than natural immunity generated through an illness of variable severity.

      They also have significantly less moral hazard associated with them, as receiving a vaccine is a pro-social activity, compared to deliberately seeking out infection (a decidedly anti-social, damaging to public health activity) incentivised by natural antibody based immunity certificates.

      However, there are still concerns about the efficacy of vaccines. A vaccine that is 50% efficacious would mean that half of those with the vaccine could still be at-risk and a risk to others. Middling efficacy is fine at population-level when the goal is to bring down R below 1 consistently across the population but could be problematic in certifying any given individual as 'safe'. Also, a number of different vaccines with differing efficacies (potentially differentially efficacious across different demographic groups) will be available. Some efficacy threshold will likely need to be determined for any vaccine-based immunity certificate (perhaps with different thresholds for different activities based on their respective risk), with potentially personalised risk scores even for those having had the same vaccine.

      There are also significant questions of unequal access, given the limited capacity of vaccine production and distribution, which suggests there may not be sufficient vaccines for the whole world for multiple years; this may be made even worse if there is a need for booster doses of vaccines or even an entirely new vaccine for a sufficiently mutated strain of COVID-19.

    1. CommonPass is designed to protect data privacy and satisfy data privacy regulations by adhering to the following privacy principles: Agency: Data are stored or shared only with explicit, informed consent. Data Minimization: Only the minimum amount of personal data are used for any transaction. Federation: Personally identifiable health information is stored only at the source or on the user's phone. Use: Data are only stored to the extent necessary and never used for any other purpose.

      CommonPass emphasising data privacy concerns, rather than potential inequalities or uncertainties in access, meaning of test results/vaccinations, etc.

    2. The CommonPass platform assesses whether the individual’s lab test results or vaccination records (1) come from a trusted source, and (2) satisfy the health screening requirements of the country they want to enter.  CommonPass delivers a simple yes/no answer as to whether the individual meets the current entry criteria, but the underlying health information stays in the individual’s control.

      CommonPass aiming to be an attribute verification system for COVID-19 PCR tests and vaccination records. Doesn't reveal status of tests and vaccinations, only whether they meet a pre-specified threshold.

      Could imagine requiring a vaccine with a given level of efficacy, e.g. 75%, even if lower efficacy vaccines have been approved for general use.
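
      A hypothetical sketch of that kind of yes/no attribute check (not CommonPass's actual implementation; the issuer names, field names and 75% efficacy threshold are illustrative assumptions):

      ```python
      TRUSTED_ISSUERS = {"Example Lab Network", "Example Vaccination Registry"}  # illustrative

      def meets_entry_criteria(record, min_vaccine_efficacy=0.75):
          """Return only a yes/no answer; the underlying health record is never disclosed.

          `record` is a dict with 'issuer', 'type' ('pcr' or 'vaccine') and either a
          'result' or a 'vaccine_efficacy' field. All field names here are hypothetical.
          """
          if record["issuer"] not in TRUSTED_ISSUERS:  # (1) does it come from a trusted source?
              return False
          if record["type"] == "pcr":                  # (2) does it satisfy the entry criteria?
              return record["result"] == "negative"
          if record["type"] == "vaccine":
              return record["vaccine_efficacy"] >= min_vaccine_efficacy
          return False

      print(meets_entry_criteria({"issuer": "Example Lab Network", "type": "pcr", "result": "negative"}))  # True
      ```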

    1. the chief of Palantir’s UK business: Louis Mosley. The UK boss of the secretive CIA-backed data company which is helping the NHS tackle coronavirus is Louis Mosley, the nephew of the former motor racing boss Max Mosley and grandson of Sir Oswald Mosley

      Palantir UK is literally run by the grandson of the leader of the British Union of Fascists, which feels a bit on-the-nose.

    1. Improving reproducibility and innovation isn’t easy, to be sure. But science policy and science funders could do both at once by demanding more null results, and by substantially funding efforts to contradict groupthink and confirmation bias.

      A better contribution to global science than a UK ARPA? Become the global sponsor of contrarianism and replications?

    1. “Each additional region that must be included when matching exposure keys adds overhead to every client device, and to the backend that serves those clients.”

      Quote from Chris Hicks

      The wider the scope of interoperability between countries and the greater the number of users and interactions, the greater the bandwidth needed to run the system will be, and the greater the amount of storage needed on each device.

      Can get around data limits by requiring mobile providers to zero-rate data usage for digital contact tracing and provide it at the highest speed available for each device. However, the estimated 390MB a day from the gateway could be a more serious problem for those on older devices, with no obvious way to get around those hardware limitations besides better compression.

    2. Every 12 hours or so, each country’s back-end server uploads its latest positive-test keys to the federated server and also downloads any new keys from the other country, to feed back to their own local app. “From an app perspective, the app only ever talks to its own back end,”

      Interoperability between contact tracing apps using the Google Apple ENS will likely use a federated server that imports data from all the national and regional servers, merges them, and distributes back to each country or region's back-end server, which the app itself connects to. Possible for all Google Apple ENS systems to share a single federated server globally?
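
      A toy sketch of that 12-hourly merge, with in-memory stand-ins for the national back ends and the shared federated server (none of this reflects the real gateway protocol):

      ```python
      from typing import Dict, List, Set

      gateway_keys: List[Dict] = []  # positive-test keys pooled from all participating countries


      def sync_country(country: str, new_local_keys: List[Dict], already_seen: Set[str]) -> List[Dict]:
          """Upload this country's newly reported keys and pull down other countries' keys."""
          existing_ids = {k["id"] for k in gateway_keys}
          for key in new_local_keys:  # 1. upload: share our new keys with the federated server
              if key["id"] not in existing_ids:
                  gateway_keys.append({**key, "origin": country})

          # 2. download: fetch foreign keys we have not yet distributed to our own apps
          new_foreign = [k for k in gateway_keys
                         if k["origin"] != country and k["id"] not in already_seen]
          already_seen.update(k["id"] for k in new_foreign)
          return new_foreign  # the national back end serves these to its apps as usual


      # Example: the UK back end syncing after Ireland has uploaded a key.
      gateway_keys.append({"id": "ie-key-1", "origin": "IE"})
      print(sync_country("UK", [{"id": "uk-key-1"}], already_seen=set()))
      ```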

    3. The Department for Health and Social Care didn’t respond to WIRED’s requests for comment, but told the BBC that it was “working on a technical solution,” which would include the Republic of Ireland.

      An all-Britain-and-Ireland solution for contact tracing interoperability is reportedly being developed. If the Irish app also works with the European gateway service and other NearForm apps, it could be the most interoperable digital contact tracing app.

    1. Once deployed in distant battles and occupations, military methods tend to find a way back to the home front. They are first deployed against unpopular or relatively powerless minorities, and then spread to other groups.

      A pragmatic reason to support the rights of migrants and minorities is that tools, techniques and policies trialled on those groups tend to then be applied by elites to the public at large. Similar point to the bordering-out of immigrants from the welfare state in the political sociology literature.

    2. An iron fist in the velvet glove of advanced technology, drones can mete out just enough surveillance to pacify the occupied, while avoiding the kind of devastating bloodshed that would provoke a revolution or international intervention.

      Targeted repression of dissidents is much more likely to be successful than indiscriminate retribution, as those who aren't currently actively opposed to the regime or invading force will feel (accurately or not) that they have less to fear. Automated surveillance and enforcement by drones vastly reduces the cost of engaging in that strategy, versus traditional forms of repression. Enables the maintenance and expansion of digital authoritarianism.

    1. it is not enough for a government to rely on existing rights and protections when deploying a novel technology, nor is it enough for it to limit its consideration of potential harms to those from direct government action; governments must also consider the impact of private use of the same tools, and the protections people need to ensure their well-being against commercial use.

      Governments cannot simply opt out of providing novel technologies that could prove legitimately useful in preventing infectious disease, nor do their obligations regarding these technologies and their governance end at public-sector deployments. Even if they choose not to provide a technology, be it contact tracing apps or immunity certificates, they must take a proactive approach to governing its uses in the private sector.

    2. The disregard for clear experimentation goals or for checks and balances in implementation, as well as the lack of rigour in explaining the practical value of these technologies to the public,

      This is the important part. We need clear and transparently communicated measures of efficacy that act as a check on whether the technology is deployed and allows the public to understand what value, if any, there actually is in the technology.

    3. there’s a more fundamental issue: mobile phones aren’t able to easily or equitably capture information that’s critical to understanding the risk of COVID-19 virus transmission, such as whether a person is wearing a mask or is outdoors.

      Contact-tracing apps are a blunt tool. They aim to generate risk scores, a proxy for the likelihood someone will have contracted a disease like COVID-19, but they only take account of some of the factors we know are important in transmission, i.e. proximity and duration of contact.

      They don't include setting (e.g. indoors or outdoors, and relatedly the amount of ventilation), masks and other protective equipment (that may be partially protective but not so protective as to obviate the need for contact detection), volume of speakers etc.

      However, evaluation of these tools needs to consider the alternatives (assuming that development and distribution are not prohibitively resource intensive, which they don't seem to be unless you opt for a wearable solution). Is it better than nothing? Is it better than manual contact tracing? Is it a good complement to manual contact tracing and vice versa?

      A blunt tool can still get the job done.

    1. Such rules would require “equal terms for equal service,” meaning any company that relies on a dominant platform to reach customers should have the same access. In other words, SpanishDict and Google Translate would have equal opportunity to appear at the top of Google search results.

      The meaning of 'equal opportunity' seems very important here. If the Google Translate option works better for the user and is the best product for that user, it seems reasonable for it to take the top spot. You wouldn't want the ranking to be random (that would defeat the whole point of the search engine), and the scale of search queries would seem to make manually selecting a range of 'reasonable' options for each completely impractical.

      Perhaps the answer is transparency of the ranking system (at least to regulators), with the ability to determine that there is no explicit self-preferencing and that the design of the ranking algorithm doesn't lead to implicit self-preferencing either.

    1. Collectively, though, Europe is second only to the United States, with 769 AI startups (22 percent of the global total). This shows that, while single European countries may not be globally competitive, Europe has the potential to be a major player in AI if it can strengthen its digital single market

      Does it make sense to treat Europe as a single entity?

      In this paper, are they talking about the European Union (which does at least act collectively on much funding and regulation, harmonising across borders), the European Economic Area, or the continent of Europe?

    1. a private space online: An online space should be considered private insofar as a user can reasonably expect that they control who sees information that they share within that space.

      The Demos definition of online private spaces. Emphasises control of information flow rather than scale of information sharing. Although scale and control of flow are likely to be interrelated; you can only control information flow when it's held in limited spaces, unless you have some kind of DRM/self-destructing information.

    1. The difference between measurable and nonmeasurable impact can lead to extra attention for the class of ethical impacts that are already heavily documented and amenable to quantitative measures, like algorithmic bias and fairness, and a relative lack of scrutiny for harder-to-measure or impossible-to-quantify impact.

      And the most catastrophic risks will have very limited data available to assess them, as they generally have not yet happened or occur very rarely but are nonetheless extremely concerning.

    2. ethics owners we spoke with, however, pointed out that statements of principle cannot accomplish much if they are not deliberately linked to corporate practices.

      A crucial determiner of whether principles are principally an exercise in outward-facing public relations and managing internal dissent, or whether they are actually intended to guide and change the actions of the company.

    3. In the U.S., current human subject research rules were first codified in 1981, through the adoption of the Common Rule, which mandates the respect for persons, justice, and beneficence of new research

      What is the UK parallel for this regulation and institutional review boards in American universities and research institutions?

    4. we have regularly observed calls for the adoption of a “Hippocratic Oath for data scientists.” However, the Hippocratic Oath is focused on the intimate relationship between a physician and an individual patient, something that is not analogous to the relationship between product developers and many millions or billions of users. As is so often the case for ethics inside Silicon Valley, this difference in scale is the defining problem

      The patient-doctor relationship is almost always a one-to-one interaction, with clear actors on either side who are aware of each other and can model each other's interests.

      The developer-user relationship is a many-to-many interaction, with a collection of developers interacting with users at a vast scale intermediated by software that the developers have jointly created (and often also created with users who upload content, provide data to update algorithms etc.)

    5. Ethics owners consistently described the challenges they face in trying to integrate ethical objectives into OKRs, largely due to the difficulty of measuring ethics, a problem that increases as companies scale their products and services.

      Ethics is notoriously hard to quantify or measure with clear metrics (if anything, optimising specific metrics without regard to anything else is a common source of ethical problems). Many 'ethics' frameworks may deny ethics is even measurable in this way.

      Deontological and other rules-based approaches may actually be able to offer very clear, yes or no, red lines but struggle in resolving tensions in a way that engineers can easily apply.

      Utilitarianism seems well suited to metrification in theory, taking a consequentialist viewpoint, but the quantification of a 'moral calculus' has been a stumbling block for the theory for centuries and it's unlikely Silicon Valley engineers will be able to solve it overnight.

    6. ethics owners draw upon existing practices for organizational ethics within Silicon Valley, such as corporate statements of principle, ethical review boards, or software development lifecycle tools.

      Statements of principles seem to suffer from a lack of specificity that makes them vague and hard to put into practice (perhaps intentionally so?).

      Ethics review boards seem to fail quite often, judging from case studies like DeepMind/Royal Free, Google's AI board, NHSX, etc. What lessons can be learnt from university ethics boards, if any?

    7. Current ethics owners draw upon earlier ethical frameworks, including medical research ethics, business ethics, and professional ethics.

      Again, those working on tech/digital/AI ethics drawing on medical ethics and presumably bioethics. What problems have bioethics encountered and overcome that might be relevant?

    8. ethics owners

      A type of professional within technology companies responsible for handling 'ethical' problems within those companies. This can mean intervening in the product development cycle and translating public pressure into corporate practices, among other things. Seems related to legal and compliance staff, but perhaps with less clear remits and scope.

    1. “It’s very obvious that the [Chinese Communist Party] is twisting this particular technology to do things that are in pretty violent conflict with some of the values that underpin our societies here in the United States and in Europe,” he said, adding that any GPAI member has to sign up to the OECD’s principles for artificial intelligence.

      The Global Partnership on AI is now even more clearly being set up as a counterweight to and alliance against China in the use of AI. Contributing to fragmentation in international AI governance? Or centralisation but without China? Is there a Chinese equivalent developing?

    2. Vestager said that her office is looking into additional measures covering “how to best make sure that citizens all over Europe have the certainty that their fundamental rights are secured.” Asked about a potential ban, she added that “we have taken no final decisions yet.”

      Margrethe Vestager hasn't ruled out a ban on some uses of facial recognition technology by the European Commission.

    1. The solution? Less ML: Twitter's solution to this problem is to use less ML and to give its users more control over how their images appear.

      Sometimes the best option is to not use AI systems at all, or at least, not until you can demonstrably prove they're better than manual control and worth the effort, rather than just deploying them for the sake of it.

    1. Expensive, crewed platforms that we cannot replace and can ill afford to lose will be increasingly vulnerable to swarms of self-coordinating smart munitions perhaps arriving at hypersonic speeds or ballistically from space designed to swamp defences already weakened by pre-emptive cyber-attack.

      Suggesting a move towards autonomous drone warfare, abandoning some existing weapons platforms, e.g. tanks, altogether?

      How does this relate to the ongoing debates around banning lethal autonomous weapons? The UK seems to be basically expecting that a ban won't happen.

    1. collecting and analyzing data on diversity over time, comparing those numbers to the numbers at other organizations, and sharing them with key stakeholders, companies can increase accountability and transparency around diversity issues.

      Should we be collecting demographic data on staff, contractors, participants etc.? What is a meaningful benchmark for comparison? The UK population? The global population?

  13. Sep 2020
    1. Research communities, funding organizations, and academic publishers should work toward developing common standards for reporting progress in generative models. This might include raising the bar on documenting the processes used in training a new model, as well as integrating this information in a machine-readable way into the metadata included with published academic papers. Such standardization would improve transparency around the state of the field in ways that facilitate better strategic planning.

      Good policy often requires accurate forecasting and clear indications of when a given actor achieves (or is capable of achieving) a given outcome. Deepfakes are one example, but it seems like a lot of different possibly beneficial or harmful AI applications could also reasonably be mapped to performance benchmarks that indicate when they become possible.

    2. Widespread, aggressive deepfake detection and takedown by the biggest online platforms will not cover private messaging networks, such as WhatsApp or Telegram

      Increasing divergence between public and 'private' platforms as spreaders of disinformation. Harder to call out disinformation when you don't know it exists. Is there the possibility of building on-device deepfake detection systems so that users can be alerted in-app while retaining encrypted messaging? (Unsure, but an open question to follow up.)

    3. As a method of producing deepfakes becomes more popular, detection teams will have access to the resources necessary to identify them.

      The back-and-forth of generative-adversarial systems, both between the competing machine learning models within the deepfakes themselves and between disinformation spreaders and researchers, means that more popular methods of deception, especially if they are available to both sides, become more easily detected as more focus is put on them and there are more examples from which to train detectors.

    4. Freely or cheaply available generative models for creating a range of different fakes will likely become the norm as the knowledge to create deepfakes grows more widespread. While these pre-trained models may be lower quality and less customizable than models created from scratch, the low cost to using them may attract less well-resourced disinformation campaigns.

      Most non-state disinformation campaigns are likely to use pre-trained open-source models to generate their deepfakes. This may make them particularly vulnerable to detection through radioactive data and other markers. However, the lack of sophistication and detectability may not matter if content goes viral by playing into people's biases and getting them to overlook smaller irregularities. But it may make it easier for major platforms to prevent and push back against them.

    5. This review of the research literature understates the operational complexities faced by an online influence operation in deploying deepfakes “in the field.” GANs are relatively delicate tools, even for trained researchers. Training is often unstable and subject to “mode collapse,” in which the generative model arrives at a single or small set of outputs able to fool the discriminator, resulting in generators capable of producing only a tiny set of synthesized outputs. These “collapsed” models will be useless to a disinformation campaign that needs to generate more than a handful of faked faces or images. The on-staff technical expertise of an influence operation will make a major difference in whether or not a malicious actor can create generative models effectively.

      Most deepfake tools are not yet easily deployable unless those using them have access to sufficient technical expertise in developing these systems. This probably restricts sophisticated uses of deepfakes to state-aligned actors or groups that happen to contain expert machine learning researchers.

    6. The ML models for producing deepfakes can leave suspicious distortions in images, audio, and video that are often consistent across content distributed by an influence campaign. Deepfakes may therefore contain a kind of “fingerprint,” allowing investigators to link together all media originating from a given disinformation campaign. Investigators, in turn, can trace the campaign to a specific source and alert the public.

      Had never thought about it like this before. But using the same dataset (especially if it is a 'radioactive' dataset), the same software, the same GANs etc. can link together parts of your information operations that might otherwise have seemed 'organic', or at least hard to connect to a single entity.

    7. deepfake technologies are increasingly integrated into software platforms that do not require special technical expertise.

      There are two ways technologies can become more widely used. Either individuals become more highly skilled and adept at using them, or the tools become more accessible through routinisation and abstraction of the tasks required, lowering the entry threshold. The latter seems a much greater risk than the former when it comes to the malicious use of deepfakes.

    8. The fact that disinformation campaigns rely on cheap, rough-and-ready ways of producing content suggests that practical considerations figure into the types of content they spread. There is no need to spend additional resources creating an elaborate fake video when simply copying an image from elsewhere and misleadingly captioning it will achieve the same impact.

      Why innovate when what you've got already works well enough for your purposes?

    1. The structural landscape of bargaining is largely tilted in favour of employers. While many tech workers are well-paid enough to have a decent amount of runway, and are able to ‘exit’ to similarly well-paid jobs elsewhere, they are hampered by the lack of unionisation.

      Link to the formation of United Tech and Allied Workers in the UK

    2. Corporate organisations’ products and services typically require ongoing maintenance, so management is less able to absorb costs from failing to agree with its employees

      AI companies require ongoing maintenance and development of their systems, so cannot afford to lose workers even temporarily.

    1. the Government should introduce legislation which defines what data will be collected, how long it can be held, when it will be deleted. Such legislation should include a ban on contact tracing data being shared for any purpose other than combating the spread of Coronavirus.

      Joint Select Committee on Human Rights making same argument as Ada Lovelace Institute on the need for legislation to govern the contact tracing app.

    1. The predictable increases in compute and memory every two years meant hardware design became risk-adverse. Even for tasks which demanded higher performance, the benefits of moving to specialized hardware could be quickly eclipsed by the next generation of general purpose hardware with ever growing compute.

      There's no need to develop specialised hardware for a task if you can get the same performance increases from general purpose technology; this allows you to benefit from others' research into performance improvements rather than locking yourself into continued investment in developing the specialised hardware branch of the tech tree.

    1. no institution is fully immune to regime capture, and centralisation may reduce the costs of lobbying, making capture easier by providing a single locus of influence. On the other hand, a regime complex comprising many parallel institutions could find itself vulnerable to capture by powerful actors, who are better positioned than smaller parties to send representatives to every forum.

      Unclear whether regulatory capture is worse in centralised or decentralised institutions. Likely decided on a case-by-case basis, but possibly worthy of future investigation?

    2. fragmented regimes may force states to spread resources and funding over many distinct institutions, particularly limiting the ability of less well-resourced states or parties to participate fully

      A single centralised governance regime in AI (or other arenas) could allow smaller countries to focus their attentions and thus have a greater chance of having a say in decision-making.

    3. We define ‘fragmentation’ as a patchwork of international organisations and institutions which focus on a particular issue area, but differ in scope, membership and often rules [3, p.16]. We define centralisation as an arrangement in which governance of a particular issue lies under the authority of a single umbrella body.

      Worth considering where the new Global Partnership on AI fits into this. Large number of members, including non-Western states like India, but seems to implicitly cut out the Chinese. Does this contribute to fragmentation? What does a situation where there is significant centralisation on both sides of a divide, with competing international regimes, look like?

    4. The technical field has no settled definition for ‘AI’, so it should be no surprise that defining a manageable scope for AI governance will be difficult. Yet this challenge is not unique to AI: definitional issues abound in areas such as environment and energy, but have not figured prominently in debates over centralisation. Indeed, energy and environment ministries are common at the domestic level, despite problems in setting the boundaries of natural systems and resources.

      Just because a topic is composed of several distinct areas doesn't mean it doesn't make sense to approach governing them in a cohesive and integrated way, when many of these technologies have overlapping risks and applications, and rely on many common technical foundations.

    1. The misuse and accident perspectives tend to focus only on the last step in a causal chain leading up to a harm: that is, the person who misused the technology, or the system that behaved in unintended ways. This, in turn, places the policy spotlight on measures that focus on this last causal step: for example, ethical guidelines for users and engineers, restrictions on obviously dangerous technology, and punishing culpable individuals to deter future misuse.

      Focusing on accidents and misuse involving AI applications focuses attention on developers, users and use-cases. This is like focusing on car makers, drivers, and terrorists ramming crowds with cars when considering the risks from internal combustion engines, rather than considering the effects of urban sprawl or climate change that the technology enables as structural change.

    1. "technology is not inherently good or bad" and it is only specific uses which need to be regulated, lest innovation be hurt and Portland's technology industry suffer a negative impact.

      Companies like Amazon want regulation to focus on use cases rather than the fundamental underlying technology. It is not necessarily bad to have regulation for specific use cases, e.g. police surveillance, but it makes it much more difficult to preempt abuse of the technology in unexpected ways and to deal with cross-cutting issues involved in the creation of the technology itself.

    1. One of the new tools debuted by Facebook allows administrators to remove and block certain trending topics among employees. The presentation discussed the “benefits” of “content control.” And it offered one example of a topic employers might find it useful to blacklist: the word “unionize.”

      An example of a big tech company trying to push back against the wave of worker organising in the last few years, both in wider society, and within the tech companies themselves.

    1. Others have acknowledged the importance of recognising conflicts between values in AI ethics, but to our knowledge none have explored in detail why this would be beneficial or what it would look like in practice.

      Interesting that this was published in AIES 2019 and in AIES 2020, Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing seems to explicitly lay out three ethical tensions in facial recognition auditing. An example of tensions in practice.

    2. in order to be action-guiding, principles need to be accompanied by an account of how they apply in specific situations, and how to balance them when they conflict.

      Important to keep in mind for my own work.

    3. many of the different existing sets can be synthesised into five key principles: the four that are already used in bioethics - autonomy, beneficence, non-maleficence, and justice [4] - plus the additional principle of explicability, which captures the challenges of intelligibility and accountability unique to AI systems.

      Interesting and perhaps reassuring that many of the high-level AI principles are pretty similar to those in bioethics. Suggests there's possibly a lot to learn about difficulties in practical application from bioethicists who have a longer track record in going from principles to practice. Worth looking/talking more to the Nuffield Council on Bioethics?

    1. Audits of this nature have also made institutions wary – in September 2019, IBM, an audit target in the Gender Shades study, removed its facial recognition capabilities from its publicly distributed API [26]. Similarly, Kairos began putting its services behind an expensive paywall following its inclusion in the Actionable Auditing study [27]. Such practices, although rightfully stopping developers from using a product revealed to be flawed, also compromise the product's auditability – making it more expensive and challenging for auditors to evaluate, even though it may still be in active use by enterprise customers.

      Companies may choose to paywall their products most likely to be problematic and so likely to cause bad PR. This effectively prevents citizen and non-profit independent auditing, except by those with significant financial resources. This makes the case for government regulator auditing even stronger; they can require the legal disclosure and access to a product for auditing without concern for financial costs.

    2. The consistency of the benchmark can also be compromised if the dataset changes over time through the removal of individuals.

      The usefulness of a benchmark for auditing may be in tension with the ability of individuals to have a meaningful opt-out from inclusion in the dataset used for benchmarking. This seems to point towards up-front opt-in consent (with an acknowledgement from participants they cannot opt-out at a later date) the best compromise from a consent versus consistency point of view.

    3. While Clarifai includes many more Caucasian celebrity identities (74% of celebrity labels) than any other group, Microsoft, with 37% Caucasian, 19% Asian and 21% Black celebrity names included appears to have a more inclusive design.

      Feels like there is an implicit assumption about what a more inclusive design is. Whether these datasets are representative depends on the population they are being sampled from. For example, if they are being compared to the US population at large, Clarifai overrepresents White faces whereas Microsoft underrepresents them, and Microsoft overrepresents Asian and Black faces. Whereas Microsoft might be much closer to representing the global population (although likely overrepresenting White faces in that case)

    4. While it is important to strive for equal performance across subgroups in some tasks, audits have to be deliberate so as not to normalize tasks that are inherently harmful to certain communities. The gender classification task on which previously audited corporations minimized classification bias, for example, has harmful effects in both incorrect and correct classification. For example, it can promote gender stereotypes [19], is exclusionary of transgender, non-binary, and gender non-conforming individuals, and threatens further harm against already marginalized individuals

      Automated classification can be problematic both in treating different subgroups differently (which can be dealt with by improving the performance on those subgroups, likely through more data gathering) and in enabling the automation of tasks that we deem unacceptable or socially harmful in a given context.

    5. This may imply that external algorithmic audits only incentivize companies to address performance disparities on the tasks they are publicly audited for.

      Public pressure and accountability through audits does seem to encourage companies to update their systems. However, this does not translate into wider investigations of bias within their systems beyond what they were audited for. This suggests that the response to the audit served more as a public relations action than an acceptance of the system's failure and a change in processes and culture among those developing the system.
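
      A small sketch of the kind of subgroup comparison an external audit performs; the records and labels below are purely illustrative:

      ```python
      from collections import defaultdict

      def subgroup_error_rates(records):
          """Per-subgroup error rates from running a vendor's classifier over a labelled benchmark.

          records: iterable of (subgroup, predicted, actual) tuples.
          """
          totals = defaultdict(int)
          errors = defaultdict(int)
          for subgroup, predicted, actual in records:
              totals[subgroup] += 1
              if predicted != actual:
                  errors[subgroup] += 1
          return {group: errors[group] / totals[group] for group in totals}

      # Illustrative data only: a large gap between subgroups is what audits surface.
      results = [("darker-skinned women", "no match", "match"), ("darker-skinned women", "match", "match"),
                 ("lighter-skinned men", "match", "match"), ("lighter-skinned men", "match", "match")]
      print(subgroup_error_rates(results))  # {'darker-skinned women': 0.5, 'lighter-skinned men': 0.0}
      ```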

    1. The US has been urging Europeans to reduce their hydrocarbon imports from Russia for some time. Many European planners are sympathetic: the trouble is that few other options are available. Most Persian Gulf supplies go to East Asia – and in any case the US has crippled Iran’s hydrocarbon exports. Liquid natural gas can be shipped from the US thanks to the shale boom, but unlike Japan, where LNG technology was pioneered, much of Europe still lacks the infrastructure needed to process it. A pipeline from Russia is more efficient. The only alternative is Energiewende, the transition from an energy system based on hydrocarbons to renewables.

      Switching to renewable energy as a geopolitical strategy; a move towards energy nationalism or a reduction in dependence on constant flows from other countries. A move away from reliance on international energy transfers and strategic relations to acquire fossil fuels and towards independence; possibly anti-globalisation?

      On the other hand, developing and deploying green technologies will still require materials that cannot be acquired everywhere. For example, rare earth metals used in solar panels and wind turbines, for which China is by far the main global producer https://chinapower.csis.org/china-rare-earths/#:~:text=As%20of%202019%2C%20China%20still,of%20major%20rare%20earth%20importers.

      From Russian gas to Chinese renewables?

    2. In contemporary Russia oil and gas sales account for 60 per cent of exports and 30 per cent of GDP. Far from being a reanimated evil empire, Russia has reverted to a third world model of political aristocracy funded by the sale of natural resources.

      I didn't realise how dependent Russia's economy is on its oil industry. This makes the drastic fall in oil prices in the last few years and particularly in the Spring 2020 because of the COVID-19 shock seem even more dangerous for the country...

    1. the worst of tech is known only to a rumor mill of in-house data scientists.

      On the importance of getting access to developers themselves when building an algorithmic auditing regime.

    2. U.S. public institutions are disadvantaged by an enormous information asymmetry. They have neither the mandate nor the capacity to examine proprietary corporate data systems or to scrutinize the math behind the curtain.

      Making a similar point to one made in the UK around the need for regulators to have the technical capacity to investigate companies and for there to be a regulatory framework that requires companies to disclose data to regulators or approved researchers.

      Link to the Ada Lovelace Institute work on algorithmic auditing and Centre for Data Ethics and Innovation work on recommender systems.

    1. if we accept that the vaccine produces some form of immunity, it will be because we have been able to establish some test that will allow us to certify that immunity. If such a test exists, it would be logical to respect the right to freedom of movement for all persons who satisfy that test, regardless of how they have acquired that immunity.

      I don't agree with this claim. As repeatedly mentioned in discussions of immunity certificates, the perverse incentives to get infected (and therefore harm yourself and others) are one of the biggest issues with the idea of immunity certificates. Whereas vaccines do not put you or others at risk, and so are a purely pro-social method of acquiring immunity.

    2. Our hypothesis, by contrast, is that society will be quick to consider that those who receive the vaccines are immune to COVID-19. Therefore, vaccinated people will reject being deprived of their basic rights and freedoms. Ultimately, this effect of dividing society into two large groups, the seropositive and the seronegative, will be unavoidable, even if we do not issue immunity passports to those who have recovered from COVID-19 without receiving the vaccine.

      Not issuing immunity certificates will not prevent the potential division of society into those with and without some form of COVID-19 immunity. Just delay it until a vaccine is available.

      How long this division lasts in this scenario depends on the availability of vaccines, i.e. can everyone be vaccinated in weeks, or will it take months or years? Also, how long does the vaccine immunity last; is it possible to move back from being seropositive to seronegative again?

      Also, vaccines don't come with some of the same perverse incentives as antibody testing, as having a positive incentive to be vaccinated is a social good. Fraudulent vaccine certificates are still a possibility, but as taking a vaccine has a much lower downside risk to the individual than intentionally catching COVID-19, that also seems less risky.

    1. the UK protects computer-generated works which do not have a human creator (s178 CDPA). The law designates the author of such a work as “the person by whom the arrangements necessary for the creation of the work are undertaken” (s9(3) CDPA). Protection lasts for 50 years from the date the work is made (s12(7) CDPA). When proposed in 1987, this was said by Lord Young of Graffham to be “the first copyright legislation anywhere in the world which attempts to deal specifically with the advent of artificial intelligence”. It was expressly designed to do more than protect works created using a computer as a “clever pencil”. Instead, it was meant to protect material such as weather maps, output from expert systems, and works generated by AI. Although it was expected that other countries would follow suit, few countries other than the UK currently provide similar protection for computer-generated works.

      Interesting example of legislation trying to preempt the effects of future technology, and AI in particular.

      Also interesting as an example of the UK trying to be an innovator in terms of regulation. Not clear whether they have succeeded or failed yet. While very few countries have followed suit, it may simply be because it was not a relevant question in the minds of other countries' legislators. It may still be the case that this wave of AI leads to legislators taking it more seriously and then turning to the UK as a model. In general, it does seem like the UK is taking a lead in the realm of artificial intelligence and intellectual property.