12 Matching Annotations
  1. Sep 2020
    1. To recognize and address these situations, you have to make sure that you test the algorithm in a regime that is similar to how it will be used in the real world. So, if your machine-learning algorithm is one that is trained on the data from a given set of hospitals, and you will only use it in that same set of hospitals, then latching onto which hospital did the scan could well be a reasonable approach. It’s effectively letting the algorithm incorporate prior knowledge about the patient population in different hospitals. The problem really arises if you’re going to use that algorithm in the context of another hospital that wasn’t in your data set to begin with. Then, you’re asking the algorithm to use the biases that it learned on the hospitals it trained on, on a hospital where the biases might be completely wrong.

      Example of dataset bias and how to recognize it; a sketch of the evaluation idea follows below.
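
      A minimal sketch of the evaluation idea in the quote, assuming scikit-learn and NumPy; the data is synthetic, and names like `hospitals` and `site_effect` are illustrative. A random split lets the model keep exploiting the hospital shortcut at test time, while holding out whole hospitals mimics deployment at an unseen site.

      ```python
      # Synthetic scans: disease prevalence differs per hospital, so "which
      # hospital did the scan" is a tempting shortcut for the model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import GroupShuffleSplit, train_test_split

      rng = np.random.default_rng(0)
      n = 2000
      hospitals = rng.integers(0, 5, size=n)       # which hospital made each scan
      signal = rng.normal(size=(n, 5))             # features genuinely tied to disease
      site_effect = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # per-site prevalence shift
      y = (signal[:, 0] + site_effect[hospitals] + rng.normal(size=n) > 0).astype(int)

      # One-hot hospital columns let the model latch onto the site.
      X = np.column_stack([signal, np.eye(5)[hospitals]])

      # Random split: train and test share hospitals, so the shortcut still pays off.
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      acc_random = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

      # Group split: entire hospitals held out, mimicking a hospital not in the data.
      tr, te = next(GroupShuffleSplit(n_splits=1, test_size=0.25,
                                      random_state=0).split(X, y, groups=hospitals))
      acc_new_site = LogisticRegression(max_iter=1000).fit(X[tr], y[tr]).score(X[te], y[te])

      print(f"accuracy, random split:       {acc_random:.3f}")    # optimistic
      print(f"accuracy, held-out hospitals: {acc_new_site:.3f}")  # closer to real use
      ```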

    1. This fix might seem small, but it is crucial. Machines that have our objectives as their only guiding principle will be necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines.

      Again the difference between humans and machines: our objectives live in us, not in the machines.

    2. Because machines, unlike humans, have no objectives of their own, we give them objectives to achieve. In other words, we build machines, feed objectives into them, and off they go. The more intelligent the machine, the more likely it is to complete that objective.

      Very explicit statement of the distinction between machines and humans: machines have no objectives of their own.

    1. you might see its spirit in their ambitions to investigate the “rules” connecting human thought with word “manipulation” and in their efforts to explore the relationship between creativity and randomness—not to mention in their grander goal of creating machines that would “improve themselves.”

      The early researchers' goals: rules of thought, creativity and randomness, self-improving machines.

    1. The jobs that appear to face intrusion by these newer patents are different from the more manual jobs that were affected by industrial robots: intelligent machines may, for example, take on more tasks currently conducted by physicians, such as detecting cancer, making prognoses, and interpreting the results of retinal scans, as well as those of office workers that involve making determinations based on data, such as detecting fraud or investigating insurance claims. People with bachelor’s degrees might be more exposed to the effects of the new technologies than other educational groups, as might those with higher incomes. The findings suggest that nurses, doctors, managers, accountants, financial advisers, computer programmers, and salespeople might see significant shifts in their work. Occupations that require high levels of interpersonal skill seem most insulated.

      Which jobs specifically appear to be affected, with examples.

    2. Until recently, the consensus among researchers seemed to be that workers with higher levels of education would be less affected by automation than those lower down on the economic hierarchy.

      A social expectation about machines: that automation would spare more educated workers.

    3. Automation on a factory floor evokes a simple image: robotic arms assembling parts into Tesla cars; mobile robots driving pallets of goods through Amazon distribution centers.

      Examples of machines taking over human jobs.

    1. They’ve taken a variety of approaches: algorithms that help detect and mitigate hidden biases within training data, or that mitigate the biases learned by the model regardless of the data quality; processes that hold companies accountable to fairer outcomes; and discussions that hash out the different definitions of fairness.

      Approaches for fixing AI bias; one pre-processing idea is sketched below.
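
      One concrete instance of the "mitigate hidden biases within training data" family mentioned in the quote, sketched under assumptions: this is the reweighing idea (weight each example by P(group) * P(label) / P(group, label) so group and label look statistically independent), with synthetic data and illustrative names, using scikit-learn.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 1000
      group = rng.integers(0, 2, size=n)                 # protected attribute
      X = rng.normal(size=(n, 4)) + 0.5 * group[:, None]
      y = (X[:, 0] + 0.8 * group + rng.normal(size=n) > 0.5).astype(int)  # skewed labels

      # Reweighing: w(g, l) = P(g) * P(l) / P(g, l). Under-represented
      # (group, label) combinations are up-weighted, over-represented ones
      # down-weighted, so the weighted data shows no group-label association.
      weights = np.ones(n)
      for g in (0, 1):
          for lbl in (0, 1):
              mask = (group == g) & (y == lbl)
              if mask.any():
                  weights[mask] = (group == g).mean() * (y == lbl).mean() / mask.mean()

      plain = LogisticRegression(max_iter=1000).fit(X, y)
      reweighed = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)

      for name, model in [("plain", plain), ("reweighed", reweighed)]:
          r0 = model.predict(X[group == 0]).mean()
          r1 = model.predict(X[group == 1]).mean()
          print(f"{name}: positive rate, group 0 = {r0:.2f}, group 1 = {r1:.2f}")
      ```

      The gap in positive prediction rates between groups should narrow under reweighing. Note this targets one particular fairness definition (demographic parity), which is exactly the kind of choice the "different definitions of fairness" discussions are about.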

    2. “‘Fixing’ discrimination in algorithmic systems is not something that can be solved easily,” says Selbst. “It’s an ongoing process, just like discrimination in any other aspect of society.”

      Quote tempering confidence: fixing algorithmic discrimination is an ongoing process, not a one-time repair.

    3. you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it.

      Negative impacts of biased AI are hard to trace back and remove after the fact.

    4. computer scientists randomly split their data before training into one group that’s actually used for training and another that’s reserved for validation once training is done. That means the data you use to test the performance of your model has the same biases as the data you used to train it. Thus, it will fail to flag skewed or prejudiced results.

      Bias example that shows machines' imperfection; a short demonstration follows below.
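
      A short demonstration of the quoted point, with synthetic and illustrative data: after a random split, both halves carry the same group imbalance and label skew, so metrics computed on the validation half cannot flag them.

      ```python
      import numpy as np
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n = 10_000
      group = (rng.random(n) < 0.9).astype(int)      # 90/10 group imbalance
      # Label skew: group 1 gets positives at rate 0.7, group 0 at 0.3.
      y = (rng.random(n) < np.where(group == 1, 0.7, 0.3)).astype(int)

      tr, va = train_test_split(np.arange(n), test_size=0.2, random_state=0)
      for name, idx in [("train", tr), ("validation", va)]:
          g, lbl = group[idx], y[idx]
          print(f"{name}: group-1 share = {g.mean():.2f}, "
                f"positive rate | group 1 = {lbl[g == 1].mean():.2f}, "
                f"| group 0 = {lbl[g == 0].mean():.2f}")
      # Both splits print near-identical numbers: a model validated here is
      # judged against the same biased distribution it was trained on, so
      # skewed or prejudiced results pass unflagged.
      ```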