9 Matching Annotations
  1. Feb 2022
    1. as well as the gradual reduction of human involvement or even oversight over many automatic processes, pose pressing issues of fairness, responsibility and respect of human rights, among others.

      Why not try to increase human involvement in algorithms? It seems that if the ethical reliability of algorithms is in question, greater human involvement is needed until algorithms can be trusted.

    1. The subject can be pushed to make the ‘‘institutionally preferred action rather than their own preference’’

      How could somebody completely remove these indicators? After all, wouldn't trying to remove biases just replace them with new ones?

    1. Because CENS was an academic research lab, faculty members held a large amount of power to decide which projects students pursued and what issues students faced during design, testing, and implementation

      CENS seems like it takes its job seriously. As I said in my other annotation for week 5, just because data scientists are trying to root out bias in all its forms doesn't mean their efforts are always effective, or that what is effective can't be improved.

  2. inst-fs-iad-prod.inscloudgate.net
    1. The gaps between data scientists and critics are wide, but critique divorced from practice only increases them.

      In my opinion, this struggle needs to play out. Data science can always be streamlined to remove the bias that everyone carries. The conflict this article describes is just the natural progression of what has already been happening. The only way this will get better is through constant struggle, just like everything else.

    1. To recognize that technology is only one component of a broader sociopolitical system is not to give technology a free pass.

      In agreement with this article, I think technology should be scrutinized. Anything made by humans is going to be subject to the same flaws we are subject to. So algorithms should be scrutinized for fairness, but by multiple groups of testers rather than one, since we all share the same flaws. On another note, businesses use algorithms all the time, and those algorithms generate revenue for the companies that use them, so not all algorithms are useless and in need of being scrapped and/or revamped.

  3. Jan 2022
    1. My belief that we ought

      The highlighting seems to be glitched on this article for me. Some of the highlights are not mine, and those that are only cover half of what I wanted. Anyhow, here is my annotation:

      I think the main crux of this is the advancement of technology and how much we lose ourselves to it. Going back to old habits takes discipline. I also think this article is too negative on the whole about these advancements. Change isn't always bad.

    1. Although the paper was not peer-reviewed, its provocative findings generated a range of press coverage. [2]

      This article leans heavily on a paper that was not peer reviewed. It is an interesting idea that has been explored a lot in science fiction, and it could be accurate, but as many works of fiction explore, these methods are often either overzealous or just plain wrong. That also assumes the article itself is trustworthy. Personally, I think the sample size was too small.

    1. that peak flu levels reached 11% of the US public this flu season, almost double the CDC’s estimate of about 6%.

      How is Google retaining this data? It seems strange to me that they would simply refuse to explain the error. There has to be a reason Google arrived at an estimate double that of the CDC.

    1. We are definitely living in interesting times!

      The problem with machine learning, in my eyes, is the lack of transparency in the field. After all, what makes the data we are researching valuable? If we collect so much data, why is only 0.5% of it being studied? There seems to be a lot missing here, and big opportunities that aren't being used properly.