29 Matching Annotations
  1. Apr 2022
    1. Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license. In some states, police and federal law enforcement officers can also access this highly sensitive medical information—in many cases without a warrant—to prosecute both doctors and patients.

      A score would be a good assist for doctors, but it cannot replace the doctor, agreed. Doctors and the scoring system should work together to create a better monitoring system for patients.

    2. Over the past two decades, the US Department of Justice has poured hundreds of millions of dollars into developing and maintaining state-level prescription drug databases—electronic registries that track scripts for certain controlled substances in real time, giving authorities a set of eyes onto the pharmaceutical market. Every US state, save one, now has one of these prescription drug monitoring programs, or PDMPs. And the last holdout, Missouri, is just about to join the rest.

      Good to see government control here. For certain controlled substances it is better to have government monitoring, in case some agency abuses them.

    1. While new scores multiply, consumers remain in the dark about many of their consumer scores and about the information included in scores they typically don’t have the rights to see, correct, or opt out of. A primary concern is how these scores affect individuals and meaningful opportunities available to them. Another area of concern is the factors used in new consumer scores, which may include readily commercially available information about race, ethnicity, religion, gender, marital status, and consumer-reported health information.

      The new scoring systems have many unresolved issues before they should be released, since consumers don't have control over their information once they are involved in these opportunities.

    2. Predictive scores bring varying benefits and drawbacks. Scores can be correct, or they can be wrong or misleading. Consumer scores – created by either the government or the private sector – threaten privacy, fairness, and due process because scores, particularly opaque scores with unknown ingredients or factors, can too easily evade the rules established to protect consumers.

      A potential issue comes with scores kept secret from consumers, since with most of these algorithms there is no way to audit factors like fairness and privacy.

    1. The issue of unlawfulness over the harm caused by derogatory results is a question of considerable debate. For example, in the United States, where free speech protections are afforded to all kinds of speech, including hate speech and racist or sexist depictions of people and communities, there is a higher standard of proof required to show harm toward disenfranchised or oppressed people. We need legal protections now more than ever, as automated decision-making systems wield greater power in society.

      Law and government intervention are needed in certain scenarios, since not everyone acts lawfully in every social, political, and historical context.

    2. If there is a technical fix, then what are the constraints that Google is facing such that eight years later, the issue has yet to be resolved? A search for the word “Jew” in 2012 produces a beige box at the bottom of the results page from Google linking to its lengthy disclaimer about the results—which remain a mix of both anti-Semitic and informative sites (see figure 1.13). That Google places the responsibility for bad results back on the shoulders of information searchers is a problem, since most of the results that the public gets on broad or open-ended racial and gendered searches are out of their control and entirely

      I think Google has taken little responsibility as a search platform for open-ended racial and gendered searches; sometimes it cannot control how text will be interpreted.

    1. The internet is full of information borders. Sometimes those borders follow foreseeable lines, for example according to geography and language. But there are frequent exceptions. Many information borders are unpredictable and harder to explain. Geographically close countries with the same language may have significantly different results, and distant countries with unrelated languages may have unexpectedly similar results. By automatically

      An algorithm may not be able to interpret cultural or regional information beyond a certain level, since linguistics, geography, and other traditions are not easily learned by an algorithm. With a lack of data, the learning and interpretation process can be challenging.

    2. To identify such information regions and borders more precisely, we choose to analyze text results, not images, since there are better-defined ways of measuring text similarity. We analyze the text using automatic methods that reveal which countries have similar and dissimilar results, enabling us to remap the world by the similarity of the information that a searcher sees. Specifically, we again make searches worldwide using the Google-determined default language for the country. Then, given the text results in that language, we machine-translate the results back into English, again using Google Translate. Each country’s English results can be understood as an approximately hundred-dimensional vector of its most distinctive words, found via the tf-idf algorithm [31]. (For example, Japan’s top words in its results for “god” are “japanese,” “shinto,” “kami” [spirits], and “awe.”) The similarity between two countries is quantified as the cosine similarity between their vectors. Finally, we use an algorithm called UMAP, which is state-of-the-art for dimensionality reduction [32], to arrange the countries in a two-dimensional space and automatically cluster them according to how similar their search results are.

      Using automated analysis methods, it is easy to notice which countries have similar and dissimilar results. This could be a powerful tool for mapping similar information across regions worldwide; a rough sketch of the pipeline follows.
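
      A minimal sketch of the pipeline the authors describe (tf-idf vectors, then cosine similarity, then UMAP), assuming per-country result texts already machine-translated into English. The package names (scikit-learn, umap-learn) and the shared 100-term vocabulary are my assumptions, not details from the paper, which keeps ~100 most distinctive words per country.

      ```python
      # Sketch: remap countries by the similarity of their (translated) search results.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity
      import umap  # pip install umap-learn

      def remap_by_similarity(results_by_country: dict[str, str]):
          """results_by_country: country -> English-translated search-result text."""
          countries = list(results_by_country)
          # Shared 100-term tf-idf vocabulary (a simplification of the paper's
          # per-country "most distinctive words" vectors).
          vectors = TfidfVectorizer(max_features=100).fit_transform(
              results_by_country[c] for c in countries
          )
          # Country-by-country cosine similarity matrix.
          similarity = cosine_similarity(vectors)
          # Arrange countries in 2-D; cosine metric so nearby points see similar results.
          embedding = umap.UMAP(n_components=2, metric="cosine").fit_transform(
              vectors.toarray()
          )
          return countries, similarity, embedding
      ```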

    1. One of the problems for those trying to see movement clearly using this type of data is numerical accuracy regarding data subjects. On average, the presence of antennas will correlate fairly closely with the presence of people: a mobile provider’s ability to collect location data on its users is dependent on their connecting with antennas, and since remote locations have fewer antennas (de Montjoye et al., 2013), where the population is less dense, fewer signals will be collected. So for example there will be both more signals and more people in urban areas than in rural ones. This correspondence between signals and individuals becomes unreliable, however, when large numbers of people move through remote areas with few antennas.

      Numerical accuracy about data subjects matters here: large populations moving through remote areas can be undercounted, so regional differences in antenna coverage need to be raised to scientists' attention when designing experiments.

    2. First, calling records show which of a network’s antennae is being used, and thus the user’s movement from the vicinity of one antenna to another. Alternatively, more specific location data can be gathered as a phone’s SIM card automatically checks in with its nearest antenna.

      Care is needed when handling mobile data; so much data from our activities has already been collected. Government and law enforcement are the only actors that can restrain the misuse of these data. A sketch of how movement can be inferred from such records follows.
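
      A small illustration (my sketch, not from the article) of the mechanism described in the quote: call records only reveal which antenna handled an event, so location is known only at antenna granularity, and movement is inferred from transitions between antennas. The record shape below is hypothetical.

      ```python
      from dataclasses import dataclass
      from datetime import datetime

      # Hypothetical call-detail-record shape: only the serving antenna is logged.
      @dataclass
      class Event:
          timestamp: datetime
          antenna_id: str

      def movements(events: list[Event]) -> list[tuple[str, str]]:
          """Infer coarse movement as time-ordered transitions between antennas."""
          ordered = sorted(events, key=lambda e: e.timestamp)
          return [
              (a.antenna_id, b.antenna_id)
              for a, b in zip(ordered, ordered[1:])
              if a.antenna_id != b.antenna_id
          ]
      ```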

    1. Studies on the construction of race in the United States often mention census categories, either as an indication of the stages of its evolution or to emphasize the federal government’s participation in the process. But the racial categories of the census are most often perceived as a sign of racism at work in society, rather than the subject of a more in- depth investigation.

      Underrepresented groups need government help to be represented on their own terms in the world.

    1. The Census Bureau had hoped to make use of Mexicans’ feelings of ethnic pride to convince them to participate enthusiastically in the 1930 census.

      The government recognizing this importance is a good sign, the start of a whole new era.

    2. In spite of this trend toward greater complexity, the system of racial categories, so important in American public statistics, still relied on the primordial opposition between whites and blacks. In fact, the relationship by this time had been solidified by locking the definition of the black population, so that it was finally stable, while the definition and limits of the “white race” were called into question by the importance of the immigration issue, which was also a racial issue even if not a question of color, and by the creation, with the 1930 census, of a Mexican race.

      Minority groups need to be accounted for when designing a complex classification scheme; that tuning is worth paying attention to.

  2. Mar 2022
    1. “They are not only failing to prioritize hiring more people from minority communities, they are quashing their voices,” she said.

      Same as we discussed last week: when processing data about minority communities, bias and unexpected issues should be taken into consideration, so that minorities can be represented in the model's output.

    2. Sundar Pichai, chief executive of Alphabet, Google’s parent company, has compared the advent of artificial intelligence to that of electricity or fire, and has said that it is essential to the future of the company and computing

      Large companies like Google should be involved as much as possible in AI-for-social-good causes, so other firms become aware of its importance.

    1. All of these approaches take time and are most valuable when applied early in the development process as part of a conceptual investigation of values and harms rather than as a post-hoc discovery of risks [72]. These conceptual investigations should come before researchers become deeply committed to their ideas and therefore less likely to change course when confronted with evidence of possible harms. This brings us again to the idea we began this section with: that research and development of language technology, at once concerned with deeply human data (language) and creating systems which humans interact with in immediate and vivid ways, should be done with forethought and care.

      Relates to what we discussed last week: NLP could be the preliminary obstacle to overcome in embedding ML in real life. We ultimately want a persistently running and functioning processing system to help make better decisions.

    1. Speed is key. There are technologies, such as semi-supervised learning which requires very little labeled data, that could eventually allow tech companies to develop language services without seeking out cultural knowledge, Mehelona says.

      Sometimes it can be relatively easy to apply something in a technology firm as long as you know which tool to use, so issues will arise in the future. A small sketch of the semi-supervised approach mentioned here follows.
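
      For reference, a minimal sketch of the kind of semi-supervised setup the quote mentions, using scikit-learn's SelfTrainingClassifier; the toy data and the choice of base model are my assumptions, not from the article.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.semi_supervised import SelfTrainingClassifier

      rng = np.random.RandomState(0)
      X = rng.randn(200, 5)
      y = np.full(200, -1)                  # -1 marks unlabeled rows (sklearn's convention)
      y[:10] = (X[:10, 0] > 0).astype(int)  # only 10 labeled examples

      # Self-training: the base model labels its own confident predictions and retrains,
      # so very little labeled data is needed up front.
      clf = SelfTrainingClassifier(LogisticRegression())
      clf.fit(X, y)
      print(clf.predict(X[:5]))
      ```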

    1. Social media platforms like Facebook are ill-equipped to combat these trends as they simply have not invested the resources in personnel and AI systems that understand local languages and social tensions at play. Their business models raise questions as to whether this trend will be reversed without a major paradigm shift.

      This is something people are not paying attention to when using Facebook and other social media. Firms like Facebook borrow existing systems to build their business.

    1. There is no algorithm that can fix this; this isn’t even an algorithmic problem, really. Human judges are currently making the same sorts of forced trade-offs—and have done so throughout history.

      Humanity could be plugged into algorithms in the near future; the first thing we need is to learn more about the boundaries and baselines of humanity.

    2. We gave you two definitions of fairness: keep the error rates comparable between groups, and treat people with the same risk scores in the same way. Both of these definitions are totally defensible! But satisfying both at the same time is impossible. 

      Fair definitions. The indicators and definitions make total sense, but executing them well requires fair supervising systems. The impossibility is illustrated numerically below.
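
      A small numeric sketch (mine, not the authors') of why the two definitions conflict when base rates differ: both groups below are perfectly calibrated by construction, and everyone with the same score gets the same treatment, yet the error rates come out unequal.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def error_rates(scores):
          """Draw outcomes so the score is perfectly calibrated: P(y=1 | s) = s."""
          y = rng.uniform(size=scores.size) < scores
          pred = scores >= 0.5                 # same risk score -> same treatment
          fpr = pred[~y].mean()                # false positives among true negatives
          fnr = (~pred[y]).mean()              # false negatives among true positives
          return fpr, fnr

      # Two groups whose score distributions (hence base rates) differ.
      print(error_rates(rng.beta(2, 5, 200_000)))  # low-base-rate group
      print(error_rates(rng.beta(5, 2, 200_000)))  # high-base-rate group
      # Calibration holds for both groups, but FPR/FNR differ between them.
      ```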

    1. We need public buy-in (quite literally) for the idea that successful, equitable automation means a sociotechnical system in which workers play a central role, whether through directly or indirectly working with machines, and are compensated accordingly. 

      This somewhat treats the machine or system like a human; will that cause issues or arguments in the future? It depends on how the sociotechnical system is designed.

    1. First, the impenetrable black box of automated hiring casts doubts on the notion of equal opportunity for all workers, as the fairness of criteria used in hiring cannot be verified. A nontransparent hiring process also creates more difficulty for discovering and redressing disparities in hiring. Second, the opaque nature of how data collected from workers could be evaluated or deployed both by present and future employers presents challenges to worker personhood, worker autonomy, and social mobility.

      The black-box issue exists everywhere and persistently challenges humanity, whether in real life or in the world of data. Nowadays it is even scarier that data leaks are so prevalent; it is hard to imagine how much personal information has leaked by now.

  3. Feb 2022
    1. This filtering has been used to show higher-paying job ads to men more often than to women, to charge more for standardized test prep courses to people in areas with a high density of Asian residents, and many other forms of coded inequity.

      Income inequality has been created by stereotypes, which forget that women and men are born with the same thinking ability.

    2. Despite decades of scholarship on the social fabrication of group identity, tech developers, like their marketing counterparts, are encoding race, ethnicity, and gender as immutable characteristics that can be measured, bought, and sold.

      The tech era has taken our information for business use without consent. If we want to get rid of this constraint, we probably need to publicize how badly these fabrications could harm our lives.

    1. use of the company’s AI for military applications.

      Hard to judge, but military applications could be broad and private. Officials should be cautious about the use cases.

    1. The ubu-Ntu framework for understanding the ethical implications and human rights risks of automated decision-making systems (ADMS) relies on examining the nature of the interconnected and layered relationships relating to the creation and use of these systems.

      This automated decision-making framework examines the quality of relationships based on the culture and context people live in, so it is important to understand the culture before implementing any technical system.

    2. Neo-colonialism, like colonialism, is an attempt to export the social conflicts of the capitalist countries. The temporary success of this policy can be seen in the ever widening gap between the richer and the poorer nations of the world. But the internal contradictions and conflicts of neo-colonialism make it certain that it cannot endure as a permanent world policy.

      Capitalism persists as technology develops further. As long as only some people have access to higher-level techniques for developing the world, inequality will persist. Maybe we should come up with ways to evaluate power in a more diverse manner.

  4. data-ethics.jonreeve.com
    1. Large data companies have no responsibility to make their data available, and they have total control over who gets to see them.

      Hierarchy exists in the data world too.

    2. Both behavioral and articulated networks have great value to researchers, but they are not equivalent to personal networks. For example, although contested, the concept of ‘tie strength’ is understood to indicate the importance of individual relationships (Granovetter 1973). When mobile phone data suggest that workers spend more time with colleagues than their spouse, this does not necessarily imply that colleagues are more important than spouses. Measuring tie strength through frequency or public articulation is a common mistake: tie strength – and many of the theories built around it – is a subtle reckoning in how people understand and value their relationships with other people. Not every connection is equivalent to every other connection, and neither does frequency of contact indicate strength of relationship. Further, the absence of a connection does not necessarily indicate that a relationship should be made.

      Very true; this new big-data world makes it easier to measure the value of connections between people. It also shows how vulnerable relationships can be. To some extent, social media is devastating to the primary ways of building relationships, since people can always seek faster, seemingly better connections, even when those are not high quality.