3,441 Matching Annotations
  1. Apr 2021
    1. The privacy policy — unlocking the door to your profile information, geodata, camera, and in some cases emails — is so disturbing that it has set off alarms even in the tech world.

      This Intercept article covers some of the specific privacy policy concerns Barron hints at here. The discussion of one of the core patents underlying the game, described as a “System and Method for Transporting Virtual Objects in a Parallel Reality Game,” is particularly interesting. Essentially, this system generates revenue for the company (in this case Niantic and Google) through the gamified collection of data on the real world: that selfie you took with Squirtle is starting to feel a little less innocent in retrospect...

    2. Yelp, like Google, makes money by collecting consumer data and reselling it to advertisers.

      This sentence reminded me of our "privacy checkup" activity from week 7 and has made me want to go review the terms of service for some of the companies featured in this article. I don't use Yelp, but Venmo and Lyft are definitely keeping track of some of my data.

    1. The insertion of an algorithm’s predictions into the patient-physician relationship also introduces a third party, turning the relationship into one between the patient and the health care system. It also means significant changes in terms of a patient’s expectation of confidentiality. “Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren’t recorded can’t benefit from machine-learning analyses,” the authors wrote.

      There is some work being done on federated learning, where the model is trained on decentralised data that stays in place with the patient: the ML model is brought to the patient's data rather than the reverse, so that their data remains private.
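      A minimal sketch of the idea (hypothetical: a linear model, plain NumPy, and FedAvg-style weight averaging). Each patient's data stays put; only model weights travel.

      ```python
      import numpy as np

      def local_update(weights, X, y, lr=0.1, epochs=5):
          """Train on one patient's private data; the data never leaves."""
          w = weights.copy()
          for _ in range(epochs):
              w -= lr * X.T @ (X @ w - y) / len(y)  # least-squares gradient step
          return w

      def federated_average(weights, patient_datasets):
          """One round: ship the model out, average the returned weights."""
          return np.mean([local_update(weights, X, y)
                          for X, y in patient_datasets], axis=0)

      # toy demo: three "patients", each holding private (X, y) data
      rng = np.random.default_rng(0)
      true_w = np.array([2.0, -1.0])
      patients = [(X, X @ true_w + rng.normal(scale=0.1, size=20))
                  for X in (rng.normal(size=(20, 2)) for _ in range(3))]

      w = np.zeros(2)
      for _ in range(50):
          w = federated_average(w, patients)
      print(w)  # approaches [2.0, -1.0] without ever pooling raw records
      ```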

  2. Mar 2021
    1. The scholars Nick Couldry and Ulises Mejias have called it “data colonialism,” a term that reflects our inability to stop our data from being unwittingly extracted.

      I've not run across data colonialism before.

    1. Visualise written content in a more dynamic way. Many people, some neurodivergent folks especially, benefit from information being distilled into diagrams, comics, or less word-dense formats. Visuals can also benefit people who might not read or understand the language you wrote it in. They can also be an effective lead-in to your long-form content from visually-driven avenues like Pinterest or Instagram.

      This is also a great exercise for readers and learners. If the book doesn't do this for you already, spend some time annotating it or creating the visuals yourself.

    1. The repository also contains the datasets used in our experiments, in JSON format. These are in the data folder.
    1. In computer science, a tree is a widely used abstract data type that simulates a hierarchical tree structure

      a tree (data structure) is the computer science analogue of the tree structure in mathematics
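      A minimal sketch of the abstract data type (names illustrative):

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class TreeNode:
          value: object
          children: list = field(default_factory=list)

          def add(self, value):
              child = TreeNode(value)
              self.children.append(child)
              return child

          def depth_first(self):
              """Yield values in pre-order: node first, then each subtree."""
              yield self.value
              for child in self.children:
                  yield from child.depth_first()

      # one root, every node has one parent, no cycles
      root = TreeNode("/")
      home = root.add("home")
      home.add("alice")
      home.add("bob")
      print(list(root.depth_first()))  # ['/', 'home', 'alice', 'bob']
      ```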

    1. graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects
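      In code, the pairwise relations are typically stored as an adjacency list; a minimal sketch:

      ```python
      from collections import defaultdict

      class Graph:
          """Undirected graph: vertex -> set of adjacent vertices."""
          def __init__(self):
              self.adj = defaultdict(set)

          def add_edge(self, u, v):
              self.adj[u].add(v)
              self.adj[v].add(u)  # the relation is symmetric in an undirected graph

      g = Graph()
      g.add_edge("Alice", "Bob")
      g.add_edge("Bob", "Carol")
      print(g.adj["Bob"])  # {'Alice', 'Carol'}
      ```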
    1. Ashish K. Jha, MD, MPH. (2020, December 12). Michigan vs. Ohio State Football today postponed due to COVID But a comparison of MI vs OH on COVID is useful Why? While vaccines are coming, we have 6-8 hard weeks ahead And the big question is—Can we do anything to save lives? Lets look at MI, OH for insights Thread [Tweet]. @ashishkjha. https://twitter.com/ashishkjha/status/1337786831065264128

    1. via hyperlink.academy in The Future of Textbooks (03/18/2021 23:54:19)

    1. The urgent argument for turning any company into a software company is the growing availability of data, both inside and outside the enterprise. Specifically, the implications of so-called “big data”—the aggregation and analysis of massive data sets, especially mobile

      Every company is described by a set of data: financial and other operational metrics, alongside message exchanges and paper documents. Whatever else we find that contributes to the simulacrum of an economic narrative will inevitably be constrained by the constitutive forces of its source data.

    1. a data donation platform that allows users of browsers to donate data on their usage of specific services (eg Youtube, or Facebook) to a platform.

      This seems like a really promising pattern for many data-driven problems. Browsers could support opt-in donation, letting users contribute their data to improve Web search, social media, recommendations, and many other services that implicitly require large amounts of operational data.

    1. DataBeers Brussels. (2020, October 26). ⏰ Our next #databeers #brussels is tomorrow night and we’ve got a few tickets left! Don’t miss out on some important and exciting talks from: 👉 @svscarpino 👉 Juami van Gils 👉 Joris Renkens 👉 Milena Čukić 🎟️ Last tickets here https://t.co/2upYACZ3yS https://t.co/jEzLGvoxQe [Tweet]. @DataBeersBru. https://twitter.com/DataBeersBru/status/1320743318234562561

    1. Cailin O’Connor. (2020, November 10). New paper!!! @psmaldino look at what causes the persistence of poor methods in science, even when better methods are available. And we argue that interdisciplinary contact can lead better methods to spread. 1 https://t.co/C5beJA5gMi [Tweet]. @cailinmeister. https://twitter.com/cailinmeister/status/1326221893372833793

    1. These methods should be used with caution, however, because important business rules and application logic may be kept in callbacks. Bypassing them without understanding the potential implications may lead to invalid data.
    1. Erich Neuwirth. (2020, November 11). #COVID19 #COVID19at https://t.co/9uudp013px My report today requires some preliminary remarks. The EMS (the system the data on positive tests comes from) apparently has considerable problems. Many cases were reported late today. In Vienna, according to this [Tweet]. @neuwirthe. https://twitter.com/neuwirthe/status/1326556742113746950

  3. Feb 2021
    1. Data on blockchains are different from data on the Internet, and in one important way in particular. On the Internet most of the information is malleable and fleeting. The exact date and time of its publication isn't critical to past or future information. On a blockchain, the truth of the present relies on the details of the past. Bitcoins moving across the network have been permanently stamped from the moment of their coinage.

      data on blockchain vs internet
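      A toy hash chain shows why: each block's stamp commits to the entire past, so editing history invalidates every later stamp (a sketch, not Bitcoin's actual data structures):

      ```python
      import hashlib, json, time

      def block_hash(body):
          return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

      def make_block(data, prev_hash):
          block = {"time": time.time(), "data": data, "prev": prev_hash}
          block["hash"] = block_hash(block)
          return block

      def verify(chain):
          for i, cur in enumerate(chain):
              body = dict(cur)
              claimed = body.pop("hash")
              if claimed != block_hash(body):
                  return False  # block contents no longer match their stamp
              if i > 0 and cur["prev"] != chain[i - 1]["hash"]:
                  return False  # link to the past is broken
          return True

      chain = [make_block("genesis", "0")]
      chain.append(make_block("alice -> bob: 1 coin", chain[-1]["hash"]))
      print(verify(chain))           # True
      chain[0]["data"] = "tampered"  # rewriting the past...
      print(verify(chain))           # False
      ```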

    1. Trailblazer will automatically create a new Context object around your custom input hash. You can write to that without interfering with the original context.
    1. Purely functional programming may also be defined by forbidding state changes and mutable data.
    2. Purely functional data structures are persistent. Persistence is required for functional programming; without it, the same computation could return different results.
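      A minimal illustration of persistence in Python ("updates" build new versions that share structure; nothing already built ever changes):

      ```python
      from typing import NamedTuple, Optional

      class Node(NamedTuple):  # immutable by construction
          head: object
          tail: Optional["Node"]

      def prepend(lst, value):
          """O(1) 'update': the old list is untouched and still valid."""
          return Node(value, lst)

      xs = prepend(prepend(None, 2), 1)  # the list [1, 2]
      ys = prepend(xs, 0)                # the list [0, 1, 2]
      print(ys.tail is xs)  # True: structural sharing, not copying
      print(xs)             # unchanged: same computation, same result
      ```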
    1. What this means is: I better refrain from writing a new book and we rather focus on more and better docs.

      I'm glad. I didn't like that the book (which is essentially a form of documentation/tutorial) was proprietary.

      I think it's better to make documentation and tutorials community-driven, free content

    2. Rather, data is passed around from operation to operation, from step to step. We use OOP and inheritance solely for compile-time configuration. You define classes, steps, tracks and flows, inherit those, and customize them using Ruby’s built-in mechanics, but this all happens at compile time. At runtime, no structures are changed anymore; your code is executed dynamically, but only the ctx (formerly options) and its objects are mutated. This massively improves the code quality and, with it, the runtime stability.
    1. Miguel Andariego commented at this moment:

      It would be better to start with DuckDuckGo so as not to fully promote the giant.

    1. Kit Yates. (2021, January 22). Is this lockdown 3.0 as tough as lockdown 1? Here are a few pieces of data from the @IndependentSage briefing which suggest that despite tackling a much more transmissible virus, lockdown is less strict, which might explain why we are only just keeping on top of cases. [Tweet]. @Kit_Yates_Maths. https://twitter.com/Kit_Yates_Maths/status/1352662085356937216

    1. A fairly comprehensive list of problems and limitations that are often encountered with data as well as suggestions about who should be responsible for fixing them (from a journalistic perspective).

    2. Benford’s Law is a theory which states that small digits (1, 2, 3) appear at the beginning of numbers much more frequently than large digits (7, 8, 9). In theory Benford’s Law can be used to detect anomalies in accounting practices or election results, though in practice it can easily be misapplied. If you suspect a dataset has been created or modified to deceive, Benford’s Law is an excellent first test, but you should always verify your results with an expert before concluding your data has been manipulated.

      This is a relatively good explanation of Benford's law.

      I've come across the theory in advanced math, but I'm forgetting where I saw the proof. p-adic analysis perhaps? Look this up.
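      The first-pass test itself fits in a few lines: compare observed leading-digit frequencies with the expected log10(1 + 1/d). (Per the quote, a flagged anomaly still needs expert verification.)

      ```python
      import math
      from collections import Counter

      def leading_digit_freqs(values):
          digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
          counts = Counter(digits)
          return {d: counts[d] / len(digits) for d in range(1, 10)}

      # powers of 2 are a classic Benford-conforming sequence
      observed = leading_digit_freqs([2 ** n for n in range(1, 200)])
      for d in range(1, 10):
          print(d, f"observed={observed[d]:.3f}",
                f"expected={math.log10(1 + 1/d):.3f}")
      ```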

    3. More journalistic outlets should publish data explainers about where their data and analysis come from so that readers can double-check it.

    4. There is no worse way to screw up data than to let a single human type it in.
    1. In America, the number of searches at the time of the lockdown in 2020 for boredom rose by 57 percent, loneliness by 16 percent and worry by 12 percent.

    2. in Europe at the time of the lockdown in 2020 for boredom rose by 93 percent, loneliness 40 percent and worry 27 percent

    1. Cytoscape is an open source software platform for visualizing complex networks and integrating these with any type of attribute data. A lot of Apps are available for various kinds of problem domains, including bioinformatics, social network analysis, and semantic web.
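      If the network already lives in Python, networkx can serialise it, attribute data included, into Cytoscape's JSON format for import (a sketch assuming networkx's `cytoscape_data` helper; the node and edge attributes are illustrative):

      ```python
      import json
      import networkx as nx

      # a small network with attribute data on nodes and edges
      G = nx.Graph()
      G.add_node("TP53", kind="gene")
      G.add_node("MDM2", kind="gene")
      G.add_edge("TP53", "MDM2", interaction="inhibits")

      # write Cytoscape-readable JSON (.cyjs)
      with open("network.cyjs", "w") as f:
          json.dump(nx.cytoscape_data(G), f, indent=2)
      ```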
  4. Jan 2021
    1. 8/10 to 14/10 - Reading of the Module 8 texts; 14/10, 19:00 to 20:30 - In-person meeting to discuss the

      I believe the dates are wrong. What would the correct ones be?

    1. Data analysis, and the parts of statistics which adhere to it, must…take on the characteristics of science rather than those of mathematics…

      Is data analysis included in data science? If not, what is the relationship between them?

    1. This paper identifies five reasons to follow the money in health care. These reasons are applicable to social services and other areas of philanthropy as well.

    1. ReconfigBehSci on Twitter: ‘RT @NatureNews: COVID curbed carbon emissions in 2020—But not by much, and new data show global CO2 emissions have rebounded: Https://t.c…’ / Twitter. (n.d.). Retrieved 20 January 2021, from https://twitter.com/SciBeh/status/1351840770823757824

    1. We could change the definition of Cons to hold references instead, but then we would have to specify lifetime parameters. By specifying lifetime parameters, we would be specifying that every element in the list will live at least as long as the entire list. The borrow checker wouldn’t let us compile let a = Cons(10, &Nil); for example, because the temporary Nil value would be dropped before a could take a reference to it.
    1. Why is CORS important? Currently, client-side scripts (e.g., JavaScript) are prevented from accessing much of the Web of Linked Data due to "same origin" restrictions implemented in all major Web browsers. While enabling such access is important for all data, it is especially important for Linked Open Data and related services; without this, our data simply is not open to all clients. If you have public data which doesn't require cookie- or session-based authentication to see, then please consider opening it up for universal JavaScript/browser access. For CORS access to anything other than simple, non-auth-protected resources
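      What "opening it up" takes on the server side can be as small as one header; a sketch using only Python's standard library (for simple, public, non-authenticated resources):

      ```python
      from http.server import HTTPServer, SimpleHTTPRequestHandler

      class CORSHandler(SimpleHTTPRequestHandler):
          def end_headers(self):
              # let scripts from any origin read this public data
              self.send_header("Access-Control-Allow-Origin", "*")
              super().end_headers()

      # serves the current directory at http://localhost:8000 with CORS enabled
      HTTPServer(("", 8000), CORSHandler).serve_forever()
      ```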
    1. Likewise, privacy is an important issue in BCI ethics since the captured neural signals can be used to gain access to a user’s private information. Ethicists have raised concerns about how BCI data is stored and protected.
    1. Alongside the companies that gather data, there are newly powerful companies that build the tools for organizing, processing, accessing, and visualizing it—companies that don’t take in the traces of our common life but set the terms on which it is sorted and seen. The scraping of publicly available photos, for instance, and their subsequent labeling by low-paid human workers, served to train computer vision algorithms that Palantir can now use to help police departments cast a digital dragnet across entire populations. 

      Organizing the mass of information is the really tricky part.

  5. Dec 2020
    1. What is a data-originated component? It’s a kind of component that is primarily designed and built for displaying, entering, or customizing given data content itself, rather than focusing on the form it takes. For example, Drawer is a non-data-originated component, although it may include some. Table, Form, or even Feed, on the other hand, are good examples of data-originated components.
    1. ever transitioning from teaching high school to teaching at the university then coming to the community college i've become very fascinated with kind of how students move from one to the other

      Interesting to see trends in data and identify experiences that indicate continuity from high schools in the area to SPSCC (for instance via Running Start), and then to SMU. What are the various pathways by which students who enrolled at SPSCC decide to apply, gain admission, and obtain funding? What is the percentage of transfers from SPSCC to SMU?

    1. “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

      Data Provenance

      The discipline of thinking about:

      1. Where did the data arise?
      2. What inferences were drawn from the data?
      3. How relevant are those inferences to the present situation?

    2. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”

      Example of where a global system for inference on healthcare data fails due to a lack of data provenance.
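      One way to make the provenance note concrete: carry the three questions as metadata alongside every inference, so downstream code can refuse to apply an inference out of context. A hypothetical record shape (field names are mine, not the article's):

      ```python
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Provenance:
          source: str     # where did the data arise?
          inference: str  # what was concluded from it?
          context: str    # conditions under which the inference holds

      record = Provenance(
          source="UK ultrasound study, 1990s, lower-resolution imaging",
          inference="white spots near the heart predict Down syndrome",
          context="valid only for machines of comparable pixel density",
      )
      # the story above is a missing context check: a newer, sharper machine
      # fell outside the conditions under which the inference was established
      ```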

    1. Treemaps are a visualization method for hierarchies based on enclosure rather than connection [JS91]. Treemaps make it easy to spot outliers (for example, the few large files that are using up most of the space on a disk) as opposed to parent-child structure.

      Treemaps visualize enclosure rather than connection. This makes them good visualizations to spot outliers (e.g. large files on a disk) but not for understanding parent-child relationships.
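      One level of the classic slice-and-dice layout fits in a few lines; recursing into each rectangle with the axis flipped yields the full treemap (an illustrative sketch):

      ```python
      def slice_layout(items, x, y, w, h, horizontal=True):
          """Partition rect (x, y, w, h) among items = [(label, size), ...]."""
          total = sum(size for _, size in items)
          rects, offset = [], 0.0
          for label, size in items:
              frac = size / total
              if horizontal:
                  rects.append((label, x + offset * w, y, frac * w, h))
              else:
                  rects.append((label, x, y + offset * h, w, frac * h))
              offset += frac
          return rects

      # files on a disk: the one huge file is instantly visible as a huge box
      files = [("video.mp4", 700), ("backup.tar", 150), ("notes.txt", 2)]
      for label, rx, ry, rw, rh in slice_layout(files, 0, 0, 100, 100):
          print(f"{label:11s} {rw:5.1f} x {rh:5.1f} at ({rx:5.1f}, {ry:5.1f})")
      ```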

    1. One way to do that is to export them from @sapper/app directly, and rely on the fact that we can reset them immediately before server rendering to ensure that session data isn't accidentally leaked between two users accessing the same server.
    1. ReconfigBehSci @SciBeh (2020) For those who might think this issue isn't settled yet, the piece include below has further graphs indicating just how much "protecting the economy" is associated with "keeping the virus under control" Twitter. Retrieved from: https://twitter.com/i/web/status/1306216113722871808

    1. I haven't met anyone who makes this argument who then says that a one stop convenient, reliable, private and secure online learning environment can’t be achieved using common every day online systems

      Reliable: As a simple example, I'd trust Google to maintain data reliability over my institutional IT support.

      And you'd also need to make the argument for why learning needs to be "private", etc.

    1. And then there was what Lanier calls “data dignity”; he once wrote a book about it, called Who Owns the Future? The idea is simple: What you create, or what you contribute to the digital ether, you own.

      See Tim Berners-Lee's SOLID project.

  6. Nov 2020
    1. Identify, classify, and apply protective measures to sensitive data. Data discovery and data classification solutions help to identify sensitive data and assign classification tags dictating the level of protection required. Data loss prevention solutions apply policy-based protections to sensitive data, such as encryption or blocking unauthorized actions, based on data classification and contextual factors including file type, user, intended recipient/destination, applications, and more. The combination of data discovery, classification, and DLP enable organizations to know what sensitive data they hold and where while ensuring that it's protected against unauthorized loss or exposure.

      [[BEST PRACTICES FOR DATA EGRESS MANAGEMENT AND PREVENTING SENSITIVE DATA LOSS]]
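      A toy sketch of the classify-then-protect pipeline (patterns, tags, and policies are hypothetical):

      ```python
      import re

      # discovery/classification: tag content by the sensitive data it contains
      PATTERNS = {
          "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
          "credit_card": re.compile(r"\b\d{4}(?:[ -]?\d{4}){3}\b"),
      }

      # policy: classification tag -> required protective action
      POLICY = {"ssn": "encrypt", "credit_card": "block", None: "allow"}

      def classify(text):
          for tag, pattern in PATTERNS.items():
              if pattern.search(text):
                  return tag
          return None

      def enforce(text):
          return POLICY[classify(text)]

      print(enforce("order #1234 shipped"))       # allow
      print(enforce("SSN on file: 078-05-1120"))  # encrypt
      ```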

    2. Egress filtering involves monitoring egress traffic to detect signs of malicious activity. If malicious activity is suspected or detected, transfers can be blocked to prevent sensitive data loss. Egress filtering can also limit egress traffic and block attempts at high volume data egress.
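      The volume-limiting part can be sketched as a per-host byte budget (threshold and names illustrative):

      ```python
      from collections import defaultdict

      class EgressFilter:
          """Block outbound transfers from hosts that exceed a byte budget."""
          def __init__(self, limit_bytes=100 * 1024 * 1024):  # per monitoring window
              self.limit = limit_bytes
              self.sent = defaultdict(int)

          def allow(self, host, nbytes):
              if self.sent[host] + nbytes > self.limit:
                  return False  # suspected high-volume egress: block and alert
              self.sent[host] += nbytes
              return True

      f = EgressFilter(limit_bytes=1_000_000)
      print(f.allow("10.0.0.5", 900_000))  # True
      print(f.allow("10.0.0.5", 200_000))  # False: over budget
      ```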