3,501 Matching Annotations
  1. Feb 2021
  2. Jan 2021
    1. Data analysis, and the parts of statistics which adhere to it, must…take on the characteristics of science rather than those of mathematics…

      Is data analysis included in data science? If not, what is the relationship between them?

    1. We could change the definition of Cons to hold references instead, but then we would have to specify lifetime parameters. By specifying lifetime parameters, we would be specifying that every element in the list will live at least as long as the entire list. The borrow checker wouldn’t let us compile let a = Cons(10, &Nil); for example, because the temporary Nil value would be dropped before a could take a reference to it.
    1. Why is CORS important? Currently, client-side scripts (e.g., JavaScript) are prevented from accessing much of the Web of Linked Data due to "same origin" restrictions implemented in all major Web browsers. While enabling such access is important for all data, it is especially important for Linked Open Data and related services; without this, our data simply is not open to all clients. If you have public data which doesn't require cookie- or session-based authentication to see, then please consider opening it up for universal JavaScript/browser access. For CORS access to anything other than simple, non-auth-protected resources…
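
      A minimal sketch of what "opening it up for universal JavaScript/browser access" can look like in practice: a public, non-authenticated resource served with the Access-Control-Allow-Origin response header. The Python standard-library server, handler name, port, and payload below are illustrative assumptions, not anything prescribed by the source.

      ```python
      # Illustrative only: a tiny HTTP server for a public dataset that any
      # browser origin may read. The payload and port are placeholders.
      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer

      class OpenDataHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              body = json.dumps({"dataset": "example", "rows": []}).encode()
              self.send_response(200)
              # The CORS-relevant line: allow any origin to read this
              # public, non-authenticated resource.
              self.send_header("Access-Control-Allow-Origin", "*")
              self.send_header("Content-Type", "application/json")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("", 8000), OpenDataHandler).serve_forever()
      ```
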
    1. Alongside the companies that gather data, there are newly powerful companies that build the tools for organizing, processing, accessing, and visualizing it—companies that don’t take in the traces of our common life but set the terms on which it is sorted and seen. The scraping of publicly available photos, for instance, and their subsequent labeling by low-paid human workers, served to train computer vision algorithms that Palantir can now use to help police departments cast a digital dragnet across entire populations. 

      organizing the mass of information is the real tricky part

  3. Dec 2020
    1. What is a data-originated component? It's a kind of component that is primarily designed and built for displaying, entering, or customizing the data content itself, rather than focusing on the form it takes. For example, Drawer is a non-data-originated component, although it may include some. Whereas Table, Form, or even Feed are good examples of data-originated components.
    1. ever transitioning from teaching high school to teaching the university then coming to the community college i've become very fascinated with kind of how students move from one to the other

      Interesting to see trends in the data and identify experiences that indicate continuity from high schools in the area to SPSCC (for instance, via Running Start), and then to SMU. What are the various pathways by which students who enrolled at SPSCC decide to apply, gain admission, and secure funding? What is the percentage of transfers from SPSCC to SMU?

    1. “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

      Data Provenance

      The discipline of thinking about:

      (1) Where did the data arise? (2) What inferences were drawn from it? (3) How relevant are those inferences to the present situation?

    2. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”

      Example of where a global system for inference on healthcare data fails due to a lack of data provenance.

    1. Treemaps are a visualization method for hierarchies based on enclosure rather than connection [JS91]. Treemaps make it easy to spot outliers (for example, the few large files that are using up most of the space on a disk) as opposed to parent-child structure.

      Treemaps visualize enclosure rather than connection. This makes them good visualizations to spot outliers (e.g. large files on a disk) but not for understanding parent-child relationships.
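
      A toy illustration of the outlier point, assuming the third-party squarify and matplotlib packages (neither is mentioned in the source); the file names and sizes are invented. The one oversized file dominates the treemap's area at a glance, while parent-child structure is not really conveyed.

      ```python
      # Toy treemap of file sizes: one outlier file takes up most of the area.
      # Assumes `pip install squarify matplotlib`; the data is made up.
      import matplotlib.pyplot as plt
      import squarify

      sizes = [9500, 300, 250, 120, 80, 40]  # hypothetical file sizes in MB
      labels = ["video.mkv", "db.sqlite", "logs", "photos", "docs", "misc"]

      squarify.plot(sizes=sizes, label=labels)
      plt.axis("off")
      plt.title("Disk usage treemap (toy data)")
      plt.show()
      ```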

    1. I haven't met anyone who makes this argument who then says that a one-stop, convenient, reliable, private and secure online learning environment can't be achieved using common everyday online systems

      Reliable: As a simple example, I'd trust Google to maintain data reliability over my institutional IT support.

      And you'd also need to make the argument for why learning needs to be "private", etc.

  4. Nov 2020
    1. Identify, classify, and apply protective measures to sensitive data. Data discovery and data classification solutions help to identify sensitive data and assign classification tags dictating the level of protection required. Data loss prevention solutions apply policy-based protections to sensitive data, such as encryption or blocking unauthorized actions, based on data classification and contextual factors including file type, user, intended recipient/destination, applications, and more. The combination of data discovery, classification, and DLP enables organizations to know what sensitive data they hold and where, while ensuring that it's protected against unauthorized loss or exposure.

      [[BEST PRACTICES FOR DATA EGRESS MANAGEMENT AND PREVENTING SENSITIVE DATA LOSS]]

    2. Egress filtering involves monitoring egress traffic to detect signs of malicious activity. If malicious activity is suspected or detected, transfers can be blocked to prevent sensitive data loss. Egress filtering can also limit egress traffic and block attempts at high volume data egress.
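
      A toy sketch of the volume-thresholding idea behind blocking high-volume egress. The flow records, 500 MB threshold, and crude "internal address" test are invented for illustration; this is not how any particular firewall or DLP product implements it.

      ```python
      # Toy sketch of volume-based egress filtering: flag internal hosts whose
      # outbound byte count in a window exceeds a threshold. Real egress
      # filtering happens in firewalls/proxies/DLP tools; this only shows the idea.
      from collections import defaultdict

      EGRESS_LIMIT_BYTES = 500 * 1024 * 1024  # hypothetical 500 MB per window

      def hosts_to_block(flow_records):
          """flow_records: iterable of (src_host, dst_host, bytes_sent)."""
          outbound = defaultdict(int)
          for src, dst, nbytes in flow_records:
              if not dst.startswith("10."):  # crude "external destination" test
                  outbound[src] += nbytes
          return [host for host, total in outbound.items() if total > EGRESS_LIMIT_BYTES]

      # Example: one host pushing a large volume to an external address gets flagged.
      flows = [("10.0.0.5", "10.0.0.9", 10_000),
               ("10.0.0.7", "203.0.113.20", 700 * 1024 * 1024)]
      print(hosts_to_block(flows))  # ['10.0.0.7']
      ```
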
    3. Data Egress vs. Data Ingress: While data egress describes the outbound traffic originating from within a network, data ingress, in contrast, refers to the reverse: traffic that originates outside the network and travels into it. Egress traffic is a term used to describe the volume and substance of traffic transferred from a host network to an outside network.

      [[DATA EGRESS VS. DATA INGRESS]]

    4. Data Egress Meaning: Data egress refers to data leaving a network in transit to an external location. Outbound email messages, cloud uploads, or files being moved to external storage are simple examples of data egress. Data egress is a regular part of network activity, but can pose a threat to organizations when sensitive data is egressed to unauthorized recipients. Examples of common channels for data egress include: email, web uploads, cloud storage, removable media (USB, CD/DVD, external hard drives), and FTP/HTTP transfers.

      [[Definition/Data Egress]]

    1. In-depth questions: The following interview questions enable the hiring manager to gain a comprehensive understanding of your competencies and assess how you would respond to issues that may arise at work:

      1. What are the most important skills for a data engineer to have?
      2. What data engineering platforms and software are you familiar with?
      3. Which computer languages can you use fluently?
      4. Do you tend to focus on pipelines, databases or both?
      5. How do you create reliable data pipelines?
      6. Tell us about a distributed system you've built. How did you engineer it?
      7. Tell us about a time you found a new use case for an existing database. How did your discovery impact the company positively?
      8. Do you have any experience with data modeling?
      9. What common data engineering maxim do you disagree with?
      10. Do you have a data engineering philosophy?
      11. What is a data-first mindset?
      12. How do you handle conflict with coworkers? Can you give us an example?
      13. Can you recall a time when you disagreed with your supervisor? How did you handle it?

      deeper dive into [[Data Engineer]] [[Interview Questions]]

    1. to be listed on Mastodon’s official site, an instance has to agree to follow the Mastodon Server Covenant which lays out commitments to “actively moderat[e] against racism, sexism, homophobia and transphobia”, have daily backups, grant more than one person emergency access, and notify people three months in advance of potential closure. These indirect methods are meant to ensure that most people who encounter a platform have a safe experience, even without the advantages of centralization.

      Some of these baseline protections are certainly a good idea. Advance notice of shutdown and daily backups are particularly valuable.

      I'd not know of the Mastodon Server Covenant before.

    1. The Hierarchy of Analytics: Among the many advocates who pointed out the discrepancy between the grinding aspect of data science and the rosier depictions that media sometimes portrayed, I especially enjoyed Monica Rogati's call out, in which she warned against companies who are eager to adopt AI: "Think of Artificial Intelligence as the top of a pyramid of needs. Yes, self-actualization (AI) is great, but you first need food, water, and shelter (data literacy, collection, and infrastructure)." This framework puts things into perspective.

      [[the hierarchy of analytics]]

    1. Maybe your dbt models depend on source data tables that are populated by Stitch ingest, or by heavy transform jobs running in Spark. Maybe the tables your models build are depended on by analysts building reports in Mode, or ML engineers running experiments using Jupyter notebooks. Whether you're a full-stack practitioner or a specialized platform team, you've probably felt the pain of trying to track dependencies across technologies and concerns. You need an orchestrator. Dagster lets you embed dbt into a wider orchestration graph.

      It can be common for [[data models]] to rely on other sources - where something like [[Dagster]] fits in is allowing your dbt project to fit into a wider [[orchestration graph]]

    2. We love dbt because of the values it embodies. Individual transformations are SQL SELECT statements, without side effects. Transformations are explicitly connected into a graph. And support for testing is first-class. dbt is hugely enabling for an important class of users, adapting software engineering principles to a slightly different domain with great ergonomics. For users who already speak SQL, dbt’s tooling is unparalleled.

      When using [[dbt]], the [[transformations]] are [[SQL statements]] - already something our team knows

    3. What is dbt? dbt was created by Fishtown Analytics to enable data analysts to build well-defined data transformations in an intuitive, testable, and versioned environment. Users build transformations (called models) defined in templated SQL. Models defined in dbt can refer to other models, forming a dependency graph between the transformations (and the tables or views they produce). Models are self-documenting, easy to test, and easy to run. And the dbt tooling can use the graph defined by models' dependencies to determine the ancestors and descendants of any individual model, so it's easy to know what to recompute when something changes.

      one of the [[benefits of [[dbt]]]] is that the [[data transformations]] or [[data models]] can refer to other models, and help show the [[dependency graph]] between transformations
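
      A toy illustration (not dbt's actual code) of the point above: once each model declares which models or sources it selects from, the resulting graph tells you everything downstream of a change, i.e. what needs to be recomputed. The model names are invented.

      ```python
      # Toy dependency graph between models; "raw.*" entries stand in for sources.
      MODEL_DEPS = {
          "stg_orders":      ["raw.orders"],
          "stg_customers":   ["raw.customers"],
          "orders_enriched": ["stg_orders", "stg_customers"],
          "revenue_report":  ["orders_enriched"],
      }

      def descendants(node, deps=MODEL_DEPS):
          """Models that directly or indirectly select from `node`."""
          out = set()
          for model, parents in deps.items():
              if node in parents:
                  out |= {model} | descendants(model, deps)
          return out

      # Changing stg_orders means rebuilding everything downstream of it.
      print(descendants("stg_orders"))  # {'orders_enriched', 'revenue_report'}
      ```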

    1. The attribution data model: In reality, it's impossible to know exactly why someone converted to being a customer. The best thing that we can do as analysts is provide a pretty good guess. In order to do that, we're going to use an approach called positional attribution. This means, essentially, that we're going to weight the importance of various touches (customer interactions with a brand) based on their position (the order they occur in within the customer's lifetime). To do this, we're going to build a table that represents every "touch" that someone had before becoming a customer, and the channel that led to that touch.

      One of the goals of an [[attribution data model]] is to understand why someone [[converted]] to being a customer. This is impossible to do accurately, but this is where analysis comes in.

      There are some [[approaches to attribution]], one of those is [[positional attribution]]

      [[positional attribution]] means weighting the importance of touch points - or customer interactions - based on their position within the customer's lifetime.
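
      A minimal pandas sketch of [[positional attribution]] as described above. The touch data and the 40/20/40 (U-shaped) weighting are invented for illustration; this shows the idea, not the article's exact model.

      ```python
      # Weight each touch by its position in the customer's journey
      # (40% first, 40% last, 20% split across the middle), then credit
      # channels with those weights. Data and weights are illustrative.
      import pandas as pd

      touches = pd.DataFrame({
          "customer": ["a", "a", "a", "b", "b"],
          "order":    [1, 2, 3, 1, 2],  # position of the touch in the journey
          "channel":  ["paid_search", "email", "organic", "social", "email"],
      })

      def positional_weights(n):
          if n == 1:
              return [1.0]
          if n == 2:
              return [0.5, 0.5]
          middle = 0.2 / (n - 2)
          return [0.4] + [middle] * (n - 2) + [0.4]

      touches = touches.sort_values(["customer", "order"])
      touches["weight"] = touches.groupby("customer")["order"].transform(
          lambda s: pd.Series(positional_weights(len(s)), index=s.index)
      )
      print(touches.groupby("channel")["weight"].sum())
      ```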

    2. Marketers have been told that attribution is a data problem -- “Just get the data and you can have full knowledge of what’s working!” -- when really it’s a data modeling problem. The logic of your attribution model, what the data represents about your business, is as important as the data volume. And the logic is going to change based on your business. That’s why so many attribution products come up short.

      [[attribution isn't a data problem, it's a data modeling problem]] - it's not just the data, but what the data represents about your business.

    1. I increasingly don't care for the world of centralized software. Software interacts with my data, on my computers. It's about time my software reflected that relationship. I want my laptop and my phone to share my files over my wifi. Not by uploading all my data to servers in another country. Especially if those servers are financed by advertisers bidding for my eyeballs.
  5. Oct 2020
    1. This is until you realize you're probably using at least ten different services, and they all have different purposes, with various kinds of data, endpoints and restrictions. Even if you have the capacity and are willing to do it, it's still damn hard.
    2. Hopefully we can agree that the current situation isn't so great. But I am a software engineer. And chances are that if you're reading this, you're very likely a programmer as well. Surely we can deal with that and implement it, right? Kind of, but it's really hard to retrieve data created by you.
    1. (d) All calculations shown in this appendix shall be implemented on a site-level basis. Site level concentration data shall be processed as follows: (1) The default dataset for PM2.5 mass concentrations for a site shall consist of the measured concentrations recorded from the designated primary monitor(s). All daily values produced by the primary monitor are considered part of the site record; this includes all creditable samples and all extra samples. (2) Data for the primary monitors shall be augmented as much as possible with data from collocated monitors. If a valid daily value is not produced by the primary monitor for a particular day (scheduled or otherwise), but a value is available from a collocated monitor, then that collocated value shall be considered part of the combined site data record. If more than one collocated daily value is available, the average of those valid collocated values shall be used as the daily value. The data record resulting from this procedure is referred to as the “combined site data record.”
      1. Calculate mean of all collocated NON-primary monitors' values per day
      2. Coalesce primary monitor value with this calculated mean
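
      A small pandas sketch of those two steps, with made-up column names and values: average the collocated (non-primary) monitors per day, then coalesce so the primary monitor's value wins whenever it exists.

      ```python
      # Sketch of the "combined site data record": primary value when available,
      # otherwise the mean of collocated monitors for that day. Data is invented.
      import pandas as pd

      daily = pd.DataFrame({
          "date":    ["2020-01-01", "2020-01-01", "2020-01-02", "2020-01-02"],
          "monitor": ["primary", "collocated", "primary", "collocated"],
          "pm25":    [8.1, 7.9, None, 9.4],  # primary has no valid value on 01-02
      })

      primary = daily[daily["monitor"] == "primary"].set_index("date")["pm25"]
      collocated_mean = (daily[daily["monitor"] != "primary"]
                         .groupby("date")["pm25"].mean())

      # Coalesce: keep the primary value where it exists, else the collocated mean.
      combined = primary.combine_first(collocated_mean)
      print(combined)  # 2020-01-01 -> 8.1 (primary), 2020-01-02 -> 9.4 (collocated)
      ```
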
    1. Legislation to stem the tide of Big Tech companies' abuses, and laws—such as a national consumer privacy bill, an interoperability bill, or a bill making firms liable for data-breaches—would go a long way toward improving the lives of the Internet users held hostage inside the companies' walled gardens. But far more important than fixing Big Tech is fixing the Internet: restoring the kind of dynamism that made tech firms responsive to their users for fear of losing them, restoring the dynamic that let tinkerers, co-ops, and nonprofits give every person the power of technological self-determination.
    1. More conspicuously, since Trump’s election, the RNC — at his campaign’s direction — has excluded critical “voter scores” on the president from the analytics it routinely provides to GOP candidates and committees nationwide, with the aim of electing down-ballot Republicans. Republican consultants say the Trump information is being withheld for two reasons: to discourage candidates from distancing themselves from the president, and to avoid embarrassing him with poor results that might leak. But they say its concealment harms other Republicans, forcing them to campaign without it or pay to get the information elsewhere.
    1. you can then use “Sign In with Google” to access the publisher’s products, but Google does the billing, keeps your payment method secure, and makes it easy for you to manage your subscriptions all in one place.  

      I immediately wonder who owns my related subscription data? Is the publisher only seeing me as a lumped Google proxy or do they get my name, email address, credit card information, and other details?

      How will publishers be able (or not) to contact me? What effect will this have on potential customer retention?

    1. Methodology To determine the link between heat and income in U.S. cities, NPR used NASA satellite imagery and U.S. Census American Community Survey data. An open-source computer program developed by NPR downloaded median household income data for census tracts in the 100 most populated American cities, as well as geographic boundaries for census tracts. NPR combined these data with TIGER/Line shapefiles of the cities.

      This is an excellent example of data journalism.

    1. 1.1. Monitors: For the purposes of AQS, a monitor does not refer to a specific piece of equipment. Instead, it reflects that a given pollutant (or other parameter) is being measured at a given site. A monitor is identified by the site (state + county + site number) where it is located, the pollutant code, and the POC (Parameter Occurrence Code), which is used to uniquely identify a monitor if there is more than one device measuring the same pollutant at the same site. Monitor IDs are usually written in the following way: SS-CCC-NNNN-PPPPP-Q, where SS is the State FIPS code, CCC is the County FIPS code, NNNN is the Site Number within the county (leading zeroes are always included for these fields), PPPPP is the AQS 5-digit parameter code, and Q is the POC. For example: 01-089-0014-44201-2 is Alabama, Madison County, Site Number 14, ozone monitor, POC 2.

      How monitors (specific measures of specific criteria) are identified in AQS data.
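
      A small helper (the function name is ours, not from AQS) that splits a monitor ID of the form SS-CCC-NNNN-PPPPP-Q into its named parts, using the example ID from the quote.

      ```python
      # Split an AQS monitor ID into its components, as described above.
      def parse_monitor_id(monitor_id: str) -> dict:
          state, county, site, parameter, poc = monitor_id.split("-")
          return {
              "state_fips": state,          # e.g. 01 = Alabama
              "county_fips": county,        # e.g. 089 = Madison County
              "site_number": site,          # leading zeroes preserved
              "parameter_code": parameter,  # e.g. 44201 = ozone
              "poc": int(poc),              # Parameter Occurrence Code
          }

      print(parse_monitor_id("01-089-0014-44201-2"))
      ```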

  6. Sep 2020
    1. "The Data Visualisation Catalogue is a project developed by Severino Ribecca to create a library of different information visualisation types." I like the explanations of when one might use a particular type of data visualization to highlight - or obscure! - what the data is saying.

    1. Facebook ignored or was slow to act on evidence that fake accounts on its platform have been undermining elections and political affairs around the world, according to an explosive memo sent by a recently fired Facebook employee and obtained by BuzzFeed News. The 6,600-word memo, written by former Facebook data scientist Sophie Zhang, is filled with concrete examples of heads of government and political parties in Azerbaijan and Honduras using fake accounts or misrepresenting themselves to sway public opinion. In countries including India, Ukraine, Spain, Brazil, Bolivia, and Ecuador, she found evidence of coordinated campaigns of varying sizes to boost or hinder political candidates or outcomes, though she did not always conclude who was behind them.
    1. Nic Fildes in London and Javier Espinoza in Brussels, April 8 2020: When the World Health Organization launched a 2007 initiative to eliminate malaria on Zanzibar, it turned to an unusual source to track the spread of the disease between the island and mainland Africa: mobile phones sold by Tanzania's telecoms groups including Vodafone, the UK mobile operator. Working together with researchers at Southampton university, Vodafone began compiling sets of location data from mobile phones in the areas where cases of the disease had been recorded. Mapping how populations move between locations has proved invaluable in tracking and responding to epidemics. The Zanzibar project has been replicated by academics across the continent to monitor other deadly diseases, including Ebola in west Africa. "Diseases don't respect national borders," says Andy Tatem, an epidemiologist at Southampton who has worked with Vodafone in Africa. "Understanding how diseases and pathogens flow through populations using mobile phone data is vital."
      the best way to track the spread of the pandemic is to use heatmaps built on data of multiple phones which, if overlaid with medical data, can predict how the virus will spread and determine whether government measures are working.
      
    1. How to Export Your Content: If you log into Graphite before August 15th, you can download each file in any of the available formats offered. If you'd like a bulk download, I recommend (for the technically inclined) using the exporter tool I created. For those less technically inclined, Blockstack may have some options for you. Remember, Graphite never owned your content. Never had control of your content. And that was the real power of its offering.
    1. Had it not been for the attentiveness of one person who went beyond the task of classifying galaxies into predetermined categories and was able to communicate this to the researchers via the online forum, what turned out to be important new phenomena might have gone undiscovered.

      Sometimes our attempts to improve data quality in citizen science projects can actually work against us. Pre-determined categories and strict regulations could prevent the reporting of important outliers.