  1. Mar 2021
    1. Visualise written content in a more dynamic way. Many people, neurodivergent folks especially, benefit from information being distilled into diagrams, comics, or other less word-dense formats. Visuals can also reach people who might not read or understand the language you wrote in, and they can be an effective lead-in to your long-form writing from visually driven avenues like Pinterest or Instagram.

      This is also a great exercise for readers and learners. If the book doesn't do this for you already, spend some time annotating it or making the visuals yourself.

    1. Ashish K. Jha, MD, MPH. (2020, December 12). Michigan vs. Ohio State Football today postponed due to COVID But a comparison of MI vs OH on COVID is useful Why? While vaccines are coming, we have 6-8 hard weeks ahead And the big question is—Can we do anything to save lives? Lets look at MI, OH for insights Thread [Tweet]. @ashishkjha. https://twitter.com/ashishkjha/status/1337786831065264128

    1. The urgent argument for turning any company into a software company is the growing availability of data, both inside and outside the enterprise. Specifically, the implications of so-called “big data”—the aggregation and analysis of massive data sets, especially mobile…

      Every company is described by a set of data: financial and other operational metrics, alongside message exchanges and paper documents. Whatever else we find that contributes to the simulacrum of an economic narrative will inevitably be constrained by the constitutive forces of its source data.

    1. DataBeers Brussels. (2020, October 26). ⏰ Our next #databeers #brussels is tomorrow night and we’ve got a few tickets left! Don’t miss out on some important and exciting talks from: 👉 @svscarpino 👉 Juami van Gils 👉 Joris Renkens 👉 Milena Čukić 🎟️ Last tickets here https://t.co/2upYACZ3yS https://t.co/jEzLGvoxQe [Tweet]. @DataBeersBru. https://twitter.com/DataBeersBru/status/1320743318234562561

  2. Feb 2021
    1. Data on blockchains are different from data on the Internet, and in one important way in particular. On the Internet most of the information is malleable and fleeting. The exact date and time of its publication isn't critical to past or future information. On a blockchain, the truth of the present relies on the details of the past. Bitcoins moving across the network have been permanently stamped from the moment of their coinage.

      data on blockchain vs internet
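
      A minimal sketch of that "present relies on the past" property, with std's DefaultHasher standing in for Bitcoin's SHA-256 and an illustrative block layout:

      ```rust
      use std::collections::hash_map::DefaultHasher;
      use std::hash::{Hash, Hasher};

      // Illustrative block: real chains hash a precise wire format with
      // SHA-256; here any Hash-able struct stands in for that.
      #[derive(Hash)]
      struct Block {
          timestamp: u64,  // the moment of "coinage"
          data: String,    // e.g. a coin moving across the network
          prev_hash: u64,  // commitment to the entire chain so far
      }

      fn hash_block(b: &Block) -> u64 {
          let mut h = DefaultHasher::new();
          b.hash(&mut h);
          h.finish()
      }

      fn main() {
          let genesis = Block { timestamp: 0, data: "coinbase".into(), prev_hash: 0 };
          let next = Block {
              timestamp: 1,
              data: "A pays B".into(),
              prev_hash: hash_block(&genesis),
          };
          // Editing `genesis` after the fact changes its hash, so it no longer
          // matches `next.prev_hash`: the past is permanently stamped in.
          println!("{:x} -> {:x}", hash_block(&genesis), hash_block(&next));
      }
      ```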

    1. What this means is: I better refrain from writing a new book and we rather focus on more and better docs.

      I'm glad. I didn't like that the book (which is essentially a form of documentation/tutorial) was proprietary.

      I think it's better to make documentation and tutorials community-driven, free content.

    2. Rather, data is passed around from operation to operation, from step to step. We use OOP and inheritance solely for compile-time configuration. You define classes, steps, tracks and flows, inherit those, customize them using Ruby’s built-in mechanics, but this all happens at compile time. At runtime, no structures are changed anymore; your code is executed dynamically but only the ctx (formerly options) and its objects are mutated. This massively improves the code quality and, with it, the runtime stability.
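
      Not Trailblazer's actual API, but a sketch of the same runtime discipline translated to Rust: the step sequence is wired up once, and at runtime only the ctx is mutated:

      ```rust
      use std::collections::HashMap;

      // One operation = a fixed sequence of steps chosen ahead of time
      // (Trailblazer does this wiring at class-definition time in Ruby).
      // At runtime only `ctx` changes; the step list never does.
      type Ctx = HashMap<String, String>;
      type Step = fn(&mut Ctx) -> bool;

      fn validate(ctx: &mut Ctx) -> bool {
          ctx.contains_key("params")
      }

      fn persist(ctx: &mut Ctx) -> bool {
          ctx.insert("model".into(), "saved".into());
          true
      }

      fn run(steps: &[Step], ctx: &mut Ctx) -> bool {
          // Stop on the first failing step, railway-style.
          steps.iter().all(|step| step(ctx))
      }

      fn main() {
          let steps: [Step; 2] = [validate, persist];
          let mut ctx = Ctx::from([("params".into(), "name=Ada".into())]);
          println!("success: {}", run(&steps, &mut ctx));
      }
      ```
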
    1. Kit Yates. (2021, January 22). Is this lockdown 3.0 as tough as lockdown 1? Here are a few pieces of data from the @IndependentSage briefing which suggest that despite tackling a much more transmissible virus, lockdown is less strict, which might explain why we are only just keeping on top of cases. [Tweet]. @Kit_Yates_Maths. https://twitter.com/Kit_Yates_Maths/status/1352662085356937216

    1. Benford’s Law is a theory which states that small digits (1, 2, 3) appear at the beginning of numbers much more frequently than large digits (7, 8, 9). In theory Benford’s Law can be used to detect anomalies in accounting practices or election results, though in practice it can easily be misapplied. If you suspect a dataset has been created or modified to deceive, Benford’s Law is an excellent first test, but you should always verify your results with an expert before concluding your data has been manipulated.

      This is a relatively good explanation of Benford's law.

      I've come across the theory in advanced math, but I'm forgetting where I saw the proof. p-adic analysis perhaps? Look this up.
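
      A quick sanity check of the stated distribution, P(d) = log10(1 + 1/d): powers of 2 are a classic Benford-conforming sequence (purely illustrative data):

      ```rust
      // Benford expectation: P(d) = log10(1 + 1/d) for d = 1..=9.
      fn benford_expected(d: u32) -> f64 {
          (1.0 + 1.0 / d as f64).log10()
      }

      // Leading decimal digit of a positive number: scale into [1, 10).
      fn leading_digit(mut n: f64) -> usize {
          assert!(n > 0.0);
          while n >= 10.0 { n /= 10.0; }
          while n < 1.0 { n *= 10.0; }
          n as usize
      }

      fn main() {
          // Illustrative data: powers of 2 follow Benford's law closely.
          let data: Vec<f64> = (0..1000).map(|i| 2f64.powi(i)).collect();
          let mut counts = [0usize; 10];
          for &x in &data {
              counts[leading_digit(x)] += 1;
          }
          for d in 1..=9usize {
              let observed = counts[d] as f64 / data.len() as f64;
              println!("{d}: observed {observed:.3}, expected {:.3}", benford_expected(d as u32));
          }
      }
      ```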

  3. Jan 2021
    1. Data analysis, and the parts of statistics which adhere to it, must…take on the characteristics of science rather than those of mathematics…

      Is data analysis included in data science? If not, what is the relationship between them?

    1. We could change the definition of Cons to hold references instead, but then we would have to specify lifetime parameters. By specifying lifetime parameters, we would be specifying that every element in the list will live at least as long as the entire list. The borrow checker wouldn’t let us compile let a = Cons(10, &Nil); for example, because the temporary Nil value would be dropped before a could take a reference to it.
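
      For contrast, the Box-based definition the chapter settles on compiles fine, because each Box owns its heap allocation and no lifetimes need to be named:

      ```rust
      // Box<List> owns the rest of the list, so every element lives exactly
      // as long as the list that contains it: no lifetime parameters needed.
      enum List {
          Cons(i32, Box<List>),
          Nil,
      }

      fn sum(list: &List) -> i32 {
          match list {
              List::Cons(value, rest) => value + sum(rest),
              List::Nil => 0,
          }
      }

      fn main() {
          // Unlike `Cons(10, &Nil)`, nothing here is a temporary that could be
          // dropped out from under a borrow: each cell is moved into its Box.
          let list = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
          println!("sum = {}", sum(&list));
      }
      ```
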
    1. Why is CORS important? Currently, client-side scripts (e.g., JavaScript) are prevented from accessing much of the Web of Linked Data due to "same origin" restrictions implemented in all major Web browsers. While enabling such access is important for all data, it is especially important for Linked Open Data and related services; without this, our data simply is not open to all clients. If you have public data which doesn't require cookie- or session-based authentication to see, then please consider opening it up for universal JavaScript/browser access. For CORS access to anything other than simple, non-auth-protected resources…
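
      On the wire, "opening it up" amounts to one extra response header on a public, non-authenticated resource. A toy server sketch (address and payload are illustrative; real services set this in their web server or framework config):

      ```rust
      use std::io::{Read, Write};
      use std::net::TcpListener;

      // Serve one public JSON resource with the single response header
      // that opens it to browser scripts from any origin.
      fn main() -> std::io::Result<()> {
          let listener = TcpListener::bind("127.0.0.1:8000")?;
          for stream in listener.incoming() {
              let mut stream = stream?;
              let mut buf = [0u8; 1024];
              let _ = stream.read(&mut buf)?; // ignore the request details
              let body = r#"{"open": true}"#;
              let response = format!(
                  "HTTP/1.1 200 OK\r\n\
                   Access-Control-Allow-Origin: *\r\n\
                   Content-Type: application/json\r\n\
                   Content-Length: {}\r\n\r\n{}",
                  body.len(),
                  body
              );
              stream.write_all(response.as_bytes())?;
          }
          Ok(())
      }
      ```
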
    1. Alongside the companies that gather data, there are newly powerful companies that build the tools for organizing, processing, accessing, and visualizing it—companies that don’t take in the traces of our common life but set the terms on which it is sorted and seen. The scraping of publicly available photos, for instance, and their subsequent labeling by low-paid human workers, served to train computer vision algorithms that Palantir can now use to help police departments cast a digital dragnet across entire populations. 

      organizing the mass of information is the real tricky part

  4. Dec 2020
    1. What is a data-originated component? It’s a kind of component that is primarily designed and built for either: displaying, entering, or customizing a given data content itself, rather than focusing on the form it takes. For example Drawer is a non data-originated component, although it may include some. Whereas Table, or Form, or even Feed are good examples of data-originated components.
    1. Ever transitioning from teaching high school, to teaching at the university, then coming to the community college, I've become very fascinated with how students move from one to the other.

      Interesting to see trends in the data and identify experiences that indicate continuity from high schools in the area to SPSCC (for instance, via Running Start), and then on to SMU. What are the various pathways by which students enrolled at SPSCC decide to apply, gain admission, and secure funding? And what percentage of students transfer from SPSCC to SMU?

    1. “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

      Data Provenance

      The discipline of thinking about:

      (1) Where did the data arise? (2) What inferences were drawn from it? (3) How relevant are those inferences to the present situation?
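
      A sketch of what carrying provenance alongside data might look like, so those three questions can be asked mechanically; every field and rule here is an illustrative assumption, not any real system's schema:

      ```rust
      // Provenance travels with the dataset instead of living in a human's head.
      struct Provenance {
          origin: String,           // (1) where the data arose
          collected_year: u32,      //     ...and when
          inferences: Vec<String>,  // (2) what was concluded from it
      }

      // (3) A toy relevance check: same instrument and not too stale.
      // The ultrasound story in the next annotation fails exactly this
      // kind of test (different imaging machine, decade-old study).
      fn still_relevant(p: &Provenance, current_instrument: &str, current_year: u32) -> bool {
          p.origin == current_instrument && current_year - p.collected_year <= 10
      }

      fn main() {
          let uk_study = Provenance {
              origin: "older low-resolution ultrasound".into(),
              collected_year: 1994, // illustrative value
              inferences: vec!["white spots predict Down syndrome".into()],
          };
          for inference in &uk_study.inferences {
              println!(
                  "{inference:?} still relevant: {}",
                  still_relevant(&uk_study, "newer high-resolution ultrasound", 2004)
              );
          }
      }
      ```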

    2. There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”

      Example of where a global system for inference on healthcare data fails due to a lack of data provenance.

    1. Treemaps are a visualization method for hierarchies based on enclosure rather than connection [JS91]. Treemaps make it easy to spot outliers (for example, the few large files that are using up most of the space on a disk) as opposed to parent-child structure.

      Treemaps visualize enclosure rather than connection. This makes them good visualizations to spot outliers (e.g. large files on a disk) but not for understanding parent-child relationships.
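
      A minimal slice-and-dice layout sketch (production treemaps usually use the squarified algorithm instead) showing how enclosure sizes the rectangles:

      ```rust
      // Each child becomes a rectangle nested inside its parent's bounds,
      // sized by its share of the total: enclosure, not connection.
      #[derive(Debug)]
      struct Rect { x: f64, y: f64, w: f64, h: f64 }

      fn slice_layout(sizes: &[f64], bounds: Rect) -> Vec<Rect> {
          let total: f64 = sizes.iter().sum();
          let mut x = bounds.x;
          sizes.iter().map(|s| {
              let w = bounds.w * s / total; // width proportional to size
              let r = Rect { x, y: bounds.y, w, h: bounds.h };
              x += w;
              r
          }).collect()
      }

      fn main() {
          // One huge "file" among small ones shows up as the dominant
          // rectangle, which is what makes outliers easy to spot.
          let sizes = [900.0, 40.0, 30.0, 20.0, 10.0];
          for r in slice_layout(&sizes, Rect { x: 0.0, y: 0.0, w: 100.0, h: 100.0 }) {
              println!("{:?}", r);
          }
      }
      ```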

    1. I haven't met anyone who makes this argument who then says that a one stop convenient, reliable, private and secure online learning environment can’t be achieved using common every day online systems

      Reliable: As a simple example, I'd trust Google to maintain data reliability over my institutional IT support.

      And you'd also need to make the argument for why learning needs to be "private", etc.

  5. Nov 2020
    1. Identify, classify, and apply protective measures to sensitive data. Data discovery and data classification solutions help to identify sensitive data and assign classification tags dictating the level of protection required. Data loss prevention solutions apply policy-based protections to sensitive data, such as encryption or blocking unauthorized actions, based on data classification and contextual factors including file type, user, intended recipient/destination, applications, and more. The combination of data discovery, classification, and DLP enable organizations to know what sensitive data they hold and where while ensuring that it's protected against unauthorized loss or exposure.

      [[BEST PRACTICES FOR DATA EGRESS MANAGEMENT AND PREVENTING SENSITIVE DATA LOSS]]
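
      A toy sketch of the discover → classify → enforce chain described above; the patterns, tags, and policy are all illustrative assumptions, not any product's behavior:

      ```rust
      #[derive(Debug, PartialEq)]
      enum Classification { Public, Confidential, Restricted }

      fn classify(record: &str) -> Classification {
          // "Discovery" here is bare pattern matching; real products use
          // much richer detectors (regexes, ML, document fingerprinting).
          if record.contains("ssn:") { Classification::Restricted }
          else if record.contains("email:") { Classification::Confidential }
          else { Classification::Public }
      }

      fn egress_allowed(class: &Classification, destination_trusted: bool) -> bool {
          // Policy-based protection: block unauthorized actions based on
          // classification plus context (here, only the destination).
          match class {
              Classification::Public => true,
              Classification::Confidential => destination_trusted,
              Classification::Restricted => false, // always block/encrypt
          }
      }

      fn main() {
          for rec in ["ssn: 000-00-0000", "email: a@example.com", "hello"] {
              let c = classify(rec);
              println!("{rec:?} -> {c:?}, egress to untrusted ok: {}", egress_allowed(&c, false));
          }
      }
      ```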

    2. Egress filtering involves monitoring egress traffic to detect signs of malicious activity. If malicious activity is suspected or detected, transfers can be blocked to prevent sensitive data loss. Egress filtering can also limit egress traffic and block attempts at high volume data egress.
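
      A toy volume-based filter in the same spirit, with a made-up threshold:

      ```rust
      use std::collections::HashMap;

      // Track bytes sent per internal host and block transfers once an
      // (illustrative) limit is exceeded.
      const LIMIT_BYTES: u64 = 1_000_000;

      struct EgressFilter { sent: HashMap<String, u64> }

      impl EgressFilter {
          fn allow(&mut self, host: &str, bytes: u64) -> bool {
              let total = self.sent.entry(host.to_string()).or_insert(0);
              *total += bytes;
              // High-volume egress from one host is a classic exfiltration sign.
              *total <= LIMIT_BYTES
          }
      }

      fn main() {
          let mut filter = EgressFilter { sent: HashMap::new() };
          println!("{}", filter.allow("10.0.0.5", 900_000)); // true
          println!("{}", filter.allow("10.0.0.5", 200_000)); // false: blocked
      }
      ```
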
    3. Data Egress vs. Data Ingress

      While data egress describes the outbound traffic originating from within a network, data ingress, in contrast, refers to the reverse: traffic that originates outside the network that is traveling into the network. Egress traffic is a term used to describe the volume and substance of traffic transferred from a host network to an outside network.

      [[DATA EGRESS VS. DATA INGRESS]]

    4. Data Egress Meaning

      Data egress refers to data leaving a network in transit to an external location. Outbound email messages, cloud uploads, or files being moved to external storage are simple examples of data egress. Data egress is a regular part of network activity, but can pose a threat to organizations when sensitive data is egressed to unauthorized recipients. Examples of common channels for data egress include:

      - Email
      - Web uploads
      - Cloud storage
      - Removable media (USB, CD/DVD, external hard drives)
      - FTP/HTTP transfers

      [[Definition/Data Egress]]

    1. In-depth questions

      The following interview questions enable the hiring manager to gain a comprehensive understanding of your competencies and assess how you would respond to issues that may arise at work:

      - What are the most important skills for a data engineer to have?
      - What data engineering platforms and software are you familiar with?
      - Which computer languages can you use fluently?
      - Do you tend to focus on pipelines, databases or both?
      - How do you create reliable data pipelines?
      - Tell us about a distributed system you've built. How did you engineer it?
      - Tell us about a time you found a new use case for an existing database. How did your discovery impact the company positively?
      - Do you have any experience with data modeling?
      - What common data engineering maxim do you disagree with?
      - Do you have a data engineering philosophy?
      - What is a data-first mindset?
      - How do you handle conflict with coworkers? Can you give us an example?
      - Can you recall a time when you disagreed with your supervisor? How did you handle it?

      deeper dive into [[Data Engineer]] [[Interview Questions]]

    1. to be listed on Mastodon’s official site, an instance has to agree to follow the Mastodon Server Covenant which lays out commitments to “actively moderat[e] against racism, sexism, homophobia and transphobia”, have daily backups, grant more than one person emergency access, and notify people three months in advance of potential closure. These indirect methods are meant to ensure that most people who encounter a platform have a safe experience, even without the advantages of centralization.

      Some of these baseline protections are certainly a good idea. Advance notice of closure and regular backups are particularly valuable.

      I hadn't known of the Mastodon Server Covenant before.