  1. Jul 2016
    1. Data collection on students should be considered a joint venture, with all parties — students, parents, instructors, administrators — on the same page about how the information is being used.
    1. The arrival of quantified self means that it's no longer just what you type that is being weighed and measured, but how you slept last night, and with whom.
  2. Jun 2016
    1. Even if you trust everyone spying on you right now, the data they're collecting will eventually be stolen or bought by people who scare you. We have no ability to secure large data collections over time.

      Fair enough.

      And "Burn!!" on Microsoft with that link.

    1. dynamic documents

      A group of experts got together last year at Dagstuhl and wrote a white paper about this.

      Basically the idea is that the data, the code, the protocol/analysis/method, and the narrative should all exist as equal objects on the appropriate platform: code in a code repository like GitHub; data in a data repo that understands data formats, like Mendeley Data (my company) and Figshare; protocols somewhere like protocols.io; and the narrative, which ties it all together, still at the publisher. Discussion and review can take the form of comments or, even better, annotations just like the one I'm writing now.

    1. In a sample of 2,101 scientific papers published between 1665 and 1800, Beaver and Rosen found that 2.2% described collaborative work. Notable was the degree of joint authorship in astronomy, especially in situations where scientists were dependent upon observational data.

      Astronomy was an area of collaboration because astronomers needed to share data


    1. What type of team do you need to create these visualisations? 
OpenDataCity has a special team of really high-level nerds. Experts on hardware, servers, software development, web design, user experience and so on. I contribute the more mathematical view on the data. But usually a project is done by just one person, who is chief and developer, and the others help him or her. So, it's not like a group project. Usually, it's a single person and a lot of help. That makes it definitely faster, than having a big team and a lot of meetings.

      This strengthens the idea that data visualization is a field where a personal approach is still viable, as shown also by the many individuals who are highly valued as data visualizers.

  3. May 2016
    1. After graduating from MIT at the age of 29, Loveman began teaching at Harvard Business School, where he was a professor for nine years.[8][10] While at Harvard, Loveman taught Service Management and developed an interest in the service industry and customer service.[8][10] He also launched a side career as a speaker and consultant after a 1994 paper he co-authored, titled "Putting the Service-Profit Chain to Work", attracted the attention of companies including Disney, McDonald's and American Airlines. The paper focused on the relationship between company profits and customer loyalty, and the importance of rewarding employees who interact with customers.[7][8] In 1997, Loveman sent a letter to Phil Satre, the then-chief executive officer of Harrah's Entertainment, in which he offered advice for growing the company.[7] Loveman, who had done some consulting work for the company in 1991,[11] again began to consult for Harrah's and, in 1998, was offered the position of chief operating officer.[8] He initially took a two year sabbatical from Harvard to take on the role of COO of Harrah's,[10] at the end of which Loveman decided to remain with the company.[12]

      Putting the Service-Profit Chain to Work

    1. the most important figures that one needs for management are unknown or unknowable (Lloyd S. Nelson, director of statistical methods for the Nashua corporation), but successful management must nevertheless take account of them.

      Distinguish clearly which data can be known and which cannot.

    1. From Bits to Narratives: The Rapid Evolution of Data Visualization Engines

      It was an amazing presentation by Mr. Cesar A. Hidalgo, and an eye opener for me in the area of data visualisation. As a national-level organisation we have huge data, but we never thought about data visualisation. Your projects, particularly Pantheon and Immersion, are marvelous, and I came to know that you are using D3. It is a great job.

    1. The entirely quantitative methods and variables employed by Academic Analytics -- a corporation intruding upon academic freedom, peer evaluation and shared governance -- hardly capture the range and quality of scholarly inquiry, while utterly ignoring the teaching, service and civic engagement that faculty perform,
  4. Apr 2016
    1. SocialBoost — is a tech NGO that promotes open data and coordinates the activities of more than 1,000 IT-enthusiasts, biggest IT-companies and government bodies in Ukraine through hackathons for socially meaningful IT-projects, related to e-government, e-services, data visualization and open government data. SocialBoost has developed dozens of public services, interactive maps, websites for niche communities, as well as state projects such as data.gov.ua, ogp.gov.ua. SocialBoost builds the bridge between civic activists, government and IT-industry through technology. Main goal is to make government more open by crowdsourcing the creation of innovative public services with the help of civic society.
    1. Great Principles of Computing
       Peter J. Denning, Craig H. Martell

      This is a book about the whole of computing—its algorithms, architectures, and designs.

      Denning and Martell divide the great principles of computing into six categories: communication, computation, coordination, recollection, evaluation, and design.

      "Programmers have the largest impact when they are designers; otherwise, they are just coders for someone else's design."

    1. We should have control of the algorithms and data that guide our experiences online, and increasingly offline. Under our guidance, they can be powerful personal assistants.

      Big business has been very militant about protecting their "intellectual property". Yet they regard every detail of our personal lives as theirs to collect and sell at whim. What a bunch of little darlings they are.

    1. preferably

      Delete "preferably". Limiting the scope of text mining to exclude societal and commercial purposes limits the usefulness to enterprises (especially SMEs that cannot mine on their own) as well as to society. These limitations have ramifications in terms of limiting the research questions that researchers can and will pursue.

    2. Encourage researchers not to transfer the copyright on their research outputs before publication.

      This statement is more generally applicable than just to TDM. Besides, "Encourage" is too weak a word here; from a societal perspective, it would be far better if researchers were to retain their copyright (where it applies) but make their copyrightable works available under open licenses that allow publishers to publish the works and others to use and reuse them.

  5. thenewinquiry.com
    1. In December 2014, FitBit released a pledge stating that it “is deeply committed to protecting the security of your data.” Still, we may soon be obliged to turn over the sort of information the device is designed to collect in order to obtain medical coverage or life insurance. Some companies currently offer incentives like discounted premiums to members who volunteer information from their activity trackers. Many health and fitness industry experts say it is only a matter of time before all insurance providers start requiring this information.
    1. Accession codes

      The panda and polar bear datasets should have been included in the data section rather than hidden in the URLs section. Production removed the DOIs and used (now dead) URLs instead, but for the working links and insight see the following blog: http://blogs.biomedcentral.com/gigablog/2012/12/21/promoting-datacitation-in-nature/

    1. To date 5'-cytosine methylation (5mC) has not been reported in Caenorhabditis elegans, and using ultra-performance liquid chromatography/tandem mass spectrometry (UPLC-MS/MS) the existence of DNA methylation in T. spiralis was detected, making it the first 5mC reported in any species of nematode.

      As a novel and potentially controversial finding, the huge amounts of supporting data are deposited here to assist others to follow on and reproduce the results. This won the BMC Open Data Prize, as the judges were impressed by the numerous extra steps taken by the authors in optimizing the openness and easy accessibility of this data, and were keen to emphasize that the value of open data for such breakthrough science lies not only in providing a resource, but also in conferring transparency to unexpected conclusions that others will naturally wish to challenge. You can see more in the blog posting and interview with the authors here: http://blogs.biomedcentral.com/gigablog/2013/10/02/open-data-for-the-win/

  6. Mar 2016
    1. There is a human story behind every data point and as educators and innovators we have to shine a light on it.
    1. three-dimensional inversion recovery-prepped spoiled grass coronal series

      ID: BPwPsyStructuralData SubjectGroup: BPwPsy Acquisition: Anatomical DOI: 10.18116/C6159Z

      ID: BPwoPsyStructuralData SubjectGroup: BPwoPsy Acquisition: Anatomical DOI: 10.18116/C6159Z

      ID: HCStructuralData SubjectGroup: HC Acquisition: Anatomical DOI: 10.18116/C6159Z

      ID: SZStructuralData SubjectGroup: SZ Acquisition: Anatomical DOI: 10.18116/C6159Z

    1. Open data

      Sadly, there may not be much work on opening up data in Higher Education. For instance, there was only one panel at last year’s international Open Data Conference. https://www.youtube.com/watch?v=NUtQBC4SqTU

      Looking at the interoperability of competency profiles, been wondering if it could be enhanced through use of Linked Open Data.

  7. Feb 2016
    1. I read my first books on data mining back in the early 1990's and one thing I read was that "80% of the effort in a data mining project goes into data cleaning."
    1. Great explanation of 15 common probability distributions: Bernoulli, Uniform, Binomial, Geometric, Negative Binomial, Exponential, Weibull, Hypergeometric, Poisson, Normal, Log Normal, Student's t, Chi-Squared, Gamma, Beta.
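
      To get a feel for these, scipy.stats has implementations of all fifteen distributions; a minimal sketch (the parameter values below are arbitrary examples, not taken from the linked explanation):

          # Sampling and evaluating a few of the listed distributions with scipy.stats.
          from scipy import stats

          stats.bernoulli(p=0.3).rvs(10)         # ten 0/1 draws with success probability 0.3
          stats.binom(n=10, p=0.3).pmf(3)        # P(exactly 3 successes in 10 trials)
          stats.poisson(mu=4).pmf(2)             # P(2 events) when the mean rate is 4
          stats.norm(loc=0, scale=1).cdf(1.96)   # ~0.975 for the standard normal
          stats.gamma(a=2, scale=1).rvs(5)       # five draws from a Gamma with shape 2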

    1. Since its start in 1998, Software Carpentry has evolved from a week-long training course at the US national laboratories into a worldwide volunteer effort to improve researchers' computing skills. This paper explains what we have learned along the way, the challenges we now face, and our plans for the future.

      http://software-carpentry.org/lessons/
      Basic programming skills for scientific researchers:
      SQL, and Python, R, or MATLAB.

      http://www.datacarpentry.org/lessons/
      Managing and analyzing data.

  8. Jan 2016
    1. The journal will accommodate data but should be presented in the context of a paper. The Winnower should not act as a forum for publishing data sets alone. It is our feeling that data in absence of theory is hard to interpret and thus may cause undue noise to the site.

      This will also be the case for the data visualizations shown here, once the data is curated and verified properly. Still, data visualizations can start a global conversation without having the full paper translated to English.

    1. 50 Years of Data Science, David Donoho
       2015, 41 pages

      This paper reviews some ingredients of the current "Data Science moment", including recent commentary about data science in the popular media, and about how/whether Data Science is really different from Statistics.

      The now-contemplated field of Data Science amounts to a superset of the fields of statistics and machine learning which adds some technology for 'scaling up' to 'big data'.

    1. The explosion of data-intensive research is challenging publishers to create new solutions to link publications to research data (and vice versa), to facilitate data mining and to manage the dataset as a potential unit of publication. Change continues to be rapid, with new leadership and coordination from the Research Data Alliance (launched 2013): most research funders have introduced or tightened policies requiring deposit and sharing of data; data repositories have grown in number and type (including repositories for “orphan” data); and DataCite was launched to help make research data cited, visible and accessible. Meanwhile publishers have responded by working closely with many of the community-led projects; by developing data deposit and sharing policies for journals, and introducing data citation policies; by linking or incorporating data; by launching some pioneering data journals and services; by the development of data discovery services such as Thomson Reuters’ Data Citation Index (page 138).
    1. It doesn’t work if we think the people who disagree with us are all motivated by malice, or that our political opponents are unpatriotic.  Democracy grinds to a halt without a willingness to compromise; or when even basic facts are contested, and we listen only to those who agree with us. 

      C'mon, civic technologists, government innovators, open data advocates: this can be a call to arms. Isn't the point of "open government" to bring people together to engage with their leaders, provide the facts, and allow more informed, engaged debate?

    1. "A friend of mine said a really great phrase: 'remember those times in early 1990's when every single brick-and-mortar store wanted a webmaster and a small website. Now they want to have a data scientist.' It's good for an industry when an attitude precedes the technology."
    1. UT Austin SDS 348, Computational Biology and Bioinformatics. Course materials and links: R, regression modeling, ggplot2, principal component analysis, k-means clustering, logistic regression, Python, Biopython, regular expressions.

    1. paradox of unanimity - Unanimous or nearly unanimous agreement doesn't always indicate the correct answer. If agreement is unlikely, it indicates a problem with the system.

      Witnesses who only saw a suspect for a moment are not likely to be able to pick them out of a lineup accurately. If several witnesses all pick the same suspect, you should be suspicious that bias is at work. Perhaps these witnesses were cherry-picked, or they were somehow encouraged to choose a particular suspect.
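
      The arithmetic behind the suspicion is simple: if each of n witnesses independently identifies the right suspect with probability p, the chance of unanimity is p^n, which shrinks fast. A toy illustration (the 80% accuracy figure is invented for the example):

          # Probability that n independent witnesses, each correct with
          # probability p, unanimously pick the true suspect.
          def p_unanimous(p, n):
              return p ** n

          for n in (3, 5, 10):
              print(n, round(p_unanimous(0.8, n), 3))
          # prints: 3 0.512, 5 0.328, 10 0.107

      So ten honest 80%-accurate witnesses should be unanimous barely one time in ten; seeing unanimity anyway is evidence that the identifications were not independent.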

    1. Guidelines for publishing GLAM data (galleries, libraries, archives, museums) on GitHub. It applies to publishing any kind of data anywhere.

      • Document the schema of the data.
      • Make the usage terms and conditions clear.
      • Tell people how to report issues, or tell them that they're on their own.
      • Tell people whether you accept pull requests (user-contributed edits and additions), and how.
      • Tell people how often the data will be updated, even if the answer is "sporadically" or "maybe never".

      https://en.wikipedia.org/wiki/Open_Knowledge
      http://openglam.org/faq/
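
      One way to make the checklist above machine-actionable is a metadata file at the top of the repository, in the spirit of a datapackage.json; a hypothetical sketch written as a Python dict (every field value is invented):

          # A hypothetical top-of-repo metadata record covering the checklist above.
          dataset_metadata = {
              "name": "example-glam-dataset",                       # invented name
              "schema": "see schema.md for field-by-field docs",    # document the schema
              "license": "CC0-1.0",                                 # usage terms and conditions
              "issues": "https://github.com/example/repo/issues",   # how to report issues
              "contributing": "pull requests welcome; see CONTRIBUTING.md",
              "update_frequency": "sporadically",                   # honest, per the guidelines
          }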

    1. Set Semantics¶ This tool is used to set semantics in EPUB files. Semantics are simply, links in the OPF file that identify certain locations in the book as having special meaning. You can use them to identify the foreword, dedication, cover, table of contents, etc. Simply choose the type of semantic information you want to specify and then select the location in the book the link should point to. This tool can be accessed via Tools->Set semantics.

      Though it’s described in such a simple way, there might be hidden power in adding these tags, especially when we bring eBooks to the Semantic Web. Though books are the prime example of a “Web of Documents”, they can also contribute to the “Web of Data”, if we enable them. It might take long, but it could happen.

  9. Dec 2015
    1. The idea was to pinpoint the doctors prescribing the most pain medication and target them for the company’s marketing onslaught. That the databases couldn’t distinguish between doctors who were prescribing more pain meds because they were seeing more patients with chronic pain or were simply looser with their signatures didn’t matter to Purdue.
    1. Users publish coursework, build portfolios or tinker with personal projects, for example.

      Useful examples. Could imagine something like Wikity, FedWiki, or other forms of content federation to work through this in a much-needed upgrade from the “Personal Home Pages” of the early Web. Do see some connections to Sandstorm and the new WordPress interface (which, despite being targeted at WordPress.com users, also works on self-hosted WordPress installs). Some of it could also be about the longstanding dream of “keeping our content” in social media. Yes, as in the reverse from Facebook. Multiple solutions exist to do exports and backups. But it can be so much more than that and it’s so much more important in educational contexts.

    1. A personal API builds on the domain concept—students store information on their site, whether it’s class assignments, financial aid information or personal blogs, and then decide how they want to share that data with other applications and services. The idea is to give students autonomy in how they develop and manage their digital identities at the university and well into their professional lives
    1. Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
    1. The EDUPUB Initiative VitalSource regularly collaborates with independent consultants and industry experts including the National Federation of the Blind (NFB), American Foundation for the Blind (AFB), Tech For All, JISC, Alternative Media Access Center (AMAC), and others. With the help of these experts, VitalSource strives to ensure its platform conforms to applicable accessibility standards including Section 508 of the Rehabilitation Act and the Accessibility Guidelines established by the Worldwide Web Consortium known as WCAG 2.0. The state of the platform's conformance with Section 508 at any point in time is made available through publication of Voluntary Product Accessibility Templates (VPATs).  VitalSource continues to support industry standards for accessibility by conducting conformance testing on all Bookshelf platforms – offline on Windows and Macs; online on Windows and Macs using standard browsers (e.g., Internet Explorer, Mozilla Firefox, Safari); and on mobile devices for iOS and Android. All Bookshelf platforms are evaluated using industry-leading screen reading programs available for the platform including JAWS and NVDA for Windows, VoiceOver for Mac and iOS, and TalkBack for Android. To ensure a comprehensive reading experience, all Bookshelf platforms have been evaluated using EPUB® and enhanced PDF books.

      Could see a lot of potential for Open Standards, including annotations. What’s not so clear is how they can manage to produce such ePub while maintaining their DRM-focused practice. Heard about LCP (Lightweight Content Protection). But have yet to get a fully-accessible ePub which is also DRMed in such a way.

    1. Data gathering is ubiquitous in science. Giant databases are currently being mined for unknown patterns, but in fact there are many (many) known patterns that simply have not been catalogued. Consider the well-known case of medical records. A patient's medical history is often known by various individual doctor-offices but quite inadequately shared between them. Sharing medical records often means faxing a hand-written note or a filled-in house-created form between offices.
    1. As of May 1, 2015, there is a new requirement from some research councils that research data must also be openly available,

      data requirements

    1. Among the most useful summaries I have found for Linked Data, generally, and in relationship to libraries, specifically. After first reading it, got to hear of the acronym LODLAM: “Linked Open Data for Libraries, Archives, and Museums”. Been finding uses for this tag, in no small part because it gets people to think about the connections between diverse knowledge-focused institutions, places where knowledge is constructed. Somewhat surprised academia, universities, colleges, institutes, or educational organisations like schools aren’t explicitly tied to those others. In fact, it’s quite remarkable that education tends to drive much development in #OpenData, as opposed to municipal or federal governments, for instance. But it’s still very interesting to think about Libraries and Museums as moving from a focus on (a Web of) documents to a focus on (a Web of) data.

  10. Nov 2015
    1. The effectiveness of infographics, or any other form of communication, can be measured in terms of whether people:

      • pay attention to it
      • understand it
      • remember it later

      Titles are important. Ideally, the title should concisely state the main point you want people to grasp.

      Recall of both labels and data can be improved by using redundancy -- text as well as images. For example:

      • flags in addition to country names
      • proportional bubbles in addition to numbers.
    1. TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.

      https://github.com/rhiever/tpot
      TPOT (Tree-based Pipeline Optimization Tool) is built on numpy, scipy, pandas, scikit-learn, and deap.
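
      For flavor, TPOT exposes a scikit-learn-style interface; a minimal sketch (generations and population_size are small demo values, so the search will be brief and the resulting pipeline modest):

          # Evolve pipelines on a toy dataset, then export the best one found.
          from sklearn.datasets import load_digits
          from sklearn.model_selection import train_test_split
          from tpot import TPOTClassifier

          X, y = load_digits(return_X_y=True)
          X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

          tpot = TPOTClassifier(generations=5, population_size=20,
                                verbosity=2, random_state=42)
          tpot.fit(X_train, y_train)          # genetic search over candidate pipelines
          print(tpot.score(X_test, y_test))   # accuracy of the best pipeline found
          tpot.export('best_pipeline.py')     # standalone scikit-learn code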

  11. Oct 2015
    1. The Coming of OER: Related to the enthusiasm for digital instructional resources, four-fifths (81 percent) of the survey participants agree that "Open Source textbooks/Open Education Resource (OER) content" will be an important source for instructional resources in five years.
    1. why not annotate, say, the Eiffel Tower itself

      As long as it has some URI, it can be annotated. Any object in the world can be described through the Semantic Web. Especially with Linked Open Data.
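
      In the W3C Web Annotation model this is exactly what happens: the annotation's target is simply the thing's URI. A sketch (the body text is invented; DBpedia's URI for the tower serves as the Linked Open Data identifier):

          # A Web Annotation whose target is a real-world thing, via its LOD URI.
          annotation = {
              "@context": "http://www.w3.org/ns/anno.jsonld",
              "type": "Annotation",
              "body": {
                  "type": "TextualBody",
                  "value": "Climbed it in 2015; the view of the Trocadero is best.",
              },
              "target": "http://dbpedia.org/resource/Eiffel_Tower",
          }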

    2. If you deal with PDFs online, you’ve probably noticed that some are different from others. Some are really just images.

      First step in Linked Open Data is moving away from image PDFs.

    1. The second level of Open Access is Gold Open Access, which requires the author to pay the publishing platform a fee to have their work placed somewhere it can be accessed for free. These fees can range in the hundreds to thousands of dollars.

      Not necessarily true. This is a misconception. "About 70 percent of OA journals charge no APCs at all. We’ve known this for a decade but it’s still widely overlooked by people who should know better." -Suber http://lj.libraryjournal.com/2015/09/opinion/not-dead-yet/an-interview-with-peter-suber-on-open-access-not-dead-yet/#_

  12. Sep 2015
    1. In a nutshell, an ontology answers the question, “What things can we say exist in a domain, and how do we describe those things that relate to each other?”

    2. According to inventor of the World Wide Web, Tim Berners-Lee, there are four key principles of Linked Data (Berners-Lee, 2006): Use URIs to denote things. Use HTTP URIs so that these things can be referred to and looked up (dereferenced) by people and user agents. Provide useful information about the thing when its URI is dereferenced, leveraging standards such as RDF, SPARQL. Include links to other related things (using their URIs) when publishing data on the web.
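
      The four principles map directly onto a few lines of rdflib; a minimal sketch (the example.org names are placeholders):

          # 1-2: HTTP URIs denote things; 3: RDF provides useful information
          # when a URI is dereferenced; 4: link out to other things' URIs.
          from rdflib import Graph, Literal, Namespace, URIRef
          from rdflib.namespace import FOAF, RDF

          EX = Namespace("http://example.org/")   # placeholder namespace
          g = Graph()

          g.add((EX.alice, RDF.type, FOAF.Person))        # a thing, named by an HTTP URI
          g.add((EX.alice, FOAF.name, Literal("Alice")))  # useful information about it
          g.add((EX.alice, EX.visited,
                 URIRef("http://dbpedia.org/resource/Paris")))  # a link to a related thing

          print(g.serialize(format="turtle"))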

    3. In section 4.1.3.2 of the xAPI specification, it states “Activity Providers SHOULD use a corresponding existing Verb whenever possible.”
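
      For context, a complete xAPI statement that follows this guidance by reusing the existing ADL "completed" verb looks roughly like this (actor and object values are invented):

          # An xAPI statement reusing an existing verb, per section 4.1.3.2.
          statement = {
              "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
              "verb": {
                  "id": "http://adlnet.gov/expapi/verbs/completed",
                  "display": {"en-US": "completed"},
              },
              "object": {"id": "http://example.com/courses/data-literacy-101"},
          }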

    1. This is problematic because the article has been influential in the literature supporting the use of antidepressants in adolescents.

      Example of the type of harm that lack of transparency can lead to.

    2. Access to primary data from trials has important implications for both clinical practice and research, including that published conclusions about efficacy and safety should not be read as authoritative. The reanalysis of Study 329 illustrates the necessity of making primary trial data and protocols available to increase the rigour of the evidence base.

      How can anyone argue that science isn't served by making primary data available? We must recognize that more people are harmed by not sharing data than are harmed by data being shared.

    1. (B) Dyn labeling in dyn-IRES-cre x Ai9-tdTomato compared to in situ images from the Allen Institute for Brain Science in a sagittal section highlighting presence of dyn in the striatum, the hippocampus, BNST, amygdala, hippocampus, and substantia nigra. All images show tdTomato (red) and Nissl (blue) staining.(C) Coronal section highlighting dynorphinergic cell labeling in the NAc as compared to the Allen Institute for Brain Science.

      Allen Brain Institute

    1. Because cue-evoked DA release developed throughout learning, we examined whether DA release correlated with conditioned-approach behavior. Figure 1E and table S1 show that the ratio of the CS-related DA release to the reward-related DA release was significantly (r2 = 0.68; P = 0.0005) correlated with number of CS nosepokes in a conditioning session (also see fig. S4).

      single trial analysis

    1. This approach is called change data capture, which I wrote about recently (and implemented on PostgreSQL). As long as you’re only writing to a single database (not doing dual writes), and getting the log of writes from the database (in the order in which they were committed to the DB), then this approach works just as well as making your writes to the log directly.

      Interesting section on applying log-orientated approaches to existing systems.
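
      The core of the approach is easy to sketch: every consumer applies the same totally-ordered log of committed writes, so derived stores converge on the same state as the database. A toy illustration (the event shapes are invented, not the article's PostgreSQL implementation):

          # Replay a database's ordered commit log into a derived key-value
          # view (e.g. a cache or search index kept in sync via CDC).
          commit_log = [
              {"op": "set", "key": "user:1", "value": "alice"},
              {"op": "set", "key": "user:2", "value": "bob"},
              {"op": "delete", "key": "user:1"},
          ]

          derived_view = {}
          for event in commit_log:            # order matters: apply as committed
              if event["op"] == "set":
                  derived_view[event["key"]] = event["value"]
              elif event["op"] == "delete":
                  derived_view.pop(event["key"], None)

          print(derived_view)                 # {'user:2': 'bob'}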

  13. Aug 2015
    1. Shared information

      The “social”, with an embedded emphasis on the data part of knowledge building and a nod to solidarity. Cloud computing does go well with collaboration and spelling out the difference can help lift some confusion.

    1. I feel that there is a great benefit to fixing this question at the spec level. Otherwise, what happens? I read a web page, I like it and I am going to annotate it as being a great one -- but first I have to find out whether the URI my browser used is, conceptually by the author of the page, meant to represent some abstract idea?
    1. data deposition is limited to researchers working at the same institution,

      Not necessarily. For many institutions, as long as one of the researchers is affiliated, the data can be deposited

    1. Big data to knowledge (BD2K)

      would like to know more about this term and the HHS initiative

    2. the definition of a “dataset,”

      this is interesting, and will be interesting to track within and across disciplines

    3. Approximately 87% of the invisible datasets consist of data newly collected for the research reported; 13% reflect reuse of existing data. More than 50% of the datasets were derived from live human or non-human animal subjects.

      Another good statistic to have

    4. Among articles with invisible datasets, we found an average of 2.9 to 3.4 datasets, suggesting there were approximately 200,000 to 235,000 invisible datasets generated from NIH-funded research published in 2011.

      This is a good statistic to have handy.

  14. Jun 2015
    1. The comparison between the model and the experts is based on the species distribution models (SDMs), not on actual species occurrences, so the observed difference could be due to weakness in the SDM predictions rather than the model outperforming the experts. The explanation for this choice in Footnote 4 is reasonable, but I wonder if it could be addressed by rarefying the sampling appropriately.

    1. If you can’t find the correct web page, ask a reference librarian.

      YES, ASK US. Also, we love to work with faculty on managing their data!

    1. possible with modern technology,

      This is terrifying but also fascinating. Imagine the data for MFA programs on the content/style of the last page readers thumbed before they stopped turning!

      Also, couldn't this system be easily gamed by creating bots to "peruse" texts at the right pace repeatedly?

    1. Generating student performance data that can help students, teachers, and parents identify areas for further teaching or practice

      Data, data, data

    1. Critical Habitat - Terrestrial - Polygon [USFWS] Critical Habitat - Terrestrial - Line [USFWS]

      Critical Habitat Layers need to be updated

  15. May 2015
    1. The book would need to be set up on a website first

      Not necessarily; if PDF is in the mix, it can be the medium for annotations that might later anchor to a website -- even if PDFs are distributed to participants and used locally, as mentioned above.

    1. periods have proven to work poorly with Linked Data principles, which require well-defined entities for linking.
  16. Apr 2015
    1. There is now a strong body of evidence showing failure to comply with results-reporting requirements across intervention classes, even in the case of large, randomised trials [3–7]. This applies to both industry and investigator-driven trials. I

      Compliance not mechanism

    2. “the registration of all interventional trials is a scientific, ethical, and moral responsibility”

      World Health Organization's statement

    1. Anyone withholding the methods and results of a clinical trial is already in breach of multiple codes and regulations, including the Declaration of Helsinki, various promises from industry and professional bodies, and, in many cases, the United States Food and Drug Administration (FDA) Amendment Act of 2007. Indeed, a recently published cohort study of trials in clinicaltrials.gov found that more than half had failed to post results; and even though the FDA is entitled to issue fines of $10,000 a day for transgressions, no such fines have ever been levied [3].

      Sticks don't work if they aren't used. I find this rather disturbing.

    2. The best currently available evidence shows that the methods and results of clinical trials are routinely withheld from doctors, researchers, and patients [2–5], undermining our best efforts at informed decision making.
    1. This week there was an amazing landmark announcement from the World Health Organisation: they have come out and said that everyone must share the results of their clinical trials, within 12 months of completion, including old trials (since those are the trials conducted on currently used treatments).
    1. First, the domain is a poor candidate because the domain of all entities relevant to neurobiological function is extremely large, highly fragmented into separate subdisciplines, and riddled with lack of consensus (Shirky, 2005).

      Probably a good thing to add to the Complex Data integration workshop write up

    1. Wouldn’t it be useful, both to the scientific community or the wider world, to increase the publication of negative results?
  17. Mar 2015
  18. iopscience.iop.org
    1. Geneva group “high” mass-loss evolutionary tracks

      Is there an HTTP link for these evolutionary models?

  19. Feb 2015
  20. Jan 2015
    1. Make no mistake, in today's digital age, we are most definitely "renters" with virtually no rights—including rights to our data.
    2. The Internet of Things promises to create mountains upon mountains of data, but none of it will be yours.
    1. The big question, of course, is whether that player has to be a private capitalist corporation, or some federated, publicly-run set of services that could reach a data-sharing agreement free of monitoring by intelligence agencies.

      So there we are. It is pretty straightforward really.

    2. But if you turn data into a money-printing machine for citizens, whereby we all become entrepreneurs, that will extend the financialization of everyday life to the most extreme level, driving people to obsess about monetizing their thoughts, emotions, facts, ideas—because they know that, if these can only be articulated, perhaps they will find a buyer on the open market. This would produce a human landscape worse even than the current neoliberal subjectivity. I think there are only three options. We can keep these things as they are, with Google and Facebook centralizing everything and collecting all the data, on the grounds that they have the best algorithms and generate the best predictions, and so on. We can change the status of data to let citizens own and sell them. Or citizens can own their own data but not sell them, to enable a more communal planning of their lives. That’s the option I prefer.

      Very well thought out. Obviously must know about the read/write web, TLS certificate issues, etc. But what does "neoliberal subjectivity" mean? An interesting phrase.

  21. Dec 2014
  22. Nov 2014
    1. If we believe in equality, if we believe in participatory democracy and participatory culture, if we believe in people and progressive social change, if we believe in sustainability in all its environmental and economic and psychological manifestations, then we need to do better than slap that adjective “open” onto our projects and act as though that’s sufficient or — and this is hard, I know — even sound.
    2. that the moments when students generate “education data” is, historically, moments when they come into contact with the school and more broadly the school and the state as a disciplinary system
  23. May 2014
    1. SSPP # 7.2 Power Usage Effectiveness (PUE) (Electronic): maximum annual weighted average PUE of 1.4 by FY15

      SLAC target PUE of 1.4 by FY15

    1. When the project is complete later this year (all done while the existing data center remained in operation!), the data center's annual PUE will drop from 1.5 to 1.2, saving 20 percent of its annual electrical cost.

      Warren Hall target efficiency: 1.2 as of 2011
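
      The quoted 20 percent saving follows directly from the definition of PUE, assuming the IT load itself stays constant:

          # PUE = total facility energy / IT equipment energy.
          # With a fixed IT load, total energy scales with PUE:
          old_pue, new_pue = 1.5, 1.2
          savings = 1 - new_pue / old_pue
          print(f"{savings:.0%}")   # 20%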

    1. The MGHPCC is targeting a PUE of less than 1.3. A recent report cites typical data center PUEs at 1.9. This means that our facility can expect to

      Target PUE of 1.3 (vs. typical data centers around 1.9)

  24. Apr 2014
    1. Mike Olson of Cloudera is on record as predicting that Spark will be the replacement for Hadoop MapReduce. Just about everybody seems to agree, except perhaps for Hortonworks folks betting on the more limited and less mature Tez. Spark’s biggest technical advantages as a general data processing engine are probably: The Directed Acyclic Graph processing model. (Any serious MapReduce-replacement contender will probably echo that aspect.) A rich set of programming primitives in connection with that model. Support also for highly-iterative processing, of the kind found in machine learning. Flexible in-memory data structures, namely the RDDs (Resilient Distributed Datasets). A clever approach to fault-tolerance.

      Spark's advantages (see the sketch after this list):

      • DAG processing model
      • programming primitives for DAG model
      • highly-iterative processing suited for ML
      • RDD in-memory data structures
      • clever approach to fault-tolerance
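
      A minimal PySpark sketch of those primitives (assumes a local Spark installation; the data and app name are invented):

          # Transformations build a DAG over RDDs; nothing executes until an
          # action (collect) runs. Lost partitions are rebuilt from lineage
          # rather than from replicated data -- the "clever" fault-tolerance.
          from pyspark import SparkContext

          sc = SparkContext("local[*]", "rdd-sketch")

          lines = sc.parallelize(["a b a", "b c"])
          counts = (lines.flatMap(lambda line: line.split())
                         .map(lambda word: (word, 1))
                         .reduceByKey(lambda x, y: x + y))

          print(counts.collect())   # e.g. [('a', 2), ('b', 2), ('c', 1)]
          sc.stop()
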
  25. Feb 2014
    1. 1960 and 1975, states more than doubled their rate of appropriations for higher education, from four dollars per thousand in state revenue to ten.
    2. From 1945 to 1975, the number of undergraduates increased five-fold, and graduate students nine-fold. PhDs graduating one year got jobs teaching the ever-larger cohort of freshman arriving the next.
    3. In the first half of the 20th century, higher education was a luxury and a rarity in the U.S. Only 5% or so of adults, overwhelmingly drawn from well-off families, had attended college.
    4. The proportion of part-time and non-tenure track teachers went from less than half of total faculty, before 1975, to over two-thirds now.
    1. The Backblaze environment is the exact opposite. I do not believe I could dream up worse conditions to study and compare drive reliability. It's hard to believe they plotted this out and convened a meeting to outline a process to buy the cheapest drives imaginable, from all manner of ridiculous sources, install them into varying (and sometimes flawed) chassis, then stack them up and subject them to entirely different workloads and environmental conditions... all with the purpose of determining drive reliability.

      The conditions and process described here mirror the process many organizations go through in an attempt to cut costs by trying to cut through what is perceived as marketing hype. The cost differences are compelling enough to continually tempt people down a path to considerably reduce costs while believing that they've done enough due diligence to avoid raising the risk to an unacceptable level.

    2. The enthusiast in me loves the Backblaze story. They are determined to deliver great value to their customers, and will go to any length to do so. Reading the blog posts about the extreme measures they took was engrossing, and I'm sure they enjoyed rising to the challenge. Their Storage Pod is a compelling design that has been field-tested extensively, and refined to provide a compelling price point per GB of storage.

      An anecdote with data to quantify the experience has some value for drawing conclusions to guide future decisions -- but the temptation to base decisions on that single story is high in the face of the void of quantified stories & data from other sources. What is a responsible way to collect these data-stories and publish them with disclaimers sufficient to avoid the spin that invariably comes along with them?

      In part, the industry opens itself up to this kind of spin when data at scale is not made available publicly and we're all subject to marketing spin in the purchase decision-making process.

  26. Jan 2014
    1. Less than half (45%) of the respondents are satisfied with their ability to integrate data from disparate sources to address research questions

      The most important take-away I see in this whole section on reasons for not making data electronically available is not mentioned here directly!

      Here are the raw numbers for I am satisfied with my ability to integrate data from disparate sources to address research questions:

      • 156 (12.2%) Agree Strongly
      • 419 (32.7%) Agree Somewhat
      • 363 (28.3%) Neither Agree nor Disagree
      • 275 (21.5%) Disagree Somewhat
      • 069 (05.4%) Disagree Strongly

      Of the people who are not satisfied in some way, how many of those think current data sharing mechanisms are sufficient for their needs?

      Of the ~5% of people who are strongly dissatisfied, how many of those are willing to spend time, energy, and money on new sharing mechanisms, especially ones that are not yet proven? If they are willing to do so, then what measurable result or impact will the new mechanism have over the status quo?

      Who feel that current sharing mechanisms stand in the way of publications, tenure, promotion, or being cited?

      Of those who are dissatisfied, how many have existing investment in infrastructure versus those who are new and will be investing versus those who cannot invest in old or new?

      10 years ago how would you have convinced someone they need an iPad or Android smartphone?

    2. Reasons for not making data electronically available. Regarding their attitudes towards data sharing, most of the respondents (85%) are interested in using other researchers' datasets, if those datasets are easily accessible. Of course, since only half of the respondents report that they make some of their data available to others and only about a third of them (36%) report their data is easily accessible, there is a major gap evident between desire and current possibility. Seventy-eight percent of the respondents said they are willing to place at least some their data into a central data repository with no restrictions. Data repositories need to make accommodations for varying levels of security or access restrictions. When asked whether they were willing to place all of their data into a central data repository with no restrictions, 41% of the respondents were not willing to place all of their data. Nearly two thirds of the respondents (65%) reported that they would be more likely to make their data available if they could place conditions on access. Less than half (45%) of the respondents are satisfied with their ability to integrate data from disparate sources to address research questions, yet 81% of them are willing to share data across a broad group of researchers who use data in different ways. Along with the ability to place some restrictions on sharing for some of their data, the most important condition for sharing their data is to receive proper citation credit when others use their data. For 92% of the respondents, it is important that their data are cited when used by other researchers. Eighty-six percent of survey respondents also noted that it is appropriate to create new datasets from shared data. Most likely, this response relates directly to the overwhelming response for citing other researchers' data. The breakdown of this section is presented in Table 13.

      Categories of data sharing considered:

      • I would use other researchers' datasets if their datasets were easily accessible.
      • I would be willing to place at least some of my data into a central data repository with no restrictions.
      • I would be willing to place all of my data into a central data repository with no restrictions.
      • I would be more likely to make my data available if I could place conditions on access.
      • I am satisfied with my ability to integrate data from disparate sources to address research questions.
      • I would be willing to share data across a broad group of researchers who use data in different ways.
      • It is important that my data are cited when used by other researchers.
      • It is appropriate to create new datasets from shared data.
    3. Data sharing practices. Only about a third (36%) of the respondents agree that others can access their data easily, although three-quarters share their data with others (see Table 11). This shows there is a willingness to share data, but it is difficult to achieve or is done only on request.

      There is a willingness, but not a way!

    4. Nearly one third of the respondents chose not to answer whether they make their data available to others. Of those who did respond, 46% reported they do not make their data electronically available to others. Almost as many reported that at least some of their data are available somehow, either on their organization's website, their own website, a national network, a global network, a personal website, or other (see Table 10). The high percentage of non-respondents to this question most likely indicates that data sharing is even lower than the numbers indicate. Furthermore, the less than 6% of scientists who are making “All” of their data available via some mechanism, tends to re-enforce the lack of data sharing within the communities surveyed.
    5. Adding descriptive metadata to datasets helps makes the dataset more accessible by others and into the future. Respondents were asked to indicate all metadata standards they currently use to describe their data. More than half of the respondents (56%) reported that they did not use any metadata standard and about 22% of respondents indicated they used their own lab metadata standard. This could be interpreted that over 78% of survey respondents either use no metadata or a local home grown metadata approach.

      Not surprising that roughly 80% use no or ad hoc metadata.

    6. Data reuse. Respondents were asked to indicate whether they have the sole responsibility for approving access to their data. Of those who answered this question, 43% (n=545) have the sole responsibility for all their datasets, 37% (n=466) have for some of their datasets, and 21% (n=266) do not have the sole responsibility.
    7. Policies and procedures sometimes serve as an active rather than passive barrier to data sharing. Campbell et al. (2003) reported that government agencies often have strict policies about secrecy for some publicly funded research. In a survey of 79 technology transfer officers in American universities, 93% reported that their institution had a formal policy that required researchers to file an invention disclosure before seeking to commercialize research results. About one-half of the participants reported institutional policies that prohibited the dissemination of biomaterials without a material transfer agreement, which have become so complex and demanding that they inhibit sharing [15].

      Policies and procedures are barriers, but there are many more barriers beyond that which get in the way first.

    8. data practices of researchers – data accessibility, discovery, re-use, preservation and, particularly, data sharing
      • data accessibility
      • discovery
      • re-use
      • preservation
      • data sharing
    1. The Data Life Cycle: An Overview

      The data life cycle has eight components:

      • Plan: description of the data that will be compiled, and how the data will be managed and made accessible throughout its lifetime
      • Collect: observations are made either by hand or with sensors or other instruments and the data are placed into digital form
      • Assure: the quality of the data are assured through checks and inspections
      • Describe: data are accurately and thoroughly described using the appropriate metadata standards
      • Preserve: data are submitted to an appropriate long-term archive (i.e. data center)
      • Discover: potentially useful data are located and obtained, along with the relevant information about the data (metadata)
      • Integrate: data from disparate sources are combined to form one homogeneous set of data that can be readily analyzed
      • Analyze: data are analyzed

      The lifecycle according to who? This 8-component description is from the point of view of only the people who obsessively think about this "problem".

      Ask a researcher and I think you'll hear that lifecycle means something like:

      collect -> analyze -> publish
      

      or a more complex data management plan might be:

      ask someone -> receive data in email -> analyze -> cite -> publish -> tenure
      

      To most people lifecycle means "while I am using the data" and archiving means "my storage guy makes backups occasionally".

      Asking people to be aware of the whole cycle outlined here is a non-starter, but I think there is another approach to achieve what we want... dramatic pause [to be continued]

      What parts of this cycle should the individual be responsible for vs which parts are places where help is needed from the institution?

    2. Data represent important products of the scientific enterprise that are, in many cases, of equivalent or greater value than the publications that are originally derived from the research process. For example, addressing many of the grand challenge scientific questions increasingly requires collaborative research and the reuse , integration, and synthesis of data.

      Who else might care about this other than Grand Challenge Question researchers?

    3. Journals and sponsors want you to share your data

      What is the sharing standard? What are the consequences of not sharing? What is the enforcement mechanism?

      There are three primary sharing mechanisms I can think of today: email, usb stick, and dropbox (née ftp).

      The dropbox option is supplanting ftp which comes from another era, but still satisfies an important niche for larger data sets and/or higher-volume or anonymous traffic.

      Dropbox, email and usb are all easily accessible parts of the day-to-day consumer workflow; they are all trivial to set up without institutional support or, importantly, permission.

      An email account is already provisioned by default for everyone or, if the institutional email offerings are not sufficient, a person may easily set up a 3rd-party email account with no permission or hassle.

      Data management alternatives to these three options will have slow or no adoption until the barriers to access and use are as low as email; the cost of entry needs to be no more than "a web browser, an email address, and no special permission required".

    4. An effective data management program would enable a user 20 years or longer in the future to discover , access , understand, and use particular data [ 3 ]. This primer summarizes the elements of a data management program that would satisfy this 20-year rule and are necessary to prevent data entropy .

      Who cares most about the 20-year rule? This is an ideal that appeals to some, but in practice even the most zealous adherents can't picture what this looks like in some concrete way-- except in the most traditional ways: physical paper journals in libraries are tangible examples of the 20-year rule.

      Until we have a digital equivalent for data I don't blame people looking for tenure or jobs for not caring about this ideal if we can't provide a clear picture of how to achieve it widely at an institutional level. For digital materials I think the picture people have in their minds is of tape backup. Maybe this is generational? Only once new generations arrive who were not widely exposed to cassette tapes, DVDs, and other physical media that "old people" remember will it be possible to have a new ideal that people can see in their mind's eye.

    5. A key component of data management is the comprehensive description of the data and contextual information that future researchers need to understand and use the data. This description is particularly important because the natural tendency is for the information content of a data set or database to undergo entropy over time (i.e. data entropy ), ultimately becoming meaningless to scientists and others [ 2 ].

      I agree with the key component mentioned here, but I feel the term data entropy is an unhelpful crutch.

    6. data entropy Normal degradation in information content associated with data and metadata over time (paraphrased from [ 2 ]).

      I'm not sure what this really means and I don't think data entropy is a helpful term. Poor practices certainly lead to disorganized collections of data, but I think this notion comes from a time when people were very concerned about degradation of physical media on which data is stored. That is, of course, still a concern, but I think the term data entropy really lends itself as an excuse for people who don't use good practices to manage data and is a cover for the real problem which is a kind of data illiteracy in much the same way we also face computational illiteracy widely in the sciences. Managing data really is hard, but let's not mask it with fanciful notions like data entropy.

    7. Although data management plans may differ in format and content, several basic elements are central to managing data effectively.

      What are the "several basic elements?"

    8. By documenting your data and recommending appropriate ways to cite your data, you can be sure to get credit for your data products and their use

      Citation is an incentive. An answer to the question "What's in it for me?"

    9. This primer describes a few fundamental data management practices that will enable you to develop a data management plan, as well as how to effectively create, organize, manage, describe, preserve and share data

      Data management practices:

      • create
      • organize
      • manage
      • describe
      • preserve
      • share
    10. The goal of data management is to produce self-describing data sets. If you give your data to a scientist or colleague who has not been involved with your project, will they be able to make sense of it? Will they be able to use it effectively and properly?
    1. One respondent noted that NSF doesn't have an enforcement policy. This is presumably true of other mandate sources as well, and brings up the related and perhaps more significant problem that mandates are not always (if they are ever) accompanied by the funding required to satisfy them. Another respondent wrote that funding agencies expect universities to contribute to long-term data storage.
    2. Data management activities, grouped. The data management activities mentioned by the survey can be grouped into five broader categories: "storage" (comprising backup or archival data storage, identifying appropriate data repositories, day-to-day data storage, and interacting with data repositories); "more information" (comprising obtaining more information about curation best practices and identifying appropriate data registries and search portals); "metadata" (comprising assigning permanent identifiers to data, creating and publishing descriptions of data, and capturing computational provenance); "funding" (identifying funding sources for curation support); and "planning" (creating data management plans at proposal time). When the survey results are thus categorized, the dominance of storage is clear, with over 80% of respondents requesting some type of storage-related help. (This number may also reflect a general equating of curation with storage on the part of respondents.) Slightly fewer than 50% of respondents requested help related to metadata, a result explored in more detail below.

      Categories of data management activities:

      • storage
        • backup/archival data storage
        • identifying appropriate data repositories
        • day-to-day data storage
        • interacting with data repositories
      • more information
        • obtaining more information about curation best practices
        • identifying appropriate data registries
        • search portals
      • metadata
        • assigning permanent identifiers to data
        • creating/publishing descriptions of data
        • capturing computational provenance
      • funding
        • identifying funding sources for curation support
      • planning
        • creating data management plans at proposal time
    3. Data management activities, grouped. The data management activities mentioned by the survey can be grouped into five broader categories: "storage" (comprising backup or archival data storage, identifying appropriate data repositories, day-to-day data storage, and interacting with data repositories); "more information" (comprising obtaining more information about curation best practices and identifying appropriate data registries and search portals); "metadata" (comprising assigning permanent identifiers to data, creating and publishing descriptions of data, and capturing computational provenance); "funding" (identifying funding sources for curation support); and "planning" (creating data management plans at proposal time). When the survey results are thus categorized, the dominance of storage is clear, with over 80% of respondents requesting some type of storage-related help. (This number may also reflect a general equating of curation with storage on the part of respondents.) Slightly fewer than 50% of respondents requested help related to metadata, a result explored in more detail below.

      Storage is a broad topic and is a very frequently mentioned topic in all of the University-run surveys.

      http://www.alexandria.ucsb.edu/~gjanee/dc@ucsb/survey/plots/q4.2.png

      Highlight by Chris during today's discussion.

    4. Distribution of departments with respect to responsibility spheres. Ignoring the "Myself" choice, consider clustering the parties potentially responsible for curation mentioned in the survey into three "responsibility spheres": "local" (comprising lab manager, lab research staff, and department); "campus" (comprising campus library and campus IT); and "external" (comprising external data repository, external research partner, funding agency, and the UC Curation Center). Departments can then be positioned on a tri-plot of these responsibility spheres, according to the average of their respondents' answers. For example, all responses from FeministStds (Feminist Studies) were in the campus sphere, and thus it is positioned directly at that vertex. If a vertex represents a 100% share of responsibility, then the dashed line opposite a vertex represents a reduction of that share to 20%. For example, only 20% of ECE's (Electrical and Computer Engineering's) responses were in the campus sphere, while the remaining 80% of responses were evenly split between the local and external spheres, and thus it is positioned at the 20% line opposite the campus sphere and midway between the local and external spheres. Such a plot reveals that departments exhibit different characteristics with respect to curatorial responsibility, and look to different types of curation solutions.

      This section contains an interesting diagram showing the distribution of departments with respect to responsibility spheres:

      http://www.alexandria.ucsb.edu/~gjanee/dc@ucsb/survey/plots/q2.5.png
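
      The geometry described above is a standard ternary (barycentric) plot. A minimal sketch of the coordinate mapping, assuming an arbitrary vertex placement not taken from the report:

        # Minimal sketch: map responsibility-sphere shares to 2D ternary-plot
        # coordinates. The vertex positions are an illustrative assumption.
        import math

        VERTICES = {
            "local":    (0.0, 0.0),
            "external": (1.0, 0.0),
            "campus":   (0.5, math.sqrt(3) / 2),
        }

        def ternary_xy(shares):
            # shares: dict of sphere -> fraction of responses; normalized here.
            total = sum(shares.values())
            x = sum(shares[s] / total * VERTICES[s][0] for s in shares)
            y = sum(shares[s] / total * VERTICES[s][1] for s in shares)
            return (x, y)

        # FeministStds: all responses in the campus sphere -> the campus vertex.
        print(ternary_xy({"local": 0.0, "campus": 1.0, "external": 0.0}))
        # ECE: 20% campus, remaining 80% split evenly between local and external,
        # which lands at the 20% line opposite campus, midway between the others.
        print(ternary_xy({"local": 0.4, "campus": 0.2, "external": 0.4}))
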

    5. In the course of your research or teaching, do you produce digital data that merits curation? 225 of 292 respondents (77%) answered "yes" to this first question, which corresponds to 25% of the estimated population of 900 faculty and researchers who received the survey.

      For those who do not feel they have data that merits curation, I would at least like to hear a description of the kinds of data they have and why they feel it does not need to be curated.

      Some people may already be using well-curated data sets; others may feel their data would not be useful to anyone outside their own research group, so there is no need to curate it for use by anyone else. Even so, under some definitions of "curation" there may be important unmet internal-use curation needs that are visible only to the grad students or researchers who work with the data hands-on daily.

      UPDATE: My question is essentially answered here: https://hypothes.is/a/xBpqzIGTRaGCSmc_GaCsrw

    6. Responsibility, myself versus others. It may appear that responses to the question of responsibility are bifurcated between "Myself" and all other parties combined. However, respondents who identified themselves as being responsible were more likely than not to identify additional parties that share that responsibility. Thus, curatorial responsibility is seen as a collaborative effort. (The "Nobody" category is a slight misnomer here as it also includes non-responses to this question.)

      This answers my previous question about this survey item:

      https://hypothes.is/a/QrDAnmV8Tm-EkDuHuknS2A
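
      The "more likely than not" claim above is just a conditional proportion over a multi-select question. A minimal sketch, again over a hypothetical 0/1 response table with illustrative column names:

        # Minimal sketch: among respondents who selected "Myself", what
        # fraction also selected at least one other responsible party?
        # The DataFrame layout and column names are hypothetical.
        import pandas as pd

        def share_of_myself_also_naming_others(responses: pd.DataFrame) -> float:
            other_columns = [c for c in responses.columns if c != "myself"]
            picked_myself = responses["myself"] == 1
            also_others = responses.loc[picked_myself, other_columns].any(axis=1)
            return float(also_others.mean())

      A value above 0.5 is exactly what the report describes: self-identified responsibility is usually shared with at least one other party.
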

    7. Awareness of data and commitment to its preservation are two key preconditions for successful data curation.

      Great observation!

    8. Which parties do you believe have primary responsibility for the curation of your data? Almost all respondents identified themselves as being personally responsible.

      For those who identify themselves as personally responsible, would they identify themselves (or their group) as the only ones responsible for the data? Or is there a belief that the institution should also be responsible in some way, in addition to themselves?

    9. Availability of the raw survey data is subject to the approval of the UCSB Human Subjects Committee.
    10. Survey design. The survey was intended to capture as broad and complete a view of data production activities and curation concerns on campus as possible, at the expense of gaining more in-depth knowledge.

      Summary of the survey design

    11. Researchers may be underestimating the need for help using archival storage systems and dealing with attendant metadata issues.

      In my mind this is a key challenge: even if people can describe what they need for themselves (itself a very hard problem), it is not obvious what to do from the infrastructure standpoint to implement services that aid the individual researcher, and that also support collaboration across individuals in the same domain, across domains, and across institutions... all in a long-term sustainable way.

      In essence... how do we translate needs that we don't yet fully understand into infrastructure with low barrier to adoption, use, and collaboration?

    12. Researchers view curation as a collaborative activity and collective responsibility.
    13. To summarize the survey's findings:

      • Curation of digital data is a concern for a significant proportion of UCSB faculty and researchers.
      • Curation of digital data is a concern for almost every department and unit on campus.
      • Researchers almost universally view themselves as personally responsible for the curation of their data.
      • Researchers view curation as a collaborative activity and collective responsibility.
      • Departments have different curation requirements, and therefore may require different amounts and types of campus support.
      • Researchers desire help with all data management activities related to curation, predominantly storage.
      • Researchers may be underestimating the need for help using archival storage systems and dealing with attendant metadata issues.
      • There are many sources of curation mandates, and researchers are increasingly under mandate to curate their data.
      • Researchers under curation mandate are more likely to collaborate with other parties in curating their data, including with their local labs and departments.
      • Researchers under curation mandate request more help with all curation-related activities; put another way, curation mandates are an effective means of raising curation awareness.
      • The survey reflects the concerns of a broad cross-section of campus.

      Summary of survey findings.

    14. In 2012 the Data Curation @ UCSB Project surveyed UCSB campus faculty and researchers on the subject of data curation, with the goals of 1) better understanding the scope of the digital curation problem and the curation services that are needed, and 2) characterizing the role that the UCSB Library might play in supporting curation of campus research outputs.

      1) better understanding the scope of the digital curation problem and the curation services that are needed

      2) characterizing the role that the UCSB Library might play in supporting curation of campus research outputs.

    1. The project will develop an analysis package in the open-source language R and complement it with a step-by-step hands-on manual to make tools available to a broad, international user community that includes academics, scientists working for governments and non-governmental organizations, and professionals directly engaged in conservation practice and land management. The software package will be made publicly available at http://www.clfs.umd.edu/biology/faganlab/movement/.

      Output of the project:

      • analysis package written in R
      • step-by-step hands-on manual
      • tools made available to a broad, international community
      • software made publicly available

      Question: What software license will be used? The Apache License is potentially a good choice here: it is a permissive, widely supported open-source license with few obligations or barriers to access and use, which supports the goal of a broad international audience.

      Question: Will the data be made available under a license, as well? Maybe a CC license of some sort?

    2. These species represent not only different types of movement (on land, in air, in water) but also different types of relocation data (from visual observations of individually marked animals to GPS relocations to relocations obtained from networked sensor arrays).

      Movement types:

      • land
      • air
      • water

      Types of relocation data:

      • visual observations
      • GPS
      • networked sensor arrays
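
      Since the package aims to handle all three data sources in one analysis workflow, a single unified relocation record type is a natural design. A minimal sketch of such a schema; every field name here is an assumption for illustration, not the project's actual format (and the project's own package is in R; Python is used here only for consistency with the other sketches):

        # Minimal sketch of a unified relocation record; hypothetical schema.
        from dataclasses import dataclass
        from datetime import datetime
        from enum import Enum
        from typing import Optional

        class Source(Enum):
            VISUAL = "visual observation of a marked individual"
            GPS = "GPS relocation"
            SENSOR_ARRAY = "networked sensor array"

        @dataclass
        class Relocation:
            animal_id: str
            timestamp: datetime
            lat: float
            lon: float
            source: Source
            # Positional error differs greatly across the three sources,
            # so it is worth carrying explicitly when known.
            horizontal_error_m: Optional[float] = None
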
    1. Once a searchable atlas has been constructed there are fundamentally two approaches that can be used to analyze the data: one visual, the other mathematical.
    2. The initial inputs for deriving quantitative information of gene expression and embryonic morphology are raw image data, either of fluorescent proteins expressed in live embryos or of stained fluorescent markers in fixed material. These raw images are then analyzed by computational algorithms that extract features, such as cell location, cell shape, and gene product concentration. Ideally, the extracted features are then recorded in a searchable database, an atlas, that researchers from many groups can access. Building a database with quantitative graphical and visualization tools has the advantage of allowing developmental biologists who lack specialized skills in imaging and image analysis to use their knowledge to interrogate and explore the information it contains.

      1) Initial input is raw image data
      2) Feature extraction on raw image data
      3) Extracted features stored in a shared, searchable database
      4) Database available to researchers from many groups
      5) Quantitative graphical and visualization tools allow access for those without specialized skills in imaging and image analysis
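
      Steps 1-3 can be made concrete with a small example. A minimal sketch using scikit-image on a synthetic image; the blob image and the list-of-dicts "atlas" are stand-ins for real microscopy data and a real database:

        # Minimal sketch: extract cell-like features from a raw image and
        # collect them into rows for a searchable atlas.
        import numpy as np
        from skimage import filters, measure

        # Step 1: "raw image data" -- here, two synthetic fluorescent blobs.
        yy, xx = np.mgrid[0:128, 0:128]
        img = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 50.0)
               + np.exp(-((yy - 90) ** 2 + (xx - 90) ** 2) / 80.0))

        # Step 2: feature extraction -- threshold, label regions, measure them.
        binary = img > filters.threshold_otsu(img)
        labels = measure.label(binary)
        atlas = [
            {
                "cell_id": p.label,
                "centroid": p.centroid,              # cell location
                "area": p.area,                      # crude shape descriptor
                "mean_intensity": p.mean_intensity,  # concentration proxy
            }
            for p in measure.regionprops(labels, intensity_image=img)
        ]

        # Step 3: in practice these rows would be loaded into a shared,
        # searchable database rather than printed.
        for row in atlas:
            print(row)
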

    1. We regularly provide scholars with access to content for this purpose. Our Data for Research site (http://dfr.jstor.org)

      Access to this is exceedingly slow. Note that it is still in beta.

  27. Nov 2013
    1. Not even gephi is very good at visualising temporal networks.

      Hmm, I disagree. In the current version of Gephi, everything is cool.