2,748 Matching Annotations
  1. Feb 2017
    1. And just as every porter wants to have an admirer, so even the proudest of men, the philosopher, supposes that he sees on all sides the eyes of the universe telescopically focused upon his action and thought. It is remarkable that this was brought about by the intellect, which was certainly allotted to these most unfortunate, delicate, and ephemeral beings merely as a device for detaining them a minute within existence.

      There’s a really interesting link to be made with Willard here. Nietzsche is taking on the philosopher (as well as Enlightenment thinking), as philosophers tend to position themselves at the center of the universe because they are on the search for truth. He challenges the science of it, saying that the telescopic (read: narrow) inquiry is futile, as they are on a search for something that is not there. In other words, the more a philosopher tries to focus in on “the truth,” the more a philosopher loses sight of the purpose of the inquiry.

      Likewise, Willard takes on patriarchal exegesis as though it, too, is a science. By using the telescopic metaphor (similar to Nietzsche), she makes it clear that a search for truth in such a narrow sense is useless to the human endeavor. From “The Letter Killeth”:

      “We need women commentators to bring out the women’s side of the book; we need the stereoscopic view of truth in general, which can only be had when a woman’s eye and man’s together shall discern the perspective of the Bible’s full-orbed revelation…while they turn their linguistic telescopes on truth, I may be allowed to make a correction for the “personal equation” in the results which they espy” (1126).

      Although Willard does suggest that the truth can be reached (a “full-orbed revelation”), it is not until both halves — the woman’s and the man’s — are taken into account. Again, the more a male preacher tries to focus in on "the truth," the more he loses sight of the purpose of the inquiry. Really, all of humanity is at stake for Willard.

      I don't know. There’s something going on with eyes and telescopes and science and philosophy and exegesis, but I’m not quite sure how to articulate it…

    1. We also think that there is too much of a disconnect between research and the people who it serves.

      This should be corrected.

    1. Morris Pelzel on Doug Engelbart's Augmenting Human Intellect: A Conceptual Framework.

      the presentation and arrangement of symbolic data is crucial, and any given arrangement may be more or less conducive to discovery. That is why Engelbart’s observation about “playing” with and “rearranging” the materials we are dealing with strikes such a resonant chord. Even if we are only dealing with text, the ease of recombining and manipulating our words and phrases enables our writing to more quickly reach a suitable form of expression.

      as we are social animals, our intellects work most effectively, not in isolation but in connection with others. When thoughts and ideas are externally represented, they thereby enter to some degree a public space,

    1. With Anenke, a French orthodontics company, they are patient-testing a new process that uses digital scanning to create a dental device which is printed with safe polymers.

      A new way to make braces!

    2. “We’ve figured out how to print custom wood grain by taking sawdust and plastic, combining it into a filament and extruding or forcing it out,” says Tibbits.

      Printing wood!

    3. Tibbits has pioneered the field of 4D printing — that is, using 3D printers to create material that then transforms into a predetermined shape (TED Talk: The emergence of 4D printing).

      I've never heard of 4D printing!

    4. At MIT’s International Design Center, Self-Assembly Lab founder Skylar Tibbits, co-director Jared Laucks and their team are inventing smart materials that can automatically take shape in useful — and sometimes surprising — ways.

      This would be a fun project to work on!

    1. “A lesson about every single element on the periodic table.”

      This is a fascinating idea! I'm going to try to watch more of them!

    1. The climate scientists gave the conspiracy theorists an opening by letting their advocacy color their science, which compromised the legitimacy of their enterprise and, ironically, weakened the political movement itself.

      Check out this book on the intersection of science and democracy.

    2. With scientific claims, the only definitive answer is to reexamine the original research data and repeat the experiments and analysis. But no one has the time or the expertise to examine the original research literature on every topic, let alone repeat the research. As such, it is important to have some guidelines for deciding which theories are plausible enough to merit serious examination.

      "The Superiority of Scientific Evidence Reexamined":

      "Allow me now to ask, Will he be so perfectly satisfied on the first trial as not to think it of importance to make a second, perhaps a third, and a fourth? Whence arises this diffidence? Purely from the consciousness of the fallibility of his own faculties. But to what purpose, it may be said, the reiterations of the attempt, since it is impossible for him, by any efforts, to shake off his dependence on the accuracy of his attention and fidelity of his memory? Or, what can he have more than reiterated testimonies of his memory, in support of the truth of its former testimony? I acknowledge, that after a hundred attempts he can have no more. But even this is a great deal. We learn from experience, that the mistakes or oversights committed by the mind in one operation are sometimes, on a review, corrected on the second, or perhaps on a third. Besides, the repetition, when no error is discovered, enlivens the remembrance, and so strengthens the conviction. But, for this conviction, it is plain that we are in a great measure indebted to memory, and in some measure even to experience." (Campbell 922)

    1. After a brief training session, participants spent six hours archiving environmental data from government websites, including those of the National Oceanic and Atmospheric Administration and the Interior Department.

      A worthwhile effort.

    2. An anonymous donor has provided storage on Amazon servers, and the information can be searched from a website at the University of Pennsylvania called Data Refuge. Though the Federal Records Act theoretically protects government data from deletion, scientists who rely on it say they would rather be safe than sorry.

      Data refuge.

    3. “But if we want to defend the role of science in policy making, scientists need to run for office.

      This would be an interesting development.

    1. Centre for Integrative Neuroscience and Neurodynamics

      This is the most awesomest research Centre ever!!

    1. Some of the new instruments of science are, themselves, sciences; others are arts; still others, products of either art or nature

      How is his usage of "sciences" and "arts" here different (or not) from "technology"? In other words, to what extent is he saying that instrumental developments in science and art are the result of new technology?

      He explicitly describes technological advances like the microscope, telescope, and mariner's needle, but can the other developments be attributed to technology in more implicit (but nonetheless important) ways?

  2. Jan 2017
    1. One of those values is the principle of material honesty. One material should not be used as a substitute for another. Otherwise the end result is deceptive.

      Great principle!

      Should be applied to science as well: scientific publication is meant to spread ideas and findings, not to evaluate researchers!

    1. In Python, as well as in any other object-oriented programming language, we define a class to be a description of what the data look like (the state) and what the data can do (the behavior). Classes are analogous to abstract data types because a user of a class only sees the state and behavior of a data item. Data items are called objects in the object-oriented paradigm. An object is an instance of a class.

      Class = General description of form and functions of data. Object = A member or instance of a class.
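
      A minimal sketch of this distinction in Python (the `Counter` class is invented for illustration):

```python
# A class describes what its data look like (state) and can do (behavior).
class Counter:
    def __init__(self):
        self.count = 0        # state

    def increment(self):      # behavior
        self.count += 1

# An object is a concrete instance of the class, with its own state.
c = Counter()
c.increment()
print(c.count)                 # 1
print(isinstance(c, Counter))  # True
```

      Each call to `Counter()` creates a new object, so two counters keep independent state.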

  3. Dec 2016
    1. Bullshit is much harder to detect when we want to agree with it. The first and most important step is to recognise the limits of our own cognition. We must be humble about our ability to justify our own beliefs. These are the keys to adopting a critical mindset – which is our only hope in a world so full of bullshit.

      How can people practically identify and expand the limits of their cognition? Beliefs have the persistent habit of reinforcing themselves. It would be nice if the author had pointed to actual resources for 'adopting a critical mindset'.

    1. Montreal Neurological Institute

      sharing all data associated with its research; no patents for 5 yrs (see video) - first major research institute of its kind - check if this is really true?

    2. European Union, Japan and the United States

      Find out specifically which of these are "open" and if they are all focused on neuroscience?

    1. Another 23 percent showed signs of accepting the story to some degree, the researchers said.

      The headline for this article is really deceptive. Or the summary of the statistics is sloppy. Or both. Either way, this is dangerous from a political perspective, as is evident in the comments field. From bipartisan, lazy accusations of "fake news" to disbelief in the accounts of survivors of abuse, this story will be used in nasty ways that go far beyond what it deserves.

  4. Nov 2016
    1. Every theorem of mathematics, every significant result of science, is a challenge to our imagination as interface designers. Can we find ways of expressing these principles in an interface? What new objects and new operations does a principle suggest? What a priori surprising relationship between those objects and operations are revealed by the principle? Can we find interfaces which vividly reveal those relationships, preferably in a way that is unique to the phenomenon being studied?
    2. Speech, writing, math notation, various kinds of graphs, and musical notation are all examples of cognitive technologies. They are tools that help us think, and they can become part of the way we think -- and change the way we think.

      Computer interfaces can be cognitive technologies. To whatever degree an interface reflects a set of ideas or methods of working, mastering the interface provides mastery of those ideas or methods.

      Experts often have ways of thinking that they rarely share with others, for various reasons. Sometimes they aren't fully aware of their thought processes. The thoughts may be difficult to convey in speech or print. The thoughts may seem sloppy compared to traditional formal explanations.

      These thought processes often involve:

      • minimal canonical examples - simple models
      • heuristics for rapid reasoning about what might work

      Nielsen considers turning such thought processes into (computer) interfaces. "Every theorem of mathematics, every significant result of science, is a challenge to our imagination as interface designers. Can we find ways of expressing these principles in an interface? What new objects and operations does a principle suggest?"

    1. The participants with relatively strong spatial abilities tended to gravitate towards, and excel in, scientific and technical fields such as the physical sciences, engineering, mathematics, and computer science.
  5. Oct 2016
    1. Democratizing science does not mean settling questions about Nature by plebiscite, any more than democratizing politics means setting the prime rate by referendum. What democratization does mean, in science as elsewhere, is creating institutions and practices that fully incorporate principles of accessibility, transparency, and accountability. It means considering the societal outcomes of research at least as attentively as the scientific and technological outputs. It means insisting that in addition to being rigorous, science be popular, relevant, and participatory.
    1. (courses.csail.mit.edu/18.337/2015/docs/50YearsDataScience.pdf)

      nice reference !

  6. Sep 2016
    1. Activities such as time spent on task and discussion board interactions are at the forefront of research.

      Really? These aren’t uncontroversial, to say the least. For instance, discussion board interactions often call for careful, mixed-method work with an eye to preventing instructor effect and confirmation bias. “Time on task” is almost a codeword for distinctions between models of learning. Research in cognitive science gives very nuanced value to “time spent on task” while the Malcolm Gladwells of the world usurp some research results. A major insight behind Competency-Based Education is that it can allow for some variance in terms of “time on task”. So it’s kind of surprising that this summary puts those two things to the fore.

  7. Aug 2016
    1. Performance Category / Design Categories:

       i. Structure - Frame design, shape and materials, for function
       ii. Mobility - Thrusters: number, power, orientation
       iii. Sensors - Cameras, lights, sonar, touch sensors, compass, GPS
       iv. Tools - Arms, claws, rakes, wrenches, hammers
       v. Ranging Distance - Tether length: waterproofing required
       vi. Buoyancy/Ballast - Fixed or variable, location and materials
       vii. Controls - RC via wire or signal via fibre optic cable
       viii. Other? - Depends on the specific mission

      Are you doing science projects? Maybe you can use an old mission scope to have students ask questions about. That way some of the questions we will need to face will be answered before we actually get the mission for this year.

  8. maurice1979-blog.tumblr.com maurice1979-blog.tumblr.com
    1. Hi there, I am using this open source tool to promote open science by making open annotations directly on the web as a platform for collaboration. You can also jot down your comments in the context where they belong.

  9. Jul 2016
    1. Page 187 On hyper authorship

      "hyper authorship” is an indicator of "collective cognition" in which the specific contributions of individuals no longer can be identified. Physics has among the highest rates of coauthorship in the sciences and the highest rates of self archiving documents via a repository. Whether the relationship between research collaborators (as indicated by the rates of coauthorship) and sharing publications (as reflected in self archiving) holds in other fields is a question worth exploring empirically.

    2. Page 184

      Scientific data will not be "all digital" anytime soon, however. Substantial amounts of important "legacy data" remain in paper form, in both public and private hands. An estimated 750 million specimens in US natural history museums, for example, lack digital descriptions. An effort is underway to digitize the descriptions at large scale, using barcoding techniques. Digitizing historical documents such as newspapers, handbooks, directories, and land-use records will benefit the sciences in addition to the humanities and social sciences. These records are used to establish historical patterns of weather, crop yields, animal husbandry, and so forth. An untold wealth of scientific data lies in private hands. Individual scientists often keep the records of their research careers in their offices and laboratories, limited only by storage space on shelves and in refrigerators, freezers, and digital devices.

    3. Page 47

      Communication is the essence of scholarship, as many observers have said in many ways. Scholarship is an inherently social activity, involving a wide range of private and public interactions within the research community. Publication, as the public report of research, is part of a continuous cycle of reading, writing, discussing, searching, investigating, presenting, submitting, and reviewing. No scholarly publication stands alone. Each new work in a field is positioned relative to others through the process of citing relevant literature.

    1. Neil Fraser says Vietnam is doing well with computer science education.

      "If grade 5 students in Vietnam are performing at least on par with their grade 11 peers in the USA, what does grade 11 in Vietnam look like? I walked into a high school CS class, again without any advance notice. The class was working on the assignment below (partially translated by their teacher for my benefit afterwards). Given a data file describing a maze with diagonal walls, count the number of enclosed areas, and measure the size of the largest one."
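
      The maze assignment above is essentially a flood-fill exercise. The data format isn't specified, so this sketch simplifies to axis-aligned walls on a character grid ('#' wall, '.' open); the diagonal-wall version changes the geometry but not the technique:

```python
from collections import deque

def enclosed_areas(grid):
    """Count open regions fully enclosed by walls and return
    (number_of_regions, size_of_largest). Regions touching the
    border are open to the outside, hence not enclosed."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = largest = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '.' and not seen[r][c]:
                # Flood-fill one open region with BFS.
                queue = deque([(r, c)])
                seen[r][c] = True
                size, touches_border = 0, False
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    if y in (0, rows - 1) or x in (0, cols - 1):
                        touches_border = True
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == '.' and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if not touches_border:
                    count += 1
                    largest = max(largest, size)
    return count, largest

print(enclosed_areas(["#####",
                      "#.#.#",
                      "#####"]))  # (2, 1)
```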

    1. p. 141

      Initially, the digital humanities consisted of the curation and analysis of data that were born digital, and the digitisation and archiving projects that sought to render analogue texts and material objects into digital forms that could be organised and searched and be subjects to basic forms of overarching, automated or guided analysis, such as summary visualisations of content or connections between documents, people or places. Subsequently, its advocates have argued that the field has evolved to provide more sophisticated tools for handling, searching, linking, sharing and analysing data that seek to complement and augment existing humanities methods, and facilitate traditional forms of interpretation and theory building, rather than replacing traditional methods or providing an empiricist or positivistic approach to humanities scholarship.

      summary of history of digital humanities

  10. Jun 2016
    1. VIA EFF

      Open access: All human knowledge is there—so why can’t everybody access it? (Ars Technica)

      Excellent report on the state of academic publishing— and why so much of it is still locked down.

      NOTE

      If we cannot access the works we fund, neither can we annotate all knowledge.

      And in this case, it may pertain to the most crucial body of all our knowledge — the knowledge upon which we are to found our own futures. What is to be recognized as "the Human knowledge," while yet unknown by almost every one of us Humans ourselves.

    2. A history of open access academic publishing from the early 1990s to 2016.

  11. jis.sagepub.com.ezproxy.alu.talonline.ca jis.sagepub.com.ezproxy.alu.talonline.ca
    1. Bibliometric studies of research collaboration:A review

      Subramanyam, K. 1983. “Bibliometric Studies of Research Collaboration: A Review.” J. Inf. Sci. Eng. 6 (1): 33–38.

    1. Civilization advances by extending the number of important operations we can perform without thinking about them.

      This sounds really similar to the concept of "abstraction".


    1. If the RRID is well-formed, and if the lookup found the right record, a human validator tags it a valid RRID — one that can now be associated mechanically with occurrences of the same resource in other contexts. If the RRID is not well-formed, or if the lookup fails to find the right record, a human validator tags the annotation as an exception and can discuss with others how to handle it. If an RRID is just missing, the validator notes that with another kind of exception tag.

      Sounds a lot like the way reference managers work. In many cases, people keep the invalid or badly-formed results.
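
      The mechanical part of that triage could be sketched as follows; the regex, tag names, and toy registry here are all assumptions for illustration (the real workflow keeps a human validator in the loop):

```python
import re

# Assumed shape of a well-formed RRID, e.g. "RRID:AB_2298772".
WELL_FORMED = re.compile(r"^RRID:[A-Za-z]+_?[A-Za-z0-9-]+$")

def triage(rrid, lookup):
    """Suggest a tag for a candidate RRID; `lookup` stands in for the
    registry query and should return a record or None."""
    if rrid is None:
        return "missing-rrid"      # exception: no identifier present
    if not WELL_FORMED.match(rrid):
        return "malformed-rrid"    # exception: discuss how to handle it
    if lookup(rrid) is None:
        return "unresolved-rrid"   # exception: no matching record found
    return "valid-rrid"            # can be linked across contexts

# Toy registry standing in for the real resolver.
registry = {"RRID:AB_2298772": {"name": "example antibody"}}
print(triage("RRID:AB_2298772", registry.get))  # valid-rrid
print(triage("not an id", registry.get))        # malformed-rrid
```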

    2. “papers are the only scientific artifacts that are guaranteed to be preserved.”

      Under the current mode of action.

  12. screen.oxfordjournals.org screen.oxfordjournals.org
    1. Governing this function is the belief that there must be - at a particular level of an author's thought, of his conscious or unconscious desire - a point where contradictions are resolved, where the incompatible elements can be shown to relate to one another or to cohere around a fundamental and originating contradiction.

      This is not true (in theory) of scientific authorship. We don't judge the coherence of the oeuvre.

      Again it conflict with Fish's view of literary criticism

    2. a scientific programme, the founding act is on an equal footing with its future transformations: it is merely one among the many modifications that it makes possible. This interdependence can take several forms. In the future development of a science, the founding act may appear as little more than a single instance of a more general phenomenon that has been discovered. It might be questioned, in retrospect, for being too intuitive or empirical and submitted to the rigours of new theoretical operations in order to situate it in a formal domain. Finally, it might be thought a hasty generalization whose validity should be restricted. In other words, the founding act of a science can always be rechannelled through the machinery of transformations it has instituted.

      Paradigm shifts are part of the science that follows (i.e. are filled in by normal science, in Kuhn's terms).

    3. The distinctive contribution of these authors is that they produced not only their own work, but the possibility and the rules of formation of other texts. In this sense, their role differs entirely from that of a novelist, for example, who is basically never more than the author of his own text. Freud is not simply the author of The Interpretation of Dreams or of Wit and its Relation to the Unconscious and Marx is not simply the author of the Communist Manifesto or Capital: they both established the endless possibility of discourse. Obviously, an easy objection can be made. The author of a novel may be responsible for more than his own text; if he acquires some 'importance' in the literary world, his influence can have significant ramifications. To take a very simple example, one could say that Ann Radcliffe did not simply write The Mysteries of Udolpho and a few other novels, but also made possible the appearance of Gothic Romances at the beginning of the nineteenth century. To this extent, her function as an author exceeds the limits of her work. However, this objection can be answered by the fact that the possibilities disclosed by the initiators of discursive practices (using the examples of Marx and Freud, whom I believe to be the first and the most important) are significantly different from those suggested by novelists. The novels of Ann Radcliffe put into circulation a certain number of resemblances and analogies patterned on her work - various characteristic signs, figures, relationships, and structures that could be integrated into other books. In short, to say that Ann Radcliffe created the Gothic Romance means that there are certain elements common to her works and to the nineteenth-century Gothic romance: the heroine ruined by her own innocence, the secret fortress that functions as

      Really useful passage to compare to Kuhn. This is basically an argument about paradigm shifters and normal science as applied to literature.

    4. Another thesis has detained us from taking full measure of the author's disappearance. It avoids confronting the specific event that makes it possible and, in subtle ways, continues to preserve the existence of the author. This is the notion of écriture. Strictly speaking, it should allow us not only to circumvent references to an author, but to situate his recent absence. The conception of écriture, as currently employed, is concerned with neither the act of writing nor the indications, as symptoms or signs within a text, of an author's meaning; rather, it stands for a remarkably profound attempt to elaborate the conditions of any text, both the conditions of its spatial dispersion and its temporal deployment

      écriture is a false way of stepping around the problem in literary criticism, because it simply defers the identity of the author without ceasing to treat the author as a unit. But it might be a solution for science writing, in that a credit system, for example, doesn't need an author-function to exist.

    5. This problem is both theoretical and practical. If we wish to publish the complete works of Nietzsche, for example, where do we draw the line? Certainly, everything must be published, but can we agree on what 'everything' means? We will, of course, include everything that Nietzsche himself published, along with the drafts of his works, his plans for aphorisms, his marginal notations and corrections. But what if, in a notebook filled with aphorisms, we find a reference, a reminder of an appointment, an address, or a laundry bill, should this be included in his works? Why not?

      How to define literature: again, a difference from science. It would never occur to us to confuse a scientist's scientific work with all their other writing, because the category is so clear; but literature is a more amorphous term.

    1. Beaver and Rosen (1978) have shown how the differential rates of scientific institutionalization in France, England, and Germany are mirrored in the relative output of coauthored papers.

      bibliography tying rate of coauthorship to professionalisation of science

    2. In some domains, path-breaking work is necessarily the outcome of collaborative activity rather than individualistic scholarship, a fact reflected in the modest proportion of federal research funds which is allocated to individual investigators rather than teams. Collaborations are a necessary feature of much, though by no means all, contemporary scientific research.

      in some domains, collaboration is necessary. Hence the preference for team grants

    3. After World War II, collaboration became a defining feature of ‘big science’ (Bordons & Gomez, 2000; Cronin, 1995, pp. 4–13; Katz & Martin, 1997).

      collaboration becomes a defining feature of "big science" after the war.

    4. In general terms, the lone author stereotype ignores the fact that a great deal of the scholarly literature is the product of a “socio-technical production and communications network” (Kling, McKim, Fortuna, & King, 1999),

      A great deal of scientific production is the product of a "socio-technical production and communications network"

    5. Shapin (1995, p. 178) notes in his brilliant study of trust in 17th-century English science,

      "Brilliant study of trust in 17th century English science"

    6. Before the precursors of today’s scholarly journals established themselves in the second half of the 17th century, scientists communicated via letters.

      original form of scholarly comm was letters

    1. A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

      http://psychsciencenotes.blogspot.com/p/about-us.html
      Psychologists Andrew Wilson and Sabrina Golonka

    2. Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
  13. May 2016
    1. Therein lies the power of mistakes as a vehicle for, as Rilke famously put it, “living the questions” and thus advancing knowledge in a way that certainty cannot — for, as Richard Feynman memorably noted, the scientist’s job is to remain unsure, and so seems the philosopher’s. Dennett writes:
    2. The history of philosophy is in large measure the history of very smart people making very tempting mistakes, and if you don’t know the history, you are doomed to making the same darn mistakes all over again. … There is no such thing as philosophy-free science, just science that has been conducted without any consideration of its underlying philosophical assumptions.
    1. Writing and submission. The process of compiling findings, writing accompanying narrative and making this available for public view and scrutiny can be simplified by the use of new improved software. These tools can help identify relevant papers through increasingly powerful learning algorithms (e.g. F1000Workspace, Mendeley, Readcube). They can also enable collaborative authoring (e.g. F1000Workspace, Overleaf, Google docs), and provide formatting tools to simplify the process of structuring an article to ensure all the necessary underlying information has been captured (e.g. F1000Workspace, EndNote). Submission for posting as a preprint, and/or for formal publication and peer review, should be as simple as a single click.

      How can an "Open Science Platform" be built upon proprietary tools only? Maybe the meaning of "open" needs to be defined here?

  14. Apr 2016
    1. Great Principles of Computing
      Peter J. Denning, Craig H. Martell

      This is a book about the whole of computing—its algorithms, architectures, and designs.

      Denning and Martell divide the great principles of computing into six categories: communication, computation, coordination, recollection, evaluation, and design.

      "Programmers have the largest impact when they are designers; otherwise, they are just coders for someone else's design."

    1. one of the roles of philosophy over the past two and half millennia has been to prepare the ground for the birth and eventual intellectual independence of a number of scientific disciplines. But contra what you seem to think, this hasn’t stopped with the Scientific Revolution, or with the advent of quantum mechanics. Physics became independent with Galileo and Newton (so much so that the latter actually inspired David Hume and Immanuel Kant to do something akin to natural philosophizing in ethics and metaphysics); biology awaited Darwin (whose mentor, William Whewell, was a prominent philosopher, and the guy who coined the term “scientist,” in analogy to artist, of all things); psychology spun out of its philosophical cocoon thanks to William James, as recently (by the standards of the history of philosophy) as the late 19th century. Linguistics followed through a few decades later (ask Chomsky); and cognitive science is still deeply entwined with philosophy of mind (see any book by Daniel Dennett). Do you see a pattern of, ahem, progress there? And the story doesn’t end with the newly gained independence of a given field of empirical research. As soon as physics, biology, psychology, linguistics and cognitive science came into their own, philosophers turned to the analysis (and sometimes even criticism) of those same fields seen from the outside: hence the astounding growth during the last century of so called “philosophies of”: of physics (and, more specifically, even of quantum physics), of biology (particularly of evolutionary biology), of psychology, of language, and of mind.

      Massimo Pigliucci skewering Neil deGrasse Tyson for outright dismissal of philosophy.

    1. Referees of grant proposals agree much more about what is unworthy of support than about what does have scientific value.

      Grant referees are better at agreeing on inadequate work than adequate


    1. A system that assumes a "quite good" institution is unable to get better, and thus denies them the funds that would enable them to get better, is probably not an optimal system for promoting merit. A system that rewards in proportion to merit would at least be able to recognise and reflect the dynamism of university research; research groups wax and wane as people come, go, get disheartened, get re-invigorated.

      On the importance of funding middle-ground

    2. it could be argued that we don’t just need an elite: we need a reasonable number of institutions in which there is a strong research environment, where more senior researchers feel valued and their graduate students and postdocs are encouraged to aim high. Our best strategy for retaining international competitiveness might be by fostering those who are doing well but have potential to do even better

      capacity requires top and middle.

    1. I consider that my job, as a philosopher, is to activate the possible, and not to describe the probable, that is, to think situations with and through their unknowns when I can feel them

      The job of a philosopher is to "activate the possible, not describe the probable."

  15. Mar 2016
  16. Feb 2016
    1. The science of the individual relies on dynamic systems theory rather than group statistics. Its research methodology is characterized by “analyze, then aggregate” (analyze each subject separately, then combine individual patterns into collective understanding) rather than “aggregate, then analyze” (derive group statistics based on aggregate data, then use these statistics to evaluate and understand individuals).

      A mathematical psychologist at Penn State University, Molenaar extended ergodic theory to prove that it was not mathematically permissible to use assessment instruments based on group averages to evaluate individuals.

      A Manifesto on Psychology as Idiographic Science, Peter Molenaar
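
      Molenaar's "analyze, then aggregate" versus "aggregate, then analyze" distinction can be made concrete with a toy sketch (the subject names, counts, and means below are invented for illustration): when subjects contribute unequal numbers of observations, the two orderings give different answers, and the pooled statistic is dominated by whoever was measured most.

```python
import random

random.seed(1)

# Hypothetical data: two subjects with different true means and very
# different numbers of observations (all values are illustrative).
subjects = {
    "s1": [random.gauss(10, 1) for _ in range(50)],
    "s2": [random.gauss(20, 1) for _ in range(5)],
}

# "Aggregate, then analyze": pool all observations, then compute one statistic.
pooled = [x for obs in subjects.values() for x in obs]
aggregate_first = sum(pooled) / len(pooled)

# "Analyze, then aggregate": summarize each subject separately, then combine.
per_subject = [sum(obs) / len(obs) for obs in subjects.values()]
analyze_first = sum(per_subject) / len(per_subject)

print(aggregate_first)  # close to s1's mean, since s1 dominates the pool
print(analyze_first)    # treats both subjects equally, near 15
```

      Neither number is "the" right answer; the point is that a statistic computed on the aggregate need not describe any individual in it.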

    1. Garvey, Lin, and Tomita (1972) discovered that almost one-third of authors who had a paper rejected had "abandoned the subject matter area of their articles" within a year (p. 214).

      Garvey, William D., Nan Lin, and Kazuo Tomita. 1972. “Research Studies in Patterns of Scientific Communication: III. Information-Exchange Processes Associated with the Production of Journal Articles.” Information Storage and Retrieval 8 (5): 207–21. doi:10.1016/0020-0271(72)90031-9.

    1. Sabrina Gonzalez Pasterski is a 22-year-old MIT graduate and Harvard PhD candidate in physics whose talents have gained a lot of attention.

      http://physicsgirl.com/

    1. An all-star international team of astrophysicists used an exquisitely sensitive, $1.1 billion set of twin instruments known as the Laser Interferometer Gravitational-wave Observatory, or LIGO, to detect a gravitational wave generated by the collision of two black holes 1.3 billion light-years from Earth.
    1. Great explanation of 15 common probability distributions: Bernoulli, Uniform, Binomial, Geometric, Negative Binomial, Exponential, Weibull, Hypergeometric, Poisson, Normal, Log Normal, Student's t, Chi-Squared, Gamma, Beta.

    1. Since its start in 1998, Software Carpentry has evolved from a week-long training course at the US national laboratories into a worldwide volunteer effort to improve researchers' computing skills. This paper explains what we have learned along the way, the challenges we now face, and our plans for the future.

      http://software-carpentry.org/lessons/<br> Basic programming skills for scientific researchers.<br> SQL, and Python, R, or MATLAB.

      http://www.datacarpentry.org/lessons/<br> Managing and analyzing data.

  17. Jan 2016
    1. While there are some features shared between a university repository and us, we are distinctly different for the following reasons: we offer DOIs to all content published on The Winnower; all content is automatically typeset on The Winnower; content published on The Winnower is not restricted to one university but is published amongst work from peers at different institutions around the world, which makes it more discoverable; we offer Altmetrics on content; our site is much more visually appealing than a typical repository; and work can be openly reviewed on The Winnower, while it is often not even commented on in repositories. This is not to say that repositories have no place, but that we should focus on offering authors choices, not restricting them to products developed in house.

      Regarding this tension (and complementarity) between in-house and external publishing platforms, I wonder where the place is for indie-web, self-hosted publishing, like that driven by grafoscopio.

      A reproducible, structured, interactive grafoscopio notebook is self-contained in software and data and holds all its history by design. Will in-house solutions and open journals like The Winnower, RIO Journal or the Self Journal of Science support such kinds of publishing artifacts?

      Technically there is not a big barrier (it's mostly about hosting Fossil repositories, which is pretty easy, and adding a discoverability and author layer on top), but it seems that the only option now for storing other research artifacts like software and data is going to big DVCS and data platforms like GitHub or Datahub, so the landscape is mostly centralized instead of also peer-to-peer. These p2p alternatives seem to be outside the radar of most alternative Open Access and Open Science publishers.

    1. 50 Years of Data Science, David Donoho<br> 2015, 41 pages

      This paper reviews some ingredients of the current "Data Science moment", including recent commentary about data science in the popular media, and about how/whether Data Science is really different from Statistics.

      The now-contemplated field of Data Science amounts to a superset of the fields of statistics and machine learning which adds some technology for 'scaling up' to 'big data'.

    1. Discussion about Obama's computer science for K-12 initiative. CS programs in high school are about 40 years overdue. It is a valid concern that much of this money may be wasted on overpriced proprietary software, hardware, and training programs. And of course, average schools will handle CS about like they handle other subjects -- not very well.

      Another concern raised, and countered, is that more programmers will mean lower wages for programmers. But not everyone who studies CS in high school is going to become a programmer. And an increase in computer literacy may help increase the demand for programmers and technicians.

    1. educators and business leaders are increasingly recognizing that CS is a “new basic” skill necessary for economic opportunity. The President referenced his Computer Science for All Initiative, which provides $4 billion in funding for states and $100 million directly for districts in his upcoming budget; and invests more than $135 million beginning this year by the National Science Foundation and the Corporation for National and Community Service to support and train CS teachers.
    1. open Science

      The Leibniz Research Alliance Science 2.0 investigates the effects of the digital transformation on research. Its currently 37 partners work on the research priorities "new work habits", "technology development", and "usage research". Inseparably connected with this are the current developments toward opening up the scientific process as a whole, or parts of it ("Open Science").

      http://www.leibniz-science20.de/

    1. This has implications far beyond the cryptocurrency

      The concept of trust, in the sociological and economic sense, underlies exchange. In the 15th-17th centuries, the Dutch and English dominance of trade owed much to their early development of instruments of credit that allowed merchants to fund and later to insure commercial shipping without the exchange of hard currency, either silver or by physically transporting the currency of the realm. Credit worked because the English and Dutch economies trusted the issuers of credit.

      Francis Fukuyama, a philosopher and political economist at Stanford, wrote a book in 1995, Trust: The Social Virtues and the Creation of Prosperity, on the impact of cultures of trust on entrepreneurial growth. Countries of ‘low trust’ have close-knit family cultures that limit trust to relatives: France, China, S. Italy. Countries of ‘high trust’ have greater ‘spontaneous sociability’ that encourages the formation of intermediate institutions between the state and the family, and with them greater entrepreneurial growth: Germany, England, the U.S. – I own the book and (shame on me!) haven’t yet read it.

      I thought of this article in those contexts – of the general need for trusted institutions and the power they have in mediating an economy, and the fascinating questions raised when a new facilitator of trust is introduced.

      How do we trust? Across human history, how have we extended the social role of trust to institutions? If a new modality of trust comes available, how does that change institutional structures and correspondingly the power of individuals, of institutions. How would it change the friction to growth and to decline?

      Prior to reading this article, I had dismissed Bitcoin as a temporary aberration, mostly for criminal enterprises and malcontents. I still feel that way. But the underlying technology and its implications – now that’s interesting.

    1. Category Theory for the Sciences by David I. Spivak<br> Creative Commons Attribution-NonCommercial-ShareAlike 4.0<br> MIT Press.

    1. "A friend of mine said a really great phrase: 'remember those times in early 1990's when every single brick-and-mortar store wanted a webmaster and a small website. Now they want to have a data scientist.' It's good for an industry when an attitude precedes the technology."
    1. paradox of unanimity - Unanimous or nearly unanimous agreement doesn't always indicate the correct answer. If agreement is unlikely, it indicates a problem with the system.

      Witnesses who only saw a suspect for a moment are not likely to be able to pick them out of a lineup accurately. If several witnesses all pick the same suspect, you should be suspicious that bias is at work. Perhaps these witnesses were cherry-picked, or they were somehow encouraged to choose a particular suspect.
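
      The warning can be put in numbers with a back-of-the-envelope calculation (the accuracy figure below is invented): if each of n witnesses independently identifies the suspect with probability p, the chance that all of them agree decays geometrically in n, so routine unanimity among many imperfect witnesses is itself suspicious.

```python
# Illustrative only: probability that n independent witnesses, each with
# accuracy p, ALL pick the same (correct) suspect. If observed unanimity
# is far more common than this, suspect bias or a broken process.
p = 0.8  # assumed per-witness accuracy
for n in (3, 5, 10):
    all_agree = p ** n
    print(n, round(all_agree, 4))
```

      Under these assumptions, ten honest witnesses all agree only about 11% of the time; lineups that are reliably unanimous would be "too good to be true".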

    1. Stupid models are extremely useful. They are useful because humans are boundedly rational and because language is imprecise. It is often only by formalizing a complex system that we can make progress in understanding it. Formal models should be a necessary component of the behavioral scientist’s toolkit. Models are stupid, and we need more of them.

      Formal models are explicit in the assumptions they make about how the parts of a system work and interact, and moreover are explicit in the aspects of reality they omit.

      -- Paul Smaldino

    2. Microeconomic models based on rational choice theory are useful for developing intuition, and may even approximate reality in a few special cases, but the history of behavioral economics shows that standard economic theory has also provided a smorgasbord of null hypotheses to be struck down by empirical observation.
    3. Where differences between conditions are indicated, avoid the mistake of running statistical analyses as if you were sampling from a larger population.

      You already have a generating model for your data – it’s your model. Statistical analyses on model data often involve modeling your model with a stupider model. Don’t do this. Instead, run enough simulations to obtain limiting distributions.

    4. A model’s strength stems from its precision.

      I have come across too many modeling papers in which the model – that is, the parts, all their components, the relationships between them, and mechanisms for change – is not clearly expressed. This is most common with computational models (such as agent-based models), which can be quite complicated, but also exists in cases of purely mathematical models.

    5. However, I want to be careful not to elevate modelers above those scientists who employ other methods.

      This is important for at least two reasons, the first and foremost of which is that science absolutely requires empirical data. Those data are often painstaking to collect, requiring clever, meticulous, and occasionally tedious labor. There is a certain kind of laziness inherent in the professional modeler, who builds entire worlds from his or her desk using only pen, paper, and computer. Relatedly, many scientists are truly fantastic communicators, and present extremely clear theories that advance scientific understanding without a formal model in sight. Charles Darwin, to give an extreme example, laid almost all the foundations of modern evolutionary biology without writing down a single equation.

    6. Ultimately, the theory has been shown to be incorrect, and has been epistemically replaced by the theory of General Relativity. Nevertheless, the theory is able to make exceptionally good approximations of gravitational forces – so good that NASA’s moon missions have relied upon them.

      General Relativity may also turn out to be a "dumb model". https://twitter.com/worrydream/status/672957979545571329

    7. Table 1. Twelve functions served by false models. Adapted with permission from Wimsatt

      Twelve good uses for dumb models, William Wimsatt (1987).

    8. To paraphrase Gunawardena (2014), a model is a logical engine for turning assumptions into conclusions.

      By making our assumptions explicit, we can clearly assess their implied conclusions. These conclusions will inevitably be flawed, because the assumptions are ultimately incorrect or at least incomplete. By examining how they differ from reality, we can refine our models, and thereby refine our theories and so gradually we might become less wrong.

    9. the stupidity of a model is often its strength. By focusing on some key aspects of a real-world system (i.e., those aspects instantiated in the model), we can investigate how such a system would work if, in principle, we really could ignore everything we are ignoring. This only sounds absurd until one recognizes that, in our theorizing about the nature of reality – both as scientists and as quotidian humans hopelessly entangled in myriad webs of connection and conflict – we ignore things all the time.
    10. The generalized linear model, the work horse of the social sciences, models data as being randomly drawn from a distribution whose mean varies according to some parameter. The linear model is so obviously wrong yet so useful that the mathematical anthropologist Richard McElreath has dubbed it “the geocentric model of applied statistics,” in reference to the Ptolemaic model of the solar system that erroneously placed the earth rather than the sun at the center but nevertheless produced accurate predictions of planetary motion as they appeared in the night sky (McElreath 2015).

      A model that approximates some aspect of reality can be very useful, even if the model itself is flat-out wrong.

      But on the other hand, we can't accept approximation of reality as hard proof that a model is correct.

    11. Unfortunately, my own experience working with complex systems and working among complexity scientists suggests that we are hardly immune to such stupidity. Consider the case of Marilyn vos Savant and the Monty Hall problem.

      Many people, including some with training in advanced mathematics, contradicted her smugly. But a simple computer program that models the situation can demonstrate her point.

      2/3 times, your first pick will be wrong. Every time that happens, the door Monty didn't open is the winner. So switching wins 2/3 times.

      http://marilynvossavant.com/game-show-problem/
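
      The simulation alluded to above takes only a few lines. This is one illustrative way to write it (the function name, trial count, and seed are arbitrary choices):

```python
import random

def monty_hall(switch, trials=100_000, seed=42):
    """Simulate the Monty Hall game; return the fraction of wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # Monty opens a door that is neither the player's pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

      Sticking wins only when the first pick was right (1/3 of the time); switching wins in every other case, so its win rate converges to 2/3.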

    12. Mitch Resnick, in his book Turtles, Termites, and Traffic Jams, details his experiences teaching gifted high school students about the dynamics of complex systems using artificial life models (Resnick 1994). He showed them how organized behavior could emerge when individuals responded only to local stimuli using simple rules, without the need for a central coordinating authority. Resnick reports that even after weeks spent demonstrating the principles of emergence, using computer simulations that the students programmed themselves, many students still refused to believe that what they were seeing could really work without central leadership.
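
      Resnick's point, that order can arise from purely local rules with no leader, can be demonstrated with something even simpler than his turtle programs. The sketch below is a minimal voter model on a ring (the population size, step count, and seed are arbitrary): each step, one agent merely copies a random neighbour, yet large uniform blocks emerge without any central coordination.

```python
import random

rng = random.Random(0)

N = 100
cells = [rng.choice("AB") for _ in range(N)]

def same_neighbour_fraction(cells):
    """Fraction of agents whose right-hand neighbour shares their type."""
    n = len(cells)
    return sum(cells[i] == cells[(i + 1) % n] for i in range(n)) / n

before = same_neighbour_fraction(cells)

# Voter-model dynamics: a randomly chosen agent copies a random adjacent
# neighbour. No agent sees more than its immediate neighbourhood.
for _ in range(50_000):
    i = rng.randrange(N)
    j = (i + rng.choice((-1, 1))) % N
    cells[i] = cells[j]

after = same_neighbour_fraction(cells)
print(before, after)  # clustering increases markedly
```

      Starting from a well-mixed population (roughly half of neighbours match), the local copying rule coarsens the ring into long single-type runs, which is exactly the kind of leaderless organization Resnick's students found so hard to believe.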
    1. Maybe digital history is at the midway point on the continuum between art and science

      I find the realm of data science, and particularly data visualisation, really interesting in this context. Visual renderings of complex data sets can bridge the divide between art and science, while also requiring a certain, new, literacy in order to decipher and decode them. At the same time, increasingly we see these aesthetics in museums and galleries, co-opted by 'art' proper: http://o-c-r.org/portfolio/listening-post/

      https://vimeo.com/59622009

  18. Dec 2015
    1. As part of EFF’s 25th Anniversary celebrations, we are releasing “Pwning Tomorrow: Stories from the Electronic Frontier,” an anthology of speculative fiction from more than 20 authors, including Bruce Sterling, Lauren Beukes, Cory Doctorow, and Charlie Jane Anders. To get the ebook, you can make an optional contribution to support EFF’s work, or you can download it at no cost. We're releasing the ebook under a Creative Commons Attribution-NonCommercial-No Derivatives 4.0 International license, which permits sharing among users. 
    1. We believe that openness and transparency are core values of science. For a long time, technological obstacles existed preventing transparency from being the norm. With the advent of the internet, however, these obstacles have largely disappeared. The promise of open research can finally be realized, but this will require a cultural change in science. The power to create that change lies in the peer-review process.

      We suggest that beginning January 1, 2017, reviewers make open practices a pre-condition for more comprehensive review. This is already in reviewers’ power; to drive the change, all that is needed is for reviewers to collectively agree that the time for change has come.

    1. Part of Galileo’s genius was to transfer the spirit of the Italian Renaissance in the plastic arts to the mathematical and observational ones.
    1. Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
    1. Similarly, in science there exists substantial expertise making brilliant connections between concepts, but it is being conveyed in silos of English prose known as journal articles. Every scientific journal article has a methods section, but it is almost impossible to read a methods section and subsequently repeat the experiment—the English language is inadequate to precisely and concisely convey what is being done.

      This issue of reproducible science is starting to be tackled, but I do believe formal methods and abstractions would go a long way toward making sure we adhere to these ideas. It is a bit like writing a program with global state vs. a functionally defined program, but even worse, since you may forget to write down one little thing you did to the global state.

    2. As mentioned above, category theory has branched out into certain areas of science as well. Baez and Dolan have shown its value in making sense of quantum physics, it is well established in computer science, and it has found proponents in several other fields as well. But to my mind, we are at the very beginning of its venture into scientific methodology. Category theory was invented as a bridge and it will continue to serve in that role.
    3. In 1980 Joachim Lambek showed that the types and programs used in computer science form a specific kind of category. This provided a new semantics for talking about programs, allowing people to investigate how programs combine and compose to create other programs, without caring about the specifics of implementation. Eugenio Moggi brought the category theoretic notion of monads into computer science to encapsulate ideas that up to that point were considered outside the realm of such theory.
    4. The paradigm shift brought on by Einstein’s theory of relativity brought on the realization that there is no single perspective from which to view the world. There is no background framework that we need to find; there are infinitely many different frameworks and perspectives, and the real power lies in being able to translate between them. It is in this historical context that category theory got its start.
    5. Agreement is the good stuff in science; it’s the high fives. But it is easy to think we’re in agreement, when really we’re not. Modeling our thoughts on heuristics and pictures may be convenient for quick travel down the road, but we’re liable to miss our turnoff at the first mile. The danger is in mistaking our convenient conceptualizations for what’s actually there. It is imperative that we have the ability at any time to ground out in reality.
    1. For me the interesting part of Stiegler’s work is the idea that it is technics that invents human beings; it is technics that constitutes the human.

      We make the tools. Then the tools make us.

      "Careful With That Axe, Eugene" -- Pink Floyd

  19. Nov 2015
    1. Representatives of the Bibliothèque nationale de France (BnF) announced their goal of bringing the average processing time for documents down to six weeks

      That was a long time, back in 2002! Where does the BnF stand today? In a way, this summary seems to predict the coming of data, the federation of catalogues, etc. Yet many obstacles seem to remain, despite all this time. And what if we could annotate the Web directly?

    1. “Many random number generators in use today are not very good. There is a tendency for people to avoid learning anything about such subroutines; quite often we find that some old method that is comparatively unsatisfactory has blindly been passed down from one programmer to another, and today’s users have no understanding of its limitations.”— Donald Knuth; The Art of Computer Programming, Volume 2.

      Mike Malone examines JavaScript's Math.random() in v8, argues that the algorithm used should be replaced, and suggests alternatives.
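
      For context: V8 later replaced its MWC1616 generator with xorshift128+, one of the stronger, still very fast alternatives of the kind Malone discusses. Below is a sketch of that algorithm in Python, using the 23/17/26 shift constants from Vigna's paper; the seed values are arbitrary, and this illustrates the algorithm rather than copying V8's actual source.

```python
MASK64 = (1 << 64) - 1  # Python ints are unbounded, so emulate uint64 wraparound

def xorshift128plus(seed0, seed1):
    """Yield an endless stream of 64-bit xorshift128+ outputs."""
    s0, s1 = seed0 & MASK64, seed1 & MASK64
    assert s0 or s1, "state must not be all zero"
    while True:
        x, y = s0, s1
        s0 = y
        x ^= (x << 23) & MASK64             # shift a
        s1 = x ^ y ^ (x >> 17) ^ (y >> 26)  # shifts b, c
        yield (s1 + y) & MASK64

# Arbitrary nonzero seeds; the stream is fully determined by them.
gen = xorshift128plus(0x9E3779B97F4A7C15, 0xBF58476D1CE4E5B9)
sample = [next(gen) for _ in range(3)]
print(sample)
```

      The same structure ports directly to languages with native 64-bit integers, where the explicit masking disappears into ordinary unsigned wraparound.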

    1. TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.

      https://github.com/rhiever/tpot TPOT (Tree-based Pipeline Optimization Tool) Built on numpy, scipy, pandas, scikit-learn, and deap.

    1. Max Planck, when asked how often science changes and adopts new ideas, said, “with every funeral.” And for better or worse, they happen pretty regularly.
    2. Who among us could predict anything five years into the future? What kind of science would science be if it could make reliable predictions about stuff five years out? Science is about what we don’t know yet and how we’re going to get to know it.

      GREAT!

    3. I know how crazy that sounds, but it is of course exactly the right way to proceed. If you are reviewing a grant, you should be interested in how it will fail—usefully or just by not succeeding. Not succeeding is not the same as failing. Not in science.

      Yep. Every day we learn new ways that things don't work, because of this and that... It's sad that this is not "formally" discussed as part of the process!

    4. Too often you fail until you succeed, and then you are expected to stop failing. Once you have succeeded you supposedly know something that helps you to avoid further failure. But that is not the way of science. Success can lead only to more failure. The success, when it comes, has to be tested rigorously and then it has to be considered for what it doesn’t tell us, not just what it does tell us. It has to be used to get to the next stop in our ignorance—it has to be challenged until it fails, challenged so that it fails.

      A great interpretation on the way of science.

    1. According to Mark T. Mitchell, professor of political science at Patrick Henry College in Virginia: Gratitude is born of humility, for it acknowledges the giftedness of the creation and the benevolence of the Creator. This recognition gives birth to acts marked by attention and responsibility. Ingratitude, on the other hand, is marked by hubris, which denies the gift, and this always leads to inattention, irresponsibility, and abuse.
  20. Oct 2015
    1. Davidson shocked his professors by taking off for India to explore meditation practice and Buddhist teachings. After three months there and in Sri Lanka, he came back convinced he would do meditation research. He was quickly disabused of this notion by his professors, who let him know that if he had any hope of a career in science, he’d better stow the meditation and follow a more conventional path of research. He became a closet meditator and an affective neuroscientist—a deep student of the emotions.

      This seems to be the theme for scientific pioneers in recent decades.

    1. as Brent Tully (known for his discovery of supergalaxies) observed: “It’s disturbing  that there is a new theory every time there is a new observation.”

      "When the facts change, I change my mind. What do you do, sir?"

  21. Sep 2015
    1. when the built environment ceases to accommodate behavioral requirements, people seek to correct the problem through construction, renovation, or moving to a different building

      The Stauffer Science Building transitioning into the new and improved Science and Learning Center is an example of this idea on Whittier College's campus.

  22. Aug 2015
    1. However, if an open access version of a text is available, this must always be treated as the primary text. Here the commercial version of the text becomes the secondary version and it should always be cited second and in a manner that makes this completely clear. For instance, after the primary reference to the full text, you could write: ‘Also available as: ….’

      Would be interesting to write a tool that could take a paper as input and replace all citations with references to freely available versions

    1. Here, on page 2, a study on infrasound conducted by Mr. Richard James is referenced. Mr. Richard James references Nina Pierpont's "Wind Turbine Syndrome" in articles he has written, namely "Wind Turbine Infra and Low-Frequency Sound: Warning Signs That Were Not Heard," see this link. Wind turbine syndrome is not a real medical syndrome, see this link and this link. In fact, Mr. Richard James and his methodologies for measuring sound have been discredited in a Michigan court; see Rick James – A Technical Discussion of His Deposition and Testimony in the Spencer / Kobetz Lawsuit.

      On page 7, we learn that Mr. Richard James trained a field technician to set up sound measuring equipment at a dozen homes within the Shirley Wind Farm. It's unclear if Mr. Richard James was present to ensure that setup and staging of the equipment followed professional protocol. The trained field technician is stated to live within the Shirley Wind Farm. Mr. Richard James also collected weather data using a website called wonderground.com [sic]. Note that the field technician didn't record weather data via actual observation while domiciled within the Shirley Wind Farm. Also to consider is the likelihood of gaps in the collection of data: "On many occasions, there was an observer recording the events of the turbines..." This sounds fuzzy and casts doubt on the reliability of the collected data.

      On page 13, the Brown County Board of Health declares the Shirley Wind Farm a human health hazard.

      As a result of Brown County's declaration, the Governor of Wisconsin will spend $250,000 to study health effects of wind power.

  23. Jun 2015
    1. Laboratory analysis of those samples found compounds that are toxic to humans, including acetone and methylene chloride — powerful industrial solvents — along with oil.

      In what concentrations? "Toxic" is pretty meaningless.

    1. warns us against equating changes in scientific understanding of a sense such as smell, what is called “osmology,” with experiential transformations. Attending to the history of smell, he tells us, is also valuable in undermining simple binary oppositions between boundaried individuals and their englobing environment, the basis of Cartesian subject/object dualisms. Instead, it helps situate us in a more fluid, immersive context, where such stark oppositions are understood as themselves contingent rather than necessary

      This reminds me of our Monday discussion of Spinoza re: how expanded "scientific understanding" changes (or doesn't change) sensory experiences.

    1. There’s a scale for how to think about science. On one end there’s an attempt to solve deep, fundamental questions of nature; on the other is rote uninteresting procedure. There’s also a scale for creating products. On one end you find ambitious, important breakthroughs; on the other small, iterative updates. Plot those two things next to each other and you get a simple chart with four sections. Important science but no immediate practical use? That’s pure basic research — think Niels Bohr and his investigations into the nature of the atom. Not much science but huge practical implications? That’s pure applied research — think Thomas Edison grinding through thousands of materials before he lit upon the tungsten filament for the lightbulb.
  24. May 2015
  25. www.jstor.org.mutex.gmu.edu www.jstor.org.mutex.gmu.edu
    1. Annual Reviews is collaborating with JSTOR to digitize, preserve and extend access to Annual Review of Anthropology. http://www.jstor.org The Globalization of Pentecostal and Charismatic Christianity

      Finally an open-source, open access option for sharing research!

  26. Apr 2015
    1. Wouldn’t it be useful, both to the scientific community or the wider world, to increase the publication of negative results?
  27. Jan 2015
    1. Unlike most popularizers (at least of mine), this post didn’t describe a completed piece of research. It served just as an opportunity to riff about an idea I found interesting. But blogging made me realize this idea could be more interesting than I had realized. A “motivated view” of empathy could, for instance, help in understanding illnesses like autism and psychopathy, or thinking up techniques to “grow” empathy. I figured it’d be worth sinking some more effort into it, and wrote a long form academic article on the subject. After much work and a long (but productive) peer review process, that article was published just last week! More importantly, the ideas in that piece—taken over by my students—now drive much new work in my lab that might not have happened otherwise.

      A very interesting point, especially considering the fact that in my own research of science bloggers (#MySciBlog research at LSU), this seems to be a common approach to blogging by scientists/scholars. A scientist/scholarly blogger often starts a blog post with an idea, nugget or concept that they are curious about or interested in learning more about. Many 'intellectuals' also say that blogging helps them collect and clarify their thoughts on a topic or question. The natural result for those engaged in scholarship is for some blogged topics/questions to blossom into larger and more complex ideas and even research questions. I know for myself, several of my blog posts - and especially some of my freelance science journalism work - have prompted me to pursue complementary research in my role as a science communication PhD student.

      As an added bonus of being public on the web, the 'blogged' content can elicit feedback from readers and scholars that further pushes the blogger's own ideas and scholarship in new directions.

  28. Feb 2014
    1. developed a specific coding scheme for volunteers to follow. As a supplement to that scheme, the following tutorial can be used to gain more facility with identifying article structure and und

      this is cool

  29. Sep 2013
    1. Hence the man who makes a good guess at truth is likely to make a good guess at probabilities

      At first, I didn't like this quote, then I thought back to good ol' Oakley's stats class. We make scientific theories based on what idea is most likely to happen (we reject/do not reject the null hypothesis, but we do not say we accept the null hypothesis). Science: putting me in my place since I had a place to be put.

    1. "science" which can teach us to do under all circumstances the things which will insure our happiness and success.

      Happiness has far too much variability to be considered scientific, in my opinion. It's all a matter of opinion and personal experience. Hence my disagreement with his "judgment not knowledge" bit.