- Jul 2024
-
opencollective.com
-
Indy Learning Commons
for - Indyweb information page - Open Collective Indyweb
from - Paper Review - Participatory Systems Mapping - https://hyp.is/FSRodE0QEe-Z26cIILK6sw/journals.sagepub.com/doi/10.1177/1356389020980493
-
- Jun 2024
-
www.youtube.com
-
for - progress trap - AI - threat of superintelligence - interview - Leopold Aschenbrenner - former OpenAI employee - from - YouTube - review of Leopold Aschenbrenner's essay on Situational Awareness - https://hyp.is/ofu1EDC3Ee-YHqOyRrKvKg/docdrop.org/video/om5KAKSSpNg/
-
- Jun 2023
-
forum.zettelkasten.de
-
Todd Henry in his book The Accidental Creative: How to be Brilliant at a Moment's Notice (Portfolio/Penguin, 2011) uses the acronym FRESH for the elements of "creative rhythm": Focus, Relationships, Energy, Stimuli, Hours. His advice about note taking comes in a small section of the chapter on Stimuli. He recommends using notebooks with indexes, including a Stimuli index. He says, "Whenever you come across stimuli that you think would make good candidates for your Stimulus Queue, record them in the index in the front of your notebook." And "Without regular review, the practice of note taking is fairly useless." And "Over time you will begin to see patterns in your thoughts and preferences, and will likely gain at least a few ideas each week that otherwise would have been overlooked." Since Todd describes essentially the same effect as @Will but without mentioning a ZK, this "magic" or "power" seems to be a general feature of reviewing ideas or stimuli for creative ideation, not specific to a ZK. (@Will acknowledged this when he said, "Using the ZK method is one way of formalizing the continued review of ideas", not the only way.)
via Andy
Andy indicates that this review functionality isn't specific to zettelkasten, but it still sits in the framework of note taking. Given this, are there really "other" ways available?
-
- Oct 2022
-
Local file
-
- Apr 2022
-
asapbio.org
-
Considering campaigns to post journal reviews on preprints. (n.d.). ASAPbio. Retrieved April 29, 2022, from https://asapbio.org/considering-campaigns-to-post-journal-reviews-on-preprints
-
-
-
doi: https://doi.org/10.1038/d41586-021-02346-4
https://www.nature.com/articles/d41586-021-02346-4
Oddly, this article doesn't cover academia.edu, but it does include ResearchGate, which has a content-sharing partnership with the publisher SpringerNature.
Matthews, D. (2021). Drowning in the literature? These smart software tools can help. Nature, 597(7874), 141–142. https://doi.org/10.1038/d41586-021-02346-4
-
Open Knowledge Maps, meanwhile, is built on top of the open-source Bielefeld Academic Search Engine, which boasts more than 270 million documents, including preprints, and is curated to remove spam.
Open Knowledge Maps uses the open-source Bielefeld Academic Search Engine and in 2021 indicated that it covers 270 million documents, including preprints. Open Knowledge Maps also curates its index to remove spam.
How much spam is included in the journal article space? I've heard of incredibly low quality and poorly edited journals, so filtering those out may be fairly easy to do, but are there smaller levels of individual spam below that?
-
Another visual-mapping tool is Open Knowledge Maps, a service offered by a Vienna-based not-for-profit organization of the same name. It was founded in 2015 by Peter Kraker, a former scholarly-communication researcher at Graz University of Technology in Austria.
https://openknowledgemaps.org/
Open Knowledge Maps is a visual literature search tool that is based on keywords rather than on a paper's title, author, or DOI. The service was founded in 2015 by Peter Kraker, a former scholarly communication researcher at Graz University of Technology.
Tags
- references
- Peter Kraker
- information overload
- Connected Papers
- topical headings
- preprints
- tools for thought
- tools
- disclosures
- journalism
- Bielefeld Academic Search Engine
- bias
- visual thinking
- taxonomies
- literature review
- Open Knowledge Maps
- Vienna
- ResearchRabbit
- ResearchGate
- search engines
- 2015
- literature search
- spam
- read
- apps
- FOMO
Annotators
URL
-
- Dec 2021
-
www.nature.com
-
Replicating scientific results is tough—But essential. (2021). Nature, 600(7889), 359–360. https://doi.org/10.1038/d41586-021-03736-4
-
-
twitter.com
-
AIMOS. (2021, November 30). How can we connect #metascience to established #science fields? Find out at this afternoon’s session at #aimos2021 Remco Heesen @fallonmody Felipe Romeo will discuss. Come join us. #OpenScience #OpenData #reproducibility https://t.co/dEW2MkGNpx [Tweet]. @aimos_inc. https://twitter.com/aimos_inc/status/1465485732206850054
-
- Jul 2021
-
psyarxiv.com
-
Yesilada, M., Holford, D. L., Wulf, M., Hahn, U., Lewandowsky, S., Herzog, S., Radosevic, M., Stuchlý, E., Taylor, K., Ye, S., Saxena, G., & El-Halaby, G. (2021). Who, What, Where: Tracking the development of COVID-19 related PsyArXiv preprints. PsyArXiv. https://doi.org/10.31234/osf.io/evmgs
-
- Jun 2021
-
-
Evans, T. R., Branney, P., Clements, A., & Hatton, E. (2021). Preregistration of Applied Research for Evidence-Based Practice [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/snj2d
-
- May 2021
-
twitter.com
-
ReconfigBehSci on Twitter: ‘Great presentation by Cooper Smout on @proFOK for trying to overcome the collective action problem of open science #scibeh2020 https://t.co/Gsr66BRGcJ’ / Twitter. (n.d.). Retrieved 4 March 2021, from https://twitter.com/SciBeh/status/1325749613731713024
-
- Apr 2021
-
-
This variant achieves the greatest degree of openness when there is transparency about the authors, the reviewers, and the reviews themselves. Open review procedures also include the option of publishing the reviews afterwards as accompanying texts to a publication.
In my view, full transparency would only be achieved if rejected submissions were also posted online, together with the reviews that led to their rejection. To prevent opinion or citation cartels (or at least make them obvious), that seems to me even more important than naming the reviewers.
Tags
Annotators
URL
-
- Mar 2021
-
www.chevtek.io
-
He goes on to talk about third-party problems: you're never guaranteed that code is written correctly, and even if it is, you don't know whether it's the optimal solution.
-
-
news.ycombinator.com
-
Here is my set of best practices.

I review libraries before adding them to my project. This involves skimming the code or reading it in its entirety if short, skimming the list of its dependencies, and making some quality judgements on liveliness, reliability, and maintainability in case I need to fix things myself. Note that length isn't a factor on its own, but may figure into some of these other estimates. I have on occasion pasted short modules directly into my code because I didn't think their recursive dependencies were justified.

I then pin the library version and all of its dependencies with npm-shrinkwrap.

Periodically, or when I need specific changes, I use npm-check to review updates. Here, I actually do look at all the changes since my pinned version, through a combination of change and commit logs. I make the call on whether the fixes and improvements outweigh the risk of updating; usually the changes are trivial and the answer is yes, so I update, shrinkwrap, skim the diff, done.

I prefer not to pull in dependencies at deploy time, since I don't need the headache of GitHub or npm being down when I need to deploy, and production machines may not have external internet access, let alone toolchains for compiling binary modules. npm-pack followed by npm-install of the tarball is your friend here, and gets you pretty close to 100% reproducible deploys and rollbacks.

This list intentionally has lots of judgement calls and few absolute rules. I don't follow all of them for all of my projects, but it is what I would consider a reasonable process for things that matter.
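The pin-and-pack part of this workflow can be sketched as a shell session. This is only my reading of the comment, not the commenter's actual script: the package name left-pad is purely illustrative, and the `npm shrinkwrap` step reflects the npm CLI of that era (modern npm pins via package-lock.json by default). The sketch deliberately does nothing when npm or the registry is unreachable.

```shell
# Hedged sketch of the review-pin-pack workflow described above.
# "left-pad" is an illustrative package name, not from the comment.
if command -v npm >/dev/null 2>&1 && npm ping >/dev/null 2>&1; then
  npm install left-pad          # add the dependency after reviewing its source
  npm shrinkwrap                # pin it and every transitive dependency
  npm outdated || true          # later: see which pinned versions have updates
  npm pack left-pad             # build a tarball of the package
  npm install ./left-pad-*.tgz  # reproducible install from the local tarball
fi
done_msg="review-pin-pack sketch finished"
echo "$done_msg"
```

Installing from the packed tarball is what removes the registry (and the network) from the deploy path, which is the commenter's stated goal.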
-
-
aimos.community
-
Conference Details. (n.d.). AIMOS. Retrieved 5 March 2021, from https://aimos.community/2020-details
-
- Feb 2021
-
twitter.com
-
Dr Elaine Toomey on Twitter. (n.d.). Twitter. Retrieved 24 February 2021, from https://twitter.com/ElaineToomey1/status/1357343820417933316
-
- Oct 2020
-
-
But maybe this PR should still be merged in the meantime, until he finds time for that?
-
Sorry this sat for so long!
Tags
- iterative process
- waiting for maintainers to review / merge pull request / give feedback
- big change/rewrite vs. continuous improvements / smaller refactorings
- not a blocker (issue dependency)
- pull request stalled
- open-source software: progress seems slow
- don't let big plans/goals get in the way of integrating/releasing smaller changes/improvements
Annotators
URL
-
- Sep 2020
-
outbreaksci.prereview.org
-
Outbreak Science Rapid PREreview • Dashboard. (n.d.). Retrieved September 11, 2020, from https://outbreaksci.prereview.org/dashboard?q=COVID-19&q=Coronavirus&q=SARS-CoV-2
-
-
rapidreviewscovid19.mitpress.mit.edu
-
Rapid Reviews COVID-19. (n.d.). Rapid Reviews COVID-19. Retrieved September 11, 2020, from https://rapidreviewscovid19.mitpress.mit.edu/
-
- Aug 2020
-
openreview.net
-
About | OpenReview. (n.d.). Retrieved May 30, 2020, from https://openreview.net/about
-
-
sci-hub.tw
-
Schalkwyk, M. C. I. van, Hird, T. R., Maani, N., Petticrew, M., & Gilmore, A. B. (2020). The perils of preprints. BMJ, 370. https://doi.org/10.1136/bmj.m3111
-
-
www.biorxiv.org
-
Besançon, L., Peiffer-Smadja, N., Segalas, C., Jiang, H., Masuzzo, P., Smout, C., Deforet, M., & Leyrat, C. (2020). Open Science Saves Lives: Lessons from the COVID-19 Pandemic. BioRxiv, 2020.08.13.249847. https://doi.org/10.1101/2020.08.13.249847
-
-
ropensci.org
-
‘OSF: A Project Management Service Built for Research - ROpenSci - Open Tools for Open Science’. Accessed 10 August 2020. https://ropensci.org/blog/2020/08/04/osf/.
-
-
www.fastcompany.com
-
Taraborelli, D. (2020, August 5). How the COVID-19 crisis has prompted a revolution in scientific publishing. Fast Company. https://www.fastcompany.com/90537072/how-the-covid-19-crisis-has-prompted-a-revolution-in-scientific-publishing
-
- Jun 2020
-
-
Knöchelmann, M. (2020, February 25) Open Humanities: Why Open Science in the Humanities is not Enough. Impact of Social Sciences. https://blogs.lse.ac.uk/impactofsocialsciences/2020/02/25/open-humanities-why-open-science-in-the-humanities-is-not-enough/
Tags
- unity
- open humanities
- science
- open science
- cooperation
- research
- social challenge
- scholarship
- peer review
- technology
- is:blog
- lang:en
Annotators
URL
-
-
onlinelibrary.wiley.com
-
British Journal of Social Psychology. (n.d.). Wiley Online Library. https://doi.org/10.1111/(ISSN)2044-8309
-
-
-
Heathers, J. (2020, May 21). Preprints Aren’t The Problem—WE Are The Problem. Medium. https://medium.com/@jamesheathers/preprints-arent-the-problem-we-are-the-problem-75d29a317625
-
- May 2020
-
psyarxiv.com
-
Ikeda, K., Yamada, Y., & Takahashi, K. (2020). Post-Publication Peer Review for Real [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/sp3j5
-
-
neurochambers.blogspot.com
-
Chambers, C. (2020, March 16). CALLING ALL SCIENTISTS: Rapid evaluation of COVID19-related Registered Reports at Royal Society Open Science
-
- Apr 2020
-
psyarxiv.com
-
Beitner, J., Brod, G., Gagl, B., Kraft, D., & Schultze, M. (2020, April 23). Offene Wissenschaft in der Zeit von Covid-19 – Eine Blaupause für die psychologische Forschung?. https://doi.org/10.31234/osf.io/sh8xg
-
-
docs.google.com
- Feb 2020
-
riojournal.com
-
Keywords
I would include "open data" and "data sharing" as keywords too.
Tags
Annotators
URL
-
- Dec 2019
-
academic.oup.com
-
Supplementary data
Of special interest is that a reviewer openly discussed on his blog his general thoughts about the state of the art in the field, based on what he had been looking at in the paper. The blog post came out just after he completed his first-round review, and before an editorial decision was made.
http://ivory.idyll.org/blog/thoughts-on-assemblathon-2.html
This spawned additional blog posts that broadened the discussion among the community, again looking toward the future. See: https://www.homolog.us/blogs/genome/2013/02/23/titus-browns-thoughts-on-the-assemblathon-2-paper/
And
Further, the authors, now in the process of revising their manuscript, joined in on Twitter, reaching out to the community at large for suggestions on revisions and additional thoughts. Their paper had been posted on arXiv, allowing for this type of commenting and author/reader interaction. See: https://arxiv.org/abs/1301.5406
The Assemblathon.org site collected and presented all the information on the discussion surrounding this article. https://assemblathon.org/page/2
A blog post by the editors followed all this, describing this ultra-open peer review and highlighting how these discussions during the peer review process ended up being a very forward-looking conversation about the state of the art, based on what the reviewers were seeing in this paper, and about the directions the community should now focus on. This broader open discussion and its very positive nature could only happen in an open, transparent review process. See: https://blogs.biomedcentral.com/bmcblog/2013/07/23/ultra-open-peer-review/
-
- Oct 2019
-
riojournal.com
-
A Million Brains in the Cloud
Arno Klein and Satrajit S. Ghosh published this research idea in 2016 and opened it to review. In fact, you could review their abstract directly in RIO, but for the MOOC activity "open peer review" we want you to read and annotate their proposal using this Hypothes.is layer. You can add annotations by simply highlighting a section that you want to comment on, or add a page note and say in a few sentences what you think of their ideas. You can also reply to comments that your peers have already made. Please sign up to Hypothes.is and join the conversation!
Tags
Annotators
URL
-
- Oct 2018
-
fossilsandshit.com
-
open peer review model
-
- Mar 2018
-
pdxscholar.library.pdx.edu
-
Keeping Up with...Open Peer Review
-
- Jun 2017
-
www.nature.com
-
protected platform whereby many expert reviewers could read and comment on submissions, as well as on fellow reviewers’ comments
Conduct pre-peer review during manuscript development on a web platform. That is what is happening at Therapoid.net.
-
intelligent crowd reviewing
Crowdsourced review? Pre-peer review as a precursor to a preprint server.
-
- Mar 2017
-
-
Eve Marder, a neurobiologist at Brandeis University and a deputy editor at eLife, says that around one third of reviewers under her purview sign their reviews.
Perhaps these could routinely become page notes?
-
If Kriegeskorte is invited by a journal to write a review, first he decides whether he’s interested enough to review it. If so, he checks whether there’s a preprint available—basically a final draft of the manuscript posted publicly online on one of several preprint servers like arxiv and biorxiv. This is crucial. Writing about a manuscript that he’s received in confidence from a journal editor would break confidentiality—talking about a paper before the authors are ready. If there’s a preprint, great. He reviews the paper, posts to his blog, and also sends the review to the journal editor.
Interesting workflow and within his rights.
-
The tweet linked to the blog of a neuroscientist named Niko Kriegeskorte, a cognitive neuroscientist at the Medical Research Council in the UK who, since December 2015, has performed all of his peer review openly.
Interesting...
-
- Jan 2017
-
www.insidehighered.com
-
mcpress.media-commons.org
- Oct 2016
-
www.timeshighereducation.com
-
Examples of bad peer review and why it is damaging to researchers
-
- Feb 2016
-
www.readability.com
-
As I have mentioned in previous posts, several platforms have appeared recently that could take on this role of third-party reviewer. I could imagine at least: libreapp.org, peerevaluation.org, pubpeer.com, and publons.com. Pandelis Perakakis mentioned several others as well: http://thomas.arildsen.org/2013/08/01/open-review-of-scientific-literature/comment-page-1/#comment-9.
-
- Jan 2016
-
www.readability.com
-
Below I list a few advantages and drawbacks of anonymity, where I assume that a drawback of anonymous review is an advantage of identified review and vice versa.

Drawbacks
- Reviewers do not get credit for their work. They cannot, for example, reference particular reviews in their CVs as they can with publications.
- It is relatively “easy” for a reviewer to provide unnecessarily blunt or harsh critique.
- It is difficult to guess if the reviewer has any conflict of interest with the authors by being, for example, a competing researcher interested in stalling the paper’s publication.

Advantages
- Reviewers do not have to fear “payback” for an unfavourable review that is perceived as unfair by the authors of the work.
- Some reviewers (perhaps especially “high-profile” senior faculty members) might find it difficult to find the time to provide as thorough a review as they would ideally like to, yet would still like to contribute and can perhaps provide valuable experienced insight. They can do so without putting their reputation on the line.
-
-
www.readability.com
-
With most journals, if I submit a paper that is rejected, that information is private and I can re-submit elsewhere. In open review, with a negative review one can publicly lose face as well as lose the possibility of re-submitting the paper. Won’t this be a significant disincentive to submit?

This is precisely what we are trying to change. Currently, scientists can submit a paper numerous times, receive numerous negative reviews and ultimately publish their paper somewhere else after having “passed” peer review. If scientists prefer this system then science is in a dangerous place. By choosing this model, we as scientists are basically saying we prefer nice neat stories that no one will criticize. This is silly though because science, more often than not, is not neat and perfect. The Winnower believes that transparency in publishing is of the utmost importance. Going from a closed anonymous system to an open system will be hard for many scientists but I believe that it is the right thing to do if we care about the truth.
-
PLOS Labs is working on establishing structured reviews and we have talked with them about this.
-
It should be noted that papers will always be open for review so that a paper can accumulate reviews throughout its lifetime.
-
- Dec 2015
-
opennessinitiative.org
-
We believe that openness and transparency are core values of science. For a long time, technological obstacles existed preventing transparency from being the norm. With the advent of the internet, however, these obstacles have largely disappeared. The promise of open research can finally be realized, but this will require a cultural change in science. The power to create that change lies in the peer-review process.
We suggest that beginning January 1, 2017, reviewers make open practices a pre-condition for more comprehensive review. This is already in reviewers’ power; to drive the change, all that is needed is for reviewers to collectively agree that the time for change has come.
-
- May 2015
-
blogs.plos.org
-
Author and peer reviewer anonymity haven’t been shown to have an overall benefit, and they may cause harm. Part of the potential for harm is if journals act as though it’s a sufficiently effective mechanism to prevent bias.
-
Peer reviewers were more likely to substantiate the points they made (9, 14, 16, 17) when they knew they would be named. They were especially likely to provide extra substantiation if they were recommending an article be rejected, and they knew their report would be published if the article was accepted anyway (9, 15).
-