  1. Feb 2016
    1. All of the above might be almost tolerable if DVCSes were easier to use than traditional SCMs, but they aren’t.

      Fortunately, that is not the case with Fossil ;-)

    1. The Transmeta architecture assumes from day one that any business plan that calls for making a computer that doesn't run Excel is just not going anywhere.

      Except for the virtual computers, called objects, which must be compatible with their data but should not execute them directly. We already did this with this data visualization example.

    2. Conclusion: if you're in a market with a chicken and egg problem, you better have a backwards-compatibility answer that dissolves the problem, or it's going to take you a loooong time to get going (like, forever).

      The other possibility is to enter emerging "markets" where, while there must be compatibility with the past, the main concern is exploring/prototyping the future.

    3. Jon Ross, who wrote the original version of SimCity for Windows 3.x, told me that he accidentally left a bug in SimCity where he read memory that he had just freed. Yep. It worked fine on Windows 3.x, because the memory never went anywhere. Here's the amazing part: On beta versions of Windows 95, SimCity wasn't working in testing. Microsoft tracked down the bug and added specific code to Windows 95 that looks for SimCity. If it finds SimCity running, it runs the memory allocator in a special mode that doesn't free memory right away. That's the kind of obsession with backward compatibility that made people willing to upgrade to Windows 95.

      But this obsession can also be harmful, implying carrying the burden of the past into systems that should not have to bear it. Cf. Pharo vs Squeak.

    4. Feature two: old DOS programs assumed they had the run of the chip. As a result, they didn't play well together. But the Intel 80386 had the ability to create "virtual" PCs, each of them acting like a complete 8086, so old PC programs could pretend like they had the computer to themselves, even while other programs were running and, themselves, pretending they had the whole computer to themselves.

      The idea of virtual computers was also one of Alan Kay's earliest ideas, but with objects instead of programs, which made it more modular and easier to deconstruct.

    5. That bears mentioning again. WordStar was ported to DOS by changing one single byte in the code.  Let that sink in.

      This no longer matters today. With current hardware capabilities, a system based on grafoscopio could run on Android, Windows or Unix (with its Mac and GNU/Linux variants), or on a device like the Raspberry Pi, without changing a single bit. The difficulty lies in mobilizing a new metaphor for writing and a new way of thinking about computing.

    1. As I have mentioned in previous posts, several platforms have appeared recently that could take on this role of third-party reviewer. I could imagine at least: libreapp.org, peerevaluation.org, pubpeer.com, and publons.com. Pandelis Perakakis mentioned several others as well: http://thomas.arildsen.org/2013/08/01/open-review-of-scientific-literature/comment-page-1/#comment-9.
    2. I think that such third-party review companies should exist for the sole purpose of providing competent, thorough, trust-worthy review of scientific papers and that a focus on profit might divert their attention from this goal. For example, unreasonably high profit margins are one of the reasons that large publishers such as Elsevier are currently being criticised.
    1. Now, pretty much everyone hosts their open source projects on GitHub, including Google, Facebook, Twitter, and even Microsoft—once the bete noire of open source software. In recent months, as Microsoft open sourced some of its most important code, it used GitHub rather than its own open source site, CodePlex. S. “Soma” Somasegar—the 25-year Microsoft veteran who oversees the company’s vast collection of tools for software developers—says CodePlex will continue to operate, as will other repositories like Sourceforge and BitBucket. “We want to make sure it continues being there, as a choice,” he tells WIRED. But he sees GitHub as the only place for a project like Microsoft .NET. “We want to meet developers where they are,” he says. “The open source community, for the most part, is on GitHub.”
    2. Somasegar estimates that about 20 percent of Microsoft’s customers now use Git in some way.
    3. In short, open source has arrived. And, ultimately, that means we can build and shape and improve our world far more quickly than before.

      But this "improving" is not equal for all. The perspective of the commons seems marginal here. The code commons is used by private companies owned by the few instead of by cooperatives owned by the workers.

    4. The irony of GitHub’s success, however, is the open source world has returned to a central repository for all its free code. But this time, DiBona—like most other coders—is rather pleased that everything is in one place. Having one central location allows people to collaborate more easily on, well, almost anything. And because of the unique way GitHub is designed, the eggs-in-the-same-basket issue isn’t as pressing as it was with SourceForge. “GitHub matters a lot, but it’s not like you’re stuck there,” DiBona says. While keeping all code in one place, you see, GitHub also keeps it in every place. The paradox shows the beauty of open source software—and why it’s so important to the future of technology.

      Well, it depends on how much metadata you can extract from GitHub. As with so much other social software, the value is not so much in the data (photos, code, tweets) as in the metadata (comments, tags, social graphs, issues). So, while having your data with you, on your phone or laptop, is worthwhile, it would be nice to know how much metadata these infrastructures generate and how it is distributed (or not).

    1. Instead of, for example, 100 large open source projects with active communities, we’ve got 10,000 tiny repos with redundant functionality. One of open source’s biggest advantages was resilience. A public project with many contributors was theoretically stronger than a private project locked inside a company with fewer contributors. Now, the widespread adoption of open source threatens to create just the opposite.

      Maybe this threat can be overcome by simpler infrastructure that can be understood by a single person. That was one of the original goals of Smalltalk, and I think that its current incarnations (in Pharo or Cuis) address this personal empowerment better than the current Unix/OS tradition, which informs most of our present experience with technology. Fossil instead of Git is another example of this preference for simplicity. So 1000 smaller agile communities could be possible instead of 10 big bureaucratic ones, keeping the fork an important right without a lot of balkanization. The open nature of these agile communities is different from that of private projects locked inside a company.

      I have experienced for myself examples of these different kinds of community configurations and different thresholds for participation, in the cases of Debian, Arch Linux, Leo Editor and Pharo (to cite a few), and that's why I think the idea of small, open, agile communities could work, even with the inherent pains of addressing complex projects/problems inside them.

    2. What makes this more difficult to resolve is that GitHub is — surprise! — not open source. GitHub is closed source, meaning that only GitHub staff is able to make improvements to its platform. The irony of using a proprietary tool to manage open source projects, much like BitKeeper and Linux, has not been lost on everyone. Some developers refuse to put their code on GitHub to retain their independence. Linus Torvalds, the creator of Git himself, refuses to accept pull requests (code changes) from GitHub.

      That's why I have advocated tools like Fossil to other members of our hackerspace and to other communities like Pharo, and decentralized options to Mozilla Science (without much acceptance in those communities, or even any reaction from Mozilla Science).

      Going with the de facto and popular defaults (without caring about freedom or diversity) seems to be the position of open source/science communities and even digital activists, which contrasts sharply with their discourse on the building of tools/data/politics, but seems invisible in the building of community/metadata/metapolitics.

      The kind of disempowerment these communities are trying to fight is the one they're suffering with GitHub, as shown here: https://hypothes.is/a/AVKjLddpvTW_3w8LyrU-

      So there is a tension between the convenience and wider awareness/participation of centralized proprietary platforms, which these open/activist communities want, and a growth in the (over)use of the commons that is bigger than the growth of its sustainability/ethos, as shown here: https://hypothes.is/a/AVKjfsTRvTW_3w8LyrqI . Sacrificing growth/convenience by choosing simpler and more coherent infrastructures aligned with the commons and its ethos seems a sensible approach, then.

    3. But it comes with new challenges: how to actually manage demand and workflows, how to encourage contributions, and how to build antifragile ecosystems.

      This is a key issue. My research is about the relationship of mutual modification between communities and digital artifacts to bootstrap empowering dynamics.

      The question regarding participation could be addressed by making an infrastructural transposition (putting what is in the background in the foreground, as suggested by Susan Leigh Star). This has been, in a sense, the approach of this article, making visible what is behind infrastructures like LAMP, GitHub or StackExchange, and it has also been the approach of my comments. Of course there are things beyond infrastructure, but the way infrastructures determine communities, and the changes that communities can or cannot make to them, could be a key to antifragility, one that is traversed by critical pedagogy, community and cognition. How we can change the artifacts that change us is a question related to antifragility. This is the question of my research (in the context of a Global South hackerspace), but I never connected it with antifragility until reading this text.

    4. Technically, if you use someone else’s code revision from Stack Overflow, you would have to add a comment in your code that attributes the code to them. And then that person’s code would potentially have a different license from the rest of your code. Your average hobbyist developer might not care about the rules, but many companies forbid employees from using Stack Overflow, partly for this reason. As we enter a post open source world, Stack Overflow has explored transitioning to a more permissive MIT license, but the conversation hasn’t been easy. Questions like what happens to legacy code, and dual licensing for code and non-code contributions, have generated confusion and strong reactions.
    5. As a result, while plenty of amateur developers use open source projects, those people aren’t interested in, or capable of, seriously giving back. They might be able to contribute a minor bug or fix, but the heavy lifting is still left to the veterans.

      I'm starting to feel this even with my new project, grafoscopio. The burden of development is now on core functionality that will make the project easier to use and adapt for newcomers, but there is still a question about how many of them will care about, or be enabled to work on, improving this core functionality or helping in some way with its maintenance.

    6. Experienced maintainers have felt the burden. Today, open source looks less like a two-way street, and more like free products that nobody pays for, but that still require serious hours to maintain. This is not so different from what happened to newspapers or music, except that nearly all the world’s software is riding on open source.
    7. There is also concern around using a centralized platform to manage millions of repositories: GitHub has faced several outages in recent years, including a DDoS attack last year and a network disruption just yesterday. A disruption in just one website — GitHub — affects many more. Earlier this month, a group of developers wrote an open letter to GitHub, expressing their frustration with the lack of tools to manage an ever-increasing work load, and requesting that GitHub make important changes to its product.
    8. The free software generation had to think about licenses because they were taking a stance on what they were not (that is, proprietary software). The GitHub generation takes this right for granted. They don’t care about permissions. They default to open. Open source is so popular today that we don’t think of it as exceptional anymore. We’re so open source, that maybe we’re post open source: But not all is groovy in the land of post open source.
    9. In 2011, there were 2 million repositories on GitHub. Today, there are over 29 million. GitHub’s Brian Doll noted that the first million repositories took nearly 4 years to create; getting from nine to ten million took just 48 days.
    10. Now developers had all the tools they needed. In the 1980s, they had to use a scattered combination of IRC, mailing lists, forums, and version control systems. By 2010, they had Git for version control, GitHub to collaborate, and Stack Overflow to ask and answer questions.

      This paragraph shows a transition from the distributed Internet of the 80s to the centralized Internet of today (2010~2015), and how this trend happened not only in the web world in general but also in software development (in fact, through the incorporation of centralized web experiences and interfaces on top of distributed, non-web infrastructures).

    1. A quote often attributed to Gloria Steinem says: “We’ve begun to raise daughters more like sons... but few have the courage to raise our sons more like our daughters.” Maker culture, with its goal to get everyone access to the traditionally male domain of making, has focused on the first. But its success means that it further devalues the traditionally female domain of caregiving, by continuing to enforce the idea that only making things is valuable. Rather, I want to see us recognize the work of the educators, those that analyze and characterize and critique, everyone who fixes things, all the other people who do valuable work with and for others—above all, the caregivers—whose work isn’t about something you can put in a box and sell.
    2. I am not a maker. In a framing and value system that is about creating artifacts, specifically ones you can sell, I am a less valuable human. As an educator, the work I do is superficially the same, year on year. That’s because all of the actual change, the actual effects, are at the interface between me as an educator, my students, and the learning experiences I design for them. People have happily informed me that I am a maker because I use phrases like "design learning experiences," which is mistaking what I do (teaching) for what I’m actually trying to help elicit (learning). To characterize what I do as "making" is to mistake the methods—courses, workshops, editorials—for the effects. Or, worse, if you say that I "make" other people, you are diminishing their agency and role in sense-making, as if their learning is something I do to them.

      As a teacher I also felt this sense of repetition in what I did. Same curricula, different people (particularly in the mathematics department at Javeriana University, where I worked). So the "escape" from repetition was in educational resources and spaces, most of them mediated by digital technology. That was the material correlate of the immaterial happening.

      So the deeper issue is the material and the immaterial in making. For me, the opposition of makers versus non-makers underlies a consumer society, and it embodies the danger of not recognizing the immaterial making of culture by everyone, every day.

    3. In Silicon Valley, this divide is often explicit: As Kate Losse has noted, coders get high salary, prestige, and stock options. The people who do community management—on which the success of many tech companies is based—get none of those. It’s unsurprising that coding has been folded into "making." Consider the instant gratification of seeing "hello, world" on the screen; it’s nearly the easiest possible way to "make" things, and certainly one where failure has a very low cost. Code is "making" because we've figured out how to package it up into discrete units and sell it, and because it is widely perceived to be done by men.
    4. It’s not, of course, that there’s anything wrong with making (although it’s not all that clear that the world needs more stuff).

      The "Internet of Things" wave seems to be co-opted by a consumerist view of a world needing more "stuff", while repairing or repurposing is treated as a kind of second-class activity, particularly in the Global North, in contrast with the Global South (see for example the gambiarra approach and critique from Brazil).

      So this making of the new and the visible seems informed not only by gender but also by race/place.

    5. Almost all the artifacts that we value as a society were made by or at the order of men. But behind every one is an invisible infrastructure of labor—primarily caregiving, in its various aspects—that is mostly performed by women.

      The main issue here is visible versus invisible work. Making, in the "maker" movement sense, is related to making the visible stuff, usually the hardware/software kind with a strong formal correlate (because that stuff takes the form of programmed code or is the result of programming code, e.g. 3D printing), while "soft", informal stuff, like the day-to-day logistics of places and doings, is invisible.

      The question is not solved simply by making the invisible visible, as Susan Leigh Star has pointed out (in the case of nursing, for example). It is also about letting the invisible be an agent of important stuff without being trapped by the formalism of the visible; giving the visible and the invisible their proper weight without merely trying to turn one into the other.

  2. Jan 2016
    1. Below I list a few advantages and drawbacks of anonymity where I assume that a drawback of anonymous review is an advantage of identified review and vice versa. Drawbacks: Reviewers do not get credit for their work. They cannot, for example, reference particular reviews in their CVs as they can with publications. It is relatively “easy” for a reviewer to provide unnecessarily blunt or harsh critique. It is difficult to guess if the reviewer has any conflict of interest with the authors by being, for example, a competing researcher interested in stalling the paper’s publication. Advantages: Reviewers do not have to fear “payback” for an unfavourable review that is perceived as unfair by the authors of the work. Some (perhaps especially “high-profile” senior faculty members) reviewers might find it difficult to find the time to provide as thorough a review as they would ideally like to, yet would still like to contribute and can perhaps provide valuable experienced insight. They can do so without putting their reputation on the line.
    1. With most journals, if I submit a paper that is rejected, that information is private and I can re-submit elsewhere. In open review, with a negative review one can publicly lose face as well as lose the possibility of re-submitting the paper. Won’t this be a significant disincentive to submit? This is precisely what we are trying to change. Currently, scientists can submit a paper numerous times, receive numerous negative reviews and ultimately publish their paper somewhere else after having “passed” peer review. If scientists prefer this system then science is in a dangerous place. By choosing this model, we as scientists are basically saying we prefer nice neat stories that no one will criticize. This is silly though because science, more often than not, is not neat and perfect. The Winnower believes that transparency in publishing is of the utmost importance. Going from a closed anonymous system to an open system will be hard for many scientists but I believe that it is the right thing to do if we care about the truth.
    2. At what point does payment occur, and are you concerned with the possible perception that this is pay-to-publish? Payment occurs as soon as you post your paper online. I am not overly concerned with the perception that this is pay-to-publish because it is. What makes The Winnower different is the price we charge. Our price is much much lower than what other journals charge and we are clear as to what its use will be: the sustainability and growth of the website. arXiv, a site we are very much modeled after does not charge anything for their preprint service but I would argue their sustainability based on grants is questionable. We believe that authors should buy into this system and we think that the price we will charge is more than fair. Ultimately, if a critical mass is reached in The Winnower and other revenue sources can be generated than we would love to make publishing free but at this moment it is not possible.
    3. I strongly believe that if you’re scared of open peer review then we should be scared of your results.
    4. While The Winnower won’t eliminate bias (we are humans, after all) the content of the reviews can be evaluated by all because they will be readily accessible. [Note: reviewers could list competing interests in the template suggested on The Winnower’s blog.]
    5. Moreover, editors are literally selecting for simple studies but very often studies are not simple and results are not 100% clear. If you can’t publish your work because it is honest but poses some questions then eventually you will have to mold your work to what an editor wants and not what the data is telling you. There is a significant correlation between impact factor and misconduct and it is my opinion that much of this stems from researchers bending the truth, even if ever so slightly, to get into these career advancing publications.
    6. PLOS Labs is working on establishing structured reviews and we have talked with them about this.
    7. It should be noted that papers will always be open for review so that a paper can accumulate reviews throughout its lifetime.
    8. The journal will accommodate data but should be presented in the context of a paper. The Winnower should not act as a forum for publishing data sets alone. It is our feeling that data in absence of theory is hard to interpret and thus may cause undue noise to the site.

      This will also be the case for the data visualizations shown here, once the data is properly curated and verified. Still, data visualizations can start a global conversation without having the full paper translated into English.

    1. I think The Winnower has found a nice niche publishing what is called “grey literature” (i.e. we publish content that is not traditionally afforded a platform). By focusing on this niche in the short term (<5 years) we can build a community that will allow us to experiment with different models in the long term (>5 years). I found out very early after launch of The Winnower—it’s not enough to build a platform around a new model, you have to convey the value to the community and really incentivize people to use it.
    2. I am hoping to change scholarly communication at all levels and I think transparency must be at the heart of this.
    3. While there are some features shared between a university repository and us, we are distinctly different for the following reasons: we offer DOIs to all content published on The Winnower; all content is automatically typeset on The Winnower; content published on The Winnower is not restricted to one university but is published amongst work from peers at different institutions around the world; because work is published from around the world it is more discoverable; we offer Altmetrics on content; our site is much more visually appealing than a typical repository; work can be openly reviewed on The Winnower but oftentimes not even commented on in repositories. This is not to say that repositories have no place, but that we should focus on offering authors choices, not restricting them to products developed in house.

      Regarding this tension/complementarity between in-house and external publishing platforms, I wonder where the place is for indie web, self-hosted publishing, like the kind promoted by grafoscopio.

      A reproducible, structured, interactive grafoscopio notebook is self-contained in software and data and holds all its history by design. Will in-house solutions and open journals like The Winnower, RIO Journal or the Self Journal of Science support such kinds of publishing artifacts?

      Technically there is not a big barrier (it's mostly about hosting Fossil repositories, which is pretty easy, and adding a discoverability and authoring layer on top), but it seems that the only option now is going to big DVCS and data platforms like GitHub or DataHub-alikes for storing other research artifacts like software and data, so it is centralized-mostly instead of p2p-also. These other p2p alternatives seem to be outside the radar of most alternative Open Access and Open Science publishers now.

    4. 20 years: ideas and results will be communicated iteratively and dynamically, not as a story written in stone. There is an increasing number of artifacts beyond text (data, visualizations, software tools, code, spreadsheets, multimedia content, etc.) How might these outputs factor into the scholarly conversation and more directly, the tenure and promotion process? JN: I think all these various outputs you mention are gaining prominence in scholarly communication.  I think that will continue and will become more and more important in how scholars are evaluated and rightly so a lot of work is done in different mediums and outside the confines of the article.  We need to experiment with different approaches of evaluation and part of that is looking beyond one thing (how often you publish and where you publish).  We do need to be careful though as new systems are implemented, new is not necessarily better.

      Precisely this part refers to the comment made here:

      https://hypothes.is/a/AVKTqqSqvTW_3w8Lym-d

      It would be nice to know how to enable this interactive and dynamic communication now. My bet is to use interactive moldable notebooks, like grafoscopio, for integrating the whole workflow: writing, data visualization, data sharing and versioning, moldable tools, etc.

      The advantage of grafoscopio compared to similar interactive notebooks like Jupyter, Zeppelin or Beaker is that it is modifiable, self-contained and works offline, which is important in the Global South context; it also helps counter the power concentration we have witnessed in the recent web, even in academic publishing, and is more related to the indie publishing approach (see Indie Web for an alternative).

    5. I think the next generation of scientists who have grown up in the Internet era will have zero patience for the current system and because of that they will seek different outlets that make sense in light of the fact the Internet exists!  
    6. Most importantly, is that we’ve given the tools of scholarly publishers to the scholars themselves to use.  Which has had the unexpected effect that different types of content are being produced (conference proceedings, grants, open letters, responses to grants, peer reviews, logistics for organizing symposiums, and more).  Ultimately, we’ve created a platform that allows anyone to get their idea out there and to be afforded the same tools that a traditional publisher offers, that is in my opinion quite impactful.

      It would be nice to have links to examples of such kinds of content, particularly for the "more" part. I would like to know if there is something related to datasets, visualizations & algorithms.

    7. developing countries and developing scientists (students) are left out of scholarly discourse,

      Multilingual or language-diverse journals must be developed to serve the diverse publics and authors of the Global South. Having mostly English-language journals is also a big barrier to scientific discourse in the Global South. Other, more fluid forms of discourse could be articulated around other research artifacts, like datasets or algorithms, which are more language-neutral, instead of focusing mainly on English scholarly text.

      This may be an example of such publications, based more on data & algorithms, that enable this kind of global, agile discourse:

      A visualization of publicly released info on meds

      (more details here)

    1. This IP license ends when you delete your IP content or your account, unless the content has been shared with others and they have not deleted it.
    2. For content protected by intellectual property rights, such as photos and videos ("IP content"), you specifically grant us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook ("IP license").
    3. Date of last revision: January 30, 2015

      There is a documentary on Netflix about how terms and conditions change constantly, and how, if we actually read them all, it would take us roughly 3 months per year. The documentary is called Terms and Conditions May Apply.

    1. Green OA and the role of repositories remain controversial. This is perhaps less the case for institutional repositories, than for subject repositories, especially PubMed Central. The lack of its own independent sustainable business model means Green OA depends on its not undermining that of (subscription) journals. The evidence remains mixed: the PEER project found that availability of articles on the PEER open repository did not negatively impact downloads from the publishers’s site, but this was contrary to the experience of publishers with more substantial fractions of their journals’ content available on the longer-established and better-known arXiv and PubMed Central repositories. The PEER usage data study also provided further confirmation of the long usage half-life of journal articles and its substantial variation between fields (suggesting the importance of longer embargo periods than 6–12 months, especially for those fields with longer usage half-lives). Green proponents for their part point to the continuing profitability of STM publishing, the lack of closures of existing journals and the absence of a decline in the rate of launch of new journals since repositories came online as evidence of a lack of impact to date, and hence as evidence of low risk of impact going forward. Many publishers’ business instincts tell them otherwise; they have little choice about needing to accept submissions from large funders such as NIH, but there has been some tightening of publishers’ Green policies (page 102).
    2. Research funders are playing an increasingly important role in scholarly communication. Their desire to measure and to improve the returns on their investments emphasises accountability and dissemination. These factors have been behind their support of and mandates for open access (and the related, though less contentious policies on data sharing). These policies have also increased the importance of (and some say the abuse of) metrics such as Impact Factor and more recently are creating part of the market for research assessment services (page88).
    3. Open access publishing has led to the emergence of a new type of journal, the so-called megajournal. Exemplified by PLOS ONE, the megajournal is characterised by three features: full open access with a relatively low publication charge; rapid “non-selective” peer review based on “soundness not significance” (i.e. selecting papers on the basis that science is soundly conducted rather than more subjective criteria of impact, significance or relevance to a particularly community); and a very broad subject scope. The number of megajournals continues to grow: Table 10 lists about fifty examples (page 99).
    4. and the more research intensive universities remain concerned about the net impact on their budgets (page 90; 123).

      What does this mean?

    5. Gold open access based on APCs has a number of potential advantages. It would scale with the growth in research outputs, there are potential system-wide savings, and reuse is simplified. Research funders generally reimburse publication charges, but even with broad funder support the details regarding the funding arrangements within universities it remain to be fully worked out. It is unclear where the market will set OA publication charges: they are currently lower than the historical average cost of article publication; about 25% of authors are from developing countries;
    6. The APC model itself has become more complicated, with variable APCs (e.g. based on length), discounts, prepayments and institutional membership schemes, offsetting and bundling arrangements for hybrid publications, an individual membership scheme, and so on (page 91; 93).
    7. Average publishing costs per article vary substantially depending on a range of factors including rejection rate (which drives peer review costs), range and type of content, levels of editorial services, and others. The average 2010 cost of publishing an article in a subscription-based journal with print and electronic editions was estimated by CEPA to be around £3095 (excluding non-cash peer review costs). The potential for open access to effect cost savings has been much discussed, but the emergence of pure-play open access journal publishers allows examples of average article costs to be inferred from their financial statements. These range from $290 (Hindawi), through $1088 (PLOS), up to a significantly higher figure for eLife (page 66).
    8. There is continued interest in expanding access by identifying and addressing these specific barriers to access or access gaps. While open access has received most attention, other ideas explored have included increased funding for national licences to extend and rationalise cover; walk-in access via public libraries (a national scheme was piloted in the UK in 2014); the development of licences for sectors such as central and local government, the voluntary sector, and businesses (page 84)
    9. The most commonly cited barriers to access are cost barriers and pricing, but other barriers cited in surveys include: lack of awareness of available resources; a burdensome purchasing procedure; VAT on digital publications; format and IT problems; lack of library membership; and conflict between the author’s or publisher’s rights and the desired use of the content (page 84).
    10. While publishers have always provided services such as peer review and copy-editing, increased competition for authors, globalisation of research, and new enabling technologies are driving an expansion of author services and greater focus on improving the author experience. One possibly emerging area is that of online collaborative writing tools: a number of start-ups have developed services and some large publishers are reported to be exploring this area (page 153).
    11. Semantic technologies have become mainstream within STM journals, at least for the larger publishers and platform vendors. Semantic enrichment of content (typically using software tools for automatic extraction of metadata and identification and linking of entities) is now widely used to improve search and discovery; to enhance the user experience; to enable new products and services; and for internal productivity improvements. The full-blown semantic web remains some way off, but publishers are starting to make use of linked data, a semantic web standard for making content more discoverable and re-usable (page 143).
    12. The growing importance to funders and institutions of research assessment and metrics has been reflected in the growth of information services such as research analytics built around the analysis of metadata (usage, citations, etc.), and the growth of a new software services such as CRIS tools (Current Research Information Systems) (page 150).
    13. Text and data mining are starting to emerge from niche use in the life sciences industry, with the potential to transform the way scientists use the literature. It is expected to grow in importance, driven by greater availability of digital corpuses, increasing computer capabilities and easier-to-use software, and wider access to content
    14. The explosion of data-intensive research is challenging publishers to create new solutions to link publications to research data (and vice versa), to facilitate data mining and to manage the dataset as a potential unit of publication. Change continues to be rapid, with new leadership and coordination from the Research Data Alliance (launched 2013): most research funders have introduced or tightened policies requiring deposit and sharing of data; data repositories have grown in number and type (including repositories for “orphan” data); and DataCite was launched to help make research data cited, visible and accessible. Meanwhile publishers have responded by working closely with many of the community-led projects; by developing data deposit and sharing policies for journals, and introducing data citation policies; by linking or incorporating data; by launching some pioneering data journals and services; by the development of data discovery services such as Thomson Reuters’ Data Citation Index (page 138).
    15. Similarly the rapid general adoption of mobile devices (smartphones and tablets) has yet to change significantly the way most researchers interact with most journal content–accesses from mobile devices still account for less than 10% of most STM platform’s traffic as of 2014 (though significantly higher in some fields such as clinical medicine) –but this is changing. Uptake for professional purposes has been fastest among physicians and other healthcare professionals, typically to access synoptic secondary services, reference works or educational materials rather than primary research journals. For the majority of researchers, though, it seems that “real work” still gets done at the laptop or PC (page 24; 30; 139).
    16. Social networks and other social media have yet to make the impact on scholarly communication that they have done on the wider consumer web. The main barriers to greater use have been the lack of clearly compelling benefits to outweigh the real costs (e.g. in time) of adoption. Quality and trust issues are also relevant: researchers remain cautious about using means of scholarly communication not subject to peer review and lacking recognised means of attribution. Despite these challenges, social media do seem likely to become more important given the rapid growth in membership of the newer scientific social networks (Academia, Mendeley, ResearchGate), trends in general population, and the integration of social features into publishing platforms and other software (page 72; 134).
    17. Virtually all STM journals are now available online, and in many cases publishers and others have retrospectively digitised early hard copy material back to the first volumes. The proportion of electronic-only journal subscriptions has risen sharply, partly driven by adoption of discounted journal bundles. Consequently the vast majority of journal use takes place electronically, at least for research journals, with print editions providing some parallel access for some general journals, including society membership journals, and in some fields (e.g. humanities and some practitioner fields). The number of established research (i.e. non-practitioner) journals dropping their print editions looks likely to accelerate over the coming few years (page 30).
    18. There is a significant amount of innovation in peer review, with the more evolutionary approaches gaining more support than the more radical. For example, some variants of open peer review (e.g. disclosure of reviewer names either before or after publication; publication of reviewer reports alongside the article) are becoming more common. Cascade review (transferring articles between journals with reviewer reports) and even journal-independent (“portable”) peer review are establishing a small foothold. The most notable change in peer review practice, however, has been the spread of the “soundness not significance” peer review criterion adopted by open access “megajournals” like PLOS ONE and its imitators. Post-publication review has little support as a replacement for conventional peer review but there is some interest in its use as a complement to it (for example, the launch of PubMed Commons is notable in lending the credibility of PubMed to post-publication review). There is similar interest in “altmetrics” as a potentially useful complement to review and in other measures of impact. A new technology of potential interest for post-publication review is open annotation, which uses a new web standard to allow citable comments to be layered over any website (page 47).
    19. Reading patterns are changing, however, with researchers reading more, averaging 270 articles per year, depending on discipline (more in medicine and science, fewer in humanities and social sciences), but spending less time per article, with reported reading times down from 45-50 minutes in the mid-1990s to just over 30 minutes. Access and navigation to articles is increasingly driven by search rather than browsing; at present there is little evidence that social referrals are a major source of access (unlike consumer news sites, for example), though new scientific social networks may change this. Researchers spend very little time on average on publisher web sites, “bouncing” in and out and collecting what they need for later reference (page 52).
    20. Despite a transformation in the way journals are published, researchers’ core motivations for publishing appear largely unchanged, focused on securing funding and furthering the author’s career (page 69)
    21. Although this report focuses primarily on journals, the STM book market (worth about $5 billion annually) is evolving rapidly in a transition to digital publishing. Ebooks made up about 17% of the market in 2012 but are growing much faster than STM books and than the STM market as a whole (page 24).
    22. The annual revenues generated from English-language STM journal publishing are estimated at about $10 billion in 2013, (up from $8 billion in 2008, representing a CAGR of about 4.5%), within a broader STM information publishing market worth some $25.2 billion. About 55% of global STM revenues (including non-journal STM products) come from the USA, 28% from Europe/Middle East, 14% from Asia/Pacific and 4% from the rest of the world (page 23).
    1. There's no incentive structure for people to comment extensively, because it can take time to write a thoughtful comment, and one currently doesn't get credit for it,” he says. “But it's an experiment that needs to be done.”
    2. At the moment, Neylon explains, the scholarly publishing process involves ferrying a document from place to place. Researchers prepare manuscripts, share them with colleagues, fold in comments and submit them to journals. Journal editors send copies to peer reviewers, returning their comments to the author, who goes back and forth with the editor to finalize the text. After publication, readers weigh in with commentary of their own.
    3. To jump-start interest in the annotation program, arXiv has been converting mentions of its articles in external blog posts (called trackbacks) into annotations that are visible on an article's abstract page when using Hypothes.is.
    4. The scientific publisher eLife in Cambridge, UK, has been testing the feasibility of using Hypothes.is to replace its peer-review commenting system, says Ian Mulvany, who heads technology at the firm. The publisher plans to incorporate the annotation platform in a site redesign instead of its current commenting system, Disqus. At a minimum, says Mulvany, Hypothes.is provides a mechanism for more-targeted commentary — the equivalent of moving comments up from the bottom of a web page into the main body of the article itself.
    5. The digital library JSTOR, for example, is developing a custom Hypothes.is tool for its educational project with the Poetry Foundation, a literary organization and publisher in Chicago, Illinois.
    6. That should enable the tool to be used for journal clubs, classroom exercises and even peer review.
    7. But unlike Hypothes.is, the Genius code is not open-source, its service doesn't work on PDFs, and it is not working with the scholarly community.
    8. A few websites today have inserted code that allows annotations to be made on their pages by default, including the blog platform Medium, the scholarly reference-management system F1000 Workspace and the news site Quartz. However, annotations are visible only to users on those sites. Other annotation services, such as A.nnotate or Google Docs, require users to upload documents to cloud-computing servers to make shared annotations and comments on them.
    1. It's always a strange thing, going from nothing to something. Starting with just an idea, and gradually turning it into something real. Inventing along the way all these things that start so small, and gradually become these whole structures. I always think anyone who's been involved in such a thing has kind of a glow of confidence that lasts at least a decade—realizing that, yes, with the right effort nothing can turn into something.

      I have said that it is harder to go from nothing to something than from something to something more. It requires a kind of iron conviction in the value of undertaking something when there is nothing yet, and in enduring the stress of doing so.

    1. The difference is that with the Smalltalk code I can let the message do the talking. The message initiates the action. With C# I have to call a method in order to "send a message." This is what OO is supposed to avoid, because we're exposing implementation, here. It reifies the abstraction.
    2. A fundamental difference between the way Smalltalk treats objects and the way other so-called OOP languages treat them is that objects, as Alan Kay envisioned them (he coined the term "object-oriented"), are really meant to be servers (in software, not necessarily hardware), not mere collections of functions that have privileged access to abstract data types.

      The last part in particular, functions with privileged access to abstract data types, is what you see in classic programming courses in Java or C++.

    3. The fundamental principle of objects in Smalltalk is message passing. What matters is what goes on between objects, not the objects themselves. The abstraction is in the message passing, not the objects.

      In a video, Alan Kay talks about Japanese culture and the concept of "ma", which could be translated as interstice, or "the space in between", and about how Anglo culture emphasizes the visible things (objects) rather than the intangibles (messages); as I recall, he says a more appropriate name might have been message-oriented programming.
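
      A tiny Pharo sketch (my own illustration, not from the article) of the claim that the abstraction lives in the message sends; everything below, including the conditional, is a message sent to some object:

      | amounts total |
      amounts := OrderedCollection new. "a message sent to the class OrderedCollection"
      amounts add: 100; add: 250. "two messages sent to the collection"
      total := amounts inject: 0 into: [ :sum :each | sum + each ].
      "even the conditional is a message, ifTrue:ifFalse:, sent to a Boolean object"
      Transcript show: (total > 300 ifTrue: [ 'big' ] ifFalse: [ 'small' ]); cr.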

    4. The idea was to create a "no-centers" system design, where logic, and operational control is distributed, not centralized.

      Similar to living tissues, where processing is not located anywhere in particular. Alan Kay talks about objects being similar to cells and messages being similar to algebras.

  3. Dec 2015
    1. v := RTView new. s := (RTBox new size: 30) + RTLabel. es := s elementsOn: (1 to: 20). v addAll: es. RTGridLayout on: es. v

      Nice! Here is just another example, with no single-letter variable names and more explicit data:

      | visual composedShape data viewElements |
      visual := RTView new.
      data := #('lion-o' 'panthro' 'tigro' 'chitara' 'munra' 'ozimandias' 'Dr Manhatan').
      composedShape := (RTEllipse new size: 100; color: Color veryLightGray) + RTLabel.
      viewElements := composedShape elementsOn: data.
      visual addAll: viewElements. 
      RTGridLayout on: viewElements.
      visual
      

      At the beginning I understood that data "comes from Smalltalk", but maybe adding some tips with alternative examples, explicit data and longer variable names could help newbies like me, by offering comparisons with the numerical and intrinsic data inside the image. The explanation about composed shapes and the "+" sign is very well done.

    2. Roassal maps objects and connections to graphical elements and edges. In additions, values and metrics are represented in visual dimentions (e.g., width, height, intensity of graphical elements). Such mapping is an expressive way to build flexible and rich visualization. This chapter gives an overview of Roassal and its main API. It covers the essential concepts of Roassal, such as the view, elements, shapes, and interactions.

      I would try a less technical introduction to combine with this one. How about:

      When we're building a visualization, we want the properties of the objects in our domain to be expressed graphically, by shapes, connections and visual dimensions like width, height, intensity of graphical elements. Roassal builds such mappings as an expressive way to build flexible and rich visualizations.
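
      A minimal sketch of that mapping idea (my own addition; it assumes the Roassal2 convention that shape parameters like size: and color: also accept one-argument blocks evaluated on each element's model, so please double-check against the current API):

      | view shape elements |
      view := RTView new.
      "Bigger numbers become bigger and more intense boxes: size and color intensity carry the metric"
      shape := RTBox new
           size: [ :number | number * 3 ];
           color: [ :number | Color blue alpha: number / 20 ].
      elements := shape elementsOn: (1 to: 20).
      view addAll: elements.
      RTGridLayout on: elements.
      view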

    1. Once a Roassal element has been created, modifying its shape should not result in an update of the element.

      This part should be clarified. Could a further example be referenced?

    2. c := TRCanvas new. shape := TRBoxShape new size: 40. c addShape: shape. shape when: TRMouseClick do: [ :event | event shape remove. c signalUpdate ]. c

      I get this error MessageNotUnderstood: TRMouseLeftClick>>myCircle for this similar code:

      | canvas myCircle data |
      canvas := TRCanvas new.
      myCircle := TREllipseShape new size: 100; color: Color white.
      data := #('lion-o' 'panthro' 'tigro' 'chitara' 'munra' 'ozimandias' 'Dr Manhatan').
      canvas addShape: myCircle.
      myCircle when: TRMouseClick do: [:event | event myCircle remove. canvas signalUpdate  ].
      canvas
      

      If I change myCircle back to shape it works fine, but I wouldn't have imagined that variable names could be so picky; generic names should work (circle doesn't work either). My guess is that the culprit is event myCircle: in the chapter's snippet, the shape in event shape remove seems to be a message understood by the click event (answering the clicked shape), not a reference to the outer variable, so renaming it along with the variable breaks the code.
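
      For comparison, a sketch of the renaming that keeps event shape as the event's accessor (my reading of the chapter's snippet, not verified beyond that):

      | canvas myCircle |
      canvas := TRCanvas new.
      myCircle := TREllipseShape new size: 100; color: Color white.
      canvas addShape: myCircle.
      "event shape answers the shape that was clicked; it is a message to the event, not the outer variable"
      myCircle when: TRMouseClick do: [ :event | event shape remove. canvas signalUpdate ].
      canvas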

    3. Any sophisticated visualization boils down into primitives graphical elements. Ultimately, no matter how complex a visualization is, circles, boxes, labels are the bricks to convey information. Before jumping into Roassal, it is important to get a basic understand on these primitive graphical elements. Trachel is a low-level API to draw primitive graphical elements.

      Nice introduction. The only primitive I missed was the line.
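
      In case it helps, a guess at how a line primitive could be drawn (this assumes Trachel has a TRLineShape whose endpoints are given with from:to:; I have not verified this against the current API, so treat it as a sketch):

      | canvas line |
      canvas := TRCanvas new.
      line := TRLineShape new.
      "assumed API: endpoints given as points"
      line from: -40 @ -40 to: 40 @ 40.
      canvas addShape: line.
      canvas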

    1. Congratulations on that! Looking forward to seeing how this develops and to putting in my two cents on this effort.

      Just a minor correction: it's IPython, not iPython (in Fernando Perez's appearance and in the end credits).

    1. Our utopian visions of the future, freed from present problems by human ingenuity and technical competence, might be possible on paper, but they are unlikely in reality. We have already made the biggest mistake, and spent 10,000 years perfecting a disastrous invention, then making ourselves ever more reliant on it. However, the archaeologists who give us glimpses of our ancestors, and the anthropologists who introduce us to our cousins, have been able to show us why we dream what we do. What we yearn for is not just our imagined future; it is our very real past.
    2. Agriculture turns land that feeds thousands of species into land that feeds one. It literally starves other species out of existence.

      However, with organic agriculture, like Guillermo's, monocultures are not favored.

    3. Without a surplus of food, sustained military campaigns are simply not possible.
    4. Among nomads, property becomes a burden if it accumulates. A society of equals, which places little value on what material wealth it does possess, is not fertile ground for property crime.
    5. A group of nomads, finding itself unable to agree on an issue of importance, can always split into two or more groups, each of which can go its own way and implement the decision they believe to be the best. Farmers, however, are stuck where they are, and the best kind of democracy that a settled community can produce is the tyranny of the majority.

      An early example of plurarchy vs democracy.

    6. In the 1960s and 1970s anthropologists, such as Richard Lee and Yehudi Cohen, noticed the strong correlation between how societies produce their food and how they are structured socio-politically. Years of accumulated anthropological research showed that those who live by hunting and gathering show a very strong tendency to live in egalitarian, consensus-based societies.
  4. Nov 2015
    1. How might civil society actors shape the data revolution? In particular, how might they go beyond the question of what data is disclosed towards looking at what is measured in the first place?

      This is deeply related to how we express what we value. But metrics can also deform our very perception of value and the way we behave according to it. A case about money, and the need for diversity in it, can be found in Riches beyond belief (So you want to invent your own currency).

      Data, as a political construct, is employed to argue for or against the implementation of certain visions of the world.

    1. In the age of social media, there are a myriad ways our online presence may be used against us by a multitude of adversaries. From stalkers to prosecutors, any public information that can be attached to our identities may be used to their advantage and our detriment. It is important that we are mindful of the resources we make available to potential attackers.

      A balance between identity and privacy is required: public profiles for what we want to be recognized for, and private ones for what could put our physical integrity, and that of our loved ones, at risk. Handling this duality, which in principle should be available to everyone as a constitutional guarantee, will require other hardware and software designs (physical USB keys, perhaps with some built-in biometrics and processing, open but encrypted hardware, etc.).

    1. never post screencaps that show tabs. EVER.
    2. Identity is key and must be balanced with anonymity. It seems we need a p2p system, running on our own machines and hardware (a physical USB keyring), that can be used to protect our digital identity. It would take care of things like encrypting and decrypting messages, using temporary email addresses to download information, creating anonymous but reputation-bearing profiles to share certain critical information, and, in general, the activities that involve "dancing with power", as they said at the STEPS Latin America event.

    1. Extreme efficiency of exchange, in other words, might come at the cost of developing new business contacts.
    2. I accept bitcoins for the same reason that I accept normal money. Mainstream money is used to replace a specific trust relationship with a general one. I take British pounds from a specific person because I trust that I can exchange those pounds for something else within the general British pound-using community. Likewise, I take the bitcoins from the specific buyer because I trust that the broader Bitcoin community will accept them from me in exchange for something of intrinsic value. The main departure from normal electronic money is that Bitcoin uses a decentralised network in place of a central hierarchy. The advantages are anonymity, a sense of freedom and, it has been argued, a more resilient system.
    3. Perhaps we can tinker with the word ‘money’ itself. It’s a mass noun, like you’d use for some kind of tangible substance, and it makes money sound like a ‘thing-in-itself’. As a kind of mental discipline, I prefer to use a different word: COGAS. It stands for ‘claims on goods and services’, which is all money really is. And now I have a word that describes itself, as opposed to one that actively hides its own reality. It sounds trivial, but the linguistic process works a subtle psychological loop, referring money to the world outside itself. It’s a simple way to start peeling back the façade.

      Stallman does something similar when he changes DRM, replacing "Rights" with "Restrictions". That symbolic change is important!

    4. There’s an ecological dimension to this, of course, which is my overriding concern. Our ability to exchange without knowing where things come from blinds us to the real core of the economy: not money, but the physical things we must wrench from the ground by human effort, which is underpinned by agricultural systems, and energised by sunlight, water and soil.
    5. GDP is supposed to reflect what is created in society, but if my grandad builds me a table in his workshop, it’s not included in GDP, and if I buy a table in Ikea, it is. The former is not considered valid production, whereas the latter is. That is arbitrary, and obviously something has gone wrong.
    6. Similar network effects arise with social platforms such as Facebook — in theory, you can opt out, but only if you don’t mind the penalty of social exclusion. What’s more, when integrated into a national legal system and backed by the threat of violence, the sanctions for dissent become rather persuasive. At the unsubtle end of the spectrum, the monarch may simply throw you in jail for not using her preferred currency.
    7. Gold reveals the basic tension in the textbook definition of money — the idea that it can be both a store of value and a means of exchange. For the most part, when something is truly valuable in itself, people are disinclined to part with it (why swap rum for something else when you can just drink it?).
    8. It’s a reassuring myth, one that obscures the deep difference between barter and monetary exchange. In the former, nothing is left unresolved and no faith is required. It’s a closed circuit, a like-for-like swap. By contrast, money transactions are never closed; you pass on an abstract, faith-based claim in exchange for a tangible good.
    9. but this still means that every monetary transaction is a leap of faith. And faith has to be carefully maintained.

      We could get people to place their faith in something with more intrinsic value: useful information, for example.

    10. Shopkeepers accept the paper because they believe that it has abstract value — because, in turn, they believe that others believe it, too. The value is circular, predicated on each person believing that others believe in it.

      I remember saying that money was pieces of paper with portraits of dead people.

    11. I have an enduring memory of a TED talk in which he ripped a banknote into pieces, trying to make the point that the paper itself doesn’t have value
    12. The best guides in this half-lit territory turn out to be not economists, but rather the loose bands of monetary mystics and iconoclasts who are developing strange new exchange technologies. They are a scattered tribe, with elders including the likes of Bernard Lietaer, Ellen Brown and Thomas Greco, sages passing on tips on how to breach the Monetary Matrix.
    13. Money sounds like it’s an ordinary noun, a self-contained object. If it is a physical object, it must be paper or metal or digits on a computer. And yet, very few of us think a £5 note is merely a piece of paper: the same idea of £5 can be expressed in electronic or metal form, after all.

      Information whose material substrate can change, like the abstraction of numbers: the same quantity can be associated with collections of very different objects.

    14. By contrast, money itself is more like a low-level programming language, very hard to see or to understand but closer to gritty reality. It’s like your computer’s machine code, interfacing with the hardware: even the experts take it for granted. You might need to explain to someone what a bond is, but nobody is ever ‘taught’ what money is.

      In my case the illusory character of money was revealed early on. Maybe that is why I don't have much of it :-P.

    15. To draw an analogy with computer coding, we might say that financial instruments are analogous to ‘high-level’ programming languages such as Java or Ruby: they let you string commands together in order to perform certain actions. You want to get resources from A to B over time? Well, we can program a financial instrument to do that for you.

      An interesting analogy. It would be worth looking at how cooperative practices could drive flows from A to B while being supported by several material substrates, some low-tech (local currencies) and others high-tech (Bitcoin, Ethereum, etc.).

    16. The financial system exists, above all, to mediate flows of money, not to question what money is.
    1. Weapons of the Weak is not just a political study, however; it is also an outstanding work of ethnography. Based on thorough research and careful, perceptive fieldwork, it manages to avoid some of the failings of traditional ethnography by its emphasis on the centrality of individual human beings in their particular situations. Whether or not it offers definitive answers to the questions it investigates, it certainly provides some solid ground to stand on in looking for them.
    2. As a result, Scott suggests that the ideological superstructure must always be seen as a product of struggle, not as something preexisting.
  5. Oct 2015
    1. Using the data, Carlos Alberto made a visualization of income and expenses by department.

      When I click on "Cundinamarca", the data for Córdoba shows up.

    1. Min 52:43, Patternmakers: the artisans who enable the machine, who create the patterns that make machines possible.

    1. how to think about a system in which academic practice no longer reinforces the privatization of knowledge by remaining centered on the protection of the author and their works under property rights, while, on the other hand, making the author invisible or killing the author would mean a circulation of works as ownerless commodities, which does a great favor to the capitalist system by giving it access to “free gifts”, that is, knowledge that is easily incorporated into the dominant circuits of production.

      One exploration of a possibility for other academic practices that do not reinforce the privatization of knowledge is grafoscopio. Although individual authorship can still be traced there, its pocket, forkable infrastructure makes collective authorship easier. Licenses such as the P2P license can forbid the appropriation of knowledge by private actors if they do not contribute back to the commons.

    1. Apple does not sell great design. It sells design that flatters its owner. (And Apple’s timing has been perfect to exploit the rising tide of wealth inequality.)
  6. Sep 2015
    1. | canvas point |
       canvas := DrGeoCanvas new.
       canvas fullscreen.
       point := canvas point: 0@0.
       canvas do: [
           -5 to: 5 by: 0.1 do: [:x |
               point moveTo: x@(x cos * 3).
               (Delay forMilliseconds: 100) wait.
               canvas update] ]
    2. c := DrGeoCanvas new.

      This line should be before the definition of "triangle"
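
      A minimal sketch of the intended ordering, using only the DrGeoCanvas messages already shown in these excerpts; the point below is a hypothetical stand-in, since the article's actual triangle code is not quoted here:

        | c p |
        "The canvas has to exist before any figure that is drawn on it."
        c := DrGeoCanvas new.
        "Hypothetical figure definition standing in for the article's triangle."
        p := c point: 2@3.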

    1. Unfortunately, this process rarely actually happens the right way, often because the business people ask their data people the wrong questions to begin with, and since they think of their data people as little more than pieces of software – data in, magic out – they don’t get their data people sufficiently involved with working on something that data can address.
    1. In programming languages like C++, C# or Java a class usually would be defined in a source code. A class definition file (Desktop.cpp/ Desktop.cs/ Desktop.java) in these languages would be a dumb text definition file fed into a compiler to verify and translate. In an interactive and lively system like Pharo a class could be created like any other object by sending instance creation methods. The reason is simple: in a pure OO environment anything is an object, so even a class is an object. Remember: there are only objects and messages.

      An example of "live coding" (via objects) versus "static code" (via files).
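
      A minimal sketch of what that looks like in practice, assuming a recent Pharo image; the class name #Desktop and the category are illustrative only, and the final keyword may be category: or package: depending on the Pharo version:

        "Create a class by sending a message to its superclass,
         instead of feeding a definition file to a compiler."
        Object subclass: #Desktop
            instanceVariableNames: 'windows'
            classVariableNames: ''
            category: 'DemoSystem'.

        "The class is itself an object and can be used immediately."
        Desktop new inspect.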

    2. As the previous examples showed Pharo has very much in common with an operating system. The difference is that it is more a lively kernel and scriptable object system that one can easily persist and transfer and that is easily extendable using the Smalltalk language.

      In Tracing the Dynabook it is shown how Smalltalk was an alternative to operating systems: another way to propose a whole computing experience that was at once complete and minimalist.

    3. So Pharo is like an easily transferable operating system, moveable between systems and devices.
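
      A rough illustration of that portability, assuming a standard Pharo setup: the whole live object memory can be written to the .image file and later resumed, on the same or another machine, with a compatible virtual machine.

        "Persist the running system (objects, classes, open tools)
         without quitting the current session."
        Smalltalk snapshot: true andQuit: false.
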
    1. Post offices are ubiquitous across America — what if they could be retrofitted to also be Social Security offices and DMVs and passport offices and polling locations? What if folks who aren’t comfortable with fancy, modern websites could walk into their post office and have any question about government answered for them? Yeah

      In Colombia, this place could be the public libraries. There is a network of close to 1,400 of them distributed all across the country.

    2. So putting together a single online resource like this is no small task. It would require the collaboration of local, state, and federal governments and agencies that have immense amounts of overlap in their missions and are very protective of their individually appropriated budgets

      There is also the problem of ceding power when interoperating, one that I have seen directly in government.

    3. there is probably no more devastating counterargument to giving the authorities more sophisticated technology to interact with citizens than the ongoing NSA scandal

      Governments are willing to invest in cutting-edge technology to spy on us, but not to empower us.

    4. Government websites are notoriously frustrating trips back in time, and it’s virtually impossible to find out all the ways government could potentially help you.
    1. I loved those days: writing post after post after post, day after day, forces a different mindset as a writer. You loosen up; you get conversational.

      This reminds me of the conversation with @Xtringray about writing every day. In that sense I would say I am less conversational and more writerly. For conversations I mainly use mailing lists and, in second place, the microblog, while longer writings tend to be more detailed and take longer to produce.

    1. NS Taleb teaches that a system can be designed to be anti-fragile, to not only cope with small amounts of stress but also become better and more resilient because of it. And while this might be extremely difficult, Taleb also shows that small is truly beautiful, thus for our discipline
    2. using Cloud based services, despite their creators’ best intentions, has considerable risks since any Cloud based approach creates massive, hugely attractive, fragile Honeypots
    1. “There’s been this brutal narrative of the digital native,” says Owens. “People think they’re already supposed to know this stuff, but they don’t.” But Known and other tools can help change that.

      This brutal narrative also includes the idea that some people are not suited to learning technology-related things because of their age.

    1. That would mean the shift to an economic system where the fruits of the most powerful technologies humans have invented are shared more equally among us. If we embraced work-saving technologies rather than feared them, and organised our society around their potential, it could mean being able to live a good life with a ten-hour working week.
    2. But we live in a world not of steam, but of silicon, solar and synthetic biology. Yes, technology challenges existing business models and maybe even capitalism as we know it. The solution isn't economically illiterate nostalgia or wringing your hands about inevitable social unrest. Rather than seeing "structural unemployment" – as Rupert calls it – as a threat, we should take it as an opportunity to build a society where we can have much more and work much less.
    3. While Rupert's vision is that of a dystopian, socially fragmented future, Osborne's ambition is to return to the economics of the Victorians. The former sees a world whose politics are increasingly incapable of resolving the problems of its time, while the latter pines for the ways of the steam-age.
    4. That "us" refers to the array of oligarchs, billionaires and chief executives Rupert was speaking to.
    5. In that respect, the recent victory of anti-eviction activist and ex-squatter, Ada Colau, in the race to become the Mayor of Barcelona is a sign of things to come. It is politicians like Colau, the Greek Prime Minister Alex Tsipras and Pablo Iglesias, the leader of a radical-left party which could form Spain's next government, who are the leading edge of something which Rupert fears is much, much bigger.
    1. the guff from the government about "fixing the roof while the sun is shining"

      "arreglar el techo mientras el sol está brillando"

    2. If the state moved heaven and earth to create capitalism, what will stop it doing the same to ensure its survival and creating some kind of techno-fascism – less a transition motor and more a whack-a-mole game, bashing non-capitalist initiatives on the head as they emerge?
    3. Abundance is already here – we have enough stuff but don't share it properly. Loads of people are already in bullshit jobs that don't need to happen – and technology hasn't changed that until now.
    4. I think the choke point for the transition to postcapitalism comes when the market sector and non-market sector become round about the same size.
    5. "The state has to be rethought as a transition motor," he says – meaning it needs to be reimagined as a vehicle for change rather than a defender of the status quo. "And transition's a long period – we're not talking about two years, we're talking about 50," says Paul.
    6. The identity change is true closer to home as well as in the developing world, with traditional workplace identities evaporating.
    1. Currently, Facebook, by providing free access to select websites, via its platform to a number of emerging economies, has become the internet to this substantive user base. Net neutrality here has evidently taken a backseat in the name of doing good and given Facebook a unique vantage point into database behaviour among this bop populace.

      Facebook trying to gain a privileged market position by using the poor, and "inclusion", as instruments.

    2. The question that remains is how to treat this rising populace as culturally diverse and yet refrain from exoticizing them; how to allow big data to be an empowering tool among emerging economies while simultaneously strengthening their institutions; and how to create alternative modes of inclusivity to the default neoliberal approach of the marketization of the poor.
    3. The longevity of such social entrepreneurship lies in the belief that the state will continue to disappoint its citizens. Here, zones of marginalization become zones of innovation.

      Social innovation seen as a market.

    4. Lastly, far from the claim of these initiatives to being novel and unprecedented, we need to recognize that these surveillance systems have their roots in colonial practices of identification of the colonized.
    5. Also, it is worth asking whether by embracing the bottom of the pyramid (bop) perspective of the poor as empowered consumers, are we in fact marketizing the poor?
    6. what Owen Thomas calls ‘high tech racism’. Certain bodies are more ‘unreadable’ than others
    7. While the West appears to be moving away from the convergence of datasets due to privacy laws, constitutional rights and public concern, these very initiatives in the global South are celebrated as acts of empowerment. Why the apparent contradiction?
    8. when we pay attention to the debates about surveillance, privacy and net neutrality and the demand for alternative models and practices to sustain the digital commons, they are primarily driven by western concerns, contexts, and user behaviors from these privileged domains. This undoubtedly provides a thwarted view of the internet.

      Can we take part in a conversation rooted in the local? Can we link up with a global conversation?

    1. as our lives are dominated ever more completely by complex computer systems, it is a little disquieting to realise that perhaps our heroes must be as alien and inscrutable as our problems.
    2. the engineer has started to operate as a visionary improviser, seeing an adjacent world-state within the world system and instantly imbuing it with the radioactive glow of moral mission

      It reminds me of La ballena y el reactor (The Whale and the Reactor). Does technology have a politics? And if it does, does it have a morality?

  7. Aug 2015
    1. Where Otlet and Wells envisioned publicly funded, trans-national organizations, we now have an oligarchy of public corporations.
    2. Google freely excludes sites from its index for reasons that it is under no obligation to disclose—the secrets of the Googlebot are Delphic mysteries known only to its inner circle of engineers.
    3. The culture stood in stark contrast to the orderly, institutional tendencies of Otlet and Wells. Where Europeans were turning to their institutions in a time of crisis, many Americans were growing up in a value system that emphasized individualism and personal liberation. It was in this milieu that Licklider, Engelbart, and others began laying the foundations for the web we know today.
    4. A deeper look into the historical record, though, reveals a different story: The web in its current state was by no means inevitable. Not only were there competing visions for how a global knowledge network might work, divided along cultural and philosophical lines, but some of those discarded hypotheses are coming back into focus as researchers start to envision the possibilities of a more structured, less volatile web
    5. While these features have connected untold millions and created new forms of social organization, they also come at a cost. Material seems to vanish almost as quickly as it is created, disappearing amid broken links or into the constant flow of the social media “stream.” It can be hard to distinguish fact from falsehood. Corporations have stepped into this confusion, organizing our browsing and data in decidedly closed, non-transparent ways. Did it really have to turn out this way?

      The web: utopia and dystopia at the same time.

    1. we will end up ‘a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots’

      Similar to what Jaron Lanier says in "Who owns the future".

    2. Open-source principles are a major point of distinction between DACs and the existing, overwhelmingly proprietary systems used for logistics, management and trading.

      Ethereum could be used to make the different elements of a city's management more open and transparent.

    3. Imagine, for instance, a bike-rental system administered by a DAC hosted across hundreds or thousands of different computers in its home city. The DAC would handle the day-to-day management of bikes and payments, following parameters laid down by a group of founders. Those hosting the management programme would be paid in the system’s own cryptocurrency – let’s call it BikeCoin. That currency could be used to rent bikes – in fact, it would be required to, and would derive its value on exchanges such as BitShares from the demand for local bike rentals

      It resembles Sebastian's idea for Popayan and Cauca.

    4. And yet, on reflection, Rifkin’s examples turn out to be anything but collaborative at their heart. Companies such as Uber and Airbnb are fiercely profit-driven, taking large cuts from all the exchanges they facilitate. They are middlemen themselves, albeit somewhat more efficient and open than their predecessors. What’s more, the digital payment systems that underpin their services are also highly centralised and very expensive.

      A new intermediary, immense and transnational, concentrating almost everything.

    1. As we know from public media, when products exist in the marketplace for reasons other than profit, it affects the whole market for the better. In other words, this kind of organization would be a public good as well as an academic one
    2. software is created through a design thinking process, with iterative user research and testing performed with both educators and students. The result is likely to be software that better meets their needs, released with an understanding that it is never finished, and instead will be rapidly improved during its use.
    3. While it's great that any member of staff can create a database, the IT department is then expected to maintain and repair it. The avalanche of applications can quickly become overwhelming - and sometimes they can overlap significantly, leading to inefficient overspending and further maintenance nightmares. For these and a hundred other reasons, purchasing needs to be planned.

      However, initiatives like frictionless data would allow tailor-made applications that also interoperate. Data redundancy and consistency should be handled by building modular systems that can have both centralized and distributed places of operation (perhaps combining them with technologies like the blockchain for places that require consistent, shared data).

    1. This business model does not need, or even more is prohibited by, an alternative Application Pattern: a pattern that is Human centric, Human scale, that puts you in the center and does not see you as a data generation unit aggregated inside a giant swarm of people.
    1. Personally, I think the people at Facebook made a stellar product. It's creepy how much they know about us, even creepier how much they care about this information, and scary that they are sharing it with governments. But it's also wonderful to be able to find nearly anyone in the world, contact them immediately, set up events, share media, etc. They have built a robust, incredibly impressive and functional platform that gets better every day.

      This paragraph explains it all and it is a popular line of thought: trading privacy and other rights for convenience, as if convenience could not be achieved while respecting them. The Indie Web shows that we can find a lot of people and contact them (by email), as well as arrange events and share media (Archive, Known, etc.). It is as if the only convenient way to do it were Facebook. How short-sighted of the author!

    2. Security. How hack-proof is Ello? Is the code going to be open sourced? How will we know if/when they are working with the NSA?

      Known's approach is better: it connects to the alienated but popular web (Facebook, Twitter) and its source code is open. As for working with the NSA, after Snowden we already know that Facebook does, a point where the author avoids making the comparison.

    1. Hacking, in my world, is a route to escaping the shackles of the profit-fetish, not a route to profit.
    2. the true hacker spirit does not reside at Google, guided by profit targets
    3. The gentrification of hacking is… well, perhaps a perfect hack.
    4. And before you know it, an earnest Stanford grad is handing me a business card that says, without irony: ‘Founder. Investor. Hacker.’

      Mi "emprendimiento", mutabiT, muestra varias cosas de la cultura hacker que pueden tener potencial en contextos educativos, empresariales o gubernamentales entre otros, pero sigue alineada a la construcción de procomún y no de lucro para la propia empresa o sus propietarios (como muestran los balances y las apuestas hechas ;-))

    5. This process of gentrification becomes a war over language
    6. This doublethink bleeds through into mainstream corporate culture, with the growing institution of the corporate ‘hackathon’

      Distorting the idea of the hackathon so that it serves as window dressing and as alienation instead of empowerment. We saw it happen in Colombia as well, and we made a counter-proposal, as can be seen in: La Gobernatón: ¿Qué sigue?

    7. And so we see a gradual stripping away of the critical connotations of hacking. Who said a hacker can’t be in a position of power? Google cloaks itself in a quirky ‘hacker’ identity, with grown adults playing ping pong on green AstroTurf in the cafeteria, presiding over the company’s overarching agenda of network control.

      The startup hacker lie

    8. ‘hacking’ as quirky-but-edgy innovation by optimistic entrepreneurs with a love of getting things done
    9. the revised definition of the tech startup entrepreneur as a hacker forms part of an emergent system of Silicon Valley doublethink
    10. The countercultural trickster has been pressed into the service of the preppy tech entrepreneur class.
    11. Here is where the second form of corruption begins to emerge. The construct of the ‘good hacker’ has paid off in unexpected ways, because in our computerised world we have also seen the emergence of a huge, aggressively competitive technology industry with a serious innovation obsession. This is the realm of startups, venture capitalists, and shiny corporate research and development departments. And, it is here, in subcultures such as Silicon Valley, that we find a rebel spirit succumbing to perhaps the only force that could destroy it: gentrification.
    12. In the context of a complex system – computer, financial or underground transit – the political divide is always between well-organised, active insiders versus diffuse, passive outsiders. Hackers challenge the binary by seeking access, either by literally ‘cracking’ boundaries – breaking in – or by redefining the lines between those with permission and those without. We might call this appropriation.
    13. Thus a single manifestation of a single element of the original spirit gets passed off as the whole.

      This is the first corruption of the hacker spirit, according to the author: its simplification and caricaturization. I remember being invited to a Hackers and Security event several years ago, insisting that security was not what I devoted myself to, even though I knew about the hacker ethos anyway (in the end I could not attend).

    14. Despite the hive-mind connotations of faceless groups such as Anonymous, the archetype of ‘the hacker’ is essentially that of an individual attempting to live an empowered and unalienated life. It is outsider in spirit, seeking empowerment outside the terms set by the mainstream establishment.

      cfg "mente colmena" y Who owns the future, de Jaron Lanier.

    15. I was attracted to the hacker archetype because, unlike the straightforward activist who defines himself in direct opposition to existing systems, hackers work obliquely. The hacker is ambiguous, specialising in deviance from established boundaries, including ideological battle lines. It’s a trickster spirit, subversive and hard to pin down. And, arguably, rather than aiming towards some specific reformist end, the hacker spirit is a ‘way of being’, an attitude towards the world.
    16. For all his protestations of innocence, it’s clear that Draper’s curiosity was essentially subversive. It represented a threat to the ordered lines of power within the system. The phreakers were trying to open up information infrastructure, and in doing so they showed a calculated disregard for the authorities that dominated it.
    17. The internet promises open access to information and online assembly for individual computer owners. At the same time, it serves as a tool for corporate monopolists and government surveillance.

      Aaron talked about this dual, permanent character of the network in The Internet's Own Boy.