  1. Nov 2016
    1. Corporate practices can be directly hostile to individuals with exceptional skills and initiative in technical matters. I consider such management of technical people cruel and wasteful. Kierkegaard was a strong proponent for the individual against “the crowd” and has some serious discussion of the importance of aesthetics and ethical behavior. I couldn’t point to a specific language feature and say, “See, there’s the influence of the nineteenth-century philosopher,” but he is one of the roots of my reluctance to eliminate “expert level” features, to abolish “misuses,” and to limit features to support only uses that I know to be useful. I’m not particularly fond of Kierkegaard’s religious philosophy, though.

      Interesting to see how the programming language is designed as a safeguard against corporate culture.

    2. TR: How do you account for the fact that C++ is both widely criticized and resented by many programmers but at the same time very broadly used? Why is it so successful? BS: The glib answer is, There are just two kinds of languages: the ones everybody complains about and the ones nobody uses. There are more useful systems developed in languages deemed awful than in languages praised for being beautiful – many more. The purpose of a programming language is to help build good systems, where “good” can be defined in many ways. My brief definition is, correct, maintainable, and adequately fast. Aesthetics matter, but first and foremost a language must be useful; it must allow real-world programmers to express real-world ideas succinctly and affordably.

      Interesting to see how Stroustrup's idea in C++ of a language for writing (computing?) systems contrasts with Ingalls's idea in Smalltalk of a language serving the creative expression of the human spirit. The professional programmer as the intended user of C++ also contrasts with children as the intended users of Smalltalk.

    3. TR: In retrospect, in designing C++, wasn’t your decision to trade off programmer efficiency, security, and software reliability for run time performance a fundamental mistake? BS: Well, I don’t think I made such a trade-off. I want elegant and efficient code. Sometimes I get it. These dichotomies (between efficiency versus correctness, efficiency versus programmer time, efficiency versus high-level, et cetera) are bogus. What I did do was to design C++ as first of all a systems programming language: I wanted to be able to write device drivers, embedded systems, and other code that needed to use hardware directly. Next, I wanted C++ to be a good language for designing tools. That required flexibility and performance, but also the ability to express elegant interfaces.
    4. And without real changes in user behavior, software suppliers are unlikely to change.
    5. People reward developers who deliver software that is cheap, buggy, and first. That’s because people want fancy new gadgets now.
    6. Software developers have become adept at the difficult art of building reasonably reliable systems out of unreliable parts. The snag is that often we do not know exactly how we did it: a system just “sort of evolved” into something minimally acceptable. Personally, I prefer to know when a system will work, and why it will.
    1. You couldn’t charge people to use Python, for example, any more than you could charge someone to speak English.

      This reminds me of Elinor Ostrom's and Antonio Lafuente's examples of language as a commons.

      Do we have a sustainability model for the commons?

    2. What did it mean for a project not to be venture backable? Likely, it met one of the following criteria: there was no business model (e.g., no customers to charge, or no advertising model); the project was not structured as a C corp; the project could not conceivably return an investment at venture scale.
    3. Y Combinator funds nonprofits and now, research. OATV launched indie.vc last year. Peter Thiel created Breakout Labs. Elon Musk created OpenAI.
    1. serveStatic: '/static' from: '/var/www/htdocs'

      How can this message be sent using TeaLight?
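
      For reference, a minimal sketch of how this message could be sent with the standard Teapot API (whether TeaLight exposes the same serveStatic:from: selector is exactly the open question; the port and paths are illustrative):

      "Sketch: plain Teapot static file serving; assumes the standard Teapot API."
      | teapot |
      teapot := Teapot configure: { #port -> 8080 }.
      teapot
          serveStatic: '/static' from: '/var/www/htdocs';
          start.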

    2. GET: '/divide/<a>/<b>' -> [ :req | (req at: #a) / (req at: #b)];

      This should be:

      GET: '/divide/<a>/<b>' -> [ :req | (req at: #a) asNumber / (req at: #b) asNumber ]
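
      For context, a minimal stand-alone sketch of the corrected route (assuming the standard Teapot API; the port is arbitrary; visiting /divide/6/3 should then answer 2):

      "Sketch: the corrected route inside a runnable Teapot server; assumes the standard Teapot API."
      (Teapot configure: { #port -> 8080 })
          GET: '/divide/<a>/<b>' -> [ :req | (req at: #a) asNumber / (req at: #b) asNumber ];
          start.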

  2. Oct 2016
    1. My hope is that the book I’ve written gives people the courage to realize that this isn’t really about math at all, it’s about power.
    1. In a study cited by the Swiss group last month, researchers found Twitter data alone a more reliable predictor of heart disease than all standard health and socioeconomic measures combined.

    1. Empirically, I trace how civic hackathons in Los Angeles evolved in 2013 - 2015 from engineering exercises to spectacles where civic futures of technology were performed through communication, before turning to particular lessons drawn from civic hackathons. Civic hackathons’ emergence from technical cultures means they often produce conservative civic visions. I pay close attention to moments of failure – moments when possible technologies reproduced existing cultural divisions. Still, I resist describing them as completely co-opted and useless, as I found surprising moments of civic learning and exploration.

      Interesting to see that it is not totally co-opted.

    2. This bold claim has led design and critical scholars to hotly debate if participants have a technological ideology imposed on them, or if thinking with technologies enable new civic perspectives.

      Both may be happening. The question would be when each occurs. Some clues may lie on the side of critical literacy (Freire, Data Pop).

    3. Winners were praised by sponsors and rewarded with invitations to be part of an accelerator or incubator. Stories from the day generated ample traffic on social media and articles in local newspapers.
  3. Sep 2016
    1. Cooperation without coordination can propel democracy to the next level, but it will be met with so much resistance by those currently in power.

      Is "cooperation without coordination" really happening? Because coordination is happening via infrastruture (Git and other code repositories like Fossil, wikis and so on).

      It is, again, an oversimplification?

    1. EA principles can work in areas outside of global poverty. He was growing the movement the way it ought to be grown, in a way that can attract activists with different core principles rather than alienating them.
    2. Effective altruism is not a replacement for movements through which marginalized peoples seek their own liberation. And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.
    3. But ultimately you have to stop being meta. As Jeff Kaufman — a developer in Cambridge who's famous among effective altruists for, along with his wife Julia Wise, donating half their household's income to effective charities — argued in a talk about why global poverty should be a major focus, if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people.

      "Stop being meta" could be applied in some sense to meta systems like Smalltalk and Lisp, because their tendency to develop meta tools used mostly by developers, instead of "tools" used by by mostly everyone else. Burring the distinction between "everyone else" and developers in their ability to build/use meta tools, means to deliver tools and practices that can be a bridge with meta-tools. This is something we're trying to do with Grafoscopio and the Data Week.

    4. The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That's by no means an obvious position, and tons of philosophers dispute it. Among other things, it implies what's known as the Repugnant Conclusion: the idea that the world should keep increasing its population until the absolutely maximum number of humans are alive, living lives that are just barely worth living. But if you say that people who only might exist count less than people who really do or really will exist, you avoid that conclusion, and the case for caring only about the far future becomes considerably weaker
    5. The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell." Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives.
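
      A quick check of the quote's arithmetic (assuming roughly 10^51 future lives at stake, a common ballpark in these debates, not a number given in the quote): 0.00000000000000001 percent is a probability of 10^-19, so the expected value is 10^-19 × 10^51 = 10^32 lives saved.
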
    6. At one point, Russell set about rebutting AI researcher Andrew Ng's comment that worrying about AI risk is like "worrying about overpopulation on Mars," countering, "Imagine if the world's governments and universities and corporations were spending billions on a plan to populate Mars." Musk looked up bashfully, put his hand on his chin, and smirked, as if to ask, "Who says I'm not?"

      That is, we should worry now about the imaginary risks of investments that neither governments nor universities are making, for a "Sci-Fi apocalypse", instead of worrying about the real problems. Absurd!

    1. The second reason for avoiding the GPL is that it does not work well in the context of systems which embed their own code in documents. Programs such as GCC avoid this by having specific exemptions for the embedded code, but it is not possible for us to add this exemption to code taken from elsewhere. This is a problem, since a document in an object-oriented environment is a collection of objects, and objects are a combination of code and data. This could lead to the documents becoming accidentally GPL'd.

      Something to consider in the case of Grafoscopio and its documentation. Although the code is MIT licensed, I was thinking of licensing the documents under a copyfarleft license, such as the P2P license. This could apply to the serialization of the object as a file. Still, I have doubts about how that would apply in a context of live objects, and whether that license can "expand" to other objects in the system as the difference between the file and the objects becomes more diffuse (a document that is made of objects and is serialized into a file).

    1. But despair is not useful. Despair is paralysis, and there’s work to be done. I feel that this is an essential psychological problem to be solved, and one that I’ve never seen mentioned: How do we create the conditions under which scientists and engineers can do great creative work without succumbing to despair? And how can new people be encouraged to take up the problem, when their instinct is to turn away and work on something that lets them sleep at night?

      Reminds me of last week's discussion about the insomnia of the (justified) existentialist. I have had my fair share of it.

    2. “It’s more complicated than that.” No kidding. You could nail a list of caveats to any sentence in this essay. But the complexity of these problems is no excuse for inaction. It’s an invitation to read more, learn more, come to understand the situation, figure out what you can build, and go build it. That’s why this essay has 400 hyperlinks. It’s meant as a jumping-off point. Jump off it. There’s one overarching caveat. This essay employed the rhetoric of “problem-solving” throughout. I was trained as an engineer; engineers solve problems. But, at least for the next century, the “problem” of climate change will not be “solved” — it can only be “managed”. This is a long game. One more reason to be thinking about tools, infrastructure, and foundations. The next generation has some hard work ahead of them.

      Also a good footnote, related to the ones at the beginning. A problem-solving language need not be enchanted by the magic of techno-solutionism. It can be an invitation, from a particular point of view, to action and dialogue. That seems to be the case here. Thanks, Bret.

    3. If we must be gods, we should at least be cautious and well-informed gods, with the best possible tools for seeing, understanding, and debating our interventions, and the best possible meta-tools for improving those tools.

      This also applies to sensible and concerned humans who worry about their own relation with the planet and the other beings on it, despite not being gods (or aspiring to be). On tools for understanding and meta-tools for improving those tools, Perfection & Feedback Loops, or: why worse is better again seems enlightening, particularly in the context of the Smalltalk legacy. Grafoscopio could make a humble contribution to the seeing, understanding, and debating part.

    4. A commitment to sourcing every fact is laudable, of course. What’s missing is an understanding of citing and publishing models, not just the data derived from those models. This is a new problem — older media couldn’t generate data on their own. Authors will need to figure out journalistic standards around model citation, and readers will need to become literate in how to “read” a model and understand its limitations.

      The idea of a new media able not only to quote numbers but also to produce them is interesting. In the case of some domain-specific visualizations, like the ones we made for public medicine information, they are part of a bigger narrative that introduces some conventions for interpreting the visuals, i.e. they teach a particular graphicacy (the ability to create and understand data visualizations, according to the Data-Pop Alliance). Maybe this new media needs to build that reader/explorer, by presenting her/him with works of literacy, numeracy, and graphicacy, and with setups and contexts in which to learn, discuss, and build them.

    5. Imagine an authoring tool designed for arguing from evidence. I don’t mean merely juxtaposing a document and reference material, but literally “autocompleting” sourced facts directly into the document. Perhaps the tool would have built-in connections to fact databases and model repositories, not unlike the built-in spelling dictionary. What if it were as easy to insert facts, data, and models as it is to insert emoji and cat photos?

      This would be a good aim for Grafoscopio. At the moment there are some prototypes of how it can integrate a data-continuum live environment, but these are still first steps, and querying data or building visualizations requires quite a lot of expertise. The nice thing, though, is that once built, the domain-specific language is easy to use and conveys explorable knowledge behind it, in an integrated environment.

    6. The importance of models may need to be underscored in this age of “big data” and “data mining”. Data, no matter how big, can only tell you what happened in the past. Unless you’re a historian, you actually care about the future — what will happen, what could happen, what would happen if you did this or that. Exploring these questions will always require models. Let’s get over “big data” — it’s time for “big modeling”.
    7. Readers are thus encouraged to examine and critique the model. If they disagree, they can modify it into a competing model with their own preferred assumptions, and use it to argue for their position. Model-driven material can be used as grounds for an informed debate about assumptions and tradeoffs. Modeling leads naturally from the particular to the general. Instead of seeing an individual proposal as “right or wrong”, “bad or good”, people can see it as one point in a large space of possibilities. By exploring the model, they come to understand the landscape of that space, and are in a position to invent better ideas for all the proposals to come. Model-driven material can serve as a kind of enhanced imagination.

      This is a part where my previous comments on data activism and data journalism (see 1, 2 & 3), and on more plural computing environments for the engagement of concerned citizens in the important issues of our time, could intersect with Victor's discourse.

    8. The success of Arduino has had the perhaps retrograde effect of convincing an entire generation that the way to sense and actuate the physical world is through imperative method calls in C++, shuffling bits and writing to ports, instead of in an environment designed around signal processing, control theory, and rapid and visible exploration. As a result, software engineers find a fluid, responsive programming experience on the screen, and a crude and clumsy programming experience in the world.
    9. But the idea that you might implement a control system in an environment designed for designing control systems — it hasn’t been part of the thinking. This leads to long feedback loops, inflexible designs, and guesswork engineering. But most critically, it denies engineers the sort of exploratory environment that fosters novel ideas.

      On short feedback loops and modelling, this talk by Marcus Denker, one of the main architects behind Pharo Smalltalk, can be enlightening: Perfection & Feedback Loops, or: why worse is better.

    10. This pejorative reflects a peculiar geocentrism of the programming language community, whose “general-purpose languages” such as Java and Python are in fact very much domain-specific — specific to the domain of software development.

      Nice inversion of general-purpose vs. domain-specific languages.

    11. The Gamma: Programming tools for data journalism

      (b) languages for novices or end-users, [...] If we can provide our climate scientists and energy engineers with a civilized computing environment, I believe it will make a very significant difference.

      But data journalists, and in fact data activists, social scientists, and so on, could be a "different type of novice", one that is more critically and politically involved (in the broader sense of the word "political").

      The wider dialogue on important matters that is mediated, backed up, and understood by dealing with data (such as climate change) requires more voices than the ones involved today. Because those voices need to reason and argue using data, we need to go beyond climate scientists or energy engineers as the only ones who need a "civilized computing environment" to participate in the important, complex, and urgent matters of today's world. In the past, these more critical voices (activists, journalists, scientists) have helped to make policy makers accountable and more sensible on other important and urgent issues.

      In that sense my work with reproducible research on the Panama Papers, as a prototype of a data-continuum environment, or others, like Gamma, could serve as an exploration, invitation, and early implementation of what is possible to enrich this data/computing-enhanced dialogue.

    12. I say this despite the fact that my own work has been in much the opposite direction as Julia. Julia inherits the textual interaction of classic Matlab, SciPy and other children of the teletype — source code and command lines.

      The idea of a tradition of technologies that are "children of the teletype" is related to the comparison we make in the Data Week workshop/hackathon. In our case we talk about the "Unix fathers" versus the "Dynabook children" and the bifurcation/recombination points of these technologies.

    13. If efficiency incentives and tools have been effective for utilities, manufacturers, and designers, what about for end users? One concern I’ve always had is that most people have no idea where their energy goes, so any attempt to conserve is like optimizing a program without a profiler.
    14. It’s TCP/IP for energy. Think of these algorithms as hybrids of distributed networking protocols and financial trading algorithms — they are routing energy as well as participating in a market.

      The idea of "TCP/IP" for energy is pretty compelling. What is the place for hobbyist low cost tech gadgets, like arduino and alike into enabling this TCP/IP for energy?

    15. The catalyst for such a scale-up will necessarily be political. But even with political will, it can’t happen without technology that’s capable of scaling, and economically viable at scale. As technologists, that’s where we come in.

      Maybe we come in earlier, by enabling this conversation (as said previously). The political agenda is currently co-opted by economic interests far away from a sustainable planet or the common good. Feedback loops can be a place to insert counter-hegemonic discourse, to enable a more plural and rational dialogue between civil society and government, beyond the short-term economic interests of current incumbents.

    16. This is aimed at people in the tech industry, and is more about what you can do with your career than at a hackathon. I’m not going to discuss policy and regulation, although they’re no less important than technological innovation. A good way to think about it, via Saul Griffith, is that it’s the role of technologists to create options for policy-makers.

      Nice to see this conversation between technology and broader socio-political problems made so explicit in Bret's discourse.

      What we're doing, in fact, is enabling this conversation between technologists and policy makers first, and we're highlighting it via hackathons/workshops, but not reducing it to what happens there (an interesting critique of techno-solutionist hackathons is here), using the feedback loops in social networks, but with the intention of mobilizing a setup that goes beyond them. One example is our Twitter data selfies (picture/link below). The necessity of addressing urgent problems that involve complex techno-socio-political entanglements is felt more strongly in the Global South.

      ^ Up | Twitter data selfies: a strategy to increase the dialogue between technologists/hackers and policy makers (click here for details).

  4. Aug 2016
    1. Any gentrification process inevitably presents two options. Do you abandon the form, leave it to the yuppies and head to the next wild frontier? Or do you attempt to break the cycle, deface the estate-agent signs, and picket outside the wine bar with placards reading ‘Yuppies Go Home’?

      We chose a third option with the Gobernatón.

  5. Jun 2016
    1. I thought [it] was anonymised, but, two or three months later, I was able to de-anonymise it.

      This happened with crowdflow.net, a project about consolidated.db, the database that records all of your movements on iPhones and that was backed up unencrypted on PCs and Macs.
    2. Also, the more complex a software project becomes, the more work you have to put into it, and it grows exponentially. So, keep it simple and make it fast. It's much easier to write software, throw it away and start over again quickly, than having this huge generic system that tries to do everything. It doesn't make sense. It's just too much work. You'd get this huge software system with a thousand dependencies and, in the end, it's really hard to innovate, get new stuff in there, or, in the worst case, to change the concept. Almost every software that we have published is not generic but is used only for one case. So, keep it simple and get a prototype in under three days.

      Agile Visualization is a worthy exception to this trend. It is generic while being flexible and moldable. My first projects started with an easy prototype in a week and became full projects in a couple of months on average. Then I can reuse the visual components by using abstraction and making visual builders.

      The couple-of-months average included learning the programming language and environment, and the data cleaning and completion. With the builders, that time has started to decrease exponentially.

    3. What type of team do you need to create these visualisations? OpenDataCity has a special team of really high-level nerds. Experts on hardware, servers, software development, web design, user experience and so on. I contribute the more mathematical view on the data. But usually a project is done by just one person, who is chief and developer, and the others help him or her. So, it's not like a group project. Usually, it's a single person and a lot of help. That makes it definitely faster, than having a big team and a lot of meetings.

      This strengthens the idea that data visualization is a field where a personal approach is still viable, as is also shown by the many individuals who are highly valued as data visualizers.

    4. Can you avoid metadata? No, not at all. Sorry, not at all, anymore. I think you can even generate more metadata from existing data. Just think about living near a sightseeing building or something like that, how many tourists took holiday pictures and published them on Facebook. How many of these pictures contains your face? Then, one day, somebody will create a facial recognition algorithm and gather even more metadata about you.
    1. In Pharo, how you meta-click depends on your operating system: either you must hold Shift-Ctrl or Shift-Alt (on Windows or Linux)

      On Linux the shortcut should be: Shift-Alt-MiddleMouseButton

    1. Every graphic element of Pharo that you click on...  - With Cmd+Shift+Option,  - you'll get a little menu around the graphic element.

      I don't get this halo directly by pressing Ctrl + Shift, which can be a little confusing. What I get is a contextual menu that lets me select the halo after that. See:

      After that I need to go to the add-halo menu. It's kind of indirect, compared with the previous behavior.

      Am I doing something wrong?

    1. Gemma Hersh, Policy Director at Elsevier explained that “pricing correlates to impact factor." When asked why costs were so high, Hersh stressed that “publishers do not operate a cost based system, [...] we operate a value-based system which reflects the value that we provide."

      The height of shamelessness!

  6. May 2016
    1. The essay competition will run until June 15th and will be judged by a committee of scientists, librarians, members of industry, and students based on the following criteria. 

      Which criteria are you referring to?

    1. One last piece of advice before beginning your journey: do not try to build any complex visualization in Roassal without first reading the builder infrastructure. Builders may ease your life!

      I can testify to that. I started a complex visualization by changing pre-existing builders, as documented here, but now that I have made a custom builder, the domain-specific visualization has become easier to program and more elegant (see RTMatrixRing in Roassal).
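
      As a minimal illustration of the builder style, here is a sketch along the lines of the RTMondrian example elsewhere on this page (the numeric domain and layout are a toy stand-in, not the RTMatrixRing code):

      "Toy RTMondrian sketch: 100 numbers clustered as a binary-tree-like graph."
      | b |
      b := RTMondrian new.
      b shape circle size: 10.
      b nodes: (1 to: 100).
      b edges connectFrom: [ :i | i // 2 ].
      b layout cluster.
      b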

    1. RTLabelled

      This should be changed to RTLabeled. See: http://ws.stfx.eu/P4DOWTE4G2H2

    2. path := '/Users/alexandrebergel/Documents'.
       allFilesUnderPath := path asFileReference allChildren.
       b := RTMondrian new.
       b shape circle
           color: Color gray trans;
           if: [ :aFile | aFile path basename endsWith: '.pdf' ] color: Color red trans.
       b nodes: allFilesUnderPath.
       b edges connectFrom: #parent.
       b normalizer normalizeSize: #size min: 10 max: 150 using: #sqrt.
       b layout cluster.
       b

      On my file system, this example takes a long time and throws an error. Maybe a smaller directory and another type of file, for example PNG, would be more direct for the user.

      The proposal is at: http://ws.stfx.eu/FUJMCBVH9DS8
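
      A sketch of such a variant (assuming Pharo's FileLocator; it highlights PNG files under the image directory instead of PDFs under a hard-coded home path):

      "Variant of the example above: PNG files under the Pharo image directory."
      | path allFilesUnderPath b |
      path := FileLocator imageDirectory.
      allFilesUnderPath := path allChildren.
      b := RTMondrian new.
      b shape circle
          color: Color gray trans;
          if: [ :aFile | aFile path basename endsWith: '.png' ] color: Color red trans.
      b nodes: allFilesUnderPath.
      b edges connectFrom: #parent.
      b normalizer normalizeSize: #size min: 10 max: 150 using: #sqrt.
      b layout cluster.
      b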

  7. Mar 2016
    1. Research stage 1. Case selections and contextualisation of probes by country-leads will be reviewed jointly. Following a state-of-the-art country overview that will gather key insights for the selected case-study and its macro-context, country level workshops will be convened to firm up data-collection plans.

      The state of the art was already advanced in Ciudad de Datos for the citizen movements, although it doesn't cover the whole country (only Bogotá and Medellín). In particular, it remains to look at what the mapping movements are doing (OpenStreetMap in the Eje Cafetero, Vivir en la Finca, #MapatónXGuajira). The relationship with the macro context is only established for the open data projects, between Open Data Co and the Data Week and their relationship with OpenSpending and OpenBudgets. We would need to find more organic forms of relationship between the case study, which for me should be the Data Week / Grafoscopio, the different datasets of each occasion (public spending, maps, etc.), and the macro contexts. Trying to work with different datasets all the time can make it harder to consolidate the discourse and knowledge of the grassroots community... or it can strengthen forms of knowledge triangulation. We have seen these tensions in the last Data Week.

    2. 2. broader civil society changes such as emergence of new democratic movements, new citizen-led collaborations.

      The bridge between the micro level and the macro level can come from how the platforms and spaces of citizen involvement mentioned before become part of new citizen-led collaborations at first, and eventually of new democratic movements.

    3. engagement platform/space, its communications and data architecture (including algorithms) using web-analytics and qualitative methods

      The full quote, which is split across pages of the PDF, is:

      '3. narrative analysis of the citizen engagement platform/space, its communications and data architecture (including algorithms) using web-analytics and qualitative methods.'

      Here is where Grafoscopio, the Data Week, and my participation in the research could focus, to stay centered on what I intend with the PhD. Starting from citizen platforms and practices (data scraping, Grafoscopio, the Data Week), we can look at what governmental infrastructures lack in order to account for civic demands.

    4. A case study methodology has been selected as it is the most appropriate for abstracting broad theoretical insights from specific contextual experiences12. Across eight countries in Asia, Africa, South America and Europe, one case study per country in the area of ICT-mediated citizen engagement will be investigated in depth.
    5. covering dimensions of voice and participation, transparency and accountability, rule of law, equity and inclusion and gender equality to create a tentative index, to be tested by the findings of this study.
    6. The analytical framework will map the shifts along these two categories both from the 'government-end' and 'citizen-end', distilling citizen engagement in emergent meanings, norms and powers. It will also probe tensions – between voice, which may translate as noise on ICT channels, and deliberation that can, without the corresponding right to be heard, end up as discourse dissipation.

      Tensions: from voice to noise, and from deliberation without listening to discourse dissipation.

    7. Statement of Research: Building from empirical specifics of eight case studies from various countries, which will be chosen keeping in mind contextual diversity and institutional maturity, this study will use an analytical framework to address the following questions: RQ1: How do processes of signification, legitimation and domination in ICT-mediated citizen engagement give rise to new governance regimes? RQ2: Under what conditions can ICT-mediated citizen engagement support and promote democratic governance? In addition, the study will attempt to develop an index on Transformative Citizen Engagement to evaluate the impact of citizen engagement on democratic governance, testing its efficacy. It will attempt to explain changes to governance systems and develop a layered index (tentatively, the Transformative Citizen Engagement Index) that will be tested to evaluate the impact of citizen engagement on democratic governance.

      Ciudad de Datos, Grafoscopio, and the Data Week are oriented toward question 1:

      RQ1: How do processes of signification, legitimation and domination in ICT-mediated citizen engagement give rise to new governance regimes?

      While the dialogue between grassroots communities and government could help answer question 2:

      RQ2: Under what conditions can ICT-mediated citizen engagement support and promote democratic governance?

    1. If we do not scrutinise these questions we risk being left with, for example, data without users or analysis without action. Information about tax evasion is toothless without having institutions who are adequately resourced to tackle it. If civil society groups are to stand a chance of effectively counter-balancing corporate influence on political decision-making, they must be equipped with the capacities and legal mechanisms - not just the information - which will enable them to do so.

      This is the main tension I have witnessed in my own experience of the relationship between the state and civil society.

    1. The paradox, of course, is that Google’s intense data collection and number crunching have led it to the same conclusions that good managers have always known. In the best teams, members listen to one another and show sensitivity to feelings and needs.

      Umm... this reminds me of the talk about using "big data" to choose the color of a cereal box... Although some people need the data to prove what their intuition already knows.

    2. However, establishing psychological safety is, by its very nature, somewhat messy and difficult to implement. You can tell people to take turns during a conversation and to listen to one another more. You can instruct employees to be sensitive to how their colleagues feel and to notice when someone seems upset. But the kinds of people who work at Google are often the ones who became software engineers because they wanted to avoid talking about feelings in the first place.

      My experience is close to this: engineers with rather poor social skills, individually brilliant, but hard to work with in a team... and I'm not saying that I'm exempt just because I'm not an engineer.

    3. Within psychology, researchers sometimes colloquially refer to traits like ‘‘conversational turn-taking’’ and ‘‘average social sensitivity’’ as aspects of what’s known as psychological safety — a group culture that the Harvard Business School professor Amy Edmondson defines as a ‘‘shared belief held by members of a team that the team is safe for interpersonal risk-taking.’’ Psychological safety is ‘‘a sense of confidence that the team will not embarrass, reject or punish someone for speaking up,’’ Edmondson wrote in a study published in 1999. ‘‘It describes a team climate characterized by interpersonal trust and mutual respect in which people are comfortable being themselves.’’
    4. the good teams all had high ‘‘average social sensitivity’’ — a fancy way of saying they were skilled at intuiting how others felt based on their tone of voice, their expressions and other nonverbal cues. One of the easiest ways to gauge social sensitivity is to show someone photos of people’s eyes and ask him or her to describe what the people are thinking or feeling — an exam known as the Reading the Mind in the Eyes test. People on the more successful teams in Woolley’s experiment scored above average on the Reading the Mind in the Eyes test. They seemed to know when someone was feeling upset or left out. People on the ineffective teams, in contrast, scored below average. They seemed, as a group, to have less sensitivity toward their colleagues.
    5. First, on the good teams, members spoke in roughly the same proportion, a phenomenon the researchers referred to as ‘‘equality in distribution of conversational turn-taking.’’ On some teams, everyone spoke during each task; on others, leadership shifted among teammates from assignment to assignment. But in each case, by the end of the day, everyone had spoken roughly the same amount. ‘‘As long as everyone got a chance to talk, the team did well,’’ Woolley said. ‘‘But if only one person or a small group spoke all the time, the collective intelligence declined.’’
    6. The researchers eventually concluded that what distinguished the ‘‘good’’ teams from the dysfunctional groups was how teammates treated one another. The right norms, in other words, could raise a group’s collective intelligence, whereas the wrong norms could hobble a team, even if, individually, all the members were exceptionally bright.
  8. Feb 2016
    1. When pursuing the Early Adopters, don't spend time putting in features that only the Conservatives care about.

      Something like that happened to me with ease of installation. The Data Week proved to be a better way to detect the really important features and to focus on them, for example the visualization and data-scraping galleries, while installation was assisted in person during the event.

    2. So you have to treat them like they are very special.  Give them everything they want, almost as if they were ordering a custom application.  You may have to implement special features just for them.  You may have to give them substantial discounts.  You should visit their site and meet them in person.  You may have to install your product for them. 

      From this "marketing" perspective, this has been happening with Grafoscopio to some extent, although we are still in the early-adopters phase. This visualization was made with an "extra effort" for a "pragmatist in pain" who was not interested in Smalltalk, but was interested in the visualizations. I think there is a large population of people there who could be interested in Grafoscopio and, eventually, in learning to program and extend it.

    1. But mature languages can't change easily, if at all. This situation is described in the theory of frozen evolution. Basically, the evolution of a language is similar to the evolution of a new species. Evolution happens in a short burst when the new language (species) is created. In this period language design is plastic; there are few if any users, a small amount of code and libraries, and the community consists of enthusiasts. This short period of plasticity is followed by a long period of stability. The evolution of the language is frozen. There are a few tweaks here and there, but nothing radical. And nobody, not even the original creator, could change that.
    2. Clojure has no chance to dislodge Java as the premier language for writing enterprise applications, but it could win as the language for writing concurrent applications. That is Clojure's platform.
    3. In order to successfully cross the chasm you need a pragmatist in pain.

      Would that be me, with data visualization and moldable tools? Perhaps not, since those domains have been addressed before, although not combined in this way (so perhaps yes :-P).

    1. Lisp isn’t a language, it’s a building material. Alan Kay
    2. In my professional career I’ve noticed that doing “The Right Thing” is not always the best path. That phenomenon is called The Rise of “Worse is Better” and it can be observed in many things including C, Unix, Windows, JavaScript… It seems like the New Jersey approach wins out more than it loses in the real world. (Be a virus, spread, and then the pressure will rise to better the game.)
    1. And then Git happened. Git is so amazingly simple to use that APress, a single publisher, needs to have three different books on how to use it. It’s so simple that Atlassian and GitHub both felt a need to write their own online tutorials to try to clarify the main Git tutorial on the actual Git website. It’s so transparent that developers routinely tell me that the easiest way to learn Git is to start with its file formats and work up to the commands.

      A ton of long books show how #git is so simple that it requires a ton of long books.

    2. And yet, when someone dares to say that Git is harder than other SCMs, they inevitably get yelled at, in what I can only assume is a combination of Stockholm syndrome and groupthink run amok by overdosing on five-hour energy buckets.

      I have seen this in my local hackerspace. There is an almost religious, irrational feeling about defending Git against any criticism, mostly from people who don't know any DVCS except Git.

    3. You needn’t look further than how many books and websites exist on Git to realize that you are looking at something deeply mucked up.
    4. All of the above might be almost tolerable if DVCSes were easier to use than traditional SCMs, but they aren’t.

      Fortunately that is not the case with Fossil ;-)

    1. The Transmeta architecture assumes from day one that any business plan that calls for making a computer that doesn't run Excel is just not going anywhere.

      Except for virtual computers, called objects, which must be compatible with its data but don't have to run it directly. We already did that with this data visualization example.

    2. Conclusion: if you're in a market with a chicken and egg problem, you better have a backwards-compatibility answer that dissolves the problem, or it's going to take you a loooong time to get going (like, forever).

      The other possibility is to enter emerging "markets" where, although there must be compatibility with the past, the main concern is exploring/prototyping the future.

    3. Jon Ross, who wrote the original version of SimCity for Windows 3.x, told me that he accidentally left a bug in SimCity where he read memory that he had just freed. Yep. It worked fine on Windows 3.x, because the memory never went anywhere. Here's the amazing part: On beta versions of Windows 95, SimCity wasn't working in testing. Microsoft tracked down the bug and added specific code to Windows 95 that looks for SimCity. If it finds SimCity running, it runs the memory allocator in a special mode that doesn't free memory right away. That's the kind of obsession with backward compatibility that made people willing to upgrade to Windows 95.

      But this obsession can also be harmful, carrying the burden of the past into systems that shouldn't have to bear it. Cf. Pharo vs. Squeak.

    4. Feature two: old DOS programs assumed they had the run of the chip. As a result, they didn't play well together. But the Intel 80386 had the ability to create "virtual" PCs, each of them acting like a complete 8086, so old PC programs could pretend like they had the computer to themselves, even while other programs were running and, themselves, pretending they had the whole computer to themselves.

      The idea of virtual computers was also one of Alan Kay's first ideas, but with objects instead of programs, which made it more modular and more easily deconstructed.

    5. That bears mentioning again. WordStar was ported to DOS by changing one single byte in the code.  Let that sink in.

      This no longer matters today. With current hardware capabilities, a Grafoscopio-based system could run on Android, Windows, Unix (with its Mac and GNU/Linux variants), or a device like the Raspberry Pi without changing a single bit. The difficulty lies in mobilizing a new metaphor for writing and a new way of thinking about computing.

    1. As I have mentioned in previous posts, several platforms have appeared recently that could take on this role of third-party reviewer. I could imagine at least: libreapp.org, peerevaluation.org, pubpeer.com, and publons.com. Pandelis Perakakis mentioned several others as well: http://thomas.arildsen.org/2013/08/01/open-review-of-scientific-literature/comment-page-1/#comment-9.
    2. I think that such third-party review companies should exist for the sole purpose of providing competent, thorough, trust-worthy review of scientific papers and that a focus on profit might divert their attention from this goal. For example, unreasonably high profit margins are one of the reasons that large publishers such as Elsevier are currently being criticised.
    1. Now, pretty much everyone hosts their open source projects on GitHub, including Google, Facebook, Twitter, and even Microsoft—once the bete noire of open source software. In recent months, as Microsoft open sourced some of its most important code, it used GitHub rather than its own open source site, CodePlex. S. “Soma” Somasegar—the 25-year Microsoft veteran who oversees the company’s vast collection of tools for software developers—says CodePlex will continue to operate, as will other repositories like Sourceforge and BitBucket. “We want to make sure it continues being there, as a choice,” he tells WIRED. But he sees GitHub as the only place for a project like Microsoft .NET. “We want to meet developers where they are,” he says. “The open source community, for the most part, is on GitHub.”
    2. Somasegar estimates that about 20 percent of Microsoft’s customers now use Git in some way.
    3. In short, open source has arrived. And, ultimately, that means we can build and shape and improve our world far more quickly than before.

      But this "improving" is not equal for all. The perspective on the commons seems marginal here. Code commons are used for private owned by the few instead of cooperatives owned by the workers.

    4. The irony of GitHub’s success, however, is the open source world has returned to a central repository for all its free code. But this time, DiBona—like most other coders—is rather pleased that everything is in one place. Having one central location allows people to collaborate more easily on, well, almost anything. And because of the unique way GitHub is designed, the eggs-in-the-same-basket issue isn’t as pressing as it was with SourceForge. “GitHub matters a lot, but it’s not like you’re stuck there,” DiBona says. While keeping all code in one place, you see, GitHub also keeps it in every place. The paradox shows the beauty of open source software—and why it’s so important to the future of technology.

      Well, it depends on how much metadata you can extract from GitHub. As with so much other social software, the value is not so much in the data (photos, code, tweets) as in the metadata (comments, tags, social graphs, issues). So, while having your data with you, on your phone or laptop, is worthwhile, it would be nice to know how much metadata these infrastructures generate and how it is distributed (or not).

    1. Instead of, for example, 100 large open source projects with active communities, we’ve got 10,000 tiny repos with redundant functionality. One of open source’s biggest advantages was resilience. A public project with many contributors was theoretically stronger than a private project locked inside a company with fewer contributors. Now, the widespread adoption of open source threatens to create just the opposite.

      Maybe this threat can be overcome by simpler infrastructure that can be understood by a single person. That was one of the original goals of Smalltalk, and I think its current incarnations (Pharo or Cuis) address this personal empowerment better than the current Unix/OS tradition, which informs most of our experience with technology. Fossil instead of Git is another example of this preference for simplicity. So 1,000 smaller agile communities could be possible instead of 10 big bureaucratic ones, keeping the fork an important right without much balkanization. The open nature of these agile communities is different from private projects locked inside a company.

      I have experienced for myself those different kinds of community configurations and thresholds for participation, in the cases of Debian, Arch Linux, Leo Editor, and Pharo (to cite a few), and that's why I think the idea of agile, open, and small communities could work, even with the proper pains of addressing complex projects/problems inside them.

    2. What makes this more difficult to resolve is that GitHub is — surprise! — not open source. GitHub is closed source, meaning that only GitHub staff is able to make improvements to its platform. The irony of using a proprietary tool to manage open source projects, much like BitKeeper and Linux, has not been lost on everyone. Some developers refuse to put their code on GitHub to retain their independence. Linus Torvalds, the creator of Git himself, refuses to accept pull requests (code changes) from GitHub.

      That's why I have advocated tools like Fossil to other members of our hackerspace and other communities like Pharo, and decentralized options to Mozilla Science (without much acceptance in those communities, or even any reaction from Mozilla Science).

      Going with the de facto, popular defaults (without caring about freedom or diversity) seems to be the position of open source/science communities and even digital activists, which contrasts sharply with their discourse about the building of tools/data/politics, but seems invisible in the building of community/metadata/metapolitics.

      The kind of disempowerment these communities are trying to fight is the one they're suffering with GitHub, as shown here: https://hypothes.is/a/AVKjLddpvTW_3w8LyrU-

      So there is a tension between the convenience and wider awareness/participation of centralized proprietary platforms, which these open/activist communities want, and a growth in the (over)use of the commons that is bigger than the growth of its sustainability/ethos, as shown here: https://hypothes.is/a/AVKjfsTRvTW_3w8LyrqI . Sacrificing growth/convenience by choosing simpler and more coherent infrastructures, aligned with the commons and its ethos, seems a sensible approach, then.

    3. But it comes with new challenges: how to actually manage demand and workflows, how to encourage contributions, and how to build antifragile ecosystems.

      This is a key issue. My research is about the relationship of mutual modification between communities and digital artifacts to bootstrap empowering dynamics.

      The question regarding participation could be addressed by making an infrastructural inversion (putting what is in the background in the foreground, as suggested by Susan Leigh Star). This has been, in a sense, the approach of this article, making visible what is behind infrastructures like LAMP, GitHub, or StackExchange, and it has also been the approach of my comments. Of course there are things beyond infrastructure, but the way infrastructures determine communities, and the changes communities can make (or not) to them, could be a key to antifragility, one traversed by critical pedagogy, community, and cognition. How we can change the artifacts that change us is a question related to antifragility. It is the question of my research (in the context of a Global South hackerspace), but I had never connected it with antifragility until reading this text.

    4. Technically, if you use someone else’s code revision from Stack Overflow, you would have to add a comment in your code that attributes the code to them. And then that person’s code would potentially have a different license from the rest of your code. Your average hobbyist developer might not care about the rules, but many companies forbid employees from using Stack Overflow, partly for this reason. As we enter a post open source world, Stack Overflow has explored transitioning to a more permissive MIT license, but the conversation hasn’t been easy. Questions like what happens to legacy code, and dual licensing for code and non-code contributions, have generated confusion and strong reactions.
    5. As a result, while plenty of amateur developers use open source projects, those people aren’t interested in, or capable of, seriously giving back. They might be able to contribute a minor bug or fix, but the heavy lifting is still left to the veterans.

      I'm starting to feel this even with my new project, Grafoscopio. The burden of development is now on core functionality that will make the project easier to use and adapt for newcomers, but there is still a question about how many of them will care about, or be enabled to work on, improving this core functionality, or help in some way with its maintenance.

    6. Experienced maintainers have felt the burden. Today, open source looks less like a two-way street, and more like free products that nobody pays for, but that still require serious hours to maintain. This is not so different from what happened to newspapers or music, except that nearly all the world’s software is riding on open source.
    7. There is also concern around using a centralized platform to manage millions of repositories: GitHub has faced several outages in recent years, including a DDoS attack last year and a network disruption just yesterday. A disruption in just one website — GitHub — affects many more. Earlier this month, a group of developers wrote an open letter to GitHub, expressing their frustration with the lack of tools to manage an ever-increasing work load, and requesting that GitHub make important changes to its product.
    8. The free software generation had to think about licenses because they were taking a stance on what they were not (that is, proprietary software). The GitHub generation takes this right for granted. They don’t care about permissions. They default to open. Open source is so popular today that we don’t think of it as exceptional anymore. We’re so open source, that maybe we’re post open source. But not all is groovy in the land of post open source.
    9. In 2011, there were 2 million repositories on GitHub. Today, there are over 29 million. GitHub’s Brian Doll noted that the first million repositories took nearly 4 years to create; getting from nine to ten million took just 48 days.
    10. Now developers had all the tools they needed. In the 1980s, they had to use a scattered combination of IRC, mailing lists, forums, and version control systems. By 2010, they had Git for version control, GitHub to collaborate, and Stack Overflow to ask and answer questions.

      This paragraph shows a transition from the distributed Internet of the '80s to the centralized Internet of today (2010–2015), and how this trend occurred not only in the world of the web in general, but also in software development (in fact, through the incorporation of centralized web experiences and interfaces on top of distributed, non-web infrastructures).

    1. A quote often attributed to Gloria Steinem says: “We’ve begun to raise daughters more like sons... but few have the courage to raise our sons more like our daughters.” Maker culture, with its goal to get everyone access to the traditionally male domain of making, has focused on the first. But its success means that it further devalues the traditionally female domain of caregiving, by continuing to enforce the idea that only making things is valuable. Rather, I want to see us recognize the work of the educators, those that analyze and characterize and critique, everyone who fixes things, all the other people who do valuable work with and for others—above all, the caregivers—whose work isn’t about something you can put in a box and sell.
    2. I am not a maker. In a framing and value system that is about creating artifacts, specifically ones you can sell, I am a less valuable human. As an educator, the work I do is superficially the same, year on year. That’s because all of the actual change, the actual effects, are at the interface between me as an educator, my students, and the learning experiences I design for them. People have happily informed me that I am a maker because I use phrases like "design learning experiences," which is mistaking what I do (teaching) for what I’m actually trying to help elicit (learning). To characterize what I do as "making" is to mistake the methods—courses, workshops, editorials—for the effects. Or, worse, if you say that I "make" other people, you are diminishing their agency and role in sense-making, as if their learning is something I do to them.

      As a teacher I also felt this sense of repetition in what I did. Same curricula, different people (particularly in the mathematics department at Javeriana University, where I worked). So the "escape" from repetition was in educative resources and spaces mostly of them mediated by digital technology. That was the material correlate of the inmaterial happening.

      So the key issue is the relation between the material and the immaterial in making. For me, the opposition of makers versus non-makers underlies a consumer society, and it embodies the danger of not recognizing the immaterial making of culture by everyone, every day.

    3. In Silicon Valley, this divide is often explicit: As Kate Losse has noted, coders get high salaries, prestige, and stock options. The people who do community management—on which the success of many tech companies is based—get none of those. It’s unsurprising that coding has been folded into "making." Consider the instant gratification of seeing "hello, world" on the screen; it’s nearly the easiest possible way to "make" things, and certainly one where failure has a very low cost. Code is "making" because we've figured out how to package it up into discrete units and sell it, and because it is widely perceived to be done by men.
    4. It’s not, of course, that there’s anything wrong with making (although it’s not all that clear that the world needs more stuff).

      The "Internet of Things" wave seems co-opted by a consumerist view of a world needing more "stuff", while repairing or repurposing is treated as a second-class activity, particularly in the Global North, in contrast with the Global South (see, for example, the gambiarra approach and critique from Brazil).

      So this making of the new and visible seems informed not only by gender but also by race/place.

    5. Almost all the artifacts that we value as a society were made by or at the order of men. But behind every one is an invisible infrastructure of labor—primarily caregiving, in its various aspects—that is mostly performed by women.

      The main issue here is visible versus invisible work. Making, in the "maker movement" sense, is about making visible stuff, usually hardware/software with a strong formal correlate (because that stuff takes the form of programmed code or is the result of programming code, e.g. 3D printing), while "soft" informal stuff, like the day-to-day logistics of places and doings, remains invisible.

      The question is not solved simply by making the invisible visible, as Susan Leigh Star has pointed out (in the case of nursing, for example). It is also about letting the invisible be an agent of important work without being trapped by the formalism of the visible: giving the visible and the invisible their proper weight, instead of only trying to turn one into the other.

  9. Jan 2016
    1. Below I list a few advantages and drawbacks of anonymity where I assume that a drawback of anonymous review is an advantage of identified review and vice versa.

       Drawbacks:
       - Reviewers do not get credit for their work. They cannot, for example, reference particular reviews in their CVs as they can with publications.
       - It is relatively “easy” for a reviewer to provide unnecessarily blunt or harsh critique.
       - It is difficult to guess if the reviewer has any conflict of interest with the authors by being, for example, a competing researcher interested in stalling the paper’s publication.

       Advantages:
       - Reviewers do not have to fear “payback” for an unfavourable review that is perceived as unfair by the authors of the work.
       - Some (perhaps especially “high-profile” senior faculty members) reviewers might find it difficult to find the time to provide as thorough a review as they would ideally like to, yet would still like to contribute and can perhaps provide valuable experienced insight. They can do so without putting their reputation on the line.
    1. With most journals, if I submit a paper that is rejected, that information is private and I can re-submit elsewhere. In open review, with a negative review one can publicly lose face as well as lose the possibility of re-submitting the paper. Won’t this be a significant disincentive to submit? This is precisely what we are trying to change. Currently, scientists can submit a paper numerous times, receive numerous negative reviews and ultimately publish their paper somewhere else after having “passed” peer review. If scientists prefer this system then science is in a dangerous place. By choosing this model, we as scientists are basically saying we prefer nice neat stories that no one will criticize. This is silly though because science, more often than not, is not neat and perfect. The Winnower believes that transparency in publishing is of the utmost importance. Going from a closed anonymous system to an open system will be hard for many scientists but I believe that it is the right thing to do if we care about the truth.
    2. At what point does payment occur, and are you concerned with the possible perception that this is pay-to-publish? Payment occurs as soon as you post your paper online. I am not overly concerned with the perception that this is pay-to-publish because it is. What makes The Winnower different is the price we charge. Our price is much, much lower than what other journals charge and we are clear as to what its use will be: the sustainability and growth of the website. arXiv, a site we are very much modeled after, does not charge anything for its preprint service, but I would argue its sustainability based on grants is questionable. We believe that authors should buy into this system and we think that the price we will charge is more than fair. Ultimately, if a critical mass is reached in The Winnower and other revenue sources can be generated, then we would love to make publishing free, but at this moment it is not possible.
    3. I strongly believe that if you’re scared of open peer review then we should be scared of your results.
    4. While The Winnower won’t eliminate bias (we are humans, after all) the content of the reviews can be evaluated by all because they will be readily accessible. [Note: reviewers could list competing interests in the template suggested on The Winnower’s blog.]
    5. Moreover, editors are literally selecting for simple studies but very often studies are not simple and results are not 100% clear. If you can’t publish your work because it is honest but poses some questions then eventually you will have to mold your work to what an editor wants and not what the data is telling you. There is a significant correlation between impact factor and misconduct and it is my opinion that much of this stems from researchers bending the truth, even if ever so slightly, to get into these career advancing publications.
    6. PLOS Labs is working on establishing structured reviews and we have talked with them about this.
    7. It should be noted that papers will always be open for review so that a paper can accumulate reviews throughout its lifetime.
    8. The journal will accommodate data but should be presented in the context of a paper. The Winnower should not act as a forum for publishing data sets alone. It is our feeling that data in absence of theory is hard to interpret and thus may cause undue noise to the site.

      This will also be the case for the data visualizations shown here, once the data is properly curated and verified. Still, data visualizations can start a global conversation without the full paper being translated into English.

    1. I think The Winnower has found a nice niche publishing what is called “grey literature.” (i.e. we publish content that is not traditionally afforded a platform).  By focusing on this niche in the short term (<5 years) we can build a community that will allow us to experiment with different models in the long term (>5 years).  I found out very early after launch of The Winnower—it’s not enough to build a platform around a new model, you have to convey the value to the community and really incentivize people to use it.
    2. I am hoping to change scholarly communication at all levels and I think transparency must be at the heart of this.
    3. While there are some features shared between a university repository and us we are distinctly different for the following reasons:
       - We offer DOIs to all content published on The Winnower
       - All content is automatically typeset on The Winnower
       - Content published on The Winnower is not restricted to one university but is published amongst work from peers at different institutions around the world
       - Because work is published from around the world it is more discoverable
       - We offer Altmetrics to content
       - Our site is much more visually appealing than a typical repository
       - Work can be openly reviewed on The Winnower but often times not even commented on in repositories.
       This is not to say that repositories have no place, but that we should focus on offering authors choices not restricting them to products developed in house.

      Regarding this tension/complementarity between in-house and external publishing platforms, I wonder where the place is for indie-web, self-hosted publishing, like the kind promoted by grafoscopio.

      A reproducible, structured, interactive grafoscopio notebook is self-contained in software and data and holds all of its history by design. Will in-house solutions and open journals like The Winnower, RIO Journal or the Self Journal of Science support such kinds of publishing artifacts?

      Technically there is not a big barrier (it is mostly about hosting fossil repositories, which is pretty easy, plus adding a discoverability and author layer on top), but it seems that the only option now is going to big DVCS and data platforms like GitHub or datahub-alikes for storing other research artifacts such as software and data; so it is centralized-mostly instead of p2p-also. These p2p alternatives seem to be off the radar for most alternative Open Access and Open Science publishers today.

    4. 20 years: ideas and results will be communicated iteratively and dynamically, not as a story written in stone. There is an increasing number of artifacts beyond text (data, visualizations, software tools, code, spreadsheets, multimedia content, etc.) How might these outputs factor into the scholarly conversation and more directly, the tenure and promotion process? JN: I think all these various outputs you mention are gaining prominence in scholarly communication.  I think that will continue and will become more and more important in how scholars are evaluated and rightly so a lot of work is done in different mediums and outside the confines of the article.  We need to experiment with different approaches of evaluation and part of that is looking beyond one thing (how often you publish and where you publish).  We do need to be careful though as new systems are implemented, new is not necessarily better.

      This part refers precisely to the comment made here:

      https://hypothes.is/a/AVKTqqSqvTW_3w8Lym-d

      It would be nice to know how to enable this interactive and dynamic communication now. My bet is on using interactive moldable notebooks, like grafoscopio, to integrate the whole workflow: writing, data visualization, data sharing and versioning, moldable tools, etc.

      Grafoscopio's approach, compared to similar interactive notebooks like Jupyter, Zeppelin or Beaker, is to be modifiable, self-contained and to work offline, which is important in the Global South context and counters the power concentration we have witnessed in the recent web, even in academic publishing; it is closer to the indie publishing approach (see the Indie Web for an alternative).

    5. I think the next generation of scientists who have grown up in the Internet era will have zero patience for the current system and because of that they will seek different outlets that make sense in light of the fact the Internet exists!  
    6. Most importantly, is that we’ve given the tools of scholarly publishers to the scholars themselves to use.  Which has had the unexpected effect that different types of content are being produced (conference proceedings, grants, open letters, responses to grants, peer reviews, logistics for organizing symposiums, and more).  Ultimately, we’ve created a platform that allows anyone to get their idea out there and to be afforded the same tools that a traditional publisher offers, that is in my opinion quite impactful.

      It would be nice to have links to examples of such kinds of content, particularly for the "more" part. I would like to know if there is something related to datasets, visualizations & algorithms.

    7. developing countries and developing scientists (students) are left out of scholarly discourse,

      Multilingual or language-diverse journals must be developed to serve the diverse public and authors of the Global South. Having mostly English-language journals is also a big barrier to scientific discourse in the Global South. Other, more fluid forms of discourse could be articulated around research artifacts like datasets or algorithms, which are more language-neutral, instead of focusing mainly on English scholarly text.

      This may be an example of such publications, based more on data & algorithms, that enable this kind of global, agile discourse:

      A visualization of publicly released info on meds

      (more details here)

    1. This IP license ends when you delete your IP content or your account, unless the content was shared with third parties and they have not deleted it.
    2. For content protected by intellectual property rights, such as photos and videos ("IP content"), you specifically grant us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook ("IP license").
    3. Date of last revision: January 30, 2015

      There is a documentary on Netflix about how terms and conditions change constantly; if we read them all, it would take us roughly 3 months per year. The documentary is called Terms and Conditions May Apply.

    1. Green OA and the role of repositories remain controversial. This is perhaps less the case for institutional repositories than for subject repositories, especially PubMed Central. The lack of its own independent sustainable business model means Green OA depends on its not undermining that of (subscription) journals. The evidence remains mixed: the PEER project found that availability of articles on the PEER open repository did not negatively impact downloads from the publisher’s site, but this was contrary to the experience of publishers with more substantial fractions of their journals’ content available on the longer-established and better-known arXiv and PubMed Central repositories. The PEER usage data study also provided further confirmation of the long usage half-life of journal articles and its substantial variation between fields (suggesting the importance of longer embargo periods than 6–12 months, especially for those fields with longer usage half-lives). Green proponents for their part point to the continuing profitability of STM publishing, the lack of closures of existing journals and the absence of a decline in the rate of launch of new journals since repositories came online as evidence of a lack of impact to date, and hence as evidence of low risk of impact going forward. Many publishers’ business instincts tell them otherwise; they have little choice about needing to accept submissions from large funders such as NIH, but there has been some tightening of publishers’ Green policies (page 102).
    2. Research funders are playing an increasingly important role in scholarly communication. Their desire to measure and to improve the returns on their investments emphasises accountability and dissemination. These factors have been behind their support of and mandates for open access (and the related, though less contentious policies on data sharing). These policies have also increased the importance of (and some say the abuse of) metrics such as Impact Factor and more recently are creating part of the market for research assessment services (page88).
    3. Open access publishing has led to the emergence of a new type of journal, the so-called megajournal. Exemplified by PLOS ONE, the megajournal is characterised by three features: full open access with a relatively low publication charge; rapid “non-selective” peer review based on “soundness not significance” (i.e. selecting papers on the basis that science is soundly conducted rather than more subjective criteria of impact, significance or relevance to a particularly community); and a very broad subject scope. The number of megajournals continues to grow: Table 10 lists about fifty examples (page 99).
    4. and the more research intensive universities remain concerned about the net impact on their budgets (page 90; 123).

      What does this mean?

    5. Gold open access based on APCs has a number of potential advantages. It would scale with the growth in research outputs, there are potential system-wide savings, and reuse is simplified. Research funders generally reimburse publication charges, but even with broad funder support the details regarding the funding arrangements within universities remain to be fully worked out. It is unclear where the market will set OA publication charges: they are currently lower than the historical average cost of article publication; about 25% of authors are from developing countries;
    6. The APC model itself has become more complicated, with variable APCs (e.g. based on length), discounts, prepayments and institutional membership schemes, offsetting and bundling arrangements for hybrid publications, an individual membership scheme, and so on (page 91; 93).
    7. Average publishing costs per article vary substantially depending on a range of factors including rejection rate (which drives peer review costs), range and type of content, levels of editorial services, and others. The average 2010 cost of publishing an article in a subscription-based journal with print and electronic editions was estimated by CEPA to be around £3095 (excluding non-cash peer review costs). The potential for open access to effect cost savings has been much discussed, but the emergence of pure-play open access journal publishers allows examples of average article costs to be inferred from their financial statements. These range from $290 (Hindawi), through $1088 (PLOS), up to a significantly higher figure for eLife (page 66).
    8. There is continued interest in expanding access by identifying and addressing these specific barriers to access or access gaps. While open access has received most attention, other ideas explored have included increased funding for national licences to extend and rationalise cover; walk-in access via public libraries (a national scheme was piloted in the UK in 2014); the development of licences for sectors such as central and local government, the voluntary sector, and businesses (page 84)
    9. The most commonly cited barriers to access are cost barriers and pricing, but other barriers cited in surveys include: lack of awareness of available resources; a burdensome purchasing procedure; VAT on digital publications; format and IT problems; lack of library membership; and conflict between the author’s or publisher’s rights and the desired use of the content (page 84).
    10. While publishers have always provided services such as peer review and copy-editing, increased competition for authors, globalisation of research, and new enabling technologies are driving an expansion of author services and greater focus on improving the author experience. One possibly emerging area is that of online collaborative writing tools: a number of start-ups have developed services and some large publishers are reported to be exploring this area (page 153).
    11. Semantic technologies have become mainstream within STM journals, at least for the larger publishers and platform vendors. Semantic enrichment of content (typically using software tools for automatic extraction of metadata and identification and linking of entities) is now widely used to improve search and discovery; to enhance the user experience; to enable new products and services; and for internal productivity improvements. The full-blown semantic web remains some way off, but publishers are starting to make use of linked data, a semantic web standard for making content more discoverable and re-usable (page 143).
    12. The growing importance to funders and institutions of research assessment and metrics has been reflected in the growth of information services such as research analytics built around the analysis of metadata (usage, citations, etc.), and the growth of a new software services such as CRIS tools (Current Research Information Systems) (page 150).
    13. Text and data mining are starting to emerge from niche use in the life sciences industry, with the potential to transform the way scientists use the literature. It is expected to grow in importance, driven by greater availability of digital corpuses, increasing computer capabilities and easier-to-use software, and wider access to content
    14. The explosion of data-intensive research is challenging publishers to create new solutions to link publications to research data (and vice versa), to facilitate data mining and to manage the dataset as a potential unit of publication. Change continues to be rapid, with new leadership and coordination from the Research Data Alliance (launched 2013): most research funders have introduced or tightened policies requiring deposit and sharing of data; data repositories have grown in number and type (including repositories for “orphan” data); and DataCite was launched to help make research data cited, visible and accessible. Meanwhile publishers have responded by working closely with many of the community-led projects; by developing data deposit and sharing policies for journals, and introducing data citation policies; by linking or incorporating data; by launching some pioneering data journals and services; by the development of data discovery services such as Thomson Reuters’ Data Citation Index (page 138).
    15. Similarly the rapid general adoption of mobile devices (smartphones and tablets) has yet to change significantly the way most researchers interact with most journal content–accesses from mobile devices still account for less than 10% of most STM platform’s traffic as of 2014 (though significantly higher in some fields such as clinical medicine) –but this is changing. Uptake for professional purposes has been fastest among physicians and other healthcare professionals, typically to access synoptic secondary services, reference works or educational materials rather than primary research journals. For the majority of researchers, though, it seems that “real work” still gets done at the laptop or PC (page 24; 30; 139).
    16. Social networks and other social media have yet to make the impact on scholarly communication that they have done on the wider consumer web. The main barriers to greater use have been the lack of clearly compelling benefits to outweigh the real costs (e.g. in time) of adoption. Quality and trust issues are also relevant: researchers remain cautious about using means of scholarly communication not subject to peer review and lacking recognised means of attribution. Despite these challenges, social media do seem likely to become more important given the rapid growth in membership of the newer scientific social networks (Academia, Mendeley, ResearchGate), trends in general population, and the integration of social features into publishing platforms and other software (page 72; 134).
    17. Virtually all STM journals are now available online, and in many cases publishers and others have retrospectively digitised early hard copy material back to the first volumes. The proportion of electronic-only journal subscriptions has risen sharply, partly driven by adoption of discounted journal bundles. Consequently the vast majority of journal use takes place electronically, at least for research journals, with print editions providing some parallel access for some general journals, including society membership journals, and in some fields (e.g. humanities and some practitioner fields). The number of established research (i.e. non-practitioner) journals dropping their print editions looks likely to accelerate over the coming few years (page 30).
    18. There is a significant amount of innovation in peer review, with the more evolutionary approaches gaining more support than the more radical. For example, some variants of open peer review (e.g. disclosure of reviewer names either before or after publication; publication of reviewer reports alongside the article) are becoming more common. Cascade review (transferring articles between journals with reviewer reports) and even journal-independent (“portable”) peer review are establishing a small foothold. The most notable change in peer review practice, however, has been the spread of the “soundness not significance” peer review criterion adopted by open access “megajournals” like PLOS ONE and its imitators. Post-publication review has little support as a replacement for conventional peer review but there is some interest in its use as a complement to it (for example, the launch of PubMed Commons is notable in lending the credibility of PubMed to post-publication review). There is similar interest in “altmetrics” as a potentially useful complement to review and in other measures of impact. A new technology of potential interest for post-publication review is open annotation, which uses a new web standard to allow citable comments to be layered over any website (page 47).
    19. Reading patterns are changing, however, with researchers reading more, averaging 270 articles per year, depending on discipline (more in medicine and science, fewer in humanities and social sciences), but spending less time per article, with reported reading times down from 45-50 minutes in the mid-1990s to just over 30 minutes. Access and navigation to articles is increasingly driven by search rather than browsing; at present there is little evidence that social referrals are a major source of access (unlike consumer news sites, for example), though new scientific social networks may change this. Researchers spend very little time on average on publisher web sites, “bouncing” in and out and collecting what they need for later reference (page 52).
    20. Despite a transformation in the way journals are published, researchers’ core motivations for publishing appear largely unchanged, focused on securing funding and furthering the author’s career (page 69)
    21. Although this report focuses primarily on journals, the STM book market (worth about $5 billion annually) is evolving rapidly in a transition to digital publishing. Ebooks made up about 17% of the market in 2012 but are growing much faster than STM books and than the STM market as a whole (page 24).
    22. The annual revenues generated from English-language STM journal publishing are estimated at about $10 billion in 2013, (up from $8 billion in 2008, representing a CAGR of about 4.5%), within a broader STM information publishing market worth some $25.2 billion. About 55% of global STM revenues (including non-journal STM products) come from the USA, 28% from Europe/Middle East, 14% from Asia/Pacific and 4% from the rest of the world (page 23).
    1. “There's no incentive structure for people to comment extensively, because it can take time to write a thoughtful comment, and one currently doesn't get credit for it,” he says. “But it's an experiment that needs to be done.”
    2. At the moment, Neylon explains, the scholarly publishing process involves ferrying a document from place to place. Researchers prepare manuscripts, share them with colleagues, fold in comments and submit them to journals. Journal editors send copies to peer reviewers, returning their comments to the author, who goes back and forth with the editor to finalize the text. After publication, readers weigh in with commentary of their own.
    3. To jump-start interest in the annotation program, arXiv has been converting mentions of its articles in external blog posts (called trackbacks) into annotations that are visible on an article's abstract page when using Hypothes.is.
    4. The scientific publisher eLife in Cambridge, UK, has been testing the feasibility of using Hypothes.is to replace its peer-review commenting system, says Ian Mulvany, who heads technology at the firm. The publisher plans to incorporate the annotation platform in a site redesign instead of its current commenting system, Disqus. At a minimum, says Mulvany, Hypothes.is provides a mechanism for more-targeted commentary — the equivalent of moving comments up from the bottom of a web page into the main body of the article itself.
    5. The digital library JSTOR, for example, is developing a custom Hypothes.is tool for its educational project with the Poetry Foundation, a literary organization and publisher in Chicago, Illinois.
    6. That should enable the tool to be used for journal clubs, classroom exercises and even peer review.
    7. But unlike Hypothes.is, the Genius code is not open-source, its service doesn't work on PDFs, and it is not working with the scholarly community.
    8. A few websites today have inserted code that allows annotations to be made on their pages by default, including the blog platform Medium, the scholarly reference-management system F1000 Workspace and the news site Quartz. However, annotations are visible only to users on those sites. Other annotation services, such as A.nnotate or Google Docs, require users to upload documents to cloud-computing servers to make shared annotations and comments on them.
    1. It's always a strange thing, going from nothing to something. Starting with just an idea, and gradually turning it into something real. Inventing along the way all these things that start so small, and gradually become these whole structures. I always think anyone who's been involved in such a thing has kind of a glow of confidence that lasts at least a decade—realizing that, yes, with the right effort nothing can turn into something.

      I have said that it is harder to go from nothing to something than from something to something more. It requires a kind of iron conviction in the value of undertaking something when there is nothing yet, and of enduring the stress of doing so.

    1. The difference is that with the Smalltalk code I can let the message do the talking. The message initiates the action. With C# I have to call a method in order to "send a message." This is what OO is supposed to avoid, because we're exposing implementation, here. It reifies the abstraction.
    2. A fundamental difference between the way Smalltalk treats objects and the way other so-called OOP languages treat them is that objects, as Alan Kay envisioned them (he coined the term "object-oriented"), are really meant to be servers (in software, not necessarily hardware), not mere collections of functions that have privileged access to abstract data types.

      The last part in particular, functions with privileged access to abstract data types, is what one sees in classic programming courses in Java or C++.

    3. The fundamental principle of objects in Smalltalk is message passing. What matters is what goes on between objects, not the objects themselves. The abstraction is in the message passing, not the objects.

      In a video, Alan Kay talks about Japanese culture and the concept of "ma", which could be translated as interstice, or "the space between". He notes that Anglo culture emphasizes visible things (objects) rather than intangibles (messages) and, as I recall, says that a more appropriate name would have been message-oriented programming.
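
      A minimal Pharo sketch of this emphasis on messages (the receivers and values here are only illustrative): the sender merely names a message; what happens next is decided entirely by the receiving object.

      "Everything below is a message send; the sender never sees an implementation."
      Transcript show: 'message received'; cr.   "keyword message #show: sent to Transcript"
      3 + 4.                                     "binary message #+ sent to 3 with argument 4"
      10 factorial.                              "unary message #factorial sent to 10"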

    4. The idea was to create a "no-centers" system design, where logic, and operational control is distributed, not centralized.

      Similar a los tejidos vivos, donde el procesamiento no está en ningún lado en particular. Alan Kay habla de los objetos similares a las células y los mensajes similares a las álgebras.

  10. Dec 2015
    1. v := RTView new.
       s := (RTBox new size: 30) + RTLabel.
       es := s elementsOn: (1 to: 20).
       v addAll: es.
       RTGridLayout on: es.
       v

      Nice! Here is another example with no single-letter variable names and more explicit data:

      | visual composedShape data viewElements |
      visual := RTView new.
      data := #('lion-o' 'panthro' 'tigro' 'chitara' 'munra' 'ozimandias' 'Dr Manhatan').
      composedShape := (RTEllipse new size: 100; color: Color veryLightGray) + RTLabel.
      viewElements := composedShape elementsOn: data.
      visual addAll: viewElements. 
      RTGridLayout on: viewElements.
      visual
      

      At the beginning I understood that the data "comes from Smalltalk", but maybe adding some tips with alternative examples, explicit data and longer variable names could help newbies like me, by offering comparisons with the numerical and intrinsic data inside the image. The explanation of composed shapes and the "+" sign is very well done.

    2. Roassal maps objects and connections to graphical elements and edges. In addition, values and metrics are represented in visual dimensions (e.g., width, height, intensity of graphical elements). Such mapping is an expressive way to build flexible and rich visualizations. This chapter gives an overview of Roassal and its main API. It covers the essential concepts of Roassal, such as the view, elements, shapes, and interactions.

      I would try a less technical introduction to combine with this one. How about:

      When we're building a visualization, we want the properties of the objects in our domain to be expressed graphically, by shapes, connections and visual dimensions like width, height, intensity of graphical elements. Roassal builds such mappings as an expressive way to build flexible and rich visualizations.
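
      A small sketch of this object-to-visual mapping (assuming Roassal2's block-based setters, as used elsewhere in the book): each number becomes a box whose height encodes the value it represents.

      | view shape elements |
      view := RTView new.
      "The height of each element is computed from the object behind it."
      shape := RTBox new width: 10; height: [ :aNumber | aNumber * 2 ]; yourself.
      elements := shape elementsOn: (1 to: 10).
      view addAll: elements.
      RTHorizontalLineLayout on: elements.
      view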

    1. Once a Roassal element has been created, modifying its shape should not result in an update of the element.

      This part should be clarified. Could a further example be referenced?
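
      A minimal sketch of what I take this to mean, reusing the RTBox/RTView API from the previous chapter: the element is built from the shape's settings at creation time, so mutating the shape afterwards is assumed to leave the already-created element unchanged.

      | view shape element |
      view := RTView new.
      shape := RTBox new size: 20.
      element := shape element.   "the element is created from the shape's current settings"
      view add: element.
      shape size: 100.            "per the text above, this should not update the element"
      view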

    2. c := TRCanvas new.
       shape := TRBoxShape new size: 40.
       c addShape: shape.
       shape when: TRMouseClick do: [ :event | event shape remove. c signalUpdate ].
       c

      I get this error MessageNotUnderstood: TRMouseLeftClick>>myCircle for this similar code:

      | canvas myCircle data |
      canvas := TRCanvas new.
      myCircle := TREllipseShape new size: 100; color: Color white.
      data := #('lion-o' 'panthro' 'tigro' 'chitara' 'munra' 'ozimandias' 'Dr Manhatan').
      canvas addShape: myCircle.
      myCircle when: TRMouseClick do: [:event | event myCircle remove. canvas signalUpdate  ].
      canvas
      

      If I change myCircle to shape it works fine, but I wouldn't have imagined that variable names could be so picky; generic names should work (circle doesn't work either). Presumably the real issue is that shape in the book's snippet is not the temporary variable but the message sent to the event (its accessor), so renaming the send along with the variable breaks it; see the sketch below.
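
      A corrected sketch under that assumption: only the temporary variable is renamed, while #shape is kept as the message sent to the event.

      | canvas myCircle |
      canvas := TRCanvas new.
      myCircle := TREllipseShape new size: 100; color: Color white; yourself.
      canvas addShape: myCircle.
      myCircle when: TRMouseClick do: [ :event |
          event shape remove.   "#shape is the event's accessor; here it answers myCircle"
          canvas signalUpdate ].
      canvas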

    3. Any sophisticated visualization boils down to primitive graphical elements. Ultimately, no matter how complex a visualization is, circles, boxes and labels are the bricks used to convey information. Before jumping into Roassal, it is important to get a basic understanding of these primitive graphical elements. Trachel is a low-level API to draw primitive graphical elements.

      Nice introduction. The only primitive I missed was the line.

    1. Congratulations on that! Looking forward to seeing how this develops and putting in my two cents on this effort.

      Just a minor correction: it's IPython, not iPython (in Fernando Perez's appearance and the end credits).

    1. Our utopian visions of the future, freed from present problems by human ingenuity and technical competence, might be possible on paper, but they are unlikely in reality. We have already made the biggest mistake, and spent 10,000 years perfecting a disastrous invention, then making ourselves ever more reliant on it. However, the archaeologists who give us glimpses of our ancestors, and the anthropologists who introduce us to our cousins, have been able to show us why we dream what we do. What we yearn for is not just our imagined future; it is our very real past.
    2. Agriculture turns land that feeds thousands of species into land that feeds one. It literally starves other species out of existence.

      However, organic agriculture, like Guillermo's, does not favor monocultures.

    3. Without a surplus of food, sustained military campaigns are simply not possible.
    4. Among nomads, property becomes a burden if it accumulates. A society of equals, which places little value on what material wealth it does possess, is not fertile ground for property crime.
    5. A group of nomads, finding itself unable to agree on an issue of importance, can always split into two or more groups, each of which can go its own way and implement the decision they believe to be the best. Farmers, however, are stuck where they are, and the best kind of democracy that a settled community can produce is the tyranny of the majority.

      An early example of plurarchy vs. democracy.

    6. In the 1960s and 1970s anthropologists, such as Richard Lee and Yehudi Cohen, noticed the strong correlation between how societies produce their food and how they are structured socio-politically. Years of accumulated anthropological research showed that those who live by hunting and gathering show a very strong tendency to live in egalitarian, consensus-based societies.
  11. Nov 2015
    1. How might civil society actors shape the data revolution? In particular, how might they go beyond the question of what data is disclosed towards looking at what is measured in the first place?

      This is deeply related to how we express what we value. But metrics can also deform the very perception of value and the way we behave according to it. A case about money and the need for diversity in it can be found in "Riches beyond belief" (So you want to invent your own currency).

      Data, as a political construct, is employed to argue for or against the implementation of certain visions of the world.

    1. In the age of social media, there are a myriad ways our online presence may be used against us by a multitude of adversaries. From stalkers to prosecutors, any public information that can be attached to our identities may be used to their advantage and our detriment. It is important that we are mindful of the resources we make available to potential attackers.

      A balance between identity and privacy is required: public profiles for what we want to be recognized for, and private ones for what could put our physical integrity, and that of those close to us, at risk. Handling this duality, which in principle should be available to everyone as a constitutional guarantee, will require other hardware and software designs (physical USB keys, perhaps with some built-in biometrics and processing, open but encrypted hardware, etc.).

    1. never post screencaps that show tabs. EVER.
    2. Identity is key and must be balanced with anonymity. It seems we need a p2p system, running on our own machines and hardware (a physical USB keyring), that can be used to protect our digital identity. It would handle things like encrypting and decrypting messages, using temporary email addresses to download information, creating anonymous yet reputation-bearing profiles to share certain critical information, and in general the activities involved in "dancing with power", as they said at the STEPS Latin America event.

    1. Extreme efficiency of exchange, in other words, might come at the cost of developing new business contacts.
    2. I accept bitcoins for the same reason that I accept normal money. Mainstream money is used to replace a specific trust relationship with a general one. I take British pounds from a specific person because I trust that I can exchange those pounds for something else within the general British pound-using community. Likewise, I take the bitcoins from the specific buyer because I trust that the broader Bitcoin community will accept them from me in exchange for something of intrinsic value. The main departure from normal electronic money is that Bitcoin uses a decentralised network in place of a central hierarchy. The advantages are anonymity, a sense of freedom and, it has been argued, a more resilient system.
    3. Perhaps we can tinker with the word ‘money’ itself. It’s a mass noun, like you’d use for some kind of tangible substance, and it makes money sound like a ‘thing-in-itself’. As a kind of mental discipline, I prefer to use a different word: COGAS. It stands for ‘claims on goods and services’, which is all money really is. And now I have a word that describes itself, as opposed to one that actively hides its own reality. It sounds trivial, but the linguistic process works a subtle psychological loop, referring money to the world outside itself. It’s a simple way to start peeling back the façade.

      Stallman does something similar when he rewrites DRM, replacing "Rights" with "Restrictions". That symbolic change is important!

    4. There’s an ecological dimension to this, of course, which is my overriding concern. Our ability to exchange without knowing where things come from blinds us to the real core of the economy: not money, but the physical things we must wrench from the ground by human effort, which is underpinned by agricultural systems, and energised by sunlight, water and soil.
    5. GDP is supposed to reflect what is created in society, but if my grandad builds me a table in his workshop, it’s not included in GDP, and if I buy a table in Ikea, it is. The former is not considered valid production, whereas the latter is. That is arbitrary, and obviously something has gone wrong.
    6. Similar network effects arise with social platforms such as Facebook — in theory, you can opt out, but only if you don’t mind the penalty of social exclusion. What’s more, when integrated into a national legal system and backed by the threat of violence, the sanctions for dissent become rather persuasive. At the unsubtle end of the spectrum, the monarch may simply throw you in jail for not using her preferred currency.
    7. Gold reveals the basic tension in the textbook definition of money — the idea that it can be both a store of value and a means of exchange. For the most part, when something is truly valuable in itself, people are disinclined to part with it (why swap rum for something else when you can just drink it?).
    8. It’s a reassuring myth, one that obscures the deep difference between barter and monetary exchange. In the former, nothing is left unresolved and no faith is required. It’s a closed circuit, a like-for-like swap. By contrast, money transactions are never closed; you pass on an abstract, faith-based claim in exchange for a tangible good.
    9. but this still means that every monetary transaction is a leap of faith. And faith has to be carefully maintained.

      We could get people to place their faith in something with more intrinsic value: useful information, for example.

    10. Shopkeepers accept the paper because they believe that it has abstract value — because, in turn, they believe that others believe it, too. The value is circular, predicated on each person believing that others believe in it.

      I remember saying that money was pieces of paper with portraits of dead people.

    11. I have an enduring memory of a TED talk in which he ripped a banknote into pieces, trying to make the point that the paper itself doesn’t have value
    12. The best guides in this half-lit territory turn out to be not economists, but rather the loose bands of monetary mystics and iconoclasts who are developing strange new exchange technologies. They are a scattered tribe, with elders including the likes of Bernard Lietaer, Ellen Brown and Thomas Greco, sages passing on tips on how to breach the Monetary Matrix.
    13. Money sounds like it’s an ordinary noun, a self-contained object. If it is a physical object, it must be paper or metal or digits on a computer. And yet, very few of us think a £5 note is merely a piece of paper: the same idea of £5 can be expressed in electronic or metal form, after all.

      Information whose material substrate can change, like the abstraction of numbers: the same quantity can be associated with collections of different objects.

    14. By contrast, money itself is more like a low-level programming language, very hard to see or to understand but closer to gritty reality. It’s like your computer’s machine code, interfacing with the hardware: even the experts take it for granted. You might need to explain to someone what a bond is, but nobody is ever ‘taught’ what money is.

      In my case, the illusory character of currency was revealed early on. Maybe that's why I don't have much of it :-P.

    15. To draw an analogy with computer coding, we might say that financial instruments are analogous to ‘high-level’ programming languages such as Java or Ruby: they let you string commands together in order to perform certain actions. You want to get resources from A to B over time? Well, we can program a financial instrument to do that for you.

      An interesting analogy. It would be worth looking at how cooperative practices can drive flows from A to B, supported by several material substrates, some low-tech (local currencies) and others high-tech (Bitcoin, Ethereum, etc.).

    16. The financial system exists, above all, to mediate flows of money, not to question what money is.
    1. Weapons of the Weak is not just a political study, however; it is also an outstanding work of ethnography. Based on thorough research and careful, perceptive fieldwork, it manages to avoid some of the failings of traditional ethnography by its emphasis on the centrality of individual human beings in their particular situations. Whether or not it offers definitive answers to the questions it investigates, it certainly provides some solid ground to stand on in looking for them.
    2. As a result, Scott suggests that the ideological superstructure must always be seen as a product of struggle, not as something preexisting.
  12. Oct 2015
    1. Using the data, Carlos Alberto made a visualization of income/expenses by department.

      When I click on "Cundinamarca", the data for Córdoba appears.

    1. Min 52:43, Patternmakers: the artisans who enable the machine, who create the patterns that make machines possible.

    1. how to think of a system where academic practice does not keep reinforcing the privatization of knowledge by remaining centered on protecting the author and their works under property rights, but where, on the other hand, making the author invisible, or killing the author, does not simply mean works circulating as ownerless commodities, something that does the capitalist system a great favor by giving it access to "free gifts", that is, knowledge easily incorporated into the dominant circuits of production.

      Grafoscopio is an exploration of one possibility for other academic practices that do not reinforce the privatization of knowledge. While individual authorship can still be traced there, its pocket, forkable infrastructure facilitates collective authorship. Licenses like the P2P license can forbid the private appropriation of knowledge by those who do not contribute back to the commons.

    1. Apple does not sell great design. It sells design that flatters its owner. (And Apple’s timing has been perfect to exploit the rising tide of wealth inequality.)
  13. Sep 2015
    1. | canvas point |
       canvas := DrGeoCanvas new.
       canvas fullscreen.
       point := canvas point: 0@0.
       canvas do: [
           -5 to: 5 by: 0.1 do: [ :x |
               point moveTo: x@(x cos * 3).
               (Delay forMilliseconds: 100) wait.
               canvas update ] ]
    2. c := DrGeoCanvas new.

      This line should be before the definition of "triangle"

    1. Unfortunately, this process rarely actually happens the right way, often because the business people ask their data people the wrong questions to begin with, and since they think of their data people as little more than pieces of software – data in, magic out – they don’t get their data people sufficiently involved with working on something that data can address.
    1. In programming languages like C++, C# or Java a class usually would be defined in a source code. A class definition file (Desktop.cpp/ Desktop.cs/ Desktop.java) in these languages would be a dumb text definition file fed into a compiler to verify and translate.In an interactive and lively system like Pharo a class could be created like any other object by sending instance creation methods. The reason is simple: in a pure OO environment anything is an object, so even a class is an object. Remember: there are only objects and messages.

      An example of "live coding" (via objects) versus "static code" (via files).
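
      A short sketch of this in a Pharo playground (the class, package and method names are hypothetical): the class is created by sending a message to its superclass, and a method is compiled by sending another message to the resulting class object, all inside the live image instead of a text file.

      "Creating a class is itself just a message send to the superclass object."
      Object subclass: #Desktop
          instanceVariableNames: 'windows'
          classVariableNames: ''
          category: 'LiveExamples'.

      "Adding a method is another message send, this time to the new class."
      Desktop compile: 'windowCount
          ^ windows ifNil: [ 0 ] ifNotNil: [ windows size ]'.

      "The method is immediately usable, with no separate compile/restart cycle."
      Desktop new windowCount.   "=> 0"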

    2. As the previous examples showed, Pharo has very much in common with an operating system. The difference is that it is more a lively kernel and scriptable object system that one can easily persist and transfer and that is easily extendable using the Smalltalk language.

      In Tracing the Dynabook it is shown how Smalltalk was an alternative to operating systems: it was another way to propose a whole computing experience that was at the same time complete and minimalist.