establishing democratic management over its urban deployment constitutes the right to the city.
Easier said than done... but what are some suggestions for how this could succeed?
If you can’t find the correct web page, ask a reference librarian.
YES, ASK US. Also, we love to work with faculty on managing their data!
Fundamental questions for the library revolve around issues of: stewardship (what types of annotations are appropriate for library ownership vs., say, a course platform), persistence (how long different types of annotations should be persisted and preserved), costs (who will fund annotation storage over time), and access (what privacy and distribution controls need to be placed on access to annotations).
API Management Using Github
I have documented eleven approaches to using Github for API management to date:
API Services
During my monitoring of the API space, I came across a new API monitoring service called AutoDevBot, which monitors all your API endpoints and notifies you when something goes wrong. That is a pretty standard feature in the new wave of API integration tools and services I'm seeing emerge, but what is interesting is that they use Github as a central place to store the settings for the API monitoring service. AutoDevBot has you clone their settings template, make the changes you need to monitor your APIs, register, and fire up AutoDevBot to monitor. It seems like a pretty simple way for API service providers to engage with API providers, allowing them to manage all the configuration for API services alongside their own internal API operations.
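To make the pattern concrete, here is a minimal sketch of a monitoring script that reads its endpoint list from a settings file cloned out of a repository and checks each endpoint. The file name, JSON schema, and example URLs are hypothetical; this illustrates the Github-as-configuration approach, not AutoDevBot's actual template or API.

```python
# Sketch of the "settings live in a Github repo" pattern: a monitoring script
# reads its endpoint list from a cloned settings file and pings each one.
# The file name and schema here are hypothetical, not AutoDevBot's real format.
import json
import urllib.request
import urllib.error

# In practice this JSON would be read from the settings repo you cloned, e.g.:
#   with open("api-monitor-settings/endpoints.json") as f: settings = json.load(f)
SETTINGS = json.loads("""
{
  "check_interval_seconds": 300,
  "endpoints": [
    {"name": "users",  "url": "https://api.example.com/v1/users",  "expect_status": 200},
    {"name": "status", "url": "https://api.example.com/v1/status", "expect_status": 200}
  ]
}
""")

def check_endpoint(endpoint):
    """Return (ok, detail) for a single monitored endpoint."""
    try:
        with urllib.request.urlopen(endpoint["url"], timeout=10) as resp:
            ok = resp.status == endpoint["expect_status"]
            return ok, f"HTTP {resp.status}"
    except (urllib.error.URLError, OSError) as exc:
        return False, str(exc)

if __name__ == "__main__":
    for ep in SETTINGS["endpoints"]:
        ok, detail = check_endpoint(ep)
        print(f"{'OK ' if ok else 'FAIL'} {ep['name']}: {detail}")
```

The appeal of the pattern is that the monitoring configuration lives in a repository alongside the rest of the API's operations, so changing what gets monitored is just a commit.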
Github As The Central Presence, Definition, Configuration, And Source Code For Your API
Posted on 02-05-2014
It is easy to think of Github as a central repository for your open source code—most developers understand that. I have written before about the many ways to use Github as part of your API management strategy, but in the last few months I'm really seeing Github playing more of a central role in the overall lifecycle of an API.
Journals and sponsors want you to share your data
What is the sharing standard? What are the consequences of not sharing? What is the enforcement mechanism?
There are three primary sharing mechanisms I can think of today: email, usb stick, and dropbox (née ftp).
The dropbox option is supplanting ftp which comes from another era, but still satisfies an important niche for larger data sets and/or higher-volume or anonymous traffic.
Dropbox, email and usb are all easily accessible parts of the day-to-day consumer workflow; they are all trivial to set up without institutional support or, importantly, permission.
An email account is already provisioned by default for everyone or, if the institutional email offerings are not sufficient, a person may easily set up a 3rd-party email account with no permission or hassle.
Data management alternatives to these three options will have slow or no adoption until the barriers to access and use are as low as email; the cost of entry needs to be no more than "a web browser, an email address, and no special permission required."
An effective data management program would enable a user 20 years or longer in the future to discover, access, understand, and use particular data [3]. This primer summarizes the elements of a data management program that would satisfy this 20-year rule and are necessary to prevent data entropy.
Who cares most about the 20-year rule? This is an ideal that appeals to some, but in practice even the most zealous adherents can't picture what it looks like in concrete terms, except in the most traditional ways: physical paper journals in libraries are tangible examples of the 20-year rule.
Until we have a digital equivalent for data, I don't blame people looking for tenure or jobs for not caring about this ideal if we can't provide a clear picture of how to achieve it widely at an institutional level. For digital materials, I think the picture people have in their minds is of tape backup. Maybe this is generational? Only once new generations come along, not widely exposed to the cassette tapes, DVDs, and other physical media that "old people" remember, will it be possible to form a new ideal that people can see in their mind's eye.
A key component of data management is the comprehensive description of the data and contextual information that future researchers need to understand and use the data. This description is particularly important because the natural tendency is for the information content of a data set or database to undergo entropy over time (i.e., data entropy), ultimately becoming meaningless to scientists and others [2].
I agree with the key component mentioned here, but I feel the term data entropy is an unhelpful crutch.
This primer describes a few fundamental data management practices that will enable you to develop a data management plan, as well as how to effectively create, organize, manage, describe, preserve, and share data.
Data management practices:
Data management activities, grouped. The data management activities mentioned by the survey can be grouped into five broader categories:
- "storage": backup or archival data storage, identifying appropriate data repositories, day-to-day data storage, and interacting with data repositories
- "more information": obtaining more information about curation best practices and identifying appropriate data registries and search portals
- "metadata": assigning permanent identifiers to data, creating and publishing descriptions of data, and capturing computational provenance
- "funding": identifying funding sources for curation support
- "planning": creating data management plans at proposal time
When the survey results are thus categorized, the dominance of storage is clear, with over 80% of respondents requesting some type of storage-related help. (This number may also reflect a general equating of curation with storage on the part of respondents.) Slightly fewer than 50% of respondents requested help related to metadata, a result explored in more detail below.
Categories of data management activities:
Having made these points many times in the last few years, I've realized that the fundamental problem is in the mistaken belief that the type system has anything whatsoever to do with the storage allocation strategy. It is simply false that the choice of whether to use the stack or the heap has anything fundamentally to do with the type of the thing being stored. The truth is: the choice of allocation mechanism has to do only with the known required lifetime of the storage.
The type system has nothing to do with the storage allocation strategy; the choice of allocation mechanism has to do only with the known required lifetime of the storage.
Now compare this to the stack. The stack is like the heap in that it is a big block of memory with a “high water mark”. But what makes it a “stack” is that the memory on the bottom of the stack always lives longer than the memory on the top of the stack; the stack is strictly ordered. The objects that are going to die first are on the top, the objects that are going to die last are on the bottom. And with that guarantee, we know that the stack will never have holes, and therefore will not need compacting. We know that the stack memory will always be “freed” from the top, and therefore do not need a free list. We know that anything low-down on the stack is guaranteed alive, and so we do not need to mark or sweep.
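A toy allocator makes the "strictly ordered" guarantee concrete: because the most recently allocated memory always dies first, both allocation and freeing are a single pointer move, and no free list, marking, or compaction is ever needed. This is an illustrative Python sketch of the idea, not the CLR's actual implementation.

```python
# Toy stack allocator: a block of memory plus a single high-water mark.
# Because lifetimes are strictly ordered (last allocated, first freed),
# freeing is just moving the mark back down -- no free list, no holes,
# no compaction. Illustrative sketch only, not how the CLR does it.
class StackAllocator:
    def __init__(self, size):
        self.memory = bytearray(size)
        self.top = 0                     # the high-water mark
        self.frames = []                 # sizes of live allocations, in order

    def alloc(self, size):
        if self.top + size > len(self.memory):
            raise MemoryError("stack overflow")
        offset = self.top
        self.top += size                 # bump the mark: allocation is one pointer move
        self.frames.append(size)
        return offset                    # "address" of the new allocation

    def free_top(self):
        # Only the most recently allocated block can die first.
        size = self.frames.pop()
        self.top -= size                 # freeing is also just a pointer move

stack = StackAllocator(1024)
a = stack.alloc(16)    # lives longest, sits at the bottom
b = stack.alloc(64)    # dies first, sits on top
stack.free_top()       # frees b; a is still guaranteed alive
print(stack.top)       # back to 16 -- no hole was ever created
```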
When a garbage collection is performed there are three phases: mark, sweep and compact. In the “mark” phase, we assume that everything in the heap is “dead”. The CLR knows what objects were “guaranteed alive” when the collection started, so those guys are marked as alive. Everything they refer to is marked as alive, and so on, until the transitive closure of live objects are all marked. In the “sweep” phase, all the dead objects are turned into holes. In the “compact” phase, the block is reorganized so that it is one contiguous block of live memory, free of holes.
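The three phases can be sketched against a toy heap of objects that hold references to each other. The object model and names below are invented for illustration; the real collector works on raw memory and runtime type information, not Python objects.

```python
# Toy mark/sweep/compact over a simulated heap. Objects are just records with
# outgoing references; "roots" stand in for the references the CLR knows are
# guaranteed alive when collection starts. Illustration only.
class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []       # outgoing references to other heap objects
        self.marked = False

def collect(heap, roots):
    # Mark: assume everything is dead, then mark the transitive closure of roots.
    for obj in heap:
        obj.marked = False
    pending = list(roots)
    while pending:
        obj = pending.pop()
        if not obj.marked:
            obj.marked = True
            pending.extend(obj.refs)
    # Sweep: dead objects become holes (represented here as None slots).
    swept = [obj if obj.marked else None for obj in heap]
    # Compact: slide live objects together so the block is contiguous again.
    return [obj for obj in swept if obj is not None]

a, b, c, d = Obj("a"), Obj("b"), Obj("c"), Obj("d")
a.refs.append(b)          # a -> b keeps b alive
heap = [a, b, c, d]
heap = collect(heap, roots=[a])
print([obj.name for obj in heap])   # ['a', 'b'] -- c and d were swept, heap compacted
```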
If we’re in that situation when new memory is allocated then the “high water mark” is bumped up, eating up some of the previously “free” portion of the block. The newly-reserved memory is then usable for the reference type instance that has just been allocated. That is extremely cheap; just a single pointer move, plus zeroing out the newly reserved memory if necessary.
The idea is that there is a large block of memory reserved for instances of reference types. This block of memory can have “holes” – some of the memory is associated with “live” objects, and some of the memory is free for use by newly created objects. Ideally though we want to have all the allocated memory bunched together and a large section of “free” memory at the top.
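The cheap, single-pointer-move allocation described above can be sketched the same way: one big reserved block, a high-water mark, and an alloc that bumps the mark and zeroes the newly reserved space. When the free portion runs out is where a collection would kick in. Again, this is a hedged illustration in Python, not the CLR's real allocator.

```python
# Toy GC-heap allocation: one big reserved block with a high-water mark.
# Allocating just bumps the mark (and zeroes the new space); there is no
# explicit free -- reclaiming holes is the collector's job. Sketch only.
class GCHeap:
    def __init__(self, size):
        self.block = bytearray(size)   # the large reserved block
        self.high_water = 0            # everything below this mark is in use

    def alloc(self, size):
        if self.high_water + size > len(self.block):
            raise MemoryError("free portion exhausted -- a collection would run here")
        offset = self.high_water
        self.block[offset:offset + size] = bytes(size)   # zero the newly reserved memory
        self.high_water += size                          # the cheap part: one pointer move
        return offset

heap = GCHeap(1024)
first = heap.alloc(32)    # lands at offset 0
second = heap.alloc(128)  # lands at offset 32
print(heap.high_water)    # 160 -- the mark just keeps moving up until collection
```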
The closest thing to common ground may be events for configuration-management software like PuppetConf or ChefConf, or possibly re:Invent.
Despite her work ethic, her track record, and the fact that we all really liked her, her skills were no longer adequate. Some of us talked about jury-rigging a new role for her, but we decided that wouldn’t be right. So I sat down with Laura and explained the situation—and said that in light of her spectacular service, we would give her a spectacular severance package. I’d braced myself for tears or histrionics, but Laura reacted well: She was sad to be leaving but recognized that the generous severance would let her regroup, retrain, and find a new career path. This incident helped us create the other vital element of our talent management philosophy: If we wanted only “A” players on our team, we had to be willing to let go of people whose skills no longer fit, no matter how valuable their contributions had once been. Out of fairness to such people—and, frankly, to help us overcome our discomfort with discharging them—we learned to offer rich severance packages.
I do not maintain any big open source projects, but in talking to people who do, it's become my understanding that the bulk of the work is sifting through issues and pull requests, not actually coding. The former is the thing they consider hardest, the thing that burns them out, their most overwhelming responsibility.