607 Matching Annotations
  1. May 2022
  2. Apr 2022
    1. it starts with this one kind of thing called single finger, and these are all just variations or practice styles, and then octave double stop skills, and, you know, just down the list. But you know these things are all developed through the practice, the daily practice, but then once they've been developed then I can just plug them into songs and create. So that's just... I'm really excited about this form, like the fiddle wrong, is because

      Jason Kleinberg takes basic tunes and then runs each one through a list of variations or practice styles (e.g. single-finger, octave double stops, scale, old-time, polkafy, blues, etc.), and he plays those tunes in these modified styles not only to practice, but to take these "musical conversations" and translate them into his own words. This is a clever way of generating new music, and potentially even new styles, by mixing those which have come before. In a very real sense, he's having a musical conversation with prior composers and musicians in the same way that an annotator has a conversation in the margins with an author. It's also an example of the sort of combinatorial creativity suggested by Ramon Llull's work.

    1. IMAP URL for text fragment

      The URL:

      ```
      <imap://minbari.example.org/gray-council;UIDVALIDITY=385759045/;
      UID=20/;PARTIAL=0.1024>
      ```

      may result in the following client commands and server responses:

      ```
      <connect to minbari.example.org, port 143>
      S: * OK [CAPABILITY IMAP4rev1 STARTTLS AUTH=ANONYMOUS] Welcome
      C: A001 AUTHENTICATE ANONYMOUS
      S: +
      C: c2hlcmlkYW5AYmFieWxvbjUuZXhhbXBsZS5vcmc=
      S: A001 OK Welcome sheridan@babylon5.example.org
      C: A002 SELECT gray-council
      <client verifies the UIDVALIDITY matches>
      C: A003 UID FETCH 20 BODY.PEEK[]<0.1024>
      ```

      ABNF:

      ```
      partial-range = number ["." nz-number]
          ; partial FETCH. The first number is
          ; the offset of the first byte,
          ; the second number is the length of
          ; the fragment.
      ```
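
      As a sketch of how a client might apply this grammar, the following hypothetical helper (not part of any library; the function name is my own) turns a PARTIAL range like the one in the URL into the octet-range argument of a UID FETCH command:

```python
# Hypothetical helper: map an IMAP URL PARTIAL specifier ("offset" or
# "offset.length", per the partial-range ABNF) onto UID FETCH arguments.
def partial_to_fetch(uid: str, partial: str) -> tuple[str, str]:
    offset, _, length = partial.partition(".")
    item = f"BODY.PEEK[]<{offset}.{length}>" if length else f"BODY.PEEK[]<{offset}>"
    return uid, item

# The example URL's ;UID=20/;PARTIAL=0.1024 becomes:
print(partial_to_fetch("20", "0.1024"))  # ('20', 'BODY.PEEK[]<0.1024>')
```

      An actual client would pass these to something like imaplib's `IMAP4.uid("FETCH", uid, item)` after selecting the mailbox.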

    1. solo thinking is rooted in our lifelong experience of social interaction; linguists and cognitive scientists theorize that the constant patter we carry on in our heads is a kind of internalized conversation. Our brains evolved to think with people: to teach them, to argue with them, to exchange stories with them. Human thought is exquisitely sensitive to context, and one of the most powerful contexts of all is the presence of other people. As a consequence, when we think socially, we think differently—and often better—than when we think non-socially.

      People have evolved as social animals and this extends to thinking and interacting. We think better when we think socially (in groups) as opposed to thinking alone.

      This may in part be why solo reading and annotating improves one's thinking: it is a form of social annotation between the lone annotator and the author. Actual social annotation amongst groups may add additional power to this method.

      I personally annotate alone, though I typically do so in a publicly discoverable fashion within Hypothes.is. While the audience of my annotations may be exceedingly low, there is at least a perceived public for my output. Thus my thinking, though done alone, is accelerated and improved by the potential social context in which it's done. (Hello, dear reader! 🥰) I can artificially take advantage of the social learning effects even if the social circle may mathematically approach the limit of an audience of one (me).

    2. A 2019 study published in the Proceedings of the National Academy of Sciences supports Wieman’s hunch. Tracking the intellectual advancement of several hundred graduate students in the sciences over the course of four years, its authors found that the development of crucial skills such as generating hypotheses, designing experiments, and analyzing data was closely related to the students’ engagement with their peers in the lab, and not to the guidance they received from their faculty mentors.

      Learning has been shown to be linked more strongly to engagement with peers in social situations than to guidance from faculty mentors.

      Cross reference: David F. Feldon et al., “Postdocs’ Lab Engagement Predicts Trajectories of PhD Students’ Skill Development,” Proceedings of the National Academy of Sciences 116 (October 2019): 20910–16


      Are there areas where this is not the case? Are there areas where this is more the case than not?

      Is it our evolution as social animals that has heightened this effect? How could this be shown? (Link this to prior note about social evolution.)

      Is it the ability to scaffold out questions and answers and find their way by slowly building up experience with each other that facilitates this effect?

      Could this effect be seen in annotating texts as well? If one's annotations become a conversation with the author, is there a learning benefit even when the author can't respond? By trying out writing about one's understanding of a text and seeing where the gaps are and then revisiting the text to fill them in, do we gain this same sort of peer engagement? How can we encourage students to ask questions to the author and/or themselves in the margins? How can we encourage them to further think about and explore these questions? Answer these questions over time?

      A key part of the solution is not just writing the annotations down in the first place, but keeping them, reviewing over them, linking them together, revisiting them and slowly providing answers and building solutions for both themselves and, by writing them down, hopefully for others as well.

    1. https://en.wikipedia.org/wiki/Open_text

      Within the field of semiotic analysis, an open text is one that can be interpreted by readers in a variety of ways. By way of contrast, a closed text prompts the reader to only one interpretation.

      Given the definition of an open text (opera aperta), in practice, the Bible may be one of the most open texts ever written despite its more likely original intention of it being a strictly closed text.

      What does a spectrum from open to closed look like? Can it be applied to other physical forms that could potentially be open to interpretation? Consider art, for example, which by its general nature is far more open to interpretation (an open "text"); rarely is there an artwork that is completely closed to a single interpretation.

      How does time and changing audiences/publics affect a work? The Bible may have been meant as a closed text in its original historical context, but time and politics have shown it to be one of the most spectacularly open texts ever written.

    1. 3. Who are you annotating with? Learning usually needs a certain degree of protection, a safe space. Groups can provide that, but public space often less so. In Hypothes.is who are you annotating with? Everybody? Specific groups of learners? Just yourself and one or two others? All of that, depending on the text you’re annotating? How granular is your control over the sharing with groups, so that you can choose your level of learning safety?

      This is a great question and I ask it frequently with many different answers.

      I've not seen specific numbers, but I suspect that the majority of Hypothes.is users are annotating in small private groups/classes using their learning management system (LMS) integrations through their university. As a result, using it and hoping for a big social experience is going to be discouraging for most.

      Of course this doesn't mean that no one is out there. After all, here you are following my RSS feed of annotations and asking these questions!

      I'd say that 95+% or more of my annotations are ultimately for my own learning and ends. If others stumble upon them and find them interesting, then great! But I'm not really here for them.

      As more people have begun using Hypothes.is over the past few years I have slowly but surely run into people hiding in the margins of texts and quietly interacted with them and begun to know some of them. Often they're also on Twitter or have their own websites too which only adds to the social glue. It has been one of the slowest social media experiences I've ever had (even in comparison to old school blogging where discovery is much higher in general use). There has been a small uptick (anecdotally) in Hypothes.is use by some in the note taking application space (Obsidian, Roam Research, Logseq, etc.), so I've seen some of them from time to time.

      I can only think of one time in the last five or so years in which I happened to be "in a text" and a total stranger was coincidentally reading and annotating at the same time. There have been a few times I've specifically been in a shared text with a small group annotating simultaneously. Other than this it's all been asynchronous experiences.

      There are a few people working at some of the social side of Hypothes.is if you're searching for it, though even their Hypothes.is presences may seem as sparse as your own at present @tonz.

      Some examples:

      @peterhagen has built an alternate interface for the main Hypothes.is feed that adds some additional discovery dimensions you might find interesting. It highlights some frequent annotators and provides a more visual feed of what's happening on the public Hypothes.is timeline, as well as data from HackerNews.

      @flancian maintains anagora.org, which is like a planet of wikis and related applications, where he keeps a list of annotations on Hypothes.is by members of the collective at https://anagora.org/latest

      @tomcritchlow has experimented with using Hypothes.is as a "traditional" comments section on his personal website.

      @remikalir has a nice little tool https://crowdlaaers.org/ for looking at documents with lots of annotations.

      Right now, I'm also in an Obsidian-based book club run by Dan Allosso in which some of us are actively annotating the two books using Hypothes.is and dovetailing some of this with activity in a shared Obsidian vault. See: https://boffosocko.com/2022/03/24/55803196/. While there is a small private group for our annotations, a few of us are still annotating the books in public. Perhaps if I had a group of people who were heavily interested in keeping a group going on a regular basis, I might find the value in it, but until then public is better, and I'm more likely to come across and see more of what's happening out there.

      I've got a collection of odd Hypothes.is related quirks, off label use cases, and experiments: https://boffosocko.com/tag/hypothes.is/ including a list of those I frequently follow: https://boffosocko.com/about/following/#Hypothesis%20Feeds

      Like good annotations and notes, you've got to put some work into finding the social portion of what's happening in this fun little space. My best recommendation for finding your "tribe" is to do some targeted tag searches in their search box to see who's annotating things in which you're interested.

    2. Where annotation is not an individual activity, jotting down marginalia in solitude, but a dialogue between multiple annotators in the now, or incrementally adding to annotators from the past.

      My first view, even before any of the potential social annotation angle, is that in annotating or taking notes, I'm simultaneously having a conversation with the author of the work and/or my own thoughts on the topic at hand. Anything beyond that for me is "gravy".

      I occasionally find that if I'm writing as I go that I'll have questions and take a stab only to find that the author provides an answer a few paragraphs or pages on. I can then look back at my thought to see where I got things right, where I may have missed or where to go from there. Sometimes I'll find holes that both the author and I missed. Almost always I'm glad that I spent the time thinking about the idea critically and got to the place myself with or without the author's help. I'm not sure that most others always do this, but it's a habit I've picked up from reading mathematics texts which frequently say things like "we'll leave it to the reader to verify or fill in the gaps" or "this is left as an exercise". Most readers won't/don't do this, but my view is that it's almost always where the actual engagement and learning from the material stems.

      Sometimes I may be writing out pieces to clarify them for myself and solidify my understanding, while at other times I'm using the text as a prompt for my own writing. My intention most often is to add my own thoughts in a significantly well-thought-out manner such that I can, in the near future, reuse these annotations/notes in essays or other writing. Some of this comes from broad experience of keeping a commonplace book for quite a while, and some of it has been influenced by reading about the history of others' note taking practices. One of the best summations of the overall practice I've seen thus far is Sönke Ahrens' How to Take Smart Notes (CreateSpace, 2017), though I find there are some practical steps missing that can only be found by actually practicing his methods in a dedicated fashion for several months before one sees changes in one's thought patterns, the questions one asks, and the work that stems from it all. And by work, I mean just that. The whole enterprise is a fair amount of work, though I find it quite fun and very productive over time.

      In my youth, I'd read passages and come up with some brilliant ideas. I might have underlined the passage and written something like "revisit this and expand", but I found I almost never did and upon revisiting it I couldn't capture the spark of the brilliant idea I had managed to see before. Now I just take the time out to write out the entire thing then and there with the knowledge that I can then later revise it and work it into something bigger later. Doing the work right now has been one of the biggest differences in my practice, and I'm finding that projects I want to make progress on are moving forward much more rapidly than they ever did.

    1. Yeshiva teaching in the modern period famously relied on memorization of the most important texts, but a few medieval Hebrew manuscripts from the twelfth or thirteenth centuries include examples of alphabetical lists of words with the biblical phrases in which they occurred, but without precise locations in the Bible—presumably because the learned would know them.

      Prior to concordances of the Christian Bible there are examples of Hebrew manuscripts in the twelfth and thirteenth centuries that have lists of words and sentences or phrases in which they occurred. They didn't include exact locations with the presumption being that most scholars would know the texts well enough to quickly find them based on the phrases used.


      Early concordances were later made unnecessary as tools when digital search could dramatically decrease the workload. However, these digital tools might miss the value found in the serendipity of searching through broad word lists.

      Has anyone made a concordance search and display tool to automatically generate concordances of any particular texts? Do professional indexers use these? What might be the implications of overlapping concordances of seminal texts within the corpus linguistics space?
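
      For a sense of how simple the core of such a tool can be, here is a minimal keyword-in-context concordance generator (a sketch of my own, not an existing product), which builds the same kind of word-to-phrase lists those medieval manuscripts contained:

```python
import re
from collections import defaultdict

def concordance(text: str, width: int = 4) -> dict[str, list[str]]:
    """Build a toy keyword-in-context concordance: for every word, collect
    the phrases (up to `width` words of context on each side) in which it
    occurs, without recording precise locations."""
    words = re.findall(r"[a-z']+", text.lower())
    entries: dict[str, list[str]] = defaultdict(list)
    for i, word in enumerate(words):
        lo, hi = max(0, i - width), i + width + 1
        entries[word].append(" ".join(words[lo:hi]))
    return dict(entries)

entries = concordance("In the beginning God created the heaven and the earth.")
print(entries["god"])  # ['in the beginning god created the heaven and']
```

      A professional indexer's tool would of course add lemmatization, stopword handling, and exact locations, but the data structure is essentially this.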

      Fun tools like the Bible Munger now exist to play around with find and replace functionality. https://biblemunger.micahrl.com/munge

      Online tools also have multi-translation versions that will show translational differences between the seemingly ever-growing number of English translations of the Bible.

    1. Humankind’s insatiable demand for electronic devices is creating the world’s fastest-growing waste stream.

      This sentence caught my attention the most because not only is it the opening sentence, it also outlines the problem discussed throughout the rest of the article. As society expands and grows electronically, there is more and more e-waste. Because this flow of e-waste has become so large, it is being labeled the world's fastest-growing waste stream. The article goes on to explain this phenomenon.

      The article I linked below covers the toxicological implications of e-waste, and I think it ties in well to the topics covered in this article. It shows how e-waste can affect human health and to what degree.

  3. Mar 2022
    1. wathsapp

      In the educational institutions of the most vulnerable sectors, where lack of access to ICTs was the constant, social networks such as WhatsApp ended up being the only medium that allowed sporadic contact and remote education. Teachers' privacy and daily lives were disrupted by the absence of a right to disconnect; teaching became a full-time job in which scarcity of resources prevailed, and it was the teachers' creativity in delivering their classes that allowed the process to move forward.

  4. Feb 2022
    1. Together: responsive, inline “autocomplete” pow­ered by an RNN trained on a cor­pus of old sci-fi stories.

      I can't help but think: what if one used one's own collected corpus of ideas, based on an ever-growing commonplace book, to create a text generator? Then by taking notes, highlighting others' work, and doing your own work, you're creating a corpus of material that's eminently interesting to you. This also means that by subsuming text over time into your own notes, the artificial intelligence will more likely be using your own prior thought patterns to make something that, from an information-theoretic standpoint, looks and sounds more like you. It would have your "hand", so to speak.

    1. If you now think: “That’s ridiculous. Who would want to read and pretend to learn just for the illusion of learning and understanding?” please look up the statistics: The majority of students chooses every day not to test themselves in any way. Instead, they apply the very method research has shown again (Karpicke, Butler, and Roediger 2009) and again (Brown 2014, ch. 1) to be almost completely useless: rereading and underlining sentences for later rereading. And most of them choose that method, even if they are taught that they don’t work.

      Even when taught that some methods of learning don't work, students will still actively use and focus on them.


      Are those using social annotation purposely helping students steer clear of these methods? Is there evidence that the social aspect of these annotation or conversational practices, with both the text and one's colleagues, is helpful? Do they need to be taken out of the text and done more explicitly in a lecture/discussion section, or in a book club setting similar to Dan Allosso's, or within a shared space like the Obsidian book club, to have more value?

  5. gingkowriter.com gingkowriter.com
    1. on top stacked laying flat on the left side, next to a potted plant on the right two other books to the right of the plant, spines not visible

      Stacked on top:

      - tools for thought, rheingold (MIT Press logo)
      - concept design: the essence of software, jackson
      - designing constructionist futures, nathan holbert, matthew berland, and yasmin b. kafai, editors (MIT Press logo)
      - structure and interpretation of computer programs, second edition, abelson and sussman (MIT Press logo)
      - introduction to the theory of computation

      Top shelf, ordinary orientation: books upright, spines facing out, tops leaning to the left.

      - toward a theory of instruction, bruner (belknap / harvard)
      - tools for conviviality, ivan illich (harper & row)
      - the human interface, raskin (addison wesley)
      - the design of everyday things, don norman (basic books)
      - changing minds, disessa (MIT Press logo)
      - mindstorms, seymour papert (unknown logo)
      - understanding computers and cognition, winograd and flores (addison wesley)
      - software abstraction, jackson, revised edition (MIT Press logo)
      - living with complexity, norman (MIT Press logo)
      - the art of doing science and engineering: learning to learn, richard w. hamming (stripe press logo)
      - the computer boys take over, ensmenger
      - recoding gender, abbate (MIT Press logo)
      - weaving the web, tim berners-lee (harper)
      - dealers of lightning: xerox parc and the dawn of the computer age, michael a. hiltzik (harper)
      - the dream machine, m. mitchell waldrop (stripe press logo)
      - from counterculture to cyberculture, fred turner (chicago)
      - the innovators, walter isaacson (simon & schuster paperbacks)
      - a people's history of computing in the united states, joy lisi rankin (harvard)
      - the media lab, stewart brand (penguin logo)

      Bottom shelf, ordinary orientation: books upright, spines facing out, tops leaning to the right.

      - about face: the essentials of interaction design, cooper, reimann, cronin, noessel, 4th edition (wiley)
      - the new media reader, wardrip-fruin and montfort, editors
      - designing interactions, bill moggridge, includes DVD (MIT Press logo)
      - interactive programming environments, barstow, shrobe, sandewall (mcgraw hill)
      - visual programming, shu
      - software visualization, editors: stasko, domingue, brown, price (MIT Press logo)
      - types and programming languages, pierce (MIT Press logo)
      - smalltalk-80: the interactive programming environment, goldberg (addison wesley)
      - constructing the user... statecharts, qa 76.9 .u83 h66 1999
      - the human use of human beings: cybernetics and society, wiener (da capo)
      - pasteur's quadrant, stokes (brookings)
      - scientific freedom: the elixir of civilization, donald w. braben (stripe press logo)
      - a pattern language, alexander, ishikawa, silverstein, jacobson, fiksdahl-king, angel (oxford)
      - the timeless way of building, alexander (oxford)

  6. Jan 2022
  7. Dec 2021
    1. With text replacement, you can use shortcuts to replace longer phrases. When you enter the shortcut in a text field, the phrase automatically replaces it. For example, you could type "GM" and "Good morning" would automatically replace it. To manage text replacement, tap Settings > General > Keyboard > Text Replacement. To add a text replacement, tap the Add button, then enter your phrase and shortcut. When you're done, tap Save. To remove a text replacement, tap Edit, tap the Remove button, then tap Delete. To save your changes, tap Done.

      They also have another, debatably much more relevant function Apple’s docs don’t acknowledge!

      Setting the same values for Phrase and Shortcut in this menu basically achieves the same thing as “Learn Spelling.”

  8. Nov 2021
  9. Oct 2021
    1. So, here is how I manage it, if the line height cannot be reduced sufficiently by the numeric entry/spinbox: Try clicking the question mark (un-set variable line height). If that does not resolve the issue, activate the Tt button ("outer" text style), set the font height to something small and the line spacing to something small, and click the question mark. Then de-activate Tt (outer) and edit the text normally. I.e., the outer style overrides the inner style.
  10. Sep 2021
    1. (Fletcher, 2014;Gwilt & Rissanen, 2011; Leerberg, Riisberg, & Boutrup, 2010;Rissanen & McQuillan, 2016

      Many in-text citations are used just in the background. They include authors' names and years instead of superscripts (which I think would be easier to read, but oh well).

    Tags

    Annotators

  11. Aug 2021
    1. scale_x_discrete(guide = guide_axis(n.dodge = 2))

      With guide_axis(), we can dodge the axis label text to avoid overlapping labels. Here, the guide_axis() function is used with n.dodge = 2 inside scale_x_discrete() to dodge overlapping text on the x-axis.

  12. Jul 2021
    1. they do not form the basis for discovery,

      I don't entirely agree with this part of the statement, because the digital tools we have allow us both to view information in an entirely new way and to see connections that we couldn't have seen very readily. For example, the ability to take any written work and create a concordance of its words can give us great insight that merely reading the work would not. If we wanted to see to what degree society was viewed from a male vs. female perspective between 1920 and 2020, we could analyze specific words in several pieces of literature from those time periods to see how significantly each gender is represented. If not impossible before digital tools, this would certainly have been so laborious as to render it an insignificant goal in the scheme of humanistic inquiry. Thus there is a basis for discovery within digital tools.

  13. Jun 2021
  14. May 2021
    1. Vulgata

      Today this is understood as the translation of the Bible that is regarded as authentic. The procedure for establishing a vulgate text, however, has been practiced since the 3rd and 2nd centuries BC. It can therefore be applied to all authentic texts.

  15. Apr 2021
    1. Ideally, GitHub would understand rich formats

      I've advocated for a different approach.

      Most of these "rich formats" are, let's just be honest, Microsoft Office file formats that people aren't willing to give up. But these aren't binary formats through-and-through; the OOXML formats are ZIP archives (following Microsoft's "Open Packaging Conventions") that when extracted are still almost entirely simple "files containing lines of text".

      So rather than committing your "final-draft.docx", "for-print.oxps" and what-have-you to the repo, run them through a ZIP extractor then commit that to the repo. Then, just like any other source code repo, include a "build script" for these—which just zips them back up and gives them the appropriate file extension.

      (I have found through experimentation that some of these packages do include some binary files (which I can't recall offhand), but they tend to be small, and you can always come up with a text-based serialization for them, and then rework your build script so it's able to go from that serialization format to the correct binary before zipping everything up.)
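
      A minimal sketch of such a build script, assuming the extracted tree is committed under a directory like src/final-draft/ (the paths and function name here are my own illustration, not a convention):

```python
import zipfile
from pathlib import Path

def pack_ooxml(src_dir: str, out_path: str) -> None:
    """Zip an extracted OOXML tree back into an Office file. Archive paths
    must be relative to the tree root so [Content_Types].xml sits at the
    top level of the package."""
    src = Path(src_dir)
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(src).as_posix())

# Stand-in tree for demonstration; a real one comes from unzipping the .docx once.
Path("src/final-draft/word").mkdir(parents=True, exist_ok=True)
Path("src/final-draft/[Content_Types].xml").write_text("<Types/>")
Path("src/final-draft/word/document.xml").write_text("<document/>")
pack_ooxml("src/final-draft", "final-draft.docx")
```

      Any small binaries in the tree simply round-trip through the zip unchanged, so this covers the common case even before you bother with a text serialization for them.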

    1. Feedback from the faculty teaching team after teaching for almost 8 weeks is how to template and simplify space for students to use, here is a direct quote: “could we create dedicated blog page for students that would be a pre-made, fool-proof template? When a student’s WordPress blog does not work and we can’t fix the problem, it is very frustrating to be helpless beside an exasperated student.”

      There may be a bit of a path forward here that some might consider using that has some fantastic flexibility.

      There is a WordPress plugin called Micropub (which needs to be used in conjunction with the IndieAuth plugin for authentication to their CMS account) that will allow students to log into various writing/posting applications.

      These are usually slimmed down interfaces that don't provide the panoply of editing options that the Gutenberg interface or Classic editor metabox interfaces do. Quill is a good example of this and has a Medium.com like interface. iA Writer is a solid markdown editor that has this functionality as well (though I think it only works on iOS presently).

      Students can write and then post from these, but still have the option to revisit within the built in editors to add any additional bells and whistles they might like if they're so inclined.

      This system is a bit like SPLOTs, but has a broader surface area and flexibility. I'll also mention that many of the Micropub clients are open source, so if one were inclined they could build their own custom posting interface specific to their exact needs. Even further, other CMSes like Known, Drupal, etc. either support this web specification out of the box or with plugins, so if you built a custom interface it could work just as well with other platforms that aren't just WordPress. This means that in a class where different students have chosen a variety of ways to set up their Domains, they can be exposed to a broader variety of editing tools or if the teacher chooses, they could be given a single editing interface that is exactly the same for everyone despite using different platforms.

      For those who'd like to delve further, I did a WordPress-focused crash course session on the idea a while back:

      Micropub and WordPress: Custom Posting Applications at WordCamp Santa Clarita 2019 (slides)

    1. What you want is not to detect if stdin is a pipe, but if stdin/stdout is a terminal.

      The OP wasn't wrong in exactly the way this comment implies: he didn't just ask how to detect whether stdin is a pipe. The OP actually asked how to detect whether it is a terminal or a pipe. The only mistake he made, then, was in assuming those were the only two possible alternatives, when in fact there is (apparently) a third one: that stdin is redirected from a file (it is not obvious why the OS would need to treat that any differently from a pipe/stream, but apparently it does).

      This omission is answered/corrected more clearly here:

      stdin can be a pipe or redirected from a file. Better to check if it is interactive than to check if it is not.
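
      A sketch of that three-way check, assuming POSIX semantics (the function name is my own): isatty() answers "is it interactive?", and fstat() distinguishes the non-interactive cases.

```python
import os
import stat

def fd_kind(fd: int) -> str:
    """Classify a file descriptor as 'terminal', 'pipe', 'file', or 'other'."""
    if os.isatty(fd):                 # the classic interactivity check
        return "terminal"
    mode = os.fstat(fd).st_mode
    if stat.S_ISFIFO(mode):
        return "pipe"                 # pipeline or named FIFO
    if stat.S_ISREG(mode):
        return "file"                 # redirected from a regular file
    return "other"                    # socket, character device, etc.

# e.g. at program startup: kind = fd_kind(sys.stdin.fileno())
r, w = os.pipe()
print(fd_kind(r))  # a pipe end classifies as 'pipe'
os.close(r)
os.close(w)
```

      As the comment suggests, most programs only need the first branch; the rest matters only if you genuinely must treat pipes and redirected files differently.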

    1. I'm trying to filter the output of the mpv media player, removing a particular line, but when I do so I am unable to control mpv with the keyboard. Here is the command: `mpv FILE | grep -v 'Error while decoding frame'` When I run the command, everything displays correctly, but I am unable to use the LEFT and RIGHT keys to scan through the file, or do anything else with the keyboard. How do I filter the output of the program while retaining control of it?
  16. Mar 2021
  17. Feb 2021
  18. Jan 2021
    1. Group Rules from the Admins

       1. NO POSTING LINKS INSIDE OF POST - FOR ANY REASON: We've seen way too many groups become a glorified classified ad & members don't like that. We don't want the quality of our group negatively impacted because of endless links everywhere. NO LINKS
       2. NO POST FROM FAN PAGES / ARTICLES / VIDEO LINKS: Our mission is to cultivate the highest quality content inside the group. If we allowed videos, fan page shares, & outside websites, our group would turn into spam fest. Original written content only
       3. NO SELF PROMOTION, RECRUITING, OR DM SPAMMING: Members love our group because it's SAFE. We are very strict on banning members who blatantly self promote their product or services in the group OR secretly private message members to recruit them.
       4. NO POSTING OR UPLOADING VIDEOS OF ANY KIND: To protect the quality of our group & prevent members from being solicited products & services - we don't allow any videos because we can't monitor what's being said word for word. Written post only.

      Wow, that's strict.

  19. Dec 2020
  20. Nov 2020
  21. jct.lucidrhino.design jct.lucidrhino.design
    1. The heart of the JCT Charity is to restore dignity to those who have lost all. To show respect to all, (especially those who have little self-respect), and to hod out that helping hand. Never have people felt so isolated as now. Being heard is an important part of building ‘community’. Many we seek to help can’t speak English for example. JCT aims to give a voice to those who have not had one and to help them to interact and contribute again.

      Text needs editing to:

      The heart of JCT Charity is to restore dignity to those who have lost all. To show respect to all, and to hold out that helping hand to those in crisis of some kind. Never have people felt so isolated as now. Being heard is an important part of this. Many we seek to help can’t speak English for example. JCT aims to help people interact and contribute again.

    2. Many have said that COVID-19 has brought communities together. Neighborhoods have risen to the challenge of helping each other out in really practical ways, food banks have emerged, street Whatsapp groups popped up everywhere and friends doing each other’s shopping. And yet isolation and personal crisis are to be found everywhere. JCT sees the need to make an impact in these times, allowing connection, enabling a dialogue, joining communities together. For us Covid has forced us to close many of our previous services especially drop-ins for people who were homeless or in need. This crisis forced us to work in a more focused way, one to one, with many of our clients concentrationg of getting housing and funds for those in the greatest need. What we found was that we were able to make a greater difference to those we were able to work with over this time, and from this JCT were able to fashion a new, highly effective, method of casework. The lockdown has helped us identify a specific way to help in this time of great need.

      Copy needs editing to:

      During the pandemic, Jesus Centre staff are collaborating and working diligently to adapt existing services and develop new projects for people in social and economic hardship. Our plan is to channel our resources where they are most needed and to continue to support the homeless, elderly and refugees who are struggling even more throughout the Coronavirus pandemic.

  22. Oct 2020
    1. I don’t think the right answer is to use something like the Mnemonic medium to memorize a cookbook’s contents. I think a likelier model is: each time you see a recipe, there’s some chance it’ll trigger an actionable “ooh, I want to make this!”, dependent on seasonality, weather, what else you’ve been cooking recently, etc. A more effective cookbook might simply resurface recipes intermittently over time, creating more opportunities for a good match: e.g. a weekly email with 5-10 cooking ideas, perhaps with some accompanying narrative. Ideally, the cookbook would surface seasonally-appropriate recipes. Seasonality would make the experience of “reading” a cookbook extend over the course of a year—a Timeful text.

      Indigenous peoples not only used holidays and other time-based traditions as a means of spaced repetition, but they also used them for just this purpose of time-based need. Winter's here and the harvest changes? Your inter-tribal rituals went over your memory palace for just those changes. Songs and dances recalled older dishes and recipes that hadn't been made in months and brought them into a new rotation.

      Anthropologists have collected examples of this specific to hunting seasons and preparations for the hunt, in which people would prepare for the types of game they would encounter. Certainly they did this for feast times and seasonal diets as well. Indigenous peoples in the Americas are documented as having done things like this for planting corn and keeping their corn varieties pure over hundreds of years.

  23. Sep 2020
    1. 19 Now the Lord God had formed out of the ground all the wild animals and all the birds in the sky. He brought them to the man to see what he would name them; and whatever the man called each living creature, that was its name. 20 So the man gave names to all the livestock, the birds in the sky and all the wild animals.

      God had given Adam the responsibility to name all living creatures on Earth after the first days of creation. In Ursula K. Le Guin’s “She Unnames Them”, the idea of how labels or given names could take away from “personal choice” and “freedom” is explored throughout the text. Instead of believing that humans are above animals and other living creatures, Buddhists view animals as sacred beings who are to be shown respect and never harmed. They also believe that humans can be reborn as animals, all interconnected with one another, which supports their belief in showing extreme care towards animals and allowing them to live freely.

    2. When the woman saw that the fruit of the tree was good for food and pleasing to the eye, and also desirable for gaining wisdom, she took some and ate it.

      Despite being told by God that she and her husband were not allowed to eat the fruit from the tree of the knowledge of good and evil, Eve gave into her temptations. The idea of the "forbidden fruit" has been carried into other pieces of literature, using an apple to symbolize a character's temptation leading to downfall.

      For example, in the fairy tale, Snow White and the Seven Dwarfs, when Snow White eats the poisoned apple, offered by the evil witch, who parallels the serpent, she falls into a death-like sleep.

  24. Aug 2020
  25. Jul 2020
  26. Jun 2020
  27. May 2020
  28. Apr 2020
    1. Python contributed examples:

       • Mic VAD Streaming - this example demonstrates getting audio from a microphone, running Voice-Activity-Detection and then outputting text.
       • VAD Transcriber - this example demonstrates VAD-based transcription with both console and graphical interfaces.

       Full source code is available on https://github.com/mozilla/DeepSpeech-examples.
    1. Python API Usage example. Examples are from native_client/python/client.cc. Creating a model instance and loading the model:

       ds = Model(args.model)

       Performing inference:

       if args.extended:
           print(metadata_to_string(ds.sttWithMetadata(audio, 1).transcripts[0]))
       elif args.json:
           print(metadata_json_output(ds.sttWithMetadata(audio, 3)))
       else:
           print(ds.stt(audio))

       Full source code
    1. DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier. NOTE: This documentation applies to the 0.7.0 version of DeepSpeech only. Documentation for all versions is published on deepspeech.readthedocs.io. To install and use DeepSpeech all you have to do is:

       # Create and activate a virtualenv
       virtualenv -p python3 $HOME/tmp/deepspeech-venv/
       source $HOME/tmp/deepspeech-venv/bin/activate

       # Install DeepSpeech
       pip3 install deepspeech

       # Download pre-trained English model files
       curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.pbmm
       curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.scorer

       # Download example audio files
       curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/audio-0.7.0.tar.gz
       tar xvf audio-0.7.0.tar.gz

       # Transcribe an audio file
       deepspeech --model deepspeech-0.7.0-models.pbmm --scorer deepspeech-0.7.0-models.scorer --audio audio/2830-3980-0043.wav

       A pre-trained English model is available for use and can be downloaded using the instructions below. A package with some example audio files is available for download in our release notes.
    1. Library for performing speech recognition, with support for several engines and APIs, online and offline. Speech recognition engine/API support:

       • CMU Sphinx (works offline)
       • Google Speech Recognition
       • Google Cloud Speech API
       • Wit.ai
       • Microsoft Bing Voice Recognition
       • Houndify API
       • IBM Speech to Text
       • Snowboy Hotword Detection (works offline)

       Quickstart: pip install SpeechRecognition. See the “Installing” section for more details. To quickly try it out, run python -m speech_recognition after installing. Project links: PyPI, source code, issue tracker. The library reference documents every publicly accessible object in the library; this document is also included under reference/library-reference.rst. See Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources; that document is also included under reference/pocketsphinx.rst.
    1. Running the example code with python. Run like this:

       cd vosk-api/python/example
       wget https://github.com/alphacep/kaldi-android-demo/releases/download/2020-01/alphacep-model-android-en-us-0.3.tar.gz
       tar xf alphacep-model-android-en-us-0.3.tar.gz
       mv alphacep-model-android-en-us-0.3 model-en
       python3 ./test_simple.py test.wav

       To run with your own audio file, make sure it has the proper format - PCM 16 kHz 16-bit mono - otherwise decoding will not work. You can find other examples of using a microphone, decoding with a fixed small vocabulary, or a speaker identification setup in the python/example subfolder.
    2. Vosk is a speech recognition toolkit. The best things in Vosk are:

       • Supports 8 languages - English, German, French, Spanish, Portuguese, Chinese, Russian, Vietnamese - with more to come.
       • Works offline, even on lightweight devices - Raspberry Pi, Android, iOS.
       • Installs with a simple pip3 install vosk.
       • Portable per-language models are only 50Mb each, but there are much bigger server models available.
       • Provides a streaming API for the best user experience (unlike popular speech-recognition python packages).
       • There are bindings for different programming languages, too - java/csharp/javascript etc.
       • Allows quick reconfiguration of vocabulary for best accuracy.
       • Supports speaker identification besides simple speech recognition.
    1. Import all the necessary libraries into our notebook. LibROSA and SciPy are the Python libraries used for processing audio signals.

       import os
       import librosa  # for audio processing
       import IPython.display as ipd
       import matplotlib.pyplot as plt
       import numpy as np
       from scipy.io import wavfile  # for audio processing
       import warnings
       warnings.filterwarnings("ignore")

       Data Exploration and Visualization helps us to understand the data as well as the pre-processing steps in a better way.
    2. In the 1980s, the Hidden Markov Model (HMM) was applied to the speech recognition system. HMM is a statistical model which is used to model the problems that involve sequential information. It has a pretty good track record in many real-world applications including speech recognition.  In 2001, Google introduced the Voice Search application that allowed users to search for queries by speaking to the machine.  This was the first voice-enabled application which was very popular among the people. It made the conversation between the people and machines a lot easier.  By 2011, Apple launched Siri that offered a real-time, faster, and easier way to interact with the Apple devices by just using your voice. As of now, Amazon’s Alexa and Google’s Home are the most popular voice command based virtual assistants that are being widely used by consumers across the globe. 
    3. Learn how to Build your own Speech-to-Text Model (using Python) - Aravind Pai, July 15, 2019

       Overview:
       • Learn how to build your very own speech-to-text model using Python in this article
       • The ability to weave deep learning skills with NLP is a coveted one in the industry; add this to your skillset today
       • We will use a real-world dataset and build this speech-to-text model, so get ready to use your Python skills!
    1. One can imagine that this whole process may be computationally expensive. In many modern speech recognition systems, neural networks are used to simplify the speech signal using techniques for feature transformation and dimensionality reduction before HMM recognition. Voice activity detectors (VADs) are also used to reduce an audio signal to only the portions that are likely to contain speech. This prevents the recognizer from wasting time analyzing unnecessary parts of the signal.
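       The idea behind a VAD can be sketched in a few lines. This is a toy energy-threshold detector written for illustration only - the frame size, threshold, and signal are arbitrary assumptions, and real VADs (e.g. the one in WebRTC) model the signal statistically rather than thresholding raw energy:

```python
import math

# A minimal sketch of an energy-based voice activity detector (VAD).
# Frames whose mean squared energy falls below the threshold are
# treated as non-speech and can be skipped by the recognizer.

def frame_energies(samples, frame_size=160):
    """Split a signal into frames and compute each frame's mean squared energy."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [sum(s * s for s in f) / len(f) for f in frames if f]

def speech_frames(samples, frame_size=160, threshold=0.01):
    """Return indices of frames whose energy exceeds the threshold."""
    return [i for i, e in enumerate(frame_energies(samples, frame_size)) if e > threshold]

# Toy signal: 160 near-silent samples followed by 160 loud samples.
signal = [0.001 * math.sin(i / 5) for i in range(160)] + \
         [0.5 * math.sin(i / 5) for i in range(160)]
print(speech_frames(signal))  # only the second (loud) frame survives: [1]
```

       Only the frames this function keeps would be passed on to the expensive HMM or neural-network stage.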
    2. Most modern speech recognition systems rely on what is known as a Hidden Markov Model (HMM). This approach works on the assumption that a speech signal, when viewed on a short enough timescale (say, ten milliseconds), can be reasonably approximated as a stationary process—that is, a process in which statistical properties do not change over time.
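       The standard way to evaluate an observation sequence under an HMM is the forward algorithm. The sketch below uses toy states, symbols, and probabilities chosen for illustration - a real acoustic model would have phoneme states and continuous acoustic features, not two symbols:

```python
# A minimal sketch of the HMM forward algorithm: the probability of an
# observation sequence, summed over all hidden state paths.

def forward(observations, states, start_p, trans_p, emit_p):
    """Return P(observations) under the HMM."""
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][obs]
            for s in states
        }
    return sum(alpha.values())

# Toy model: two hidden "phoneme" states emitting two acoustic symbols.
states = ["s1", "s2"]
start_p = {"s1": 0.6, "s2": 0.4}
trans_p = {"s1": {"s1": 0.7, "s2": 0.3}, "s2": {"s1": 0.4, "s2": 0.6}}
emit_p = {"s1": {"a": 0.9, "b": 0.1}, "s2": {"a": 0.2, "b": 0.8}}

p = forward(["a", "b", "a"], states, start_p, trans_p, emit_p)
print(round(p, 5))  # 0.10893
```

       A recognizer scores candidate word models this way and picks the one that assigns the observed audio the highest probability.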
    3. The first component of speech recognition is, of course, speech. Speech must be converted from physical sound to an electrical signal with a microphone, and then to digital data with an analog-to-digital converter. Once digitized, several models can be used to transcribe the audio to text.
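       The digitization step can be illustrated with Python's standard-library wave module. The 440 Hz tone, 16 kHz sample rate, and in-memory buffer below are arbitrary choices for the sketch, not part of any particular recognizer:

```python
import io
import math
import struct
import wave

RATE = 16000  # samples per second (a common rate for speech models)
# One second of a 440 Hz sine tone as 16-bit PCM sample values.
samples = [int(32767 * 0.5 * math.sin(2 * math.pi * 440 * t / RATE))
           for t in range(RATE)]

# Write the samples as a mono 16-bit WAV stream...
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)    # mono
    w.setsampwidth(2)    # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# ...and read them back as the digital data a recognizer would consume.
buf.seek(0)
with wave.open(buf, "rb") as w:
    data = w.readframes(w.getnframes())
decoded = list(struct.unpack("<%dh" % (len(data) // 2), data))
print(len(decoded))  # 16000
```

       Because PCM is lossless, the decoded samples are exactly the ones written; it is these integer arrays that the transcription models operate on.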
    1. there is also strong encouragement to make code re-usable, shareable, and citable, via DOI or other persistent link systems. For example, GitHub projects can be connected with Zenodo for indexing, archiving, and making them easier to cite alongside the principles of software citation [25].
      • GitHub and GitLab technology focuses on plain-text formats that can easily be recognized and read by machines/computers (machine readable).

      • Text mining is currently a major, fast-growing technology. Machine learning will not run without the raw material that text-mining technology provides.

      • For this reason, journals - especially those published abroad - have long provided two versions of every paper they release: a PDF version (which is really no different from the paper of old) and an HTML version (which is machine readable).

      • Binary word processors such as MS Word depend heavily on software technology (owned by business entities). Naturally, the codes for reading them will be locked up.

      • Even PDF, which is regarded as the easiest and safest way to share files, cannot be read easily by machines.

  29. Mar 2020
  30. Feb 2020
  31. Dec 2019
    1. The beauty of using Google Sheets or another spreadsheet tool for your to do list is that you have so many formatting options. Sometimes I change the color of a cell to indicate that it's high priority. Other times I bold it. And other times I just write IMPORTANT in front of it. Whatever works. But if you like to be more consistent, you can choose colors to indicate specific things: priority, level of effort, type of tasks, or anything else you want to be able to see at a glance. For example, I always highlight a row in blue if I'm going to be out of the office. That way, I don't overschedule the week. And I highlight a row in red if it's a non-negotiable—something I have to do the day it's scheduled because of an external deadline. And because you have text formatting options—which many to do lists don't—you can make your formatting as granular as you'd like. Bold certain types of tasks, italicize others, or even add a border around cells. Whatever stands out to you visually, go with that

      Free-form text formatting has its pros and its cons.

      Pros: It's very flexible. Since it's free-form, you can create any new system you want ad hoc, and designate, say, bold or blue to mean whatever you want it to.

      Cons: No way to enforce the rules you made for yourself. In fact, it may be hard to even remember the rules you made for yourself. You may have to create a key/legend for yourself to be safe.

      This is also why I dislike software where the only way to change a font is to choose it manually. I like it better when you can define a style/class (I think Word can do this, IIRC; and obviously HTML/CSS can), choose how that class should be formatted (font, etc.), and then style any text with that class. This is a better way to go because classes have semantic meaning. It's the same dilemma I remember facing ~10 years ago when WYMeditor was fairly new: it let you use semantic classes/elements, whereas WYSIWYG editors were the norm (probably still are) and only let you do manual free-form formatting, with no semantic meaning conveyed.

  32. plaintext-productivity.net
  33. burnsoftware.wordpress.com
  34. burnsoftware.wordpress.com
    1. A user can manipulate the file contents in a plain text editor in sensible, expected ways. For example, a text editor that can sort lines alphabetically should be able to sort your task list in a meaningful way.
    2. Plain text is software and operating system agnostic. It's searchable, portable, lightweight, and easily manipulated. It's unstructured. It works when someone else's web server is down or your Outlook .PST file is corrupt. There's no exporting and importing, no databases or tags or flags or stars or prioritizing or insert company name here-induced rules on what you can and can't do with it.
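       The "sort lines alphabetically" point above is easy to demonstrate. The (A)/(B)/(C) priority prefix below is an assumed convention (borrowed from todo.txt-style lists), not something the quoted text prescribes:

```python
# A minimal sketch of why line-based tools give plain-text task lists
# "free" features: sorting lines groups tasks by their priority prefix.

tasks = """\
(B) water the plants
(A) file quarterly taxes
(C) reorganize bookshelf
(A) call the dentist
""".splitlines()

# Any tool that sorts lines alphabetically - sort(1), an editor
# command, or Python's sorted() - orders tasks by priority as a
# side effect of the naming convention.
for line in sorted(tasks):
    print(line)
```

       The same list could be filtered with grep or diffed with any version-control tool, which is exactly the agnosticism the quoted passage is praising.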