375 Matching Annotations
  1. Apr 2022
    1. Connected Papers uses the publicly available corpus compiled by Semantic Scholar — a tool set up in 2015 by the Allen Institute for Artificial Intelligence in Seattle, Washington — amounting to around 200 million articles, including preprints.

      Semantic Scholar is a digital tool created in 2015 by the Allen Institute for Artificial Intelligence in Seattle, Washington. Its corpus is publicly available for search and is used by other tools, including Connected Papers.

    1. He continues by comparing open works to quantum mechanics, and he arrives at the conclusion that open works are more like Einstein's idea of the universe, which is governed by precise laws but seems random at first. In such open works, the artist arranges the work carefully so that it can be re-organized by another while still keeping the artist's original voice or intent.

      Is physics open or closed?

      Could a play, made in a zettelkasten-like structure, be performed in a way so as to keep a consistent authorial voice?

      What potential applications does the idea of opera aperta have for artificial intelligence? Can it be created in such a way as to give an artificial brain a consistent "authorial voice"?

  2. Mar 2022
    1. the European project X5-GON (Global Open Education Network), which collects information on open educational resources and works well thanks to a large contribution of artificial intelligence for analyzing documents in depth
    1. This generative model normally penalizes predicted toxicity and rewards predicted target activity. We simply proposed to invert this logic by using the same approach to design molecules de novo, but now guiding the model to reward both toxicity and bioactivity instead.

      By changing the parameters of the AI, the output of the AI changed dramatically.
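
      The inversion described in the quote can be pictured as flipping the sign of the toxicity term in the model's reward function. A toy sketch; the predictor names and scores below are hypothetical stand-ins, not the paper's actual models:

```python
# Toy sketch of the reward inversion; predictors are hypothetical
# stand-ins returning scores assumed to lie in [0, 1].
def reward(molecule, predict_activity, predict_toxicity, invert_toxicity=False):
    act = predict_activity(molecule)
    tox = predict_toxicity(molecule)
    sign = 1.0 if invert_toxicity else -1.0  # normally toxicity is a penalty
    return act + sign * tox

active_stub = lambda m: 0.8  # hypothetical bioactivity predictor
toxic_stub = lambda m: 0.9   # hypothetical toxicity predictor

normal_score = reward("mol", active_stub, toxic_stub)                 # 0.8 - 0.9
inverted_score = reward("mol", active_stub, toxic_stub, True)         # 0.8 + 0.9
```

      The same generative pipeline then optimizes for the inverted score, which is why such a small change in parameters produces such a dramatically different output.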

    1. Of course, users are still the source of the insight that makes a complete document also a compelling document.

      Nice that he takes a more humanistic viewpoint here rather than indicating that it will all be artificial intelligence in the future.

  3. Feb 2022
    1. Stay at the forefront of educational innovation

      What about a standard of care for students?

      Bragging about students not knowing how the surveillance technology works is unethical. Students using accessibility software or open educational resources shouldn't be punished for accidentally avoiding surveillance. pic.twitter.com/Uv7fiAm0a3

      — Ian Linkletter (@Linkletter) February 22, 2022

      #annotation https://t.co/wVemEk2yao

      — Remi Kalir (@remikalir) February 23, 2022
    1. At the back of Dr Duncan's book on the topic, Index, A History Of The, he includes not one but two indexes, in order to make a point.

      Dennis Duncan includes two indices in his book Index, A History of The, one by a professional human indexer and the second generated by artificial intelligence. He indicates that the human version is far better.

    1. We need to get our thoughts on paper first and improve them there, where we can look at them. Especially complex ideas are difficult to turn into a linear text in the head alone. If we try to please the critical reader instantly, our workflow would come to a standstill. We tend to call extremely slow writers, who always try to write as if for print, perfectionists. Even though it sounds like praise for extreme professionalism, it is not: A real professional would wait until it was time for proofreading, so he or she can focus on one thing at a time. While proofreading requires more focused attention, finding the right words during writing requires much more floating attention.

      Proofreading while rewriting, structuring, or doing the thinking or creative parts of writing is a form of bikeshedding. It is easy to focus on the small and picayune fixes when writing, but this distracts from the more important parts of the work which really need one's attention to be successful.

      Get your ideas down on paper and only afterwards work on proofreading at the end. Switching contexts from thinking and creativity to spelling, small bits of grammar, and typography can be taxing from the perspective of trying to multi-task.


      Link: Draft #4 and using Webster's 1913 dictionary for choosing better words/verbiage as a discrete step within the rewrite.


      Linked to above: Are there other dictionaries, thesauruses, books of quotations, or individual commonplace books and waste books that can serve as resources for finding better words, phrases, or phrasing when writing? Imagine searching through Thoreau's commonplace book for interesting turns of phrase. Naturally searching through one's own commonplace book is a great place to start, if you're saving those sorts of things, especially from fiction.

      Link this to Robin Sloan's AI talk and using artificial intelligence and corpuses of literature to generate writing.

  4. Jan 2022
    1. https://vimeo.com/232545219

      from: Eyeo Conference 2017

      Description

      Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, write at least one short story together with the audience: all of us, and the machine.

      Notes

      Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.

      Some of the idea here is reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in Draft #4 in The New Yorker (2013-04-22)

      Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary


      Croatian acapella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4


      Writing using the adjacent possible.


      Corpus building as an art [~37:00]

      Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.

      Open questions

      How might we use information theory to do this more easily?

      What does a person or machine's "hand" look like in the long term with these tools?

      Can we use corpus linguistics in reverse for this?

      What sources would you use to train your model?

      References:

      • Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks"
      • Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. "Generating sentences from a continuous space." 2015. arXiv: 1511.06349
      • Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text generation." arXiv:1702.02390
      • Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 applies neural networks to sound and sound production
    1. Markoff, a long-time chronicler of computing, sees Engelbart as one pole in a decades-long competition "between artificial intelligence and intelligence augmentation -- A.I. versus I.A."

      There is an interesting difference between artificial intelligence and intelligence augmentation. Index cards were already doing the latter by the early 1940s.

  5. Dec 2021
  6. Nov 2021
  7. Oct 2021
  8. Sep 2021
  9. Aug 2021
    1. Provide more opportunities for new talent. Because healthcare has been relatively solid and stagnant in what it does, we're losing out on some of the new talent that comes out — who are developing artificial intelligence, who are working at high-tech firms — and those firms can pay significantly higher than hospitals for those talents. We have to find a way to provide some opportunities for that and apply those technologies to make improvements in healthcare.

      Interesting. Mr. Roach thinks healthcare is not doing enough to attract new types of talent (AI and emerging tech) into healthcare. We seem to be losing this talent to the technology sector.

      I would agree with this point. Why work in healthcare, with all of its massive demands, HIPAA, and few people even knowing what you are building? Instead, you can go into tech, have a better quality of life, get paid much more, and have the possibility of an exit via a buyout from the healthcare industry.

    1. Building on platforms' stores of user-generated content, competing middleware services could offer feeds curated according to alternate ranking, labeling, or content-moderation rules.

      Already I can see too many companies relying on artificial intelligence to sort and filter this material, which could cause even worse second-order problems.

      Allowing the end user to easily control the content curation and filtering will be absolutely necessary, and even then, customers' desire to do this will likely lose out to the automaticity of AI. Customer laziness will likely win the day on this, so the design around it must be robust.

  10. Jul 2021
    1. Facebook AI. (2021, July 16). We’ve built and open-sourced BlenderBot 2.0, the first #chatbot that can store and access long-term memory, search the internet for timely information, and converse intelligently on nearly any topic. It’s a significant advancement in conversational AI. https://t.co/H17Dk6m1Vx https://t.co/0BC5oQMEck [Tweet]. @facebookai. https://twitter.com/facebookai/status/1416029884179271684

  11. Jun 2021
    1. t hadn’t learned sort of the concept of a paddle or the concept of a ball. It only learned about patterns of pixels.

      Cognition and perception are closely related in humans, as the theory of embodied cognition has shown. But until the concept of embodied cognition gained traction, we had developed a pretty intellectual concept of cognition: as something located in our brains, drained of emotions, utterly rational, deterministic, logical, and so on. This is still the concept of intelligence that rules research in AI.

    2. the original goal at least, was to have a machine that could be like a human, in that the machine could do many tasks and could learn something in one domain, like if I learned how to play checkers maybe that would help me learn better how to play chess or other similar games, or even that I could use things that I’d learned in chess in other areas of life, that we sort of have this ability to generalize the things that we know or the things that we’ve learned and apply it to many different kinds of situations. But this is something that’s eluded AI systems for its entire history.

      The truth is we do not need to have computers to excel in the things we do best, but to complement us. We shall bet on cognitive extension instead of trying to re-create human intelligence --which is a legitimate area of research, but computer scientists should leave this to cognitive science and neuroscience.

    1. Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

      What if they're not? What if they're building an advertising machine to manipulate us into giving them all our money?

      From an investor perspective, the artificial intelligence answer certainly seems sexy, while using some clever legerdemain to keep the public from seeing what's really going on behind the curtain.

    2. It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.”

      What if we want more serendipity? What if we don't know what we really want? Where is this in their system?

  12. May 2021
    1. Turing was an exceptional mathematician with a peculiar and fascinating personality and yet he remains largely unknown. In fact, he might be considered the father of the von Neumann architecture computer and the pioneer of Artificial Intelligence. And all thanks to his machines; both those that Church called “Turing machines” and the a-, c-, o-, unorganized- and p-machines, which gave rise to evolutionary computations and genetic programming as well as connectionism and learning. This paper looks at all of these and at why he is such an often overlooked and misunderstood figure.
  13. Mar 2021
    1. In this respect, we join Fitzpatrick (2011) in exploring “the extent to which the means of media production and distribution are undergoing a process of radical democratization in the Web 2.0 era, and a desire to test the limits of that democratization”

      Something about this is reminiscent of WordPress' mission to democratize publishing. We can also compare it to Facebook, whose stated mission is to connect people, while its actual mission is to make money by seemingly radicalizing people to the extremes of our political spectrum.

      This highlights the fact that many may look at content moderation on platforms like Facebook, such as removing voices or deplatforming people like Donald J. Trump or Alex Jones, as an anti-democratic move. In fact it is not. Because of Facebook's active move to accelerate extreme ideas by pushing them algorithmically, the platform is actively being un-democratic. Democratic behavior on Facebook would look like one voice and one account, with reach only commensurate with that person's standing in real life. Instead, the algorithmic timeline gives far outsized influence and reach to some of the most extreme voices on the platform. This is patently un-democratic.

    1. Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience.

      This and the prior note are also underpinned by the fact that only 10% of people are going to be responsible for the majority of posts, so if you can filter out the velocity that accrues to these people, you can effectively dampen down the crazy.

    2. In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

      The one thing many of these types of noxious content WILL have in common are the people at the fringes who are regularly promoting it. Why not latch onto that as a means of filtering?

    3. But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

      If the company can't help regulate itself using some sort of moral compass, it's imperative that government or other outside regulators should.

    4. via Joan Donovan, PhD in "This is just some of the best back story I’ve ever read. Facebooks web of influence unravels when @_KarenHao pulls the wrong thread. Sike!! (Only the Boston folks will get that.)" / Twitter (03/14/2021 12:10:09)

  14. Feb 2021
  15. Jan 2021
  16. Dec 2020
  17. Nov 2020
  18. Oct 2020
    1. Similarly, technology can help us control the climate, make AI safe, and improve privacy.

      regulation needs to surround the technology that will help with these things

    1. What if you could use AI to control the content in your feed? Dialing up or down whatever is most useful to you. If I’m on a budget, maybe I don’t want to see photos of friends on extravagant vacations. Or, if I’m trying to pay more attention to my health, encourage me with lots of salads and exercise photos. If I recently broke up with somebody, happy couple photos probably aren’t going to help in the healing process. Why can’t I have control over it all, without having to unfollow anyone. Or, opening endless accounts to separate feeds by topic. And if I want to risk seeing everything, or spend a week replacing my usual feed with images from a different culture, country, or belief system, couldn’t I do that, too? 

      Some great blue sky ideas here.

    1. Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.
  19. Sep 2020
  20. Aug 2020
  21. Jul 2020
  22. Jun 2020
    1. each of them flows through each of the two layers of the encoder

      each of them flows through each of the two layers of EACH encoder, right?

    1. It made it challenging for the models to deal with long sentences.

      This is similar to autoencoders struggling with producing high-resolution imagery because of the compression that happens in the latent space, right?

    1. it seems that word-level models work better than character-level models

      Interesting, if you think about it, both when we as humans read and write, we think in terms of words or even phrases, rather than characters. Unless we're unsure how to spell something, the characters are a secondary thought. I wonder if this is at all related to the fact that word-level models seem to work better than character-level models.
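
      A toy comparison of the two tokenizations makes the asymmetry concrete (the sentence is arbitrary, chosen only for illustration):

```python
# An arbitrary sentence, tokenized two ways.
sentence = "word-level models work better"

char_tokens = list(sentence)    # tiny vocabulary, long sequences
word_tokens = sentence.split()  # huge vocabulary, short sequences

# A character-level model must first learn spelling before it can model
# meaning; a word-level model starts from already-meaningful units.
print(len(char_tokens), len(word_tokens))  # 29 characters vs. 4 words
```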

    2. As you can see above, sometimes the model tries to generate latex diagrams, but clearly it hasn’t really figured them out.

      I don't think anyone has figured latex diagrams (tikz) out :')

    3. Antichrist

      uhhh should we be worried

    1. We only forget when we’re going to input something in its place. We only input new values to the state when we forget something older.

      seems like a decision aiming for efficiency

    2. outputs a number between 0 and 1 for each number in the cell state C_{t-1}

      remember, each line represents a vector.
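
      The quoted passage describes a sigmoid gate acting elementwise on the cell state. A minimal numpy sketch of an LSTM forget gate, using illustrative random weights rather than trained ones:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
hidden = 4

# Illustrative weights; in a trained LSTM these are learned.
W_f = rng.normal(size=(hidden, 2 * hidden))
b_f = np.zeros(hidden)

h_prev = rng.normal(size=hidden)  # previous hidden state h_{t-1}
x_t = rng.normal(size=hidden)     # current input
C_prev = rng.normal(size=hidden)  # previous cell state C_{t-1}

# Forget gate: one number in (0, 1) per entry of the cell state.
f_t = sigmoid(W_f @ np.concatenate([h_prev, x_t]) + b_f)
C_damped = f_t * C_prev           # elementwise: near 1 keeps, near 0 forgets
```

      Each of `h_prev`, `x_t`, `C_prev`, and `f_t` is a vector, which is the point of the note above.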

  23. May 2020
    1. Mei, X., Lee, H.-C., Diao, K., Huang, M., Lin, B., Liu, C., Xie, Z., Ma, Y., Robson, P. M., Chung, M., Bernheim, A., Mani, V., Calcagno, C., Li, K., Li, S., Shan, H., Lv, J., Zhao, T., Xia, J., … Yang, Y. (2020). Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19). MedRxiv, 2020.04.12.20062661. https://doi.org/10.1101/2020.04.12.20062661

  24. Apr 2020
    1. Abdulla, A., Wang, B., Qian, F., Kee, T., Blasiak, A., Ong, Y. H., Hooi, L., Parekh, F., Soriano, R., Olinger, G. G., Keppo, J., Hardesty, C. L., Chow, E. K., Ho, D., & Ding, X. (n.d.). Project IDentif.AI: Harnessing Artificial Intelligence to Rapidly Optimize Combination Therapy Development for Infectious Disease Intervention. Advanced Therapeutics, n/a(n/a), 2000034. https://doi.org/10.1002/adtp.202000034

  25. Dec 2019
    1. Alexander Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

      Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

      Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice--or at least in my decade of living with them I've yet to run into poetry in one.

  26. Aug 2019
    1. so there won’t be a blinking bunny, at least not yet, let’s train our bunny to blink on command by mixing stimuli ( the tone and the air puff)

      Is that just how we all learn and evolve? 😲

    1. A notable by-product of a move of clinical as well as research data to the cloud would be the erosion of market power of EMR providers.

      But we have to be careful not to inadvertently favour the big tech companies in trying to stop favouring the big EMR providers.

    2. cloud computing is provided by a small number of large technology companies who have both significant market power and strong commercial interests outside of healthcare for which healthcare data might potentially be beneficial

      AI is controlled by these external forces. In what direction will this lead it?

    3. it has long been argued that patients themselves should be the owners and guardians of their health data and subsequently consent to their data being used to develop AI solutions.

      Mere consent isn't enough. We consent to give away all sorts of data for phone apps that we don't even really consider. We need much stronger awareness, or better defaults so that people aren't sharing things without proper consideration.

    4. To realize this vision and to realize the potential of AI across health systems, more fundamental issues have to be addressed: who owns health data, who is responsible for it, and who can use it? Cloud computing alone will not answer these questions—public discourse and policy intervention will be needed.

      This is part of the habit and culture of data use. And it's very different in health than in other sectors, given the sensitivity of the data, among other things.

    5. In spite of the widely touted benefits of “data liberation”,15 a sufficiently compelling use case has not been presented to overcome the vested interests maintaining the status quo and justify the significant upfront investment necessary to build data infrastructure.

      Advancing AI requires more than just AI stuff. It requires infrastructure and changes in human habit and culture.

    6. However, clinician satisfaction with EMRs remains low, resulting in variable completeness and quality of data entry, and interoperability between different providers remains elusive.11

      Another issue with complex systems: the data can be voluminous but of poor individual quality, relying on domain knowledge to interpret properly (e.g. that doctor didn't really prescribe 10x the recommended dose; it was probably an error).

    7. Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.9

      AI depends on:

      • static processes - if the population you are predicting changes relative to the one used to train the model, all bets are off. It remains to be seen how similar they need to be given the brittleness of AI algorithms.
      • homogeneous population - beyond race, what else is important? If we don't have a good theory of health, we don't know.
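
      The first bullet can be illustrated with a toy numpy experiment: a threshold rule that is perfect on the training population degrades sharply once the population shifts (all numbers are synthetic and for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)

# Training population: feature x ~ N(0, 1); the true decision
# boundary for this population sits at x = 0.5.
train_x = rng.normal(loc=0.0, scale=1.0, size=5000)
train_y = (train_x > 0.5).astype(int)

# The "model" is just the threshold fit to the training population.
learned_threshold = 0.5

# Deployment population: shifted to N(1, 1), where the true
# boundary has also moved, to x = 1.5.
deploy_x = rng.normal(loc=1.0, scale=1.0, size=5000)
deploy_y = (deploy_x > 1.5).astype(int)

train_acc = np.mean((train_x > learned_threshold).astype(int) == train_y)
deploy_acc = np.mean((deploy_x > learned_threshold).astype(int) == deploy_y)
# train_acc is perfect by construction; deploy_acc drops well below it.
```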
    8. Simply adding AI applications to a fragmented system will not create sustainable change.
    1. Both artists, through annotation, have produced new forms of public dialogue in response to other people (like Harvey Weinstein), texts (The New York Times), and ideas (sexual assault and racial bias) that are of broad social and political consequence.

      What about examples of future sorts of annotations/redactions like these with emerging technologies? Stories about deepfakes (like Obama calling Trump a "dipshit" or the Youtube Channel Bad Lip Reading redubbing the words of Senator Ted Cruz) are becoming more prevalent and these are versions of this sort of redaction taken to greater lengths. At present, these examples are obviously fake and facetious, but in short order they will be indistinguishable and more commonplace.

  27. Jun 2019
    1. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
  28. May 2019
    1. Deep machine learning, which is using algorithms to replicate human thinking, is predicated on specific values from specific kinds of people—namely, the most powerful institutions in society and those who control them.

      This reminds me of this Reddit page

      The page takes pictures and text from other Reddit pages and uses them to create computer-generated posts and comments. It is interesting to see the intelligence and quality of understanding grow as it gathers more and more information.

    1. government investments
    2. initiatives from the U.S., China, and Europe
    3. Recent Government Initiatives
    4. engagement in AI activities by academics, corporations, entrepreneurs, and the general public

      Volume of Activity

    5. Derivative Measures
    6. AI Vibrancy Index
    7. limited gender diversity in the classroom
    8. improvement in natural language
    9. the COCO leaderboard
    10. patents
    11. robot operating system downloads,
    12. the GLUE metric
    13. robot installations
    14. AI conference attendance
    15. the speed at which computers can be trained to detect objects

      Technical Performance

    16. quality of question answering

      Technical Performance

    17. changes in AI performance

      Technical Performance

    18. Technical Performance
    19. number of undergraduates studying AI

      Volume of Activity

    20. growth in venture capital funding of AI startups

      Volume of Activity

    21. percent of female applicants for AI jobs

      Volume of Activity

    22. Volume of Activity
    23. increased participation in organizations like AI4ALL and Women in Machine Learning
    24. producers of AI patents
    25. ML teaching events
    26. University course enrollment
    27. 83 percent of 2017 AI papers
  29. Apr 2019
    1. Ashley Norris is the Chief Academic Officer at ProctorU, an organization that provides online exam proctoring for schools. This article has an interesting overview of the negative side of technology advancements and what that has meant for students' ability to cheat. While the article does culminate as an ad, of sorts, for ProctorU, it is an interesting read and sparks thoughts on ProctorU's use of both human monitors for testing and its integration of artificial intelligence into the process.

      Rating: 9/10.

  30. Mar 2019
    1. If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical next step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
    1. Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning.

      Again, this doesn't conflict with a machine-learning or deep-learning or neural-net way of seeing IP.

    2. No ‘copy’ of the story is ever made

      Or, the copy initially made is changed over time since human "memory" is interdependent and interactive with other brain changes, whereas each bit in computer memory is independent of all other bits.

      However, machine learning probably results in interactions between bits as the learning algorithm is exposed to more training data. The values in a deep neural network interact in ways that are not so obvious. So this machine-human analogy might be getting new life with machine learning.

    3. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight

      I don't see how this is true. The IP perspective depends on algorithms. There are many different algorithms to perform various tasks. Some perform reverse-kinematic calculations, but others conduct simpler, repeated steps. In computer science, this might be dynamic programming, recursive algorithms, or optimization. It seems that the IP metaphor still fits: it's just that those using the metaphor may not have updated their model of IP to be more modern.

    1. It is no wonder that AI is gaining popularity. Many facts and advantages are driving such profitable growth of AI, and the essential peculiarities are fully presented in the given article.

    1. we provide him as much help as possible in making a plan of action. Then we give him as much help as we can in carrying it out. But we also have to allow him to change his mind at almost any point, and to want to modify his plans.

      I'm thinking about the role of AI tutors/advisors here. How often do they operate in the kind of flexible way described here? I wonder if they can without actual human intervention.

  31. Feb 2019
    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      Intelligence checks is very concerning here as it suggests pretty much what has already been leaked, that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility for discriminatory algorithms since most of these are probably rooted in machine learning techniques and the criminal justice system in the US today tends to be fairly biased towards certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This re-affirms the previous annotation that the set of training data for the intelligence checks the US runs on global entry data is biased towards certain groups of people.

  32. Jan 2019
    1. AI Robots will be replacing the White Collar Jobs by 6% until 2021

      AI software and chatbots will be incorporated into current technologies and automated with robotic systems. They will be given rights to access calendars, email accounts, browsing history, playlists, past purchases, and media viewing history. Six percent is a huge number worldwide, and people will struggle to find jobs. But there are benefits too, as work will get done more easily and quickly.

    1. CTP synthesizes critical reflection with technology production as a way of highlighting and altering unconsciously-held assumptions that are hindering progress in a technical field.

      Definition of critical technical practice.

      This approach is grounded in AI rather than HCI

      (verbatim from the paper) "CTP consists of the following moves:

      • identifying the core metaphors of the field

      • noticing what, when working with those metaphors, remains marginalized

      • inverting the dominant metaphors to bring that margin to the center

      • embodying the alternative as a new technology

  33. Nov 2018
    1. The decisive thing is that they remain masters of the process - and develop a vision for the new machine age.

      It doesn't really look to me as if we were ever the "masters of the process." And that is also what Marx is about, I think.

  34. Sep 2018
    1. And its very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out what we really are and then building machines that are all of that.

      The authors of the text propose a radically different approach to the inevitable "singularity" event: the research and development of IA, or Intelligence Amplification, that is, computers in symbiosis with humans. They note that IA could be easier to develop than AI algorithms, since humanity would first have to probe its own true weaknesses and strengths, and then develop an IA system to cover those weaknesses. This could keep machine intelligence from outpacing us and potentially delay the point at which we reach the singularity.

  35. Jul 2018
    1. Leading thinkers in China argue that putting government in charge of technology has one big advantage: the state can distribute the fruits of AI, which would otherwise go to the owners of algorithms.
  36. Jun 2018
    1. In “Getting Real,” Barad proposes that “reality is sedimented out of the process of making the world intelligible through certain practices and not others ...” (1998: 105). If, as Barad and other feminist researchers suggest, we are responsible for what exists, what is the reality that current discourses and practices regarding new technologies make intelligible, and what is excluded? To answer this question Barad argues that we need a simultaneous account of the relations of humans and nonhumans and of their asymmetries and differences. This requires remembering that boundaries between humans and machines are not naturally given but constructed, in particular historical ways and with particular social and material consequences. As Barad points out, boundaries are necessary for the creation of meaning, and, for that very reason, are never innocent. Because the cuts implied in boundary making are always agentially positioned rather than naturally occurring, and because boundaries have real consequences, she argues, “accountability is mandatory” (187): We are responsible for the world in which we live not because it is an arbitrary construction of our choosing, but because it is sedimented out of particular practices that we have a role in shaping (1998: 102). The accountability involved is not, however, a matter of identifying authorship in any simple sense, but rather a problem of understanding the effects of particular assemblages, and assessing the distributions, for better and worse, that they engender.
    2. Finally, the ‘smart’ machine's presentation of itself as the always obliging, 'labor-saving device' erases any evidence of the labor involved in its operation "from bank personnel to software programmers to the third-world workers who so often make the chips" (75).
    3. Chasin poses the question (which I return to below) of how a change in our view of objects from passive and outside the social could help to undo the subject/object binary and all of its attendant orderings, including for example male/female, or mental/manual
    4. Figured as servants, she points out, technologies reinscribe the difference between ‘us’ and those who serve us, while eliding the difference between the latter and machines: "The servant troubles the distinction between we-human-subjects-inventors with a lot to do (on the one hand) and them-object-things that make it easier for us (on the other)" (1995: 73)
  37. Apr 2018
    1. The alternative, of a regulatory patchwork, would make it harder for the West to amass a shared stock of AI training data to rival China’s.

      Fascinating geopolitical suggestion here: Trans-Atlantic GDPR-like rules as the NATO of data privacy to effectively allow "the West" to compete against the People's Republic of China in the development of artificial intelligence.

  38. Dec 2017
    1. Most of the recent advances in AI depend on deep learning, which is the use of backpropagation to train neural nets with multiple layers ("deep" neural nets).

      Neural nets consist of layers of nodes, with edges from each node to the nodes in the next layer. The first and last layers are input and output. The output layer might only have two nodes, representing true or false. Each node holds a value representing how excited it is. Each edge has a value representing strength of connection, which determines how much of the excitement passes through.

      The edges in an untrained neural net start with random values. The training data consists of a series of samples that are already labeled. If the output is wrong, the edges are adjusted according to how much they contributed to the error. It's called backpropagation because it starts with the output nodes and works toward the input nodes.

      Deep neural nets can be effective, but only for single specific tasks. And they need huge sets of training data. They can also be tricked rather easily. Worse, someone who has access to the net can discover ways of adding noise to images that will make the net "see" things that obviously aren't there.
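      The mechanics described in this note (random initial edge weights, excitement flowing forward through weighted edges, errors propagated backward from the output) can be sketched in a few dozen lines. This is a generic illustration in pure Python, not code from the annotated article; the network shape (2-2-1), learning rate, and the XOR training set are all my own illustrative choices.

      ```python
      import math
      import random

      # A tiny 2-2-1 neural net trained by backpropagation on XOR.
      random.seed(0)

      def sigmoid(z):
          # Squashes a node's weighted input into an "excitement" value in (0, 1).
          return 1.0 / (1.0 + math.exp(-z))

      # Edges start with random values, as the note describes.
      w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
      b1 = [0.0, 0.0]
      w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
      b2 = 0.0

      # The training data is a series of already-labeled samples.
      data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

      def forward(x):
          h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
          out = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
          return h, out

      def loss():
          return sum((forward(x)[1] - t) ** 2 for x, t in data)

      loss_before = loss()
      lr = 0.5
      for _ in range(10000):
          for x, t in data:
              h, out = forward(x)
              # Backpropagation: start with the output node's error...
              d_out = (out - t) * out * (1 - out)
              # ...and work backward, crediting each hidden node for its contribution.
              d_h = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
              for j in range(2):
                  w2[j] -= lr * d_out * h[j]
                  b1[j] -= lr * d_h[j]
                  for i in range(2):
                      w1[j][i] -= lr * d_h[j] * x[i]
              b2 -= lr * d_out
      loss_after = loss()
      ```

      After training, `loss_after` is lower than `loss_before`: adjusting each edge according to how much it contributed to the error is exactly the credit-assignment step the note summarizes.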

  39. Aug 2017
    1. So this transforms how we do design. The human engineer now says what the design should achieve, and the machine says, "Here's the possibilities." Now in her job, the engineer's job is to pick the one that best meets the goals of the design, which she knows as a human better than anyone else, using human judgment and expertise.

      A post on the Keras blog was talking about eventually using AI to generate computer programs to match certain specifications. Gruber is saying something very similar.

  40. Apr 2017
  41. Mar 2017
    1. Great overview and commentary. However, I would have liked some more insight into the ethical ramifications and potential destructiveness of an ASI-system as demonstrated in the movie.

  42. Feb 2017
  43. Jan 2017
    1. According to a 2015 report by Incapsula, 48.5% of all web traffic are by bots.

      ...

      The majority of bots are "bad bots" - scrapers that are harvesting emails and looking for content to steal, DDoS bots, hacking tools that are scanning websites for security vulnerabilities, spammers trying to sell the latest diet pill, ad bots that are clicking on your advertisements, etc.

      ...

      Content on websites such as dev.to are reposted elsewhere, word-for-word, by scrapers programmed by Black Hat SEO specialists.

      ...

      However, a new breed of scrapers exist - intelligent scrapers. They can search websites for sentences containing certain keywords, and then rewrite those sentences using "article spinning" techniques.

  44. Dec 2016
    1. The team on Google Translate has developed a neural network that can translate language pairs for which it has not been directly trained. "For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English."

  45. Sep 2016
  46. Jun 2016
  47. May 2016
  48. Apr 2016
    1. We should have control of the algorithms and data that guide our experiences online, and increasingly offline. Under our guidance, they can be powerful personal assistants.

      Big business has been very militant about protecting their "intellectual property". Yet they regard every detail of our personal lives as theirs to collect and sell at whim. What a bunch of little darlings they are.

  49. Jan 2016
  50. Dec 2015
    1. OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
    1. Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
  51. Nov 2015
    1. TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.

      https://github.com/rhiever/tpot TPOT (Tree-based Pipeline Optimization Tool) is built on numpy, scipy, pandas, scikit-learn, and deap.

  52. Jul 2015
  53. May 2015
    1. In this work, Lee and Brunskill fit a separate Knowledge Tracing model to each student’s data. This involv ed fitting four parameters: initial probability o f mastery, probability of transitioning from unmastered to mastered, probability of giving an incorrect answer if the student has mastered the skill, and probability of giving a correct answer if the student has not mastered the skill. Each student’s model is fit using a combination of Expectation Maximization (EM) combined with a brute force search

      First comment
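      The four parameters listed in the highlight are the standard Bayesian Knowledge Tracing quantities, and the per-response update they imply can be sketched directly. This is a generic illustration of that model, not Lee and Brunskill's code, and the parameter values below are made up for the example.

      ```python
      # The four Knowledge Tracing parameters (illustrative values only).
      p_init = 0.3    # initial probability of mastery
      p_learn = 0.2   # probability of transitioning from unmastered to mastered
      p_slip = 0.1    # probability of an incorrect answer despite mastery
      p_guess = 0.25  # probability of a correct answer without mastery

      def kt_update(p_mastery, correct):
          """One Bayesian update of the mastery estimate after an observed answer."""
          if correct:
              num = p_mastery * (1 - p_slip)
              denom = num + (1 - p_mastery) * p_guess
          else:
              num = p_mastery * p_slip
              denom = num + (1 - p_mastery) * (1 - p_guess)
          posterior = num / denom
          # Account for the chance of learning on this practice opportunity.
          return posterior + (1 - posterior) * p_learn

      p = p_init
      for answer in [True, True, False, True]:  # one student's observed responses
          p = kt_update(p, answer)
      ```

      Fitting a separate model per student, as the paper does, means searching for the four parameter values (e.g. via EM plus brute force) that best explain that student's response sequence under this update rule.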

  54. Nov 2014
    1. The Most Terrifying Thought Experiment of All Time

      TLDR: A thought experiment holding that, merely by knowing about it, you are contributing to humanity's enslavement by an all-powerful AI.