- Dec 2020
-
-
Despite being seen as a leader and a rising star in the Canadian AI sector, Element AI faced difficulties getting products to market.
Like many other AI startups, they faced productisation problems. It looks like they have go-to-market (GTM) problems too.
-
Element AI had more than 500 employees, including 100 PhDs.
500 employees is indeed large, and a team of 100 PhDs is very large as well. They could probably tackle many difficult AI problems!
-
In 2017, the startup raised what was then a historic $137.5 million Series A funding round from a group of notable investors including Intel, Microsoft, National Bank of Canada, Development Bank of Canada (BDC), NVIDIA, and Real Ventures.
This was indeed a historic amount raised! Probably because of Yoshua Bengio, one of the godfathers of AI!
-
- Nov 2020
-
www.schneier.com www.schneier.com
-
AI is not analogous to the big science projects of the previous century that brought us the atom bomb and the moon landing. AI is a science that can be conducted by many different groups with a variety of different resources, making it closer to computer design than the space race or nuclear competition. It doesn’t take a massive government-funded lab for AI research, nor the secrecy of the Manhattan Project. The research conducted in the open science literature will trump research done in secret because of the benefits of collaboration and the free exchange of ideas.
AI research is not analogous to space research or an arms race.
It can be conducted by different groups with a variety of different resources. Research conducted in the open is likely to do better because of the benefits of collaboration.
-
- Oct 2020
-
about.fb.com about.fb.com
-
Facebook AI is introducing M2M-100, the first multilingual machine translation (MMT) model that can translate between any pair of 100 languages without relying on English data. It’s open sourced here. When translating, say, Chinese to French, most English-centric multilingual models train on Chinese to English and English to French, because English training data is the most widely available. Our model directly trains on Chinese to French data to better preserve meaning. It outperforms English-centric systems by 10 points on the widely used BLEU metric for evaluating machine translations. M2M-100 is trained on a total of 2,200 language directions — or 10x more than previous best, English-centric multilingual models. Deploying M2M-100 will improve the quality of translations for billions of people, especially those that speak low-resource languages. This milestone is a culmination of years of Facebook AI’s foundational work in machine translation. Today, we’re sharing details on how we built a more diverse MMT training data set and model for 100 languages. We’re also releasing the model, training, and evaluation setup to help other researchers reproduce and further advance multilingual models.
Summary of the 1st AI model from Facebook that translates directly between languages (not relying on English data)
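For reference, a minimal sketch of trying the open-sourced model, assuming the Hugging Face transformers port and the public facebook/m2m100_418M checkpoint (not necessarily the exact setup Facebook used in production):
```python
# Minimal sketch: direct Chinese -> French translation with M2M-100,
# assuming the Hugging Face `transformers` port and the public
# facebook/m2m100_418M checkpoint.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "zh"                                  # source: Chinese
encoded = tokenizer("生活就像一盒巧克力。", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),       # target: French
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```
No English pivot is involved: the model generates French directly from the Chinese input.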
-
-
www.coe.int www.coe.int
-
AI and control of Covid-19 coronavirus. (n.d.). Artificial Intelligence. Retrieved October 15, 2020, from https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus
-
-
thispersondoesnotexist.com thispersondoesnotexist.com
-
-
-
www.theatlantic.com www.theatlantic.com
-
DiResta, Renée. ‘The Supply of Disinformation Will Soon Be Infinite’. The Atlantic, 20 September 2020. https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/.
-
- Sep 2020
-
www.telegraph.co.uk www.telegraph.co.uk
-
Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours
Still an incredible headline...
-
-
psyarxiv.com psyarxiv.com
-
Yang, Scott Cheng-Hsin, Chirag Rank, Jake Alden Whritner, Olfa Nasraoui, and Patrick Shafto. ‘Unifying Recommendation and Active Learning for Information Filtering and Recommender Systems’. Preprint. PsyArXiv, 25 August 2020. https://doi.org/10.31234/osf.io/jqa83.
Tags
- lang:en
- recommendation accuracy
- experimental approach
- AI
- Internet
- predictive accuracy
- is:preprint
- exploration-exploitation tradeoff
- artificial intelligence
- algorithms
- recommender system
- parameterized model
- machine learning
- information filtering
- computer science
- cognitive science
- active learning
Annotators
URL
-
-
wip.mitpress.mit.edu wip.mitpress.mit.edu
-
Building the New Economy · Works in Progress. (n.d.). Works in Progress. Retrieved June 16, 2020, from https://wip.mitpress.mit.edu/new-economy
-
- Aug 2020
-
-
Kirkwood. I. (2020) HERE’S HOW #CDNTECH COMPANIES ARE PITCHING IN DURING COVID-19. Betakit. Retrieved from:https://betakit.com/heres-how-cdntech-companies-are-pitching-in-during-the-covid-19-outbreak/
-
- Jul 2020
-
ibuildmyideas.substack.com ibuildmyideas.substack.com
-
-
-
www.youtube.com www.youtube.com
-
Centre for Effective Altruism. (2020, June 13 & 14). EAGxVirtual 2020 Virtual Conference. https://www.youtube.com/playlist?list=PLwp9xeoX5p8NfF4UmWcwV0fQlSU_zpHqc
-
-
www.youtube.com www.youtube.com
-
American Philosophical Society. (2020, June 08). Evidence Symposium. YouTube. https://www.youtube.com/playlist?list=PLoKwLGnyZL4Ds5cQo5muFMg8zKXK4KobH
-
-
www.nesta.org.uk www.nesta.org.uk
-
Nesta, (2020, May 15). Invisible work: Nesta talks to John Howkins. https://www.nesta.org.uk/event/live-stream-invisible-work/
-
- Jun 2020
-
www.forbes.com www.forbes.com
-
Google’s novel response has been to compare each app to its peers, identifying those that seem to be asking for more than they should, and alerting developers when that’s the case. In its update today, Google says “we aim to help developers boost the trust of their users—we surface a message to developers when we think their app is asking for a permission that is likely unnecessary.”
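Google does not publish the details of this peer comparison, but the basic idea can be illustrated with a toy sketch that flags any permission an app requests which few of its peer apps request (the peer sets and threshold below are invented for illustration, not Google's actual system):
```python
# Toy illustration of peer-group permission comparison (not Google's
# actual method): flag permissions that are rare among an app's peers.
from collections import Counter

def unusual_permissions(app_perms, peer_perms, threshold=0.2):
    """Permissions the app requests but fewer than `threshold` of peers use."""
    counts = Counter(p for perms in peer_perms for p in set(perms))
    n_peers = len(peer_perms)
    return {p for p in app_perms if counts[p] / n_peers < threshold}

peers = [
    {"INTERNET", "CAMERA"},
    {"INTERNET"},
    {"INTERNET", "CAMERA", "VIBRATE"},
    {"INTERNET", "CAMERA"},
    {"INTERNET"},
]
app = {"INTERNET", "CAMERA", "READ_CONTACTS"}   # hypothetical flashlight app
print(unusual_permissions(app, peers))          # -> {'READ_CONTACTS'}
```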
-
-
www.weforum.org www.weforum.org
-
How COVID-19 revealed 3 critical AI procurement blindspots. (n.d.). World Economic Forum. Retrieved June 22, 2020, from https://www.weforum.org/agenda/2020/06/how-covid-19-revealed-3-critical-blindspots-ai-governance-procurement/
Tags
- app
- lang:en
- blindspot
- AI
- risk
- COVID-19
- transparency
- contact tracing
- diligence
- chatbots
- diagnostics
- citation
- procurement
- is:blog
- prediction
- fairness
Annotators
URL
-
-
bobheadxi.dev bobheadxi.dev
-
5A85F3
I have signed up for Hypothesis and verified my email so I can leave you the following comment:
Long-time reader, first-time poster here. Greatest blog of all time.
-
-
psyarxiv.com psyarxiv.com
-
Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2019, December 4). Citizens Versus the Internet: Confronting Digital Challenges With Cognitive Tools. https://doi.org/10.31234/osf.io/ky4x8
Tags
- nudging
- lang:en
- self-nudging
- fake news
- online manipulation
- AI
- algorithm
- digital
- reasoning
- internet
- is:preprint
- technocognition
- artificial intelligence
- attention economy
- disinformation
- decision aid
- behavioral policy
- cognitive tools
- decision autonomy
- boosting
- online behavior
- misinformation
- choice architecture
Annotators
URL
-
-
scisight.apps.allenai.org scisight.apps.allenai.org
-
singularityhub.com singularityhub.com
-
Gent, Edd. ‘Robots to the Rescue: How They Can Help During Coronavirus (and Future Pandemics)’. Singularity Hub (blog), 1 April 2020. https://singularityhub.com/2020/04/01/robots-to-the-rescue-how-they-can-help-during-coronavirus-and-future-pandemics/.
-
-
www.scs.cmu.edu www.scs.cmu.edu
-
Young, V. A. (2020, May 20). Nearly Half Of The Twitter Accounts Discussing ‘Reopening America’ May Be Bots. Carnegie Mellon School of Computer Science. https://www.scs.cmu.edu/news/nearly-half-twitter-accounts-discussing-%E2%80%98reopening-america%E2%80%99-may-be-bots
-
-
-
Kurzweil, R. (2020 May 19). AI-Powered Biotech Can Help Deploy a Vaccine In Record Time. Wired. https://www.wired.com/story/opinion-ai-powered-biotech-can-help-deploy-a-vaccine-in-record-time/
-
-
www.metascience2019.org www.metascience2019.org
-
Yang Yang: The Replicability of Scientific Findings Using Human and Machine Intelligence (Video). Metascience 2019 Symposium. https://www.metascience2019.org/presentations/yang-yang/
-
-
www.pnas.org www.pnas.org
-
Yang, Y., Youyou, W., & Uzzi, B. (2020). Estimating the deep replicability of scientific findings using human and artificial intelligence. Proceedings of the National Academy of Sciences, 117(20), 10762–10768. https://doi.org/10.1073/pnas.1909046117
-
- May 2020
-
www.javatpoint.com www.javatpoint.com
-
Machine learning has a limited scope
-
AI is a bigger concept to create intelligent machines that can simulate human thinking capability and behavior, whereas, machine learning is an application or subset of AI that allows machines to learn from data without being programmed explicitly
-
-
expertsystem.com expertsystem.com
-
Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed
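As a concrete illustration of "learning from experience without being explicitly programmed", here is a minimal scikit-learn sketch (the toy data is invented): no decision rule is hand-coded; the model induces one from labelled examples and applies it to new data.
```python
# Minimal "learn from data" sketch: the decision rule is induced from
# labelled examples rather than written by hand.
from sklearn.tree import DecisionTreeClassifier

# Invented toy data: [hours_studied, hours_slept] -> passed exam (1) or not (0)
X = [[1, 4], [2, 8], [6, 7], [8, 5], [3, 6], [9, 8]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[7, 6]]))   # prediction for an unseen example
```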
-
-
www.investopedia.com www.investopedia.com
-
machines tend to be designed for the lowest possible risk and the least casualties
why is this a problem?
-
machines must weigh the consequences of any action they take, as each action will impact the end result
-
goals of artificial intelligence include learning, reasoning, and perception
-
refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions
-
-
www.thelancet.com www.thelancet.com
-
Schwalbe, N., & Wahl, B. (2020). Artificial intelligence and the future of global health. The Lancet, 395(10236), 1579–1586. https://doi.org/10.1016/S0140-6736(20)30226-9
-
-
www.ft.com www.ft.com
-
Multiple articles from Financial Times - Future of AI and Digital Healthcare
-
-
-
Hope, T., Borchardt, J., Portenoy, J., Vasan, K., & West, J. (2020, May 6). Exploring the COVID-19 network of scientific research with SciSight. Medium. https://medium.com/ai2-blog/exploring-the-covid-19-network-of-scientific-research-with-scisight-f75373320a8c
-
-
catalyst.nejm.org catalyst.nejm.org
-
Guney S., Daniels C., & Childers Z.. (2020 April 30). Using AI to Understand the Patient Voice During the Covid-19 Pandemic. Catalyst Non-Issue Content, 1(2). https://doi.org/10.1056/CAT.20.0103
-
-
www.webpurify.com www.webpurify.com
-
extensionworkshop.com extensionworkshop.com
-
You can now distribute your add-on. Note, however, that your add-on may still be subject to further review, if it is you’ll receive notification of the outcome of the review later.
-
- Apr 2020
-
en.wikipedia.org en.wikipedia.org
-
As the largest Voronoi regions belong to the states on the frontier of the search, this means that the tree preferentially expands towards large unsearched areas.
-
inherently biased to grow towards large unsearched areas of the problem
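A minimal 2-D sketch of that sample-and-extend loop (step size, bounds and iteration count are arbitrary choices here): because uniform samples fall in large unexplored regions more often, the nearest node gets pulled toward them, which is the Voronoi bias described above.
```python
# Minimal 2-D RRT sketch: sample a random point, find the nearest tree
# node, and extend it a fixed step toward the sample.
import math
import random

def rrt(start, n_iters=500, step=0.05, bounds=(0.0, 1.0)):
    nodes = [start]                  # tree vertices
    parent = {start: None}           # edges: child -> parent
    lo, hi = bounds
    for _ in range(n_iters):
        sample = (random.uniform(lo, hi), random.uniform(lo, hi))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(1.0, step / d)       # move at most `step` toward the sample
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        nodes.append(new)
        parent[new] = near
    return nodes, parent

nodes, parent = rrt(start=(0.5, 0.5))
print(len(nodes), "nodes grown from the start state")
```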
-
-
towardsdatascience.com towardsdatascience.com
-
www.nltk.org www.nltk.org
-
Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit Steven Bird, Ewan Klein, and Edward Loper
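A minimal taste of what the book covers, assuming the relevant NLTK data packages can be downloaded:
```python
# Minimal NLTK sketch: tokenize a sentence and tag parts of speech.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer data
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger data

tokens = nltk.word_tokenize("The quick brown fox jumps over the lazy dog.")
print(nltk.pos_tag(tokens))   # [('The', 'DT'), ('quick', 'JJ'), ...]
```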
-
-
www.khalidalnajjar.com www.khalidalnajjar.com
-
How to setup and use Stanford CoreNLP Server with Python (Khalid Alnajjar, August 20, 2017, Natural Language Processing (NLP)). Stanford CoreNLP is a great Natural Language Processing (NLP) tool for analysing text. Given a paragraph, CoreNLP splits it into sentences then analyses it to return the base forms of words in the sentences, their dependencies, parts of speech, named entities and many more. Stanford CoreNLP not only supports English but also 5 other languages: Arabic, Chinese, French, German and Spanish. Stanford CoreNLP is implemented in Java. In some cases (e.g. your main code-base is written in a different language or you simply do not feel like coding in Java), you can set up a Stanford CoreNLP Server and then access it through an API. In this post, I will show how to set up a Stanford CoreNLP Server locally and access it using Python.
-
-
stanfordnlp.github.io stanfordnlp.github.io
-
CoreNLP includes a simple web API server for servicing your human language understanding needs (starting with version 3.6.0). This page describes how to set it up. CoreNLP server provides both a convenient graphical way to interface with your installation of CoreNLP and an API with which to call CoreNLP using any programming language. If you’re writing a new wrapper of CoreNLP for using it in another language, you’re advised to do it using the CoreNLP Server.
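A minimal sketch of that workflow: start the bundled server from the unpacked CoreNLP directory, then post text to it from Python (port, memory setting and annotator list are just example choices):
```python
# Minimal sketch of querying a locally running CoreNLP server from Python.
# Start the server first, from the CoreNLP distribution directory, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
import json
import requests

props = {"annotators": "tokenize,ssplit,pos,lemma", "outputFormat": "json"}
resp = requests.post(
    "http://localhost:9000/",
    params={"properties": json.dumps(props)},
    data="Stanford CoreNLP splits text into sentences and tags each token.".encode("utf-8"),
)
for sentence in resp.json()["sentences"]:
    print([(tok["word"], tok["pos"]) for tok in sentence["tokens"]])
```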
-
-
stanfordnlp.github.io stanfordnlp.github.io
-
Programming languages and operating systems: Stanford CoreNLP is written in Java; recent releases require Java 1.8+. You need to have Java installed to run CoreNLP. However, you can interact with CoreNLP via the command-line or its web service; many people use CoreNLP while writing their own code in Javascript, Python, or some other language. You can use Stanford CoreNLP from the command-line, via its original Java programmatic API, via the object-oriented simple API, via third party APIs for most major modern programming languages, or via a web service. It works on Linux, macOS, and Windows.
License: The full Stanford CoreNLP is licensed under the GNU General Public License v3 or later. More precisely, all the Stanford NLP code is GPL v2+, but CoreNLP uses some Apache-licensed libraries, and so our understanding is that the composite is correctly licensed as v3+.
-
Stanford CoreNLP provides a set of human language technology tools. It can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and syntactic dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract particular or open-class relations between entity mentions, get the quotes people said, etc. Choose Stanford CoreNLP if you need: An integrated NLP toolkit with a broad range of grammatical analysis tools A fast, robust annotator for arbitrary texts, widely used in production A modern, regularly updated package, with the overall highest quality text analytics Support for a number of major (human) languages Available APIs for most major modern programming languages Ability to run as a simple web service
-
-
opencv.org opencv.org
-
OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has more than 47 thousand people of user community and estimated number of downloads exceeding 18 million. The library is used extensively in companies, research groups and by governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many startups such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan. It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. A full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
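One of the "detect and recognize faces" uses mentioned above, as a minimal Python sketch using the Haar cascade that ships with the opencv-python package (the image path is a placeholder):
```python
# Minimal OpenCV face-detection sketch using the bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("people.jpg")          # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:              # draw a box around each detection
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("people_faces.jpg", img)
print(f"Detected {len(faces)} face(s)")
```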
-
-
-
Punn, N. S., Sonbhadra, S. K., & Agarwal, S. (2020). COVID-19 Epidemic Analysis using Machine Learning and Deep Learning Algorithms [Preprint]. Health Informatics. https://doi.org/10.1101/2020.04.08.20057679
-
-
github.com github.com
-
that can be partially automated but still require human oversight and occasional intervention
-
but then have a tool that will show you each of the change sites one at a time and ask you either to accept the change, reject the change, or manually intervene using your editor of choice.
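A minimal sketch of that accept/reject workflow for a regex replacement, as a hypothetical helper (not the tool the page describes): each match is shown and the user accepts it, rejects it, or types a manual replacement.
```python
# Minimal "review each change site" sketch: for every regex match, ask the
# user to accept (y), reject (n), or type a manual replacement (e).
import re

def interactive_replace(text, pattern, replacement):
    out, last = [], 0
    for m in re.finditer(pattern, text):
        out.append(text[last:m.start()])
        answer = input(f"Replace {m.group(0)!r} -> {replacement!r}? [y/n/e] ").lower()
        if answer == "y":
            out.append(replacement)                  # accept the change
        elif answer == "e":
            out.append(input("Type replacement: "))  # manual intervention
        else:
            out.append(m.group(0))                   # reject: keep original
        last = m.end()
    out.append(text[last:])
    return "".join(out)

print(interactive_replace("foo bar foo", r"foo", "baz"))
```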
-
- Mar 2020
-
www.attorneyio.com www.attorneyio.com
-
Humans can no longer compete with AI in chess. They should not be without AI in litigation either.
-
Just as chess players marshall their 16 chess pieces in a battle of wits, attorneys must select from millions of cases in order to present the best legal arguments.
-
-
-
www.cmswire.com www.cmswire.com
-
Attorney IO, a provider of an artificial intelligence-driven service that helps attorneys manage their legal documents
-
-
www.linkedin.com www.linkedin.com
-
Now that we’re making breakthroughs in artificial intelligence, there’s a deeply cemented belief that the human brain works as a deterministic, mathematical process that can be replicated exactly by a Turing machine.
-
-
www.quora.com www.quora.com
-
It doesn’t. What it does do is teach AI to recognize various things and fool you into thinking you’re getting better security. When you get something for free, you are the product.
-
-
-
Overestimating robots and AI underestimates the very people who can save us from this pandemic: Doctors, nurses, and other health workers, who will likely never be replaced by machines outright. They’re just too beautifully human for that.
Yes - we used to have human elevator operators and telephone operators who would manually connect your calls. We now have automated check-out lines in stores and toll booths. In the future, we will have automated taxis and, yes, even some automated health care. Automated healthcare will enable better healthcare coverage with the same number of healthcare workers (or the same level of coverage with fewer workers). There can be good things or bad things about it - the way we do it will absolutely matter. We just need to think through how best to obtain the good without much of the bad ... rather than assuming it won't ever happen.
-
the demand for products will keep climbing as well, as we’re seeing with this hiring bonanza.
Probably not. The increase in demand is a result of social distancing and hoarding. This is not a steady state. The demand for many things will return to normal (or below) once people figure out what they are using and what is still available. For example - you don't use that much more toilet paper when you are at home ... but you buy more if you don't know when it will be available again.
-
Last week, Amazon officials announced that in response to the coronavirus they were hiring 100,000 additional humans to work in fulfillment centers and as delivery drivers, showing that not even this mighty tech company can do without people.
Amazon has adopted automation in a very big and increasing way. Just because it has not automated everything yet, doesn't mean that complete automation isn't possible. We already know automated delivery is in the works. Amazon, Uber and Google are all working on the details of autonomous navigation ... and the ultimate result will absolutely impact future drivers (pun intended).
-
Why haven’t the machines saved us yet?
because machines don't buy tickets to fly on planes and vacation on cruise ships.
-
And that’s all because of the vulnerabilities of the human worker.
It has more to do with the vulnerabilities of the human traveler and the human guest (and less to do with the workers). The demand for these services has simply gone down while people try to avoid spreading the virus.
-
-
www.thenation.com www.thenation.com
-
Ai Weiwei,
-
-
artificialintelligence-news.com artificialintelligence-news.com
-
The system has been criticised due to its method of scraping the internet to gather images and storing them in a database. Privacy activists say the people in those images never gave consent. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said in a recent interview with CoinDesk. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”
-
-
-
According to the Police Authority's guidelines, an impact assessment must be carried out before new police tools are introduced if they involve sensitive processing of personal data. No such assessment has been made for the tool in question.
Swedish police have used Clearview AI without the required impact assessment ('konsekvensbedömning') having been performed.
In other words, Swedish police have used a facial-recognition system without being allowed to do so.
This is a clear breach of human rights.
Swedish police have lied about this, as reported by Dagens Nyheter.
-
-
www.lastampa.it www.lastampa.it
-
New technologies are present in everyone's life, both at work and in everyday life. Often we do not even realize that we are interacting with automated systems or scattering data about our personal identity across the network. This produces a serious asymmetry between those who extract the data (for their own interests) and those who supply it (without knowing). To obtain certain services, some sites ask us to confirm that we are not robots, but in reality the question should be turned around
-
"Ethics must accompany the entire cycle of technology development: from the choice of research directions through design, production, distribution, and the end user. In this sense Pope Francis has spoken of 'algor-ethics' (algoretica)."
-
-
library.educause.edu library.educause.edu
-
However, there is skepticism about AI’s ability to replace human teaching in activities such as judging writing style, and some have expressed concern that policy makers could use AI to justify replacing (young) human labor.
Maha describes here the primary concern I have with the pursuit of both AI and adaptive technologies in education. Not that the designers of such tools are attempting to replace human interaction, but that the spread of "robotic" educational tools will accelerate the drive to further reduce human-powered teaching and learning, leading perhaps to class-based divisions in educational experiences like Maha imagines here.
AI and adaptive tool designers often say that they are hoping their technologies will free up time for human teachers to focus on more impactful educational practices. However, we already see how technologies that reduce human labor often lead to further reductions in the use of human teachers — not their increase. As Maha points out, that's a social and economic issue, not a technology issue. If we focus on building tools rather than revalorizing human-powered education, I fear we are accelerating the devaluation of education already taking place.
-
- Jan 2020
-
www.amazon.com www.amazon.com
-
Norbert Wiener was a mathematician with extraordinarily broad interests. The son of a Harvard professor of Slavic languages, Wiener was reading Dante and Darwin at seven, graduated from Tufts at fourteen, and received a PhD from Harvard at eighteen. He joined MIT's Department of Mathematics in 1919, where he remained until his death in 1964 at sixty-nine. In Ex-Prodigy, Wiener offers an emotionally raw account of being raised as a child prodigy by an overbearing father. In I Am a Mathematician, Wiener describes his research at MIT and how he established the foundations for the multidisciplinary field of cybernetics and the theory of feedback systems. This volume makes available the essence of Wiener's life and thought to a new generation of readers.
-
-
-
He was sitting on a large crate containing Boston Dynamics’ robot dog, Spot.
They should put heads on them to make them less scary.
-
-
helpx.adobe.com helpx.adobe.com
-
Cut and erase artwork Transform your artwork by cutting and erasing content.
-
Transform artwork Learn how to transform artwork with the Selection tool, Transform panel, and various transform tools.
-
-
helpx.adobe.com helpx.adobe.com
-
command
-
-
helpx.adobe.com helpx.adobe.com
-
Scale objects Scaling an object enlarges or reduces it horizontally (along the x axis), vertically (along the y axis), or both. Objects scale relative to a reference point which varies depending on the scaling method you choose. You can change the default reference point for most scaling methods, and you can also lock the proportions of an object.
-
-
www.dazeddigital.com www.dazeddigital.com
-
Inside the Infinite Imagination of a Computer -- James Bridle
-
-
outline.com outline.com
-
The underlying guiding idea of a “trustworthy AI” is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour.
yikes
-
-
-
Platform capitalism, digital technology, and the future of work
-
- Dec 2019
-
en.wikipedia.org en.wikipedia.org
-
Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy.[79] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS).[80] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
-
-
www.nytimes.com www.nytimes.com
-
This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay “As We May Think,” which conjured up a “memex” machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled “Man-Computer Symbiosis,” and the computer designer Douglas Engelbart, who wrote “Augmenting Human Intellect.” They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop.
Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights on the creation of AI.
-
Thompson’s point is that “artificial intelligence” — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as “intelligence amplification,” the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers.
Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.
-
-
www.wilsoncenter.org www.wilsoncenter.org
-
Four databases of citizen science and crowdsourcing projects — SciStarter, the Citizen Science Association (CSA), CitSci.org, and the Woodrow Wilson International Center for Scholars (the Wilson Center Commons Lab) — are working on a common project metadata schema to support data sharing with the goal of maintaining accurate and up to date information about citizen science projects. The federal government is joining this conversation with a cross-agency effort to promote citizen science and crowdsourcing as a tool to advance agency missions. Specifically, the White House Office of Science and Technology Policy (OSTP), in collaboration with the U.S. Federal Community of Practice for Citizen Science and Crowdsourcing (FCPCCS),is compiling an Open Innovation Toolkit containing resources for federal employees hoping to implement citizen science and crowdsourcing projects. Navigation through this toolkit will be facilitated in part through a system of metadata tags. In addition, the Open Innovation Toolkit will link to the Wilson Center’s database of federal citizen science and crowdsourcing projects.These groups became aware of their complementary efforts and the shared challenge of developing project metadata tags, which gave rise to the need of a workshop.
Sense Collective's Climate Tagger API and Pool Party Semantic Web plug-in are perfectly suited to support The Wilson Center's metadata schema project. Creating a common metadata schema that is used across multiple organizations working within the same domain, with similar (and overlapping) data and data types, is an essential step towards realizing collective intelligence. There is significant redundancy that consumes limited resources as organizations often perform the same type of data structuring. Interoperability issues between organizations, their metadata semantics and serialization methods, prevent cumulative progress as a community. Sense Collective's MetaGrant program is working to provide a shared infrastructure for NGOs, social impact investment funds and social impact bond programs to help rapidly address the problems being tackled by this awesome project of The Wilson Center. Now let's extend the coordinated metadata semantics to 1000 more organizations and incentivize the citizen science volunteers who make this possible, with a closer connection to the local benefits they produce through their efforts. With integration into social impact bond programs and public/private partnerships, we are able to incentivize collective action in ways that match the scope and scale of the problems we face.
-
- Nov 2019
-
www.cleveroad.com www.cleveroad.com
-
What’s the Difference Between AI, Machine Learning and Data Science?
-
-
ignitedlabs.education.asu.edu ignitedlabs.education.asu.edu
-
Tech Literacy Resources
This website is the "Resources" archive for the IgniteED Labs at Arizona State University's Mary Lou Fulton Teachers College. The IgniteED Labs allow students, staff, and faculty to explore innovative and emerging learning technology such as virtual reality (VR), artificial intelligence (AI), 3-D printing, and robotics. The left side of this site provides several resources on understanding and effectively using various technologies available in the IgniteED labs. Each resource directs you to external websites, such as product tutorials on YouTube, setup guides, and the products' websites. The right column, "Tech Literacy Resources," contains a variety of guides on how students can effectively and strategically use different technologies. Resources include "how-to" user guides, online academic integrity policies, and technology support services. Rating: 9/10
-
-
-
However, PIPA is the agency's first standalone bot, meaning it can be used across multiple government agencies. Crucially, the bot can be embedded within web and mobile apps, as well as within third-party personal assistants, such as Google Home and Alexa. According to Keenan, the gang of five digital assistants released so far by the DHS have answered "more than 2.3 million questions, reducing the need for people to have to pick up a phone or come into a service centre for help.” “This is what our digital transformation program is all about – making life simpler and easier for all Australians.”
Scope of PIPA
-
-
www.computerworld.com.au www.computerworld.com.au
-
Human Services has a number of public-facing chatbots already. The newest of them is ‘Charles’, launched last year, which offers support for the government’s MyGov service. Others include ‘Sam’ and ‘Oliver’, both of which launched in 2017. The department’s customer-facing digital assistants have so far answered more than 2.3 million questions. Human Services also uses a number of staff-facing chatbots. In November Keenan revealed that the department had launched an Augmented Intelligence Centre of Excellence, which the minister said would boost collaboration with industry, academia and other government entities.
Chatbots that exist
-
-
www.zdnet.com www.zdnet.com
-
The federal government has decided that all Commonwealth entities would benefit from having a chatbot, with the Department of Human Services (DHS) announcing it was working on the development of one that will be ready by the end of 2019.The Platform Independent Personal Assistant -- PIPA -- is expected to "significantly improve the customer experience for users of online government services", according to Minister for Human Services and Digital Transformation Michael Keenan.
Federal Government creating PIPA chatbot
-
-
www.zdnet.com www.zdnet.com
-
Before implementing Alex 2.5 years ago, IP Australia staffers were taking 12,000 calls per month."Now I'm not saying Alex was the only intervention we had, but it was one of the main ones. Acting on the insights we were getting from Alex, we're now down to 5,000 calls per month and still dropping," Stokes said. "The value for money and return on investment is quite good."
IP Australia using a chatbot named Alex to reduce calls received
-
-
en.wikipedia.org en.wikipedia.org
-
In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[167] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[168] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[169] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[170] There were many other explanations and for each there was a corresponding research program underway.
-
Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts
-
The neats: logic and symbolic reasoning
Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[100] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[101] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[102] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[103] Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[104] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do.[105]
The scruffies: frames and scripts
Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[106] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[107] In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[108] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
-
-
en.wikipedia.org en.wikipedia.org
-
Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho,[7] which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and were hired mostly by Xerox. So, Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed their own Lisp machines which were designed to run InterLisp (and later Common Lisp). The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system.
-
In 1979, Russell Noftsker, being convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology.[citation needed] In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers. The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.
-
- Oct 2019
-
conference.nber.org conference.nber.org
-
We live in an age of paradox. Systems using artificial intelligence match or surpass human level performance in more and more domains, leveraging rapid advances in other technologies and driving soaring stock prices. Yet measured productivity growth has fallen in half over the past decade, and real income has stagnated since the late 1990s for a majority of Americans. Brynjolfsson, Rock, and Syverson describe four potential explanations for this clash of expectations and statistics: false hopes, mismeasurement, redistribution, and implementation lags. While a case can be made for each explanation, the researchers argue that lags are likely to be the biggest reason for paradox. The most impressive capabilities of AI, particularly those based on machine learning, have not yet diffused widely. More importantly, like other general purpose technologies, their full effects won't be realized until waves of complementary innovations are developed and implemented. The adjustment costs, organizational changes and new skills needed for successful AI can be modeled as a kind of intangible capital. A portion of the value of this intangible capital is already reflected in the market value of firms. However, most national statistics will fail to capture the full benefits of the new technologies and some may even have the wrong sign
This is for anyone looking deeply into the economics of artificial intelligence, or doing a project on AI with respect to economics. The paper examines how AI might affect our economy and change the way we think about work. The predictions and facts stated here are striking, for example about how, 30 years from now, people might live with government employment where everyone receives an equal amount of payment.
-
-
journals.sagepub.com journals.sagepub.com
-
Despite the potential of emerging technologies to assist persons with cognitive disabilities, significant practical impediments remain to be overcome in commercialization, consumer abandonment, and in the design and development of useful products. Barriers also exist in terms of the financial and organizational feasibility of specific envisioned products, and their limited potential to reach the consumer market. Innovative engineering approaches, effective needs analysis, user-centered design, and rapid evolutionary development are essential to ensure that technically feasible products meet the real needs of persons with cognitive disabilities. Efforts must be made by advocates, designers and manufacturers to promote better integration of future software and hardware systems so that forthcoming iterations of personal support technologies and assisted care systems technologies do not quickly become obsolete. They will need to operate seamlessly across multiple real-world environments in the home, school, community, and workplace
This journal article clearly explains the use of these technologies by people who need special assistance and how that group can leverage them, while also touching on the financial challenges they face.
-
-
library.oapen.org library.oapen.org
-
Elon Musk.
A discussion on this topic between Elon Musk and the Chinese entrepreneur Jack Ma about artificial intelligence (in English).
-
-
www.themandarin.com.au www.themandarin.com.au
-
No matter how well you design a system, humans will end up surprising you with how they use it. “We make it obvious that it’s a bot, a digital assistant, at the start. But sometimes customers overlook that. And they’ll say, ‘are you a bot? What’s going on here? Transfer me through!’ And they’ll get into it quite strongly,” explains David Grilli, AGL’s chatbot product owner
Interesting to note the responses to chatbots
-
-
-
Why artificial general intelligence based on the neocortex without older more complex emotional (reptilian) systems is not the type of threat being proposed...
-
- Sep 2019
-
onezero.medium.com onezero.medium.com
-
At the moment, GPT-2 uses a binary search algorithm, which means that its output can be considered a ‘true’ set of rules. If OpenAI is right, it could eventually generate a Turing complete program, a self-improving machine that can learn (and then improve) itself from the data it encounters. And that would make OpenAI a threat to IBM’s own goals of machine learning and AI, as it could essentially make better than even humans the best possible model that the future machines can use to improve their systems. However, there’s a catch: not just any new AI will do, but a specific type; one that uses deep learning to learn the rules, algorithms, and data necessary to run the machine to any given level of AI.
This is a machine-generated response in 2019. We are clearly closer than most people realize to machines that can pass a text-based Turing Test.
-
-
inside.com inside.com
-
75 countries already using the technology
75 countries already use facial recognition
-
- Aug 2019
-
arxiv.org arxiv.org
-
Encoding SDRs for use in HTM systems
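A minimal sketch of the simplest encoder in that family: a scalar encoder that maps a number in a fixed range to a sparse binary vector with w contiguous active bits, so that nearby values share active bits (the parameters are illustrative, not Numenta's defaults).
```python
# Minimal scalar -> SDR encoder sketch: map a value in [min_val, max_val]
# to an n-bit vector with w contiguous active bits. Nearby values overlap,
# which is what gives SDRs their similarity semantics.
import numpy as np

def encode_scalar(value, min_val=0.0, max_val=100.0, n=400, w=21):
    value = min(max(value, min_val), max_val)      # clip to the valid range
    n_starts = n - w + 1                           # possible start positions
    start = int(round((value - min_val) / (max_val - min_val) * (n_starts - 1)))
    sdr = np.zeros(n, dtype=np.uint8)
    sdr[start:start + w] = 1
    return sdr

a, b = encode_scalar(40.0), encode_scalar(42.0)
print(int(a.sum()), int((a & b).sum()))   # w active bits; nearby values overlap
```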
-
-
www.youtube.com www.youtube.com
-
HTM and SDRs - part of how the brain implements intelligence.
"In this first introductory episode of HTM School, Matt Taylor, Numenta's Open Source Flag-Bearer, walks you through the high-level theory of Hierarchical Temporal Memory in less than 15 minutes."
-
-
towardsdatascience.com towardsdatascience.com
-
Machine learning is an approach to making many similar decisions that involves algorithmically finding patterns in your data and using these to react correctly to brand new data
-
-
distill.pub distill.pub
-
Semantic dictionaries are powerful not just because they move away from meaningless indices, but because they express a neural network’s learned abstractions with canonical examples. With image classification, the neural network learns a set of visual abstractions and thus images are the most natural symbols to represent them. Were we working with audio, the more natural symbols would most likely be audio clips. This is important because when neurons appear to correspond to human ideas, it is tempting to reduce them to words. Doing so, however, is a lossy operation — even for familiar abstractions, the network may have learned a deeper nuance. For instance, GoogLeNet has multiple floppy ear detectors that appear to detect slightly different levels of droopiness, length, and surrounding context to the ears. There also may exist abstractions which are visually familiar, yet that we lack good natural language descriptions for: for example, take the particular column of shimmering light where sun hits rippling water.
nuance beyond words
-
-
bafybeieioeskrvqzljn73hlehsg3vizm7mxxabejyocgaxiqkk2iix74wa.ipfs.w3s.link bafybeieioeskrvqzljn73hlehsg3vizm7mxxabejyocgaxiqkk2iix74wa.ipfs.w3s.link
-
AI relies upon a bet. It is the bet that if you get your syntax (mechanism) right the semantics (meaning) will take care of itself. It is the hope that if computer engineers get the learning feedback process right, a new transhuman intellect will emerge.
-
- Jul 2019
-
www.publicbooks.org www.publicbooks.org
-
AI, especially in popular culture, is often a jumping-off point for dialogue with ourselves about what the future means, sometimes at the expense of understanding the present.
-
- Jun 2019
-
www.wired.com www.wired.com
-
By comparison, Amazon’s Best Seller badges, which flag the most popular products based on sales and are updated hourly, are far more straightforward. For third-party sellers, “that’s a lot more powerful than this Choice badge, which is totally algorithmically calculated and sometimes it’s totally off,” says Bryant.
"Amazon's Choice" is made by an algorithm.
Essentially, "Amazon" is Skynet.
-
- May 2019
-
www.technologyreview.com www.technologyreview.com
-
Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents no matter how little or unintentionally they are involved.
-
-
dougengelbart.org dougengelbart.org
-
a working station that has a visual display screen some three feet on a side; this is his working surface, and is controlled by a computer (his "clerk") with which he can communicate by means of a small keyboard and various other devices
Here's an example of a state of the art workstation in 1962.
Image: by Rees11 at English Wikipedia, transferred to Commons, CC BY-SA 2.5.
-
- Apr 2019
-
jfgagne.ai jfgagne.ai
-
India is not seen as a major player.
-
Global AI Talent Report 2019
India is not to be seen in this. Women's participation is increasing.
-
-
www.businessinsider.com www.businessinsider.com
-
Amazon employs a system that not only tracks warehouse workers' productivity but also can automatically fire them for failing to meet expectations.
The bots now fire humans. AI 2.0.
-
-
www.aitrends.com www.aitrends.com
-
The agency is looking for industry vendors that can provide such a capability, which should also include “topic modeling; text categorization; text clustering; information extraction; named entity resolution; relationship extraction; sentiment analysis; and summarization,” and “may include statistical techniques that can provide a general understanding of the statutory and regulatory text as a whole.”
AI is going to be used to help employees understand regulations. This is a good example of how AI will help us do our jobs better, but isn't there also a risk of employees missing out on crucial exposure and experience, and in the end relying too much on the machine?
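A minimal sketch of one item from that capability list, topic modeling, over a few invented regulatory snippets using scikit-learn (TF-IDF plus NMF; the texts and topic count are placeholders):
```python
# Minimal topic-modeling sketch: TF-IDF features + NMF over invented snippets.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "The licensee shall file an annual safety report with the agency.",
    "Annual reports must disclose all safety incidents and inspections.",
    "Import duties apply to goods exceeding the declared customs value.",
    "Customs declarations must state the value and origin of imported goods.",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]   # top terms per topic
    print(f"topic {i}: {top}")
```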
-
-
streetfightmag.com streetfightmag.com
-
arstechnica.com arstechnica.com
-
We often think about AI “replacing us” with a vision of robots literally doing our jobs, but it’s not going to shake out in quite that way. Look at radiology, for example: with the advances in computer vision, people sometimes talk about AI replacing radiologists. We probably won’t ever get to the point where there’s zero human radiologists. But a very possible future is one where, out of 100 radiologists now, AI lets the top 5 or 10 of them do the job of all the rest. If such a scenario plays out, where does that leave the other 90 or so doctors?
-
-
en.wikipedia.org en.wikipedia.org
-
Machine learning techniques were originally designed for stationary and benign environments in which the training and test data are assumed to be generated from the same statistical distribution.
the best thing ever!
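That stationarity assumption is exactly what adversarial examples break: a tiny, targeted perturbation pushes a test input off the training distribution. A minimal sketch of the fast gradient sign method in PyTorch (the model and input here are untrained stand-ins):
```python
# Minimal FGSM sketch: perturb an input along the sign of the loss gradient
# so a classifier becomes more likely to misclassify it.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)          # stand-in for a trained classifier
x = torch.randn(1, 10, requires_grad=True)
y = torch.tensor([1])                   # true label

loss = F.cross_entropy(model(x), y)
loss.backward()

epsilon = 0.1                           # perturbation budget
x_adv = x + epsilon * x.grad.sign()     # adversarial version of x

print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```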
-
- Mar 2019
-
www.charlottestix.com www.charlottestix.com
-
what EU leadership in AI could look like and what might be needed to get there.
So, the EU strategy is to invest in ethical AI and thereby avoid direct competition with China and the US, while still having a place at the party?
-
-
decryptmedia.com decryptmedia.com
-
“Meditations on Moloch,”
Clicked through to the essay. It appears to be mainly an argument for a super-powerful benevolent general artificial intelligence, of the sort proposed by AGI-maximalist Nick Bostrom.
The money quote:
The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.
🔗 This is a great New Yorker profile of Bostrom, where I learned about his views.
🔗Here is a good newsy profile from the Economist's magazine on the Google unit DeepMind and its attempt to create artificial general intelligence.
-
-
techbuzztalk.com techbuzztalk.com
-
It is no wonder that AI is gaining popularity. Many facts and advantages drive this profitable growth of AI. The essential points are fully presented in the article.
-
-
-
More people work in the shadow mines of content moderation than are officially employed by Facebook or Google. These are the people who keep our Disneyland version of the web spic and span.
-
- Feb 2019
-
davecormier.com davecormier.com
-
Algorithms will privilege some forms of ‘knowing’ over others, and the person writing that algorithm is going to get to decide what it means to know… not precisely, like in the former example, but through their values. If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.
I'm so glad I read Dave's post after having just read Rob Horning's great post, "The Sea Was Not a Mask", also addressing algorithms and YouTube.
-
Some questions to use when discussing why we shouldn’t replace humans with AI (artificial intelligence) for learning
Great discussion of what questions to ask about artificial intelligence and learning from Dave Cormier.
-
-
www.eff.org www.eff.org
-
AI Progress Measurement
-
-
dougengelbart.org dougengelbart.org
-
The summation of human experience is being expanded at a prodigious rate
The prodigious rate itself is expanding; is the scale even conceivable at this time? (Insert the usual stats of YouTube content growing at 300 hours a minute.)
I'm anxious to read whether he anticipates the notion of turning to automation to try to handle this organization - it always seemed that Bush's vision was human-focused.
-
The conceptual framework we seek must orient us toward the real possibilities and problems associated with using modern technology to give direct aid to an individual in comprehending complex situations, isolating the significant factors, and solving problems.
This problem of orientation is more true today than ever and I'm just not convinced that Silicon Valley (however well-intentioned) represents the right group to devise a framework to truly serve EVERYONE.
Anyone interested in joining a grassroots effort to help influence those at the top? Let me know - wkendal-at-gmail
-
executive capability.
All of this focus on process, sub-process and sequencing keeps me thinking of machine-learning and concepts of AI. Seems this executive capability provides differentiation; human-learning.
-
augmentation means
New term for me; seems to break down how we interface with the world. A lot of HCI and learning theory baked in here. Heck, AI is baked in here.
-
-
-
But every single photo on the site has been created by using a special kind of artificial intelligence algorithm called generative adversarial networks (GANs).
These could be actual people. How would we know?
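For reference, the adversarial training loop behind such images, reduced to a minimal 1-D sketch in PyTorch (toy data and tiny networks; real face generators such as StyleGAN are vastly larger but follow the same generator-versus-discriminator game):
```python
# Minimal GAN sketch: a generator learns to mimic samples from N(4, 1.25)
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0                # "real" data
    fake = G(torch.randn(64, 8))                          # generated data

    # Discriminator: label real as 1, fake as 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator label fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(float(G(torch.randn(1000, 8)).mean()))              # should drift toward 4.0
```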
-
- Jan 2019
-
wallstreetcn.com wallstreetcn.com
-
Imagine another scenario, which I call the "transport plane problem": suppose you are the commander of a disaster-relief operation, flying with a small team aboard a transport plane loaded with supplies. It is the only transport plane; if it does not arrive on time, tens of thousands of disaster victims will starve or die of disease, and if it never arrives, hundreds of thousands will not survive. But severe weather has suddenly damaged the plane and it can no longer carry so much weight. Half the people must jump out (assume the supplies cannot be jettisoned), or the plane may crash and kill everyone. Do you order half the team to jump? The transport plane is Bitmain; the disaster victims are today's crypto holders. If Bitmain collapses, the shock to the industry will bankrupt a large number of them. Imagine that one day Bitmain really does go under: mining rigs will be sold off at fire-sale prices, miners will dump their BTC and BCH, BCH will be left on life support, and BTC will fall to a new bottom. There will still be "post-disaster reconstruction", but a great many people will fall in this disaster and never see tomorrow's sun. As a disaster victim, would you rather starve and die of disease out of pity for the half of the team that jumped? As a crypto holder, would you rather watch Bitmain collapse, and endure even a short-term bankruptcy of your own, out of pity for the hundreds or thousands of employees who were laid off?
Comment: The "trolley problem" sparked a protracted debate, and its sibling, the "transport plane problem", will likely prove just as hard to resolve. Faced with such moral dilemmas, people usually judge from their own experience: a protagonist who "pulls the lever so the trolley kills one person" receives far less public condemnation than one who "stands on the footbridge above the tracks and deliberately pushes another person off the bridge to stop the trolley and save five". What about the transport plane problem, then? Is there a better solution than "half the crew jumps out of the plane"? Discussions like this also recall the split among technologists in the field of AI: one camp believes the ultimate purpose of AI is to replace humans, while the other firmly believes AI is meant to augment humans (augmentation). Which camp has the louder voice? The answer doesn't matter. What matters is: be nice.
-
-
www.theatlantic.com www.theatlantic.com
-
The chances that they might miscommunicate and collide will therefore be far smaller.
Theoretically yes, but when we consider the number of engineers, developers, or even human-AI teams pulling these services off, they might still be like the "drivers unfamiliar with the changing traffic regulations".
-
The technology that favored democracy is changing, and as artificial intelligence develops, it might change further.
I would like to see arguments around this as I read further.
-
-
www.sciencedirect.com www.sciencedirect.com
-
By utilizing the Deeplearning4j library [1] for model representation, learning and prediction, KNIME builds upon a well performing open source solution with a thriving community.
-
It is especially thanks to the work of Yann LeCun and Yoshua Bengio (LeCun et al., 2015) that the application of deep neural networks has boomed in recent years. The technique, which utilizes neural networks with many layers and enhanced backpropagation algorithms for learning, was made possible through both new research and the ever increasing performance of computer chips.
-
One of KNIME's strengths is its multitude of nodes for data analysis and machine learning. While its base configuration already offers a variety of algorithms for this task, the plugin system is the factor that enables third-party developers to easily integrate their tools and make them compatible with the output of each other.
-
- Dec 2018
-
wendynorris.com wendynorris.com
-
I also argue later that the challenge of the social–technical gap creates an opportunity to re-focus CSCW as a Simonian science of the artificial (where a science of the artificial is suitably revised from Simon's strictly empiricist grounds).
Simonian Science of the Artificial refers to "a physical symbol system that has the necessary and sufficient means for intelligent action."
From Simon, Herbert, "The Sciences of the Artificial," Third Edition (1996)
-
-
crowdsourcing-class.org crowdsourcing-class.org
Tags
Annotators
URL
-
-
artificial-intelligence-class.org artificial-intelligence-class.org
Tags
Annotators
URL
-
-
www.technologyreview.com www.technologyreview.com
-
-
The study was meant to answer nine central questions about what is preferable: 1. save the life of a human or an animal; 2. stay on course or swerve; 3. save the lives of passengers or pedestrians; 4. the largest number of people or the smallest; 5. men or women; 6. the young or the old; 7. the fat or the thin; 8. pedestrians crossing the road according to the traffic rules or jaywalkers; 9. people of high or low social status.
So eventually the car will analyze every passenger and every pedestrian in the immediate vicinity in order to decide who will have to die?
Tags
Annotators
URL
-
- Nov 2018
-
www.nature.com www.nature.com
Tags
Annotators
URL
-
- Oct 2018
-
library.stanford.edu library.stanford.edu
-
In December of this year, Stanford Libraries will co-host a conference on artificial intelligence with the National Library of Norway in Oslo. I
-
-
www.edsurge.com www.edsurge.com
-
For all the talk about data and learning, Essa offered this blunt assessment: “Pretty much all edtech sucks. And machine learning is not going to improve edtech.” So what’s missing? “It’s not about the data, but how do we apply it. The reason why this technology sucks is because we don’t do good design. We need good design people to understand how this works.”
I'm pretty sure this doesn't make any sense. Also, it is pretty funny.
-
- Sep 2018
-
course.fast.ai course.fast.ai
-
-
-
-
www.nature.com www.nature.com
-
The new learned optical correlator uses existing light to save energy costs compared with an optoelectronic two-layer convolutional neural network (CNN). https://www.sciencedaily.com/releases/2018/08/180802130750.htm
Tags
Annotators
URL
-
-
www.mnemotext.com www.mnemotext.com
-
That’s Dr. Hunter, isn’t it? “By the Way do you mind if I ask you a personal question?
HAL, a supposedly emotion-feigning, ultra-intelligent A.I., has just asked Dave whether he can ask him a "personal question." This should raise a concern in Dave, but it doesn't. Earlier in the film, during the BBC interview, the interviewer asked the astronauts whether HAL had emotions or was just faking them; their reply was that he was definitely programmed to feign emotions, but whether he actually has them remains a mystery. In this scene HAL acknowledges the existence of emotions by asking permission to pose a question that might provoke a negative emotional response, a "personal question." This revelation should have frightened Dave, because it shows that HAL is more than a computer and is capable of more than controlling the ship and maintaining optimal performance: HAL can read emotions and perhaps even be afflicted by them.
-
Hal, you have an enormous responsibility on this mission perhaps the greatest responsibility of any single mission element. You’re the brain and central nervous system of the ship. Your responsibilities include watching over the men in hibernation. Does this ever cause you any lack of confidence?
HAL is given complete control over the ship and everything inside it, even the people. In this way he is more than a tool: he controls, he is not controlled. As portrayed in the film, he can kill any of the crew members at any time, which he does, and he advises the crew on what they should do. This is captured in "The Technological Singularity," where the author states that a super-intelligent AI will be as much of a tool to humanity as we are tools to animals.
-
– Do you know what happened? I’m sorry, Dave. I don’t have enough information.
HAL is having a very human experience at this point in the film. Not only has he killed one of the crewmates and intends to kill the others, but he has some sense that it is wrong and that it will lead to bad things for him. Even though he knows exactly what happened, he knows it would be best to keep it from Dave. This human experience only deepens when he begins to die through the slow and monotonous process of being shut down. He tells Dave that he can feel it and that he is afraid, showing that he has more than intelligence; he also has consciousness.
-
-
www.mnemotext.com www.mnemotext.com
-
Large computer networks (and their associated users) may “wake up” as superhumanly intelligent entities.
The great "AI" has been around for a while now, we human are largely working on a computer machine to think for "itself". As fascinating as it sounds, aren't we just being lazy; depending on a robot to do the work for us. What will happen with the human race if these AI start producing more and better equipped AI. We have a brain that can produce so much if we just decide to do things on our own.
-
performance curves beginning to level off – because of our inability to automate the design work needed to support further hardware improvements. We'd end up with some very powerful hardware, but without the ability to push it further
Addressing the question of the singularity, the author takes an interesting perspective. One rationalization, or opposing view, is that technology is only as informed and intelligent as its creator. Just as the Mores conclude, "the computational competence of single neurons may be far higher than generally believed" and "our present computer hardware might be [] 10 orders of magnitude short [compared to] our heads." This would mean that AI cannot surpass human intelligence as popularly believed. Rather, the article conjectures that if the singularity were to occur, further innovation and improvement could never be made. I take this to be a biological and anatomical argument: the technological constraints of AI make it inferior to the biological makeup of the human brain. The author thus suggests that the singularity can never be fully realized.
-
The maximum possible effectiveness of a software system increases in direct proportion to the log of the effectiveness (i.e., speed, bandwidth, memory capacity) of the underlying hardware.
This simply states that there will always be something restricting what technologies can do. So far in human technological history there has not been a single system that could support beyond-human software. As the quote says, the "mind" of a piece of software is limited by the effectiveness of the hardware, and by the time humans can build something that could effectively contain such a beyond-human brain, there would be countermeasures in place to reduce the risk of an AI taking over the human race. The resource cost would also discourage funding such an experiment: it would be expensive to pay researchers to create compatible parts and programmers to develop something that resembles a human mind but is more advanced. Programming is another problem. Humans do not fully understand the human mind, so it is very unlikely that a programmer will accidentally write a line of code that lets an AI extend further than a human can comprehend. The idea of a technological singularity remains a theory, and this single quote suggests that it is far from achievable.
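If the quoted claim is taken literally, it can be written out as a formula. The following is a loose formalisation of the sentence, not an equation that appears in the essay; k is an unspecified constant of proportionality.

```latex
% Loose formalisation of the quoted claim, not an equation from the essay.
% S_max : maximum possible effectiveness of the software system
% H     : effectiveness of the underlying hardware (speed, bandwidth, memory)
% k     : an unspecified constant of proportionality
S_{\max} = k \cdot \log H
% Consequence: a tenfold hardware gain (H -> 10H) adds only a fixed increment
% k \log 10 to what the software can achieve, so returns diminish sharply.
```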
-
-
machinelearnings.co machinelearnings.co
-
AI and machine learning
-
- Aug 2018
-
www.deutschestextarchiv.de www.deutschestextarchiv.de
-
If I have a book that has understanding for me, a pastor who has a conscience for me, a doctor who judges my diet for me, and so on, then I need not exert myself at all. I have no need to think, so long as I can pay; others will take over that tiresome business for me.
Kant on artificial intelligence
-
- Jul 2018
-
er.educause.edu er.educause.edu
-
What has changed, what remains the same, and what general patterns can be discerned from the past twenty years in the fast-changing field of edtech?
Join me in annotating @mweller's thoughtful exercise at thinking through the last 20 years of edtech. Given Martin's acknowledgements of the caveats of such an exercise, how can we augment this list to tell an even richer story?
-
-
www.insidehighered.com www.insidehighered.com
-
On the other hand, computers cannot read.
This is entirely too complex an assertion to be made without support. It seems easy to understand, and yet it is not.
-
-
databricks.com databricks.com
-
Interesting link - DataFrame statistical functions (Spark):
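For context, the linked post is about the statistical helpers available on Spark DataFrames. A minimal PySpark sketch of the kind of functions it covers; the data and column names here are invented for illustration.

```python
# Minimal PySpark sketch of DataFrame statistical functions; the data and
# column names are invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("df-stats-demo").getOrCreate()
df = spark.createDataFrame(
    [(1, 10.0, 100.0), (2, 20.0, 180.0), (3, 30.0, 330.0)],
    ["id", "x", "y"],
)

df.describe("x", "y").show()                    # count, mean, stddev, min, max
print(df.stat.corr("x", "y"))                   # Pearson correlation
print(df.stat.cov("x", "y"))                    # sample covariance
print(df.stat.approxQuantile("x", [0.5], 0.0))  # approximate median
df.stat.crosstab("id", "x").show()              # contingency table
```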
-
- Jun 2018
-
cognitiveclass.ai cognitiveclass.ai
-
A nice site sponsored by IBM providing lots of training materials for AI, machine learning, and programming.
Tags
Annotators
URL
-
-
spark.apache.org spark.apache.org
-
Collaborative Filtering sample with Apache Spark
This framework can be used for recommender systems.
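A minimal sketch of what the linked sample does, using Spark MLlib's ALS (alternating least squares) collaborative filtering. The toy ratings and parameter values below are assumptions, not the values from the Spark documentation.

```python
# Collaborative filtering with Spark MLlib's ALS; toy data for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("als-demo").getOrCreate()
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0),
     (1, 12, 2.0), (2, 11, 3.0), (2, 12, 5.0)],
    ["userId", "itemId", "rating"],
)
train, test = ratings.randomSplit([0.8, 0.2], seed=42)

als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=8, maxIter=10, regParam=0.1, coldStartStrategy="drop")
model = als.fit(train)

# Evaluate held-out predictions with RMSE.
predictions = model.transform(test)
rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                           predictionCol="prediction").evaluate(predictions)
print("RMSE:", rmse)

# Top-3 item recommendations for every user.
model.recommendForAllUsers(3).show(truncate=False)
```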
-
- May 2018
-
ainowinstitute.org ainowinstitute.org
-
-
www.katecrawford.net www.katecrawford.net
-
-
alleninstitute.org alleninstitute.org
-
-
-
“In short, they have no history of supporting the machine learning research community and instead they are viewed as part of the disreputable ecosystem of people hoping to hype machine learning to make money.”
Whew. Hot.
-
-
www.pewinternet.org www.pewinternet.org
-
www.pewinternet.org www.pewinternet.org
Tags
Annotators
URL
-
-
www.accenture.com www.accenture.com
Tags
Annotators
URL
-
-
www.cbinsights.com www.cbinsights.com
-
globenewswire.com globenewswire.com
-
-
AI will also serve as a global economy booster, by contributing as much as $15.7 trillion to the world economy by 2030 due to productivity and personalization improvements.
-
-
www.theatlantic.com www.theatlantic.com
-
in search of a guiding philosophy
Is it "in search of" or in avoidance of?
-
rather than to comprehend them
Thinking about instructional design here - how verbs like understand and appreciate are to be avoided in learning outcomes because they are difficult to measure - and wondering if this isn't an outcome.
-
Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities.
They are also disadvantaged because their fields are undervalued and underappreciated.
-
Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?
Politically, people have been pushing deregulation for decades, but we have regulations for a reason, as these questions illustrate.
-
algorithms to personalize results and make them available to other parties for political or commercial purposes
Algorithms personalize results for political/commercial purposes
-
internet’s purpose is to ratify knowledge
Ratification? What about augmenting intelligence?
-
Human cognition loses its personal character. Individuals turn into data, and data become regnant
Reminds me of The End of Theory. But if we lose the theory, the human understanding, what will be the consequences?
-
order is now in upheaval
Upheaval from anti-intellectualism as well as AI
-
Would these machines learn to communicate with one another?
Would Skynet be born?
-
His machine, he said, learned to master Go by training itself through practice
-
-
www.wired.com www.wired.com
-
Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough
"Ours is not to reason why..."
-
- Apr 2018
-
astrologynewsservice.com astrologynewsservice.com
-
Astrology proven by artificial intelligence.
-
- Dec 2017
-
www.algorithmdog.com www.algorithmdog.com
-
A brief introduction to the origins and development of artificial intelligence
-
- Nov 2017
-
www.edsurge.com www.edsurge.com
-
Could the data lead colleges to rethink how they operate to serve students?
-
- Oct 2017
-
adactio.com adactio.com
-
I can’t go on
but I must go on! Is this the future we are heading towards?
Tags
Annotators
URL
-
-
-
Tim Urban
Tim Urban was interviewed by Forbes in this article.
He does not come across as an AI expert; he sounds like a casual blogger. We need to figure out what background gives him so much gravitas in writing an article like this, and why someone like Elon Musk would believe that what he says is true.
Or maybe he ghost-wrote this for Elon?
-
-
content.iospress.com content.iospress.com
-
What kind of role could intelligent machines have in this ecosystem?
This is interesting!
Tags
Annotators
URL
-
- Sep 2017
-
toidicodedao.com toidicodedao.com
-
First, I think you need to get a grasp of machine learning and algorithms; you can start with online courses. I recommend Andrew Ng's Machine Learning course, which is considered the bible for data scientists. After that you can start with Python or R and join challenges on Kaggle. Kaggle is a platform where data scientists compete, earn prize money, and vie for rankings. Many people have also told me that Kaggle is the best and shortest path into Data Science.
Learn the basics
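To make the advice concrete, here is the kind of minimal baseline a newcomer might submit to a Kaggle-style tabular challenge using scikit-learn. The file names and columns are placeholders, not a specific competition.

```python
# Minimal Kaggle-style baseline with scikit-learn; file names and columns
# are placeholders for whatever competition you enter.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

train = pd.read_csv("train.csv")          # assumed competition files
test = pd.read_csv("test.csv")

target = "label"                          # placeholder target column
features = [c for c in train.columns if c not in (target, "id")]

X = train[features].fillna(0)
y = train[target]

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
submission = pd.DataFrame({
    "id": test["id"],
    target: model.predict(test[features].fillna(0)),
})
submission.to_csv("submission.csv", index=False)
```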
-
-
Local file Local file
-
when randomness is used, it is easy to lose accountability, since by definition any outcome which a randomized process could have produced is at least facially consistent with the design of that process
problems randomization poses for accountability
-
The power of computers is generally limited by a concept that computer scientists call noncomputability. In short, certain types of problems cannot be solved by any computer program in any finite amount of time. There are many examples of noncomputable problems, but the most famous is Alan Turing’s “Halting Problem,” which asks whether a given program will finish running (“halt”)
Noncomputability: cannot be solved by any program in finite time
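The classic diagonal argument behind the Halting Problem can be sketched directly in code: if a function `halts(program, input)` existed, you could build a program that contradicts it. The `halts` function below is hypothetical; the point of the argument is that no correct implementation of it can exist.

```python
# Sketch of Turing's diagonal argument. `halts` is hypothetical: the argument
# shows that no correct, always-terminating implementation of it can exist.

def halts(program, program_input):
    """Pretend oracle: returns True iff program(program_input) eventually halts."""
    raise NotImplementedError("No such general procedure can exist.")

def paradox(program):
    # Do the opposite of whatever `halts` predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:          # loop forever exactly when predicted to halt
            pass
    return "halted"          # halt exactly when predicted to loop forever

# Feeding `paradox` to itself yields a contradiction either way:
# if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
# if it is False, then paradox(paradox) halts. So `halts` cannot be both
# total and correct.
```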
-
Testing of any kind is, however, a fundamentally limited approach to determining whether any fact about a computer system is true or untrue.
Limits of testing
-
“black-box testing,” which considers only the inputs and outputs of a system or component, and “white-box testing,” in which the structure of the system’s internals is used to design test case
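A small illustration of the two styles on a hypothetical `compute_discount` function: black-box tests exercise only inputs and outputs, while white-box tests are designed from knowledge of the branches inside. The function and its threshold are made up for the example.

```python
# Black-box vs. white-box testing on a made-up function.

def compute_discount(order_total):
    # Internal structure (the "white box"): two branches around a threshold.
    if order_total >= 100:
        return order_total * 0.10
    return 0.0

# Black-box test: chosen only from the stated behaviour ("big orders get a
# discount"), with no knowledge of where the threshold sits.
def test_black_box():
    assert compute_discount(500) > 0
    assert compute_discount(5) == 0

# White-box tests: chosen from the code's structure, deliberately probing
# the boundary at 100 so both branches and the edge case are covered.
def test_white_box():
    assert compute_discount(99.99) == 0.0
    assert compute_discount(100) == 10.0
    assert compute_discount(100.01) > 10.0
```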
-
dynamic methods are limited by the finite number of inputs that can be tested or outputs that can be observed
-