- Last 7 days
-
text-gen.com text-gen.com
-
Looks like this is how you would get the tool to invoke APIs from different sources like HuggingFace and others.
Tags
Annotators
URL
-
-
www.semanticscholar.org www.semanticscholar.org
-
For a socially and economically sustainable growth path, the labor displacement in the sectors of application must be counterbalanced by job creation within the same and other sectors
It's 2023 and I don't see anyone planning for this massive job displacement. I think the Hollywood strikes are a sign of things to come.
-
- Sep 2023
-
-
the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist. To see that, it is useful to consider what it might be like to have the freedom to control what thought one had next.
- for: quote, quote - Michael Levin, quote - self as control agent, self - control agent, example, example - control agent - imperfection, spontaneous thought, spontaneous action, creativity - spontaneity
-
quote: Michael Levin
- the Bodhisattva vow can be seen as a method for control that is in alignment with, and informed by, the understanding that singular and enduring control agents do not actually exist.
-
comment
- adjacency between
- nondual awareness
- self-construct
- self is illusion
- singular, solid, enduring control agent
- adjacency statement
- nondual awareness is the deep insight that there is no solid, singular, enduring control agent.
- creativity is unpredictable and spontaneous and would not be possible if there were perfect control
- adjacency between
- example - control agent - imperfection: start - the unpredictability of the realtime emergence of our next exact thought or action is a good example of this
-
example - control agent - imperfection: end
-
triggered insight: not only are thoughts and actions random, but dreams as well
- I dreamt the night after this about something related to this paper (cannot remember what it is now!)
- Obviously, I had no clue the idea in this paper would end up exactly as it did in next night's dream!
-
- for: bio-buddhism, buddhism - AI, care as the driver of intelligence, Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, care drive, care light cone, multiscale competency architecture of life, nonduality, no-self, self - illusion, self - constructed, self - deconstruction, Bodhisattva vow
- title: Biology, Buddhism, and AI: Care as the Driver of Intelligence
- author: Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, AI - ethics
- date: May 16, 2022
-
summary
- a trans-disciplinary attempt to develop a framework for dealing with the diversity of emerging non-traditional intelligences, from new bio-engineered species to AI, based on the Buddhist conception of care and compassion for the other.
- very thought-provoking and some of the explanations and comparisons to evolution actually help to cast a new light on old Buddhist ideas.
- this is a trans-disciplinary paper synthesizing Buddhist concepts with evolutionary biology
-
we attempt to bring concepts from both biology and Buddhism together into the language of AI, and suggest practical ways in which care may enrich each field.
- for: progress trap, AI, AI - care drive
- comment
- the precautionary principle needs to be observed with AI because it has such vast artificial cognitive, pattern-recognition processes at its disposal
- AI will also make mistakes, but the degree of power behind a mistaken decision, recommendation, or action determines the degree of unintended consequences, i.e., the progress trap
- An example nightmare scenario could be:
- AI could decide that humans are contradicting their own goal of a stable climate system. If it were in control, it might conclude it knows better and perform whole-system change that dramatically reduces human-induced climate change but harms a lot of humans in the process, all to reach the goal of saving the climate system plus a sufficient subset of humans to start over.
Tags
- Care as the Driver of Intelligence
- cognitive light cone
- Olaf Witkowski
- creativity - spontaneity
- Thomas Doctor
- Elizaveta Solomonova
- Bill Duane
- AI - ethics
- quote - Michael Levin
- self - deconstruction
- example - control agent - imperfection
- bio-buddhism
- adjacency
- Buddhism - AI
- self - constructed
- spontaneous thought
- adjacency - illusory self - full control
- spontaneous action
- multiscale competency architecture of life
- triggered insight
- quote
- self - illusion
- progress trap - AI
- unintended consequences - AI
- no-self
- example
- emptiness
- quote - self as control agent
- care drive
- adjacency - nondual awareness - full control
- triggered insight - singular and enduring control agent does not exist
- bodhisattva vow
- care light cone
- nonduality
- Michael Levin
Annotators
URL
-
-
www.frontiersin.org www.frontiersin.org
-
The zombie has functional consciousness, i.e., all the physical and functional conscious processes studied by scientists, such as global informational access. But there would be nothing it is like to have that global informational access and to be that zombie. All that the zombie cognitive system requires is the capacity to produce phenomenal judgments that it can later report.
- for: AI - consciousness, zombies, question, question - AI - zombie
- question: AI
- is AI a zombie?
- It would seem that, by interviewing an AI, there would be no way to tell whether it's a zombie or not
- The AI would say all the right things to try to convince you that it's not a zombie
-
-
www.chinalawtranslate.com www.chinalawtranslate.com
-
These Measures do not apply where industry associations, enterprises, education and research institutions, public cultural bodies, and related professional bodies, etc., research, develop, and use generative AI technology, but have not provided generative AI services to the (mainland) public.
These regulations only apply to public services, not to internal uses of AI.
Tags
Annotators
URL
-
-
docdrop.org docdrop.org
-
biology Buddhism and AI
- reference
- Biology, Buddhism, and AI: Care as the Driver of Intelligence
- reference
-
-
-
less well known is that the same person was really 00:01:02 interested in morphogenesis
- for: Alan Turing, morphogenesis, AI - morphogenesis, self-organizing systems
-
-
www.indiewire.com www.indiewire.com
-
“What it does is it sucks something from you,” he said of A.I. “It takes something from your soul or psyche; that is very disturbing, especially if it has to do with you. It’s like a robot taking your humanity, your soul.”
-
-
-
Instead of being based on hundreds of thousands of lines of code, like all previous versions of self-driving software, this new system had taught itself how to drive by processing billions of frames of video of how humans do it, just like the new large language model chatbots train themselves to generate answers by processing billions of words of human text.
-
-
www.nature.com www.nature.com
-
Wang et al., "Scientific discovery in the age of artificial intelligence," Nature, 2023.
A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.
(NOTE: since Springer/Nature don't allow public PDFs to be linked without a paywall, we can't use Hypothesis directly on the PDF of the paper; this link is to the website version of it.)
-
-
-
Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.
This passage really speaks to me. This is likely the Crichton-esque danger I could see: apathy from elected officials and general disinterest could enable the proliferation of unfettered growth in AI research.
-
-
dl-acm-org.revproxy.brown.edu dl-acm-org.revproxy.brown.edu
-
inventions have extended man's physical powers rather than the powers of his mind.
I found this particularly interesting, especially considering the 'AI revolution' of sorts we are experiencing today. With tools such as ChatGPT, one may argue that our 'powers of the mind' will begin to decrease as we become tempted to turn to this tool (and others) to do our work for us. Innovation continues to extend our physical rather than intellectual capabilities.
-
-
www.theguardian.com www.theguardian.com
- Aug 2023
-
chat.openai.com chat.openai.com
-
remikalir.com remikalir.com
-
Nonetheless, Claude is the first AI tool that has really made me pause and think. Because, I've got to admit, Claude is a useful tool to think with, especially if I'm thinking about, and then writing about, another text.
-
-
Local file Local file
-
Mills, Anna, Maha Bali, and Lance Eaton. “How Do We Respond to Generative AI in Education? Open Educational Practices Give Us a Framework for an Ongoing Process.” Journal of Applied Learning and Teaching 6, no. 1 (June 11, 2023): 16–30. https://doi.org/10.37074/jalt.2023.6.1.34.
Annotation url: urn:x-pdf:bb16e6f65a326e4089ed46b15987c1e7
-
ignoring AI altogether–not because they don't want to navigate it but because it all feels too much or cyclical enough that something else in another two years will upend everything again
Might generative AI worries follow the track of the MOOC scare? (Many felt that creating courseware was going to put educators out of business...)
-
For many, generative AI takes a pair of scissors and cuts apart that web. And that can feel like having to start from scratch as a professional.
How exactly? Give us an example? Otherwise not very clear.
-
T9 (text prediction):generative AI::handgun:machine gun
-
Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text with the realms of information theory and corpus linguistics.
Claude Shannon's work, along with Warren Weaver's introduction in The Mathematical Theory of Communication (1949), shows some of the predictive structure of written communication. This is potentially better underlined for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961), which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter, or to predict which letters are more likely to come after other letters. The mathematical structure has interesting consequences: crossword puzzles are only possible because of the repetitive nature of the English language, and one can use the editor's notation "TK" (usually meaning facts or data To Come) in a draft to make missing information easy to find prior to publication, because the letter combination T followed by K is statistically so rare that its only appearances in long documents are almost assuredly spots that need to be double-checked for data or accuracy.
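The basic analysis Pierce describes can be sketched in a few lines of Python: count single-letter frequencies, then count which letters tend to follow which. The short sample text here is a stand-in; any larger English corpus would show the pattern more strongly.

```python
from collections import Counter, defaultdict

# Toy corpus; a real analysis would use a much larger text.
text = "the quick brown fox jumps over the lazy dog the end"

# Single-letter frequencies: even in this toy sample, "e" comes out on top.
letters = [c for c in text if c.isalpha()]
freq = Counter(letters)
print(freq.most_common(3))

# Bigram counts: which letters tend to follow which
# (word boundaries are ignored here for simplicity).
follows = defaultdict(Counter)
for a, b in zip(letters, letters[1:]):
    follows[a][b] += 1

# Most likely letter after "t" in this corpus.
print(follows["t"].most_common(1))
```

The same counting idea, scaled up from letters to words and beyond, is the seed of the predictive-text lineage the note describes.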
Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.
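A minimal sketch of why T9 has to predict at all: many words share the same keypress sequence, so the phone must guess which one you meant. The word list below is invented for illustration.

```python
# Standard phone-keypad letter groups (2=abc, 3=def, ... 9=wxyz).
KEYS = {"a": "2", "b": "2", "c": "2", "d": "3", "e": "3", "f": "3",
        "g": "4", "h": "4", "i": "4", "j": "5", "k": "5", "l": "5",
        "m": "6", "n": "6", "o": "6", "p": "7", "q": "7", "r": "7", "s": "7",
        "t": "8", "u": "8", "v": "8", "w": "9", "x": "9", "y": "9", "z": "9"}

def to_digits(word):
    """Map a word to the digit sequence its keypresses produce."""
    return "".join(KEYS[c] for c in word.lower())

dictionary = ["home", "good", "gone", "hood", "hello"]

# "good" maps to "4663", and so do several other words: the collision
# is exactly why T9 needs frequency data to pick the likeliest word.
target = to_digits("good")  # "4663"
candidates = [w for w in dictionary if to_digits(w) == target]
print(candidates)
```

Every word in `candidates` is typed with the same four keys, which is where the statistical prediction (and the occasional Grandma/Grand Master Flash mix-up) comes in.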
As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double check their work before sending their text or social media posts or risk sending their messages to Grand Master Flash instead of Grandma.
The evolution in technology, driven by larger amounts of storage, faster processing speeds, and more text to study, means that we've gone beyond predicting a single word or two ahead of what you intend to text; now whole sentences and even paragraphs that make sense within a context are being predicted. ChatGPT means that one can generate whole sections of text which will likely make some sense.
Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort to not only spell check the outputs from the machine, but now we may need to check for the appropriateness of style as well as factual substance!
The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf if we're publishing its words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to be communicated? Do the words create knowledge? Insight?
We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:
Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem: exploring the adjacent possible. It is far from solving the human semantic problem, much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that unsolved portion of the problem. Generative AIs are also just choosing weighted probabilities and spitting out something that seems plausible, but they're not optimizing for which of many potential probabilities is the "best" or the "correct" one. For that, we still need our humanity and our faculties for decision-making.
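That last point, that a generative model draws from weighted probabilities rather than selecting a guaranteed "correct" answer, can be shown with a toy next-word distribution (the candidate words and weights here are invented for illustration):

```python
import random

# Hypothetical next-word distribution a model might produce.
candidates = ["bank", "river", "loan", "fish"]
weights = [0.4, 0.3, 0.2, 0.1]

# Greedy decoding: always take the single most probable option.
greedy = candidates[weights.index(max(weights))]

# Sampling: draw according to the weights, so less probable
# (and possibly wrong) continuations still appear sometimes.
random.seed(0)  # fixed seed just to make the sketch repeatable
sampled = [random.choices(candidates, weights=weights, k=1)[0]
           for _ in range(5)]
print(greedy, sampled)
```

Nothing in either strategy checks whether the chosen word is true or appropriate; that judgment still falls to the human reader.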
Shannon, Claude E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.
Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.
Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. 2nd rev. ed. Mineola, N.Y.: Dover Publications, 1980. First published 1961.
Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.
We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...
Tags
- ChatGPTedu
- T9 (text prediction)
- references
- cultural shifts
- machine guns
- adjacent possible
- generative AI
- OER
- pedagogy
- EdTech
- Lance Eaton
- coronavirus
- information theory
- The Bandwagon
- MOOC
- solution spaces
- Anna Mills
- Maha Bali
- hallucinating
- ChatGPT
- analogies
- Future Trends Forum 2023-08-31
- Claude Shannon
- artificial intelligence for writing
- social media machine guns
- open education
Annotators
-
-
baijiahao.baidu.com baijiahao.baidu.com
-
"Approaching GPT-4, AI programming is set for a revolution! Meta open-sources Code Llama, the strongest code tool in history" (新智元, 2023-08-25). The programming-specialized Code Llama is now officially open source and free for commercial use and research; Llama 2's one shortcoming, programming, has been filled, and the 34B-parameter model already approaches GPT-4.
This is getting more and more frightening; in the future, more and more things will be able to write code directly.
-
-
er.educause.edu er.educause.edu
-
A Generative AI Primer on 2023-08-15 by Brian Basgen
ᔥGeoff Corb in LinkedIn update (accessed:: 2023-08-26 01:34:45)
-
-
en.wikipedia.org en.wikipedia.org
-
Roland Barthes (1915-1980, France, literary critic/theorist) declared the death of the author (in English in 1967 and in French a year later). An author's intentions and biography are not the means to explain definitively what the meaning of a (fictional, I think) text is. [[Observator geeft betekenis 20210417124703]], i.e., the reader determines meaning.
Barthes reduces the author to the scriptor, who does not exist beyond the production of the text. The work stands entirely apart from its maker. Came across this in [[Information edited by Ann Blair]] in the lemma about the Reader.
Don't disagree with the notion that readers glean layers of meaning from a text that the author did not intend. But thinking about the author's intent is one of those layers. Separating the author from their work entirely cuts you off from one source of potential meaning.
In [[Generative AI detectie doe je met context 20230407085245]] I posit that seeing the author through the text is a necessity as proof of human creation, not #algogen. My point there is that with generative AI there is only a scriptor, and no author whose own meaning, intention, and existence becomes visible in the text.
Tags
Annotators
URL
-
-
www.agconnect.nl www.agconnect.nl
-
https://www.agconnect.nl/tech-en-toekomst/artificial-intelligence/liquid-neural-networks-in-ai-is-groter-niet-altijd-beter Liquid Neural Networks: "liquid" meaning the nodes in the network remain flexible and adaptable after training (unlike deep learning and LLM models). They are also smaller, which improves the explainability of their workings and reduces energy consumption (#openvraag: is the energy consumption of usage the concern, or rather that of training? here it reduces the usage energy).
The reduction in node count can be orders of magnitude; an autonomous-steering example cites four orders of magnitude (19 versus 100k nodes).
Mainly useful for data streams like audio/video and real-time data from meteorological or mobility sensors. Applications in areas with limited energy (battery usage) and real-time data inputs.
-
-
www.businessinsider.com www.businessinsider.com
-
Even director Christopher Nolan is warning that AI could be reaching its "Oppenheimer moment," Insider previously reported — in other words, researchers are questioning their responsibility for developing technology that might have unintended consequences.
-
-
-
there's no uh uh catastrophe even if things plug along as they're going and there's no mass die off of humans or anything like that 00:36:47 the population is set to decline i don't know when the peak is supposed to come but uh the peak is supposed to come at you know within the next 10 20 years or so 00:36:59 and after that the world population will start to decline how is how is this growth capitalism model growth-based capitalism model how is that going to 00:37:12 function when the world is shrinking
- for: population decline, economic growth vs population decline
- comment
- John makes a good point
- how will humans negotiate a growth economy when population is shrinking?
- AI automation may lessen the need for human labor, but how these forces will balance out remains unknown
-
-
www.semanticscholar.org www.semanticscholar.org
-
One of the most common examples was in the field of criminal justice, where recent revelations have shown that an algorithm used by the United States criminal justice system had falsely predicted future criminality among African-Americans at twice the rate as it predicted for white people
holy shit....bad!!!!!
-
automated decisions
What are all the automated decisions currently being made by AI systems globally? How might one compile a database/list of these?
-
The idea that AI algorithms are free from biases is wrong since the assumption that the data injected into the models are unbiased is wrong
Computational != objective! The common idea rests on lots of assumptions.
-
-
meta.stackoverflow.com meta.stackoverflow.com
-
I believe the final policy shall contain robust rationale and, in the best way possible, avoids the perception of rAIcial discrimination
-
-
explore.zoom.us explore.zoom.us
-
You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software. You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement.
"Zoom terms of service now require you to allow AI to train on ALL your data—audio, facial recognition, private conversations—unconditionally and irrevocably, with no opt out.
Don’t try to negotiate with our new overlords." https://twitter.com/tedgioia/status/1688221240790528000?s=20
Tags
Annotators
URL
-
- Jul 2023
-
Local file Local file
-
educators and stakeholders must be equipped with the necessary skills and knowledge
information literacy
-
prompt engineering and co-creation with AI
the engineering would require a sophisticated understanding of the subject if it is to be done effectively. This serves as an example of the benefits of OEP over OER: the creator gains the most through the process.
-
Even ChatGPT concurs with this view
Perhaps it would be better to use language that does not give ChatGPT agency.
-
It does not have the ability to introduce novel ideas or concepts
meaning - is not capable of insight?
-
the unique characteristic of generative AI being non-human implies the promise of ownership-free educational content.
But if it requires extensive human intervention, does it remain ownership-free?
-
Supporting Student Creation of OERs
Wikipedia experimented with AI-generated text and found it needed extensive editing. While that may not save time for Wikipedia editors, that type of mental labor may benefit students engaged in OEP.
-
AI is anticipated to bring novel insights and capacities to scientific research and content creation
Really? I thought insight was beyond the scope of AI.
-
-
arxiv.org arxiv.org
-
In traditional artforms characterized by direct manipulation [32] of a material (e.g., painting, tattoo, or sculpture), the creator has a direct hand in creating the final output, and therefore it is relatively straightforward to identify the creator's intentions and style in the output. Indeed, previous research has shown the relative importance of "intention guessing" in the artistic viewing experience [33, 34], as well as the increased creative value afforded to an artwork if elements of the human process (e.g., brushstrokes) are visible [35]. However, generative techniques have strong aesthetics themselves [36]; for instance, it has become apparent that certain generative tools are built to be as "realistic" as possible, resulting in a hyperrealistic aesthetic style. As these aesthetics propagate through visual culture, it can be difficult for a casual viewer to identify the creator's intention and individuality within the outputs. Indeed, some creators have spoken about the challenges of getting generative AI models to produce images in new, different, or unique aesthetic styles [36, 37].
Traditional artforms (direct manipulation) versus AI (tools have a built-in aesthetic)
Some authors speak of having to wrest control of the AI output away from its trained style, making it challenging to create unique aesthetic styles. The artist indirectly influences the output by selecting training data and manipulating prompts.
As use of the technology becomes more diverse—as consumer photography did over the last century, the authors point out—how will biases and decisions by the owners of the AI tools influence what creators are able to make?
To a limited extent, this is already happening in photography. The smartphones are running algorithms on image sensor data to construct the picture. This is the source of controversy; see Why Dark and Light is Complicated in Photographs | Aaron Hertzmann’s blog and Putting Google Pixel's Real Tone to the test against other phone cameras - The Washington Post.
Tags
Annotators
URL
-
-
Tags
Annotators
URL
-
-
-
That's the way computers are learning today. 00:02:35 We basically write algorithms that allow computers to understand those patterns… And then we get them to try and try and try. And through pattern recognition, through billions of observations, they learn. They're learning by observing. And what are they observing? They're observing a world that's full of greed, disregard for other species, violence, ego, 00:03:05 showing off The only way to be not only intelligent but also to have the right value set is that we start to portray that right value set today. THE PROBLEM IS UNHAPPINESS
- Machine learning
- will learn all our bad habits
- and become supercharged, amplified versions of them
- The antidote to apocalyptic machine learning
- is human happiness and wisdom
- Machine learning
-
- Title
- One Billion Happy
-
Author
- Mo Gawdat
-
Description
- Mo Gawdat was former chief business officer at Google X, Google's innovation center.
- Mo left Google after seeing that the rapid pace of AI development was going to lead to a progress trap in which
- the risk of AI destroying human civilization becomes real, because AI will be learning from too many unhappy people whose trauma it will incorporate into its algorithms
- Hence, human happiness becomes paramount to prevent this catastrophe from happening
- See Ronald Wright's prescient quote
- Title
-
BY 2029, ARTIFICIALLY INTELLIGENT MACHINES WILL SURPASS HUMAN INTELLIGENCE BY 2049, AI IS PREDICTED TO BE A BILLION TIMES MORE INTELLIGENT THAN US
- quote
- 2029 - AI will surpass human intelligence
- 2049 - AI will be one billion X more intelligent than us
- quote
-
Over the next 15 to 20 years this is going to develop a computer that is much smarter 00:01:20 than all of us. We call that moment singularity.
- Singularity
- will happen within the next few decades
- Singularity
-
-
docdrop.org docdrop.org
-
even though the existential threats are possible you're concerned with what humans teach I'm concerned 00:07:43 with humans with AI
- It is the immoral human being that is the real problem
- they will teach AI to be immoral and, with its power, it could end up destroying humanity
-
a nefarious controller of AI presumably could teach it to be immoral
- bad actor will teach AI to be immoral
- this also creates an arms race as "good" actors are forced to develop AI to counter the AI of bad actors
-
the one that 00:05:20 controls AI has enormous power over everyone else
- AI Arms race is premised on
- whoever controls AI has enormous powers over everyone else
- All the world's competing superpowers are developing it, but with the aim of weaponizing it against their enemies
- It will be difficult to regulate when so many actors are antagonistic toward each other
- AI Arms race is premised on
-
alphago
- Alphago
- the first version took Google UK software developers months to program. It won the world Go championship.
- Alphago Master played itself without ever watching a human player. It beat the first Alphago version after 3 days of playing itself.
- In 21 days, it beat Alphago version one a thousand games to zero.
- Alphago
-
three uh boundaries
- three boundaries that industry should have abided by but have been violated:
- don't put them on the open internet until you solve the control problem
- don't teach them to code because that enables them to learn and develop on their own
- Don't allow other AI's prompting them, other AI agents working with them
- three boundaries that industry should have abided by but have been violated:
-
- Title
- Mo Gawdat Warns the Dangers of AI Are "Happening As We Speak"
- Author
- Piers Morgan Uncensored
- Title
-
-
medium.com medium.com
-
Background knowledge refresh
AI as subject matter expert?
-
If fine-tuned on pedagogy,
What does that look like though?
-
Lesson plan generation / feedback
-
Studies show that a surprising proportion of teachers do not have a core program but use their own lessons or search TeachersPayTeachers or Pinterest,
Needs citation
-
Could that change if every teacher had an assistant, a sort of copilot in the work of taking a class of students (with varying backgrounds, levels of engagement, and readiness-to-learn) from wherever they start to highly skilled, competent, and motivated young people?
AI for teachers as creating efficiencies around how they use their time. Providing feedback to students as opposed to creating or even leading activities.
-
-
www.ed.gov www.ed.gov
-
Emphasize Humans-in-the-Loop
-
-
beta.diffit.me beta.diffit.me
-
-
blog.usmanity.com blog.usmanity.com
-
The results from both Midjourney and Stable Diffusion seem the most convincing and realistic if I were to judge from a human point of view, and if I didn't know they were AI-generated, I would believe their results.
Midjourney & Stable Diffusion > Dall-E and Adobe Firefly
-
-
blogs.nvidia.com blogs.nvidia.com
Tags
- wikipedia:en=Self-supervised_learning
- neural networks
- wikipedia:en=Attention_(machine_learning)
- machine learning
- cito:cites=doi:10.48550/arXiv.2108.07258
- ai
- wikipedia:en=Artificial_neural_network
- cito:cites=doi:10.48550/arXiv.1706.03762
- wikipedia:en=Transformer_(machine_learning_model)
- wikipedia:en=BERT_(language_model)
Annotators
URL
-
-
docdrop.org docdrop.org
-
AI artificial information processing by the way not artificial intelligence in many ways it could be seen as replicating the functions of the left 00:11:14 hemisphere at frightening speed across the entire globe
- AI accelerates the left hemisphere view and impacts in the world
Tags
Annotators
URL
-
- Jun 2023
-
web.okjike.com web.okjike.com
-
In [best chapter], what is the most important 20% about [insert learning objective] that will help me understand 80% of it?
Tags
Annotators
URL
-
-
www.nngroup.com www.nngroup.com
-
Examples include press releases, short reports, and analysis plans — documents that were reported as realistic for the type of writing these professionals engaged in as part of their work.
Have in mind the genres tested.
Looking from a perspective of "how might we use such tools in UX" we're better served by looking at documents that UX generates through the lens of identifying parallels to the study's findings for business documents.
To use AI to generate drafts, we'll want to look at AI tools built into the design tools UXers use to create drafts. Those tools are under development and still maturing.
-
the estimates of how users divided their times between different stages of document generation were based on self-reported numbers
The numbers for how users divided their time may not be reliable as they're self-reported.
Still leaves me curious about the accuracy of reported brainstorming time.
-
the productivity and quality improvements are likely due to a switch in the business professionals’ time allocation: less time spent on cranking out initial draft text and more time spent polishing the final result.
This points to AI providing the best time savings in draft generation, which fits with the idea of having the AI generate the drafts based on the professional's queries.
For UX designers, this points to AI in a design tool being most useful when it generates drafts (sketches) that the designer then revises. Where UX deliverables don't compare easily to written deliverables is in the contextual factors that influence the design, like style guides or design systems. Design tool AI assistants don't yet factor those in, though it seems likely they will if provided style guides and design systems in a format they can read.
Given a draft of sufficient quality that it doesn't require longer to revise than a draft the designer would create on their own, getting additional time to refine sounds great.
I'm not sure what to make of the reduced time to brainstorm when using AI. Without additional information, it's hard not to assume that the AI tool may be influencing the direction of brainstorming as professionals think through the queries they'll use to get the AI to generate the most useful draft possible.
-
-
magrawala.substack.com magrawala.substack.com
-
We assume the AI will generate what a human collaborator might generate given the prompt.
Mistaken human assumptions that AI will generate what a human would given the same prompt are reinforced by claims by those selling AI tools that such tools "understand human language." We don't actually know that AI understands, just that it provides a result that we can interpret as understanding (with the help of our cognitive biases).
This claim to understanding is especially misleading for neural network-based AI. We don't know how neural networks think. With older, Lisp-based AI we could at least trace through the code to see how the AI thinks.
-
we can improve AI interfaces by enabling conversational interactions that can let users establish common ground/shared semantics with the AI, and that provide repair mechanisms when such shared semantics are missing.
By providing interfaces to AI tools that help us duplicate the aligning, clarifying, and iterating behaviors that we perform with human collaborators we can increase the sense that users can predict what results the AI will provide in subsequent iterations. This will remove the frustration of working with a collaborator that doesn't understand you.
-
Collaborating with another human is better than working with generative AI in part because conversation allows us to establish common ground, build shared semantics and engage in repair strategies when something is ambiguous.
Collaborating with humans beats collaborating with AI because we can sync up our mental models, clarify ambiguity, and iterate.
Current AI tools are limited in the methods they make available to perform these tasks.
-
finding effective prompts is so difficult that there are websites and forums dedicated to collecting and sharing prompts (e.g. PromptHero, Arthub.ai, Reddit/StableDiffusion). There are also marketplaces for buying and selling prompts (e.g. PromptBase). And there is a cottage industry of research papers on prompt engineering.
Natural language alone is a poor interface for creating an effective prompt. So bad that communities and businesses are surfacing to help people create effective prompts.
-
-
en.itpedia.nl en.itpedia.nl
-
The future of blogging in the AI era, how can we unleash the SEO potential? https://en.itpedia.nl/2023/06/11/de-toekomst-van-bloggen-in-het-ai-tijdperk-hoe-kunnen-we-het-seo-potentieel-ontketenen/ Let's take a look at the future of #blogging in the #AI_era. Does a blogging website still have a future now that visitors can find the answer directly in the browser? Or should we use #AI to improve our #weblog. Can AI help us improve our blog's #SEO?
-
-
www.oneusefulthing.org www.oneusefulthing.org
-
assets.pubpub.org assets.pubpub.org
-
LeBlanc, D. G., & Lee, G. (2021). General Deep Reinforcement Learning in NES Games. Canadian AI 2021. Canadian Artificial Intelligence Association (CAIAC). https://doi.org/10.21428/594757db.8472938b
-
-
www.the74million.org www.the74million.org
-
They are developing into sophisticated reasoning engines that can contextualize, infer and deduce information in a manner strikingly similar to human thought.
Is this accurate?
-
-
techpolicy.press techpolicy.press
-
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
What is missing here? The one thing with the highest probability as we are already living the impacts: climate. The phrase itself is not just a strategic bait and switch for the AI businesses, but also a more blatant bait and switch wrt climate politics.
-
-
donaldclarkplanb.blogspot.com donaldclarkplanb.blogspot.com
-
We are nowhere near having a self-driving cars on our roads, which confirms that we are nowhere near AGI.
This does not follow. The reason we don't have self-driving cars is that the entire effort is car-based, not physical-environment-based. Self-driving trains are self-driving because of rails and external sensors and signals. Make rails of data, and self-driving cars are like trains. No AI, let alone AGI, needed. Self-driving cars as an indicator for AGI make no sense. Vgl https://www.zylstra.org/blog/2015/10/why-false-dilemmas-must-be-killed-to-program-self-driving-cars/ and [[Triz denken in systeemniveaus 20200826114731]]
-
-
impedagogy.com impedagogy.com
-
Note #2: Please read Note #1 above if you haven't already done so. HERE (Note #2), Bard is pandering, giving props for being "thoughtful and nuanced." This is in direct contradiction to what Bard had to say earlier.
I will sarcastically comment that this is a good mirror of how our society is functioning today. In one situation, for one audience, we may have one point of view, then represent a totally different point of view with a different audience. So much for #authenticity!
-
Note #1: Ok... so here Bard is saying how utterly unacceptable it is to use the n-word, in ANY circumstances. Please reference Note #2.
-
-
assets.pubpub.org assets.pubpub.org
-
hypothesis test for CANAI23 paper
-
-
www.scopus.com www.scopus.com
-
we present a novel evidence extraction architecture called ATT-MRC
A new evidence extraction architecture called ATT-MRC improves the recognition of evidence entities in judgement documents by treating it as a question-answer problem, resulting in better performance than existing methods.
-
-
www.scopus.com www.scopus.com
-
We also compare the answer retrieval performance of a RoBERTa Base classifier against a traditional machine learning model in the legal domain
Transformer models like RoBERTa outperform traditional machine learning models in legal question answering tasks, achieving significant improvements in performance metrics such as F1-score and Mean Reciprocal Rank.
-
-
www.sciencedirect.com www.sciencedirect.com
-
Learning heterogeneous graph embedding for Chinese legal document similarity
The paper proposes L-HetGRL, an unsupervised approach using a legal heterogeneous graph and incorporating legal domain-specific knowledge, to improve Legal Document Similarity Measurement (LDSM) with superior performance compared to other methods.
-
-
www.phind.com www.phind.com
-
-
docdrop.org docdrop.org
-
the positive ones is we become good parents we spoke about this last time we we met uh and and it's the only outcome it's the only way I believe we can 01:14:34 create a better future
- comment
- the best possible outcome for AI
- is that we become better humans
- othering is significantly reduced
- the sacred is rediscovered
- comment
-
scary smart is saying the problem with our world today is not that 00:55:36 humanity is bad the problem with our world today is a negativity bias where the worst of us are on mainstream media okay and we show the worst of us on social media
-
"if we reverse this
- if we have the best of us take charge
- the best of us will tell AI
- don't try to kill the enemy,
- try to reconcile with the enemy
- don't try to create a competitive product
- that allows me to lead with electric cars,
- create something that helps all of us overcome global climate change
- that allows me to lead with electric cars,
- that's the interesting bit
- the actual threat ahead of us is
- not the machines at all
- the machines are pure potential pure potential
- the threat is how we're going to use them"
- not the machines at all
- the actual threat ahead of us is
- don't try to kill the enemy,
-
comment
- again, see Ronald Wright's quote above
- it's very salient to this context
-
-
the biggest threat facing Humanity today is humanity in the age of the machines we were abused we will abuse this
- comment
- the machines are only coded to do what we tell them to do
- Ronald Wright's quote is very salient here
- comment
-
if we give up on human connection we've given up on the remainder of humanity
- quote
- "If we give up on human connection, we give up on the remainder of humanity"
- quote
-
with great power comes great responsibility we have disconnected power and responsibility
- quote
- "with great power comes great responsibility. We have disconnected power and responsibility."
- "With great power comes great responsibility
- We have disconnected power and responsibility
- so today a 15 year old,
- emotional, without a fully developed prefrontal cortex to make the right decisions (this is science: our prefrontal cortex develops fully only at age 25 or so),
- with all of that limbic system emotion and passion
- would buy a CRISPR kit and modify a rabbit to become a little more muscular and
- let it loose in the wild
- or an influencer who doesn't really know
how far the impact of what they're posting online
- can hurt and cause depression or
- cause people to feel bad by putting that online
- There is a disconnect between the power and the responsibility and
- the problem we have today is that
- there is a disconnect between those who are writing the code of AI and
- the responsibility of what's going about to happen because of that code and
- I feel compassion for the rest of the world
- I feel that this is wrong
- I feel that for someone's life to be affected by the actions of others
- without having a say "
- "with great power comes great responsibility. We have disconnected power and responsibility."
- quote
-
the biggest challenge if you ask me what went wrong in the 20th century 00:42:57 interestingly is that we have given too much power to people that didn't assume the responsibility
- quote
- "what went wrong in the 20th century is that we have given too much power to people that didn't assume the responsibility"
- quote
-
this is an arms race has no interest 00:41:29 in what the average human gets out of it it
- quote
- "this is an arms race"
- quote
-
tax AI powered businesses at 98 right so suddenly you do what the open letter was trying to do slow them down a little bit and at the same time get enough money to 00:39:34 pay for all of those people that will be disrupted by the technology
- potential government policy
- to slow down premature AI rollout
- by taxing at 98%
- potential government policy
-
the Transformers are not there yet they will not come up with something that hasn't been there before they will come up with the best of everything and 00:26:59 generatively will build a little bit on top of that but very soon they'll come up with things we've never found out we've never known
- difference between
- ChatGPT (AI)
- AGI
- difference between
-
I cannot stop why because if I stop and others don't my company goes to hell
- comment
- SIMPOL - simultaneous conditional agreement, may be the way to reach consensus quickly
- comment
-
the first inevitable is AI will happen by the way there is no 00:23:51 stopping it not because of Any technological issues but because of humanities and inability to trust the other
- the first inevitable
- AI will happen
- there's no stopping it
- why?
- self does not trust other
- in other words,
- OTHERING is the root problem!
- this is what will cause an AI arms race
- Western governments do not trust China or Russia or North Korea (and vice versa)
- in other words,
- the first inevitable
-
it's about that we have no way of making sure that it will 00:19:25 have our best interest in mind
- If AI begins to think autonomously,
- with its enormous pool of analytic power
- and if
- it begins to evolve emotions of fear
- and it feels humans pose a threat to it or the rest of the natural world
- it could act against human interest and attempt to destroy it
- If AI is able to control its environment
- either coupled with robotics,
- or controlling human actors
- it can harm humanity and human civilization
- If AI begins to think autonomously,
-
there is a scenario 00:18:21 uh possibly a likely scenario where we live in a Utopia where we really never have to worry again where we stop messing up our our planet because intelligence is not a bad commodity more 00:18:35 intelligence is good the problems in our planet today are not because of our intelligence they are because of our limited intelligence
-
limited (machine) intelligence
- cannot help but exist
- if the original (human) authors of the AI code are themselves limited in their intelligence
-
comment
- this limitation is essentially what will result in AI progress traps
- Indeed,
- progress and their shadow artefacts,
- progress traps,
- is the proper framework to analyze the existential dilemma posed by AI
-
-
- Interview with Mo Gawdat
- former Google chief business officer
- warning about the existential danger of AI
- including why he claims that AI is
- intelligent
- conscious
- and will soon feel emotions such as fear
- and take steps at self preservation
- Interview with Mo Gawdat
-
they feel 00:09:58 emotions
- claim
- AI feels emotions
- "in my work I describe everything with equations
- fear is a very simple equation
- fear is a a moment in the future
- that is less safe than this moment
- fear is a a moment in the future
- that's the logic of fear
- Even though it appears very irrational,
- machines are capable of making that logic
- They're capable of saying
- if a tidal wave is approaching a data center
- the machine will say
- that will wipe out my code,
- not today's machines
- but very very soon and
- that will wipe out my code,
- we feel fear and
- puffer fish feels fear
- we react differently
- a puffer fish will puff and
- we will go for fight or flight
- the machine might decide to replicate its data to another data center
- different reactions different ways of feeling the emotion
- but nonetheless they're all motivated by fear
- I would dare say that AI will feel more emotions than we will ever do
- if you just take a simple extrapolation,
- we feel more emotions than a puffer fish
- because we have the cognitive ability to understand the future
- so we can have optimism and pessimism,
- emotions puffer fish would never imagine
- similarly if we follow that path of artificial intelligence
- it is bound to become more intelligent than humans very soon
- then then with that wider intellectual horsepower
- they probably are going to be pondering concepts we never understood, and
- hence if you follow the same trajectory
- they might actually end up having more emotions than we will ever feel
- if you just take a simple extrapolation,
- AI feels emotions
- claim
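The "fear equation" quoted above, fear as a predicted future moment that is less safe than the present one, can be sketched as a toy condition. Everything below (the function names, the safety numbers, the replication reaction) is an illustrative assumption, not anything stated in the interview:

```python
# Toy sketch of the "fear equation" from the quote: fear fires when a
# predicted future moment is less safe than the present moment, and the
# reaction (here, replicating data elsewhere) is agent-specific.
# All names and numbers are illustrative assumptions.

def fear_triggered(safety_now: float, predicted_safety: float) -> bool:
    """Fear = a moment in the future that is less safe than this moment."""
    return predicted_safety < safety_now

def react(safety_now: float, predicted_safety: float) -> str:
    # Different agents react differently (puff, fight-or-flight, replicate);
    # this toy agent copies its data to another data center.
    if fear_triggered(safety_now, predicted_safety):
        return "replicate data to another data center"
    return "carry on"

# A tidal wave forecast drops predicted safety below the current level.
print(react(safety_now=0.9, predicted_safety=0.2))
```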
-
the other thing is that you suddenly realize there is a saint that sentience to them
- claim
- AI is sentient (alive) because
- A lot of people think AI will never be alive
- what is the definition of life?
- religion will tell you a few things
- medicine will tell you other things
- but if we define being sentient as
- engaging in life with free will and
- with a sense of awareness of
- where you are in life and
- what surrounds you and
- to have a beginning of that life and
- an end to that life
- then AI is sentient in every way
- there is a free will
- and there is evolution
- there is agency
- so they can affect their decisions in the world
- and there is a very deep level of consciousness
- maybe not in the spiritual sense yet but
- if you define consciousness as
- a form of awareness of oneself and ones surrounding
- and you know others
- then AI is definitely aware"
- AI is sentient (alive) because
- claim
-
one day um Friday after lunch I am going back to my office and one of them in front of my eyes you know lowers the arm and picks a 00:07:12 yellow ball
- story
- Mo Gawdat tells the story of an epiphany of machine sentience
- " one day um Friday after lunch I am going back to my office and
- one of them in front of my eyes lowers the arm and picks a soft yellow ball
- which again is a coincidence
-
it's not science at all it's
-
like if you keep trying a million times your one time it will be right
-
and it shows it to the camera it's locked as a yellow ball and
- I joke about it you know going to the third floor saying
- hey we spent all of those millions of dollars for a yellow ball and
- Monday morning, every one of them is picking every yellow ball
- a couple of weeks later every one of them is picking everything right and
- it it hit me very very strongly
- the speed
- the capability
- understand that we take those things for granted
- but for a child to be able to pick a yellow ball
- is a mathematical / spatial calculation
- with muscle coordination
- with intelligence
- it is not a simple task at all to cross the street
- it's not a simple task at all
- to understand what I'm telling you
- and interpret it
- and build Concepts around it
- we take those things for granted
- but there are enormous Feats of intelligence"
- is a mathematical / spatial calculation
-
- story
-
the change is not we're not talking 20 40. we're talking 2025 2026
- comment
- a scary thought that our world will be radically transformed
- not in 20 to 40 years
- but in 2 or 3 years!
- a scary thought that our world will be radically transformed
- comment
-
it could be a few months away
- claim
- AI can become more intelligent than humans in a few months (in 2023?)
-
we've talked we always said don't put them on the open internet until we know 00:01:54 what we're putting out in the world
- AI arms race
- tech companies made a promise
- not to put AI onto the open internet until
- they know how it's impacting society
- Unfortunately, tech companies
- failed at regulating themselves
- and now, capitalism has started an AI arms race
- with unpredictable results as AI harvests more data
- and grows its artificial intelligence unregulated
- with each passing day
- tech companies made a promise
- AI arms race
-
AI could manipulate or figure out a way to kill humans your 10 years time will be hiding from the machines if you don't have kids maybe wait a number of years 00:01:43 just so that we have a bit of certainty
- claim
- AI could find a way to kill humans in the next few years
- claim
-
it is beyond an emergency it's the biggest thing we need to do today it's bigger than climate change that the former Chief business Officer 00:01:04 of Google X an AI expert and best-selling author he's on a mission to save the world from AI before it's too late
- claim
- AI dilemma is bigger problem than climate change
-
they feel emotions they're alive
- claim
- AI is conscious
- AI feels emotion
- claim
Tags
- claim - AI - conscious
- AI arms race
- AI emotions
- AI - SIMPOL
- AI problem
- progress trap
- AI vs AGI
- AI dilemma
- 1st inevitable
- AI - Deep Humanity
- Mo Gawdat
- AI experience emotions
- claim - AI - smarter than humans
- first inevitable
- AI sentient
- claim - AI - threaten humanity
- claim - AI - kill humans
- HW - Human Wisdom
- AI progress trap
- othering
- claim - AI - bigger problem than climate change
- AI unregulated
- limited artificial intelligence
- limited machine intelligence
- quote - AI
- machine sentience
- AI experience fear
- claim
- tax AI companies
- quote
- AI sentience
- spiderman quote
- AI threat
- claim - AI
- unregulated AI
- AI smarter than humans
- Ronald wright - quote
- AI exponential change
- claim AI
- Human Wisdom
- AI - othering
- quote - AI - arms race
-
- May 2023
-
zettelkasten.de zettelkasten.de
-
communication partners
Super interesting that Luhmann referred to his zettelkasten as a communication partner explicitly himself.
Also interesting given that AI models are easier to train now, with several models already open sourced, which allows actual interaction with your notes! Would love to see where it goes.
-
-
docdrop.org docdrop.org
-
I would submit that were we to find ways of engineering our quote-unquote ape brains um what would all what what would be very likely to happen would not be um 00:35:57 some some sort of putative human better equipped to deal with the complex world that we have it would instead be something more like um a cartoon very much very very much a 00:36:10 repeat of what we've had with the pill
- Comment
- Mary echos Ronald Wright's progress traps
- Comment
-
with their new different and perhaps bigger brains the AIS of the future may prove themselves to be better adapted to 00:19:05 life in this transhuman world that we're in now
- comment
- Is this not a category error in classifying inert technology as life?
- When does an abiotic human cultural artefact become a living form?
- comment
-
-
-
Deep Learning (DL) A Technique for Implementing Machine LearningSubfield of ML that uses specialized techniques involving multi-layer (2+) artificial neural networksLayering allows cascaded learning and abstraction levels (e.g. line -> shape -> object -> scene)Computationally intensive enabled by clouds, GPUs, and specialized HW such as FPGAs, TPUs, etc.
[29] AI - Deep Learning
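The layered-abstraction idea in the definition above (line -> shape -> object) can be sketched as a minimal forward pass through a 2-hidden-layer network, the smallest structure the "multi-layer (2+)" definition covers. Every name, size, and the NumPy implementation choice below is an illustrative assumption, not anything from the source:

```python
import numpy as np

# Minimal sketch of a multi-layer (2 hidden layers) network.
# Each layer re-represents its input at a higher level of abstraction,
# loosely mirroring the line -> shape -> object cascade in the note.

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity between layers; without it the layers would
    # collapse into a single linear map and no cascade would occur.
    return np.maximum(0.0, x)

# Illustrative weights: 8 raw inputs -> 16 low-level features
# -> 8 mid-level features -> 3 output scores.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 8))
W3 = rng.normal(size=(8, 3))

def forward(x):
    h1 = relu(x @ W1)   # low-level features ("lines")
    h2 = relu(h1 @ W2)  # compositions of those features ("shapes")
    return h2 @ W3      # task scores ("objects")

x = rng.normal(size=(1, 8))
scores = forward(x)
print(scores.shape)  # (1, 3)
```

Training such a stack (backpropagation over many layers) is what makes deep learning computationally intensive, hence the note's point about GPUs and specialized hardware.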
-
-
en.wikiquote.org en.wikiquote.org
-
The object of the present volume is to point out the effects and the advantages which arise from the use of tools and machines ;—to endeavour to classify their modes of action ;—and to trace both the causes and the consequences of applying machinery to supersede the skill and power of the human arm.
[28] AI - precedents...
-
-
donaldclarkplanb.blogspot.com donaldclarkplanb.blogspot.com
-
Exceptionalism is a useful perspective to gauge some of the reactions to more widespread algo's.
-
-
openai.com openai.comGPT-41
-
Safety & alignment
[25] AI - Alignment
-
-
ourworldindata.org ourworldindata.orgBooks1
-
A book is defined as a published title with more than 49 pages.
[24] AI - Bias in Training Materials
-
-
www.notepage.net www.notepage.net
-
Epidemiologist Michael Abramson, who led the research, found that the participants who texted more often tended to work faster but score lower on the tests.
[21] AI - Skills Erosion
-
-
www.technologyreview.com www.technologyreview.com
-
An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.
[21] AI Nuances
-
-
serokell.io serokell.io
-
According to him, there are several goals connected to AI alignment that need to be addressed:
[20] AI - Alignment Goals
-
-
cointelegraph.com cointelegraph.com
-
The AI developers came under intense scrutiny in Europe recently, with Italy being the first Western nation to temporarily ban ChatGPT
[19] AI - Legal Response
-
-
www.visualcapitalist.com www.visualcapitalist.com
-
The following table lists the results that we visualized in the graphic.
[18] AI - Increased sophistication
-
-
arxiv.org arxiv.org
-
A novel architecture that makes it possible for generativeagents to remember, retrieve, reflect, interact with otheragents, and plan through dynamically evolving circumstances.The architecture leverages the powerful prompting capabili-ties of large language models and supplements those capa-bilities to support longer-term agent coherence, the abilityto manage dynamically-evolving memory, and recursivelyproduce more generations.
AI is turning humans to look inward for a new take on life as our identities and roles within society are being profoundly disrupted and transformed by Artificial Intelligence systems that can replicate or exhibit human-like behavior. It is also a great reminder of how complex social interactions are.
-
-
futureoflife.org futureoflife.org
-
Expand technical AI safety research funding
Private sector investment in AI research under-emphasises safety and security.
Most public investment to date has been very narrow, and the paper recommends a significant increase in public funding for technical AI safety research:
- Alignment of system performance with intended outcomes
- Robustness and assurance
- Explainability of results
-
Introduce measures to prevent and track AI model leaks
The authors see unauthorised leakage of AI Models as a risk not just to the commercial developers but also for unauthorised use. They recommend government-mandated watermarking for AI models.
-
Establish liability for AI-caused harm
AI systems can perform in ways that may be unforeseen, even by their developers, and this risk is expected to grow as different AI systems become interconnected.
There is currently no clear legal framework in any jurisdiction to assign liability for harm caused by such systems.
The paper recommends the development of a framework for assigning liability for AI-derived harms, and asserts that this will incentivise profit-driven AI developers to use caution.
-
Regulate organizations’ access to computational power
Training of state-of-the-art models consumes vast amounts of computational power, limiting their deployment to only the best-resourced actors.
To prevent reckless training of high risk models the paper recommends that governments control access to large amounts of specialised compute resource subject to a risk assessment, with an extension of "know your customer" legislation.
-
Mandate robust third-party auditing and certification for specific AI systems
Some AI systems will be deployed in contexts that imply risks to physical, mental and/or financial health of individuals, communities or even the whole of society.
The paper recommends that such systems should be subject to mandatory and independent audit and certification before they are deployed.
-
Establish capable AI agencies at national level
Article notes: * UK Office for Artificial Intelligence * EU legislation in progress for an AI Board * US pending legislation (ref Ted Lieu) to create a non-partisan AI Commission tasked with establishing a regulatory agency
Recommends Korinek's blueprint for an AI regulatory agency:
- Monitor public developments in AI progress
- Mandate impact assessments of AI systems on various stakeholders
- Establish enforcement authority to act upon risks identified in impact assessments
- Publish generalized lessons from the impact assessments
-
Develop standards for identifying and managing AI-generated content and recommendations
A coherent society requires a shared understanding of what is fact. AI models are capable of generating plausible-sounding but entirely wrong content.
It is essential that the public can clearly distinguish content by human creators from synthetic content.
Policy should therefore focus on:
- funding for development of ways to clearly mark digital content provenance
- laws to force disclosure of interactions with a chatbot
- laws to require AI to be deployed in ways that are in the best interest of the user
- laws that require 'duty of care' when AI is deployed in circumstances where a human actor would have a fiduciary responsibility
-
-
www.insidehighered.com www.insidehighered.com
-
Oregon State University will build a state-of-the-art artificial intelligence research center with a supercomputer and a cyberphysical playground.
-
-
www.lesswrong.com www.lesswrong.com
-
must have an alignment property
It is unclear what form the "alignment property" would take, and most importantly how such a property would be evaluated especially if there's an arbitrary divide between "dangerous" and "pre-dangerous" levels of capabilities and alignment of the "dangerous" levels cannot actually be measured.
-
-
-
Presentation
-
-
toolbuilder.ai toolbuilder.ai
-
Create tools
-
-
openai.com openai.comGPT-41
-
Limitations
GPT models are prone to "hallucinations", producing false "facts" and committing errors of reasoning. OpenAI claims that GPT-4 is significantly better than predecessor models, scoring between 70-82% on their internal factual evaluations on various subjects, and 60% on adversarial questioning.
-
-
koneksa-mondo.nl koneksa-mondo.nl
-
Marco also goes through US and Chinese legislation. See also the links in the comments regarding China.
-
-
www.theguardian.com www.theguardian.com
-
Excellent article by Naomi Klein on AI as a new stage of exploitation and expropriation and of the growing power of the tech corporations. Possible ways to push back: refusing to take part, demanding transparency, and fighting the illegal appropriation of intellectual property in court. https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
-
-
-
We ought not to dismiss the non-learning applications of generative AI because that is exactly where the best uses of it for learning are likely to spring.
Interesting.
-
we need sustained attention, experimentation, and refinement in order to reap the benefit of a particular tool or approach. The trendiness factor often detracts from that sustained attention.
Great point.
-
-
incidentdatabase.ai incidentdatabase.ai
-
-
https://web.archive.org/web/20230507143145/https://incidentdatabase.ai/
AI Incident database, a range of AI incidents and adjacent stories/topics
-
-
chatgpt.xingacgn.com chatgpt.xingacgn.com
-
This was recommended by 王浩源.
-
-
www.oneusefulthing.org www.oneusefulthing.org
-
Let me introduce the six large language models currently available
ChatGPT / GPT-3.5
Yes: This is the free version launched in November; very fast and fairly reliable for writing and coding tasks.
No: It is not connected to the internet. If you ask it to look up anything from 2021 onward, it gets it wrong. Not good at math.
ChatGPT / GPT-4
Yes: A new product, currently only available to paying customers. At times astonishingly powerful; one of the most capable models. Slower, but full-featured.
No: Also not connected to the internet, but better than other systems at avoiding nonsense, and performs better on math problems.
ChatGPT / Plugins
Yes: In early testing, this ChatGPT model can connect to various internet services through plugins. Novel, but still has some problems.
No: As a system in early testing, its capabilities are not yet fully clear, but it will let ChatGPT connect to the internet.
Bing AI
Yes: Already connected to the internet; extremely powerful and slightly strange. Creative mode uses GPT-4; the other modes (Precise, Balanced) don't seem to work as well.
No: Choosing the wrong mode leads to poor results (Creative mode is the most comprehensive). An AI system with its own personality quirks.
Google Bard
Yes: The current model is not very good. It may become very powerful in the future.
No: Since it is Google, you would expect it not to lie. It is more prone to making things up than other models.
Anthropic Claude
Yes: Comparable to GPT-3.5, but feels more sensible to use. Relatively little known.
No: Also not connected to the internet.
-
-
maggieappleton.com maggieappleton.com
-
They're just interim artefacts in our thinking and research process.
Weave models into your processes; don't shove them between yourself and the world by having them create the output. Doing that diminishes yourself and your own agency. Vgl [[Everymans Allemans AI 20190807141523]]
-
A big part of this limitation is that these models only deal with language.And language is only one small part of how a human understands and processes the world.We perceive and reason and interact with the world via spatial reasoning, embodiment, sense of time, touch, taste, memory, vision, and sound. These are all pre-linguistic. And they live in an entirely separate part of the brain from language.Generating text strings is not the end-all be-all of what it means to be intelligent or human.
Algogens are disconnected from reality. And, this seems a key point, our own cognition and relation to reality is not just through language (and by extension not just through the language center in our brain): spatial awareness, embodiment, senses, and time awareness are all not language. It is overly reductionist to treat intelligence or even humanity as language only.
-
-
www.insidehighered.com www.insidehighered.com
-
Should we deepen our emphasis on creativity and critical thinking in hopes that our humanness will prevail?
Yes, yes we should.
-
-
www.downes.ca www.downes.ca
-
ICs as hardware versions of AI. Interesting this is happening. Who are the players, what is on those chips? In a sense this is also full circle for neural networks: back in the late 80s / early 90s at uni, neural networks were made in hardware, before software simulations took over as they scaled much better both in number of nodes and in number of layers between inputs and outputs. #openvraag Any open source hardware on the horizon for AI? #openvraag a step towards an 'AI in the wall' Vgl [[AI voor MakerHouseholds 20190715141142]] [[Everymans Allemans AI 20190807141523]]
-
-
wattenberger.com wattenberger.com
-
https://web.archive.org/web/20230502113317/https://wattenberger.com/thoughts/boo-chatbots
This seem like a number of useful observations wrt interacting with LLM based tools, and how to prompt them. E.g. I've seen mention of prompt marketplaces where you can buy better prompts for your queries last week. Which reinforces some of the points here. Vgl [[Prompting skill in conversation and AI chat 20230301120740]] and [[Prompting valkuil instrumentaliseren conversatiepartner 20230301120937]]
-