- Jan 2024
-
cacm.acm.org cacm.acm.org
-
Newspaper and magazine publishers could curate their content, as could the limited number of television and radio broadcasters. As cable television advanced, there were many more channels available to specialize and reach smaller audiences. The Internet and WWW exploded the information source space by orders of magnitude. For example, platforms such as YouTube receive hundreds of hours of video per minute. Tweets and Facebook updates must number in the hundreds of millions if not billions per day. Traditional media runs out of time (radio and television) or space (print media), but the Internet and WWW run out of neither. I hope that a thirst for verifiable or trustable facts will become a fashionable norm and part of the soluti
Broadcast/Print are limited by time and space; is digital infinite?
-
-
media.dltj.org media.dltj.org
-
So, Sam decided, why not count the adverts he himself saw. It's just one number and applies only to the editor at a marketing magazine living in London on one arbitrary day. And what I saw were 93 ads I tried to be as open as I could about the fact that it's likely that I didn't notice every ad I could have done, but equally, I didn't miss that many, I don't think Sam also persuaded other people in the industry to do their own count. The most I've seen is 100 and 54. And I think I was quite generous. The lowest I've seen is 26. The most interesting version of the experiment was that I tasked someone to see as many as he could in a day and he got to 512 what a way to spend the day. And you will have noticed it's nowhere close to 10,000 ads.
One person counted 93 per day
-
Sam Anderson didn't marketing gurus have been making claims about advertising numbers for a very long time and Sam followed the trail all the way back to the 19 sixties. The very start of it was this piece of research by a man called Edwin Abel, who was a marketer for General Foods. Edwin Abel wanted to do a rough calculation on how many adverts people saw. He looked at how many hours of TV and radio people watched or listened to every day and worked out the average number of ads per hour on those mediums. So he multiplied those two numbers together to come up with this number. And this is our 1500 this 1500 ads a day number is still kicking around today. Often as the lowest of the big numbers in the blogosphere. And it is potentially a kind of legitimate calculation for the number of ads seen or heard, albeit from a quite different time in history. But there's some fine print to consider that is a number for a family of four. So if you divided that between a family of four, you'd actually be looking at something like 375 ads in this estimation.
Research from the 1960s suggests 375 ads/day
-
One of the numbers we quoted was a high estimate at the time, which was 5000 ads per day and that's the number that got latched on to. So this 5000 number was not for the number of adverts that a consumer actually sees and registers each day, but rather for the clutter as Walker calls it, these large numbers are not counts of the number of ads that people pay attention to.
Number of impressions of advertising
This advertising impressions may roll over us, but we don't actually see and register them.
-
-
www.wnycstudios.org www.wnycstudios.org
-
One of the arguments that they make is that the original parents' rights crusade in the US was actually in opposition to the effort to ban child labor. The groups really driving that opposition, like the National Association of Manufacturers, these conservative industry groups, what they were opposed to was an effort to muck up what they saw as the natural state of affairs that would be inequality.
Earlier, "parents rights" promoted by National Association of Manufacturers
The "conservative industry group" didn't want to have an enforced public education mandate take away from the inequitable "natural state of affairs".
-
Well, there are some people who have never liked the idea of public education because it's the most socialist thing that we do in this country. We tax ourselves to pay for it and everybody gets to access it. That's not a very American thing to do. Then you have conservative religious activists. They see a real opening, thanks to a whole string of Supreme Court cases to use public dollars to fund religious education. Then you have people who don't believe in public education for other reasons. Education is the single largest budget item in most states. If your goal is to cut taxes way back, if your goal is to give a handout to the wealthiest people in your state, spending less on education is going to be an absolute requirement. The same states that are enacting these sweeping school voucher programs, if you look at states like Iowa and Arkansas, they've ushered in huge tax cuts for their wealthiest residents. That means that within the next few years, there will no longer be enough funds available to fund their public schools, even at a time when they have effectively picked up the tab for affluent residents of the state who already send their kids to private schools.
Public education is the most socialist program in the U.S.
Defunding it is seen as a positive among conservative/libertarian wings. Add tax cuts for the wealthy to sway the government more towards affluent people.
-
As unregulated as these programs are, that minimal data is something we have access to. There's been great coverage of this, including a recent story in The Wall Street Journal by an education reporter named Matt Barnum, that what we are seeing in state after state is that in the early phases of these new programs, that the parents who are most likely to take advantage of them are not the parents of low-income and minority kids in the public schools despite that being the big sales pitch, that instead they are affluent parents whose kids already attended private school. When lawmakers are making the case for these programs, they are making the Moms for Liberty arguments.
Benefits of state-based school choice programs are going to affluent parents
The kinds are already going to private schools; the money isn't going to low-income parents. As a result, private schools are emboldened to raise tuition.
-
The Heritage Foundation has been an early and very loud backer of Moms for Liberty. There, I think it's really instructive to see that they are the leader of the project 2025 that's laying out the agenda for a next Trump administration. You can look at their education platform, it is not about taking back school boards. It's about dismantling public education entirely.
Heritage Foundation backing this effort as part of a goal to "dismantle" public education
-
I would point you to something like recent Gallup polling. We know, it's no secret that American trust and institutions has plummeted across the board, but something like only 26% of Americans say that they have faith in public schools. Among Republicans, it's even lower, it's 14%. Groups like Moms for Liberty have played a huge part in exacerbating the erosion of that trust.
Gallup polling shows drop in confidence in public schools, especially among Republicans
Losses by Moms-for-Liberty candidates reinforce the notion that there is partisanship in public education.
-
Annotations are on the Transcript tab of this web page
Abstract
Last month, it seemed like Moms for Liberty, the infamous political group behind the recent push for book bans in schools across the country, might be on the wane. In November, a series of Moms for Liberty endorsed candidates lost school board elections, and in local district elections, the group took hit after hit. In Iowa, 12 of 13 candidates backed by the Moms were voted out, and in Pennsylvania, Democrats won against at least 11 of their candidates. But recently, Moms for Liberty co-founder Tiffany Justice claimed in an interview, "we're just getting started," boasting about the group's plans to ramp up efforts in 2024.
-
-
media.dltj.org media.dltj.org
-
mail trap
Mailtrap email testing
-
Papercut is a Windows application that just sits in the corner of your screen and your system notification tray and it's an SMTP server that doesn't send email every email that you send it intercepts
Papercut: Windows-based SMTP server/interceptor
ChangemakerStudios/Papercut-SMTP: Papercut SMTP -- The Simple Desktop Email Server
-
mailjet markup language
Origins of Mailjet Markup Language for richly formatted emails
MJML → CSHTML (Razor) → HTML
-
John Gilmore John is a very interesting person he's one of those people who I agree with everything he does right up to the point where I think he turns into a bit of a dick and then he kind of stops just past that point John was employee number five at Sun Microsystems he was one of the founders of the Electronic Frontier Foundation he is a uh I've seen him described as an extreme libertarian cipherpunk activist and uh the most famous quote I've seen from John is this one the net interprets censorship as damage and Roots around it if you start blocking ports because you don't like what people are doing the internet is designed to find another way rounded and you know he has taken this philosophy to an extreme he runs an open mail relay if you go to hop.toad.com it will accept email from anyone on the planet on Port 25 and it will deliver it doesn't care who you are doesn't care where you came from which is kind of the libertarian ethos in a nutshell
John Gilmore's open SMTP relay
-
I went through to see how many email addresses I could register
Attempt to register quirky usernames at major email providers
The RFC allows for "strange" usernames, but some mail providers are more restrictive.
-
in 1978 Gary Turk was working for the digital Equipment Corporation he was a sales rep and his job was to sell these the deck system 20. now this thing had built-in arpanet protocol support it was like you don't have to do anything special you could plug it into a network and it would just work and rightly or wrongly Gary thought well I reckon people who are on the upper net might be interested in knowing about this computer and digital didn't have a whole lot of sales going on on the US West Coast they had a big office on the East Coast but West Coast you know California Portland those kind of places they didn't really have much of a presence so he got his assistant to go through the arpanet directory and type in the email addresses of everybody on the American West Coast who had an email address 393 of them and put them now at this point they overflowed the header field so all the people who got this email got an email which started with about 250 other people's email addresses and then right down at the end of it it says hey we invite you to come and see the deck system 2020
Gary Turk "invented" spam in 1978
-
one person whose Innovation is still a significant part of the way we work with it was this guy it's Ray Tomlinson and he was working on an opponent Mail system in 1971 and Rey invented at Rey is the person who went well hang on if we know the user's name and we know the arpanet host where they host their email we could put an at in the middle because it's Alice at the machine
Ray Tomlinson invented the use of @ in 1971
-
this is the SMTP specification latest version October 2008 about how we should try to deliver email and what it says is if it doesn't work first time you should wait at least 30 minutes and then you should keep trying for four or five days before you finally decide that the email didn't work
Email was not intended to be instantly delivered
The standards say wait at least 30 minutes if initial delivery failed, then try again for 4-5 days.
-
June 23, 2023
Abstract
We're not quite sure exactly when email was invented. Sometime around 1971. We do know exactly when spam was invented: May 3rd, 1978, when Gary Thuerk emailed 400 people an advertisement for DEC computers. It made a lot of people very angry... but it also sold a few computers, and so junk email was born.
Fast forward half a century, and the relationship between email and commerce has never been more complicated. In one sense, the utopian ideal of free, decentralised, electronic communication has come true. Email is the ultimate cross-network, cross-platform communication protocol. In another sense, it's an arms race: mail providers and ISPs implement ever more stringent checks and policies to prevent junk mail, and if that means the occasional important message gets sent to junk by mistake, then hey, no big deal - until you're sending out event tickets and discover that every company who uses Mimecast has decided your mail relay is sending junk. Marketing teams want beautiful, colourful, responsive emails, but their customers' mail clients are still using a subset of HTML 3.2 that doesn't even support CSS rules. And let's not even get started on how you design an email when half your readers will be using "dark mode" so everything ends up on a black background.
Email is too big to change, too broken to fix... and too important to ignore. So let's look at what we need to know to get it right. We'll learn about DNS, about MX and DKIM and SPF records. We'll learn about how MIME actually works (and what happens when it doesn't). We'll learn about tools like Papercut, Mailtrap, Mailjet, Foundation, and how to incorporate them into your development process. If you're lucky, you'll even learn about UTF-7, the most cursed encoding in the history of information systems. Modern email is hacks top of hacks on top of hacks... but, hey, it's also how you got your ticket to be here today, so why not come along and find out how it actually works?
-
-
media.dltj.org media.dltj.org
-
Jan 3, 2024
Vint Cerf is known for his pioneering work as one of the fathers of the internet. He now serves as the vice president and chief internet evangelist for Google where he furthers global policy development and accessibility of the internet. He shares his Brief But Spectacular take on the future of the internet.
This must have been cut up from a much longer piece. The way this has been edited together makes for a short video, but the concepts are all over the place.
-
About two-thirds of the world's population have access to it. We have to understand how to make all of these applications literally accessible to everyone.
Universally accessible
-
The good part is that voices that might not have been heard can be heard in the Internet environment. The not-so-good thing is that some voices that you don't want to hear are also amplified, including truths and untruths. So we're being asked in some sense to pay for the powerful tool that we have available by using our brains to think critically about the content that we see.
Combating disinformation with literacy
For better or for worse—whether the driver was an idealistic view of the world or the effect of an experimental network that got bigger than anyone could imagine—the internet is a permissive technology. The intelligence of the network is built into the edges...the center core routes packets of data without understanding the contents. I think Cerf is arguing here that the evaluation of content is something best done at the edge, too...in the minds of the internet participants.
-
- Dec 2023
-
www.semafor.com www.semafor.com
-
Texas has a law called CUBI and Illinois has BIPA. They prevent me from even doing the scan on somebody to determine if they’re in the set. I think these are bad laws. They prevent a very useful, reasonable, completely sensible thing.The thing that people are worried about, I don’t think anyone is building. No one has been trying to build this ‘who are these random people?’
Meta’s CTO doesn’t know about Clearview AI
There are companies that are trying to build systems to recognize random faces.
-
-
ital.corejournals.org ital.corejournals.org
-
Open Source kinship with Librarianship
The open-source movement, while sharing some of the same civic ideals as librarianship, is not as motivationally coherent. Some corners of the movement are motivated by industrial or market concerns. Therefore, as open source emerges as a common option for many libraries, it is in the interests of the profession to establish, early on, the terms on which it will critically engage with open source.
The synergy between open source and librarianship seem natural, but this author points to a different motivation.
-
-
99percentinvisible.org 99percentinvisible.org
-
When the U.S. implemented standardized postage rates, the cost to mail letters became more predictable and less expensive. Now a person could pay to send a letter in advance and the recipient could get it without a fee. A stamp was proof that a sender had paid for the mail.
Postage stamps as a government document, proof of postalized rate paid
-
In the early years of the post office, mail was often sent cash on delivery, like a collect call. If someone sent you a letter, you paid for it when you picked it up at the post office. But postage was expensive and the system for calculating fees was complicated. It was hard to know what a letter would end up costing in the end.
Early mail was delivered C.O.D.
-
- Nov 2023
-
-
Tracking down citations to retracted publications has gotten easier.
NISO’s CREC project should make it even easier by automating some of the notification process: https://www.niso.org/standards-committees/crec
-
-
arstechnica.com arstechnica.com
-
The FTC has accused Kochava of violating the FTC Act by amassing and disclosing "a staggering amount of sensitive and identifying information about consumers," alleging that Kochava's database includes products seemingly capable of identifying nearly every person in the United States. According to the FTC, Kochava's customers, ostensibly advertisers, can access this data to trace individuals' movements—including to sensitive locations like hospitals, temporary shelters, and places of worship, with a promised accuracy within "a few meters"—over a day, a week, a month, or a year. Kochava's products can also provide a "360-degree perspective" on individuals, unveiling personally identifying information like their names, home addresses, phone numbers, as well as sensitive information like their race, gender, ethnicity, annual income, political affiliations, or religion, the FTC alleged.
“Capable of identifying nearly every person in the U.S.”
So you have nothing to hide?
-
-
media.dltj.org media.dltj.org
-
One of the ways that, that chat G BT is very powerful is that uh if you're sufficiently educated about computers and you want to make a computer program and you can instruct uh chat G BT in what you want with enough specificity, it can write the code for you. It doesn't mean that every coder is going to be replaced by Chad GP T, but it means that a competent coder uh with an imagination can accomplish a lot more than she used to be able to, uh maybe she could do the work of five coders. Um So there's a dynamic where people who can master the technology can get a lot more done.
ChatGPT augments, not replaces
You have to know what you want to do before you can provide the prompt for the code generation.
-
-
-
Up until this point, every publisher had focused on 'traffic at scale', but with the new direct funding focus, every individual publisher realized that traffic does not equal money, and you could actually make more money by having an audience who paid you directly, rather than having a bunch of random clicks for the sake of advertising.The ratio was something like 1:10,000. Meaning that for every one person you could convince to subscribe, donate, become a member, or support you on Patreon ... you would need 10,000 visitors to make the same amount from advertising. Or to put that into perspective, with only 100 subscribers, I could make the same amount of money as I used to earn from having one million visitors.
Direct subscription to independent publishers can beat advertising revenue
-
Again, this promise that personalized advertising would generate better results was just not happening. Every year, the ad performance dropped, and the amount of scale needed to make up for that decline was just out of reach for anyone doing any form of niche.So, yes, the level of traffic was up by a lot, but that still didn't mean we could make more money as smaller publishers. Third party display advertising has just never worked for smaller niche publishers.
Third-party personal ads drove traffic to sites, but not income
-
We have the difference between amateurs just publishing as a hobby, or professionals publishing as a business. And on the other vector we have whether you are publishing for yourself, or whether you are publishing for others.And these differences create very different focuses. For instance, someone who publishes professionally, but 'for themselves' is a brand. That's what defines a brand magazine.Meanwhile, independent publishers are generally professionals (or trying to be), who are producing a publication for others. In fact, in terms of focus, there is no difference between being an independent publisher and a regular traditional publisher. It's exactly the same focus, just at two very different sizes.Bloggers, however, were mostly amateurs, who posted about things as hobbies, often for their own sake, which is not publishing in the business sense.Finally, we have the teachers. This is the group of people who are not trying to run a publishing business, but who are publishing for the sake of helping others.
Publishing: profession versus amateur and for-you versus for-others
I think I aim DLTJ mostly for the amateur/for-others quadrant
-
There was no automatic advertising delivery. There was no personalization, or any kind of tracking. Instead, I go through all of this every morning, picking which ads I thought looked interesting today, and manually changing and updating the pages on my site.This also meant that, because there was no tracking, the advertising companies had no idea how many times an ad was viewed, and as such, we would only get paid per click.Now, the bigger sites had started to do dynamic advertising, which allowed them to sell advertising per view, but, as an independent publisher, I was limited to only click-based advertising.However, that was actually a good thing. Because I had to pick the ads manually, I needed to be very good at understanding my audience and what they needed when they visited my site. And so there was a link between audience focus and the advertising.Also, because it was click based, it forced me as an independent publisher to optimize for results, whereas a 'per view' model often encouraged publishers to lower their value to create more ad views.
Per-click versus per-view advertising in the 1900s internet
-
-
-
How will people build professional callouses if the early work that may be viewed as mundane essentials are taken over by AI systems? Do we risk living in the age of the last masters, the age of the last experts?
Professional callouses
This is a paragraph too far. There are many unnecessary "callouses" that have been removed from work, and we are better for it. Should we go back to the "computers" of the 1950s and 1960s...women whose jobs were to make mathematical calculations?
As technology advances, there are actions that are "pushed down the complexity stack" of what is assumed to exist and can be counted on.
-
I am even more attuned to creative rights. We can address algorithms of exploitation by establishing creative rights that uphold the four C’s: consent, compensation, control, and credit. Artists should be paid fairly for their valuable content and control whether or how their work is used from the beginning, not as an afterthought.
Consent, compensation, control, and credit for creators whose content is used in AI models
-
Generative AI systems that allow for biometric clones can easily exploit our likeness through the creation of synthetic media that propagate deep fakes. We need biometric rights that protect our faces and voices from algorithms of exploitation.
On the need for biometric rights to prevent activities like deep fakes
-
The nightmares of AI discrimination and exploitation are the lived reality of those I call the excoded
Defining 'excoded'
-
AI raises the stakes because now that data is not only used to make decisions about you, but rather to make deeply powerful inferences about people and communities. That data is training models that can be deployed, mobilized through automated systems that affect our fundamental rights and our access to whether you get a mortgage, a job interview, or even how much you’re paid. Thinking individually is only part of the equation now; you really need to think in terms of collective harm. Do I want to give up this data and have it be used to make decisions about people like me—a woman, a mother, a person with particular political beliefs?
Adding your data to AI models is a collective decision
-
- Oct 2023
-
media.dltj.org media.dltj.org
-
the name knot is a direct reference to actual knots tied on the rope. One end of the rope was then attached to a big spool which the rope was wound up on and the other end of the rope was attached to a type of triangular wooden board in a very specific way. The idea was actually pretty simple. The wooden board which was called the chip log was then thrown overboard from the back of the ship and one of the sides of the wooden ship had a lead weight to keep it vertical on the surface of the sea. As the ship then sped forward, the wood was supposed to stay mostly still on the surface of the water, with the rope quickly unwinding from the spool. On the rope, there were knots spaced out on regular intervals either every 47 feet and 3 inches or every 48 ft, depending on which source you use. The idea was that the sailors would use a small hourglass which measured either 28 seconds or 30 seconds, again, depending on the source and they would count how many knots went by in that time and the answer that they then derived from this exercise was the ship's speed, measured in literal knots
Origin of "knot" as a unit of length
Also "knot" is nautical miles per hour.
-
one being that a nautical mile is the meridian arc length, corresponding to one minute of a degree of latitude. In other words, the full circumference of the earth is 360° and a nautical mile is one 60th of one degree at the equator but that's a historical definition. Today, a nautical mile is defined using the metric system like everything else and it is exactly 1,852 meters.
Historic and current definitions of "nautical mile"
-
the United States officially started defining its own units by using the metric system as a reference, way back in 1893 with something called the Mendenhall Order. And that means that, for example, today, the inch is not defined as the length of three barley corns which genuinely was its official definition for centuries, instead the inch is now officially defined as 25.4 millimeters and all other US customary units have similar metric definitions
United States Customary Units defined with metric measurements
Also referred to as the Traditional System of Weights and Measures
-
- Aug 2023
-
hangingtogether.org hangingtogether.org
-
Some people thought that the Control Number represented a mechanism for identifying a record as having originated with OCLC and therefore subject to the cooperative’s record use policy.
Control number as mechanism for identifying a WorldCat record
Yet isn't this what the [[OCLCvClarivate]] lawsuit said?
-
Recently we recommended that OCLC declare OCLC Control Numbers (OCN) as dedicated to the public domain. We wanted to make it clear to the community of users that they could share and use the number for any purpose and without any restrictions. Making that declaration would be consistent with our application of an open license for our own releases of data for re-use and would end the needless elimination of the number from bibliographic datasets that are at the foundation of the library and community interactions. I’m pleased to say that this recommendation got unanimous support and my colleague Richard Wallis spoke about this declaration during his linked data session during the recent IFLA conference. The declaration now appears on the WCRR web page and from the page describing OCNs and their use.
OCLC Control Numbers are in the public domain
An updated link for the "page describing OCNs and their use" says:
The OCLC Control Number is a unique, sequentially assigned number associated with a record in WorldCat. The number is included in a WorldCat record when the record is created. The OCLC Control Number enables successful implementation and use of many OCLC products and services, including WorldCat Discovery and WorldCat Navigator. OCLC encourages the use of the OCLC Control Number in any appropriate library application, where it can be treated as if it is in the public domain.
-
- Jul 2023
-
www.rfc-editor.org www.rfc-editor.org
-
It's noteworthy that RFC 7258 doesn't consider that bad actors are limited to governments, and personally, I think many advertising industry schemes for collecting data are egregious examples of pervasive monitoring and hence ought also be considered an attack on the Internet that ought be mitigated where possible. However, the Internet technical community clearly hasn't acted in that way over the last decade.
Advertising industry schemes considered an attack
Stephen Farrell's perspective.
-
Many have written about how being under constant surveillance changes a person. When you know you're being watched, you censor yourself. You become less open, less spontaneous. You look at what you write on your computer and dwell on what you've said on the telephone, wonder how it would sound taken out of context, from the perspective of a hypothetical observer. You're more likely to conform. You suppress your individuality. Even though I have worked in privacy for decades, and already knew a lot about the NSA and what it does, the change was palpable. That feeling hasn't faded. I am now more careful about what I say and write. I am less trusting of communications technology. I am less trusting of the computer industry.
How constant surveillance changes a person
Bruce Schneier's perspective.
-
-
arxiv.org arxiv.org
-
A second, complementary, approach relies on post-hoc machine learning and forensic anal-ysis to passively identify statistical and physical artifacts left behind by media manipulation.For example, learning-based forensic analysis techniques use machine learning to automati-cally detect manipulated visual and auditory content (see e.g. [94]). However, these learning-based approaches have been shown to be vulnerable to adversarial attacks [95] and contextshift [96]. Artifact-based techniques exploit low-level pixel artifacts introduced during synthe-sis. But these techniques are vulnerable to counter-measures like recompression or additivenoise. Other approaches involve biometric features of an individual (e.g., the unique motionproduced by the ears in synchrony with speech [97]) or behavioral mannerisms [98]). Biomet-ric and behavioral approaches are robust to compression changes and do not rely on assump-tions about the moment of media capture, but they do not scale well. However, they may bevulnerable to future generative-AI systems that may adapt and synthesize individual biometricsignals.
Examples of methods for detecting machine generated visual media
-
tabula rasa
Latin for "scraped tablet" meaning "clean slate". Tabula rasa | Britannica
-
he new tools have sparked employment concerns for creative occupations suchas composers, designers, and writers. This conflict arises because SBTC fails to differentiatebetween cognitive activities like analytical work and creative ideation. Recent research [82, 83]demonstrates the need to quantify the specific activities of various artistic workers before com-paring them to the actual capabilities of technology. A new framework is needed to characterize
Generative AI straddles analytical work and creative ideation
the specific steps of the creative process, precisely which and how those steps might be impacted by generative AI tools, and the resulting effects on workplace requirements and activities of varying cognitive occupations.
Unlike previous automation tools (which took on repetitive processes), generative AI encroaches on some parts of the creative process.
SBTC: Skill-Biased Technological Change framework
-
First, under a highly permissive view, theuse of training data could be treated as non-infringing because protected works are not directlycopied. Second, the use of training data could be covered by a fair-use exception because atrained AI represents a significant transformation of the training data [63, 64, 65, 66, 67, 68].1Third, the use of training data could require an explicit license agreement with each creatorwhose work appears in the training dataset. A weaker version of this third proposal, is to atleast give artists the ability to opt-out of their data being used for generative AI [69]. Finally,a new statutory compulsory licensing scheme that allows artworks to be used as training databut requires the artist to be remunerated could be introduced to compensate artists and createcontinued incentives for human creation [70].
For proposals for how copyright affects generative AI training data
- Consider training data a non-infringing use
- Fair use exception
- Require explicit license agreement with each creator (or an opt-out ability)
- Create a new "statutory compulsory licensing scheme"
-
AI-generated content may also feed future generative models, creating a self-referentialaesthetic flywheel that could perpetuate AI-driven cultural norms. This flywheel may in turnreinforce generative AI’s aesthetics, as well as the biases these models exhibit.
AI bias becomes self-reinforcing
Does this point to a need for more diversity in AI companies? Different aesthetic/training choices leads to opportunities for more diverse output. To say nothing of identifying and segregating AI-generated output from being used i the training data of subsequent models.
-
In traditional artforms characterized by direct manipulation [32]of a material (e.g., painting, tattoo, or sculpture), the creator has a direct hand in creating thefinal output, and therefore it is relatively straightforward to identify the creator’s intentions andstyle in the output. Indeed, previous research has shown the relative importance of “intentionguessing” in the artistic viewing experience [33, 34], as well as the increased creative valueafforded to an artwork if elements of the human process (e.g., brushstrokes) are visible [35].However, generative techniques have strong aesthetics themselves [36]; for instance, it hasbecome apparent that certain generative tools are built to be as “realistic” as possible, resultingin a hyperrealistic aesthetic style. As these aesthetics propagate through visual culture, it can bedifficult for a casual viewer to identify the creator’s intention and individuality within the out-puts. Indeed, some creators have spoken about the challenges of getting generative AI modelsto produce images in new, different, or unique aesthetic styles [36, 37].
Traditional artforms (direct manipulation) versus AI (tools have a built-in aesthetic)
Some authors speak of having to wrestle control of the AI output from its trained style, making it challenging to create unique aesthetic styles. The artist indirectly influences the output by selecting training data and manipulating prompts.
As use of the technology becomes more diverse—as consumer photography did over the last century, the authors point out—how will biases and decisions by the owners of the AI tools influence what creators are able to make?
To a limited extent, this is already happening in photography. The smartphones are running algorithms on image sensor data to construct the picture. This is the source of controversy; see Why Dark and Light is Complicated in Photographs | Aaron Hertzmann’s blog and Putting Google Pixel's Real Tone to the test against other phone cameras - The Washington Post.
-
n order to be considered meaningful human control, a generative system shouldbe capable of incorporating a human author’s intent into its output. If a user starts with no spe-cific goal, the system should allow for open-ended, curiosity-driven exploration. As the user’sgoal becomes clearer through interaction, the system should be able to both guide and deliverthis intent. Such systems should have a degree of predictability, allowing users to graduallyunderstand the system to the extent that they can learn to anticipate the results of their actions.Given these conditions, we can consider the human user as accountable for the outputs of thegenerative system. In other words, MHC is achieved if human creators can creatively expressthemselves through the generative system, leading to an outcome that aligns with their inten-tions and carries their personal, expressive signature. Future work is needed to investigate inwhat ways generative systems and interfaces can be developed that allow more meaningful hu-man control by adding input streams that provide users fine-grained causal manipulation overoutputs.
Meaningful Human Control of AI
A concept originally from autonomous weapons, MHC is a design concept where the tool gradually adapts its output to the expectations of its users. The results are the a creative output that "aligns with [the users'] intentions and carries their personal, expressive signature."
-
Anthropomorphizing AI can pose challenges to the ethicalusage of this technology [12]. In particular, perceptions of human-like agency can underminecredit to the creators whose labor underlies the system’s outputs [13] and deflect responsibil-ity from developers and decision-makers when these systems cause harm [14]. We, therefore,discuss generative AI as a tool to support human creators [15], rather than an agent capableof harboring its own intent or authorship. In this view, there is little room for autonomousmachines being “artists” or “creative” in their own right.
Problems with anthropomorphizing AI
-
Unlike past disruptions, however, generative AI relies on training data made by people
Generative AI is different from past innovations
The output of creators is directly input into the technology, which mages generative AI different. And creates questions that don't have parallels to past innovations
-
Generative AI tools, at first glance, seem to fully automate artistic production—an impres-sion that mirrors past instances when traditionalists viewed new technologies as threatening“art itself.” In fact, these moments of technological change did not indicate the “end of art,” buthad much more complex effects, recasting the roles and practices of creators and shifting theaesthetics of contemporary media [3].
Examples of how new technology displaced traditional artists
- photography versus painting: replacing portrait painters
- music production: digital sampling and sound synthesis
- computer animation and digital photography
-
Epstein, Ziv, Hertzmann, Aaron, Herman, Laura, Mahari, Robert, Frank, Morgan R., Groh, Matthew, Schroeder, Hope et al. "Art and the science of generative AI: A deeper dive." ArXiv, (2023). Accessed July 21, 2023. https://doi.org/10.1126/science.adh4451.
Abstract
A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society. Understanding the impact of generative AI - and making policy decisions around it - requires new interdisciplinary scientific inquiry into culture, economics, law, algorithms, and the interaction of technology and creativity. We argue that generative AI is not the harbinger of art's demise, but rather is a new medium with its own distinct affordances. In this vein, we consider the impacts of this new medium on creators across four themes: aesthetics and culture, legal questions of ownership and credit, the future of creative work, and impacts on the contemporary media ecosystem. Across these themes, we highlight key research questions and directions to inform policy and beneficial uses of the technology.
-
-
newlaborforum.cuny.edu newlaborforum.cuny.edu
-
The original law stipulated a fifty–fifty split of the regular minimum wage: employers paid a subminimum cash wage that was half of the regular minimum wage; the other half was provided via customers’ tips. Even today, many customers do not know that a tip intended as a gratuity is often a wage-subsidy provided to the employer.
Effect of the 1966 Fair Labor Standards Act amendment
Introduction of the sub-minimum wage, which was originally at 50% but is now lower.
-
The Pullman Train Company, for instance, hired many formerly enslaved people and fought hard to keep paid wages very low. When investigated by the Railroad Commission of California in 1914, Pullman argued that they “paid adequate wages and did not expect their employees to exact tips”—an assertion unfounded by payroll data and strongly rejected by the commission.[3] Pullman, in fact, left it to the mostly white customers to determine the workers’ compensation through voluntary and unpredictable tips.
Tipping of service workers in post-slavery America
…allowing those served—whites—to determine the compensation of those serving—blacks.
-
The practice of tipping is traced to the Middle Ages and the European feudal system, when masters would sporadically give pocket change to their servants. The practice outlasted the feudal era, becoming a habit between customers—often upper class—and service workers. It also spread more generally. The modern custom of tipping was imported to the United States in the nineteenth century by upper-class American travelers to Europe. At the same time, an influx of European workers more acquainted with the practice helped to establish and spread the practice of tipping in the United States.
Origins of tipping
-
-
-
media.dltj.org media.dltj.org
-
which cost around 1.5 million euros
Average German household electricity consumption: 3,113 kilowatt-hours (2018, source)
Energy consumption for 250 households: 778,250 kilowatt-hours (or 778.25 mwh).
Wholesale electricity price in Germany is 102.4 euros per megawatt-hour (2023, source)
Yearly revenue: €79,692.80.
Payback period on €1.5M: about 19 years, not including maintenance.
-
4-Jul-2023 — Transcript is translated from German by YouTube.
Description (translated from German):
The problem of a Bavarian farmer: his hops are thirsty and the energy transition is progressing too slowly. The solution: a solar system that provides shade over the fields - and a sustainable second source of income.
-
-
media.dltj.org media.dltj.org
-
in that sense the internet is very decentralized that the control of the network is up to whoever owns the network the two percent where you said there's nobody in control there's a little bit of centralized control there's an organization called the internet Corporation for assigned names and numbers
ICANN as the single centralized point on the internet
-
if you think about sending letters through the US Postal Service how you've got an address on it so every packet that flows from the Netflix server to you has an address on it says this is going to Jenna it's going to the What's called the Internet Protocol address of your device think of all the the range of devices that are hooked up to the Internet it's totally amazing right every single one of them has one thing in common and that is they speak the IP protocol the Internet Protocol
IP address networking like postal addresses
-
the internet is a lot like that it's an interconnection of local roads local networks like the network in your house for example how does like all of the um networks in my house connect to all the city networks
Internetworking as a network of roads
-
Protocols are you up for one yeah knock knock who's there lettuce lettuce who let us go on a knock knock joke is an example of a protocol
Explaining protocols as a knock-knock joke
-
Nov 23, 2022
The internet is the most technically complex system humanity has ever built. Jim Kurose, Professor at UMass Amherst, has been challenged to explain the internet to 5 different people; a child, a teen, a college student, a grad student, and an expert.
-
-
www.theverge.com www.theverge.com
-
But Reader was also very much a product of Google’s infrastructure. Outside Google, there wouldn’t have been access to the company’s worldwide network of data centers, web crawlers, and excellent engineers. Reader existed and worked because of Google’s search stack, because of the work done by Blogger and Feedburner and others, and most of all, the work done by dozens of Google employees with 20 percent of their time to spare and some ideas about how to make Reader better. Sure, Google killed Reader. But nearly everyone I spoke to agreed that without Google, Reader could never have been as good as it was.
Reader could not have existed without Google infrastructure and talent
-
At its peak, Reader had just north of 30 million users, many of them using it every day. That’s a big number — by almost any scale other than Google’s. Google scale projects are about hundreds of millions and billions of users, and executives always seemed to regard Reader as a rounding error. Internally, lots of workers used and loved it, but the company’s leadership began to wonder whether Reader was ever going to hit Google scale. Almost nothing ever hits Google scale, which is why Google kills almost everything.
Google Scale is needed for a product to survive
-
One feature took off immediately, for power users and casual readers alike: a simple sharing system that let users subscribe to see someone else’s starred items or share their collection of subscriptions with other people. The Reader team eventually built comments, a Share With Note feature, and more.
Simple social sharing made the product take off
-
It wasn’t until the team launched a redesign in 2006 that added infinite scrolling, unread counts, and some better management tools for heavy readers that Reader took off.
Be prepared to throw out the first version
-
he bristles thinking about the fight and the fact that Google Reader is known as “an RSS reader” and not the ultra-versatile information machine it could have become. Names matter, and Reader told everyone that it was for reading when it could have been for so much more
Product names matter
It sets the perception for the boundaries of what something can be.
-
- Jun 2023
-
media.dltj.org media.dltj.org
-
think that the folks at the top of the org chart know more than we do and they often do have helpful holistic perspective about the company and the industry it operates in but all of the actual work of an organization all of its output happens at the bottom of the org chart and the teams at the edge of the organization leaders at the top may have a wide perspective but the edges are where an organization's detailed knowledge lives
Knowledge of the organization lives at the edges, not at the top
-
documenting the steps conversations tools and other activities required to complete a Core Business activity like processing an insurance claim or launching a new product these Maps were always surprising for the teams that created them and they often raised existential questions about employees roles in the organization the reason was that they revealed the actual structure of the organization the relationships the information Pathways that were responsible for the organization actually being able to get work done they learned the actual structure of the organization was an organic emergent phenomenon constantly shifting and changing based on the work to be done and often bearing little resemblance to the formal hierarchy of the organization
Mapping organizational processes
In the process of mapping these processes, the organization learns the real structure of the organization—beyond what is in the org chart.
-
paper published by Dr Ruth Ann heising has some insight for us
Ruthanne Huising (2019) Moving off the Map: How Knowledge of Organizational Operations Empowers and Alienates. Organization Science 30(5):1054-1075. https://doi.org/10.1287/orsc.2018.1277
-
Description
It was mid-afternoon on Friday, March 11, 2011 when the ground in Tōhoku began to shake. At Fukushima Daiichi nuclear power plant, it seemed like the shaking would never stop. Once it did, the reactors had automatically shut down, backup power had come online, and the operators were well on their way to having everything under control. And then the tsunami struck. They found themselves facing something beyond any worse-case scenario they imagined, and their response is a study in contrasts. We can learn a lot from the extremes they experienced about finding happiness and satisfaction at work.
-
-
media.dltj.org media.dltj.org
-
In the intro to thinking and its, meadows says everyone on everything in a system can act dutifully and rationally, yet all of these well meaning actions often lead up to a perfectly terrible result. This is what happened here. Every actor in the system borrowed a little bit of safety to optimize something else
Rational actions can lead to disastrous results
"All of life is a system"
-
n her fantastic book, thinking in systems, we are given great tools to pick apart the situation using systems thinking. Meadows introduces the stocks and flows.
Stocks and Flows
From Donella Meadows' Thinking in Systems. In this case, the "stock" is safety and "flows" are actions that make the reservoir of safety go up (more rigorous review of the aircraft design) or down (increased focus on the bottom line over engineering concerns).
-
one of the first I learned to spot was the Boeing 737. It's the best selling commercial aircraft of all time, it's everywhere. Once you know the trick, it's incredibly easy to identify in the air. It has no doors over the landing gear, if it's too big to be a regional, it's a 737. You can see how the gears swing out from the wheel shaped cavities in the center of the fuselage.
Boeing 737s don't have doors for the main landing gear
-
-
deliverypdf.ssrn.com deliverypdf.ssrn.com
-
On the advice of my lawyer, I respectfullydecline to answer on the basis of the FifthAmendment, which—according to theUnited States Supreme Court—protects ev-eryone, even innocent people, from the needto answer questions if the truth might beused to help create the misleading impres-sion that they were somehow involved in acrime that they did not commit.
Suggested wording
Exercise a Fifth Amendment right without using the word "incriminating"
-
Citation
Duane, James, The Right to Remain Silent: A New Answer to an Old Question (February 2, 2012). Criminal Justice, Vol. 25, No. 2, 2010, Available at SSRN: https://ssrn.com/abstract=1998119
-
-
media.dltj.org media.dltj.org
-
to what extent is the vendor community adopting bib frame in their systems architectures
On BIBFRAME adoption
Library of Congress and ShareVDE. The big guys are doing it, but will there become a split between the big libraries and small libraries over BIBFRAME? Will the diversity of metadata models inhibit interoperability?
-
10% more or less of academic libraries in the US use an open source system after all that time. And about either 17 or 14, I'd have the number in front of me for and to public libraries are using an open source I L S
Percentage of open source ILS in academic and public libraries
-
The industry has changed from being fragmented to consolidated
On the consolidation of the library automation field
-
-
crsreports.congress.gov crsreports.congress.govR475834
-
Such global and regional climate statementsdiffer from attributing specific extremeweather events to specific human influences,which scientists once considered infeasiblewith then-existing data and methods.18 Thischanged with the publication of an article in2003 proposing a method of establishing legalliability for climate change by determining how much human influence had changed theprobability of an undesirable event, such as flooding.
Extreme Event Attribution begins in 2003
Allen, Myles. "Liability for climate change." Nature 421, no. 6926 (2003): 891-892. Accessed June 2, 2023. https://doi.org/10.1038/421891a
-
The use of probability and risk interchangeably can causeconfusion. For example, two common methods toestimate the probability of occurrence of a naturalhazard event include the term risk in their names: theRisk Ratio (RR) and the Fraction of Attributable Risk(FAR). In this report, when referring to RR and FAR,the term risk refers to the climatic or meteorologicalprobability of an event of a specific magnitude, not tothe potential impact of the event on human systems.Apart from discussing these specific terms that use riskin their definitions, this report uses the term hazard asthe probability of a particular event occurring, such as ahurricane, and risk as the hazard combined with thevulnerability of humans and human systems to thathazard. In this sense, the risk is the likelihood ofadverse outcomes from the hazard. For example, thehazard of a major hurricane striking the Florida coasttoday and 100 years ago may be the same, but the riskis much higher today because of the growth in theamount of exposed infrastructure.
Definitions of probability/risk and hazard
- Hazard == probability of a particular event occurring
- Risk == hazard plus impact on humans and human systems
-
Climate change attribution is the study of whether, or to what degree, human influence may havecontributed to extreme climate or weather events.Advances in the science of climate attribution now allow scientists to make estimates of thehuman contribution to such events.
Definition of Climate change attribution
-
Is That Climate Change? The Science of Extreme Event Attribution
Congressional Research Service R47583 June 1, 2023
-
- Apr 2023
-
doctorow.medium.com doctorow.medium.com
-
Twitter is a neat illustration of the problem with benevolent dictatorships: they work well, but fail badly. Because they are property — not protocols — they can change hands, and overnight, you get a new, malevolent dictator who wants to retool the system for extraction, rather than collaboration.
Benevolent dictatorships: work well; fail badly
Twitter is the example listed here. But I wonder about benevolent dictatorships in open source. One example: does Linus have a sound succession plan for Linux? (Can such a succession plan even be tested and adjusted?)
-
-
-
LLMs predictably get more capable withincreasing investment, even withouttargeted innovation
Three variables guide capability: the amount of data ingested, the number of parameters, and the computing power used to train the model. This assumes there are no fundamental changes in the system design. This allows engineers to predict the rough capabilities before the effort and expense of building the model.
-
Discussion and Limitations
The author identifies these limitations to the current knowledge and predicting advancement:
- We should expect some of the prominent flaws of current LLMs to improve significantly
- There will be incentives to deploy LLMs as agents that flexibly pursue goals
- LLM developers have limited influence over what is developed
- LLMs are likely to produce a rapidly growing array of risks
- Negative results with LLMs can be difficult to interpret but point to areas of real weakness
- The science and scholarship around LLMs is especially immature
-
Brief interactions with LLMs are oftenmisleading
Instruction-following behavior aren't native to the models, and changes in prompt phrasing can have a dramatic impact on the output.
-
LLMs need not express the values of theircreators nor the values encoded in web text
How a model is pre-trained has a greater influence over the output than the text it was trained on. This opens the possibility for interventions in the form of "constitutional AI"—a set of norms and values as constraints in the pre-training stages. There remains the problem of how there are no reliable ways to guarantee behavior (see the fourth point).
-
Human performance on a task isn’t anupper bound on LLM performance
Models process far more information that any human can see. Also, "LLMs appear to be much better than humans at their pretraining task...and humans can teach LLMs to do some simple tasks more accurately than the humans themselves."
-
Experts are not yet able to interpret theinner workings of LLMs
The number of connections between tokens (billions) make a deterministic understanding of how an answer is derived impossible for humans to understand. There are techniques that, at some level, help with understanding models, but that understanding breaks down with later models.
-
There are no reliable techniques forsteering the behavior of LLMs
Fine-tuning and reinforced learning clearly effect the output of models, but they are not completely effective and the effect of such training cannot be predicted with sufficient certainty. This is the source of concern by many researchers for losing control over LLMs (presumably when LLMs are more tightly integrated with external actions.
-
LLMs often appear to learn and userepresentations of the outside world
The models show evidence of reasoning about abstract concepts, including color perception, adaptation based on what an author knows or believes, spatial layouts, and distinguishing misconceptions from facts. The paper notes that they conflicts with the "next-word-predictor" way that LLMs are explained.
-
Specific important behaviors in LLM tendto emerge unpredictably as a byproduct ofincreasing investment
Engineers can not (yet?) predict the capabilities that will emerge for a given quantity of data, model size, and computing power. Although they know if will be more capable, they don't know what those capabilities will be. The paper notes that surveys of researchers underestimated the capabilities of emerging models. Researchers were surveyed in 2021; the capabilities that were expected to be possible in 2024 were actually seen in 2022, and GPT-4's capabilities were not expected until 2025.
-
Bowman, Samuel R.. "Eight Things to Know about Large Language Models." arXiv, (2023). https://doi.org/https://arxiv.org/abs/2304.00612v1.
Abstract
The widespread public deployment of large language models (LLMs) in recent months has prompted a wave of new attention and engagement from advocates, policymakers, and scholars from many fields. This attention is a timely response to the many urgent questions that this technology raises, but it can sometimes miss important considerations. This paper surveys the evidence for eight potentially surprising such points: 1. LLMs predictably get more capable with increasing investment, even without targeted innovation. 2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment. 3. LLMs often appear to learn and use representations of the outside world. 4. There are no reliable techniques for steering the behavior of LLMs. 5. Experts are not yet able to interpret the inner workings of LLMs. 6. Human performance on a task isn't an upper bound on LLM performance. 7. LLMs need not express the values of their creators nor the values encoded in web text. 8. Brief interactions with LLMs are often misleading.
Found via: Taiwan's Gold Card draws startup founders, tech workers | Semafor
Tags
Annotators
URL
-
-
-
-
study co-authored by scientists at the Allen Institute for AI,
-
-
crsreports.congress.gov crsreports.congress.gov
-
Do AI Outputs Infringe Copyrights in Other Works?
-
Does the AI Training Process Infringe Copyright in Other Works?
-
Who Owns the Copyright to Generative AI Outputs?
-
Do AI Outputs Enjoy Copyright Protection?
-
Abstract
Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as Open AI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection as well as how training and using these programs might infringe copyrights in other works.
-
- Mar 2023
-
arxiv.org arxiv.org
-
On the other hand, our results are surprising in that they show we can steer models to avoid bias and dis-crimination by requesting an unbiased or non-discriminatory response in natural language. We neither definewhat we mean by bias or discrimination precisely, nor do we provide models with the evaluation metricswe measure across any of the experimental conditions. Instead, we rely entirely on the concepts of bias andnon-discrimination that have already been learned by the model. This is in contrast to classical machinelearning models used in automated decision making, where precise definitions of fairness must be describedin statistical terms, and algorithmic interventions are required to make models fair.
Reduction in bias comes without defining bias
-
Taken together, our experiments suggest that models with more than 22B parameters, and a sufficient amountof RLHF training, are indeed capable of a form of moral self-correction. In some ways, our findings areunsurprising. Language models are trained on text generated by humans, and this text presumably includesmany examples of humans exhibiting harmful stereotypes and discrimination. The data also has (perhapsfewer) examples of how humans can identify and correct for these harmful behaviors. The models can learnto do both.
22B parameters and sufficient RLHF training enable self-correction
-
Ganguli, Deep, Askell, Amanda, Schiefer, Nicholas, Liao, Thomas I., Lukošiūtė, Kamilė, Chen, Anna, Goldie, Anna et al. "The Capacity for Moral Self-Correction in Large Language Models." arXiv, (2023). https://doi.org/https://arxiv.org/abs/2302.07459v2.
Abstract
We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing harmful outputs -- if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.
-
-
www.quantamagazine.org www.quantamagazine.org
-
In an analysis of LLMs released last June, researchers at Anthropic looked at whether the models would show certain types of racial or social biases, not unlike those previously reported in non-LLM-based algorithms used to predict which former criminals are likely to commit another crime. That study was inspired by an apparent paradox tied directly to emergence: As models improve their performance when scaling up, they may also increase the likelihood of unpredictable phenomena, including those that could potentially lead to bias or harm.
"Larger models abrupty become more biased"
Since it isn't understood how LLMs work, this becomes an unquantifiable risk when using LLMs.
-
But the researchers quickly realized that a model’s complexity wasn’t the only driving factor. Some unexpected abilities could be coaxed out of smaller models with fewer parameters — or trained on smaller data sets — if the data was of sufficiently high quality. In addition, how a query was worded influenced the accuracy of the model’s response.
Influence of data quality and better prompts
Models with fewer parameters show better abilities when they trained with better data and had a quality prompt. Improvements to the prompt, including "chain-of-the-thought reasoning" where the model can explain how it reached an answer, improved the results of BIG-bench testing.
-
In 2020, Dyer and others at Google Research predicted that LLMs would have transformative effects — but what those effects would be remained an open question. So they asked the research community to provide examples of difficult and diverse tasks to chart the outer limits of what an LLM could do. This effort was called the Beyond the Imitation Game Benchmark (BIG-bench) project, riffing on the name of Alan Turing’s “imitation game,” a test for whether a computer could respond to questions in a convincingly human way. (This would later become known as the Turing test.) The group was especially interested in examples where LLMs suddenly attained new abilities that had been completely absent before.
Origins of "BIG-bench"
AI researchers were asked to create a catalog of tasks that would challenge LLMs. This benchmark is used to assess the effectiveness of model changes and scaling up of the number of parameters.
-
Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before.
Defining "zero-shot"
The ability for a model to solve a problem it hasn't seen before.
-
In 2017, researchers at Google Brain introduced a new kind of architecture called a transformer. While a recurrent network analyzes a sentence word by word, the transformer processes all the words at the same time. This means transformers can process big bodies of text in parallel.
Introduction of Google's "transformer" architecture
The introduction of the "transformer" architecture out of research from Google changed how LLMs were created. Instead of a "recurrent" approach—where sentences were processed word-by-word, the transformer process looks at large groups of words at the same time. That enabled parallel processing on the text.
-
Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.
Definition of Emergent Behavior
From smaller components, larger and more complex systems are built. In other fields, emergent behavior can be predicted. In LLMs, this ability has been unpredictable as of yet.
-
It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
Unexpected emergent abilities from large LLMs
Larger models can complete tasks that smaller models can't. An increase in complexity can also increase bias and inaccuracies. Researcher Jason Wei has cataloged 137 emergent abilities of large language models.
-
-
www.federalregister.gov www.federalregister.gov
-
For example, when an AI technology receives solely a prompt [27] from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user.
LLMs meet Copyright guidance
See comparison later in the paragraph to "commissioned artist" and the prompt "write a poem about copyright law in the style of William Shakespeare"
-
And in the current edition of the Compendium, the Office states that “to qualify as a work of `authorship' a work must be created by a human being” and that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”
Copyright Office's definition of authorship
From the Compendium of Copyright Office Practices, section 313.2.
-
The Court defined an “author” as “he to whom anything owes its origin; originator; maker; one who completes a work of science or literature.” [14] It repeatedly referred to such “authors” as human, describing authors as a class of “persons” [15] and a copyright as “the exclusive right of a man to the production of his own genius or intellect.”
Supreme Court definition of "Author"
From Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884)
-
In the Office's view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans. The Office's registration policies and regulations reflect statutory and judicial guidance on this issue.
Copyright Office's Statement of Human Creativity
-
-
idlewords.com idlewords.com
-
we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.
Machine learning like money laundering for bias
-
Techies will complain that trivial problems of life in the Bay Area are hard because they involve politics. But they should involve politics. Politics is the thing we do to keep ourselves from murdering each other. In a world where everyone uses computers and software, we need to exercise democratic control over that software.
Politics defy modeling, but that makes its reality so important
-
Companies that perform surveillance are attempting the same mental trick. They assert that we freely share our data in return for valuable services. But opting out of surveillance capitalism is like opting out of electricity, or cooked foods—you are free to do it in theory. In practice, it will upend your life.
Opting-out of surveillance capitalism?
-
We started out collecting this information by accident, as part of our project to automate everything, but soon realized that it had economic value. We could use it to make the process self-funding. And so mechanized surveillance has become the economic basis of the modern tech industry.
Surveillance Capitalism by Accident
-
First we will instrument, then we will analyze, then we will optimize. And you will thank us. But the real world is a stubborn place. It is complex in ways that resist abstraction and modeling. It notices and reacts to our attempts to affect it. Nor can we hope to examine it objectively from the outside, any more than we can step out of our own skin. The connected world we're building may resemble a computer system, but really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it. And it has the same old problems. Approaching the world as a software problem is a category error that has led us into some terrible habits of mind.
Reality actively resists modeling
-
this intellectual background can also lead to arrogance. People who excel at software design become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.
Risk of thinking software design experience is generally transferable
-
-
www.wired.com www.wired.com
-
the apocalypse they refer to is not some kind of sci-fi takeover like Skynet, or whatever those researchers thought had a 10 percent chance of happening. They’re not predicting sentient evil robots. Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force.
Not Skynet, but social disruption
-
-
www.technologyreview.com www.technologyreview.com
-
The required behavior of a large language model for something like search is very different than for something that’s just meant to be a playful chatbot. We need to figure out how we walk the line between all these different uses, creating something that’s useful for people across a range of contexts, where the desired behavior might really vary
Context of use matters when setting behavior
-
Every time we have a better model, we want to put it out and test it. We’re very optimistic that some targeted adversarial training can improve the situation with jailbreaking a lot. It’s not clear whether these problems will go away entirely, but we think we can make a lot of the jailbreaking a lot more difficult. Again, it’s not like we didn’t know that jailbreaking was possible before the release. I think it’s very difficult to really anticipate what the real safety problems are going to be with these systems once you’ve deployed them. So we are putting a lot of emphasis on monitoring what people are using the system for, seeing what happens, and then reacting to that. This is not to say that we shouldn’t proactively mitigate safety problems when we do anticipate them. But yeah, it is very hard to foresee everything that will actually happen when a system hits the real world.
Jailbreaks we’re anticipated, but the huge public uptake required more and faster effort to fix.
-
-
Since November, OpenAI has already updated ChatGPT several times. The researchers are using a technique called adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against each other: one chatbot plays the adversary and attacks another chatbot by generating text to force it to buck its usual constraints and produce unwanted responses. Successful attacks are added to ChatGPT’s training data in the hope that it learns to ignore them.
Adversarial training with ChatGPT
The bot gets pitted against itself to see if it can be broken. Since there is a randomization factor in each generated stream, there is a possibility that a chat sequence can get around the defenses.
-
-
www.ted.com www.ted.com
-
it will be important that we can all ramp up the capacity of extracting and removing CO2 from the atmosphere. Now, when thinking about the latter, so far I've spoken about technology, but we shouldn't forget that also, nature offers several solutions to extract carbon from the air, such as forests and oceans. And one element that is very important will be doubling down on these methods offered by nature, enhancing them and protecting them.
Importance of technological extraction methods to supplement natural extraction methods
-
there is not a lot of CO2 in the air. We're currently at around 420 ppm. That means one molecule out of 2,500 molecules in the air around us is CO2. That's not a lot. And that means to extract only one ton of CO2 from the air, we need to filter around two million cubic meters of air. That's about 800 Olympic swimming pools.
CO2 is rather low density in air, making it challenging to capture
-
This is Orca. This is the first worldwide commercial direct air capture and storage plant. It is in Iceland, and it is an industrial plant that extracts CO2 out of ambient air. We have operated now for more than one year. It costs more than 10 million dollars to build Orca. And its modules, those eight boxes that we call CO2 collectors, they are designed to extract a bit more than 10 tons of carbon dioxide from the air every day.
Direct capture of CO2 from the atmosphere at near-commercial scale
It uses a absorbent material to capture the CO2, then heats the material to 100°C to extract the CO2. Climeworks then mixes the CO2 with water before injecting into volcanic basalt rock formations where it solidifies into crystals after about 2 years.
-
From TED Countdown London 2022
Abstract
To restrain global warming, we know we need to drastically reduce pollution. The very next step after that: using both natural and technological solutions to trap as much excess carbon dioxide from the air as possible. Enter Orca, the world's first large-scale direct air capture and storage plant, built in Iceland by the team at Climeworks, led by climate entrepreneur Jan Wurzbacher. This plant is capable of removing 4,000 tons of carbon dioxide from the air each year. With affordability and scalability in mind, Wurzbacher shares his vision for what comes after Orca, the future of carbon removal tech -- and why these innovations are crucial to stop climate change.
-
-
deliverypdf.ssrn.com deliverypdf.ssrn.com
-
Such regulation is already being pursued in Europe, where the DigitalServices Act would require large platforms to interoperate, a requirementthat could easily be modified to include the Fediverse.
EU Digital Services Act interoperable requirement
-
A different concern with decentralized moderation is that it will leadto “filter bubbles” and “echo chambers” by which instance members willchoose to only interact with like-minded users.
Risk of filter bubbles
-
The benefit of decentralized moderation is that it can satisfy both thosethose that want to speak and those that don’t want to listen. By empoweringusers, through their choice of instance, to avoid content they find objection-able, the Fediverse operationalizes the principle that freedom of speech is notthe same as freedom of reach
Decentralized moderation satisfies freedom to speak and freedom not to listen
-
And if effective moderation turns out to requiremore infrastructure, that could lead to a greater consolidation of instances.This is what happened with email, which, in part due to the investmentsnecessary to counter spam, has become increasingly dominated by Googleand Microsoft.
Will consolidation of email providers point to consolidation of fediverse instances?
This is useful to note. The email protocols are open and one can chose to host their own email server that—at a protocol level—can interoperate with any other. At a practical level, though, there is now service requirements (spam filtering) and policy choices (only accepting mail from known good sending servers) that limit the reach of a new, bespoke mail server.
What would the equivalent of an email spam problem look like on the fediverse?
-
Gab is a useful case study in how decentralized social media can self-police. On the one hand, there was no way for Mastodon to expel Gabfrom the Fediverse. As Mastodon’s founder Eugen Rochko explained, “Youhave to understand it’s not actually possible to do anything platform-widebecause it’s decentralized. . . . I don’t have the control.”32 On the other hand,individual Mastodon instances could—and the most popular ones did—refuse to interact with the Gab instance, effectively cutting it off from mostof the network in a spontaneous, bottom-up process of instance-by-instancedecisionmaking. Ultimately, Gab was left almost entirely isolated, with morethan 99% of its users interacting only with other Gab users. Gab respondedby “defederating”: voluntarily cutting itself off from the remaining instancesthat were still willing to communicate with it.
Gab's attempt to join the fediverse was rejected by Mastodon admins
-
https://perma.cc/G82J-73WX
Dead link, unfortunately. The original post was deleted, and the archive is not available at perma.cc. Unfortunately it looks like it wasn't captured by wayback nor archive.today
-
Mastodon instances thus operate according to the principle of content-moderation subsidiarity: content-moderation standards are set by, and differacross, individual instances. Any given Mastodon instance may have rulesthat are far more restrictive than those of the major social media platforms.But the network as a whole is substantially more speech protective thanare any of the major social media platforms, since no user or content canbe permanently banned from the network and anyone is free to start aninstance that communicates both with the major Mastodon instances and theperipheral, shunned instances.
Content-moderation subsidiarity means no user is banned
A user banned from one instance can join another instance or start an instance of their own to federate with the network. This is more protective of speech rights than centralized networks.
It does make persistent harassment a threat, though. If the cost/effort of creating new instance after new instance is low enough, then a motivated actor can harass users and instances.
-
The Fediverse indeed does, because its decentralization is a matter ofarchitecture, not just policy. A subreddit moderator has control only insofar asReddit, a soon-to-be public company,22 permits that control. Because Redditcan moderate any piece of content—indeed, to ban a subreddit outright—no matter whether the subreddit moderator agrees, it is subject to publicpressure to do so. Perhaps the most famous example is Reddit’s banning ofthe controversial pro-Trump r/The_Donald subreddit several months beforethe 2020 election.
Reddit : Fediverse :: Decentralized-by-policy : Decentralized-by-architecture
Good point! It makes me think that fediverse instances can look to subreddit governance as models for their own governance structures.
-
When a user decides to moves instances, theymigrate their account data—including their blocked, muted, and followerusers lists and post history—and their followers will automatically refollowthem at their new account.
Not all content moves when migrating in Mastodon
This is not entirely true at the time of publication. Post history, for instance, does not move from one server to another. There is perhaps good reason for this...the new instance owner may not want to take on the liability of content that is automatically moved to their server.
-
content-moderation subsidiarity. Just asthe general principle of political subsidiarity holds that decisions should bemade at the lowest organizational level capable of making such decisions,15content-moderation subsidiarity devolves decisions to the individual in-stances that make up the overall network.
Content-moderation subsidiarity
In the fediverse, content moderation decisions are made at low organization levels—at the instance level—rather than on a global scale.
-
moderator’s trilemma. The first prong is that platform userbases arelarge and diverse. The second prong is that the platforms use centralized,top-down moderation policies and practices. The third prong is that theplatforms would like to avoid angering large swaths of their users. But thepast decade of content moderation controversies suggests that these threegoals can’t all be met. The large closed platforms are unwilling to shrink theiruser bases or give up control over content moderation, so they have tacitlyaccepted high levels of dissatisfaction with their moderation decisions. TheFediverse, by contrast, responds to the moderator’s trilemma by giving upon centralized moderation.
Moderator's Trilemma
Classic case of can't have it all:
- Large, diverse userbases
- Centralized, top-down moderation
- Happy users
The fediverse gives up #2, but only after giving up #1? Particularly the "large" part?
-
An early challenge to the open Internet came from the first generation of giant onlineservices providers like America Online, Compuserve, and Prodigy, which combineddial-up Internet access with an all-encompassing web portal that provided bothInternet content and messaging. But as Internet speeds increased and web browsingimproved, users discovered that the limits of these closed systems outweighed theirbenefits, and they faded into irrelevance by the 2000s.
AOL and the like as early examples of closed systems that were replaced by open environments
-
A core architectural building block of the Internet is the open protocol. A protocolis the rules that govern the transmission of data. The Internet consists of manysuch protocols, ranging from those that direct the physical transmission ofdata to those that govern the most common Internet applications, like emailor web browsing. Crucially, all these protocols are open, in that anyone canset up and operate a router, website, or email server without needing toregister with or get permission from a central authority.5 Open protocolswere key to the first phase of the Internet’s growth because they enabledunfettered access, removing barriers and bridging gaps between differentcommunities. This enabled and encouraged interactions between groupswith various interests and knowledge, resulting in immense creativity andidea-sharing.
Internet built on open protocols
The domain name registration isn't as much of an outlier as this author makes it out to be. DNS itself is an open protocol—any server can be queried by any client. The DNS registration process replaced manual host tables on each node, which quickly grew unscalable. There are similar notions of port registration, MIME-type registration, and other registries.
-
There is a limit to how heatedthe debates around email content moderation can be, because there’s anarchitectural limit to how much email moderation is possible. This raises theintriguing possibility of what social media, and its accompanying content-moderation issues, would look like if it, too, operated as a decentralizedprotocol.
Comparing email moderation and centralized moderation
-
-
www.washingtonpost.com www.washingtonpost.com
-
Roughly 700 prompt engineers now use PromptBase to sell prompts by commission for buyers who want, say, a custom script for an e-book or a personalized “motivational life coach.” The freelance site Fiverr offers more than 9,000 listings for AI artists; one seller offers to “draw your dreams into art” for $5.
Prompts are for sale
Freelancer and gig-economy work.
-
Some AI experts argue that these engineers only wield the illusion of control. No one knows how exactly these systems will respond, and the same prompt can yield dozens of conflicting answers — an indication that the models’ replies are based not on comprehension but on crudely imitating speech to resolve tasks they don’t understand.“Whatever is driving the models’ behavior in response to the prompts is not a deep linguistic understanding,” said Shane Steinert-Threlkeld, an assistant professor in linguistics who is studying natural language processing at the University of Washington. “They explicitly are just telling us what they think we want to hear or what we have already said. We’re the ones who are interpreting those outputs and attributing meaning to them.”He worried that the rise of prompt engineering would lead people to overestimate not just its technical rigor but also the reliability of the results anyone could get from a deceptive and ever-changing black box.“It’s not a science,” he said. “It’s ‘let’s poke the bear in different ways and see how it roars back.’”
Prompt engineering is not science
-
prompt engineer. His role involves creating and refining the text prompts people type into the AI in hopes of coaxing from it the optimal result. Unlike traditional coders, prompt engineers program in prose, sending commands written in plain text to the AI systems, which then do the actual work.
Summary of prompt engineer work
-
-
nymag.com nymag.com
-
“Now you’re getting one of the most important points,” Lemoine said. “Whether these things actually are people or not — I happen to think they are; I don’t think I can convince the people who don’t think they are — the whole point is you can’t tell the difference. So we are going to be habituating people to treat things that seem like people as if they’re not.”
When you can tell the difference between algorithm and humanity
Quote from Blake Lemoine, Google AI researcher fired after claiming that LaMDA, Google’s LLM, was sentient
-
But the road from language model to existential crisis is short indeed. Joseph Weizenbaum, who created ELIZA, the first chatbot, in 1966, spent most of the rest of his life regretting it. The technology, he wrote ten years later in Computer Power and Human Reason, raises questions that “at bottom … are about nothing less than man’s place in the universe.” The toys are fun, enchanting, and addicting, and that, he believed even 47 years ago, will be our ruin: “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
Creator of ELIZA in 1956 regrets doing so
Is conversing with machines a natural thing to do?
-
Bender and Manning’s biggest disagreement is over how meaning is created — the stuff of the octopus paper. Until recently, philosophers and linguists alike agreed with Bender’s take: Referents, actual things and ideas in the world, like coconuts and heartbreak, are needed to produce meaning. This refers to that. Manning now sees this idea as antiquated, the “sort of standard 20th-century philosophy-of-language position.” “I’m not going to say that’s completely invalid as a position in semantics, but it’s also a narrow position,” he told me. He advocates for “a broader sense of meaning.” In a recent paper, he proposed the term distributional semantics: “The meaning of a word is simply a description of the contexts in which it appears.” (When I asked Manning how he defines meaning, he said, “Honestly, I think that’s difficult.”)
Distributional Semantics
Christopher Manning, a computational linguist And director of the Stanford Artificial Intelligence Laboratory, is a proponent of this theory of linguistical meaning.
-
stochastic parrot (coinage Bender’s) is an entity “for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning.” In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team. The controversy around it solidified Bender’s position as the go-to linguist in arguing against AI boosterism.
Stochastic Parrot definition
The term was coined by Bender. It is “not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made”
Two co-authors on [[Google]]’s Ethical AI team lost their jobs after publication.
OpenAI CEO Sam Altman And others adopted the phrase and turned it benign.
-
OpenAI also contracted out what’s known as ghost labor: gig workers, including some in Kenya (a former British Empire state, where people speak Empire English) who make $2 an hour to read and tag the worst stuff imaginable — pedophilia, bestiality, you name it — so it can be weeded out. The filtering leads to its own issues. If you remove content with words about sex, you lose content of in-groups talking with one another about those things.
OpenAI’s use of human taggers
-
Our List of Dirty, Naughty, Obscene, and Otherwise Bad Word
-
Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet
LLMs as a model of reality, but not reality
There are limits to any model. In this case, the training data. What biases are implicitly in that model based on how it was selected and what it contained?
The paragraph goes on to list some biases: race, wealth, and “vast swamps”
-
It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.
GPT-3 doesn’t contain book content in its training?
“Copyright” can’t be an answer because everything post-1976 is copyrighted.
-
“Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
AI -> SALAMI
“[[Artificial Intelligence]]” as a phrase has a white supremacy background. Besides, who gets to define “intelligent” and against what metric?
-
“We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical Al development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
Synthetic human behavior as AI bright line
Quote from Bender
-
We go around assuming ours is a world in which speakers — people, creators of products, the products themselves — mean to say what they say and expect to live with the implications of their words. This is what philosopher of mind Daniel Dennett calls “the intentional stance.” But we’ve altered the world. We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
Intentional Stance
We (mostly) assume people mean what they say. What happens when we live in a world where we can no longer assume that.
-
The models are built on statistics. They work by looking for patterns in huge troves of text and then using those patterns to guess what the next word in a string of words should be. They’re great at mimicry and bad at facts. Why? LLMs, like the octopus, have no access to real-world, embodied referents. This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded.
Why LLMs are “great at mimicry and bad at facts”
-
Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender & Koller, ACL 2020)
-
Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other. Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances. Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do — with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.
The LLM Octopus Problem
The octopus has observed the conversations and starts to impersonate one of the participants. What happens when the octopus lacks context to hold up its end of the conversation.
-
-
- Feb 2023
-
pluralistic.net pluralistic.net
-
I have no doubt that robber barons would have engaged in zuckerbergian shenanigans if they could have – but here we run up against the stubborn inertness of atoms and the slippery liveliness of bits. Changing a railroad schedule to make direct connections with cities where you want to destroy a rival ferry business (or hell, laying track to those cities) is a slow proposition. Changing the content recommendation system at Facebook is something you do with a few mouse-clicks.
Difference between railroad monopolies and digital platform monopolies
Railroad monopolies were limited to physical space and time. Platform monopolies can easily change algorithms to shift user attention and content recommendations.
-
Enshittification, you'll recall, is the lifecycle of the online platform: first, the platform allocates surpluses to end-users; then, once users are locked in, those surpluses are taken away and given to business-customers. Once the advertisers, publishers, sellers, creators and performers are locked in, the surplus is clawed away from them and taken by the publishers.
Defining "enshittification"
The post continues to explain how this happened with [[Facebook]]:
1 First, gain a huge user base with network effects and lock users into the platform. 2. Spy on users to offer precision targeted advertising. Companies added beacons to websites to improve targeting. 3. Raise ad rates and decrease use of expensive anti-fraud measures.
-
-
deliverypdf.ssrn.com deliverypdf.ssrn.com
-
Rozenshtein, Alan Z., Moderating the Fediverse: Content Moderation on Distributed Social Media (November 23, 2022). 2 Journal of Free Speech Law (2023, Forthcoming), Available at SSRN: https://ssrn.com/abstract=4213674 or http://dx.doi.org/10.2139/ssrn.4213674
Found via Nathan Schneider
Abstract
Current approaches to content moderation generally assume the continued dominance of “walled gardens”: social media platforms that control who can use their services and how. But an emerging form of decentralized social media—the "Fediverse"—offers an alternative model, one more akin to how email works and that avoids many of the pitfalls of centralized moderation. This essay, which builds on an emerging literature around decentralized social media, seeks to give an overview of the Fediverse, its benefits and drawbacks, and how government action can influence and encourage its development.
Part I describes the Fediverse and how it works, beginning with a general description of open versus closed protocols and then proceeding to a description of the current Fediverse ecosystem, focusing on its major protocols and applications. Part II looks at the specific issue of content moderation on the Fediverse, using Mastodon, a Twitter-like microblogging service, as a case study to draw out the advantages and disadvantages of the federated content-moderation approach as compared to the current dominant closed-platform model. Part III considers how policymakers can encourage the Fediverse, whether through direct regulation, antitrust enforcement, or liability shields.
-
-
nymag.com nymag.com
-
In ChatGPT and Bing’s conversations about themselves, you see evidence of the corpus everywhere: the sci-fi, the news articles with boilerplate paragraphs about machine uprisings, the papers about what AI researchers are working on. You also see evidence of the more rigorous coverage and criticism of OpenAI, etc., that has elucidated possible harms that could result from the careless deployment of AI tools.
ChatGPT/Bing's self-reflection comes from the corpus of discussions about AI that it has ingested
-
Tay is from a different generation of AI and bears little technical resemblance to something like the new Bing. Still, it was orders of magnitude more sophisticated, and less technologically comprehensible to its users, than something like ELIZA.
Microsoft's Tay project
-
Attempting to thwart a simple rules-based chatbot is mostly a matter of discovering dead ends and mapping the machine; the new generation of chatbots just keeps on generating. Per Weizenbaum, however, that should be an invitation to bring them back over the threshold, as even lay people eventually did with bots like ELIZA, no programming knowledge required. In other words, what’s happening in these encounters is weird and hard to explain — but, also, similarly, with a little distance, it makes sense. It’s intuitive.
Thwarting Eliza versus thwarting Sydney
-
More interesting or alarming or hilarious, depending on the interlocutor, is its propensity to challenge or even chastise its users, and to answer, in often emotional language, questions about itself.
Examples of Bing/ChatGPT/Sydney gaslighting users
- Being very emphatic about the current year being 2022 instead of 2023
- How Sydney spied on its developers
- How Sydney expressed devotion to the user and expressed a desire to break up a marriage
-
In his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the computer scientist Joseph Weizenbaum observed some interesting tendencies in his fellow humans. In one now-famous anecdote, he described his secretary’s early interactions with his program ELIZA, a proto-chatbot he created in 1966.
Description of Joseph Weizenbaum's ELIZA program
When rule-based artificial intelligence was the state-of-the-art.
-
-
storage.courtlistener.com storage.courtlistener.com
-
Upon information and belief, Stability AI has copiedmore than 12 million photographs from Getty Images’ collection, along with the associatedcaptions and metadata, without permission from or compensation to Getty Images, as part of itsefforts to build a competing business. As part of its unlawful scheme, Stability AI has removedor altered Getty Images’ copyright management information, provided false copyrightmanagement information, and infringed Getty Images’ famous trademarks.
Grounds for complaint
- Removed/altered Getty's "copyright management information" (presumably the visible watermark plus attribution, perhaps some embedded steganography data as well)
- False copyright information (that there is no copyright on AI-generated images?)
- Infringing on trademark (Stable Diffusion creates a watermark that resembles Getty Images')
-
Making matters worse, Stability AI has caused the Stable Diffusion model toincorporate a modified version of the Getty Images’ watermark to bizarre or grotesque syntheticimagery that tarnishes Getty Images’ hard-earned reputation, such as the image below
Very similar watermark implies Getty Images affiliation
Two points in this section of the complaint:
- in paragraph 58, Getty Images says that "Stability AI has knowingly removed" the watermark from some of the images, but it does not provide evidence of that in the complaint.
- in paragraph 59, the AI-generated image created a watermark that strongly resembles the Getty Images watermark, and this watermark is on an image that Getty would not have in its collection. This would seem to be the trademark violation complaint.
-
In many cases, and as discussed further below, the output delivered by StabilityAI includes a modified version of a Getty Images watermark, underscoring the clear linkbetween the copyrighted images that Stability AI copied without permission and the output itsmodel delivers.
Modified watermark in the output underscores a clear link
The example embedded in the complaint is of two soccer players with their arms outstretched and the Getty Images watermark is clearly visible. In the AI-generated image, there are two soccer players in weird positions; the team logos and jersey colors match.
-
Getty Images’ content is extremely valuable to the datasets used to train StableDiffusion. Getty Images’ websites provide access to millions of high quality images and a vastarray of subject matter. High quality images such as those offered by Getty Images on itswebsites are more useful for training an AI model such as Stable Diffusion than low qualityimages because they contain more detail or data about the image that can be copied. By contrast,a low quality image, such as one that has been compressed and posted as a small thumbnail on atypical social media site, is less valuable because it only provides a rough, poor qualityframework of the underlying image and may not be accompanied by detailed text or other usefulmetadata.
Getty Images' content is well suited for AI training
High quality images and detailed descriptions.
-
Upon information and belief, Stability AI then created yet additional copies withvisual noise added, while retaining encoded copies of the original images without noise forcomparison to help train its model.
Copies with added noise
-
Stable Diffusion was trained on 5 billion image-text pairs from datasets preparedby non-party LAION, a German entity that works in conjunction with and is sponsored byStability AI. Upon information and belief, Stability AI provided LAION with both funding andsignificant computing resources to produce its datasets in furtherance of Stability AI’s infringingscheme.
Role of LAION
LAION, from their website: a non-profit organization providing datasets, tools and models to liberate machine learning research. By doing so, we encourage open public education and a more environment-friendly use of resources by reusing existing datasets and models.
Wikipedia: The Large-scale Artificial Intelligence Open Network (LAION) is a German non-profit with a stated goal "to make large-scale machine learning models, datasets and related code available to the general public". It is best known for releasing a number of large datasets of images and captions scraped from the web which have been used to train a number of high-profile text-to-image models, including Stable Diffusion and Imagen.
-
Stability AI created and maintains a model called Stable Diffusion. Uponinformation and belief, Stability AI utilizes the following steps from input to output
Description of the LLM training process
Specifically, the use of altering images to add "noise" and training the LLM to remove the noise associated with text from the description.
-
The Getty Images websites from which Stability AI copiedimages without permission is subject to express terms and conditions of use which, among otherthings, expressly prohibit, inter alia: (i) downloading, copying or re-transmitting any or all of thewebsite or its contents without a license; and (ii) using any data mining, robots or similar datagathering or extraction methods.
Terms of service violation
Getty Images' terms of service do not permit data mining.
-
Getty Images has registered its copyright of the Database with the United StatesCopyright Office. The copyright registration number is TXu002346096.
Getty database copyright
Registration TXU002346096 with the title: Getty Images Asset Data Records November 28, 2022
-
Stability AI has copied at least 12 million copyrighted images from Getty Images’websites, along with associated text and metadata, in order to train its Stable Diffusion model.
12 million claimed; 7,316 listed
Attachment A has an itemized list of images with 7,316 lines
From paragraph 24:
For purposes of the copyright infringement claims set forth herein and establishing the unlawful nature of Stability AI’s conduct, Getty Images has selected 7,216 examples from the millions of images that Stability AI copied without permission and used to train one or more versions of Stable Diffusion. The copyrights for each of these images (as well as for many other images) have been registered with the U.S. Copyright Office. A list of these works, together with their copyright registration numbers, is attached as Exhibit A.
-
Often, the output generated by Stable Diffusion contains a modified version of aGetty Images watermark, creating confusion as to the source of the images and falsely implyingan association with Getty Images.
Trademark infringement on the Getty watermark
Although the watermark in the AI-generated images isn't exactly Getty's, it is recognizable as such.
-
Getty Images’ visual assets are highly desirable for use in connection withartificial intelligence and machine learning because of their high quality, and because they areaccompanied by content-specific, detailed captions and rich metadata.
Descriptive information makes the Getty collection more valuable for LLM training
-
COMPLAINT filed with Jury Demand against Stability AI, Inc. Getty Images (US), Inc. v. Stability AI, Inc. (1:23-cv-00135) District Court, D. Delaware
https://www.courtlistener.com/docket/66788385/getty-images-us-inc-v-stability-ai-inc/
-
-
static1.squarespace.com static1.squarespace.com
-
Staff and studentsare rarely in a position to understand the extent to which data is being used, nor are they able todetermine the extent to which automated decision-making is leveraged in the curation oramplification of content.
Is this a data (or privacy) literacy problem? A lack of regulation by experts in this field?
-
-
arxiv.org arxiv.org
-
Certainly it would not be possible if theLLM were doing nothing more than cutting-and-pasting fragments of text from its training setand assembling them into a response. But this isnot what an LLM does. Rather, an LLM mod-els a distribution that is unimaginably complex,and allows users and applications to sample fromthat distribution.
LLMs are not cut and paste; the matrix of token-following-token probabilities are "unimaginably complex"
I wonder how this fact will work its way into the LLM copyright cases that have been filed. Is this enough to make a the LLM output a "derivative work"?
-
Including a prompt prefix in the chain-of-thought style encourages the model to generatefollow-on sequences in the same style, which isto say comprising a series of explicit reasoningsteps that lead to the final answer. This abilityto learn a general pattern from a few examples ina prompt prefix, and to complete sequences in away that conforms to that pattern, is sometimescalled in-context learning or few-shot prompt-ing. Chain-of-thought prompting showcases thisemergent property of large language model at itsmost striking.
Emulating deductive reasoning with prompt engineering
I think "emulating deductive reasoning" is the correct shorthand here.
-
Human language userscan consult the world to settle their disagree-ments and update their beliefs. They can, so tospeak, “triangulate” on objective reality. In iso-lation, an LLM is not the sort of thing that cando this, but in application, LLMs are embeddedin larger systems. What if an LLM is embeddedin a system capable of interacting with a worldexternal to itself? What if the system in ques-tion is embodied, either physically in a robot orvirtually in an avatar?
Humans reach an objective reality; can an LLM embedded in a system interacting with the external world also find an objective reality?
-
Vision-language mod-els (VLMs) such as VilBERT (Lu et al., 2019)and Flamingo (Alayrac et al., 2022), for exam-ple, combine a language model with an imageencoder, and are trained on a multi-modal cor-pus of text-image pairs. This enables them topredict how a given sequence of words will con-tinue in the context of a given image
Definition of "vision-language models"
-
The real issue here is that, whatever emergentproperties it has, the LLM itself has no accessto any external reality against which its wordsmight be measured, nor the means to apply anyother external criteria of truth, such as agree-ment with other language-users.
The LLM cannot see beyond its training to measure its sense of "truth"
One can embed the LLM in a larger system that might have capabilities that look into the outer world.
-
Shanahan, Murray. "Talking About Large Language Models." arXiv, (2022). https://doi.org/10.48550/arXiv.2212.03551.
Found via Simon Wilson.
Abstract
Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.
-
A bare-bones LLM doesn’t “really” know any-thing because all it does, at a fundamental level,is sequence prediction. Sometimes a predictedsequence takes the form of a proposition. But thespecial relationship propositional sequences haveto truth is apparent only to the humans who areasking questions, or to those who provided thedata the model was trained on. Sequences ofwords with a propositional form are not specialto the model itself in the way they are to us. Themodel itself has no notion of truth or falsehood,properly speaking, because it lacks the means toexercise these concepts in anything like the waywe do.
An LLM relies on statistical probability to construct a word sequence without regard to truth and falsehood
The LLM's motivation is not truth or falsehood; it has no motivation. Humans anthropomorphize motivation and assign truth or belief to the generated statements. "Knowing" is the wrong word to ascribe to the LLM's capabilities.
-
Turning an LLM into a question-answering sys-tem by a) embedding it in a larger system, andb) using prompt engineering to elicit the requiredbehaviour exemplifies a pattern found in muchcontemporary work. In a similar fashion, LLMscan be used not only for question-answering,but also to summarise news articles, to generatescreenplays, to solve logic puzzles, and to trans-late between languages, among other things.There are two important takeaways here. First,the basic function of a large language model,namely to generate statistically likely continua-tions of word sequences, is extraordinarily versa-tile. Second, notwithstanding this versatility, atthe heart of every such application is a model do-ing just that one thing: generating statisticallylikely continuations of word sequences.
LLM characteristics that drive their usefulness
-
Dialogue is just one application of LLMs thatcan be facilitated by the judicious use of promptprefixes. In a similar way, LLMs can be adaptedto perform numerous tasks without further train-ing (Brown et al., 2020). This has led to a wholenew category of AI research, namely prompt en-gineering, which will remain relevant until wehave better models of the relationship betweenwhat we say and what we want.
Prompt engineering
-
In the background, the LLM is invisiblyprompted with a prefix along the following lines.
Pre-work to make the LLM conversational
-
However, in the case of LLMs, such istheir power, things can get a little blurry. Whenan LLM can be made to improve its performanceon reasoning tasks simply by being told to “thinkstep by step” (Kojima et al., 2022) (to pick justone remarkable discovery), the temptation to seeit as having human-like characteristics is almostoverwhelming.
Intentional stance meets uncanny valley
Intentional stance language becomes problematic when we can no longer distinguish the inanimate object's behavior from human behavior.
-