- Jul 2022
herman.bearblog.dev
The most common way is to log the number of upvotes (or likes/downvotes/angry-faces/retweets/poop-emojis/etc.) and algorithmically determine the quality of a post by consensus.
When thinking about algorithmic feeds, one probably ought not to include simple likes/favorites/bookmarks, as they're such low-hanging fruit. Better indicators are interactions that take time, effort, and work to post.
Using various forms of webmention as indicators could be interesting, since one can parse responses and weight an actual comment as worth more than a dozen "likes", for example.
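A loose sketch of that weighting idea, assuming purely illustrative interaction kinds and weight values (none of these numbers come from the linked post): score a post so that a single parsed comment or webmention outweighs a pile of one-click reactions.

```python
# Illustrative effort weights: assumptions for this sketch, not values
# from the linked post. A written reply outweighs a dozen bare likes.
EFFORT_WEIGHTS = {
    "like": 1,      # one click
    "repost": 3,    # cheap amplification
    "reply": 15,    # someone took the time to write a comment
    "mention": 25,  # a webmention composed on the responder's own site
}

def effort_score(interactions):
    """Sum effort-weighted interaction counts for a post.

    `interactions` maps an interaction kind to its count,
    e.g. {"like": 40, "reply": 3}.
    """
    return sum(EFFORT_WEIGHTS.get(kind, 0) * n
               for kind, n in interactions.items())

print(effort_score({"like": 40}))                # 40
print(effort_score({"reply": 3, "mention": 1}))  # 70: fewer, weightier responses win
```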
Curating the people who respond, as well as the responses themselves, could be useful.
Applying a time window to the curation of both people and curators, so that only recent behavior counts, could be a useful metric.
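A minimal sketch of that windowing, assuming a 90-day window and per-response quality scores (both choices are illustrative, not from the source): old activity ages out, so only recent, sustained quality counts toward someone's standing.

```python
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(days=90)  # assumed window length, purely illustrative

def windowed_reputation(responses, now=None):
    """Average quality of someone's responses inside the recent window.

    `responses` is a list of (timestamp, quality) pairs; timestamps
    should be timezone-aware, and quality is however one scores an
    individual response (e.g. the effort weights sketched above).
    """
    now = now or datetime.now(timezone.utc)
    recent = [quality for ts, quality in responses if now - ts <= WINDOW]
    return sum(recent) / len(recent) if recent else 0.0

# Example: two recent responses count; a year-old one has aged out.
now = datetime.now(timezone.utc)
history = [
    (now - timedelta(days=5), 70),
    (now - timedelta(days=30), 40),
    (now - timedelta(days=400), 95),  # outside the window, ignored
]
print(windowed_reputation(history, now))  # 55.0
```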
Attempting to be "democratic" in these processes may often lead to the Harry and Mary Beercan effect and the gaming issues seen in spaces like Digg or Twitter, with dramatic consequences for the broader readership and community. Democracy in these spaces is more likely to get you cat videos and vitriol with a soupçon of listicles and clickbait.
- Jan 2022
districts.neocities.org
- Mar 2021
twitter.com
Darren Dahly [@statsepi]. (2019, September 4). It seems appropriate to do a thread on our recent session about the use of Twitter by statisticians. https://t.co/eFwLDuXnOU [Tweet]. Twitter. https://twitter.com/statsepi/status/1169313702715281408