🔹 **Rich World Knowledge:** Leads all current open models, trailing only Gemini-3.1-Pro.
This gives only a relative ranking of the model's knowledge: ahead of every current open model, behind only Gemini-3.1-Pro. It is a relative positioning, not an absolute performance figure. The phrasing implies that DeepSeek-V4-Pro's breadth of knowledge approaches that of the top closed-source models, which matters for applications requiring broad knowledge. However, without concrete evaluation metrics and scores, the size of that gap is hard to quantify.
thought to be a potential approach to create a better consensus in a world where multiple truths sometimes seem to co-exist. Today, each side argues only their “truth” is true, and the other is a lie, which has made it difficult to find agreement. The bridging algorithm looks for areas where both sides agree. Ideally, platforms would then reward behavior that “bridges divides” rather than reward posts that create further division.
Ranking comments that multiple groups can agree on more highly.
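A minimal sketch of such a bridging rank, assuming we already have per-group approval rates (the group labels and numbers below are invented for illustration). The harmonic mean rewards posts that every group likes at least somewhat and punishes posts that only one faction loves:

```python
from statistics import harmonic_mean

def bridging_score(approval_by_group):
    """Score a post by cross-group agreement rather than raw popularity.

    `approval_by_group` maps a group label to the fraction of that
    group's voters who approved the post.
    """
    rates = list(approval_by_group.values())
    if not rates or min(rates) == 0:
        return 0.0
    return harmonic_mean(rates)

# A post loved by one side but hated by the other ranks below a post
# that both sides merely like:
polarizing = bridging_score({"left": 0.95, "right": 0.05})
bridging = bridging_score({"left": 0.60, "right": 0.55})
```

Real bridging systems (such as Community Notes) use more elaborate models, but the design choice is the same: the ranking signal is the floor of cross-group approval, not its total.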
The most common way is to log the number of upvotes (or likes/downvotes/angry-faces/retweets/poop-emojis/etc) and algorithmically determine the quality of a post by consensus.
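One widely used refinement of raw vote counts is to rank by the lower bound of the Wilson score confidence interval for the true upvote fraction (the approach popularized for comment sorting, e.g. Reddit's "best" sort); a minimal sketch:

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval for the true upvote rate.

    Ranking by this bound, rather than raw counts or the naive ratio,
    keeps a post with 3 up / 0 down from outranking one with
    900 up / 100 down. z=1.96 corresponds to 95% confidence.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - spread) / denom
```

The bound is deliberately pessimistic for small samples, which is exactly the property a "quality by consensus" ranking needs.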
When thinking about algorithmic feeds, one probably ought not to count simple likes/favorites/bookmarks for much, as they're such low-hanging fruit. Better indicators are interactions that take time, effort, and work to post.
Using various forms of webmention as indicators could be interesting as one can parse responses and make an actual comment worth more than a dozen "likes", for example.
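As a sketch of the two notes above, interaction types could be weighted by the effort they imply. The weights here are purely illustrative assumptions, chosen so that a parsed webmention reply is worth more than a dozen one-click likes:

```python
# Hypothetical effort weights; an interaction that takes work counts more.
INTERACTION_WEIGHTS = {
    "like": 1,
    "bookmark": 1,
    "repost": 2,
    "reply": 5,
    "webmention_comment": 13,  # a parsed webmention reply, per the note above
}

def post_score(interactions):
    """Sum a post's interactions, weighting effortful responses
    over one-click acknowledgements. Unknown types score zero."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in interactions.items())
```

Under these assumed weights, `post_score({"webmention_comment": 1})` outranks `post_score({"like": 12})`.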
Curating people (who respond) as well as curating the responses themselves could be useful.
Time windowing curation of people and curators could be a useful metric.
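One way to sketch that time windowing, assuming an exponential decay with an invented 30-day half-life, so a curator's weight reflects recent rather than historical endorsements:

```python
import math

HALF_LIFE_DAYS = 30.0  # assumed window; tune per community

def curator_weight(days_since_each_endorsement):
    """Exponentially decay each endorsement so a curator's influence
    tracks recent activity: an endorsement from today counts 1.0,
    one from a half-life ago counts 0.5, and so on."""
    decay = math.log(2) / HALF_LIFE_DAYS
    return sum(math.exp(-decay * d) for d in days_since_each_endorsement)
```

A once-prolific but now-dormant curator thus fades from the ranking instead of dominating it indefinitely.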
Attempting to be "democratic" in these processes can easily produce the Harry and Mary Beercan effect and the gaming issues seen in spaces like Digg or Twitter, with dramatic consequences for the broader readership and community. Democracy in these spaces is more likely to get you cat videos and vitriol with a soupçon of listicles and clickbait.
NZ, Vietnam top list of countries with best responses to the pandemic. (2021, January 27). ABC News. https://www.abc.net.au/news/2021-01-28/new-zealand-tops-list-as-country-with-best-covid-response/13095758
Holcombe, A. (2020, September 30). Conventional journal rankings—Fight them! Medium. https://medium.com/@ceptional/conventional-journal-rankings-fight-them-9c6db600b0dd
(((Howard Forman))) on Twitter. (n.d.). Twitter. Retrieved September 7, 2020, from https://twitter.com/thehowie/status/1302722027665666048
Spiegelhalter, D. (2020, August 2). Why has the UK done so badly on Covid-19? There are still no simple answers. The Guardian. Retrieved August 4, 2020, from https://www.theguardian.com/commentisfree/2020/aug/02/uk-covid-19-excess-deaths
COVID-19 Regional Safety Assessment | DKG. (n.d.). DKV. Retrieved June 17, 2020, from https://www.dkv.global/covid-safety-assessment-200-regions
Routley, N. (2019, August 7). Ranking the Top 100 Websites in the World. Visual Capitalist. https://www.visualcapitalist.com/ranking-the-top-100-websites-in-the-world/
Gurfinkel, A. J., & Rikvold, P. A. (2020). A Current-Flow Centrality With Adjustable Reach. ArXiv:2005.14356 [Physics]. http://arxiv.org/abs/2005.14356
Mariani, M. S., & Lü, L. (2020). Network-based ranking in social systems: Three challenges. Journal of Physics: Complexity, 1(1), 011001. https://doi.org/10.1088/2632-072X/ab8a61
The new and improved Times Higher Education (THE) Impact Rankings 2020 were published this week with as much online fanfare as THE could muster. Unfortunately, they are not improved enough.
My sketchnotes here: https://photos.app.goo.gl/Dj47SHE2ehzdEMM17
“There are limits to what universities can do and the SDGs don’t capture everything about the impact of our research.”
Plus, the measurement is based on journal articles from commercial databases (e.g. Scopus), whose indexes have a language bias. Meanwhile, we lack national-level scientific databases that could supply datasets for these rankings to process.
All rankings measure the following components, all of which carry some level of bias:
These are the rankings that increasingly drive institutional behaviour – and competition between them.
And not to mention that it drives external socio-economic settings, e.g. the labour market and the "top university" label in parents' minds.
As a result, THE clings to a methodology that takes insufficient account of the false precision and the uncertainties introduced by the proxy nature of the indicators used to "measure" actual performance, yet still claims to be able to distinguish universities on scores that differ by 0.1%. Claiming this level of precision is laughable. It is to universities' discredit that they go along with it.
For less economically stable countries (e.g. Indonesia), many indicators are heavily shaped by national-level conditions (regulations, funding), geographical settings, and the sheer number of high-school graduates entering undergraduate degrees. Yet the rankings are really only relevant to graduate research.
This page, Top Tools for Learning, is updated every year. It lists and briefly describes the top tech tools for adult learning. On the current (2018) list, the top three are YouTube, PowerPoint, and Google Search. The list continues through the top 200, with links to each tool. The purpose of this page is simply to list them; tutorials, etc., are not offered. Rating: 4/5
New Media Consortium Horizon Report This page provides a link to the annual Horizon Report. The report becomes available late in the year. The report identifies emerging technologies that are likely to be influential and describes the timeline and prospective impact for each. Unlike the link to top learning tools that anyone can use, the technologies listed here may be beyond the ability of the average trainer to implement. While it is informative and perhaps a good idea to stay abreast of these listings, it is not necessarily something that the average instructional designer can apply. Rating: 3/5
Among academics it is considered good form to downplay the value of these rankings, but those who rank near the top (like the two federal institutes of technology in Zurich and Lausanne, and indeed practically all of the large Swiss universities) are only too happy, despite all the reservations, to milk their own good showing for all it is worth.
As Rennie and Flanagin (1994) remind us, there is no standard method for determining order, nor any universalistic criteria for conferring authorship status:
bibliography on authorship practices
alphabetization through weighted listing to reverse seniority (e.g., Spiegel & Keith-Spiegel, 1970; Riesenberg & Lundberg, 1990).
bibliography on authorship ranking and practices