At 50 million tokens, the design space for AI applications changes fundamentally.
The article notes that a 50-million-token context would fundamentally change the design space for AI applications. This is a forward-looking data point about the long-term potential of SubQ technology: although the current product supports only 1 million tokens, the architecture is already designed to lay the groundwork for much larger-scale applications.
Parameters are estimated by unweighted least squares. Time t is measured in years since the first observation in each dataset.
The study estimates parameters by least squares, with time measured in years from the first observation in each dataset. This is standard statistical practice, but the unweighted treatment may understate the importance of recent data points, which typically represent more advanced model capabilities. The choice of time unit also affects how intuitively the growth rates can be interpreted.
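The estimation setup described above can be sketched in a few lines. The data and the model form below are assumptions purely for illustration (the excerpt does not specify the study's model); what the sketch shows is the two stated choices: t measured in years since the first observation, and an unweighted least-squares fit.

```python
import numpy as np

# Hypothetical observation dates (fractional years) and measurements;
# real values would come from the dataset under study.
dates = np.array([2019.2, 2020.1, 2021.5, 2022.8, 2024.0])
values = np.array([1.1, 2.0, 3.4, 4.9, 6.2])

# Time t in years since the first observation in this dataset
t = dates - dates.min()

# Unweighted least squares: every observation counts equally.
# A simple linear trend is fitted here for illustration only.
slope, intercept = np.polyfit(t, values, deg=1)
```

A weighted variant would pass `w=` to `np.polyfit`, e.g. to upweight recent observations, which is exactly the alternative the note above raises.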
Without any architectural modification, MinerU2.5-Pro achieves 95.69 on OmniDocBench v1.6, improving over the same-architecture baseline by 2.71 points and surpassing all existing methods including models with over 200× more parameters.
Most people assume that a larger model architecture necessarily brings better performance, but the authors, keeping the 1.2B-parameter architecture unchanged and relying only on data engineering and training-strategy optimization, surpass existing models with over 200× more parameters. This challenges the industry consensus that "bigger is better" and demonstrates the importance of data quality.
Note how the features are all on the same relative scale: the relative spacing between each feature's values has been maintained.
However, this doesn't mean that Min-Max scaling is not useful at all! A popular application is image processing, where pixel intensities have to be normalized to fit within a certain range (e.g., 0 to 255 for the RGB color range). Also, typical neural network algorithms require input data on a 0–1 scale.
Use min-max scaling for image processing & neural networks.
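Both uses can be sketched with one small helper. This is a minimal, hypothetical implementation for illustration (libraries such as scikit-learn provide an equivalent `MinMaxScaler`); the same linear transform maps a feature onto [0, 1] for a neural network or onto [0, 255] for pixel intensities.

```python
import numpy as np

def min_max_scale(x, new_min=0.0, new_max=1.0):
    """Linearly rescale x so its values span [new_min, new_max],
    preserving the relative spacing between values."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min) * (new_max - new_min) + new_min

feature = np.array([10.0, 20.0, 30.0, 50.0])
print(min_max_scale(feature))           # 0-1 scale for a neural network input
print(min_max_scale(feature, 0, 255))   # 0-255 scale for RGB pixel intensities
```

Because the transform is linear, a value halfway between the minimum and maximum stays halfway between the new bounds, which is exactly the "relative spacing maintained" property noted above.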