[Hacker News Repost] InternLM2
-
Title: InternLM2
Text:
Url: https://arxiv.org/abs/2403.17297
Title: InternLM2 Technical Report
Authors: Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, Xiaoyi Dong, Haodong Duan, Qi Fan, Zhaoye Fei, Yang Gao, Jiaye Ge, Chenya Gu, Yuzhe Gu, Tao Gui, Aijia Guo, Qipeng Guo, Conghui He, Yingfan Hu, Ting Huang, Tao Jiang, Penglong Jiao, Zhenjiang Jin, Zhikai Lei, Jiaxing Li, Jingwen Li, Linyang Li, Shuaibin Li, Wei Li, Yining Li, Hongwei Liu, Jiangning Liu, Jiawei Hong, Kaiwen Liu, Kuikun Liu, Xiaoran Liu, Chengqi Lv, Haijun Lv, Kai Lv, Li Ma, Runyuan Ma, Zerun Ma, Wenchang Ning, Linke Ouyang, Jiantao Qiu, Yuan Qu, Fukai Shang, Yunfan Shao, Demin Song, Zifan Song, Zhihao Sui, Peng Sun, Yu Sun, Huanze Tang, Bin Wang, Guoteng Wang, Jiaqi Wang, Jiayu Wang, Rui Wang, Yudong Wang, Ziyi Wang, Xingjian Wei, Qizhen Weng, Fan Wu, Yingtong Xiong, Chao Xu, Ruiliang Xu, Hang Yan, Yirong Yan, Xiaogui Yang, Haochen Ye, Huaiyuan Ying, Jia Yu, Jing Yu, Yuhang Zang, Chuyu Zhang, Li Zhang, Pan Zhang, Peng Zhang, Ruijie Zhang, Shuo Zhang, Songyang Zhang, Wenjian Zhang, Wenwei Zhang, Xingcheng Zhang, Xinyue Zhang, Hui Zhao, Qian Zhao, Xiaomeng Zhao, Fengzhe Zhou, Zaida Zhou, Jingming Zhuo, Yicheng Zou, Xipeng Qiu, Yu Qiao, Dahua Lin
Date: March 26, 2024
Abstract: The evolution of Large Language Models (LLMs) such as ChatGPT and GPT-4 has sparked discussions on the advent of Artificial General Intelligence (AGI). However, replicating such advancements in open-source models has been challenging. This paper introduces InternLM2, an open-source LLM that outperforms its predecessors across 6 dimensions and 30 benchmarks, in long-context modeling, and in open-ended subjective evaluations, through innovative pre-training and optimization techniques. The report details InternLM2's pre-training process, highlighting the preparation of diverse data types including text, code, and long-context data. InternLM2 efficiently captures long-term dependencies: it is initially trained on 4k tokens and then advanced to 32k tokens during the pre-training and fine-tuning stages, exhibiting remarkable performance on the 200k "Needle-in-a-Haystack" test. InternLM2 is further aligned using Supervised Fine-Tuning (SFT) and a novel Conditional Online Reinforcement Learning from Human Feedback (COOL RLHF) strategy that addresses conflicting human preferences and reward hacking. By releasing InternLM2 models at different training stages and model sizes, we provide the community with insights into the model's evolution.
Submission history:
- From: Hang Yan [view email]
- [v1] Tue, 26 Mar 2024 00:53:24 UTC (3,291 KB)
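The "Needle-in-a-Haystack" test mentioned in the abstract hides a short "needle" fact at varying depths inside long filler text and checks whether the model can retrieve it from a given context length. Below is a minimal sketch of such a probe; the helper names and the crude chars-per-token estimate are illustrative, and the paper's actual harness may differ.

```python
# Minimal needle-in-a-haystack probe (illustrative sketch, not the paper's harness).

def build_haystack(filler: str, needle: str, depth: float, target_tokens: int,
                   chars_per_token: float = 4.0) -> str:
    """Tile `filler` to roughly `target_tokens` tokens (crude character
    estimate) and splice `needle` in at fractional `depth` (0.0 = start)."""
    target_chars = int(target_tokens * chars_per_token)
    haystack = (filler * (target_chars // len(filler) + 1))[:target_chars]
    pos = int(len(haystack) * depth)
    return haystack[:pos] + " " + needle + " " + haystack[pos:]

def probe(generate, needle: str, question: str, answer: str,
          context_tokens: int, depth: float, filler: str) -> bool:
    """`generate` is any prompt -> completion callable wrapping the model."""
    prompt = (build_haystack(filler, needle, depth, context_tokens)
              + f"\n\nQuestion: {question}\nAnswer:")
    return answer.lower() in generate(prompt).lower()

# Sweeping context length and needle depth yields the usual retrieval heatmap:
# for length in (4_000, 32_000, 200_000):
#     for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
#         ok = probe(my_generate, "The magic number is 42017.",
#                    "What is the magic number?", "42017",
#                    length, depth, filler_text)
```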
Post by: milliondreams
Comments:
zone411: We really need better long context benchmarks than needle-in-a-haystack. There is LV-Eval (https://arxiv.org/abs/2402.05136) with multi-hop QA that's better but still pretty basic.
viraptor: The repo is here: https://github.com/InternLM/InternLM
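For readers who want to try the model from the linked repo: a hedged sketch of loading the chat variant through Hugging Face transformers, following the usage shown in the repo's README at the time (the model ID and the remote-code `chat` helper may have changed since).

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Model ID and the chat() helper follow the InternLM repo's README;
# verify against the current repo before relying on them.
name = "internlm/internlm2-chat-7b"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
).eval()

response, history = model.chat(tokenizer, "Hello! Who are you?", history=[])
print(response)
```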
dannyw: How good is the base (non-instruction-tuned) model? Everyone is trying to make chat bots, but for my use cases, I find base models more suitable.
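On the base-vs-chat point: the non-instruction-tuned checkpoints do plain next-token continuation, so they are driven with raw prompts and `generate` rather than a chat template. A sketch, assuming the base model ID from the same Hugging Face namespace:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Base (non-chat) checkpoint: completes text instead of answering as an
# assistant. Model ID assumed from the internlm HF namespace; verify it.
name = "internlm/internlm2-7b"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
).eval()

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```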
esha_manideep: Pretty amazing to see training data being discussed more openly