ByteDance's Doubao Large Model Team officially open-sources the first multilingual SWE dataset
On April 10, ByteDance's Doubao Large Model Team officially open-sourced Multi-SWE-bench, the first multilingual SWE dataset, which can be used to evaluate and improve the automatic bug-fixing capabilities of large models. Building on SWE-bench, Multi-SWE-bench is the first benchmark to cover seven mainstream programming languages beyond Python, making it a true evaluation benchmark for "full-stack engineering". Its data is drawn from GitHub issues and took nearly a year to build, with the goal of evaluating and improving the advanced programming intelligence of large models as accurately as possible.
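To make the structure of such a benchmark concrete, below is a minimal sketch of how one might load and inspect a SWE-bench-style instance. The Hugging Face dataset identifier and the field names are assumptions carried over from the original SWE-bench schema, not confirmed details of Multi-SWE-bench; consult the official repository for the actual layout.

```python
# A minimal sketch of inspecting a SWE-bench-style evaluation instance.
# Dataset ID and field names are assumptions based on the original SWE-bench
# schema; check the Multi-SWE-bench repository for the authoritative format.
from datasets import load_dataset

# Hypothetical Hugging Face dataset identifier -- replace with the real one.
bench = load_dataset("ByteDance-Seed/Multi-SWE-bench", split="test")

sample = bench[0]
# Typical SWE-bench-style fields: the source repository, the GitHub issue text
# the model must resolve, and the reference patch used for evaluation.
print(sample.get("repo"))               # e.g. "owner/project"
print(sample.get("problem_statement"))  # the issue description to be fixed
print(sample.get("patch"))              # the gold diff that resolved the issue
```

In this style of benchmark, a model is given the repository state and the issue text, produces a candidate patch, and is scored by whether the repository's tests pass after the patch is applied.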