  1. GitHub - XiaomiMiMo/MiMo-V2-Flash

    MiMo-V2-Flash is a Mixture-of-Experts (MoE) language model with 309B total parameters and 15B active parameters. Designed for high-speed reasoning and agentic workflows, it utilizes a novel …

  2. MiMo-V2-Flash | Xiaomi

    1 day ago · MiMo-V2-Flash is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting a hybrid attention architecture that interleaves sliding-window and full attention, … (a sketch of this interleaving follows the results below)

  3. SGLang Day-0 Support for MiMo-V2-Flash Model | LMSYS Org

    1 day ago · XiaomiMiMo/MiMo-V2-Flash, with 309B total parameters and 15B activated parameters, is a new inference-centric model designed to maximize decoding efficiency. It is based on two key … (a minimal SGLang client sketch follows the results below)

  4. Xiaomi MiMo-V2-Flash: A Bold Leap Toward AGI with Cutting ...

    1 day ago · MiMo-V2-Flash is more than just a technical feat; it’s a clear window into Xiaomi’s future. With a massive 309 billion total parameters and 15 billion active parameters, as detailed on Hugging …

  5. Xiaomi: MiMo-V2-Flash (free) – Performance Metrics

    3 days ago · See performance metrics across providers for Xiaomi: MiMo-V2-Flash (free) - MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts …

  6. Xiaomi releases MiMo-V2-Flash, an open-weight MoE - One News Page

    1 day ago · Xiaomi releases MiMo-V2-Flash, an open-weight MoE model with 309B total and 15B active parameters, saying it excels in reasoning, coding, and agentic scenarios

  7. Xiaomi Open Sources 309 Billion Parameter MiMo-V2-Flash Large ...

    MiMo-V2-Flash adopts a sparse activation architecture with 309 billion total parameters, but activates only 15 billion parameters per inference, significantly reducing computational costs while … (a routing and arithmetic sketch follows the results below)
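
Several of the results above repeat the same design point: 309B total parameters, of which only 15B are active per token. The sketch below is a toy illustration of the top-k expert routing that produces this kind of sparsity; the hidden size, expert count, and top-k value are made-up values for illustration, not MiMo-V2-Flash's actual configuration.

```python
# Toy top-k Mixture-of-Experts routing: only the routed experts run for a
# given token, so most parameters stay idle. All sizes below are illustrative
# assumptions, NOT the MiMo-V2-Flash configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMoELayer(nn.Module):
    def __init__(self, hidden: int = 64, n_experts: int = 16, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, n_experts, bias=False)
        # Each expert is a small feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, hidden)
        scores = self.router(x)                          # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # route each token to its top_k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens assigned to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


layer = ToyMoELayer()
total = sum(p.numel() for p in layer.parameters())
# Rough "active" count: router plus top_k experts' worth of weights per token.
active = sum(p.numel() for p in layer.router.parameters()) + \
    layer.top_k * sum(p.numel() for p in layer.experts[0].parameters())
print(f"total params: {total:,}, active per token: {active:,} ({active / total:.1%})")
```

With the headline figures from the results, the same ratio works out to roughly 15B / 309B ≈ 5% of weights touched per token, which is the basis of the "significantly reducing computational costs" claim in result 7.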
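
Result 2 describes a hybrid attention architecture that interleaves sliding-window and full attention across layers. Below is a minimal sketch of such interleaving using PyTorch's scaled_dot_product_attention; the window size and the "every fourth layer is global" pattern are assumptions for illustration, not the model's published layout.

```python
# Sketch of interleaving sliding-window and full (global) causal attention.
# Window size and interleaving pattern are illustrative assumptions only.
import torch
import torch.nn.functional as F


def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # True where attention is allowed: causal AND within `window` tokens back.
    pos = torch.arange(seq_len)
    dist = pos[:, None] - pos[None, :]           # query index minus key index
    return (dist >= 0) & (dist < window)


def attention(q, k, v, *, window: int | None) -> torch.Tensor:
    seq_len = q.shape[-2]
    if window is None:
        # Full causal attention over the whole sequence.
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)
    mask = sliding_window_mask(seq_len, window)  # (seq, seq) boolean mask
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)


# Toy shapes: (batch, heads, seq, head_dim)
q = k = v = torch.randn(1, 4, 128, 32)

# Hypothetical pattern: three sliding-window layers per full-attention layer.
for layer_idx in range(8):
    use_full = (layer_idx + 1) % 4 == 0
    out = attention(q, k, v, window=None if use_full else 64)
    print(f"layer {layer_idx}: {'full' if use_full else 'sliding-window'} -> {tuple(out.shape)}")
```

The design intent reported in the results is that sliding-window layers keep most of the attention cost linear in window size, while the occasional full-attention layer preserves long-range access, which supports the decoding-efficiency framing in result 3.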
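
Result 3 reports day-0 SGLang support. Since an SGLang server exposes an OpenAI-compatible endpoint, a client-side call could look like the sketch below (it needs the openai Python package); the port, placeholder API key, and served model name are assumptions that depend on how the server was launched, so check the SGLang and MiMo-V2-Flash documentation for the actual serving recipe.

```python
# Minimal client sketch against an SGLang server already serving
# XiaomiMiMo/MiMo-V2-Flash. Assumes the server was launched separately and
# listens on localhost:30000 (SGLang's default); adjust to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    # The model identifier may differ depending on the server launch options.
    model="XiaomiMiMo/MiMo-V2-Flash",
    messages=[{"role": "user", "content": "Summarize what a Mixture-of-Experts model is."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```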