First “modern and powerful” open source LLM?

Key features

  • Fully open model: open weights + open data + full training details including all data and training recipes
  • Massively Multilingual: 1811 natively supported languages
  • Compliant: Apertus is trained while respecting the opt-out consent of data owners (even retrospectively) and avoiding memorization of training data

Zerush@lemmy.ml:

    You can find it on Hugging Face.

    Apertus is designed with transparency at its core, ensuring full reproducibility of the training process. Alongside the models, the research team has published a range of resources: comprehensive documentation and source code for the training process and datasets used, plus model weights including intermediate checkpoints – all released under a permissive open-source license that also allows commercial use. The terms and conditions are available via Hugging Face.

    Apertus was developed with due consideration to Swiss data protection law, Swiss copyright law, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on data that is publicly available. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data and other undesired content before training begins.

    You can use it here (optional free account).

    Review:

    Apertus truly delivers on its transparency promises, representing one of the most open and transparent LLM projects to date. The philosophy has been “open at every level,” backed by concrete actions that set new standards for AI transparency.
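Since the weights and intermediate checkpoints mentioned above are published on Hugging Face, loading Apertus should work like any other causal language model in the `transformers` library. The sketch below is an illustration rather than something from the thread: the repo id `swiss-ai/Apertus-8B-Instruct-2509`, the prompt, and the generation settings are assumptions, so check the Hugging Face organization page for the exact model names and sizes.

```python
# Minimal sketch: loading an Apertus checkpoint from Hugging Face with transformers.
# The repo id below is an assumption; verify it on the Hugging Face hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B-Instruct-2509"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "What does 'fully open' mean for an LLM?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly generated tokens.
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

On a CPU-only machine you can drop `device_map="auto"` (which relies on the `accelerate` package) and load the model without it.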