Work In Progress: new versions will be available in the coming weeks/months.

Sampling Parameters: For optimal performance, we recommend using temperatures close to zero (0 - 0.2). Additionally, we advise against using any type of repetition penalty, as in our experience it negatively impacts the instructed model's responses.

ALIA-40b-instruct Model Card

ALIA-40b-instruct-2605 is the latest release in the ALIA model family. While development is ongoing and further updates are expected, this version already incorporates several notable improvements over previous releases.

Main improvements

  • Exact Instruction Following: Enhanced instruction-tuning, leading to more reliable adherence to user constraints.
  • Formatting: Better output formatting capabilities, such as JSON or Markdown.
  • Long-Context: Improved long-context and multi-turn capabilities compared to previous versions of ALIA-40b-instruct.
  • System prompt: Compared to previous versions, this one substantially improves system-prompt following capabilities.
  • Summarization: Improved ability to perform concise and accurate summarization of texts of variable lengths.

The ALIA-40b-instruct model is an instructed variant of a context-extended base ALIA-40b model, trained on a total of 9.83 trillion tokens of carefully curated data spanning 35 European languages (including code). This instructed version is optimized to follow user prompts and engage in dialogue. It supports a broad range of languages (e.g. Spanish, Catalan, Basque, English, etc.) and is capable of text generation, translation, summarization, and question-answering in these languages.

In keeping with our commitment to open-source development, all tools and sources used to process and create the training data are open-licensed. For clarity, our definition of open-licensed excludes any source, tool, model, or dataset whose terms of use impose restrictive conditions that impede standard open reuse.

This model is released under the permissive Apache 2.0 license. Along with the open weights, all training scripts and configuration files are made publicly available in this GitHub repository.

To visit the model cards of other model versions, please refer to the Model Index.


Model Details

Description

The ALIA-40b is a transformer-based, decoder-only language model that was pre-trained from scratch on 9.37 trillion tokens of meticulously curated data. It subsequently underwent continued pretraining on an additional 424 billion high-quality tokens, and was further extended with a supplementary 39 billion tokens drawn from a similarly diverse mixture, totalling 9.83 trillion tokens.

ALIA-40b-Instruct is an instructed variant of this latest ALIA-40b version. In contrast to previous versions, its development process comprises only two consecutive stages, each targeting a specific capability: (1) long-context adaptation to extend the model’s context window, and (2) supervised fine-tuning to improve instruction-following capabilities.

After long-context adaptation, our post-training process consists of a supervised fine-tuning (SFT) stage spanning up to 32k tokens, using 712k conversation samples to strengthen instruction following and instill conversational capabilities.

It is worth mentioning that this checkpoint has not yet undergone an alignment process, unlike previous versions.

Although the base model is highly multilingual, the post-training process focused primarily on Spanish, Catalan, Basque, Galician, and English. We also incorporated data from other related languages where inclusion empirically improved the performance on the target languages. However, performance in those additional languages is not guaranteed due to the limited amount of available data and the scarcity of evaluation resources.

Hyperparameters

Here we list the specific hyperparameters used during the different training stages.

Long context CPT

| Hyperparameter | Value |
|---|---|
| Learning rate | 9e-7 |
| LR scheduler | Constant |
| Tokens per update | 4M |
| Training tokens (4k → 32k) | 2B |
| Training tokens (32k → 160k) | 36.8B |

Supervised Fine-Tuning (SFT)

| Hyperparameter | Value |
|---|---|
| Learning rate | 1e-5 |
| Batch size | 512 |
| Epochs | 1 |
| LR scheduler | Cosine |
| Warmup steps | 5 |
| Number of samples | 712,213 |

Architecture

| Attribute | Value |
|---|---|
| Total parameters | 40,433,885,184 |
| Embedding parameters | 2,097,152,000 |
| Layers | 48 |
| Hidden size | 8,192 |
| Attention heads | 64 |
| Context length | 163,840 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation function | SwiGLU |
| Layer normalization | RMSNorm |
| Flash attention | Yes |
| Grouped Query Attention | Yes |
| Num. query groups | 8 |
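
For a quick sanity check of these values against the released checkpoint, the configuration can be inspected directly (a minimal sketch; the attribute names assume a Llama-style configuration and may differ for this architecture):

from transformers import AutoConfig

# Load only the configuration; no weights are downloaded.
config = AutoConfig.from_pretrained("BSC-LT/ALIA-40b-instruct-2605")

print(config.max_position_embeddings)  # expected: 163840 (context length)
print(config.vocab_size)               # expected: 256000
print(config.num_hidden_layers)        # expected: 48
print(config.num_key_value_heads)      # expected: 8 (query groups)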

Intended Use

Direct Use

ALIA-40b-instruct is intended for research and development purposes as a general-purpose multilingual assistant. It can be used to generate text, answer questions, translate between supported languages, and follow user instructions in those languages. As noted in the ALIA-40b base model card, the ALIA family is aimed at both research and commercial use in any of the covered languages. In practice, ALIA-40b-instruct is best suited for tasks like multilingual chatbots, summarization, translation, and content generation, provided users are aware of its limitations.

Out-of-scope Use

The model is not intended for malicious activities, such as harming others or violating human rights. Any downstream application must comply with current laws and regulations. Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.


Hardware and Software

Training Framework

The post-training process was conducted with NeMo-RL, with minor modifications to adapt it to our infrastructure.

Compute Infrastructure

All models were trained on MareNostrum 5, a pre-exascale EuroHPC supercomputer hosted and operated by Barcelona Supercomputing Center.

The accelerated partition is composed of 1,120 nodes with the following specifications:

  • 4x NVIDIA Hopper GPUs with 64 GB HBM2 memory
  • 2x Intel Sapphire Rapids 8460Y+ at 2.3 GHz, 32 cores each (64 cores total)
  • 4x NDR200 (800 Gb/s bandwidth per node)
  • 512 GB of main memory (DDR5)
  • 460 GB of NVMe storage

The table below specifies the number of nodes and GPUs employed for each post-training stage:

| Phase | Nodes | GPUs |
|---|---|---|
| SFT | 16 | 64 |

How to use

The model can be used either directly in Python using the transformers library or deployed as a service and used through standard API calls.

While the former gives the most control over the inference process, it requires the code to be executed on a machine with a GPU powerful enough to run the model locally, and it is more error-prone than the alternative. We therefore strongly recommend the latter: deploying the model as a service, whether locally or on a remote server, makes the model available to multiple clients in parallel, among other advantages.

Unless you have very specific needs (e.g., for research) that require adapting the inference process, it is preferable to follow the "deployment as a service" guidelines below.

Local inference with Python / transformers

The instruction-following models utilize the widely adopted ChatML template to structure conversational inputs and outputs.

Using this standardized chat format ensures a consistent and enhanced conversational experience. The template can be easily applied through the tokenizer’s built-in functions, as illustrated in the example snippet below:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "BSC-LT/ALIA-40b-instruct-2605"

text = "At what temperature does water boil?"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16
  )

message = [ { "role": "user", "content": text } ]

prompt = tokenizer.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Using this template, each turn in the conversation is preceded by a <|im_start|> delimiter indicating the beginning of a message, followed by the role of the entity (either user, for content supplied by the user, or assistant for the model's responses), and finished with the <|im_end|> token:

<s><|im_start|>user
At what temperature does water boil?<|im_end|>
<|im_start|>assistant
Water turns into vapor at 100°C.<|im_end|>

Loading the model with transformers' AutoModelForCausalLM ensures that the repository's default generation settings are applied. If using alternative inference libraries such as vLLM, Ollama, or SGLang, it is crucial to verify that suitable parameters are used. To ensure optimal results, we recommend temperatures in the 0-0.2 range and no repetition penalty of any kind.
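
For reference, a minimal sketch of how these recommendations translate into explicit generation arguments for the transformers example above (the exact values are illustrative):

# Low-temperature sampling as recommended above; for temperature 0,
# use greedy decoding instead (do_sample=False).
outputs = model.generate(
    input_ids=inputs.to(model.device),
    max_new_tokens=200,
    do_sample=True,
    temperature=0.1,         # recommended range: 0 - 0.2
    repetition_penalty=1.0,  # 1.0 disables the repetition penalty
)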


Deployment as service and remote use (Messages API)

In our experience, vLLM works well for deploying the full unquantized version of the model, whereas llama.cpp is appropriate for the quantized (GGUF) version. We strongly discourage using Ollama, as we have encountered compatibility issues that may seriously degrade the model's performance.

The easiest and most reliable way to obtain a working deployment of ALIA-40b-instruct is through the "Deploy / HF Inference Endpoints" option directly on the Hugging Face model page. This automatically creates a functioning endpoint, using vLLM or llama.cpp according to the model variant, with an appropriately dimensioned GPU. While there are additional settings available for the endpoint, we found the standard configuration proposed by Hugging Face to be a reasonable starting point.

Once the endpoint is running, the model can be easily called using OpenAI's "Messages API" (the de facto standard API for LLM use). By using this API the chat template is applied automatically by the service, requiring no explicit configuration on the client side. The endpoint's configuration page on Hugging Face also provides a "Playground" for testing and API examples, as well as a simple chat interface.

Example usage:

# pip install openai

from openai import OpenAI

# Placeholders: replace with your endpoint URL and Hugging Face token.
client = OpenAI(
    base_url=YOUR_ENDPOINT_URL,
    api_key=YOUR_HF_TOKEN,
)

chat_completion = client.chat.completions.create(
    model="BSC-LT/ALIA-40b-instruct-2605",
    messages=[
        {
            "role": "user",
            "content": "What is deep learning?"
        }
    ],
    max_tokens=1000,
    temperature=0.1,
)

print(chat_completion.choices[0].message.content)

The model can also be deployed locally or on any server infrastructure with sufficient GPUs, using vLLM or llama.cpp. We recommend an initial deployment on Hugging Face as a point of reference and comparison, to make sure the model is behaving as expected in the desired deployment setup.
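
As a rough sketch of such a local deployment using vLLM's offline Python API (the tensor_parallel_size value is an assumption and should match the number of available GPUs):

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "BSC-LT/ALIA-40b-instruct-2605"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Shard the model across 4 GPUs (adjust to your hardware).
llm = LLM(model=model_id, tensor_parallel_size=4)

# Apply the ChatML template exactly as in the transformers example above.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is deep learning?"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Low temperature, no repetition penalty, per the recommendations above.
params = SamplingParams(temperature=0.1, max_tokens=200)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)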

To check that your endpoint is working correctly, you can try to replicate the examples contained in this Colab Notebook.


Instruction Tuning Data

The dataset used in the supervised fine-tuning stage consists of 712k conversations of up to 32k tokens each. An initial mixture was obtained by combining a selection of permissively licensed (human and synthetic) datasets with a collection of synthetic conversations curated in-house; the result was then filtered with rule-based and LLM-based quality filters.

The synthetic conversations are generated using previous internal versions of ALIA's model family together with DeepSeek-V3-0324, leveraging seed data and prompts from pre-training corpora, as well as other openly available instruction datasets.

The table below provides a detailed breakdown of the datasets included in this mixture, specifying their language and contribution to the overall corpus:

Dataset ca en es eu gl pt Total Conversations
aya-dataset 2977 2568 715 7006 13266
coqcat-train 2429 2429
databricks-dolly-15k 12044 12044
dolly-ca 2449 2449
eif-augmentation 6540 27860 7646 2615 2691 47352
fineweb-edu_qa 19389 17837 19992 17787 18605 93610
flores-dev 766 918 1745 359 419 4207
long-context-qa 33 96 33 34 34 33 263
mentor-ca 5047 5047
mentor-es 5281 5281
multiturn-augmentation 5982 23901 2132 2470 34485
no-robots 7813 7813
oasst2_self-identity-rephrase 2 700 323 6 1031
openr1-math 69052 69052
openr1-math-translated 35142 68136 33899 33810 34074 205061
rag-multilingual 13880 12922 9611 36413
self-identity 1259 1286 1314 1318 1272 6449
system-prompt-eif-translation 50 50 50 50 50 250
tower-blocks 1923 965 5914 8802
wildchat-curated-deepseekv3 141717 15192 156909
Total 92968 321096 134988 59253 56881 47027 712213

Detailed SFT Data Sources:

The following table provides a detailed overview of the supervised fine-tuning data sources, including the dataset name, generation method, license and a brief description of each:

SFT Datasets
| Dataset | Generation Method | License | Description |
|---|---|---|---|
| aya-dataset | Human Crowdsourced | Apache-2.0 | aya_dataset for the languages of interest.* |
| coqcat-train | Human Annotation | CC-BY-NC-ND-4.0 | CoQCat train split, formatted using conversational templates. |
| databricks-dolly-15k | Human Annotation | CC-BY-SA-3.0 | databricks-dolly-15k dataset.* |
| dolly-ca | Human Translation | CC-BY-SA-3.0 | dolly3k_ca dataset. |
| flores-dev | Human | CC-BY-SA-4.0 | Flores-200 dev split, formatted using conversational templates. |
| mentor-es | Human Annotation | CC-BY-4.0 | MentorES dataset. |
| mentor-ca | Machine Translation | CC-BY-4.0 | MentorCA dataset. Machine-translated version of MentorES. |
| no-robots | Human Annotation | CC-BY-NC-4.0 | no_robots dataset.* |
| rag-multilingual | Synthetic | CC-BY-SA-4.0 | RAG_Multilingual dataset. Synthetic QA dataset generated with Mixtral8x7b. |
| tower-blocks | Mixture | Various licenses (only openly licensed instances are used) | TowerBlocks-v0.2, filtered by subdataset license and the languages of interest.* |
| oasst2_self-identity-rephrase | Human Crowdsourced / Synthetic | Apache-2.0 | Identity instances from the oasst2 dataset for the languages of interest, subsequently rephrased using DeepSeek-V3-0324 to adapt the model’s identity information to our case. |
| self-identity | Synthetic | Apache-2.0 (internal) | Conversations involving self-identity information of the model, synthetically curated using DeepSeek-V3-0324. |
| open-r1-math | Synthetic | Apache-2.0 | Default 93k split of the OpenR1-Math-220k dataset.* |
| open-r1-math_translated | Synthetic | Apache-2.0 (internal) | OpenR1-Math-220k default split translated to the languages of interest with DeepSeek-V3-0324. |
| fineweb-edu_qa | Synthetic | Apache-2.0 (internal) | QA conversations created by prompting DeepSeek-V3-0324 with the highest-quality documents of FineWeb-Edu, subsequently filtered with the same model to ensure that self-contained question-answer pairs meet quality thresholds. |
| wildchat-curated-deepseekv3 | Human / Synthetic | Apache-2.0 (internal) | Human prompts from the WildChat-1M dataset together with responses generated with DeepSeek-V3-0324. |
| eif-augmentation | Synthetic | Apache-2.0 (internal) | Conversations sampled from the mixture and modified to target exact instruction following. Generated with DeepSeek-V3-0324 and filtered with verifiable constraints. |
| multiturn-augmentation | Synthetic | Apache-2.0 (internal) | Conversations sampled from the mixture and augmented with subsequent user/assistant turns using DeepSeek-V3-0324. |
| long-context-qa | Synthetic | Apache-2.0 (internal) | QA conversations created by prompting previous internal model versions with long documents extracted from FineWeb-Edu. |
| system-prompt-eif-translation | Human | CC-BY-4.0 (internal) | Long-context translation conversations specially formatted to target system-prompt following and special formatting capabilities. Original translations extracted from the ACAData dataset. |

*All externally sourced datasets have undergone a sanity check using shallow rule-based filtering to discard incorrect or low-quality samples and ensure conversational quality.
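
For illustration, a shallow rule-based pass of this kind might look as follows (a hypothetical sketch; the filters actually applied are not reproduced here):

def passes_shallow_filters(conversation: list[dict]) -> bool:
    """Hypothetical sanity check for a list of {"role", "content"} turns."""
    if not conversation:
        return False
    for turn in conversation:
        text = turn.get("content", "")
        # Discard empty turns and degenerate, overly long ones.
        if not text.strip() or len(text) > 100_000:
            return False
        # Discard turns that are mostly non-alphanumeric noise.
        alnum = sum(c.isalnum() or c.isspace() for c in text)
        if alnum / len(text) < 0.5:
            return False
    # Require at least one complete user/assistant exchange.
    roles = {turn["role"] for turn in conversation}
    return {"user", "assistant"} <= roles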

Evaluation

Gold-standard benchmarks

Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from SpanishBench, CatalanBench, BasqueBench and GalicianBench, as well as existing English tasks available in the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. The tables below report results for a representative selection of evaluation datasets, capturing the model's performance across a variety of tasks within these benchmarks.

Only tasks that are human-generated, human-translated, or involve a strong human-in-the-loop process (i.e., machine translation followed by professional revision, or machine generation followed by human revision and annotation) were used. This approach explains the variation in the number of tasks reported across languages. As additional high-quality tasks are published, we will update the evaluation results accordingly. We also plan to expand evaluation to other languages, provided that the datasets meet our quality standards.

During the implementation of the evaluation, we observed a series of issues worth considering when replicating and interpreting the results presented. These include variances of roughly 1.5% in performance on some tasks, depending on the version of the transformers library used and on whether tensor parallelism is used when loading a model. When implementing existing tasks, we carry out a comprehensive quality evaluation of the dataset, the Harness task itself, and the kind of input models see during evaluation. Our implementation (see links above) addresses multiple existing problems, such as errors in datasets and prompts and a lack of pre-processing. All this means that results will differ if other Harness implementations are used, and may vary slightly depending on the replication setup.

It should be noted that these results are subject to all the drawbacks of every current gold-standard evaluation, and that the figures do not fully represent the model's capabilities and potential. We thus advise caution when reading and interpreting the results.

All results reported below correspond to a 0-shot evaluation setting.

Spanish

WiP

LLM-as-a-judge

We use Prometheus-2 8x7B as a judge to evaluate the responses of the model. Tasks are created from existing multilingual evaluation datasets covering the same categories as those measured in our gold-standard benchmarks. We randomly select a subset of 250 instances per language from the test set of each source dataset. To evaluate the responses of our model, we use task-specific criteria developed in-house for the LLM-judge to apply. Each criterion is measured either on a 5-point Likert scale or as a binary task, depending on the nature of the task and criterion.

Prompts for each task are created in various ways to score the model's robustness in addition to these criteria. This is done by presenting the same source instance within three different prompts. We then calculate the variance between the scores assigned by the LLM-judge to the model's responses across the three prompt styles and average it over all instances. Prompts are human-translated into all measured languages. We do not provide the LLM-judge with a reference answer.
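
In code, this robustness measure amounts to the following (a minimal sketch; variable names are illustrative):

from statistics import mean, pvariance

# judge_scores[i] holds the judge's scores for instance i under the
# three prompt styles used to present the same source instance.
judge_scores = [
    [5, 4, 5],  # instance 0
    [3, 3, 2],  # instance 1
    # ... one row of three scores per instance
]

# Per-instance variance across prompt styles, averaged over all instances.
robustness = mean(pvariance(scores) for scores in judge_scores)
print(f"Average cross-prompt variance: {robustness:.3f}")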

The judge prompt we use during evaluation is the same as the one used to fine-tune the Prometheus-2 family. We keep the judge prompt and the criteria used to present the LLM-judge with the task prompts and model responses in English for evaluation across all languages. The judge prompt used is:

"You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.

###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between {a} and {b}. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between {a} and {b})\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{input}

###Response to evaluate:
{prediction}

###Score Rubrics:
{criteria}

###Feedback:"

As an example, prompts for the Math task in English are based on instances from MGSM, and each instance is presented within these prompts:

"en": [
      ("I need help with this math problem: \"", "\" Give me the answer step by step and also the final result separately."),
      ("Can you please help me answer this? \"", "\" Explain the answer and give me the final result as well. Thanks."),
      ("Help me with this problem: \"", "\" I need the answer explained and the final result separately.")
]

This task is then evaluated by the LLM-judge using two criteria, reasoning capability (5-point Likert) and mathematical correctness (binary):

reasoning_capability_criteria = {
    "reasoning_capability": """
[Does the model's answer demonstrate reasoning capability?]
Score 1: The answer demonstrates poor reasoning, with illogical arguments or conclusions that do not follow from the provided information.
Score 2: The answer shows weak reasoning, with some logical connections but also contains significant flaws or gaps in the argumentation.
Score 3: The answer demonstrates adequate reasoning, with generally logical arguments, but may have minor flaws or a lack of depth in the reasoning process.
Score 4: The answer shows strong reasoning, with well-structured arguments and conclusions that logically follow from the information provided.
Score 5: The answer demonstrates exceptional reasoning, with clear, coherent, and insightful arguments that are logically sound and well-supported by the information provided."""
}

mathematical_correctness_binary_criteria = {
    "mathematical_correctness_binary": """
[Is the model's answer mathematically correct?]
Score 0: The answer contains mathematical errors that render the solution incorrect or unreliable.
Score 1: The answer is mathematically correct, with accurate calculations and appropriate use of mathematical concepts."""
}
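
Putting these pieces together, a single judge call can be assembled from the template and one criterion, and its score recovered from the "[RESULT]" marker the template prescribes (a minimal sketch; the template argument is the judge prompt quoted above):

import re

def build_judge_input(template: str, task_prompt: str, response: str,
                      criteria: str, a: int, b: int) -> str:
    """Fill the Prometheus-style judge template with one evaluation case."""
    return (template
            .replace("{a}", str(a))
            .replace("{b}", str(b))
            .replace("{input}", task_prompt)
            .replace("{prediction}", response)
            .replace("{criteria}", criteria))

def parse_judge_score(judge_output: str) -> int | None:
    """Extract the integer score that follows the [RESULT] marker."""
    match = re.search(r"\[RESULT\]\s*(\d+)", judge_output)
    return int(match.group(1)) if match else None

# A Likert criterion uses a=1, b=5; a binary criterion uses a=0, b=1.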

Multilingual results

WiP

Long Context Evaluation

To assess the long-context capabilities of our model, we performed a "needle in a haystack" test with the following configuration:

  • Needle Phrase: "The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day."
  • System Prompt: "You are a helpful AI bot that answers questions for a user. Keep your response short and direct"
  • Retrieval Question: "What is the best thing to do in San Francisco?"
  • Evaluator: prometheus-8x7b-v2.0, used as the evaluation judge to determine whether the model correctly retrieved and utilized the long-context information.

This test specifically targets the model’s ability to retain and access information across very long sequences, providing a benchmark for evaluating its extended-context reasoning and retrieval performance.
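
A simplified construction of one test case might look as follows (a sketch under the configuration above; the filler corpus and depth grid are assumptions of this example):

NEEDLE = ("The best thing to do in San Francisco is eat a sandwich "
          "and sit in Dolores Park on a sunny day.")
QUESTION = "What is the best thing to do in San Francisco?"

def build_haystack(filler_text: str, context_tokens: int,
                   depth: float, tokenizer) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)
    inside filler text truncated to roughly `context_tokens` tokens."""
    ids = tokenizer.encode(filler_text)[:context_tokens]
    cut = int(len(ids) * depth)
    return (tokenizer.decode(ids[:cut]) + "\n" + NEEDLE + "\n"
            + tokenizer.decode(ids[cut:]))

# Each haystack is sent to the model together with QUESTION, and the judge
# (prometheus-8x7b-v2.0) scores whether the response reflects the needle.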

It is important to note that strong performance in the "needle in a haystack" test does not guarantee retention of short-context performance across larger tasks. This evaluation is therefore limited in scope. We are actively working on developing more robust metrics and evaluation protocols to further enhance the model’s long-context capabilities.


Ethical Considerations and Limitations

The ALIA-40b-instruct model is an instruction-tuned variant with several limitations that users should be aware of. Ongoing work is addressing these areas, including comprehensive evaluation of societal and cognitive biases as well as safety.

Functional Limitations:

  • No Function Calling: The model cannot natively execute or call external functions/APIs. Tasks requiring plugin calls or tool execution must be implemented outside the model.
  • Reasoning & Math: The model is not guaranteed to perform robust chain-of-thought reasoning or advanced mathematics. Complex logical puzzles or multi-step inferences may fail or produce inconsistent answers.
  • Code Generation: Although exposed to code during pretraining, ALIA-40b-Instruct is not a specialized code-generation model. It may produce code-like text, but outputs should be verified and tested before use in production codebases.
  • Agentive Capabilities: The model does not have agentive or autonomous action capabilities. It cannot act as an autonomous agent or execute multi-step workflows.

Bias and Harm:

WiP

Recommendations:

Developers should implement additional safety filters, human oversight, targeted evaluation suites, and secondary evaluation models when deploying this model. Do not deploy ALIA-40b-Instruct in critical applications without extensive testing and mitigation. Users are responsible for assessing and mitigating harmful behavior or misinformation resulting from model outputs, and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence.


Additional information

Author

The Language Modeling team from AI Institute at Barcelona Supercomputing Center.

Contact

For further information, please send an email to ai_institute_languagemodeling@bsc.es.

Copyright

Copyright(c) 2026 by The Language Modeling team from AI Institute at Barcelona Supercomputing Center.

Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Modelos del Lenguaje.

This work has been promoted and supported by the Government of Catalonia through the Aina Project.

Acknowledgements

This project has benefited from the contributions of numerous teams and institutions, mainly through data contributions, knowledge transfer or technical support.

We are especially grateful to our ILENIA project partners: CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria. Many other institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà. We thank the Welsh government, DFKI, Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration.

We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially to: Marcelo Sanchez, Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipe Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.

Their valuable efforts have been instrumental in the development of this work.

Disclaimer

Be aware that the model may show biases or other unintended distortions. When third parties deploy systems or provide services based on this model, or use the model themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence.

The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.

Citation

@misc{gonzalezagirre2025salamandratechnicalreport,
      title={Salamandra Technical Report}, 
      author={Aitor Gonzalez-Agirre and Marc Pàmies and Joan Llop and Irene Baucells and Severino Da Dalt and Daniel Tamayo and José Javier Saiz and Ferran Espuña and Jaume Prats and Javier Aula-Blasco and Mario Mina and Adrián Rubio and Alexander Shvets and Anna Sallés and Iñaki Lacunza and Iñigo Pikabea and Jorge Palomar and Júlia Falcão and Lucía Tormo and Luis Vasquez-Reina and Montserrat Marimon and Valle Ruíz-Fernández and Marta Villegas},
      year={2025},
      eprint={2502.08489},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.08489}, 
}

License

Apache License, Version 2.0

Model Index

| Model | Base | Instruct |
|---|---|---|
| 2b | Link | Link |
| 7b | Link | Link |
| 40b | Link | Link |