---
license: apache-2.0
language:
- en
tags:
- rust
- code-search
- text-to-code
- code-to-text
- source-code
- systems
- backend
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: code
    dtype: string
  - name: docstring
    dtype: string
  - name: func_name
    dtype: string
  - name: language
    dtype: string
  - name: repo
    dtype: string
  - name: path
    dtype: string
  - name: url
    dtype: string
  - name: license
    dtype: string
  splits:
  - name: train
    num_bytes: 376169830
    num_examples: 381521
  - name: validation
    num_bytes: 6478931
    num_examples: 6333
  - name: test
    num_bytes: 10118426
    num_examples: 8868
  download_size: 116295779
  dataset_size: 392767187
---

# Rust CodeSearch Dataset (Shuu12121/rust-treesitter-dedupe-filtered-datasetsV2)

## Dataset Description

This dataset contains Rust functions and methods paired with their documentation comments, extracted from open-source Rust repositories on GitHub. It is formatted similarly to the CodeSearchNet challenge dataset.

Each entry includes:

- `code`: The source code of a Rust function or method.
- `docstring`: The documentation comment (doc comment) associated with the function/method.
- `func_name`: The name of the function/method.
- `language`: The programming language (always "rust").
- `repo`: The GitHub repository from which the code was sourced (e.g., "owner/repo").
- `path`: The file path within the repository where the function/method is located.
- `url`: A direct URL to the function/method's source file on GitHub (approximated to the master/main branch).
- `license`: The SPDX identifier of the license governing the source repository (e.g., "MIT", "Apache-2.0").

Additional metrics are included when available (computed with the `lizard` tool):

- `ccn`: Cyclomatic complexity number.
- `params`: Number of parameters of the function/method.
- `nloc`: Non-commenting lines of code.
- `token_count`: Number of tokens in the function/method.

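For illustration, a single entry has the following shape. All values below are hypothetical and do not come from the actual dataset:

```python
# Hypothetical example of one dataset record; field values are
# illustrative only, but the field names match the schema above.
example = {
    "code": "/// Adds two numbers.\npub fn add(a: i32, b: i32) -> i32 {\n    a + b\n}",
    "docstring": "Adds two numbers.",
    "func_name": "add",
    "language": "rust",
    "repo": "owner/repo",
    "path": "src/lib.rs",
    "url": "https://github.com/owner/repo/blob/master/src/lib.rs",
    "license": "MIT",
}

# Every core field is string-typed, per the dataset_info metadata.
assert all(isinstance(v, str) for v in example.values())
```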
## Dataset Structure

The dataset is divided into the following splits:

- `train`: 381,521 examples
- `validation`: 6,333 examples
- `test`: 8,868 examples

## Data Collection

The data was collected by:

1. Identifying popular and relevant Rust repositories on GitHub.
2. Cloning these repositories.
3. Parsing Rust files (`.rs`) with tree-sitter to extract functions/methods and their doc comments.
4. Filtering functions/methods based on code length and the presence of a non-empty doc comment.
5. Using the `lizard` tool to calculate code metrics (CCN, NLOC, number of parameters).
6. Storing the extracted data in JSONL format, including repository and license information.
7. Splitting the data by repository to ensure no data leakage between the train, validation, and test sets.

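Step 7, the repository-level split, can be sketched as below. The hashing scheme and the split ratios shown here are illustrative assumptions, not the exact procedure used to build this dataset:

```python
import hashlib

def assign_split(repo: str) -> str:
    """Deterministically map a repository name to a split, so that every
    function from a given repo lands in the same split (no leakage).
    The 96/2/2 ratio is an illustrative assumption."""
    bucket = int(hashlib.sha256(repo.encode("utf-8")).hexdigest(), 16) % 100
    if bucket < 96:
        return "train"
    elif bucket < 98:
        return "validation"
    return "test"

# All samples from the same repository always share one split.
assert assign_split("owner/repo") == assign_split("owner/repo")
```

Hashing the repository name (rather than shuffling individual functions) is what prevents near-duplicate code from one project appearing in both train and test.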
## Intended Use

This dataset can be used for tasks such as:

- Training and evaluating models for code search (natural language to code).
- Code summarization / docstring generation (code to natural language).
- Studies on Rust code practices and documentation habits.

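For code search, query/target pairs can be built directly from the `docstring` and `code` fields. A minimal sketch over plain dicts (the records shown are hypothetical):

```python
# Hypothetical records using the dataset's field names.
records = [
    {"docstring": "Adds two numbers.",
     "code": "pub fn add(a: i32, b: i32) -> i32 { a + b }"},
    {"docstring": "",  # undocumented function: skipped below
     "code": "pub fn noop() {}"},
]

# The docstring serves as the natural-language query and the function
# body as the retrieval target; entries without documentation are dropped.
pairs = [(r["docstring"], r["code"]) for r in records if r["docstring"].strip()]

assert len(pairs) == 1
```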
## Licensing

The code examples within this dataset are sourced from repositories with permissive licenses (typically MIT, Apache-2.0, or BSD). Each sample includes its original license information in the `license` field. The dataset compilation itself is provided under the Apache-2.0 license (see the metadata above), but users should respect the original licenses of the underlying code.

## Example Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Shuu12121/rust-treesitter-dedupe-filtered-datasetsV2")

# Access a split (e.g., train)
train_data = dataset["train"]

# Print the first example
print(train_data[0])

# Example: keep only functions from MIT-licensed repositories
mit_only = train_data.filter(lambda example: example["license"] == "MIT")
```