
Awakened-Ethics-Free

An open sample of classical philosophy and ethics from Iron Bank v1.1.0


🎬 See It In Action

Video Demo: Watch this pack power a baseline vs. ADS comparison → YouTube – Awakened Ethics Demo


Dataset Description

This dataset contains 5,276 wisdom nodes extracted from classical philosophy and ethics texts (Project Gutenberg public domain). Sources include Marcus Aurelius, Seneca, Epictetus, Plato, and other timeless thinkers.

Unlike typical scraped data, every node has:

  • Evidence chains back to source text
  • Posterior score (extraction confidence: 0.0-1.0)
  • Warmth rating (human-relevance indicator)
  • Provenance tracking via lineage fields

Statistics

| Metric | Value |
|---|---|
| Nodes | 5,276 |
| Average posterior | 0.923 |
| Source | Iron Bank v1.1.0, pack ethics_gutenberg |
| Warmth filter | high, medium |

🔗 Related Resources

This dataset is part of the Iron Bank v1.1.0 corpus.


Intended Uses

  • Evaluation: Test retrieval quality on philosophical content
  • Prompt Engineering: Inject classical wisdom into LLM prompts
  • Fine-tuning: Ethics-aware model training
  • Demos: Show structured wisdom extraction capabilities
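For the prompt-engineering use case, a node's `core_insight` and `context` fields can be rendered directly into a grounding passage. A minimal sketch follows; the node below is an invented illustration (field names match the schema on this card, but the values and the helper function are hypothetical):

```python
# Hypothetical example: injecting a wisdom node into an LLM prompt.
# Field names follow the dataset schema; values are invented for illustration.

node = {
    "wisdom_id": "ethics_gutenberg:0001",
    "core_insight": "You have power over your mind, not outside events.",
    "context": "Stoic advice on focusing effort where control is possible.",
    "posterior": 0.95,
    "warmth": "high",
}

def build_system_prompt(node: dict) -> str:
    """Render a wisdom node as a short grounding passage for a prompt."""
    return (
        "Consider this classical insight when answering:\n"
        f'  "{node["core_insight"]}"\n'
        f"  (context: {node['context']})"
    )

print(build_system_prompt(node))
```

The `posterior` and `warmth` fields can also be checked here, so only high-confidence, human-relevant nodes reach the prompt.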

Schema

Each row contains:

| Field | Description |
|---|---|
| wisdom_id | Unique identifier |
| pack_id | Source pack (ethics_gutenberg) |
| core_insight | The extracted wisdom |
| context | Background/situation |
| evidence | Supporting quotes from the source text |
| posterior | Extraction confidence (0.0-1.0) |
| warmth | Human-relevance rating (high/medium/low) |
| source_uri | Link to the original text |
| lineage | Extraction provenance |
Data Source

Curated from public-domain texts via Project Gutenberg. This is NOT scraped web data: it's extracted wisdom from humanity's best philosophical writing.

License

Data: CC BY-SA 4.0
Code: MIT

About Awakened Intelligence

We build cathedral-grade AI training data. Every node is extracted with the same care a master craftsman puts into foundation stones.

"Measure twice, cut once."


Generated from Iron Bank v1.1.0 on 2025-12-27
