## Sources

Here is the list of sources, along with sample contents:
- [agentmodel](https://agentmodels.org/)
- [agisf](https://course.aisafetyfundamentals.com/) - recommended readings from AGI Safety Fundamentals
- [aisafety.info](https://aisafety.info/) - Stampy's FAQ
- [alignmentforum](https://www.alignmentforum.org)
- [alignment_newsletter](https://rohinshah.com/alignment-newsletter/)
- [arbital](https://arbital.com/)
- [arxiv](https://arxiv.org/) - relevant research papers
- blogs - entire websites automatically scraped
  - [AI Impacts](https://aiimpacts.org/)
  - [AI Safety Camp](https://aisafety.camp/)
  - [carado.moe](https://carado.moe/)
  - [Cold Takes](https://www.cold-takes.com/)
  - [DeepMind technical blogs](https://www.deepmind.com/blog-categories/technical-blogs)
  - [DeepMind AI Safety Research](https://deepmindsafetyresearch.medium.com/)
  - [EleutherAI](https://blog.eleuther.ai/)
  - [generative.ink](https://generative.ink/posts/)
  - [Gwern Branwen's blog](https://gwern.net/)
  - [Jack Clark's Import AI](https://importai.substack.com/)
  - [MIRI](https://intelligence.org/)
  - [Jacob Steinhardt's blog](https://jsteinhardt.wordpress.com/)
  - [ML Safety Newsletter](https://newsletter.mlsafety.org/)
  - [Transformer Circuits Thread](https://transformer-circuits.pub/)
  - [OpenAI Research](https://openai.com/research/)
  - [Victoria Krakovna's blog](https://vkrakovna.wordpress.com/)
  - [Eliezer Yudkowsky's blog](https://www.yudkowsky.net/)
- [distill](https://distill.pub/)
- [eaforum](https://forum.effectivealtruism.org/) - selected posts
- [lesswrong](https://www.lesswrong.com/) - selected posts
- special_docs - individual documents curated from various resources
  - [Make a suggestion](https://bit.ly/ard-suggestion) for sources not already in the dataset
- youtube - playlists & channels
  - [AI Alignment playlist](https://www.youtube.com/playlist?list=PLCRVRLd2RhZTpdUdEzJjo3qhmX3y3skWA) and other lists
  - [AI Explained](https://www.youtube.com/@aiexplained-official)
  - [Evan Hubinger's AI Safety Talks](https://www.youtube.com/@aisafetytalks)
  - [AI Safety Reading Group](https://www.youtube.com/@aisafetyreadinggroup/videos)
  - [AiTech - TU Delft](https://www.youtube.com/@AiTechTUDelft/)
  - [Rob Miles AI](https://www.youtube.com/@RobertMilesAI)

## Keys

All entries contain the following keys:

- `id` - string of unique identifier
- `source` - string of data source listed above
- `title` - string of document title
- `authors` - list of strings of author names
- `text` - full text of document content
- `url` - string of valid link to text content
- `date_published` - in UTC format

Additional keys may be available depending on the source document.
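
As a minimal sketch of working with this schema (the distribution format is not specified here; this assumes entries arrive as JSON Lines, and the sample record and `parse_entry` helper are hypothetical):

```python
import json

# A hypothetical entry shaped like the keys documented above.
sample_line = json.dumps({
    "id": "abc123",
    "source": "arxiv",
    "title": "An example paper",
    "authors": ["Jane Doe"],
    "text": "Full text of the document...",
    "url": "https://arxiv.org/abs/0000.00000",
    "date_published": "2023-01-01T00:00:00Z",
})

# The keys every entry is documented to contain.
REQUIRED_KEYS = {"id", "source", "title", "authors", "text", "url", "date_published"}

def parse_entry(line: str) -> dict:
    """Parse one JSON Lines record and check the documented keys are present."""
    entry = json.loads(line)
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise ValueError(f"entry {entry.get('id')!r} is missing keys: {sorted(missing)}")
    return entry

entry = parse_entry(sample_line)
print(entry["source"], "-", entry["title"])
```

Because additional source-specific keys may be present, the check above only requires the common keys rather than rejecting extras.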

## Usage
