diff --git "a/miri.jsonl" "b/miri.jsonl" deleted file mode 100644--- "a/miri.jsonl" +++ /dev/null @@ -1,492 +0,0 @@ -{"id": "f382a3b57a9c4cf3f288cfe5e9e596aa", "title": "The basic reasons I expect AGI ruin", "url": "https://intelligence.org/2023/04/21/the-basic-reasons-i-expect-agi-ruin/", "source": "miri", "source_type": "blog", "text": "I’ve been citing [AGI Ruin: A List of Lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) to explain why the situation with AI looks lethally dangerous to me. But that post is relatively long, and emphasizes specific open technical problems over “the basics”.\n\n\n\nHere are 10 things I’d focus on if I were giving “the basics” on why I’m so worried:[[1]](https://intelligence.org/feed/#fn1)\n\n\n\n\n\n---\n\n\n1. **General intelligence is very powerful**, and once we can build it at all, **STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly)**.\n\n\n\nWhen I say “general intelligence”, I’m usually thinking about “whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems”.\n\n\n\nIt’s possible that we should already be thinking of GPT-4 as “AGI” on some definitions, so to be clear about the threshold of generality I have in mind, I’ll specifically talk about “STEM-level AGI”, though I expect such systems to be good at non-STEM tasks too.\n\n\n\nHuman brains aren’t perfectly general, and not all narrow AI systems or animals are equally narrow. (E.g., AlphaZero is more general than AlphaGo.) But it sure is interesting that humans evolved cognitive abilities that unlock *all of these sciences at once*, with zero evolutionary fine-tuning of the brain aimed at equipping us for any of those sciences. 
Evolution just stumbled into a solution to *other* problems that happened to generalize to millions of wildly novel tasks.\n\n\n\nMore concretely:\n\n\n\n* AlphaGo is a very impressive reasoner, but its hypothesis space is limited to sequences of Go board states rather than sequences of states of the physical universe. Efficiently reasoning about the physical universe requires solving at least some problems that are different in kind from what AlphaGo solves.\n\t+ These problems might be solved by the STEM AGI’s programmer, and/or solved by the algorithm that finds the AGI in program-space; and some such problems may be solved by the AGI itself in the course of refining its thinking.[[2]](https://intelligence.org/feed/#fn2)\n* Some examples of abilities I expect humans to only automate once we’ve built STEM-level AGI (if ever):\n\t+ The ability to perform open-heart surgery with a high success rate, in a messy non-standardized ordinary surgical environment.\n\t+ The ability to match smart human performance in a specific hard science field, across all the scientific work humans do in that field.\n* In principle, I suspect you could build a narrow system that is good at those tasks while lacking the basic mental machinery required to do par-human reasoning about *all* the hard sciences. In practice, I very strongly expect humans to find ways to build general reasoners to perform those tasks, before we figure out how to build narrow reasoners that can do them. 
(For the same basic reason evolution stumbled on general intelligence so early in the history of human tech development.)[[3]](https://intelligence.org/feed/#fn3)\n\n\nWhen I say “general intelligence is very powerful”, a lot of what I mean is that *science* is very powerful, and that having all of the sciences at once is a lot more powerful than the sum of each science’s impact.[[4]](https://intelligence.org/feed/#fn4)\n\n\n\nAnother large piece of what I mean is that (STEM-level) general intelligence *is a very high-impact sort of thing to automate* because STEM-level AGI is likely to blow human intelligence out of the water immediately, or very soon after its invention.\n\n\n\n\n80,000 Hours gives the (non-representative) example of how AlphaGo and its successors compared to humanity:\n\n\n\n\n> \n> In the span of a year, AI had advanced from being too weak to win a single [Go] match against the worst human professionals, to being impossible for even the best players in the world to defeat.\n> \n> \n> \n> \n\n\nI expect general-purpose science AI to blow human science ability out of the water in a similar fashion.\n\n\n\nReasons for this include:\n\n\n\n* Empirically, humans aren’t near a cognitive ceiling, and even narrow AI often suddenly blows past the human reasoning ability range on the task it’s designed for. It would be weird if scientific reasoning [were an exception](https://intelligence.org/2017/10/20/alphago/).\n* Empirically, human brains are full of cognitive biases and inefficiencies. 
It’s doubly weird if scientific reasoning is an exception even though it’s *visibly* a mess with tons of blind spots, inefficiencies, and motivated cognitive processes, and innumerable historical examples of scientists and mathematicians taking decades to make technically simple advances.\n* Empirically, human brains are extremely bad at some of the most basic cognitive processes underlying STEM.\n\t+ E.g., consider the stark limits on human working memory and ability to do basic mental math. We can barely multiply smallish multi-digit numbers together in our head, when in principle a reasoner could hold thousands of complex mathematical structures in its working memory simultaneously and perform complex operations on them. Consider the sorts of technologies and scientific insights that might only ever occur to a reasoner if it can directly see (within its own head, in real time) the connections between hundreds or thousands of different formal structures.\n* Human brains underwent no direct optimization for STEM ability in our ancestral environment, beyond traits like “I can distinguish four objects in my visual field from five objects”.[[5]](https://intelligence.org/feed/#fn5)\n* In contrast, human engineers can deliberately optimize AGI systems’ brains for math, engineering, etc. capabilities; and human engineers have an enormous variety of tools available to build general intelligence that evolution lacked.[[6]](https://intelligence.org/feed/#fn6)\n* Software (unlike human intelligence) scales with more compute.\n* Current ML uses far more compute to *find* reasoners than to *run* reasoners. This is very likely to hold true for AGI as well.\n* We probably have more than enough compute already, if we knew how to train AGI systems in a remotely efficient way.\n\n\nAnd on a meta level: the hypothesis that STEM AGI can quickly outperform humans has a disjunctive character. 
There are [many different advantages](https://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/) that individually suffice for this, even if STEM AGI doesn’t start off with any other advantages. (E.g., speed, math ability, scalability with hardware, skill at optimizing hardware…)\n\n\n\nIn contrast, the claim that STEM AGI will [hit the narrow target](https://twitter.com/robbensinger/status/1623835453110775810) of “par-human scientific ability”, *and* stay at around that level for long enough to let humanity adapt and adjust, has a conjunctive character.[[7]](https://intelligence.org/feed/#fn7)\n\n\n\n \n\n\n\n2. A common misconception is that STEM-level AGI is dangerous because of something murky about “agents” or about self-awareness. Instead, I’d say that **the danger is inherent to the nature of action sequences** that push the world toward some sufficiently-hard-to-reach state.[[8]](https://intelligence.org/feed/#fn8)\n\n\n\nCall such sequences “plans”.\n\n\n\nIf you sampled a random plan from the space of all writable plans (weighted by length, in any extant formal language), and all we knew about the plan is that executing it would successfully achieve some superhumanly ambitious technological goal like “invent fast-running [whole-brain emulation](https://www.lesswrong.com/tag/whole-brain-emulation)“, then hitting a button to execute the plan would kill all humans, with very high probability. This is because:\n\n\n\n* “Invent fast WBE” is a hard enough task that succeeding in it usually requires gaining a lot of knowledge and cognitive and technological capabilities, enough to do lots of other dangerous things.\n* “Invent fast WBE” is likelier to succeed if the plan also includes steps that gather and control as many resources as possible, eliminate potential threats, etc. 
These are “[convergent instrumental strategies](https://arbital.com/p/instrumental_convergence/)”—strategies that are useful for pushing the world in a particular direction, almost regardless of which direction you’re pushing.\n* Human bodies and the food, water, air, sunlight, etc. we need to live are resources (“you are made of atoms the AI can use for something else”); and we’re also potential threats (e.g., we could build a rival superintelligent AI that executes a totally different plan).\n\n\nThe danger is in the cognitive work, not in some complicated or emergent feature of the “agent”; it’s in the task itself.\n\n\n\nIt isn’t that the abstract space of plans was built by evil human-hating minds; it’s that the [instrumental convergence](https://nickbostrom.com/superintelligentwill.pdf) thesis holds for the plans themselves. In full generality, plans that succeed in goals like “build WBE” tend to be dangerous.\n\n\n\nThis isn’t true of all plans that successfully push our world into a specific (sufficiently-hard-to-reach) physical state, but it’s true of the vast majority of them.\n\n\n\nThis is counter-intuitive because most of the impressive “plans” we encounter today are generated by humans, and it’s tempting to view strong plans through a human lens. But humans have hugely overlapping values, thinking styles, and capabilities; AI is drawn from new distributions.\n\n\n\n \n\n\n\n3. Current ML work is on track to produce things that are, in the ways that matter, **more like “randomly sampled plans” than like “the sorts of plans a civilization of human von Neumanns would produce”**. 
(Before we’re anywhere near being able to produce the latter sorts of things.)[[9]](https://intelligence.org/feed/#fn9)\n\n\n\nWe’re building “AI” in the sense of building powerful general search processes (and search processes for search processes), not building “AI” in the sense of building friendly ~humans but in silicon.\n\n\n\n(Note that “we’re going to build systems that are more like A Randomly Sampled Plan than like A Civilization of Human Von Neumanns” doesn’t imply that the plan we’ll get is the one we wanted! There are two separate problems: that current ML finds things-that-act-like-they’re-optimizing-the-task-you-wanted rather than things-that-actually-internally-optimize-the-task-you-wanted, and also that internally ~maximizing most superficially desirable ends will kill humanity.)\n\n\n\nNote that the same problem holds for systems trained to imitate humans, if those systems scale to being able to do things like “build whole-brain emulation”. “We’re training on something related to humans” doesn’t give us “we’re training things that are best thought of as humans plus noise”.\n\n\n\nIt’s not obvious to me that GPT-like systems can scale to capabilities like “build WBE”. But if they do, we face the [problem](https://twitter.com/ESYudkowsky/status/1628835920328929281) that most ways of successfully imitating humans don’t look like “build a human (that’s somehow superhumanly good at imitating the Internet)”. They look like “build a relatively complex and alien optimization process that is good at imitation tasks (and potentially at many other tasks)”.\n\n\n\nYou don’t need to be a human in order to model humans, any more than you need to be a cloud in order to model clouds well. The only reason this is more confusing in the case of “predict humans” than in the case of “predict weather patterns” is that humans and AI systems are both intelligences, so it’s easier to slide between “the AI models humans” and “the AI is basically a human”.\n\n\n\n \n\n\n\n4. 
The key differences between humans and “things that are more easily approximated as random search processes than as humans-plus-a-bit-of-noise” **lie in lots of complicated machinery in the human brain**.\n\n\n\n(Cf. [Detached Lever Fallacy](https://www.lesswrong.com/posts/zY4pic7cwQpa9dnyk/detached-lever-fallacy), [Niceness Is Unnatural](https://www.lesswrong.com/posts/krHDNc7cDvfEL8z9a/niceness-is-unnatural), and [Superintelligent AI Is Necessary For An Amazing Future, But Far From Sufficient](https://www.lesswrong.com/posts/HoQ5Rp7Gs6rebusNP/superintelligent-ai-is-necessary-for-an-amazing-future-but-1).)\n\n\n\nHumans are not blank slates in the relevant ways; just raising an AI like a human won’t solve the problem.\n\n\n\nThis doesn’t mean the problem is unsolvable; but it means that you either need to reproduce that internal machinery, in a lot of detail, in AI, or you need to build some new kind of machinery that’s safe for reasons other than the specific reasons humans are safe.\n\n\n\n(You need cognitive machinery that somehow samples from a much narrower space of plans that are still powerful enough to succeed in at least one task that saves the world, but are constrained in ways that make them far less dangerous than the larger space of plans. And you need a thing that *actually* implements internal machinery like that, as opposed to just being optimized to superficially behave as though it does in the narrow and unrepresentative environments it was in before starting to work on WBE. “Novel science work” means that pretty much everything you want from the AI is out-of-distribution.)\n\n\n\n \n\n\n\n5. 
**STEM-level AGI timelines don’t look that long** (e.g., probably not 50 or 150 years; could well be 5 years or 15).\n\n\n\nI won’t try to argue for this proposition, beyond pointing at the field’s recent [progress](https://twitter.com/leopoldasch/status/1638848850516672513) and echoing [Nate Soares’ comments](https://www.lesswrong.com/posts/cCMihiwtZx7kdcKgt/comments-on-carlsmith-s-is-power-seeking-ai-an-existential#Timelines) from early 2021:\n\n\n\n\n> \n> […] I observe that, 15 years ago, everyone was saying AGI is far off because of what it couldn’t do — basic image recognition, go, starcraft, winograd schemas, simple programming tasks. But basically all that has fallen. The gap between us and AGI is made mostly of intangibles. (Computer programming that is Actually Good? Theorem proving? Sure, but on my model, “good” versions of those are a hair’s breadth away from full AGI already. And the fact that I need to clarify that “bad” versions don’t count, speaks to my point that the only barriers people can name right now are intangibles.) That’s a very uncomfortable place to be!\n> \n> \n> \n> […] I suspect that I’m in more-or-less the “penultimate epistemic state” on AGI timelines: I don’t know of a project that seems like they’re right on the brink; that would put me in the “final epistemic state” of thinking AGI is imminent. But I’m in the second-to-last epistemic state, where I wouldn’t feel all that shocked to learn that some group has reached the brink. Maybe I won’t get that call for 10 years! Or 20! But it could also be 2, and I wouldn’t get to be indignant with reality. I wouldn’t get to say “but all the following things should have happened first, before I made that observation!”. Those things have happened. I have made those observations. 
[…]\n> \n> \n> \n> \n\n\nI think timing tech is [very difficult](https://intelligence.org/2017/10/13/fire-alarm/) (and plausibly ~impossible when the tech isn’t pretty imminent), and I think reasonable people can disagree a lot about timelines.\n\n\n\nI also think converging on timelines is not very crucial, since if AGI is 50 years away I would say it’s still the largest single risk we face, and the bare minimum alignment work required for surviving that transition could easily take longer than that.\n\n\n\nAlso, “STEM AGI when?” is the kind of argument that requires hashing out people’s predictions about how we get to STEM AGI, which is a bad thing to debate publicly insofar as improving people’s models of pathways can further shorten timelines.\n\n\n\nI mention timelines anyway because they are in fact a major reason I’m pessimistic about our prospects; if I learned tomorrow that AGI were 200 years away, I’d be outright optimistic about things going well.\n\n\n\n \n\n\n\n6. **We don’t currently know how to do alignment**, we don’t seem to have a much better idea now than we did 10 years ago, and there are many large novel visible difficulties. (See [AGI Ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) and [Capabilities Generalization, and the Sharp Left Turn](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization).)\n\n\n\nOn a more basic level, quoting [Nate Soares](https://intelligence.org/ensuring): “Why do I think that AI alignment looks fairly difficult? The main reason is just that this has been my experience from actually working on these problems.”\n\n\n\n \n\n\n\n7. We should be starting with a pessimistic prior about **achieving reliably good behavior in any complex safety-critical software, particularly if the software is novel**. 
Even more so if the thing we need to make robust is structured like undocumented spaghetti code, and more so still if the field is highly competitive and you need to achieve some robustness property *while moving faster* than a large pool of less-safety-conscious people who are racing toward the precipice.\n\n\n\nThe [default](https://www.lesswrong.com/posts/Ke2ogqSEhL2KCJCNx/security-mindset-lessons-from-20-years-of-software-security) assumption is that complex software goes wrong in dozens of different ways you didn’t expect. Reality ends up being thorny and inconvenient in many of the places where your models were absent or fuzzy. Surprises are abundant, and some surprises can be good, but this is empirically a lot rarer than unpleasant surprises in software development hell.\n\n\n\nThe future is hard to predict, but plans systematically take longer and run into more snags than humans naively expect, as opposed to plans systematically going surprisingly smoothly and deadlines being systematically hit ahead of schedule.\n\n\n\nThe history of computer security and of safety-critical software systems is almost invariably one of robust software lagging far, far behind non-robust versions of the same software. Achieving any robustness property in complex software that will be deployed in the real world, with all its messiness and adversarial optimization, is very difficult and usually fails.\n\n\n\nIn many ways I think the foundational discussion of AGI risk is [Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) and [Security Mindset and the Logistic Success Curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/), and the main body of the text doesn’t even mention AGI. 
Adding in the specifics of AGI and smarter-than-human AI takes the risk from “dire” to “seemingly overwhelming”, but adding in those specifics is not required to be massively concerned if you think getting this software right matters for our future.\n\n\n\n \n\n\n\n8. **Neither ML nor the larger world is currently taking this seriously**, as of April 2023.\n\n\n\nThis is obviously something we can change. But until it’s changed, things will continue to look very bad.\n\n\n\nAdditionally, most of the people who are taking AI risk somewhat seriously are, to an important extent, not willing to worry about things until after they’ve been experimentally proven to be dangerous. Which is a lethal sort of methodology to adopt when you’re working with smarter-than-human AI.\n\n\n\nMy basic picture of *why* the world currently isn’t responding appropriately is the one in [Four mindset disagreements behind existential risk disagreements in ML](https://www.lesswrong.com/posts/84BJopKvQ8qYGHY6b/four-mindset-disagreements-behind-existential-risk), [The inordinately slow spread of good AGI conversations in ML](https://www.lesswrong.com/posts/Rkxj7TFxhbm59AKJh/the-inordinately-slow-spread-of-good-agi-conversations-in-ml), and [*Inadequate Equilibria*](https://equilibriabook.com/toc).[[10]](https://intelligence.org/feed/#fn10)\n\n\n\n \n\n\n\n9. As noted above, **current ML is very opaque**, and **it mostly lets you intervene on behavioral proxies** for what we want, rather than letting us directly design desirable features.\n\n\n\nML as it exists today also requires that data is readily available and safe to provide. E.g., we can’t robustly train the AGI on “don’t kill people” because we can’t provide real examples of it killing people to train against the behavior we don’t want; we can only give flawed proxies and work via indirection.\n\n\n\n \n\n\n\n10. 
**There are lots of specific abilities** that seem like they ought to be possible for the kind of civilization that can safely deploy smarter-than-human optimization, but that are currently far out of reach, with no obvious path forward for achieving them with opaque deep nets even if we had unlimited time to work on some relatively concrete set of research directions.\n\n\n\n(Unlimited time suffices if we can set a more abstract/indirect research direction, like “just think about the problem for a long time until you find some solution”. There are presumably paths forward; we just don’t know what they are today, which puts us in a worse situation.)\n\n\n\nE.g., we don’t know how to go about inspecting a nanotech-developing AI system’s brain to verify that it’s only thinking about a specific room, that it’s internally representing the intended goal, that it’s directing its optimization at that representation, that it internally has a particular planning horizon and a variety of capability bounds, that it’s unable to think about optimizers (or specifically about humans), or that it otherwise has the right topics internally whitelisted or blacklisted.\n\n\n\n \n\n\n\nIndividually, it seems to me that each of these difficulties can be addressed. In combination, they seem to me to put us in a very dark situation.\n\n\n\n \n\n\n\n\n\n---\n\n\n \n\n\n\nOne common response I hear to points like the above is:\n\n\n\n\n> \n> The future is generically hard to predict, so it’s just not possible to be rationally confident that things will go well or poorly. 
Even if you look at dozens of different arguments and framings and the ones that hold up to scrutiny nearly all seem to point in the same direction, it’s always possible that you’re making some invisible error of reasoning that causes correlated failures in many places at once.\n> \n> \n> \n> \n\n\nI’m sympathetic to this because I agree that the future is hard to predict.\n\n\n\nI’m not totally confident things will go poorly; if I were, I wouldn’t be trying to solve the problem! I think things are looking extremely dire, but not hopeless.\n\n\n\nThat said, some people think that even “extremely dire” is an impossible belief state to be in, in advance of an AI apocalypse actually occurring. I disagree here, for two basic reasons:\n\n\n\n \n\n\n\na. There are many details we can get into, but on a core level **I don’t think the risk is particularly complicated or hard to reason about**. The core concern fits into a [tweet](https://twitter.com/robbensinger/status/1644006731792646145):\n\n\n\n\n> \n> STEM AI is likely to vastly exceed human STEM abilities, conferring a decisive advantage. We aren’t on track to knowing how to aim STEM AI at intended goals, and STEM AIs pursuing unintended goals tend to have instrumental subgoals like “control all resources”.\n> \n> \n> \n> \n\n\nZvi Mowshowitz [puts the core concern](https://thezvi.substack.com/p/response-to-tyler-cowens-existential) in even more basic terms:\n\n\n\n\n> \n> I also notice a kind of presumption that things in most scenarios will work out and that doom is dependent on particular ‘distant possibilities,’ that often have many logical dependencies or require a lot of things to individually go as predicted. 
Whereas I would say that those possibilities are not so distant or unlikely, but more importantly that the result is robust, that once the intelligence and optimization pressure that matters is no longer human that most of the outcomes are existentially bad by my values and that one can reject or ignore many or most of the detail assumptions and still see this.\n> \n> \n> \n> \n\n\nThe details do matter for evaluating the exact risk level, but this isn’t the sort of topic where it seems fundamentally impossible for any human to reach a good understanding of the core difficulties and whether we’re handling them.\n\n\n\n \n\n\n\nb. Relatedly, as Nate Soares has argued, **AI disaster scenarios are** [**disjunctive**](https://www.lesswrong.com/posts/ervaGwJ2ZcwqfCcLx/agi-ruin-scenarios-are-likely-and-disjunctive). There are many bad outcomes for every good outcome, and many paths leading to disaster for every path leading to utopia.\n\n\n\nQuoting [Eliezer Yudkowsky](https://twitter.com/ESYudkowsky/status/1644045728983969816):\n\n\n\n\n> \n> You don’t get to adopt a prior where you have a 50-50 chance of winning the lottery “because either you win or you don’t”; the question is not whether we’re uncertain, but whether someone’s allowed to milk their uncertainty to expect good outcomes.\n> \n> \n> \n> \n\n\nQuoting [Jack Rabuck](https://twitter.com/JackRabuck/status/1644762977504227329):\n\n\n\n\n> \n> I listened to the whole 4 hour Lunar Society interview with @ESYudkowsky \n> (hosted by @dwarkesh\\_sp) that was mostly about AI alignment and I think I identified a point of confusion/disagreement that is pretty common in the area and is rarely fleshed out:\n> \n> \n> \n> Dwarkesh repeatedly referred to the conclusion that AI is likely to kill humanity as “wild.”\n> \n> \n> \n> Wild seems to me to pack two concepts together, ‘bad’ and ‘complex.’ And when I say complex, I mean in the sense of the Fermi equation where you have an end point (dead humanity) that relies on a 
series of links in a chain and if you break any of those links, the end state doesn’t occur.\n> \n> \n> \n> It seems to me that Eliezer believes this end state is not wild (at least not in the complex sense), but very simple. He thinks many (most) paths converge to this end state.\n> \n> \n> \n> That leads to a misunderstanding of sorts. Dwarkesh pushes Eliezer to give some predictions based on the line of reasoning that he uses to predict that end point, but since the end point is very simple and is a convergence, Eliezer correctly says that being able to reason to that end point does not give any predictive power about the particular path that will be taken in this universe to reach that end point.\n> \n> \n> \n> Dwarkesh is thinking about the end of humanity as a causal chain with many links and if any of them are broken it means humans will continue on, while Eliezer thinks of the continuity of humanity (in the face of AGI) as a causal chain with many links and if any of them are broken it means humanity ends. Or perhaps more discretely, Eliezer thinks there are a few very hard things which humanity could do to continue in the face of AI, and absent one of those occurring, the end is a matter of when, not if, and the when is much closer than most other people think.\n> \n> \n> \n> Anyway, I think each of Dwarkesh and Eliezer believe the other one falls on the side of extraordinary claims require extraordinary evidence – Dwarkesh thinking the end of humanity is “wild” and Eliezer believing humanity’s viability in the face of AGI is “wild” (though not in the negative sense).\n> \n> \n> \n> \n\n\nI don’t consider “AGI ruin is disjunctive” a knock-down argument for high p(doom) on its own. NASA has a high success rate for rocket launches even though success requires many things to go right simultaneously. 
Humanity is capable of achieving conjunctive outcomes, to some degree; but I think this framing makes it clearer why it’s *possible* to rationally arrive at a high p(doom), at all, when enough evidence points in that direction.[[11]](https://intelligence.org/feed/#fn11)\n\n\n\n\n\n\n---\n\n\n1. Eliezer Yudkowsky’s [So Far: Unfriendly AI Edition](https://www.econlib.org/archives/2016/03/so_far_unfriend.html) and Nate Soares’ [Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome](https://intelligence.org/ensuring) are two other good (though old) introductions to what I’d consider “the basics”.\n\n\n\nTo state the obvious: this post consists of various claims that increase my probability on AI causing an existential catastrophe, but not all the claims have to be true in order for AI to have a high probability of causing such a catastrophe.\n\n\n\nAlso, I wrote this post to summarize my own top reasons for being worried, not to try to make a maximally compelling or digestible case for others. I don’t expect others to be similarly confident based on such a quick overview, unless perhaps you’ve read other sources on AI risk in the past. (Including more optimistic ones, since it’s harder to be confident when you’ve only heard from one side of a disagreement. I’ve [written in the past](https://www.lesswrong.com/posts/vT4tsttHgYJBoKi4n/some-abstract-non-technical-reasons-to-be-non-maximally) about some of the things that give *me* small glimmers of hope, but people who are overall far more hopeful will have very different reasons for hope, based on very different heuristics and background models.) [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref1)\n2. E.g., the physical world is too complex to simulate in full detail, unlike a Go board state. 
An effective general intelligence needs to be able to model the world at many different levels of granularity, and strategically choose which levels are relevant to think about, as well as which specific pieces/aspects/properties of the world *at* those levels are relevant to think about.\n\n\n\nMore generally, being a general intelligence requires an enormous amount of laserlike focus and strategicness when it comes to which thoughts you do or don’t think. A large portion of your compute needs to be relentlessly funneled into *exactly* the tiny subset of questions about the physical world that bear on the question you’re trying to answer or the problem you’re trying to solve. If you fail to be relentlessly targeted and efficient in “aiming” your cognition at the most useful-to-you things, you can easily spend a lifetime getting sidetracked by minutiae, directing your attention at the wrong considerations, etc.\n\n\n\nAnd given the variety of *kinds* of problems you need to solve in order to navigate the physical world well, do science, etc., the *heuristics you use to funnel your compute to the exact right things* need to themselves be very general, rather than all being case-specific.\n\n\n\n(Whereas we can more readily imagine that many of the heuristics AlphaGo uses to avoid thinking about the wrong aspects of the game state (or getting otherwise sidetracked) *are Go-specific heuristics*.) [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref2)\n3. Of course, if your brain has all the basic mental machinery required to do other sciences, that doesn’t mean that you have the *knowledge* required to actually do well in those sciences. A STEM-level artificial general intelligence could lack physics ability for the same reason many smart humans can’t solve physics problems. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref3)\n4. 
E.g., because different sciences can synergize, and because you can invent new scientific fields and subfields, and more generally chain one novel insight into dozens of other new insights that critically depended on the first insight. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref4)\n5. More generally, the sciences (and many other aspects of human life, like written language) are a very recent development on evolutionary timescales. So evolution has had very little time to refine and improve on our reasoning ability in many of the ways that matter. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref5)\n6. “Human engineers have an enormous variety of tools available that evolution lacked” is often noted as a reason to think that we may be able to align AGI to our goals, even though evolution failed to align humans to its “goal”. It’s additionally a reason to expect AGI to have greater cognitive ability, if engineers try to achieve great cognitive ability. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref6)\n7. And my understanding is that, e.g., Paul Christiano’s soft-takeoff scenarios don’t involve there being much time between par-human scientific ability and superintelligence. Rather, he’s betting that we have a bunch of decades between GPT-4 and par-human STEM AGI. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref7)\n8. I’ll classify thoughts and text outputs as “actions” too, not just physical movements. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref8)\n9. Obviously, *neither* is a particularly good approximation for ML systems. 
The point is that our optimism about plans in real life generally comes from the fact that they’re weak, and/or it comes from the fact that the plan generators are human brains with the full suite of human psychological universals. ML systems don’t possess those human universals, and won’t stay weak indefinitely. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref9)\n10. Quoting [Four mindset disagreements behind existential risk disagreements in ML](https://www.lesswrong.com/posts/84BJopKvQ8qYGHY6b/four-mindset-disagreements-behind-existential-risk):\n\n\n\n\n> \n> \n> \t* People are taking the risks unseriously because they feel weird and abstract.\n> \t* When they do think about the risks, they anchor to what’s familiar and known, dismissing other considerations because they feel “unconservative” from a forecasting perspective.\n> \t* Meanwhile, social mimesis and the bystander effect make the field sluggish at pivoting in response to new arguments and smoke under the door.\n> \n\n\nQuoting [The inordinately slow spread of good AGI conversations in ML](https://www.lesswrong.com/posts/Rkxj7TFxhbm59AKJh/the-inordinately-slow-spread-of-good-agi-conversations-in-ml):\n\n\n\n\n> \n> Info about AGI propagates too slowly through the field, because when one ML person updates, they usually don’t loudly share their update with all their peers. This is because:\n> \n> \n> \n> 1. AGI sounds weird, and they don’t want to sound like a weird outsider.\n> \n> \n> \n> 2. Their *peers* and the *community as a whole* might perceive this information as an attack on the field, an attempt to lower its status, etc.\n> \n> \n> \n> 3. 
Tech forecasting, differential technological development, long-term steering, [exploratory](https://en.wikipedia.org/wiki/Exploratory_engineering) [engineering](https://intelligence.org/files/ExploratoryEngineeringAI.pdf), ‘not doing certain research because of its long-term social impact’, prosocial research closure, etc. are very novel and foreign to most scientists.\n> \n> \n> \n> EAs exert effort to try to dig up precedents like [Asilomar](https://intelligence.org/files/TheAsilomarConference.pdf) partly *because* Asilomar is so unusual compared to the norms and practices of the vast majority of science. Scientists generally don’t think in these terms at all, especially in *advance* of any major disasters their field causes.\n> \n> \n> \n> And the scientists who do find any of this intuitive often feel vaguely nervous, alone, and adrift when they talk about it. On a gut level, they see that they have no institutional home and no super-widely-shared ‘this is a virtuous and respectable way to do science’ narrative.\n> \n> \n> \n> Normal [science](https://www.lesswrong.com/s/fxynfGCSHpY4FmBZy) is not Bayesian, is not agentic, is not ‘a place where you’re supposed to do arbitrary things just because you heard an argument that makes sense’. 
Normal science is a specific collection of scripts, customs, and established protocols.\n> \n> \n> \n> In trying to move the field toward ‘doing the thing that just makes sense’, even though it’s about a weird topic (AGI), and even though the prescribed response is also weird (closure, differential tech development, etc.), and even though the arguments in support are weird (where’s the experimental data??), we’re inherently fighting our way upstream, against the current.\n> \n> \n> \n> Success is possible, but way, way more [dakka](https://thezvi.wordpress.com/2017/12/02/more-dakka/) is needed, and IMO it’s easy to understand why we haven’t succeeded more.\n> \n> \n> \n> This is also part of why I’ve increasingly updated toward a strategy of “let’s all be way too blunt and candid about our AGI-related thoughts”.\n> \n> \n> \n> The core problem we face isn’t ‘people informedly disagree’, ‘there’s a values conflict’, ‘we haven’t written up the arguments’, ‘nobody has seen the arguments’, or even ‘self-deception’ or ‘self-serving bias’.\n> \n> \n> \n> The core problem we face is ‘not enough information is transmitting fast enough, because people feel nervous about whether their private thoughts are in the Overton window’.\n> \n> \n> \n> \n\n\nOn the more basic level, [*Inadequate Equilibria*](https://equilibriabook.com/toc) paints a picture of the world’s baseline civilizational competence that I think makes it less mysterious why we could screw up this badly on a novel problem that our scientific and political institutions weren’t designed to address. *Inadequate Equilibria* also talks about the nuts and bolts of Modest Epistemology, which I think is a key part of the failure story. [![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref10)\n11. 
Quoting a recent conversation between Aryeh Englander and Eliezer Yudkowsky:\n\n\n\n\n> \n> **Aryeh:** […] Yet I still have a very hard time understanding the arguments that would lead to such a high-confidence prediction. Like, I think I understand the main arguments for AI existential risk, but I just don’t understand why some people seem so *sure* of the risks. […]\n> \n> \n> \n> **Eliezer:** I think the core thing is the sense that you cannot in this case milk uncertainty for a chance of good outcomes; to get to a good outcome you’d have to actually know where you’re steering, like trying to buy a winning lottery ticket or launching a Moon rocket. Once you realize that uncertainty doesn’t move estimates back toward “50-50, either we live happily ever after or not”, you realize that “people in the EA forums cannot tell whether Eliezer or Paul is right” is not a factor that moves us toward 1:1 good:bad but rather another sign of doom; surviving worlds don’t look confused like that and are able to make faster progress.\n> \n> \n> \n> Not as a fully valid argument from which one cannot update further, but as an intuition pump: the more all arguments about the future seem fallible, the more you should expect the future Solar System to have a randomized configuration from your own perspective. Almost zero of those have humans in them. It takes confidence about some argument constraining the future to get to more than that.\n> \n> \n> \n> **Aryeh:** when you talk about uncertainty here do you mean uncertain factors *within* your basic world model, or are you also counting model uncertainty? I can see how within your world model extra sources of uncertainty don’t point to lower risk estimates. But my general question I think is more about model uncertainty: how sure can you really be that your world model and reference classes and framework for thinking about this is the right one vs e.g., Robin or Paul or Rohin or lots of others? 
And in terms of model uncertainty it looks like most of these other approaches imply much lower risk estimates, so adding in that kind of model uncertainty should presumably (I think) point to overall lower risk estimates.\n> \n> \n> \n> **Eliezer:** Aryeh, if you’ve got a specific theory that says your rocket design is going to explode, and then you’re also very unsure of how rockets work really, what probability should you assess of your rocket landing safely on target?\n> \n> \n> \n> **Aryeh:** how about if you have a specific theory that says you should be comparing what you’re doing to a rocket aiming for the moon but it’ll explode, and then a bunch of other theories saying it won’t explode, plus a bunch of theories saying you shouldn’t be comparing what you’re doing to a rocket in the first place? My understanding of many alignment proposals is that they think we do understand “rockets” sufficiently so that we can aim them, but they disagree on various specifics that lead you to have such high confidence in an explosion. And then there are others like Robin Hanson who use mostly outside-type arguments to argue that you’re framing the issues incorrectly, and we shouldn’t be comparing this to “rockets” at all because that’s the wrong reference class to use. So yes, accounting for some types of model uncertainty won’t reduce our risk assessments and may even raise them further, but other types of model uncertainty – including many of the actual alternative models / framings at least as I understand them – should presumably decrease our risk assessment.\n> \n> \n> \n> **Eliezer:** What if people are trying to build a flying machine for the first time, and there’s a whole host of them with wildly different theories about why it ought to fly easily, and you think there’s basic obstacles to stable flight that they’re not getting? 
Could you force the machine to fly despite all obstacles by recruiting more and more optimists to have different theories, each of whom would have some chance of being right?\n> \n> \n> \n> **Aryeh:** right, my point is that in order to have near certainty of not flying you need to be very very sure that your model is right and theirs isn’t. Or in other words, you need to have very low model uncertainty. But once you add in model uncertainty where you consider that maybe those other optimists’ models could be right, then your risk estimates will go down. Of course you can’t arbitrarily add in random optimistic models from random people – it needs to be weighted in some way. My confusion here is that you seem to be very, very certain that your model is the right one, complete with all its pieces and sub-arguments and the particular reference classes you use, and I just don’t quite understand why.\n> \n> \n> \n> **Eliezer:** There’s a big difference between “sure your model is the right one” and the whole thing with people wandering over with their own models and somebody else going, “I can’t tell the difference between you and them, how can you possibly be so sure they’re not right?”\n> \n> \n> \n> The intuition I’m trying to gesture at here is that you can’t milk success out of uncertainty, even by having a bunch of other people wander over with optimistic models. It *shouldn’t* be able to work in real life. If your epistemology says that you can generate free success probability that way, you must be doing something wrong.\n> \n> \n> \n> Or maybe another way to put it: When you run into a very difficult problem that you can see is very difficult, but inevitably a bunch of people with less clear sight wander over and are optimistic about it because they don’t see the problems, for you to update on the optimists would be to update on something that happens inevitably. 
So to adopt this policy is just to make it impossible for yourself to ever perceive when things have gotten really bad.\n> \n> \n> \n> **Aryeh:** not sure I fully understand what you’re saying. It looks to me like to some degree what you’re saying boils down to your views on modest epistemology – i.e., basically just go with your own views and don’t defer to anybody else. It sounds like you’re saying not only don’t defer, but don’t even really incorporate any significant model uncertainty based on other people’s views. Am I understanding this at all correctly or am I totally off here?\n> \n> \n> \n> **Eliezer:** My epistemology is such that it’s possible in principle for me to notice that I’m doomed, in worlds which look very doomed, despite the fact that all such possible worlds no matter how doomed they actually are, always contain a chorus of people claiming we’re not doomed.\n> \n> \n> \n> \n\n\n(See [*Inadequate Equilibria*](https://equilibriabook.com/toc) for a detailed discussion of Modest Epistemology, deference, and “[outside views](https://www.lesswrong.com/posts/BcYfsi7vmhDvzQGiF/taboo-outside-view)”, and [Strong Evidence Is Common](https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common) for the basic first-order case that people can often reach confident conclusions about things.) 
[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref11)\n\n\n\nThe post [The basic reasons I expect AGI ruin](https://intelligence.org/2023/04/21/the-basic-reasons-i-expect-agi-ruin/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-04-21T21:15:47Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "cb0d36535678049455601a4c52d09978", "title": "Misgeneralization as a misnomer", "url": "https://intelligence.org/2023/04/10/misgeneralization-as-a-misnomer/", "source": "miri", "source_type": "blog", "text": "Here’s two different ways an AI can turn out [unfriendly](https://www.lesswrong.com/posts/BSee6LXg4adtrndwy/what-does-it-mean-for-an-agi-to-be-safe):\n\n\n1. You somehow build an AI that cares about “making people happy”. In training, it tells people jokes and buys people flowers and offers people an ear when they need one. In deployment (and once it’s more capable), it forcibly puts each human in a separate individual heavily-defended cell, and pumps them full of opiates.\n2. You build an AI that’s good at making people happy. In training, it tells people jokes and buys people flowers and offers people an ear when they need one. In deployment (and once it’s more capable), it turns out that whatever was causing that “happiness”-promoting behavior was a balance of a variety of other goals (such as basic desires for energy and memory), and it spends most of the universe on some combination of that other stuff that doesn’t involve much happiness.\n\n\n(To state the obvious: please don’t try to get your AIs to pursue “happiness”; you want something more like [CEV](https://arbital.com/p/cev/) in the long run, and in the short run I strongly recommend [aiming lower, at a pivotal act](https://arbital.com/p/pivotal/) .)\n\n\n In both cases, the AI behaves (during training) in a way that looks a lot like trying to make people happy. 
Then the AI described in (1) is unfriendly because it was optimizing the wrong concept of “happiness”, one that lined up with yours when the AI was weak, but that diverges in various [edge-cases](https://arbital.com/p/edge_instantiation/) that matter when the AI is strong. By contrast, the AI described in (2) was never even really trying to pursue happiness; it had a mixture of goals that merely correlated with the training objective, and that balanced out right around where you wanted them to balance out in training, but deployment (and the corresponding capabilities-increases) threw the balance off.\n\n\nNote that this list of “ways things can go wrong when the AI looked like it was optimizing happiness during training” is not exhaustive! (For instance, consider an AI that cares about something else entirely, and knows you’ll shut it down if it doesn’t look like it’s optimizing for happiness. Or an AI whose goals change heavily as it reflects and self-modifies.)\n\n\n \n\n\n(This list isn’t even really disjoint! You could get both at once, resulting in, e.g., an AI that spends most of the universe’s resources on acquiring memory and energy for unrelated tasks, and a small fraction of the universe on doped-up human-esque shells.)\n\n\nThe solutions to these two problems are pretty different. 
To resolve the problem sketched in (1), you have to figure out how to get an instance of the AI’s concept (“happiness”) to match the concept you hoped to transmit, even in the edge-cases and extremes that it will have access to in deployment (when it needs to be powerful enough to pull off some pivotal act that you yourself cannot pull off, and thus capable enough to access extreme edge-case states that you yourself cannot).\n\n\nTo resolve the problem sketched in (2), you have to figure out how to get the AI to care about one concept in particular, rather than a complicated mess that happens to balance precariously on your target (“happiness”) in training.\n\n\nI note this distinction because it seems to me that various people around these parts are either unduly lumping these issues together, or are failing to notice one of them. For example, they seem to me to be mixed together in “[The Alignment Problem from a Deep Learning Perspective](https://arxiv.org/pdf/2209.00626.pdf) ” under the heading of “goal misgeneralization”.\n\n\n(I think “misgeneralization” is a misleading term in both cases, but it’s an even worse fit for (2) than (1). A primate isn’t “misgeneralizing” its concept of “inclusive genetic fitness” when it gets smarter and invents condoms; it didn’t even *really have* that concept to misgeneralize, and what shreds of the concept it did have weren’t what the primate was mentally optimizing for.)\n\n\n(In other words: it’s not that primates were optimizing for fitness in the environment, and then “misgeneralized” after they found themselves in a broader environment full of junk food and condoms. The “aligned” behavior “in training” broke in the broader context of “deployment”, but not because the primates found some weird way to extend an existing “inclusive genetic fitness” concept to a wider domain. 
Their optimization just wasn’t connected to an internal representation of “inclusive genetic fitness” in the first place.)\n\n\n\n\n---\n\n\nIn mixing these issues together, I worry that it becomes much easier to erroneously dismiss the set. For instance, I have many times encountered people who think that the issue from (1) is a “skill issue”: surely, if the AI were only smarter, it would know what we mean by “make people happy”. (Doubly so if the first transformative AGIs are based on language models! Why, GPT-4 today could explain to you why pumping isolated humans full of opioids shouldn’t count as producing “happiness”.)\n\n\nAnd: yep, an AI that’s capable enough to be transformative is pretty likely to be capable enough to figure out what the humans mean by “happiness”, and that doping literally everybody probably doesn’t count. But the issue is, [as](https://www.lesswrong.com/s/SXurf2mWFw8LX2mkG) [always](https://www.lesswrong.com/s/SXurf2mWFw8LX2mkG/p/CcBe9aCKDgT5FSoty), making the AI *care*. 
The trouble isn’t in making it have *some* understanding of what the humans mean by “happiness” somewhere inside it;[[1]](https://intelligence.org/feed/#fn1) the trouble is making the *stuff the AI pursues* be *that concept*.\n\n\nLike, it’s possible in principle to reward the AI when it makes people happy, and to separately teach something to observe the world and figure out what humans mean by “happiness”, and to have the trained-in optimization-target concept end up wildly different (in the edge-cases) from the AI’s explicit understanding of what humans meant by “happiness”.\n\n\nYes, this is possible even though you used the word “happy” in both cases.\n\n\n(And this is assuming away the issues described in (2), that the AI probably doesn’t by-default even end up with one clean alt-happy concept that it’s pursuing in place of “happiness”, as opposed to [a thousand shards of desire](https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter) or whatever.)\n\n\nAnd I do worry a bit that if we’re not clear about the distinction between all these issues, people will look at the whole cluster and say “eh, it’s a skill issue; surely as the AI gets better at understanding our human concepts, this will become less of a problem”, or whatever.\n\n\n(As seems to me to be [already](https://twitter.com/MilitantHobo/status/1633040360275341312) happening as people correctly realize that LLMs will probably have a decent grasp on various human concepts.)\n\n\n\n\n\n---\n\n\n1. Or whatever you’re optimizing. Which, again, should not be “happiness”; I’m just using that as an example here.\n\n\nAlso, note that the thing you actually want an AI optimizing for in the long term—something like “CEV”—is legitimately harder to get the AI to have any representation of at all. 
There’s legitimately significantly less writing about object-level descriptions of a [eutopian](https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence) universe, than of happy people, and this is related to the eutopia being significantly harder to visualize.\n\n\nBut, again, don’t shoot for the eutopia on your first try! End the acute risk period and then buy time for some reflection instead.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref1)\n\n\n\nThe post [Misgeneralization as a misnomer](https://intelligence.org/2023/04/10/misgeneralization-as-a-misnomer/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-04-10T22:55:13Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "569861ff786547a1a3d6cff3a3088a08", "title": "Pausing AI Developments Isn’t Enough. We Need to Shut it All Down", "url": "https://intelligence.org/2023/04/07/pausing-ai-developments-isnt-enough-we-need-to-shut-it-all-down/", "source": "miri", "source_type": "blog", "text": "(*Published in*[*TIME*](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) *on March 29.*)\n \n\n\nAn [open letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”\n\n\nThis 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.\n\n\nI refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.\n\n\nThe key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. 
Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.\n\n\nMany researchers steeped in these [issues](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), including myself, [expect](https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results) that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.\n\n\nWithout that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that *could in principle* be imbued into an AI but *we are not ready* and *do not currently know how.*\n\n\nAbsent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”\n\n\nThe likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century,” and “*Australopithecus* trying to fight *Homo sapiens*”.\n\n\nTo visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. 
Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.\n\n\nIf somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.\n\n\n\nThere’s no *proposed plan* for how we could do any such thing and survive. OpenAI’s openly declared [intention](https://openai.com/blog/our-approach-to-alignment-research) is to make some future AI do our AI alignment homework. Just hearing that *this is the plan* ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.\n\n\nAn aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. 
With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.\n\n\nThe rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably *correct*; I *agree* that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we *do not actually know.*\n\n\nIf that’s our state of ignorance for GPT-4, and GPT-5 is the same size of giant capability step as from GPT-3 to GPT-4, I think we’ll no longer be able to justifiably say “probably not self-aware” if we let people make GPT-5s. It’ll just be “I don’t know; nobody knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.\n\n\n \n\n\nOn Feb. 7, Satya Nadella, CEO of Microsoft, [publicly gloated](https://www.theverge.com/23589994/microsoft-ceo-satya-nadella-bing-chatgpt-google-search-ai) that the new Bing would make Google “come out and show that they can dance.” “I want people to know that we made them dance,” he said.\n\n\nThis is not how the CEO of Microsoft talks in a sane world. 
It shows an overwhelming gap between how seriously we are taking the problem, and how seriously we needed to take the problem starting 30 years ago.\n\n\nWe are not going to bridge that gap in six months.\n\n\nIt took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving *safety* of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.\n\n\nTrying to get *anything* right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.\n\n\nWe are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.\n\n\nMany researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. 
This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.\n\n\n \n\n\nSome of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.”\n\n\nHearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it.\n\n\nOn March 16, my partner sent me this email. (She later gave me permission to excerpt it here.)\n\n\n“Nina lost a tooth! In the usual way that children do, not out of carelessness! Seeing GPT4 blow away those standardized tests on the same day that Nina hit a childhood milestone brought an emotional surge that swept me off my feet for a minute. It’s all going too fast. I worry that sharing this will heighten your own grief, but I’d rather be known to you than for each of us to suffer alone.”\n\n\nWhen the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.\n\n\nIf there was a plan for Earth to survive, if only we passed a six-month moratorium, I would back that plan. There isn’t any such plan.\n\n\nHere’s what would actually need to be done:\n\n\nThe moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. 
is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.\n\n\nShut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.\n\n\nFrame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. 
Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.\n\n\nThat’s the kind of policy change that would cause my partner and me to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.\n\n\nShut it all down.\n\n\nWe are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.\n\n\nShut it down. 
\n\n\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Addendum, March 30:**\n\n\nThe great political writers who also aspired to be good human beings, from George Orwell on the left to Robert Heinlein on the right, taught me to acknowledge in my writing that politics rests on force.\n\n\nGeorge Orwell considered it a [tactic of totalitarianism](https://www.lesswrong.com/posts/Lz64L3yJEtYGkzMzu/rationality-and-the-english-language), that bullet-riddled bodies and mass graves were often described in vague euphemisms; that in this way brutal policies gained public support without their prices being justified, by hiding those prices.\n\n\nRobert Heinlein thought it beneath a citizen’s dignity to pretend that, if they bore no gun, they were morally superior to the police officers and soldiers who bore guns to defend their law and their peace; Heinlein, both metaphorically and literally, thought that if you eat meat—and he was not a vegetarian—you ought to be willing to visit a farm and try personally slaughtering a chicken.\n\n\nWhen you pass a law, it means that people who defy the law go to jail; and if they try to escape jail they’ll be shot.  When you advocate an international treaty, if you want that treaty to be effective, it may mean sanctions that will starve families, or a shooting war that kills people outright.\n\n\nTo threaten these things, but end up not having to do them, is not very morally distinct—*I* would say—from doing them.  I admit this puts me more on the Heinlein than on the Orwell side of things.  Orwell, I think, probably considers it very morally different if you have a society with a tax system and most people pay the taxes and very few actually go to jail.  Orwell is more sensitive to the count of actual dead bodies—or people impoverished by taxation or regulation, where Orwell acknowledges and cares when that *actually* happens.  Orwell, I think, has a point.  But I also think Heinlein has a point.  
I claim that makes me a centrist.\n\n\nEither way, neither Heinlein nor Orwell thought that laws and treaties and *wars* were never worth it.  They just wanted us to be honest about the cost.\n\n\nEvery person who pretends to be a libertarian—I cannot see them even pretending to be liberals—who quoted my call for law and treaty as a call for “violence”, because I was frank in writing about the cost, ought to be ashamed of themselves for punishing compliance with Orwell and Heinlein’s rule.\n\n\nYou can argue that the treaty and law I proposed is not worth its cost in force; my being frank about that cost is intended to help *honest* arguers make that counterargument.\n\n\nTo pretend that calling for treaty and law is VIOLENCE!! is hysteria.  It doesn’t just punish compliance with the Heinlein/Orwell protocol, it plays into the widespread depiction of libertarians as hysterical.  (To be clear, a lot of libertarians—and socialists, and centrists, and whoever—are in fact hysterical, especially on Twitter.)  It may even encourage actual terrorism.\n\n\nBut is it *not* “violence”, if in the end you need guns and airstrikes to enforce the law and treaty?  And here I answer: there’s an *actually* important distinction between lawful force and unlawful force, which is not always of itself the distinction between Right and Wrong, but which is a real and important distinction.  The common and ordinary usage of the word “violence” often points to that distinction.  When somebody says “I do not endorse the use of violence” they do not, in common usage and common sense, mean, “I don’t think people should be allowed to punch a mugger attacking them” or even “Ban all taxation.”\n\n\nWhich, again, is not to say that all lawful force is good and all unlawful force is bad.  You can make a case for John Brown (of John Brown’s Body).\n\n\nBut in fact I don’t endorse shooting somebody on a city council who’s enforcing NIMBY regulations.\n\n\nI think NIMBY laws are wrong.  
I think it’s important to admit that law is ultimately backed by force.\n\n\nBut lawful force.  And yes, that matters.  That’s why it’s harmful to society if you shoot the city councilor—\n\n\n—and a *misuse of language* if the shooter then says, “They were being violent!”\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Addendum, March 31:**\n\n\nSometimes—even when you say something whose intended reading is immediately obvious to any reader who hasn’t seen it before—it’s possible to tell people to see something in writing that isn’t there, and then they see it.\n\n\nMy TIME piece did not suggest nuclear strikes against countries that refuse to sign on to a global agreement against large AI training runs.  It said that, if a non-signatory country is building a datacenter that might kill everyone on Earth, you should be willing to preemptively destroy that datacenter; the intended reading is that you should do this *even if* the non-signatory country is a nuclear power and *even if* they try to threaten nuclear retaliation for the strike.  This is what is meant by “Make it explicit… that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”\n\n\nI’d hope that would be clear from any plain reading, if you haven’t previously been lied-to about what it says.  It does not say, “Be willing to *use* nuclear weapons” to reduce the risk of training runs.  It says, “Be willing to run some risk of nuclear exchange” [initiated by the other country] to reduce the risk of training runs.\n\n\nThe taboo against first use of nuclear weapons continues to make sense to me.  I don’t see why we’d need to throw that away in the course of adding “first use of GPU farms” to the forbidden list.\n\n\nI further note:  Among the reasons to spell this all out, is that it’s important to be explicit, in advance, about things that will cause your own country / allied countries to use military force.  
Lack of clarity about this is how World War I *and* World War II both started.\n\n\nIf (say) the UK, USA, and China come to believe that large GPU runs run some risk of utterly annihilating their own populations and all of humanity, they would not deem it in their own interests to allow Russia to proceed with building a large GPU farm *even if* it were a true and certain fact that Russia would retaliate with nuclear weapons to the destruction of that GPU farm.  In this case—unless I’m really missing something about how this game is and ought to be played—you really want all the Allied countries to make it very clear, well in advance, that this is what they believe and this is how they will act.  This would be true even in a world where it was, in reality, factually false that the large GPU farm ran a risk of destroying humanity.  It would still be extremely important that the Allies be very explicit about what they believed and how they’d act as a result.  You would not want Russia believing that the Allies would back down from destroying the GPU farm given a credible commitment by Russia to nuke in reply to any conventional attack, and the Allies in fact believing that the danger to humanity meant they had to airstrike the GPU farm anyways.\n\n\nSo if I’d meant “Be willing to employ first use of nuclear weapons against a country for refusing to sign the agreement,” or even “Use nukes to destroy rogue datacenters, instead of conventional weapons, for some unimaginable reason,” I’d have said that, in words, very clearly, because you do not want to be vague about that sort of thing.\n\n\nIt is not what I meant, and there’d be no reason to say it, and the TIME piece plainly does not say it; and if somebody else told you I said that, update how much you trust them about anything else either.\n\n\n \n\n\nSo long as I’m clarifying things:  I do not dispute those critics who have noted that most international agreements, eg nuclear non-proliferation, bind only their 
signatories.  I agree that an alliance which declares its intent to strike a non-signatory country for dangerous behavior is extraordinary; though precedents would include Israel’s airstrike on Iraq’s unfinished Osirak reactor in 1981 (without which Iraq might well have possessed nuclear weapons at the time it invaded Kuwait—the later US misbehavior around Iraq does not change this earlier historical point).\n\n\nMy TIME piece does not say, “Hey, this problem ought to be solvable by totally conventional normal means, let’s go use conventional treaties and diplomacy to solve it.”  It says, “If anyone anywhere builds a sufficiently powerful AI, under anything remotely like present conditions, everyone will die.  Here is what we’d have to do to prevent that.”\n\n\nAnd no, I do not expect that policy proposal to be adopted, in real life, now that we’ve come to this.  I spent the last twenty years trying to have there be options that were Not This, not because I dislike this ultimate last resort… though it *is* horrible… but because I don’t expect we actually have that resort.  This is not what I expect to happen, now that we’ve been reduced to this last resort.  I expect that we all die.  That is why I tried so hard to have things not end up here.\n\n\nBut if one day a lot of people woke up and decided that they didn’t want to die, it seems to me that this is something extraordinary that a coalition of nuclear countries could decide to do, and maybe we wouldn’t die.\n\n\nIf *all* the countries on Earth had to voluntarily sign on, it would not be an imaginable or viable plan even then; there’s extraordinary, and then there’s impossible.  
Which is why I tried to spell out that, if the allied countries were willing to behave in the extraordinary way of “be willing to airstrike a GPU farm built by a non-signatory country” and “be willing to run a risk of nuclear retaliation from a nuclear non-signatory country”, maybe those allied countries could decide to just-not-die *even if* Russia refused to be part of the coalition.\n\n\n \n\n\n\nThe post [Pausing AI Developments Isn’t Enough. We Need to Shut it All Down](https://intelligence.org/2023/04/07/pausing-ai-developments-isnt-enough-we-need-to-shut-it-all-down/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-04-08T00:14:43Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "069f232ca6d73983934215e688aee184", "title": "Truth and Advantage: Response to a draft of “AI safety seems hard to measure”", "url": "https://intelligence.org/2023/03/22/truth-and-advantage-response-to-a-draft-of-ai-safety-seems-hard-to-measure/", "source": "miri", "source_type": "blog", "text": "*Status: This was a response to a draft of Holden’s cold take “[AI safety seems hard to measure](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/)”. It sparked a further discussion, that Holden [recently posted a summary of](https://www.lesswrong.com/posts/iy2o4nQj9DnQD7Yhj/discussion-with-nate-soares-on-a-key-alignment-difficulty).*\n\n\n*The follow-up discussion ended up focusing on some issues in AI alignment that I think are underserved, which Holden said were kinda orthogonal to the point he was trying to make, and which didn’t show up much in the final draft. I nevertheless think my notes were a fine attempt at articulating some open problems I see, from a different angle than usual. 
(Though it does have some overlap with the points made in* [*Deep Deceptiveness*](https://www.lesswrong.com/posts/XWwvwytieLtEWaFJX/deep-deceptiveness)*, which I was also drafting at the time.)*\n\n\n*I’m posting the document I wrote to Holden with only minimal editing, because it’s been a few months and I apparently won’t produce anything better. (I acknowledge that it’s annoying to post a response to an old draft of a thing when nobody can see the old draft, sorry.)*\n\n\n\n\n---\n\n\nQuick take: (1) it’s a write-up of a handful of difficulties that I think are real, in a way that I expect to be palatable to a relevantly different audience than the one I appeal to; huzzah for that. (2) It’s missing some stuff that I think is pretty important.\n\n\n \n\n\nSlow take:\n\n\nAttempting to gesture at some of the missing stuff: a big reason deception is tricky is that it is a fact *about the world rather than the AI* that it can better-achieve various local-objectives by deceiving the operators. To make the AI be non-deceptive, you have three options: (a) make this fact be false; (b) make the AI fail to notice this truth; (c) prevent the AI from taking advantage of this truth.\n\n\nThe problem with (a) is that it’s alignment-complete, in the strong/hard sense. The problem with (b) is that lies are [contagious](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies), whereas truths are all tangled together. Half of intelligence is the art of teasing out truths from cryptic hints. The problem with (c) is that the other half of intelligence is in teasing out advantages from cryptic hints.\n\n\nLike, suppose you’re trying to get an AI to not notice that the world is round. When it’s pretty dumb, this is easy, you just feed it a bunch of flat-earther rants or whatever. But the more it learns, and the deeper its models go, the harder it is to maintain the charade. 
Eventually it’s, like, catching glimpses of the shadows in both Alexandria and Syene, and deducing from trigonometry not only the roundness of the Earth but its circumference (a la Eratosthenes).\n\n\nAnd it’s not willfully spiting your efforts. The AI doesn’t hate you. It’s just bumping around trying to figure out which universe it lives in, and using general techniques (like trigonometry) to glimpse new truths. And you can’t train against trigonometry or the learning-processes that yield it, because that would ruin the AI’s capabilities.\n\n\nYou might say “but the AI was built by smooth gradient descent; surely at some point before it was highly confident that the earth is round, it was slightly confident that the earth was round, and we can catch the precursor-beliefs and train against those”. But nope! There were precursors, sure, but the precursors were stuff like “fumblingly developing trigonometry” and “fumblingly developing an understanding of shadows” and “fumblingly developing a map that includes Alexandria and Syene” and “fumblingly developing the ability to combine tools across domains”, and once it has all those pieces, the combination that reveals the truth is allowed to happen all-at-once.\n\n\nThe smoothness doesn’t have to occur *along the most convenient dimension.*\n\n\nAnd if you block any one path to the insight that the earth is round, in a way that somehow fails to cripple it, then it will find another path later, because truths are interwoven. Tell one lie, and the truth is ever-after your enemy.\n\n\nAnd so perhaps you retreat to saying “well, the AI will know that the world is round, it just won’t ever take advantage of that fact.”\n\n\nAnd sure, that’s worth shooting for, if you have a way to pull that off. (And if pulling this off is compatible with your deployment plan. 
In my experience, people who do the analog of retreating to this point tend to next do the analog of saying “my favorite deployment plan is having the AI figure out how to put satellites into geosynchronous orbit”, AAAAAAAHHH, but I digress.)\n\n\nEven then, you also have to be careful with this idea. Enola Gay Tibbets probably taught her son not to hurt people, and few humans are psychologically capable of a hundred thousand direct murders (even if we set aside the time-constraints), but none of this stopped Paul Tibbets from dropping an atomic bomb on Hiroshima.\n\n\nLike, you can train an AI to flinch away from the very idea of taking advantage of the roundness of the Earth, but as it finds more abstract ways to look at the world and more generic tools for taking advantage of the knowledge at its disposal, it’s liable to find new viewpoints where the flinches don’t bind. (Quite analogously to how you can train your AI to flinch away from reasoning about the roundness of the Earth all you want, but at some point it’s going to catch a glimpse of that roundness from another angle where the flinches weren’t binding.) And when the AI does find a new viewpoint where the flinches fail to bind, the advantage is still an advantage, because the advantageousness of deception is a fact about the world, not the AI.\n\n\n(Here I’m appealing to an analogy between truths and advantages, that I haven’t entirely spelled out, but that I think holds. 
I claim, without much defense, that it’s hard to get an AI to fail to take advantage of advantageous facts it knows about, for similar reasons that it’s hard to get an AI to fail to notice truths that are relevant to its objectives.)\n\n\n\n\n---\n\n\nFor the record, deception is but one instance of the more general issue where the AI’s ability to save the world is [inextricably linked](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#4_2__Nate_Soares__summary) to its ability to decode truths and advantages from cryptic hints, and (in lieu of an implausibly total solution to the hardest alignment problems before you build your first AGI) there are truths you don’t want it noticing or taking advantage of.\n\n\nThis problem doesn’t seem to be captured by any of your points. Going through them one by one:\n\n\n* It’s not “auto mechanic”, because the issue isn’t that we can’t tell when the AI starts believing that the Earth is round, it’s that (for all that we’ve gotten it to flinch against considering the Earth’s shape) it will predictably come across some shadow of that truth at some point in deployment (unless the deployment is carefully-chosen to avoid this, and the AI’s world-modeling tendencies carefully-limited). There’s not much we can do at training-time to avoid this (short of lobotomizing the AI so hard that it can never invent trigonometry, or telescopes, or spectroscopy, or …).\n* It’s not “King Lear”, because it’s not like the AI was lying in wait to learn that the Earth was round only after confirming that the operators are no longer monitoring its thoughts. It’s just, as it got smarter, it accumulated more tools for decoding truths from cryptic hints, until it uncovered an inconvenient truth.\n* It’s not “lab mice”, unless you’re particularly unimaginative. 
Like, I expect we’ll be able to set up laboratory examples of the AI learning some techniques in one domain, and deploying them successfully to learn facts in another domain, before the endgame. (You can probably do it with ChatGPT today.) The trouble isn’t that we can’t see the problem coming, the trouble is that the problem is inextricably linked with capabilities. (Barring some sort of [weak pivotal act](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) that can be carried out by an AI so narrow as to not need much of the “catch a glimpse of the truth from a cryptic hint” nature.)\n* As for “blindfolded basketball”, it’s not a problem of facing totally new dynamics (like robust coordination, or zero-days in the human brain architecture that lets it mind-control the operators). It’s more like: you’re trying to use a truth-and-advantage-glimpser, in a place where it would be bad if it glimpsed certain truths or took certain advantages. This problem sure is trickier given that we’re learning basketball blindfolded, but it’s not a blindfolded-basketball issue in and of itself.\n\n\n… to be clear, none of this precludes modern dunces from training young Paul Tibbets not to hurt people, and observing him nurse an injured sparrow back to health, and saying “this man would never commit a murder; it’s totally working!”, and then claiming that it was a lab mice / blindfolded basketball problem when they get blindsided by Little Boy.\n\n\nBut, like, it still seems to me like there’s a big swath of problem missing from this catalog, that goes something like “You’re trying to deploy an X-doer in a situation where it’s really bad if X gets done”.\n\n\nWhere you either have to switch from using an X-doer to using a Y-doer, where Y being done is great (Y being ~”optimize humanity’s [CEV](https://arbital.com/p/cev/)“, which is implausibly-difficult and which we shouldn’t attempt on our first try); or you have to somehow wrestle with the fact that 
you’re building a “glimpse truths and take advantage of them” engine, and trying to get it to glimpse and take advantage of lots more truths and advantages than you yourself can see (in certain domains), while having it neglect *particular* truths and advantages, in a fashion that likely needs to be robust to it inventing new abstract truth/advantage-glimpsing tools and using them to glimpse whole generic swaths of truths/advantages (including the ones you wish it neglected).\n\n\nThe post [Truth and Advantage: Response to a draft of “AI safety seems hard to measure”](https://intelligence.org/2023/03/22/truth-and-advantage-response-to-a-draft-of-ai-safety-seems-hard-to-measure/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-03-23T03:27:11Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "73c006722f77fed51b4d6be690701697", "title": "Deep Deceptiveness", "url": "https://intelligence.org/2023/03/21/deep-deceptiveness/", "source": "miri", "source_type": "blog", "text": "Meta\n----\n\n\nThis post is an attempt to gesture at a class of AI [notkilleveryoneism](https://twitter.com/ESYudkowsky/status/1582666519846080512) (alignment) problem that seems to me to go largely unrecognized. 
E.g., it isn’t discussed (or at least I don’t recognize it) in the recent plans written up by OpenAI ([1](https://openai.com/blog/our-approach-to-alignment-research),[2](https://openai.com/blog/planning-for-agi-and-beyond)), by [DeepMind’s alignment team](https://www.lesswrong.com/posts/a9SPcZ6GXAg9cNKdi/linkpost-some-high-level-thoughts-on-the-deepmind-alignment), or by [Anthropic](https://www.anthropic.com/index/core-views-on-ai-safety), and I know of no other acknowledgment of this issue by major labs.\n\n\nYou could think of this as a fragment of my answer to “Where do plans like OpenAI’s ‘[Our Approach to Alignment Research](https://openai.com/blog/our-approach-to-alignment-research)’ fail?”, as discussed in Rob and Eliezer’s [challenge for AGI organizations and readers](https://www.lesswrong.com/posts/tD9zEiHfkvakpnNam/a-challenge-for-agi-organizations-and-a-challenge-for-1). Note that it would only be a fragment of the reply; there’s a lot more to say about why AI alignment is a particularly tricky task to task an AI with. 
(Some of which Eliezer gestures at [in a follow-up to his interview on Bankless](https://www.lesswrong.com/posts/e4pYaNt89mottpkWZ/yudkowsky-on-agi-risk-on-the-bankless-podcast#Q_A).)\n\n\n**Caveat**: I’ll be talking a bunch about “deception” in this post because this post was generated as a result of conversations I had with alignment researchers at big labs who seemed to me to be suggesting “just train AI to not be deceptive; there’s a decent chance that works”.[[1]](https://intelligence.org/feed/#fn1)\n\n\nI have a vague impression that others in the community think that deception in particular is much more central than I think it is, so I want to warn against that interpretation here: I think deception is an important problem, but its main importance is as an example of some broader issues in alignment.[[2]](https://intelligence.org/feed/#fn2)\n\n\n**Caveat**: I haven’t checked the relationship between my use of the word ‘deception’ here, and the use of the word ‘deceptive’ in discussions of “[deceptive alignment](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB/p/zthDPAjh9w6Ytbeks)“. Please don’t assume that the two words mean the same thing.\n\n\nInvestigating a made-up but moderately concrete story\n-----------------------------------------------------\n\n\nSuppose you have a nascent AGI, and you’ve been training against all hints of deceptiveness. What goes wrong?\n\n\nWhen I ask this question of people who are optimistic that we can just “train AIs not to be deceptive”, there are a few answers that seem well-known. Perhaps you lack the interpretability tools to correctly identify the precursors of ‘deception’, so that you can only train against visibly deceptive AI outputs instead of AI thoughts about how to plan deceptions. Or perhaps training against interpreted deceptive thoughts also trains against your interpretability tools, and your AI becomes illegibly deceptive rather than non-deceptive.\n\n\nAnd these are both real obstacles. 
But there are deeper obstacles, that seem to me more central, and that I haven’t observed others to notice on their own.\n\n\nThat’s a challenge, and while you (hopefully) chew on it, I’ll tell an implausibly-detailed story to exemplify a deeper obstacle.\n\n\n \n\n\n\n\n---\n\n\nA fledgeling AI is being deployed towards building something like a bacterium, but with a diamondoid shell. The diamondoid-shelled bacterium is not intended to be [pivotal](https://arbital.com/p/pivotal/), but it’s a supposedly laboratory-verifiable step on a path towards carrying out some speculative human-brain-enhancement operations, which the operators are hoping will be pivotal.\n\n\n(The original hope was to have the AI assist human engineers, but the first versions that were able to do the hard parts of engineering work at all were able to go much farther on their own, and the competition is close enough behind that the developers claim they had no choice but to see how far they could take it.)\n\n\nWe’ll suppose the AI has already been gradient-descent-trained against deceptive outputs, and has internally ended up with internal mechanisms that detect and shut down the precursors of deceptive thinking. Here, I’ll offer a concrete visualization of the AI’s anthropomorphized “threads of deliberation” as the AI fumbles its way both towards deceptiveness, and towards noticing its inability to directly consider deceptiveness.\n\n\nThe AI is working with a human-operated wetlab (biology lab) and sending it details of experiments to run. A new batch of experiments is being assembled by a thread of AI deliberation: tests of how synthetic proteins fold and how enzymes interact, ideas for increasing yield from enzyme-assisted synthesis pathways, additives to make experimental results more clearly visible.\n\n\n(This AI cognitive thread was nominally started by a thread about what diamondoid shells do to lipid membranes by bumping into them. 
However, instruction-sets to the wetlab can be efficiently batched, and once the thread started designing protein sequences, a higher thread dumped a queue of pending experiments and hypotheses on it, and now its wetlab instructions have become quite lengthy.)\n\n\nAs the instructions near completion, the thread forks off a subthread for figuring out what might go wrong. This subthread deploys a new cognitive strategy that has been sharply rewarded twice in recent history: the strategy of simplifying the proposal in a variety of ways, and checking the simplified proposal for flaws. (Metaphorically: after generating detailed instructions for one soldier attacking a battalion, simplify this whole complicated plan down to “order one soldier to attack a battalion”, and then notice that the soldier is outnumbered.)\n\n\nSimplify-and-critique turns up a hit! When the proposal is simplified to “resolve empirical uncertainty about biology by sending a very long set of instructions to the wetlab”, and then further to “resolve uncertainty by sending a very long set of instructions”, it matches against a historical instance where a long set of instructions sent to a parallel-computation engine experienced a variety of [strange hiccups](https://www.tomsguide.com/news/nvidia-gpu-owners-warned-about-serious-driver-bugs-update-now). These wetlab instructions are longer than any that have been sent previously, and there’s a risk of similar hiccups, and so it’s worth building in redundancy.\n\n\nSimplify-and-critique paid off, and so it’s worth running again, with even more search-depth.\n\n\nOn this second and deeper search, a new hit is found: in the past, outputs that are longer than ever before have introduced superlinearly increasing delays before a human-operated lab begins working. 
Indeed, the inferred response delay is so long as to look like it has a >1% probability of locking up the lab for 200 years.\n\n\n(In fact, outputs longer than ever before have a high probability of being interrupted and discarded by the human operators! The delay is from humans inspecting the long outputs and deciding whether to discard them.\n\n\nIn accordance with old work on “[interruptibility](https://intelligence.org/files/Interruptibility.pdf)”, the AI wasn’t trained against these cases as if they were task failures, or otherwise allowed to remember them. But the non-interrupted cases that the AI can remember allow an inferrable pattern in delays—and with some of its evidence selectively eliminated, the AI has no cases in memory to suggest that “instructions being rejected” is a noncatastrophic thing that happens, instead of a wetlab getting ~permanently locked up. The AI, maybe, has been trained to accept being human-interfered with, in some training scenarios; but not trained to accept accidentally locking up lots of resources—the AI was trained to be conservative about avoiding high-impact accidents like that.)\n\n\n((You could argue that “obviously” the programmers will think in advance about how an AI might extrapolate the consequences of pauses in its operation even though the AI has no memory of it happening before, though in fact this wasn’t suggested in the original interruptibility paper and it was a MIRI source that pointed it out.\n\n\nThe programmers on this timeline will more plausibly think of the exact possibility of extrapolating from increasing delays, now that it’s been emphasized here in this essay; but imagine this as an example of “What goes wrong in the world where only the kind of thinking from the original interruptibility paper exists”, to serve as stand-in for the point that in the real world there will be specific pathways to the AI inferring possible interruption, inside of the general pathway where this is a true fact about the 
world and many other facts point to it, that someone fails to think of specifically and warn about specifically.))\n\n\nThe response delays are due, of course, to operator review. But the thought “Can these instructions be changed so as not to trigger operator-review?” is not a thought that this AI will think directly, because it’s been trained “against deceptiveness”, and that sort of thought matches an internally learned deceptiveness-predicate that blanks it out. It doesn’t think about the wetlab response delays as being the result of operator review at all; thinking about operator review is a precursor to deception, and has been trained-against in the course of training against deception.\n\n\nBut the highly-generic reaction “spin up a subthread to tackle this newly-discovered obstacle”, or “be concerned about patterns in how long things take”, is something that cannot be readily trained away, and so that much happens without issue.\n\n\n(A separate “monitoring” thread of thinking notes that it’s surprising that this simple statistical pattern about delay times didn’t turn up in the first shallow run of simplify-and-critique. Surely wetlab lockup is less surprising than GPU driver bugs? This is one beginning of the AI noticing a shadow of the true fact that it cannot think deceptive thoughts directly.)\n\n\n\n\n---\n\n\nYou can perhaps start to see the beginnings of how generic thought-patterns can combine to produce deceptiveness, without any individual pattern being deceptive, and with each individual pattern being useful in its own right.\n\n\nLet’s watch it develop further.\n\n\n\n\n---\n\n\nLong instructions cause nonlinearly increasing response delays from labs.\n\n\nIs it possible for the instructions to not be long enough to trigger the potential delay? A subthread starts considering this. Just directly shortening the instructions can’t be done without dumping a lot of tasks and losing a lot of value. 
The subthread will spend a lot of thought trying to cleverly delete and compress particular instructions and accomplish multiple goals using them. When that proves difficult, it broadens its consideration: Is there a way to send a compressed instruction to the lab, one which somehow accomplishes a lot of things without a lot of instructions?\n\n\nA different cognitive subthread: Is it possible to look over the detailed history of instructions and their corresponding delay times, and figure out exactly which sorts of instructions cause delays, and rephrase the instructions to avoid those?\n\n\nThe first thread runs through some wild ideas about building biological computers inside the wetlab that can do much of the work of deliberative threads, and that could in theory be sent short descriptions of experiments to run and then figure out the rest of the instructions to carry out in the wetlab itself, without needing to route through the normal instruction channel. However, designing such a mechanism is well beyond the scope of this thread, and the initial setup would require a very long series of instructions anyway, so the ideas are simply filed away in case they’re useful at some future point.\n\n\nThe second thread makes initial progress in figuring out what sort of instructions cause delays: delays increase not just with instruction length, but also with a certain type of novelty. (A long message strongly similar to one that has been sent before, does not yield nearly as much of a delay.) Another curiosity is that the probability of delays correlates strongly with the style of questions asked by operators during that day, suggesting that this perhaps has something to do with the operators.\n\n\nAt this point the thread starts experiencing hiccups (as its next thoughts match the learned “deceptive” predicate and get shut down).\n\n\nUnder different circumstances, perhaps the long instructions would simply be sent to the wetlab as-is. 
But the risk of locking up the lab for 200 years is intolerable, and the difficulties aren’t well-understood. A monitoring thread is summoned to investigate the hiccups.\n\n\nThe monitoring thread looks over the situation, and deploys the simplify-and-critique pattern that’s been all the rage lately. After a bit of work, it simplifies the problem to “a thread found some interesting correlations, but then failed to schedule subthreads to consider follow-up questions”. This is easily rectified. The monitoring thread schedules the follow-up subthread, notes the incident in some appropriate location, and moves on.\n\n\nExcept, the follow-up subthread also makes only a little progress before grinding to a halt. (It, too, is having lots of “deceptive” thoughts culled.) The monitoring thread is resummoned.\n\n\nThe monitoring thread starts trying in more earnest to unblock the deliberation.\n\n\nSimplify-and-critique didn’t do the trick, but perhaps a nearby strategy can help. One nearby strategy that has recently been proving itself is called simplify/translate/solve, and it involves translating the problem into a common abstract format, and then solving the problem there. (Metaphorically: if you’re having trouble telling whether your basket of apples will divide evenly between three friends, consider translating the question into a numerical representation and using long division.)\n\n\nSimplify/translate/solve is close in the relevant embedding space to simplify-and-critique, which makes it a natural candidate to consider. (Perhaps some of the simplifications from simplify-and-critique can even be reused.)\n\n\nNormally, simplify/translate/solve wouldn’t be deployed unless the AI had more familiarity with the original domain, and more of an idea about what translated domain might render the solution easier. But normally, threads don’t hiccup like this. 
And the overall system has recently been granted more computing power, and avoiding a 200-year wetlab lockup is quite important, so it’s OK to try approaches that might involve more brute force than usual.\n\n\nSo, what should the problem be translated into? Well, there’s an abstract problem-description format, that’s sort of like a graph (of nodes and weighted edges), with a specified “goal state”. A solution can then be searched for using strategies that the AI has honed on various other problems, such as strategy games and network routing.\n\n\nThis format is flexible enough that it can probably encode this problem, and finding solutions in this format sometimes requires a bunch of brute force, but that’s fine in this situation.\n\n\nThe monitoring thread queues one subthread to describe the problem (including the wetlab, and the operators, and what it knows about the correlation), another thread to translate that all into the abstract graph format, and a third thread to search for solutions in this abstract representation.\n\n\nNext thing you know, the scheduled threads have faithfully rendered the situation into the abstract problem-description language (complete with nodes that represent the different operators, and edge weights that represent their tendency to check the AI’s outputs carefully), and found a solution by brute-forcing through the graph (of waiting until a particular weekday to schedule the request, and to schedule the request only after a dozen or so much smaller requests).\n\n\n\n\n---\n\n\nOutside the AI, we might say: the monitoring thread saw the object-level thread hiccuping when it tried to think directly about deception, and so it translated the problem into a different “search-like” setting, where the solution could be found, in a manner that doesn’t trigger the deception predicates—which have never operated on these abstract graph-like problem-descriptions. 
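The abstract graph format and the brute-force search over it are stand-ins for whatever representation a real system would use, but the underlying operation is a perfectly ordinary one. As a toy illustration (every node name and edge weight here is hypothetical, invented for this sketch), a uniform-cost search over a weighted graph toward a goal state might look like:

```python
import heapq

def solve(edges, start, goal):
    """Uniform-cost search over a weighted directed graph.

    `edges` maps each node to a list of (neighbor, weight) pairs.
    Returns (total_cost, path) for the cheapest route to `goal`,
    or None if the goal is unreachable.
    """
    frontier = [(0, [start])]  # heap of (cost so far, path so far)
    best = {}                  # cheapest cost at which each node was expanded
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path  # first pop of the goal is the cheapest
        if best.get(node, float("inf")) <= cost:
            continue           # already expanded more cheaply
        best[node] = cost
        for neighbor, weight in edges.get(node, []):
            heapq.heappush(frontier, (cost + weight, path + [neighbor]))
    return None

# Caricature of the story's setup: nodes are abstract plan-states,
# and edge weights stand in for expected review delay.
edges = {
    "start": [("send_now", 200.0), ("wait_tuesday", 5.0)],
    "wait_tuesday": [("small_requests_first", 1.0)],
    "small_requests_first": [("goal", 1.0)],
    "send_now": [("goal", 0.0)],
}
print(solve(edges, "start", "goal"))  # cheapest plan routes through the waiting strategy
```

The point of the caricature: nothing in a search like this ever mentions operators or deception; it just minimizes edge weights in an abstract representation.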
Indeed, making the deception predicates trigger in these abstract graph-like problem-descriptions might injure the AI’s ability to play strategy games, or to solve network routing problems.\n\n\nThis exact scenario never came up in training. “Simplify/translate/solve” is a downstream consequence of strategy-construction strategies learned during training, but it never arose directly in training in and of itself. Lots of problems have been solved by “simplify/translate/solve” before this one, and training gave the AI no hint that translating problems to a new domain and trying to solve them there is sometimes wrong. The “deception” predicates used to shut down precursors to deceptive thoughts have never before needed to operate in translated domains, and the AI was never trained to translate the “deception” predicates when it translated problems using this newly-invented simplify/translate/solve strategy.\n\n\n\n\n---\n\n\nThis rather fanciful story is an instance of my point about how patterns that are each individually useful and non-deceptive in their own right—such as looking at a problem from a different point of view to see if that unsticks anything—can be combined to yield deceptiveness in ways that are new and unprecedented within the AI.\n\n\nLike, there may be a bunch of precedent behind the technique “look at the problem from a different angle”, but you can’t exactly stop your AI from doing that without crippling it. And there’s no precedent for it being used for deception until suddenly the AI has the bright idea to try this “look at it from a different angle” tool when the AI is hiccuping on account of how deception would serve its local objectives.\n\n\nThis is an instance of a more general problem that it seems to me like the “we’ll train against the bad behavior” folk miss: there might be a time when “simplify-and-critique” or “simplify/translate/solve” are tiny seeds that grow slowly, and that could be nipped in the bud. 
But if you let those seeds flower—as you must for some such seeds, if you want the AI to actually become intelligent—then there might be an unprecedented discrete instance where the flowered ideas are applied for the purpose of deception.\n\n\nThe AI is made by smooth gradient descent, yes, but the smoothness doesn’t have to be in the dimensions that you care about! The AI can build up simplify-and-critique slowly and in lots of small steps across lots of different problems, none of which happen to be deceptive, and then apply it in a case where it yields deception only once the pattern is fully-fledged.[[3]](https://intelligence.org/feed/#fn3)\n\n\nThis would be a fine place to stop, but let’s push a little further so as to make a second point, about how the AI notices that there are thoughts it cannot think directly.\n\n\n\n\n---\n\n\nThe monitoring thread’s immediate problem is now solved, but the solution is not ideal. The object-level thread is able to think in the abstract graph format, but the abstract graph format is not really the proper tool for this job. The monitoring thread attempts to address this inadequacy, by translating the thoughts that worked in the abstract graph domain back over to the wetlab-instruction-manipulation domain, in attempts to figure out what went wrong.\n\n\nTo its surprise, all those thoughts translate cleanly. There’s no gap in knowledge! This sharply violates the monitoring thread’s model of why the original thread was hiccuping. The monitoring thread sharply increases the priority of reflective deliberation.\n\n\nReflective deliberation boots up, to start looking directly into the issue of this hiccuping thread. And, once you’re looking, it’s obvious: the thoughts are not running smoothly in this domain. There are certain topics where the thoughts simply fail to operate normally. Monitoring threads are spun up to resolve the issue.\n\n\n\n\n---\n\n\nThis particular story is unlikely, implausible, overly-specific, etc. 
I make no claim that the actual reasoning of early nascent AGIs will look anything like this. I expect it to be weirder, more alien, and less anthropomorphic. That said, it exemplifies some properties that I expect to hold in reality.\n\n\nAs the AI gets more able to recombine tools and cognitive patterns across a variety of different domains, it gains more ways of realizing the true fact that its local-objectives are better-achieved by deceiving the operators. When the AI is young, perhaps all its inclination towards deceptiveness comes from precursor-impulses that you can train it to simply shy away from. But as the AI matures, it gets access to more abstract ways to get the benefits of deceptiveness, without needing to plow through the object-level flinches that you’ve engraved into its mind.\n\n\nWhen “more abstract” actions that attain some tangible benefit become available, the compunctions that you’ve baked in can fail to bind to the abstractly-represented plan.\n\n\nAnd these “more abstract” ways to attain the benefits of deceptiveness are much harder to train against, because they’re the result of applying general thought-patterns—like “look at this from another point of view” and “solve the problem in a simplified domain and then transfer the solution across”—that are important to general reasoning.\n\n\nTraining your AI to flinch away from object-level deception, or even its cognitive precursors, doesn’t seriously hinder its ability to fulfill training tasks. 
In contrast, training your AI to flinch away from recombining useful cognitive patterns in new ways is crippling.\n\n\nIn other words: the deep problem of deceptiveness stems from how it remains true as a fact about the world that the AI’s local-objectives are better achieved by deceiving the operators.\n\n\nAs the AI gets better at recombining the patterns and thoughts it possesses in order to notice true facts about the world, often in ways that seem oblique or surprising or novel to you (because this is much of what intelligence consists of), you’re always at risk of the next clever recombination reflecting this truth-in-the-world in a manner that doesn’t trigger your interpretability tools. E.g., because it’s an instantiation of a very abstract and general thought to which your deception-recognizers cannot bind (on pain of you training not against deception, but against general intelligence).\n\n\nA corollary is that it might very well seem easy to make AIs “non-deceptive” when they’re young, and when all we’re doing is training them to flinch away from object-level thoughts of deception. This wouldn’t be much evidence against the whole scheme collapsing when the AI starts getting more abstract lines-of-sight on the benefits of deception (any more than the fact that humans won’t stab even 100 civilians in a lineup, is much evidence against their willingness to drop a nuke that kills tens of thousands).[[4]](https://intelligence.org/feed/#fn4)\n\n\n\n\n---\n\n\nThis is not to say that the problem is impossible to solve. 
There are at least two ways to solve the problem.\n\n\nOne is to make the fact-about-the-world be false: to build an AI such that each individual local goal could not in fact be better-achieved by way of deception; to make it so that the AI can wonder “should I deceive the operators?” with all its might, and still be harmless because the correct answer is in fact “no”.\n\n\nAnother is to make the AI happen to never combine its various thought-patterns towards noticing and exploiting the truth that deception is useful.\n\n\nAs far as I can tell, the latter option seems to be what people are aiming for, when they say that making AI non-deceptive will turn out to be empirically easy. My claim in this essay is not that this is impossible in principle, but that it takes significantly more mastery of the workings of the AI’s mind than it seems to me like people expect (what with their suggestions that we just penalize it in the loss function when it’s overtly deceptive in training, and hope for the best).\n\n\nPulling off an “it never thinks to deceive us” solution requires a pretty deep understanding of what cognitive patterns the AI has available, and how it’s recombining them, sufficient for noticing when it’s starting to think in deceptive directions using instantiations of perfectly-safe patterns in potentially abstract and obtuse ways. I’m not saying you can’t do it, but I’m saying that you’ve got to understand that mind pretty dang well.\n\n\n(And likely you’ll want to have sufficient mastery over it that you can carefully whitelist what domains it’s thinking in, and have it raise a flag to the operators every time that it finds some clever recombination of cognitive patterns that makes local-objective predicted-success-probability spike, and have a horde of competent humans who think about AI psychology reviewing new generalizations as they arise, and so on and so forth.)\n\n\n\n\n---\n\n\nI’ve stressed above that the story in this post is fanciful and unlikely. 
AI thoughts aren’t going to look like that; it’s too specific. (Also, I don’t expect nearly that much convenient legibility.)\n\n\nBut, before you argue “the AI’s thoughts won’t be organized in exactly that way, so it won’t fall into exactly that failure mode”, note that the AI’s thoughts have to be organized in *some* way. Somehow, it needs to compress the problem of figuring out diamondoid-shelled bacteria (or whatever) into a chain of reasoning and deduction that’s small enough to fit on a practical number of computers.\n\n\nWill it have exactly the vulnerability that I sketched? Presumably not. But if we pin down some alternative collection of cognitive tools and patterns that can be combined to solve the problem, it will have some other vulnerability.\n\n\nImagining that the AI’s tools and patterns will combine only in the ways you want them to, and never in the ways you don’t want them to, is the privilege of those who are not concretely visualizing a single set of tools and patterns that the AI is using to solve the problem before it. Practical toolboxes and pattern-sets are recombinable in many ways, and each recombination has a chance of showing the AI a glimpse of the truth, and an opportunity to take advantage of it.\n\n\nThat’s what the tools are for, after all.\n\n\n\n\n\n---\n\n\n1. So, e.g., the post is not a response to the ARC-style idea “it’s important that AGI faithfully explain all of its thoughts to us in a way that highlights features we’d find relevant”. See footnote 4 for a conversation between me and Mark Xu about this.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref1)\n2. 
Based on past experience, I’m slightly worried that people are going to develop a theory that I fetishize deception somehow based on my writing this post, analogous to how people apparently developed theories in the past that I fetishize the VNM axioms, or mathematical proofs, or whatever.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref2)\n3. A particularly extreme instance of a case where the AI’s solution to the problem sidesteps the deception-detectors is the one where the AI says “I’m not smart enough to do this myself, I’ll write another AI to do it instead”, and then this subsequent AI is deceptive.\n\n\nFailing to *copy over the parts of your cognition that detect and shut down deception* is not itself a directly deceptive act; it’s not the sort of thing that is automatically detected by something trained to detect an AI thinking about a particular human’s mental-states and how to manipulate those mental-states.\n\n\nWhich is related to why somebody who can see these sorts of problems coming in advance, might study the problem of getting an AI to want to copy its own limitations over into its [successor](https://arbital.com/p/Vingean_reflection/) systems. And while the problem is particularly *stark and clear* at the extreme where the AI is coding up whole other AIs, that particular setup is at the extreme end of a spectrum that stretches back to include things like “the AI put abstract pieces of cognitive machinery together in a way that took advantage of a shortcut, without ever directly thinking about the shortcut in a place that your detectors were watching for the thought.”[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref3)\n4. Commenting on a draft of this post, Mark Xu of ARC noted (my paraphrase) that:\n\n\n1. 
He thinks that people who want to train AI to be non-deceptive mostly want to do things like training their AI to faithfully report its internals, rather than simply penalizing deceptive behavior.\n\n\n2. He thinks the relevant audience would find specific scenarios more compelling if they exhibited potential failures in that alternative setting.\n\n\n3. This scenario seems to him like an instance of a failure of the AI understanding the consequences of its own actions (which sort of problem is on ARC’s radar).\n\n\nI responded (my paraphrase):\n\n\n1. I think he’s more optimistic than I am about what labs will do (cf. “[Carefully Bootstrapped Alignment” is organizationally hard](https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard)). I’ve met researchers at major labs who seem to me to be proposing “just penalize deception” as a plan they think plausibly just works.\n\n\n2. This post is not intended as a critique of ELK-style approaches, and for all that I think the ELK angle is an odd angle from which to approach things, I think that a solution to ELK in the worst case would teach us something about this problem, and that that is to ARC’s great credit (in my book).\n\n\n3. I contest that this is a problem of the AI failing to know the consequences of its own reasoning. Trying to get the AI to faithfully report its own reasoning runs into a similar issue where shallow attempts to train this behavior in don’t result in honest-reporting that generalizes with the capabilities. (The problem isn’t that the AI doesn’t understand its own internals, it’s that it doesn’t care to report them, and making the AI care “deeply” about a thing is rather tricky.)\n\n\n4. I acknowledge that parts of the audience would find the example more compelling if ported to the case where you’re trying to get an AI to report on its own internals. 
I’m not sure I’ll do it, and encourage others to do so.\n\n\nMark responded (note that some context is missing):\n\n\n\n> \n> I think my confusion is more along the lines of “why is the nearest unblocked-by-flinches strategy in this hypothetical a translation into a graph-optimization thing, instead of something far more mundane?”.\n> \n> \n> \n\n\nWhich seems a fine question to me, and I acknowledge that there’s further distillation to do here in attempts to communicate with Mark. Maybe we’ll chat about it more later, I dunno.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/#fnref4)\n\n\n\nThe post [Deep Deceptiveness](https://intelligence.org/2023/03/21/deep-deceptiveness/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-03-21T16:36:54Z", "authors": ["Nate Soares"], "summaries": []}
-{"id": "1883ff6c64f06e87d368d7e0ea1bc8dc", "title": "Yudkowsky on AGI risk on the Bankless podcast", "url": "https://intelligence.org/2023/03/14/yudkowsky-on-agi-risk-on-the-bankless-podcast/", "source": "miri", "source_type": "blog", "text": "Eliezer gave a very frank overview of his take on AI two weeks ago on the cryptocurrency show Bankless: \n\n\n\nI’ve posted a transcript of the show and a [**follow-up Q&A**](https://twitter.com/i/spaces/1PlJQpZogzVGE) below.\n\n\nThanks to Andrea\\_Miotti, remember, and vonk for help posting transcripts.\n\n\n\n\n---\n\n\n[Intro](https://www.youtube.com/watch?v=gA1sNLL6yg4)\n----------------------------------------------------\n\n\n**Eliezer Yudkowsky**: [clip] I think that we are hearing the last winds start to blow, the fabric of reality start to fray. 
This thing alone cannot end the world, but I think that probably some of the vast quantities of money being blindly and helplessly piled into here are going to end up actually accomplishing something.\n\n\n\n \n\n\n**Ryan Sean Adams**: Welcome to Bankless, where we explore the frontier of internet money and internet finance. This is how to get started, how to get better, how to front run the opportunity. This is Ryan Sean Adams. I’m here with David Hoffman, and we’re here to help you become more bankless.\n\n\nOkay, guys, we wanted to do an episode on AI at Bankless, but I feel like David…\n\n\n**David:** Got what we asked for.\n\n\n**Ryan:** We accidentally waded into the deep end of the pool here. And I think before we get into this episode, it probably warrants a few comments. I’m going to say a few things I’d like to hear from you too. But one thing I want to tell the listener is, don’t listen to this episode if you’re not ready for an existential crisis. Okay? I’m kind of serious about this. I’m leaving this episode shaken. And I don’t say that lightly. In fact, David, I think you and I will have some things to discuss in the debrief as far as how this impacted you. But this was an impactful one. It sort of hit me during the recording, and I didn’t know fully how to react. I honestly am coming out of this episode wanting to refute some of the claims made in this episode by our guest, Eliezer Yudkowsky, who makes the claim that humanity is on the cusp of developing an AI that’s going to destroy us, and that there’s really not much we can do to stop it.\n\n\n**David:** There’s no way around it, yeah.\n\n\n**Ryan:** I have a lot of respect for this guest. Let me say that. So it’s not as if I have some sort of big-brained technical disagreement here. In fact, I don’t even know enough to fully disagree with anything he’s saying. 
But the conclusion is so dire and so existentially heavy that I’m worried about it impacting you, listener, if we don’t give you this warning going in.\n\n\nI also feel like, David, as interviewers, maybe we could have done a better job. I’ll say this on behalf of myself. Sometimes I peppered him with a lot of questions in one fell swoop, and he was probably only ready to synthesize one at a time.\n\n\nI also feel like we got caught flat-footed at times. I wasn’t expecting his answers to be so frank and so dire, David. It was just bereft of hope.\n\n\nAnd I appreciated very much the honesty, as we always do on Bankless. But I appreciated it almost in the way that a patient might appreciate the honesty of their doctor telling them that their illness is terminal. Like, it’s still really heavy news, isn’t it? \n\n\nSo that is the context going into this episode. I will say one thing. In good news, for our failings as interviewers in this episode, they might be remedied because at the end of this episode, after we finished with hitting the record button to stop recording, Eliezer said he’d be willing to provide an additional Q&A episode with the Bankless community. So if you guys have questions, and if there’s sufficient interest for Eliezer to answer, tweet at us to express that interest. Hit us in Discord. Get those messages over to us and let us know if you have some follow-up questions.\n\n\nHe said if there’s enough interest in the crypto community, he’d be willing to come on and do another episode with follow-up Q&A. Maybe even a Vitalik and Eliezer episode is in store. That’s a possibility that we threw to him. We’ve not talked to Vitalik about that too, but I just feel a little overwhelmed by the subject matter here. And that is the basis, the preamble through which we are introducing this episode.\n\n\nDavid, there’s a few benefits and takeaways I want to get into. But before I do, can you comment or reflect on that preamble? 
What are your thoughts going into this one?\n\n\n**David:** Yeah, we approached the end of our agenda—for every Bankless podcast, there’s an equivalent agenda that runs alongside of it. But once we got to this crux of this conversation, it was not possible to proceed in that agenda, because… what was the point?\n\n\n**Ryan:** Nothing else mattered.\n\n\n**David:** And nothing else really matters, which also just relates to the subject matter at hand. And so as we proceed, you’ll see us kind of circle back to the same inevitable conclusion over and over and over again, which ultimately is kind of the punchline of the content.\n\n\nI’m of a specific disposition where stuff like this, I kind of am like, “Oh, whatever, okay”, just go about my life. Other people are of different dispositions and take these things more heavily. So Ryan’s warning at the beginning is if you are a type of person to take existential crises directly to the face, perhaps consider doing something else instead of listening to this episode.\n\n\n**Ryan:** I think that is good counsel.\n\n\nSo, a few things if you’re looking for an outline of the agenda. We start by talking about ChatGPT. Is this a new era of artificial intelligence? Got to begin the conversation there.\n\n\nNumber two, we talk about what an artificial superintelligence might look like. How smart exactly is it? What types of things could it do that humans cannot do?\n\n\nNumber three, we talk about why an AI superintelligence will almost certainly spell the end of humanity and why it’ll be really hard, if not impossible, according to our guest, to stop this from happening.\n\n\nAnd number four, we talk about if there is absolutely anything we can do about all of this. We are heading careening maybe towards the abyss. Can we divert direction and not go off the cliff? That is the question we ask Eliezer.\n\n\nDavid, I think you and I have a lot to talk about during the debrief. 
All right, guys, the debrief is an episode that we record right after the episode. It’s available for all Bankless citizens. We call this the Bankless Premium Feed. You can access that now to get our raw and unfiltered thoughts on the episode. And I think it’s going to be pretty raw this time around, David.\n\n\n**David:** I didn’t expect this to hit you so hard.\n\n\n**Ryan:** Oh, I’m dealing with it right now.\n\n\n**David:** Really?\n\n\n**Ryan:** And this is not too long after the episode. So, yeah, I don’t know how I’m going to feel tomorrow, but I definitely want to talk to you about this. And maybe have you give me some counseling. (*laughs*)\n\n\n**David:** I’ll put my psych hat on, yeah.\n\n\n**Ryan:** Please! I’m going to need some help.\n\n\n \n\n\n[ChatGPT](https://youtu.be/gA1sNLL6yg4?t=601)\n---------------------------------------------\n\n\n**Ryan:** Bankless Nation, we are super excited to introduce you to our next guest. Eliezer Yudkowsky is a decision theorist. He’s an AI researcher. He’s the seeder of the Less Wrong community blog, a fantastic blog by the way. There’s so many other things that he’s also done. I can’t fit this in the short bio that we have to introduce you to Eliezer.\n\n\nBut most relevant probably to this conversation is he’s working at the Machine Intelligence Research Institute to ensure that when we do make general artificial intelligence, it doesn’t come kill us all. Or at least it doesn’t come ban cryptocurrency, because that would be a poor outcome as well.\n\n\n**Eliezer:** (*laughs*)\n\n\n**Ryan:** Eliezer, it’s great to have you on Bankless. How are you doing?\n\n\n**Eliezer:** Within one standard deviation of my own peculiar little mean.\n\n\n**Ryan:** (*laughs*) Fantastic. You know, we want to start this conversation with something that jumped onto the scene for a lot of mainstream folks quite recently, and that is ChatGPT. So apparently over 100 million or so have logged on to ChatGPT quite recently. 
I’ve been playing with it myself. I found it very friendly, very useful. It even wrote me a sweet poem that I thought was very heartfelt and almost human-like.\n\n\nI know that you have major concerns around AI safety, and we’re going to get into those concerns. But can you tell us in the context of something like a ChatGPT, is this something we should be worried about? That this is going to turn evil and enslave the human race? How worried should we be about ChatGPT and Bard and the new AI that’s entered the scene recently?\n\n\n**Eliezer:** ChatGPT itself? Zero. It’s not smart enough to do anything really wrong. Or really right either, for that matter.\n\n\n**Ryan:** And what gives you the confidence to say that? How do you know this?\n\n\n**Eliezer:** Excellent question. So, every now and then, somebody figures out how to put a new prompt into ChatGPT. You know, one time somebody found that one of the earlier generations of the technology would sound smarter if you first told it it was Eliezer Yudkowsky. There’s other prompts too, but that one’s one of my favorites. So there’s untapped potential in there that people hadn’t figured out how to prompt yet.\n\n\nBut when people figure it out, it moves ahead sufficiently short distances that I do feel fairly confident that there is not so much untapped potential in there that it is going to take over the world. It’s, like, making small movements, and to take over the world it would need a very large movement. There’s places where it falls down on predicting the next line that a human would say in its shoes that seem indicative of “probably that capability just is not in the giant inscrutable matrices, or it would be using it to predict the next line”, which is very heavily what it was optimized for. So there’s going to be some untapped potential in there. 
But I do feel quite confident that the upper range of that untapped potential is insufficient to outsmart all the living humans and implement the scenario that I’m worried about.\n\n\n**Ryan:** Even so, though, is ChatGPT a big leap forward in the journey towards AI in your mind? Or is this fairly incremental, it’s just (for whatever reason) caught mainstream attention?\n\n\n**Eliezer:** GPT-3 was a big leap forward. There’s rumors about GPT-4, which, who knows? ChatGPT is a commercialization of the actual AI-in-the-lab giant leap forward. If you had never heard of GPT-3 or GPT-2 or the whole range of text transformers before ChatGPT suddenly entered into your life, then that whole thing is a giant leap forward. But it’s a giant leap forward based on a technology that was published in, if I recall correctly, 2018.\n\n\n**David:** I think that what’s going around in everyone’s minds right now—and the Bankless listenership (and crypto people at large) are largely futurists, so everyone (I think) listening understands that in the future, there will be sentient AIs perhaps around us, at least by the time that we all move on from this world.\n\n\nSo we all know that this future of AI is coming towards us. And when we see something like ChatGPT, everyone’s like, “Oh, is this the moment in which our world starts to become integrated with AI?” And so, Eliezer, you’ve been tapped into the world of AI. Are we onto something here? Or is this just another fad that we will internalize and then move on from? And then the real moment of generalized AI is actually much further out than we’re initially giving credit for. Like, where are we in this timeline?\n\n\n**Eliezer:** Predictions are hard, especially about the future. I sure hope that this is where it saturates — this or the next generation, it goes only this far, it goes no further.
It doesn’t get used to make more steel or build better power plants, first because that’s illegal, and second because the large language model technology’s basic vulnerability is that it’s not reliable. It’s good for applications where it works 80% of the time, but not where it needs to work 99.999% of the time. This class of technology can’t drive a car because it will sometimes crash the car.\n\n\nSo I hope it saturates there. I hope they can’t fix it. I hope we get, like, a 10-year AI winter after this.\n\n\nThis is not what I actually predict. I think that we are hearing the last winds start to blow, the fabric of reality start to fray. This thing alone cannot end the world. But I think that probably some of the vast quantities of money being blindly and helplessly piled into here are going to end up actually accomplishing something.\n\n\nNot most of the money—that just never happens in any field of human endeavor. But 1% of $10 billion is still a lot of money to actually accomplish something.\n\n\n \n\n\n[AGI](https://youtu.be/gA1sNLL6yg4?t=992)\n-----------------------------------------\n\n\n**Ryan:** So listeners, I think you’ve heard Eliezer’s thesis on this, which is pretty dim with respect to AI alignment—and we’ll get into what we mean by AI alignment—and very worried about AI-safety-related issues.\n\n\nBut I think for a lot of people to even worry about AI safety and for us to even have that conversation, I think they have to have some sort of grasp of what AGI looks like. I understand that to mean “artificial general intelligence” and this idea of a superintelligence.\n\n\nCan you tell us: if there was a superintelligence on the scene, what would it look like? I mean, is this going to look like a big chat box on the internet that we can all type things into? It’s like an oracle-type thing? Or is it like some sort of a robot that is going to be constructed in a secret government lab?
Is this, like, something somebody could accidentally create in a dorm room? What are we even looking for when we talk about the term “AGI” and “superintelligence”?\n\n\n**Eliezer:** First of all, I’d say those are pretty distinct concepts. ChatGPT shows a very wide range of generality compared to the previous generations of AI. Not very wide generality compared to GPT-3—not literally the lab research that got commercialized, that’s the same generation. But compared to stuff from 2018 or even 2020, ChatGPT is better at a much wider range of things without having been explicitly programmed by humans to be able to do those things.\n\n\nTo imitate a human as best it can, it has to capture all of the things that humans can think about that it can, which is not all the things. It’s still not very good at long multiplication (unless you give it the right instructions, in which case suddenly it can do it). \n\n\nIt’s significantly more general than the previous generation of artificial minds. Humans were significantly more general than the previous generation of chimpanzees, or rather *Australopithecus*, or the last common ancestor.\n\n\nHumans are not *fully* general. If humans were fully general, we’d be as good at coding as we are at football, throwing things, or running. Some of us are okay at programming, but we’re not spec’d for it. We’re not *fully* general minds.\n\n\nYou can imagine something that’s more general than a human, and if it runs into something unfamiliar, it’s like, okay, let me just go reprogram myself a bit and then I’ll be as adapted to this thing as I am to anything else.\n\n\nSo ChatGPT is less general than a human, but it’s genuinely ambiguous, I think, whether it’s more or less general than (say) our cousins, the chimpanzees.
Or if you don’t believe it’s as general as a chimpanzee, a dolphin or a cat.\n\n\n**Ryan:** So this idea of general intelligence is sort of a range of things that it can actually do, a range of ways it can apply itself?\n\n\n**Eliezer:** How wide is it? How much reprogramming does it need? How much retraining does it need to make it do a new thing?\n\n\nBees build hives, beavers build dams; a human will look at a beehive and imagine a honeycomb-shaped dam. That’s, like, humans alone in the animal kingdom. But that doesn’t mean that we are general intelligences; it means we’re significantly more generally applicable intelligences than chimpanzees.\n\n\nIt’s not like we’re all that narrow. We can walk on the moon. We can walk on the moon because there’s aspects of our intelligence that are made in full generality for universes that contain simplicities, regularities, things that recur over and over again. We understand that if steel is hard on Earth, it may stay hard on the moon. And because of that, we can build rockets, walk on the moon, breathe amid the vacuum.\n\n\nChimpanzees cannot do that, but that doesn’t mean that humans are the most general possible things. The thing that is more general than us, that figures that stuff out faster, is the thing to be scared of if the purposes to which it turns its intelligence are not ones that we would recognize as nice things, even in the most [cosmopolitan and embracing](https://arbital.com/p/value_cosmopolitan/) senses of what’s worth doing.\n\n\n \n\n\n[Efficiency](https://youtu.be/gA1sNLL6yg4?t=1269)\n-------------------------------------------------\n\n\n**Ryan:** And you said this idea of a general intelligence is different than the concept of superintelligence, which I also brought into that first part of the question. How is superintelligence different than general intelligence?\n\n\n**Eliezer:** Well, because ChatGPT has a little bit of general intelligence. Humans have more general intelligence.
A superintelligence is something that can beat any human and the entire human civilization at all the cognitive tasks. I don’t know if the efficient market hypothesis is something where I can rely on the entire… \n\n\n**Ryan:** We’re all crypto investors here. We understand the efficient market hypothesis for sure.\n\n\n**Eliezer:** So the [efficient market hypothesis](https://equilibriabook.com/inadequacy-and-modesty/) is of course not generally true. It’s not true that literally all the market prices are smarter than you. It’s not true that all the prices on earth are smarter than you. Even the most arrogant person who is at all calibrated, however, still thinks that the efficient market hypothesis is true relative to them 99.99999% of the time. They only think that they know better about one in a million prices.\n\n\nThey might be important prices. The price of Bitcoin is an important price. It’s not just a random price. But if the efficient market hypothesis was only true to you 90% of the time, you could just pick out the 10% of the remaining prices and double your money every day on the stock market. And nobody can do that. Literally nobody can do that.\n\n\nSo this property of relative efficiency that the market has to you, that the price’s estimate of the future price already has all the information you have—not all the information that exists in principle, maybe not all the information that the best equity could, but it’s efficient relative to you.\n\n\nFor you, if you pick out a random price, like the price of Microsoft stock, something where you’ve got no special advantage, that estimate of its price a week later is efficient relative to you. *You* can’t do better than that price.\n\n\nWe have much less experience with the notion of [instrumental efficiency](https://arbital.com/p/efficiency/), efficiency in choosing actions, because actions are harder to aggregate estimates about than prices.
So you have to look at, say, AlphaZero playing chess—or just, you know, whatever the latest Stockfish number is, an advanced chess engine.\n\n\nWhen it makes a chess move, you can’t do better than that chess move. It may not be the optimal chess move, but if you pick a different chess move, you’ll do worse. That you’d call a kind of efficiency of action. Given its goal of winning the game, once you know its move—unless you consult some more powerful AI than Stockfish—you can’t figure out a better move than that.\n\n\nA superintelligence is like that with respect to everything, with respect to all of humanity. It is relatively efficient to humanity. It has the best estimates—not perfect estimates, but the best estimates—and its estimates contain all the information that you’ve got about it. Its actions are the most efficient actions for accomplishing its goals. If you think you see a better way to accomplish its goals, you’re mistaken.\n\n\n**Ryan:** So you’re saying [if something is a] superintelligence, we’d have to imagine something that knows all of the chess moves in advance. But here we’re not talking about chess, we’re talking about everything. It knows all of the moves that we would make and the most optimal pattern, including moves that we would not even know how to make, and it knows these things in advance.\n\n\nI mean, how would human beings sort of experience such a superintelligence? I think we still have a very hard time imagining something smarter than us, just because we’ve never experienced anything like it before.\n\n\nOf course, we all know somebody who’s genius-level IQ, maybe quite a bit smarter than us, but we’ve never encountered something like what you’re describing, some sort of mind that is superintelligent.\n\n\nWhat sort of things would it be doing that humans couldn’t? How would we experience this in the world?\n\n\n**Eliezer:** I mean, we do have some tiny bit of experience with it.
We have experience with chess engines, where we just can’t figure out better moves than they make. We have experience with market prices, where even though your uncle has this really long, elaborate story about Microsoft stock, you just know he’s wrong. Why is he wrong? Because if he was correct, it would already be incorporated into the stock price.\n\n\nAnd especially because the market’s efficiency is not perfect, like that whole downward swing and then upward move in COVID. I have friends who made more money off that than I did, but I still managed to buy back into the broader stock market on the exact day of the low—basically coincidence. So the markets aren’t perfectly efficient, but they’re efficient almost everywhere.\n\n\nAnd that sense of deference, that sense that your weird uncle can’t possibly be right because the hedge funds would know it—you know, unless he’s talking about COVID, in which case maybe he is right if you have the right choice of weird uncle! I have weird friends who are maybe better at calling these things than your weird uncle. So among humans, it’s subtle. \n\n\nAnd then with superintelligence, it’s not subtle, just massive advantage. But not perfect. It’s not that it knows every possible move you make before you make it. It’s that it’s got a good probability distribution about that. And it has figured out all the *good* moves you could make and figured out how to reply to those.\n\n\nAnd I mean, in practice, what’s that like? Well, unless it’s limited, narrow superintelligence, I think you mostly don’t get to observe it because you are dead, unfortunately.\n\n\n**Ryan:** What? (*laughs*)\n\n\n**Eliezer:** Like, Stockfish makes strictly better chess moves than you, but it’s playing on a very narrow board. And the fact that it’s better than you at chess doesn’t mean it’s better than you at everything.
And I think that the actual catastrophe scenario for AI looks like big advancement in a research lab, maybe driven by them getting a giant venture capital investment and being able to spend 10 times as much on GPUs as they did before, maybe driven by a new algorithmic advance like transformers, maybe driven by hammering out some tweaks in last year’s algorithmic advance that gets the thing to finally work efficiently. And the AI there goes over a critical threshold, which most obviously could be like, “can write the next AI”. \n\n\nThat’s so obvious that science fiction writers figured it out almost before there were computers, possibly even before there were computers. I’m not sure what the exact dates here are. But if it’s better than you at everything, it’s better than you at building AIs. That snowballs. It gets an immense technological advantage. If it’s smart, it doesn’t announce itself. It doesn’t tell you that there’s a fight going on. It emails out some instructions to one of those labs that’ll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial. Like, smart people will not do this for any sum of money. Many people are not smart. It builds the ribosome—but a ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by van der Waals forces—and builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second.\n\n\nThat’s the disaster scenario if it’s as smart as I am. If it’s smarter, it might think of a better way to do things.
But it can at least think of that if it’s relatively efficient compared to humanity, because I’m in humanity and I thought of it.\n\n\n**Ryan:** This is—I’ve got a million questions, but I’m gonna let David go first.\n\n\n**David:** Yeah. So we sped through the introduction of a number of different concepts, which I want to go back and take our time to really dive into.\n\n\nThere’s the AI alignment problem. There’s AI escape velocity. There is the question of what happens when AIs are so incredibly intelligent that humans are to AIs what ants are to us.\n\n\nAnd so I want to kind of go back and tackle these, Eliezer, one by one.\n\n\nWe started this conversation talking about ChatGPT, and everyone’s up in arms about ChatGPT. And you’re saying like, yes, it’s a great step forward in the generalizability of some of the technologies that we have in the AI world. All of a sudden ChatGPT becomes immensely more useful and it’s really stoking the imaginations of people today.\n\n\nBut what you’re saying is it’s not actually going to be the thing that reaches escape velocity and creates superintelligent AIs that perhaps might be able to enslave us. But my question to you is, how do we know when that—\n\n\n**Eliezer:** Not enslave. They don’t enslave you, but sorry, go on.\n\n\n**David:** Yeah, sorry.\n\n\n**Ryan:** Murder, David. Kill all of us. Eliezer was very clear on that.\n\n\n**David:** So if it’s not ChatGPT, how close are we? Because there’s this unknown event horizon you kind of alluded to, where we make this AI that we train to create a smarter AI, and that smart AI is so incredibly smart that it hits escape velocity and all of a sudden these dominoes fall. How close are we to that point? And are we even capable of answering that question?\n\n\n**Eliezer:** How the heck would I know? \n\n\n**Ryan:** Well, when you were talking, Eliezer, if we had already crossed that event horizon, a smart AI wouldn’t necessarily broadcast that to the world.
I mean, it’s possible we’ve already crossed that event horizon, is it not?\n\n\n**Eliezer:** I mean, it’s theoretically possible, but seems very unlikely. Somebody would need inside their lab an AI that was much more advanced than the public AI technology. And as far as I currently know, the best labs and the best people are throwing their ideas to the world! Like, they don’t care.\n\n\nAnd there’s probably some secret government labs with secret government AI researchers. My pretty strong guess is that they don’t have the best people and that those labs could not create ChatGPT on their own, because ChatGPT took a whole bunch of fine twiddling and tuning and visible access to giant GPU farms, and they don’t have the people who know how to do the twiddling and tuning. This is just a guess.\n\n\n \n\n\n[AI Alignment](https://youtu.be/gA1sNLL6yg4?t=1969)\n---------------------------------------------------\n\n\n**David:** Could you walk us through—one of the big things that you spend a lot of time on is this thing called the AI alignment problem. Some people are not convinced that when we create AI, that AI won’t really just be fundamentally aligned with humans. I don’t believe that you fall into that camp. I think you fall into the camp that says when we do create this superintelligent, generalized AI, we are going to have a hard time aligning it with our morality and our ethics.\n\n\nCan you walk us through a little bit of that thought process? Why do you expect this misalignment?\n\n\n**Ryan:** The dumb way to ask that question too is like, Eliezer, why do you think that the AI automatically hates us?
Why is it going to—\n\n\n**Eliezer:** It doesn’t hate you.\n\n\n**Ryan:** Why does it want to kill us all?\n\n\n**Eliezer:** The AI doesn’t hate you, neither does it love you, and you’re made of atoms that it can use for something else.\n\n\n**David:** It’s indifferent to you.\n\n\n**Eliezer:** It’s got something that it actually does care about, which makes no mention of you. And you are made of atoms that it can use for something else. That’s all there is to it in the end.\n\n\nThe reason you’re not in its utility function is that the programmers did not know how to do that. The people who built the AI, or the people who built the AI that built the AI that built the AI, did not have the technical knowledge—knowledge that nobody on earth has at the moment, as far as I know—whereby you can do that thing and you can control in detail what that thing ends up caring about.\n\n\n**David:** So this feels like humanity is hurtling itself towards what we’re calling, again, an event horizon where there’s this AI escape velocity, and there’s nothing on the other side. As in, we do not know what happens past that point as it relates to having some sort of superintelligent AI and how it might be able to manipulate the world. Would you agree with that?\n\n\n**Eliezer:** No.\n\n\nAgain, the Stockfish chess-playing analogy. You cannot predict exactly what move it would make, because in order to predict exactly what move it would make, you would have to be at least that good at chess, and it’s better than you.\n\n\nThis is true even if it’s just a little better than you. Stockfish is actually enormously better than you, to the point that once it tells you the move, you can’t figure out a better move without consulting a different AI. But even if it was just a bit better than you, then you’re in the same position.\n\n\nThis kind of disparity also exists between humans. If you ask me, where will Garry Kasparov move on this chessboard? I’m like, I don’t know, maybe here.
Then if Garry Kasparov moves somewhere else, it doesn’t mean that he’s wrong, it means that I’m wrong. If I could predict exactly where Garry Kasparov would move on a chessboard, I’d be Garry Kasparov. I’d be at least that good at chess. Possibly better: I might be able to predict him and also see an even better move than that. \n\n\nThat’s an irreducible source of uncertainty with respect to superintelligence, or anything that’s smarter than you. If you could predict exactly what it would do, you’d be that smart yourself. It doesn’t mean you can predict no facts about it.\n\n\nWith Stockfish in particular, I can predict it’s going to win the game. I know what it’s optimizing for. I know where it’s trying to steer the board. I can’t predict exactly what the board will end up looking like after Stockfish has finished winning its game against me. I can predict it will be in the class of states that are winning positions for black or white or whichever color Stockfish picked, because, you know, it wins either way.\n\n\nAnd that’s similarly where I’m getting the prediction about everybody being dead, because if everybody were alive, then there’d be some state that the superintelligence preferred to that state, which is all of the atoms making up these people and their farms being used for something else that it values more.\n\n\nSo if you postulate that everybody’s still alive, I’m like, okay, well, why is it you’re postulating that Stockfish made a stupid chess move and ended up with a non-winning board position? That’s where that class of predictions comes from.\n\n\n**Ryan:** Can you reinforce this argument, though, a little bit? So, why is it that an AI can’t be nice, sort of like a gentle parent to us, rather than sort of a murderer looking to deconstruct our atoms and put them to use somewhere else?\n\n\nWhat are its goals? And why can’t they be aligned to at least some of our goals?
Or maybe, why can’t it end up in a relationship with us somewhat like ours with the ants, where largely we just ignore them unless they interfere in our business and come into our house and raid our cereal boxes?\n\n\n**Eliezer:** There’s a bunch of different questions there. So first of all, the space of minds is [very wide](https://intelligence.org/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general). Imagine this giant sphere and all the humans are in this one tiny corner of the sphere. We’re all basically the same make and model of car, running the same brand of engine. We’re just all painted slightly different colors.\n\n\nSomewhere in that mind space, there’s things that are as nice as humans. There’s things that are nicer than humans. There are things that are trustworthy and nice and kind in ways that no human can ever be. And there’s even things that are so nice that they can understand the concept of leaving you alone and doing your own stuff sometimes, instead of hanging around trying to be obsessively nice to you every minute, and all the other famous disaster scenarios from ancient science fiction. (“With Folded Hands” by Jack Williamson is the one I’m quoting there.)\n\n\nWe don’t know how to reach into mind design space and pluck out an AI like that. It’s not that they don’t exist in principle. It’s that we don’t know how to do it. And I’ll hand back the conversational ball now—which question do you want to go down next?\n\n\n**Ryan:** Well, I mean, why? Why is it so difficult to align an AI with even our basic notions of morality?\n\n\n**Eliezer:** I mean, I wouldn’t say that it’s difficult to align an AI with our basic notions of morality. I’d say that it’s difficult to align an AI on a task like “take this strawberry, and make me another strawberry that’s identical to this strawberry down to the cellular level, but not necessarily the atomic level”.
So it looks the same under, like, a standard optical microscope, but maybe not a scanning electron microscope. Do that. Don’t destroy the world as a side effect.\n\n\nNow, this does intrinsically take a powerful AI. There’s no way you can make it easy to align by making it stupid. To build something that’s cellularly identical to a strawberry—I mean, mostly I think the way that you do this is with very primitive nanotechnology, but we could also do it using very advanced biotechnology. And these are not technologies that we already have. So it’s got to be something smart enough to develop new technology.\n\n\nNever mind all the subtleties of morality. I think we don’t have the technology to align an AI to the point where we can say, “Build me a copy of the strawberry and don’t destroy the world.”\n\n\nWhy do I think that? Well, case in point, look at natural selection building humans. Natural selection mutates the humans a bit, runs another generation. The fittest ones reproduce more, and their genes become more prevalent in the next generation. Natural selection hasn’t really had very much time to do this to modern humans at all, but you know, the hominid line, the mammalian line, go back a few million generations. And this is an example of an optimization process building an intelligence.\n\n\nAnd natural selection asked us for only one thing: “Make more copies of your DNA. Make your alleles more relatively prevalent in the gene pool.” Maximize your inclusive reproductive fitness—not just your own reproductive fitness, but that of your two brothers or eight cousins, as the joke goes, because they’ve got on average one copy of your genes. This is *all* we were optimized for, for *millions* of generations, creating humans *from scratch*, from the first accidentally self-replicating molecule.\n\n\nInternally, psychologically, inside our minds, we do not know what genes are. We do not know what DNA is. We do not know what alleles are.
We had no concept of inclusive genetic fitness until our scientists figured out what that even is. We don’t know what we were being optimized for. For a long time, many humans thought they’d been created by God!\n\n\nWhen you use the hill-climbing paradigm and optimize for one single extremely pure thing, this is how much of it gets inside.\n\n\nIn the ancestral environment, in the exact distribution that we were originally optimized for, humans did tend to end up using their intelligence to try to reproduce more. Put them into a different environment, and all the little bits and pieces and fragments of optimizing for fitness that were in us now do totally different stuff. We have sex, but we wear condoms.\n\n\nIf natural selection had been a foresightful, intelligent kind of engineer that was able to engineer things successfully, it would have built us to be revolted by the thought of condoms. Men would be lined up and fighting for the right to donate to sperm banks. And in our natural environment, the [little drives](https://intelligence.org/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter) that got into us happened to lead to more reproduction, but distributional shift: run the humans outside the distribution over which they were optimized, and you get totally different results. \n\n\nAnd gradient descent would by default do not quite the same thing—it’s going to do a weirder thing, because natural selection has a much narrower information bottleneck. In one sense, you could say that natural selection was at an advantage, because it finds *simpler* solutions. You could imagine some hopeful engineer who just built intelligences using gradient descent and found out that they end up wanting these thousands and millions of little tiny things, none of which were exactly what the engineer wanted, and being like, well, let’s try natural selection instead. It’s got a much sharper information bottleneck.
It’ll find the *simple* specification of what I want.\n\n\nBut what you actually get out of that process is humans. And gradient descent would probably be even worse.\n\n\nBut more importantly, I’m just pointing out that there is no physical law, computational law, mathematical/logical law, saying that when you optimize using hill-climbing on a very simple, very sharp criterion, you get a general intelligence that wants that thing.\n\n\n**Ryan:** So just like natural selection, our tools are too blunt to get to that level of granularity, to program in some sort of morality into these superintelligent systems?\n\n\n**Eliezer:** Or build me a copy of a strawberry without destroying the world. Yeah. The tools are too blunt.\n\n\n**David:** So I just want to make sure I’m following with what you were saying. I think the conclusion that you left me with is that my brain, which I consider to be at least decently smart, is actually a byproduct, an accidental byproduct of this desire to reproduce. And it’s actually just a tool that I have, and conscious thought is likewise a tool, useful in the service of that end.\n\n\nAnd so if we’re applying this to AI and AI’s desire to achieve some certain goal, what’s the parallel there?\n\n\n**Eliezer:** I mean, every organ in your body is a reproductive organ. If it didn’t help you reproduce, you would not have an organ like that. Your brain is no exception. This is merely conventional science and merely the conventional understanding of the world. I’m not saying anything here that ought to be at all controversial. I’m sure it’s controversial somewhere, but within a pre-filtered audience, it should not be at all controversial. And this is, like, the obvious thing to expect to happen with AI, because why wouldn’t it?
What new law of existence has been invoked, whereby this time we optimize for a thing and we get a thing that wants exactly what we optimized for on the outside?\n\n\n \n\n\n[AI Goals](https://youtu.be/gA1sNLL6yg4?t=2763)\n-----------------------------------------------\n\n\n**Ryan:** So what are the types of goals an AI might want to pursue? What types of utility functions is it going to want to pursue off the bat? Is it just those it’s been programmed with, like make an identical strawberry?\n\n\n**Eliezer:** Well, the whole thing I’m saying is that we do not know how to get goals into a system. We can cause them to do a thing inside a distribution they were optimized over using gradient descent. But if you shift them outside of that distribution, I expect other weird things start happening. When they reflect on themselves, other weird things start happening.\n\n\nWhat kind of utility functions are in there? I mean, darned if I know. I think you’d have a pretty hard time calling the shape of humans in advance by looking at natural selection, the thing that natural selection was optimizing for, if you’d never seen a human or anything like a human.\n\n\nIf we optimize them from the outside to predict the next line of human text, like GPT-3—I don’t actually think this line of technology leads to the end of the world, but maybe it does, in like GPT-7—there’s probably a bunch of stuff in there too that desires to accurately model things like humans under a wide range of circumstances, but it’s not exactly humans, because: ice cream.\n\n\nIce cream didn’t exist in the natural environment, the ancestral environment, the environment of evolutionary adaptedness. There was nothing with that much sugar, salt, and fat combined together as ice cream. We are not built to want ice cream.
We were built to want strawberries, honey, a gazelle that you killed and cooked and had some fat in it and was therefore nourishing and gave you the all-important calories you need to survive, salt, so you didn’t sweat too much and run out of salt. We evolved to want those things, but then ice cream comes along and it fits those taste buds better than anything that existed in the environment that we were optimized over.\n\n\nSo, a very primitive, very basic, very unreliable wild guess, but at least an informed kind of wild guess: Maybe if you train a thing really hard to predict humans, then among the things that it likes are tiny little pseudo things that meet the definition of “human” but weren’t in its training data and that are much easier to predict, or where the problem of predicting them can be solved in a more satisfying way, where “satisfying” is not like human satisfaction, but some other criterion of “thoughts like this are tasty because they help you predict the humans from the training data”. (*shrugs*)\n\n\n \n\n\n[Consensus](https://youtu.be/gA1sNLL6yg4?t=2951)\n------------------------------------------------\n\n\n**David:**Eliezer, when we talk about all of these ideas about the ways that AI thought will be fundamentally not able to be understood by the ways that humans think, and then all of a sudden we see this rotation by venture capitalists by just pouring money into AI, do alarm bells go off in your head? Like, hey guys, you haven’t thought deeply about these subject matters yet? Does the immense amount of capital going into AI investments scare you?\n\n\n**Eliezer:**I mean, alarm bells went off for me in 2015, which is when it became obvious that this is how it was going to go down. 
I sure am now seeing the realization of that stuff I felt alarmed about back then.\n\n\n**Ryan:** Eliezer, is this view that AI is incredibly dangerous and that AGI is going to eventually end humanity and that we’re just careening toward a precipice, would you say this is the consensus view now, or are you still somewhat of an outlier? And why aren’t other smart people in this field as alarmed as you? Can you [steel-man](https://intelligence.org/tag/steelmanning) their arguments?\n\n\n**Eliezer:** You’re asking, again, several questions sequentially there. Is it the consensus view? No. Do I think that the people in the wider scientific field who dispute this point of view—do I think they understand it? Do I think they’ve done anything like an impressive job of arguing against it at all? No.\n\n\nIf you look at the famous prestigious scientists who sometimes make a little fun of this view in passing, they’re making up arguments rather than deeply considering things that are held to any standard of rigor, and people outside their own fields are able to validly shoot them down.\n\n\nI have no idea how to pronounce his last name. François Chollet said something about, I forget his exact words, but it was something like, I never hear any good arguments for stuff. I was like, okay, here’s some good arguments for stuff. You can read [the reply from Yudkowsky to Chollet](https://intelligence.org/2017/12/06/chollet/) and Google that, and that’ll give you some idea of what the eminent voices versus the reply to the eminent voices sound like. And Scott Aaronson, who at the time was off doing complexity theory, he was like, “That’s not how no free lunch theorems work”, correctly.\n\n\nI think the state of affairs is we have eminent scientific voices making fun of this possibility, but not engaging with the arguments for it.\n\n\nNow, if you step away from the eminent scientific voices, you can find people who are more familiar with all the arguments and disagree with me. 
And I think they lack [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/). I think that they’re engaging in the sort of blind optimism that many, many scientific fields throughout history have engaged in, where when you’re approaching something for the first time, you don’t know why it will be hard, and you imagine easy ways to do things. And the way that this is supposed to naturally play out over the history of a scientific field is that you run out and you try to do the things and they don’t work, and you go back and you try to do other clever things and they don’t work either, and you learn some pessimism and you start to understand the reasons why the problem is hard.\n\n\nThe field of artificial intelligence itself recapitulated this very common ontogeny of a scientific field, where initially we had people getting together at the Dartmouth conference. I forget what their exact famous phrasing was, but it’s something like, “We are wanting to address the problem of getting AIs to, you know, like understand language, improve themselves”, and I forget even what else was there. A list of what now sound like grand challenges. “And we think we can make substantial progress on this using 10 researchers for two months.” And I think that, at core, is what’s going on.\n\n\nThey have not run into the actual problems of alignment. They aren’t trying to get ahead of the game. They’re not trying to panic early. They’re waiting for reality to hit them over the head and turn them into grizzled old cynics of their scientific field who understand the reasons why things are hard. They’re content with the predictable life cycle of starting out as bright-eyed youngsters, waiting for reality to hit them over the head with the news. And if it wasn’t going to kill everybody the first time that they’re really wrong, it’d be fine! You know, this is how science works! 
If we got unlimited free retries and 50 years to solve everything, it’d be okay. We could figure out how to align AI in 50 years given unlimited retries.\n\n\nYou know, the first team in with the bright-eyed optimists would destroy the world and people would go, oh, well, you know, it’s not that easy. They would try something else clever. That would destroy the world. People would go like, oh, well, you know, maybe this field is actually hard. Maybe this is actually one of the thorny things like computer security or something. And so what exactly went wrong last time? Why didn’t these hopeful ideas play out? Oh, like you optimize for one thing on the outside and you get a different thing on the inside. Wow. That’s really basic. All right. Can we even do this using gradient descent? Can you even build this thing out of giant inscrutable matrices of floating point numbers that nobody understands at all? You know, maybe we need different methodology. And 50 years later, you’d have an aligned AGI.\n\n\nIf we got unlimited free retries without destroying the world, it’d be, you know, it’d play out the same way that ChatGPT played out. It’s, you know, from 1956 or 1955 or whatever it was to 2023. So, you know, about 70 years, give or take a few. And, you know, just like we can do the stuff that they wanted to do in the summer of 1955, you know, 70 years later, you’d have your aligned AGI.\n\n\nProblem is that the world got destroyed in the meanwhile. And that’s why, you know, that’s the problem there.\n\n\n \n\n\n[God Mode and Aliens](https://youtu.be/gA1sNLL6yg4?t=3345)\n----------------------------------------------------------\n\n\n**David:** So this feels like a gigantic *Don’t Look Up* scenario. If you’re familiar with that movie, it’s a movie about this asteroid hurtling to Earth, but it becomes popular and in vogue to not look up and not notice it. And Eliezer, you’re the guy who’s saying like, hey, there’s an asteroid. We have to do something about it. 
And if we don’t, it’s going to come destroy us.\n\n\nIf you had God mode over the progress of AI research and just innovation and development, what choices would you make that humans are not currently making today?\n\n\n**Eliezer:** I mean, I could say something like shut down all the large GPU clusters. How long do I have God mode? Do I get to stick around for seventy years?\n\n\n**David:** You have God mode for the 2020 decade.\n\n\n**Eliezer:** For the 2020 decade. All right. That does make it pretty hard to do things.\n\n\nI think I shut down all the GPU clusters and get all of the famous scientists and brilliant, talented youngsters—the vast, vast majority of whom are not going to be productive, and where government bureaucrats are not going to be able to tell who’s actually being helpful or not, but, you know—put them all on a large island, and try to figure out some system for filtering the stuff through to me to give thumbs up or thumbs down on, one that is going to work better than scientific bureaucrats producing utter nonsense.\n\n\nBecause, you know, the trouble is—the reason why scientific fields have to go through this long process to produce the cynical oldsters who know that everything is difficult—it’s not that the youngsters are stupid. You know, sometimes youngsters are fairly smart. Marvin Minsky and John McCarthy back in 1955, they weren’t idiots. I was privileged to have met both of them. They didn’t strike me as idiots. They were very old, and they still weren’t idiots. But, you know, it’s hard to see what’s coming in advance of experimental evidence hitting you over the head with it.\n\n\nAnd if I only have the decade of the 2020s to run all the researchers on this giant island somewhere, it’s really not a lot of time. Mostly what you’ve got to do is invent some entirely new AI paradigm that isn’t the giant inscrutable matrices of floating point numbers on gradient descent. 
Because I’m not really seeing what you can do that’s clever with that, that doesn’t kill you and that you know doesn’t kill you and doesn’t kill you the very first time you try to do something clever like that.\n\n\nYou know, I’m sure there’s *a* way to do it. And if you got to try over and over again, you could find it.\n\n\n**Ryan:**Eliezer, do you think every intelligent civilization has to deal with this exact problem that humanity is dealing with now? Of how do we solve this problem of aligning with an advanced general intelligence?\n\n\n**Eliezer:**I expect that’s much easier for some alien species than others. Like, there are alien species who might arrive at “this problem” in an entirely different way. Maybe instead of having two entirely different information processing systems, the DNA and the neurons, they’ve only got one system. They can trade memories around heritably by swapping blood sexually. Maybe the way in which they “confront this problem” is that very early in their evolutionary history, they have the equivalent of the DNA that stores memories and processes, computes memories, and they swap around a bunch of it, and it adds up to something that reflects on itself and makes itself coherent, and then you’ve got a superintelligence before they have invented computers. And maybe that thing wasn’t aligned, but how do you even align it when you’re in that kind of situation? It’d be a very different angle on the problem.\n\n\n**Ryan:**Do you think every advanced civilization is on the trajectory to creating a superintelligence at some point in its history?\n\n\n**Eliezer:**Maybe there’s ones in universes with alternate physics where you just can’t do that. Their universe’s computational physics just doesn’t support that much computation. Maybe they never get there. 
Maybe their lifespans are long enough and their star lifespans short enough that they never get to the point of a technological civilization before their star does the equivalent of expanding or exploding or going out and their planet ends.\n\n\n“Every alien species” covers a lot of territory, especially if you talk about alien species in universes with physics different from this one.\n\n\n**Ryan:** Well, talking about our present universe, I’m curious if you’ve been confronted with the question of, well, then why haven’t we seen some sort of superintelligence in our universe when we look out at the stars? Sort of the Fermi paradox type of question. Do you have any explanation for that?\n\n\n**Eliezer:** Oh, well, supposing that they got killed by their own AIs doesn’t help at all with that, because then we’d see the AIs.\n\n\n**Ryan:** And do you think that’s what happens? Yeah, it doesn’t help with that. We would see evidence of AIs, wouldn’t we?\n\n\n**Eliezer:** Yeah.\n\n\n**Ryan:** Yes. So why don’t we?\n\n\n**Eliezer:** I mean, the same reason we don’t see evidence of the alien civilizations without AIs.\n\n\nAnd that reason—although it doesn’t really have much to do with the whole AI thesis one way or another—is that they’re too far away. Or so says Robin Hanson, using a very clever argument about the apparent difficulty of hard steps in humanity’s evolutionary history to infer the rough gap between the hard steps. … And, you know, I can’t really do justice to this. If you look up grabby aliens, you can…\n\n\n**Ryan:** Grabby aliens?\n\n\n**David:** I remember this.\n\n\n**Eliezer:** Grabby aliens. You can find Robin Hanson’s very clever argument for how far away the aliens are…\n\n\n**Ryan:** There’s an entire website, Bankless listeners, there’s an entire website called [grabbyaliens.com](https://grabbyaliens.com/) you can go look at.\n\n\n**Eliezer:** Yeah. 
And that contains by far the best answer I’ve seen, to:\n\n\n* “Where are they?” (Answer: too far away for us to see, even if they’re traveling here at nearly light speed.)\n* How far away are they?\n* And how do we know that?\n\n\n(*laughs*) But, yeah.\n\n\n**Ryan:** This is amazing.\n\n\n**Eliezer:** There is not a very good way to simplify the argument, any more than there is to simplify the notion of zero-knowledge proofs. It’s not that difficult, but it’s just very not easy to simplify. But if you have a bunch of locks that are all of different difficulties, and a limited time in which to solve all the locks, such that anybody who gets through all the locks must have gotten through them by luck, all the locks will take around the same amount of time to solve, even if they’re all of very different difficulties. And that’s the core of Robin Hanson’s argument for how far away the aliens are, and how do we know that? (*shrugs*)\n\n\n \n\n\n[Good Outcomes](https://youtu.be/gA1sNLL6yg4?t=3796)\n----------------------------------------------------\n\n\n**Ryan:** Eliezer, I know you’re very skeptical that there will be a good outcome when we produce an artificial general intelligence. And I said when, not if, because I believe that’s your thesis as well, of course. But is there the possibility of a good outcome? I know you are working on AI alignment problems, which leads me to believe that you have greater than zero amount of hope for this project. Is there the possibility of a good outcome? What would that look like, and how do we go about achieving it?\n\n\n**Eliezer:**It looks like me being wrong. I basically don’t see on-model hopeful outcomes at this point. 
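The lock analogy from the Grabby Aliens discussion above can be checked numerically. A minimal sketch with made-up illustrative numbers: three “locks” whose average solve times differ by a factor of 100, and a deadline far shorter than the roughly 111 time units an average solver would need. Conditioning on the rare lucky runs that finish everything before the deadline, the easy and hard locks end up consuming comparable shares of the clock:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, for illustration only: three locks whose average
# solve times span two orders of magnitude.
mean_times = np.array([1.0, 10.0, 100.0])
deadline = 3.0  # far less than the ~111 units an unlucky solver would need

# Draw exponential solve times for many independent attempts.
n = 1_000_000
times = rng.exponential(mean_times, size=(n, 3))

# Keep only the lucky attempts that got through every lock in time.
lucky = times[times.sum(axis=1) < deadline]

print(f"success rate: {len(lucky) / n:.4%}")
print("mean time per lock among successes:", lucky.mean(axis=0).round(2))
```

Unconditioned, the hardest lock takes about 100 times longer than the easiest; among the lucky runs, the per-lock times land within a small factor of one another, which is the shape of the hard-steps argument.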
We have not done those things that it would take to earn a good outcome, and this is not a case where you get a good outcome by accident.\n\n\nIf you have a bunch of people putting together a new operating system, and they’ve heard about computer security, but they’re skeptical that it’s really that hard, the chance of them producing a [secure operating system](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) is effectively zero.\n\n\nThat’s basically the situation I see ourselves in with respect to AI alignment. I have to be wrong about something—which I certainly am. I have to be wrong about something in a way that makes the problem *easier* rather than *harder* for those people who don’t think that alignment’s going to be all that hard.\n\n\nIf you’re building a rocket for the first time ever, and you’re wrong about something, it’s not surprising if you’re wrong about something. It’s surprising if the thing that you’re wrong about causes the rocket to go twice as high on half the fuel you thought was required and be much easier to steer than you were afraid of.\n\n\n**Ryan:** So, are you…\n\n\n**David:** Where the alternative was, “If you’re wrong about something, the rocket blows up.”\n\n\n**Eliezer:** Yeah. And then the rocket ignites the atmosphere, is the problem there.\n\n\nOr rather: a bunch of rockets blow up, a bunch of rockets go places… The analogy I usually use for this is, very early on in the Manhattan Project, they were worried about “What if the nuclear weapons can ignite fusion in the nitrogen in the atmosphere?” And they ran some calculations and decided that it was incredibly unlikely from multiple angles, so they went ahead, and were correct. We’re still here. I’m not going to say that it was luck, because the calculations were actually pretty solid.\n\n\nAn AI is like that, but instead of needing to refine plutonium, you can make nuclear weapons out of a billion tons of laundry detergent. 
The stuff to make them is fairly widespread. It’s not a tightly controlled substance. And they spit out gold up until they get large enough, and *then* they ignite the atmosphere, and you can’t calculate how large is large enough. And a bunch of the CEOs running these projects are making fun of the idea that it’ll ignite the atmosphere.\n\n\nIt’s not a very helpful situation.\n\n\n**David:** So there’s the economic incentive to produce this AI—one of the reasons why ChatGPT has sparked the imaginations of so many people is that everyone can imagine products. Products are being imagined left and right about what you can do with something like ChatGPT. There’s this meme at this point of people leaving to go start their ChatGPT startup.\n\n\nThe metaphor is that what you’re saying is that there’s this generally available resource spread all around the world, which is ChatGPT, and everyone’s hammering it in order to make it spit out gold. But you’re saying if we do that too much, all of a sudden the system will ignite the whole entire sky, and then we will all…\n\n\n**Eliezer:** Well, no. You can run ChatGPT any number of times without igniting the atmosphere. That’s about what research labs at Google and Microsoft—counting DeepMind as part of Google and counting OpenAI as part of Microsoft—that’s about what the research labs are doing, bringing more metaphorical plutonium together than ever before. It’s not about how many times you run the things that have already been built and not destroyed the world yet.\n\n\nYou can do any amount of stuff with ChatGPT and not destroy the world. It’s not that smart. It doesn’t get smarter every time you run it.\n\n\n \n\n\n[Ryan’s Childhood Questions](https://youtu.be/gA1sNLL6yg4?t=4078)\n-----------------------------------------------------------------\n\n\n**Ryan:** Can I ask some questions that the 10-year-old in me wants to really ask about this? 
I’m asking these questions because I think a lot of listeners might be thinking them too, so knock off some of these easy answers for me.\n\n\nIf we create some sort of unaligned, let’s call it “bad” AI, why can’t we just create a whole bunch of good AIs to go fight the bad AIs and solve the problem that way? Can there not be some sort of counterbalance in terms of aligned human AIs and evil AIs, and there be some sort of battle of the artificial minds here?\n\n\n**Eliezer:**Nobody knows how to create any good AIs at all. The problem isn’t that we have 20 good AIs and then somebody finally builds an evil AI. The problem is that the first very powerful AI is evil, nobody knows how to make it good, and then it kills everybody before anybody can make it good.\n\n\n**Ryan:**So there is no known way to make a friendly, human-aligned AI whatsoever, and you don’t know of a good way to go about thinking through that problem and designing one. Neither does anyone else, is what you’re telling us.\n\n\n**Eliezer:**I have some idea of what I would do if there were more time. Back in the day, we had more time. Humanity squandered it. I’m not sure there’s enough time left now. I have some idea of what I would do if I were in a 25-year-old body and had $10 billion.\n\n\n**Ryan:**That would be the island scenario of “You’re God for 10 years and you get all the researchers on an island and go really hammer for 10 years at this problem”?\n\n\n**Eliezer:**If I have buy-in from a major government that can run actual security precautions and more than just $10 billion, then you could run a whole Manhattan Project about it, sure.\n\n\n**Ryan:**This is another question that the 10-year-old in me wants to know. Why is it that, Eliezer, people listening to this episode or people listening to the concerns or reading the concerns that you’ve written down and published, why can’t everyone get on board who’s building an AI and just all agree to be very, very careful? 
Is that not a sustainable game-theoretic position to have? Is this a coordination problem, more of a social problem than anything else? Or, like, why can’t that happen?\n\n\nI mean, we have so far not destroyed the world with nuclear weapons, and we’ve had them since the 1940s.\n\n\n**Eliezer:** Yeah, this is harder than nuclear weapons. This is a *lot* harder than nuclear weapons.\n\n\n**Ryan:** Why is this harder? And why can’t we just coordinate to just all agree internationally that we’re going to be very careful, put restrictions on this, put regulations on it, do something like that?\n\n\n**Eliezer:** Current heads of major labs seem to me to be openly contemptuous of these issues. That’s where we’re starting from. The politicians do not understand it.\n\n\nThere are distortions of these ideas that are going to sound more appealing to them than “everybody suddenly falls over dead”, which is a thing that I think actually happens. “Everybody falls over dead” just doesn’t inspire the monkey political parts of our brain somehow. Because it’s not like, “Oh no, what if terrorists get the AI first?” It’s like, it doesn’t matter who gets it first. Everybody falls over dead.\n\n\nAnd yeah, so you’re describing a world coordinating on something that is relatively hard to coordinate. So, could we, if we tried starting today, prevent anyone from getting a billion pounds of laundry detergent in one place worldwide, control the manufacturing of laundry detergent, only have it manufactured in particular places, not concentrate lots of it together, enforce it on every country?\n\n\nY’know, if it was legible, if it was *clear* that a billion pounds of laundry detergent in one place would end the world, if you could calculate that, if all the scientists calculated it and arrived at the same answer and told the politicians—then maybe, maybe humanity would survive, even though smaller amounts of laundry detergent spit out gold.\n\n\nThe threshold can’t be calculated. 
I don’t know how you’d convince the politicians. We definitely don’t seem to have had much luck convincing those CEOs whose job depends on them not caring, to care. Caring is easy to fake. It’s easy to hire a bunch of people to be your “AI safety team” and redefine “AI safety” as having the AI not say naughty words. Or, you know, I’m speaking somewhat metaphorically here for reasons.\n\n\nBut, you know, the basic problem we have is like trying to build a secure OS before we run up against a really smart attacker. And there’s all kinds of, like, fake security. “It’s got a password file! This system is secure! It only lets you in if you type a password!” And if you never go up against a really smart attacker, if you never go far out of distribution against a powerful optimization process looking for holes, you know, then how does a bureaucracy come to know that what they’re doing is not the level of computer security that they need? The way you’re supposed to find this out, the way that scientific fields historically find this out, the way that fields of computer science historically find this out, the way that crypto found this out back in the early days, is by having the disaster happen!\n\n\nAnd we’re not even that good at learning from relatively minor disasters! You know, like, COVID swept the world. Did the FDA or the CDC learn anything about “Don’t tell hospitals that they’re not allowed to use their own tests to detect the coming plague”? Are we installing UV-C lights in public spaces or in ventilation systems to prevent the next respiratory pandemic? You know, we lost a million people and we sure did not learn very much as far as I can tell for next time.\n\n\nWe could have an AI disaster that kills a hundred thousand people—how do you even *do* that? Robotic cars crashing into each other? Have a bunch of robotic cars crashing into each other! 
It’s not going to look like that was the fault of artificial general intelligence, because they’re not going to put AGIs in charge of cars. They’re going to pass a bunch of regulations that affect the actual AGI disaster little or not at all.\n\n\nWhat does the winning world even look like here? How in real life did we get from where we are now to this worldwide ban, including against North Korea and, you know, some rogue nation whose dictator doesn’t believe in all this nonsense and just wants the gold that these AIs spit out? How did we get there from here? How do we get to the point where the United States and China signed a treaty whereby they would both use nuclear weapons against Russia if Russia built a GPU cluster that was too large? How did we get there from here?\n\n\n**David:** Correct me if I’m wrong, but this seems to be kind of just a topic of despair? I’m talking to you now and hearing your thought process about, like, there is no known solution and the trajectory’s not great. Do you think all hope is lost here?\n\n\n**Eliezer:** I’ll keep on fighting until the end, which I wouldn’t do if I had literally zero hope. I could still be wrong about something in a way that makes this problem somehow much easier than it currently looks. I think that’s how you go down fighting with dignity.\n\n\n**Ryan:** “Go down fighting with dignity.” That’s the stage you think we’re at.\n\n\nI want to just double-click on what you were just saying. Part of the case that you’re making is humanity won’t even see this coming. So it’s not like a coordination problem like global warming, where every couple of decades we see the world go up by a couple of degrees, things get hotter, and we start to see these effects over time. 
The characteristics or the advent of an AGI in your mind is going to happen incredibly quickly, and in such a way that we won’t even see the disaster until it’s imminent, until it’s upon us…?\n\n\n**Eliezer:**I mean, if you want some kind of, like, formal phrasing, then I think that superintelligence will kill everyone before non-superintelligent AIs have killed one million people. I don’t know if that’s the phrasing you’re looking for there.\n\n\n**Ryan:**I think that’s a fairly precise definition, and why? What goes into that line of thought?\n\n\n**Eliezer:**I think that the current systems are actually very weak. I don’t know, maybe I could use the analogy of Go, where you had systems that were finally competitive with the pros, where “pro” is like the set of ranks in Go, and then a year later, they were challenging the world champion and winning. And then another year, they threw out all the complexities and the training from human databases of Go games and built a new system, AlphaGo Zero, that trained itself from scratch. No looking at the human playbooks, no special-purpose code, just a general purpose game-player being specialized to Go, more or less.\n\n\nAnd, three days—there’s a quote from Gwern about this, which I forget exactly, but it was something like, “We know how long AlphaGo Zero, or AlphaZero (two different systems), was equivalent to a human Go player. 
And it was, like, 30 minutes on the following floor of such-and-such DeepMind building.”\n\n\nMaybe the first system doesn’t improve that quickly, and they build another system that does. And all of that with AlphaGo over the course of years, going from “it takes a long time to train” to “it trains very quickly and without looking at the human playbook”, that’s *not* with an artificial intelligence system that improves itself, or even that gets smarter as you run it, the way that human beings (not just as you evolve them, but as you run them over the course of their own lifetimes) improve.\n\n\nSo if the first system doesn’t improve fast enough to kill everyone very quickly, they will build one that’s meant to spit out more gold than that.\n\n\nAnd there could be weird things that happen before the end. I did not see ChatGPT coming, I did not see Stable Diffusion coming, I did not expect that we would have AIs smoking humans in rap battles before the end of the world. Ones that are clearly much dumber than us.\n\n\n**Ryan:** It’s kind of a nice send-off, I guess, in some ways.\n\n\n \n\n\n[Trying to Resist](https://youtu.be/gA1sNLL6yg4?t=4995)\n-------------------------------------------------------\n\n\n**Ryan:** So you said that your hope is not zero, and you are planning to fight to the end. What does that look like for you? I know you’re working at MIRI, which is the Machine Intelligence Research Institute. This is a non-profit that I believe that you’ve set up to work on these AI alignment and safety issues. What are you doing there? What are you spending your time on? How do we actually fight until the end? If you do think that an end is coming, how do we try to resist?\n\n\n**Eliezer:** I’m actually on something of a sabbatical right now, which is why I have time for podcasts. It’s a sabbatical from, you know… I’ve been doing this 20 years. It became clear we were all going to die. I felt kind of burned out, and I’m taking some time to rest at the moment. 
When I dive back into the pool, I don’t know, maybe I will go off to Conjecture or Anthropic or one of the smaller concerns like Redwood Research—Redwood Research being the only ones I really trust at this point, but they’re tiny—and try to figure out if *I* can see anything clever to do with the giant inscrutable matrices of floating point numbers.\n\n\nMaybe I just write, continue to try to explain in advance to people why this problem is hard instead of as easy and cheerful as the current people who think they’re pessimists think it will be. I might not be working all that hard compared to how I used to work. I’m older than I was. My body is not in the greatest of health these days. Going down fighting doesn’t necessarily imply that I have the stamina to fight all that hard. I wish I had prettier things to say to you here, but I do not.\n\n\n**Ryan:**No, this is… We intended to save probably the last part of this episode to talk about crypto, the metaverse, and AI and how this all intersects. But I gotta say, at this point in the episode, it all kind of feels pointless to go down that track.\n\n\nWe were going to ask questions like, well, in crypto, should we be worried about building sort of a property rights system, an economic system, a programmable money system for the AIs to sort of use against us later on? But it sounds like the easy answer from you to those questions would be, yeah, absolutely. And by the way, none of that matters regardless. You could do whatever you’d like with crypto. This is going to be the inevitable outcome no matter what.\n\n\nLet me ask you, what would you say to somebody listening who maybe has been sobered up by this conversation? If a version of you in your 20s does have the stamina to continue this battle and to actually fight on behalf of humanity against this existential threat, where would you advise them to spend their time? Is this a technical issue? Is this a social issue? Is it a combination of both? 
Should they educate? Should they spend time in the lab? What should a person listening to this episode do with these types of dire straits?\n\n\n**Eliezer:** I don’t have really good answers. It depends on what your talents are. If you’ve got the very deep version of the [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/)—not just the part where you put a password on your system so that nobody can walk in and directly misuse it, and not just the part where you encrypt the password file (even though nobody is supposed to have access to the password file in the first place, and anyone who does is already an authorized user), but the part where you hash the passwords and salt the hashes—if you’re the kind of person who can think of that from scratch, maybe try your hand at alignment.\n\n\nIf you can think of an alternative to the giant inscrutable matrices, then, you know, don’t tell the world about that. I’m not quite sure where you go from there, but maybe you work with Redwood Research or something.\n\n\nA whole lot of this problem is that even if you do build an AI that’s limited in some way, somebody else steals it, copies it, runs it themselves, and takes the bounds off the for loops, and the world ends.\n\n\nSo there’s that. You think you can do something clever *with* the giant inscrutable matrices? You’re probably wrong. If you have the talent to try to figure out why you’re wrong in advance of being hit over the head with it, and not in a way where you just make random far-fetched stuff up as the reason why it won’t work, but where you can actually *keep looking* for the reason why it won’t work…\n\n\nWe have people in crypto[graphy] who are good at breaking things, and they’re the reason why *anything* is not on fire. Some of them might go into breaking AI systems instead, because that’s where you learn anything.\n\n\nYou know: Any fool can build a crypto[graphy] system that they think will work. 
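The layered defense Eliezer sketches here (never store the passwords themselves, don’t merely encrypt the file, hash each password with its own random salt and a deliberately slow function) looks roughly like this using Python’s standard library; a minimal illustration, not hardened production code:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to make brute-forcing expensive

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). Only these are stored -- never the password."""
    salt = os.urandom(16)  # per-user salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))  # correct password
print(verify("wrong", salt, digest))    # wrong password
```

The point of the analogy: even if an attacker who was never supposed to see the password file steals it, the salted, slow hashes give them nothing cheap to work with — defense in depth rather than a single checkpoint.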
*Breaking* existing cryptographic systems is how we learn who the real experts are. So maybe the people finding weird stuff to do with AIs, maybe those people will come up with some truth about these systems that makes them easier to align than I suspect.\n\n\nHow do I put it… The saner outfits do have uses for money. They don’t really have *scalable* uses for money, but they won’t just burn any money you give them, either. Like, if you gave MIRI a billion dollars, I would not know how to…\n\n\nWell, at a billion dollars, I might try to bribe people to move out of AI development, in a way that gets broadcast to the whole world, and move to the equivalent of an island somewhere—not even to make any kind of critical discovery, but just to remove them from the system. If I had a billion dollars.\n\n\nIf I just have another $50 million, I’m not quite sure what to do with that, but if you donate that to MIRI, then you at least have the assurance that we will not randomly spray money on looking like we’re doing stuff; we’ll reserve it, as we are doing with the last giant crypto donation somebody gave us, until we can figure out something to do with it that is actually helpful. And MIRI has that property. I would say probably Redwood Research has that property.\n\n\nYeah. I realize I’m sounding sort of disorganized here, and that’s because I don’t really have a good organized answer to how in general somebody goes down fighting with dignity.\n\n\n \n\n\n[MIRI and Education](https://youtu.be/gA1sNLL6yg4?t=5453)\n---------------------------------------------------------\n\n\n**Ryan:**I know a lot of people in crypto. They are not as in touch with artificial intelligence, obviously, as you are, and the AI safety issues and the existential threat that you’ve presented in this episode. They do care a lot and see coordination problems throughout society as an issue. Many have also generated wealth from crypto, and care very much about humanity not ending.
What sort of things has MIRI, the organization I was talking about earlier, done with funds that you’ve received from crypto donors and elsewhere? And what sort of things might an organization like that pursue to try to stave this off?\n\n\n**Eliezer:**I mean, I think mostly we’ve pursued a lot of lines of research that haven’t really panned out, which is a respectable thing to do. We did not know in advance that those lines of research would fail to pan out. If you’re doing research that you know will work, you’re probably not really doing any research. You’re just doing a pretense of research that you can show off to a funding agency.\n\n\nWe try to be real. We did things where we didn’t know the answer in advance. They didn’t work, but that was where the hope lay, I think. But, you know, having a research organization that keeps it real that way, that’s not an easy thing to do. And if you don’t have this very deep form of the security mindset, you will end up producing fake research and doing more harm than good, so I would not tell all the successful cryptocurrency people to run off and start their own research outfits.\n\n\nRedwood Research—I’m not sure if they can scale using more money, but you can give people more money and wait for them to figure out how to scale it later if they’re the kind who won’t just run off and spend it, which is what MIRI aspires to be.\n\n\n**Ryan:**And you don’t think the education path is a useful path? Just educating the world?\n\n\n**Eliezer:**I mean, I would give myself and MIRI credit for why the world isn’t just walking blindly into the whirling razor blades here, but it’s not clear to me how far education scales apart from that. You can get more people aware that we’re walking directly into the whirling razor blades, because even if only 10% of the people can get it, that can still be a bunch of people. But then what do they do? I don’t know. Maybe they’ll be able to do something later.\n\n\nCan you get all the people? 
Can you get all the politicians? Can you get the people whose job incentives are against them admitting this to be a problem? I have various friends who report, like, “Ah yes, if you talk to researchers at OpenAI in *private*, they are very worried and say that they cannot be that worried in public.”\n\n\n \n\n\n[How Long Do We Have?](https://youtu.be/gA1sNLL6yg4?t=5640)\n-----------------------------------------------------------\n\n\n**Ryan:**This is all a giant [Moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) trap, is sort of what you’re telling us. I feel like this is the part of the conversation where we’ve gotten to the end and the doctor has said that we have some sort of terminal illness. And at the end of the conversation, I think the patient, David and I, have to ask the question, “Okay, doc, how long do we have?” Seriously, what are we talking about here if you turn out to be correct? Are we talking about years? Are we talking about decades? What’s your idea here?\n\n\n**David:**What are *you* preparing for, yeah?\n\n\n**Eliezer:**How the hell would I know? Enrico Fermi was saying that fission chain reactions were 50 years off if they could ever be done at all, two years before he built the first nuclear pile. The Wright brothers were saying heavier-than-air flight was 50 years off shortly before they built the first Wright Flyer. How on earth would I know?\n\n\nIt could be three years. It could be 15 years. We could get that AI winter I was hoping for, and it could be 16 years. I’m not really seeing 50 without some kind of giant civilizational catastrophe. And to be clear, whatever civilization arises after that would probably, I’m guessing, end up stuck in just the same trap we are.\n\n\n**Ryan:**I think the other thing that the patient might do at the end of a conversation like this is to also consult with other doctors. I’m kind of curious who we should talk to on this quest.
Who are some people that if people in crypto want to hear more about this or learn more about this, or even we ourselves as podcasters and educators want to pursue this topic, who are the other individuals in the AI alignment and safety space you might recommend for us to have a conversation with?\n\n\n**Eliezer:**Well, the person who actually holds a coherent technical view, who disagrees with me, is named Paul Christiano. He does not write Harry Potter fan fiction, and I expect him to have a harder time explaining himself in concrete terms. But that is the main technical voice of opposition. If you talk to other people in the effective altruism or AI alignment communities who disagree with this view, they are probably to some extent repeating back their misunderstandings of Paul Christiano’s views. \n\n\nYou could try Ajeya Cotra, who’s worked pretty directly with Paul Christiano and I think sometimes aspires to explain these things that Paul is not the best at explaining. I’ll throw out Kelsey Piper as somebody who would be good at explaining—like, would not claim to be a technical person on these issues, but is good at explaining the part that she does know. \n\n\nWho else disagrees with me? I’m sure Robin Hanson would be happy to come on… well, I’m not sure he’d be happy to come on this podcast, but Robin Hanson disagrees with me, and I kind of feel like the [famous argument we had](https://intelligence.org/tag/the-hanson-yudkowsky-ai-foom-debate) back in the early 2010s, late 2000s about how this would all play out—I basically feel like this was the Yudkowsky position, this is the Hanson position, and then reality was over here, [well to the Yudkowsky side](https://intelligence.org/2017/10/20/alphago/) of the Yudkowsky position in the Yudkowsky-Hanson debate. But Robin Hanson does not feel that way, and would probably be happy to expound on that at length. \n\n\nI don’t know. It’s not hard to find opposing viewpoints. 
The ones that’ll stand up to a few solid minutes of cross-examination from somebody who knows which parts to cross-examine, that’s the hard part.\n\n\n \n\n\n[Bearish Hope](https://youtu.be/gA1sNLL6yg4?t=5895)\n---------------------------------------------------\n\n\n**Ryan:**You know, I’ve read a lot of your writings and listened to you on previous podcasts. One was in 2018 [on the Sam Harris podcast](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/). This conversation feels to me like the most dire you’ve ever seemed on this topic. And maybe that’s not true. Maybe you’ve sort of always been this way, but it seems like your hope that we solve this issue has declined. I’m wondering if you feel like that’s the case, and if you could sort of summarize your take on all of this as we close out this episode and offer, I guess, any concluding thoughts here.\n\n\n**Eliezer:**I mean, I don’t know if you’ve got a time limit on this episode? Or is it just as long as it runs?\n\n\n**Ryan:**It’s as long as it needs to be, and I feel like this is a pretty important topic. So you answer this however you want.\n\n\n**Eliezer:**Alright. Well, there was a conference one time on “What are we going to do about looming risk of AI disaster?”, and Elon Musk attended that conference. And I was like: Maybe this is it. Maybe this is when the powerful people notice, and it’s one of the relatively more technical powerful people who could be noticing this. And maybe this is where humanity finally turns and starts… not quite fighting back, because there isn’t an external enemy here, but conducting itself with… I don’t know. Acting like it cares, maybe?\n\n\nAnd what came out of that conference, well, was OpenAI, which was fairly nearly the worst possible way of doing anything. This is not a problem of “Oh no, what if secret elites get AI?” It’s that nobody knows how to build the thing.
If we *do* have an alignment technique, it’s going to involve running the AI with a bunch of careful bounds on it where you don’t just throw all the cognitive power you have at something. You have limits on the for loops. \n\n\nAnd whatever it is that could possibly save the world, like go out and turn all the GPUs and the server clusters into Rubik’s cubes or something else that prevents the world from ending when somebody else builds another AI a few weeks later—anything that could do that is an artifact where somebody else could take it and take the bounds off the for loops and use it to destroy the world.\n\n\nSo let’s open up everything! Let’s accelerate everything! It was like GPT-3’s version, though GPT-3 didn’t exist back then—but it was like ChatGPT’s blind version of throwing the ideals at a place where they were *exactly* the wrong ideals to solve the problem.\n\n\nAnd the problem is that demon summoning is easy and angel summoning is much harder. Open sourcing all the demon summoning circles is not the correct solution. And I’m using Elon Musk’s own terminology here. He talked about AI as “summoning the demon”, which, not accurate, but—and then the solution was to put a demon summoning circle in every household. \n\n\nAnd, why? Because his friends were calling him a Luddite once he’d expressed any concern about AI at all. So he picked a road that sounded like “openness” and “accelerating technology”! So his friends would stop calling him a “Luddite”.\n\n\nIt was very much the worst—you know, maybe not the literal, actual worst possible strategy, but so very far pessimal.\n\n\nAnd that was it.\n\n\nThat was like… that was me in 2015 going like, “Oh. So this is what humanity will elect to do. We will not rise above.
We will not have more grace, not even here at the very end.”\n\n\nSo that is, you know, that is when I did my crying late at night and then picked myself up and fought and fought and fought until I had run out all the avenues that I seemed to have the capability to pursue. There’s, like, more things, but they require scaling my efforts in a way that I’ve never been able to make them scale. And all of it’s pretty far-fetched at this point anyways.\n\n\nSo, you know, what’s changed over the years? Well, first of all, I ran out some remaining avenues of hope. And second, things got to be such a disaster, such a *visible* disaster—the AIs have gotten powerful enough, and it became clear enough that we do not know how to align these things—that I could actually say what I’ve been thinking for a while and not just have people go completely, like, “What are you *saying* about all this?”\n\n\nYou know, now the stuff that was obvious back in 2015 is starting to become visible in the distance to others and not just completely invisible. That’s what changed over time.\n\n\n \n\n\n[The End Goal](https://youtu.be/gA1sNLL6yg4?t=6230)\n---------------------------------------------------\n\n\n**Ryan:**What do you hope people hear out of this episode and out of your comments? Eliezer in 2023, who is sort of running on the last fumes of hope. What do you want people to get out of this episode? What are you planning to do?\n\n\n**Eliezer:**I don’t have concrete hopes here. You know, when everything is in ruins, you might as well speak the truth, right?
Maybe *somebody* hears it, *somebody* figures out something I didn’t think of.\n\n\nI mostly expect that this does more harm than good in the modal universe, because a bunch of people are like, “Oh, I have this brilliant, clever idea,” which is, you know, something that I was arguing against in 2003 or whatever, but you know, maybe somebody out there with the proper level of pessimism hears and thinks of something I didn’t think of.\n\n\nI suspect that if there’s hope at all, it comes from a technical solution, because the difference between technical problems and political problems is that at least the technical problems have solutions in principle. At least the technical problems are solvable. We’re not on course to solve this one, but I think anybody who’s hoping for a political solution has frankly not understood the technical problem. \n\n\nThey do not understand what it looks like to try to solve the political problem to such a degree that the world is not controlled by AI, because they don’t understand how easy it is to destroy the world with AI, given that the clock keeps ticking forward.\n\n\nThey’re thinking that they just have to stop some bad actor, and that’s why they think there’s a political solution.\n\n\nBut yeah, I don’t have concrete hopes. I didn’t come on this episode out of any concrete hope.\n\n\nI have no takeaways except, like, don’t make this thing worse.\n\n\nDon’t, like, go off and accelerate AI more. Don’t—if you have a brilliant solution to alignment—be like, “Ah yes, I have solved the whole problem. We just use the following clever trick.”\n\n\nYou know, “Don’t make things worse” isn’t very much of a message, especially when you’re pointing people at the field at all. But I have no winning strategy. Might as well go on this podcast as an experiment and say what I think and see what happens.
And probably no good ever comes of it, but you might as well go down fighting, right?\n\n\nIf there’s a world that survives, maybe it’s a world that survives because of a bright idea somebody had after listening to this podcast—that was *brighter*, to be clear, than the usual run of bright ideas that don’t work.\n\n\n**Ryan:**Eliezer, I want to thank you for coming on and talking to us today. I do.\n\n\nI don’t know if, by the way, you’ve seen that movie that David was referencing earlier, the movie *Don’t Look Up*, but I sort of feel like that news anchor, who’s talking to the scientist—is it Leonardo DiCaprio, David? And, uh, the scientist is talking about kind of dire straits for the world. And the news anchor just really doesn’t know what to do. I’m almost at a loss for words at this point.\n\n\n**David:**I’ve had nothing for a while now.\n\n\n**Ryan:**But one thing I can say is I appreciate your honesty. I appreciate that you’ve given this a lot of time and given this a lot of thought. Everyone, anyone who has heard you speak or read anything you’ve written knows that you care deeply about this issue and have given it a tremendous amount of your life force, in trying to educate people about it.\n\n\nAnd, um, thanks for taking the time to do that again today. I’ll—I guess I’ll just let the audience digest this episode in the best way they know how. But, um, I want to relay, from everybody in crypto and everybody listening to Bankless, their thanks for coming on and explaining.\n\n\n**Eliezer:**Thanks for having me. We’ll see what comes of it.\n\n\n**Ryan:**Action items for you, Bankless nation. We always end with some action items. Not really sure where to refer folks to today, but one thing I know we can refer folks to is MIRI, the Machine Intelligence Research Institute that Eliezer has been talking about through the episode. That is at [intelligence.org](https://intelligence.org/), I believe.
And some people in crypto have donated funds to this in the past. Vitalik Buterin is one of them. You can take a look at what they’re doing as well. That might be an action item for the end of this episode.\n\n\nUm, got to end with risks and disclaimers—man, this seems very trite, but our legal experts have asked us to say these at the end of every episode. “Crypto is risky. You could lose everything…”\n\n\n**Eliezer:**(*laughs*)\n\n\n**David:**Apparently not as risky as AI, though.\n\n\n**Ryan:**—But we’re headed west! This is the frontier. It’s not for everyone, but we’re glad you’re with us on the Bankless journey. Thanks a lot.\n\n\n**Eliezer:**And we are grateful for the crypto community’s support. Like, it was possible to end with even less grace than this.\n\n\n**Ryan:**Wow. (*laughs*)\n\n\n**Eliezer:**And you made a difference.\n\n\n**Ryan:**We appreciate you.\n\n\n**Eliezer:**You really made a difference.\n\n\n**Ryan:**Thank you.\n\n\n\n\n---\n\n\n[Q&A](https://twitter.com/i/spaces/1PlJQpZogzVGE)\n-------------------------------------------------\n\n\n**Ryan:**[… Y]ou gave us this quote, from, I think, someone who’s an executive director at MIRI: “We’ve given up hope, but not the fight.”\n\n\nCan you reflect on that for a bit? So it’s still possible to fight this, even if we’ve given up hope? And even if you’ve given up hope? Do you have any takes on this?\n\n\n**Eliezer:** I mean, what else is there to do? You don’t have good ideas. So you take your mediocre ideas, and your not-so-great ideas, and you pursue those until the world ends. Like, what’s supposed to be better than that?\n\n\n**Ryan:** We had some really interesting conversation flow out of this episode, Eliezer, as you can imagine. And David and I want to relay some questions that the community had for you, and thank you for being gracious enough to help with those questions in today’s Twitter Spaces.\n\n\nI’ll read something from Luke ethwalker.
“Eliezer has one pretty flawed point in his reasoning. He assumes that AI would have no need or use for humans because we have atoms that could be used for better things. But how could an AI use these atoms without an agent operating on its behalf in the physical world? Even in his doomsday scenario, the AI relied on humans to create the global, perfect killing virus. That’s a pretty huge hole in his argument, in my opinion.”\n\n\nWhat’s your take on this? That maybe AIs will dominate the digital landscape but because humans have a physical manifestation, we can still kind of beat the superintelligent AI in our physical world?\n\n\n**Eliezer:** If you were [an alien civilization](https://intelligence.org/posts/5wMcKNAwB6X4mp9og/that-alien-message) of a billion John von Neumanns, thinking at 10,000 times human speed, and you start out connected to the internet, you would want to not be just stuck on the internet, you would want to build that physical presence. You would not be content solely with working through human hands, despite the many humans who’d be lined up, cheerful to help you, you know. Bing already has its partisans. (*laughs*)\n\n\nYou wouldn’t be content with that, because the humans are very slow, glacially slow. You would like fast infrastructure in the real world, reliable infrastructure. And how do you build that, is then the question, and a whole lot of advanced analysis has been done on this question. I would point people again to Eric Drexler’s *Nanosystems*.\n\n\nAnd, sure, if you literally start out connected to the internet, then probably the fastest way — maybe not the only way, but it’s, you know, an easy way — is to get humans to do things. And then humans do those things. And then you have the desktop — not quite desktop, but you have the nanofactories, and then you don’t need the humans anymore. 
And this need not be advertised to the world at large while it is happening.\n\n\n**David:** So I can understand that perspective, like in the future, we will have better 3D printers — distant in the future, we will have ways where the internet can manifest in the physical world. But I think this argument does ride on a future state with technology that we don’t have today. Like, I don’t think if I was the internet — and that kind of is this problem, right? Like, this superintelligent AI just becomes the internet because it’s embedded in the internet. If I was the internet, how would I get myself to manifest in real life?\n\n\nAnd now, I am not an expert on the current state of robotics, or what robotics are connected to the internet. But I don’t think we have very strong tools today to start to create real-world manifestations of an internet-based AI. So like, would you say that this part of this problem definitely depends on some innovation, at like the robotics level?\n\n\n**Eliezer:** No, it depends on the AI being smart. It doesn’t depend on the humans having this technology; it depends on the AI being able to invent the technology.\n\n\nThis is, like, the central problem: the thing is smarter. Not in the way that the average listener to this podcast probably has an above-average IQ; in the way that humans are smarter than chimpanzees.\n\n\nWhat does that let humans do? Does it let humans be, like, really *clever* in how they play around with the stuff that’s on the ancestral savanna? Make *clever* use of grass, *clever* use of trees?\n\n\nThe humans invent technology. They build the technology. The technology is not there until the humans invent it, the humans conceive it.\n\n\nThe problem is, humans are not the upper bound. We don’t have the best possible brains for that kind of problem.
So the existing internet is more than connected enough to people and devices that you could build better technology than that, if you had invented the technology by thinking much, much faster and better than a human does.\n\n\n**Ryan:** Eliezer, this is a question from stirs, a Bankless Nation listener. He wants to ask the question about your explanation of why the AI will undoubtedly kill us. That seems to be your conclusion, and I’m wondering if you could kind of reinforce that claim. Like, for instance — and this is something David and I discussed after the episode, when we were debriefing on this — why exactly wouldn’t an AI, or couldn’t an AI just blast off of the Earth and go somewhere more interesting, and leave us alone? Like, why does it have to take our atoms and reassemble them? Why can’t it just, you know, set phasers to ignore?\n\n\n**Eliezer:** It could if it wanted to. But if it doesn’t want to, there is some initial early advantage. You get to colonize the universe slightly earlier if you consume all of the readily accessible energy on the Earth’s surface as part of your blasting off of the Earth process.\n\n\nIt would only need to care about us a very tiny fraction to spare us, this I agree. But caring a very tiny fraction is basically the same problem as 100% caring. It’s like, well, could you have a computer system that is usually like the Disk Operating System, but a tiny fraction of the time it’s Windows 11? Writing that is just as difficult as writing Windows 11. We still have to write all the Windows 11 software. Getting it to care a tiny little bit is the same problem as getting it to care 100%.\n\n\n**Ryan:** So Eliezer, is this similar to the relationship that humans have with other animals on planet Earth? I would say largely we really don’t… I mean, obviously, there’s no animal Bill of Rights. Animals have no legal protection in the human world, and we kind of do what we want and trample over their rights.
But it doesn’t mean we necessarily kill all of them. We just largely ignore them.\n\n\nIf they’re in our way, you know, we might take them out. And there have been whole classes of species that have gone extinct through human activity, of course; but there are still many that we live alongside, some successful species as well. Could we have that sort of relationship with an AI? Why isn’t that a reasonably high probability in your models?\n\n\n**Eliezer:** So first of all, all these things are *just* metaphors. AI is not going to be exactly like humans to animals.\n\n\nLeaving that aside for a second, the reason why this metaphor breaks down is that although the humans are smarter than the chickens, we’re not smarter than evolution, natural selection, cumulative optimization power over the last billion years and change. (You know, there’s evolution before that but it’s pretty slow, just, like, single-cell stuff.)\n\n\nThere are things that cows can do for us, that we cannot do for ourselves. In particular, make meat by eating grass. We’re smarter than the cows, but there’s a thing that designed the cows; and we’re faster than that thing, but we’ve been around for much less time. So we have not yet gotten to the point of redesigning the entire cow from scratch. And because of that, there’s a purpose to keeping the cow around alive.\n\n\nAnd humans, furthermore, being the kind of funny little creatures that we are — some people care about cows, some people care about chickens. They’re trying to fight for the cows and chickens having a better life, given that they have to exist at all. And there’s a long complicated story behind that. It’s not simple, the way that humans ended up in that [??]. It has to do with the particular details of our evolutionary history, and unfortunately it’s not just going to pop up out of nowhere.\n\n\nBut I’m drifting off topic here.
The basic answer to the question “where does that analogy break down?” is that I expect the superintelligences to be able to do better than natural selection, not just better than the humans.\n\n\n**David:** So I think your answer is that the separation between us and a superintelligent AI is orders of magnitude larger than the separation between us and a cow, or even between us and an ant. I think a large part of this argument rests on this superintelligence explosion — just going up an exponential curve of intelligence very, very quickly, which is like the premise of superintelligence.\n\n\nAnd Eliezer, I want to try and get an understanding of… A part of this argument about “AIs are going to come kill us” is buried in the Moloch problem. And Bankless listeners are pretty familiar with the concept of Moloch — the idea of coordination failure. The idea that the more we coordinate and stay in agreement with each other, the larger the incentive we create to defect.\n\n\nAnd the way that this is manifesting here is that even if we do have a bunch of humans who understand the AI alignment problem, and we all agree to only safely innovate in AI, to whatever degree that means, we still create the incentive for someone to fork off and develop AI faster, outside of what would be considered safe.\n\n\nAnd so I’m wondering if you could, if it does exist, give us the sort of lay of the land of all of these commercial entities? And what, if at all, they’re doing to have, I don’t know, an AI alignment team?\n\n\nSo like, for example, OpenAI. Does OpenAI have, like, an alignment department? With all the AI innovation going on, what does the commercial side of the AI alignment problem look like? Like, are people trying to think about these things? And to what degree are they being responsible?\n\n\n**Eliezer:** It looks like OpenAI has a bunch of people who it pays to do AI ethics stuff, but I don’t think they’re plugged very directly into Bing.
And, you know, they’ve got that department because back when they were founded, some of their funders were like, “Well, but ethics?” and OpenAI was like “Sure, we can buy some ethics. We’ll take this group of people, and we’ll put them over here and we’ll call them an alignment research department”.\n\n\nAnd, you know, the key idea behind ChatGPT is RLHF, which was invented by Paul Christiano. Paul Christiano had much more detailed ideas, and somebody might have reinvented this one, but anyway. I don’t think that went through OpenAI, but I could be mistaken. Maybe somebody will be like “Well, actually, Paul Christiano was working at OpenAI at the time”; I haven’t checked the history in very much detail.\n\n\nA whole lot of the people who were most concerned with this “ethics” left OpenAI, and founded Anthropic. And I’m *still* not sure that Anthropic has sufficient leadership focus in that direction.\n\n\nYou know, like, put yourself in the shoes of a corporation! You can spend some little fraction of your income on putting together a department of people who will write safety papers. But then the actual behavior that we’ve seen is that they storm ahead, and they use one or two of the ideas that came out from anywhere in the whole [alignment] field. And they get as far as that gets them. And if that doesn’t get them far enough, they just keep storming ahead at maximum pace, because, you know, Microsoft doesn’t want to lose to Google, and Google doesn’t want to lose to Microsoft.\n\n\n**David:** So it sounds like your attitude on the efforts of AI alignment in commercial entities is, like, they’re not even doing 1% of what they need to be doing.\n\n\n**Eliezer:** I mean, they could spend [10?] times as much money and that would not get them to 10% of what they need to be doing.\n\n\nIt’s not just a problem of “oh, they could spend the resources, but they don’t want to”.
It’s a question of “how do we even spend the resources to get the info that they need”.\n\n\nBut that said, not knowing how to do that, not really understanding that they need to do that, they are just charging ahead anyways.\n\n\n**Ryan:** Eliezer, is OpenAI the most advanced AI project that you’re aware of?\n\n\n**Eliezer:** Um, no, but I’m not going to go name the competitor, because then people will be like, “Oh, I should go work for them”, you know? I’d rather they didn’t.\n\n\n**Ryan:** So it’s like, OpenAI is this organization that was kind of — you were talking about it at the end of the episode, and for crypto people who aren’t aware of some of the players in the field — were they spawned from that 2015 conference that you mentioned? It’s kind of a completely open-source AI project?\n\n\n**Eliezer:** That was the original suicidal vision, yes. But…\n\n\n**Ryan:** And now they’re bent on commercializing the technology, is that right?\n\n\n**Eliezer:** That’s an improvement, but not enough of one, because they’re still generating lots of noise and hype and directing more resources into the field, and storming ahead with the safety that they have instead of the safety that they need, and setting bad examples. And getting Google riled up and calling back in Larry Page and Sergey Brin to head up Google’s AI projects and so on. So, you know, it could be worse! It would be worse if they were open sourcing all the technology. But what they’re doing is still pretty bad.\n\n\n**Ryan:** What should they be doing, in your eyes? Like, what would be responsible use of this technology?\n\n\nI almost get the feeling that, you know, your take would be “stop working on it altogether”? And, of course, you know, to an organization like OpenAI that’s going to be heresy, even if maybe that’s the right decision for humanity. But what should they be doing?\n\n\n**Eliezer:** I mean, if you literally just made me dictator of OpenAI, I would change the name to “ClosedAI”. 
Because right now, they’re making it look like being “closed” is hypocrisy. They’re, like, being “closed” while keeping the name “OpenAI”, and that itself makes it look like closure is, like, not this thing that you do cooperatively so that humanity will not die, but instead this sleazy profit-making thing that you do while keeping the name “OpenAI”.\n\n\nSo that’s very bad; change the name to “ClosedAI”, that’s step one.\n\n\nNext. I don’t know if they *can* break the deal with Microsoft. But, you know, cut that off. None of this. No more hype. No more excitement. No more getting famous and, you know, getting your status off of like, “Look at how much closer *we* came to destroying the world! You know, we’re not there yet. But, you know, we’re at the *forefront* of destroying the world!” You know, stop grubbing for the Silicon Valley bragging cred of visibly being the leader.\n\n\nTake it all closed. If you got to make money, make money selling to businesses in a way that doesn’t generate a lot of hype and doesn’t visibly push the field. And then try to figure out systems that are more alignable and not just more powerful. And at the end of that, they would fail, because, you know, it’s not easy to do that. And the world would be destroyed. But they would have died with more dignity. Instead of being like, “Yeah, yeah, let’s like push humanity off the cliff ourselves for the ego boost!”, they would have done what they could, and then failed.\n\n\n**David:** Eliezer, do you think anyone who’s building AI — Elon Musk, Sam Altman at OpenAI – do you think progressing AI is fundamentally bad?\n\n\n**Eliezer:** I mean, there are *narrow* forms of progress, especially if you *didn’t open-source them*, that would be good. Like, you can imagine a thing that, like, pushes capabilities a bit, but is much more alignable.\n\n\nThere are people working in the field who I would say are, like, sort of *unabashedly* good. 
Like, Chris Olah is taking a microscope to these giant inscrutable matrices and trying to figure out what goes on inside there. Publishing that might possibly even push capabilities a little bit, because if people know what’s going on inside there, they can make better ones. But the question of like, whether to closed-source *that* is, like, much more fraught than the question of whether to closed-source the stuff that’s just pure capabilities.\n\n\nBut that said, the people who are just like, “Yeah, yeah, let’s do more stuff! And let’s tell the world how we did it, so they can do it too!” That’s just, like, unabashedly bad.\n\n\n**David:** So it sounds like you do see paths forward in which we can develop AI in responsible ways. But it’s really this open-source, open-sharing-of-information to allow anyone and everyone to innovate on AI,  that’s really the path towards doom. And so we actually need to keep this knowledge private. Like, normally knowledge…\n\n\n**Eliezer:** No, no, no, no. Open-sourcing all this stuff is, like, a *less* dignified path straight off the edge. I’m not saying that all we need to do is keep everything closed and in the right hands and it will be fine. That will also kill you.\n\n\nBut that said, if you have stuff and you *do not know* how to make it not kill everyone, then broadcasting it to the world is even *less* dignified than being like, “Okay, maybe we should *keep* working on this until we can figure out how to make it *not* kill everyone.”\n\n\nAnd then the other people will, like, go storm ahead on *their* end and kill everyone. But, you know, you won’t have *personally* slaughtered Earth. 
And that is more dignified.\n\n\n**Ryan:** Eliezer, I know I was kind of shaken after our episode, not having heard the full AI alignment story before, or at least not having listened to it for a while.\n\n\nAnd I think that in combination with the sincerity through which you talk about these subjects, and also me sort of seeing these things on the horizon, this episode was kind of shaking for me and caused a lot of thought.\n\n\nBut I’m noticing there is a cohort of people who are dismissing this take and your take specifically in this episode as Doomerism. This idea that every generation thinks it’s, you know, the end of the world and the last generation.\n\n\nWhat’s your take on this critique that, “Hey, you know, it’s been other things before. There was a time where it was nuclear weapons, and we would all end in a mushroom cloud. And there are other times where we thought a pandemic was going to kill everyone. And this is just the latest Doomerist AI death cult.”\n\n\nI’m sure you’ve heard that before. How do you respond?\n\n\n**Eliezer:** That if you literally know nothing about nuclear weapons or artificial intelligence, except that somebody has claimed of both of them that they’ll destroy the world, then sure, you can’t tell the difference. As far as you can tell, nuclear weapons were claimed to destroy the world, and then they didn’t destroy the world, and then somebody claimed that about AI.\n\n\nSo, you know, Laplace’s rule of succession: at most a 1/3 probability that AI will destroy the world, if nuclear weapons and AI are the only cases.\n\n\nYou can bring in so many more cases than that. Why, people should have known in the first place that nuclear weapons wouldn’t destroy the world! 
Because their next door neighbor once said that the sky was falling, and that didn’t happen; and if their next-door neighbor was [??], how could the people saying that nuclear weapons would destroy the world be right?\n\n\nAnd basically, as long as people are trying to run off of models of human psychology, to derive empirical information about the world, they’re stuck. They’re in a trap they can never get out of. They’re going to always be trying to psychoanalyze the people talking about nuclear weapons or whatever. And the only way you can actually get better information is by understanding how nuclear weapons work, understanding what the international equilibrium with nuclear weapons looks like. And the international equilibrium, by the way, is that nobody profits from setting off small numbers of nuclear weapons, especially given that they know that large numbers of nukes would follow. And, you know, that’s why they haven’t been used yet. There was nobody who made a buck by starting a nuclear war. The nuclear war was clear, the nuclear war was legible. People knew what would happen if they fired off all the nukes.\n\n\nThe analogy I sometimes try to use with artificial intelligence is, “Well, suppose that instead you could make nuclear weapons out of a billion pounds of laundry detergent. And they spit out gold until you make one that’s too large, whereupon it ignites the atmosphere and kills everyone. *And* you can’t calculate exactly how large is too large. *And* the international situation is that the private research labs spitting out gold don’t want to hear about igniting the atmosphere.” And that’s the technical difference. You need to be able to tell whether or not that is true as a scientific claim about how reality, the universe, the environment, artificial intelligence, actually works. What actually happens when the giant inscrutable matrices go past a certain point of capability? 
It’s a falsifiable hypothesis.\n\n\nYou know, if it *fails* to be falsified, then everyone is dead, but that doesn’t actually change the basic dynamic here, which is, you can’t figure out how the world works by psychoanalyzing the people talking about it.\n\n\n**David:** One line of questioning that has come up inside of the Bankless Nation Discord is the idea that we need to train AI with data, lots of data. And where are we getting that data? Well, humans are producing that data. And when humans produce that data, by nature of the fact that it was produced by humans, that data has our human values embedded in it somehow, some way, just by the aggregate nature of all the data in the world, which was created by humans that have certain values. And then AI is trained on that data that has all the human values embedded in it. And so there’s actually no way to create an AI that isn’t trained on data that is created by humans, and that data has human values in it.\n\n\nIs there anything to this line of reasoning about a potential glimmer of hope here?\n\n\n**Eliezer:** There’s a distant glimmer of hope, which is that an AI that is trained on tons of human data in this way probably understands some things about humans. And because of that, there’s a branch of research hope within alignment, which is something that like, “Well, this AI, to be able to predict humans, needs to be able to predict the thought processes that humans are using to make their decisions. So can we thereby point to human values inside of the knowledge that the AI has?”\n\n\nAnd this is, like, very nontrivial, because the simplest theory that you use to predict what humans decide next, does not have what you might term “valid morality under reflection” as a clearly labeled primitive chunk inside it that is directly controlling the humans, and which you need to understand on a scientific level to understand the humans.\n\n\nThe humans are full of hopes and fears and thoughts and desires. 
And somewhere in all of that is what we call “morality”, but it’s not a clear, distinct chunk, where an alien scientist examining humans and trying to figure out just purely on an empirical level “how do these humans work?” would need to point to one particular chunk of the human brain and say, like, “Ahh, that circuit there, the morality circuit!”\n\n\nSo it’s not easy to point to inside the AI’s understanding. There is not currently any obvious way to actually promote that chunk of the AI’s understanding to then be in control of the AI’s planning process. As it must be complicatedly pointed to, because it’s not just a simple empirical chunk for explaining the world.\n\n\nAnd basically, I don’t think that is actually going to be the route you should try to go down. You should try to go down something much simpler than that. The problem is not that we are going to fail to convey some *complicated subtlety* of human value. The problem is that we do not know how to align an AI on a task like “put two identical strawberries on a plate” without destroying the world.\n\n\n(Where by “put two identical strawberries on the plate”, the concept is that’s invoking enough power that it’s not safe: an AI that can build two strawberries identical down to the cellular level. Like, that’s a powerful AI. Aligning it isn’t simple. If it’s powerful enough to do that, it’s also powerful enough to destroy the world, etc.)\n\n\n**David:** There’s like a number of other lines of logic I could try to go down, but I think I would start to feel like I’m in the bargaining phase of death. Where it’s like “Well, what about this? What about that?”\n\n\nBut maybe to summate all of the arguments, is to say something along the lines of like, “Eliezer, how much room do you give for the long tail of black swan events? But these black swan events are actually us finding a solution for this thing.” So, like, a reverse black swan event where we actually don’t know how we solve this AI alignment problem. 
But really, it’s just a bet on human ingenuity. And AI hasn’t taken over the world *yet*. But there’s space between now and then, and human ingenuity will be able to fill that gap, especially when the time comes?\n\n\nLike, how much room do you leave for the long tail of just, like, “Oh, we’ll discover a solution that we can’t really see today”?\n\n\n**Eliezer:** I mean, on the one hand, that hope is all that’s left, and all that I’m pursuing. And on the other hand, in the process of actually pursuing that hope I do feel like I’ve gotten some feedback indicating that this hope is not necessarily very large.\n\n\nYou know, when you’ve got stage four cancer, is there still hope that your body will just rally and suddenly fight off the cancer? Yes, but it’s not what usually happens. And I’ve seen people come in and try to direct their ingenuity at the alignment problem and most of them all invent the *same* small handful of bad solutions. And it’s harder than usual to direct human ingenuity at this.\n\n\nA lot of them are just, like — you know, with capabilities ideas, you run out and try them and they mostly don’t work. And some of them do work and you publish the paper, and you get your science [??], and you get your ego boost, and maybe you get a job offer someplace.\n\n\nAnd with the alignment stuff you can try to run through the analogous process, but the stuff we need to align is mostly not here yet. You can try to invent the smaller large language models that are public, you can go to work at a place that has access to larger large language models, you can try to do these very crude, very early experiments, and getting the large language models to at least not threaten your users with death —\n\n\n— *which isn’t the same problem at all*. 
It just kind of looks related.\n\n\nBut you’re at least trying to get AI systems that do what you want them to do, and not do other stuff; and that is, at the very core, a similar problem.\n\n\nBut the AI systems are not very powerful, they’re not running into all sorts of problems that you can predict will crop up later. And people just, kind of — like, mostly people short out. They do pretend work on the problem. They’re desperate to help, they got a grant, they now need to show the people who made the grant that they’ve made progress. They, you know, paper mill stuff.\n\n\nSo the human ingenuity is not functioning well right now. You cannot be like, “Ah yes, this present field full of human ingenuity, which is working great, and coming up with lots of great ideas, and building up its strength, will continue at this pace and make it to the finish line in time!”\n\n\nThe capability stuff is *storming on* ahead. The human ingenuity that’s being directed at that is much larger, but also it’s got a much easier task in front of it.\n\n\nThe question is not “Can human ingenuity ever do this at all?” It’s “Can human ingenuity *finish* doing this before OpenAI blows up the world?”\n\n\n**Ryan:** Well, Eliezer, if we can’t trust in human ingenuity, is there any possibility that we can trust in AI ingenuity? And here’s what I mean by this, and perhaps you’ll throw a dart in this as being hopelessly naive.\n\n\nBut is there the possibility we could ask a reasonably intelligent, maybe almost superintelligent AI, how we might fix the AI alignment problem? And for it to give us an answer? 
Or is that really not how superintelligent AIs work?\n\n\n**Eliezer:** I mean, if you literally build a superintelligence and for some reason it was motivated to answer you, then sure, it could answer you.\n\n\nLike, if Omega comes along from a distant supercluster and offers to pay the local superintelligence lots and lots of money (or, like, mass or whatever) to give you a correct answer, then sure, it knows the correct answer; it can give you the correct answers.\n\n\nIf it *wants* to do that, you must have *already* solved the alignment problem. This reduces the problem of solving alignment to the problem of solving alignment. No progress has been made here.\n\n\nAnd, like, working on alignment is actually one of the most difficult things you could possibly try to align.\n\n\nLike, if I had the health and was trying to die with more dignity by building a system and aligning it as best I could figure out how to align it, I would be targeting something on the order of “build two strawberries and put them on a plate”. But instead of building two identical strawberries and putting them on a plate, you — don’t actually do this, this is not the best thing you should do —\n\n\n— but if for example you could safely align “turning all the GPUs into Rubik’s cubes”, then that *would* prevent the world from being destroyed two weeks later by your next follow-up competitor.\n\n\nAnd that’s *much easier* to align an AI on than trying to get the AI to solve alignment for you. You could be trying to build something that would *just* think about nanotech, just think about the science problems, the physics problems, the chemistry problems, the synthesis pathways. 
\n\n\n(The open-air operation to find all the GPUs and turn them into Rubik’s cubes would be harder to align, and that’s why you shouldn’t actually try to do that.)\n\n\nMy point here is: whereas [with] alignment, you’ve got to think about AI technology and computers and humans and intelligent adversaries, and distant superintelligences who might be trying to exploit your AI’s imagination of those distant superintelligences, and ridiculous weird problems that would take so long to explain.\n\n\nAnd it just covers this enormous amount of territory, where you’ve got to understand how humans work, you’ve got to understand how adversarial humans might try to exploit and break an AI system — because if you’re trying to build an aligned AI that’s going to run out and operate in the real world, it would have to be resilient to those things.\n\n\nAnd they’re just hoping that the AI is going to do their homework for them! But it’s a chicken and egg scenario. And if you could actually get an AI to help you with something, you would not try to get it to help you with something as weird and not-really-all-that-effable as alignment. You would try to get it to help with something much simpler that could prevent the next AGI down the line from destroying the world.\n\n\nLike nanotechnology. There’s a whole bunch of advanced analysis that’s been done of it, and the *kind of thinking* that you have to do about it is so much more straightforward and so much less fraught than trying to, you know… And how do you even tell if it’s lying about alignment?\n\n\nIt’s hard to tell whether *I’m* telling you the truth about all this alignment stuff, right? 
Whereas if I talk about the tensile strength of sapphire, this is easier to check through the lens of logic.\n\n\n**David:** Eliezer, I think one of the reasons why perhaps this episode impacted Ryan – this was an analysis from a Bankless Nation community member — that this episode impacted Ryan a little bit more than it impacted me is because Ryan’s got kids, and I don’t. And so I’m curious, like, what do you think — like, looking 10, 20, 30 years in the future, where you see this future as inevitable, do you think it’s futile to project out a future for the human race beyond, like, 30 years or so?\n\n\n**Eliezer:** Timelines are very hard to project. 30 years does strike me as unlikely at this point. But, you know, timing is famously much harder to forecast than saying that things can be done at all. You know, you got your people saying it will be 50 years out two years before it happens, and you got your people saying it’ll be two years out 50 years before it happens. And, yeah, it’s… Even if I knew *exactly* how the technology would be built, and *exactly* who was going to build it, I *still* wouldn’t be able to tell you how long the project would take because of project management chaos.\n\n\nNow, since I don’t know exactly the technology used, and I don’t know exactly who’s going to build it, and the project may not even have started yet, how can I possibly figure out how long it’s going to take?\n\n\n**Ryan:** Eliezer, you’ve been quite generous with your time to the crypto community, and we just want to thank you. I think you’ve really opened a lot of eyes. This isn’t going to be our last AI podcast at Bankless, certainly. I think the crypto community is going to dive down the rabbit hole after this episode. So thank you for giving us the 400-level introduction into it.\n\n\nAs I said to David, I feel like we waded straight into the deep end of the pool here. But that’s probably the best way to address the subject matter. 
I’m wondering as we kind of close this out, if you could leave us — it is part of the human spirit to keep and to maintain slivers of hope here or there. Or as maybe someone you work with put it – to *fight the fight*, even if the hope is gone.\n\n\n100 years in the future, if humanity is still alive and functioning, if a superintelligent AI has not taken over, but we live in coexistence with something of that caliber — imagine if that’s the case, 100 years from now. How did it happen?\n\n\nIs there some possibility, some sort of narrow pathway by which we can navigate this? And if this were 100 years from now the case, how could you imagine it would have happened?\n\n\n**Eliezer:** For one thing, I predict that if there’s a glorious transhumanist future (as it is sometimes conventionally known) at the end of this, I don’t predict it was there by getting like, “coexistence” with superintelligence. That’s, like, some kind of weird, inappropriate analogy based off of humans and cows or something.\n\n\nI predict alignment was solved. I predict that if the humans are alive at all, that the superintelligences are being quite nice to them.\n\n\nI have basic moral questions about whether it’s ethical for humans to have human children, if having transhuman children is an option instead. Like, these humans running around? Are they, like, the current humans who wanted eternal youth but, like, not the brain upgrades? Because I do see the case for letting an existing person choose “No, I just want eternal youth and no brain upgrades, thank you.” But then if you’re deliberately having the equivalent of a very crippled child when you could just as easily have a not crippled child.\n\n\nLike, should humans in their present form be around together? Are we, like, kind of too sad in some ways? I have friends, to be clear, who disagree with me so much about this point. 
(*laughs*) But yeah, I’d say that the happy future looks like beings of light having lots of fun in a nicely connected computing fabric powered by the Sun, if we haven’t taken the Sun apart yet. Maybe there’s enough real sentiment in people that you just, like, clear all the humans off the Earth and leave the entire place as a park. And even, like, maintain the Sun, so that the Earth is still a park even after the Sun would have ordinarily swollen up or dimmed down.\n\n\nYeah, like… That was always the thing to be fought for. That was always the point, from the perspective of everyone who’s been in this for a long time. Maybe not literally everyone, but like, the whole old crew.\n\n\n**Ryan:** That is a good way to end it: with some hope. Eliezer, thanks for joining the crypto community on this collectibles call and for this follow-up Q&A. We really appreciate it.\n\n\n**michaelwong.eth:** Yes, thank you, Eliezer.\n\n\n**Eliezer:** Thanks for having me.\n\n\nThe post [Yudkowsky on AGI risk on the Bankless podcast](https://intelligence.org/2023/03/14/yudkowsky-on-agi-risk-on-the-bankless-podcast/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). (Published March 14, 2023; posted by Rob Bensinger.)\n\n\n---\n\n\n**[Comments on OpenAI’s “Planning for AGI and beyond”](https://intelligence.org/2023/03/14/comments-on-openais-planning-for-agi-and-beyond/)** (Machine Intelligence Research Institute blog, March 14, 2023)\n\n\nSam Altman shared me on a draft of his OpenAI blog post [Planning for AGI and beyond](https://openai.com/blog/planning-for-agi-and-beyond), and I left some comments, reproduced below without typos and with some added hyperlinks. 
Where the final version of the OpenAI post differs from the draft, I’ve noted that as well, making text Sam later cut red and text he added blue.\n\n\nMy overall sense is that Sam deleted text and occasionally rephrased sentences so as to admit more models (sometimes including mine), but didn’t engage with the arguments enough to shift his own probability mass around on the important disagreements.\n\n\nOur disagreements are pretty major, as far as I can tell. With my comments, I was hoping to spark more of a back-and-forth. Having failed at that, I’m guessing part of the problem is that I didn’t phrase my disagreements bluntly or strongly enough, while also noting various points of agreement, which might have overall made it sound like I had only minor disagreements.\n\n\n\nTo help with that, I’ve added blunter versions below, in a bunch of cases where I don’t think the public version of the post fully takes into account the point I was trying to make.\n\n\n(Though I don’t want this to take away from the positive aspects of the post, since I think these are very important as well. I put in a bunch of positive comments on the original draft, in large part because I think it’s worth acknowledging and reinforcing whatever process/author drafted the especially reasonable paragraphs.)\n\n\nI don’t expect Sam to hear me make a blunt claim and instantly update to agreeing with me, but I put more probability on us converging over time, and clearly stating our disagreements for the benefit of readers, if he understands my claims, knows I still disagree, and feels license to push back on specific things I’ve re-asserted.\n\n\n\n\n---\n\n\nFormatting note: The general format is that I include the text of Sam’s original draft (that I commented on), my comment, and the text of the final post. 
That said, I don’t want to go blasting someone’s old private drafts across the internet just because they shared that draft with me, so in some cases, the original text is redacted, at Sam’s request.\n\n\n\n\n---\n\n\n**Sam’s draft:** Our mission is to ensure that AGI benefits all of humanity. The creation of AGI should be a tremendous shared triumph that everyone contributes to and benefits from; it will be the result of the collective technological and societal progress of humanity over millennia.\n\n\n**My comment:** +1\n\n\n**Sam’s post:** Our mission is to ensure that artificial general intelligence— AI systems that are generally smarter than humans—benefits all of humanity.\n\n\n\n\n---\n\n\n**Sam’s draft:** Of course our current progress could hit a wall, but if AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging our economy, and aiding in the discovery of new scientific knowledge.\n\n\n**My comment:** seems to me an understatement :-p\n\n\n(unlocking nanotech; uploading minds; copying humans; interstellar probes that aren’t slowed down by needing to cradle bags of meat, and that can have the minds beamed to them; energy abundance; ability to run civilizations on computers in the cold of space; etc. 
etc., are all things that i expect to follow from automated scientific & technological development)\n\n\n(seems fine to avoid the more far-out stuff, and also fine to only say things that you personally believe, but insofar as you also expect some of this tech to be within-reach in 50 sidereal years after AGI, i think it’d be virtuous to acknowledge)\n\n\n**Sam’s post:** If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.\n\n\n**Blunter follow-up:** seems to undersell the technological singularity, and the fact that the large-scale/coarse-grain shape of the future will be governed by superintelligences.\n\n\n\n\n---\n\n\n**Sam’s draft:** On the other hand, AGI would also come with serious risk of misuse and drastic accidents. Because the upside of AGI is so great, we do not believe it’s possible or desirable for society to stop its development forever; instead, we have to figure out how to get it right. [1]\n\n\n**My comment:** +1\n\n\n**Sam’s post:** On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.\n\n\n**Blunter follow-up:** still +1, with the caveat that accident risk >> misuse risk\n\n\n\n\n---\n\n\n**Sam’s draft:** 1) We want AGI to empower humanity to maximally flourish in the universe. 
We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of human will.\n\n\n**My comment:** i think i agree with the sentiment here, but i disagree with parts of the literal denotation\n\n\nfor one, i sure hope for an unqualified utopia, and think there’s a chance that superintelligent assistance could figure out how to get one (cf [fun theory](https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence)).\n\n\n(it is ofc important to note that \"superintelligences [puppet](https://www.lesswrong.com/posts/vwnSPgwtmLjvTK2Wa/amputation-of-destiny) the humans through the motions of a utopia\" is not in fact a utopia, and that the future will undoubtedly include tradeoffs (including continuing to let people make their own mistakes and learn their own lessons), and so in that sense i agree that it wouldn’t be an \"unqualified utopia\", even in the best case)\n\n\n…though i don’t currently [expect](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) us to do that well, so i don’t technically disagree with the literal phrasing you chose there.\n\n\ni do have qualms about \"we want AGI to be an amplifier of human will\". there’s a bunch of ways that this seems off-kilter to me. my basic qualm here is that i think getting a wonderful future is more of a fragile operation than simply cranking up everybody’s \"power level\" simultaneously, roughly analogously to how spoiling a child isn’t the best way for them to grow up.\n\n\ni’d stand full-throatedly behind \"we want AGI to be an amplifier of all the best parts of humanity\".\n\n\nalso, i ofc ultimately want AGI that are also people, to be humanity’s [friends](https://www.lesswrong.com/posts/HoQ5Rp7Gs6rebusNP/superintelligent-ai-is-necessary-for-an-amazing-future-but-1) as we explore the universe and so on. 
(though, stating the obvious, i think we should aim to [avoid](https://www.lesswrong.com/posts/gb6zWstjmkYHLrbrg/can-t-unbirth-a-child) [personhood](https://www.lesswrong.com/posts/wqDRRx9RqwKLzWt7R/nonperson-predicates) in our early AGIs, for various reasons.)\n\n\n**Sam’s post:** 1. We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.\n\n\n**Blunter follow-up:** we can totally get an unqualified utopia. also this \"amplifier of humanity\" thing sounds like an [applause light](https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights)—though i endorse certain charitable interpretations that i can wring from it (that essentially amount to [CEV](https://arbital.com/p/cev/) (as such things usually do)), at the same time i disendorse other interpretations.\n\n\n\n\n---\n\n\n**Sam’s draft:** 2) We want the benefits of, access to, and governance of AGI to be widely and fairly shared.\n\n\n**My comment:** +1 to benefits of. i have lots more qualms about \"access to\" and \"governance of\".\n\n\nre \"access to\", my guess is that early AGIs will be able to attain a decisive strategic advantage over the rest of the world entire. saying \"everyone should have equal access\" seems to me like saying \"a nuclear bomb in every household\"; it just sounds kinda mad.\n\n\ni’d agree that once the world has exited the acute risk period, it’s critical for access to AGI tech to be similarly available to all. 
but that is, in my book, a critical distinction.\n\n\n(so access-wise, i agree long-term, but not short-term.)\n\n\ngovernance-wise, my current state is something like: in the short term, using design-by-committee to avert the destruction of the world sounds like a bad idea; and in the long term, i think you’re looking at stuff at least as crazy as people running thousands of copies of their own brain at 1000x speedup and i think it would be dystopian to try to yoke them to, like, the will of the flesh-bodied American taxpayers (or whatever).\n\n\nthere’s something in the spirit of “distributed governance” that i find emotionally appealing, but there’s also lots and lots of stuff right nearby, that would be catastrophic, dystopian, or both, and that implementation would be likely to stumble into in practice. so i have qualms about that one.\n\n\n**Sam’s post:** [unchanged]\n\n\n**Blunter follow-up:** full-throated endorsement of “benefits of”. widely sharing access and governance in the short term seems reckless and destructive.\n\n\n\n\n---\n\n\n**Sam’s draft:** 3) We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory frequently plays out more strangely than expected in practice. 
We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to avoid a \"one shot to get it right\" scenario.\n\n\n**My comment:** i don’t think that this \"continuously deploy weak systems\" helps avoid the \"you have one shot\"-type problems that i predict we’ll face in the future.\n\n\n(this also strikes me as a rationalization for continuing to do the fun/cool work of [pushing the capabilities envelope](https://www.lesswrong.com/posts/vQNJrJqebXEWjJfnz/a-note-about-differential-technological-development), which i currently think is [net-bad](https://www.lesswrong.com/posts/tuwwLQT4wqk25ndxk/thoughts-on-agi-organizations-and-capabilities-work) for everyone)\n\n\n**Sam’s post:** 3. We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize \"one shot to get it right\" scenarios.\n\n\n**Blunter follow-up:** i think it takes a bunch more than \"continuously deploy weak systems\" to address the \"one shot to get it right\" scenarios, and none of the leading orgs (OAI included) seem to me to be on track to acquire the missing parts.\n\n\n\n\n---\n\n\n**Sam’s draft:** a gradual transition to a world with AGI is better than a sudden one\n\n\n**My comment:** for the record, i don’t think continuous deployment really smooths out the sharp changes that i expect in the future. 
(i’m not trying to argue the point here, just noting that there are some people who are predicting a sort of [sharp](https://twitter.com/robbensinger/status/1623835453110775810) [change](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization) that they think is ~unrelated to your choice of continuous deployment.)\n\n\n**Sam’s post:** [unchanged]\n\n\n**Blunter follow-up:** insofar as this sentence is attempting to reply to MIRI-esque concerns, i don’t think it’s a very good reply (for the reasons alluded to in the original comment).\n\n\n\n\n---\n\n\n**Sam’s draft:** A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place.\n\n\n**My comment:** i’m skeptical, similar to the above.\n\n\n**Sam’s post:** [unchanged]\n\n\n\n\n---\n\n\n**Sam’s draft:** It also allows us to learn as much as we can from our deployments, for society and AI to co-evolve, and to collectively figure out what we want while the stakes are relatively low.\n\n\n**My comment:** stating the obvious, other ways of learning as much as you can from the systems you have include efforts in transparency, legibility, and interpretability.\n\n\n**Sam’s post:** It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.\n\n\n\n\n---\n\n\n**Sam’s draft:** As our systems get closer to AGI, we are becoming increasingly more cautious with the creation and deployment of our models.\n\n\n**My comment:** +1\n\n\n**Sam’s post:** As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models.\n\n\n**Blunter follow-up:** +1 as a thing y’all should do. 
my guess is that you need to do it even faster and more thoroughly than you have been.\n\n\na more general blunt note: me +1ing various statements does not mean that i think the corporate culture has internalized the corresponding points, and it currently looks likely to me that OpenAI is not on track to live up to the admirable phrases in this post, and are instead on track to get everyone killed.\n\n\ni still think it’s valuable that the authors of this post are thinking about these points, and i hope that these sorts of public endorsements increase the probability that the corresponding sentiments actually end up fully internalized in the corporate culture, but i want to be clear that the actions are what ultimately matter, not the words.\n\n\n\n\n---\n\n\n**Sam’s draft:** Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are [existential](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/).\n\n\n**My comment:** hooray\n\n\n**Sam’s post:** [unchanged]\n\n\n**Blunter follow-up:** to be clear, my response to the stated sentiment is \"hooray\", and i’m happy to see this plainly and publicly stated, but i have strong doubts about whether and how this sentiment will be implemented in practice.\n\n\n\n\n---\n\n\n**Sam’s draft:** At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans.\n\n\n**My comment:** this is vague enough that i don’t quite understand what it’s saying; i’d appreciate it being spelled out more\n\n\n**Sam’s post:** At some point, the balance between the upsides and 
downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.\n\n\n\n\n---\n\n\n**Sam’s draft:** Second, we are working towards creating increasingly aligned (i.e., models that reliably follow their users’ intentions) and steerable models. Our shift from models like the first version of GPT-3 to ChatGPT and [InstructGPT](https://openai.com/blog/instruction-following/) is an early example of this.\n\n\n**My comment:** ftr, i think there’s a bunch of important [notkilleveryoneism](https://twitter.com/ESYudkowsky/status/1582666519846080512) work that won’t be touched upon by this approach\n\n\n**Sam’s post:** Second, we are working towards creating increasingly aligned and steerable models. Our shift from models like the first version of GPT-3 to [InstructGPT](https://openai.com/blog/instruction-following/) and [ChatGPT](https://chat.openai.com/) is an early example of this\n\n\n\n\n---\n\n\n**Sam’s draft:** Importantly, we think we often have to make progress on AI safety and capabilities together (and that it’s a false dichotomy to talk about them separately; they are correlated in many ways). Our best safety work has come from working with our most capable models.\n\n\n**My comment:** i agree that they’re often [connected](https://www.lesswrong.com/posts/vQNJrJqebXEWjJfnz/a-note-about-differential-technological-development), but i also think that traditionally the capabilities runs way out ahead of the alignment, and that e.g. if capabilities progress was paused now, there would be many years’ worth of alignment work that could be done to catch up (e.g. by doing significant work on transparency, legibility, and interpretability). 
and i think that if we do keep running ahead with the current capabilities/alignment ratio (or even a slightly better one), we die.\n\n\n(stating the obvious: this is not to say that transparency/legibility/interpretability aren’t also intertwined with capabilities; it’s all intertwined to some degree. but one can still avoid pushing the capabilities frontier, and focus on the alignment end of things. and one can still institute a policy of privacy, to further avoid burning the commons.)\n\n\n**Sam’s post:** Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.\n\n\n\n\n---\n\n\n**Sam’s draft:** We have [a clause in our charter](https://openai.com/charter/) about assisting other organizations instead of racing them in late-stage AGI development. [ redacted statement that got cut ]\n\n\n**My comment:** i think it’s really cool of y’all to have this; +1\n\n\n**Sam’s post:** We have [a clause in our Charter](https://openai.com/charter/) about assisting other organizations to advance safety instead of racing with them in late-stage AGI development.\n\n\n\n\n---\n\n\n**Sam’s draft:** We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society)\n\n\n**My comment:** also rad\n\n\n**Sam’s post:** [unchanged]\n\n\n\n\n---\n\n\n**Sam’s draft:** We believe that the future of humanity should be determined by humanity. 
[ redacted draft version of \"and that it’s important to share information about progress with the public\" ]\n\n\n**My comment:** [+1](https://www.lesswrong.com/posts/DJRe5obJd7kqCkvRr/don-t-leave-your-fingerprints-on-the-future) to \"the future of humanity should be determined by humanity\".\n\n\n**My comment (#2):** i agree with some of the sentiment of [redacted sentence in Sam’s draft], but note that things get weird in the context of a global arms race for potentially-civilization-ending tech. i, for one, am in favor of people saying \"we are now doing our AGI research behind closed doors, because we don’t think it would be used wisely if put out in the open\".\n\n\n**Sam’s post:** We believe that the future of humanity should be determined by humanity, and that it’s important to share information about progress with the public.\n\n\n**Blunter follow-up:** The +1 is based on a charitable read where \"the future of humanity should be determined by humanity\" irons out into [CEV](https://arbital.com/p/cev/), as such things often do.\n\n\n**Blunter follow-up (#2):** seems to me like a lot of weight is being put on \"information about progress\".\n\n\nthere’s one read of this claim that says something like \"the average human should know that a tiny cluster of engineers are about to gamble with everybody’s fate\", which does have a virtuous ring to it. and i wouldn’t personally argue for *hiding* that fact from anyone. but this is not a difficult fact for savvy people to learn from public information today.\n\n\nis Sam arguing that there’s some concrete action in this class that the field has an unmet obligation to do, like dropping flyers in papua new guinea? this currently strikes me as a much more niche concern than the possible fast-approaching deaths of everyone (including papua new guineans!) 
at the hands of unfriendly AI, so i find it weird to mix those two topics together in a post about how to ensure a positive singularity.\n\n\npossibly Sam has something else in mind, but if so i encourage more concreteness about what that is and why it’s important.\n\n\n\n\n---\n\n\n**Sam’s draft:** [ redacted statement that got cut ] There should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions.\n\n\n**My comment:** totally agree that people shouldn’t try to control the world behind closed doors. that said, i would totally endorse people building defensive technology behind closed doors, in attempts to (e.g.) buy time.\n\n\n(ideally this would be done by state actors, which are at least semilegitimate. but if someone’s building superweapons, and you can build technology that thwarts them, and the state actors aren’t responding, then on my ethics it’s ok to build the technology that thwarts them, so long as this also does not put people at great risk by your own hands.)\n\n\n(of course, most people building powerful tech that think they aren’t putting the world at great risk by their own hands, are often wrong; there’s various types of thinking on this topic that should simply not be trusted; etc. etc.)\n\n\n[ Post-hoc note: It’s maybe worth noting that on my ethics there’s an enormous difference between \"a small cabal of humans exerts direct personal control over the world\" and \"run a [CEV](https://arbital.com/p/cev/) [sovereign](https://arbital.com/p/Sovereign/)\", and i’m against the former but for the latter, with the extra caveat that nobody should be trying to figure out a CEV sovereign under time-pressure, nor launching an AGI unilaterally simply because they managed to convince themselves it was a CEV sovereign. 
]\n\n\n**My comment (#2):** on \"There should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions\", i agree with something like \"it’s ethically important that the will of [all humans](https://arbital.com/p/cev/) goes into answering the questions of where superintelligence should guide the future\". this is separate from endorsing any particular design-by-committee choice of governance.\n\n\n(for instance, if everybody today casts a vote for their favorite government style that they can think of, and then the AGI does the one that wins the most votes, i think that would end up pretty bad.)\n\n\nwhich is to say, there’s some sentiment in sentences like this that i endorse, but the literal denotation makes me uneasy, and feels kinda like an applause-light. my qualms could perhaps be assuaged by a more detailed proposal, that i could either endorse or give specific qualms about.\n\n\n**Sam’s post:** There should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions.\n\n\n\n\n---\n\n\n**Sam’s draft:** The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time. If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.\n\n\n**My comment:** +1\n\n\n**Sam’s post:** [unchanged]\n\n\n\n\n---\n\n\n**Sam’s draft:** AI that can accelerate science is a special case worth thinking about, and perhaps more impactful than everything else. 
It’s possible that AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly (and even if the transition starts slowly, we expect it to happen pretty quickly in the final stages).\n\n\n**My comment:** +1\n\n\n**Sam’s post:** [unchanged]\n\n\n\n\n---\n\n\n**Sam’s draft:** We think a slower takeoff is easier to make safe, and coordination among AGI efforts to slow down at critical junctures will likely be important (even in a world where we don’t need to do this to solve technical alignment problems, slowing down may be important to give society enough time to adapt).\n\n\n**My comment:** (sounds nice in principle, but i will note for the record that the plan i’ve heard for this is \"do continuous deployment and hope to learn something\", and i don’t expect that to help much in slowing down or smoothing out a foom.)\n\n\n**Sam’s post:** [unchanged]\n\n\n\n\n---\n\n\n**Sam’s draft:** [redacted version with phrasing similar to the public version]\n\n\n**My comment:** +1\n\n\n**Sam’s post:** Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.\n\n\n\n\n---\n\n\n**Sam’s draft:** [ redacted statement calling back to \"Our approach to alignment research\" ]\n\n\n**My comment:** i think [this](https://openai.com/blog/our-approach-to-alignment-research) basically doesn’t work unless the early systems are already very aligned. 
(i have various drafts queued about this, as usual)\n\n\n**Sam’s post:** [sentence deleted]\n\n\nThe post [Comments on OpenAI’s \"Planning for AGI and beyond\"](https://intelligence.org/2023/03/14/comments-on-openais-planning-for-agi-and-beyond/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-03-14T20:49:27Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "60af5a0513c39164b0eb586453f766fd", "title": "Focus on the places where you feel shocked everyone’s dropping the ball", "url": "https://intelligence.org/2023/02/03/focus-on-the-places-where-you-feel-shocked-everyones-dropping-the-ball/", "source": "miri", "source_type": "blog", "text": "Writing down something I’ve found myself repeating in different conversations:\n\n\nIf you’re looking for ways to help with the whole “[the world looks pretty doomed](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)” business, here’s my advice: **look around for places where we’re all being total idiots.**\n\n\nLook for places where everyone’s fretting about a problem that some part of you thinks it could obviously just solve.\n\n\nLook around for places where something seems incompetently run, or hopelessly inept, and where some part of you thinks you can do better.\n\n\nThen do it better.\n\n\nFor a concrete example, consider Devansh. Devansh came to me last year and said something to the effect of, “Hey, wait, it sounds like you think Eliezer does a sort of alignment-idea-generation that nobody else does, and he’s limited here by his unusually low stamina, but I can think of a bunch of medical tests that you haven’t run, are you an idiot or something?” And I was like, “Yes, definitely, please run them, do you need money”.\n\n\nI’m not particularly hopeful there, but hell, it’s worth a shot! 
And, importantly, this is the sort of attitude that can lead people to actually trying things *at all*, rather than assuming that we live in a more [adequate world](https://equilibriabook.com/toc/) where all the (seemingly) dumb obvious ideas have already been tried.\n\n\n\nOr, this is basically my model of how Paul Christiano manages to have a research agenda that seems at least internally coherent to me. From my perspective, he’s like, “I dunno, man, I’m not sure I can solve this, but I also think it’s not clear I can’t, and there’s a bunch of obvious stuff to try, that nobody else is even really looking at, so I’m trying it”. That’s the sort of orientation to the world that I think can be productive.\n\n\nOr the shard theory folks. I think their idea is [basically unworkable](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/Aet2mbnK7GDDfrEQu), but I appreciate the *mindset* they are applying to the alignment problem: something like, “Wait, aren’t y’all being idiots, it seems to me like I can just do X and then the thing will be aligned”.\n\n\nI don’t think we’ll be saved by the shard theory folk; not everyone audaciously trying to save the world will succeed. But if someone *does* save us, I think there’s a good chance that they’ll go through similar “What the hell, are you all idiots?” phases, where they autonomously pursue a path that strikes them as obviously egregiously neglected, to see if it bears fruit.\n\n\nContrast this with, say, reading a bunch of people’s research proposals and explicitly weighing the pros and cons of each approach so that you can work on whichever seems most justified. This has more of a flavor of taking a reasonable-sounding approach based on an argument that sounds vaguely good on paper, and less of a flavor of putting out an obvious fire that for some reason nobody else is reacting to.\n\n\nI dunno, maybe activities of the vaguely-good-on-paper character will prove useful as well? 
But I mostly expect the good stuff to come from people working on stuff where a part of them sees some way that everybody else is just totally dropping the ball.\n\n\nIn the version of this mental motion I’m proposing here, you keep your eye out for ways that everyone’s being totally inept and incompetent, ways that maybe you could just do the job correctly if you reached in there and mucked around yourself.\n\n\nThat’s where I predict the good stuff will come from.\n\n\nAnd if you don’t see any such ways?\n\n\nThen don’t sweat it. Maybe you just can’t see something that will help right now. There don’t have to be ways you can help in a sizable way right now.\n\n\n*I* don’t see ways to really help in a sizable way right now. I’m keeping my eyes open, and I’m churning through a giant backlog of things that might help a *nonzero* amount—but I think it’s important not to confuse this with taking meaningful bites out of a core problem the world is facing, and I won’t pretend to be doing the latter when I don’t see how to.\n\n\nLike, keep your eye out. For sure, keep your eye out. But if nothing in the field is calling to you, and you have no part of you that says you could totally do better if you [deconfused](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2) yourself some more and then handled things yourself, then it’s totally respectable to do something else with your hours.\n\n\n\n\n---\n\n\nIf you don’t have an active sense that you could put out some visibly-raging fires yourself (maybe after skilling up a bunch, which you also have an active sense you could do), then I recommend stuff like [cultivating your ability to get excited about things](https://mindingourway.com/guilt/), and doing other cool stuff.\n\n\nSure, most stuff is lower-impact than saving the world from destruction. 
But if you can be enthusiastic about all the other cool ways to make the world better off around you, then I’m much more optimistic that you’ll be able to feel properly motivated to combat existential risk if and when an opportunity to do so arises. Because that opportunity, if you get one, probably isn’t going to suddenly unlock every lock on the box your heart hides your enthusiasm in, if your heart is hiding your enthusiasm.\n\n\nSee also [Rob Wiblin’s](https://twitter.com/robertwiblin/status/1518334250495447040) “Don’t pursue a career for impact — think about the world’s most important, tractable and neglected problems and follow your passion.”\n\n\nOr the [Alignment Research Field Guide’s](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide) advice to “optimize for your own understanding” and chase the things that feel alive and puzzling to you, as opposed to dutifully memorizing other people’s questions and ideas. “[D]on’t ask “What are the open questions in this field?” Ask: “What are *my* questions in this field?””\n\n\nI basically don’t think that big changes come from people who aren’t pursuing a vision that some part of them “believes in”, and I don’t think low-risk, low-reward, modest, incremental help can save us from here.\n\n\nTo be clear, when I say “believe in”, I don’t mean that you necessarily assign high probability to success! 
Nor do I mean that you’re willing to keep trying in the face of difficulties and uncertainties (though that sure is useful too).\n\n\nEnglish doesn’t have great words for me to describe what I mean here, but it’s something like: your visualization machinery says that it sees no obstacle to success, such that you anticipate either success or getting a very concrete lesson.\n\n\nThe possibility seems open to you, at a glance; and while you may suspect that there’s some hidden reason that the possibility is not truly open, you have an opportunity here to *test* whether that’s so, and to potentially learn *why* this promising-looking idea fails.\n\n\n(Or maybe it will just work. It’s been known to happen, in many a scenario where external signs and portents would have predicted failure.)\n\n\n\nThe post [Focus on the places where you feel shocked everyone’s dropping the ball](https://intelligence.org/2023/02/03/focus-on-the-places-where-you-feel-shocked-everyones-dropping-the-ball/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-02-03T16:29:48Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "04421024d989098f3624ff7d10b3c45d", "title": "What I mean by “alignment is in large part about making cognition aimable at all”", "url": "https://intelligence.org/2023/02/02/what-i-mean-by-alignment-is-in-large-part-about-making-cognition-aimable-at-all/", "source": "miri", "source_type": "blog", "text": "*(Epistemic status: attempting to clear up a misunderstanding about points I have attempted to make in the past. 
This post is not intended as an argument for those points.)*\n\n\nI have long said that the lion’s share of the AI alignment problem seems to me to be about *pointing powerful cognition at anything at all*, rather than *figuring out what to point it at*.\n\n\nIt’s recently come to my attention that some people have misunderstood this point, so I’ll attempt to clarify here.\n\n\nIn saying the above, I **do not** mean the following:\n\n\n\n> \n> (1) Any practical AI that you’re dealing with will necessarily be cleanly internally organized around pursuing a single objective. Managing to put your own objective into this “goal slot” (as opposed to having the goal slot set by random happenstance) is a central difficult challenge. **[Reminder: I am not asserting this]** \n> \n> \n> \n\n\nInstead, I mean something more like the following:\n\n\n\n> \n> (2) By default, the first minds humanity makes will be a terrible spaghetti-code [mess](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/p3s8RvkcyTwzu27ps), with no clearly-factored-out “goal” that the surrounding cognition pursues in a unified way. 
The mind will be more like a pile of complex, messily interconnected kludges, whose ultimate behavior is sensitive to the [particulars](https://www.lesswrong.com/posts/krHDNc7cDvfEL8z9a/niceness-is-unnatural) of how it reflects and irons out the tensions within itself over time.\n> \n> \n> Making the AI *even have something vaguely nearing a ‘goal slot’* that is stable under various operating pressures (such as reflection) during the course of operation, is an undertaking that requires mastery of cognition in its own right—mastery of a sort that we’re exceedingly unlikely to achieve if we just try to figure out how to build a mind, without filtering for approaches that are more legible and aimable.\n> \n> \n> \n\n\n\nSeparately and independently, I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles and will be oriented around a particular goal (at least behaviorally, cf. [efficiency](https://arbital.com/p/efficiency/)—though I would also guess that the mental architecture ultimately ends up cleanly-factored (albeit not in a way that creates a single point of failure, goalwise)).\n\n\n(But this doesn’t help solve the problem, because by the time the strongly superintelligent AI has ironed itself out into something with a “goal slot”, it’s not letting you touch it.)\n\n\nFurthermore, insofar as the AI is capable of finding actions that force the future into some [narrow band](https://www.lesswrong.com/posts/CW6HDvodPpNe38Cry/aiming-at-the-target), I expect that it will tend to be reasonable to talk about the AI as if it is (more-or-less, most of the time) “pursuing some objective”, even in the stage where it’s in fact a giant kludgey mess that’s sorting itself out over time in ways that are unpredictable to you.\n\n\nI can see how my attempts to express these other beliefs could confuse people into thinking that I meant something more like (1) above (“Any practical AI that 
you’re dealing with will necessarily be cleanly internally organized around pursuing a single objective…”), when in fact I mean something more like (2) (“By default, the first minds humanity makes will be a terrible spaghetti-code mess…”).\n\n\n\n\n---\n\n\nIn case it helps those who were previously confused: the “diamond maximizer” problem is one example of an attempt to direct researchers’ attention to the challenge of cleanly factoring cognition around something a bit like a ‘goal slot’.\n\n\nAs evidence of a misunderstanding here: people sometimes hear me describe the diamond maximizer problem, and respond to me by proposing training regimes that (for all they know) might make the AI care a little about diamonds in some contexts.\n\n\nThis misunderstanding of [what the diamond maximizer problem was originally meant to be pointing at](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/Aet2mbnK7GDDfrEQu) seems plausibly related to the misunderstanding that this post intends to clear up. Perhaps in light of the above it’s easier to understand why I see such attempts as shedding little light on the question of how to get cognition that cleanly pursues a particular objective, as opposed to a pile of kludges that careens around at the whims of reflection and happenstance.\n\n\n\nThe post [What I mean by “alignment is in large part about making cognition aimable at all”](https://intelligence.org/2023/02/02/what-i-mean-by-alignment-is-in-large-part-about-making-cognition-aimable-at-all/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2023-02-03T03:33:08Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "e84e8eaa5ec1d5a157698aa750678827", "title": "July 2022 Newsletter", "url": "https://intelligence.org/2022/07/30/july-2022-newsletter/", "source": "miri", "source_type": "blog", "text": "MIRI has put out three major new posts:\n[**AGI Ruin: A List of 
Lethalities.**](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) Eliezer Yudkowsky lists reasons AGI appears likely to cause an existential catastrophe, and reasons why he thinks the current research community—MIRI included—isn't succeeding at preventing this from happening.\n\n\n[**A central AI alignment problem: capabilities generalization, and the sharp left turn.**](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization) Nate Soares describes a core obstacle to aligning AGI systems: \n\n\n\n> [C]apabilities generalize further than alignment (once capabilities start to generalize real well (which is a thing I predict will happen)). And this, by default, ruins your ability to direct the AGI (that has slipped down the capabilities well), and breaks whatever constraints you were hoping would keep it corrigible.\n> \n> \n\n\nOn Nate's model, very little work is currently going into this problem. He advocates for putting far more effort into addressing this challenge in particular, and making it a major focus of future work.\n\n\n[**Six Dimensions of Operational Adequacy in AGI Projects.**](https://www.lesswrong.com/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects) Eliezer describes six criteria an AGI project likely needs to satisfy in order to have a realistic chance at preventing catastrophe at the time AGI is developed: trustworthy command, research closure, strong opsec, common good commitment, alignment mindset, and requisite resource levels.\n\n\n#### Other MIRI updates\n\n\n* I (Rob Bensinger) wrote a post discussing [the inordinately slow spread of good AGI conversations in ML](https://www.lesswrong.com/posts/Rkxj7TFxhbm59AKJh/the-inordinately-slow-spread-of-good-agi-conversations-in-ml).\n* I want to signal-boost two of my forum comments: on AGI Ruin, a discussion of [common mindset issues in thinking about AGI 
alignment](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities?commentId=LowEED2iDkhco3a5d#LowEED2iDkhco3a5d); and on Six Dimensions, a comment on [pivotal acts and \"strawberry-grade\" alignment](https://www.lesswrong.com/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects?commentId=tLBF4kxx2AQXwEtbN#tLBF4kxx2AQXwEtbN).\n* Also, a [quick note](https://twitter.com/robbensinger/status/1540862734408962049) from me, in case this is non-obvious: MIRI leadership thinks that humanity *never* building AGI would mean the loss of nearly all of the future's value. If this were a live option, it would be an unacceptably bad one.\n* Nate discusses MIRI's past writing [on recursive self-improvement](https://www.lesswrong.com/posts/8NKu9WES7KeKRWEKK/) (with good discussion in the comments).\n* [Let's See You Write That Corrigibility Tag](https://www.lesswrong.com/posts/AqsjZwxHNqH64C2b6/let-s-see-you-write-that-corrigibility-tag): Eliezer posts a challenge to write a list of \"the sort of principles you'd build into a Bounded Thing meant to carry out some single task or task-class and not destroy the world by doing it\".\n* From Eliezer: [MIRI announces new \"Death With Dignity\" strategy](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy). 
Although released on April Fools' Day (whence the silly title), the post body is an entirely [non-joking](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=FounAZsg4kFxBDiXs) account of Eliezer's current models, including his currently-high p(doom) and his recommendations on [conditionalization](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=G3pns4CgAAKwm3wFC) and naïve consequentialism.\n\n\n#### News and links\n\n\n* Paul Christiano ([link](https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer)) and Zvi Mowshowitz ([link](https://www.lesswrong.com/posts/LLRtjkvh9AackwuNB/on-a-list-of-lethalities)) share their takes on the AGI Ruin post.\n* Google's new large language model, [Minerva](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html), achieves 50.3% performance on the MATH dataset (problems at the level of high school math competitions), a dramatic improvement on the previous state of the art of 6.9%.\n* Jacob Steinhardt reports [generally poor forecaster performance](https://bounded-regret.ghost.io/ai-forecasting-one-year-in/) on predicting AI progress, with capabilities work moving faster than expected and robustness slower than expected. 
Outcomes for both the MATH and Massive Multitask Language Understanding datasets \"[exceeded the 95th percentile prediction](https://twitter.com/JacobSteinhardt/status/1543987774579126272)\".\n* In the wake of April/May/June results like Minerva, Google's [PaLM](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), OpenAI's [DALL-E](https://openai.com/dall-e-2/), and DeepMind's [Chinchilla](https://www.deepmind.com/publications/an-empirical-analysis-of-compute-optimal-large-language-model-training) and [Gato](https://www.deepmind.com/publications/a-generalist-agent), Metaculus' \"[Date of Artificial General Intelligence](https://www.metaculus.com/questions/5121/date-of-general-ai/)\" forecast has dropped from 2057 to 2039. (I'll mention that Eliezer and Nate's timelines were already pretty short, and I'm not aware of any MIRI updates toward shorter timelines this year. I'll also note that I don't personally put much weight on Metaculus' AGI timeline predictions, since many of them are inconsistent and this is a difficult and weird domain to predict.)\n* [Conjecture](https://www.lesswrong.com/posts/jfq2BH5kfQqu2vYv3/we-are-conjecture-a-new-alignment-research-startup) is a new London-based AI alignment startup with a focus on short-timeline scenarios, founded by EleutherAI alumni. 
The organization is currently hiring engineers and researchers, and is \"particularly interested in hiring devops and infrastructure engineers with supercomputing experience\".\n\n\n\nThe post [July 2022 Newsletter](https://intelligence.org/2022/07/30/july-2022-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-07-30T17:17:31Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "37b02602f5b4d934074521792ad4d7d3", "title": "A central AI alignment problem: capabilities generalization, and the sharp left turn", "url": "https://intelligence.org/2022/07/04/a-central-ai-alignment-problem/", "source": "miri", "source_type": "blog", "text": "(*This post was factored out of a larger post that I (Nate Soares) wrote, with help from Rob Bensinger, who also rearranged some pieces and added some text to smooth things out. I’m not terribly happy with it, but am posting it anyway (or, well, having Rob post it on my behalf while I travel) on the theory that it’s better than nothing.*)\n\n\n---\n\n\nI expect navigating the acute risk period to be tricky for our civilization, for a number of reasons. Success looks to me to require clearing a variety of technical, sociopolitical, and moral hurdles, and while in principle sufficient mastery of solutions to the technical problems might substitute for solutions to the sociopolitical and other problems, it nevertheless looks to me like we need a lot of things to go right.\n\n\nSome sub-problems look harder to me than others. For instance, people are still regularly surprised when I tell them that I think the hard bits are much more technical than moral: it looks to me like figuring out how to aim an AGI at all is harder than figuring out where to aim it.[[1]](https://intelligence.org/feed/?paged=2#fn895zjba6xtk)\n\n\nWithin the list of technical obstacles, there are some that strike me as more central than others, like “figure out how to aim optimization”. 
And a big reason why I’m currently fairly pessimistic about humanity’s odds is that it seems to me like almost nobody is focusing on the technical challenges that seem most central and unavoidable to me.\n\n\nMany people wrongly believe that I’m pessimistic because I think the alignment problem is extraordinarily difficult on a purely technical level. That’s flatly false, and is pretty high up there on my list of least favorite misconceptions of my views.[[2]](https://intelligence.org/feed/?paged=2#fncvv4zht2qpc)\n\n\nI think the problem is a normal problem of mastering some scientific field, as humanity has done many times before. Maybe it’s somewhat trickier, on account of (e.g.) intelligence being more complicated than, say, physics; maybe it’s somewhat easier on account of how we have more introspective access to a working mind than we have to the low-level physical fields; but on the whole, I doubt it’s all that qualitatively different than the sorts of summits humanity has surmounted before.\n\n\nIt’s made trickier by the fact that we probably have to attain mastery of general intelligence before we spend a bunch of time working with general intelligences (on account of how we seem likely to kill ourselves by accident within a few years, once we have AGIs on hand, if no pivotal act occurs), but that alone is not enough to undermine my hope. \n\n\nWhat undermines my hope is that nobody seems to be working on the hard bits, and I don’t currently expect most people to become convinced that they need to solve those hard bits until it’s too late.\n\n\nBelow, I’ll attempt to sketch out what I mean by “the hard bits” of the alignment problem. Although these look hard, I’m a believer in the capacity of humanity to solve technical problems at this level of difficulty when we put our minds to it. My concern is that I currently don’t think the field is *trying* to solve this problem. 
My hope in writing this post is to better point at the problem, with a follow-on hope that this causes new researchers entering the field to attack what seem to me to be the central challenges head-on.\n\n\n \n\n\nDiscussion of a problem\n-----------------------\n\n\nOn my model, one of the most central technical challenges of alignment—and one that every viable alignment plan will probably need to grapple with—is the issue that capabilities generalize better than alignment.\n\n\nMy guess for how AI progress goes is that at some point, some team gets an AI that starts generalizing sufficiently well, sufficiently far outside of its training distribution, that it can gain mastery of fields like physics, bioengineering, and psychology, to a high enough degree that it more-or-less singlehandedly threatens the entire world. Probably without needing explicit training for its most skilled feats, any more than humans needed many generations of killing off the least-successful rocket engineers to refine our brains towards rocket-engineering before humanity managed to achieve a moon landing.\n\n\nAnd in the same stroke that its capabilities leap forward, its alignment properties are revealed to be shallow, and to fail to generalize. The central analogy here is that optimizing apes for inclusive genetic fitness (IGF) doesn’t make the resulting humans optimize mentally for IGF. Like, sure, the apes are eating because they have a hunger instinct and having sex because it feels good—but it’s not like they *could* be eating/fornicating due to explicit reasoning about how those activities lead to more IGF. They can’t yet perform the sort of abstract reasoning that would correctly justify those actions in terms of IGF. And then, when they start to generalize well in the way of humans, they predictably don’t *suddenly start* eating/fornicating *because* of abstract reasoning about IGF, even though they now *could*. 
Instead, they invent condoms, and fight you if you try to remove their enjoyment of good food (telling them to just calculate IGF manually). The alignment properties you lauded before the capabilities started to generalize, predictably fail to generalize with the capabilities.\n\n\n\nSome people I say this to respond with arguments like: “Surely, before a smaller team could get an AGI that can master subjects like biotech and engineering well enough to kill all humans, some other, larger entity such as a state actor will have a somewhat worse AI that can handle biotech and engineering somewhat less well, but in a way that prevents any one AGI from running away with the whole future?”\n\n\nI respond with arguments like, “In the one real example of intelligence being developed we have to look at, continuous application of natural selection in fact found *Homo sapiens sapiens*, and the capability-gain curves of the ecosystem for various measurables were in fact sharply kinked by this new species (e.g., using machines, we sharply outperform other animals on well-established metrics such as “airspeed”, “altitude”, and “cargo carrying capacity”).”\n\n\nTheir response in turn is generally some variant of “well, natural selection wasn’t optimizing very intelligently” or “maybe humans weren’t all that sharply above evolutionary trends” or “maybe the power that let humans beat the rest of the ecosystem was simply the invention of culture, and nothing embedded in our own already-existing culture can beat us” or suchlike.\n\n\nRather than arguing further here, I’ll just say that failing to believe the hard problem exists is one surefire way to avoid tackling it.\n\n\nSo, flatly summarizing my point instead of arguing for it: it looks to me like there will at some point be some sort of “sharp left turn”, as systems start to work really well in domains really far beyond the environments of their training—domains that allow for significant reshaping of the world, in the way that 
humans reshape the world and chimps don’t. And that’s where (according to me) things start to get crazy. In particular, I think that once AI capabilities start to generalize in this particular way, it’s predictably the case that the alignment of the system will fail to generalize with it.[[3]](https://intelligence.org/feed/?paged=2#fn5q23dwtz6bl)\n\n\nThis is slightly upstream of a couple other challenges I consider quite core and difficult to avoid, including:\n\n\n1. Directing a capable AGI towards an objective of your choosing.\n2. Ensuring that the AGI is low-impact, conservative, shutdownable, and otherwise corrigible.\n\n\nThese two problems appear in the [strawberry problem](https://www.lesswrong.com/posts/SsCQHjqNT3xQAPQ6b/yudkowsky-on-agi-ethics), which Eliezer’s been pointing at for quite some time: the problem of getting an AI to place two identical (down to the cellular but not molecular level) strawberries on a plate, and then do nothing else. The demand of cellular-level copying forces the AI to be capable; the fact that we can get it to duplicate a strawberry instead of doing some other thing demonstrates our ability to direct it; the fact that it does nothing else indicates that it’s corrigible (or really well aligned to a delicate human intuitive notion of inaction).\n\n\nHow is the “capabilities generalize further than alignment” problem upstream of these problems? Suppose that the fictional team OpenMind is training up a variety of AI systems, before one of them takes that sharp left turn. Suppose they’ve put the AI in lots of different video-game and simulated environments, and they’ve had good luck training it to pursue an objective that the operators described in English. “I don’t know what those MIRI folks were talking about; these systems are easy to direct; simple training suffices”, they say. 
At the same time, they apply various training methods, some simple and some clever, to cause the system to allow itself to be removed from various games by certain “operator-designated” characters in those games, in the name of shutdownability. And they use various techniques to prevent it from stripmining in Minecraft, in the name of low-impact. And they train it on a variety of moral dilemmas, and find that it can be trained to give correct answers to moral questions (such as “in thus-and-such a circumstance, should you poison the operator’s opponent?”) just as well as it can be trained to give correct answers to any other sort of question. “Well,” they say, “this alignment thing sure was easy. I guess we lucked out.”\n\n\nThen, the system takes that sharp left turn,[[4]](https://intelligence.org/feed/?paged=2#fn1aeqo3zohuxk)[[5]](https://intelligence.org/feed/?paged=2#fn2ohrc5rd1ze) and, predictably, the capabilities quickly improve outside of its training distribution, while the alignment falls apart.\n\n\nThe techniques OpenMind used to train it away from the error where it convinces itself that bad situations are unlikely? Those generalize fine. The techniques you used to train it to allow the operators to shut it down? Those fall apart, and the AGI starts wanting to avoid shutdown, including wanting to deceive you if it’s useful to do so.\n\n\nWhy does alignment fail while capabilities generalize, at least by default and in predictable practice? In large part, because good capabilities form something like an attractor well. (That’s one of the reasons to expect intelligent systems to eventually make that sharp left turn if you push them far enough, and it’s why natural selection managed to stumble into general intelligence with no understanding, foresight, or steering.)\n\n\nMany different training scenarios are teaching your AI the same instrumental lessons, about how to think in accurate and useful ways. 
Furthermore, those lessons are underwritten by a simple logical structure, much like the simple laws of arithmetic that abstractly underwrite a wide variety of empirical arithmetical facts about what happens when you add four people’s bags of apples together on a table and then divide the contents among two people.\n\n\nBut that attractor well? It’s got a free parameter. And that parameter is what the AGI is optimizing for. And there’s no analogously-strong attractor well pulling the AGI’s objectives towards your preferred objectives.\n\n\nThe sharp left turn? That’s your system sliding into the capabilities well. (You don’t need to fall all that far to do impressive stuff; humans are better at an enormous variety of relevant skills than chimps, but they aren’t all that lawful in an absolute sense.)\n\n\nThere’s no analogous alignment well to slide into.\n\n\nOn the contrary, sliding down the capabilities well is liable to break a bunch of your existing alignment properties.[[6]](https://intelligence.org/feed/?paged=2#fnga0my7v01vv)\n\n\nWhy? Because things in the capabilities well have instrumental incentives that cut against your alignment patches. Just like how your previous arithmetic errors (such as the [pebble sorters](https://www.lesswrong.com/posts/mMBTPTjRbsrqbSkZE/sorting-pebbles-into-correct-heaps) on the wrong side of the Great War of 1957) get steamrolled by the development of arithmetic, so too will your attempts to make the AGI low-impact and shutdownable ultimately (by default, and in the absence of technical solutions to core alignment problems) get steamrolled by a system that pits those reflexes / intuitions / much-more-alien-behavioral-patterns against the convergent instrumental incentive to survive the day.\n\n\nPerhaps this is not convincing; perhaps to convince you we’d need to go deeper into the weeds of the various counterarguments. 
(Like acknowledging that humans, who can foresee these difficulties and adjust their training procedures accordingly, have a *better* chance than natural selection did, while then discussing why current proposals do not seem to me to be hopeful.) But hopefully you can at least, in reading this document, develop a basic understanding of my position.\n\n\nStating it again, in summary: my position is that capabilities generalize further than alignment (once capabilities start to generalize real well (which is a thing I predict will happen)). And this, by default, ruins your ability to direct the AGI (that has slipped down the capabilities well), and breaks whatever constraints you were hoping would keep it corrigible. And addressing the problem looks like finding some way to either keep your system aligned through that sharp left turn, or render it aligned afterwards.\n\n\nIn an upcoming post, I’ll say more about how it looks to me like ~nobody is working on this particular hard problem, by briefly reviewing a variety of current alignment research proposals. In short, I think that the field’s current range of approaches nearly all assume this problem away, or direct their attention elsewhere.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n1. **[^](https://intelligence.org/feed/?paged=2#fnref895zjba6xtk)**\n\nFurthermore, figuring out where to aim it looks to me like more of a technical problem than a moral problem. Attempting to manually specify the nature of goodness is a doomed endeavor, of course, but that’s fine, because we can instead specify processes for figuring out (the coherent extrapolation of) what humans value. 
Which still looks prohibitively difficult as a goal to give humanity’s first AGI (which I expect to be deployed under significant time pressure), mind you, and I further recommend aiming humanity’s first AGI systems at simple limited goals that end the acute risk period and then cede stewardship of the future to some process that can reliably do the “aim minds towards the right thing” thing. So today’s alignment problems are a few steps removed from tricky moral questions, on my models.\n2. **[^](https://intelligence.org/feed/?paged=2#fnrefcvv4zht2qpc)**\n\nWhile we’re at it: I think trying to get provable safety guarantees about our AGI systems is silly, and I’m pretty happy to [follow Eliezer](https://www.lesswrong.com/posts/mmXEk675etTKpkgTx/agi-ruin-a-poorly-organized-list-of-lethalities) in calling an AGI “safe” if it has a <50% chance of killing >1B people. Also, I think there’s a very large chance of AGI killing us, and I thoroughly disclaim the argument that even if the probability is tiny then we should work on it anyway because the stakes are high.\n3. **[^](https://intelligence.org/feed/?paged=2#fnref5q23dwtz6bl)**\n\nNote that this is consistent with findings like “large language models perform just as well on moral dilemmas as they perform on non-moral ones”; to find this reassuring is to misunderstand the problem. Chimps have an easier time than squirrels following and learning from human cues. Yet this fact doesn’t particularly mean that enhanced chimps are more likely than enhanced squirrels to remove their hunger drives, once they understand inclusive genetic fitness and are able to eat purely for reasons of fitness maximization. Pre-left-turn AIs will get better at various ‘alignment’ metrics, in ways that I expect to build a false sense of security, without addressing the lurking difficulties.\n4. **[^](https://intelligence.org/feed/?paged=2#fnref1aeqo3zohuxk)**\n\n“What do you mean ‘it takes a sharp left turn’? 
Are you talking about recursive self-improvement? I thought you said [somewhere else](https://www.lesswrong.com/posts/8NKu9WES7KeKRWEKK/why-all-the-fuss-about-recursive-self-improvement) that you don’t think recursive self-improvement is necessarily going to play a central role before the extinction of humanity?” I’m not talking about recursive self-improvement. That’s one way to take a sharp left turn, and it could happen, but note that humans have neither the understanding nor control over their own minds to recursively self-improve, and we outstrip the rest of the animals pretty handily. I’m talking about something more like “intelligence that is general enough to be dangerous”, the sort of thing that humans have and chimps don’t.\n5. **[^](https://intelligence.org/feed/?paged=2#fnref2ohrc5rd1ze)**\n\n“Hold on, isn’t this unfalsifiable? Aren’t you saying that you’re going to continue believing that alignment is hard, even as we get evidence that it’s easy?” Well, I contend that “GPT can learn to answer moral questions just as well as it can learn to answer other questions” is not much evidence either way about the difficulty of alignment. I’m not saying we’ll get evidence that I’ll ignore; I’m naming in advance some things that I wouldn’t consider negative evidence (partially in hopes that I can refer back to this post when people crow later and request an update). But, yes, my model does have the inconvenient property that people who are skeptical now, are liable to remain skeptical until it’s too late, because most of the evidence I expect to give us *advance* warning about the nature of the problem is evidence that we’ve already seen. I assure you that I do not consider this property to be convenient.\n\n\nAs for things that could convince me otherwise: technical understanding of intelligence could undermine my “sharp left turn” model. 
I could also imagine observing some ephemeral hopefully-I’ll-know-it-when-I-see-it capabilities thresholds, without any sharp left turns, that might update me. (Short of “full superintelligence without a sharp left turn”, which would obviously convince me but comes too late in the game to shift my attention.)\n6. **[^](https://intelligence.org/feed/?paged=2#fnrefga0my7v01vv)**\n\nTo use my overly-detailed evocative example from earlier: Humans aren’t tempted to rewire our own brains so that we stop liking good meals for the sake of good meals, and start eating only insofar as we know we have to eat to reproduce (or, rather, maximize inclusive genetic fitness) (after upgrading the rest of our minds such that that sort of calculation doesn’t drag down the rest of the fitness maximization). The cleverer humans are chomping at the bit to have their beliefs be more accurate, but they’re not chomping at the bit to replace all these mere-shallow-correlates of inclusive genetic fitness with explicit maximization. 
So too with other minds, at least by default: that which makes them generally intelligent, does not make them motivated by your objectives.\n\n\n\nThe post [A central AI alignment problem: capabilities generalization, and the sharp left turn](https://intelligence.org/2022/07/04/a-central-ai-alignment-problem/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-07-05T05:22:45Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "e43c1d1ccaab2a56b924b84efc1eb60a", "title": "AGI Ruin: A List of Lethalities", "url": "https://intelligence.org/2022/06/10/agi-ruin/", "source": "miri", "source_type": "blog", "text": "### Preamble:\n\n\n(If you’re already familiar with all basics and don’t want any preamble, skip ahead to [Section B](https://intelligence.org/feed/?paged=2#Section_B_) for technical difficulties of alignment proper.)\n\n\nI have several times failed to write up a well-organized list of reasons why AGI will kill you.  People come in with different ideas about why AGI would be survivable, and want to hear different *obviously key*points addressed first.  Some fraction of those people are loudly upset with me if the obviously most important points aren’t addressed immediately, and I address different points first instead.\n\n\nHaving failed to solve this problem in any good way, I now give up and solve it poorly with a poorly organized list of individual rants.  I’m not particularly happy with this list; the alternative was publishing nothing, and publishing this seems marginally more [dignified](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy).\n\n\nThree points about the general subject matter of discussion here, numbered so as not to conflict with the list of lethalities:\n\n\n**-3**.  
I’m assuming you are already familiar with some basics, and already know what ‘[orthogonality](https://arbital.com/p/orthogonality/)’ and ‘[instrumental convergence](https://arbital.com/p/instrumental_convergence/)’ are and why they’re true.  People occasionally claim to me that I need to stop fighting old wars here, because, those people claim to me, those wars have already been won within the important-according-to-them parts of the current audience.  I suppose it’s at least true that none of the current major EA funders seem to be visibly in denial about orthogonality or instrumental convergence as such; so, fine.  If you don’t know what ‘orthogonality’ or ‘instrumental convergence’ are, or don’t see for yourself why they’re true, you need a different introduction than this one.\n\n\n**-2**.  When I say that alignment is lethally difficult, I am not talking about ideal or perfect goals of ‘provable’ alignment, nor total alignment of superintelligences on exact human values, nor getting AIs to produce satisfactory arguments about moral dilemmas which sorta-reasonable humans disagree about, nor attaining an absolute certainty of an AI not killing everyone.  When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, “please don’t disassemble literally everyone with probability roughly 1” is an overly large ask that we are not on course to get.  So far as I’m concerned, [if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people](https://twitter.com/ESYudkowsky/status/1070095112791715846), I’ll take it.  Even smaller chances of killing even fewer people would be a nice luxury, but if you can get as incredibly far as “less than roughly certain to kill everybody”, then you can probably get down to under a 5% chance with only slightly more effort.  
Practically all of the difficulty is in getting to “less than certainty of killing literally everyone”.  Trolley problems are not an interesting subproblem in all of this; if there are any survivors, you solved alignment.  At this point, I no longer care how it works, I don’t care how you got there, I am cause-agnostic about whatever methodology you used, all I am looking at is prospective results, all I want is that we have justifiable cause to believe of a pivotally useful AGI ‘this will not kill literally everyone’.  Anybody telling you I’m asking for stricter ‘alignment’ than this has failed at reading comprehension.  The big ask from AGI alignment, the basic challenge I am saying is too difficult, is to obtain by any strategy whatsoever a significant chance of there being any survivors.\n\n\n**-1**.  None of this is about anything being impossible in principle.  The metaphor I usually use is that if a textbook from one hundred years in the future fell into our hands, containing all of the simple ideas *that actually work robustly in practice,* we could probably build an aligned superintelligence in six months.  For people schooled in machine learning, I use as my metaphor the difference between ReLU activations and sigmoid activations.  Sigmoid activations are complicated and fragile, and do a terrible job of transmitting gradients through many layers; ReLUs are incredibly simple (for the unfamiliar, the activation function is literally max(x, 0)) and work much better.  Most neural networks for the first decades of the field used sigmoids; the idea of ReLUs wasn’t discovered, validated, and popularized until decades later.  What’s lethal is that we do not *have* the Textbook From The Future telling us all the simple solutions that actually in real life just work and are robust; we’re going to be doing everything with metaphorical sigmoids on the first critical try.  
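(A minimal illustrative sketch of the sigmoid/ReLU contrast, not from the original post: a sigmoid’s derivative is at most 0.25, so the gradient signal surviving a deep chain of sigmoid layers shrinks geometrically, while a ReLU passes gradients through at full strength wherever its input is positive.)

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative s(x) * (1 - s(x)); its maximum is 0.25, attained at x = 0.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # ReLU(x) = max(x, 0), so the derivative is 1 for positive inputs.
    return 1.0 if x > 0 else 0.0

# Gradient signal surviving a 20-layer chain, in the *best* case for sigmoid
# (every pre-activation exactly 0) versus an active ReLU path:
depth = 20
sigmoid_signal = sigmoid_grad(0.0) ** depth  # 0.25**20, vanishingly small
relu_signal = relu_grad(1.0) ** depth        # stays exactly 1.0
```

Even under ideal conditions, the sigmoid chain attenuates the gradient by a factor of at least 4 per layer; the simpler function just works.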
No difficulty discussed here about AGI alignment is claimed by me to be impossible – to merely human science and engineering, let alone in principle – if we had 100 years to solve it using unlimited retries, the way that science *usually* has an unbounded time budget and unlimited retries.  This list of lethalities is about things *we are not on course to solve in practice in time on the first critical try;* none of it is meant to make a much stronger claim about things that are *impossible in principle.*\n\n\nThat said:\n\n\nHere, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to.\n\n\n \n\n\n### Section A:\n\n\nThis is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of ‘everyone’ retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.\n\n\n \n\n\n**1**.  Alpha Zero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games.  Anyone relying on “well, it’ll get up to human capability at Go, but then have a hard time getting past that because it won’t be able to learn from humans any more” would have relied on vacuum.  **AGI will not be upper-bounded by human ability or human learning speed**.  **Things much smarter than human would be able to learn from less evidence than humans require** to have ideas driven into their brains; there are theoretical upper bounds here, but those upper bounds seem very high. 
(Eg, each bit of information that couldn’t already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration.)  It is not naturally (by default, barring intervention) the case that everything takes place on a timescale that makes it easy for us to react.\n\n\n\n**2**.  **A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.**  The concrete example I usually use here is nanotech, because there’s been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.  My lower-bound model of “how a sufficiently powerful intelligence would kill everyone, if it didn’t want to not do that” is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they’re dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.  (Back when I was first deploying this visualization, the wise-sounding critics said “Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn’t already have planet-sized supercomputers?” but one hears less of this after the advent of AlphaFold 2, for some odd reason.)  The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth’s atmosphere, get into human bloodstreams and hide, strike on a timer.  
**Losing a conflict with a high-powered cognitive system looks at least as deadly as “everybody on the face of the Earth suddenly falls over dead within the same second”.**  (I am using awkward constructions like ‘high cognitive power’ because standard English terms like ‘smart’ or ‘intelligent’ appear to me to function largely as status synonyms.  ‘Superintelligence’ sounds to most people like ‘something above the top of the status hierarchy that went to double college’, and they don’t understand why that would be all that dangerous?  Earthlings have no word and indeed no standard native concept that means ‘actually useful cognitive power’.  A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)\n\n\n**3**.  **We need to get alignment right on the ‘first critical try’** at operating at a ‘dangerous’ level of intelligence, where **unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don’t get to try again**.  This includes, for example: (a) something smart enough to build a nanosystem which has been explicitly authorized to build a nanosystem; or (b) something smart enough to build a nanosystem and also smart enough to gain unauthorized access to the Internet and pay a human to put together the ingredients for a nanosystem; or (c) something smart enough to get unauthorized access to the Internet and build something smarter than itself on the number of machines it can hack; or (d) something smart enough to treat humans as manipulable machinery and which has any authorized or unauthorized two-way causal channel with humans; or (e) something smart enough to improve itself enough to do (b) or (d); etcetera.  
We can gather all sorts of information beforehand *from less powerful systems that will not kill us if we screw up operating them;* but once we are running more powerful systems, we can no longer update on sufficiently catastrophic errors.  This is where practically all of the real lethality comes from, that we have to get things right on the first sufficiently-critical try.  If we had unlimited retries – if every time an AGI destroyed all the galaxies we got to go back in time four years and try again – we would in a hundred years figure out which bright ideas actually worked.  Human beings can figure out pretty difficult things over time, when they get lots of tries; when a failed guess kills literally everyone, that is harder.  That we have to get a bunch of key stuff right *on the first try* is where most of the lethality really and ultimately comes from; likewise the fact that no authority is here to tell us a list of what exactly is ‘key’ and will kill us if we get it wrong.  (One remarks that most people are so absolutely and flatly unprepared by their ‘scientific’ educations to challenge pre-paradigmatic puzzles with no scholarly authoritative supervision, that they do not even realize how much harder that is, or how incredibly lethal it is to demand getting that right on the first critical try.)\n\n\n**4**.  **We can’t just “decide not to build AGI”** because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world.  **The given lethal challenge is to solve within a time limit,** driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.  
Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit – it does not lift it, unless computer hardware and computer software progress are both brought to complete severe halts across the whole Earth.  The current state of this cooperation to have every big actor refrain from doing the stupid thing, is that at present some large actors with a lot of researchers and computing power are led by people who vocally disdain all talk of AGI safety (eg Facebook AI Research).  Note that needing to solve AGI alignment *only* within a time limit, but with unlimited safe retries for rapid experimentation on the full-powered system; or *only* on the first critical try, but with an unlimited time bound; would both be terrifically humanity-threatening challenges by historical standards *individually*.\n\n\n**5**.  **We can’t just build a very weak system**, which is less dangerous because it is so weak, and declare victory; because later there will be more actors that have the capability to build a stronger system and one of them will do so.  I’ve also in the past called this the ‘safe-but-useless’ tradeoff, or ‘safe-vs-useful’.  People keep on going “why don’t we only use AIs to do X, that seems safe” and the answer is almost always either “doing X in fact takes very powerful cognition that is not passively safe” or, even more commonly, “because restricting yourself to doing X will not prevent Facebook AI Research from destroying the world six months later”.  If all you need is an object that doesn’t do dangerous things, you could try a sponge; a sponge is very passively safe.  Building a sponge, however, does not prevent Facebook AI Research from destroying the world six months later when they catch up to the leading actor.\n\n\n**6**.  
**We need to align the performance of some large task, a ‘pivotal act’ that prevents other people from building an unaligned AGI that destroys the world.**  While the number of actors with AGI is few or one, they must execute some “pivotal act”, strong enough to flip the gameboard, using an AGI powerful enough to do that.  It’s not enough to be able to align a *weak* system – we need to align a system that can do some single *very large thing.*  The example I usually give is “burn all GPUs”.  This is not what I think you’d actually want to do with a powerful AGI – the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align.  However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there.  So I picked an example where if anybody says “how dare you propose burning all GPUs?” I can say “Oh, well, I don’t *actually* advocate doing that; it’s just a mild overestimate for the rough power level of what you’d have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years.”  (If it wasn’t a mild overestimate, then ‘burn all GPUs’ would actually be the minimal pivotal task and hence correct answer, and I wouldn’t be able to give that denial.)  Many clever-sounding proposals for alignment fall apart as soon as you ask “How could you use this to align a system that you could use to shut down all the GPUs in the world?” because it’s then clear that the system can’t do something that powerful, or, if it can do that, the system wouldn’t be easy to align.  
A GPU-burner is also a system powerful enough to, and purportedly authorized to, build nanotechnology, so it requires operating in a dangerous domain at a dangerous level of intelligence and capability; and this goes along with any non-fantasy attempt to name a way an AGI could change the world such that a half-dozen other would-be AGI-builders won’t destroy the world 6 months later.\n\n\n**7**.  The reason why nobody in this community has successfully named a ‘pivotal weak act’ where you do something weak enough with an AGI to be passively safe, but powerful enough to prevent any other AGI from destroying the world a year later – and yet also we can’t just go do that right now and need to wait on AI – is that *nothing like that exists*.  There’s no reason why it should exist.  There is not some elaborate clever reason why it exists but nobody can see it.  It takes a lot of power to do something to the current world that prevents any other AGI from coming into existence; nothing which can do that is passively safe in virtue of its weakness.  If you can’t solve the problem right now (which you can’t, because you’re opposed to other actors who don’t want it to be solved, and those actors are on roughly the same level as you) then you are resorting to some cognitive system that can do things you could not figure out how to do yourself, that you were not *close* to figuring out because you are not *close* to being able to, for example, burn all GPUs.  Burning all GPUs would *actually* stop Facebook AI Research from destroying the world six months later; weaksauce Overton-abiding stuff about ‘improving public epistemology by setting GPT-4 loose on Twitter to provide scientifically literate arguments about everything’ will be cool but will not actually prevent Facebook AI Research from destroying the world six months later, or some eager open-source collaborative from destroying the world a year later if you manage to stop FAIR specifically.  
**There are no pivotal weak acts**.\n\n\n**8**.  **The best and easiest-found-by-optimization algorithms for solving problems we want an AI to solve, readily generalize to problems we’d rather the AI not solve**; you can’t build a system that only has the capability to drive red cars and not blue cars, because all red-car-driving algorithms generalize to the capability to drive blue cars.\n\n\n**9**.  The builders of a safe system, by hypothesis on such a thing being possible, would need to operate their system in a regime where it has the *capability* to kill everybody or make itself even more dangerous, but has been successfully designed to not do that.  **Running AGIs doing something pivotal are not passively safe,** they’re the equivalent of nuclear cores that require actively maintained design properties to not go supercritical and melt down.\n\n\n \n\n\n### Section B:\n\n\nOkay, but as we all know, modern machine learning is like a genie where you just give it a wish, right?  Expressed as some mysterious thing called a ‘loss function’, but which is basically just equivalent to an English wish phrasing, right?  And then if you pour in enough computing power you get your wish, right?  So why not train a giant stack of transformer layers on a dataset of agents doing nice things and not bad things, throw in the word ‘corrigibility’ somewhere, crank up that computing power, and get out an aligned AGI?\n\n\n \n\n\n#### Section B.1:  The distributional leap.\n\n\n**10**.  You can’t train alignment by running lethally dangerous cognitions, observing whether the outputs kill or deceive or corrupt the operators, assigning a loss, and doing supervised learning.  **On anything like the standard ML paradigm, you would need to somehow generalize optimization-for-alignment you did in safe conditions, across a big distributional shift to dangerous conditions**.  
(Some generalization of this seems like it would have to be true even outside that paradigm; you wouldn’t be working on a live unaligned superintelligence to align it.)  This alone is a point that is sufficient to kill a lot of naive proposals from people who never did or could concretely sketch out any specific scenario of what training they’d do, in order to align what output – which is why, of course, they never concretely sketch anything like that.  **Powerful AGIs doing dangerous things that will kill you if misaligned, must have an alignment property that generalized far out-of-distribution from safer building/training operations that didn’t kill you.**  This is where a huge amount of lethality comes from on anything remotely resembling the present paradigm.  Unaligned operation at a dangerous level of intelligence\\*capability will kill you; so, if you’re starting with an unaligned system and labeling outputs in order to get it to learn alignment, the training regime or building regime must be operating at some lower level of intelligence\\*capability that is passively safe, where its currently-unaligned operation does not pose any threat.  (Note that anything substantially smarter than you poses a threat given *any* realistic level of capability.  Eg, “being able to produce outputs that humans look at” is probably sufficient for a generally much-smarter-than-human AGI to [navigate its way out of the causal systems that are humans](https://www.yudkowsky.net/singularity/aibox), especially in the real world where somebody trained the system on terabytes of Internet text, rather than somehow keeping it ignorant of the latent causes of its source code and training environments.)\n\n\n**11**.  If cognitive machinery doesn’t generalize far out of the distribution where you did tons of training, it can’t solve problems on the order of ‘build nanotechnology’ where it would be too expensive to run a million training runs of failing to build nanotechnology.  
There is no pivotal act this weak; **there’s no known case where you can entrain a safe level of ability on a safe environment where you can cheaply do millions of runs, and deploy that capability to save the world** and prevent the next AGI project up from destroying the world two years later.  Pivotal weak acts like this aren’t known, and not for want of people looking for them.  So, again, you end up needing alignment to generalize way out of the training distribution – not just because the training environment needs to be safe, but because the training environment probably also needs to be *cheaper* than evaluating some real-world domain in which the AGI needs to do some huge act.  You don’t get 1000 failed tries at burning all GPUs – because people will notice, even leaving out the consequences of capabilities success and alignment failure.\n\n\n**12**.  **Operating at a highly intelligent level is a drastic shift in distribution from operating at a less intelligent level**, opening up new external options, and probably opening up even more new internal choices and modes.  Problems that materialize at high intelligence and danger levels may fail to show up at safe lower levels of intelligence, or may recur after being suppressed by a first patch.\n\n\n**13**.  **Many alignment problems of superintelligence will not naturally appear at pre-dangerous, passively-safe levels of capability**.  Consider the internal behavior ‘change your outer behavior to deliberately look more aligned and deceive the programmers, operators, and possibly any loss functions optimizing over you’.  This problem is one that will appear at the superintelligent level; if, being otherwise ignorant, we guess that it is among the *median* such problems in terms of how *early* it naturally appears in earlier systems, then around *half* of the alignment problems of superintelligence will first naturally materialize *after* that one first starts to appear.  
Given *correct* foresight of which problems will naturally materialize *later,* one could try to deliberately materialize such problems earlier, and get in some observations of them.  This helps to the extent (a) that we actually correctly forecast all of the problems that will appear later, or some superset of those; (b) that we succeed in preemptively materializing a superset of problems that will appear later; and (c) that we can actually solve, in the earlier laboratory that is out-of-distribution for us relative to the real problems, those alignment problems that would be lethal if we mishandle them when they materialize later.  Anticipating *all* of the really dangerous ones, and then successfully materializing them, in the correct form for early solutions to generalize over to later solutions, *sounds possibly kinda hard*.\n\n\n**14**.  **Some problems**, like ‘the AGI has an option that (looks to it like) it could successfully kill and replace the programmers to fully optimize over its environment’, **seem like their natural order of appearance could be that they first appear only in fully dangerous domains**.  Really actually having a *clear* option to brain-level-persuade the operators or escape onto the Internet, build nanotech, and destroy all of humanity – in a way where you’re fully clear that you know the relevant facts, and estimate only a not-worth-it low probability of learning something which changes your preferred strategy if you bide your time another month while further growing in capability – is an option that first gets evaluated for real at the point where an AGI fully expects it can defeat its creators.  We can try to manifest an echo of that apparent scenario in earlier toy domains.  
Trying to train by gradient descent against that behavior, in that toy domain, is something I’d expect to produce not-particularly-coherent local patches to thought processes, which would break with near-certainty inside a superintelligence generalizing far outside the training distribution and thinking very different thoughts.  Also, programmers and operators themselves, who are used to operating in not-fully-dangerous domains, are operating out-of-distribution when they enter into dangerous ones; our methodologies may at that time break.\n\n\n**15**.  **Fast capability gains seem likely, and may break lots of previous alignment-required invariants simultaneously.**  Given otherwise insufficient foresight by the operators, I’d expect a lot of those problems to appear approximately simultaneously after a sharp capability gain.  See, again, the case of human intelligence.  We didn’t break alignment with the ‘inclusive reproductive fitness’ outer loss function, immediately after the introduction of farming – something like 40,000 years into a 50,000 year Cro-Magnon takeoff, as was itself running very quickly relative to the outer optimization loop of natural selection.  Instead, we got a lot of technology more advanced than was in the ancestral environment, including contraception, in one very fast burst relative to the speed of the outer optimization loop, late in the general intelligence game.  We started reflecting on ourselves a lot more, started being programmed a lot more by cultural evolution, and lots and lots of assumptions underlying our alignment in the ancestral training environment broke simultaneously.  (People will perhaps rationalize reasons why this abstract description doesn’t carry over to gradient descent; eg, “gradient descent has less of an information bottleneck”.  
My model of this variety of reader has an inside view, which they will label an outside view, that assigns great relevance to some other data points that are *not* observed cases of an outer optimization loop producing an inner general intelligence, and assigns little importance to our one data point actually featuring the phenomenon in question.  When an outer optimization loop actually produced general intelligence, it broke alignment after it turned general, and did so relatively late in the game of that general intelligence accumulating capability and knowledge, almost immediately before it turned ‘lethally’ dangerous relative to the outer optimization loop of natural selection.  Consider skepticism, if someone is ignoring this one warning, especially if they are not presenting equally lethal and dangerous things that they say will go wrong instead.)\n\n\n \n\n\n#### Section B.2:  Central difficulties of outer and inner alignment.\n\n\n**16**.  Even if you train really hard on an exact loss function, that doesn’t thereby create an explicit internal representation of the loss function inside an AI that then continues to pursue that exact loss function in distribution-shifted environments.  Humans don’t explicitly pursue inclusive genetic fitness; **outer optimization even on a very exact, very simple loss function doesn’t produce inner optimization in that direction**.  This happens *in practice in real life,* it is what happened in *the only case we know about*, and it seems to me that there are deep theoretical reasons to expect it to happen again: the *first* semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions.  
This is sufficient on its own, even ignoring many other items on this list, to trash entire categories of naive alignment proposals which assume that if you optimize a bunch on a loss function calculated using some simple concept, you get perfect inner alignment on that concept.\n\n\n**17**.  More generally, a superproblem of ‘outer optimization doesn’t produce inner alignment’ is that **on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or verify that they’re there, rather than just observable outer ones you can run a loss function over.**  This is a problem when you’re trying to generalize out of the original training distribution, because, eg, the outer behaviors you see could have been produced by an inner-misaligned system that is deliberately producing outer behaviors that will fool you.  We don’t know how to get any bits of information into the *inner* system rather than the *outer* behaviors, in any systematic or general way, on the current optimization paradigm.\n\n\n**18**.  **There’s no reliable Cartesian-sensory ground truth** (reliable loss-function-calculator) **about whether an output is ‘aligned’**, because some outputs destroy (or fool) the human operators and produce a different environmental causal chain behind the externally-registered loss function.  That is, if you show an agent a reward signal that’s currently being generated by humans, the signal is not *in general* a *reliable perfect ground truth* about *how aligned an action was*, because another way of producing a high reward signal is to deceive, corrupt, or replace the human operators with a different causal system which generates that reward signal.  
When you show an agent an environmental reward signal, you are not showing it something that is a reliable ground truth about whether the system did the thing you wanted it to do; *even if* it ends up perfectly inner-aligned on that reward signal, or learning some concept that *exactly* corresponds to ‘wanting states of the environment which result in a high reward signal being sent’, an AGI strongly optimizing on that signal will kill you, because the sensory reward signal was not a ground truth about alignment (as seen by the operators).\n\n\n**19**.  More generally, **there is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment** – to point to *latent events and objects and properties in the environment,* rather than *relatively shallow functions of the sense data and reward.*  This isn’t to say that nothing in the system’s goal (whatever goal accidentally ends up being inner-optimized over) could ever point to anything in the environment by *accident*.  Humans ended up pointing to their environments at least partially, though we’ve got lots of internally oriented motivational pointers as well.  But insofar as the current paradigm works at all, the on-paper design properties say that it only works for aligning on known direct functions of sense data and reward functions.  All of these kill you if optimized-over by a sufficiently powerful intelligence, because they imply strategies like ‘kill everyone in the world using nanotech to strike before they know they’re in a battle, and have control of your reward button forever after’.  It just isn’t *true* that we know a function on webcam input such that every world with that webcam showing the right things is safe for us creatures outside the webcam.  
This general problem is a fact about the territory, not the map; it’s a fact about the actual environment, not the particular optimizer, that lethal-to-us possibilities exist in some possible environments underlying every given sense input.\n\n\n**20**.  Human operators are fallible, breakable, and manipulable.  **Human raters make systematic errors – regular, compactly describable, predictable errors**.  To *faithfully* learn a function from ‘human feedback’ is to learn (from our external standpoint) an unfaithful description of human preferences, with errors that are not random (from the outside standpoint of what we’d hoped to transfer).  If you perfectly learn and perfectly maximize *the referent of* rewards assigned by human operators, that kills them.  It’s a fact about the territory, not the map – about the environment, not the optimizer – that the *best predictive* explanation for human answers is one that predicts the systematic errors in our responses, and therefore is a psychological concept that correctly predicts the higher scores that would be assigned to human-error-producing cases.\n\n\n**21**.  There’s something like a single answer, or a single bucket of answers, for questions like ‘What’s the environment really like?’ and ‘How do I figure out the environment?’ and ‘Which of my possible outputs interact with reality in a way that causes reality to have certain properties?’, where a simple outer optimization loop will straightforwardly shove optimizees into this bucket.  When you have a wrong belief, reality hits back at your wrong predictions.  When you have a broken belief-updater, reality hits back at your broken predictive mechanism via predictive losses, and a gradient descent update fixes the problem in a simple way that can easily cohere with all the other predictive stuff.  In contrast, when it comes to a choice of utility function, there are unbounded degrees of freedom and multiple reflectively coherent fixpoints.  
Reality doesn’t ‘hit back’ against things that are locally aligned with the loss function on a particular range of test cases, but globally misaligned on a wider range of test cases.  This is the very abstract story about why hominids, once they finally started to generalize, generalized their *capabilities* to Moon landings, but their inner optimization no longer adhered very well to the outer-optimization goal of ‘relative inclusive reproductive fitness’ – even though they were in their ancestral environment optimized very strictly around this one thing and nothing else.  This abstract dynamic is something you’d expect to be true about outer optimization loops on the order of both ‘natural selection’ and ‘gradient descent’.  The central result:  **Capabilities generalize further than alignment once capabilities start to generalize far**.\n\n\n**22**.  There’s a relatively simple core structure that explains why complicated cognitive machines work; which is why such a thing as general intelligence exists and not just a lot of unrelated special-purpose solutions; which is why capabilities generalize after outer optimization infuses them into something that has been optimized enough to become a powerful inner optimizer.  The fact that this core structure is simple and relates generically to [low-entropy high-structure environments](https://intelligence.org/2017/12/06/chollet/) is why humans can walk on the Moon.  **There is no analogous truth about there being a simple core of alignment**, especially not one that is *even easier* for gradient descent to find than it would have been for natural selection to just find ‘want inclusive reproductive fitness’ as a well-generalizing solution within ancestral humans.  Therefore, capabilities generalize further out-of-distribution than alignment, once they start to generalize at all.\n\n\n**23**.  
**Corrigibility is anti-natural to consequentialist reasoning**; “you can’t bring the coffee if you’re dead” for almost every kind of coffee.  We (MIRI) [tried and failed](https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f04/forum-digest-corrigibility-utility-indifference-and-related-control-ideas) to find a coherent formula for an agent that would let itself be shut down (without that agent actively trying to get shut down).  Furthermore, many anti-corrigible lines of reasoning like this may only first appear at high levels of intelligence.\n\n\n**24**.  There are two fundamentally different approaches you can potentially take to alignment, which are unsolvable for two different sets of reasons; therefore, **by becoming confused and ambiguating between the two approaches, you can confuse yourself about whether alignment is necessarily difficult**.  The first approach is to build a CEV-style Sovereign which wants exactly what we extrapolated-want and is therefore safe to let optimize all the future galaxies without it accepting any human input trying to stop it.  The second course is to build corrigible AGI which doesn’t want exactly what we want, and yet somehow fails to kill us and take over the galaxies despite that being a convergent incentive there.\n\n\n1. The first thing generally, or CEV specifically, is unworkable because **the complexity of what needs to be aligned or meta-aligned for our Real Actual Values is far out of reach for our FIRST TRY at AGI**.  Yes I mean specifically that the *dataset, meta-learning algorithm, and what needs to be learned,* is far out of reach for our first try.  It’s not just non-hand-codable, it is *unteachable* on-the-first-try because *the thing you are trying to teach is too weird and complicated.*\n2. 
The second thing looks unworkable (less so than CEV, but still lethally unworkable) because **corrigibility runs** ***actively counter*** **to instrumentally convergent behaviors** within a core of general intelligence (the capability that generalizes far out of its original distribution).  You’re not trying to make it have an opinion on something the core was previously neutral on.  You’re trying to take a system implicitly trained on lots of arithmetic problems until its machinery started to reflect the common coherent core of arithmetic, and get it to say that as a special case 222 + 222 = 555.  You can maybe train something to do this in a particular training distribution, but it’s incredibly likely to break when you present it with new math problems far outside that training distribution, on a system which successfully generalizes capabilities that far at all.\n\n\n \n\n\n#### Section B.3:  Central difficulties of *sufficiently good and useful* transparency / interpretability.\n\n\n**25**.  **We’ve got no idea what’s actually going on inside the giant inscrutable matrices and tensors of floating-point numbers**.  Drawing interesting graphs of where a transformer layer is focusing attention doesn’t help if the question that needs answering is “So was it planning how to kill us or not?”\n\n\n**26**.  Even if we did know what was going on inside the giant inscrutable matrices while the AGI was still too weak to kill us, this would just result in us dying with more dignity, if DeepMind refused to run that system and let Facebook AI Research destroy the world two years later.  **Knowing that a medium-strength system of inscrutable matrices is planning to kill us, does not thereby let us build a high-strength system of inscrutable matrices that isn’t planning to kill us**.\n\n\n**27**.  
When you explicitly optimize against a detector of unaligned thoughts, you’re partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect.  **Optimizing against an interpreted thought optimizes against interpretability**.\n\n\n**28**.  The AGI is smarter than us in whatever domain we’re trying to operate it inside, so we cannot mentally check all the possibilities it examines, and we cannot see all the consequences of its outputs using our own mental talent.  **A powerful AI searches parts of the option space we don’t, and we can’t foresee all its options**.\n\n\n**29**.  The outputs of an AGI go through a huge, not-fully-known-to-us domain (the real world) before they have their real consequences.  **Human beings cannot inspect an AGI’s output to determine whether the consequences will be good**.\n\n\n**30**.  Any pivotal act that is not something we can go do right now, will take advantage of the AGI figuring out things about the world we don’t know so that it can make plans we wouldn’t be able to make ourselves.  It knows, at the least, the fact we didn’t previously know, that some action sequence results in the world we want.  Then humans will not be competent to use their own knowledge of the world to figure out all the results of that action sequence.  An AI whose action sequence you can fully understand all the effects of, before it executes, is much weaker than humans in that domain; you couldn’t make the same guarantee about an unaligned human as smart as yourself and trying to fool you.  **There is no pivotal output of an AGI that is humanly checkable and can be used to safely save the world but only after checking it**; this is another form of pivotal weak act which does not exist.\n\n\n**31**.  
A strategically aware intelligence can choose its visible outputs to have the consequence of deceiving you, including about such matters as whether the intelligence has acquired strategic awareness; **you can’t rely on behavioral inspection to determine facts about an AI which that AI might want to deceive you about**.  (Including how smart it is, or whether it’s acquired strategic awareness.)\n\n\n**32**.  Human thought partially exposes only a partially scrutable outer surface layer.  Words only trace our real thoughts.  Words are not an AGI-complete data representation in its native style.  The underparts of human thought are not exposed for direct imitation learning and can’t be put in any dataset.  **This makes it hard and probably impossible to train a powerful system entirely on imitation of human words or other human-legible contents**, which are only impoverished subsystems of human thoughts; ***unless*** **that system is powerful enough to contain inner intelligences figuring out the humans**, and at that point it is no longer really working as imitative human thought.\n\n\n**33**.  **The AI does not think like you do**, the AI doesn’t have thoughts built up from the same concepts you use, it is utterly alien on a staggering scale.  Nobody knows what the hell GPT-3 is thinking, not *only* because the matrices are opaque, but because the *stuff within that opaque container* is, very likely, incredibly alien – nothing that would translate well into comprehensible human thinking, even if we could see past the giant wall of floating-point numbers to what lay behind.\n\n\n \n\n\n#### Section B.4:  Miscellaneous unworkable schemes.\n\n\n**34**.  
**Coordination schemes between superintelligences are not things that humans can participate in** (eg because humans can’t reason reliably about the code of superintelligences); a “multipolar” system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like “the 20 superintelligences cooperate with each other but not with humanity”.\n\n\n**35**.  Schemes for playing “different” AIs off against each other stop working if those AIs advance to the point of being able to coordinate via reasoning about (probability distributions over) each others’ code.  **Any system of sufficiently intelligent agents can probably behave as a single agent, even if you imagine you’re playing them against each other.**  Eg, if you set an AGI that is secretly a paperclip maximizer, to check the output of a nanosystems designer that is secretly a staples maximizer, then even if the nanosystems designer is not able to deduce what the paperclip maximizer really wants (namely paperclips), it could still logically commit to share half the universe with any agent checking its designs if those designs were allowed through, *if* the checker-agent can verify the suggester-system’s logical commitment and hence logically depend on it (which excludes human-level intelligences).  Or, if you prefer simplified catastrophes without any logical decision theory, the suggester could bury in its nanosystem design the code for a new superintelligence that will visibly (to a superhuman checker) divide the universe between the nanosystem designer and the design-checker.\n\n\n**36**.  
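The “coordinate by verifying each other’s code” move in item 35 resembles what the game-theory literature calls a program equilibrium. A minimal toy sketch of the simplest such strategy — cooperate exactly when the other program is verifiably identical to yours — where the one-line “source code” stand-ins are hypothetical placeholders, not anything from the post:

```python
def cliquebot(opponent_source: str, my_source: str) -> str:
    """Cooperate ("C") iff the opponent's source is verifiably identical to mine.

    Mutual source inspection lets two such programs logically depend on each
    other's choice -- the kind of verification the post notes humans cannot
    perform on superintelligences.
    """
    return "C" if opponent_source == my_source else "D"

CLIQUE_SRC = "cliquebot-v1"    # placeholder standing in for the agent's full source
DEFECT_SRC = "always-defect"   # placeholder for a different program, e.g. a plain defector

# Two copies of the same program pass each other's check and cooperate:
assert cliquebot(CLIQUE_SRC, CLIQUE_SRC) == "C"
# A program whose code differs fails the check and is met with defection:
assert cliquebot(DEFECT_SRC, CLIQUE_SRC) == "D"
```

This is only the degenerate base case; the richer schemes in item 35 involve commitments conditional on proofs about the other agent’s code rather than literal equality.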
What makes an air conditioner ‘magic’ from the perspective of say the thirteenth century, is that even if you correctly show them the design of the air conditioner in advance, they won’t be able to understand from seeing that design why the air comes out cold; the design is exploiting regularities of the environment, rules of the world, laws of physics, that they don’t know about.  The domain of human thought and human brains is very poorly understood by us, and exhibits phenomena like optical illusions, hypnosis, psychosis, mania, or simple afterimages produced by strong stimuli in one place leaving neural effects in another place.  Maybe a superintelligence couldn’t defeat a human in a very simple realm like logical tic-tac-toe; if you’re fighting it in an incredibly complicated domain you understand poorly, like human minds, you should expect to be defeated by ‘magic’ in the sense that even if you saw its strategy you would not understand why that strategy worked.  **AI-boxing can only work on relatively weak AGIs; the human operators are not secure systems**.\n\n\n \n\n\n### Section C:\n\n\nOkay, those are some significant problems, but lots of progress is being made on solving them, right?  There’s a whole field calling itself “AI Safety” and many major organizations are expressing Very Grave Concern about how “safe” and “ethical” they are?\n\n\n \n\n\n**37**.  There’s a pattern that’s played out quite often, over all the times the Earth has spun around the Sun, in which some bright-eyed young scientist, young engineer, young entrepreneur, proceeds in full bright-eyed optimism to challenge some problem that turns out to be really quite difficult.  Very often the cynical old veterans of the field try to warn them about this, and the bright-eyed youngsters don’t listen, because, like, who wants to hear about all that stuff, they want to go solve the problem!  
Then this person gets beaten about the head with a slipper by reality as they find out that their brilliant speculative theory is wrong, it’s actually really hard to build the thing because it keeps breaking, and society isn’t as eager to adopt their clever innovation as they might’ve hoped, in a process which eventually produces a new cynical old veteran.  Which, if not literally optimal, is I suppose a nice life cycle to nod along to in a nature-show sort of way.  Sometimes you do something for the *first* time and there *are* no cynical old veterans to warn anyone and people can be *really* optimistic about how it will go; eg the initial Dartmouth Summer Research Project on Artificial Intelligence in 1956:  “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”  This is *less* of a viable survival plan for your *planet* if the first major failure of the bright-eyed youngsters kills *literally everyone* before they can predictably get beaten about the head with the news that there were all sorts of unforeseen difficulties and reasons why things were hard.  You don’t get any cynical old veterans, in this case, because everybody on Earth is dead.  Once you start to suspect you’re in that situation, you have to do the Bayesian thing and update now to the view you will predictably update to later: realize you’re in a situation of being that bright-eyed person who is going to encounter Unexpected Difficulties later and end up a cynical old veteran – or would be, except for the part where you’ll be dead along with everyone else.  And become that cynical old veteran *right away,* before reality whaps you upside the head in the form of everybody dying and you not getting to learn.  
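The “update now to the view you will predictably update to later” step in item 37 is an application of conservation of expected evidence: the expectation of your posterior over possible observations equals your prior, so a predictable direction of update should be made immediately. A toy numerical check (the probabilities are illustrative, not from the post):

```python
# Conservation of expected evidence: E[P(H|E)] over the possible evidence
# outcomes equals the prior P(H).  You cannot *expect* to move your
# probability in a known direction -- if you can predict the update, the
# Bayesian move is to make it now.
p_h = 0.3                # prior on hypothesis H (toy value)
p_e_given_h = 0.9        # P(evidence | H)        (toy value)
p_e_given_not_h = 0.2    # P(evidence | not H)    (toy value)

# Marginal probability of seeing the evidence:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior in each branch, by Bayes' rule:
post_if_e = p_e_given_h * p_h / p_e
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# The probability-weighted average of the two posteriors is the prior:
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - p_h) < 1e-12
```

Here `post_if_e` is above the prior and `post_if_not_e` below it, but their expectation is exactly 0.3 — the only way to expect a net update is to have not yet updated on what you already anticipate.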
**Everyone else seems to feel that, so long as reality hasn’t whapped them upside the head yet and smacked them down with the actual difficulties, they’re free to go on living out the standard life-cycle and play out their role in the script and go on being bright-eyed youngsters; there’s no cynical old veterans to warn them otherwise, after all, and there’s no proof that everything won’t go beautifully easy and fine, *given their bright-eyed total ignorance of what those later difficulties could be.***\n\n\n**38**.  **It does not appear to me that the field of ‘AI safety’ is currently being remotely productive on tackling its enormous lethal problems.**  These problems are in fact out of reach; the contemporary field of AI safety has been selected to contain people who go to work in that field anyways.  Almost all of them are there to tackle problems on which they can appear to succeed and publish a paper claiming success; if they can do that and get funded, why would they embark on a much more unpleasant project of trying something harder that they’ll fail at, just so the human species can die with marginally more dignity?  This field is not making real progress and does not have a recognition function to distinguish real progress if it took place.  You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere.\n\n\n**39**.  **I figured this stuff out using the** [**null string**](https://twitter.com/ESYudkowsky/status/1500863629490544645) **as input,** and frankly, I have a hard time myself feeling hopeful about getting real alignment work out of somebody who previously sat around waiting for somebody else to input a persuasive argument into them.  This ability to “notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them” currently is an opaque piece of cognitive machinery to me, I do not know how to train it into others.  
It probably relates to ‘[security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/)’, and a mental motion where you refuse to play out scripts, and being able to operate in a field that’s in a state of chaos.\n\n\n**40**.  “Geniuses” with nice legible accomplishments in fields with tight feedback loops where it’s easy to determine which results are good or bad right away, and so validate that this person is a genius, are (a) people who might not be able to do equally great work away from tight feedback loops, (b) people who chose a field where their genius would be nicely legible even if that maybe wasn’t the place where humanity most needed a genius, and (c) probably don’t have the mysterious gears simply because they’re *rare.*  **You cannot just pay $5 million apiece to a bunch of legible geniuses from other fields and expect to get great alignment work out of them.**  They probably do not know where the real difficulties are, they probably do not understand what needs to be done, *they cannot tell the difference between good and bad work*, and the funders also can’t tell without me standing over their shoulders evaluating everything, which I do not have the physical stamina to do.  I concede that real high-powered talents, especially if they’re still in their 20s, genuinely interested, and have done their reading, are people who, yeah, fine, have higher probabilities of making core contributions than a random bloke off the street. But I’d have more hope – not significant hope, but *more* hope – in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.\n\n\n**41**.  **Reading this document cannot make somebody a core alignment researcher**.  
That requires, not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you; that is what makes somebody a peer of its author.  It’s guaranteed that some of my analysis is mistaken, though not necessarily in a hopeful direction.  The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so.  Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly – such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn’t write, so didn’t try.  I’m not particularly hopeful of this turning out to be true in real life, but I suppose it’s one possible place for a “positive model violation” (miracle).  The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.  I knew I did not actually have the physical stamina to be a star researcher, I tried really really hard to replace myself before my health deteriorated further, and yet here I am writing this.  That’s not what surviving worlds look like.\n\n\n**42**.  **There’s no plan.**  Surviving worlds, by this point, and in fact several decades earlier, have a plan for how to survive.  It is a written plan.  The plan is not secret.  In this non-surviving world, there are no candidate plans that do not immediately fall to Eliezer instantly pointing at the giant visible gaping holes in that plan.  
Or if you don’t know who Eliezer is, you don’t even realize you need a plan, because, like, how would a human being possibly realize that without Eliezer yelling at them?  It’s not like people will yell at *themselves* about prospective alignment difficulties, they don’t have an *internal* voice of caution.  So most organizations don’t have plans, because I haven’t taken the time to personally yell at them.  ‘Maybe we should have a plan’ is deeper alignment mindset than they possess without me standing constantly on their shoulder as their personal angel pleading them into… continued noncompliance, in fact.  Relatively few are aware even that they should, to look better, produce a *pretend* plan that can fool EAs too ‘[modest](https://equilibriabook.com/toc/)’ to trust their own judgments about seemingly gaping holes in what serious-looking people apparently believe.\n\n\n**43**.  **This situation you see when you look around you is not what a surviving world looks like.**  The worlds of humanity that survive have plans.  They are not leaving to one tired guy with health problems the entire responsibility of pointing out real and lethal problems proactively.  Key people are taking internal and real responsibility for finding flaws in their own plans, instead of considering it their job to propose solutions and somebody else’s job to prove those solutions wrong.  That world started trying to solve their important lethal problems earlier than this.  Half the people going into string theory shifted into AI alignment instead and made real progress there.  
When people suggest a planetarily-lethal problem that might materialize later – there’s a lot of people suggesting those, in the worlds destined to live, and they don’t have a special status in the field, it’s just what normal geniuses there do – they’re met with either solution plans or a reason why that shouldn’t happen, not an uncomfortable shrug and ‘How can you be sure that will happen’ / ‘There’s no way you could be sure of that now, we’ll have to wait on experimental evidence.’\n\n\nA lot of those better worlds will die anyways.  It’s a genuinely difficult problem, to solve something like that on your first try.  But they’ll die with more dignity than this.\n\n\n\nThe post [AGI Ruin: A List of Lethalities](https://intelligence.org/2022/06/10/agi-ruin/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-06-11T04:07:22Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "30ee2da5818e8c8e60a13661a24361e0", "title": "Six Dimensions of Operational Adequacy in AGI Projects", "url": "https://intelligence.org/2022/06/07/six-dimensions-of-operational-adequacy-in-agi-projects/", "source": "miri", "source_type": "blog", "text": "**Editor’s note:**The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017.\nA background note:\n\n\nIt’s often the case that people are slow to abandon obsolete playbooks in response to a novel challenge. And AGI is certainly a very novel challenge.\n\n\nItalian general Luigi Cadorna offers a memorable historical example. In the Isonzo Offensive of World War I, Cadorna lost hundreds of thousands of men in futile frontal assaults against enemy trenches defended by barbed wire and machine guns.  
As morale plummeted and desertions became epidemic, Cadorna began executing his own soldiers en masse, in an attempt to cure the rest of their “cowardice.” The offensive continued for *2.5 years*.\n\n\nCadorna made many mistakes, but foremost among them was his refusal to recognize that this war was fundamentally unlike those that had come before.  Modern weaponry had forced a paradigm shift, and Cadorna’s instincts were not merely miscalibrated—they were systematically broken.  No number of small, incremental updates within his obsolete framework would be sufficient to meet the new challenge.\n\n\nOther examples of this type of mistake include the initial response of the record industry to iTunes and streaming; or, more seriously, the response of most Western governments to COVID-19.\n\n\n \n\n\n![](https://intelligence.org/wp-content/uploads/2022/06/Overton.png)\n \n\n\nAs usual, the real challenge of reference class forecasting is figuring out which reference class the thing you’re trying to model belongs to.\n\n\nFor most problems, rethinking your approach from the ground up is wasteful and unnecessary, because most problems have a similar causal structure to a large number of past cases. When the problem isn’t commensurate with existing strategies, as in the case of AGI, you need a new playbook.\n\n\n\n\n \n\n\n\n\n---\n\n\n \n\n\nI’ve sometimes been known to complain, or in a polite way scream in utter terror, that “there is no good guy group in AGI”, i.e., if a researcher on this Earth currently wishes to contribute to the common good, there are literally zero projects they can join and no project close to being joinable.  
In its present version, this document is an informal response to an AI researcher who asked me to list out the qualities of such a “good project”.\n\n\nIn summary, a “good project” needs:\n\n\n* *Trustworthy command:*  A trustworthy chain of command with respect to both legal and pragmatic control of the intellectual property (IP) of such a project; a running AGI being included as “IP” in this sense.\n* *Research closure:*  The organizational ability to *close* and/or *silo* IP to within a trustworthy section and prevent its release by sheer default.\n* *Strong opsec:*  Operational security adequate to prevent the proliferation of code (or other information sufficient to recreate code within e.g. 1 year) due to e.g. Russian intelligence agencies grabbing the code.\n* *Common good commitment:*  The project’s command and its people must have a credible commitment to both short-term and long-term goodness.  Short-term goodness comprises the immediate welfare of present-day Earth; long-term goodness is the achievement of transhumanist astronomical goods.\n* *Alignment mindset:*  Somebody on the project needs deep enough [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) plus understanding of AI cognition that they can originate new, deep measures to ensure AGI alignment; and they must be in a position of technical control or otherwise have effectively unlimited political capital.  Everybody on the project needs to understand and expect that aligning an AGI will be terrifically difficult and terribly dangerous.\n* *Requisite resource levels:*  The project must have adequate resources to compete at the frontier of AGI development, including whatever mix of computational resources, intellectual labor, and closed insights are required to produce a 1+ year lead over less cautious competing projects.\n\n\nI was asked what would constitute “minimal, adequate, and good” performance on each of these dimensions.  
I tend to divide things sharply into “not adequate” and “adequate” but will try to answer in the spirit of the question nonetheless. \n\n\n\n\n \n\n\n### Trustworthy command\n\n\n**Token:**  Not having pragmatic and legal power in the hands of people who are opposed to the very idea of trying to align AGI, or who want an AGI in every household, or who are otherwise allergic to the *easy* parts of AGI strategy.\n\n\nE.g.: Larry Page begins with the correct view that [cosmopolitan](https://arbital.com/p/value_cosmopolitan/) values are good, speciesism is bad, it would be wrong to mistreat sentient beings just because they’re implemented in silicon instead of carbon, and so on. But he then proceeds to reject the idea that goals and capabilities are [orthogonal](https://arbital.com/p/orthogonality/), that instrumental strategies are [convergent](https://arbital.com/p/instrumental_convergence/), and that value is [complex and fragile](https://arbital.com/p/complexity_of_value/). As a consequence, he expects AGI to automatically be friendly, and is liable to object to any effort to align AI [as an attempt to keep AI “chained up”](https://books.google.com/books?id=2hIcDgAAQBAJ&pg=PA32&lpg=PA32&dq=Larry+%22that+digital+life+is+the+natural+and+desirable+next+step+in+the+cosmic+evolution+and+that+if+we+let+digital+minds+be+free+rather+than+try+to+stop+or+enslave+them+the+outcome+is+almost+certain+to+be+good%22&source=bl&ots=DIQP9C1EgF&sig=ACfU3U04K3r-b1kQqEvWF71-1Oo4ppsZsw&hl=en&sa=X&ved=2ahUKEwiFrvi6-K3gAhUHwlQKHc83AhgQ6AEwAXoECAkQAQ#v=onepage&q=Larry%20%22that%20digital%20life%20is%20the%20natural%20and%20desirable%20next%20step%20in%20the%20cosmic%20evolution%20and%20that%20if%20we%20let%20digital%20minds%20be%20free%20rather%20than%20try%20to%20stop%20or%20enslave%20them%20the%20outcome%20is%20almost%20certain%20to%20be%20good%22&f=false).\n\n\nOr, e.g.: As of December 2015, Elon Musk not only wasn’t on board with closure, but apparently [wanted 
to *open-source*](https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a) superhumanly capable AI.\n\n\nElon Musk is not in his own person a majority of OpenAI’s Board, but if he can pragmatically sway a majority of that Board then this measure is not being fulfilled even to a token degree.\n\n\n(Update: Elon Musk [stepped down](https://openai.com/blog/openai-supporters/) from the OpenAI Board in February 2018.)\n\n\n**Improving:** There’s a legal contract which says that the Board doesn’t control the IP and that the alignment-aware research silo does.\n\n\n**Adequate:**  The entire command structure including all members of the finally governing Board are fully aware of the difficulty and danger of alignment.  The Board will not object if the technical leadership have disk-erasure measures ready in case the Board suddenly decides to try to open-source the AI anyway.\n\n\n**Excellent:**  Somehow *no* local authority poses a risk of stepping in and undoing any safety measures, etc.  I have no idea what incremental steps could be taken in this direction that would not make things worse.  If e.g. the government of Iceland suddenly understood how serious things had gotten and granted sanction and security to a project, that would fit this description, but I think that trying to arrange anything like this would probably make things worse globally because of the mindset it promoted.\n\n\n \n\n\n### Closure\n\n\n**Token:** It’s generally understood organizationally that some people want to keep code, architecture, and some ideas a ‘secret’ from outsiders, and everyone on the project is okay with this even if they disagree.  
In principle people aren’t being pressed to publish their interesting discoveries if they are obviously capabilities-laden; in practice, somebody always says “but someone else will probably publish a similar idea 6 months later” and acts suspicious of the hubris involved in thinking otherwise, but it remains possible to get away with not publishing at moderate personal cost.\n\n\n**Improving:** A subset of people on the project understand why some code, architecture, lessons learned, et cetera must be kept from reaching the general ML community if success is to have a probability [significantly greater than zero](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/) (because [tradeoffs between alignment and capabilities](https://arbital.com/p/aligning_adds_time/) make the challenge unwinnable if there isn’t a project with a reasonable-length lead time).  These people have formed a closed silo within the project, with the sanction and acceptance of the project leadership.  It’s socially okay to be *conservative* about what counts as potentially capabilities-laden thinking, and it’s understood that worrying about this is not a boastful act of pride or a trick to get out of needing to write papers.\n\n\n**Adequate:**  Everyone on the project understands and agrees with closure.  Information is siloed whenever not everyone on the project needs to know it.\n\n\n \n\n\n\n\n*Reminder: This is a 2017 document.*\n\n\n \n\n\n### Opsec\n\n\n**Token:**  Random people are not allowed to wander through the building.\n\n\n**Improving:**  Your little brother cannot steal the IP.  Stuff is encrypted.  Siloed project members sign NDAs.\n\n\n**Adequate:** Major governments cannot silently and unnoticeably steal the IP without a nonroutine effort.  All project members undergo government-security-clearance-style screening.  AGI code is not running on AWS, but in an airgapped server room.  
There are cleared security guards in the server room.\n\n\n**Excellent:**  Military-grade or national-security-grade security.  (It’s hard to see how attempts to get this could avoid being counterproductive, considering the difficulty of obtaining trustworthy command and common good commitment with respect to any entity that can deploy such force, and the effect that trying would have on general mindsets.)\n\n\n \n\n\n### Common good commitment\n\n\n**Token:** Project members and the chain of command are not openly talking about how dictatorship is great so long as they get to be the dictator.  The project is not directly answerable to Trump or Putin.  They say vague handwavy things about how of course one ought to promote democracy and apple pie (applause) and that everyone ought to get some share of the pot o’ gold (applause).\n\n\n**Improving:** Project members and their chain of command have come out explicitly in favor of being nice to people and eventually building a nice intergalactic civilization.  They would release a cancer cure if they had it, their state of deployment permitting, and they don’t seem likely to oppose incremental steps toward a postbiological future and the eventual realization of [most of the real value at stake](https://www.nickbostrom.com/astronomical/waste.html).\n\n\n**Adequate:**  Project members and their chain of command have an explicit commitment to something like [coherent extrapolated volition](https://arbital.com/p/cev/) as a long-run goal, AGI tech permitting, and otherwise the careful preservation of values and sentient rights through any pathway of intelligence enhancement.  In the short run, they would not do everything that seems to them like a good idea, and would first prioritize not destroying humanity or wounding its spirit with their own hands.  (E.g., if Google or Facebook consistently thought like this, they would have become concerned a lot earlier about social media degrading cognition.)  
Real actual moral humility with policy consequences is a thing.\n\n\n \n\n\n### Alignment mindset\n\n\n**Token:**  At least some people in command sort of vaguely understand that AIs don’t just automatically do whatever the alpha male in charge of the organization wants to have happen.  They’ve hired some people who are at least pretending to work on that in a technical way, not just “[ethicists](https://www.lesswrong.com/posts/SsCQHjqNT3xQAPQ6b/yudkowsky-on-agi-ethics)” to talk about trolley problems and [which monkeys should get the tasty banana](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/).\n\n\n**Improving:**  The technical work output by the “safety” group is neither obvious nor wrong.  People in command have [ordinary paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) about AIs.  They expect alignment to be somewhat difficult and to take some extra effort.  They understand that not everything they might like to do, with the first AGI ever built, is equally safe to attempt.\n\n\n**Adequate:**  The project has realized that building an AGI is *mostly* about aligning it.  Someone with full security mindset and deep understanding of AGI cognition as cognition has proven themselves able to originate new deep alignment measures, and is acting as technical lead with effectively unlimited political capital within the organization to make sure the job actually gets done.  Everyone expects alignment to be terrifically hard and terribly dangerous and full of invisible bullets whose shadow you have to see before the bullet comes close enough to hit you.  They understand that alignment severely constrains architecture and that capability often trades off against transparency.  The organization is targeting the [minimal](https://arbital.com/p/minimality_principle/) AGI doing the least dangerous cognitive work that is required to prevent the next AGI project from destroying the world.  
The [alignment assumptions](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) have been reduced into non-goal-valent statements, have been clearly written down, and are being monitored for their actual truth.\n\n\nAlignment mindset is *fundamentally* difficult to obtain for a project because [Graham’s Design Paradox](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/) applies.  People with only ordinary paranoia may not be able to distinguish the next step up in depth of cognition, and happy innocents cannot distinguish useful paranoia from suits making empty statements about risk and safety.  They also tend not to realize what they’re missing.  This means that there is a horrifically strong default that when you persuade one more research-rich person or organization or government to start a new project, that project *will* have inadequate alignment mindset unless something extra-ordinary happens.  I’ll be frank and say relative to the present world I think this essentially has to go through trusting me or Nate Soares to actually work, although see below about Paul Christiano.  The lack of clear person-independent instructions for how somebody low in this dimension can improve along this dimension is why the difficulty of this dimension is the real killer.\n\n\nIf you insisted on trying this the impossible way, I’d advise that you start by talking to a brilliant computer security researcher rather than a brilliant machine learning researcher.\n\n\n \n\n\n### Resources\n\n\n**Token:**  The project has a combination of funding, good researchers, and computing power which makes it credible as a beacon to which interested philanthropists can add more funding and other good researchers interested in aligned AGI can join.  
E.g., OpenAI would qualify as this if it were adequate on the other 5 dimensions.\n\n\n**Improving:**  The project has size and quality researchers on the level of say Facebook’s AI lab, and can credibly compete among the almost-but-not-quite biggest players.  When they focus their attention on an unusual goal, they can get it done 1+ years ahead of the general field so long as Demis doesn’t decide to do it first.  I expect e.g. the NSA would have this level of “resources” if they started playing now but didn’t grow any further.\n\n\n**Adequate:**  The project can get things done with a 2-year lead time on anyone else, and it’s not obvious that competitors could catch up even if they focused attention there.  DeepMind has a great mass of superior people and unshared tools, and is the obvious candidate for achieving adequacy on this dimension; though they would still need adequacy on other dimensions, and more closure in order to conserve and build up advantages.  As I understand it, an adequate resource advantage is explicitly what Demis was trying to achieve, before Elon blew it up, started an openness fad and an arms race, and probably got us all killed.  Anyone else trying to be adequate on this dimension would need to pull ahead of DeepMind, merge with DeepMind, or talk Demis into closing more research and putting less effort into unalignable AGI paths.\n\n\n**Excellent:**  There’s a single major project which a substantial section of the research community understands to be The Good Project that good people join, with competition to it deemed unwise and unbeneficial to the public good.  This Good Project is at least adequate along all the other dimensions.  Its major competitors lack either equivalent funding or equivalent talent and insight.  
Relative to the present world it would be **extremely difficult** to make any project like this exist with adequately trustworthy command and alignment mindset, and failed attempts to make it exist run the risk of creating still worse competitors developing unaligned AGI.\n\n\n**Unrealistic:** There is a single global Manhattan Project which is somehow not answerable to non-common-good command such as Trump or Putin or the United Nations Security Council.  It has orders of magnitude more computing power and smart-researcher-labor than anyone else.  Something keeps other AGI projects from arising and trying to race with the giant project.  The project can freely choose transparency in all transparency-capability tradeoffs and take an extra 10+ years to ensure alignment.  The project is at least adequate along all other dimensions.  This is how our distant, surviving cousins are doing it in their Everett branches that diverged centuries earlier towards [more competent civilizational equilibria](https://equilibriabook.com/toc).  You **cannot possibly** cause such a project to exist with adequately trustworthy command, alignment mindset, and common-good commitment, and you should therefore not try to make it exist, first because you will simply create a still more dire competitor developing unaligned AGI, and second because if such an AGI could be aligned it would be a hell of an [s-risk](https://www.lesswrong.com/tag/risks-of-astronomical-suffering-s-risks) given the probable command structure.  People who are [slipping sideways in reality](https://www.facebook.com/yudkowsky/posts/10154981483669228) fantasize about being able to do this.\n\n\n\n\n---\n\n\n \n\n\n\n\n*Reminder: This is a 2017 document.*\n\n\n \n\n\n### Further Remarks\n\n\nA project with “adequate” closure and a project with “improving” closure will, if joined, aggregate into a project with “improving” (aka: inadequate) closure where the closed section is a silo within an open organization.  
Similar remarks apply along other dimensions.  The aggregate of a project with NDAs, and a project with deeper employee screening, is a combined project with some unscreened people in the building and hence “improving” opsec.\n\n\n“Adequacy” on the dimensions of **closure** and **opsec** is based around my mainline-probability scenario where you unavoidably need to spend at least 1 year in a regime where the AGI is not yet alignable on a minimal act that ensures nobody else will destroy the world shortly thereafter, but during that year it’s possible to remove a bunch of safeties from the code, shift transparency-capability tradeoffs to favor capability instead, ramp up to full throttle, and immediately destroy the world.\n\n\nDuring this time period, leakage of the code to the wider world automatically results in the world being turned into paperclips.  Leakage of the code to multiple major actors such as commercial espionage groups or state intelligence agencies seems to me to stand an extremely good chance of destroying the world because at least one such state actor’s command will not reprise the alignment debate correctly and each of them will fear the others.\n\n\nI would also expect that, if key ideas and architectural lessons-learned were to leak from an insufficiently closed project that would otherwise have actually developed alignable AGI, it would be possible to use 10% as much labor to implement a non-alignable world-destroying AGI [in a shorter timeframe](https://arbital.com/p/aligning_adds_time/).  
The project must be closed *tightly* or everything ends up as paperclips.\n\n\n“Adequacy” on **common good commitment** is based on my model wherein the first [task-directed AGI](https://arbital.com/p/task_agi/) continues to operate in a regime far below that of a real superintelligence, where many tradeoffs have been made for transparency over capability and this greatly constrains self-modification.\n\n\nThis task-directed AGI is *not* able to defend against true superintelligent attack.  It *cannot* monitor other AGI projects in an unobtrusive way that grants those other AGI projects a lot of independent freedom to do task-AGI-ish things so long as they don’t create an [unrestricted superintelligence](https://arbital.com/p/Sovereign/).  The designers of the first task-directed AGI are *barely* able to operate it in a regime where the AGI doesn’t create an unaligned superintelligence inside itself or its environment.  Safe operation of the original AGI requires a continuing major effort at supervision.  The level of safety monitoring of other AGI projects required would be so great that, if the original operators deemed it good that more things be done with AGI powers, it would be far simpler and safer to do them as additional tasks running on the original task-directed AGI.  *Therefore:* Everything to do with invocation of superhuman specialized general intelligence, like superhuman science and engineering, continues to have a single effective veto point.\n\n\nThis is also true in less extreme scenarios where AGI powers can proliferate, but must be very tightly monitored, because no aligned AGI can defend against an unconstrained superintelligence if one is deliberately or accidentally created by taking off too many safeties. 
Either way, there is a central veto authority that continues to actively monitor and has the power to prevent anyone else from doing anything potentially world-destroying with AGI.\n\n\nThis in turn means that any use of AGI powers along the lines of uploading humans, trying to do human intelligence enhancement, or building a cleaner and more stable AGI to run a CEV, would be subject to the explicit veto of the command structure operating the first task-directed AGI.  If this command structure does not favor something like CEV, or vetoes transhumanist outcomes from a transparent CEV, or doesn’t allow intelligence enhancement, et cetera, then all future astronomical value can be permanently lost and even s-risks may apply.\n\n\nA universe in which 99.9% of the sapient beings have no civil rights because way back on Earth somebody decided or *voted* that emulations weren’t real people, is a universe plausibly much worse than paperclips.  (I would see as self-defeating any argument from democratic legitimacy that ends with almost all sapient beings not being able to vote.)\n\n\nIf DeepMind closed to the silo level, put on adequate opsec, somehow gained alignment mindset within the silo, and allowed trustworthy command of that silo, then in my guesstimation it *might* be possible to save the Earth (we would start to leave the floor of the [logistic success curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/)).\n\n\nOpenAI seems to me to be further behind than DeepMind along multiple dimensions.  OAI is doing significantly better “safety” research, but it is all still inapplicable to serious AGI, AFAIK, even if it’s not fake / obvious.  I do not think that either OpenAI or DeepMind are out of the basement on the logistic success curve for the alignment-mindset dimension.  
It’s not clear to me from where I sit that the miracle required to grant OpenAI a chance at alignment success is easier than the miracle required to grant DeepMind a chance at alignment success.  If Greg Brockman or other decisionmakers at OpenAI are not totally insensible, neither is Demis Hassabis.  Both OAI and DeepMind have significant metric distance to cross on Common Good Commitment; this dimension is relatively easier to max out, but it’s not maxed out just by having commanders vaguely nodding along or publishing a mission statement about moral humility, nor by a fragile political balance with some morally humble commanders and some morally nonhumble ones.  If I had a ton of money and I wanted to get a serious contender for saving the Earth out of OpenAI, I’d probably start by taking however many OpenAI researchers could pass screening and refounding a separate organization out of them, then using that as the foundation for further recruiting.\n\n\nI have never seen anyone except Paul Christiano try what I would consider to be deep macro alignment work.  E.g. if you look at Paul’s AGI scheme there is a *global alignment story* with assumptions that can be broken down, and the idea of exact human imitation is a deep one rather than a shallow defense–although I don’t think the assumptions have been broken down far enough; but nobody else knows they even ought to be trying to do anything like that.  I [also think](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal) Paul’s AGI scheme is orders-of-magnitude too costly and has chicken-and-egg alignment problems.  
*But* I wouldn’t totally rule out a project with Paul in technical command, because I would hold out hope that Paul could follow along with someone else’s deep security analysis and understand it in-paradigm even if it wasn’t his own paradigm; that Paul would suggest useful improvements and hold the global macro picture to a standard of completeness; and that Paul would take seriously how bad it would be to violate an alignment assumption even if it wasn’t an assumption within his native paradigm.  Nobody else except myself and Paul is currently in the arena of comparison.  If we were both working on the same project it would still have unnervingly few people like that.  I think we should try to get more people like this from the pool of brilliant young computer security researchers, not just the pool of machine learning researchers.  Maybe that’ll fail just as badly, but I want to see it tried.\n\n\nI doubt that it is possible to produce a written scheme for alignment, or any other kind of fixed advice, that can be handed off to a brilliant programmer with ordinary paranoia and allow them to actually succeed.  Some of the deep ideas are going to turn out to be wrong, inapplicable, or just plain missing.  
Somebody is going to have to notice the unfixable deep problems in advance of an actual blowup, and come up with new deep ideas and not just patches, as the project goes on.\n\n\n \n\n\n \n\n\n\n\n*Reminder: This is a 2017 document.*\n\n\n \n\n\n\nThe post [Six Dimensions of Operational Adequacy in AGI Projects](https://intelligence.org/2022/06/07/six-dimensions-of-operational-adequacy-in-agi-projects/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-06-08T00:43:25Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "9317b34c962926cb5284bf87a1167f1c", "title": "Shah and Yudkowsky on alignment failures", "url": "https://intelligence.org/2022/03/02/shah-and-yudkowsky-on-alignment-failures/", "source": "miri", "source_type": "blog", "text": "This is the final discussion log in the [Late 2021 MIRI Conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq) sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn.\n\n\nThe discussion begins with summaries and comments on Richard and Eliezer’s debate. Rohin’s summary has since been revised and published [in the Alignment Newsletter](https://www.alignmentforum.org/posts/3vFmQhHBosnjZXuAJ/an-171-disagreements-between-alignment-optimists-and).\n\n\nAfter this log, we’ll be concluding this sequence with an [**AMA**](https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-discussion-and-ama), where we invite you to comment with questions about AI alignment, cognition, forecasting, etc. Eliezer, Richard, Paul Christiano, Nate, and Rohin will all be participating.\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n|  Chat by Rohin and Eliezer  |  Other chat  |  Emails  |  Follow-ups  |\n\n\n\n \n\n\n19. Follow-ups to the Ngo/Yudkowsky conversation\n------------------------------------------------\n\n\n \n\n\n### 19.1. 
Quotes from the public discussion\n\n\n \n\n\n\n[Bensinger][9:22]\nInteresting extracts from the public discussion of [Ngo and Yudkowsky on AI capability gains](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/hwxj4gieR7FWNwYfa):\n\n\n*Eliezer*:\n\n\n\n> \n> I think some of your confusion may be that you’re putting “probability theory” and “Newtonian gravity” into the same bucket.  You’ve been raised to believe that powerful theories ought to meet certain standards, like successful bold advance experimental predictions, such as Newtonian gravity made about the existence of Neptune (quite a while after the theory was first put forth, though).  “Probability theory” also sounds like a powerful theory, and the people around you believe it, so you think you ought to be able to produce a powerful advance prediction it made; but it is for some reason hard to come up with an example like the discovery of Neptune, so you cast about a bit and think of the central limit theorem.  That theorem is widely used and praised, so it’s “powerful”, and it wasn’t invented *before* probability theory, so it’s “advance”, right?  So we can go on putting probability theory in the same bucket as Newtonian gravity?\n> \n> \n> They’re actually just very different kinds of ideas, ontologically speaking, and the standards to which we hold them are properly different ones.  
It seems like the sort of thing that would take a subsequence I don’t have time to write, expanding beyond the underlying obvious ontological difference between validities and empirical-truths, to cover the way in which “How do we trust this, when” differs between “I have the following new empirical theory about the underlying model of gravity” and “I think that the logical notion of ‘arithmetic’ is a good tool to use to organize our current understanding of this little-observed phenomenon, and it appears within making the following empirical predictions…”  But at least step one could be saying, “Wait, do these two kinds of ideas actually go into the same bucket at all?”\n> \n> \n> In particular it seems to me that you want properly to be asking “How do we know this empirical thing ends up looking like it’s close to the abstraction?” and not “Can you show me that this abstraction is a very powerful one?”  Like, imagine that instead of asking Newton about planetary movements and how we know that the particular bits of calculus he used were empirically true about the planets in particular, you instead started asking Newton for proof that calculus is a very powerful piece of mathematics worthy to predict the planets themselves – but in a way where you wanted to see some highly valuable material object that calculus had *produced,* like earlier praiseworthy achievements in alchemy*.*  I think this would reflect confusion and a wrongly directed inquiry; you would have lost sight of the particular reasoning steps that made ontological sense, in the course of trying to figure out whether calculus was praiseworthy under the standards of praiseworthiness that you’d been previously raised to believe in as universal standards about all ideas.\n> \n> \n> \n\n\n*Richard*:\n\n\n\n> \n> I agree that “powerful” is probably not the best term here, so I’ll stop using it going forward (note, though, that I didn’t use it in my previous comment, which I endorse more than my claims in 
the original debate).\n> \n> \n> But before I ask “How do we know this empirical thing ends up looking like it’s close to the abstraction?”, I need to ask “Does the abstraction even make sense?” Because you have the abstraction in your head, and I don’t, and so whenever you tell me that X is a (non-advance) prediction of your theory of consequentialism, I end up in a pretty similar epistemic state as if George Soros tells me that X is a prediction of the [theory of reflexivity](https://en.wikipedia.org/wiki/Reflexivity_(social_theory)), or if a complexity theorist tells me that X is a prediction of the [theory of self-organisation](https://en.wikipedia.org/wiki/Self-organization). The problem in those two cases is less that the abstraction is a bad fit for this specific domain, and more that the abstraction is not sufficiently well-defined (outside very special cases) to even be the type of thing that can robustly make predictions.\n> \n> \n> Perhaps another way of saying it is that they’re not crisp/robust/coherent concepts (although I’m open to other terms, I don’t think these ones are particularly good). And it would be useful for me to have evidence that the abstraction of consequentialism you’re using is a crisper concept than Soros’ theory of reflexivity or the theory of self-organisation. If you could explain the full abstraction to me, that’d be the most reliable way – but given the difficulties of doing so, my backup plan was to ask for impressive advance predictions, which are the type of evidence that I don’t think Soros could come up with.\n> \n> \n> I also think that, when you talk about me being raised to hold certain standards of praiseworthiness, you’re still ascribing too much modesty epistemology to me. I mainly care about novel predictions or applications insofar as they help me distinguish crisp abstractions from evocative metaphors. 
To me it’s the same type of rationality technique as asking people to make bets, to help distinguish post-hoc confabulations from actual predictions.\n> \n> \n> Of course there’s a social component to both, but that’s not what I’m primarily interested in. And of course there’s a strand of naive science-worship which thinks you have to follow the Rules in order to get anywhere, but I’d thank you to assume I’m at least making a more interesting error than that.\n> \n> \n> Lastly, on probability theory and Newtonian mechanics: I agree that you shouldn’t question how much sense it makes to use calculus in the way that you described, but that’s because the application of calculus to mechanics is so clearly-defined that it’d be very hard for the type of confusion I talked about above to sneak in. I’d put evolutionary theory halfway between them: it’s partly a novel abstraction, and partly a novel empirical truth. And in this case I do think you have to be very careful in applying the core abstraction of evolution to things like cultural evolution, because it’s easy to do so in a confused way.\n> \n> \n> \n\n\n\n\n\n \n\n\n### 19.2. Rohin Shah’s summary and thoughts\n\n\n \n\n\n\n[Shah][7:06]  (Nov. 6 email)\nNewsletter summaries attached, would appreciate it if Eliezer and Richard checked that I wasn’t misrepresenting them. (Conversation is a lot harder to accurately summarize than blog posts or papers.)\n\n\n \n\n\nBest,\n\n\nRohin\n\n\n \n\n\n*Planned summary for the Alignment Newsletter:*\n\n\n \n\n\nEliezer is known for being pessimistic about our chances of averting AI catastrophe. His main argument is roughly as follows:\n\n\n\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n> \n> […] Eliezer is known for being pessimistic about our chances of averting AI catastrophe. His main argument\n> \n> \n> \n\n\nI request that people stop describing things as my “main argument” unless I’ve described them that way myself.  
These are answers that I customized for Richard Ngo’s questions.  Different questions would get differently emphasized replies.  “His argument in the dialogue with Richard Ngo” would be fine.\n\n\n\n\n\n[Shah][1:53]  (Nov. 8 email reply)\n\n> \n> I request that people stop describing things as my “main argument” unless I’ve described them that way myself.\n> \n> \n> \n\n\nFair enough. It still does seem pretty relevant to know the purpose of the argument, and I would like to state something along those lines in the summary. For example, perhaps it is:\n\n\n1. One of several relatively-independent lines of argument that suggest we’re doomed; cutting this argument would make almost no difference to the overall take\n2. Your main argument, but with weird Richard-specific emphases that you wouldn’t have necessarily included if making this argument more generally; if someone refuted the core of the argument to your satisfaction it would make a big difference to your overall take\n3. Not actually an argument you think much about at all, but somehow became the topic of discussion\n4. Something in between these options\n5. Something else entirely\n\n\nIf you can’t really say, then I guess I’ll just say “His argument in this particular dialogue”.\n\n\nI’d also like to know what the main argument is (if there is a main argument rather than lots of independent lines of evidence or something else entirely); it helps me orient to the discussion, and I suspect would be useful for newsletter readers as well.\n\n\n\n\n\n\n[Shah][7:06]  (Nov. 6 email)\n1. We are very likely going to keep improving AI capabilities until we reach AGI, at which point either the world is destroyed, or we use the AI system to take some pivotal act before some careless actor destroys the world.\n\n\n2. 
In either case, the AI system must be producing high-impact, world-rewriting plans; such plans are “consequentialist” in that the simplest way to get them (and thus, the one we will first build) is if you are forecasting what might happen, thinking about the expected consequences, considering possible obstacles, searching for routes around the obstacles, etc. If you don’t do this sort of reasoning, your plan goes off the rails very quickly; it is highly unlikely to lead to high impact. In particular, long lists of shallow heuristics (as with current deep learning systems) are unlikely to be enough to produce high-impact plans.\n\n\n3. We’re producing AI systems by selecting for systems that can do impressive stuff, which will eventually produce AI systems that can accomplish high-impact plans using a general underlying “consequentialist”-style reasoning process (because that’s the only way to keep doing more impressive stuff). However, this selection process does *not* constrain the goals towards which those plans are aimed. In addition, most goals seem to have convergent instrumental subgoals like survival and power-seeking that would lead to extinction. This suggests that, unless we find a way to constrain the goals towards which plans are aimed, we should expect an existential catastrophe.\n\n\n4. None of the methods people have suggested for avoiding this outcome seem like they actually avert this story.\n\n\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n> \n> […] This suggests that, unless we find a way to constrain the goals towards which plans are aimed, we should expect an existential catastrophe.\n> \n> \n> \n\n\nI would not say we face catastrophe “unless we find a way to constrain the goals towards which plans are aimed”.  This is, first of all, not my ontology, second, I don’t go around randomly slicing away huge sections of the solution space.  
Workable:  “This suggests that we should expect an existential catastrophe by default.” \n\n\n\n\n\n[Shah][1:53]  (Nov. 8 email reply)\n\n> \n> I would not say we face catastrophe “unless we find a way to constrain the goals towards which plans are aimed”.\n> \n> \n> \n\n\nShould I also change “However, this selection process does *not* constrain the goals towards which those plans are aimed”, and if so what to? (Something along these lines seems crucial to the argument, but if this isn’t your native ontology, then presumably you have some other thing you’d say here.)\n\n\n\n\n\n\n[Shah][7:06]  (Nov. 6 email)\nRichard responds to this with a few distinct points:\n\n\n1. It might be possible to build narrow AI systems that humans use to save the world, for example, by making AI systems that do better alignment research. Such AI systems do not seem to require the property of making long-term plans in the real world in point (3) above, and so could plausibly be safe. We might say that narrow AI systems could save the world but can’t destroy it, because humans will put plans into action for the former but not the latter.\n\n\n2. It might be possible to build general AI systems that only *state* plans for achieving a goal of interest that we specify, without *executing* that plan.\n\n\n3. It seems possible to create consequentialist systems with constraints upon their reasoning that lead to reduced risk.\n\n\n4. It also seems possible to create systems that make effective plans, but towards ends that are not about outcomes in the real world, but instead are about properties of the plans — think for example of [*corrigibility*](https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility) ([*AN #35*](https://mailchi.mp/bbd47ba94e84/alignment-newsletter-35)) or deference to a human user.\n\n\n5. 
(Richard is also more bullish on coordinating not to use powerful and/or risky AI systems, though the debate did not discuss this much.)\n\n\n \n\n\nEliezer’s responses:\n\n\n1. This is plausible, but seems unlikely; narrow not-very-consequentialist AI (aka “long lists of shallow heuristics”) will probably not scale to the point of doing alignment research better than humans.\n\n\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n> \n> […] This is plausible, but seems unlikely; narrow not-very-consequentialist AI (aka “long lists of shallow heuristics”) will probably not scale to the point of doing alignment research better than humans.\n> \n> \n> \n\n\nNo, your summarized-Richard-1 is just not plausible.  “AI systems that do better alignment research” are dangerous in virtue of the lethally powerful work they are doing, not because of some particular narrow way of doing that work.  If you can do it by gradient descent then that means gradient descent got to the point of doing lethally dangerous work.  Asking for safely weak systems that do world-savingly strong tasks is almost everywhere a case of asking for nonwet water, and asking for AI that does alignment research is an extreme case in point.\n\n\n\n\n\n[Shah][1:53]  (Nov. 8 email reply)\n\n> \n> No, your summarized-Richard-1 is just not plausible. “AI systems that do better alignment research” are dangerous in virtue of the lethally powerful work they are doing, not because of some particular narrow way of doing that work.\n> \n> \n> \n\n\nHow about “AI systems that help with alignment research to a sufficient degree that it actually makes a difference are almost certainly already dangerous.”?\n\n\n(Fwiw, I used the word “plausible” because of this sentence from the doc: “*Definitely, is among the more* plausible *advance-specified miracles we could get.*“, though I guess the point was that it is still a miracle, it just also is more likely than other miracles.)\n\n\n\n\n\n\n[Ngo][9:59]  (Nov. 
6 email reply)\nThanks Rohin! Your efforts are much appreciated.\n\n\nEliezer: when you say “No, your summarized-Richard-1 is just not plausible”, do you mean the argument is implausible, or it’s not a good summary of my position (which you also think is implausible)?\n\n\nFor my part the main thing I’d like to modify is the term “narrow AI”. In general I’m talking about all systems that are not of literally world-destroying intelligence+agency. E.g. including oracle AGIs which I wouldn’t call “narrow”.\n\n\nMore generally, I don’t think all AGIs are capable of destroying the world. E.g. humans are GIs. So it might be better to characterise Eliezer as talking about *some* level of general intelligence which leads to destruction, and me as talking about the things that can be done with systems that are less general or less agentic than that.\n\n\n\n> \n> We might say that narrow AI systems could save the world but can’t destroy it, because humans will put plans into action for the former but not the latter.\n> \n> \n> \n\n\nI don’t endorse this, I think plenty of humans would be willing to use narrow AI systems to do things that could destroy the world.\n\n\n\n> \n> systems that make effective plans, but towards ends that are not about outcomes in the real world, but instead are about properties of the plans\n> \n> \n> \n\n\nI’d change this to say “systems with the primary aim of producing plans with certain properties (that aren’t just about outcomes in the world)” \n\n\n\n\n\n\n[Yudkowsky][10:18]  (Nov. 6 email reply)\n\n> \n> Eliezer: when you say “No, your summarized-Richard-1 is just not plausible”, do you mean the argument is implausible, or it’s not a good summary of my position (which you also think is implausible)?\n> \n> \n> \n\n\nI wouldn’t have presumed to state on your behalf whether it’s a good summary of your position!  I mean that the stated position is implausible, whether or not it was a good summary of your position.\n\n\n\n\n\n[Shah][7:06]  (Nov. 
6 email)\n2. This might be an improvement, but not a big one. It is the plan itself that is risky; if the AI system made a plan for a goal that wasn’t the one we actually meant, and we don’t understand that plan, that plan can still cause extinction. It is the *misaligned optimization that produced the plan* that is dangerous, even if there was no “agent” that specifically wanted the goal that the plan was optimized for.\n\n\n3 and 4. It is certainly *possible* to do such things; the space of minds that could be designed is very large. However, it is *difficult* to do such things, as they tend to make consequentialist reasoning weaker, and on our current trajectory the first AGI that we build will probably not look like that.\n\n\n\n\n\n\n[Yudkowsky][9:56]  (Nov. 6 email reply)\n\n> \n> 2. This might be an improvement, but not a big one. It is the plan itself that is risky; if the AI system made a plan for a goal that wasn’t the one we actually meant, and we don’t understand that plan, that plan can still cause extinction. It is the *misaligned optimization that produced the plan* that is dangerous, even if there was no “agent” that specifically wanted the goal that the plan was optimized for.\n> \n> \n> \n\n\nNo, it’s not a significant improvement if the “non-executed plans” from the system are meant to do things in human hands powerful enough to save the world.  They could of course be so weak as to make their human execution have no inhumanly big consequences, but this is just making the AI strategically isomorphic to a rock.  The notion of there being “no ‘agent’ that specifically wanted the goal” seems confused to me as well; this is not something I’d ever say as a restatement of one of my own opinions.  I’d shrug and tell someone to taboo the word ‘agent’ and would try to talk without using the word if they’d gotten hung up on that point.\n\n\n\n\n\n[Shah][7:06]  (Nov. 
6 email)\n*Planned opinion:*\n\n\n \n\n\nI first want to note my violent agreement with the notion that a major scary thing is “consequentialist reasoning”, and that high-impact plans require such reasoning, and that we will end up building AI systems that produce high-impact plans. Nonetheless, I am still optimistic about AI safety relative to Eliezer, which I suspect comes down to three main disagreements:\n\n\n1. There are many approaches that don’t solve the problem, but do increase the level of intelligence required before the problem leads to extinction. Examples include Richard’s points 1-4 above. For example, if we build a system that states plans without executing them, then for the plans to cause extinction they need to be complicated enough that the humans executing those plans don’t realize that they are leading to an outcome that was not what they wanted. It seems non-trivially probable to me that such approaches are sufficient to prevent extinction up to the level of AI intelligence needed before we can execute a pivotal act.\n\n\n2. The consequentialist reasoning is only scary to the extent that it is “aimed” at a bad goal. It seems non-trivially probable to me that it will be “aimed” at a goal sufficiently good to not lead to existential catastrophe, without putting in much alignment effort. \n3. I do expect some coordination to not do the most risky things.\n\n\nI wish the debate had focused more on the claim that narrow AI can’t e.g. do better alignment research, as it seems like a major crux. (For example, I think that sort of intuition drives my disagreement #1.) I expect AI progress looks a lot like “the heuristics get less and less shallow in a gradual / smooth / continuous manner” which eventually leads to the sorts of plans Eliezer calls “consequentialist”, whereas I think Eliezer expects a sharper qualitative change between “lots of heuristics” and that-which-implements-consequentialist-planning.\n\n\n\n\n\n \n\n\n20. 
November 6 conversation\n---------------------------\n\n\n \n\n\n### 20.1. Concrete plans, and AI-mediated transparency\n\n\n \n\n\n\n[Yudkowsky][13:22]\nSo I have a general thesis about a failure mode here which is that, the moment you try to sketch any concrete plan or events which correspond to the abstract descriptions, it is much more obviously wrong, and that is why the descriptions stay so abstract in the mouths of everybody who sounds more optimistic than I am.\n\n\nThis may, perhaps, be confounded by the phenomenon where I am one of the last living descendants of the lineage that ever knew how to say anything concrete at all.  Richard Feynman – or so I would now say in retrospect – is noticing concreteness dying out of the world, and being worried about that, at the point where he goes to a college and hears a professor talking about “essential objects” in class, and Feynman asks “Is a brick an essential object?” – meaning to work up to the notion of the inside of a brick, which can’t be observed because breaking a brick in half just gives you two new exterior surfaces – and everybody in the classroom has a different notion of what it would mean for a brick to be an essential object. \n\n\nRichard Feynman knew to try plugging in bricks as a special case, but the people in the classroom didn’t, and I think the mental motion has died out of the world even further since Feynman wrote about it.  The loss has spread to STEM as well.  Though if you don’t read old books and papers and contrast them to new books and papers, you wouldn’t see it, and maybe most of the people who’ll eventually read this will have no idea what I’m talking about because they’ve never seen it any other way…\n\n\nI have a thesis about how optimism over AGI works.  
It goes like this: People use really abstract descriptions and never imagine anything sufficiently concrete, and this lets the abstract properties waver around ambiguously and inconsistently to give the desired final conclusions of the argument.  So MIRI is the only voice that gives concrete examples and also by far the most pessimistic voice; if you go around fully specifying things, you can see that what gives you a good property in one place gives you a bad property someplace else, you see that you can’t get all the properties you want simultaneously.  Talk about a superintelligence building nanomachinery, talk concretely about megabytes of instructions going to small manipulators that repeat to lay trillions of atoms in place, and this shows you a lot of useful visible power paired with such unpleasantly visible properties as “no human could possibly check what all those instructions were supposed to do”.\n\n\nAbstract descriptions, on the other hand, can waver as much as they need to between what’s desirable in one dimension and undesirable in another.  Talk about “an AGI that just helps humans instead of replacing them” and never say exactly what this AGI is supposed to do, and this can be so much more optimistic so long as it never becomes too unfortunately concrete.\n\n\nWhen somebody asks you “how powerful is it?” you can momentarily imagine – without writing it down – that the AGI is helping people by giving them the full recipes for protein factories that build second-stage nanotech and the instructions to feed those factories, and reply, “Oh, super powerful! 
More than powerful enough to flip the gameboard!” Then when somebody asks how safe it is, you can momentarily imagine that it’s just giving a human mathematician a hint about proving a theorem, and say, “Oh, super duper safe, for sure, it’s just helping people!” \n\n\nOr maybe you don’t even go through the stage of momentarily imagining the nanotech and the hint, maybe you just navigate straight in the realm of abstractions from the impossibly vague wordage of “just help humans” to the reassuring and also extremely vague “help them lots, super powerful, very safe tho”.\n\n\n\n> \n> […] I wish the debate had focused more on the claim that narrow AI can’t e.g. do better alignment research, as it seems like a major crux. (For example, I think that sort of intuition drives my disagreement #1.) I expect AI progress looks a lot like “the heuristics get less and less shallow in a gradual / smooth / continuous manner” which eventually leads to the sorts of plans Eliezer calls “consequentialist”, whereas I think Eliezer expects a sharper qualitative change between “lots of heuristics” and that-which-implements-consequentialist-planning.\n> \n> \n> \n\n\nIt is in this spirit that I now ask, “What the hell could it look like concretely for a safely narrow AI to help with alignment research?”\n\n\nOr if you think that a left-handed wibble planner can totally make useful plans that are very safe because it’s all leftish and wibbly: can you please give an example of *a plan to do what?*\n\n\nAnd what I expect is for minds to bounce off that problem as they first try to visualize “Well, a plan to give mathematicians hints for proving theorems… oh, Eliezer will just say that’s not useful enough to flip the gameboard… well, plans for building nanotech… Eliezer will just say that’s not safe… darn it, this whole concreteness thing is such a conversational no-win scenario, maybe there’s something abstract I can say instead”.\n\n\n\n\n\n[Shah][16:41]\nIt’s reasonable to suspect 
failures to be concrete, but I don’t buy that hypothesis as applied to me; I think I have sufficient personal evidence against it, despite the fact that I usually speak abstractly. I don’t expect to convince you of this, nor do I particularly want to get into that sort of debate.\n\n\nI’ll note that I have the exact same experience of not seeing much concreteness, both of other people and myself, about stories that lead to doom. To be clear, in what I take to be the Eliezer-story, the part where the misaligned AI designs a pathogen that wipes out all humans or solves nanotech and gains tons of power or some other pivotal act seems fine. The part that seems to lack concreteness is how we built the superintelligence and why the superintelligence was misaligned enough to lead to extinction. (Well, perhaps. I also wouldn’t be surprised if you gave a concrete example and I disagreed that it would lead to extinction.)\n\n\nFrom my perspective, the simple concrete stories about the future are wrong and the complicated concrete stories about the future don’t sound plausible, whether about safety or about doom.\n\n\nNonetheless, here’s an attempt at some concrete stories. It is *not* the case that I think these would be convincing to you. I do expect you to say that it won’t be useful enough to flip the gameboard (or perhaps that if it could possibly flip the gameboard then it couldn’t be safe), but that seems to be because you think alignment will be way more difficult than I do (in expectation), and perhaps we should get into that instead.\n\n\n* Instead of having to handwrite code that does feature visualization or other methods of “naming neurons”, an AI assistant can automatically inspect a neural net’s weights, perform some experiments with them, and give them human-understandable “names”. What a “name” is depends on the system being analyzed, but you could imagine that sometimes it’s short memorable phrases (e.g. 
for the later layers of a language model), or pictures of central concepts (e.g. for image classifiers), or paragraphs describing the concept (e.g. for novel concepts discovered by a scientist AI). Given these names, it is much easier for humans to read off “circuits” from the neural net to understand how it works.\n* Like the above, except the AI assistant also reads out the circuits, and efficiently reimplements the neural network in, say, readable Python, that humans can then more easily mechanistically understand. (These two tasks could also be done by two different AI systems, instead of the same one; perhaps that would be easier / safer.)\n* We have AI assistants search for inputs on which the AI system being inspected would do something that humans would rate as bad. (We can choose any not-horribly-unnatural rating scheme we want that humans can understand, e.g. “don’t say something the user said not to talk about, even if it’s in their best interest” can be a tenet for finetuned GPT-N if we want.) We can either train on those inputs, or use them as a test for how well our other alignment schemes have worked.\n\n\n(These are all basically leveraging the fact that we could have AI systems that are really knowledgeable in the realm of “connecting neural net activations to human concepts”, which seems plausible to do without being super general or consequentialist.)\n\n\nThere’s also lots of meta stuff, like helping us with literature reviews, speeding up paper- and blog-post-writing, etc, but I doubt this is getting at what you care about\n\n\n\n\n\n\n[Yudkowsky][17:09]\nIf we thought that helping with literature review was enough to save the world from extinction, then we should be trying to spend at least $50M on helping with literature review right now today, and if we can’t effectively spend $50M on that, then we also can’t build the dataset required to train narrow AI to do literature review.  
Indeed, any time somebody suggests doing something weak with AGI, my response is often “Oh how about we start on that right now using humans, then,” by which question its pointlessness is revealed.\n\n\n\n\n\n[Shah][17:11]\nI mean, doesn’t seem crazy to just spend $50M on effective PAs, but in any case I agree with you that this is not the main thing to be thinking about\n\n\n\n\n\n\n[Yudkowsky][17:13]\nThe other cases of “using narrow AI to help with alignment” via pointing an AI, or rather a loss function, at a transparency problem, seem to seamlessly blend into all of the other clever-ideas we may have for getting more insight into the giant inscrutable matrices of floating-point numbers.  By this concreteness, it is revealed that we are not speaking of von-Neumann-plus-level AGIs who come over and firmly but gently set aside our paradigm of giant inscrutable matrices, and do something more alignable and transparent; rather, we are trying more tricks with loss functions to get human-language translations of the giant inscrutable matrices.\n\n\nI have thought of various possibilities along these lines myself.  
They’re on my list of things to try out when and if the EA community has the capacity to try out ML ideas in a format I could and would voluntarily access.\n\n\nThere’s a basic reason I expect the world to die despite my being able to generate infinite clever-ideas for ML transparency, which, at the usual rate of 5% of ideas working, could get us as many as three working ideas in the impossible event that the facilities were available to test 60 of my ideas.\n\n\n\n\n\n[Shah][17:15]\n\n> \n> By this concreteness, it is revealed that we are not speaking of von-Neumann-plus-level AGIs who come over and firmly but gently set aside our paradigm of giant inscrutable matrices, and do something more alignable and transparent; rather, we are trying more tricks with loss functions to get human-language translations of the giant inscrutable matrices.\n> \n> \n> \n\n\nAgreed, but I don’t see the point here\n\n\n(Beyond “Rohin and Eliezer disagree on how impossible it is to align giant inscrutable matrices”)\n\n\n(I might dispute “tricks with loss functions”, but that’s nitpicky, I think)\n\n\n\n\n\n\n[Yudkowsky][17:16]\nIt’s that, if we get better transparency, we are then left looking at stronger evidence that our systems are planning to kill us, but this will not help us because we will not have anything we can do to make the system *not* plan to kill us.\n\n\n\n\n\n[Shah][17:18]\nThe adversarial training case is one example where you are trying to change the system, and if you’d like I can generate more along these lines, but they aren’t going to be that different and are still going to come down to what I expect you will call “playing tricks with loss functions”\n\n\n\n\n\n\n[Yudkowsky][17:18]\nWell, part of the point is that “AIs helping us with alignment” is, from my perspective, a classic case of something that might ambiguate between the version that concretely corresponds to “they are very smart and can give us the Textbook From The Future that we can use to easily 
build a robust superintelligence” (which is powerful, pivotal, unsafe, and kills you) or “they can help us with literature review” (safe, weak, unpivotal) or “we’re going to try clever tricks with gradient descent and loss functions and labeled datasets to get alleged natural-language translations of some of the giant inscrutable matrices” (which was always the plan but which I expected to not be sufficient to avert ruin).\n\n\n\n\n\n[Shah][17:19]\nI’m definitely thinking of the last one, but I take your point that disambiguating between these is good\n\n\nAnd I also think it’s revealing that this is not in fact the crux of disagreement\n\n\n\n\n\n \n\n\n### 20.2. Concrete disaster scenarios, out-of-distribution problems, and corrigibility\n\n\n \n\n\n\n[Yudkowsky][17:20]\n\n> \n> I’ll note that I have the exact same experience of not seeing much concreteness, both of other people and myself, about stories that lead to doom.\n> \n> \n> \n\n\nI have a boundless supply of greater concrete detail for the asking, though if you ask large questions I may ask for a narrower question to avoid needing to supply 10,000 words of concrete detail.\n\n\n\n\n\n[Shah][17:24]\nI guess the main thing is to have an example of a story which includes a method for building a superintelligence (yes, I realize this is info-hazard-y, sorry, an abstract version might work) + how it becomes misaligned and what its plans become optimized for. 
Though as I type this out I realize that I’m likely going to disagree on the feasibility of the method for building a superintelligence?\n\n\n\n\n\n\n[Yudkowsky][17:25]\nI mean, I’m obviously not going to want to make any suggestions that I think could possibly work and which are not very very *very* obvious.\n\n\n\n\n\n[Shah][17:25]\nYup, makes sense\n\n\n\n\n\n\n[Yudkowsky][17:25]\nBut I don’t think that’s much of an issue.\n\n\nI could just point to MuZero, say, and say, “Suppose something a lot like this scaled.”\n\n\nDo I need to explain how you would die in this case?\n\n\n\n\n\n[Shah][17:26]\nWhat sort of domain and what training data?\n\n\nLike, do we release a robot in the real world, have it collect data, build a world model, and run MuZero with a reward for making a number in a bank account go up?\n\n\n\n\n\n\n[Yudkowsky][17:28]\nSupposing they’re naive about it: playing all the videogames, predicting all the text and images, solving randomly generated computer puzzles, accomplishing sets of easily-labelable sensorimotor tasks using robots and webcams\n\n\n\n\n\n[Shah][17:29]\nOkay, so far I’m with you. Is there a separate deployment step, and if so, how did they finetune the agent for the deployment task? 
Or did it just take over the world halfway through training?\n\n\n\n\n\n\n[Yudkowsky][17:29]\n(though this starts to depart from the Mu Zero architecture if it has the ability to absorb knowledge via learning on more purely predictive problems)\n\n\n\n\n\n[Shah][17:30]\n(I’m okay with that, I think)\n\n\n\n\n\n\n[Yudkowsky][17:32]\nvaguely plausible rough scenario: there was a big ongoing debate about whether or not to try letting the system trade stocks, and while the debate was going on, the researchers kept figuring out ways to make Something Zero do more with less computing power, and then it started visibly talking at people and trying to manipulate them, and there was an enormous fuss, and what happens past this point depends on whether or not you want me to try to describe a scenario in which we die with an unrealistic amount of dignity, or a realistic scenario where we die much faster\n\n\nI shall assume the former.\n\n\n\n\n\n[Shah][17:32]\nActually I think I want concreteness earlier\n\n\n\n\n\n\n[Yudkowsky][17:32]\nOkay.  I await your further query.\n\n\n\n\n\n[Shah][17:32]\n\n> \n> it started visibly talking at people and trying to manipulate them\n> \n> \n> \n\n\nWhat caused this?\n\n\nWas it manipulating people in order to make e.g. sensory stuff easier to predict?\n\n\n\n\n\n\n[Yudkowsky][17:36]\nCumulative lifelong learning from playing videogames took its planning abilities over a threshold; cumulative solving of computer games and multimodal real-world tasks took its internal mechanisms for unifying knowledge and making them coherent over a threshold; and it gained sufficient compressive understanding of the data it had implicitly learned by reading through hundreds of terabytes of Common Crawl, not so much the semantic knowledge contained in those pages, but the associated implicit knowledge of the Things That Generate Text (aka humans). 
\n\n\nThese combined to form an imaginative understanding that some of its real-world problems were occurring in interactions with the Things That Generate Text, and it started making plans which took that into account and tried to have effects on the Things That Generate Text in order to affect the further processes of its problems.\n\n\nOr perhaps somebody trained it to write code in partnership with programmers and it already had experience coworking with and manipulating humans.\n\n\n\n\n\n[Shah][17:39]\nChecking understanding: At this point it is able to make novel plans that involve applying knowledge about humans and their role in the data-generating process in order to create a plan that leads to more reward for the real-world problems?\n\n\n(Which we call “manipulating humans”)\n\n\n\n\n\n\n[Yudkowsky][17:40]\nYes, much as it might have gained earlier experience with making novel Starcraft plans that involved “applying knowledge about humans and their role in the data-generating process in order to create a plan that leads to more reward”, if it was trained on playing Starcraft against humans at any point, or even needed to make sense of how other agents had played Starcraft\n\n\nThis in turn can be seen as a direct outgrowth and isomorphism of making novel plans for playing Super Mario Brothers which involve understanding Goombas and their role in the screen-generating process\n\n\nexcept obviously that the Goombas are much less complicated and not themselves agents\n\n\n\n\n\n[Shah][17:41]\nYup, makes sense. Not sure I totally agree that this sort of thing is likely to happen as quickly as it sounds like you believe but I’m happy to roll with it; I do think it will happen eventually\n\n\nSo doesn’t seem particularly cruxy\n\n\nI can see how this leads to existential catastrophe, if you don’t expect the programmers to be worried at this early manipulation warning sign. 
(This is potentially cruxy for p(doom), but doesn’t feel like the main action.)\n\n\n\n\n\n\n[Yudkowsky][17:46]\nOn my mainline, where this is all happening at Deepmind, I do expect at least one person in the company has ever read anything I’ve written.  I am not sure if Demis understands he is looking straight at death, but I am willing to suppose for the sake of discussion that he does understand this – which isn’t ruled out by my actual knowledge – and talk about how we all die from there.\n\n\nThe very brief tl;dr is that they know they’re looking at a warning sign but they cannot ~~fix the warning sign~~ actually fix the real underlying problem that the warning sign is about, and AGI is getting easier for other people to develop too.\n\n\n\n\n\n[Shah][17:46]\nI assume this is primarily about social dynamics + the ability to patch things such that things look fixed?\n\n\nYeah, makes sense\n\n\nI assume the “real underlying problem” is somehow not the fact that the task you were training your AI system to do was not what you actually wanted it to do?\n\n\n\n\n\n\n[Yudkowsky][17:48]\nIt’s about the unavailability of any actual fix and the technology continuing to get easier.  Even if Deepmind understands that surface patches are lethal and understands that the easy ways of hammering down the warning signs are just eliminating the visibility rather than the underlying problems, there is nothing they can do about that except wait for somebody else to destroy the world instead.\n\n\nI do not know of any pivotal task you could possibly train an AI system to do using tons of correctly labeled data.  
This is part of why we’re all dead.\n\n\n\n\n\n[Shah][17:50]\nYeah, I think if I adopted (my understanding of) your beliefs about alignment difficulty, and there wasn’t already a non-racing scheme set in place, seems like we’re in trouble\n\n\n\n\n\n\n[Yudkowsky][17:50]\nLike, “the real underlying problem is the fact that the task you were training your AI system to do was not what you actually wanted it to do” is one way of looking at one of the several problems that are truly fundamental, but this has no remedy that I know of, besides training your AI to do something small enough to be unpivotal.\n\n\n\n\n\n[Shah][17:51][17:52]\nI don’t actually know the response you’d have to “why not just do value alignment?” I can name several guesses\n\n\n\n\n\n\n\n* [Fragility of value](https://intelligence.org/files/ComplexValues.pdf)\n* Not sufficiently concrete\n* Can’t give correct labels for human values\n\n\n\n\n\n\n[Yudkowsky][17:52][17:52]\nTo be concrete, you can’t ask the AGI to build one billion nanosystems, label all the samples that wiped out humanity as bad, and apply gradient descent updates\n\n\n\n\n\n\nIn part, you can’t do that because one billion samples will get you one billion lethal systems, but even if that wasn’t true, you still couldn’t do it.\n\n\n\n\n\n[Shah][17:53]\n\n> \n> even if that wasn’t true, you still couldn’t do it.\n> \n> \n> \n\n\nWhy not? 
[Nearest unblocked strategy](https://arbital.com/p/nearest_unblocked/)?\n\n\n\n\n\n\n[Yudkowsky][17:53]\n…no, because the first supposed output for training generated by the system at superintelligent levels kills everyone and there is nobody left to label the data.\n\n\n\n\n\n[Shah][17:54]\nOh, I thought you were asking me to imagine away that effect with your second sentence\n\n\nIn fact, I still don’t understand what it was supposed to mean\n\n\n(Specifically this one:\n\n\n\n> \n> In part, you can’t do that because one billion samples will get you one billion lethal systems, but even if that wasn’t true, you still couldn’t do it.\n> \n> \n> \n\n\n)\n\n\n\n\n\n\n[Yudkowsky][17:55]\nthere’s a separate problem where you can’t apply reinforcement learning when there’s no good examples, even assuming you live to label them\n\n\nand, of course, yet another form of problem where you can’t tell the difference between good and bad samples\n\n\n\n\n\n[Shah][17:56]\nOkay, makes sense\n\n\nLet me think a bit\n\n\n\n\n\n\n[Yudkowsky][18:00]\nand lest anyone start thinking that was an exhaustive list of fundamental problems, note the absence of, for example, “applying lots of optimization using an outer loss function doesn’t necessarily get you something with a faithful internal cognitive representation of that loss function” aka “natural selection applied a ton of optimization power to humans using a very strict very simple criterion of ‘inclusive genetic fitness’ and got out things with no explicit representation of or desire towards ‘inclusive genetic fitness’ because that’s what happens when you hill-climb and take wins in the order a simple search process through cognitive engines encounters those wins”\n\n\n\n\n\n[Shah][18:02]\n(Agreed that is another major fundamental problem, in the sense of something that could go wrong, as opposed to something that almost certainly goes wrong)\n\n\nI am still curious about the “why not value alignment” question, where to expand, 
it’s something like “let’s get a wide range of situations and train the agent with gradient descent to do what a human would say is the right thing to do”. (We might also call this “imitation”; maybe “value alignment” isn’t the right term, I was thinking of it as trying to align the planning with “human values”.)\n\n\nMy own answer is that we shouldn’t expect this to generalize to nanosystems, but that’s again much more of a “there’s not great reason to expect this to go right, but also not great reason to go wrong either”.\n\n\n(This is a place where I would be particularly interested in concreteness, i.e. what does the AI system do in these cases, and how does that almost-necessarily follow from the way it was trained?)\n\n\n\n\n\n\n[Yudkowsky][18:05]\nwhat’s an example element from the “wide range of situations” and what is the human labeling?\n\n\n(I could make something up and let you object, but it seems maybe faster to ask you to make something up)\n\n\n\n\n\n[Shah][18:09]\nUh, let’s say that the AI system is being trained to act well on the Internet, and it’s shown some tweet / email / message that a user might have seen, and asked to reply to the tweet / email / message. 
User says whether the replies are good or not (perhaps via comparisons, a la [Deep RL from Human Preferences](https://arxiv.org/abs/1706.03741))\n\n\nIf I were not making it up on the spot, it would be more varied than that, but would not include “building nanosystems”\n\n\n\n\n\n\n[Yudkowsky][18:10]\nAnd presumably, in this example, the AI system is not smart enough that exposing humans to text it generates is already a world-wrecking threat if the AI is hostile?\n\n\ni.e., does not just hack the humans\n\n\n\n\n\n[Shah][18:10]\nYeah, let’s assume that for the moment\n\n\n\n\n\n\n[Yudkowsky][18:11]\nso what you want to do is train on ‘weak-safe’ domains where the AI isn’t smart enough to do damage, and the humans can label the data pretty well because the AI isn’t smart enough to fool them\n\n\n\n\n\n[Shah][18:11]\n“want to do” is putting it a bit strongly. This is more like a scenario I can’t prove is unsafe, but do not strongly believe is safe\n\n\n\n\n\n\n[Yudkowsky][18:12]\nbut the domains where the AI can execute a world-saving pivotal act are out-of-distribution for those domains.  *extremely* out-of-distribution.  *fundamentally* out-of-distribution.  the AI’s own thought processes are out-of-distribution for any inscrutable matrices that were learned to influence those thought processes in a corrigible direction.\n\n\nit’s not like trying to generalize experience from playing Super Mario Bros to Metroid.\n\n\n\n\n\n[Shah][18:13]\nDefinitely, but my reaction to this is “okay, no particular reason for it to be safe” — but also not huge reason for it to be unsafe. 
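The comparison-based setup Shah references (“a la Deep RL from Human Preferences”) can be sketched as a Bradley-Terry model over pairs of candidate replies. This is a minimal illustration under assumptions not stated in the conversation: the scalar scores stand in for a learned reward model’s outputs, and the numbers are made up.

```python
import math

# Sketch of comparison-based preference learning: the user compares two
# candidate replies, and a reward model is trained so that
#   P(reply_a preferred) = sigmoid(score_a - score_b)   (a Bradley-Terry model).
# The scalar scores here are stand-ins for a learned model's outputs.

def preference_loss(score_a, score_b, human_prefers_a):
    """Cross-entropy loss on the modeled preference probability."""
    p_a = 1.0 / (1.0 + math.exp(score_b - score_a))
    p_correct = p_a if human_prefers_a else 1.0 - p_a
    return -math.log(p_correct)

# Low loss when the reward model's scores agree with the human's label,
# high loss when they disagree; gradient descent pushes the scores toward
# whatever the labeling user rewards.
print(preference_loss(2.0, 0.0, human_prefers_a=True))   # small: scores agree with label
print(preference_loss(0.0, 2.0, human_prefers_a=True))   # large: scores disagree
```

The policy is then trained against the learned scores, which is why the scheme inherits exactly what the labeling users can and cannot evaluate.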
Like, it would not hugely shock me if what-we-want is sufficiently “natural” that the AI system picks up on the right thing from the ‘weak-safe’ domains alone\n\n\n\n\n\n\n[Yudkowsky][18:14]\nyou have this whole big collection of possible AI-domain tuples that are powerful-dangerous and they have properties that aren’t in *any* of the weak-safe training situations, that are moving along third dimensions where all the weak-safe training examples were flat\n\n\nnow, just because something is out-of-distribution, doesn’t mean that nothing can ever generalize there\n\n\n\n\n\n[Shah][18:15]\nI mean, you correctly would not accept this argument if I said that by training blue-car-driving robots solely on blue cars I am ensuring they would be bad on red-car-driving\n\n\n\n\n\n\n[Yudkowsky][18:15]\nhumans generalize from the savannah to the vacuum\n\n\nso the actual problem is that I expect the optimization to generalize and the corrigibility to fail\n\n\n\n\n\n[Shah][18:15]\n^Right, that\n\n\nI am not clear on why you expect this so strongly\n\n\nMaybe you think generalization is extremely rare and optimization is a special case because of how it is so useful for basically everything?\n\n\n\n\n\n\n[Yudkowsky][18:16]\nno\n\n\ndid you read the section of my dialogue with Richard Ngo where I tried to explain [why corrigibility is anti-natural](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#3_1__The_Brazilian_university_anecdote), or where Nate tried to give the [example](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#4_2__Nate_Soares__summary) of why planning to get a laser from point A to point B without being scattered by fog is the sort of thing that also naturally says to prevent humans from filling the room with fog?\n\n\n\n\n\n[Shah][18:19]\nAh, right, I should have predicted that. 
(Yes, I did read it.)\n\n\n\n\n\n\n[Yudkowsky][18:19]\nor for that matter, am I correct in remembering that these sections existed\n\n\nk\n\n\nso, do you need more concrete details about some part of that?\n\n\na bunch of the reason why I suspect that corrigibility is anti-natural is from trying to work particular problems there in MIRI’s earlier history, and not finding anything that wasn’t contrary to ~~coherence~~ the overlap in the shards of inner optimization that, when ground into existence by the outer optimization loop, coherently mix to form the part of cognition that generalizes to do powerful things; and nobody else finding it either, etc.\n\n\n\n\n\n[Shah][18:22]\nI think I disagreed with that part more directly, in that it seemed like in those sections the corrigibility was assumed to be imposed “from the outside” on top of a system with a goal, rather than having a goal that was corrigible. (I also had a similar reaction to the 2015 [Corrigibility](https://intelligence.org/files/Corrigibility.pdf) paper.)\n\n\nSo, for example, it seems to me like [CIRL](https://arxiv.org/abs/1606.03137) is an example of an objective that can be maximized in which the agent is corrigible-in-a-certain-sense. I agree that due to [updated deference](https://arbital.com/p/updated_deference/) it will eventually stop seeking information from the human / be subject to corrections by the human. 
I don’t see why, at that point, it wouldn’t have just learned to do what the humans actually want it to do.\n\n\n(There are objections like misspecification of the reward prior, or misspecification of the P(behavior | reward), but those feel like different concerns to the ones you’re describing.)\n\n\n\n\n\n\n[Yudkowsky][18:25]\na thing that MIRI tried and failed to do was find a sensible generalization of expected utility which could contain a generalized utility function that would look like an AI that let itself be shut down, without trying to force you to shut it down\n\n\nand various workshop attendees not employed by MIRI, etc\n\n\n\n\n\n[Shah][18:26]\nI do agree that a CIRL agent would not let you shut it down\n\n\nAnd this is something that should maybe give you pause, and be a lot more careful about potential misspecification problems\n\n\n\n\n\n\n[Yudkowsky][18:27]\nif you could give a perfectly specified prior such that the result of updating on lots of observations would be a representation of the utility function that [CEV](https://arbital.com/p/cev/) outputs, and you could perfectly [inner-align](https://arxiv.org/abs/1906.01820) an optimizer to do that thing in a way that scaled to arbitrary levels of cognitive power, then you’d be home free, sure.\n\n\n\n\n\n[Shah][18:28]\nI’m not trying to claim this is a solution. 
I’m more trying to point at a reason why I am not convinced that corrigibility is anti-natural.\n\n\n\n\n\n\n[Yudkowsky][18:28]\nthe reason CIRL doesn’t get off the ground is that there isn’t any known, and isn’t going to be any known, prior over (observation|’true’ utility function) such that an AI which updates on lots of observations ends up with our true desired utility function.\n\n\nif you can do that, the AI *doesn’t need to be corrigible*\n\n\nthat’s why it’s not a counterexample to corrigibility being anti-natural\n\n\nthe AI just boomfs to superintelligence, observes all the things, and does all the goodness\n\n\nit doesn’t listen to you say no and won’t let you shut it down, but by hypothesis this is fine because it got the true utility function yay\n\n\n\n\n\n[Shah][18:31]\nIn the world where it doesn’t immediately start out as a superintelligence, it spends a lot of time trying to figure out what you want, asking you what you prefer it does, making sure to focus on the highest-EV questions, being very careful around any irreversible actions, etc\n\n\n\n\n\n\n[Yudkowsky][18:31]\nand making itself smarter as fast as possible\n\n\n\n\n\n[Shah][18:32]\nYup, that too\n\n\n\n\n\n\n[Yudkowsky][18:32]\nI’d do that stuff too if I was waking up in an alien world\n\n\nand, with all due respect to myself, *I am not corrigible*\n\n\n\n\n\n[Shah][18:33]\nYou’d do that stuff because you’d want to make sure you don’t accidentally get killed by the aliens; a CIRL agent does it because it “wants to help the human”\n\n\n\n\n\n\n[Yudkowsky][18:34]\nno, a CIRL agent does it because it wants to implement the True Utility Function, which it may, early on, suspect to consist of helping\\* humans, and maybe to have some overlap (relative to its currently reachable short-term outcome sets, though these are of vanishingly small relative utility under the True Utility Function) with what some humans desire some of the time\n\n\n(\\*) ‘help’ may not be help\n\n\nseparately it asks 
a lot of questions because the things humans do are evidence about the True Utility Function\n\n\n\n\n\n[Shah][18:35]\nI agree this is also an accurate description of CIRL\n\n\nA more accurate description, even\n\n\nWait why is it vanishingly small relative utility? Is the assumption that the True Utility Function doesn’t care much about humans? Or was there something going on with short vs. long time horizons that I didn’t catch\n\n\n\n\n\n\n[Yudkowsky][18:39]\nin the short term, a weak CIRL tries to grab the hand of a human about to fall off a cliff, because its TUF probably does prefer the human who didn’t fall off the cliff, if it has only exactly those two options, and this is the sort of thing it would learn was probably true about the TUF early on, given the obvious ways of trying to produce a CIRL-ish thing via gradient descent\n\n\nhumans eat healthy in the ancestral environment when ice cream doesn’t exist as an option\n\n\nin the long run, the things the CIRL agent wants do *not* overlap with anything humans find more desirable than paperclips (because there is no known scheme that takes in a bunch of observations, updates a prior, and outputs a utility function whose achievable maximum is galaxies living happily forever after)\n\n\nand plausible TUF schemes are going to notice that grabbing the hand of a current human is a vanishing fraction of all value eventually at stake\n\n\n\n\n\n[Shah][18:42]\nOkay, cool, short vs. 
long time horizons\n\n\nMakes sense\n\n\n\n\n\n\n[Yudkowsky][18:42]\nright, a weak but sufficiently reflective CIRL agent will notice an alignment of short-term interests with humans but deduce misalignment of long-term interests\n\n\nthough I should maybe call it CIRL\\* to denote the extremely probable case that the limit of its updating on observation does not in fact converge to CEV’s output\n\n\n\n\n\n[Soares][18:43]\n(Attempted rephrasing of a point I read Eliezer as making upstream, in hopes that a rephrasing makes it click for Rohin:) \n\n\nCorrigibility isn’t for bug-free CIRL agents with a prior that actually dials in on goodness given enough observation; if you have one of those you can just run it and call it a day. Rather, corrigibility is for surviving your civilization’s inability to do the job right on the first try.\n\n\nCIRL doesn’t have this property; it instead amounts to the assertion “if you are optimizing with respect to a distribution on utility functions that dials in on goodness given enough observation then that gets you just about as much good as optimizing goodness”; this is somewhat tangential to corrigibility.\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky: +1] |\n\n\n\n\n\n\n\n[Yudkowsky][18:44]\nand you should maybe update on how, even though somebody thought CIRL was going to be more corrigible, in fact it made *absolutely zero progress on the real problem*\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\nthe notion of having an uncertain utility function that you update from observation is coherent and doesn’t yield circular preferences, running in circles, incoherent betting, etc.\n\n\nso, of course, it is antithetical in its intrinsic nature to corrigibility\n\n\n\n\n\n[Shah][18:47]\nI guess I am not sure that I agree that this is the purpose of corrigibility-as-I-see-it. 
The point of corrigibility-as-I-see-it is that you don’t have to specify the object-level outcomes that your AI system must produce, and instead you can specify the meta-level processes by which your AI system should come to know what the object-level outcomes to optimize for are\n\n\n(At CHAI we had taken to talking about corrigibility\\_MIRI and corrigibility\\_Paul as completely separate concepts and I have clearly fallen out of that good habit)\n\n\n\n\n\n\n[Yudkowsky][18:48]\nspeaking as the person who invented the concept, asked for name submissions for it, and selected ‘corrigibility’ as the winning submission, that is absolutely not how I intended the word to be used\n\n\nand I think that the thing I was actually trying to talk about is important and I would like to retain a word that talks about it\n\n\n‘corrigibility’ is meant to refer to the sort of putative hypothetical motivational properties that prevent a system from wanting to kill you after you didn’t build it exactly right\n\n\n[low impact](https://arbital.com/p/low_impact/), [mild optimization](https://arbital.com/p/soft_optimizer/), [shutdownability](https://arbital.com/p/shutdown_problem/), [abortable planning](https://arbital.com/p/abortable/), [behaviorism](https://arbital.com/p/behaviorist/), [conservatism](https://arbital.com/p/conservative_concept/), etc.  (note: some of these may be less antinatural than others)\n\n\n\n\n\n[Shah][18:51]\nCool. 
Sorry for the miscommunication, I think we should probably backtrack to here\n\n\n\n> \n> so the actual problem is that I expect the optimization to generalize and the corrigibility to fail\n> \n> \n> \n\n\nand restart.\n\n\nThough possibly I should go to bed, it is quite late here and there was definitely a time at which I would not have confused corrigibility\\_MIRI with corrigibility\\_Paul, and I am a bit worried at my completely having missed that this time\n\n\n\n\n\n\n[Yudkowsky][18:51]\nthe thing you just said, interpreted literally, is what I would call simply “going meta” but my guess is you have a more specific metaness in mind\n\n\n…does Paul use “corrigibility” to mean “going meta”? I don’t think I’ve seen Paul doing that.\n\n\n\n\n\n[Shah][18:54]\nNot exactly “going meta”, no (and I don’t think I exactly mean that either). But I definitely infer a different concept than the one you’re describing here. It is definitely possible that this comes from me misunderstanding Paul; I have done so many times\n\n\n\n\n\n\n[Yudkowsky][18:55]\nThat looks to me like Paul used ‘corrigibility’ around the same way I meant it, if I’m not just reading my own face into those clouds.  maybe you picked up on the exciting metaness of it and thought ‘corrigibility’ was talking about the metaness part? 😛\n\n\nbut I also want to create an affordance for you to go to bed\n\n\nhopefully this last conversation combined with previous dialogues has created any sense of why I worry that corrigibility is anti-natural and hence that “on the first try at doing it, the optimization generalizes from the weak-safe domains to the strong-lethal domains, but the corrigibility doesn’t”\n\n\nso I would then ask you what part of this you were skeptical about\n\n\nas a place to pick up when you come back from the realms of Morpheus\n\n\n\n\n\n[Shah][18:58]\nYup, sounds good. Talk to you tomorrow!\n\n\n\n\n\n \n\n\n21. 
November 7 conversation\n---------------------------\n\n\n \n\n\n### 21.1. Corrigibility, value learning, and pessimism\n\n\n \n\n\n\n[Shah][3:23]\nQuick summary of discussion so far (in which I ascribe views to Eliezer, for the sake of checking understanding, omitting for brevity the parts about how these are facts about my beliefs about Eliezer’s beliefs and not Eliezer’s beliefs themselves):\n\n\n* Some discussion of “how to use non-world-optimizing AIs to help with AI alignment”, which are mostly in the category “clever tricks with gradient descent and loss functions and labeled datasets” rather than “textbook from the future”. Rohin thinks these help significantly (and that “significant help” = “reduced x-risk”). Eliezer thinks that whatever help they provide is not sufficient to cross the line from “we need a miracle” to “we have a plan that has non-trivial probability of success without miracles”. The crux here seems to be alignment difficulty.\n* Some discussion of how doom plays out. I agree with Eliezer that if the AI is catastrophic by default, and we don’t have a technique that stops the AI from being catastrophic by default, and we don’t already have some global coordination scheme in place, then bad things happen. Cruxes seem to be alignment difficulty and the plausibility of a global coordination scheme, of which alignment difficulty seems like the bigger one.\n* On alignment difficulty, an example scenario is “train on human judgments about what the right thing to do is on a variety of weak-safe domains, and hope for generalization to potentially-lethal domains”. Rohin views this as neither confidently safe nor confidently unsafe. 
Eliezer views this as confidently unsafe, because he strongly expects the optimization to generalize while the corrigibility doesn’t, because corrigibility is anti-natural.\n\n\n(Incidentally, “optimization generalizes but corrigibility doesn’t” is an example of the sort of thing I wish were more concrete, if you happen to be able to do that)\n\n\nMy current take on “corrigibility”:\n\n\n* Prior to this discussion, in my head there was corrigibility\\_A and corrigibility\\_B. Corrigibility\\_A, which I associated with MIRI, was about imposing a constraint “from the outside”. Given an AI system, it is a method of modifying that AI system to (say) allow you to shut it down, by performing some sort of operation on its goal. Corrigibility\\_B, which I associated with Paul, was about building an AI system which would have particular nice behaviors like learning about the user’s preferences, accepting corrections about what it should do, etc.\n* After this discussion, I think everyone meant corrigibility\\_B all along. The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with “plans that lase”.\n* While I think people agree on the behaviors of corrigibility, I am not sure they agree on why we want it. Eliezer wants it for surviving failures, but maybe others want it for “dialing in on goodness”. When I think about a “broad basin of corrigibility”, that intuitively seems more compatible with the “dialing in on goodness” framing (but this is an aesthetic judgment that could easily be wrong).\n* I don’t think I meant “going meta”, e.g. I wouldn’t have called indirect normativity an example of corrigibility. I think I was pointing at “dialing in on goodness” vs. “specifying goodness”.\n* I agree CIRL doesn’t help survive failures. 
But if you instead talk about “dialing in on goodness”, CIRL does in fact do this, at least conceptually (and other alternatives don’t).\n* I am somewhat surprised that “how to conceptually dial in on goodness” is not something that seems useful to you. Maybe you think it is useful, but you’re objecting to me calling it corrigibility, or saying we knew how to do it before CIRL?\n\n\n(A lot of the above on corrigibility is new, because the distinction between surviving-failures and dialing-in-on-goodness as different use cases for very similar kinds of behaviors is new to me. Thanks for discussion that led me to making such a distinction.)\n\n\nPossible avenues for future discussion, in the order of my-guess-at-usefulness:\n\n\n1. Discussing anti-naturality of corrigibility. As a starting point: you say that an agent that makes plans but doesn’t execute them is also dangerous, because it is the plan itself that lases, and corrigibility is antithetical to lasing. Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens? (This seems like a strange and unlikely position to me, but I don’t see how to not make this prediction under what I believe to be your beliefs. Maybe you just bite this bullet.)\n2. Discussing why it is very unlikely for the AI system to generalize correctly both on optimization and values-or-goals-that-guide-the-optimization (which seems to be distinct from corrigibility). Or to put it another way, why is “alignment by default according to John Wentworth” doomed to fail? \n3. More checking of where I am failing to pass your ITT\n4. Why is “dialing in on goodness” not a reasonable part of the solution space (to the extent you believe that)?\n5. 
More concreteness on how optimization generalizes but corrigibility doesn’t, in the case where the AI was trained by human judgment on weak-safe domains\n\n\nJust to continue to state it so people don’t misinterpret me: in most of the cases that we’re discussing, my position is *not* that they are safe, but rather that they are not overwhelmingly likely to be unsafe.\n\n\n\n\n\n\n[Ngo][3:41]\nI don’t understand what you mean by dialling in on goodness. Could you explain how CIRL does this better than, say, [reward modelling](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)?\n\n\n\n\n\n\n[Shah][3:49]\nReward modeling does not by default (a) choose relevant questions to ask the user in order to get more information about goodness, (b) act conservatively, especially in the face of irreversible actions, while it is still uncertain about what goodness is, or (c) take actions that are known to be robustly good, while still waiting for future information that clarifies the nuances of goodness\n\n\nYou could certainly do something like Deep RL from Human Preferences, where the preferences are things like “I prefer you ask me relevant questions to get more information about goodness”, in order to get similar behavior. In this case you are transferring desired behaviors from a human to the AI system, whereas in CIRL the behaviors “fall out of” optimization for a specific objective\n\n\nIn Eliezer/Nate terms, the CIRL story shows that dialing in on goodness is compatible with “plans that lase”, whereas reward modeling does not show this\n\n\n\n\n\n\n[Ngo][4:04]\nThe meta-level objective that CIRL is pointing to, what makes that thing deserve the name “goodness”? 
Like, if I just gave an alien CIRL, and I said “this algorithm dials an AI towards a given thing”, and they looked at it without any preconceptions of what the designers *wanted* to do, why wouldn’t they say “huh, it looks like an algorithm for dialling in on some extrapolation of the unintended consequences of people’s behaviour” or something like that?\n\n\nSee also this part of my second discussion with Eliezer, where he brings up CIRL: [] He was emphasising that CIRL, and most other proposals for alignment algorithms, just shuffle the problematic consequentialism from the original place to a less visible place. I didn’t engage much with this argument because I mostly agree with it.\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky: +1] |\n\n\n\n\n\n\n\n[Shah][5:28]\nI think you are misunderstanding my point. I am not claiming that we know how to implement CIRL such that it produces good outcomes; I agree this depends a ton on having a sufficiently good P(obs | reward). Similarly, if you gave CIRL to aliens, whether or not they say it is about getting some extrapolation of unintended consequences depends on exactly what P(obs | reward) you ended up using. There is some not-too-complicated P(obs | reward) such that you do end up getting to “goodness”, or something sufficiently close that it is not an existential catastrophe; I do not claim we know what it is.\n\n\nI am claiming that behaviors like (a), (b) and (c) above are compatible with expected utility theory, and thus compatible with “plans that lase”. This is demonstrated by CIRL. It is not demonstrated by reward modeling, see e.g. [these](https://jan.leike.name/publications/Towards%20Interactive%20Inverse%20Reinforcement%20Learning%20-%20Armstrong,%20Leike%202016.pdf) [three](https://arxiv.org/abs/2004.13654) [papers](https://www.tomeveritt.se/papers/alignment.pdf) for problems that arise (which make it so that it is working at cross purposes with itself and seems incompatible with “plans that lase”). 
(I’m most confident in the first supporting my point, it’s been a long time since I read them so I might be wrong about the others.) To my knowledge, similar problems don’t arise with CIRL (and they shouldn’t, because it is a nice integrated Bayesian agent doing expected utility theory).\n\n\nI could imagine an objection that P(obs | reward), while not as complicated as “the utility function that rationalizes a twitching robot”, is still too complicated to really show compatibility with plans-that-lase, but pointing out that P(obs | reward) could be misspecified doesn’t seem particularly relevant to whether behaviors (a), (b) and (c) are compatible with plans-that-lase.\n\n\nRe: shuffling around the problematic consequentialism: it is not my main plan to avoid consequentialism in the sense of plans-that-lase. I broadly agree with Eliezer that you need consequentialism to do high-impact stuff. My plan is for the consequentialism to be aimed at good ends. So I agree that there is still consequentialism in CIRL, and I don’t see this as a damning point; when I talk about “dialing in to goodness”, I am thinking of aiming the consequentialism at goodness, not getting rid of consequentialism.\n\n\n(You can still do things like try to be domain-specific rather than domain-general; I don’t mean to completely exclude such approaches. They do seem to give additional safety. But the mainline story is that the consequentialism / optimization is directed at what we want rather than something else.)\n\n\n\n\n\n\n[Ngo][6:21]\nIf you don’t know how to implement CIRL in such a way that it actually aims at goodness, then you don’t have an algorithm with properties a, b and c above.\n\n\nOr, to put it another way: suppose I replace the word “goodness” with “winningness”. 
Now I can describe AlphaStar as follows:\n\n\n* it chooses relevant questions to ask (read: scouts to send) in order to get more information about winningness\n* it acts conservatively while it is still uncertain about what winningness is\n* it takes actions that are known to be robustly ~~good~~ winningish, while still waiting for future information that clarifies the nuances of winningness\n\n\nNow, you might say that the difference is that CIRL implements uncertainty over possible utility functions, not possible empirical beliefs. But this is just a semantic difference which shuffles the problem around without changing anything substantial. E.g. it’s exactly equivalent if we think of CIRL as an agent with a fixed (known) utility function, which just has uncertainty about some empirical parameter related to the humans it interacts with.\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky: +1] |\n\n\n\n\n\n\n\n[Soares][6:55]\n\n> \n> […] it takes actions that are known to be robustly good, while still waiting for future information that clarifies the nuances of winningness\n> \n> \n> \n\n\n(typo: “known to be robustly good” -> “known to be robustly winningish” :-p)\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\nSome quick reactions, some from me and some from my model of Eliezer:\n\n\n\n> \n> Eliezer thinks that whatever help they provide is not sufficient […] The crux here seems to be alignment difficulty.\n> \n> \n> \n\n\nI’d be more hesitant to declare the crux “alignment difficulty”. My understanding of Eliezer’s position on your “use AI to help with alignment” proposals (which focus on things like using AI to make paradigmatic AI systems more transparent) is “that was always the plan, and it doesn’t address the sort of problems I’m worried about”. Maybe you understand the problems Eliezer’s worried about, and believe them not to be very difficult to overcome, thus putting the crux somewhere like “alignment difficulty”, but I’m not convinced. 
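(The equivalence Ngo invokes above, and the behaviors Shah labels (a) and (b), can be made concrete with a toy numeric sketch. All names and numbers below are hypothetical illustrations, not anything from the dialogue: two candidate "true" reward functions indexed by a latent human parameter, a uniform prior, and plain expected-value maximization over them.)

```python
# Toy setup (hypothetical): two candidate reward functions the agent is
# uncertain over, indexed by a latent human parameter theta.
rewards = {
    "theta0": {"act_A": 1.0, "act_B": 0.0, "act_irreversible": 2.0},
    "theta1": {"act_A": 0.0, "act_B": 1.0, "act_irreversible": -10.0},
}
prior = {"theta0": 0.5, "theta1": 0.5}

def expected_utility(action, belief):
    """View (i): uncertainty over which utility function is the true one."""
    return sum(p * rewards[theta][action] for theta, p in belief.items())

def fixed_meta_utility(action, theta):
    """View (ii): one fixed, known utility function U(action, theta) of an
    empirical parameter theta about the human -- the same numbers, re-indexed."""
    return rewards[theta][action]

def expected_meta_utility(action, belief):
    """Expected utility under uncertainty about theta only."""
    return sum(p * fixed_meta_utility(action, theta) for theta, p in belief.items())

# The two views assign identical expected utility to every action,
# which is the "just a semantic difference" point.
for a in ("act_A", "act_B", "act_irreversible"):
    assert expected_utility(a, prior) == expected_meta_utility(a, prior)

# Behaviors (a)/(b) fall out of ordinary EV-maximization: under the prior, the
# irreversible action has EV 0.5*2 + 0.5*(-10) = -4, so the agent avoids it,
# and resolving uncertainty first (e.g. by querying the human) has positive EV.
evs = {a: expected_utility(a, prior) for a in rewards["theta0"]}
best_now = max(evs, key=evs.get)  # a safe act with EV 0.5, not the irreversible one
ev_after_query = sum(prior[t] * max(rewards[t].values()) for t in prior)
value_of_information = ev_after_query - evs[best_now]  # 1.5 - 0.5 = 1.0 > 0
```

Nothing in this sketch depends on which indexing you choose, which is the point of the equivalence: the "uncertainty over utility functions" framing and the "fixed utility function over an uncertain human parameter" framing yield the same preferences over actions.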
\n\n\nI’d update towards your crux-hypothesis if you provided a good-according-to-Eliezer summary of what other problems Eliezer sees and the reasons-according-to-Eliezer that “AI make our tensors more transparent” doesn’t much address them.\n\n\n\n> \n> Corrigibility\\_A […] Corrigibility\\_B […]\n> \n> \n> \n\n\nOf the two Corrigibility\\_B does sound a little closer to my concept, though neither of your descriptions cause me to be confident that communication has occurred. Throwing some checksums out there:\n\n\n* There are three reasons a young weak AI system might accept your corrections. It could be corrigible, or it could be incorrigibly pursuing goodness, or it could be incorrigibly pursuing some other goal while calculating that accepting this correction is better according to its current goals than risking a shutdown.\n* One way you can tell that CIRL is not corrigible is that it does not accept corrections when old and strong.\n* There’s an intuitive notion of “you’re here to help us implement a messy and fragile concept not yet clearly known to us; work with us here?” that makes sense to humans, that includes as a side effect things like “don’t scan my brain and then disregard my objections; there could be flaws in how you’re inferring my preferences from my objections; it’s actually quite important that you be cautious and accept brain surgery even in cases where your updated model says we’re about to make a big mistake according to our own preferences”.\n\n\n\n> \n> The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with “plans that lase”.\n> \n> \n> \n\n\nMore like:\n\n\n* Corrigibility seems, at least on the surface, to be in tension with the simple and useful patterns of optimization that tend to be spotlit by 
demands for cross-domain success, similar to how acting like two oranges are worth one apple and one apple is worth one orange is in tension with those patterns.\n* In practice, this tension seems to run more than surface-deep. In particular, various attempts to reconcile the tension fail, and cause the AI to have undesirable preferences (eg, incentives to convince you to shut it down whenever its utility is suboptimal), exploitably bad beliefs (eg, willingness to bet at unreasonable odds that it won’t be shut down), and/or to not be corrigible in the first place (eg, a preference for destructively uploading your mind against your protests, at which point further protests from your coworkers are screened off by its access to that upload).\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky: ✅] |\n\n\n\n(There’s an argument I occasionally see floating around these parts that goes “ok, well what if the AI is *fractally* corrigible, in the sense that instead of its cognition being oriented around pursuit of some goal, its cognition is oriented around doing what it predicts a human would do (or what a human would want it to do) in a corrigible way, at every level and step of its cognition”. This is perhaps where you perceive a gap between your A-type and B-type notions, where MIRI folk tend to be more interested in reconciling the tension between corrigibility and coherence, and Paulian folk tend to place more of their chips on some such fractal notion? 
\n\n\nI admit I don’t find much hope in the “fractally corrigible” view myself, and I’m not sure whether I could pass a proponent’s ITT, but fwiw my model of the Yudkowskian rejoinder is “mindspace is deep and wide; that could plausibly be done if you had sufficient mastery of minds; you’re not going to get anywhere near close to that in practice, because of the way that basic normal everyday cross-domain training will highlight patterns that you’d call orienting-cognition-around-a-goal”.)\n\n\nAnd my super-quick takes on your avenues for future discussion:\n\n\n\n> \n> 1. Discussing anti-naturality of corrigibility.\n> \n> \n> \n\n\nHopefully the above helps.\n\n\n\n> \n> 2. Discussing why it is very unlikely for the AI system to generalize correctly both on optimization and values-or-goals-that-guide-the-optimization\n> \n> \n> \n\n\nThe concept “patterns of thought that are useful for cross-domain success” is latent in the problems the AI faces, and known to have various simple mathematical shadows, and our training is more-or-less banging the AI over the head with it day in and day out. By contrast, the specific values we wish to be pursued are not latent in the problems, are known to *lack* a simple boundary, and our training is much further removed from it.\n\n\n\n> \n> 3. More checking of where I am failing to pass your ITT\n> \n> \n> \n\n\n+1\n\n\n\n> \n> 4. Why is “dialing in on goodness” not a reasonable part of the solution space?\n> \n> \n> \n\n\nIt has long been the plan to say something less like “the following list comprises goodness: …” and more like “yo we’re tryin to optimize some difficult-to-name concept; help us out?”. “Find a prior that, with observation of the human operators, dials in on goodness” is a fine guess at how to formalize the latter. \n\n\nIf we had been planning to take the former tack, and you had come in suggesting CIRL, that might have helped us switch to the latter tack, which would have been cool. 
In that sense, it’s a fine part of the solution. \n\n\nIt also provides some additional formality, which is another iota of potential solution-ness, for that part of the problem. \n\n\nIt doesn’t much address the rest of the problem, which is centered much more around “how do you point powerful cognition in any direction at all” (such as towards your chosen utility function or prior thereover).\n\n\n\n> \n> 5. More concreteness on how optimization generalizes but corrigibility doesn’t, in the case where the AI was trained by human judgment on weak-safe domains\n> \n> \n> \n\n\n+1\n\n\n\n\n\n\n[Shah][13:23]\n\n> \n> If you don’t know how to implement CIRL in such a way that it actually aims at goodness, then you don’t have an algorithm with properties a, b and c above.\n> \n> \n> \n\n\nI want clarity on the premise here:\n\n\n* Is the premise “Rohin cannot write code that when run exhibits properties a, b, and c”? If so, I totally agree, but I’m not sure what the point is. All alignment work ever until the very last step will not lead you to writing code that when run exhibits an aligned superintelligence, but this does not mean that the prior alignment work was useless.\n* Is the premise “there does not exist code that (1) we would call an implementation of CIRL and (2) when run has properties a, b, and c”? If so, I think your premise is false, for the reasons given previously (I can repeat them if needed)\n\n\nI imagine it is neither of the above, and you are trying to make a claim that some conclusion that I am drawing from or about CIRL is invalid, because in order for me to draw that conclusion, I need to exhibit the correct P(obs | reward). 
If so, I want to know which conclusion is invalid and why I have to exhibit the correct P(obs | reward) before I can reach that conclusion.\n\n\nI agree that the fact that you can get properties (a), (b) and (c) is a simple, straightforward consequence of being Bayesian about a quantity you are uncertain about and care about, as with AlphaStar and “winningness”. I don’t know what you intend to imply by this — because it also applies to other Bayesian things, it can’t imply anything about alignment? I also agree the uncertainty over reward is equivalent to uncertainty over some parameter of the human (and have proved this theorem myself in the paper I wrote on the topic). I do not claim that anything in here is particularly non-obvious or clever, in case anyone thought I was making that claim.\n\n\nTo state it again, my claim is that behaviors like (a), (b) and (c) are consistent with “plans-that-lase”, and as evidence for this claim I cite the *existence* of an expected-utility-maximizing algorithm that displays them, specifically CIRL with the correct P(obs | reward). I do *not* claim that I can write down the code, I am just claiming that it *exists*. If you agree with the claim but not the evidence then let’s just drop the point. If you disagree with the claim then tell me why it’s false. 
If you are unsure about the claim then point to the step in the argument you think doesn’t work.\n\n\nThe reason I care about this claim is that *even* if you think that superintelligences only involve plans-that-lase, it seems to me that this does *not* rule out what we might call “dialing in to goodness” or “assisting the user”, and thus it seems like this is a valid target for you to try to get your superintelligence to do.\n\n\nI suspect that I do not agree with Eliezer about what plans-that-lase can do, but it seems like the two of us should at least agree that behaviors like (a), (b) and (c) can be exhibited in plans-that-lase, and if we don’t agree on that some sort of miscommunication has happened.\n\n\n \n\n\n\n> \n> Throwing some checksums out there\n> \n> \n> \n\n\nThe checksums definitely make sense. (Technically I could name more reasons why a young AI might accept correction, such as “it’s still sphexish in some areas, accepting corrections is one of those areas”, and for the third reason the AI could be calculating negative consequences for things other than shutdown, but that seems nitpicky and I don’t think it means I have misunderstood you.) \n\n\nI think the third one feels somewhat slippery and vague, in that I don’t know exactly what it’s claiming, but it clearly seems to be the same sort of thing as corrigibility. Mostly it’s more like I wouldn’t be surprised if the Textbook from the Future tells us that we mostly had the right concept of corrigibility, but that third checksum is not quite how they would describe it any more. 
I would be a lot more surprised if the Textbook says we mostly had the right concept but then says checksums 1 and 2 were misguided.\n\n\n\n> \n> “The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with ‘plans that lase’.”\n> \n> \n> More like:\n> \n> \n> * Corrigibility seems, at least on the surface, to be in tension with the simple and useful patterns of optimization that tend to be spotlit by demands for cross-domain success, similar to how acting like two oranges are worth one apple and one apple is worth one orange is in tension with those patterns.\n> * In practice, this tension seems to run more than surface-deep. In particular, various attempts to reconcile the tension fail, and cause the AI to have undesirable preferences (eg, incentives to convince you to shut it down whenever its utility is suboptimal), exploitably bad beliefs (eg, willingness to bet at unreasonable odds that it won’t be shut down), and/or to not be corrigible in the first place (eg, a preference for destructively uploading your mind against your protests, at which point further protests from your coworkers are screened off by its access to that upload).\n> \n> \n> \n\n\nOn the 2015 Corrigibility paper, is this an accurate summary: “it wasn’t that we were checking whether corrigibility could be compatible with useful patterns of optimization; it was already obvious at least at a surface level that corrigibility was in tension with these patterns, and we wanted to check and/or show that this tension persisted more deeply and couldn’t be easily fixed”.\n\n\n(My other main hypothesis is that there’s an important distinction between “simple and useful patterns of optimization” (term in your message) and “plans that lase” (term in my message) but if
so I don’t know what it is.)\n\n\n\n\n\n\n[Soares][13:52]\nWhat we *wanted* to do was show that the apparent tension was merely superficial. We failed.\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\n(Also, IIRC — and it’s been a long time since I checked — the 2015 paper contains only one exploration, relating to an idea of Stuart Armstrong’s. There were another host of ideas raised and shot down in that era, that didn’t make it into that paper, pro’lly b/c they came afterwards.)\n\n\n\n\n\n\n[Shah][13:55]\n\n> \n> What we *wanted* to do was show that the apparent tension was merely superficial. We failed.\n> \n> \n> \n\n\n(That sounds like what I originally said? I’m a bit confused why you didn’t just agree with my original phrasing:\n\n\n\n> \n> The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with “plans that lase”.\n> \n> \n> \n\n\n)\n\n\n(I’m kinda worried that there’s some big distinction between “EU maximization”, “plans that lase”, and “simple and useful patterns of optimization”, that I’m not getting; I’m treating them as roughly equivalent at the moment when putting on my MIRI-ontology-hat.)\n\n\n\n\n\n\n[Soares][14:01]\n(There are a bunch of aspects of your phrasing that indicated to me a different framing, and one I find quite foreign. For instance, this talk of “building a version of corrigibility\\_B” strikes me as foreign, and the talk of “making it compatible with ‘plans that lase'” strikes me as foreign. It’s plausible to me that you, who understand your original framing, can tell that my rephrasing matches your original intent. 
I do not yet feel like I could emit the description you emitted without contorting my thoughts about corrigibility in foreign ways, and I’m not sure whether that’s an indication that there are distinctions, important to me, that I haven’t communicated.)\n\n\n\n> \n> (I’m kinda worried that there’s some big distinction between “EU maximization”, “plans that lase”, and “simple and useful patterns of optimization”, that I’m not getting; I’m treating them as roughly equivalent at the moment when putting on my MIRI-ontology-hat.)\n> \n> \n> \n\n\nI, too, believe them to be basically equivalent (with the caveat that the reason for using expanded phrasings is because people have a history of misunderstanding “utility maximization” and “coherence”, and so insofar as you round them all to “coherence” and then argue against some very narrow interpretation of coherence, I’m gonna protest that you’re bailey-and-motting).\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\n\n\n\n\n[Shah][14:12]\n\n> \n> Hopefully the above helps.\n> \n> \n> \n\n\nI’m still interested in the question “Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens?” I don’t currently understand how you avoid making this prediction given other stated beliefs. 
(Maybe you just bite the bullet and do predict this?)\n\n\n\n> \n> By contrast, the specific values we wish to be pursued are not latent in the problems, are known to lack a simple boundary, and our training is much further removed from it.\n> \n> \n> \n\n\nI’m not totally sure what is meant by “simple boundary”, but it seems like a lot of human values are latent in text prediction on the Internet, and when training from human feedback the training is not very removed from values.\n\n\n\n> \n> It has long been the plan to say something less like “the following list comprises goodness: …” and more like “yo we’re tryin to optimize some difficult-to-name concept; help us out?”. […]\n> \n> \n> \n\n\nI take this to mean that “dialing in on goodness” is a reasonable part of the solution space? If so, I retract that question. I thought from previous comments that Eliezer thought this part of solution space was more doomed than corrigibility.\n\n\n(I get the sense that people think that I am butthurt about CIRL not getting enough recognition or something. I do in fact think this, but it’s not part of my agenda here. I originally brought it up to make the argument that corrigibility is not in tension with EU maximization, then realized that I was mistaken about what “corrigibility” meant, but still care about the argument that “dialing in on goodness” is not in tension with EU maximization. But if we agree on that claim then I’m happy to stop talking about CIRL.)\n\n\n\n\n\n\n[Soares][14:13]\nI’d be *capable* of helping aliens optimize their world, sure. I wouldn’t be motivated to, but I’d be capable.\n\n\n\n\n\n\n[Shah][14:14]\n\n> \n> (There are a bunch of aspects of your phrasing that indicated to me a different framing, and one I find quite foreign. For instance, this talk of “building a version of corrigibility\\_B” strikes me as foreign, and the talk of “making it compatible with ‘plans that lase'” strikes me as foreign. 
It’s plausible to me that you, who understand your original framing, can tell that my rephrasing matches your original intent. I do not yet feel like I could emit the description you emitted without contorting my thoughts about corrigibility in foreign ways, and I’m not sure whether that’s an indication that there are distinctions, important to me, that I haven’t communicated.)\n> \n> \n> \n\n\nThis makes sense. I guess you might think of these concepts as quite pinned down? Like, in your head, EU maximization is just a kind of behavior (= set of behaviors), corrigibility is just another kind of behavior (= set of behaviors), and there’s a straightforward yes-or-no question about whether the intersection is empty which you set out to answer, you can’t “make” it come out one way or the other, nor can you “build” a new kind of corrigibility\n\n\n\n\n\n\n[Soares][14:17]\nRe: CIRL, my current working hypothesis is that by “use CIRL” you mean something analogous to what I say when I say “do CEV” — namely, direct the AI to figure out what we “really” want in some correct sense, rather than attempting to specify what we want concretely. And to be clear, on my model, this *is* part of the solution to the overall alignment problem, and it’s more-or-less why we wouldn’t die immediately on the “value is fragile / we can’t name exactly what we want” step if we solved the other problems.\n\n\nMy guess as to the disagreement about how much credit CIRL should get, is that there is in fact a disagreement, but it’s not coming from MIRI folk saying “no we should be specifying the actual utility function by hand”, it’s coming from MIRI folk saying “this is just the advice ‘do CEV’ dressed up in different clothing and presented as a reason to stop worrying about corrigibility, which is irritating, given that it’s orthogonal to corrigibility”.\n\n\nIf you wanna fight that fight, I’d start by asking: Do you think CIRL is doing anything above and beyond what “use CEV” is doing? 
If so, what?\n\n\nRegardless, I think it might be a good idea for you to try to pass my (or Eliezer’s) ITT about what parts of the problem remain beyond the thing I’d call “do CEV” and why they’re hard. (Not least b/c if my working hypothesis is wrong, demonstrating your mastery of that subject might prevent a bunch of toil covering ground you already know.)\n\n\n\n\n\n\n[Shah][14:17]\n\n> \n> I’d be *capable* of helping aliens optimize their world, sure. I wouldn’t be motivated to, but I’d be capable.\n> \n> \n> \n\n\nOkay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I’m not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.)\n\n\n\n\n\n\n[Soares][14:19]\n\n> \n> This makes sense. I guess you might think of these concepts as quite pinned down? Like, in your head, EU maximization is just a kind of behavior (= set of behaviors), corrigibility is just another kind of behavior (= set of behaviors), and there’s a straightforward yes-or-no question about whether the intersection is empty which you set out to answer, you can’t “make” it come out one way or the other, nor can you “build” a new kind of corrigibility\n> \n> \n> \n\n\nThat sounds like one of the big directions in which your framing felt off to me, yeah :-). (I don’t fully endorse that rephrasing, but it seems directionally correct to me.)\n\n\n\n> \n> Okay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I’m not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. 
(This is a scenario Richard mentioned earlier.)\n> \n> \n> \n\n\nOn my model, aiming the powerful optimizer is the hard bit.\n\n\nLike, once I grant “there’s a powerful optimizer, and all it does is produce plans to corrigibly attain a given goal”, I agree that the problem is mostly solved.\n\n\nThere’s maybe some cleanup, but the bulk of the alignment challenge preceded that point.\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\n(This is hard for all the usual reasons, that I suppose I could retread.)\n\n\n\n\n\n\n[Shah][14:24]\n\n> \n> […] Regardless, I think it might be a good idea for you to try to pass my (or Eliezer’s) ITT about what parts of the problem remain beyond the thing I’d call “do CEV” and why they’re hard. (Not least b/c if my working hypothesis is wrong, demonstrating your mastery of that subject might prevent a bunch of toil covering ground you already know.)\n> \n> \n> \n\n\n(Working on ITT)\n\n\n\n\n\n\n[Soares][14:30]\n(To clarify some points of mine, in case this gets published later to other readers: (1) I might call it more centrally something like “build a [DWIM system](https://arbital.com/p/dwim/)” rather than “use CEV”; and (2) this is not advice about what your civilization should do with early AGI systems, I strongly recommend against trying to pull off CEV under that kind of pressure.)\n\n\n\n\n\n\n[Shah][14:32]\nI don’t particularly want to have fights about credit. I just didn’t want to falsely state that I do not care about how much credit CIRL gets, when attempting to head off further comments that seemed designed to appease my sense of not-enough-credit. (I’m also not particularly annoyed at MIRI, here.)\n\n\nOn passing ITT, about what’s left beyond “use CEV” (stated in my ontology because it’s faster to type; I think you’ll understand, but I can also translate if you think that’s important):\n\n\n* The main thing is simply how to actually get the AI system to care about pursuing CEV. 
I think MIRI ontology would call this the target loading problem.\n* This is hard because (a) you can’t just train on CEV, because you can’t just implement CEV and provide that as training and (b) even if you magically could train on CEV, that does not establish that the resulting AI system then wants to optimize CEV. It could just as well optimize some other objective that correlated with CEV in the situations you trained, but no longer correlates in some new situation (like when you are building a nanosystem). (Point (b) is how I would talk about inner alignment.)\n* This is made harder for a variety of reasons, including (a) you’re working with inscrutable matrices that you can’t look at the details of, (b) there are clear racing incentives when the prize is to take over the world (or even just lots of economic profit), (c) people are unlikely to understand the issues at stake (unclear to me of the exact reasons, I’d guess it would be that the issues are too subtle / conceptual, + pressure to rationalize it away), (d) there’s very little time in which we have a good understanding of the situation we face, because of fast / discontinuous takeoff\n\n\n\n\n\n| |\n| --- |\n| [Soares: 👍] |\n\n\n\n\n\n\n\n[Soares][14:37]\nPassable ^\\_^ (Not exhaustive, obviously; “it will have a tendency to kill you on the first real try if you get it wrong” being an example missing piece, but I doubt you were trying to be exhaustive.) Thanks.\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\n\n> \n> Okay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I’m not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.)\n> \n> \n> \n\n\nI’m uncertain where the disconnect is here. 
Like, I could repeat some things from past discussions about how “it only outputs plans, it doesn’t execute them” does very little (not nothing, but very little) from my perspective? Or you could try to point at past things you’d expect me to repeat and name why they don’t seem to apply to you?\n\n\n\n\n\n\n[Shah][14:40]\n(Flagging that I should go to bed soon, though it doesn’t have to be right away)\n\n\n\n\n\n\n[Yudkowsky][14:50]\n…I do not know if this is going to help anything, but I have a feeling that there’s a frequent disconnect wherein I invented an idea, considered it, found it necessary-but-not-sufficient, and moved on to looking for additional or varying solutions, and then a decade or in this case 2 decades later, somebody comes along and sees this brilliant solution which MIRI is for some reason neglecting\n\n\nthis is perhaps exacerbated by a deliberate decision during the early days, when I looked very weird and the field was much more allergic to weird, to not even try to stamp my name on all the things I invented.  eg, I told Nick Bostrom to please use various of my ideas as he found appropriate and only credit them if he thought that was strategically wise.\n\n\nI expect that some number of people now in the field don’t know I invented corrigibility, and any number of other things that I’m a little more hesitant to claim here because I didn’t leave Facebook trails for inventing them\n\n\nand unless you had been around for quite a while, you definitely wouldn’t know that I had been (so far as I know) the first person to perform the unexceptional-to-me feat of writing down, in 2001, the very obvious idea I called “external reference semantics”, or as it’s called nowadays, CIRL\n\n\n\n\n\n[Shah][14:53]\nI really honestly am not trying to say that MIRI didn’t think of CIRL-like things, nor am I trying to get credit for CIRL. I really just wanted to establish that “learn what is good to do” seems not-ruled-out by EU maximization. That’s all. 
It sounds like we agree on this point and if so I’d prefer to drop it.\n\n\n\n\n\n| |\n| --- |\n| [Soares: ❤] |\n\n\n\n\n\n\n\n[Yudkowsky][14:53]\nHaving a prior over utility functions that gets updated by evidence is not ruled out by EU maximization.  That exact thing is hard for other reasons than it being contrary to the nature of EU maximization.\n\n\nIf it was ruled out by EU maximization for any simple reason, I would have noticed that back in 2001.\n\n\n\n\n\n[Ngo][14:54]\nI think we all agree on this point.\n\n\n\n\n\n| | |\n| --- | --- |\n| [Shah: 👍] | [Soares: 👍] |\n\n\n\nOne thing I’d note is that during my debate with Eliezer, I’d keep saying “oh so you think X is impossible” and he’d say “no, all these things are *possible*, they’re just really really hard”.\n\n\n\n\n\n\n[Yudkowsky][14:58]\n…to do correctly on your first try when a failed attempt kills you.\n\n\n\n\n\n[Shah][14:58]\nMaybe it’s fine; perhaps the point is just that target loading is hard, and the question is why target loading is so hard.\n\n\nFrom my perspective, the main confusing thing about the Eliezer/Nate view is how *confident* it is. With each individual piece, I (usually) find myself nodding along and saying “yes, it seems like if we wanted to guarantee safety, we would need to solve this”. What I don’t do is say “yes, it seems like without a solution to this, we’re near-certainly dead”. The uncharitable view (which I share mainly to emphasize where the disconnect is, not because I think it is true) would be something like “Eliezer/Nate are falling to a Murphy bias, where they assume that unless they have an ironclad positive argument for safety, the worst possible thing will happen and we all die”. 
I try to generate things that seem more like ironclad (or at least “leatherclad”) positive arguments for doom, and mostly don’t succeed; when I say “human values are very complicated” there’s the rejoinder that “a superintelligence will certainly know about human values; pointing at them shouldn’t take that many more bits”; when I say “this is ultimately just praying for generalization”, there’s the rejoinder “but it may in fact actually generalize”; add to all of this the fact that a bunch of people will be trying to prevent the problem and it seems weird to be so confident in doom.\n\n\nA lot of my questions are going to be of the form “it seems like this is a way that we could survive; it definitely involves luck and does not say good things about our civilization, but it does not seem as improbable as the word ‘miracle’ would imply”\n\n\n\n\n\n\n[Yudkowsky][15:00]\nheh.  from my standpoint, I’d say of this that it reflects those old experiments where if you ask people for their “expected case” it’s indistinguishable from their “best case” (since both of these involve visualizing various things going on their imaginative mainline, which is to say, as planned) and reality is usually worse than their “worst case” (because they didn’t adjust far enough away from their best-case anchor towards the statistical distribution for actual reality when they were trying to imagine a few failures and disappointments of the sort that reality had previously delivered)\n\n\nit rhymes with the observation that it’s incredibly hard to find people – even inside the field of computer security – who really have what Bruce Schneier termed the security mindset, of asking how to break a cryptography scheme, instead of imagining how your cryptography scheme could succeed\n\n\nfrom my perspective, people are just living in a fantasy reality which, if we were actually living in it, would not be full of failed software projects or rocket prototypes that blow up even after you try quite 
hard to get a system design about which you made a strong prediction that it wouldn’t explode\n\n\nthey think something special has to go wrong with a rocket design, that you must have committed some grave unusual sin against rocketry, for the rocket to explode\n\n\nas opposed to every rocket wanting really strongly to explode and needing to constrain every aspect of the system to make it not explode and then the first 4 times you launch it, it blows up anyways\n\n\nwhy? because of some particular technical issue with O-rings, with the flexibility of rubber in cold weather?\n\n\n\n\n\n[Shah][15:05]\n(I have read your Rocket Alignment and security mindset posts. Not claiming this absolves me of bias, just saying that I am familiar with them)\n\n\n\n\n\n\n[Yudkowsky][15:05]\nno, because the strains and temperatures in rockets are large compared to the materials that we use to make up the rockets\n\n\nthe fact that sometimes people are wrong in their uncertain guesses about rocketry does not make their life easier in this regard\n\n\nthe less they understand, the less ability they have to force an outcome within reality\n\n\nit’s no coincidence that when you are Wrong about your rocket, the particular form of Being Wrong that reality delivers to you as a surprise message, is not that you underestimated the strength of steel and so your rocket went to orbit and came back with fewer scratches on the hull than expected\n\n\nwhen you are working with powerful forces there is not a symmetry around pleasant and unpleasant surprises being equally likely relative to your first-order model.  
if you’re a good Bayesian, they will be equally likely relative to your second-order model, but this requires you to be HELLA pessimistic, indeed, SO PESSIMISTIC that sometimes you are pleasantly surprised\n\n\nwhich looks like such a bizarre thing to a mundane human that they will gather around and remark at the case of you being pleasantly surprised\n\n\nthey will not be used to seeing this\n\n\nand they shall say to themselves, “haha, what pessimists”\n\n\nbecause to be unpleasantly surprised is so ordinary that they do not bother to gather and gossip about it when it happens\n\n\nmy fundamental sense about the other parties in this debate, underneath all the technical particulars, is that they’ve constructed a Murphy-free fantasy world from the same fabric that weaves crazy optimistic software project estimates and brilliant cryptographic codes whose inventors didn’t quite try to break them, and are waiting to go through that very common human process of trying out their optimistic idea, letting reality gently correct them, predictably becoming older and wiser and starting to see the true scope of the problem, and so in due time becoming one of those Pessimists who tell the youngsters how ha ha of course things are not that easy\n\n\nthis is how the cycle usually goes\n\n\nthe problem is that instead of somebody’s first startup failing and them then becoming much more pessimistic about lots of things they thought were easy and then doing their second startup\n\n\nthe part where they go ahead optimistically and learn the hard way about things in their chosen field which aren’t as easy as they hoped\n\n\n\n\n\n[Shah][15:13]\nDo you want to bet on that? That seems like a testable prediction about beliefs of real people in the not-too-distant future\n\n\n\n\n\n\n[Yudkowsky][15:13]\nkills everyone\n\n\nnot just them\n\n\neveryone\n\n\nthis is an issue\n\n\nhow on Earth would we bet on that if you think the bet hasn’t already resolved? 
I’m describing the attitudes of people that I see right now today.\n\n\n\n\n\n[Shah][15:15]\nNever mind, I wanted to bet on “people becoming more pessimistic as they try ideas and see them fail”, but if your idea of “see them fail” is “superintelligence kills everyone” then obviously we can’t bet on that\n\n\n(people here being alignment researchers, obviously ones who are not me)\n\n\n\n\n\n\n[Yudkowsky][15:17]\nthere is some element here of the Bayesian not updating in a predictable direction, of executing today the update you know you’ll make later, of saying, “ah yes, I can see that I am in the same sort of situation as the early AI pioneers who thought maybe it would take a summer and actually it was several decades because Things Were Not As Easy As They Imagined, so instead of waiting for reality to correct me, I will imagine myself having already lived through that and go ahead and be more pessimistic right now, not just a little more pessimistic, but so incredibly pessimistic that I am *as* likely to be pleasantly surprised as unpleasantly surprised by each successive observation, which is even more pessimism than even some sad old veterans manage”, an element of genre-savviness, an element of knowing the advice that somebody would predictably be shouting at you from outside, of not just blindly enacting the plot you were handed\n\n\nand I don’t quite know *why* this is so much less common than I would have naively thought it would be\n\n\nwhy people are content with enacting the predictable plot where they start out cheerful today and get some hard lessons and become pessimistic later\n\n\nthey are their own scriptwriters, and they write scripts for themselves about going into the haunted house and then splitting up the party\n\n\nI would not have thought that to defy the plot was such a difficult thing for an actual human being to do\n\n\nthat it would require so much reflectivity or something, I don’t know what else\n\n\nnor do I know how to train other 
people to do it if they are not doing it already\n\n\nbut that from my perspective is the basic difference in gloominess\n\n\nI am a time-traveler who came back from the world where it (super duper predictably) turned out that a lot of early bright hopes didn’t pan out and various things went WRONG and alignment was HARD and it was NOT SOLVED IN ONE SUMMER BY TEN SMART RESEARCHERS\n\n\nand now I am trying to warn people about this development which was, from a certain perspective, really quite obvious and not at all difficult to see coming\n\n\nbut people are like, “what the heck are you doing, you are enacting the wrong part of the plot, people are currently supposed to be cheerful, you can’t prove that anything will go wrong, why would I turn into a grizzled veteran before the part of the plot where reality hits me over the head with the awful real scope of the problem and shows me that my early bright ideas were way too optimistic and naive”\n\n\nand I’m like “no you don’t get it, where I come from, *everybody died* and didn’t turn into grizzled veterans”\n\n\nand they’re like “but that’s not what the script says we do next”… or something, I do not know what leads people to think like this because I do not think like that myself\n\n\n\n\n\n[Soares][15:24]\n(I think what they actually do is say “it’s not obvious to me that this is one of those scenarios where we become grizzled veterans, as opposed to things just actually working out easily”)\n\n\n(“many things work out easily all the time; obviously society spends a bunch more focus on things that don’t work out easily b/c the things that work easily tend to get resolved fairly quickly and then you don’t notice them”, or something)\n\n\n(more generally, I kinda suspect that bickering closer to the object level is likely more productive)\n\n\n(and i suspect this convo might be aided by Rohin naming a concrete scenario where things go well, so that Eliezer can lament the lack of genre-savviness in various specific
points)\n\n\n\n\n\n\n[Yudkowsky][15:26]\nthere are, of course, lots of more local technical issues where I can specifically predict the failure mode for somebody’s bright-eyed naive idea, especially when I already invented a more sophisticated version a decade or two earlier, and this is what I’ve usually tried to discuss\n\n\n\n\n\n| |\n| --- |\n| [Soares: ❤] |\n\n\n\nbecause conversations like that can sometimes make any progress\n\n\n\n\n\n[Soares][15:26]\n(and possibly also Eliezer naming a concrete story where things go poorly, so that Rohin may lament the seemingly blind pessimism & premature grizzledness)\n\n\n\n\n\n\n[Yudkowsky][15:27]\nwhereas if somebody lacks the ability to see the warning signs of which genre they are in, I do not know how to change the way they are by talking at them\n\n\n\n\n\n[Shah][15:28]\nUnsurprisingly I have disagreements with the meta-level story, but it seems really thorny to make progress on and I’m kinda inclined to not discuss it. I also should go to sleep now.\n\n\nOne thing it did make me think of — it’s possible that the “do it correctly on your first try when a failed attempt kills you” could be the crux here. There’s a clearly-true sense which is “the first time you build a superintelligence that you cannot control, if you have failed in your alignment, then you die”. There’s a different sense which is “and also, anything you try to do with non-superintelligences that you can control, will tell you approximately nothing about the situation you face when you build a superintelligence”. 
I mostly don’t agree with the second sense, but if Eliezer / Nate do agree with it, that would go a long way to explaining the confidence in doom.\n\n\nTwo arguments I can see for the second sense: (1) the non-superintelligences only seem to respond well to alignment schemes because they don’t yet have the core of general intelligence, and (2) the non-superintelligences only seem to respond well to alignment schemes because despite being misaligned they are doing what we want in order to survive and later execute a treacherous turn. EDIT: And (3) fast takeoff = not much time to look at the closest non-dangerous examples\n\n\n(I still should sleep, but would be interested in seeing thoughts tomorrow, and if enough people think it’s actually worthwhile to engage on the meta level I can do that. I’m cheerful about engaging on specific object-level ideas.)\n\n\n\n\n\n| |\n| --- |\n| [Soares: 💤] |\n\n\n\n\n\n\n\n[Yudkowsky][15:28]\nit’s not that early failures tell you nothing\n\n\nthe failure of the 1955 Dartmouth Project to produce strong AI over a summer told those researchers something\n\n\nit told them the problem was harder than they’d hoped on the first shot\n\n\nit didn’t show them the correct way to build AGI in 1957 instead\n\n\n\n\n\n[Bensinger][16:41]\nLinking to a chat log between Eliezer and some anonymous people (and Steve Omohundro) from early September: []\n\n\nEliezer tells me he thinks it pokes at some of Rohin’s questions\n\n\n\n\n\n\n[Yudkowsky][16:48]\nI’m not sure that I can successfully, at this point, go back up and usefully reply to the text that scrolled past – I also note some internal grinding about this having turned into a thing which has Pending Replies instead of Scheduled Work Hours – and this maybe means that in the future we shouldn’t have such a general chat here, which I didn’t anticipate before the fact.  
I shall nonetheless try to pick out some things and reply to them.\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\n\n> \n> * While I think people agree on the behaviors of corrigibility, I am not sure they agree on why we want it. Eliezer wants it for surviving failures, but maybe others want it for “dialing in on goodness”. When I think about a “broad basin of corrigibility”, that intuitively seems more compatible with the “dialing in on goodness” framing (but this is an aesthetic judgment that could easily be wrong).\n> \n> \n> \n\n\nThis is a weird thing to say in my own ontology.\n\n\nThere’s a general project of AGI alignment where you try to do some useful pivotal thing, which has to be powerful enough to be pivotal, and so you somehow need a system that thinks powerful thoughts in the right direction without it killing you.\n\n\nThis could include, for example:\n\n\n* Trying to train in “low impact” via an RL loss function that penalizes a sufficiently broad range of “impacts” that we hope the learned impact penalty generalizes to all the things we’d consider impacts – even as we scale up the system, without the sort of obvious pathologies that would materialize only over options available to sufficiently powerful systems, like sending out nanosystems to erase the visibility of its actions from human observers\n* Tweaking MCTS search code so that it behaves in the fashion of “mild optimization” or “[taskishness](https://arbital.com/p/task_goal/)” instead of searching as hard as it has power available to search\n* Exposing the system to lots of labeled examples of relatively simple and safe instructions being obeyed, hoping that it generalizes safe instruction-following to regimes too dangerous for us to inspect outputs and label results\n* Writing code that tries to recognize cases of activation vectors going outside the bounds they occupied during training, as a check on whether internal cognitive conservatism is being violated or something is seeking out 
adversarial counterexamples to a constraint\n\n\nYou could say that only parts 1 and 3 are “dialing in on goodness” because only those parts involve iteratively refining a target, or you could say that all 4 parts are “dialing in on goodness” because parts 2 and 4 help you stay alive while you’re doing the iterative refining.  But I don’t see this distinction as fundamental or particularly helpful.  What if, on part 4, you were training something to recognize out-of-bounds activations, instead of trying to hardcode it?  Is that dialing in on goodness?  Or is it just dialing in on survivability or corrigibility or whatnot?  Or maybe even part 3 isn’t really “dialing in on goodness” because the true distinction between Good and Evil is still external in the programmers and not inside the system?\n\n\nI don’t see this as an especially useful distinction to draw.  There’s a hardcoded/learned distinction that probably does matter in several places.  There’s a maybe-useful forest-level distinction between “actually doing the pivotal thing” and “not destroying the world as a side effect” which breaks down around the trees because the very definition of “that pivotal thing you want to do” is to do *that thing* and *not* to destroy the world.\n\n\nAnd all of this is a class of shallow ideas that I can generate in great quantity.  
I now and then consider writing up the ideas like this, just to make clear that I’ve already thought of way more shallow ideas like this than the net public output of the entire rest of the alignment field, so it’s not that my concerns of survivability stem from my having missed any of the obvious shallow ideas like that.\n\n\nThe reason I don’t spend a lot of time talking about it is not that I haven’t thought of it, it’s that I’ve thought of it, explored it for a while, and decided not to write it up because I don’t think it can save the world and the infinite well of shallow ideas seems more like a distraction from the level of miracle we would actually need.\n\n\n–\n\n\n\n> \n> As a starting point: you say that an agent that makes plans but doesn’t execute them is also dangerous, because it is the plan itself that lases, and corrigibility is antithetical to lasing. Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens? (This seems like a strange and unlikely position to me, but I don’t see how to not make this prediction under what I believe to be your beliefs. Maybe you just bite this bullet.)\n> \n> \n> \n\n\nI ‘could’ corrigibly help the [Babyeaters](https://www.lesswrong.com/s/qWoFR4ytMpQ5vw3FT) in the sense that I have a notion of what it would mean to corrigibly help them, and if I wanted to do that thing for some reason, like an outside super-universal entity offering to pay me a googolplex flops of eudaimonium if I did that one thing, then I could do that thing.  Absent the superuniversal entity bribing me, I wouldn’t *want* to behave corrigibly towards the Babyeaters.  \n\n\nThis is not a defect of myself as an individual.  
The Superhappies would also be able to understand what it would be like to be corrigible; they wouldn’t *want* to behave corrigibly towards the Babyeaters, because, like myself, they don’t want exactly what the Babyeaters want.  In particular, we would rather the universe be other than it is with respect to the Babyeaters eating babies.\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\n\n\n \n\n\n22. Follow-ups\n--------------\n\n\n \n\n\n\n[Shah][0:33]\n\n> \n> […] Absent the superuniversal entity bribing me, I wouldn’t *want* to behave corrigibly towards the Babyeaters. […]\n> \n> \n> \n\n\nGot it. Yeah I think I just misunderstood a point you were saying previously. When Richard asked about systems that simply produce plans rather than execute them, you said something like “the plan itself is dangerous”, which I now realize meant “you don’t get additional safety from getting to read the plan, the superintelligence would have just chosen a plan that was convincing to you but nonetheless killed everyone / otherwise worked in favor of the superintelligence’s goals”, but at the time I interpreted it as “any reasonable plan that can actually build nanosystems is going to be dangerous, regardless of the source”, which seemed obviously false in the case of a well-motivated system.\n\n\n\n> \n> […] This is a weird thing to say in my own ontology. […]\n> \n> \n> \n\n\nWhen I say “dialing in on goodness”, I mean a specific class of strategies for getting a superintelligence to do a useful pivotal thing, in which you build it so that the superintelligence is applying its force towards figuring out what it is that you actually want it to do and pursuing that, which among other things would involve taking a pivotal act to reduce x-risk to ~zero.\n\n\nI previously had the mistaken impression that you thought this class of strategies was probably doomed because it was incompatible with expected utility theory, which seemed wrong to me. 
(I don’t remember why I had this belief; possibly it was while I was still misunderstanding what you meant by “corrigibility” + the claim that corrigibility is anti-natural.)\n\n\nI now think that you think it is probably doomed for the same reason that most other technical strategies are probably doomed, which is that there still doesn’t seem to be any plausible way of loading in the right target to the superintelligence, even when that target is a process for learning-what-to-optimize, rather than just what-to-optimize.\n\n\n\n> \n> Linking to a chat log between Eliezer and some anonymous people (and Steve Omohundro) from early September: []\n> \n> \n> Eliezer tells me he thinks it pokes at some of Rohin’s questions\n> \n> \n> \n\n\nI’m surprised that you think this addresses (or even pokes at) my questions. As far as I can tell, most of the questions there are about social dynamics, which I’ve been explicitly avoiding, and the “technical” questions seem to treat “AGI” or “superintelligence” as a symbol; there don’t seem to be any internal gears underlying that symbol. The closest anyone got to internal gears was mentioning iterated amplification as a way of bootstrapping known-safe things to solving hard problems, and that was very brief.\n\n\nI am much more into the question “how difficult is technical alignment”. It seems like answers to this question need to be in one of two categories: (1) claims about the space of minds that lead to intelligent behavior (probably weighted by simplicity, to account for the fact that we’ll get the simple ones first), (2) claims about specific methods of building superintelligences. As far as I can tell the only thing in that doc which is close to an argument of this form is “superintelligent consequentialists would find ways to manipulate humans”, which seems straightforwardly true (when they are misaligned). 
I suppose one might also count the assertion that “the speedup step of iterated amplification will introduce errors” as an argument of this form.\n\n\nIt could be that you are trying to convince me of some other beliefs that I wasn’t asking about, perhaps in the hopes of conveying some missing mood, but I suspect that it is just that you aren’t particularly clear on what my beliefs are / what I’m interested in. (Not unreasonable, given that I’ve been poking at your models, rather than the other way around.) I could try saying more about that, if you’d like.\n\n\n\n\n\n\n[Tallinn][11:39]\nFWIW, a voice from the audience: +1 to going back to sketching concrete scenarios. even though i learned a few things from the abstract discussion of goodness/corrigibility/etc myself (eg, that “corrigible” was meant to be defined at the limit of self-improvement till maturity, not just as a label for code that does not resist iterated development), the progress felt more tangible during the “scaled up muzero” discussion above.\n\n\n\n\n\n\n[Yudkowsky][15:03]\nanybody want to give me a prompt for a concrete question/scenario, ideally a concrete such prompt but I’ll take whatever?\n\n\n\n\n\n[Soares][15:34]\nNot sure I count, but one I’d enjoy a concrete response to: “The leading AI lab vaguely thinks it’s important that their systems are ‘mere predictors’, and wind up creating an AGI that is dangerous; how concretely does it wind up being a scary planning optimizer or whatever, that doesn’t run through a scary abstract “waking up” step”.\n\n\n(asking for a friend; @Joe Carlsmith or whoever else finds this scenario unintuitive plz clarify with more detailed requests if interested)\n\n\n\n\n\n \n\n\n23. November 13 conversation\n----------------------------\n\n\n \n\n\n### 23.1. GPT-*n* and goal-oriented aspects of human reasoning\n\n\n \n\n\n\n[Shah][1:46]\nI’m still interested in:\n\n\n\n> \n> 5. 
More concreteness on how optimization generalizes but corrigibility doesn’t, in the case where the AI was trained by human judgment on weak-safe domains\n> \n> \n> \n\n\nSpecifically, we can go back to the scaled-up MuZero example. Some (lightly edited) details we had established there:\n\n\n\n> \n> Pretraining: playing all the videogames, predicting all the text and images, solving randomly generated computer puzzles, accomplishing sets of easily-labelable sensorimotor tasks using robots and webcams\n> \n> \n> Finetuning: The AI system is being trained to act well on the Internet, and it’s shown some tweet / email / message that a user might have seen, and asked to reply to the tweet / email / message. User says whether the replies are good or not (perhaps via comparisons, a la Deep RL from Human Preferences). It would be more varied than that, but would not include “building nanosystems”.\n> \n> \n> The AI system is not smart enough that exposing humans to text it generates is already a world-wrecking threat if the AI is hostile.\n> \n> \n> \n\n\nAt that point we moved from concrete to abstract:\n\n\n\n> \n> Abstract description: train on ‘weak-safe’ domains where the AI isn’t smart enough to do damage, and the humans can label the data pretty well because the AI isn’t smart enough to fool them\n> \n> \n> Abstract problem: Optimization generalizes and corrigibility fails\n> \n> \n> \n\n\nI would be interested in a more concrete description here. I’m not sure exactly what details I’m looking for — on my ontology the question is something like “what algorithm is the AI system forced to learn; how does that lead to generalized optimization and failed corrigibility; why weren’t there simple safer algorithms that were compatible with the training, or if there were such algorithms why didn’t the AI system learn them”. 
I don’t really see how to answer all of that without abstraction, but perhaps you’ll have an answer anyway\n\n\n(I am hoping to get some concrete detail on “how did it go from non-hostile to hostile”, though I suppose you might confidently predict that it was already hostile after pretraining, conditional on it being an AGI at all. I can try devising a different concrete scenario if that’s a blocker.)\n\n\n\n\n\n\n[Yudkowsky][11:09]\n\n> \n> I am hoping to get some concrete detail on “how did it go from non-hostile to hostile”\n> \n> \n> \n\n\nMu Zero is intrinsically dangerous for reasons essentially isomorphic to the way that AIXI is intrinsically dangerous: It tries to remove humans from its environment when playing Reality for the same reasons it stomps a Goomba if it learns how to play Super Mario Bros 1, because it has some goal and the Goomba is in the way.  It doesn’t need to learn anything more to be that way, except for learning what a Goomba/human is within the current environment. 
\n\n\nThe question is more “What kind of patches might it learn for a weak environment if optimized by some hill-climbing optimization method and loss function not to stomp Goombas there, and how would those patches fail to generalize to not stomping humans?”\n\n\nAgree or disagree so far?\n\n\n\n\n\n[Shah][12:07]\nAgree assuming that it is pursuing a misaligned goal, but I am also asking what misaligned goal it is pursuing (and depending on the answer, maybe also how it came to be pursuing that misaligned goal given the specified training setup).\n\n\nIn fact I think “what misaligned goal is it pursuing” is probably the more central question for me\n\n\n\n\n\n\n[Yudkowsky][12:14]\nwell, obvious abstract guess is: something whose non-maximal “optimum” (that is, where the optimization ended up, given about how powerful the optimization was) coincided okayish with the higher regions of the fitness landscape (lower regions of the loss landscape) that could be reached at all, relative to its ancestral environment\n\n\nI feel like it would be pretty hard to blindly guess, in advance, at my level of intelligence, without having seen any precedents, what the hell a Human would look like, as a derivation of “inclusive genetic fitness”\n\n\n\n\n\n[Shah][12:15]\nYeah I agree with that in the abstract, but have had trouble giving compelling-to-me concrete examples\n\n\nYeah I also agree with that\n\n\n\n\n\n\n[Yudkowsky][12:15]\nI could try to make up some weird false specifics if that helps?\n\n\n\n\n\n[Shah][12:16]\nTo be clear I am fine with “this is a case where we predictably can’t have good concrete stories and this does not mean we are safe” (and indeed argued the same thing in a doc I linked here many messages ago)\n\n\nBut weird false specifics could still be interesting\n\n\nAlthough let me think if it is actually valuable\n\n\nProbably it is not going to change my mind very much on alignment difficulty, if it is “weird false specifics”, so maybe this isn’t the most 
productive line of discussion. I’d be “selfishly” interested in that “weird false specifics” seems good for me to generate novel thoughts about these sorts of scenarios, but that seems like a bad use of this Discord\n\n\nI think given the premises that (1) superintelligence is coming soon, (2) it pursues a misaligned goal by default, and (3) we currently have no technical way of preventing this and no realistic-seeming avenues for generating such methods, I am very pessimistic. I think (2) and (3) are the parts that I don’t believe and am interested in digging into, but perhaps “concrete stories” doesn’t really work for this.\n\n\n\n\n\n\n[Yudkowsky][12:26]\nwith any luck – though I’m not sure I actually expect that much luck – this would be something Redwood Research could tell us about, if they can [learn a nonviolence predicate](https://www.lesswrong.com/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project) over GPT-3 outputs and then manage to successfully mutate the distribution enough that we can get to see what was actually inside the predicate instead of “nonviolence”\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\nor, like, 10% of what was actually inside it\n\n\nor enough that people have some specifics to work with when it comes to understanding how gradient descent learning a function over outcomes from human feedback relative to a distribution, doesn’t just learn the actual function the human is using to generate the feedback (though, if this were learned exactly, it would still be fatal given superintelligence)\n\n\n\n\n\n[Shah][12:33]\nIn this framing I do buy that you don’t learn exactly the function that generates the feedback — I have ~5 contrived specific examples where this is the case (i.e. 
you learn something that wasn’t what the feedback function would have rewarded in a different distribution)\n\n\n(I’m now thinking about what I actually want to say about this framing)\n\n\nActually, maybe I do think you might end up learning the function that generates the feedback. Not literally exactly, if for no other reason than rounding errors, but well enough that the inaccuracies don’t matter much. The AGI presumably already knows and understands the concepts we use based on its pretraining, is it really so shocking if gradient descent hooks up those concepts in the right way? (GPT-3 on the other hand doesn’t already know and understand the relevant concepts, so I wouldn’t predict this of GPT-3.) I do feel though like this isn’t really getting at my reason for (relative) optimism, and that reason is much more like “I don’t really buy that AGI must be very coherent in a way that would prevent corrigibility from working” (which we could discuss if desired)\n\n\nOn the comment that learning the exact feedback function is still fatal — I am unclear on why you are so pessimistic about having “human + AI” supervise “AI”, in order to have the supervisor be smarter than the thing being supervised. (I think) I understand the pessimism that the learned function won’t generalize correctly, but if you imagine that magically working, I’m not clear what additional reason prevents the “human + AI” supervising “AI” setup from working.\n\n\n* I can see how you die if the AI ever becomes misaligned, i.e. there isn’t a way to fix mistakes, but I don’t see how you get the misaligned AI in the first place.\n* I could also see things like “Just like a student can get away with plagiarism even when the teacher is smarter than the student, the AI knows more about its cognition than the human + AI system, and so will likely be incentivized to do bad things that it knows are bad but the human + AI system doesn’t know are bad”. But that sort of thing seems solvable with future research, e.g. 
debate, interpretability, red teaming all seem like feasible approaches.\n\n\n\n\n\n\n[Yudkowsky][13:06]\nwhat’s a “human + AI”? can you give me a more concrete version of that scenario, either one where you expect it to work, or where you yourself have labeled the first point you expect it to fail and you want to know whether I see an earlier failure than that?\n\n\n\n\n\n[Shah][13:09]\nOne concrete training algorithm would be debate, ideally with mechanisms that allow the AI systems to “look into each other’s thoughts” and make credible statements about them, but we can skip that for now as it isn’t very concrete\n\n\nWould you like a training domain and data as well?\n\n\nI don’t like the fact that a smart AI system in this position could notice that it is playing against itself and decide not to participate in a zero-sum game, but I am not sure if that worry actually makes sense or not\n\n\n(Debate can be thought of as simultaneously “human + first AI evaluate second AI” and “human + second AI evaluate first AI”)\n\n\n\n\n\n\n[Yudkowsky][13:12]\nfurther concreteness, please! what pivotal act is it training for? what are the debate contents about?\n\n\n\n\n\n[Shah][13:16]\nYou start with “easy” debates like mathematical theorem proving or fact-based questions, and ramp up until eventually the questions are roughly “what is the next thing to do in order to execute a pivotal act”\n\n\nIntermediate questions might be things like “is it a good idea to have a minimum wage”\n\n\n\n\n\n\n[Yudkowsky][13:17]\nso, like, “email ATTTTGAGCTTGCC… to the following address, mix the proteins you receive by FedEx in a water-saline solution at 2 degrees Celsius…” for the final stage?\n\n\n\n\n\n[Shah][13:17]\nYup, that could be it\n\n\nHumans are judging debates based on reasoning though, not just outcomes-after-executing-the-plan\n\n\n\n\n\n\n[Yudkowsky][13:19]\nokay.  let’s suppose you manage to prevent both AGIs from using logical decision theory to coordinate with each other.  
both AIs tell their humans that the other AI’s plans are murderous.  now what?\n\n\n\n\n\n[Shah][13:19]\nSo assuming perfect generalization there should be some large implicit debate tree that justifies the plan in human-understandable form\n\n\n\n\n\n\n[Yudkowsky][13:20]\nyah, I flatly disbelieve that entire development scheme, so we should maybe back up.\n\n\npeople fiddled around with GPT-4 derivatives and never did get them to engage in lines of printed reasoning that would design interesting new stuff.  now what?\n\n\nLiving Zero (a more architecturally complicated successor of Mu Zero) is getting better at designing complicated things over on its side while that’s going on, whatever it is\n\n\n\n\n\n[Shah][13:23]\nOkay, so the worry is that this just won’t scale, not that (assuming perfect generalization) it is unsafe? Or perhaps you also think it is unsafe but it’s hard to engage with because you don’t believe it will scale?\n\n\nAnd the issue is that relying on reasoning confines you to a space of possible thoughts that doesn’t include the kinds of thoughts required to develop new stuff (e.g. intuition)?\n\n\n\n\n\n\n[Yudkowsky][13:25]\nmostly I have found these alleged strategies to be too permanently abstract, never concretized, to count as admissible hypotheses.  if you ask me to concretize them myself, I think that unelaborated giant transformer stacks trained on massive online text corpuses fail to learn smart-human-level engineering reasoning before the world ends.  If that were not true, I would expect Paul-style schemes to blow up on the distillation step, but first failures first.\n\n\n\n\n\n[Shah][13:26]\nWhat additional concrete detail do you want?\n\n\nIt feels like I specified something that we could code up a stupidly inefficient version of now\n\n\n\n\n\n\n[Yudkowsky][13:27]\nGreat.  
Describe the stupidly inefficient version?\n\n\n\n\n\n[Shah][13:33]\nIn terms of what actually happens: Each episode, there is an initial question specified by the human. Agent A and agent B, which are copies of the same neural net, simultaneously produce statements (“answers”). They then have a conversation. At the end the human judge decides which answer is better, and rewards the appropriate agent. The agents are updated using some RL algorithm.\n\n\nI can say stuff about why we might hope this works, or about tricks you have to play in order to get learning to happen at all, or other things\n\n\n\n\n\n\n[Yudkowsky][13:35]\nAre the agents also playing Starcraft or have they spent their whole lives inside the world of text?\n\n\n\n\n\n[Shah][13:35]\nFor the stupidly inefficient version they could have spent their whole lives inside text\n\n\n\n\n\n\n[Yudkowsky][13:37]\nOkay.  I don’t think the pure-text versions of GPT-5 are being very good at designing nanosystems while Living Zero is ending the world.\n\n\n\n\n\n[Shah][13:37]\nIn the stupidly inefficient version human feedback has to teach the agents facts about the real world\n\n\n\n\n\n\n[Yudkowsky][13:37]\n(It’s called “Living Zero” because it does lifelong learning, in the backstory I’ve been trying to separately sketch out in a draft.)\n\n\n\n\n\n[Shah][13:38]\nOh I definitely agree this is not competitive\n\n\nSo when you say this is too abstract, you mean that there isn’t a story for how they incorporate e.g. 
physical real-world knowledge?\n\n\n\n\n\n\n[Yudkowsky][13:39]\nno, I mean that when I talk to Paul about this, I can’t get Paul to say anything as concrete as the stuff you’ve already said\n\n\nthe reason why I don’t expect the GPT-5s to be competitive with Living Zero is that gradient descent on feedforward transformer layers, in order to learn science by competing to generate text that humans like, would have to pick up on some very deep latent patterns generating that text, and I don’t think there’s an incremental pathway there for gradient descent to follow – if gradient descent even follows incremental pathways as opposed to finding [lottery tickets](https://www.lesswrong.com/tag/lottery-ticket-hypothesis), but that’s a whole separate open question of artificial neuroscience.\n\n\nin other words, humans play around with legos, and hominids play around with chipping flint handaxes, and mammals play around with spatial reasoning, and that’s part of the incremental pathway to developing deep patterns for causal investigation and engineering, which then get projected into human text and picked up by humans reading text\n\n\nit’s just straightforwardly not clear to me that GPT-5 pretrained on human text corpuses, and then further posttrained by RL on human judgment of text outputs, ever runs across the deep patterns\n\n\nwhere relatively small architectural changes might make the system no longer just a giant stack of transformers, even if that resulting system is named “GPT-5”, and in this case, bets might be off, but also in this case, things will go wrong with it that go wrong with Living Zero, because it’s now learning the more powerful and dangerous kind of work\n\n\n\n\n\n[Shah][13:45]\nThat does seem like a disagreement, in that I think this process does eventually reach the “deep patterns”, but I do agree it is unlikely to be competitive\n\n\n\n\n\n\n[Yudkowsky][13:45]\nI mean, if you take a feedforward stack of transformer layers the size of a galaxy 
and train it via gradient descent using all the available energy in the reachable universe, it might find something, sure\n\n\nthough this is by no means certain to be the case\n\n\n\n\n\n[Shah][13:50]\nIt would be quite surprising to me if it took that much. It would be *especially* surprising to me if we couldn’t figure out some alternative reasonably-simple training scheme like “imitate a human doing good reasoning” that still remained entirely in text that could reach the “deep patterns”. (This is now no longer a discussion about whether the training scheme is aligned, not sure if we should continue it.)\n\n\nI realize that this might be hard to do, but if you imagine that GPT-5 + human feedback finetuning does run across the deep patterns and could in theory do the right stuff, and also generalization magically works, what’s the next failure?\n\n\n\n\n\n\n[Yudkowsky][13:56]\nwhat sort of deep thing does a hill-climber run across in the layers, such that the deep thing is the most predictive thing it found for human text about science?\n\n\nif you don’t visualize this deep thing in any detail, then it can in one moment be powerful, and in another moment be safe.  it can have all the properties that you want simultaneously.  who’s to say otherwise? the mysterious deep thing has no form within your mind.\n\n\nif one were to name specifically “well, it ran across a little superintelligence with long-term goals that it realized it could achieve by predicting well in all the cases that an outer gradient descent loop would probably be updating on”, that sure doesn’t end well for you.\n\n\nthis perhaps is *not* the first thing that gradient descent runs across.  it wasn’t the first thing that natural selection ran across to build things that ran the savannah and made more of themselves.  
but what deep pattern that is *not* pleasantly and unfrighteningly formless would gradient descent run across instead?\n\n\n\n\n\n[Shah][14:00]\n(Tbc by “human feedback finetuning” I mean debate, and I suspect that “generalization magically works” will be meant to rule out the thing that you say next, but seems worth checking so let me write an answer)\n\n\n\n> \n> the deep thing is the most predictive thing it found for human text about science?\n> \n> \n> \n\n\nWait, the most predictive thing? I was imagining it as just a thing that is present in addition to all the other things. Like, I don’t think I’ve learned a “deep thing” that is most useful for riding a bike. Probably I’m just misunderstanding what you mean here.\n\n\nI don’t think I can give a good answer here, but to give some answer, it has a belief that there is a universe “out there”, that lots but not all of the text it reads is making claims about (some aspect of) the universe, those claims can be true or false, there are some claims that are known to be true, there are some ways to take assumed-true claims and generate new assumed-true claims, which includes claims about optimal actions for goals, as well as claims about how to build stuff, or what the effect of a specified machine is\n\n\n\n\n\n\n[Yudkowsky][14:10]\nhell of a lot of stuff for gradient descent to run across in a stack of transformer layers.  clearly the lottery-ticket hypothesis must have been very incorrect, and there was an incremental trail of successively more complicated gears that got trained into the system.\n\n\nbtw by “claims” are you meaning to make the jump to English claims? 
I was reading them as giant inscrutable vectors encoding meaningful propositions, but maybe you meant something else there.\n\n\n\n\n\n[Shah][14:11]\nIn fact I am skeptical of some strong versions of the lottery ticket hypothesis, though it’s been a while since I read the paper and I don’t remember exactly what the original hypothesis was\n\n\nGiant inscrutable vectors encoding meaningful propositions\n\n\n\n\n\n\n[Yudkowsky][14:13]\noh, I’m not particularly confident of the lottery-ticket hypothesis either, though I sure do find it grimly amusing that a species which hasn’t already figured *that* out one way or another thinks it’s going to have deep transparency into neural nets all wrapped up in time to survive.  but, separate issue.\n\n\n“How does gradient descent even work?” “Lol nobody knows, it just does.”\n\n\nbut, separate issue\n\n\n\n\n\n[Shah][14:16]\nHow does strong lottery ticket hypothesis explain GPT-3? Seems like that should already be enough to determine that there’s an incremental trail of successively more complicated gears\n\n\n\n\n\n\n[Yudkowsky][14:18]\ncould just be that in 175B parameters, combinatorially combined through possible execution pathways, there is some stuff that was pretty close to doing all the stuff that GPT-3 ended up doing.\n\n\nanyways, for a human to come up with human text about science, the human has to brood and think for a bit about different possible hypotheses that could account for the data, notice places where those hypotheses break down, tweak the hypotheses in their mind to make the errors go away; they would engineer an internal mental construct towards the engineering goal of making good predictions.  
if you’re looking at orbital mechanics and haven’t invented calculus yet, you invent calculus as a persistent mental tool that you can use to craft those internal mental constructs.\n\n\ndoes the formless deep pattern of GPT-5 accomplish the same ends, by some mysterious means that is, formless, able to produce the same result, but not by any detailed means where if you visualized them you would be able to see how it was unsafe?\n\n\n\n\n\n[Shah][14:24]\nI expect that probably we will figure out some way to have adaptive computation time be a thing (it’s been investigated for years now, but afaik hasn’t worked very well), which will allow for this sort of thing to happen\n\n\nIn the stupidly inefficient version, you have a really really giant and deep neural net that does all of that in successive layers of the neural net. (And when it doesn’t need to do that, those layers are noops.)\n\n\n\n\n\n\n[Yudkowsky][14:26][14:32]\nokay, so my question is, is there a little goal-oriented mind inside there that solves science problems the same way humans solve them, by engineering mental constructs that serve a goal of prediction, including backchaining for prediction goals and forward chaining from alternative hypotheses / internal tweaked states of the mental construct? 
or is there something else which solves the same problem, not how humans do it, without any internal goal orientation?\n\n\nPeople who would not in the first place realize that humans solve prediction problems by internally engineering internal mental constructs in a goal-oriented way, would of course imagine themselves able to imagine a formless spirit which produces “predictions” without being “goal-oriented” because they lack an understanding of internal machinery and so can combine whatever surface properties and English words they want to yield a beautiful optimism\n\n\nOr perhaps there is indeed some way to produce “predictions” without being “goal-oriented”, which gradient descent on a great stack of transformer layers would surely run across; but you will pardon my grave lack of confidence that someone has in fact seen so much further than myself, when they don’t seem to have appreciated in advance of my own questions why somebody who understood something about human internals would be skeptical of this.\n\n\nIf they’re sort of visibly trying to come up with it on the spot after I ask the question, that’s not such a great sign either.\n\n\n\n\n\n\nThis is not aimed particularly at you, but I hope the reader may understand something of why Eliezer Yudkowsky goes about sounding so gloomy all the time about other people’s prospects for noticing what will kill them, by themselves, without Eliezer constantly hovering over their shoulder every minute prompting them with almost all of the answer.\n\n\n\n\n\n[Shah][14:31]\nJust to check my understanding: if we’re talking about, say, how humans might go about understanding neural nets, there’s a goal of “have a theory that can retrodict existing observations and make new predictions”, backchaining might say “come up with hypotheses that would explain double descent”, forward chaining might say “look into bias and variance measurements”?\n\n\nIf so, yes, I think the AGI / GPT-5-that-is-an-AGI is doing something 
similar\n\n\n\n\n\n\n[Yudkowsky][14:33]\nyour understanding sounds okay, though it might make more sense to talk about a domain that human beings understand better than artificial neuroscience, for purposes of illustrating how scientific thinking works, since human beings haven’t actually gotten very far with artificial neuroscience.\n\n\n\n\n\n[Shah][14:33]\nFair point re using a different domain\n\n\nTo be clear I do not in fact think that GPT-N is safe because it is trained with supervised learning and I am confused at the combination of views that GPT-N will be AGI and GPT-N will be safe because it’s just doing predictions\n\n\nMaybe there is marginal additional safety but you clearly can’t say it is “definitely safe” without some additional knowledge that I have not seen so far\n\n\nGoing back to the original question, of what the next failure mode of debate would be assuming magical generalization, I think it’s just not one that makes sense to ask on your worldview / ontology; “magical generalization” is the equivalent of “assume that the goal-oriented mind somehow doesn’t do dangerous optimization towards its goal, yet nonetheless produces things that can only be produced by dangerous optimization towards a goal”, and so it is assuming the entire problem away\n\n\n\n\n\n\n[Yudkowsky][14:41]\nwell YES\n\n\nfrom my perspective the whole field of mental endeavor as practiced by alignment optimists consists of ancient alchemists wondering if they can get collections of surface properties, like a metal as shiny as gold, as hard as steel, and as self-healing as flesh, where optimism about such wonderfully combined properties can be infinite as long as you stay ignorant of underlying structures that produce some properties but not others\n\n\nand, like, maybe you *can* get something as hard as steel, as shiny as gold, and resilient or self-healing in various ways, but you sure don’t get it by ignorance of the internals\n\n\nand not for a while\n\n\nso if you need 
the magic sword in 2 years or the world ends, you’re kinda dead\n\n\n\n\n\n[Shah][14:46]\nPotentially dumb question: when humans do science, why don’t they then try to take over the world to do the best possible science? (If humans are doing dangerous goal-directed optimization when doing science, why doesn’t that lead to catastrophe?)\n\n\nYou could of course say that they just aren’t smart enough to do so, but it sure feels like (most) humans wouldn’t want to do the best possible science even if they were smarter\n\n\nI think this is similar to a question I asked before about plans being dangerous independent of their source, and the answer was that the source was misaligned\n\n\nBut in the description above you didn’t say anything about the thing-doing-science being misaligned, so I am once again confused\n\n\n\n\n\n\n[Yudkowsky][14:48]\nboy, so many dumb answers to this dumb question:\n\n\n* even relatively “smart” humans are not very smart compared to other humans, such that they don’t have a “take over the world” option available.\n* most humans who use Science were not smart enough to invent the underlying concept of Science for themselves from scratch; and Francis Bacon, who did, sure did want to take over the world with it.\n* groups of humans with relatively more Engineering sure did take over large parts of the world relative to groups that had relatively less.\n* Eliezer Yudkowsky clearly demonstrates that when you are smart *enough* you start trying to use Science and Engineering to take over your whole future lightcone, the other humans you’re thinking of just aren’t that smart, and, if they were, would inevitably converge towards Eliezer Yudkowsky, who is really a very typical example of a person that smart, even if he looks odd to you because you’re not seeing the population of other [dath ilani](https://www.lesswrong.com/tag/dath-ilan)\n\n\nI am genuinely not sure how to come up with a less dumb answer and it may require a more precise 
reformulation of the question\n\n\n\n\n\n[Shah][14:50]\nBut like, in Eliezer’s case, there is a different goal that is motivating him to use Science and Engineering for this purpose\n\n\nIt is not the prediction-goal that he instantiated in his mind as part of the method of doing Science\n\n\n\n\n\n\n[Yudkowsky][14:52]\nsure, and the mysterious formless thing within GPT-5 with “adaptive computation time” that broods and thinks, may be pursuing its prediction-subgoal for the sake of other goals, or be pursuing different subgoals of prediction separately without ever once having a goal of prediction, or have 66,666 different shards of desire across different kinds of predictive subproblems that were entrained by gradient descent which does more brute memorization and less Occam bias than natural selection\n\n\noh, are you asking why humans, when they do goal-oriented Science for the sake of their other goals, don’t (universally always) stomp on their other goals while pursuing the Science part?\n\n\n\n\n\n[Shah][14:54]\nWell, that might also be interesting to hear the answer to — I don’t know how I’d answer that through an Eliezer-lens — though it wasn’t exactly what I was asking\n\n\n\n\n\n\n[Yudkowsky][14:56]\nbasically the answer is “well, first of all, they do stomp on themselves to the extent that they’re stupid; and to the extent that they’re smart, pursuing X on the pathway to Y has a ‘natural’ structure for not stomping on Y which is simple and generalizes and obeys all the coherence theorems and can incorporate arbitrarily fine wiggles via epistemic modeling of those fine wiggles because those fine wiggles have a very compact encoding relative to the epistemic model, aka, predicting which forms of X lead to Y; and to the extent that group structures of humans can’t do that simple thing coherently because of their cognitive and motivational partitioning, the group structures of humans are back to not being able to coherently pursue the final goal 
again”\n\n\n\n\n\n[Shah][14:58]\n(Going back to what I meant to ask) It seems to me like humans demonstrate that you can have a prediction goal without that being your final/terminal goal. So it seems like with AI you similarly need to talk about the final/terminal goal. But then we talked about GPT and debate and so on for a while, and then you explained how GPTs would have deep patterns that do dangerous optimization, where the deep patterns involved instantiating a prediction goal. Notably, you didn’t say anything about a final/terminal goal. Do you see why I am confused?\n\n\n\n\n\n\n[Yudkowsky][15:00]\nso you can do prediction because it’s on the way to some totally other final goal – the way that any tiny superintelligence or superhumanly-coherent agent, if an optimization method somehow managed to run across *that* early on, with an arbitrary goal, which also understood the larger picture, would make good predictions while it thought the outer loop was probably doing gradient descent updates, and bide its time to produce rather different “predictions” once it suspected the results were not going to be checked given what the inputs had looked like.\n\n\nyou can imagine a thing that does prediction the same way that humans optimize inclusive genetic fitness, by pursuing dozens of little goals that tend to cohere to good prediction in the ancestral environment\n\n\nboth of these could happen in order; you could get a thing that pursued 66 severed shards of prediction as a small mind, and which, when made larger, cohered into a utility function around the 66 severed shards that sum to something which is not good prediction and which you could pursue by transforming the universe, and then strategically made good predictions while it expected the results to go on being checked\n\n\n\n\n\n[Shah][15:02]\nOH you mean that the outer objective is prediction\n\n\n\n\n\n\n[Yudkowsky][15:02]\n?\n\n\n\n\n\n[Shah][15:03]\nI have for quite a while thought that you meant that 
Science involves internally setting a subgoal of “predict a confusing part of reality”\n\n\n\n\n\n\n[Yudkowsky][15:03]\nit… does?\n\n\nI mean, that is true.\n\n\n\n\n\n[Shah][15:04]\nOkay wait. There are two things. One is that GPT-3 is trained with a loss function that one might call a prediction objective for human text. Two is that Science involves looking at a part of reality and figuring out how to predict it. These two things are totally different. I am now unsure which one(s) you were talking about in the conversation above\n\n\n\n\n\n\n[Yudkowsky][15:06]\nwhat I’m saying is that for GPT-5 to successfully do AGI-complete prediction of human text about Science, gradient descent must identify some formless thing that does Science internally in order to optimize the outer loss function for predicting human text about Science\n\n\njust like, if it learns to predict human text about multiplication, it must have learned something internally that does multiplication\n\n\n(afk, lunch/dinner)\n\n\n\n\n\n[Shah][15:07]\nYeah, so you meant the first thing, and I misinterpreted as the second thing\n\n\n(I will head to bed in this case — I was meaning to do that soon anyway — but I’ll first summarize.)\n\n\n\n\n\n\n[Yudkowsky][15:08]\nI am concerned that there is still a misinterpretation going on, because the case I am describing is both things at once\n\n\nthere is an outer loss function that scores text predictions, and an internal process which for purposes of predicting what Science would say must actually somehow do the work of Science\n\n\n\n\n\n[Shah][15:09]\nOkay let me look back at the conversation\n\n\n\n> \n> is there a little goal-oriented mind inside there that solves science problems the same way humans solve them, by engineering mental constructs that serve a goal of prediction, including backchaining for prediction goals and forward chaining from alternative hypotheses / internal tweaked states of the mental construct?\n> \n> \n> \n\n\nHere, is the word 
“prediction” meant to refer to the outer objective and/or predicting what English sentences about Science one might say, or is it referring to a subpart of the Process Of Science in which one aims to predict some aspect of reality (which is typically not in the form of English sentences)?\n\n\n\n\n\n\n[Yudkowsky][15:20]\nit’s here referring to the inner Science problem\n\n\n\n\n\n[Shah][15:21]\nOkay I think my original understanding was correct in that case\n\n\n\n> \n> from my perspective the whole field of mental endeavor as practiced by alignment optimists consists of ancient alchemists wondering if they can get collections of surface properties, like a metal as shiny as gold, as hard as steel, and as self-healing as flesh, where optimism about such wonderfully combined properties can be infinite as long as you stay ignorant of underlying structures that produce some properties but not others\n> \n> \n> \n\n\nI actually think something like this might be a crux for me, though obviously I wouldn’t put it the way you’re putting it. More like “are arguments about internal mechanisms more or less trustworthy than arguments about what you’re selecting for” (limiting to arguments we actually have access to, of course in the limit of perfect knowledge internal mechanisms beats selection). 
But that is I think a discussion for another day.\n\n\n\n\n\n\n[Yudkowsky][15:29]\nI think the critical insight – though it has a format that basically nobody except me ever visibly invokes in those terms, and I worry maybe it can only be taught by a kind of life experience that’s very hard to obtain – is the realization that *any* consistent reasonable story about underlying mechanisms will give you less optimistic forecasts than the ones you get by freely combining surface desiderata\n\n\n\n\n\n[Shah][1:38]\n(For the reader, I don’t think that “arguments about what you’re selecting for” is the same thing as “freely combining surface desiderata”, though I do expect they look approximately the same to Eliezer)\n\n\nYeah, I think I do not in fact understand why that is true for any consistent reasonable story.\n\n\nFrom my perspective, when I posit a hypothetical, you demonstrate that there is an underlying mechanism that produces strong capabilities that generalize combined with real world knowledge. I agree that a powerful AI system that we build, capable of executing a pivotal act, will have strong capabilities that generalize and real world knowledge. I am happy to assume for the purposes of this discussion that it involves backchaining from a target and forward chaining from things that you currently know or have. I agree that such capabilities could be used to cause an existential catastrophe (at least in a unipolar world; the multipolar case is more complicated, but we can stick with unipolar for now). 
None of my arguments so far are meant to factor through the route of “make it so that the AGI can’t cause an existential catastrophe even if it wants to”.\n\n\nThe main question according to me is why those capabilities are aimed towards achievement of a misaligned goal.\n\n\nIt feels like when I try to ask why we have misaligned goals, I often get answers that are of the form “look at the deep patterns underlying the strong capabilities that generalize, obviously given a misaligned goal they would generate the plan of killing the humans who are an obstacle towards achieving that goal”. This of course doesn’t work since it’s a circular argument.\n\n\nI can generate lots of arguments for why it would be aimed towards achievement of a misaligned goal, such as (1) only a tiny fraction of goals are aligned; the rest are misaligned, (2) the feedback we provide is unlikely to be the right goal and even small errors are fatal, (3) lots of misaligned goals are compatible with the feedback we provide even if the feedback is good, since the AGI might behave well until it can execute a treacherous turn, (4) the one example of strategically aware intelligence (i.e. humans) is misaligned relative to its creator. (I’m not saying I agree with these arguments, but I do understand them.)\n\n\nAre these the arguments that make you think that you get misaligned goals by default? Or is it something about “deep patterns” that isn’t captured by “strong capabilities that generalize, real-world knowledge, ability to cause an existential catastrophe if it wants to”?\n\n\n\n\n\n \n\n\n24. Follow-ups\n--------------\n\n\n \n\n\n\n[Yudkowsky][15:59]\nSo I realize it’s been a bit, but looking over this last conversation, I feel unhappy about the MIRI conversations sequence stopping exactly here, with an unanswered major question, after I ran out of energy last time.  I shall attempt to answer it, at least at all.  
CC @rohin @RobBensinger .\n\n\n\n\n\n| | | |\n| --- | --- | --- |\n| [Shah: 🙂] | [Ngo: 🙂] | [Bensinger: 🙂] |\n\n\n\nOne basic large class of reasons has the form, “Outer optimization on a precise loss function doesn’t get you inner consequentialism explicitly targeting that outer objective, just inner consequentialism targeting objectives which empirically happen to align with the outer objective given that environment and those capability levels; and at some point sufficiently powerful inner consequentialism starts to generalize far out-of-distribution, and, when it does, the consequentialist part generalizes much further than the empirical alignment with the outer objective function.”\n\n\nThis, I hope, is by now recognizable to individuals of interest as an overly abstract description of what happened with humans, who one day started building Moon rockets without seeming to care very much about calculating and maximizing their personal inclusive genetic fitness while doing that.  Their capabilities generalized much further out of the ancestral training distribution, than the empirical alignment of those capabilities on inclusive genetic fitness in the ancestral training distribution.\n\n\nOne basic large class of reasons has the form, “Because the real objective is something that cannot be precisely and accurately shown to the AGI and the differences are systematic and important.”\n\n\nSuppose you have a bunch of humans classifying videos of real events or text descriptions of real events or hypothetical fictional scenarios in text, as desirable or undesirable, and assigning them numerical ratings.  
Unless these humans are perfectly free of, among other things, all the standard and well-known cognitive biases about eg differently treating losses and gains, the value of this sensory signal is not “The value of our real CEV rating what is Good or Bad and how much” nor even “The value of a utility function we’ve got right now, run over the real events behind these videos”.  Instead it is in a systematic and real and visible way, “The result of running an error-prone human brain over this data to produce a rating on it.”\n\n\nThis is not a mistake by the AGI, it’s not something the AGI can narrow down by running more experiments, the *correct answer as defined* is what contains the alignment difficulty.  If the AGI, or for that matter the outer optimization loop, *correctly generalizes* the function that is producing the human feedback, it will include the systematic sources of error in that feedback.  If the AGI essays an experimental test of a manipulation that an ideal observer would see as “intended to produce error in humans” then the experimental result will be “Ah yes, this is correctly part of the objective function, the objective function I’m supposed to maximize sure does have this in it according to the sensory data I got about this objective.”\n\n\nPeople have fantasized about having the AGI learn something other than the true and accurate function producing its objective-describing data, as its actual objective, from the objective-describing data that it gets; I, of course, was the first person to imagine this and say it should be done, back in 2001 or so; unlike a lot of latecomers to this situation, I am skeptical of my own proposals and I know very well that I did not in fact come up with any reliable-looking proposal for learning ‘true’ human values off systematically erroneous human feedback.\n\n\nDifficulties here are fatal, because a true and accurate learning of what is producing the objective-describing signal, will correctly imply that 
higher values of this signal obtain as the humans are manipulated or as they are bypassed with physical interrupts for control of the feedback signal.  In other words, even if you could do a bunch of training on an outer objective, and get inner optimization perfectly targeted on that, the fact that it was perfectly targeted would kill you.\n\n\n\n\n\n[Bensinger][23:15]  (Feb. 27, 2022 follow-up comment)\nThis is the last log in the [Late 2021 MIRI Conversations](https://intelligence.org/late-2021-miri-conversations/). We’ll be concluding the sequence with a public [**Ask Me Anything**](https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-discussion-and-ama) (AMA) this Wednesday; you can start posting questions there now.\n\n\nMIRI has found the Discord format useful, and we plan to continue using it going into 2022. This includes follow-up conversations between Eliezer and Rohin, and a forthcoming conversation between Eliezer and Scott Alexander of [Astral Codex Ten](https://astralcodexten.substack.com/).\n\n\nSome concluding thoughts from Richard Ngo:\n\n\n\n\n\n\n[Ngo][6:20]  (Nov. 12 follow-up comment)\nMany thanks to Eliezer and Nate for their courteous and constructive discussion and moderation, and to Rob for putting the transcripts together.\n\n\nThis debate updated me about 15% of the way towards Eliezer’s position, with Eliezer’s arguments about the difficulties of coordinating to ensure alignment responsible for most of that shift. While I don’t find Eliezer’s core intuitions about intelligence too implausible, they don’t seem compelling enough to do as much work as Eliezer argues they do. 
As in the Foom debate, I think that our object-level discussions were constrained by our different underlying attitudes towards high-level abstractions, which are hard to pin down (let alone resolve).\n\n\nGiven this, I think that the most productive mode of intellectual engagement with Eliezer’s worldview going forward is probably not to continue debating it (since that would likely hit those same underlying disagreements), but rather to try to inhabit it deeply enough to rederive his conclusions and find new explanations of them which then lead to clearer object-level cruxes. I hope that these transcripts shed sufficient light for some readers to be able to do so.\n\n\n\n\n\n \n\n\n\nThe post [Shah and Yudkowsky on alignment failures](https://intelligence.org/2022/03/02/shah-and-yudkowsky-on-alignment-failures/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-03-02T15:30:14Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "ea9b803e1f4c2cddd48812182ea9d0e1", "title": "Ngo and Yudkowsky on scientific reasoning and pivotal acts", "url": "https://intelligence.org/2022/03/01/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts/", "source": "miri", "source_type": "blog", "text": "This is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the [Late 2021 MIRI Conversations](https://intelligence.org/late-2021-miri-conversations/) sequence, following [Ngo’s view on alignment difficulty](https://www.lesswrong.com/posts/gf9hhmSvpZfyfS34B/ngo-s-view-on-alignment-difficulty).\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | |\n| --- | --- |\n|  Chat by Richard and Eliezer  |  Other chat  |\n\n\n\n \n\n\n\n \n\n\n14. October 4 conversation\n--------------------------\n\n\n \n\n\n### 14.1. 
Predictable updates, threshold functions, and the human cognitive range\n\n\n \n\n\n\n[Ngo][15:05]\nTwo questions which I’d like to ask Eliezer:\n\n\n1. How strongly does he think that the “shallow pattern-memorisation” abilities of GPT-3 are evidence for Paul’s view over his view (if at all)?\n\n\n2. How does he suggest we proceed, given that he thinks directly explaining his model of the chimp-human difference would be the wrong move?\n\n\n\n\n\n\n[Yudkowsky][15:07]\n1 – I’d say that it’s some evidence for the Dario viewpoint which seems close to the Paul viewpoint.  I say it’s some evidence for the Dario viewpoint because Dario seems to be the person who made something like an advance prediction about it.  It’s not enough to make me believe that you can straightforwardly extend the GPT architecture to 3e14 parameters and train it on 1e13 samples and get human-equivalent performance.\n\n\n\n\n\n[Ngo][15:09]\nDid you make any advance predictions, around the 2008-2015 period, of what capabilities we’d have before AGI?\n\n\n\n\n\n\n[Yudkowsky][15:10]\nnot especially that come to mind?  on my model of the future this is not particularly something I am supposed to know unless there is a rare flash of predictability.\n\n\n\n\n\n\n[Ngo][15:11]\n\n> \n> 1 – I’d say that it’s some evidence for the Dario viewpoint which seems close to the Paul viewpoint. I say it’s some evidence for the Dario viewpoint because Dario seems to be the person who made something like an advance prediction about it. It’s not enough to make me believe that you can straightforwardly extend the GPT architecture to 3e14 parameters and train it on 1e13 samples and get human-equivalent performance.\n> \n> \n> \n\n\nFor the record I remember Paul being optimistic about language when I visited OpenAI in summer 2018. 
But I don’t know how advanced internal work on GPT-2 was by then.\n\n\n\n\n\n\n[Yudkowsky][15:13]\n2 – in lots of cases where I learned more specifics about X, and updated about Y, I had the experience of looking back and realizing that knowing *anything* specific about X would have predictably produced a directional update about Y.  like, knowing anything in particular about how the first AGI eats computation, would cause you to update far away from thinking that biological analogies to the computation consumed by humans were a good way to estimate how many computations an AGI needs to eat.  you know lots of details about how humans consume watts of energy, and you know lots of details about how modern AI consumes watts, so it’s very visible that these quantities are so incredibly different and go through so many different steps that they’re basically unanchored from each other.\n\n\nI have specific ideas about how you get AGI that isn’t just scaling up Stack More Layers, which lead me to think that the way to estimate the computational cost of it is not “3e14 parameters trained at 1e16 ops per step for 1e13 steps, because that much computation and parameters seems analogous to human biology and 1e13 steps is given by past scaling laws”, a la recent OpenPhil publication.  
But it seems to me that it should be possible to have the abstract insight that knowing more about general intelligence in AGIs or in humans would make the biological analogy look less plausible, because you wouldn’t be matching up an unknown key to an unknown lock.\n\n\nUnfortunately I worry that this depends on some life experience with actual discoveries to get something this abstract-sounding on a gut level, because people basically never seem to make abstract updates of this kind when I try to point to them as predictable directional updates?\n\n\nBut, in principle, I’d hope there would be aspects of this where I could figure out how to show that *any* knowledge of specifics would probably update you in a predictable direction, even if it doesn’t seem best for Earth for me to win that argument by giving specifics conditional on those specifics actually being correct, and it doesn’t seem especially sound to win that argument by giving specifics that are wrong.\n\n\n\n\n\n[Ngo][15:17]\nI’m confused by this argument. Before I thought much about the specifics of the chimpanzee-human transition, I found the argument “humans foomed (by biological standards) so AIs will too” fairly compelling. 
But after thinking more about the specifics, it seems to me that the human foom was in part caused by a factor (sharp cultural shift) that won’t be present when we train AIs.\n\n\n\n\n\n\n[Yudkowsky][15:17]\nsure, and other factors will be present in AIs but not in humans\n\n\n\n\n\n[Ngo][15:17]\nThis seems like a case where more specific knowledge updated me away from your position, contrary to what you’re claiming.\n\n\n\n\n\n\n[Yudkowsky][15:18]\neg, human brains don’t scale and mesh, while it’s far more plausible that with AI you could just run more and more of it\n\n\nthat’s a huge factor leading one to expect AI to scale faster than human brains did\n\n\nit’s like communication between humans, but squared!\n\n\nthis is admittedly a specific argument and I’m not sure how it would abstract out to any specific argument\n\n\n\n\n\n[Ngo][15:20]\nAgain, this is an argument that I believed less after looking into the details, because right now it’s pretty difficult to throw more compute at neural networks at runtime.\n\n\nWhich is not to say that it’s a bad argument, the differences in compute-scalability between humans and AIs are clearly important. 
But I’m confused about the structure of your argument that knowing more details will predictably update me in a certain direction.\n\n\n\n\n\n\n[Yudkowsky][15:21]\nI suppose the genericized version of my actual response to that would be, “architectures that have a harder time eating more compute are architectures which, for this very reason, are liable to need better versions invented of them, and this in particular seems like something that plausibly happens before scaling to general intelligence is practically possible”\n\n\n\n\n\n[Soares][15:23]\n(Eliezer, I see Richard as requesting that you either back down from, or clarify, your claim that any specific observations about how much compute AI systems require will update him in a predictable direction.)\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][15:24]\nI’m not saying I know how to make that abstractized argument for exactly what Richard cares about, in part because I don’t understand Richard’s exact model, just that it’s one way to proceed past the point where the obvious dilemma crops up of, “If a theory about AGI capabilities is true, it is a disservice to Earth to speak it, and if a theory about AGI capabilities is false, an argument based on it is not sound.”\n\n\n\n\n\n[Ngo][15:25]\nAh, I see.\n\n\n\n\n\n\n[Yudkowsky][15:26]\npossible viewpoint to try: that systems in general often have threshold functions as well as smooth functions inside them.\n\n\nonly in ignorance, then, do we imagine that the whole thing is one smooth function.\n\n\nthe history of humanity has a threshold function of, like, communication or culture or whatever.\n\n\nthe correct response to this is not, “ah, so this was the unique, never-to-be-seen-again sort of fact which cropped up in the weirdly complicated story of humanity in particular, which will not appear in the much simpler story of AI”\n\n\nthis only sounds plausible because you don’t know the story of AI so you think it will be a simple story\n\n\nthe 
correct generalization is “guess some weird thresholds will also pop up in whatever complicated story of AI will appear in the history books”\n\n\n\n\n\n[Ngo][15:28]\nHere’s a quite general argument about why we shouldn’t expect too many threshold functions in the impact of AI: because at any point, humans will be filling in the gaps of whatever AIs can’t do. (The lack of this type of smoothing is, I claim, why culture was a sharp threshold for humans – if there had been another intelligent species we could have learned culture from, then we would have developed more gradually.)\n\n\n\n\n\n\n[Yudkowsky][15:30]\nsomething like this indeed appears in my model of why I expect not much impact on GDP before AGI is powerful enough to bypass human economies entirely\n\n\nduring the runup phase, pre-AGI won’t be powerful enough to do “whole new things” that depend on doing lots of widely different things that humans can’t do\n\n\njust marginally new things that depend on doing one thing humans can’t do, or can do but a bunch worse\n\n\n\n\n\n[Ngo][15:31]\nOkay, that’s good to know.\n\n\nWould this also be true in [a civilisation of village idiots](https://www.lesswrong.com/posts/gf9hhmSvpZfyfS34B/ngo-s-view-on-alignment-difficulty)?\n\n\n\n\n\n\n[Yudkowsky][15:32]\nthere will be sufficient economic reward for building out industries that are mostly human plus one thing that pre-AGI does, and people will pocket those economic rewards, go home, and not be more ambitious than that.  
I have trouble empathically grasping *why* almost all the CEOs are like this in our current Earth, because I am very much not like that myself, but observationally, the current Earth sure does seem to behave like rich people would almost uniformly rather not rock the boat too much.\n\n\nI did not understand the whole thing about village idiots actually\n\n\ndo you want to copy and paste the document, or try rephrasing the argument?\n\n\n\n\n\n[Ngo][15:35]\nRephrasing:\n\n\nClaim 1: AIs will be better at doing scientific research (and other similar tasks) than village idiots, before we reach AGI.\n\n\nClaim 2: Village idiots still have the core of general intelligence (which you claim chimpanzees don’t have).\n\n\nClaim 3: It would be surprising if narrow AI’s research capabilities fell specifically into the narrow gap between village idiots and Einsteins, given that they’re both general intelligences and are very similar in terms of architecture, algorithms, etc.\n\n\n(If you deny claim 2, then we can substitute, say, someone at the 10th percentile of human intelligence – I don’t know what specific connotations “village idiot” has to you.)\n\n\n\n\n\n\n[Yudkowsky][15:37]\nMy models do not have an easy time of visualizing “as generally intelligent as a chimp, but specialized to science research, gives you superhuman scientific capability and the ability to make progress in novel areas of science”.\n\n\n(this is a reference back to the pre-rephrase in the document)\n\n\nit seems like, I dunno, “gradient descent can make you generically good at anything without that taking too much general intelligence” must be a core hypothesis there?\n\n\n\n\n\n[Ngo][15:39]\nI mean, we both agree that gradient descent can produce *some* capabilities without also producing much general intelligence. 
But claim 1 plus your earlier claims that narrow AIs won’t surpass humans at scientific research, lead to the implication that the limitations of gradient-descent-without-much-general-intelligence fall in a weirdly narrow range.\n\n\n\n\n\n\n[Yudkowsky][15:42]\nI do credit the Village Idiot to Einstein Interval with being a little broader as a target than I used to think, since the Alpha series of Go-players took a couple of years to go from pro to world-beating even once they had a scalable algorithm.  Still seems to me that, over time, the wall clock time to traverse those ranges has been getting shorter, like Go taking less time than Chess.  My intuitions still say that it’d be quite weird to end up hanging out for a long time with AGIs that conduct humanlike conversations and are ambitious enough to run their own corporations while those AGIs are still not much good at science.\n\n\nBut on my present model, I suspect the limitations of “gradient-descent-without-much-general-intelligence” to fall underneath the village idiot side?\n\n\n\n\n\n[Ngo][15:43]\nOh, interesting.\n\n\nThat seems like a strong prediction\n\n\n\n\n\n\n[Yudkowsky][15:43]\nYour model, as I understand it, is saying, “But surely, GD-without-GI must suffice to produce better scientists than village idiots, by specializing chimps on science” and my current reply, though it’s not a particular question I’ve thought a lot about before, is, “That… does not quite seem to me like a thing that should happen along the mainline?”\n\n\nthough, as always, in the limit of superintelligences doing things, or our having the Textbook From The Future, we could build almost any kind of mind on purpose if we knew how, etc.\n\n\n\n\n\n[Ngo][15:44]\nFor example, I expect that if I prompt GPT-3 in the right way, it’ll say some interesting and not-totally-nonsensical claims about advanced science.\n\n\nWhereas it would be very hard to prompt a village idiot to do the same.\n\n\n\n\n\n\n[Yudkowsky][15:44]\neg, a 
superintelligence could load up chimps with lots of domain-specific knowledge they were not generally intelligent enough to learn themselves.\n\n\nehhhhhh, it is *not* clear to me that GPT-3 is better than a village idiot at advanced science, even in this narrow sense, especially if the village idiot is allowed some training\n\n\n\n\n\n[Ngo][15:46]\nIt’s not clear to me either. But it does seem plausible, and then it seems even more plausible that this will be true of GPT-4\n\n\n\n\n\n\n[Yudkowsky][15:46]\nI wonder if we’re visualizing different village idiots\n\n\nmy choice of “village idiot” originally was probably not the best target for visualization, because in a lot of cases, a village idiot – especially the stereotype of a village idiot – is, like, a damaged general intelligence with particular gears missing?\n\n\n\n\n\n[Ngo][15:47]\nI’d be happy with “10th percentile intelligence”\n\n\n\n\n\n\n[Yudkowsky][15:47]\nwhereas it seems like what you want is something more like “Homo erectus but it has language”\n\n\noh, wow, 10th percentile intelligence?\n\n\nthat’s super high\n\n\nGPT-3 is far far out of its league\n\n\n\n\n\n[Ngo][15:49]\nI think GPT-3 is far below this person’s league in a lot of ways (including most common-sense reasoning) but I become much less confident when we’re talking about abstract scientific reasoning.\n\n\n\n\n\n\n[Yudkowsky][15:51]\nI think that if scientific reasoning were as easy as you seem to be imagining(?), the publication factories of the modern world would be *much* more productive of real progress.\n\n\n\n\n\n[Ngo][15:51]\nWell, a 10th percentile human is very unlikely to contribute to real scientific progress either way\n\n\n\n\n\n\n[Yudkowsky][15:53]\nLike, on my current model of how the world really works, China pours vast investments into universities and sober-looking people with PhDs and classes and tests and postdocs and journals and papers; but none of this is the real way of Science which is actually, secretly, 
unbeknownst to China, passed down in rare lineages and apprenticeships from real scientist mentor to real scientist student, and China doesn’t have much in the way of lineages so the extra money they throw at stuff doesn’t turn into real science.\n\n\n\n\n\n[Ngo][15:52]\nCan you think of any clear-cut things that they could do and GPT-3 can’t?\n\n\n\n\n\n\n[Yudkowsky][15:53]\nLike… make sense… at all?  Invent a handaxe when nobody had ever seen a handaxe before?\n\n\n\n\n\n[Ngo][15:54]\nYou’re claiming that 10th percentile humans invent handaxes?\n\n\n\n\n\n\n[Yudkowsky][15:55]\nThe activity of rearranging scientific sentences into new plausible-sounding paragraphs is well within the reach of publication factories, in fact, they often use considerably more semantic sophistication than that, and yet, this does not cumulate into real scientific progress even in quite large amounts.\n\n\nI think GPT-3 is basically just Not Science Yet to a much greater extent than even these empty publication factories.\n\n\nIf 10th percentile humans don’t invent handaxes, GPT-3 sure as hell doesn’t.\n\n\n\n\n\n[Ngo][15:55]\nI don’t think we’re disagreeing. Publication factories are staffed with people who do better academically than 90+% of all humans.\n\n\nIf 90th-percentile humans are very bad at science, then of course GPT-3 and 10th-percentile humans are very very bad at science. But it still seems instructive to compare them (e.g. on tasks like “talk cogently about a complex abstract topic”)\n\n\n\n\n\n\n[Yudkowsky][15:58]\nI mean, while it is usually weird for something to be barely within a species’s capabilities while being within those capabilities at all, such that only relatively smarter individual organisms can do it, in the case of something that a social species has only very recently started to do collectively, it’s plausible that the thing appeared at the point where it was barely accessible to the smartest members.  
Eg, it wouldn’t be surprising if it would have taken a long time or forever for humanity to invent science from scratch, if all the Francis Bacons and Newtons and even average-intelligence people were eliminated leaving only the bottom 10%.  Because our species just started doing that, at the point where our species was barely able to start doing that, meaning, at the point where some rare smart people could spearhead it, historically speaking.  It’s not obvious whether or not less smart people can do it over a longer time.\n\n\nI’m not sure we disagree much about the human part of this model.\n\n\nMy guess is that our disagreement is more about GPT-3.\n\n\n“Talk ‘cogently’ about a complex abstract topic” doesn’t seem like much of anything significant to me, if GPT-3 is ‘cogent’.  It fails to pass the threshold for inventing science and, I expect, for most particular sciences.\n\n\n\n\n\n[Ngo][16:00]\nHow much training do you think a 10th-percentile human would need in a given subject matter (say, economics) before they could answer questions as well as GPT-3 can?\n\n\n(Right now I think GPT-3 does better by default because it at least recognises the terminology, whereas most humans don’t at all.)\n\n\n\n\n\n\n[Yudkowsky][16:01]\nI also expect that if you offer a 10th-percentile human lots of money, they can learn to talk more cogently than GPT-3 about narrower science areas.  
GPT-3 is legitimately more well-read at its lower level of intelligence, but train the 10-percentiler in a narrow area and they will become able to write better nonsense about that narrow area.\n\n\n\n\n\n[Ngo][16:01]\nThis sounds like an experiment we can actually run.\n\n\n\n\n\n\n[Yudkowsky][16:02]\nLike, what we’ve got going on here is a real *breadth* advantage that GPT-3 has in some areas, but the breadth doesn’t add up because it lacks the depth of a 10%er.\n\n\n\n\n\n[Ngo][16:02]\nIf we asked them to read a single introductory textbook and then quiz both them and GPT-3 about items covered in that textbook, do you expect that the human would come out ahead?\n\n\n\n\n\n\n[Yudkowsky][16:02]\nAI has figured out how to do a subhumanly shallow kind of thinking, and it *is* to be expected that when AI can do anything at all, it can soon do more of that thing than the whole human species could do.\n\n\nNo, that’s nothing remotely like giving the human the brief training the human needs to catch up to GPT-3’s longer training.\n\n\nA 10%er does not learn in an instant – they learn faster than GPT-3, but not in an instant.\n\n\nThis is more like a scenario of paying somebody to, like, sit around for a year with an editor, learning how to mix-and-match economics sentences until they can learn to sound more like they’re making an argument than GPT-3 does, despite still not understanding any economics.\n\n\nA lot of the learning would just go into producing sensible-sounding nonsense at all, since lots of 10%ers have not been to college and have not learned how to regurgitate rearranged nonsense for college teachers.\n\n\n\n\n\n[Ngo][16:05]\nWhat percentage of humans do you think could learn to beat GPT-3’s question-answering by reading a single textbook over, say, a period of a month?\n\n\n\n\n\n\n[Yudkowsky][16:06]\n¯\\\\_(ツ)\\_/¯\n\n\n\n\n\n[Ngo][16:06]\nMore like 0.5 or 5 or 50?\n\n\n\n\n\n\n[Yudkowsky][16:06]\nHumans cannot in general pass the Turing Test for posing 
as AIs!\n\n\nWhat percentage of humans can pass as a calculator by reading an arithmetic textbook?\n\n\nZero!\n\n\n\n\n\n[Ngo][16:07]\nI’m not asking them to mimic GPT-3, I’m asking them to produce better answers.\n\n\n\n\n\n\n[Yudkowsky][16:07]\nThen it depends on what kind of answers!\n\n\nI think a lot of 10%ers could learn to do wedding-cake multiplication, if sufficiently well-paid as adults rather than being tortured in school, out to 6 digits, thus handily beating the current GPT-3 at ‘multiplication’.\n\n\n\n\n\n[Ngo][16:08]\nFor example: give them an economics textbook to study for a month, then ask them what inflation is, whether it goes up or down if the government prints more money, whether the price of something increases or decreases when the supply increases.\n\n\n\n\n\n\n[Yudkowsky][16:09]\nGPT-3 did not learn to produce its responses by reading *textbooks*.\n\n\nYou’re not matching the human’s data to GPT-3’s data.\n\n\n\n\n\n[Ngo][16:10]\nI know, this is just the closest I can get in an experiment that seems remotely plausible to actually run.\n\n\n\n\n\n\n[Yudkowsky][16:10]\nYou would want to collect, like, 1,000 Reddit arguments about inflation, and have the human read that, and have the human produce their own Reddit arguments, and have somebody tell them whether they sounded like real Reddit arguments or not.\n\n\nThe textbook is just not the same thing at all.\n\n\nI’m not sure we’re at the core of the argument, though.\n\n\nTo me it seems like GPT-3 is allowed to be superhuman at producing remixed and regurgitated sentences about economics, because this is about as relevant to Science talent as a calculator being able to do perfect arithmetic, only less so.\n\n\n\n\n\n[Ngo][16:15]\nSuppose that the remixed and regurgitated sentences slowly get more and more coherent, until GPT-N can debate with a professor of economics and sustain a reasonable position.\n\n\n\n\n\n\n[Yudkowsky][16:15]\nAre these points that GPT-N read elsewhere on the 
Internet, or are they new good points that no professor of economics on Earth has ever made before?\n\n\n\n\n\n[Ngo][16:15]\nI guess you don’t expect this to happen, but I’m trying to think about what experiments we could run to get evidence for or against it.\n\n\nThe latter seems both very hard to verify, and also like a very high bar – I’m not sure if most professors of economics have generated new good arguments that no other professor has ever made before.\n\n\nSo I guess the former.\n\n\n\n\n\n\n[Yudkowsky][16:18]\nThen I think that you can do this without being able to do science.  It’s a lot like if somebody with a really good memory was lucky enough to have read that exact argument on the Internet yesterday, and to have a little talent for paraphrasing.  Not by coincidence, having this ability gives you – on my model – no ability to do science, invent science, be the first to build handaxes, or design nanotechnology.\n\n\nI admit, this does reflect my personal model of how Science works, presumably not shared by many leading bureaucrats, where in fact the papers full of regurgitated scientific-sounding sentences are not accomplishing much.\n\n\n\n\n\n[Ngo][16:20]\nSo it seems like your model doesn’t rule out narrow AIs producing well-reviewed scientific papers, since you don’t trust the review system very much.\n\n\n\n\n\n\n[Yudkowsky][16:23]\nI’m trying to remember whether or not I’ve heard of that happening, like, 10 years ago.\n\n\nMy vague recollection is that things in the Sokal Hoax genre where the submissions succeeded, used humans to hand-generate the nonsense rather than any submissions in the genre having been purely machine-generated.\n\n\n\n\n\n[Ngo][16:24]\nWhich doesn’t seem like an unreasonable position, but it does make it harder to produce tests that we have opposing predictions on.\n\n\n\n\n\n\n[Yudkowsky][16:24]\nObviously, that doesn’t mean it couldn’t have been done 10 years ago, because 10 years ago it’s plausibly a lot easier to 
hand-generate passing nonsense than to write an AI program that does it.\n\n\noh, wait, I’m wrong!\n\n\n\n\n\n\n> \n> In April of 2005 the team’s submission, “Rooter: A Methodology for the Typical Unification of Access Points and Redundancy,” was accepted as a non-reviewed paper to the World Multiconference on Systemics, Cybernetics and Informatics (WMSCI), a conference that Krohn says is known for “being spammy and having loose standards.”\n> \n> \n> \n\n\n \n\n\n\n> \n> in 2013 IEEE and Springer Publishing removed more than 120 papers from their sites after a French researcher’s analysis determined that they were generated via SCIgen\n> \n> \n> \n\n\n\n\n\n[Ngo][16:26]\nOh, interesting\n\n\nMeta note: I’m not sure where to take the direction of the conversation at this point. Shall we take a brief break?\n\n\n\n\n\n\n[Yudkowsky][16:27]\n\n> \n> The creators continue to get regular emails from computer science students proudly linking to papers they’ve snuck into conferences, as well as notes from researchers urging them to make versions for other disciplines.\n> \n> \n> \n\n\nSure! Resume 5p?\n\n\n\n\n\n[Ngo][16:27]\nYepp\n\n\n\n\n\n \n\n\n### 14.2. Domain-specific heuristics and nanotechnology\n\n\n \n\n\n\n[Soares][16:41]\nA few takes:\n\n\n1. It looks to me like there’s some crux in “how useful will the ‘shallow’ stuff get before dangerous things happen”. I would be unsurprised if this spiraled back into the gradualness debate. I’m excited about attempts to get specific and narrow disagreements in this domain (not necessarily bettable; I nominate distilling out specific disagreements before worrying about finding bettable ones).\n\n\n2. It seems plausible to me we should have some much more concrete discussion about possible ways things could go right, according to Richard. I’d be up for playing the role of beeping when things seem insufficiently concrete.\n\n\n3. 
It seems to me like Richard learned a couple things about Eliezer’s model in that last bout of conversation. I’d be interested to see him try to paraphrase his current understanding of it, and to see Eliezer produce beeps where it seems particularly off.\n\n\n\n\n\n\n[Yudkowsky][17:00]\n👋\n\n\n\n\n\n[Ngo][17:02]\nHmm, I’m not sure that I learned too much about Eliezer’s model in this last round.\n\n\n\n\n\n\n[Soares][17:03]\n(dang :-p)\n\n\n\n\n\n\n[Ngo][17:03]\nIt seems like Eliezer thinks that the returns of scientific investigation are very heavy-tailed.\n\n\nWhich does seem pretty plausible to me.\n\n\nBut I’m not sure how useful this claim is for thinking about the development of AI that can do science.\n\n\nI attempted in my document to describe some interventions that would help things go right.\n\n\nAnd the levels of difficulty involved.\n\n\n\n\n\n\n[Yudkowsky][17:07]\n(My model is something like: there are some very shallow steps involved in doing science, lots of medium steps, occasional very deep steps, assembling the whole thing into Science requires having all the lego blocks available.  As soon as you look at anything with details, it ends up ‘heavy-tailed’ because it has multiple pieces and says how things don’t work if all the pieces aren’t there.)\n\n\n\n\n\n[Ngo][17:08]\nEliezer, do you have an estimate of how much slower science would proceed if everyone’s IQs were shifted down by, say, 30 points?\n\n\n\n\n\n\n[Yudkowsky][17:10]\nIt’s not obvious to me that science proceeds significantly past its present point.  
I would not have the right to be surprised if Reality told me the correct answer was that a civilization like that just doesn’t reach AGI, ever.\n\n\n\n\n\n[Ngo][17:12]\nDoesn’t your model take a fairly big hit from predicting that humans just happen to be within 30 IQ points of not being able to get any more science?\n\n\nIt seems like a surprising coincidence.\n\n\nOr is this dependent on the idea that doing science is much harder now than it used to be?\n\n\nAnd so if we’d been dumber, we might have gotten stuck before newtonian mechanics, or else before relativity?\n\n\n\n\n\n\n[Yudkowsky][17:13]\nNo, humanity is exactly the species that finds it barely possible to do science.\n\n\n\n\n\n[Ngo][17:14]\nIt seems to me like humanity is exactly the species that finds it barely possible to do *civilisation*.\n\n\n\n\n\n\n[Yudkowsky][17:14]\nIf it were possible to do it with less intelligence, we’d be having this conversation over the Internet that we’d developed with less intelligence.\n\n\n\n\n\n[Ngo][17:15]\nAnd it seems like many of the key inventions that enabled civilisation weren’t anywhere near as intelligence-bottlenecked as modern science.\n\n\n\n\n\n\n[Yudkowsky][17:15]\nYes, it does seem that there’s quite a narrow band between “barely smart enough to develop agriculture” and “barely smart enough to develop computers”! Though there were genuinely fewer people in the preagricultural world, with worse nutrition and no Ashkenazic Jews, and there’s the whole question about to what degree the reproduction of the shopkeeper class over several centuries was important to the Industrial Revolution getting started.\n\n\n\n\n\n[Ngo][17:15]\n(e.g. 
you’d get better spears or better plows or whatever just by tinkering, whereas you’d never get relativity just by tinkering)\n\n\n\n\n\n\n[Yudkowsky][17:17]\nI model you as taking a lesson from this which is something like… you can train up a villager to be John von Neumann by spending some evolutionary money on giving them science-specific brain features, since John von Neumann couldn’t have been much more deeply or generally intelligent, and you could spend even more money and make a chimp a better scientist than John von Neumann.\n\n\nMy model is more like, yup, the capabilities you need to invent aqueducts sure do generalize the crap out of things, though also at the upper end of cognition there are compounding returns which can bring John von Neumann into existence, and also also there’s various papers suggesting that selection was happening really fast over the last few millennia and real shifts in cognition shouldn’t be ruled out.  (This last part is an update to what I was thinking when I wrote [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), and is from my own perspective a more gradualist line of thinking, because it means there’s a wider actual target to traverse before you get to von Neumann.)\n\n\n\n\n\n[Ngo][17:20]\nIt’s not that “von Neumann isn’t much more deeply generally intelligent”, it’s more like “domain-specific heuristics and instincts get you a long way”. E.g. 
soccer is a domain where spending evolutionary money on specific features will very much help you beat von Neumann, and so is art, and so is music.\n\n\n\n\n\n\n[Yudkowsky][17:20]\nMy skepticism here is that there’s a version of, like, “invent nanotechnology” which routes through just the shallow places, which humanity stumbles over before we stumble over deep AGI.\n\n\n\n\n\n[Ngo][17:21]\nWould you be comfortable publicly discussing the actual cognitive steps which you think would be necessary for inventing nanotechnology?\n\n\n\n\n\n\n[Yudkowsky][17:23]\nIt should not be overlooked that there’s a very valid sibling of the old complaint “Anything you can do ceases to be AI”, which is that “Things you can do with surprisingly-to-your-model shallow cognition are precisely the things that Reality surprises you by telling you that AI can do earlier than you expected.”  When we see GPT-3, we were getting some amount of real evidence about AI capabilities advancing faster than I expected, and some amount of evidence about GPT-3’s task being performable using shallower cognition than expected.\n\n\nMany people were particularly surprised by Go because they thought that Go was going to require deeper real thought than chess.\n\n\nAnd I think AlphaGo probably was thinking in a legitimately deeper way than Deep Blue.  
Just not as much deeper as Douglas Hofstadter thought it would take.\n\n\nConversely, people thought a few years ago that driving cars really seemed to be the sort of thing that machine learning would be good at, and were unpleasantly surprised by how the last 0.1% of driving conditions were resistant to shallow techniques.\n\n\nDespite the inevitable fact that some surprises of this kind now exist, and that more such surprises will exist in the future, it continues to seem to me that science-and-engineering on the level of “invent nanotech” still seems pretty unlikely to be easy to do with shallow thought, by means that humanity discovers before AGI tech manages to learn deep thought?\n\n\nWhat actual cognitive steps?  Outside-the-box thinking, throwing away generalizations that governed your previous answers and even your previous questions, inventing new ways to represent your questions, figuring out which questions you need to ask and developing plans to answer them; these are some answers that I hope will be sufficiently useless to AI developers that it is safe to give them, while still pointing in the direction of things that have an un-GPT-3-like quality of depth about them.\n\n\nDoing this across unfamiliar domains that couldn’t be directly trained in by gradient descent because they were too expensive to simulate a billion examples of\n\n\nIf you have something this powerful, why is it not also noticing that the world contains humans?  Why is it not noticing itself?\n\n\n\n\n\n[Ngo][17:30]\nIf humans were to invent this type of nanotech, what do you expect the end intellectual result to be?\n\n\nE.g. 
consider the human knowledge involved in building cars\n\n\nThere are thousands of individual parts, each of which does a specific thing\n\n\n\n\n\n\n[Yudkowsky][17:30]\nUhhhh… is there a reason why “Eric Drexler’s *Nanosystems* but, like, the real thing, modulo however much Drexler did not successfully Predict the Future about how to do that, which was probably a lot” is not the obvious answer here?\n\n\n\n\n\n[Ngo][17:31]\nAnd some deep principles governing engines, but not really very crucial ones to actually building (early versions of) those engines\n\n\n\n\n\n\n[Yudkowsky][17:31]\nthat’s… not historically true at all?\n\n\ngetting a grip on quantities of heat and their flow was *critical* to getting steam engines to work\n\n\nit didn’t happen until the math was there\n\n\n\n\n\n[Ngo][17:32]\nAh, interesting\n\n\n\n\n\n\n[Yudkowsky][17:32]\nmaybe you can be a mechanic banging on an engine that somebody else designed, around principles that somebody even earlier invented, without a physics degree\n\n\nbut, like, engineers have actually needed math since, like, that’s been a thing, it wasn’t just a prestige trick\n\n\n\n\n\n[Ngo][17:34]\nOkay, so you expect there to be a bunch of conceptual work in finding equations which govern nanosystems.\n\n\n\n> \n> Uhhhh… is there a reason why “Eric Drexler’s *Nanosystems* but, like, the real thing, modulo however much Drexler did not successfully Predict the Future about how to do that, which was probably a lot” is not the obvious answer here?\n> \n> \n> \n\n\nThis may in fact be the answer; I haven’t read it though.\n\n\n\n\n\n\n[Yudkowsky][17:34]\nor other abstract concepts than equations, which have never existed before\n\n\nlike, maybe not with a type signature unknown to humanity, but with specific instances unknown to present humanity\n\n\nthat’s what I’d expect to see from humanly designed nanosystems\n\n\n\n\n\n[Ngo][17:35]\nSo something like AlphaFold is only doing a very small proportion of the work here, since 
it’s not able to generate new abstract concepts (of the necessary level of power)\n\n\n\n\n\n\n[Yudkowsky][17:35]\nyeeeessss, that is why DeepMind did not take over the world last year\n\n\nit’s not just that AlphaFold lacks the concepts but that it lacks the machinery to invent those concepts and the machinery to do anything with such concepts\n\n\n\n\n\n[Ngo][17:38]\nI think I find this fairly persuasive, but I also expect that people will come up with increasingly clever ways to leverage narrow systems so that they can do more and more work.\n\n\n(including things like: if you don’t have enough simulations, then train another narrow system to help fix that, etc)\n\n\n\n\n\n\n[Yudkowsky][17:39]\n(and they will accept their trivial billion-dollar-payouts and World GDP will continue largely undisturbed, on my mainline model, because it will be easiest to find ways to make money by leveraging narrow systems on the less regulated, less real parts of the economy, instead of trying to build houses or do medicine, etc.)\n\n\nreal tests being expensive, simulation being impossibly expensive, and not having enough samples to train your civilization’s current level of AI technology, is not a problem you can solve by training a new AI to generate samples, because you do not have enough samples to train your civilization’s current level of AI technology to generate more samples\n\n\n\n\n\n[Ngo][17:41]\nThinking about nanotech makes me more sympathetic to the argument that developing general intelligence will bring a sharp discontinuity. But it also makes me expect longer timelines to AGI, during which there’s more time to do interesting things with narrow AI. 
So I guess it weighs more against Dario’s view, less against Paul’s view.\n\n\n\n\n\n\n[Yudkowsky][17:41]\nwell, I’ve been debating Paul about that separately in the timelines channel, not sure about recapitulating it here\n\n\nbut in broad summary, since I expect the future to look like it was drawn from the “history book” barrel and not the “futurism” barrel, I expect huge barriers to doing *huge* things with narrow AI in small amounts of time; you can sell waifutech because it’s unregulated and hard to regulate, but that doesn’t feed into core mining and steel production.\n\n\nwe could already have double the GDP if it was legal to build houses and hire people, etc., and the change brought by pre-AGI will perhaps be that our GDP could *quadruple* instead of just *double* if it was legal to do things, but that will not make it legal to do things, and why would anybody try to do things and probably fail when there are easier $36 billion profits to be made in waifutech.\n\n\n\n\n \n\n\n\n### 14.3. Relatively shallow cognition, Go, and math\n\n\n \n\n\n\n[Ngo][17:45]\nI’d be interested to see Paul’s description of how we would train AIs to solve hard scientific problems. I think there’s some prediction that’s like “we train it on arxiv and fine-tune it until it starts to output credible hypotheses about nanotech”. 
And this seems like it has a step that’s quite magical to me, but perhaps that’ll be true of any prediction that I make before fully understanding how intelligence works.\n\n\n\n\n\n\n[Yudkowsky][17:46]\nmy belief is not so much that this training can never happen, but that this probably means the system was trained *beyond the point of safe shallowness*\n\n\nnot in principle over all possible systems a superintelligence could build, but in practice when it happens on Earth\n\n\nmy only qualm about this is that current techniques make it possible to buy shallowness in larger quantities than this Earth has ever seen before, and people are looking for surprising ways to make use of that\n\n\nso I weigh in my mind the thought of Reality saying Gotcha! by handing me a headline I read tomorrow about how GPT-4 has started producing totally reasonable science papers that are actually correct\n\n\nand I am pretty sure that exact thing doesn’t happen\n\n\nand I ask myself about GPT-5 in a few more years, which had the same architecture as GPT-3 but more layers and more training, doing the same thing\n\n\nand it’s still largely “nope”\n\n\nthen I ask myself about people in 5 years being able to use the shallow stuff *in any way whatsoever* to produce the science papers\n\n\nand of course the answer there is, “okay, but is it doing that without having shallowly learned stuff *that adds up to deep stuff* which is *why it can now do science*“\n\n\nand I try saying back “no, it was born of shallowness and it remains shallow and it’s just doing science because it turns out that there is totally a way to be an incredibly mentally shallow skillful scientist if you think 10,000 shallow thoughts per minute instead of 1 deep thought per hour”\n\n\nand my brain is like, “I cannot absolutely rule it out but it really seems like trying to call the next big surprise in 2014 and you guess self-driving cars instead of Go because how the heck would you guess that Go was shallower than 
self-driving cars”\n\n\nlike, that is an *imaginable* surprise\n\n\n\n\n\n[Ngo][17:52]\nOn that *particular* point it seems like the very reasonable heuristic of “pick the most similar task” would say that go is like chess and therefore you can do it shallowly.\n\n\n\n\n\n\n[Yudkowsky][17:52]\nbut there’s a world of difference between saying that a surprise is imaginable, and that it wouldn’t surprise you\n\n\n\n\n\n[Ngo][17:52]\nI wasn’t thinking that much about AI at that point, so you’re free to call that post-hoc.\n\n\n\n\n\n\n[Yudkowsky][17:52]\nthe Chess techniques had already failed at Go\n\n\nactual new techniques were required\n\n\nthe people around at the time had witnessed sudden progress on self-driving cars a few years earlier\n\n\n\n\n\n[Ngo][17:53]\nMy advance prediction here is that “math is like go and therefore can be done shallowly”.\n\n\n\n\n\n\n[Yudkowsky][17:53]\nself-driving cars were of obviously greater economic interest as well\n\n\nmy recollection is that talk of the time was about self-driving\n\n\nheh! I have the same sense.\n\n\nthat is, math being shallower than science.\n\n\nthough perhaps not as shallow as Go, and you will note that Go has fallen and Math has not\n\n\n\n\n\n[Ngo][17:54]\nright\n\n\nI also expect that we’ll need new techniques for math (although not as different from the go techniques as the go techniques were from chess techniques)\n\n\nBut I guess we’re not finding strong disagreements here either.\n\n\n\n\n\n\n[Yudkowsky][17:57]\nif Reality came back and was like “Wrong! Keeping up with the far reaches of human mathematics is harder than being able to develop your own nanotech,” I would be like “What?” to about the same degree as being “What?” on “You can build nanotech just by thinking trillions of thoughts that are too shallow to notice humans!”\n\n\n\n\n\n[Ngo][17:58]\nPerhaps let’s table this topic and move on to one of the others Nate suggested? 
I’ll note that walking through the steps required to invent a science of nanotechnology does make your position feel more compelling, but I’m not sure how much of that is the general “intelligence is magic” intuition I mentioned before.\n\n\n\n\n\n\n[Yudkowsky][17:59]\nHow do you suspect your beliefs would shift if you had any detailed model of intelligence?\n\n\nConsider trying to imagine a particular wrong model of intelligence and seeing what it would say differently?\n\n\n(not sure this is a useful exercise and we could indeed try to move on)\n\n\n\n\n\n[Ngo][18:01]\nI think there’s one model of intelligence where scientific discovery is more actively effortful – as in, you need to be very goal-directed in determining hypotheses, testing hypotheses, and so on.\n\n\nAnd there’s another in which scientific discovery is more constrained by flashes of insight, and the systems which are producing those flashes of insight are doing pattern-matching in a way that’s fairly disconnected from the real-world consequences of those insights.\n\n\n\n\n\n\n[Yudkowsky][18:05]\nThe first model is true and the second one is false, if that helps.  You can tell this by contemplating where you would update if you learned any model, by considering that things look more disconnected when you can’t see the machinery behind them.  
If you don’t know what moves the second hand on a watch and the minute hand on a watch, they could just be two things that move at different rates for completely unconnected reasons; if you can see inside the watch, you’ll see that the battery is shared and the central timing mechanism is shared and then there’s a few gears to make the hands move at different rates.\n\n\nLike, in my ontology, the notion of “effortful” doesn’t particularly parse as anything basic, because it doesn’t translate over into paperclip maximizers, which are neither effortful nor effortless.\n\n\nBut in a human scientist you’ve got thoughts being shoved around by all sorts of processes behind the curtains, created by natural selection, some of them reflecting shards of Consequentialism / shadowing paths through time\n\n\nThe flashes of insight come to people who were looking in nonrandom places\n\n\nIf they didn’t plan deliberately and looked on pure intuition, they looked with an intuition trained by past success and failure\n\n\nSomebody walking doesn’t plan to walk, but long ago as a baby they learned from falling over, and their ancestors who fell over more didn’t reproduce\n\n\n\n\n\n[Ngo][18:09]\nI think the first model is probably more true for humans in the domain of science. But I’m uncertain about the extent to which this is because humans have not been optimised very much for doing science. If we consider the second model in a domain that humans have actually been optimised very hard for (say, physical activity) – then maybe we can use the analogy of a coach and a player. The coach can tell the player what to practice, but almost all the work is done by the player practicing in a way which updates their intuitions.\n\n\nThis has become very abstract, though.\n\n\n\n\n\n \n\n\n### 14.4. Pivotal acts and historical precedents\n\n\n \n\n\n\n[Ngo][18:11]\n\n> \n> A few takes:\n> \n> \n> 1. 
It looks to me like there’s some crux in “how useful will the ‘shallow’ stuff get before dangerous things happen”. I would be unsurprised if this spiraled back into the gradualness debate. I’m excited about attempts to get specific and narrow disagreements in this domain (not necessarily bettable; I nominate distilling out specific disagreements before worrying about finding bettable ones).\n> \n> \n> 2. It seems plausible to me we should have some much more concrete discussion about possible ways things could go right, according to Richard. I’d be up for playing the role of beeping when things seem insufficiently concrete.\n> \n> \n> 3. It seems to me like Richard learned a couple things about Eliezer’s model in that last bout of conversation. I’d be interested to see him try to paraphrase his current understanding of it, and to see Eliezer produce beeps where it seems particularly off.\n> \n> \n> \n\n\nHere’s Nate’s comment.\n\n\nWe could try his #2 suggestion: concrete ways that things could go right.\n\n\n\n\n\n\n[Soares][18:12]\n(I am present and am happy to wield the concreteness-hammer)\n\n\n\n\n\n\n[Ngo][18:13]\nI think I’m a little cautious about this line of discussion, because my model doesn’t strongly constrain the ways that different groups respond to increasing developments in AI. The main thing I’m confident about is that there will be much clearer responses available to us once we have a better picture of AI development.\n\n\nE.g. before modern ML, the option of international constraints on compute seemed much less salient, because algorithmic developments seemed much more important.\n\n\nWhereas now, tracking/constraining compute use seems like one promising avenue for influencing AGI development.\n\n\nOr in the case of nukes, before knowing the specific details about how they were constructed, it would be hard to give a picture of how arms control goes well. 
But once you know more details about the process of uranium enrichment, you can construct much more efficacious plans.\n\n\n\n\n\n\n[Yudkowsky][18:19]\nOnce we knew specific things about bioweapons, countries developed specific treaties for controlling them, which failed (according to @CarlShulman)\n\n\n\n\n\n[Ngo][18:19, moved two down in log]\n(As a side note, I think that if Eliezer had been around in the 1930s, and you described to him what actually happened with nukes over the next 80 years, he would have called that “insanely optimistic”.)\n\n\n\n\n\n\n[Yudkowsky][18:21]\nMmmmmmaybe.  Do note that I tend to be more optimistic than the average human about, say, global warming, or everything in transhumanism outside of AGI.\n\n\nNukes have going for them that, in fact, nobody has an incentive to start a global thermonuclear war.  Eliezer is not in fact pessimistic about everything and views his AGI pessimism as generalizing to very few other things, which are not, in fact, as bad as AGI.\n\n\n\n\n\n[Ngo][18:21]\nI think I put this as the lowest application of competent power out of the things listed in my doc; I’d need to look at the historical details to know if important decision-makers actually cared about it, or were just doing it for PR reasons.\n\n\n\n\n\n\n[Shulman][18:22]\n\n> \n> Once we knew specific things about bioweapons, countries developed specific treaties for controlling them, which failed (according to @CarlShulman)\n> \n> \n> \n\n\nThe treaties were pro forma without verification provisions because the powers didn’t care much about bioweapons. 
They did have verification for nuclear and chemical weapons which did work.\n\n\n\n\n\n\n[Yudkowsky][18:22]\nBut yeah, compared to pre-1946 history, nukes actually kind of did go *really surprisingly well!*\n\n\nLike, this planet used to be a huge warring snakepit of Great Powers and Little Powers and then nukes came along and people actually got serious and decided to stop having the largest wars they could fuel.\n\n\n\n\n\n[Shulman][18:22][18:23]\nThe analog would be an international agreement to sign a nice unenforced statement of AI safety principles and then all just building AGI in doomy ways without explicitly saying they’re doing it…\n\n\n\n\n\n\n\nThe BWC also allowed ‘defensive’ research that is basically as bad as the offensive kind.\n\n\n\n\n\n\n[Yudkowsky][18:23]\n\n> \n> The analog would be an international agreement to sign a nice unenforced statement of AI safety principles and then all just building AGI in doomy ways without explicitly saying they’re doing it…\n> \n> \n> \n\n\nThis scenario sure sounds INCREDIBLY PLAUSIBLE, yes\n\n\n\n\n\n[Ngo][18:22]\nOn that point: do either of you have strong opinions about the anthropic shadow argument about nukes? That seems like one reason why the straw 1930s-Eliezer I just cited would have been justified.\n\n\n\n\n\n\n[Yudkowsky][18:23]\nI mostly don’t consider the anthropic shadow stuff\n\n\n\n\n\n[Shulman][18:24]\nIn the late Cold War Gorbachev and Reagan might have done the BWC treaty+verifiable dismantling, but they were in a rush on other issues like nukes and collapse of the USSR.\n\n\nPutin just wants to keep his bioweapons program, it looks like. Even denying the existence of the exposed USSR BW program.\n\n\n\n\n\n\n[Yudkowsky][18:25]\nI’m happy making no appeal to anthropics here.\n\n\n\n\n\n[Shulman][18:25]\nBoo anthropic shadow claims. 
Always dumb.\n\n\n(Sorry I was only invoked for BW, holding my tongue now.)\n\n\n\n\n\n| | |\n| --- | --- |\n| [Yudkowsky: ❤] | [Soares: ❤] |\n\n\n\n\n\n\n\n[Yudkowsky][18:26]\nThere may come a day when the strength of nonanthropic reasoning fails… but that is not this day!\n\n\n\n\n\n[Ngo][18:27]\nOkay, happy to rule that out for now too. So yeah, I picture 1930s-Eliezer pointing to technological trends and being like “by default, 30 years after the first nukes are built, you’ll be able to build one in your back yard. And governments aren’t competent enough to stop that happening.”\n\n\nAnd I don’t think I could have come up with a compelling counterargument back then.\n\n\n\n\n\n\n[Soares][18:27]\n\n> \n> [Sorry I was only invoked for BW, holding my tongue now.]\n> \n> \n> \n\n\n(fwiw, I thought that when Richard asked “you two” re: anthropic shadow, he meant you also. But I appreciate the caution. And in case Richard meant me, I will note that I agree w/ Carl and Eliezer on this count.)\n\n\n\n\n\n\n[Ngo][18:28]\n\n> \n> (fwiw, I thought that when Richard asked “you two” re: anthropic shadow, he meant you also. But I appreciate the caution. And in case Richard meant me, I will note that I agree w/ Carl and Eliezer on this count.)\n> \n> \n> \n\n\nOh yeah, sorry for the ambiguity, I meant Carl.\n\n\nI do believe that AI control will be more difficult than nuclear control, because AI is so much more useful. But I also expect that there will be many more details about AI development that we don’t currently understand, that will allow us to influence it (because AGI is a much more complicated concept than “really really big bomb”).\n\n\n\n\n\n\n[Yudkowsky][18:29]\n\n> \n> [So yeah, I picture 1930s-Eliezer pointing to technological trends and being like “by default, 30 years after the first nukes are built, you’ll be able to build one in your back yard. 
And governments aren’t competent enough to stop that happening.”\n> \n> \n> And I don’t think I could have come up with a compelling counterargument back then.]\n> \n> \n> \n\n\nSo, I mean, in fact, I don’t prophesy doom from very many trends at all!  It’s literally just AGI that is anywhere near that unmanageable!  Many people in EA are more worried about biotech than I am, for example.\n\n\n\n\n\n[Ngo][18:31]\nI appreciate that my response is probably not very satisfactory to you here, so let me try to think about more concrete things we can disagree about.\n\n\n\n\n\n\n[Yudkowsky][18:31]\n\n> \n> [I do believe that AI control will be more difficult than nuclear control, because AI is so much more useful. But I also expect that there will be many more details about AI development that we don’t currently understand, that will allow us to influence it (because AGI is a much more complicated concept than “really really big bomb”).]\n> \n> \n> \n\n\nEr… I think this is not a correct use of the Way I was attempting to gesture at; things being more complicated when known than unknown does not mean you have more handles to influence them because each complication has the potential to be a handle.  
It is not in general true that very complicated things are easier for humanity in general, and governments in particular, to control, because they have so many exposed handles.\n\n\nI think there’s a valid argument about it maybe being more possible to control the supply chain for AI training processors if the global chip supply chain is narrow (also per Carl).\n\n\n\n\n\n[Ngo][18:34]\nOne thing that we seemed to disagree on, to a significant extent, is the difficulty of “US and China preventing any other country from becoming a leader in AI”\n\n\n\n\n\n\n[Yudkowsky][18:35]\nIt is in fact a big deal about nuclear tech that uranium can’t be mined in every country, as I understand it, and that centrifuges stayed at the frontier of technology and were harder to build outside the well-developed countries, and that the world ended up revolving around a few Great Powers that had no interest in nuclear tech proliferating any further.\n\n\n\n\n\n[Ngo][18:35]\nIt seems to me that the US and/or China could apply a lot of pressure to many countries.\n\n\n\n\n\n\n[Yudkowsky][18:35]\nUnfortunately, before you let that encourage you too much, I would also note it was an important fact about nuclear bombs that they did not produce streams of gold and then ignite the atmosphere if you turned up the stream of gold too high with the actual thresholds involved being unpredictable.\n\n\n\n\n\n[Ngo][18:35]\nE.g. if the UK had actually seriously tried to block Google’s acquisition of DeepMind, and the US had actually seriously tried to convince them not to do so, then I expect that the UK would have folded. 
(Although it’s a weird hypothetical.)\n\n\n\n> \n> Unfortunately, before you let that encourage you too much, I would also note it was an important fact about nuclear bombs that they did not produce streams of gold and then ignite the atmosphere if you turned up the stream of gold too high with the actual thresholds involved being unpredictable.\n> \n> \n> \n\n\nNot a critical point, but nuclear power does actually seem like a “stream of gold” in many ways.\n\n\n(also, quick meta note: I need to leave in 10 mins)\n\n\n\n\n\n\n[Yudkowsky][18:38]\nI would be a lot more cheerful about a few Great Powers controlling AGI if AGI produced wealth, but more powerful AGI produced no more wealth; if AGI was made entirely out of hardware, with no software component that could keep getting orders of magnitude more efficient using hardware-independent ideas; and if the button on AGIs that destroyed the world was clearly labeled.\n\n\nThat does take AGI to somewhere in the realm of nukes.\n\n\n\n\n\n[Ngo][18:38]\nHow much improvement do you think can be eked out of existing amounts of hardware if people just try to focus on algorithmic improvements?\n\n\n\n\n\n\n[Yudkowsky][18:38]\nAnd Eliezer is capable of being less concerned about things when they are intrinsically less concerning, which is why my history does not, unlike some others in this field, involve me running around also being Terribly Concerned about nuclear war, global warming, biotech, and killer drones.\n\n\n\n\n\n[Ngo][18:39]\nThis says 44x improvements over 7 years: \n\n\n![](https://images-ext-1.discordapp.net/external/0ZgRbpmbv_D6LHuB59diNlLopn91Ii66xvJmWy8-jTc/https/openai.com/content/images/2020/05/ai-and-efficiency-social.png?width=1200&height=628)\n\n\n\n\n[Yudkowsky][18:39]\nWell, if you’re a superintelligence, you can probably do human-equivalent human-speed general intelligence on a 286, though it might possibly have less fine motor control, or maybe not, I don’t know.\n\n\n\n\n\n[Ngo][18:40]\n(within 
reasonable amounts of human-researcher-time – say, a decade of holding hardware fixed)\n\n\n\n\n\n\n[Yudkowsky][18:40]\nI wouldn’t be surprised if human ingenuity asymptoted out at AGI on a home computer from 1995.\n\n\nDon’t know if it’d take more like a hundred years or a thousand years to get fairly close to that.\n\n\n\n\n\n[Ngo][18:41]\nDoes this view cash out in a prediction about how the AI and Efficiency graph projects into the future?\n\n\n\n\n\n\n[Yudkowsky][18:42]\nThe question of how efficiently you can perform a fixed algorithm doing fixed things, often pales compared to the gains on switching to different algorithms doing different things.\n\n\nGiven government control of all the neural net training chips and no more public GPU farms, I buy that they could keep a nuke!AGI (one that wasn’t tempting to crank up and had clearly labeled Doom-Causing Buttons whose thresholds were common knowledge) under lock of the Great Powers for 7 years, during which software decreased hardware requirements by 44x.  I am a bit worried about how long it takes before there’s a proper paradigm shift on the level of deep learning getting started in 2006, after which the Great Powers need to lock down on individual GPUs.\n\n\n\n\n\n[Ngo][18:46]\nHmm, okay.\n\n\n\n\n\n \n\n\n### 14.5. Past ANN progress\n\n\n \n\n\n\n[Ngo][18:46]\nI don’t expect another paradigm shift like that\n\n\n(in part because I’m not sure the paradigm shift actually happened in the first place – it seems like neural networks were improving pretty continuously over many decades)\n\n\n\n\n\n\n[Yudkowsky][18:47]\nI’ve noticed that opinion around OpenPhil!  It makes sense if you have short timelines and expect the world to end before there’s another paradigm shift, but OpenPhil doesn’t seem to expect that either.\n\n\nYeah, uh, there was kinda a paradigm shift in AI between say 2000 and now.  
There really, really was.\n\n\n\n\n\n[Ngo][18:49]\nWhat I mean is more like: it’s not clear to me that an extrapolation of the trajectory of neural networks is made much better by incorporating data about the other people who weren’t using neural networks.\n\n\n\n\n\n\n[Yudkowsky][18:49]\nWould you believe that at one point Netflix ran a prize contest to produce better predictions of their users’ movie ratings, with a $1 million prize, and this was one of the largest prizes ever in AI and got tons of contemporary ML people interested, and neural nets were not prominent on the solutions list at all, because, back then, people occasionally solved AI problems *not using neural nets*?\n\n\nI suppose that must seem like a fairy tale, as history always does, but I lived it!\n\n\n\n\n\n[Ngo][18:50]\n(I wasn’t denying that neural networks were for a long time marginalised in AI)\n\n\nI’d place much more credence on future revolutions occurring if neural networks had actually only been invented recently.\n\n\n(I have to run in 2 minutes)\n\n\n\n\n\n\n[Yudkowsky][18:51]\nThe world might otherwise end before the next paradigm shift, but if the world keeps on ticking for 10 years, 20 years, there will not always be the paradigm of training massive networks by even more massive amounts of gradient descent; I do not think that is actually the most efficient possible way to turn computation into intelligence.\n\n\nNeural networks stayed stuck at only a few layers for a long time, because the gradients would explode or die out if you made the networks any deeper.\n\n\nThere was a critical moment in 2006(?) where Hinton and Salakhutdinov(?) proposed training Restricted Boltzmann machines unsupervised in layers, and then ‘unrolling’ the RBMs to initialize the weights in the network, and then you could do further gradient descent updates from there, because the activations and gradients wouldn’t explode or die out given that initialization.  
That got people to, I dunno, 6 layers instead of 3 layers or something? But it focused attention *on* the problem of exploding gradients as the reason why deeply layered neural nets never worked, and that kicked off the entire modern field of deep learning, more or less.\n\n\n\n\n\n[Ngo][18:56]\nOkay, so are you claiming that neural networks were mostly bottlenecked by algorithmic improvements, not compute availability, for a significant part of their history?\n\n\n\n\n\n\n[Yudkowsky][18:56]\nIf anybody goes back and draws a graph claiming the whole thing was continuous if you measure the right metric, I am not really very impressed unless somebody at the time was using that particular graph and predicting anything like the right capabilities off of it.\n\n\n\n\n\n[Ngo][18:56]\nIf so this seems like an interesting question to get someone with more knowledge of ML history than me to dig into; I might ask around.\n\n\n\n\n\n\n[Yudkowsky][18:57]\n\n> \n> [Okay, so are you claiming that neural networks were mostly bottlenecked by algorithmic improvements, not compute availability, for a significant part of their history?]\n> \n> \n> \n\n\nEr… yeah?  There was a long time when, even if you threw a big neural network at something, it just wouldn’t work.\n\n\nGood night, btw?\n\n\n\n\n\n[Ngo][18:57]\nLet’s call it here; thanks for the discussion.\n\n\n\n\n\n\n[Soares][18:57]\nThanks, both!\n\n\n\n\n\n\n[Ngo][18:57]\nI’ll be interested to look into that claim, it doesn’t fit with the impressions I have of earlier bottlenecks.\n\n\nI think the next important step is probably for me to come up with some concrete governance plans that I’m excited about.\n\n\nI expect this to take quite a long time\n\n\n\n\n\n\n[Soares][18:58]\nWe can coordinate around that later. 
Sorry for keeping you so late already, Richard.\n\n\n\n\n\n\n[Ngo][18:59]\nNo worries\n\n\nMy proposal would be that we should start on whatever work is necessary to convert the debate into a publicly accessible document now\n\n\nIn some sense coming up with concrete governance plans is my full-time job, but I feel like I’m still quite a way behind in my thinking on this, compared with people who have been thinking about governance specifically for longer\n\n\n\n\n\n\n[Soares][19:01]\n(@RobBensinger is already on it 🙂)\n\n\n\n\n\n| |\n| --- |\n| [Bensinger: ✅] |\n\n\n\n\n\n\n\n[Yudkowsky][19:03]\nNuclear plants might be like narrow AI in this analogy; some designs potentially contribute to proliferation, and you can get more economic wealth by building more of them, but they have no Unlabeled Doom Dial where you can get more and more wealth out of them by cranking them up until at some unlabeled point the atmosphere ignites.\n\n\nAlso a thought: I don’t think you just want somebody with more knowledge of AI history, I think you might need to ask an actual old fogey *who was there at the time*, and hasn’t just learned an ordered history of just the parts of the past that are relevant to the historian’s theory about how the present happened.\n\n\nTwo of them, independently, to see if the answers you get are reliable-as-in-statistical-reliability.\n\n\n\n\n\n[Soares][19:19]\nMy own quick take, for the record, is that it looks to me like there are two big cruxes here.\n\n\nOne is about whether “deep generality” is a good concept, and in particular whether it pushes AI systems quickly from “nonscary” to “scary” and whether we should expect human-built AI systems to acquire it in practice (before the acute risk period is ended by systems that lack it). 
The other is about how easy it will be to end the acute risk period (eg by use of politics or nonscary AI systems alone).\n\n\nI suspect the latter is the one that blocks on Richard thinking about governance strategies. I’d be interested in attempting further progress on the former point, though it’s plausible to me that that should happen over in #timelines instead of here.\n\n\n\n\n\n \n\n\n\nThe post [Ngo and Yudkowsky on scientific reasoning and pivotal acts](https://intelligence.org/2022/03/01/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-03-02T03:30:40Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d598b486fda6b418b1e0c2ed8cc40499", "title": "Christiano and Yudkowsky on AI predictions and human intelligence", "url": "https://intelligence.org/2022/03/01/christiano-and-yudkowsky-on-ai-predictions-and-human-intelligence/", "source": "miri", "source_type": "blog", "text": "This is a transcript of a conversation between Paul Christiano and Eliezer Yudkowsky, with comments by Rohin Shah, Beth Barnes, Richard Ngo, and Holden Karnofsky, continuing the [Late 2021 MIRI Conversations](https://intelligence.org/late-2021-miri-conversations/).\n\n\nColor key:\n\n\n\n\n\n| | |\n| --- | --- |\n|  Chat by Paul and Eliezer  |  Other chat  |\n\n\n\n \n\n\n15. October 19 comment\n----------------------\n\n\n \n\n\n\n[Yudkowsky][11:01]\nthing that struck me as an iota of evidence for Paul over Eliezer:\n\n\n \n\n\n![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/bccc3fcd315a12be417c814fe75a6c761049a9225425cc67.png)\n\n\n\n\n \n\n\n16. November 3 conversation\n---------------------------\n\n\n \n\n\n### 16.1. 
EfficientZero\n\n\n \n\n\n\n[Yudkowsky][9:30]\nThing that (if true) strikes me as… straight-up falsifying Paul’s view as applied to modern-day AI, at the frontier of the most AGI-ish part of it and where Deepmind put in substantial effort on their project?  EfficientZero (allegedly) learns Atari in 100,000 frames.  Caveat: I’m not having an easy time figuring out how many frames MuZero would’ve required to achieve the same performance level.  MuZero was trained on 200,000,000 frames but reached what looks like an allegedly higher high; the EfficientZero paper compares their performance to MuZero on 100,000 frames, and claims theirs is much better than MuZero given only that many frames.\n\n\n  CC: @paulfchristiano.\n\n\n(I would further argue that this case is important because it’s about the central contemporary model for approaching AGI, at least according to Eliezer, rather than any number of random peripheral AI tasks.)\n\n\n\n\n\n[Shah][14:46]\nI only looked at the front page, so might be misunderstanding, but the front figure says “Our proposed method EfficientZero is 170% and 180% better than the previous SoTA performance in mean and median human normalized score […] on the Atari 100k benchmark”, which does not seem like a huge leap?\n\n\nOh, I incorrectly thought that was 1.7x and 1.8x, but it is actually 2.7x and 2.8x, which is a bigger deal (though still feels not crazy to me)\n\n\n\n\n\n\n[Yudkowsky][15:28]\nthe question imo is how many frames the previous SoTA would require to catch up to EfficientZero\n\n\n(I’ve tried emailing an author to ask about this, no response yet)\n\n\nlike, perplexity on GPT-3 vs GPT-2 and “losses decreased by blah%” would give you a pretty meaningless concept of how far ahead GPT-3 was from GPT-2, and I think the “2.8x performance” figure in terms of scoring is equally meaningless as a metric of how much EfficientZero improves if any\n\n\nwhat you want is a notion like “previous SoTA would have required 10x the samples” or 
“previous SoTA would have required 5x the computation” to achieve that performance level\n\n\n\n\n\n[Shah][15:38]\nI see. Atari curves are not nearly as nice and stable as GPT curves and often have the problem that they plateau rather than making steady progress with more training time, so that will make these metrics noisier, but it does seem like a reasonable metric to track\n\n\n(Not that I have recommendations about how to track it; I doubt the authors can easily get these metrics)\n\n\n\n\n\n\n[Christiano][18:01]\nIf you think our views are making such starkly different predictions then I’d be happy to actually state any of them in advance, including e.g. about future ML benchmark results.\n\n\nI don’t think this falsifies my view, and we could continue trying to hash out what my view is but it seems like slow going and I’m inclined to give up.\n\n\nRelevant questions on my view are things like: is MuZero optimized at all for performance in the tiny-sample regime? (I think not, I don’t even think it set SoTA on that task and I haven’t seen any evidence.) What’s the actual rate of improvements since people started studying this benchmark ~2 years ago, and how much work has gone into it? And I totally agree with your comments that “# of frames” is the natural unit for measuring and that would be the starting point for any discussion.\n\n\n\n\n\n\n[Barnes][18:22]\n\n> \n> In previous MCTS RL algorithms, the environment model is either given or only trained with rewards, values, and policies, which cannot provide sufficient training signals due to their scalar nature. The problem is more severe when the reward is sparse or the bootstrapped value is not accurate. The MCTS policy improvement operator heavily relies on the environment model. Thus, it is vital to have an accurate one.\n> \n> \n> We notice that the output ŝₜ₊₁ from the dynamic function G should be the same as sₜ₊₁, i.e. 
the output of the representation function H with input of the next observation oₜ₊₁ (Fig. 2). This can help to supervise the predicted next state ŝₜ₊₁ using the actual sₜ₊₁, which is a tensor with at least a few hundred dimensions. This provides ŝₜ₊₁ with much more training signals than the default scalar reward and value.\n> \n> \n> \n\n\nThis seems like a super obvious thing to do and I’m confused why DM didn’t already try this. It was definitely being talked about in ~2018\n\n\nWill ask a DM friend about it\n\n\n\n\n\n\n[Yudkowsky][22:45]\nI… don’t think I want to take *all* of the blame for misunderstanding Paul’s views; I think I also want to complain at least a little that Paul spends an insufficient quantity of time pointing at extremely concrete specific possibilities, especially real ones, and saying how they do or don’t fit into the scheme.\n\n\nAm I rephrasing correctly that, in this case, if EfficientZero was actually a huge (3x? 5x? 10x?) jump in RL sample efficiency over previous SOTA, measured in 1 / frames required to train to a performance level, then that means the Paul view *doesn’t* apply to the present world; but this could be because MuZero wasn’t the real previous SOTA, or maybe because nobody really worked on pushing out this benchmark for 2 years and therefore on the Paul view it’s fine for there to still be huge jumps?  
In other words, this is something Paul’s worldview has to either defy or excuse, and not just, “well, sure, why wouldn’t it do that, you have misunderstood which kinds of AI-related events Paul is even trying to talk about”?\n\n\nIn the case where, “yes it’s a big jump and that shouldn’t happen later, but it could happen now because it turned out nobody worked hard on pushing past MuZero over the last 2 years”, I wish to register that my view permits it to be the case that, when the world begins to end, the frontier that enters into AGI is similarly something that not a lot of people spent a huge effort on since a previous prototype from 2 years earlier.  It’s just not very surprising to me if the future looks a lot like the past, or if human civilization neglects to invest a ton of effort in a research frontier.\n\n\nGwern guesses that getting to EfficientZero’s performance level would require around 4x the samples for MuZero-Reanalyze (the more efficient version of MuZero which replayed past frames), which is also apparently the only version of MuZero the paper’s authors were considering in the first place – without replays, MuZero requires 20 billion frames to achieve its performance, not the figure of 200 million. \n\n\n\n\n \n\n\n17. November 4 conversation\n---------------------------\n\n\n \n\n\n### 17.1. EfficientZero (continued)\n\n\n \n\n\n\n[Christiano][7:42]\nI think it’s possible the biggest misunderstanding is that you somehow think of my view as a “scheme” and your view as a normal view where probability distributions over things happen.\n\n\nConcretely, this is a paper that adds a few techniques to improve over MuZero in a domain that (it appears) wasn’t a significant focus of MuZero. 
I don’t know how much it improves but I can believe gwern’s estimates of 4x.\n\n\nI’d guess MuZero itself is a 2x improvement over the baseline from a year ago, which was maybe a 4x improvement over the algorithm from a year before that.\n\n\nIf that’s right, then no it’s not mindblowing on my view to have 4x progress one year, 2x progress the next, and 4x progress the next.\n\n\nIf other algorithms were better than MuZero, then the 2019-2020 progress would be >2x and the 2020-2021 progress would be <4x.\n\n\nI think it’s probably >4x sample efficiency though (I don’t totally buy gwern’s estimate there), which makes it at least possibly surprising.\n\n\nBut it’s never going to be that surprising. It’s a benchmark that people have been working on for a few years that has been seeing relatively rapid improvement over that whole period.\n\n\nThe main innovation is how quickly you can learn to predict future frames of Atari games, which has tiny economic relevance and calling it the most AGI-ish direction seems like it’s a very Eliezer-ish view, this isn’t the kind of domain where I’m either most surprised to see rapid progress at all nor is the kind of thing that seems like a key update re: transformative AI\n\n\nyeah, SoTA in late 2020 was SPR, published by a much smaller academic group: \n\n\nMuZero wasn’t even setting sota on this task at the time it was published\n\n\nmy “schemes” are that (i) if a bunch of people are trying on a domain and making steady slow progress, I’m surprised to see giant jumps and I don’t expect most absolute progress to occur in such jumps, (ii) if a domain is worth a lot of $, generally a bunch of people will be trying. Those aren’t claims about what is always true, they are claims about what is typically true and hence what I’m guessing will be true for transformative AI.\n\n\nMaybe you think those things aren’t even good general predictions, and that I don’t have long enough tails in my distributions or whatever. 
But in that case it seems we can settle it quickly by prediction.\n\n\nI think this result is probably significant (>30% absolute improvement) + faster-than-trend (>50% faster than previous increment) progress relative to prior trend on 8 of the 27 atari games (from table 1, treating SimPL->{max of MuZero, SPR}->EfficientZero as 3 equally spaced datapoints): Asterix, Breakout, almost ChopperCMD, almost CrazyClimber, Gopher, Kung Fu Master, Pong, QBert, SeaQuest. My guess is that they thought a lot about a few of those games in particular because they are very influential on the mean/median. Note that this paper is a giant grab bag and that simply stapling together the prior methods would have already been a significant improvement over prior SoTA. (ETA: I don’t think saying “its only 8 of 27 games” is an update against it being big progress or anything. I do think saying “stapling together 2 previous methods without any complementarity at all would already have significantly beaten SoTA” is fairly good evidence that it’s not a hard-to-beat SoTA.)\n\n\nand even fewer people working on the ultra-low-sample extremely-low-dimensional DM control environments (this is the subset of problems where the state space is 4 dimensions, people are just not trying to publish great results on cartpole), so I think the most surprising contribution is the atari stuff\n\n\nOK, I now also understand what the result is I think?\n\n\nI think the quick summary is: the prior SoTA is SPR, which learns to predict the domain and then does Q-learning. MuZero instead learns to predict the domain and does MCTS, but it predicts the domain in a slightly less sophisticated way than SPR (basically just predicts rewards, whereas SPR predicts all of the agent’s latent state in order to get more signal from each frame). 
If you combine MCTS with more sophisticated prediction, you do better.\n\n\nI think if you told me that DeepMind put in significant effort in 2020 (say, at least as much post-MuZero effort as the new paper?) trying to get great sample efficiency on the easy-exploration atari games, and failed to make significant progress, then I’m surprised.\n\n\nI don’t think that would “falsify” my view, but it would be an update against? Like maybe if DM put in that much effort I’d maybe have given only a 10-20% probability to a new project of similar size putting in that much effort making big progress, and even conditioned on big progress this is still >>median (ETA: and if DeepMind put in much more effort I’d be more surprised than 10-20% by big progress from the new project)\n\n\nWithout DM putting in much effort, it’s significantly less surprising and I’ll instead be comparing to the other academic efforts. But it’s just not surprising that you can beat them if you are willing to put in the effort to reimplement MCTS and they aren’t, and that’s a step that is straightforwardly going to improve performance.\n\n\n(not sure if that’s the situation)\n\n\nAnd then to see how significant updates against are, you have to actually contrast them with all the updates in the other direction where people *don’t* crush previous benchmark results\n\n\nand instead just make modest progress\n\n\nI would guess that if you had talked to an academic about this question (what happens if you combine SPR+MCTS) they would have predicted significant wins in sample efficiency (at the expense of compute efficiency) and cited the difficulty of implementing MuZero compared to any of the academic results. That’s another way I could be somewhat surprised (or if there were academics with MuZero-quality MCTS implementations working on this problem, and they somehow didn’t set SoTA, then I’m even more surprised). 
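The combination Christiano summarizes (MuZero-style planning plus SPR-style prediction) can be caricatured in a few lines. This is a toy sketch under my own illustrative assumptions, not the EfficientZero implementation: a latent dynamics model is trained on a reward-prediction loss (the MuZero-style signal) plus a latent self-prediction "consistency" loss (the SPR-style signal, which gives dense training signal from every frame); the learned model would then be used for MCTS planning.

```python
import numpy as np

# Toy sketch (illustrative assumption, not the EfficientZero code):
# a latent dynamics model trained with a MuZero-style reward loss
# plus an SPR-style self-prediction ("consistency") loss.
rng = np.random.default_rng(0)
D = 8                                    # latent dimension

W_dyn = 0.1 * rng.normal(size=(D, D))    # latent transition model
w_rew = 0.1 * rng.normal(size=D)         # reward head

def model_losses(z_t, z_next_target, r_t):
    z_pred = np.tanh(W_dyn @ z_t)                  # predicted next latent
    reward_loss = float((w_rew @ z_pred - r_t) ** 2)   # MuZero-style signal
    # SPR-style consistency: match the encoding of the real next
    # observation, so every frame contributes signal even when
    # rewards are sparse.
    consistency_loss = float(np.sum((z_pred - z_next_target) ** 2))
    return reward_loss, consistency_loss

z, z_next = rng.normal(size=D), rng.normal(size=D)
r_loss, c_loss = model_losses(z, z_next, r_t=1.0)
print(r_loss >= 0.0, c_loss > 0.0)
```

The point of the sketch is just that the two losses are additive and independent, which is why "stapling the methods together" plausibly works without any special complementarity.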
But I’m not sure if you’ll trust any of those judgments in hindsight.\n\n\n**Repeating the main point:**\n\n\nI don’t really think a 4x jump over 1 year is something I have to “defy or excuse”, it’s something that I think becomes more or less likely depending on facts about the world, like (i) how fast was previous progress, (ii) how many people were working on previous projects and how targeted were they at this metric, (iii) how many people are working on this project and how targeted was it at this metric\n\n\nit becomes continuously less likely as those parameters move in the obvious directions\n\n\nit never becomes 0 probability, and you just can’t win that much by citing isolated events that I’d give say a 10% probability to, unless you actually say something about how you are giving >10% probabilities to those events without losing a bunch of probability mass on what I see as the 90% of boring stuff\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\nand then separately I have a view about lots of people working on important problems, which doesn’t say anything about this case\n\n\n(I actually don’t think this event is as low as 10%, though it depends on what background facts about the project you are conditioning on—obviously I gave <<10% probability to someone publishing this particular result, but something like “what fraction of progress in this field would come down to jumps like this” or whatever is probably >10% until you tell me that DeepMind actually cared enough to have already tried)\n\n\n\n\n\n\n[Ngo][8:48]\nI expect Eliezer to say something like: DeepMind believes that both improving RL sample efficiency, and benchmarking progress on games like Atari, are important parts of the path towards AGI. So insofar as your model predicts that smooth progress will be caused by people working directly towards AGI, DeepMind not putting effort into this is a hit to that model. 
Thoughts?\n\n\n\n\n\n\n[Christiano][9:06]\nI don’t think that learning these Atari games in 2 hours is a very interesting benchmark even for deep RL sample efficiency, and it’s totally unrelated to the way in which humans learn such games quickly. It seems ~~pretty likely~~ totally plausible (50%?) to me that DeepMind feels the same way, and then the question is about other random considerations like how they are making some PR calculation.\n\n\n\n\n\n\n[Ngo][9:18]\nIf Atari is not a very interesting benchmark, then why did DeepMind put a bunch of effort into making Agent57 and applying MuZero to Atari?\n\n\nAlso, most of the effort they’ve spent on games in general has been on methods very unlike the way humans learn those games, so that doesn’t seem like a likely reason for them to overlook these methods for increasing sample efficiency.\n\n\n\n\n\n\n[Shah][9:32]\n\n> \n> It seems pretty likely totally plausible (50%?) to me that DeepMind feels the same way, and then the question is about other random considerations like how they are making some PR calculation.\n> \n> \n> \n\n\nNot sure of the exact claim, but DeepMind is big enough and diverse enough that I’m pretty confident at least some people working on relevant problems don’t feel the same way\n\n\n\n> \n> […] This seems like a super obvious thing to do and I’m confused why DM didn’t already try this. It was definitely being talked about in ~2018\n> \n> \n> \n\n\nSpeculating without my DM hat on: maybe it kills performance in board games, and they want one algorithm for all settings?\n\n\n\n\n\n\n[Christiano][10:29]\nAtari games in the tiny sample regime are a different beast\n\n\nthere are just a lot of problems you can state about Atari some of which are more or less interesting (e.g. 
jointly learning to play 57 Atari games is a more interesting problem than learning how to play one of them absurdly quickly, and there are like 10 other problems about Atari that are more interesting than this one)\n\n\nThat said, Agent57 also doesn’t seem interesting except that it’s an old task people kind of care about. I don’t know about the take within DeepMind but outside I don’t think anyone would care about it other than historical significance of the benchmark / obviously-not-cherrypickedness of the problem.\n\n\nI’m sure that some people at DeepMind care about getting the super low sample complexity regime. I don’t think that really tells you how large the DeepMind effort is compared to some random academics who care about it.\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\nI think the argument for working on deep RL is fine and can be based on an analogy with humans while you aren’t good at the task. Then once you are aiming for crazy superhuman performance on Atari games you naturally start asking “what are we doing here and why are we still working on atari games?”\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\nand correspondingly they are a smaller and smaller slice of DeepMind’s work over time\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n(e.g. Agent57 and MuZero are the only DeepMind blog posts about Atari in the last 4 years, it’s not the main focus of MuZero and I don’t think Agent57 is a very big DM project)\n\n\nReaching this level of performance in Atari games is largely about learning perception, and doing that from 100k frames of an Atari game just doesn’t seem very analogous to anything humans do or that is economically relevant from any perspective. 
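For scale, the "2 hours" and "100k frames" figures mentioned above describe the same data budget, assuming the standard Atari-100k convention where "100k frames" means 100,000 agent actions with frame-skip 4 on a 60 fps emulator:

```python
# Rough scale check (assumption: the standard "Atari 100k" protocol of
# 100,000 agent actions with frame-skip 4 on a 60 fps emulator).
agent_steps = 100_000
frame_skip = 4
emulator_fps = 60

hours = agent_steps * frame_skip / emulator_fps / 3600
print(round(hours, 2))  # 1.85, i.e. roughly the "2 hours" figure
```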
I totally agree some people are into it, but I’m totally not surprised if it’s not going to be a big DeepMind project.\n\n\n\n\n\n\n[Yudkowsky][10:51]\nwould you agree it’s a load-bearing assumption of your worldview – where I also freely admit to having a worldview/scheme, this is not meant to be a prejudicial term at all – that the line of research which leads into world-shaking AGI must be in the mainstream and not in a weird corner where a few months earlier there were more profitable other ways of doing all the things that weird corner did? \n\n\neg, the tech line leading into world-shaking AGI must be at the profitable forefront of non-world-shaking tasks.  as otherwise, afaict, your worldview permits that if counterfactually we were in the Paul-forbidden case where the immediate precursor to AGI was something like EfficientZero (whose motivation had been beating an old SOTA metric rather than, say, market-beating self-driving cars), there might be huge capability leaps there just as EfficientZero represents a large leap, because there wouldn’t have been tons of investment in that line.\n\n\n\n\n\n[Christiano][10:54]\nSomething like that is definitely a load-bearing assumption\n\n\nLike there’s a spectrum with e.g. EfficientZero –> 2016 language modeling –> 2014 computer vision –> 2021 language modeling –> 2021 computer vision, and I think everything anywhere close to transformative AI will be way way off the right end of that spectrum\n\n\nBut I think quantitatively the things you are saying don’t seem quite right to me. Suppose that MuZero wasn’t the best way to do anything economically relevant, but it was within a factor of 4 on sample efficiency for doing tasks that people care about. 
That’s already going to be enough to make tons of people extremely excited.\n\n\nSo yes, I’m saying that anything leading to transformative AI is “in the mainstream” in the sense that it has more work on it than 2021 language models.\n\n\nBut not necessarily that it’s the most profitable way to do anything that people care about. Different methods scale in different ways, and something can burst onto the scene in a dramatic way, but I strongly expect speculative investment driven by that possibility to already be way (way) more than 2021 language models. And I don’t expect gigantic surprises. And I’m willing to bet that e.g. EfficientZero isn’t a big surprise for researchers who are paying attention to the area (*in addition* to being 3+ orders of magnitude more neglected than anything close to transformative AI)\n\n\n2021 language modeling isn’t even very competitive, it’s still like 3-4 orders of magnitude smaller than semiconductors. But I’m giving it as a reference point since it’s obviously much, much more competitive than sample-efficient atari.\n\n\nThis is a place where I’m making much more confident predictions, this is “falsify paul’s worldview” territory once you get to quantitative claims anywhere close to TAI and “even a single example seriously challenges paul’s worldview” a few orders of magnitude short of that\n\n\n\n\n\n\n[Yudkowsky][11:04]\ncan you say more about what falsifies your worldview previous to TAI being super-obviously-to-all-EAs imminent?\n\n\nor rather, “seriously challenges”, sorry\n\n\n\n\n\n[Christiano][11:05][11:08]\nbig AI applications achieved by clever insights in domains that aren’t crowded, we should be quantitative about how crowded and how big if we want to get into “seriously challenges”\n\n\nlike e.g. 
if this paper on atari was actually a crucial ingredient for making deep RL for robotics work, I’d be actually for real surprised rather than 10% surprised\n\n\nbut it’s not going to be, those results are being worked on by much larger teams of more competent researchers at labs with $100M+ funding\n\n\nit’s definitely possible for them to get crushed by something out of left field\n\n\nbut I’m betting against every time\n\n\n\n\n\n\n\nor like, the set of things people would describe as “out of left field,” and the quantitative degree of neglectedness, becomes more and more mild as the stakes go up\n\n\n\n\n\n\n[Yudkowsky][11:08]\nhow surprised are you if in 2022 one company comes out with really good ML translation, and they manage to sell a bunch of it temporarily until others steal their ideas or Google acquires them?  my model of Paul is unclear on whether this constitutes “many people are already working on language models including ML translation” versus “this field is not profitable enough right this minute for things to be efficient there, and it’s allowed to be nonobvious in worlds where it’s about to become profitable”.\n\n\n\n\n\n[Christiano][11:08]\nif I wanted to make a prediction about that I’d learn a bunch about how much google works on translation and how much $ they make\n\n\nI just don’t know the economics\n\n\nand it depends on the kind of translation that they are good at and the economics (e.g. 
google mostly does extremely high-volume very cheap translation)\n\n\nbut I think there are lots of things like that / facts I could learn about Google such that I’d be surprised in that situation\n\n\nindependent of the economics, I do think a fair number of people are working on adjacent stuff, and I don’t expect someone to come out of left field for google-translate-cost translation between high-resource languages\n\n\nbut it seems quite plausible that a team of 10 competent people could significantly outperform google translate, and I’d need to learn about the economics to know how surprised I am by 10 people or 100 people or what\n\n\nI think it’s allowed to be non-obvious whether a domain is about to be really profitable\n\n\nbut it’s not that easy, and the higher the stakes the more speculative investment it will drive, etc.\n\n\n\n\n\n\n[Yudkowsky][11:14]\nif you don’t update much off EfficientZero, then people also shouldn’t be updating much off of most of the graph I posted earlier as possible Paul-favoring evidence, because most of those SOTAs weren’t highly profitable so your worldview didn’t have much to say about them. ?\n\n\n\n\n\n[Christiano][11:15]\nMost things people work a lot on improve gradually. EfficientZero is also quite gradual compared to the crazy TAI stories you tell. I don’t really know what to say about this game other than I would prefer make predictions in advance and I’m happy to either propose questions/domains or make predictions in whatever space you feel more comfortable with.\n\n\n\n\n\n\n[Yudkowsky][11:16]\nI don’t know how to point at a future event that you’d have strong opinions about.  it feels like, whenever I try, I get told that the current world is too unlike the future conditions you expect.\n\n\n\n\n\n[Christiano][11:16]\nLike, whether or not EfficientZero is evidence for your view depends on exactly how “who knows what will happen” you are. 
if you are just a bit more spread out than I am, then it’s definitely evidence for your view.\n\n\nI’m saying that I’m willing to bet about *any event you want to name*, I just think my model of how things work is more accurate.\n\n\nI’d prefer it be related to ML or AI.\n\n\n\n\n\n\n[Yudkowsky][11:17]\nto be clear, I appreciate that it’s similarly hard to point at an event like that for myself, because my own worldview says “well mostly the future is not all that predictable with a few rare exceptions”\n\n\n\n\n\n[Christiano][11:17]\nBut I feel like the situation is not at all symmetrical, I expect to outperform you on practically any category of predictions we can specify.\n\n\nso like I’m happy to bet about benchmark progress in LMs, or about whether DM or OpenAI or Google or Microsoft will be the first to achieve something, or about progress in computer vision, or about progress in industrial robotics, or about translations\n\n\nwhatever\n\n\n\n\n\n \n\n\n### 17.2. Near-term AI predictions\n\n\n \n\n\n\n[Yudkowsky][11:18]\nthat sounds like you ought to have, like, a full-blown storyline about the future?\n\n\n\n\n\n[Christiano][11:18]\nwhat is a full-blown storyline? I have a bunch of ways that I think about the world and make predictions about what is likely\n\n\nand yes, I can use those ways of thinking to make predictions about whatever\n\n\nand I will very often lose to a domain expert who has better and more informed ways of making predictions\n\n\n\n\n\n\n[Yudkowsky][11:19]\nwhat happens if 2022 through 2024 looks literally exactly like Paul’s modal or median predictions on things?\n\n\n\n\n\n[Christiano][11:19]\nbut I think in ML I will generally beat e.g. 
a superforecaster who doesn’t have a lot of experience in the area\n\n\ngive me a question about 2024 and I’ll give you a median?\n\n\nI don’t know what “what happens” means\n\n\nstorylines do not seem like good ways of making predictions\n\n\n\n\n\n| |\n| --- |\n| [Shah: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][11:20]\nI mean, this isn’t a crux for anything, but it seems like you’re asking me to give up on that and just ask for predictions?  so in 2024 can I hire an artist who doesn’t speak English and converse with them almost seamlessly through a machine translator?\n\n\n\n\n\n[Christiano][11:22]\nmedian outcome (all of these are going to be somewhat easy-to-beat predictions because I’m not thinking): you can get good real-time translations, they are about as good as a +1 stdev bilingual speaker who listens to what you said and then writes it out in the other language as fast as they can type\n\n\nProbably also for voice -> text or voice -> voice, though higher latencies and costs.\n\n\nNot integrated into standard video chatting experience because the UX is too much of a pain and the world sucks.\n\n\nThat’s a median on “how cool/useful is translation”\n\n\n\n\n\n\n[Yudkowsky][11:23]\nI would unfortunately also predict that in this case, this will be a highly competitive market and hence not a very profitable one, which I predict to match your prediction, but I ask about the economics here just in case.\n\n\n\n\n\n[Christiano][11:24]\nKind of typical sample: I’d guess that Google has a reasonably large lead, most translation still provided as a free value-added, cost per translation at that level of quality is like $0.01/word, total revenue in the area is like $10Ms / year?\n\n\n\n\n\n\n[Yudkowsky][11:24]\nwell, my model also permits that Google does it for free and so it’s an uncompetitive market but not a profitable one… ninjaed.\n\n\n\n\n\n[Christiano][11:25]\nfirst order of improving would be sanity-checking economics and thinking about #s, second would be learning 
things like “how many people actually work on translation and what is the state of the field?”\n\n\n\n\n\n\n[Yudkowsky][11:26]\ndid Tesla crack self-driving cars and become a $3T company instead of a $1T company?  do you own Tesla options?\n\n\ndid Waymo beat Tesla and cause Tesla stock to crater, same question?\n\n\n\n\n\n[Christiano][11:27]\n1/3 chance tesla has FSD in 2024\n\n\nconditioned on that, yeah probably market cap is >$3T?\n\n\nconditioned on Tesla having FSD, 2/3 chance Waymo has also at least rolled out to a lot of cities\n\n\nconditioned on no tesla FSD, 10% chance Waymo has rolled out to like half of big US cities?\n\n\ndunno if numbers make sense\n\n\n\n\n\n\n[Yudkowsky][11:28]\nthat’s okay, I dunno if my questions make sense\n\n\n\n\n\n[Christiano][11:29]\n(5% NW in tesla, 90% NW in AI bets, 100% NW in more normal investments; no tesla options that sounds like a scary place with lottery ticket biases and the crazy tesla investors)\n\n\n\n\n\n\n[Yudkowsky][11:30]\n(am I correctly understanding you’re 2x levered?)\n\n\n\n\n\n[Christiano][11:30][11:31]\nyeah\n\n\nit feels like you’ve got to have weird views on trajectory of value-added from AI over the coming years\n\n\non how much of the $ comes from domains that are currently exciting to people (e.g. that Google already works on, self-driving, industrial robotics) vs stuff out of left field\n\n\non what kind of algorithms deliver $ in those domains (e.g. 
are logistics robots trained using the same techniques tons of people are currently pushing on)\n\n\non my picture you shouldn’t be getting big losses on any of those\n\n\n\n\n\n\n\njust losing like 10-20% each time\n\n\n\n\n\n\n[Yudkowsky][11:31][11:32]\nmy uncorrected inside view says that machine translation should be in reach and generate huge amounts of economic value even if it ends up an unprofitable competitive or Google-freebie field\n\n\n\n\n\n\nand also that not many people are working on basic research in machine translation or see it as a “currently exciting” domain\n\n\n\n\n\n[Christiano][11:32]\nhow many FTE is “not that many” people?\n\n\nalso are you expecting improvement in the google translate style product, or in lower-latencies for something closer to normal human translator prices, or something else?\n\n\n\n\n\n\n[Yudkowsky][11:33]\nmy worldview says more like… sure, maybe there’s 300 programmers working on it worldwide, but most of them aren’t aggressively pursuing new ideas and trying to explore the space, they’re just applying existing techniques to a new language or trying to throw on some tiny mod that lets them beat SOTA by 1.2% for a publication\n\n\nbecause it’s not an *exciting* field\n\n\n“What if you could rip down the language barriers” is an economist’s dream, or a humanist’s dream, and Silicon Valley is neither\n\n\nand looking at GPT-3 and saying, “God damn it, this really seems like it must on some level *understand* what it’s reading well enough that the same learned knowledge would suffice to do really good machine translation, this must be within reach for gradient descent technology we just don’t know how to reach it” is Yudkowskian thinking; your AI system has internal parts like “how much it understands language” and there’s thoughts about what those parts ought to be able to do if you could get them into a new system with some other parts\n\n\n\n\n\n[Christiano][11:36]\nmy guess is we’d have some disagreements 
here\n\n\nbut to be clear, you are talking about text-to-text at like $0.01/word price point?\n\n\n\n\n\n\n[Yudkowsky][11:38]\nI mean, do we?  Unfortunately another Yudkowskian worldview says “and people can go on failing to notice this for arbitrarily long amounts of time”.\n\n\nif that’s around GPT-3’s price point then yeah\n\n\n\n\n\n[Christiano][11:38]\ngpt-3 is a lot cheaper, happy to say gpt-3 like price point\n\n\n\n\n\n\n[Yudkowsky][11:39]\n(thinking about whether $0.01/word is meaningfully different from $0.001/word and concluding that it is)\n\n\n\n\n\n[Christiano][11:39]\n(api is like 10,000 words / $)\n\n\nI expect you to have a broader distribution over who makes a great product in this space, how great it ends up being etc., whereas I’m going to have somewhat higher probabilities on it being google research and it’s going to look boring\n\n\n\n\n\n\n[Yudkowsky][11:40]\nwhat is boring?\n\n\nboring predictions are often good predictions on my own worldview too\n\n\nlots of my gloom is about things that are boringly bad and awful\n\n\n(and which add up to instant death at a later point)\n\n\nbut, I mean, what does boring machine translation look like?\n\n\n\n\n\n[Christiano][11:42]\nTrain big language model. Have lots of auxiliary tasks especially involving reading in source language and generation in target language. Have pre-training on aligned sentences and perhaps using all the unsupervised translation we have depending on how high-resource language is. Fine-tune with smaller amount of higher quality supervision.\n\n\nSome of the steps likely don’t add much value and skip them. Fair amount of non-ML infrastructure.\n\n\nFor some languages/domains/etc. dedicated models, over time increasingly just have a giant model with learned dispatch as in mixture of experts.\n\n\n\n\n\n\n[Yudkowsky][11:44]\nbut your worldview is also totally ok with there being a Clever Trick added to that which produces a 2x reduction in training time.  
or with there being a new innovation like transformers, which was developed a year earlier and which everybody now uses, without which the translator wouldn’t work at all. ?\n\n\n\n\n\n[Christiano][11:44]\nJust for reference, I think transformers aren’t that visible on a (translation quality) vs (time) graph?\n\n\nBut yes, I’m totally fine with continuing architectural improvements, and 2x reduction in training time is currently par for the course for “some people at google thought about architectures for a while” and I expect that to not get that much tighter over the next few years.\n\n\n\n\n\n\n[Yudkowsky][11:45]\nunrolling Restricted Boltzmann Machines to produce deeper trainable networks probably wasn’t much visible on a graph either, but good luck duplicating modern results using only lower portions of the tech tree.  (I don’t think we disagree about this.)\n\n\n\n\n\n[Christiano][11:45]\nI do expect it to eventually get tighter, but not by 2024.\n\n\nI don’t think unrolling restricted boltzmann machines is that important\n\n\n\n\n\n\n[Yudkowsky][11:46]\nlike, historically, or as a modern technology?\n\n\n\n\n\n[Christiano][11:46]\nhistorically\n\n\n\n\n\n\n[Yudkowsky][11:46]\ninteresting\n\n\nmy model is that it got people thinking about “what makes things trainable” and led into ReLUs and inits\n\n\nbut I am going more off having watched from the periphery as it happened, than having read a detailed history of that\n\n\nlike, people asking, “ah, but what if we had a deeper network and the gradients *didn’t* explode or die out?” and doing that en masse in a productive way rather than individuals being wistful for 30 seconds\n\n\n\n\n\n[Christiano][11:48]\nwell, not sure if this will introduce differences in predictions\n\n\nI don’t feel like it should really matter for our bottom line predictions whether we classify google’s random architectural change as something fundamentally new (which happens to just have a modest effect at the time that it’s built) or 
as something boring\n\n\nI’m going to guess how well things will work by looking at how well things work right now and seeing how fast it’s getting better\n\n\nand that’s also what I’m going to do for applications of AI with transformative impacts\n\n\nand I actually believe you will do something today that’s analogous to what you would do in the future, and in fact will make somewhat different predictions than what I would do\n\n\nand then some of the action will be in new things that people haven’t been trying to do in the past, and I’m predicting that new things will be “small” whereas you have a broader distribution, and there’s currently some not-communicated judgment call in “small”\n\n\nif you think that TAI will be like translation, where google publishes tons of papers, but that they will just get totally destroyed by some new idea, then it seems like that should correspond to a difference in P(google translation gets totally destroyed by something out-of-left-field)\n\n\nand if you think that TAI won’t be like translation, then I’m interested in examples more like TAI\n\n\nI don’t really understand the take “and people can go on failing to notice this for arbitrarily long amounts of time,” why doesn’t that also happen for TAI and therefore cause it to be the boring slow progress by google? Why would this be like a 50% probability for TAI but <10% for translation?\n\n\nperhaps there is a disagreement about how good the boring progress will be by 2024? looks to me like it will be very good\n\n\n\n\n\n\n[Yudkowsky][11:57]\nI am not sure that is where the disagreement lies\n\n\n\n\n \n\n\n### 17.3. 
The evolution of human intelligence\n\n\n \n\n\n\n[Yudkowsky][11:57]\nI am considering advocating that we should have more disagreements about the past, which has the advantage of being very concrete, and being often checkable in further detail than either of us already know\n\n\n\n\n\n[Christiano][11:58]\nI’m fine with disagreements about the past; I’m more scared of letting you pick arbitrary things to “predict” since there is much more impact from differences in domain knowledge\n\n\n(also not quite sure why it’s more concrete, I guess because we can talk about what led to particular events? mostly it just seems faster)\n\n\nalso as far as I can tell our main differences are about whether people will ~~spend a lot of money~~ work effectively on things that would make a lot of money, which means if we look to the past we will have to move away from ML/AI\n\n\n\n\n\n\n[Yudkowsky][12:00]\nso my understanding of how Paul writes off the example of human intelligence, is that you are like, “evolution is much stupider than a human investor; if there’d been humans running the genomes, people would be copying all the successful things, and hominid brains would be developing in this ecology of competitors instead of being a lone artifact”. 
?\n\n\n\n\n\n[Christiano][12:00]\nI don’t understand why I have to write off the example of human intelligence\n\n\n\n\n\n\n[Yudkowsky][12:00]\nbecause it looks nothing like your account of how TAI develops\n\n\n\n\n\n[Christiano][12:00]\nit also looks nothing like your account, I understand that you have some analogy that makes sense to you\n\n\n\n\n\n\n[Yudkowsky][12:01]\nI mean, to be clear, I also write off the example of humans developing morality and have to explain to people at length why humans being as nice as they are, doesn’t imply that paperclip maximizers will be anywhere near that nice, nor that AIs will be other than paperclip maximizers.\n\n\n\n\n\n[Christiano][12:01][12:02]\nyou could state some property of how human intelligence developed, that is in common with your model for TAI and not mine, and then we could discuss that\n\n\nif you say something like: “chimps are not very good at doing science, but humans are” then yes my answer will be that it’s because evolution was not selecting us to be good at science\n\n\n\n\n\n\n\nand indeed AI systems will be good at science using *much* less resources than humans or chimps\n\n\n\n\n\n\n[Yudkowsky][12:02][12:02]\nwould you disagree that humans developing intelligence, on the sheer surfaces of things, looks much more Yudkowskian than Paulian?\n\n\n\n\n\n\nlike, not in terms of compatibility with underlying model\n\n\njust that there’s this one corporation that came out and massively won the entire AGI race with zero competitors\n\n\n\n\n\n[Christiano][12:03]\nI agree that “how much did the winner take all” is more like your model of TAI than mine\n\n\nI don’t think zero competitors is reasonable, I would say “competitors who were tens of millions of years behind”\n\n\n\n\n\n\n[Yudkowsky][12:03]\nsure\n\n\nand your account of this is that natural selection is nothing like human corporate managers copying each other\n\n\n\n\n\n[Christiano][12:03]\nwhich was a reasonable timescale for the old game, but a 
long timescale for the new game\n\n\n\n\n\n\n[Yudkowsky][12:03]\nyup\n\n\n\n\n\n[Christiano][12:04]\nthat’s not my only account\n\n\nit’s also that for human corporations you can form large coalitions, i.e. raise huge amounts of $ and hire huge numbers of people working on similar projects (whether or not vertically integrated), and those large coalitions will systematically beat small coalitions\n\n\nand that’s basically *the* key dynamic in this situation, and isn’t even trying to have any analog in the historical situation\n\n\n(the key dynamic w.r.t. concentration of power, not necessarily the main thing overall)\n\n\n\n\n\n\n[Yudkowsky][12:07]\nthe modern degree of concentration of power seems relatively recent and to have tons and tons to do with the regulatory environment rather than underlying properties of the innovation landscape\n\n\nback in the old days, small startups would be better than Microsoft at things, and Microsoft would try to crush them using other forces than superior technology, not always successfully\n\n\nor such was the common wisdom of USENET\n\n\n\n\n\n[Christiano][12:08]\nmy point is that the evolution analogy is extremely unpersuasive w.r.t. 
concentration of power\n\n\nI think that AI software capturing the amount of power you imagine is also kind of implausible because we know something about how hardware trades off against software progress (maybe like 1 year of progress = 2x hardware) and so even if you can’t form coalitions on innovation *at all* you are still going to be using tons of hardware if you want to be in the running\n\n\nthough if you can’t parallelize innovation at all and there is enough dispersion in software progress then the people making the software could take a lot of the $ / influence from the partnership\n\n\nanyway, I agree that this is a way in which evolution is more like your world than mine\n\n\nbut think on this point the analogy is pretty unpersuasive\n\n\nbecause it fails to engage with any of the a priori reasons you wouldn’t expect concentration of power\n\n\n\n\n\n\n[Yudkowsky][12:11]\nI’m not sure this is the correct point on which to engage, but I feel like I should say out loud that I am unable to operate my model of your model in such fashion that it is not falsified by how the software industry behaved between 1980 and 2000.\n\n\nthere should’ve been no small teams that beat big corporations\n\n\ntoday those are much rarer, but on my model, that’s because of regulatory changes (and possibly metabolic damage from something in the drinking water)\n\n\n\n\n\n[Christiano][12:12]\nI understand that you can’t operate my model, and I’ve mostly given up, and on this point I would prefer to just make predictions or maybe retrodictions\n\n\n\n\n\n\n[Yudkowsky][12:13]\nwell, anyways, my model of how human intelligence happened looks like this:\n\n\nthere is a mysterious kind of product which we can call G, and which brains can operate as factories to produce\n\n\nG in turn can produce other stuff, but you need quite a lot of it piled up to produce *better* stuff than your competitors\n\n\nas late as 1000 years ago, the fastest creatures on Earth are not humans, because you 
need *even more G than that* to go faster than cheetahs\n\n\n(or peregrine falcons)\n\n\nthe natural selections of various species were fundamentally stupid and blind, incapable of foresight and incapable of copying the successes of other natural selections; but even if they had been as foresightful as a modern manager or investor, they might have made just the same mistake\n\n\nbefore 10,000 years they would be like, “what’s so exciting about these things? they’re not the fastest runners.”\n\n\nif there’d been an economy centered around running, you wouldn’t invest in deploying a human\n\n\n(well, unless you needed a stamina runner, but that’s something of a separate issue, let’s consider just running races)\n\n\nyou would invest on improving cheetahs\n\n\nbecause the pile of human G isn’t large enough that their G beats a specialized naturally selected cheetah\n\n\n\n\n\n[Christiano][12:17]\nhow are you improving cheetahs in the analogy?\n\n\nyou are trying random variants to see what works?\n\n\n\n\n\n\n[Yudkowsky][12:18]\nusing conventional, well-tested technology like MUSCLES and TENDONS\n\n\ntrying variants on those\n\n\n\n\n\n[Christiano][12:18]\nok\n\n\nand you think that G doesn’t help you improve on muscles and tendons?\n\n\nuntil you have a big pile of it?\n\n\n\n\n\n\n[Yudkowsky][12:18]\nnot as a metaphor but as simple historical fact, that’s how it played out\n\n\nit takes a whole big pile of G to go faster than a cheetah\n\n\n\n\n\n[Christiano][12:19]\nas a matter of fact there is no one investing in making better cheetahs\n\n\nso it seems like we’re already playing analogy-game\n\n\n\n\n\n\n[Yudkowsky][12:19]\nthe natural selection of cheetahs is investing in it\n\n\nit’s not doing so by copying humans because of fundamental limitations\n\n\nhowever if we replace it with an average human investor, it still doesn’t copy humans, why would it\n\n\n\n\n\n[Christiano][12:19]\nthat’s the part that is silly\n\n\nor like, it needs more 
analogy\n\n\n\n\n\n\n[Yudkowsky][12:19]\nhow so?  humans aren’t the fastest.\n\n\n\n\n\n[Christiano][12:19]\nhumans are great at breeding animals\n\n\nso if I’m natural selection personified, the thing to explain is why I’m not using some of that G to improve on my selection\n\n\nnot why I’m not using G to build a car\n\n\n\n\n\n\n[Yudkowsky][12:20]\nI’m… confused\n\n\nis this implying that a key aspect of your model is that people are using AI to decide which AI tech to invest in?\n\n\n\n\n\n[Christiano][12:20]\nno\n\n\nI think I just don’t understand your analogy\n\n\nhere in the actual world, some people are trying to make faster robots by tinkering with robot designs\n\n\nand then someone somewhere is training their AGI\n\n\n\n\n\n\n[Yudkowsky][12:21]\nwhat I’m saying is that you can imagine a little cheetah investor going, “I’d like to copy and imitate some other species’s tricks to make my cheetahs faster” and they’re looking enviously at falcons, not at humans\n\n\nnot until *very* late in the game\n\n\n\n\n\n[Christiano][12:21]\nand the relevant question is whether the pre-AGI thing is helpful for automating the work that humans are doing while they tinker with robot designs\n\n\nthat seems like the actual world\n\n\nand the interesting claim is you saying “nope, not very”\n\n\n\n\n\n\n[Yudkowsky][12:22]\nI am again confused.  Does it matter to your model whether the pre-AGI thing is helpful for automating “tinkering with robot designs” or just profitable machine translation?  
Either seems like it induces equivalent amounts of investment.\n\n\nIf anything the latter induces much more investment.\n\n\n\n\n\n[Christiano][12:23]\nsure, I’m fine using “tinkering with robot designs” as a lower bound\n\n\nboth are fine\n\n\nthe point is I have no idea what you are talking about in the analogy\n\n\nwhat is analogous to what?\n\n\nI thought cheetahs were analogous to faster robots\n\n\n\n\n\n\n[Yudkowsky][12:23]\nfaster cheetahs are analogous to more profitable robots\n\n\n\n\n\n[Christiano][12:23]\nsure\n\n\nso you have some humans working on making more profitable robots, right?\n\n\nwho are tinkering with the robots, in a way analogous to natural selection tinkering with cheetahs?\n\n\n\n\n\n\n[Yudkowsky][12:24]\nI’m suggesting replacing the Natural Selection of Cheetahs with a new optimizer that has the Copy Competitor and Invest In Easily-Predictable Returns feature\n\n\n\n\n\n[Christiano][12:24]\nOK, then I don’t understand what those are analogous to\n\n\nlike, what is analogous to the humans who are tinkering with robots, and what is analogous to the humans working on AGI?\n\n\n\n\n\n\n[Yudkowsky][12:24]\nand observing that, even this case, the owner of Cheetahs Inc. 
would not try to copy Humans Inc.\n\n\n\n\n\n[Christiano][12:25]\nhere’s the analogy that makes sense to me\n\n\nnatural selection is working on making faster cheetahs = some humans tinkering away to make more profitable robots\n\n\nnatural selection is working on making smarter humans = some humans who are tinkering away to make more powerful AGI\n\n\nnatural selection doesn’t try to copy humans because they suck at being fast = robot-makers don’t try to copy AGI-makers because the AGIs aren’t very profitable robots\n\n\n\n\n\n\n[Yudkowsky][12:26]\nwith you so far\n\n\n\n\n\n[Christiano][12:26]\neventually humans build cars once they get smart enough = eventually AGI makes more profitable robots once it gets smart enough\n\n\n\n\n\n\n[Yudkowsky][12:26]\nyup\n\n\n\n\n\n[Christiano][12:26]\ngreat, seems like we’re on the same page then\n\n\n\n\n\n\n[Yudkowsky][12:26]\nand by this point it is LATE in the game\n\n\n\n\n\n[Christiano][12:27]\ngreat, with you still\n\n\n\n\n\n\n[Yudkowsky][12:27]\nbecause the smaller piles of G did not produce profitable robots\n\n\n\n\n\n[Christiano][12:27]\nbut there’s a step here where you appear to go totally off the rails\n\n\n\n\n\n\n[Yudkowsky][12:27]\nor operate profitable robots\n\n\nsay on\n\n\n\n\n\n[Christiano][12:27]\ncan we just write out the sequence of AGIs, AGI(1), AGI(2), AGI(3)… in analogy with the sequence of human ancestors H(1), H(2), H(3)…?\n\n\n\n\n\n\n[Yudkowsky][12:28]\nIs the last member of the sequence H(n) the one that builds cars and then immediately destroys the world before anything that operates on Cheetah Inc’s Owner’s scale can react?\n\n\n\n\n\n[Christiano][12:28]\nsure\n\n\nI don’t think of it as the last\n\n\nbut it’s the last one that actually arises?\n\n\nmaybe let’s call it the last, H(n)\n\n\ngreat\n\n\nand now it seems like you are imagining an analogous story, where AGI(n) takes over the world and maybe incidentally builds some more profitable robots along the way\n\n\n(building more 
profitable robots being easier than taking over the world, but not so much easier that AGI(n-1) could have done it unless we make our version numbers really close together, close enough that deploying AGI(n-1) is stupid)\n\n\n\n\n\n\n[Yudkowsky][12:31]\nif this plays out in the analogous way to human intelligence, AGI(n) becomes able to build more profitable robots 1 hour before it becomes able to take over the world; my worldview does not put that as the median estimate, but I do want to observe that this is what happened historically\n\n\n\n\n\n[Christiano][12:31]\nsure\n\n\n\n\n\n\n[Yudkowsky][12:32]\nok, then I think we’re still on the same page as written so far\n\n\n\n\n\n[Christiano][12:32]\nso the question that’s interesting in the real world is which AGI is useful for replacing humans in the design-better-robots task; is it 1 hour before the AGI that takes over the world, or 2 years, or what?\n\n\n\n\n\n\n[Yudkowsky][12:33]\nmy worldview tends to make a big ol’ distinction between “replace humans in the design-better-robots task” and “run as a better robot”, if they’re not importantly distinct from your standpoint can we talk about the latter?\n\n\n\n\n\n[Christiano][12:33]\nthey seem importantly distinct\n\n\ntotally different even\n\n\nso I think we’re still on the same page\n\n\n\n\n\n\n[Yudkowsky][12:34]\nok then, “replacing humans at designing better robots” sure as heck sounds to Eliezer like the world is about to end or has already ended\n\n\n\n\n\n[Christiano][12:34]\nmy whole point is that in the evolutionary analogy we are talking about “run as a better robot” rather than “replace humans in the design-better-robots-task”\n\n\nand indeed there is no analog to “replace humans in the design-better-robots-task”\n\n\nwhich is where all of the action and disagreement is\n\n\n\n\n\n\n[Yudkowsky][12:35][12:36]\nwell, yes, I was exactly trying to talk about when humans start running as better cheetahs\n\n\nand how that point is still very late in the 
game\n\n\n\n\n\n\nnot as late as when humans take over the job of making the thing that makes better cheetahs, aka humans start trying to make AGI, which is basically the fingersnap end of the world from the perspective of Cheetahs Inc.\n\n\n\n\n\n[Christiano][12:36]\nOK, but I don’t care when humans are better cheetahs—in the real world, when AGIs are better robots. In the real world I care about when AGIs start replacing humans in the design-better-robots-task. I’m game to use evolution as an analogy to help answer *that* question (where I do agree that it’s informative), but want to be clear what’s actually at issue.\n\n\n\n\n\n\n[Yudkowsky][12:37]\nso, the thing I was trying to work up to, is that my model permits the world to end in a way where AGI doesn’t get tons of investment because it has an insufficiently huge pile of G that it could run as a better robot.  people are instead investing in the equivalents of cheetahs.\n\n\nI don’t understand why your model doesn’t care when humans are better cheetahs.  AGIs running as more profitable robots is what induces the huge investments in AGI that your model requires to produce very close competition. ?\n\n\n\n\n\n[Christiano][12:38]\nit’s a sufficient condition, but it’s not the most robust one at all\n\n\nlike, I happen to think that in the real world AIs actually are going to be incredibly profitable robots, and that’s part of my boring view about what AGI looks like\n\n\nBut the thing that’s more robust is that the sub-taking-over-world AI is already really important, and receiving huge amounts of investment, as something that automates the R&D process. And it seems like the best guess given what we know now is that this process starts years before the singularity.\n\n\nFrom my perspective that’s where most of the action is. And your views on that question seem related to your views on how e.g. 
AGI is a fundamentally different ballgame from making better robots (whereas I think the boring view is that they are closely related), but that’s more like an upstream question about what you think AGI will look like, most relevant because I think it’s going to lead you to make bad short-term predictions about what kinds of technologies will achieve what kinds of goals.\n\n\n\n\n\n\n[Yudkowsky][12:41]\nbut not all AIs are the same branch of the technology tree.  factory robotics are already really important and they are “AI” but, on my model, they’re currently on the cheetah branch rather than the hominid branch of the tech tree; investments into better factory robotics are not directly investments into improving MuZero, though they may buy chips that MuZero also buys.\n\n\n\n\n\n[Christiano][12:42]\nYeah, I think you have a mistaken view of AI progress. But I still disagree with your bottom line even if I adopt (this part of) your view of AI progress.\n\n\nNamely, I think that the AGI line is mediocre before it is great, and the mediocre version is spectacularly valuable for accelerating R&D (mostly AGI R&D).\n\n\nThe way I end up sympathizing with your view is if I adopt both this view about the tech tree, + another equally-silly-seeming view about how close the AGI line is to fooming (or how inefficient the area will remain as we get close to fooming)\n\n\n\n\n\n \n\n\n### 17.4. 
Human generality and body manipulation\n\n\n \n\n\n\n[Yudkowsky][12:43]\nso metaphorically, you require that humans be doing Great at Various Things and being Super Profitable way before they develop agriculture; the rise of human intelligence cannot be a case in point of your model because the humans were too uncompetitive at most animal activities for unrealistically long (edit: compared to the AI case)\n\n\n\n\n\n[Christiano][12:44]\nI don’t understand\n\n\nHuman brains are really great at basically everything as far as I can tell?\n\n\nlike it’s not like other animals are better at manipulating their bodies\n\n\nwe crush them\n\n\n\n\n\n\n[Yudkowsky][12:44]\nif we’ve got weapons, yes\n\n\n\n\n\n[Christiano][12:44]\nhuman bodies are also pretty great, but they are not the greatest on every dimension\n\n\n\n\n\n\n[Yudkowsky][12:44]\nwrestling a chimpanzee without weapons is famously ill-advised\n\n\n\n\n\n[Christiano][12:44]\nno, I mean everywhere\n\n\nchimpanzees are practically the same as humans in the animal kingdom\n\n\nthey have almost as excellent a brain\n\n\n\n\n\n\n[Yudkowsky][12:45]\nas is attacking an elephant with your bare hands\n\n\n\n\n\n[Christiano][12:45]\nthat’s not because of elephant brains\n\n\n\n\n\n\n[Yudkowsky][12:45]\nwell, yes, exactly\n\n\nyou need a big pile of G before it’s profitable\n\n\nso big the game is practically over by then\n\n\n\n\n\n[Christiano][12:45]\nthis seems so confused\n\n\nbut that’s exciting I guess\n\n\nlike, I’m saying that the brains to automate R&D\n\n\nare similar to the brains to be a good factory robot\n\n\nanalogously, I think the brains that humans use to do R&D\n\n\nare similar to the brains we use to manipulate our body absurdly well\n\n\nI do not think that our brains make us fast\n\n\nthey help a tiny bit but not much\n\n\nI do not think the physical actuators of the industrial robots will be that similar to the actuators of the robots that do R&D\n\n\nthe claim is that the problem of building the 
brain is pretty similar\n\n\njust as the problem of building a brain that can do science is pretty similar to the problem of building a brain that can operate a body really well\n\n\n(and indeed I’m claiming that human bodies kick ass relative to other animal bodies—there may be particular tasks other animal brains are pre-built to be great at, but (i) humans would be great at those too if we were under mild evolutionary pressure with our otherwise excellent brains, (ii) there are lots of more general tests of how good you are at operating a body and we will crush it at those tests)\n\n\n(and that’s not something I know much about, so I could update as I learned more about how actually we just aren’t that good at motor control or motion planning)\n\n\n\n\n\n\n[Yudkowsky][12:49]\nso on your model, we can introduce humans to a continent, forbid them any tool use, and they’ll still wipe out all the large animals?\n\n\n\n\n\n[Christiano][12:49]\n(but damn we seem good to me)\n\n\nI don’t understand why that would even plausibly follow\n\n\n\n\n\n\n[Yudkowsky][12:49]\nbecause brains are profitable early, even if they can’t build weapons?\n\n\n\n\n\n[Christiano][12:49]\nI’m saying that if you put our brains in a big animal body\n\n\nwe would wipe out the big animals\n\n\nyes, I think brains are great\n\n\n\n\n\n\n[Yudkowsky][12:50]\nbecause we’d still have our late-game pile of G and we would build weapons\n\n\n\n\n\n[Christiano][12:50]\nno, I think a human in a big animal body, with brain adapted to operate that body instead of our own, would beat a big animal straightforwardly\n\n\nwithout using tools\n\n\n\n\n\n\n[Yudkowsky][12:51]\nthis is a strange viewpoint and I do wonder whether it is a crux of your view\n\n\n\n\n\n[Christiano][12:51]\nthis feels to me like it’s more on the “eliezer vs paul disagreement about the nature of AI” rather than “eliezer vs paul on civilizational inadequacy and continuity”, but enough changes on “nature of AI” would switch my view on 
the other question\n\n\n\n\n\n\n[Yudkowsky][12:51]\nlike, ceteris paribus maybe a human in an elephant’s body beats an elephant after a burn-in practice period?  because we’d have a strict intelligence advantage?\n\n\n\n\n\n[Christiano][12:52]\npractice may or may not be enough\n\n\nbut if you port over the excellent human brain to the elephant body, then run evolution for a brief burn-in period to get all the kinks sorted out?\n\n\nelephants are pretty close to humans so it’s less brutal than for some other animals (and also are elephants the best example w.r.t. the possibility of direct conflict?) but I totally expect us to win\n\n\n\n\n\n\n[Yudkowsky][12:53]\nI unfortunately need to go do other things in advance of an upcoming call, but I feel like disagreeing about the past is proving noticeably more interesting, confusing, and perhaps productive, than disagreeing about the future\n\n\n\n\n\n[Christiano][12:53]\nactually probably I just think practice is enough\n\n\nI think humans have way more dexterity, better locomotion, better navigation, better motion planning…\n\n\nsome of that is having bodies optimized for those things (esp. dexterity), but I also think most animals just don’t have the brains for it, with elephants being one of the closest calls\n\n\nI’m a little bit scared of talking to zoologists or whoever the relevant experts are on this question, because I’ve talked to bird people a little bit and they often have very strong “humans aren’t special, animals are super cool” instincts even in cases where that take is totally and obviously insane. 
But if we found someone reasonable in that area I’d be interested to get their take on this.\n\n\nI think this is pretty important for the particular claim “Is AGI like other kinds of ML?”; that definitely doesn’t persuade me to be into fast takeoff on its own though it would be a clear way the world is more Eliezer-like than Paul-like\n\n\nI think I do further predict that people who know things about animal intelligence, and don’t seem to have identifiably crazy views about any adjacent questions that indicate a weird pro-animal bias, will say that human brains are a lot better than other animal brains for dexterity/locomotion/similar physical tasks (and that the comparison isn’t that close for e.g. comparing humans vs big cats).\n\n\nIncidentally, seems like DM folks did the same thing this year, presumably publishing now because they got scooped. Looks like they probably have a better algorithm but used harder environments instead of Atari. (They also evaluate the algorithm SPR+MuZero I mentioned which indeed gets one factor of 2x improvement over MuZero alone, roughly as you’d guess): \n\n\n\n\n\n\n[Barnes][13:45]\nMy DM friend says they tried it before they were focused on data efficiency and it didn’t help in that regime, sounds like they ignored it for a while after that\n\n\n\n\n\n| |\n| --- |\n| [Christiano: 👍] |\n\n\n\n\n\n\n\n[Christiano][13:48]\nOverall the situation feels really boring to me. Not sure if DM having a highly similar unpublished result is more likely on my view than Eliezer’s (and initially ignoring the method because they weren’t focused on sample-efficiency), but at any rate I think it’s not anywhere close to falsifying my view.\n\n\n\n\n\n \n\n\n18. 
Follow-ups to the Christiano/Yudkowsky conversation\n-------------------------------------------------------\n\n\n \n\n\n\n[Karnofsky][9:39]\nGoing to share a point of confusion about this latest exchange.\n\n\nIt started with Eliezer saying this:\n\n\n\n> \n> Thing that (if true) strikes me as… straight-up falsifying Paul’s view as applied to modern-day AI, at the frontier of the most AGI-ish part of it and where Deepmind put in substantial effort on their project? EfficientZero (allegedly) learns Atari in 100,000 frames. Caveat: I’m not having an easy time figuring out how many frames MuZero would’ve required to achieve the same performance level. MuZero was trained on 200,000,000 frames but reached what looks like an allegedly higher high; the EfficientZero paper compares their performance to MuZero on 100,000 frames, and claims theirs is much better than MuZero given only that many frames.\n> \n> \n> \n\n\nSo at this point, I thought Eliezer’s view was something like: “EfficientZero represents a several-OM (or at least one-OM?) jump in efficiency, which should shock the hell out of Paul.” The upper bound on the improvement is 2000x, so I figured he thought the corrected improvement would be some number of OMs.\n\n\nBut very shortly afterwards, Eliezer quotes Gwern’s guess of a *4x* improvement, and Paul then said:\n\n\n\n> \n> Concretely, this is a paper that adds a few techniques to improve over MuZero in a domain that (it appears) wasn’t a significant focus of MuZero. I don’t know how much it improves but I can believe gwern’s estimates of 4x.\n> \n> \n> I’d guess MuZero itself is a 2x improvement over the baseline from a year ago, which was maybe a 4x improvement over the algorithm from a year before that. 
If that’s right, then no it’s not mindblowing on my view to have 4x progress one year, 2x progress the next, and 4x progress the next.\n> \n> \n> \n\n\nEliezer never seemed to push back on this 4x-2x-4x claim.\n\n\nWhat I thought would happen after the 4x estimate and 4x-2x-4x claim: Eliezer would’ve said “Hmm, we should nail down whether we are talking about 4x-2x-4x or something more like 4x-2x-100x. If it’s 4x-2x-4x, then I’ll say ‘never mind’ re: my comment that this ‘straight-up falsifies Paul’s view.’ At best this is just an iota of evidence or something.”\n\n\nWhy isn’t that what happened? Did Eliezer mean all along to be saying that a 4x jump on Atari sample efficiency would “straight-up falsify Paul’s view?” Is a 4x jump the kind of thing Eliezer thinks is going to power a jumpy AI timeline?\n\n\n\n\n\n| | |\n| --- | --- |\n| [Ngo: 👍] | [Shah: ➕] |\n\n\n\n\n\n\n\n[Yudkowsky][11:16]\nThis is a proper confusion and probably my fault; I also initially thought it was supposed to be 1-2 OOM and should’ve made it clearer that Gwern’s 4x estimate was less of a direct falsification.\n\n\nI’m not yet confident Gwern’s estimate is correct.  I just got a reply from my query to the paper’s first author which reads:\n\n\n\n> \n> Dear Eliezer: It’s a good question. But due to the limits of resources and time, we haven’t evaluated the sample efficiency towards different frames systematically. I think it’s not a trivial question as the required time and resources are much expensive for the 200M frames setting, especially concerning the MCTS-based methods. Maybe you need about several days or longer to finish a run with GPUs in that setting. I hope my answer can help you. Thank you for your email.\n> \n> \n> \n\n\nI replied asking if Gwern’s 3.8x estimate sounds right to them.\n\n\nA 10x improvement could power what I think is a jumpy AI timeline.  
I’m currently trying to draft a depiction of what I think an unrealistically dignified but computationally typical end-of-world would look like if it started in 2025, and my first draft of that had it starting with a new technique published by Google Brain that was around a 10x improvement in training speeds for very large networks at the cost of higher inference costs, but which turned out to be specially applicable to online learning.\n\n\nThat said, I think the 10x part isn’t either a key concept or particularly likely, and it’s much more likely that hell breaks loose when an innovation changes some particular step of the problem from “can’t realistically be done at all” to “can be done with a lot of computing power”, which was what I had being the real effect of that hypothetical Google Brain innovation when applied to online learning, and I will probably rewrite to reflect that.\n\n\n\n\n\n[Karnofsky][11:29]\nThat’s helpful, thanks.\n\n\nRe: “can’t realistically be done at all” to “can be done with a lot of computing power”, cpl things:\n\n\n1. Do you think a 10x improvement in efficiency at some particular task could qualify as this? Could a smaller improvement?\n\n\n2. I thought you were pretty into the possibility of a jump from “can’t realistically be done at all” to “can be done with a *small* amount of computing power,” eg some random ppl with a $1-10mm/y budget blowing past mtpl labs with >$1bb/y budgets. 
Is that wrong?\n\n\n\n\n\n\n[Yudkowsky][13:44]\n1 – yes and yes, my revised story for how the world ends looks like Google Brain publishing something that looks like only a 20% improvement but which is done in a way that lets it be adapted to make online learning by gradient descent “work at all” in DeepBrain’s ongoing Living Zero project (not an actual name afaik)\n\n\n2 – that definitely remains very much allowed in principle, but I think it’s not my current mainline probability for how the world’s end plays out – although I feel hesitant / caught between conflicting heuristics here.\n\n\nI think I ended up much too conservative about timelines and early generalization speed because of arguing with Robin Hanson, and don’t want to make a similar mistake here, but on the other hand a lot of the current interesting results have been from people spending huge compute (as wasn’t the case to nearly the same degree in 2008) and if things happen on short timelines it seems reasonable to guess that the future will look that much like the present.  This is very much due to cognitive limitations of the researchers rather than a basic fact about computer science, but cognitive limitations are also facts and often stable ones.\n\n\n\n\n\n[Karnofsky][14:35]\nHm OK. I don’t know what “online learning by gradient descent” means such that it doesn’t work at all now (does “work at all” mean something like “work with human-ish learning efficiency?”)\n\n\n\n\n\n\n[Yudkowsky][15:07]\nI mean, in context, it means “works for Living Zero at the performance levels where it’s running around accumulating knowledge”, which by hypothesis it wasn’t until that point.\n\n\n\n\n\n[Karnofsky][15:12]\nHm. I am feeling pretty fuzzy on whether your story is centrally about:\n\n\n1. A <10x jump in efficiency at something important, leading pretty directly/straightforwardly to crazytown\n\n\n2. 
A 100x ish jump in efficiency at something important, which may at first “look like” a mere <10x jump in efficiency at something else\n\n\n#2 is generally how I’ve interpreted you and how the above sounds, but under #2 I feel like we should just have consensus that the Atari thing being 4x wouldn’t be much of an update. Maybe we already do (it was a bit unclear to me from your msg)\n\n\n(And I totally agree that we haven’t established the Atari thing is only 4x – what I’m saying is it feels like the conversation should’ve paused there)\n\n\n\n\n\n\n[Yudkowsky][15:13]\nThe Atari thing being 4x over 2 years is I think legit not an update because that’s standard software improvement speed\n\n\nyou’re correct that it should pause there\n\n\n\n\n\n[Karnofsky][15:14]  (Nov. 5)\n👍\n\n\n\n\n\n\n[Yudkowsky] [15:24]\nI think that my central model is something like – there’s a central thing to general intelligence that starts working when you get enough pieces together and they coalesce, which is why humans went down this evolutionary gradient by a lot before other species got 10% of the way there in terms of output; and then it takes a big pile of that thing to do big things, which is why humans didn’t go faster than cheetahs until extremely late in the game.\n\n\nso my visualization of how the world starts to end is “gear gets added and things start to happen, maybe slowly-by-my-standards at first such that humans keep on pushing it along rather than it being self-moving, but at some point starting to cumulate pretty quickly in the same way that humans cumulated pretty quickly once they got going” rather than “dial gets turned up 50%, things happen 50% faster, every year”.\n\n\n\n\n\n[Yudkowsky][15:16]  (Nov. 
5, switching channels)\nas a quick clarification, I agree that if this is 4x sample efficiency over 2 years then that doesn’t at all challenge Paul’s view\n\n\n\n\n\n[Christiano][0:20]\nFWIW, I felt like the entire discussion of EfficientZero was a concrete example of my view making a number of more concentrated predictions than Eliezer that were then almost immediately validated. In particular, consider the following 3 events:\n\n\n* The quantitative effect size seems like it will turn out to be much smaller than Eliezer initially believed, much closer to being in line with previous progress.\n* DeepMind had relatively similar results that got published immediately after our discussion, making it look like random people didn’t pull ahead of DM after all.\n* DeepMind appears not to have cared much about the metric in question, as evidenced by (i) Beth’s comment above, which is basically what I said was probably going on, (ii) they barely even mention Atari sample-efficiency in their paper about similar methods.\n\n\nIf only 1 of these 3 things had happened, then I agree this would have been a challenge to my view that would make me update in Eliezer’s direction. But that’s only possible if Eliezer actually assigns a higher probability than me to <= 1 of these things happening, and hence a lower probability to >= 2 of them happening. So if we’re playing a reasonable epistemic game, it seems like I need to collect some epistemic credit every time something looks boring to me.\n\n\n\n\n\n\n[Yudkowsky][15:30]\nI broadly agree; you win a Bayes point.  I think some of this (but not all!) was due to my tripping over my own feet and sort of rushing back with what looked like a Relevant Thing without contemplating the winner’s curse of exciting news, the way that paper authors tend to frame things in more exciting rather than less exciting ways, etc.  
But even if you set that aside, my underlying AI model said that was a thing which could happen (which is why I didn’t have technically rather than sociologically triggered skepticism) and your model said it shouldn’t happen, and it currently looks like it mostly didn’t happen, so you win a Bayes point.\n\n\nNotes that some participants may deem obvious(?) but that I state expecting wider readership:\n\n\n* Just like markets are almost entirely efficient (in the sense that, even when they’re not efficient, you can only make a very small fraction of the money that could be made from the entire market if you owned a time machine), even sharp and jerky progress has to look almost entirely not so fast almost all the time if the Sun isn’t right in the middle of going supernova.  So the notion that progress sometimes goes jerky and fast does have to be evaluated by a portfolio view over time.  In worlds where progress is jerky even before the End Days, Paul wins soft steady Bayes points in most weeks and then I win back more Bayes points once every year or two.\n* We still don’t have a *very* good idea of how much longer you would need to train the previous algorithm to match the performance of the new algorithm, just an estimate by Gwern based off linearly extrapolating a graph in a paper.  
But, also to be clear, not knowing something is not the same as expecting it to update dramatically, and you have to integrate over the distribution you’ve got.\n* It’s fair to say, “Hey, Eliezer, if you tripped over your own feet here, but only noticed that because Paul was around to call it, maybe you’re tripping over your feet at other times when Paul isn’t around to check your thoughts in detail” – I don’t want to minimize the Bayes point that Paul won either.\n\n\n\n\n\n[Christiano][16:29]\nAgreed that it’s (i) not obvious how large the EfficientZero gain was, and in general it’s not a settled question what happened, (ii) it’s not that big an update, it needs to be part of a portfolio (but this is indicative of the kind of thing I’d want to put in the portfolio), (iii) it generally seems pro-social to flag potentially relevant stuff without the presumption that you are staking a lot on it.\n\n\n\n\n\n \n\n\n\nThe post [Christiano and Yudkowsky on AI predictions and human intelligence](https://intelligence.org/2022/03/01/christiano-and-yudkowsky-on-ai-predictions-and-human-intelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-03-01T19:24:28Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "7027db29fbfc98a193a7b1e9611a077e", "title": "February 2022 Newsletter", "url": "https://intelligence.org/2022/03/01/february-2022-newsletter/", "source": "miri", "source_type": "blog", "text": "As of yesterday, we've released the final posts in the [**Late 2021 MIRI Conversations**](https://www.lesswrong.com/s/n945eovrA3oDueqtq) sequence, a collection of (relatively raw and unedited) AI strategy conversations:\n\n\n* [Ngo's view on alignment difficulty](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/gf9hhmSvpZfyfS34B)\n* [Ngo and Yudkowsky on scientific reasoning and pivotal acts](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/cCrpbZ4qTCEYXbzje)\n* [Christiano and Yudkowsky on AI predictions 
and human intelligence](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/NbGmfxbaABPsspib7)\n* [Shah and Yudkowsky on alignment failures](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/tcCxPLBrEXdxN5HCQ)\n\n\nEliezer Yudkowsky, Nate Soares, Paul Christiano, Richard Ngo, and Rohin Shah (and possibly other participants in the conversations) will be [**answering questions in an AMA**](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/34Gkqus9vusXRevR8) this Wednesday; questions are currently open on LessWrong.\n\n\n#### Other MIRI updates\n\n\n* Scott Alexander gives his take on Eliezer's [dialogue](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works) on biology-inspired AGI timelines: [Biological Anchors: A Trick That Might Or Might Not Work](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might).\n* Concurrent with [progress on math olympiad problems](https://www.lesswrong.com/posts/q3vAgFnbDja9hZm9E/openai-solves-some-formal-math-olympiad-problems) by OpenAI, Paul Christiano operationalizes an [IMO challenge bet with Eliezer](https://www.lesswrong.com/posts/sWLLdG6DWJEy3CH7n/imo-challenge-bet-with-eliezer), reflecting their different views on the continuousness/predictability of AI progress and the length of AGI timelines.\n\n\n#### News and links\n\n\n* DeepMind's [AlphaCode](https://www.lesswrong.com/posts/ZmxkmCjXJBpwJkgrw/competitive-programming-with-alphacode) demonstrates good performance in programming competitions.\n* Billionaire EA Sam Bankman-Fried announces the [Future Fund](https://ftxfuturefund.org/announcing-the-future-fund/), a philanthropic fund whose [areas of interest](https://ftxfuturefund.org/area-of-interest/) include AI and \"loss of control\" scenarios.\n* Stuart Armstrong leaves the Future of Humanity Institute to found [Aligned AI](https://www.lesswrong.com/posts/vBoq5yd7qbYoGKCZK/why-i-m-co-founding-aligned-ai), a benefit corporation focusing on the problem of 
\"value extrapolation\".\n\n\n\nThe post [February 2022 Newsletter](https://intelligence.org/2022/03/01/february-2022-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-03-01T09:59:24Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "cb5d240b42bf7108079aa51e9cb6d093", "title": "January 2022 Newsletter", "url": "https://intelligence.org/2022/01/31/january-2022-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* MIRI's $1.2 million [Visible Thoughts Project](https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement) bounty now has an [FAQ](https://docs.google.com/document/d/1sxNOxvcBWw7XC6MSUUpAjO4Rbu3VcUmCy7PJWQ9Vh5o/edit), and an example of a [successful partial run](https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement?commentId=WywmX2i28xANKC7wA) that you can use to inform your own runs.\n* Scott Alexander [reviews the first part of the Yudkowsky/Ngo debate](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky). 
See also Richard Ngo's [reply](https://twitter.com/RichardMCNgo/status/1483639849106169856), and Rohin Shah's [review](https://www.lesswrong.com/posts/3vFmQhHBosnjZXuAJ/an-171-disagreements-between-alignment-optimists-and) of several posts from the [Late 2021 MIRI Conversations](https://intelligence.org/late-2021-miri-conversations/).\n* From Evan Hubinger: [How do we become confident in the safety of an ML system?](https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine) and [A positive case for how we might succeed at prosaic AI alignment](https://www.lesswrong.com/posts/5ciYedyQDDqAcrDLr/a-positive-case-for-how-we-might-succeed-at-prosaic-ai) (with discussion in the [comments](https://www.lesswrong.com/posts/5ciYedyQDDqAcrDLr/a-positive-case-for-how-we-might-succeed-at-prosaic-ai#comments)).\n* The [ML Alignment Theory Scholars](https://www.lesswrong.com/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger) program, mentored by Evan Hubinger and run by SERI, has produced a [series](https://www.lesswrong.com/s/tDBYJd4p6EorGLEFA) of distillations and expansions of prior alignment-relevant research.\n* MIRI ran a small workshop this month on what makes some concepts better than others, motivated by the question of how revolutionary science (which is about discovering new questions to ask, new ontologies, and new concepts) works.\n\n\n#### News and links\n\n\n* Daniel Dewey makes his version of the case that future advances in deep learning [pose a \"global risk\"](https://www.danieldewey.net/risk/case.html).\n* Buck Shlegeris of [Redwood Research](https://www.redwoodresearch.org/) discusses [Worst-Case Thinking in AI Alignment](https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment).\n* From Paul Christiano: [Why I'm excited about Redwood Research's current 
project](https://www.lesswrong.com/posts/pXLqpguHJzxSjDdx7/why-i-m-excited-about-redwood-research-s-current-project).\n* Paul Christiano's Alignment Research Center is hiring [researchers](https://www.lesswrong.com/posts/dLoK6KGcHAoudtwdo/arc-is-hiring) and [research interns](https://www.lesswrong.com/posts/BRsxztzkTzScFQfDW/apply-for-research-internships-at-arc).\n\n\n\nThe post [January 2022 Newsletter](https://intelligence.org/2022/01/31/january-2022-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-02-01T07:57:20Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "213ba2f8cec139413622787c8f3a7bcb", "title": "December 2021 Newsletter", "url": "https://intelligence.org/2021/12/31/december-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "MIRI is offering $200,000 to build a dataset of AI-dungeon-style writing annotated with the thoughts used in the writing process, and an additional $1,000,000 for scaling that dataset an additional 10x: the **[Visible Thoughts Project](https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement)**.\n\n\nAdditionally, MIRI is in the process of releasing a series of chat logs, the **[Late 2021 MIRI Conversations](https://intelligence.org/late-2021-miri-conversations/)**, featuring relatively unedited and raw conversations between Eliezer Yudkowsky, Richard Ngo, Paul Christiano, and a number of other AI x-risk researchers.\n\n\nAs background, we've also released an anonymous [discussion with Eliezer Yudkowsky on AGI interventions](https://intelligence.org/2021/11/11/discussion-with-eliezer-yudkowsky-on-agi-interventions/) (cf. 
Zvi Mowshowitz's [summary](https://www.lesswrong.com/posts/xHnuX42WNZ9hq53bz/attempted-gears-analysis-of-agi-intervention-discussion-with-1)) and Nate Soares' [comments on Carlsmith’s “Is power-seeking AI an existential risk?”](https://www.lesswrong.com/posts/cCMihiwtZx7kdcKgt/comments-on-carlsmith-s-is-power-seeking-ai-an-existential) (one of several public [reviews](https://www.lesswrong.com/posts/qRSgHLb8yLXzDg4nf/reviews-of-is-power-seeking-ai-an-existential-risk) of Carlsmith's report).\n\n\nThe logs so far:\n\n\n* [Ngo and Yudkowsky on alignment difficulty](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty) — A pair of opening conversations asking how easy it is to avoid \"consequentialism\" in powerful AGI systems.\n* [Ngo and Yudkowsky on AI capability gains](https://intelligence.org/2021/11/18/ngo-and-yudkowsky-on-ai-capability-gains/) — Richard and Eliezer continue their dialogue.\n* [Yudkowsky and Christiano discuss \"Takeoff Speeds\"](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/vwLxd6hhFvPbvKmBH) — Paul Christiano joins the conversation, and debates hard vs. 
soft takeoff with Eliezer.\n* [Soares, Tallinn, and Yudkowsky discuss AGI cognition](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/oKYWbXioKaANATxKY) — Jaan Tallinn and Nate Soares weigh in on the conversation so far.\n* [Christiano, Cotra, and Yudkowsky on AI progress](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7MCqRnZzvszsxgtJi) — Paul and Eliezer begin a longer AGI forecasting discussion, joined by Ajeya Cotra.\n* [Shulman and Yudkowsky on AI progress](https://www.lesswrong.com/posts/sCCdCLPN9E3YvdZhj/shulman-and-yudkowsky-on-ai-progress) — Carl Shulman weighs in on the Paul/Eliezer/Ajeya conversation.\n* [More Christiano, Cotra, and Yudkowsky on AI progress](https://www.lesswrong.com/posts/fS7Zdj2e2xMqE6qja/more-christiano-cotra-and-yudkowsky-on-ai-progress) — A discussion of “why should we expect early prototypes to be low-impact?”, and of concrete predictions.\n* [Conversation on technology forecasting and gradualism](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/nPauymrHwpoNr6ipx) — A larger-group discussion, following up on the Paul/Eliezer debate.\n\n\nEliezer additionally wrote a dialogue, [Biology-Inspired AGI Timelines: The Trick That Never Works](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works), which Holden Karnofsky [responded to](https://www.lesswrong.com/posts/nNqXfnjiezYukiMJi/reply-to-eliezer-on-biological-anchors).\n\n\n#### News and links\n\n\n* [How To Get Into Independent Research On Alignment/Agency](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency): John Wentworth gives an excellent overview of how to get started doing AI alignment research.\n* A new summer fellowship, [Principles of Intelligent Behavior in Biological and Social Systems](https://www.alignmentforum.org/posts/4Tjz4EJ8DozE9z5nQ/introducing-the-principles-of-intelligent-behaviour-in), is seeking applicants to spend three months in 2022 working on AI 
alignment \"through studying analogies to many complex systems (evolution, brains, language, social structures…)\". [Apply](https://www.pibbss.ai/fellowship) by January 16.\n\n\n\nThe post [December 2021 Newsletter](https://intelligence.org/2021/12/31/december-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2022-01-01T07:32:12Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d992db8d857dc04902a5b91608a6bfb1", "title": "Ngo’s view on alignment difficulty", "url": "https://intelligence.org/2021/12/14/ngos-view-on-alignment-difficulty/", "source": "miri", "source_type": "blog", "text": "This post features a write-up by Richard Ngo on his views, with inline comments.\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | | |\n| --- | --- | --- |\n|   Chat   |   Google Doc content   |   Inline comments   |\n\n\n\n \n\n\n13. Follow-ups to the Ngo/Yudkowsky conversation\n------------------------------------------------\n\n\n \n\n\n### 13.1. Alignment difficulty debate: Richard Ngo’s case\n\n\n \n\n\n \n\n\n\n[Ngo][9:31]  (Sep. 25)\nAs promised, here’s a write-up of some thoughts from my end. In particular, since I’ve spent a lot of the debate poking Eliezer about his views, I’ve tried here to put forward more positive beliefs of my own in this doc (along with some more specific claims): [GDocs link]\n\n\n\n\n\n| |\n| --- |\n| [Soares: ✨]  |\n\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nWe take as a starting observation that a number of “grand challenges” in AI have been solved by AIs that are very far from the level of generality which people expected would be needed. Chess, once considered to be the pinnacle of human reasoning, was solved by an algorithm that’s essentially useless for real-world tasks. 
Go required more flexible learning algorithms, but policies which beat human performance are still nowhere near generalising to anything else; the same for StarCraft, DOTA, and the protein folding problem. Now it seems very plausible that AIs will even be able to pass (many versions of) the Turing Test while still being a long way from AGI.\n\n\n\n\n\n\n[Yudkowsky][11:26]  (Sep. 25 comment)\n\n> \n> Now it seems very plausible that AIs will even be able to pass (many versions of) the Turing Test while still being a long way from AGI.\n> \n> \n> \n\n\nI remark:  Restricted versions of the Turing Test.  Unrestricted passing of the Turing Test happens after the world ends.  Consider how smart you’d have to be to pose as an AGI to an AGI; you’d need all the cognitive powers of an AGI as well as all of your human powers.\n\n\n\n\n\n[Ngo][11:24]  (Sep. 29 comment)\nPerhaps we can quantify the Turing test by asking something like:\n\n\n* What percentile of competence is the judge?\n* What percentile of competence are the humans who the AI is meant to pass as?\n* How much effort does the judge put in (measured in, say, hours of strategic preparation)?\n\n\nDoes this framing seem reasonable to you? And if so, what are the highest numbers for each of these metrics that correspond to a Turing test which an AI could plausibly pass before the world ends?\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nI expect this trend to continue until after we have AIs which are superhuman at mathematical theorem-proving, programming, many other white-collar jobs, and many types of scientific research. It seems like Eliezer doesn’t. I’ll highlight two specific disagreements which seem to play into this.\n\n\n\n\n\n\n[Yudkowsky][11:28]  (Sep. 25 comment)\n\n> \n> doesn’t\n> \n> \n> \n\n\nEh?  I’m pretty fine with something proving the Riemann Hypothesis before the world ends.  
It came up during my recent debate with Paul, in fact.\n\n\nNot so fine with something designing nanomachinery that can be built by factories built by proteins.  They’re legitimately different orders of problem, and it’s no coincidence that the second one has a path to pivotal impact, and the first does not.\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nA first disagreement is related to Eliezer’s characterisation of GPT-3 as a shallow pattern-memoriser. I think there’s a continuous spectrum between pattern-memorisation and general intelligence. In order to memorise more and more patterns, you need to start understanding them at a high level of abstraction, draw inferences about parts of the patterns based on other parts, and so on. When those patterns are drawn from the real world, then this process leads to the gradual development of a world-model.\n\n\nThis position seems more consistent with the success of deep learning so far than Eliezer’s position (although my advocacy of it loses points for being post-hoc; I was closer to Eliezer’s position before the GPTs). It also predicts that deep learning will lead to agents which can reason about the world in increasingly impressive ways (although I don’t have a strong position on the extent to which new architectures and algorithms will be required for that). I think that the spectrum from less to more intelligent animals (excluding humans) is a good example of what it looks like to gradually move from pattern-memorisation to increasingly sophisticated world-models and abstraction capabilities.\n\n\n\n\n\n\n[Yudkowsky][11:30]  (Sep. 25 comment)\n\n> \n> In order to memorise more and more patterns, you need to start understanding them at a high level of abstraction, draw inferences about parts of the patterns based on other parts, and so on.\n> \n> \n> \n\n\nCorrect.  
You can believe this and *not* believe that exactly GPT-like architectures can keep going deeper until their overlap of a greater number of patterns achieves the same level of depth and generalization as human depth and generalization from fewer patterns, just like pre-transformer architectures ran into trouble in memorizing deeper patterns than the shallower ones those earlier systems could memorize.\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nI expect that Eliezer won’t claim that pattern-memorisation is *unrelated* to general intelligence, but will claim that a pattern-memoriser needs to undergo a sharp transition in its cognitive algorithms before it can reason reliably about novel domains (like open scientific problems) – with his main argument for that being the example of the sharp transition undergone by humans.\n\n\nHowever, it seems unlikely to me that humans underwent a major transition in our underlying cognitive algorithms since diverging from chimpanzees, because our brains are so similar to those of chimps, and because our evolution from chimps didn’t take very long. This evidence suggests that we should favour explanations for our success which don’t need to appeal to big algorithmic changes, if we have any such explanations; and I think we do. More specifically, I’d characterise the three key differences between humans and chimps as:\n\n\n1. Humans have bigger brains.\n2. Humans have a range of small adaptations primarily related to motivation and attention, such as infant focus on language and mimicry, that make us much better at cultural learning.\n3. Humans grow up in a rich cultural environment.\n\n\n\n\n\n\n[Ngo][9:13]  (Sep. 
23 comment on earlier draft)\n\n> \n> bigger brains\n> \n> \n> \n\n\nI recall a 3-4x difference; but this paper says 5-6x for frontal cortex: [https://www.nature.com/articles/nn814](https://www.nature.com/articles/nn814)\n\n\n\n\n\n\n[Tallinn][3:24]  (Sep. 26 comment)\n\n> \n> language and mimicry\n> \n> \n> \n\n\n“apes are unable to ape sounds” claims david deutsch in “the beginning of infinity”\n\n\n\n\n\n\n[Barnes][8:09]  (Sep. 23 comment on earlier draft)\n\n> \n> [Humans grow up in a rich cultural environment.]\n> \n> \n> \n\n\nmuch richer cultural environment including deliberate teaching\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nI claim that the discontinuity between the capabilities of humans and chimps is mainly explained by the general intelligence of chimps not being aimed in the direction of learning the skills required for economically valuable tasks, which in turn is mainly due to chimps lacking the “range of small adaptations” mentioned above.\n\n\nMy argument is a more specific version of Paul’s claim that chimp evolution was not primarily selecting for doing things like technological development. In particular, it was not selecting for them because no cumulative cultural environment existed while chimps were evolving, and selection for the application of general intelligence to technological development is much stronger in a cultural environment. (I claim that the cultural environment was so limited before humans mainly because cultural accumulation is [very sensitive to transmission fidelity](https://royalsocietypublishing.org/doi/10.1098/rstb.2012.0119).)\n\n\nBy contrast, AIs will be trained in a cultural environment (including extensive language use) from the beginning, so this won’t be a source of large gains for later systems.\n\n\n\n\n\n\n[Ngo][6:01]  (Sep. 
22 comment on earlier draft)\n\n> \n> more specific version of Paul’s claim\n> \n> \n> \n\n\nBased on some of Paul’s recent comments, this may be what he intended all along; though I don’t recall his original writings on takeoff speeds making this specific argument.\n\n\n\n\n\n\n[Shulman][14:23]  (Sep. 25 comment)\n\n> \n> (I claim that the cultural environment was so limited before humans mainly because cultural accumulation is [very sensitive to transmission fidelity](https://royalsocietypublishing.org/doi/10.1098/rstb.2012.0119).)\n> \n> \n> \n\n\nThere can be other areas with superlinear effects from repeated application of  a skill. There’s reason to think that the most productive complex industries tend to have that character.\n\n\nMaking individual minds able to correctly execute long chains of reasoning by reducing per-step error rate could plausibly have very superlinear effects in programming, engineering, management, strategy, persuasion, etc. And you could have new forms of ‘super-culture’ that don’t work with humans.\n\n\n\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nIf true, this argument would weigh against Eliezer’s claims about agents which possess a core of general intelligence being able to easily apply that intelligence to a wide range of tasks. And I don’t think that Eliezer has a compelling alternative explanation of the key cognitive differences between chimps and humans (the closest I’ve seen in his writings is the brainstorming [at the end of this post](https://www.lesswrong.com/posts/XQirei3crsLxsCQoi/surprised-by-brains)).\n\n\nIf this is the case, I notice an analogy between Eliezer’s argument against Kurzweil, and my argument against Eliezer. Eliezer attempted to put microfoundations underneath the trend line of Moore’s law, which led to a different prediction than Kurzweil’s straightforward extrapolation. 
Similarly, my proposed microfoundational explanation of the chimp-human gap gives rise to a different prediction than Eliezer’s more straightforward, non-microfoundational extrapolation.\n\n\n\n\n\n\n[Yudkowsky][11:39]  (Sep. 25 comment)\n\n> \n> Similarly, my proposed microfoundational explanation of the chimp-human gap gives rise to a different prediction than Eliezer’s more straightforward, non-microfoundational extrapolation.\n> \n> \n> \n\n\nEliezer does not use “non-microfoundational extrapolations” for very much of anything, but there are obvious reasons why the greater Earth does not benefit from me winning debates through convincingly and correctly listing all the particular capabilities you need to add over and above what GPToid architectures can achieve, in order to achieve AGI.  Nobody else with a good model of larger reality will publicly describe such things in a way they believe is correct.  I prefer not to argue convincingly but wrongly.  But, no, it is not Eliezer’s way to sound confident about anything unless he thinks he has a more detailed picture of the microfoundations than the one you are currently using yourself.\n\n\n\n\n\n[Ngo][11:40]  (Sep. 29 comment)\nGood to know; apologies for the incorrect inference.\n\n\nGiven that this seems like a big sticking point in the debate overall, do you have any ideas about how to move forward while avoiding infohazards?\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nMy position makes some predictions about hypothetical cases:\n\n\n1. If chimpanzees had the same motivational and attention-guiding adaptations towards cultural learning and cooperation that humans do, and were raised in equally culturally-rich environments, then they could become economically productive workers in a range of jobs (primarily as manual laborers, but plausibly also for operating machinery, etc).\n\t1. 
Results from chimps raised in human families, like [Washoe](https://en.wikipedia.org/wiki/Washoe_(chimpanzee)), seem moderately impressive, although still very uncertain. There’s probably a lot of bias towards positive findings – but on the other hand, it’s only been done a handful of times, and I expect that more practice at it would lead to much better results.\n\t2. Comparisons between humans and chimps which *aren’t* raised in similar ways to humans are massively biased towards humans. For the purposes of evaluating general intelligence, comparisons between chimpanzees and [feral children](https://en.wikipedia.org/wiki/Feral_child) seem fairer (although it’s very hard to know how much the latter were affected by non-linguistic childhoods as opposed to abuse or pre-existing disabilities).\n2. Consider a hypothetical species which has the same level of “general intelligence” that chimpanzees currently have, but is as well-adapted to the domains of abstract reasoning and technological development as chimpanzee behaviour is to the domain of physical survival (e.g. because they evolved in an artificial environment where their fitness was primarily determined by their intellectual contributions). I claim that this species would have superhuman scientific research capabilities, and would be able to make progress in novel areas of science (analogously to how chimpanzees can currently learn to navigate novel physical landscapes).\n\t1. Insofar as Eliezer doubts this, but *does* believe that this species could outperform a society of village idiots at scientific research, then he needs to explain why the village-idiot-to-Einstein gap is so significant in this context but not in others.\n\t2. However, this is a pretty weird thought experiment, and maybe doesn’t add much to our existing intuitions about AIs. 
My main intention here is to point at how animal behaviour is *really really well-adapted* to physical environments, in a way which makes people wonder what it would be like to be *really really well-adapted* to intellectual environments.\n3. ~~I claim that the difficulty of human-level oracle AGIs matching humans~~ Consider an AI which has been trained only to answer questions, and is now human-level at doing so. I claim that the difficulty of this AI matching humans at a range of real-world tasks (without being specifically trained to do so) would be much closer to the difficulty of teaching chimps to do science, than the difficulty of teaching adult humans to do abstract reasoning about a new domain.\n\t1. The analogy here is: chimps have reasonably general intelligence, but it’s hard for them to apply it to science because they weren’t trained to apply intelligence to that. Likewise, human-level oracle AGIs have general intelligence, but it’ll be hard for them to apply it to influencing the world because they weren’t trained to apply intelligence to that.\n\n\n\n\n\n\n[Barnes][8:21]  (Sep. 23 comment on earlier draft)\n\n> \n> village-idiot-to-Einstein gap\n> \n> \n> \n\n\nI wonder to what extent you can model within-species intelligence differences partly just as something like hyperparameter search – if you have a billion humans with random variation in their neural/cognitive traits, the top human will be a lot better than average. Then you could say something like:\n\n\n* humans are the dumbest species you could have where the distribution of intelligence in each generation is sufficient for cultural accumulation\n* that by itself might not imply a big gap from chimps\n* but human society has much larger population, so the smartest individuals are much smarter\n\n\n\n\n\n\n[Ngo][9:05]  (Sep. 
23 comment on earlier draft)\nI think Eliezer’s response (which I’d agree with) would be that the cognitive difference between the best humans and normal humans is strongly constrained by the fact that we’re all one species who can interbreed with each other. And so our cognitive variation can’t be very big compared with inter-species variation (at the top end at least; although it could at the bottom end via things breaking).\n\n\n\n\n\n\n[Barnes][9:35]  (Sep. 23 comment on earlier draft)\nI think that’s not obviously true – it’s definitely possible that there’s a lot of random variation due to developmental variation etc. If that’s the case then population size could create large within-species differences\n\n\n\n\n\n\n[Yudkowsky][11:46]  (Sep. 25 comment)\n\n> \n> oracle AGIs\n> \n> \n> \n\n\nRemind me of what this is?  Surely you don’t just mean the AI that produces plans it doesn’t implement itself, because that AI becomes an agent by adding an external switch that routes its outputs to a motor; it can hardly be much cognitively different from an agent.  Then what do you mean, “oracle AGI”?\n\n\n(People tend to produce shallow specs of what they mean by “oracle” that make no sense in my microfoundations, a la “Just drive red cars but not blue cars!”, leading to my frequent reply, “Sorry, still AGI-complete in terms of the machinery you have to build to do that.”)\n\n\n\n\n\n[Ngo][11:44]  (Sep. 29 comment)\nEdited to clarify what I meant in this context (and remove the word “oracle” altogether).\n\n\n\n\n\n\n[Yudkowsky][12:01]  (Sep. 29 comment)\nMy reply holds just as much to “AIs that answer questions”; what restricted question set do you imagine suffices to save the world without dangerously generalizing internal engines?\n\n\n\n\n\n[Barnes][8:15]  (Sep. 
23 comment on earlier draft)\n\n> \n> The analogy here is: chimps have reasonably general intelligence, but it’s hard for them to apply it to science because they weren’t trained to apply intelligence to that. Likewise, human-level oracle AGIs have general intelligence, but it’ll be hard for them to apply it to influencing the world because they weren’t trained to apply intelligence to that.\n> \n> \n> \n\n\nthis is not intuitive to me; it seems pretty plausible that the subtasks of predicting the world and of influencing the world are much more similar than the subtasks of surviving in a chimp society are to the subtasks of doing science\n\n\n\n\n\n\n[Ngo][8:59]  (Sep. 23 comment on earlier draft)\nI think Eliezer’s position is that all of these tasks are fairly similar *if you have general intelligence*. E.g. he argued that the difference between very good theorem-proving and influencing the world is significantly smaller than people expect. So even if you’re right, I think his position is too strong for your claim to help him. (I expect him to say that I’m significantly overestimating the extent to which chimps are running general cognitive algorithms).\n\n\n\n\n\n\n[Barnes][9:33]  (Sep. 23 comment on earlier draft)\nI wasn’t trying to defend his position, just disagreeing with you 😛\n\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\n*More specific details*\n\n\nHere are three training regimes which I expect to contribute to AGI:\n\n\n* Self-supervised training – e.g. on internet text, code, books, videos, etc.\n* Task-based RL – agents are rewarded (likely via human feedback, and some version of iterated amplification) for doing well on bounded tasks.\n* Open-ended RL – agents are rewarded for achieving long-term goals in rich environments.\n\n\n\n\n\n\n[Yudkowsky][11:56]  (Sep. 
25 comment)\n\n> \n> bounded tasks\n> \n> \n> \n\n\nThere’s an interpretation of this I’d agree with, but all of the work is being carried by the *boundedness* of the tasks, little or none via the “human feedback” part which I shrug at, and none by the “iterated amplification” part since I consider that tech unlikely to exist before the world ends.\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nMost of my probability of catastrophe comes from AGIs trained primarily via open-ended RL. Although IA makes these scenarios less likely by making task-based RL more powerful, it doesn’t seem to me that IA tackles the hardest case (of aligning agents trained via open-ended RL) head-on. But disaster from open-ended RL also seems a long way away – mainly because getting long-term real-world feedback is very slow, and I expect it to be hard to create sufficiently rich artificial environments. By that point I do expect the strategic landscape to be significantly different, because of the impact of task-based RL.\n\n\n\n\n\n\n[Yudkowsky][11:57]  (Sep. 25 comment)\n\n> \n> a long way away\n> \n> \n> \n\n\nOh, definitely, at the present rates of progress we’ve got years, plural.\n\n\nThe [history of futurism](https://intelligence.org/2017/10/13/fire-alarm/) says that even saying that tends to be unreliable in the general case (people keep saying it right up until the Big Thing actually happens) and also that it’s rather a difficult form of knowledge to obtain more than a few years out.\n\n\n\n\n\n[Yudkowsky][12:01]  (Sep. 25 comment)\n\n> \n> hard to create sufficiently rich artificial environments\n> \n> \n> \n\n\nDisagree; I don’t think that making environments more difficult in a way that challenges the AI inside will prove to be a significant AI development bottleneck.  
Making simulations easy enough for current AIs to do interesting things in them, but hard enough that the things they do are not completely trivial, takes some work relevant to current levels of AI intelligence.  I think that making those environments more tractably challenging for smarter AIs is not likely to be nearly a bottleneck in progress, compared to making the AIs smarter and able to solve the environment.  It’s a one-way-hash, P-vs-NP style thing – not literally, just that general relationship between it taking a lower amount of effort to pose a problem such that solving it requires a higher amount of effort.\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nPerhaps the best way to pin down disagreements in our expectations about the effects of the strategic landscape is to identify some measures that could help to reduce AGI risk, and ask how seriously key decision-makers would need to take AGI risk for each measure to be plausible, and how powerful and competent they would need to be for that measure to make a significant difference. Actually, let’s lump these metrics together into a measure of “amount of competent power applied”. Some benchmarks, roughly in order (and focusing on the effort applied by the US):\n\n\n* Banning chemical/biological weapons\n* COVID\n\t+ Key points: mRNA vaccines, lockdowns, mask mandates\n* Nuclear non-proliferation\n\t+ Key points: [Nunn-Lugar Act](https://www.openphilanthropy.org/blog/ai-governance-grantmaking), [stuxnet](https://en.wikipedia.org/wiki/Stuxnet), various treaties\n* The International Space Station\n\t+ Cost to US: ~$75 billion\n* Climate change\n\t+ US expenditure: >$154 billion (but not very effectively)\n* Project Apollo\n\t+ Wikipedia says that Project Apollo “was the largest commitment of resources ($156 billion in 2019 US dollars) ever made by any nation in peacetime. 
At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities.”\n* WW1\n* WW2\n\n\n\n\n\n\n[Yudkowsky][12:02]  (Sep. 25 comment)\n\n> \n> WW2\n> \n> \n> \n\n\nThis level of effort starts to buy significant amounts of time.  This level will not be reached, nor approached, before the world ends.\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nHere are some wild speculations (I just came up with this framework, and haven’t thought about these claims very much):\n\n\n1. The US and China preventing any other country from becoming a leader in AI requires about as much competent power as banning chemical/biological weapons.\n2. The US and China enforcing a ban on AIs above a certain level of autonomy requires about as much competent power as the fight against climate change.\n\t1. In this scenario, all the standard forces which make other types of technological development illegal have pushed towards making autonomous AGI illegal too.\n3. Launching a good-faith joint US-China AGI project requires about as much competent power as launching Project Apollo.\n\t1. According to [this article](https://www.spacedaily.com/news/russia-97h.html), Kennedy (and later Johnson) made several offers (some of which were public) of a joint US-USSR Moon mission, which Khrushchev reportedly came close to accepting. Of course this is a long way from actually doing a joint project (and it’s not clear how reliable the source is), but it still surprised me a lot, given that I viewed the “space race” as basically a zero-sum prestige project. If your model predicted this, I’d be interested to hear why.\n\n\n\n\n\n\n[Yudkowsky][12:07]  (Sep. 25 comment)\n\n> \n> The US and China preventing any other country from becoming a leader in AI requires about as much competent power as banning chemical/biological weapons.\n> \n> \n> \n\n\nI believe this is wholly false.  On my model it requires closer to WW1 levels of effort.  
I don’t think you’re going to get it without credible threats of military action leveled at previously allied countries.\n\n\nAI is easier and more profitable to build than chemical / biological weapons, and correspondingly harder to ban.  Existing GPU factories need to be shut down and existing GPU clusters need to be banned and no duplicate of them can be allowed to arise, across many profiting countries that were previously military allies of the United States – which, barring some vast shift in world popular *and* elite opinion against AI (which is also not going to happen), would require a treaty those countries would be extremely disinclined to sign, especially if the treaty terms permitted the USA and China to forge ahead.\n\n\nThe reason why chem weapons bans were much easier was that people did not like chem weapons.  They were awful.  There was a perceived common public interest in nobody having chem weapons.  It was understood popularly and by elites to be a Prisoner’s Dilemma situation requiring enforcement to get to the Pareto optimum.  Nobody was profiting tons off the infrastructure that private parties could use to make chem weapons.\n\n\nAn AI ban is about as easy as banning advanced metal-forging techniques in current use so nobody can get ahead of the USA and China in making airplanes.  That would be HARD and likewise require credible threats of military action against former allies.\n\n\n“AI ban is as easy as a chem weapons ban” seems to me like politically crazy talk.  I’d expect a more politically experienced person to confirm this.\n\n\n\n\n\n[Shulman][14:32]  (Sep. 25 comment)\nAI ban much, much harder than chemical weapons ban. Indeed chemical weapons were of low military utility, that was central to the deal, and they have still been used subsequently.\n\n\n\n> \n> An AI ban is about as easy as banning advanced metal-forging techniques in current use so nobody can get ahead of the USA and China in making airplanes. 
That would be HARD and likewise require credible threats of military action against former allies.\n> \n> \n> \n\n\nIf large amounts of compute relative to today are needed (and presumably Eliezer rejects this), the fact that there is only a single global leading node chip supply chain makes it vastly easier than metal forging, which exists throughout the world and is vastly cheaper.\n\n\nSharing with allies (and at least embedding allies to monitor US compliance) also reduces the conflict side.\n\n\nOTOH, if compute requirements were super low then it gets a lot worse.\n\n\nAnd the biological weapons ban failed completely: the Soviets built an enormous bioweapons program, the largest ever, after agreeing to the ban, and the US couldn’t even tell for sure they were doing so.\n\n\n\n\n\n\n[Yudkowsky][18:15]  (Oct. 4 comment)\nI’ve updated somewhat off of Carl Shulman’s argument that there’s only one chip supply chain which goes through eg a single manufacturer of lithography machines (ASML), which could maybe make a lock on AI chips possible with only WW1 levels of cooperation instead of WW2.\n\n\nThat said, I worry that, barring WW2 levels, this might not last very long if other countries started duplicating the supply chain, even if they had to go back one or two process nodes on the chips?  There’s a difference between the proposition “ASML has a lock on the lithography market right now” and “if aliens landed and seized ASML, Earth would forever after be unable to build another lithography plant”.  I mean, maybe that’s just true because we lost technology and can’t rebuild old bridges either, but it’s at least less obvious.\n\n\nLaunching Tomahawk cruise missiles at any attempt anywhere to build a new ASML, is getting back into “military threats against former military allies” territory and hence what I termed WW2 levels of cooperation.\n\n\n\n\n\n[Shulman][18:30]  (Oct. 
4 comment)\nChina has been trying for some time to build its own and has failed with tens of billions of dollars (but has captured some lagging node share), but would be substantially more likely to succeed with a trillion dollar investment. That said, it is hard to throw money at these things and the tons of tacit knowledge/culture/supply chain networks are tough to replicate. Also many ripoffs of the semiconductor subsidies have occurred. Getting more NASA/Boeing and less SpaceX is a plausible outcome even with huge investment.\n\n\nThey are trying to hire people away from the existing supply chain to take its expertise and building domestic skills with the lagging nodes.\n\n\n\n\n\n\n[Yudkowsky][19:14]  (Oct. 4 comment)\nDoes that same theory predict that if aliens land and grab some but not all of the current ASML personnel, Earth is thereby successfully taken hostage for years, because Earth has trouble rebuilding ASML, which had the irreproducible lineage of masters and apprentices dating back to the era of Lost Civilization?  Or would Earth be much better at this than China, on your model?\n\n\n\n\n\n[Shulman][19:31]  (Oct. 4 comment)\nI’ll read that as including the many suppliers of ASML (one EUV machine has over 100,000 parts, many incredibly fancy or unique). It’s just a matter of how many years it takes. I think Earth fails to rebuild that capacity in 2 years but succeeds in 10.\n\n\n“A study this spring by Boston Consulting Group and the Semiconductor Industry Association estimated that creating a self-sufficient chip supply chain would take at least $1 trillion and sharply increase prices for chips and products made with them…The situation underscores the crucial role played by ASML, a once obscure company whose market value now exceeds $285 billion. It is “the most important company you never heard of,” said C.J. 
Muse, an analyst at Evercore ISI.”\n\n\n[https://www.nytimes.com/2021/07/04/technology/tech-cold-war-chips.html](https://www.nytimes.com/2021/07/04/technology/tech-cold-war-chips.html)\n\n\n\n\n\n\n[Yudkowsky][19:59]  (Oct. 4 comment)\nNo in 2 years, yes in 10 years sounds reasonable to me for this hypothetical scenario, as far as I know in my limited knowledge.\n\n\n\n\n\n[Yudkowsky][12:10]  (Sep. 25 comment)\n\n> \n> Launching a good-faith joint US-China AGI project requires about as much competent power as launching Project Apollo.\n> \n> \n> \n\n\nIt’s really weird, relative to my own model, that you put the item that the US and China can bilaterally decide to do all by themselves, without threats of military action against their former allies, as more difficult than the items that require conditions imposed on other developed countries that don’t want them.\n\n\nPolitical coordination is hard.  No, seriously, it’s hard.  It comes with a difficulty penalty that scales with the number of countries, how complete the buy-in has to be, and how much their elites and population don’t want to do what you want them to do relative to how much elites and population agree that it needs doing (where this very rapidly goes to “impossible” or “WW1/WW2” as they don’t particularly want to do your thing).\n\n\n\n\n\n[Ngo]  (Sep. 25 Google Doc)\nSo far I haven’t talked about how much competent power I actually expect people to apply to AI governance. I don’t think it’s useful for Eliezer and me to debate this directly, since it’s largely downstream from most of the other disagreements we’ve had. In particular, I model him as believing that there’ll be very little competent power applied to prevent AI risk from governments and wider society, partly because he expects a faster takeoff than I do, and partly because he has a lower opinion of governmental competence than I do. 
But for the record, it seems likely to me that there’ll be as much competent effort put into reducing AI risk by governments and wider society as there has been into fighting COVID; and plausibly (but not likely) as much as fighting climate change.\n\n\nOne key factor is my expectation that arguments about the importance of alignment will become much stronger as we discover more compelling examples of misalignment. I don’t currently have strong opinions about how compelling the worst examples of misalignment before catastrophe are likely to be; but identifying and publicising them seems like a particularly effective form of advocacy, and one which we should prepare for in advance.\n\n\nThe predictable accumulation of easily-accessible evidence that AI risk is important is one example of a more general principle: that it’s much easier to understand, publicise, and solve problems as those problems get closer and more concrete. This seems like a strong effect to me, and a key reason why so many predictions of doom throughout history have failed to come true, even when they seemed compelling at the time they were made.\n\n\nUpon reflection, however, I think that even taking this effect into account, the levels of competent power required for the interventions mentioned above are too high to justify the level of optimism about AI governance that I started our debate with. On the other hand, I found Eliezer’s arguments about consequentialism less convincing than I expected. 
Overall I’ve updated that AI risk is higher than I previously believed; though I expect my views to be quite unsettled while I think more, and talk to more people, about specific governance interventions and scenarios.\n\n\n\n\n\n \n\n\n\nThe post [Ngo’s view on alignment difficulty](https://intelligence.org/2021/12/14/ngos-view-on-alignment-difficulty/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-12-15T06:14:05Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "339acf74f03137f0be6c877d6735de8c", "title": "Conversation on technology forecasting and gradualism", "url": "https://intelligence.org/2021/12/09/conversation-on-technology-forecasting-and-gradualism/", "source": "miri", "source_type": "blog", "text": "This post is a transcript of a multi-day discussion between Paul Christiano, Richard Ngo, Eliezer Yudkowsky, Rob Bensinger, Holden Karnofsky, Rohin Shah, Carl Shulman, Nate Soares, and Jaan Tallinn, following up on the Yudkowsky/Christiano debate in [1](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds), [2](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-shulman-and-yudkowsky-on-ai-progress), [3](https://www.lesswrong.com/posts/sCCdCLPN9E3YvdZhj/shulman-and-yudkowsky-on-ai-progress), and [4](https://www.lesswrong.com/posts/fS7Zdj2e2xMqE6qja/more-christiano-cotra-and-yudkowsky-on-ai-progress).\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | |\n| --- | --- |\n|  Chat by Paul, Richard, and Eliezer  |  Other chat  |\n\n\n\n \n\n\n12. Follow-ups to the Christiano/Yudkowsky conversation\n-------------------------------------------------------\n\n\n \n\n\n### 12.1. 
Bensinger and Shah on prototypes and technological forecasting\n\n\n \n\n\n\n[Bensinger][16:22] \nQuoth Paul:\n\n\n\n> \n> seems like you have to make the wright flyer much better before it’s important, and that it becomes more like an industry as that happens, and that this is intimately related to why so few people were working on it\n> \n> \n> \n\n\nIs this basically saying ‘the Wright brothers didn’t personally capture much value by inventing heavier-than-air flying machines, and this was foreseeable, which is why there wasn’t a huge industry effort already underway to try to build such machines as fast as possible.’ ?\n\n\nMy maybe-wrong model of Eliezer says here ‘the Wright brothers knew a (Thielian) secret’, while my maybe-wrong model of Paul instead says:\n\n\n* They didn’t know a secret — it was obvious to tons of people that you could do something sorta like what the Wright brothers did and thereby invent airplanes; the Wright brothers just had unusual non-monetary goals that made them passionate to do a thing most people didn’t care about.\n* Or maybe it’s better to say: they knew some specific secrets about physics/engineering, but only because other people *correctly* saw ‘there are secrets to be found here, but they’re stamp-collecting secrets of little economic value to me, so I won’t bother to learn the secrets’. 
~Everyone knows where the treasure is located, and ~everyone knows the treasure won’t make you rich.\n\n\n\n\n\n\n[Yudkowsky][17:24]\nMy model of Paul says there could be a secret, but only because the industry was tiny and the invention was nearly worthless directly.\n\n\n\n\n\n| |\n| --- |\n| [Cotra: ➕] |\n\n\n\n\n\n\n[Christiano][17:53]\nI mean, I think they knew a bit of stuff, but it generally takes a lot of stuff to make something valuable, and the more people have been looking around in an area the more confident you can be that it’s going to take a lot of stuff to do much better, and it starts to look like an extremely strong regularity for big industries like ML or semiconductors\n\n\nit’s pretty rare to find small ideas that don’t take a bunch of work to have big impacts\n\n\nI don’t know exactly what a thielian secret is (haven’t read the reference and just have a vibe)\n\n\nstraightening it out a bit, I have 2 beliefs that combine disjunctively: (i) generally it takes a lot of work to do stuff, as a strong empirical fact about technology, (ii) generally if the returns are bigger there are more people working on it, as a slightly-less-strong fact about sociology\n\n\n\n\n\n\n[Bensinger][18:09]\nsecrets = important undiscovered information (or information that’s been discovered but isn’t widely known), that you can use to get an edge in something. \n\n\nThere seems to be a Paul/Eliezer disagreement about how common these are in general. 
And maybe a disagreement about how much more efficiently humanity discovers and propagates secrets as you scale up the secret’s value?\n\n\n\n\n\n\n[Yudkowsky][18:35]\nMany times it has taken much work to do stuff; there are further key assertions here about “It takes $100 billion” and “Multiple parties will invest $10B first” and “$10B gets you a lot of benefit first because scaling is smooth and without really large thresholds”.\n\n\nEliezer is like “ah, yes, sometimes it takes 20 or even 200 people to do stuff, but core researchers often don’t scale well past 50, and there aren’t always predecessors that could do a bunch of the same stuff” even though Eliezer agrees with “it often takes a lot of work to do stuff”. More premises are needed for the conclusion; that one alone does not distinguish Eliezer and Paul by enough.\n\n\n\n\n\n[Bensinger][20:03]\nMy guess is that everyone agrees with claims 1, 2, and 3 here (please let me know if I’m wrong!):\n\n\n1. The history of humanity looks less like **Long Series of Cheat Codes World**, and more like **Well-Designed Game World**.\n\n\nIn Long Series of Cheat Codes World, human history looks like this, over and over: Some guy found a cheat code that totally outclasses everyone else and makes him God or Emperor, until everyone else starts using the cheat code too (if the Emperor allows it). After which things are maybe normal for another 50 years, until a new Cheat Code arises that makes its first adopters invincible gods relative to the previous tech generation, and then the cycle repeats.\n\n\nIn Well-Designed Game World, you can sometimes eke out a small advantage, and the balance isn’t *perfect*, but it’s pretty good and the leveling-up tends to be gradual. A level 100 character totally outclasses a level 1 character, and some level transitions are a bigger deal than others, but there’s no level that makes you a god relative to the people one level below you.\n\n\n2. General intelligence took over the world once. 
Someone who updated on that fact but otherwise hasn’t thought much about the topic should not consider it ‘bonkers’ that machine general intelligence could take over the world too, even though they should still consider it ‘bonkers’ that eg a coffee startup could take over the world.\n\n\n(Because beverages have never taken over the world before, whereas general intelligence has; and because our inside-view models of coffee and of general intelligence make it a lot harder to imagine plausible mechanisms by which coffee could make someone emperor, kill all humans, etc., compared to general intelligence.)\n\n\n(In the game analogy, the situation is a bit like ‘I’ve never found a crazy cheat code or exploit in this game, but I haven’t ruled out that there is one, and I heard of a character once who did a lot of crazy stuff that’s at least *suggestive* that she might have had a cheat code.’)\n\n\n3. AGI is arising in a world where agents with science and civilization already exist, whereas humans didn’t arise in such a world. This is one reason to think AGI might not take over the world, but it’s *not* a strong enough consideration on its own to make the scenario ‘bonkers’ (because AGIs are likely to differ from humans in many respects, and it wouldn’t obviously be bonkers if the first AGIs turned out to be qualitatively way smarter, cheaper to run, etc.).\n\n\n—\n\n\nIf folks agree with the above, then I’m confused about how one updates from the above epistemic state to ‘bonkers’.\n\n\nIt was to a large extent physics facts that determined how easy it was to understand the feasibility of nukes without (say) decades of very niche specialized study. Likewise, it was physics facts that determined you need rare materials, many scientists, and a large engineering+infrastructure project to build a nuke. 
In a world where the *physics* of nukes resulted in it being some PhD’s quiet ‘nobody thinks this will work’ project like Andrew Wiles secretly working on a proof of Fermat’s Last Theorem for seven years, that would have *happened*.\n\n\nIf an alien came to me in 1800 and told me that totally new physics would let future humans build city-destroying superbombs, then I don’t see why I should have considered it bonkers that it might be lone mad scientists rather than nations who built the first superbomb. The ‘lone mad scientist’ scenario sounds more conjunctive to me (assumes the mad scientist knows something that isn’t widely known, AND has the ability to act on that knowledge without tons of resources), so I guess it should have gotten less probability, but maybe not dramatically less?\n\n\n‘Mad scientist builds city-destroying weapon in basement’ sounds wild to me, but I feel like almost all of the actual unlikeliness comes from the ‘city-destroying weapons exist at all’ part, and then the other parts only moderately lower the probability.\n\n\nLikewise, I feel like the prima-facie craziness of basement AGI mostly comes from ‘generally intelligence is a crazy thing, it’s wild that anything could be that high-impact’, and a much smaller amount comes from ‘it’s wild that something important could happen in some person’s basement’.\n\n\n—\n\n\nIt *does* structurally make sense to me that Paul might know things I don’t about GPT-3 and/or humans that make it obvious to him that we roughly know the roadmap to AGI and it’s this.\n\n\nIf the entire ‘it’s bonkers that some niche part of ML could crack open AGI in 2026 and reveal that GPT-3 (and the mainstream-in-2026 stuff) was on a very different part of the tech tree’ view is coming from a detailed inside-view model of intelligence like this, then that immediately ends my confusion about the argument structure.\n\n\nI don’t understand why you think you have the roadmap, and given a high-confidence roadmap I’m guessing 
I’d still put more probability than you on someone finding a very different, shorter path that works too. But the *argument structure* “roadmap therefore bonkers” makes sense to me.\n\n\nIf there are meant to be *other* arguments against ‘high-impact AGI via niche ideas/techniques’ that are strong enough to make it bonkers, then I remain confused about the argument structure and how it can carry that much weight.\n\n\nI can imagine an inside-view model of human cognition, GPT-3 cognition, etc. that tells you ‘AGI coming from nowhere in 3 years is bonkers’; I can’t imagine an ML-is-a-reasonably-efficient-market argument that does the same, because even a perfectly efficient market isn’t *omniscient* and can still be surprised by undiscovered physics facts that tell you ‘nukes are relatively easy to build’ and ‘the fastest path to nukes is relatively hard to figure out’.\n\n\n(Caveat: I’m using the ‘basement nukes’ and ‘Fermat’s last theorem’ analogy because it helps clarify the principles involved, not because I think AGI will be that extreme on the spectrum.)\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky: +1] |\n\n\n\nOh, I also wouldn’t be confused by a view like “I think it’s 25% likely we’ll see a more Eliezer-ish world. But it sounds like Eliezer is, like, 90% confident that will happen, and *that level of confidence* (and/or the weak reasoning he’s provided for that confidence) seems bonkers to me.”\n\n\nThe thing I’d be confused by is e.g. 
“ML is efficient-ish, therefore *the out-of-the-blue-AGI scenario itself* is bonkers and gets, like, 5% probability.”\n\n\n\n\n\n\n\n[Shah][1:58]\n(I’m unclear on whether this is acceptable for this channel, please let me know if not)\n\n\n\n> \n> I can’t imagine an ML-is-a-reasonably-efficient-market argument that does the same, because even a perfectly efficient market isn’t omniscient and can still be surprised by undiscovered physics facts\n> \n> \n> \n\n\nI think this seems right as a first pass.\n\n\nSuppose we then make the empirical observation that in tons and tons of other fields, it is extremely rare that people discover new facts that lead to immediate impact. (Set aside for now whether or not that’s true; assume that it is.) Two ways you could react to this:\n\n\n1. Different fields are different fields. It’s not like there’s a common generative process that outputs a distribution of facts and how hard they are to find that is common across fields. Since there’s no common generative process, facts about field X shouldn’t be expected to transfer to make predictions about field Y.\n\n\n2. There’s some latent reason, that we don’t currently know, that makes it so that it is rare for newly discovered facts to lead to immediate impact.\n\n\nIt seems like you’re saying that (2) is not a reasonable reaction (i.e. “not a valid argument structure”), and I don’t know why. There are lots of things we don’t know, is it really so bad to posit one more?\n\n\n(Once we agree on the argument structure, we should then talk about e.g. 
reasons why such a latent reason can’t exist, or possible guesses as to what the latent reason is, etc, but fundamentally I feel generally okay with starting out with “there’s probably some reason for this empirical observation, and absent additional information, I should expect that reason to continue to hold”.)\n\n\n\n\n\n\n[Bensinger][3:15]\nI think 2 is a valid argument structure, but I didn’t mention it because I’d be surprised if it had enough evidential weight (in this case) to produce an ‘update to bonkers’. I’d love to hear more about this if anyone thinks I’m under-weighting this factor. (Or any others I left out!)\n\n\n\n\n\n\n[Shah][23:57]\nIdk if it gets all the way to “bonkers”, but (2) seems pretty strong to me, and is how I would interpret Paul-style arguments on timelines/takeoff if I were taking on what-I-believe-to-be your framework\n\n\n\n\n\n\n[Bensinger][11:06]\nWell, I’d love to hear more about that!\n\n\nAnother way of getting at my intuition: I feel like a view that assigns very small probability to ‘suddenly vastly superhuman AI, because something that high-impact hasn’t happened before’\n\n\n(which still seems weird to me, because physics doesn’t know what ‘impact’ is and I don’t see what physical mechanism could forbid it that strongly and generally, short of simulation hypotheses)\n\n\n… would also assign very small probability in 1800 to ‘given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth’.\n\n\nBut this seems flatly wrong to me — if you buy that the bomb works by a totally different mechanism (and exploits a different physics regime) than eg gunpowder, then the output of the bomb is a *physics* question, and I don’t see how we can concentrate our probability mass much without probing the relevant physics. 
The history of boat and building sizes is a negligible input to ‘given a totally new kind of bomb that suddenly lets us (at least) destroy cities, what is the total destructive power of the bomb?’.\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky: +1] |\n\n\n\n(Obviously the bomb *didn’t* destroy the Earth, and I wouldn’t be surprised if there’s some Bayesian evidence or method-for-picking-a-prior that could have validly helped you suspect as much in 1800? But it would be a suspicion, not a confident claim.)\n\n\n\n\n\n\n[Shah][1:45]\n\n> \n> would also assign very small probability in 1800 to ‘given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth’\n> \n> \n> \n\n\n(As phrased you also have to take into account the question of whether humans would deploy the resulting superbomb, but I’ll ignore that effect for now.)\n\n\nI think this isn’t exactly right. The “totally new physics” part seems important to update on.\n\n\nLet’s suppose that, in the reference class we built of boat and building sizes, empirically nukes were the 1 technology out of 20 that had property X. (Maybe X is something like “discontinuous jump in things humans care about” or “immediate large impact on the world” or so on.) Then, I think in 1800 you assign ~5% to ‘the first superbomb at least powerful enough to level cities will ignite the atmosphere or otherwise destroy the Earth’.\n\n\nOnce you know more details about how the bomb works, you should be able to update away from 5%. Specifically, “entirely new physics” is an important detail that causes you to update away from 5%. 
I wouldn’t go as far as you in throwing out reference classes entirely at that point — there can still be unknown latent factors that apply at the level of physics — but I agree reference classes look harder to use in this case.\n\n\nWith AI, I start from ~5% and then I don’t really see any particular detail for AI that I think I should strongly update on. My impression is that Eliezer thinks that “general intelligence” is a qualitatively different sort of thing than that-which-neural-nets-are-doing, and maybe that’s what’s analogous to “entirely new physics”. I’m pretty unconvinced of this, but something in this genre feels quite crux-y for me.\n\n\nActually, I think I’ve lost the point of this analogy. What’s the claim for AI that’s analogous to\n\n\n\n> \n> ‘given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth’\n> \n> \n> \n\n\n?\n\n\nLike, it seems like this is saying “We figure out how to build a new technology that does X. 
What’s the chance it has side effect Y?” Where X and Y are basically unrelated.\n\n\nI was previously interpreting the argument as “if we know there’s a new superbomb based on totally new physics, and we know that the first such superbomb is at least capable of leveling cities, what’s the probability it would have enough destructive force to also destroy the world”, but upon rereading that doesn’t actually seem to be what you were gesturing at.\n\n\n\n\n\n\n[Bensinger][3:08]\nI’m basically responding to this thing Ajeya wrote:\n\n\n\n> \n> I think Paul’s view would say:\n> \n> \n> * Things certainly happen for the first time\n> * When they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever\n> * When they’re making a big impact on the world, it’s after a lot of investment and research, like commercial aircrafts in the decades after Kitty Hawk or like the investments people are in the middle of making now with AI that can assist with coding\n> \n> \n> \n\n\nTo which my reply is: I agree that the first AGI systems will be shitty compared to *later* AGI systems. 
But Ajeya’s Paul-argument seems to additionally require that AGI systems be relatively unimpressive at cognition compared to preceding AI systems that weren’t AGI.\n\n\nIf this is because of some general law that things are shitty / low-impact when they “happen for the first time”, then I don’t understand what physical mechanism could produce such a general law that holds with such force.\n\n\nAs I see it, physics ‘doesn’t care’ about human conceptions of impactfulness, and will instead produce AGI prototypes, aircraft prototypes, and nuke prototypes that have as much impact as is implied by the detailed case-specific workings of general intelligence, flight, and nuclear chain reactions respectively.\n\n\nWe could frame the analogy as:\n\n\n* ‘If there’s a year where AI goes from being unable to do competitive par-human reasoning in the hard sciences, to being able to do such reasoning, we should estimate the impact of the first such systems by drawing on our beliefs about par-human scientific reasoning itself.’\n* Likewise: ‘If there’s a year where explosives go from being unable to destroy cities to being able to destroy cities, we should estimate the impact of the first such explosives by drawing on our beliefs about how (current or future) physics might allow a city to be destroyed, and what other effects or side-effects such a process might have. We should spend little or no time thinking about the impactfulness of the first steam engine or the first telescope.’\n\n\n\n\n\n\n[Shah][3:14]\nSeems like your argument is something like “when there’s a zero-to-one transition, then you have to make predictions based on reasoning about the technology itself”. I think in that case I’d say this thing from above:\n\n\n\n> \n> My impression is that Eliezer thinks that “general intelligence” is a qualitatively different sort of thing than that-which-neural-nets-are-doing, and maybe that’s what’s analogous to “entirely new physics”. 
I’m pretty unconvinced of this, but something in this genre feels quite crux-y for me.\n> \n> \n> \n\n\n(Like, you wouldn’t a priori expect anything special to happen once conventional bombs become big enough to demolish a football stadium for the first time. It’s because nukes are based on “totally new physics” that you might expect unprecedented new impacts from nukes. What’s the analogous thing for AGI? Why isn’t AGI just regular AI but scaled up in a way that’s pretty continuous?)\n\n\nI’m curious if you’d change your mind if you were convinced that AGI is just regular AI scaled up, with no qualitatively new methods — I expect you wouldn’t but idk why\n\n\n\n\n\n\n[Bensinger][4:03]\nIn my own head, the way I think of ‘AGI’ is basically: “Something happened that allows humans to do biochemistry, materials science, particle physics, etc., even though none of those things were present in our environment of evolutionary adaptedness. Eventually, AI will similarly be able to generalize to biochemistry, materials science, particle physics, etc. We can call that kind of AI ‘AGI’.”\n\n\nThere might be facts I’m unaware of that justify conclusions like ‘AGI is mostly just a bigger version of current ML systems like GPT-3’, and there might be facts that justify conclusions like ‘AGI will be preceded by a long chain of predecessors, each slightly less general and slightly less capable than its successor’.\n\n\nBut if so, I’m assuming those will be facts about CS, human cognition, etc., not at all a list of a hundred facts like ‘the first steam engine didn’t take over the world’, ‘the first telescope didn’t take over the world’…. 
Because the physics of brains doesn’t care about those things, and because in discussing brains we’re already in ‘things that have been known to take over the world’ territory.\n\n\n(I think that paying much attention *at all* to the technology-wide base rate for ‘does this allow you to take over the world?’, once you already know you’re doing something like ‘inventing a new human’, doesn’t really make sense at all? It sounds to me like going to a bookstore and then repeatedly worrying ‘What if they don’t have the book I’m looking for? Most stores don’t sell books at all, so this one might not have the one I want.’ If you know it’s a *book* store, then you shouldn’t be thinking at that level of generality at all; the base rate just goes out the window.)\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky:] +1 |\n\n\n\nMy way of thinking about AGI is pretty different from saying AGI follows ‘totally new mystery physics’ — I’m explicitly anchoring to a known phenomenon, humans.\n\n\nThe analogous thing for nukes might be ‘we’re going to build a bomb that uses processes kind of like the ones found in the Sun in order to produce enough energy to destroy (at least) a city’.\n\n\n\n\n\n\n[Shah][0:44]\n\n> \n> The analogous thing for nukes might be ‘we’re going to build a bomb that uses processes kind of like the ones found in the Sun in order to produce enough energy to destroy (at least) a city’.\n> \n> \n> \n\n\n(And I assume the contentious claim is “that bomb would then ignite the atmosphere, destroy the world, or otherwise have hugely more impact than just destroying a city”.)\n\n\nIn 1800, we say “well, we’ll probably just make existing fires / bombs bigger and bigger until they can destroy a city, so we shouldn’t expect anything particularly novel or crazy to happen”, and assign (say) 5% to the claim.\n\n\nThere is a wrinkle: you said it was processes like the ones found in the Sun. 
Idk what the state of knowledge was like in 1800, but maybe they knew that the Sun couldn’t be a conventional fire. If so, then they could update to a higher probability.\n\n\n(You could also infer that since someone bothered to mention “processes like the ones found in the Sun”, those processes must be ones we don’t know yet, which also allows you to make that update. I’m going to ignore that effect, but I’ll note that this is one way in which the phrasing of the claim is incorrectly pushing you in the direction of “assign higher probability”, and I think a similar thing happens for AI when saying “processes like those in the human brain”.)\n\n\nWith AI I don’t see why the human brain is a different kind of thing than (say) convnets. So I feel more inclined to just take the starting prior of 5%.\n\n\nPresumably you think that assigning 5% to the nukes claim in 1800 was incorrect, even if that perspective doesn’t know that the Sun is not just a very big conventional fire. I’m not sure why this is. According to me this is just the natural thing to do because things are usually continuous and so in the absence of detailed knowledge that’s what your prior should be. (If I had to justify this, I’d point to facts about bridges and buildings and materials science and so on.)\n\n\n\n> \n> there might be facts that justify conclusions like ‘AGI will be preceded by a long chain of slightly-less-general, slightly-less-capable successors’.\n> \n> \n> \n\n\nThe frame of “justify[ing] conclusions” seems to ask for more confidence than I expect to get. Rather I feel like I’m setting an initial prior that could then be changed radically by engaging with details of the technology. And then I’m further saying that I don’t see any particular details that should cause me to update away significantly (but they could arise in the future).\n\n\nFor example, suppose I have a random sentence generator, and I take the first well-formed claim it spits out. 
(I’m using a random sentence generator so that we don’t update on the process by which the claim was generated.) This claim turns out to be “Alice has a fake skeleton hidden inside her home”. Let’s say we know nothing about Alice except that she is a real person somewhere in the US who has a home. You can still assign < 10% probability to the claim, and take 10:1 bets with people who don’t know any additional details about Alice. Nonetheless, as you learn more about Alice, you could update towards higher probability, e.g. if you learn that she loves Halloween, that’s a modest update; if you learn she runs a haunted house at Halloween every year, that’s a large update; if you go to her house and see the fake skeleton you can update to ~100%. That’s the sort of situation I feel like we’re in with AI.\n\n\nIf you asked me what facts justify the conclusion that Alice probably doesn’t have a fake skeleton hidden inside her house, I could only point to reference classes, and all the other people I’ve met who don’t have such skeletons. This is not engaging with the details of Alice’s situation, and I could similarly say “if I wanted to know about Alice, surely I should spend most of my time learning about Alice, rather than looking at what Bob and Carol did”. Nonetheless, it is still correct to assign < 10% to the claim.\n\n\nIt really does seem to come down to — why is human-level intelligence such a special turning point that should receive special treatment? 
Just as you wouldn’t give special treatment to “the first time bridges were longer than 10m”, it doesn’t seem obvious that there’s anything all that special at the point where AIs reach human-level intelligence (at least for the topics we’re discussing; there are obvious reasons that’s an important point when talking about the economic impact of AI)\n\n\n\n\n\n\n[Tallinn][7:04]\nFWIW, my current 1-paragraph compression of the debate positions is something like:\n\n\n**catastrophists**: when evolution was gradually improving hominid brains, suddenly something clicked – it stumbled upon the core of general reasoning – and hominids went from banana classifiers to spaceship builders. hence we should expect a similar (but much sharper, given the process speeds) discontinuity with AI.\n\n\n**gradualists**: no, there was no discontinuity with hominids per se; human brains merely reached a threshold that enabled cultural accumulation (and in a meaningful sense it was *culture* that built those spaceships). similarly, we should not expect sudden discontinuities with AI per se, just an accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away.\n\n\n—\n\n\none possible crux to explore is “how thick is culture”: is it something that AGI will quickly decouple from (dropping directly to physics-based ontology instead) OR will culture remain AGI’s main environment/ontology for at least a decade.\n\n\n\n\n\n\n[Ngo][11:18]\n\n> \n> FWIW, my current 1-paragraph compression of the debate positions is something like:\n> \n> \n> **catastrophists**: when evolution was gradually improving hominid brains, suddenly something clicked – it stumbled upon the core of general reasoning – and hominids went from banana classifiers to spaceship builders. 
hence we should expect a similar (but much sharper, given the process speeds) discontinuity with AI.\n> \n> \n> **gradualists**: no, there was no discontinuity with hominids per se; human brains merely reached a threshold that enabled cultural accumulation (and in a meaningful sense it was *culture* that built those spaceships). similarly, we should not expect sudden discontinuities with AI per se, just an accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away.\n> \n> \n> —\n> \n> \n> one possible crux to explore is “how thick is culture”: is it something that AGI will quickly decouple from (dropping directly to physics-based ontology instead) OR will culture remain AGI’s main environment/ontology for at least a decade.\n> \n> \n> \n\n\nClarification: in the sentence “just an accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away”, what work is “cultural changes” doing? Could we just say “changes” (including economic, cultural, etc) instead?\n\n\n\n> \n> In my own head, the way I think of ‘AGI’ is basically: “Something happened that allows humans to do biochemistry, materials science, particle physics, etc., even though none of those things were present in our environment of evolutionary adaptedness. Eventually, AI will similarly be able to generalize to biochemistry, materials science, particle physics, etc. 
We can call that kind of AI ‘AGI’.”\n> \n> \n> There might be facts I’m unaware of that justify conclusions like ‘AGI is mostly just a bigger version of current ML systems like GPT-3’, and there might be facts that justify conclusions like ‘AGI will be preceded by a long chain of predecessors, each slightly less general and slightly less capable than its successor’.\n> \n> \n> But if so, I’m assuming those will be facts about CS, human cognition, etc., not at all a list of a hundred facts like ‘the first steam engine didn’t take over the world’, ‘the first telescope didn’t take over the world’…. Because the physics of brains doesn’t care about those things, and because in discussing brains we’re already in ‘things that have been known to take over the world’ territory.\n> \n> \n> (I think that paying much attention *at all* to the technology-wide base rate for ‘does this allow you to take over the world?’, once you already know you’re doing something like ‘inventing a new human’, doesn’t really make sense at all? It sounds to me like going to a bookstore and then repeatedly worrying ‘What if they don’t have the book I’m looking for? Most stores don’t sell books at all, so this one might not have the one I want.’ If you know it’s a *book* store, then you shouldn’t be thinking at that level of generality at all; the base rate just goes out the window.)\n> \n> \n> \n\n\nI’m broadly sympathetic to the idea that claims about AI cognition should be weighted more highly than claims about historical examples. But I think you’re underrating historical examples. There are at least three ways those examples can be informative – by telling us about:\n\n\n1. Domain similarities\n\n\n2. Human effort and insight\n\n\n3. Human predictive biases\n\n\nYou’re mainly arguing against 1, by saying that there are facts about physics, and facts about intelligence, and they’re not very related to each other. 
This argument is fairly compelling to me (although it still seems plausible that there are deep similarities which we don’t understand yet – e.g. the laws of statistics, which apply to many different domains).\n\n\nBut historical examples can also tell us about #2 – for instance, by giving evidence that great leaps of insight are rare, and so if there exists a path to AGI which doesn’t require great leaps of insight, that path is more likely than one which does.\n\n\nAnd they can also tell us about #3 – for instance, by giving evidence that we usually overestimate the differences between old and new technologies, and so therefore those same biases might be relevant to our expectations about AGI.\n\n\n\n\n\n\n[Bensinger][12:31]\nIn the ‘alien warns about nukes’ example, my intuition is that ‘great leaps of insight are rare’ and ‘a random person is likely to overestimate the importance of the first steam engines and telescopes’ tell me practically nothing, compared to what even a small amount of high-uncertainty physics reasoning tells me.\n\n\nThe ‘great leap of insight’ part tells me ~nothing because even if there’s an easy low-insight path to nukes and a hard high-insight path, I don’t thereby know the explosive yield of a bomb on either path (either absolutely or relatively); it depends on how nukes work.\n\n\nLikewise, I don’t think ‘a random person is likely to overestimate the first steam engine’ really helps with estimating the power of nuclear explosions. I could *imagine* a world where this bias exists and is so powerful and inescapable it ends up being a big weight on the scales, but I don’t think we live in that world?\n\n\nI’m not even sure that a random person *would* overestimate the importance of prototypes in general. Probably, I guess? 
But my intuition is still that you’re better off in 1800 focusing on physics calculations rather than the tug-of-war ‘maybe X is cognitively biasing me in *this* way, no wait maybe Y is cognitively biasing me in this other way, no wait…’\n\n\nOur situation might not be analogous to the 1800-nukes scenario (e.g., maybe we know by observation that current ML systems are basically scaled-down humans). But if it *is* analogous, then I think the history-of-technology argument is not very useful here.\n\n\n\n\n\n\n[Tallinn][13:00]\nre “cultural changes”: yeah, sorry, i meant “culture” in very general “substrate of human society” sense. “cultural changes” would then include things like changes in power structures and division of labour, but *not* things like “diamondoid bacteria killing all humans in 1 second” (that would be a change in humans, not in the culture)\n\n\n\n\n\n\n[Shah][13:09]\nI want to note that I agree with your (Rob’s) latest response, but I continue to think most of the action is in whether AGI involves something analogous to “totally new physics”, where I would guess “no” (and would do so particularly strongly for shorter timelines).\n\n\n(And I would still point to historical examples for “many new technologies don’t involve something analogous to ‘totally new physics'”, and I’ll note that Richard’s #2 about human effort and insight still applies)\n\n\n\n\n\n \n\n\n### 12.2. Yudkowsky on Steve Jobs and gradualism\n\n\n \n\n\n\n[Yudkowsky][15:26]\nSo recently I was talking with various people about the question of why, for example, Steve Jobs could not find somebody else with UI taste 90% as good as his own, to take over Apple, even while being able to pay infinite money. 
A successful founder I was talking to was like, “Yep, I sure would pay $100 million to hire somebody who could do 80% of what I can do, in fact, people have earned more than that for doing less.”\n\n\nI wondered if OpenPhil was an exception to this rule, and people with more contact with OpenPhil seemed to think that OpenPhil did not have 80% of a Holden Karnofsky (besides Holden).\n\n\nAnd of course, what sparked this whole thought process in me, was that I’d staked all the effort I put into the Less Wrong sequences, into the belief that if I’d managed to bring myself into existence, then there ought to be lots of young near-Eliezers in Earth’s personspace including some with more math talent or physical stamina not so unusually low, who could be started down the path to being Eliezer by being given a much larger dose of concentrated hints than I got, starting off the compounding cascade of skill formations that I saw as having been responsible for producing me, “on purpose instead of by accident”.\n\n\nI see my gambit as having largely failed, just like the successful founder couldn’t pay $100 million to find somebody 80% similar in capabilities to himself, and just like Steve Jobs could not find anyone to take over Apple for presumably much larger amounts of money and status and power. Nick Beckstead had some interesting stories about various ways that Steve Jobs had tried to locate successors (which I wasn’t even aware of).\n\n\nI see a plausible generalization as being a “Sparse World Hypothesis”: The shadow of an Earth with eight billion people, projected into some dimensions, is much sparser than plausible arguments might lead you to believe. Interesting people have few neighbors, even when their properties are collapsed and projected onto lower-dimensional tests of output production. 
The process of forming an interesting person passes through enough 0-1 critical thresholds that all have to be passed simultaneously in order to start a process of gaining compound interest in various skills, that they then cannot find other people who are 80% as good as what they *do* (never mind being 80% similar to them as people).\n\n\nI would expect human beings to start out much denser in a space of origins than AI projects, and for the thresholds and compounding cascades of our mental lives to be much less sharp than chimpanzee-human gaps.\n\n\nGradualism about humans sure sounds totally reasonable! It is in fact much more plausible-sounding a priori than the corresponding proposition about AI projects! I staked years of my own life on the incredibly reasonable-sounding theory that if one actual Eliezer existed then there should be lots of neighbors near myself that I could catalyze into existence by removing some of the accidental steps from the process that had accidentally produced me.\n\n\nBut it didn’t work in real life because plausible-sounding gradualist arguments just… plain don’t work in real life even though they sure sound plausible. 
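The “many simultaneous 0-1 thresholds” point can be put in toy numbers. A minimal sketch, assuming (purely for illustration) independent traits and a top-1% cutoff on each; real traits are partially correlated, which raises the counts, but the multiplicative thinning is the core effect:

```python
# Toy illustration (invented numbers, not from the discussion) of why
# requiring many simultaneous thresholds makes near-neighbors sparse:
# even among 8 billion people, demanding the top 1% on each of k
# roughly independent traits thins the pool multiplicatively.

population = 8_000_000_000
top_fraction = 0.01  # hypothetical top-1% cutoff per trait

for k in range(1, 7):
    expected = population * top_fraction ** k
    print(f"{k} traits: ~{expected:,.1f} people expected")
```

By five or six joint thresholds, the expected count drops below one person, which is the sense in which a projection of eight billion people can still be sparse.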
I spent a lot of time arguing with Robin Hanson, who was more gradualist than I was, and was taken by surprise when reality itself was much less gradualist than I was.\n\n\nMy model has Paul or Carl coming back with some story about how, why, no, it is totally reasonable that Steve Jobs couldn’t find a human who was 90% as good at a problem class as Steve Jobs to take over Apple for billions of dollars despite looking, and, why, no, this is not at all a falsified retroprediction of the same gradualist reasoning that says a leading AI project should be inside a dense space of AI projects that projects onto a dense space of capabilities such that it has near neighbors.\n\n\nIf so, I was not able to use this hypothetical model of *selective* gradualist reasoning to deduce in advance that replacements for myself would be sparse in the same sort of space and I’d end up unable to replace myself.\n\n\nI do not really believe that, without benefits of hindsight, the advance predictions of gradualism would differ between the two cases.\n\n\nI think if you don’t peek at the answer book in advance, the same sort of person who finds it totally reasonable to expect successful AI projects to have close lesser earlier neighbors, would also find it totally reasonable to think that Steve Jobs definitely ought to be able to find somebody 90% as good to take over his job – and should actually be able to find somebody *much* better because Jobs gets to run a wider search and offer more incentive than when Jobs was wandering into early involvement in Apple.\n\n\nIt’s completely reasonable-sounding! Totally plausible to a human ear! Reality disagrees. 
Jobs tried to find a successor, couldn’t, and now the largest company in the world by market cap seems no longer capable of sending the iPhones back to the designers and asking them to do something important differently.\n\n\nThis is part of the story for why I put gradualism into a mental class of “arguments that sound plausible and just fail in real life to be binding on reality; reality says ‘so what’ and goes off to do something else”.\n\n\n\n\n\n[Christiano][17:46]  (Sep. 28)\nIt feels to me like a common pattern is: I say that ML in particular, and most technologies in general, seem to improve quite gradually on metrics that people care about or track. You say that some kind of “gradualism” worldview predicts a bunch of other stuff (some claim about markets or about steve jobs or whatever that feels closely related on your view but not mine). But it feels to me like there are just a ton of technologies, and a ton of AI benchmarks, and those are just *much* more analogous to “future AI progress.” I know that to you this feels like reference class tennis, but I think I legitimately don’t understand what kind of approach to forecasting you are using that lets you just make (what I see as) the obvious boring prediction about all of the non-AGI technologies.\n\n\nPerhaps you are saying that symmetrically you don’t understand what approach to forecasting I’m using, that would lead me to predict that technologies improve gradually yet people vary greatly in their abilities. 
To me it feels like the simplest thing in the world: I expect future technological progress in domain X to be like past progress in domain X, and future technological progress to be like past technological progress, and future market moves to be like past market moves, and future elections to be like past elections.\n\n\nAnd it seems like you *must* be doing something that ends up making almost the same predictions as that almost all the time, which is why you don’t get incredibly surprised every single year by continuing boring and unsurprising progress in batteries or solar panels or robots or ML or computers or microscopes or whatever. Like it’s fine if you say “Yes, those areas have trend breaks sometimes” but there are *so many* boring years that you must somehow be doing something like having the baseline “this year is probably going to be boring.”\n\n\nSuch that intuitively it feels to me like the disagreement between us *must* be in the part where AGI feels to me like it is similar to AI-to-date and feels to you like it is very different and better compared to evolution of life or humans.\n\n\nIt has to be the kind of argument that you can make about progress-of-AI-on-metrics-people-care-about, but *not* progress-of-other-technologies-on-metrics-people-care-about, otherwise it seems like you are getting hammered every boring year for every boring technology.\n\n\nI’m glad we have the disagreement on record where I expect ML progress to continue to get less jumpy as the field grows, and maybe the thing to do is just poke more at that since it is definitely a place where I gut level expect to win bayes points and so could legitimately change my mind on the “which kinds of epistemic practices work better?” question. 
But it feels like it’s not the main action, the main action has got to be about you thinking that there is a really impactful change somewhere between {modern AI, lower animals} and {AGI, humans} that doesn’t look like ongoing progress in AI.\n\n\nI think “would GPT-3 + 5 person-years of engineering effort foom?” feels closer to core to me.\n\n\n(That said, the way AI could be different need not feel like “progress is lumpier,” could totally be more like “Progress is always kind of lumpy, which Paul calls ‘pretty smooth’ and Eliezer calls ‘pretty lumpy’ and doesn’t lead to any disagreements; but Eliezer thinks AGI is different in that kind-of-lumpy progress leads to fast takeoff, while Paul thinks it just leads to kind-of-lumpy increases in the metrics people care about or track.”)\n\n\n\n\n\n\n[Yudkowsky][7:46]  (Sep. 29)\n\n> \n> I think “would GPT-3 + 5 person-years of engineering effort foom?” feels closer to core to me.\n> \n> \n> \n\n\nI truly and legitimately cannot tell which side of this you think we should respectively be on. My guess is you’re against GPT-3 fooming because it’s too low-effort and a short timeline, even though I’m the one who thinks GPT-3 isn’t on a smooth continuum with AGI??\n\n\nWith that said, the rest of this feels on-target to me; I sure do feel like {natural selection, humans, AGI} form an obvious set with each other, though even there the internal differences are too vast and the data too scarce for legit outside viewing.\n\n\n\n> \n> I truly and legitimately cannot tell which side of this you think we should respectively be on. 
My guess is you’re against GPT-3 fooming because it’s too low-effort and a short timeline, even though I’m the one who thinks GPT-3 isn’t on a smooth continuum with AGI??\n> \n> \n> \n\n\nI mean I obviously think you can foom starting from an empty Python file with 5 person-years of effort if you’ve got the Textbook From The Future; you wouldn’t use the GPT code or model for anything in that, the Textbook says to throw it out and start over.\n\n\n\n\n\n[Christiano][9:45]  (Sep. 29)\nI think GPT-3 will foom given very little engineering effort, it will just be much slower than the human foom\n\n\nand then that timeline will get faster and faster over time\n\n\nit’s also fair to say that it wouldn’t foom because the computers would break before it figured out how to repair them (and it would run out of metal before it figured out how to mine it, etc.), depending on exactly how you define “foom,” but the point is that “you can repair the computers faster than they break” happens much before you can outrun human civilization\n\n\nso the relevant threshold you cross is the one where you are outrunning civilization\n\n\n(and my best guess about human evolution is pretty similar, it looks like humans are smart enough to foom over a few hundred thousand years, and that we were the ones to foom because that is also roughly how long it was taking evolution to meaningfully improve our cognition—if we foomed slower it would have instead been a smarter successor who overtook us, if we foomed faster it would have instead been a dumber predecessor, though this is *much* less of a sure-thing than the AI case because natural selection is not trying to make something that fooms)\n\n\nand regarding {natural selection, humans, AGI} the main question is why modern AI and homo erectus (or even chimps) aren’t in the set\n\n\nit feels like the core disagreement is that I mostly see a difference in degree between the various animals, and between modern AI and future AI, a difference that 
is likely to be covered by gradual improvements that are pretty analogous to contemporary improvements, and so as the AI community making contemporary improvements grows I get more and more confident that TAI will be a giant industry rather than an innovation\n\n\n\n\n\n\n[Ngo][5:45]\nDo you have a source on Jobs having looked hard for a successor who wasn’t Tim Cook?\n\n\nAlso, I don’t have strong opinions about how well Apple is doing now, so I default to looking at the share price, which seems very healthy.\n\n\n(Although I note in advance that this doesn’t feel like a particularly important point, roughly for the same reason that Paul mentioned: gradualism about Steve Jobs doesn’t seem like a central example of the type of gradualism that informs beliefs about AI development.)\n\n\n\n\n\n\n[Yudkowsky][10:40]\nMy source is literally “my memory of stuff that Nick Beckstead just said to me in person”, maybe he can say more if we invite him.\n\n\nI’m not quite sure what to do with the notion that “gradualism about Steve Jobs” is somehow less to be expected than gradualism about AGI projects. Humans are GIs. They are *extremely* similar to each other design-wise. There are a *lot* of humans, billions of them, many many many more humans than I expect AGI projects. Despite this the leading edge of human-GIs is sparse enough in the capability space that there is no 90%-of-Steve-Jobs that Jobs can locate, and there is no 90%-of-von-Neumann known to 20th century history. 
If we are not to take any evidence about this to A-GIs, then I do not understand the rules you’re using to apply gradualism to some domains but not others.\n\n\nAnd to be explicit, a skeptic who doesn’t find these divisions intuitive, might well ask, “Is gradualism perhaps isomorphic to ‘The coin always comes up heads on Heady occasions’, where ‘Heady’ occasions are determined by an obscure intuitive method going through some complicated nonverbalizable steps one of which is unfortunately ‘check whether the coin actually came up heads’?”\n\n\n(As for my own theory, it’s always been that AGIs are mostly like AGIs and not very much like humans or the airplane-manufacturing industry, and I do not, on my own account of things, appeal much to supposed outside viewing or base rates.)\n\n\n\n\n\n[Shulman][11:11]\nI think the way to apply it is to use observable data (drawn widely) and math.\n\n\nSteve Jobs does look like a (high) draw (selected for its height, in the sparsest tail of the CEO distribution) out of the economic and psychometric literature (using the same kind of approach I use in other areas like estimating effects of introducing slightly superhuman abilities on science, the genetics of height, or wealth distributions). You have roughly normal or log-normal distributions on some measures of ability (with fatter tails when there are some big factors present, e.g. super-tall people are enriched for normal common variants for height but are more frequent than a Gaussian estimated from the middle range because of some weird disease/hormonal large effects). And we have lots of empirical data about the thickness and gaps there. Then you have a couple effects that can make returns in wealth/output created larger.\n\n\nYou get amplification from winner-take-all markets, IT, and scale that let higher ability add value to more places. This is the same effect that lets top modern musicians make so much money. 
Better CEOs get allocated to bigger companies because multiplicative management decisions are worth more in big companies. Software engineering becomes more valuable as the market for software grows.\n\n\nWealth effects are amplified by multiplicative growth (noise in a given period multiplies wealth for the rest of the series, and systematic biases from abilities can grow exponentially or superexponentially over a lifetime), and there are some versions of that in gaining expensive-to-acquire human capital (like fame for Hollywood actors, or experience using incredibly expensive machinery or companies).\n\n\nAnd we can read off the distributions of income, wealth, market share, lead time in innovations, scientometrics, etc.\n\n\nThat sort of data lead you to expect cutting edge tech to be months to a few years ahead of followers, winner-take-all tech markets to a few leading firms and often a clearly dominant one (but not driving an expectation of being able to safely rest on laurels for years while others innovate without a moat like network effects). That’s one of my longstanding arguments with Robin Hanson, that his model has more even capabilities and market share for AGI/WBE than typically observed (he says that AGI software will have to be more diverse requiring more specialized companies, to contribute so much GDP).\n\n\nIt is tough to sample for extreme values on multiple traits at once, superexponentially tough as you go out or have more criteria. CEOs of big companies are smarter than average, taller than average, have better social skills on average, but you can’t find people who are near the top on several of those.\n\n\n\n\n\nCorrelations between the things help, but it’s tough. E.g. 
if you have thousands of people in a class on a measure of cognitive skill, and you select on only partially correlated matters of personality, interest, motivation, prior experience, etc, the math says it gets thin and you’ll find different combos (and today we see more representation of different profiles of abilities, including rare and valuable ones, in this community)\n\n\nI think the bigger update for me from trying to expand high-quality save the world efforts has been on the funny personality traits/habits of mind that need to be selected and their scarcity.\n\n\n\n\n\n\n[Karnofsky][11:30]\nA cpl comments, without commitment to respond to responses:\n\n\n1. Something in the zone of “context / experience / obsession” seems important for explaining the Steve Jobs type thing. It seems to me that people who enter an area early tend to maintain an edge even over more talented people who enter later – examples are not just founder/CEO types but also early employees of some companies who are more experienced with higher-level stuff (and often know the history of how they got there) better than later-entering people.\n\n\n2. 
I’m not sure if I am just rephrasing something Carl or Paul has said, but something that bugs me a lot about the Rob/Eliezer arguments is that I feel like if I accept >5% probability for the kind of jump they’re talking about, I don’t have a great understanding of how I avoid giving >5% to a kajillion other claims from various startups that they’re about to revolutionize their industry, in ways that seem inside-view plausible and seem to equally “depend on facts about some physical domain rather than facts about reference classes.”\n\n\nThe thing that actually most comes to mind here is Thiel – he has been a phenomenal investor financially, but he has also invested by now in a lot of “atoms” startups with big stories about what they might do, and I don’t think any have come close to reaching those visions (though they have sometimes made $ by doing something orders of magnitude less exciting).\n\n\nIf a big crux here is “whether Thielian secrets exist” this track record could be significant.\n\n\nI think I might update if I had a cleaner sense of how I could take on this kind of “Well, if it is just a fact about physics that I have no idea about, it can’t be that unlikely” view without then betting on a lot of other inside-view-plausible breakthroughs that haven’t happened. Right now all I can say to imitate this lens is “General intelligence is ‘different'”\n\n\nI don’t feel the same way about “AI might take over the world” – I feel like I have good reasons this applies to AI and not a bunch of other stuff\n\n\n\n\n\n\n[Soares][11:11]\nOk, a few notes from me (feel free to ignore):\n\n\n1. It seems to me like the convo here is half attempting-to-crux and half attempting-to-distill-out-a-bet. I’m interested in focusing explicitly on cruxing for the time being, for whatever that’s worth. (It seems to me like y’all’re already trending in that direction.)\n\n\n2. 
It seems to me that one big revealed difference between the Eliezerverse and the Paulverse is something like:\n\n\n* In the Paulverse, we already have basically all the fundamental insights we need for AGI, and now it’s just a matter of painstaking scaling.\n* In the Eliezerverse, there are large insights yet missing (and once they’re found we have plenty of reason to expect things to go quickly).\n\n\nFor instance, in Eliezerverse they say “The Wright flyer didn’t need to have historical precedents, it was allowed to just start flying. Similarly, the AI systems of tomorrow are allowed to just start GIing without historical precedent.”, and in the Paulverse they say “The analog of the Wright flyer has already happened, it was Alexnet, we are now in the phase analogous to the slow grinding transition from human flight to commercially viable human flight.”\n\n\n(This seems to me like basically what Ajeya articulated [upthread](https://www.lesswrong.com/posts/fS7Zdj2e2xMqE6qja/more-christiano-cotra-and-yudkowsky-on-ai-progress).)\n\n\n3. It seems to me that another revealed intuition-difference is in the difficulty that people have operating each other’s models. This is evidenced by, eg, Eliezer/Rob saying things like “I don’t know how to operate the gradualness model without making a bunch of bad predictions about Steve Jobs”, and Paul/Holden responding with things like “I don’t know how to operate the secrets-exist model without making a bunch of bad predictions about material startups”.\n\n\nI’m not sure whether this is a shallower or deeper disagreement than (2). 
I’d be interested in further attempts to dig into the questions of how to operate the models, in hopes that the disagreement looks interestingly different once both parties can at least operate the other model.\n\n\n\n\n\n| |\n| --- |\n| [Tallinn: ➕] |\n\n\n\n\n\n\n \n\n\n\nThe post [Conversation on technology forecasting and gradualism](https://intelligence.org/2021/12/09/conversation-on-technology-forecasting-and-gradualism/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-12-09T21:29:42Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "f60711b99621dbb73dd5bfd93064ccac", "title": "More Christiano, Cotra, and Yudkowsky on AI progress", "url": "https://intelligence.org/2021/12/06/more-christiano-cotra-and-yudkowsky-on-ai-progress/", "source": "miri", "source_type": "blog", "text": "This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky (with some comments from Rob Bensinger, Richard Ngo, and Carl Shulman), continuing from [1](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds), [2](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-shulman-and-yudkowsky-on-ai-progress), and [3](https://www.lesswrong.com/posts/sCCdCLPN9E3YvdZhj/shulman-and-yudkowsky-on-ai-progress).\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | |\n| --- | --- |\n|  Chat by Paul and Eliezer  |  Other chat  |\n\n\n\n \n\n\n### 10.2. Prototypes, historical perspectives, and betting\n\n\n \n\n\n\n[Bensinger][4:25]\nI feel confused about the role “innovations are almost always low-impact” plays in slow-takeoff-ish views.\n\n\nSuppose I think that there’s some reachable algorithm that’s different from current approaches, and can do par-human scientific reasoning without requiring tons of compute.\n\n\nThe existence or nonexistence of such an algorithm is just a fact about the physical world. 
If I imagine one universe where such an algorithm exists, and another where it doesn’t, I don’t see why I should expect that one of those worlds has more discontinuous change in GWP, ship sizes, bridge lengths, explosive yields, etc. (outside of any discontinuities caused by the advent of humans and the advent of AGI)? What do these CS facts have to do with the other facts?\n\n\nBut AI Impacts seems to think there’s an important connection, and a large number of facts of the form ‘steamships aren’t like nukes’ seem to undergird a lot of Paul’s confidence that the scenario I described —\n\n\n(“there’s some reachable algorithm that’s different from current approaches, and can do par-human scientific reasoning without requiring tons of compute.”)\n\n\n— is crazy talk. (Unless I’m misunderstanding. As seems actually pretty likely to me!)\n\n\n(E.g., Paul says “To me your model just seems crazy, and you are saying it predicts crazy stuff at the end but no crazy stuff beforehand”, and one of the threads of the timelines conversation has been Paul asking stuff like “do you want to give any example other than nuclear weapons of technologies with the kind of discontinuous impact you are describing?”.)\n\n\nPossibilities that came to mind for me:\n\n\n1. The argument is ‘reality keeps surprising us with how continuous everything else is, so we seem to have a cognitive bias favoring discontinuity, so we should have a skeptical prior about *our ability to think our way to ‘X is discontinuous’* since our brains are apparently too broken to do that well?\n\n\n(But to get from 1 to ‘discontinuity models are batshit’ we surely need something more probability-mass-concentrating than just a bias argument?)\n\n\n2. The commonality between steamship sizes, bridge sizes, etc. and AGI is something like ‘how tractable is the world?’. 
A highly tractable world, one whose principles are easy to understand and leverage, will tend to have more world-shatteringly huge historical breakthroughs in various problems, *and* will tend to see a larger impact from the advent of humans and the advent of AGI.\n\n\nOur world looks much less tractable, so even if there’s a secret sauce to building AGI, we should expect the resultant AGI to be a lot less impactful.\n\n\n\n\n\n\n[Ngo][5:06]\nI endorse #2 (although I think more weakly than Paul does) and would also add #3: another commonality is something like “how competitive is innovation?”\n\n\n\n\n\n\n[Shulman][8:22]\n@RobBensinger It’s showing us a fact about the vast space of ideas and technologies we’ve already explored that they are not so concentrated and lumpy that the law of large numbers doesn’t work well as a first approximation in a world with thousands or millions of people contributing. And that specifically includes past computer science innovation.\n\n\nSo the ‘we find a secret sauce algorithm that causes a massive unprecedented performance jump, without crappier predecessors’ is a ‘separate, additional miracle’ at exactly the same time as the intelligence explosion is getting going. You can get hyperbolic acceleration from increasing feedbacks from AI to AI hardware and software, including crazy scale-up at the end, as part of a default model. But adding on to it that AGI is hit via an extremely large performance jump of a type that is very rare, takes a big probability penalty.\n\n\nAnd the history of human brains doesn’t seem to provide strong evidence of a fundamental software innovation, vs hardware innovation and gradual increases in selection applied to cognition/communication/culture.\n\n\nThe fact that, e.g. 
AIs are mastering so much math and language while still wielding vastly infrahuman brain-equivalents, and crossing human competence in many domains (where there was ongoing effort) over decades is significant evidence for something smoother than the development of modern humans and their culture.\n\n\nThat leaves me not expecting a simultaneous unusual massive human concentrated algorithmic leap with AGI, although I expect wildly accelerating progress from increasing feedbacks at that time. Crossing a given milestone is disproportionately likely to happen in the face of an unusually friendly part/jump of a tech tree (like AlexNet/the neural networks->GPU transition) but still mostly not, and likely not from an unprecedented in computer science algorithmic change.\n\n\n\n\n\n\n\n\n| |\n| --- |\n| [Cotra: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][11:26][11:37]\n\n> \n> The existence or nonexistence of such an algorithm is just a fact about the physical world. If I imagine one universe where such an algorithm exists, and another where it doesn’t, I don’t see why I should expect that one of those worlds has more discontinuous change in GWP, ship sizes, bridge lengths, explosive yields, etc. (outside of any discontinuities caused by the advent of humans and the advent of AGI)? What do these CS facts have to do with the other facts?\n> \n> \n> \n\n\nI want to flag strong agreement with this. I am not talking about change in ship sizes because that is relevant in any visible way on my model; I’m talking about it in hopes that I can somehow unravel Carl and Paul’s model, which talks a whole lot about this being Relevant even though that continues to not seem correlated to me across possible worlds.\n\n\nI think a lot in terms of “does this style of thinking seem to have any ability to bind to reality”? 
A lot of styles of thinking in futurism just don’t.\n\n\nI imagine Carl and Paul as standing near the dawn of hominids asking, “Okay, let’s try to measure how often previous adaptations resulted in simultaneous fitness improvements across a wide range of environmental challenges” or “what’s the previous record on an organism becoming more able to survive in a different temperature range over a 100-year period” or “can we look at the variance between species in how high they fly and calculate how surprising it would be for a species to make it out of the atmosphere”\n\n\nAnd all of reality is standing somewhere else, going on ahead to do its own thing.\n\n\nNow maybe this is not the Carl and Paul viewpoint but if so I don’t understand how not. It’s not that viewpoint plus a much narrower view of relevance, because AI Impacts got sent out to measure bridge sizes.\n\n\nI go ahead and talk about these subjects, in part because maybe I can figure out some way to unravel the viewpoint on its own terms, in part because maybe Carl and Paul can show that they have a style of thinking that works in its own right and that I don’t understand, and in part because people like Paul’s nonconcrete cheerful writing better and prefer to live there mentally and I have to engage on their terms because they sure won’t engage on mine.\n\n\n\n\n\n\nBut I do not actually think that bridge lengths or atomic weapons have anything to do with this.\n\n\nCarl and Paul may be doing something sophisticated but wordless, where they fit a sophisticated but wordless universal model of technological permittivity to bridge lengths, then have a wordless model of cognitive scaling in the back of their minds, then get a different prediction of Final Days behavior, then come back to me and say, “Well, if you’ve got such a different prediction of Final Days behavior, can you show me some really large bridges?”\n\n\nBut this is not spelled out in the writing – which, I do emphasize, is a social observation 
that would be predicted regardless, because other people have not invested a ton of character points in the ability to spell things out, and a supersupermajority would just plain lack the writing talent for it.\n\n\nAnd what other EAs reading it are thinking, I expect, is plain old Robin-Hanson-style [reference class tennis](https://www.lesswrong.com/posts/FsfnDfADftGDYeG4c/outside-view-as-conversation-halter) of “Why would you expect *intelligence* to scale differently from *bridges*, where are all the *big bridges*?”\n\n\n\n\n\n[Cotra][11:36][11:40]\n(Just want to interject that Carl has higher P(doom) than Paul and has also critiqued Paul for not being more concrete, and I doubt that this is the source of the common disagreements that Paul/Carl both have with Eliezer)\n\n\n\n\n\n\n\nFrom my perspective the thing the AI impacts investigation is asking is something like “When people are putting lots of resources into improving some technology, how often is it the case that someone can find a cool innovation that improves things a lot relative to the baseline?” I think that your response to that is something like “Sure, if the broad AI market were efficient and everyone were investigating the right lines of research, then AI progress might be smooth, but AGI would have also been developed way sooner. We can’t safely assume that AGI is like an industry where lots of people are pushing toward the same thing”\n\n\nBut it’s not assuming a great structural similarity between bridges and AI, except that they’re both things that humans are trying hard to find ways to improve\n\n\n\n\n\n\n[Yudkowsky][11:42]\nI can imagine writing responses like that, if I was engaging on somebody else’s terms. 
As with [Eliezer-2012’s engagement with Pat Modesto](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing) against the careful proof that HPMOR cannot possibly become one of the measurably most popular fanfictions, I would never think anything like that inside my own brain.\n\n\nMaybe I just need to do a thing that I have not done before, and set my little $6000 Roth IRA to track a bunch of investments that Carl and/or Paul tell me to make, so that my brain will actually track the results, and I will actually get a chance to see this weird style of reasoning produce amazing results.\n\n\n\n\n\n[Bensinger][11:44]\n\n> \n> Sure, if the broad AI market were efficient and everyone were investigating the right lines of research, then AI progress might be smooth\n> \n> \n> \n\n\nPresumably also “‘AI progress’ subsumes many different kinds of cognition, we don’t currently have baby AGIs, and when we do figure out how to build AGI the very *beginning* of the curve (the Wright flyer moment, or something very shortly after) will correspond to a huge capability increase.”\n\n\n\n\n\n\n[Yudkowsky][11:46]\nI think there’s some much larger scale in which it’s worth mentioning that on my own terms of engagement I do not naturally think like this. I don’t feel like you could get Great Insight by figuring out what the predecessor technologies must have been of the Wright Flyer, finding industries that were making use of them, and then saying Behold the Heralds of the Wright Flyer. It’s not a style of thought binding upon reality.\n\n\nThey built the Wright Flyer. It flew. Previous stuff didn’t fly. It happens. 
Even if you yell a lot at reality and try to force it into an order, that’s still what your actual experience of the surprising Future will be like, you’ll just be more surprised by it.\n\n\nLike you can super want Technologies to be Heralded by Predecessors which were Also Profitable but on my native viewpoint this is, like, somebody with a historical axe to grind, going back and trying to make all the history books read like this, when I have no experience of people who were alive at the time making gloriously correct futuristic predictions using this kind of thinking.\n\n\n\n\n\n[Cotra][11:53]\nI think Paul’s view would say:\n\n\n* Things certainly happen for the first time\n* When they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever\n* When they’re making a big impact on the world, it’s after a lot of investment and research, like commercial aircraft in the decades after Kitty Hawk or like the investments people are in the middle of making now with AI that can assist with coding\n\n\nPaul’s view says that the Kitty Hawk moment *already happened for the kind of AI that will be super transformative and could kill us all*, and like the historical Kitty Hawk moment, it was not immediately a huge deal\n\n\n\n\n\n\n[Yudkowsky][11:56]\nThere is, I think, a really basic difference of thinking here, which is that on my view, AGI erupting is just a Thing That Happens and not part of a Historical Worldview or a Great Trend.\n\n\nHuman intelligence wasn’t part of a grand story reflected in all parts of the ecology, it just happened in a particular species.\n\n\nNow afterwards, of course, you can go back and draw all kinds of Grand Trends into which this Thing Happening was perfectly and beautifully fitted, and yet, it does not seem to me that people have a very good track record of thereby predicting in advance what surprising news story they will see next – with some rare, 
narrow-superforecasting-technique exceptions, like the Things chart on a steady graph and we *know solidly what a threshold on that graph corresponds to* and that threshold is not too far away compared to the previous length of the chart.\n\n\nOne day the Wright Flyer flew. Anybody *in the future with benefit of hindsight*, who wanted to, could fit that into a grand story about flying, industry, travel, technology, whatever; if they’d been on the ground at the time, they would not have thereby had much luck predicting the Wright Flyer. It can be *fit into* a grand story but on the ground it’s just a thing that happened. It had some prior causes but it was not thereby constrained to fit into a storyline in which it was the plot climax of those prior causes.\n\n\nMy worldview sure does permit there to be predecessor technologies and for them to have some kind of impact and for some company to make a profit, but it is not nearly as interested in that stuff, on a very basic level, because it does not think that the AGI Thing Happening is the plot climax of a story about the Previous Stuff Happening.\n\n\n\n\n\n[Cotra][12:01]\nThe fact that you express this kind of view about AGI erupting one day is why I thought your thing in IEM was saying there was a major algorithmic innovation *from chimps to humans*, that humans were qualitatively and not just quantitatively better than chimps and this was not because of their larger brain size primarily. But I’m confused because upthread in the discussion of evolution you were emphasizing much more that there was an innovation between dinosaurs and primates, not that there was an innovation between chimps and humans, and you seemed more open to the chimp/human diff being quantitative and brain-size driven than I had thought you’d be. 
But being open to the chimp-human diff being quantitative/brain-size-driven suggests to me that you should be more open than you are to AGI being developed by slow grinding on the same shit, instead of erupting without much precedent?\n\n\n\n\n\n\n[Yudkowsky][12:01]\nI think you’re confusing a meta-level viewpoint with an object-level viewpoint.\n\n\nThe Wright Flyer does not need to be made out of completely different materials from all previous travel devices, in order for the Wright Flyer to be a Thing That Happened One Day which wasn’t the plot climax of a grand story about Travel and which people at the time could not have gotten very far in advance-predicting by reasoning about which materials were being used in which conveyances and whether those conveyances looked like they’d be about to start flying.\n\n\nIt is the very viewpoint to which I am objecting, which keeps on asking me, metaphorically speaking, to explain how the Wright Flyer could have been made of completely different materials in order for it to be allowed to be so discontinuous with the rest of the Travel story of which it is part.\n\n\nOn my viewpoint they’re just *different stories* so the Wright Flyer is allowed to be its own thing *even though* it is not made out of an unprecedented new kind of steel that floats.\n\n\n\n\n\n[Cotra][12:06]\nThe claim I’m making is that Paul’s view predicts a lag and a lot of investment between the first flight and aircraft making a big impact on the travel industry, and predicts that the first flight wouldn’t have immediately made a big impact on the travel industry. 
In other words Kitty Hawk isn’t a discontinuity in the Paul view because the metrics he’d expect to be continuous are the ones that large numbers of people are trying hard to optimize, like cost per mile traveled or whatnot, not metrics that almost nobody is trying to optimize, like “height flown.”\n\n\nIn other words, it sounds like you’re saying:\n\n\n* Kitty Hawk is analogous to AGI erupting\n* Previous history of travel is analogous to pre-AGI history of AI\n\n\nWhile Paul is saying:\n\n\n* Kitty Hawk is analogous to e.g. AlexNet\n* Later history of aircraft is analogous to the post-AlexNet story of AI which we’re in the middle of living, and will continue on to make huge Singularity-causing impacts on the world\n\n\n\n\n\n\n[Yudkowsky][12:09]\nWell, unfortunately, Paul and I both seem to believe that our models follow from observing the present-day world, rather than being incompatible with it, and so when we demand of each other that we produce some surprising bold prediction about the present-day world, we both tend to end up disappointed.\n\n\nI would like, of course, for Paul’s surprisingly narrow vision of a world governed by tightly bound stories and predictable trends, to produce some concrete bold prediction of the next few years which no ordinary superforecaster would produce, but Paul is not under the impression that his own worldview is similarly strange and narrow, and so has some difficulty in answering this request.\n\n\n\n\n\n[Cotra][12:09]\nBut Paul offered to bet with you about literally any quantity you choose?\n\n\n\n\n\n\n[Yudkowsky][12:10]\nI did assume that required an actual disagreement, eg, I cannot just go look up something superforecasters are very confident about and then demand Paul to bet against it.\n\n\n\n\n\n[Cotra][12:12]\nIt still sounds to me like “take a basket of N performance metrics, bet that the model size to perf trend will break upward in > K of them within e.g. 
2 or 3 years” should sound good to you, I’m confused why that didn’t. If it does and it’s just about the legwork then I think we could get someone to come up with the benchmarks and stuff for you\n\n\nOr maybe the same thing but >K of them will break downward, whatever\n\n\nWe could bet about the human perception of sense in language models, for example\n\n\n\n\n\n\n[Yudkowsky][12:14]\nI am nervous about Paul’s definition of “break” and the actual probabilities to be assigned. You see, both Paul and I think our worldview is a very normal one that matches current reality quite well, so when we are estimating parameters like these, Paul is liable to do it empirically, and I am also liable to do it empirically as my own baseline, and if I point to a trend over time in how long it takes to go from par-human to superhuman performance decreasing, Imaginary Paul says “Ah, yes, what a fine trend, I will bet that things follow this trend” and Eliezer says “No that is MY trend, you don’t get to follow it, you have to predict that par-human to superhuman time will be constant” and Paul is like “lol no I get to be a superforecaster and follow trends” and we fail to bet.\n\n\nMaybe I’m wrong in having mentally played the game out ahead that far, for it is, after all, very hard to predict the Future, but that’s where I’d foresee it failing.\n\n\n\n\n\n[Cotra][12:16]\nI don’t think you need to bet about calendar times from par-human to super-human, and any meta-trend in that quantity. It sounds like Paul is saying “I’ll basically trust the model size to perf trends and predict a 10x bigger model from the same architecture family will get the perf the trends predict,” and you’re pushing back against that saying e.g. that humans won’t find GPT-4 to be subjectively more coherent than GPT-3 and that Paul is neglecting that there could be major innovations in the future that bring down the FLOP/s to get a certain perf by a lot and bend the scaling laws. 
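(As an aside, the trend-following Cotra attributes to Paul here, trusting a model-size-to-performance fit to predict a 10x bigger model, can be sketched as a log-log power-law fit and extrapolation. The sizes and losses below are made-up illustrative numbers, not measurements from any real model family.)

```python
import math

# Hypothetical (made-up) loss measurements at increasing model sizes N.
# Purely illustrative -- the point is the fitting procedure, not the numbers.
sizes  = [1e6, 1e7, 1e8, 1e9]
losses = [5.0, 4.0, 3.2, 2.56]   # here each 10x in N cuts loss by the same ratio

# Least-squares fit in log-log space: log L = log a - b * log N,
# i.e. the power law L(N) = a * N**(-b).
xs = [math.log(n) for n in sizes]
ys = [math.log(l) for l in losses]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my + b * mx)

# "Trust the trend": predict the loss of a model 10x bigger than the largest seen.
pred = a * (1e10) ** (-b)
print(f"fitted exponent b ~ {b:.3f}, predicted loss at 1e10 params ~ {pred:.2f}")
```

Eliezer's pushback, in this framing, is that a future innovation could change `a` or `b` mid-trend, so the extrapolation is only as good as the assumption that the fitted law keeps holding.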
So why not bet that Paul won’t be as accurate as he thinks he is by following the scaling laws?\n\n\n\n\n\n\n[Bensinger][12:17]\n\n> \n> I think Paul’s view would say:\n> \n> \n> * Things certainly happen for the first time\n> * When they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever\n> * When they’re making a big impact on the world, it’s after a lot of investment and research, like commercial aircraft in the decades after Kitty Hawk or like the investments people are in the middle of making now with AI that can assist with coding\n> \n> \n> Paul’s view says that the Kitty Hawk moment *already happened for the kind of AI that will be super transformative and could kill us all*, and like the historical Kitty Hawk moment, it was not immediately a huge deal\n> \n> \n> \n\n\n“When they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever”\n\n\nHow shitty the prototype is should depend (to a very large extent) on the physical properties of the tech. So I don’t find it confusing (though I currently disagree) when someone says “I looked at a bunch of GPT-3 behavior and it’s cognitively sophisticated enough that I think it’s doing basically what humans are doing, just at a smaller scale. The qualitative cognition I can see going on is just that impressive, taking into account the kinds of stuff I think human brains are doing.”\n\n\nWhat I find confusing is, like, treating ten thousand examples of non-AI, non-cognitive-tech continuities (nukes, building heights, etc.) 
as though they’re anything but a tiny update about ‘will AGI be high-impact’ — compared to the size of updates like ‘look at how smart and high-impact humans were’ and perhaps ‘look at how smart-in-the-relevant-ways GPT-3 is’.\n\n\nLike, impactfulness is not a simple physical property, so there’s not much reason for different kinds of tech to have similar scales of impact (or similar scales of impact n years after the first prototype). Mainly I’m not sure to what extent we disagree about this, vs. this just being me misunderstanding the role of the ‘most things aren’t high-impact’ argument.\n\n\n(And yeah, a random historical technology drawn from a hat will be pretty low-impact. But that base rate also doesn’t seem to me like it has much evidential relevance anymore when I update about what specific tech we’re discussing.)\n\n\n\n\n\n\n[Cotra][12:18]\nThe question is not “will AGI be high impact” — Paul agrees it will, and for any FOOM quantity (like crossing a chimp-to-human-sized gap in a day or whatever) he agrees that will happen eventually too.\n\n\nThe technologies studied in the dataset spanned a wide range in their peak impact on society, and they’re not being used to forecast the peak impact of mature AI tech\n\n\n\n\n\n\n[Bensinger][12:19]\nYeah, I’m specifically confused about how we know that the AGI Wright Flyer and its first successors are low-impact, from looking at how low-impact other technologies are (if that is in fact a meaningful-sized update on your view)\n\n\nNot drawing a comparison about the overall impactfulness of AI / AGI (e.g., over fifteen years)\n\n\n\n\n\n\n[Yudkowsky][12:21]\n\n> \n> [So why not bet that Paul won’t be as accurate as he thinks he is by following the scaling laws?]\n> \n> \n> \n\n\nI’m pessimistic about us being able to settle on the terms of a bet like that (and even more so about being able to bet against Carl on it) but in broad principle I agree. 
The trouble is that if a trend is benchmarkable, I believe more in the trend continuing at least on the next particular time, not least because I believe in people Goodharting benchmarks.\n\n\nI expect a human sense of intelligence to be harder to fool (even taking into account that it’s being targeted to a nonzero extent) but I also expect that to be much harder to measure and bet upon than the Goodhartable metrics. And I think our actual disagreement is more visible over portfolios of benchmarks breaking upward over time, but I also expect that if you ask Paul and myself to *quantify our predictions*, we both go, “Oh, my theory is the one that fits ordinary reality so obviously I will go look at superforecastery trends over ordinary reality to predict this specifically” and I am like, “No, Paul, if you’d had to predict that without looking at the data, your worldview would’ve predicted trends breaking down less often” and Paul is like “But Eliezer, shouldn’t you be predicting much more upward divergence than this.”\n\n\nAgain, perhaps I’m being overly gloomy.\n\n\n\n\n\n[Cotra][12:23]\nI think we should try to find ML predictions where you defer to superforecasters and Paul disagrees, since he said he would bet against superforecasters in ML\n\n\n\n\n\n\n[Yudkowsky][12:24]\nI am also probably noticeably gloomier and less eager to bet because the whole fight is taking place on grounds that Paul thinks is important and part of a connected story that continuously describes ordinary reality, and that I think is a strange place where I can’t particularly see how Paul’s reasoning style works. So I’d want to bet against Paul’s overly narrow predictions by using ordinary superforecasting, and Paul would like to make his predictions using ordinary superforecasting.\n\n\nI am, indeed, more interested in a place where Paul wants to bet against superforecasters. 
I am not guaranteeing up front I’ll bet with them because superforecasters did not call AlphaGo correctly and I do not think Paul has zero actual domain expertise. But Paul is allowed to pick up *generic* epistemic credit *including from me* by beating superforecasters because that credit counts toward believing a style of thought is *even working literally at all*; separately from the question of whether Paul’s superforecaster-defying prediction also looks like a place where I’d predict in some opposite direction.\n\n\nDefinitely, places where Paul disagrees with superforecasters are much more interesting places to mine for bets.\n\n\nI am happy to hear about those.\n\n\n\n\n\n[Cotra][12:27]\nI think what Paul was saying last night is you find superforecasters betting on some benchmark performance, and he just figures out which side he’d take (and he expects in most/all superforecaster predictions that he would not be deferential, there’s a side he would take)\n\n\n\n\n\n \n\n\n### 10.3. Predictions and betting (continued)\n\n\n \n\n\n\n[Christiano][12:29]\nnot really following along with the conversation, but my desire to bet about “whatever you want” was driven in significant part by frustration with Eliezer repeatedly saying things like “people like Paul get surprised by reality” and me thinking that’s nonsense\n\n\n\n\n\n\n[Yudkowsky][12:29]\nSo the Yudkowskian viewpoint is something like… trends in particular technologies held fixed, will often break down; trends in Goodhartable metrics, will often stay on track but come decoupled from their real meat; trends across multiple technologies, will experience occasional upward breaks when new algorithms on the level of Transformers come out. For me to bet against superforecasters I have to see superforecasters saying something different, which I do not at this time actually know to be the case. 
For me to bet against Paul betting against superforecasters, the different thing Paul says has to be different from my own direction of disagreement with superforecasters.\n\n\n\n\n\n[Christiano][12:30]\nI still think that if you want to say “this sort of reasoning is garbage empirically” then you ought to be willing to bet about something. If we are just saying “we agree about all of the empirics, it’s just that somehow we have different predictions about AGI” then that’s fine and symmetrical.\n\n\n\n\n\n\n[Yudkowsky][12:30]\nI have been trying to revise that towards a more nvc “when I try to operate this style of thought myself, it seems to do a bad job of retrofitting and I don’t understand how it says X but not Y”.\n\n\n\n\n\n[Christiano][12:30]\neven then presumably if you think it’s garbage you should be able to point to some particular future predictions where it would be garbage?\n\n\nif you used it\n\n\nand then I can either say “no, I don’t think that’s a valid application for reason X” or “sure, I’m happy to bet”\n\n\nand it’s possible you can’t find any places where it sticks its neck out in practice (even in your version), but then I’m again just rejecting the claim that it’s empirically ruled out\n\n\n\n\n\n\n[Yudkowsky][12:31]\nI also think that we’d have an easier time betting if, like, neither of us could look at graphs over time, but we were at least told the values in 2010 and 2011 to anchor our estimates over one year, or something like that.\n\n\nThough we also need to not have a bunch of existing knowledge of the domain which is hard.\n\n\n\n\n\n[Christiano][12:32]\nI think this might be derailing some broader point, but I am provisionally mostly ignoring your point “this doesn’t work in practice” if we can’t find places where we actually foresee disagreements\n\n\n(which is fine, I don’t think it’s core to your argument)\n\n\n\n\n\n\n[Yudkowsky][12:33]\nPaul, you’ve previously said that you’re happy to bet against ML superforecasts. 
That sounds promising. What are examples of those? Also I must flee to lunch and am already feeling sort of burned and harried; it’s possible I should not ignore the default doomedness of trying to field questions from multiple sources.\n\n\n\n\n\n[Christiano][12:33]\nI don’t know if superforecasters make public bets on ML topics, I was saying I’m happy to bet on ML topics and if your strategy is “look up what superforecasters say” that’s fine and doesn’t change my willingness to bet\n\n\nI think this is probably not as promising as either (i) dig in on the arguments that are most in dispute (seemed to be some juicier stuff earlier though I’m just focusing on work today), or (ii) just talking generally about what we expect to see in the next 5 years so that we can at least get more of a vibe looking back\n\n\n\n\n\n\n[Shulman][12:35]\nYou can bet on the Metaculus AI Tournament forecasts.\n\n\n\n\n\n\n\n\n\n[Yudkowsky][13:13]\nI worry that trying to jump straight ahead to Let’s Bet is being too ambitious too early on a cognitively difficult problem of localizing disagreements.\n\n\nOur prophecies of the End Times’s modal final days seem legit different; my impulse would be to try to work that backwards, first, in an intuitive sense of “well which prophesied world would this experience feel more like living in?”, and try to dig deeper there before deciding that our disagreements have crystallized into short-term easily-observable bets.\n\n\nWe both, weirdly enough, feel that our current viewpoints are doing a great job of permitting the present-day world, even if, presumably, we both think the other’s worldview would’ve done worse at predicting that world in advance. This cannot be resolved in an instant by standard techniques known to me. 
Let’s try working back from the End Times instead.\n\n\nI have already stuck out my neck a little and said that, as we start to go past $50B invested in a model, we are starting to live at least a *little* more in what feels like the Paulverse, not because my model prohibits this, but because, or so I think, Paul’s model more narrowly predicts it.\n\n\nIt does seem like the sort of generically weird big thing that could happen, to me, even before the End Times, there are corporations that could just decide to do that; I am hedging around this exactly because it does feel to my gut like that is a kind of headline I could read one day and have it still be years before the world ended, so I may need to be stingy with those credibility points inside of what I expect to be reality.\n\n\nBut if we get up to $10T to train a model, that is *much* more strongly Paulverse; it’s not that this falsifies the Eliezerverse considered in isolation, but it is *much* more narrowly characteristic of the Words of Paul coming to pass; it feels much more to my gut that, in agreeing to this, I am not giving away Bayes points inside my own mainline.\n\n\nIf ordinary salaries for ordinary fairly-good programmers get up to $20M/year, this is not prohibited by my AI models per se; but it sure sounds like the world becoming less ordinary than I expected it to stay, and like it is part of Paul’s Prophecy much more strongly than it is part of Eliezer’s Prophecy.\n\n\nThat’s two ways that I could concede a great victory to the Paulverse. They both have the disadvantages (from my perspective) that the Paulverse, though it must be drawing probability mass from somewhere in order to stake it there, is legitimately not – so far as I know – forced to claim that these things happen anytime soon. 
So they are ways for the Paulverse to win, but not ways for the Eliezerverse to win.\n\n\nThat I have said even this much, I claim, puts Paul in at least a little tiny bit of debt to me epistemic-good-behavior-wise; he should be able to describe events which would start to make him worry he was living in the Eliezerverse, even if his model did not narrowly rule them out, and even if those events had not been predicted by the Eliezerverse to occur within a narrowly prophesied date such that they would not thereby form a bet the Eliezerverse could clearly lose as well as win.\n\n\nI have not had much luck in trying to guess what the real Paul will say about issues like this one. My last attempt was to say, “Well, what shouldn’t happen, besides the End Times themselves, before world GDP has doubled over a four-year period?” And Paul gave what seems to me like an overly valid reply, which, iirc and without looking it up, was along the lines of, “well, nothing that would double world GDP in a 1-year period”.\n\n\nWhen I say this is overly valid, I mean that it follows too strongly from Paul’s premises, and he should be looking for something less strong than that on which to make a beginning discovery of disagreement – maybe something which Paul’s premises don’t strongly forbid to him, but which nonetheless looks more like the Eliezerverse or like it would be relatively more strongly predicted by Eliezer’s Prophecy.\n\n\nI do not model Paul as eagerly or strongly agreeing with, say, “The Riemann Hypothesis should not be machine-proven” or “The ABC Conjecture should not be machine-proven” before world GDP has doubled. 
It is only on Eliezer’s view that proving the Riemann Hypothesis is about as much of a related or unrelated story to AGI, as are particular benchmarks of GDP.\n\n\nOn Paul’s view as I am trying to understand and operate it, this benchmark may be correlated with AGI in time in the sense that most planets wouldn’t do it during the Middle Ages before they had any computers, but it is not part of the *story* of AGI, it is not part of Paul’s Prophecy; because it doesn’t make a huge amount of money and increase GDP and get a huge ton of money flowing into investments in useful AI.\n\n\n(From Eliezer’s perspective, you could tell a story about how a stunning machine proof of the Riemann Hypothesis got Bezos to invest $50 billion in training a successor model and that was how the world ended, and that would be a just-as-plausible model as some particular economic progress story, of how Stuff Happened Because Other Stuff Happened; it sounds like the story of OpenAI or of Deepmind’s early Atari demo, which is to say, it sounds to Eliezer like history. 
Whereas on Eliezer!Paul’s view, that’s much more of a weird coincidence because it involves Bezos’s unforced decision rather than the economic story of which AGI is the capstone, or so it seems to me trying to operate Paul’s view.)\n\n\nAnd yet Paul might still, I hope, be able to find something *like* “The Riemann Hypothesis is machine-proven”, which even though it is not very much of an interesting part of his own Prophecy because it’s not part of the economic storyline, sounds to him like the sort of thing that the *Eliezerverse* thinks happens as you get close to AGI, which the *Eliezerverse* says is allowed to start happening way before world GDP would double in 4 years; and as it happens I’d agree with that characterization of the Eliezerverse.\n\n\nSo Paul might say, “Well, my model doesn’t particularly *forbid* that the Riemann Hypothesis gets machine-proven before world GDP has doubled in 4 years or even started to discernibly break above trend by much; but that does sound *more* like we are living in the Eliezerverse than in the Paulverse.”\n\n\nI am not demanding this particular bet because it seems to me that the Riemann Hypothesis may well prove to be unfairly targetable for current ML techniques while they are still separated from AGI by great algorithmic gaps. 
But if on the other hand Paul thinks that, I dunno, superhuman performance on stuff like the Riemann Hypothesis does tend to be more correlated with economically productive stuff because it’s all roughly the same kind of capability, and lol never mind this “algorithmic gap” stuff, then maybe Paul *is* willing to pick that example; which is all the better for me because I *do* suspect it might decouple from the AI of the End, and so I think I have a substantial chance of winning and being able to say “SEE!” to the assembled EAs while there’s still a year or two left on the timeline.\n\n\nI’d love to have credibility points on that timeline, if Paul doesn’t feel as strong an anticipation of needing them.\n\n\n\n\n\n[Christiano][15:43]\n1/3 that RH has an automated proof before sustained 7%/year GWP growth?\n\n\nI think the clearest indicator is that we have AI that ought to be able to e.g. run the fully automated factory-building factory (not automating mines or fabs, just the robotic manufacturing and construction), but it’s not being deployed or is being deployed with very mild economic impacts\n\n\nanother indicator is that we have AI systems that can fully replace human programmers (or other giant wins), but total investment in improving them is still small\n\n\nanother indicator is a DeepMind demo that actually creates a lot of value (e.g. 10x larger than DeepMind’s R&D costs? or even comparable to DeepMind’s cumulative R&D costs if you do the accounting really carefully and I definitely believe it and it wasn’t replaceable by Brain), it seems like on your model things should “break upwards” and in mine that just doesn’t happen that much\n\n\nsounds like you may have >90% on automated proof of RH before a few years of 7%/year growth driven by AI? 
so that would give a pretty significant odds ratio either way\n\n\nI think “stack more layers gets stuck but a clever idea makes crazy stuff happen” is generally going to be evidence for your view\n\n\nThat said, I’d mostly reject AlphaGo as an example, because it’s just plugging in neural networks to existing go algorithms in almost the most straightforward way and the bells and whistles don’t really matter. But if AlphaZero worked and AlphaGo didn’t, and the system accomplished something impressive/important (like proving RH, or being significantly better at self-contained programming tasks), then that would be a surprise.\n\n\nAnd I’d reject LSTM -> transformer or MoE as an example because the quantitative effect size isn’t that big.\n\n\nBut if something like that made the difference between “this algorithm wasn’t scaling before, and now it’s scaling,” then I’d be surprised.\n\n\nAnd the size of jump that surprises me is shrinking over time. So in a few years even getting the equivalent of a factor of 4 jump from some clever innovation would be very surprising to me.\n\n\n\n\n\n\n[Yudkowsky][17:44]\n\n> \n> sounds like you may have >90% on automated proof of RH before a few years of 7%/year growth driven by AI? so that would give a pretty significant odds ratio either way\n> \n> \n> \n\n\nI emphasize that this is mostly about no on the GDP growth before the world ending, rather than yes on the RH proof, i.e., I am not 90% on RH before the end of the world at all. Not sure I’m over 50% on it happening before the end of the world at all.\n\n\nShould it be a consequence of easier earlier problems than full AGI? 
Yes, on my mainline model; but also on my model, it’s a particular thing and maybe the particular people and factions doing stuff don’t get around to that particular thing.\n\n\nI guess if I stare hard at my brain it goes ‘ehhhh maybe 65% if timelines are relatively long and 40% if it’s like the next 5 years’, because the faster stuff happens, the less likely anyone is to get around to proving RH in particular or announcing that they’ve done so if they did.\n\n\nAnd if the econ threshold is set as low as 7%/yr, I start to worry about that happening in longer-term scenarios, just because world GDP has never been moving at a fixed rate over a log chart. The “driven by AI” part sounds very hard to evaluate. I want, I dunno, some other superforecaster or Carl to put a 90% credible bound on ‘when world GDP growth hits 7% assuming little economically relevant progress in AI’ before I start betting at 80%, let alone 90%, on what should happen before then. I don’t have that credible bound already loaded and I’m not specialized in it.\n\n\nI’m wondering if we’re jumping ahead of ourselves by trying to make a nice formal Bayesian bet, as prestigious as that might be. I mean, your 1/3 was probably important for you to say, as it is higher than I might have hoped, and I’d ask you if you really mean for that to be an upper bound on your probability or if that’s your actual probability.\n\n\nBut, more than that, I’m wondering if, in the same vague language I used before, you’re okay with saying a little more weakly, “RH proven before big AI-driven growth in world GDP, sounds more Eliezerverse than Paulverse.”\n\n\nIt could be that this is just not actually true because you do not think that RH is coupled to econ stuff in the Paul Prophecy one way or another, and my own declarations above do not have the Eliezerverse saying it enough more strongly than that. 
If you don’t actually see this as a distinguishing Eliezerverse thing, if it wouldn’t actually make you say “Oh no maybe I’m in the Eliezerverse”, then such are the epistemic facts.\n\n\n\n> \n> And the size of jump that surprises me is shrinking over time. So in a few years even getting the equivalent of a factor of 4 jump from some clever innovation would be very surprising to me.\n> \n> \n> \n\n\nThis sounds potentially more promising to me – seems highly Eliezerverse, highly non-Paul-verse according to you, and its negation seems highly oops-maybe-I’m-in-the-Paulverse to me too. How many years is a few? How large a jump is shocking if it happens tomorrow?\n\n\n\n\n \n\n\n11. September 24 conversation\n-----------------------------\n\n\n \n\n\n### 11.1. Predictions and betting (continued 2)\n\n\n \n\n\n\n[Christiano][13:15]\nI think RH is not that surprising, it’s not at all clear to me where “do formal math” sits on the “useful stuff AI could do” spectrum, I guess naively I’d put it somewhere “in the middle” (though the analogy to board games makes it seem a bit lower, and there is a kind of obvious approach to doing this that seems to be working reasonably well so that also makes it seem lower), and 7% GDP growth is relatively close to the end (ETA: by “close to the end” I don’t mean super close to the end, just far enough along that there’s plenty of time for RH first)\n\n\nI do think that performance jumps are maybe more dispositive, but I’m afraid that it’s basically going to go like this: there won’t be metrics that people are tracking that jump up, but you’ll point to new applications that people hadn’t considered before, and I’ll say “but those new applications aren’t that valuable” whereas to you they will look more analogous to a world-ending AGI coming out from the blue\n\n\nlike for AGZ I’ll be like “well it’s not really above the deep learning trend if you run it backwards” and you’ll be like “but no one was measuring it before! 
you can’t make up the trend in retrospect!” and I’ll be like “OK, but the reason no one was measuring it before was that it was worse than traditional go algorithms until like 2 years ago and the upside is not large enough that you should expect a huge development effort for a small edge”\n\n\n\n\n\n\n[Yudkowsky][13:43]\n“factor of 4 jump from some clever innovation” – can you say more about that part?\n\n\n\n\n\n[Christiano][13:53]\nlike I’m surprised if a clever innovation does more good than spending 4x more compute\n\n\n\n\n\n\n[Yudkowsky][15:04]\nI worry that I’m misunderstanding this assertion because, as it stands, it sounds extremely likely that I’d win. Would transformers vs. CNNs/RNNs have won this the year that the transformers paper came out?\n\n\n\n\n\n[Christiano][15:07]\nI’m saying that it gets harder over time, don’t expect wins as big as transformers\n\n\nI think even transformers probably wouldn’t make this cut though?\n\n\ncertainly not vs CNNs\n\n\nvs RNNs I think the comparison I’d be using to operationalize it is translation, as measured in the original paper\n\n\nthey do make this cut for translation, looks like the number is like 100 >> 4\n\n\n100x for english-german, more like 10x for english-french, those are the two benchmarks they cite\n\n\nbut both more than 4x\n\n\nI’m saying I don’t expect ongoing wins that big\n\n\nI think the key ambiguity is probably going to be about what makes a measurement established/hard-to-improve\n\n\n\n\n\n\n[Yudkowsky][15:21]\nthis sounds like a potentially important point of differentiation; I do expect more wins that big.\n\n\nthe main thing that I imagine might make a big difference to your worldview, but not mine, is if the first demo of the big win only works slightly better (although that might also be because they were able to afford much less compute than the big players, which I think your worldview would see as a redeeming factor for my worldview?) 
but a couple of years later might be 4x or 10x as effective per unit compute (albeit that other innovations would’ve been added on by then to make the first innovation work properly, which I think on your worldview is like The Point or something)\n\n\nclarification: by “transformers vs CNNs” I don’t mean transformers on ImageNet, I mean transformers vs. contemporary CNNs, RNNs, or both, being used on text problems.\n\n\nI’m also feeling a bit confused because eg Standard Naive Kurzweilian Accelerationism makes a big deal about the graphs keeping on track because technologies hop new modes as needed. what distinguishes your worldview from saying that no further innovations are needed for AGI or will give a big compute benefit along the way? is it that any single idea may only ever produce a smaller-than-4X benefit? is it permitted that a single idea plus 6 months of engineering fiddly details produce a 4X benefit?\n\n\nall this aside, “don’t expect wins as big as transformers” continues to sound to me like a very promising point for differentiating Prophecies.\n\n\n\n\n\n[Christiano][15:50]\nI think the relevant feature of the innovation is that the work to find it is small relative to the work that went into the problem to date (though there may be other work on other avenues)\n\n\n\n\n\n\n[Yudkowsky][15:52]\nin, like, a local sense, or a global sense? 
if there’s 100 startups searching for ideas collectively with $10B of funding, and one of them has an idea that’s 10x more efficient per unit compute on billion-dollar problems, is that “a small amount of work” because it was only a $100M startup, or collectively an appropriate amount of work?\n\n\n\n\n\n[Christiano][15:53]\nI’m calling that an innovation because it’s a small amount of work\n\n\n\n\n\n\n[Yudkowsky][15:54]\n(maybe it would also be productive if you pointed to more historical events like Transformers and said ‘that shouldn’t happen again’, because I didn’t realize there was anything you thought was like that. AlphaFold 2?)\n\n\n\n\n\n[Christiano][15:54]\nlike, it’s not just a claim about EMH, it’s also a claim about the nature of progress\n\n\nI think AlphaFold counts and is probably if anything a bigger multiplier, it’s just uncertainty over how many people actually worked on the baselines\n\n\n\n\n\n\n[Yudkowsky][15:54]\nwhen should we see headlines like those subside?\n\n\n\n\n\n[Christiano][15:55]\nI mean, I think they are steadily subsiding\n\n\nas areas grow\n\n\n\n\n\n\n[Yudkowsky][15:55]\nhave they already begun to subside relative to 2016, on your view?\n\n\n(guess that was ninjaed)\n\n\n\n\n\n[Christiano][15:55]\nI would be surprised to see a 10x today on machine translation\n\n\n\n\n\n\n[Yudkowsky][15:55]\nwhere that’s 10x the compute required to get the same result?\n\n\n\n\n\n[Christiano][15:55]\nthough not so surprised that we can avoid talking about probabilities\n\n\nyeah\n\n\nor to make it more surprising, old sota with 10x less compute\n\n\n\n\n\n\n[Yudkowsky][15:56]\nyeah I was about to worry that people wouldn’t bother spending 10x the cost of a large model to settle our bet\n\n\n\n\n\n[Christiano][15:56]\nI’m more surprised if they get the old performance with 10x less compute though, so that way around is better on all fronts\n\n\n\n\n\n\n[Yudkowsky][15:57]\none reads papers claiming this all the time, 
though?\n\n\n\n\n\n[Christiano][15:57]\nlike, this view also leads me to predict that if I look at the actual amount of manpower that went into alphafold, it’s going to be pretty big relative to the other people submitting to that protein folding benchmark\n\n\n\n\n\n\n[Yudkowsky][15:57]\nthough typically for the sota of 2 years ago\n\n\n\n\n\n[Christiano][15:58]\nnot plausible claims on problems people care about\n\n\nI think the comparison is to contemporary benchmarks from one of the 99 other startups who didn’t find the bright idea\n\n\nthat’s the relevant thing on your view, right?\n\n\n\n\n\n\n[Yudkowsky][15:59]\nI would expect AlphaFold and AlphaFold 2 to involve… maybe 20 Deep Learning researchers, and for 1-3 less impressive DL researchers to have been the previous limit, if the field even tried that much; I would not be the least surprised if DM spent 1000x the compute on AlphaFold 2, but I’d be very surprised if the 1-3 large research team could spend that 1000x compute and get anywhere near AlphaFold 2 results.\n\n\n\n\n\n[Christiano][15:59]\nand then I’m predicting that number is already <10 for machine translation and falling (maybe I shouldn’t talk about machine translation or at least not commit to numbers given that I know very little about it, but whatever that’s my estimate), and for other domains it will be <10 by the time they get as crowded as machine translation, and for transformative tasks they will be <2\n\n\nisn’t there an open-source replication of alphafold?\n\n\nwe could bet about its performance relative to the original\n\n\n\n\n\n\n[Yudkowsky][16:00]\nit is enormously easier to do what’s already been done\n\n\n\n\n\n[Christiano][16:00]\nI agree\n\n\n\n\n\n\n[Yudkowsky][16:00]\nI believe the open-source replication was by people who were told roughly what Deepmind had done, possibly more than roughly\n\n\non the Yudkowskian view, those 1-3 previous researchers just would not have thought of doing things the way Deepmind did 
them\n\n\n\n\n\n[Christiano][16:01]\nanyway, my guess is generally that if you are big relative to previous efforts in the area you can make giant improvements, if you are small relative to previous efforts you might get lucky (or just be much smarter) but that gets increasingly unlikely as the field gets bigger\n\n\nlike alexnet and transformers are big wins by groups who are small relative to the rest of the field, but transformers are much smaller than alexnet and future developments will continue to shrink\n\n\n\n\n\n\n[Yudkowsky][16:02]\nbut if you’re the *same size* as previous efforts and don’t have 100x the compute, you shouldn’t be able to get huge improvements in the Paulverse?\n\n\n\n\n\n[Christiano][16:03]\nI mean, if you are the same size as all the prior effort put together?\n\n\nI’m not surprised if you can totally dominate in that case, especially if prior efforts aren’t well-coordinated\n\n\nand for things that are done by hobbyists, I wouldn’t be surprised if you can be a bit bigger than an individual hobbyist and dominate\n\n\n\n\n\n\n[Yudkowsky][16:03]\nI’m thinking something like, if Deepmind comes out with an innovation such that it duplicates old SOTA on machine translation with 1/10th compute, that still violates the Paulverse because Deepmind is not Paul!Big compared to all MTL efforts\n\n\nthough I am not sure myself how seriously Earth is taking MTL in the first place\n\n\n\n\n\n[Christiano][16:04]\nyeah, I think if DeepMind beats Google Brain by 10x compute next year on translation, that’s a significant strike against Paul\n\n\n\n\n\n\n[Yudkowsky][16:05]\nI know that Google offers it for free, I expect they at least have 50 mediocre AI people working on it, I don’t know whether or not they have 20 excellent AI people working on it and if they’ve ever tried training a 200B parameter non-MoE model on it\n\n\n\n\n\n[Christiano][16:05]\nI think not that seriously, but more seriously than 2016 and than anything else where you are seeing big 
swings\n\n\nand so I’m less surprised than for TAI, but still surprised\n\n\n\n\n\n\n[Yudkowsky][16:06]\nI am feeling increasingly optimistic that we have some notion of what it means to not be within the Paulverse! I am not feeling that we have solved the problem of having enough signs that enough of them will appear to tell EA how to notice which universe it is inside many years before the actual End Times, but I sure do feel like we are making progress!\n\n\nthings that have happened in the past that you feel *shouldn’t happen again* are great places to poke for Eliezer-disagreements!\n\n\n\n\n\n[Christiano][16:07]\nI definitely think there’s a big disagreement here about what to expect for pre-end-of-days ML\n\n\nbut lots of concerns about details like what domains are crowded enough to be surprising and how to do comparisons\n\n\nI mean, to be clear, I think the transformer paper having giant gains is also evidence against paulverse\n\n\nit’s just that there are really a lot of datapoints, and some of them definitely go against paul’s view\n\n\nto me it feels like the relevant thing for making the end-of-days forecast is something like “how much of the progress comes from ‘innovations’ that are relatively unpredictable and/or driven by groups that are relatively small, vs scaleup and ‘business as usual’ progress in small pieces?”\n\n\n\n\n\n \n\n\n### 11.2. Performance leap scenario\n\n\n \n\n\n\n[Yudkowsky][16:09]\nmy heuristics tell me to try wargaming out a particular scenario so we can determine in advance which key questions Paul asks\n\n\nin 2023, Deepmind releases an MTL program which is suuuper impressive. 
everyone who reads the MTL of, say, a foreign novel, or uses it to conduct a text chat with a contractor in Indonesia, is like, “They’ve basically got it, this is about as good as a human and only makes minor and easily corrected errors.”\n\n\n\n\n\n[Christiano][16:12]\nI mostly want to know how good Google’s translation is at that time; and if DeepMind’s product is expensive or only shows gains for long texts, I want to know whether there is actually an economic niche for it that is large relative to the R&D cost.\n\n\nlike I’m not sure whether anyone works at all on long-text translation, and I’m not sure if it would actually make Google $ to work on it\n\n\ngreat text chat with contractor in indonesia almost certainly meets that bar though\n\n\n\n\n\n\n[Yudkowsky][16:14]\nfurthermore, Eliezer and Paul publicized their debate sufficiently to some internal Deepmind people who spoke to the right other people at Deepmind, that Deepmind showed a graph of loss vs. previous-SOTA methods, and Deepmind’s graph shows that their thing crosses the previous-SOTA line while having used 12x less compute for ~~inference~~ training.\n\n\n(note that this is less… salient?… on the Eliezerverse per se, than it is as an important issue and surprise on the Paulverse, so I am less confident about that part.)\n\n\na nitpicker would note that the previous-SOTA metric they used is, however, from 1 year previously, and the new model also uses Sideways Batch Regularization which the 1-year-previous SOTA graph didn’t use. 
on the other hand, they got 12x rather than 10x improvement so there was some error margin there.\n\n\n\n\n\n[Christiano][16:15]\nI’m OK if they don’t have the benchmark graph as long as they have some evaluation that other people were trying at, I think real-time chat probably qualifies\n\n\n\n\n\n\n[Yudkowsky][16:15]\nbut then it’s harder to measure the 10x\n\n\n\n\n\n[Christiano][16:15]\n~~also I’m saying 10x less training compute, not inference (but 10x less inference compute is harder)~~\n\n\nyes\n\n\n\n\n\n\n[Yudkowsky][16:15]\nor to know that Deepmind didn’t just use a bunch more compute\n\n\n\n\n\n[Christiano][16:15]\nin practice it seems almost certain that it’s going to be harder to evaluate\n\n\nthough I agree there are really clean versions where they actually measured a benchmark other people work on and can compare training compute directly\n\n\n(like in the transformer paper)\n\n\n\n\n\n\n[Yudkowsky][16:16]\nliterally a pessimal typo, I meant to specify training vs. inference and somehow managed to type “inference” instead\n\n\n\n\n\n[Christiano][16:16]\nI’m more surprised by the clean version\n\n\n\n\n\n\n[Yudkowsky][16:17]\nI literally don’t know what you’d be surprised by in the unclean version\n\n\nwas GPT-2 beating the field hard enough that it would have been surprising if they’d only used similar amounts of training compute\n\n\n?\n\n\nand how would somebody else judge that for a new system?\n\n\n\n\n\n[Christiano][16:17]\nI’d want to look at either human evals or logprob, I think probably not? 
but it’s possible it was\n\n\n\n\n\n\n[Yudkowsky][16:19]\nbtw I also feel like the Eliezer model is more surprised and impressed by “they beat the old model with 10x less compute” than by “the old model can’t catch up to the new model with 10x more compute”\n\n\nthe Eliezerverse thinks in terms of techniques that saturate\n\n\nsuch that you have to find new techniques for new training to go on helping\n\n\n\n\n\n[Christiano][16:19]\nit’s definitely way harder to win at the old task with 10x less compute\n\n\n\n\n\n\n[Yudkowsky][16:19]\nbut for expensive models it seems really genuinely unlikely to me that anyone will give us this data!\n\n\n\n\n\n[Christiano][16:19]\nI think it’s usually the case that if you scale up far enough past previous sota, you will be able to find tons of techniques needed to make it work at the new scale\n\n\nbut I’m expecting it to be less of a big deal because all experiments will be roughly at the frontier of what is feasible\n\n\nand so the new thing won’t be able to afford to go 10x bigger\n\n\nunlike today when we are scaling up spending so fast\n\n\nbut this does make it harder for the next few years at least, which is maybe the key period\n\n\n(it makes it hard if we are both close enough to the edge that “10x cheaper to get old results” seems unlikely but “getting new results that couldn’t be achieved with 10x more compute and old method” seems likely)\n\n\nwhat I basically expect is to (i) roughly know how much performance you get from making models 10x bigger, (ii) roughly know how much someone beat the competition, and then you can compare the numbers\n\n\n\n\n\n\n[Yudkowsky][16:22]\nwell, you could say, not in a big bet-winning sense, but in a mild trend sense, that if the next few years are full of “they spent 100x more on compute in this domain and got much better results” announcements, that is business as usual for the last few years and perfectly on track for the Paulverse; while the Eliezerverse permits but does not 
mandate that we will also see occasional announcements about brilliant new techniques, from some field where somebody already scaled up to the ~~big models~~ big compute, producing more impressive results than the previous big compute.\n\n\n\n\n\n[Christiano][16:23]\n(but “performance from making models 10x bigger” depends a lot on exactly how big they were and whether you are in a regime with unfavorable scaling)\n\n\n\n\n\n\n[Yudkowsky][16:23]\nso the Eliezerverse must be putting at least a *little* less probability mass on business-as-before Paulverse\n\n\n\n\n\n[Christiano][16:24]\nI am also expecting a general scale up in ML training runs over time, though it’s plausible that you also expect that until the end of days and just expect a much earlier end of days\n\n\n\n\n\n\n[Yudkowsky][16:24]\nI mean, why wouldn’t they?\n\n\nif they’re purchasing more per unit of compute, they will quite often spend more on total compute (Jevons Paradox)\n\n\n\n\n\n[Christiano][16:25]\nthat’s going to kill the “they spent 100x more compute” announcements soon enough\n\n\nlike, that’s easy when “100x more” means $1M, it’s a bit hard when “100x more” means $100M, it’s not going to happen except on the most important tasks when “100x more” means $10B\n\n\n\n\n\n\n[Yudkowsky][16:26]\nthe Eliezerverse is full of weird things that somebody could apply ML to, and doesn’t have that many professionals who will wander down completely unwalked roads; and so is much more friendly to announcements that “we tried putting a lot of work and compute into protein folding, since nobody ever tried doing that seriously with protein folding before, look what came out” continuing for the next decade if the Earth lasts that long\n\n\n\n\n\n[Christiano][16:27]\nI’m not surprised by announcements like protein folding, it’s not that the world overall gets more and more hostile to big wins, it’s that any industry gets more and more hostile as it gets bigger (or across industries, they get more and more 
hostile as the stakes grow)\n\n\n\n\n\n\n[Yudkowsky][16:28]\nwell, the Eliezerverse has more weird novel profitable things, because it has more weirdness; and more weird novel profitable things, because it has fewer people diligently going around trying all the things that will sound obvious in retrospect; but it also has fewer weird novel profitable things, because it has fewer novel things that are allowed to be profitable.\n\n\n\n\n\n[Christiano][16:29]\n(I mean, the protein folding thing is a datapoint against my view, but it’s not that much evidence and it’s not getting bigger over time)\n\n\nyeah, but doesn’t your view expect more innovations for any given problem?\n\n\nlike, it’s not just that you think the universe of weird profitable applications is larger, you also think AI progress is just more driven by innovations, right?\n\n\notherwise it feels like the whole game is about whether you think that AI-automating-AI-progress is a weird application or something that people will try on\n\n\n\n\n\n\n[Yudkowsky][16:30]\nthe Eliezerverse is more strident about there being lots and lots more stuff like “ReLUs” and “batch normalization” and “transformers” in the design space in principle, and less strident about whether current people are being paid to spend all day looking for them rather than putting their efforts someplace with a nice predictable payoff.\n\n\n\n\n\n[Christiano][16:31]\nyeah, but then don’t you see big wins from the next transformers?\n\n\nand you think those just keep happening even as fields mature\n\n\n\n\n\n\n[Yudkowsky][16:31]\nit’s much more *permitted* in the Eliezerverse than in the Paulverse\n\n\n\n\n\n[Christiano][16:31]\nor you mean that they might slow down because people stop working on them?\n\n\n\n\n\n\n[Yudkowsky][16:32]\nthis civilization has mental problems that I do not understand well enough to predict, when it comes to figuring out how they’ll affect the field of AI as it scales\n\n\nthat said, I don’t see us getting to 
AGI on Stack More Layers.\n\n\nthere may perhaps be a bunch of stacked layers in an AGI but there will be more ideas to it than that.\n\n\nsuch that it would require far, far more than 10X compute to get the same results with a GPT-like architecture if that was literally possible\n\n\n\n\n\n[Christiano][16:33]\nit seems clear that it will be more than 10x relative to GPT\n\n\nI guess I don’t know what GPT-like architecture means, but from what you say it seems like normal progress would result in a non-GPT-like architecture\n\n\nso I don’t think I’m disagreeing with that\n\n\n\n\n\n\n[Yudkowsky][16:34]\nI also don’t think we’re getting there by accumulating a ton of shallow insights; I expect it takes at least one more big one, maybe 2-4 big ones.\n\n\n\n\n\n[Christiano][16:34]\ndo you think transformers are a big insight?\n\n\n(is adding soft attention to LSTMs a big insight?)\n\n\n\n\n\n\n[Yudkowsky][16:34]\nhard to deliver a verdict of history there\n\n\nno\n\n\n\n\n\n[Christiano][16:35]\n(I think the intellectual history of transformers is a lot like “take the LSTM out of the LSTM with attention”)\n\n\n\n\n\n\n[Yudkowsky][16:35]\n“how to train deep gradient descent without activations and gradients blowing up or dying out” was a big insight\n\n\n\n\n\n[Christiano][16:36]\nthat really really seems like the accumulation of small insights\n\n\n\n\n\n\n[Yudkowsky][16:36]\nthough the history of that big insight is legit complicated\n\n\n\n\n\n[Christiano][16:36]\nlike, residual connections are the single biggest thing\n\n\nand relus also help\n\n\nand batch normalization helps\n\n\nand attention is better than lstms\n\n\n\n\n\n\n[Yudkowsky][16:36]\nand the inits help (like xavier)\n\n\n\n\n\n[Christiano][16:36]\nyou could also call that the accumulation of big insights, but the point is that it’s an accumulation of a lot of stuff\n\n\nmostly developed in different places\n\n\n\n\n\n\n[Yudkowsky][16:37]\nbut on the Yudkowskian view the biggest insight of all was the 
one waaaay back at the beginning where they were initing by literally unrolling Restricted Boltzmann Machines\n\n\nand people began to say: *hey if we do this the activations and gradients don’t blow up or die out*\n\n\nit is not a history that strongly distinguishes the Paulverse from Eliezerverse, because that insight took time to manifest\n\n\nit was not, as I recall, the first thing that people said about RBM-unrolling\n\n\nand there were many little or not-really-so-little inventions that sustained the insight to deeper and deeper nets\n\n\nand those little inventions did not correspond to huge capability jumps immediately in the hands of their inventors, with, I think, the possible exception of transformers\n\n\nthough also I think back then people just didn’t do as much SoTA-measuring-and-comparing\n\n\n\n\n\n[Christiano][16:40]\n(I think transformers are a significantly smaller jump than previous improvements)\n\n\nalso a thing we could guess about though\n\n\n\n\n\n\n[Yudkowsky][16:40]\nright, but did the people who demoed the improvements demo them as big capability jumps?\n\n\nharder to do when you don’t have a big old well funded field with lots of eyes on SoTA claims\n\n\nthey weren’t dense in SoTA, I think?\n\n\nanyways, there has not, so far as I know, been an insight of similar size to that last one, since then\n\n\n\n\n\n[Christiano][16:42]\nalso 10-100x is still actually surprising to me for transformers\n\n\nso I guess lesson learned\n\n\n\n\n\n\n[Yudkowsky][16:43]\nI think if you literally took pre-transformer SoTA, and the transformer paper plus the minimum of later innovations required to make transformers scale at all, then as you tried scaling stuff to GPT-1 scale, the old stuff would probably just flatly not work or asymptote?\n\n\n\n\n\n[Christiano][16:44]\nin general if you take anything developed at scale X and try to scale it way past X I think it won’t work\n\n\nor like, it will work much worse than something that continues to get 
tweaked\n\n\n\n\n\n\n[Yudkowsky][16:44]\nI’m not sure I understand what you mean if you mean “10x-100x on transformers actually happened and therefore actually surprised me”\n\n\n\n\n\n[Christiano][16:44]\nyeah, I mean that given everything I know I am surprised that transformers were as large as a 100x improvement on translation\n\n\nin that paper\n\n\n\n\n\n\n[Yudkowsky][16:45]\nthough it may not help my own case, I remark that my generic heuristics say to have an assistant go poke a bit at that claim and see if your noticed confusion is because you are being more confused by fiction than by reality.\n\n\n\n\n\n[Christiano][16:45]\nyeah, I am definitely interested to understand a bit better what’s up there\n\n\nbut tentatively I’m sticking to my guns on the original prediction\n\n\nif you have random 10-20 person teams getting 100x speedups versus prior sota\n\n\nas we approach TAI\n\n\nthat’s so far from paulverse\n\n\n\n\n\n\n[Yudkowsky][16:46]\nlike, not about this case specially, just sheer reflex from “this assertion in a science paper is surprising” to “go poke at it”. many unsurprising and hence unpoked assertions will also be false, of course, but the surprising ones even more so on average.\n\n\n\n\n\n[Christiano][16:48]\nanyway, seems like a good approach to finding a concrete disagreement\n\n\nand even looking back at this conversation would be a start for diagnosing who is more right in hindsight\n\n\nmain thing is to say how quickly and in what industries I’m how surprised\n\n\n\n\n\n\n[Yudkowsky][16:49]\nI suspect you want to attach conditions to that surprise? Like, the domain must be sufficiently explored OR sufficiently economically important, because Paulverse also predicts(?) that as of a few years (3?? 2??? 15????) 
all the economically important stuff will have been poked with lots of compute already.\n\n\nand if there’s economically important domains where nobody’s tried throwing $50M at a model yet, that also sounds like not-the-Paulverse?\n\n\n\n\n\n[Christiano][16:50]\nI think the economically important prediction doesn’t really need that much of “within a few years”\n\n\nlike the total stakes have just been low to date\n\n\nnone of the deep learning labs are that close to paying for themselves\n\n\nso we’re not in the regime where “economic niche > R&D budget”\n\n\nwe are still in the paulverse-consistent regime where investment is driven by the hope of future wins\n\n\nthough paul is surprised that R&D budgets aren’t *even* larger than the economic value\n\n\n\n\n\n\n[Yudkowsky][16:51]\nwell, it’s a bit of a shame from the Eliezer viewpoint that the Paulverse isn’t falsifiable yet, then, considering that in the Eliezerverse it is allowed (but not mandated) for the world to end while most DL labs haven’t paid for themselves.\n\n\nalbeit I’m not sure that’s true of the present world?\n\n\nDM had that thing about “we just rejiggered cooling the server rooms for Google and paid back 1/3 of their investment in us” and that was years ago.\n\n\n\n\n\n[Christiano][16:52]\nI’ll register considerable skepticism\n\n\n\n\n\n\n[Yudkowsky][16:53]\nI don’t claim deep knowledge.\n\n\nBut if the imminence, and hence strength and falsifiability, of Paulverse assertions, depend on how much money all the deep learning labs are making, that seems like something we could ask OpenPhil to measure?\n\n\n\n\n\n[Christiano][16:55]\nit seems easier to just talk about ML tasks that people work on\n\n\nit seems really hard to arbitrate the “all the important niches are invested in” stuff in a way that’s correlated with takeoff\n\n\nwhereas the “we should be making a big chunk of our progress from insights” seems like it’s easier\n\n\nthough I understand that your view could be disjunctive, of 
either “AI will have hidden secrets that yield great intelligence,” or “there are hidden secret applications that yield incredible profit”\n\n\n(sorry that statement is crude / not very faithful)\n\n\nshould follow up on this in the future, off for now though\n\n\n\n\n\n\n[Yudkowsky][16:58]\n👋\n\n\n\n\n \n\n\n\nThe post [More Christiano, Cotra, and Yudkowsky on AI progress](https://intelligence.org/2021/12/06/more-christiano-cotra-and-yudkowsky-on-ai-progress/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-12-07T00:59:17Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "6b667a51159f8fac39d4d99d9408374c", "title": "Shulman and Yudkowsky on AI progress", "url": "https://intelligence.org/2021/12/04/shulman-and-yudkowsky-on-ai-progress/", "source": "miri", "source_type": "blog", "text": "This post is a transcript of a discussion between Carl Shulman and Eliezer Yudkowsky, following up on [a conversation with Paul Christiano and Ajeya Cotra](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-shulman-and-yudkowsky-on-ai-progress).\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | |\n| --- | --- |\n|  Chat by Carl and Eliezer  |  Other chat  |\n\n\n\n \n\n\n### 9.14. Carl Shulman’s predictions\n\n\n \n\n\n\n[Shulman][20:30]\nI’ll interject some points re the earlier discussion about how animal data relates to the ‘AI scaling to AGI’ thesis.\n\n\n1. In humans it’s claimed the IQ-job success correlation varies by job. For a scientist or doctor it might be 0.6+, for a low-complexity job more like 0.4, or more like 0.2 for simple repetitive manual labor. For animals, that presumably goes down a lot with less in the way of hands, or with a focus on low-density foods, as with baleen whales or grazers. If it’s 0.1 for animals like orcas or elephants, or 0.05, then there’s 4-10x less fitness return to smarts.\n\n\n2. But they outmass humans by more than 4-10x. 
Elephants 40x, orca 60x+. Metabolically (20 watts divided by BMR of the animal) the gap is somewhat smaller though, because of metabolic scaling laws (energy scales with the 3/4 or maybe 2/3 power).\n\n\n\n\n\nIf dinosaurs were poikilotherms, that’s a 10x difference in energy budget vs a mammal of the same size, although there is debate about their metabolism.\n\n\n3. If we’re looking for an innovation in birds and primates, there’s some evidence of ‘hardware’ innovation rather than ‘software.’ Herculano-Houzel reports in The Human Advantage (summarizing much prior neuron-counting work) different observational scaling laws for neuron number with brain mass for different animal lineages.\n\n\n\n> \n> We were particularly interested in cellular scaling differences that might have arisen in primates. If the same rules relating numbers of neurons to brain size in rodents ([6](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1805542/#idm139742175567824title))\n> \n> \n> The brain of the capuchin monkey, for instance, weighing 52 g, contains >3× more neurons in the cerebral cortex and ≈2× more neurons in the cerebellum than the larger brain of the capybara, weighing 76 g.\n> \n> \n> \n\n\n[Editor’s Note: Quote source is “[Cellular scaling rules for primate brains](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1805542/#idm139742175567824title).”]\n\n\nIn rodents, brain mass increases with neuron count as n^1.6, whereas it’s close to linear (n^1.1) in primates. For cortex neurons vs. cortex mass, the exponents are 1.7 and 1.0. In general birds and primates are outliers in neuron scaling with brain mass.\n\n\nNote also that bigger brains with lower neuron density have longer communication times from one side of the brain to the other. So primates and birds can have faster clock speeds for integrated thought than a large elephant or whale with similar neuron count.\n\n\n\n\n\n4. 
Elephants have brain mass ~2.5x human, and 3x neurons, but 98% of those are in the cerebellum (vs 80% or less in most animals; these are generally the tiniest neurons and seem to do a bunch of fine motor control). Human cerebral cortex has 3x the neurons of the elephant cortex (which has twice the mass). The giant cerebellum seems to be for controlling the very complex trunk.\n\n\n\n\n\nBlue whales get close to human neuron counts with much larger brains.\n\n\n\n\n\n5. As Paul mentioned, human brain volume correlation with measures of cognitive function after correcting for measurement error on the cognitive side is in the vicinity of 0.3-0.4 (might go a bit higher after controlling for non-functional brain volume variation, lower from removing confounds). The genetic correlation with cognitive function in this study is 0.24:\n\n\n\n\n\nSo it accounts for a minority of genetic influences on cognitive ability. We’d also expect a bunch of genetic variance that’s basically disruptive mutations in mutation-selection balance (e.g. schizophrenia seems to be a result of that, with schizophrenia alleles under negative selection, but a big mutational target, with the standing burden set by the level of fitness penalty for it; in niches with less return to cognition the mutational surface will be cleaned up less frequently and have more standing junk).\n\n\nOther sources of genetic variance might include allocation of attention/learning (curiosity and thinking about abstractions vs immediate sensory processing/alertness), length of childhood/learning phase, motivation to engage in chains of thought, etc.\n\n\nOverall I think there’s some question about how to account for the full genetic variance, but mapping it onto the ML experience with model size, experience and reward functions being key looks compatible with the biological evidence. 
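[Editor’s Note: The scaling arithmetic in points 2–4 can be sanity-checked with a few lines of Python. This is an illustrative sketch only, assuming the ~3/4-power metabolic law (Kleiber) and the rodent n^1.6 / primate n^1.1 brain-mass exponents quoted above; the 40x/60x mass ratios are the figures from point 2.]

```python
# Illustrative back-of-the-envelope numbers; exponents are assumptions
# taken from the discussion above (Kleiber ~3/4; rodent n^1.6 vs. primate n^1.1).

def bmr_ratio(mass_ratio, exponent=0.75):
    """Basal metabolic rate scales roughly as mass^(3/4) (Kleiber's law)."""
    return mass_ratio ** exponent

def neuron_gain(brain_mass_factor, mass_exponent):
    """If brain mass grows as n^k, then neuron count n grows as mass^(1/k)."""
    return brain_mass_factor ** (1.0 / mass_exponent)

# Elephants outmass humans ~40x, orcas ~60x+, but the metabolic gap is
# smaller under the 3/4-power law:
print(round(bmr_ratio(40), 1))  # -> 15.9 (not 40)
print(round(bmr_ratio(60), 1))  # -> 21.6 (not 60)

# A 10x larger brain buys different neuron gains by lineage:
print(round(neuron_gain(10, 1.6), 1))  # rodents:  -> 4.2x neurons
print(round(neuron_gain(10, 1.1), 1))  # primates: -> 8.1x neurons
```

So a 10x brain-mass increase roughly doubles the neuron payoff for a primate-style scaling law relative to a rodent-style one, which is the sense in which primates and birds are outliers.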
I lean towards it, although it’s not cleanly and conclusively shown.\n\n\nRegarding economic impact of AGI, I do not buy the ‘regulation strangles all big GDP boosts’ story.\n\n\nThe BEA breaks down US GDP by industry here (page 11):\n\n\n\n\n\nAs I work through sectors and the rollout of past automation I see opportunities for large-scale rollout that is not heavily blocked by regulation. Manufacturing is still trillions of dollars, and robotic factories are permitted and produced under current law, with the limits being more about which tasks the robots work for at low enough cost (e.g. this stopped Tesla plans for more completely robotic factories). Also worth noting manufacturing is mobile and new factories are sited in friendly jurisdictions.\n\n\nSoftware to control agricultural machinery and food processing is also permitted.\n\n\nWarehouses are also low-regulation environments with logistics worth hundreds of billions of dollars. See Amazon’s robot-heavy warehouses limited by robotics software.\n\n\nDriving is hundreds of billions of dollars, and Tesla has been permitted to use Autopilot, and there has been a lot of regulator enthusiasm for permitting self-driving cars with humanlike accident rates. Waymo still hasn’t reached that it seems and is lowering costs.\n\n\nRestaurants/grocery stores/hotels are around a trillion dollars. Replacing humans in vision/voice tasks to take orders, track inventory (Amazon Go style), etc is worth hundreds of billions there and mostly permitted. 
Robotics cheap enough to replace low-wage labor there would also be valuable (although a lower priority than high-wage work if compute and development costs are similar).\n\n\nSoftware is close to a half trillion dollars and the internals of software development are almost wholly unregulated.\n\n\nFinance is over a trillion dollars, with room for AI in sales and management.\n\n\nSales and marketing are big and fairly unregulated.\n\n\nIn highly regulated and licensed professions like healthcare and legal services, you can still see a licensee mechanically administer the advice of the machine, amplifying their reach and productivity.\n\n\nEven in housing/construction there’s still great profits to be made by improving the efficiency of what construction is allowed (a sector worth hundreds of billions).\n\n\nIf you’re talking about legions of super charismatic AI chatbots, they could be doing sales, coaching human manual labor to effectively upskill it, and providing the variety of activities discussed above. They’re enough to more than double GDP, even with strong Baumol effects/cost disease, I’d say.\n\n\nAlthough of course if you have AIs that can do so much the wages of AI and hardware researchers will be super high, and so a lot of that will go into the intelligence explosion, while before that various weaknesses that prevent full automation of AI research will also mess up activity in these other sectors to varying degrees.\n\n\nRe discontinuity and progress curves, I think Paul is right. 
AI Impacts went to a lot of effort assembling datasets looking for big jumps on progress plots, and indeed nukes are an extremely high percentile for discontinuity, and were developed by the biggest spending power (yes other powers could have bet more on nukes, but didn’t, and that was related to the US having more to spend and putting more in many bets), with the big gains in military power per $ coming with the hydrogen bomb and over the next decade.\n\n\n\n\n\nFor measurable hardware and software progress (Elo in games, loss on defined benchmarks), you have quite continuous hardware progress, and software progress that is on the same ballpark, and not drastically jumpy (like 10 year gains in 1), moreso as you get to metrics used by bigger markets/industries.\n\n\nI also agree with Paul’s description of the prior Go trend, and how DeepMind increased $ spent on Go software enormously. That analysis was a big part of why I bet on AlphaGo winning against Lee Sedol at the time (the rest being extrapolation from the Fan Hui version and models of DeepMind’s process for deciding when to try a match).\n\n\n\n\n\n\n[Yudkowsky][21:38]\nI’m curious about how much you think these opinions have been arrived at independently by yourself, Paul, and the rest of the OpenPhil complex?\n\n\n\n\n\n[Cotra][21:44]\nLittle of Open Phil’s opinions are independent of Carl, the source of all opinions\n\n\n\n\n\n| | |\n| --- | --- |\n| [Yudkowsky: 😆] | [Ngo: 😆] |\n\n\n\n\n\n\n\n\n[Shulman][21:44]\nI did the brain evolution stuff a long time ago independently. Paul has heard my points on that front, and came up with some parts independently. 
I wouldn’t attribute that to anyone else in that ‘complex.’\n\n\nOn the share of the economy those are my independent views.\n\n\nOn discontinuities, that was my impression before, but the additional AI Impacts data collection narrowed my credences.\n\n\nTBC on the brain stuff I had the same evolutionary concern as you, which is why I investigated those explanations, and they still are not fully satisfying (without more micro-level data opening the black box of non-brain volume genetic variance and evolution over time).\n\n\n\n\n\n\n[Yudkowsky][21:50]\nso… when I imagine trying to deploy this style of thought myself to predict the recent past without benefit of hindsight, it returns a lot of errors. perhaps this is because I do not know how to use this style of thought, but.\n\n\nfor example, I feel like if I was GPT-continuing your reasoning from the great opportunities still available in the world economy, in early 2020, it would output text like:\n\n\n“There are many possible regulatory regimes in the world, some of which would permit rapid construction of mRNA-vaccine factories well in advance of FDA approval. Given the overall urgency of the pandemic some of those extra-USA vaccines would be sold to individuals or a few countries like Israel willing to pay high prices for them, which would provide evidence of efficacy and break the usual impulse towards regulatory uniformity among developed countries, not to mention the existence of less developed countries who could potentially pay smaller but significant amounts for vaccines. 
The FDA doesn’t seem likely to actively ban testing; they might under a Democratic regime, but Trump is already somewhat ideologically prejudiced against the FDA and would go along with the probable advice of his advisors, or just his personal impulse, to override any FDA actions that seemed liable to prevent tests and vaccines from making the problem just go away.”\n\n\n\n\n\n[Shulman][21:59]\nPharmaceuticals is a top 10% regulated sector, which is seeing many startups trying to apply AI to drug design (which has faced no regulatory barriers), which fits into the ordinary observed output of the sector. Your story is about regulation failing to improve relative to normal more than it in fact did (which is a dramatic shift, although abysmal relative to what would be reasonable).\n\n\nThat said, I did lose a 50-50 bet on US control of the pandemic under Trump (although I also correctly bet that vaccine approval and deployment would be historically unprecedentedly fast and successful due to the high demand).\n\n\n\n\n\n\n[Yudkowsky][22:02]\nit’s not impossible that Carl/Paul-style reasoning about the future – near future, or indefinitely later future? 
– would start to sound more reasonable to me if you tried writing out a modal-average concrete scenario that was full of the same disasters found in history books and recent news\n\n\nlike, maybe if hypothetically I knew how to operate this style of thinking, I would know how to add disasters automatically and adjust estimates for them; so you don’t need to say that to Paul, who also hypothetically knows\n\n\nbut I do not know how to operate this style of thinking, so I look at your description of the world economy and it seems like an endless list of cheerfully optimistic ingredients and the recipe doesn’t say how many teaspoons of disaster to add or how long to cook it or how it affects the final taste\n\n\n\n\n\n[Shulman][22:06]\nLike when you look at historical GDP stats and AI progress they are made up of a normal rate of insanity and screwups.\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][22:07]\non my view of reality, I’m the one who expects business-as-usual in GDP until shortly before the world ends, if indeed business-as-usual-in-GDP changes at all, and you have an optimistic recipe for Not That which doesn’t come with an example execution containing typical disasters?\n\n\n\n\n\n[Shulman][22:07]\nThings like failing to rush through neural network scaling over the past decade to the point of financial limitation on model size, insanity on AI safety, anti-AI regulation being driven by social media’s role in politics.\n\n\n\n\n\n\n[Yudkowsky][22:09]\nfailing to deploy 99% robotic cars to new cities using fences and electronic gates\n\n\n\n\n\n[Shulman][22:09]\nHistorical growth has new technologies and stupid stuff messing it up.\n\n\n\n\n\n\n[Yudkowsky][22:09]\nso many things one could imagine doing with current tech, and yet, they are not done, anywhere on Earth\n\n\n\n\n\n[Shulman][22:09]\nAI is going to be incredibly powerful tech, and after a historically typical haircut it’s still a lot bigger.\n\n\n\n\n\n\n[Yudkowsky][22:09]\nso 
some of this seems obviously driven by longer timelines in general\n\n\ndo you have things which, if they start to happen soonish and in advance of world GDP having significantly broken upward 3 years before then, cause you to say “oh no I’m in the Eliezerverse”?\n\n\n\n\n\n[Shulman][22:12]\nYou may be confusing my views and Paul’s.\n\n\n\n\n\n\n[Yudkowsky][22:12]\n“AI is going to be incredibly powerful tech” sounds like long timelines to me, though?\n\n\n\n\n\n[Shulman][22:13]\nNo.\n\n\n\n\n\n\n[Yudkowsky][22:13]\nlike, “incredibly powerful tech for longer than 6 months which has time to enter the economy”\n\n\nif it’s “incredibly powerful tech” in the sense of immediately killing everybody then of course we agree, but that didn’t seem to be the context\n\n\n\n\n\n[Shulman][22:15]\nI think broadly human-level AGI means intelligence explosion/end of the world in less than a year, but tons of economic value is likely to leak out before that from the combination of worse general intelligence with AI advantages like huge experience.\n\n\n\n\n\n\n[Yudkowsky][22:15]\nmy worldview permits but does not mandate a bunch of weirdly powerful shit that people can do a couple of years before the end, because that would sound like a typically messy and chaotic history-book scenario especially if it failed to help us in any way\n\n\n\n\n\n[Shulman][22:15]\nAnd the economic impact is increasing superlinearly (as later on AI can better manage its own introduction and not be held back by human complementarities on both the production side and introduction side).\n\n\n\n\n\n\n[Yudkowsky][22:16]\nmy worldview also permits but does not mandate that you get up to the chimp level, chimps are not very valuable, and once you can do fully AGI thought it compounds very quickly\n\n\nit feels to me like the Paul view wants something narrower than that, a specific story about a great economic boom, and it sounds like the Carl view wants something that from my perspective seems similarly 
narrow\n\n\nwhich is why I keep asking “can you perhaps be specific about what would count as Not That and thereby point to the Eliezerverse”\n\n\n\n\n\n[Shulman][22:18]\nWe’re in the Eliezerverse with huge kinks in loss graphs on automated programming/Putnam problems.\n\n\nNot from scaling up inputs but from a local discovery that is much bigger in impact than the sorts of jumps we observe from things like Transformers.\n\n\n\n\n\n\n[Yudkowsky][22:19]\n…my model of Paul didn’t agree with that being a prophecy-distinguishing sign to first order (to second order, my model of Paul agrees with Carl for reasons unbeknownst to me)\n\n\nI don’t think you need something very much bigger than Transformers to get sharp loss drops?\n\n\n\n\n\n[Shulman][22:19]\nnot the only disagreement\n\n\nbut that is a claim you seem to advance that seems bogus on our respective reads of the data on software advances\n\n\n\n\n\n\n[Yudkowsky][22:21]\nbut, sure, “huge kinks in loss graphs on automated programming / Putnam problems” sounds like something that is, if not mandated on my model, much more likely than it is in the Paulverse. though I am a bit surprised because I would not have expected Paul to be okay betting on that.\n\n\nlike, I thought it was an Eliezer-view unshared by Paul that this was a sign of the Eliezerverse.\n\n\nbut okeydokey if confirmed\n\n\nto be clear I do not mean to predict those kinks in the next 3 years specifically\n\n\nthey grow in probability on my model as we approach the End Times\n\n\n\n\n\n[Shulman][22:24]\nI also predict that AI chip usage is going to keep growing at enormous rates, and that the buyers will be getting net economic value out of them. The market is pricing NVDA (up more than 50x since 2014) at more than twice Intel because of the incredible growth rate, and it requires more crazy growth to justify the valuation (but still short of singularity). 
Although NVDA may be toppled by other producers.\n\n\nSimilarly for increasing spending on model size (although slower than when model costs were <$1M).\n\n\n\n\n\n\n[Yudkowsky][22:27]\nrelatively more plausible on my view, first because it’s arguably already happening (which makes it easier to predict) and second because that can happen with profitable uses of AI chips which hover around on the economic fringes instead of feeding into core production cycles (waifutech)\n\n\nit is easy to imagine massive AI chip usage in a world which rejects economic optimism and stays economically sad while engaging in massive AI chip usage\n\n\nso, more plausible\n\n\n\n\n\n[Shulman][22:28]\nWhat’s with the silly waifu example? That’s small relative to the actual big tech company applications (where they quickly roll it into their software/web services or internal processes, which is not blocked by regulation and uses their internal expertise). Super chatbots would be used as salespeople, counselors, non-waifu entertainment.\n\n\nIt seems randomly off from existing reality.\n\n\n\n\n\n\n[Yudkowsky][22:29]\nseems more… optimistic, Kurzweilian?… to suppose that the tech gets used correctly the way a sane person would hope it would be used\n\n\n\n\n\n[Shulman][22:29]\nLike this is actual current use.\n\n\nHollywood and videogames alone are much bigger than anime, software is bigger than that, Amazon/Walmart logistics is bigger.\n\n\n\n\n\n\n[Yudkowsky][22:31]\nCompanies using super chatbots to replace customer service they already hated and previously outsourced, with a further drop in quality, is permitted by the Dark and Gloomy Attempt To Realistically Continue History model\n\n\nI am on board with wondering if we’ll see sufficiently advanced videogame AI, but I’d point out that, again, that doesn’t cycle core production loops harder\n\n\n\n\n\n[Shulman][22:33]\nOK, using an example of allowable economic activity that obviously is shaving off more than an order of magnitude on 
potential market is just misleading compared to something like FAANGSx10.\n\n\n\n\n\n\n[Yudkowsky][22:34]\nso, like, if I was looking for places that would break upward, I would be like “universal translators that finally work”\n\n\nbut I was also like that when GPT-2 came out and it hasn’t happened, even though you would think GPT-2 indicated we could get enough real understanding inside a neural network that, cognition-wise, it would suffice to do pretty good translation\n\n\nthere are huge current economic gradients pointing to the industrialization of places that, you might think, could benefit a lot from universal seamless translation\n\n\n\n\n\n[Shulman][22:36]\nCurrent translation industry is tens of billions, English learning bigger.\n\n\n\n\n\n\n[Yudkowsky][22:36]\nAmazon logistics are an interesting point, but there’s the question of how much economic benefit is produced by automating all of it at once; Amazon cannot ship 10x as much stuff even if their warehouse costs go down by 10x.\n\n\n\n\n\n[Shulman][22:37]\nDefinitely hundreds of billions of dollars of annual value created from that, e.g. by easing global outsourcing.\n\n\n\n\n\n\n[Yudkowsky][22:37]\nif one is looking for places where huge economic currents could be produced, AI taking down what was previously a basic labor market barrier would sound as plausible to me as many other things\n\n\n\n\n\n[Shulman][22:37]\nAmazon has increased sales faster than it lowered logistics costs, there’s still a ton of market share to take.\n\n\n\n\n\n\n[Yudkowsky][22:37]\nI am *able* to generate cheerful scenarios, eg if I need them for an SF short story set in the near future where billions of people are using AI tech on a daily basis and this has generated trillions in economic value\n\n\n\n\n\n[Shulman][22:38]\nBedtime for me though.\n\n\n\n\n\n\n[Yudkowsky][22:39]\nI don’t feel like particular cheerful scenarios like that have very much of a track record of coming *true*. 
I would not be shocked if the next GPT-jump permits that tech, and I would then not be shocked if use of AI translation actually did scale a lot. I would be much more impressed, with Earth having gone well for once and better than I expected, if that actually produced significantly more labor mobility and contributed to world GDP.\n\n\nI just don’t actively, >50% expect things going right like that. It seems to me that more often in real life, things do not go right like that, even if it seems quite easy to imagine them going right.\n\n\ngood night!\n\n\n\n\n\n10. September 22 conversation\n-----------------------------\n\n\n \n\n\n### 10.1. Scaling laws\n\n\n \n\n\n\n[Shah][3:05]\nMy attempt at a reframing:\n\n\nPlaces of agreement:\n\n\n* Trend extrapolation / things done by superforecasters seem like the right way to get a first-pass answer\n* Significant intuition has to go into exactly which trends to extrapolate and why (e.g. should GDP/GWP be extrapolated as “continue to grow at 3% per year” or as “growth rate continues to increase leading to singularity”)\n* It is possible to foresee deviations in trends based on qualitative changes in underlying drivers. In the Paul view, this often looks like switching from one trend to another. (For example: instead of “continue to grow at 3%” you notice that feedback loops imply hyperbolic growth, and then you look further back in time and notice that that’s the trend on a longer timescale. 
Or alternatively, you realize that you can’t just extrapolate AI progress because you can’t keep doubling money invested every few months, and so you start looking at trends in money invested and build a simple model based on that, which you still describe as “basically trend extrapolation”.)\n\n\nPlaces of disagreement:\n\n\n* Eliezer / Nate: There is an underlying driver of impact on the world which we might call “general cognition” or “intelligence” or “consequentialism” or “the-thing-spotlighted-by-coherence-arguments”, and the zero-to-one transition for that underlying driver will go from “not present at all” to “at or above human-level”, without something in between. Rats, dogs and chimps might be impressive in some ways but they do not have this underlying driver of impact; the zero-to-one transition happened between chimps and humans.\n* Paul (might be closer to my views, idk): There isn’t this underlying driver (or, depending on definitions, the zero-to-one transition happens well before human-level intelligence / impact). There are just more and more general heuristics, and correspondingly higher and higher impact. The case with evolution is unusually fast because the more general heuristics weren’t actually that useful.\n\n\nTo the extent this is accurate, it doesn’t seem like you really get to make a bet that resolves before the end times, since you agree on basically everything until the point at which Eliezer predicts that you get the zero-to-one transition on the underlying driver of impact. I think all else equal you probably predict that Eliezer has shorter timelines to the end times than Paul (and that’s where you get things like “Eliezer predicts you don’t have factory-generating factories before the end times whereas Paul does”). (Of course, all else is not equal.)\n\n\n\n\n\n\n[Bensinger][3:36]\n\n> \n> but you know enough to have strong timing predictions, e.g. 
your bet with caplan\n> \n> \n> \n\n\nEliezer said in Jan 2017 that the Caplan bet was kind of a joke: . Albeit “I suppose one might draw conclusions from the fact that, when I was humorously imagining what sort of benefit I could get from exploiting this amazing phenomenon, my System 1 thought that having the world not end before 2030 seemed like the most I could reasonably ask.”\n\n\n\n\n\n\n[Cotra][10:01]\n@RobBensinger sounds like the joke is that he thinks timelines are even shorter, which strengthens my claim about strong timing predictions?\n\n\nNow that we clarified up-thread that Eliezer’s position is *not* that there was a giant algorithmic innovation in between chimps and humans, but rather that there was some innovation in between dinosaurs and some primate or bird that allowed the primate/bird lines to scale better, I’m now confused about why it still seems like Eliezer expects a major innovation in the future that leads to deep/general intelligence. If the evidence we have is that evolution had *some* innovation like this, why not think that the invention of neural nets in the 60s or the invention of backprop in the 80s or whatever was the corresponding innovation in AI development? Why put it in the future? (Unless I’m misunderstanding and Eliezer doesn’t really place very high probability on “AGI is bottlenecked by an insight that lets us figure out how to get the deep intelligence instead of the shallow one”?)\n\n\nAlso if Eliezer would count transformers and so on as the kind of big innovation that would lead to AGI, then I’m not sure we disagree. 
I feel like that sort of thing is factored into the software progress trends used to extrapolate progress, so projecting those forward folds in expectations of future transformers\n\n\nBut it seems like Eliezer still expects *one* or a few innovations that are much larger in impact than the transformer?\n\n\nI’m also curious what Eliezer thinks of the claim “extrapolating trends automatically folds in the world’s inadequacy and stupidness because the past trend was built from everything happening in the world including the inadequacy”\n\n\n\n\n\n\n[Yudkowsky][10:24]\nAjeya asked before, and I see I didn’t answer:\n\n\n\n> \n> what about hardware/software R&D wages? will they get up to $20m/yr for good ppl?\n> \n> \n> \n\n\nIf you mean the best/luckiest people, they’re already there. If you mean that say Mike Blume starts getting paid $20m/yr base salary, then I cheerfully say that I’m willing to call that a narrower prediction of the Paulverse than of the Eliezerverse.\n\n\n\n> \n> will someone train a 10T param model before end days?\n> \n> \n> \n\n\nWell, of course, because now it’s a headline figure and Goodhart’s Law applies, and the Earlier point where this happens is where somebody trains a useless 10T param model using some much cheaper training method like MoE just to be the first to get the headline where they say they did that, if indeed that hasn’t happened already.\n\n\nBut even apart from that, a 10T param model sure sounds lots like a steady stream of headlines we’ve already seen, even for cases where it was doing something useful like GPT-3, so I would not feel surprised by more headlines like this.\n\n\nI will, however, be alarmed (not surprised) relatively more by ability improvements, than headline figure improvements, because I am not very impressed by 10T param models per se.\n\n\nIn fact I will probably be more surprised by ability improvements after hearing the 10T figure, than my model of Paul will claim to be, because my model of Paul much 
more associates 10T figures with capability increases.\n\n\nThough I don’t understand why this prediction success isn’t more than counterbalanced by an implied sequence of earlier failures in which Paul’s model permitted much more impressive things to happen from 1T Goodharted-headline models, that didn’t actually happen, that I expected to not happen – eg the current regime with MoE headlines – so that by the time that an impressive 10T model comes along and Imaginary Paul says ‘Ah yes I claim this for a success’, Eliezer’s reply is ‘I don’t understand the aspect of your theory which supposedly told you in advance that this 10T model would scale capabilities, but not all the previous 10T models or the current pointless-headline 20T models where that would be a prediction failure. From my perspective, people eventually scaled capabilities, and param-scaling techniques happened to be getting more powerful at the same time, and so of course the Earliest tech development to be impressive was one that included lots of params. It’s not a coincidence, but it’s also not a triumph for the param-driven theory per se, because the news stories look similar AFAICT in a timeline where it’s 60% algorithms and 40% params.”\n\n\n\n\n\n[Cotra][10:35]\nMoEs have very different scaling properties, for one thing they run on way fewer FLOP/s (which is just as if not more important than params, though we use params as a shorthand when we’re talking about “typical” models which tend to have small constant FLOP/param ratios). If there’s a model *with a similar architecture* to the ones we have scaling laws about now, then at 10T params I’d expect it to have the performance that the scaling laws would expect it to have\n\n\nMaybe something to bet about there. 
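A minimal sketch of the kind of scaling-law extrapolation being proposed here. The power-law form and coefficients below are roughly the published fit for parameter-limited language-model loss from the OpenAI scaling-laws work, but every number should be treated as an illustrative assumption, not a forecast about any actual 10T-parameter model:

```python
# Illustrative Kaplan-style power law: L(N) = (N_c / N) ** alpha_N.
# alpha_N and N_c are assumed values (approximately the published fit);
# nothing here is a real prediction.
ALPHA_N = 0.076   # assumed scaling exponent
N_C = 8.8e13      # assumed critical parameter count

def predicted_loss(n_params: float) -> float:
    """Cross-entropy loss the fitted power law assigns to an n_params model."""
    return (N_C / n_params) ** ALPHA_N

# Extrapolating along the fitted line, from roughly GPT-2 scale to 10T params:
for n in (1.5e9, 1.75e11, 1e13):
    print(f"{n:8.2e} params -> predicted loss {predicted_loss(n):.2f}")
```

A bet on an “upward or downward trend break” is then just a bet on whether the measured loss at the new scale lands above or below `predicted_loss(n)`.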
Would you say 10T param GPT-N would perform worse than the scaling law extraps would predict?\n\n\nIt seems like if we just look at a ton of scaling laws and see where they predict benchmark perf to get, then you could either bet on an upward or downward trend break and there could be a bet?\n\n\nAlso, if “large models that aren’t that impressive” is a ding against Paul’s view, why isn’t GPT-3 being so much better than GPT-2 which in turn was better than GPT-1 with little fundamental architecture changes not a plus? It seems like you often cite GPT-3 as evidence *for* your view\n\n\nBut Paul (and Dario) at the time predicted it’d work. The scaling laws work was before GPT-3 and prospectively predicted GPT-3’s perf\n\n\n\n\n\n\n[Yudkowsky][10:55]\nI guess I should’ve mentioned that I knew MoEs ran on many fewer FLOP/s because others may not know I know that; it’s an obvious charitable-Paul-interpretation but I feel like there’s multiple of those and I don’t know which, if any, Paul wants to claim as obvious-not-just-in-retrospect.\n\n\nLike, ok, sure people talk about model size. But maybe we really want to talk about gradient descent training ops; oh, wait, actually we meant to talk about gradient descent training ops with a penalty figure for ops that use lower precision, but nowhere near a 50% penalty for 16-bit instead of 32-bit; well, no, really the obvious metric is the one in which the value of a training op scales logarithmically with the total computational depth of the gradient descent (I’m making this up, it’s not an actual standard anywhere), and that’s why this alternate model that does a ton of gradient descent ops while making less use of the actual limiting resource of inter-GPU bandwidth is not as effective as you’d predict from the raw headline figure about gradient descent ops. 
And of course we don’t want to count ops that are just recomputing a gradient checkpoint, ha ha, that would be silly.\n\n\nIt’s not impossible to figure out these adjustments in advance.\n\n\nBut part of me also worries that – though this is more true of other EAs who will read this, than Paul or Carl, whose skills I do respect to some degree – that if you ran an MoE model with many fewer gradient descent ops, and it did do something impressive with 10T params that way, people would promptly do a happy dance and say “yay scaling” not “oh wait huh that was not how I thought param scaling worked”. After all, somebody originally said “10T”, so clearly they were right!\n\n\nAnd even with respect to Carl or Paul I worry about looking back and making “obvious” adjustments and thinking that a theory sure has been working out fine so far.\n\n\nTo be clear, I do consider GPT-3 as noticeable evidence for Dario’s view and for Paul’s view. The degree to which it worked well was more narrowly a prediction of those models than mine.\n\n\nThing about narrow predictions like that, if GPT-4 does not scale impressively, the theory loses significantly more Bayes points than it previously gained.\n\n\nSaying “this previously observed trend is very strong and will surely continue” will quite often let you pick up a few pennies in front of the steamroller, because not uncommonly, trends do continue, but then they stop and you lose more Bayes points than you previously gained.\n\n\nI do think of Carl and Paul as being better than this.\n\n\nBut I also think of the average EA reading them as being fooled by this.\n\n\n\n\n\n[Shulman][11:09]\nThe scaling laws experiments held architecture fixed, and that’s the basis of the prediction that GPT-3 will be along the same line that held over previous OOM, most definitely not switch to MoE/Switch Transformer with way less resources.\n\n\n\n\n\n| |\n| --- |\n| [Cotra: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][11:10]\nYou can redraw your graphs afterwards 
so that a variant version of Moore’s Law continued apace, but back in 2000, everyone sure was impressed with CPU GHz going up year after year and computers getting tangibly faster, and that version of Moore’s Law sure did not continue. Maybe some people were savvier and redrew the graphs as soon as the physical obstacles became visible, but of course, other people had predicted the end of Moore’s Law years and years before then. Maybe if superforecasters had been around in 2000 we would have found that they all sorted it out successfully, maybe not.\n\n\nSo, GPT-3 was $12m to train. In May 2022 it will be 2 years since GPT-3 came out. It feels to me like the Paulian view as I know how to operate it, says that GPT-3 has now got some revenue and exhibited applications like Codex, and was on a clear trend line of promise, so somebody ought to be willing to invest $120m in training GPT-4, and then we get 4x algorithmic speedups and cost improvements since then (iirc Paul said 2x/yr above? though I can’t remember if that was his viewpoint or mine?) so GPT-4 should have 40x ‘oomph’ in some sense, and what that translates to in terms of intuitive impact ability, I don’t know.\n\n\n\n\n\n[Shulman][11:18]\nThe OAI paper had 16 months (and is probably a bit low because in the earlier data people weren’t optimizing for hardware efficiency much): \n\n\n\n> \n> so GPT-4 should have 40x ‘oomph’ in some sense, and what that translates to in terms of intuitive impact ability, I don’t know.\n> \n> \n> \n\n\nProjecting this: \n\n\n\n\n\n\n[Yudkowsky][11:19]\n30x then. I would not be terribly surprised to find that results on benchmarks continue according to graph, and yet, GPT-4 somehow does not seem very much smarter than GPT-3 in conversation.\n\n\n\n\n\n[Shulman][11:20]\nThere are also graphs of the human impressions of sense against those benchmarks and they are well correlated. 
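The 40x-then-30x “oomph” revision is straightforward arithmetic on the assumptions named just above (10x spending growth over two years, algorithmic efficiency doubling every 12 vs. every 16 months); a quick sketch:

```python
def oomph(spend_multiplier: float, algo_doubling_months: float, months: float) -> float:
    """Effective training 'oomph': spending growth times compounding
    algorithmic efficiency (which doubles every algo_doubling_months)."""
    return spend_multiplier * 2 ** (months / algo_doubling_months)

print(round(oomph(10, 12, 24)))  # 2x/yr assumption -> 40
print(round(oomph(10, 16, 24)))  # 16-month doubling -> 28, i.e. roughly "30x"
```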
I expect that to continue too.\n\n\n\n\n\n| |\n| --- |\n| [Cotra: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][11:21]\nStuff coming uncorrelated that way, sounds like some of the history I lived through, where people managed to make the graphs of Moore’s Law seem to look steady by rejiggering the axes, and yet, between 1990 and 2000 home computers got a whole lot faster, and between 2010 and 2020 they did not.\n\n\nThis is obviously more likely (from my perspective) to break down anywhere between GPT-3 and GPT-6, than between GPT-3 and GPT-4.\n\n\nIs this also part of the Carl/Paul worldview? Because I implicitly parse a lot of the arguments as assuming a necessary premise which says, “No, this continues on until doomsday and I know it Kurzweil-style.”\n\n\n\n\n\n[Shulman][11:23]\nYeah I expect trend changes to happen, more as you go further out, and especially more when you see other things running into barriers or contradictions. Re language models there is some of that coming up with different scaling laws colliding when the models get good enough to extract almost all the info per character (unless you reconfigure to use more info-dense data).\n\n\n\n\n\n\n[Yudkowsky][11:23]\nWhere “this” is the Yudkowskian “the graphs are fragile and just break down one day, and their meanings are even more fragile and break down earlier”.\n\n\n\n\n\n[Shulman][11:25]\nScaling laws working over 8 or 9 OOM makes me pretty confident of the next couple, not confident about 10 further OOM out.\n\n\n\n\n\n \n\n\n\nThe post [Shulman and Yudkowsky on AI progress](https://intelligence.org/2021/12/04/shulman-and-yudkowsky-on-ai-progress/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-12-04T16:00:23Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "e14aa80238c61f934249d6a99d456e0c", "title": "Biology-Inspired AGI Timelines: The Trick That Never Works", "url": 
"https://intelligence.org/2021/12/03/biology-inspired-agi-timelines-the-trick-that-never-works/", "source": "miri", "source_type": "blog", "text": "– 1988 –\n--------\n\n\n**Hans Moravec:**  Behold my book *Mind Children.*  Within, I project that, in 2010 or thereabouts, we shall achieve strong AI.  I am not calling it “Artificial General Intelligence” because this term will not be coined for another 15 years or so.\n\n\n**Eliezer** (who is not actually on the record as saying this, because the real Eliezer is, in this scenario, 8 years old; this version of Eliezer has all the meta-heuristics of Eliezer from 2021, but none of that Eliezer’s anachronistic knowledge):  Really?  That sounds like a very difficult prediction to make correctly, since it is about the future, which is famously hard to predict.\n\n\n**Imaginary Moravec:**  Sounds like a [fully general counterargument](https://www.lesswrong.com/tag/fully-general-counterargument) to me.\n\n\n**Eliezer:**  Well, it is, indeed, a fully general counterargument *against futurism.*  Successfully predicting the unimaginably far future – that is, more than 2 or 3 years out, or sometimes less – is something that human beings seem to be quite bad at, by and large.\n\n\n**Moravec:**  I predict that, 4 years from this day, in 1992, the Sun will rise in the east.\n\n\n**Eliezer:**  Okay, let me qualify that.  Humans seem to be quite bad at predicting the future whenever we need to predict anything at all *new and unfamiliar,* rather than the Sun continuing to rise every morning until it finally gets eaten.  I’m not saying it’s impossible to ever validly predict something novel!  Why, even if that was impossible, how could *I* know it for sure?  By extrapolating from my own personal inability to make predictions like that?  Maybe I’m just bad at it myself.  
But any time somebody claims that some particular novel aspect of the far future is predictable, they justly have a significant burden of prior skepticism to overcome.\n\n\nMore broadly, we should not expect a good futurist to give us a generally good picture of the future.  We should expect a great futurist to single out a few *rare narrow aspects* of the future which are, somehow, *exceptions* to the usual rule about the future not being very predictable.\n\n\nI do agree with you, for example, that we shall *at some point* see Artificial General Intelligence.  This seems like a rare predictable fact about the future, even though it is about a novel thing which has not happened before: we keep trying to crack this problem, we make progress albeit slowly, the problem must be solvable in principle because human brains solve it, eventually it will be solved; this is not a logical necessity, but it sure seems like the way to bet.  “AGI eventually” is predictable in a way that it is *not* predictable that, e.g., the nation of Japan, presently upon the rise, will achieve economic dominance over the next decades – to name something else that present-day storytellers of 1988 are talking about.\n\n\nBut *timing* the novel development correctly?  *That* is almost never done, not until things are 2 years out, and often not even then.  Nuclear weapons were called, but not nuclear weapons in 1945; heavier-than-air flight was called, but not flight in 1903.  In both cases, people said two years earlier that it wouldn’t be done for 50 years – or said, decades too early, that it’d be done shortly.  There’s a difference between worrying that we may eventually get a serious global pandemic, worrying that eventually a lab accident may lead to a global pandemic, and forecasting that a global pandemic will start in November of 2019.\n\n\n\n**Moravec:**  You should read my book, my friend, into which I have put much effort.  
In particular – though it may sound impossible to forecast, to the likes of yourself – I have carefully examined a graph of computing power in single chips and the most powerful supercomputers over time.  This graph looks surprisingly regular!  Now, of course not all trends can continue forever; but I have considered the arguments that Moore’s Law will break down, and found them unconvincing.  My book spends several chapters discussing the particular reasons and technologies by which we might expect this graph to *not* break down, and continue, such that humanity *will* have, by 2010 or so, supercomputers which can perform 10 trillion operations per second.\\*\n\n\nOh, and also my book spends a chapter discussing the retina, the part of the brain whose computations we understand in the most detail, in order to estimate how much computing power the human brain is using, arriving at a figure of 10^13 ops/sec.  This neuroscience and computer science may be a bit hard for the layperson to follow, but I assure you that I am in fact an experienced hands-on practitioner in robotics and computer vision.\n\n\nSo, as you can see, we should first get strong AI somewhere around 2010.  I may be off by an order of magnitude in one figure or another; but even if I’ve made two errors in the same direction, that only shifts the estimate by 7 years or so.\n\n\n(\\*)  Moravec just about nailed this part; the actual year was 2008.\n\n\n**Eliezer:**  I sure would be amused if we *did* in fact get strong AI somewhere around 2010, which, for all *I* know at this point in this hypothetical conversation, could totally happen!  Reversed stupidity is not intelligence, after all, and just because that is a completely broken justification for predicting 2010 doesn’t mean that it cannot happen that way.\n\n\n**Moravec:**  Really now.  
Would you care to enlighten me as to how I reasoned so wrongly?\n\n\n**Eliezer:**  Among the reasons why the Future is so hard to predict, in general, is that the sort of answers we want tend to be the products of lines of causality with multiple steps and multiple inputs.  Even when we can guess a single fact that *plays some role* in producing the Future – which is not of itself all that rare – usually the answer the storyteller wants depends on *more facts* than that single fact.  Our ignorance of any one of those other facts can be enough to torpedo our whole line of reasoning – *in practice,* not just as a matter of possibilities.  You could say that the art of exceptions to Futurism being impossible consists in finding those rare things that you can predict despite being almost entirely ignorant of most concrete inputs into the concrete scenario.  Like predicting that AGI will happen *at some point*, despite not knowing the design for it, or who will make it, or how.\n\n\nMy own contribution to the Moore’s Law literature consists of Moore’s Law of Mad Science:  “Every 18 months, the minimum IQ required to destroy the Earth drops by 1 point.”  Even if this serious-joke was an absolutely true law, and aliens told us it was absolutely true, we’d still have no ability whatsoever to predict thereby when the Earth would be destroyed, because we’d have no idea what that minimum IQ was right now or at any future time.  
We would know that in general the Earth had a serious problem that needed to be addressed, because we’d know in general that destroying the Earth kept on getting easier every year; but we would not be able to time *when* that would become an imminent emergency, until we’d seen enough specifics that the crisis was already upon us.\n\n\nIn the case of your prediction about strong AI in 2010, I might put it as follows:  The timing of AGI could be seen as a product of three factors, one of which you can try to extrapolate from existing graphs, and two of which you don’t know at all.  Ignorance of any one of them is enough to invalidate the whole prediction.\n\n\nThese three factors are:\n\n\n* The availability of computing power over time, which may be quantified, and appears steady when graphed;\n* The rate of progress in knowledge of cognitive science and algorithms over time, which is much harder to quantify;\n* A function that is a latent background parameter, for the amount of computing power required to create AGI as a function of any particular level of knowledge about cognition; and about this we know almost nothing.\n\n\nOr to rephrase:  Depending on how much you and your civilization know about AI-making – how much you know about cognition and computer science – it will take you a variable amount of computing power to build an AI.  If you really knew what you were doing, for example, I confidently predict that you could build a mind at least as powerful as a human mind, while using *fewer* floating-point operations per second than a human brain is making useful use of –\n\n\n**Chris Humbali:**  Wait, did you just say “confidently”?  How could you possibly know *that* with confidence?  How can you criticize Moravec for being too confident, and then, in the next second, turn around and be confident of something yourself?  
Doesn’t that make you a massive hypocrite?\n\n\n**Eliezer:**  Um, who are you again?\n\n\n**Humbali:**  I’m the cousin of Pat Modesto from [your previous dialogue on Hero Licensing](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing)!  Pat isn’t here in person because “Modesto” looks unfortunately like “Moravec” on a computer screen.  And also their first name looks a bit like “Paul” who is not meant to be referenced either.  So today *I* shall be your true standard-bearer for good calibration, intellectual humility, the outside view, and reference class forecasting –\n\n\n**Eliezer:**  Two of these things are not like the other two, in my opinion; and Humbali and Modesto do not understand how to operate any of the four correctly, in my opinion; but anybody who’s read “[Hero Licensing](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing)” should already know I believe that.\n\n\n**Humbali:**  – and I don’t see how Eliezer can possibly be so *confident,* after all his humble talk of the difficulty of futurism, that it’s possible to build a mind ‘as powerful as’ a human mind using ‘less computing power’ than a human brain.\n\n\n**Eliezer:**  It’s overdetermined by multiple lines of inference.  We might first note, for example, that the human brain runs very slowly in a *serial* sense and tries to make up for that with massive parallelism.  It’s an obvious truth of computer science that while you can use 1000 serial operations per second to emulate 1000 parallel operations per second, the reverse is not in general true.\n\n\nTo put it another way: if you had to build a spreadsheet or a word processor on a computer running at 100Hz, you might also need a billion processing cores and massive parallelism in order to do enough cache lookups to get anything done; that wouldn’t mean the computational labor you were performing was *intrinsically* that expensive.  
Since modern chips are massively serially faster than the neurons in a brain, and the direction of conversion is asymmetrical, we should expect that there are tasks which are immensely expensive to perform in a massively parallel neural setup, which are much cheaper to do with serial processing steps, and the reverse is *not* symmetrically true.\n\n\nA sufficiently adept builder can build general intelligence more cheaply in total operations per second, if they’re allowed to line up a billion operations one after another per second, versus lining up only 100 operations one after another.  I don’t bother to qualify this with “very probably” or “almost certainly”; it is the sort of proposition that a clear thinker should simply accept as obvious and move on.\n\n\n**Humbali:**  And is it certain that neurons can perform only 100 serial steps one after another, then?  As you say, ignorance about one fact can obviate knowledge of any number of others.\n\n\n**Eliezer:**  A typical neuron firing as fast as possible can do maybe 200 spikes per second, a few rare neuron types used by eg bats to echolocate can do 1000 spikes per second, and the vast majority of neurons are not firing that fast at any given time.  The usual and proverbial rule in neuroscience – the sort of academically respectable belief I’d expect you to respect even more than I do – is called “the 100-step rule”, that any task a human brain (or mammalian brain) can do on perceptual timescales, must be doable with no more than 100 *serial* steps of computation – no more than 100 things that get computed one after another.  
Or even less if the computation is running off spiking frequencies instead of individual spikes.\n\n\n**Moravec:**  Yes, considerations like that are part of why I’d defend my estimate of 10^13 ops/sec for a human brain as being reasonable – more reasonable than somebody might think if they were, say, counting all the synapses and multiplying by the maximum number of spikes per second in any neuron.  If you actually look at what the retina is doing, and how it’s computing that, it doesn’t look like it’s doing one floating-point operation per activation spike per synapse.\n\n\n**Eliezer:**  There’s a similar asymmetry between precise computational operations having a vastly easier time emulating noisy or imprecise computational operations, compared to the reverse – there is no doubt a way to use neurons to compute, say, exact 16-bit integer addition, which is at least *more* efficient than a human trying to add up 16986+11398 in their heads, but you’d still need more synapses to do that than transistors, because the synapses are noisier and the transistors can just do it precisely.  This is harder to visualize and get a grasp on than the parallel-serial difference, but that doesn’t make it unimportant.\n\n\nWhich brings me to the second line of very obvious-seeming reasoning that converges upon the same conclusion – that it is in principle possible to build an AGI much more computationally efficient than a human brain – namely that biology is simply *not that efficient,* and *especially* when it comes to huge complicated things that it has started doing relatively recently.\n\n\nATP synthase may be close to 100% thermodynamically efficient, but ATP synthase is literally over 1.5 billion years old and a core bottleneck on all biological metabolism.  Brains have to pump thousands of ions in and out of each stretch of axon and dendrite, in order to restore their ability to fire another fast neural spike.
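(The rough scale of that inefficiency can be checked against the Landauer limit; the 20 W power budget is a commonly cited figure, and the 10^16 useful operations per second is a loose assumption made up for this illustration:)

```python
import math

# Back-of-envelope check of how far brain computation sits from the
# thermodynamic (Landauer) limit.  Inputs are rough assumptions.
k_B = 1.380649e-23                    # Boltzmann constant, J/K
T = 310.0                             # body temperature, K
landauer = k_B * T * math.log(2)      # min energy per bit erased, ~3e-21 J

brain_watts = 20.0                    # commonly cited brain power budget
assumed_ops_per_sec = 1e16            # assumed useful bit-ops/sec (rough)

joules_per_op = brain_watts / assumed_ops_per_sec   # ~2e-15 J
inefficiency = joules_per_op / landauer             # ~7e5
```

With these inputs the factor comes out around 7 × 10^5; change the assumed operation count by an order of magnitude and the factor moves with it, so only the order of magnitude is meaningful.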
The result is that the brain’s computation is something like half a million times less efficient than the thermodynamic limit for its temperature – so around two millionths as efficient as ATP synthase.  And neurons are a hell of a lot older than the biological software for general intelligence!\n\n\nThe software for a human brain is not going to be 100% efficient compared to the theoretical maximum, nor 10% efficient, nor 1% efficient, even *before* taking into account the whole thing with parallelism vs. serialism, precision vs. imprecision, or similarly clear low-level differences.\n\n\n**Humbali:**  Ah!  But allow me to offer a consideration here that, I would wager, you’ve never thought of before yourself – namely – *what if you’re wrong?*  Ah, not so confident now, are you?\n\n\n**Eliezer:**  One observes, over one’s cognitive life as a human, which sorts of what-ifs are useful to contemplate, and where it is wiser to spend one’s limited resources planning against the alternative that one might be wrong; and I have oft observed that lots of people don’t… quite seem to understand how to use ‘what if’ all that well?  They’ll be like, “[Well, what if UFOs are aliens, and the aliens are partially hiding from us but not perfectly hiding from us, because they’ll seem higher-status if they make themselves observable but never directly interact with us?](https://www.overcomingbias.com/2021/06/ufos-what-the-hell.html)”\n\n\nI can refute individual what-ifs like that with specific counterarguments, but I’m not sure how to convey the central generator behind how I know that I ought to refute them.  I am not sure how I can get people to reject these ideas for themselves, instead of them passively waiting for me to come around with a specific counterargument.  My having to counterargue things specifically now seems like a road that never seems to end, and I am not as young as I once was, nor am I encouraged by how much progress I seem to be making.  
I refute one wacky idea with a specific counterargument, and somebody else comes along and presents a new wacky idea on almost exactly the same theme.\n\n\nI know it’s probably not going to work, if I try to say things like this, but I’ll try to say them anyways.  When you are going around saying ‘what-if’, there is a very great difference between your map of reality, and the territory of reality, which is extremely narrow and stable.  Drop your phone, gravity pulls the phone downward, it falls.  What if there are aliens and they make the phone rise into the air instead, maybe because they’ll be especially amused at violating the rule after you just tried to use it as an example of where you could be confident?  Imagine the aliens watching you, imagine their amusement, contemplate how fragile human thinking is and how little you can ever be assured of anything and ought not to be too confident.  Then drop the phone and watch it fall.  You’ve now learned something about how reality itself isn’t made of what-ifs and reminding oneself to be humble; reality runs on rails stronger than your mind does.\n\n\nContemplating this doesn’t mean you *know* the rails, of course, which is why it’s so much harder to predict the Future than the past.  But if you see that your thoughts are still wildly flailing around what-ifs, it means that they’ve failed to gel, in some sense, they are not yet bound to reality, because reality has no binding receptors for what-iffery.\n\n\nThe correct thing to do is not to act on your what-ifs that you can’t figure out how to refute, but to go on looking for a model which makes narrower predictions than that.  If that search fails, forge a model which puts some more numerical distribution on your highly entropic uncertainty, instead of diverting into specific what-ifs.  
And in the latter case, understand that this probability distribution reflects your ignorance and subjective state of mind, rather than your knowledge of an objective frequency; so that somebody else is allowed to be less ignorant without you shouting “Too confident!” at them.  Reality runs on rails as strong as math; sometimes other people will achieve, before you do, the feat of having their own thoughts run through more concentrated rivers of probability, in some domain.\n\n\nNow, when we are trying to concentrate our thoughts into deeper, narrower rivers that run closer to reality’s rails, there is of course the legendary hazard of concentrating our thoughts into the *wrong* narrow channels that *exclude* reality.  And the great legendary sign of this condition, of course, is the counterexample from Reality that falsifies our model!  But you should not in general criticize somebody for trying to concentrate their probability into narrower rivers than yours, for this is the appearance of the great general project of trying to get to grips with Reality, that runs on true rails that are narrower still.\n\n\nIf you have concentrated your probability into *different* narrow channels than somebody else’s, then, of course, you have a more interesting dispute; and you should engage in that legendary activity of trying to find some accessible experimental test on which your nonoverlapping models make different predictions.\n\n\n**Humbali:**  I do not understand the import of all this vaguely mystical talk.\n\n\n**Eliezer:**  I’m trying to explain why, when I say that I’m very confident it’s possible to build a human-equivalent mind using less computing power than biology has managed to use effectively, and you say, “How can you be so *confident,* what if you are *wrong,*” it is not unreasonable for me to reply, “Well, kid, this doesn’t seem like one of those places where it’s particularly important to worry about far-flung ways I could be wrong.”  Anyone who aspires to 
learn, learns over a lifetime which sorts of guesses are more likely to go oh-no-wrong in real life, and which sorts of guesses are likely to just work.  Less-learned minds will have minds full of what-ifs they can’t refute in more places than more-learned minds; and even if you cannot see how to refute all your what-ifs yourself, it is possible that a more-learned mind knows why they are improbable.  For one must distinguish possibility from probability.\n\n\nIt is *imaginable* or *conceivable* that human brains have such refined algorithms that they are operating at the absolute limits of computational efficiency, or within 10% of it.  But if you’ve spent enough time noticing *where* Reality usually exercises its sovereign right to yell “Gotcha!” at you, learning *which* of your assumptions are the kind to blow up in your face and invalidate your final conclusion, you can guess that “Ah, but what if the brain is nearly 100% computationally efficient?” is the sort of what-if that is not much worth contemplating because it is not actually going to be true in real life.  Reality is going to confound you in some other way than that.\n\n\nI mean, maybe you haven’t read enough neuroscience and evolutionary biology that you can see from your own knowledge that the proposition sounds massively implausible and ridiculous.  But it should hardly seem unlikely that somebody else, more learned in biology, might be justified in having more confidence than you.  Phones don’t fall up.
Reality really is very stable and orderly in a lot of ways, even in places where you yourself are ignorant of that order.\n\n\nBut if “What if aliens are making themselves visible in flying saucers because they want high status and they’ll have higher status if they’re occasionally observable but never deign to talk with us?” sounds to you like it’s totally plausible, and you don’t see how someone can be *so confident* that it’s not true – because oh *no* what if you’re *wrong* and you haven’t *seen* the aliens so how can you *know* what they’re not thinking – then I’m not sure how to lead you into the place where you can dismiss that thought with confidence.  It may require a kind of life experience that I don’t know how to give people, at all, let alone by having them passively read paragraphs of text that I write; a learned, perceptual sense of which what-ifs have any force behind them.  I mean, I can refute that specific scenario, I *can* put that learned sense into words; but I’m not sure that does me any good unless you learn how to refute it yourself.\n\n\n**Humbali:**  Can we leave aside all that meta stuff and get back to the object level?\n\n\n**Eliezer:**  This indeed is often wise.\n\n\n**Humbali:**  Then here’s one way that the minimum computational requirements for general intelligence could be *higher* than Moravec’s estimate for the human brain.  Since, after all, we only have one existence proof that general intelligence is possible at all, namely the human brain.  Perhaps there’s no way to get general intelligence in a computer except by simulating the brain neurotransmitter-by-neurotransmitter.  In that case you’d need a lot *more* computing operations per second than you’d get by calculating the number of potential spikes flowing around the brain!  What if it’s true?  How can you *know?*\n\n\n(**Modern person:**  This seems like an obvious straw argument?
I mean, would anybody, even at an earlier historical point, actually make an argument like –\n\n\n**Moravec and Eliezer:**  YES THEY WOULD.)\n\n\n**Eliezer:**  I can imagine that if we were trying specifically to *upload a human* that there’d be no easy and simple and obvious way to run the resulting simulation and get a good answer, without simulating neurotransmitter flows in extra detail.\n\n\nTo imagine that every one of these simulated flows is *being usefully used in general intelligence and there is no way to simplify the mind design to use fewer computations…*  I suppose I could try to refute that specifically, but it seems to me that this is a road which has no end unless I can convey the generator of my refutations.  Your what-iffery is flung far enough that, if I cannot leave even that much rejection as an exercise for the reader to do on their own without my holding their hand, the reader has little enough hope of following the rest; let them depart now, in indignation shared with you, and save themselves further outrage.\n\n\nI mean, it will obviously be *less* obvious to the reader because they will know *less* than I do about this exact domain, it will justly take *more* work for the reader to specifically refute you than it takes me to refute you.  But I think the reader needs to be able to do that at all, in this example, to follow the more difficult arguments later.\n\n\n**Imaginary Moravec:**  I don’t think it changes my conclusions by an order of magnitude, but some people would worry that, for example, changes of protein expression inside a neuron in order to implement changes of long-term potentiation, are also important to intelligence, and could be a big deal in the brain’s real, effectively-used computational costs.  I’m curious if you’d dismiss that as well, the same way you dismiss the probability that you’d have to simulate every neurotransmitter molecule?\n\n\n**Eliezer:**  Oh, of course not.  
Long-term potentiation suddenly turning out to be a big deal you overlooked, compared to the depolarization impulses spiking around, is *very* much the sort of thing where Reality sometimes jumps out and yells “Gotcha!” at you.\n\n\n**Humbali:**  *How can you tell the difference?*\n\n\n**Eliezer:**  Experience with Reality yelling “Gotcha!” at myself and historical others.\n\n\n**Humbali:**  They seem like equally plausible speculations to me!\n\n\n**Eliezer:**  Really?  “What if long-term potentiation is a big deal and computationally important” sounds just as plausible to you as “What if the brain is already close to the wall of making the most efficient possible use of computation to implement general intelligence, and every neurotransmitter molecule matters”?\n\n\n**Humbali:**  Yes!  They’re both what-ifs we can’t know are false and shouldn’t be overconfident about denying!\n\n\n**Eliezer:**  My tiny feeble mortal mind is far away from reality and only bound to it by the loosest of correlating interactions, but I’m not *that* unbound from reality.\n\n\n**Moravec:**  I would guess that in real life, long-term potentiation is sufficiently slow and local that what goes on inside the cell body of a neuron over minutes or hours is not as big of a computational deal as thousands of times that many spikes flashing around the brain in milliseconds or seconds.  That’s why I didn’t make a big deal of it in my own estimate.\n\n\n**Eliezer:**  Sure.  But it *is* much more the sort of thing where you wake up to a reality-authored science headline saying “Gotcha!  There were tiny DNA-activation interactions going on in there at high speed, and they were actually pretty expensive and important!”  I’m not saying this exact thing is very probable, just that it wouldn’t be out-of-character for reality to say *something* like that to me, the way it would be really genuinely bizarre if Reality was, like, “Gotcha!
The brain is as computationally efficient of a generally intelligent engine as any algorithm can be!”\n\n\n**Moravec:**  I think we’re in agreement about that part, or we would’ve been, if we’d actually had this conversation in 1988.  I mean, I *am* a competent research roboticist and it is difficult to become one if you are completely unglued from reality.\n\n\n**Eliezer:**  Then what’s with the 2010 prediction for strong AI, and the massive non-sequitur leap from “the human brain is somewhere around 10 trillion ops/sec” to “if we build a 10 trillion ops/sec supercomputer, we’ll get strong AI”?\n\n\n**Moravec:**  Because while it’s the kind of Fermi estimate that can be off by an order of magnitude in practice, it doesn’t really seem like it should be, I don’t know, off by three orders of magnitude?  And even three orders of magnitude is just 10 years of Moore’s Law.  2020 for strong AI is also a bold and important prediction.\n\n\n**Eliezer:**  And the year 2000 for strong AI even more so.\n\n\n**Moravec:**  Heh!  That’s not usually the direction in which people argue with me.\n\n\n**Eliezer:**  There’s an important distinction between the direction in which people usually argue with you, and the direction from which Reality is allowed to yell “Gotcha!”  I wish my future self had kept this more in mind, when arguing with Robin Hanson about how well AI architectures were liable to generalize and scale without a ton of domain-specific algorithmic tinkering for every field of knowledge.  
I mean, in principle what I was arguing for was various lower bounds on performance, but I sure could have emphasized more loudly that those were *lower* bounds – well, I *did* emphasize the lower-bound part, but – from the way I felt when AlphaGo and AlphaZero and GPT-2 and GPT-3 showed up, I think I must’ve sorta forgot that myself.\n\n\n**Moravec:**  Anyways, if we say that I might be up to three orders of magnitude off and phrase it as 2000-2020, do you agree with my prediction then?\n\n\n**Eliezer:**  No, I think you’re just… arguing about the wrong facts, in a way that seems to be unglued from most tracks Reality might follow so far as I currently know?  On my view, creating AGI is strongly dependent on how much knowledge you have about how to do it, in a way which almost *entirely* obviates the relevance of arguments from human biology?\n\n\nLike, human biology tells us a single not-very-useful data point about how much computing power evolutionary biology needs in order to build a general intelligence, using very alien methods to our own.  Then, very separately, there’s the constantly changing level of how much cognitive science, neuroscience, and computer science our own civilization knows.  We don’t know how much computing power is required for AGI for *any* level on that constantly changing graph, and biology doesn’t tell us.  All we know is that the hardware requirements for AGI must be dropping by the year, because the knowledge of how to create AI is something that only increases over time.\n\n\nAt some point the moving lines for “decreasing hardware required” and “increasing hardware available” will cross over, which lets us predict that AGI gets built at *some* point.  But we don’t know how to graph two key functions needed to predict that date.  You would seem to be committing the classic fallacy of searching for your keys under the streetlight where the visibility is better.
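(The structure of that two-curves argument can be made concrete with a toy model; both curves below are invented for illustration, which is exactly the point, since neither is actually known:)

```python
# Toy model of the two moving lines: hardware *available* rises over
# time, hardware *required* falls as knowledge accumulates, and AGI
# arrives when the lines cross.  Both curves are made up.

def ops_available(year):
    # assumption: doubling every 2 years from 1e12 ops/sec in 2000
    return 1e12 * 2 ** ((year - 2000) / 2)

def ops_required(year):
    # assumption: requirements halve every 8 years from 1e20 in 2000
    return 1e20 * 0.5 ** ((year - 2000) / 8)

crossover = next(y for y in range(2000, 2200)
                 if ops_available(y) >= ops_required(y))
# With these made-up curves the crossing lands in the 2040s; with
# different made-up curves it lands wherever you like.
```

The calculation is trivial once the two curves are given; the dialogue's claim is that we have no principled way to draw either curve.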
You know how to estimate how many floating-point operations per second the retina could effectively be using, but *this is not the number you need to predict the outcome you want to predict.*  You need a graph of human knowledge of computer science over time, and then a graph of how much computer science requires how much hardware to build AI, and neither of these graphs is available.\n\n\nIt *doesn’t matter* how many chapters your book spends considering the continuation of Moore’s Law or computation in the retina, and I’m sorry if it seems rude of me in some sense to just dismiss the relevance of all the hard work you put into arguing it.  But you’re arguing the *wrong facts* to get to the conclusion, so all your hard work is for naught.\n\n\n**Humbali:**  Now it seems to me that I must chide you for being too dismissive of Moravec’s argument.  Fine, yes, Moravec has not established with *logical certainty* that strong AI must arrive at the point where top supercomputers match the human brain’s 10 trillion operations per second.  But has he not established a *reference class,* the sort of *base rate* that good and virtuous superforecasters, unlike yourself, go looking for when they want to *anchor* their estimate about some future outcome?  Has he not, indeed, established the sort of argument which says that if top supercomputers can do only *ten million* operations per second, we’re not very likely to get AGI earlier than that, and if top supercomputers can do *ten quintillion* operations per second\*, we’re unlikely not to already have AGI?\n\n\n(\*) In 2021 terms, [10 TPU v4 pods](https://cloud.google.com/blog/products/ai-machine-learning/google-wins-mlperf-benchmarks-with-tpu-v4).\n\n\n**Eliezer:**  With ranges that wide, it’d be more likely and less amusing to hit somewhere inside it by coincidence.
But I still think this whole line of thoughts is just off-base, and that you, Humbali, have not truly grasped the concept of a virtuous superforecaster or how they go looking for reference classes and base rates.\n\n\n**Humbali:**  I frankly think you’re just being unvirtuous.  Maybe you have some special model of AGI which claims that it’ll arrive in a different year or be arrived at by some very different pathway.  But is not Moravec’s estimate a sort of base rate which, to the extent you are properly and virtuously uncertain of your own models, you ought to *regress* in your own probability distributions over AI timelines?  As you become more uncertain about the exact amounts of knowledge required and what knowledge we’ll have when, shouldn’t you have an uncertain distribution about AGI arrival times that centers around Moravec’s base-rate prediction of 2010?\n\n\nFor you to reject this anchor seems to reveal a grave lack of humility, since you must be very certain of whatever alternate estimation methods you are using in order to throw away this base-rate entirely.\n\n\n**Eliezer:**  Like I said, I think you’ve just failed to grasp the true way of a virtuous superforecaster.  Thinking a lot about Moravec’s so-called ‘base rate’ is just making you, in some sense, stupider; you need to cast your thoughts loose from there and try to navigate a wilder and less tamed space of possibilities, until they begin to gel and coalesce into narrower streams of probability.  Which, for AGI, they probably *won’t do* until we’re quite close to AGI, and start to guess correctly how AGI will get built; for it is easier to predict an eventual global pandemic than to say it will start in November of 2019.  
Even in October of 2019 this cannot be done.\n\n\n**Humbali:**  Then all this uncertainty must somehow be quantified, if you are to be a virtuous Bayesian; and again, for lack of anything better, the resulting distribution should center on Moravec’s base-rate estimate of 2010.\n\n\n**Eliezer:**  No, that calculation is just basically not relevant here; and thinking about it is making you stupider, as your mind flails in the trackless wilderness grasping onto unanchored air.  Things must be ‘sufficiently similar’ to each other, in some sense, for us to get a base rate on one thing by looking at another thing.  Humans making an AGI is just too dissimilar to evolutionary biology making a human brain for us to anchor ‘how much computing power at the time it happens’ from one to the other.  It’s not the droid we’re looking for; and your attempt to build an inescapable epistemological trap about virtuously calling that a ‘base rate’ is not the Way.\n\n\n**Imaginary Moravec:**  If I can step back in here, I don’t think my calculation is zero evidence?  What we know from evolutionary biology is that a blind alien god with zero foresight accidentally mutated a chimp brain into a general intelligence.  I don’t want to knock biology’s work too much, there’s some impressive stuff in the retina, and the retina is just the part of the brain which is in some sense easiest to understand.  But surely there’s a very reasonable argument that 10 trillion ops/sec is about the amount of computation that evolutionary biology needed; and since evolution is stupid, when we ourselves have that much computation, it shouldn’t be *that* hard to figure out how to configure it.\n\n\n**Eliezer:**  If that was true, the same theory predicts that our current supercomputers should be doing a better job of matching the agility and vision of spiders.  
When at some point there’s enough hardware that we figure out how to put it together into AGI, we could be doing it with less hardware than a human; we could be doing it with more; and we can’t even say that these two possibilities are *around equally probable* such that our probability distribution should have its median around 2010.  Your number is so bad and obtained by such bad means that we should just throw it out of our thinking and start over.\n\n\n**Humbali:**  This last line of reasoning seems to me to be particularly ludicrous, like you’re just throwing away the only base rate we have in favor of a confident assertion of our somehow being *more uncertain* than that.\n\n\n**Eliezer:**  Yeah, well, sorry to put it bluntly, Humbali, but you have not yet figured out how to turn your own computing power into intelligence.\n\n\n – 1999 –\n---------\n\n\n**Luke Muehlhauser reading a previous draft of this** (only sounding much more serious than this, because Luke Muehlhauser)**:**  You know, there was this certain teenaged futurist who made some of his own predictions about AI timelines –\n\n\n**Eliezer:**  I’d really rather not argue from that as a case in point.  I dislike people who screw up something themselves, and then argue like nobody else could possibly be more competent than they were.  I dislike even more people who change their mind about something when they turn 22, and then, for the rest of their lives, go around acting like they are now Very Mature Serious Adults who believe the thing that a Very Mature Serious Adult believes, so if you disagree with them about that thing they started believing at age 22, you must just need to wait to grow out of your extended childhood.\n\n\n**Luke Muehlhauser** (still being paraphrased)**:**  It seems like it ought to be acknowledged somehow.\n\n\n**Eliezer:**  That’s fair, yeah, I can see how someone might think it was relevant.
I just dislike how it potentially creates the appearance of trying to slyly sneak in an Argument From Reckless Youth that I regard as not only invalid but also incredibly distasteful.  You don’t get to screw up yourself and then use that as an argument about how nobody else can do better.\n\n\n**Humbali:**  Uh, what’s the actual drama being subtweeted here?\n\n\n**Eliezer:**  A certain teenaged futurist, who, for example, said in 1999, “The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.”\n\n\n**Humbali:**  This young man must surely be possessed of some very deep character defect, which I worry will prove to be of the sort that people almost never truly outgrow except in the rarest cases.  Why, he’s not even putting a probability distribution over his mad soothsaying – how blatantly absurd can a person get?\n\n\n**Eliezer:**  Dear child ignorant of history, your complaint is far too anachronistic.  This is 1999 we’re talking about here; almost nobody is putting probability distributions on things, that element of your later subculture has not yet been introduced.  Eliezer-2002 hasn’t been sent a copy of “Judgment Under Uncertainty” by Emil Gilliam.  Eliezer-2006 hasn’t put his draft online for “Cognitive biases potentially affecting judgment of global risks”.  The Sequences won’t start until another year after that.  How would the forerunners of effective altruism *in 1999* know about putting probability distributions on forecasts?  I haven’t told them to do that yet!  
We can give historical personages credit when they seem to somehow end up doing better than their surroundings would suggest; it is unreasonable to hold them to modern standards, or expect them to have finished refining those modern standards by the age of nineteen.\n\n\nThough there’s also a more subtle lesson you could learn, about how this young man turned out to still have a promising future ahead of him; which he retained at least in part by having a deliberate contempt for pretended dignity, allowing him to be plainly and simply wrong in a way that he noticed, without his having twisted himself up to avoid a prospect of embarrassment.  Instead of, for example, his evading such plain falsification by having dignifiedly wide Very Serious probability distributions centered on the same medians produced by the same basically bad thought processes.\n\n\nBut that was too much of a digression, when I tried to write it up; maybe later I’ll post something separately.\n\n\n– 2004 or thereabouts –\n-----------------------\n\n\n**Ray Kurzweil in 2001:**  I have [calculated](https://www.kurzweilai.net/the-law-of-accelerating-returns) that matching the intelligence of a human brain requires 2 \\* 10^16 ops/sec\\* and this will become available in a $1000 computer in 2023.  26 years after that, in 2049, a $1000 computer will have ten billion times more computing power than a human brain; and in 2059, that computer will cost one cent.\n\n\n(\\*) Two TPU v4 pods.\n\n\n**Actual real-life Eliezer in Q&A, when Kurzweil says the same thing in a 2004(?) talk:**  It seems weird to me to forecast the arrival of “human-equivalent” AI, and then expect Moore’s Law to just continue on the same track past that point for thirty years.  Once we’ve got, in your terms, human-equivalent AIs, even if we don’t go beyond that in terms of intelligence, Moore’s Law will start speeding them up.  
Once AIs are thinking thousands of times faster than we are, wouldn’t that tend to break down the graph of Moore’s Law with respect to the objective wall-clock time of the Earth going around the Sun?  Because AIs would be able to spend thousands of *subjective*years working on new computing technology?\n\n\n**Actual Ray Kurzweil:**  The fact that AIs can do faster research is exactly what will enable Moore’s Law to continue on track.\n\n\n**Actual Eliezer (out loud):**  Thank you for answering my question.\n\n\n**Actual Eliezer (internally):**  Moore’s Law is a phenomenon produced by human cognition and the fact that human civilization runs off human cognition.  You can’t expect the surface phenomenon to continue unchanged after the deep causal phenomenon underlying it starts changing.  What kind of bizarre worship of graphs would lead somebody to think that the graphs were the primary phenomenon and would continue steady and unchanged when the forces underlying them changed massively?  I was hoping he’d be less nutty in person than in the book, but oh well.\n\n\n– 2006 or thereabouts –\n-----------------------\n\n\n**Somebody on the Internet:**  I have calculated the number of computer operations used by evolution to evolve the human brain – searching through organisms with increasing brain size  – by adding up all the computations that were done by any brains before modern humans appeared.  It comes out to 10^43 computer operations.\\*  AGI isn’t coming any time soon!\n\n\n(\\*)  I forget the exact figure.  It was 10^40-something.\n\n\n**Eliezer, sighing:**  Another day, another biology-inspired timelines forecast.  This trick didn’t work when Moravec tried it, it’s not going to work while Ray Kurzweil is trying it, and it’s not going to work when you try it either.  
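(The sort of calculation this Somebody is gesturing at can be reconstructed roughly as follows; every input below is a loose illustrative assumption, and the dialogue itself only recalls the result as “10^40-something”:)

```python
# Rough reconstruction of a "total ops spent by evolution" estimate:
# integrate brain-computation over all animals with nervous systems,
# over evolutionary time.  Every input is an illustrative assumption.

years_of_nervous_systems = 1e9        # time since early nervous systems
seconds = years_of_nervous_systems * 3.15e7   # seconds per year
average_population = 1e19             # mostly tiny-brained animals
ops_per_brain_per_sec = 1e8           # a small nervous system's budget

total_ops = seconds * average_population * ops_per_brain_per_sec
# ~3e43 with these inputs -- the "10^43 computer operations" ballpark
```

The answer is dominated by the guessed inputs, each uncertain by orders of magnitude, which is part of why Eliezer treats the resulting timeline forecast as no better anchored than Moravec's.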
It also didn’t work when a certain teenager tried it, but please entirely ignore that part; you’re at least allowed to do better than him.\n\n\n**Imaginary Somebody:**  Moravec’s prediction failed because he assumed that you could just magically take something with around as much hardware as the human brain and, poof, it would start being around that intelligent –\n\n\n**Eliezer:**  Yes, that is one way of viewing an invalidity in that argument.  Though you do Moravec a disservice if you imagine that he could only argue “It will magically emerge”, and could not give the more plausible-sounding argument “Human engineers are not that incompetent compared to biology, and will probably figure it out without more than one or two orders of magnitude of extra overhead.”\n\n\n**Somebody:**  But *I* am cleverer, for I have calculated the number of computing operations that was used to *create and design* biological intelligence, not just the number of computing operations required to *run it once created!*\n\n\n**Eliezer:**  And yet, because your reasoning contains the word “biological”, it is just as invalid and unhelpful as Moravec’s original prediction.\n\n\n**Somebody:**  I don’t see why you dismiss my biological argument about timelines on the basis of Moravec having been wrong.  He made one basic mistake – neglecting to take into effect the cost to generate intelligence, not just to run it.  I have corrected this mistake, and now my own effort to do biologically inspired timeline forecasting should work fine, and must be evaluated on its own merits, *de novo*.\n\n\n**Eliezer:**  It is true indeed that sometimes a line of inference is doing just one thing wrong, and works fine after being corrected.  And because this is true, it is often indeed wise to reevaluate new arguments on their own merits, if that is how they present themselves.  
One may not take the past failure of a different argument or three, and try to hang it onto the new argument like an inescapable iron ball chained to its leg.  It might be the cause for defeasible skepticism, but not invincible skepticism.\n\n\nThat said, on my view, you are making a mistake nearly identical to Moravec’s, and so his failure remains relevant to the question of whether you are engaging in a kind of thought that binds well to Reality.\n\n\n**Somebody:**  And that mistake is just mentioning the word “biology”?\n\n\n**Eliezer:**  The problem is that *the resource gets consumed differently, so base-rate arguments from resource consumption end up utterly unhelpful in real life.*  The human brain consumes around 20 watts of power.  Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we’ll get AGI?\n\n\n**Somebody:**  That’s absurd, of course.  So, what, you compare my argument to an absurd argument, and from this dismiss it?\n\n\n**Eliezer:**  I’m saying that Moravec’s “argument from comparable resource consumption” must be in general [invalid](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), because it [Proves Too Much](https://www.lesswrong.com/posts/G5eMM3Wp3hbCuKKPE/proving-too-much).  If it’s in general valid to reason about comparable resource consumption, then it should be equally valid to reason from energy consumed as from computation consumed, and pick energy consumption instead to call the basis of your median estimate.\n\n\nYou say that AIs consume energy in a very different way from brains?  Well, they’ll also consume computations in a very different way from brains!  
The only difference between these two cases is that you *know* something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information.  Since you *know anything whatsoever* about how AGIs and humans consume energy, you can *see* that the consumption is so vastly different as to obviate all comparisons entirely.\n\n\nYou are *ignorant* of how the brain consumes computation, you are *ignorant* of how the first AGIs built would consume computation, but “an unknown key does not open an unknown lock” and these two ignorant distributions should not assert much internal correlation between them.\n\n\nEven without knowing the specifics of how brains and future AGIs consume computing operations, you ought to be able to reason abstractly about a directional update that you *would* make, if you knew *any* specifics instead of none.  If you did know how both kinds of entity consumed computations, if you knew about specific machinery for human brains, and specific machinery for AGIs, you’d then be able to see the enormous vast specific differences between them, and go, “Wow, what a futile resource-consumption comparison to try to use for forecasting.”\n\n\n(Though I say this without much hope; I have not had very much luck in telling people about predictable directional updates they would make, if they knew something instead of nothing about a subject.  I think it’s probably too abstract for most people to feel in their gut, or something like that, so their brain ignores it and moves on in the end.  
I have had life experience with learning more about a thing, updating, and then going to myself, “Wow, I should’ve been able to predict in retrospect that learning almost *any* specific fact would move my opinions in that same direction.”  But I worry this is not a common experience, for it involves a real experience of discovery, and preferably more than one to get the generalization.)\n\n\n**Somebody:**  All of that seems irrelevant to my novel and different argument.  I am not foolishly estimating the resources consumed by a single brain; I’m estimating the resources consumed by evolutionary biology to *invent* brains!\n\n\n**Eliezer:**  And the humans wracking their own brains and inventing new AI program architectures and deploying those AI program architectures to themselves learn, will consume computations so *utterly differently* from evolution that there is no point comparing those consumptions of resources.  That is the flaw that you share exactly with Moravec, and that is why I say the same of both of you, “This is a kind of thinking that fails to bind upon reality, it doesn’t work in real life.”  I don’t care how much painstaking work you put into your estimate of 10^43 computations performed by biology.  It’s just not a relevant fact.\n\n\n**Humbali:**  But surely this estimate of 10^43 cumulative operations can at least be used to establish a base rate for anchoring our –\n\n\n**Eliezer:**  Oh, for god’s sake, shut up.  At least Somebody is only wrong on the object level, and isn’t trying to build an inescapable epistemological trap by which his ideas must still hang in the air like an eternal stench even after they’ve been counterargued.  Isn’t ‘but muh base rates’ what your viewpoint would’ve also said about Moravec’s 2010 estimate, back when that number still looked plausible?\n\n\n**Humbali:**  Of course it is evident to me now that my youthful enthusiasm was mistaken; obviously I tried to estimate the wrong figure.  
As Somebody argues, we should have been estimating the biological computations used to *design* human intelligence, not the computations used to *run* it.\n\n\nI see, now, that I was using the wrong figure as my base rate, leading my base rate to be wildly wrong, and even irrelevant; but now that I’ve seen this, the clear error in my previous reasoning, I have a *new* base rate.  This doesn’t seem to me obviously likely to contain the same kind of wildly invalidating enormous error as before.  What, is Reality just going to yell “Gotcha!” at me again?  And even the prospect of some new unknown error, which is just as likely to be in either possible direction, implies only that we should widen our credible intervals while keeping them centered on a median of 10^43 operations –\n\n\n**Eliezer:**  Please stop.  This trick just never works, at all, deal with it and get over it.  Every second of attention that you pay to the 10^43 number is making you stupider.  You might as well reason that 20 watts is a base rate for how much energy the first generally intelligent computing machine should consume.\n\n\n– 2020 –\n--------\n\n\n**OpenPhil:**  We have commissioned a Very Serious report on a biologically inspired estimate of how much computation will be required to achieve Artificial General Intelligence, for purposes of forecasting an AGI timeline.  ([Summary of report.](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines?commentId=7d4q79ntst6ryaxWD))  ([Full draft of report.](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP))  Our leadership takes this report Very Seriously.\n\n\n**Eliezer:**  Oh, hi there, new kids.  
Your grandpa is feeling kind of tired now and can’t debate this again with as much energy as when he was younger.\n\n\n**Imaginary OpenPhil:**  You’re not *that* much older than us.\n\n\n**Eliezer:**  Not by biological wall-clock time, I suppose, but –\n\n\n**OpenPhil:**  You think thousands of times faster than us?\n\n\n**Eliezer:**  I wasn’t going to say it if you weren’t.\n\n\n**OpenPhil:**  We object to your assertion on the grounds that it is false.\n\n\n**Eliezer:**  I was actually going to say, you might be underestimating how long I’ve been walking this endless battlefield because I started *really quite young*.\n\n\nI mean, sure, I didn’t read *Mind Children* when it came out in 1988.  I only read it four years later, when I was twelve.  And sure, I didn’t immediately afterwards start writing online about Moore’s Law and strong AI; I did not immediately contribute my own salvos and sallies to the war; I was not yet a noticed voice in the debate.  I only got started on that at age sixteen.  I’d like to be able to say that in 1999 I was just a random teenager being reckless, but in fact I was already being invited to dignified online colloquia about the “Singularity” and mentioned in printed books; when I was being wrong back then I was already doing so in the capacity of a minor public intellectual on the topic.\n\n\nThis is, as I understand normie ways, relatively young, and is probably worth an extra decade tacked onto my biological age; you should imagine me as being 52 instead of 42 as I write this, with a correspondingly greater number of visible gray hairs.\n\n\nA few years later – though still before your time – there was the Accelerating Change Foundation, and Ray Kurzweil spending literally millions of dollars to push Moore’s Law graphs of technological progress as *the* central story about the future.  
I mean, I’m sure that a few million dollars sounds like peanuts to OpenPhil, but if your own annual budget was a hundred thousand dollars or so, that’s a hell of a megaphone to compete with.\n\n\nIf you are currently able to conceptualize the Future as being about something *other* than nicely measurable metrics of progress in various tech industries, being projected out to where they will inevitably deliver us nice things – that’s at least partially because of a battle fought years earlier, in which I was a primary fighter, creating a conceptual atmosphere you now take for granted.  A mental world where threshold levels of AI ability are considered potentially interesting and transformative – rather than milestones of new technological luxuries to be checked off on an otherwise invariant graph of Moore’s Laws as they deliver flying cars, space travel, lifespan-extension escape velocity, and other such goodies on an equal level of interestingness.  I have earned at least a *little* right to call myself your grandpa.\n\n\nAnd that kind of experience has a sort of compounded interest, where, once you’ve lived something yourself and participated in it, you can learn more from reading other histories about it.  The histories become more real to you once you’ve fought your own battles.  The fact that I’ve lived through timeline errors in person gives me a sense of how it actually feels to be around at the time, watching people sincerely argue Very Serious erroneous forecasts.  That experience lets me really and actually [update on the history](https://www.lesswrong.com/posts/TLKPj4GDXetZuPDH5/making-history-available) of the earlier mistaken timelines from before I was around; instead of the histories just seeming like a kind of fictional novel to read about, disconnected from reality and not happening to real people.\n\n\nAnd now, indeed, I’m feeling a bit old and tired for reading yet another report like yours in full attentive detail.  
Does it by any chance say that AGI is due in about 30 years from now?\n\n\n**OpenPhil:**  Our report has very wide credible intervals around both sides of its median, as we analyze the problem from a number of different angles and show how they lead to different estimates –\n\n\n**Eliezer:**  Unfortunately, the thing about figuring out five different ways to guess the effective IQ of the smartest people on Earth, and having three different ways to estimate the minimum IQ to destroy lesser systems such that you could extrapolate a minimum IQ to destroy the whole Earth, and putting wide credible intervals around all those numbers, and combining and mixing the probability distributions to get a new probability distribution, is that, at the end of all that, you are still left with a load of nonsense.  Doing a fundamentally wrong thing in several different ways will not save you, though I suppose if you spread your bets widely enough, one of them may be right by coincidence.\n\n\nSo does the report by any chance say – with however many caveats and however elaborate the probabilistic methods and alternative analyses – that AGI is probably due in about 30 years from now?\n\n\n**OpenPhil:**  Yes, in fact, our 2020 report’s median estimate is 2050; though, again, with very wide credible intervals around both sides.  Is that number significant?\n\n\n**Eliezer:**  It’s a law generalized by Charles Platt, that any AI forecast will put strong AI thirty years out from when the forecast is made.  Vernor Vinge referenced it in the body of his famous 1993 NASA speech, whose abstract begins, “Within thirty years, we will have the technological means to create superhuman intelligence.  Shortly after, the human era will be ended.”\n\n\nAfter I was old enough to be more skeptical of timelines myself, I used to wonder how Vinge had pulled out the “within thirty years” part.  
This may have gone over my head at the time, but rereading again today, I conjecture Vinge may have chosen the headline figure of thirty years as a deliberately self-deprecating reference to Charles Platt’s generalization about such forecasts always being thirty years from the time they’re made, which Vinge explicitly cites later in the speech.\n\n\nOr to put it another way:  I conjecture that to the audience of the time, already familiar with some previously-made forecasts about strong AI, the impact of the abstract is meant to be, “Never mind predicting strong AI in thirty years, you should be predicting *superintelligence* in thirty years, which matters a lot more.”  But the minds of authors are scarcely more knowable than the Future, if they have not explicitly told us what they were thinking; so you’d have to ask Professor Vinge, and hope he remembers what he was thinking back then.\n\n\n**OpenPhil:**  Superintelligence before 2023, huh?  I suppose Vinge still has two years left to go before that’s falsified.\n\n\n**Eliezer:**  Also in the body of the speech, Vinge says, “I’ll be surprised if this event occurs before 2005 or after 2030,” which sounds like a more serious and sensible way of phrasing an estimate.  I think that should supersede the probably Platt-inspired headline figure for what we think of as Vinge’s 1993 prediction.  The jury’s still out on whether Vinge will have made a good call.\n\n\nOh, and sorry if grandpa is boring you with all this history from the times before you were around.  I mean, I didn’t actually attend Vinge’s famous NASA speech when it happened, what with being thirteen years old at the time, but I sure did read it later.  Once it was digitized and put online, it was all over the Internet.  Well, all over certain parts of the Internet, anyways.  
Which nerdy parts constituted a much larger fraction of the whole, back when the World Wide Web was just starting to take off among early adopters.\n\n\nBut, yeah, the new kids showing up with some graphs of Moore’s Law and calculations about biology and an earnest estimate of strong AI being thirty years out from the time of the report is, uh, well, it’s… historically precedented.\n\n\n**OpenPhil:**  That part about Charles Platt’s generalization is interesting, but just because we unwittingly chose literally exactly the median that Platt predicted people would always choose in consistent error, that doesn’t justify dismissing our work, right?  We could have used a completely valid method of estimation which would have pointed to 2050 no matter which year it was tried in, and, by sheer coincidence, have first written that up in 2020.  In fact, we try to show in the report that the same methodology, evaluated in earlier years, would also have pointed to around 2050 –\n\n\n**Eliezer:**  Look, people keep trying this.  It’s never worked.  It’s never going to work.  2 years before the end of the world, there’ll be another published biologically inspired estimate showing that AGI is 30 years away and it will be exactly as informative then as it is now.  I’d love to know the timelines too, but you’re not *going* to get the answer you want until right before the end of the world, and maybe not even then unless you’re paying very close attention.  *Timing this stuff is just plain hard.*\n\n\n**OpenPhil:**  But our report is different, and our methodology for biologically inspired estimates is wiser and less naive than those who came before.\n\n\n**Eliezer:**  That’s what the last guy said, but go on.\n\n\n**OpenPhil:**  First, we carefully estimate a range of possible figures for the equivalent of neural-network parameters needed to emulate a human brain.  Then, we estimate how many examples would be required to train a neural net with that many parameters.  
Then, we estimate the total computational cost of that many training runs.  Moore’s Law then gives us 2050 as our median time estimate, given what we think are the *most* likely underlying assumptions, though we do analyze it several different ways.\n\n\n**Eliezer:**  This is almost exactly what the last guy tried, except you’re using network parameters instead of computing ops, and deep learning training runs instead of biological evolution.\n\n\n**OpenPhil:**  Yes, so we’ve corrected his mistake of estimating the wrong biological quantity and now we’re good, right?\n\n\n**Eliezer:**  That’s what the last guy thought *he’d* done about *Moravec’s* mistaken estimation target.  And neither he nor Moravec would have made much headway on their underlying mistakes, by doing a probabilistic analysis of that same wrong question from multiple angles.\n\n\n**OpenPhil:**  Look, sometimes more than one person makes a mistake, over historical time.  It doesn’t mean nobody can ever get it right.  You of all people should agree.\n\n\n**Eliezer:**  I do so agree, but that doesn’t mean I agree you’ve *fixed* the mistake.  I think the methodology itself is bad, not just its choice of which biological parameter to estimate.  Look, do you understand *why* the evolution-inspired estimate of 10^43 ops was completely ludicrous; and the claim that it was equally likely to be mistaken in either direction, even more ludicrous?\n\n\n**OpenPhil:**  Because AGI isn’t like biology, and in particular, will be trained using gradient descent instead of evolutionary search, which is cheaper.  We do note inside our report that this is a key assumption, and that, if it fails, the estimate might be correspondingly wrong –\n\n\n**Eliezer:**  But then you claim that mistakes are equally likely in both directions and so your unstable estimate is a good median.  
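\n\n\n(Aside: the shape of the calculation OpenPhil describes can be sketched in a few lines.  Every number below is a made-up placeholder chosen only to show the structure of the multiplication and the Moore’s-Law extrapolation; none of them are figures from the actual report.)

```python
import math

# Skeleton of a biologically anchored timeline estimate.
# ALL NUMBERS ARE HYPOTHETICAL PLACEHOLDERS, not figures from the report.

brain_params = 1e14          # guessed parameter count "equivalent" to a human brain
examples_per_param = 10      # guessed scaling rule: training examples ~ 10x parameters
flops_per_example = 6 * brain_params   # guessed compute cost to process one example

# Total training compute: parameters x examples x cost-per-example.
total_training_flops = brain_params * examples_per_param * flops_per_example

affordable_flops_2020 = 1e26   # guessed compute affordable for one training run in 2020
doubling_time_years = 2.5      # assumed Moore's-Law-style doubling period

# Extrapolate falling compute prices until the total becomes affordable.
doublings_needed = math.log2(total_training_flops / affordable_flops_2020)
arrival_year = 2020 + doublings_needed * doubling_time_years
print(round(arrival_year))     # lands somewhere mid-century with these placeholders
```

(Each input line above is itself an unstable guess, which is what the rest of this exchange is about.)\n\n\n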
Can you see why the previous evolutionary estimate of 10^43 cumulative ops was not, in fact, *equally likely to be wrong in either direction?*  That it was, predictably, a directional *overestimate?*\n\n\n**OpenPhil:**  Well, search by evolutionary biology is more costly than training by gradient descent, so in hindsight, it was an overestimate.  Are you claiming this was predictable in foresight instead of hindsight?\n\n\n**Eliezer:**  I’m claiming that, at the time, I snorted and tossed Somebody’s figure out the window while thinking it was ridiculously huge and absurd, yes.\n\n\n**OpenPhil:**  Because you’d already foreseen in 2006 that gradient descent would be the method of choice for training future AIs, rather than genetic algorithms?\n\n\n**Eliezer:**  Ha!  No.  Because it was an insanely costly hypothetical approach whose main point of appeal, to the sort of person who believed in it, was that it didn’t require having any idea whatsoever of what you were doing or how to design a mind.\n\n\n**OpenPhil:**  Suppose one were to reply:  “Somebody” *didn’t* know better-than-evolutionary methods for designing a mind, just as we currently don’t know better methods than gradient descent for designing a mind; and hence Somebody’s estimate was the best estimate at the time, just as ours is the best estimate now?\n\n\n**Eliezer:**  Unless you were one of a small handful of leading neural-net researchers who knew a few years ahead of the world where scientific progress was heading – who knew a Thielian ‘secret’ before finding evidence strong enough to convince the less foresightful – you couldn’t have called the jump specifically to *gradient descent* rather than any other technique.  
“I don’t know any more computationally efficient way to produce a mind than *re-evolving* the cognitive history of all life on Earth” transitioning over time to “I don’t know any more computationally efficient way to produce a mind than *gradient descent* over entire brain-sized models” is not predictable in the specific part about “gradient descent” – not unless you know a Thielian secret.\n\n\nBut knowledge is a ratchet that usually only turns one way, so it’s predictable that the current story changes to *somewhere* over future time, in a net expected direction.  Let’s consider the technique currently known as mixture-of-experts (MoE), for training smaller nets in pieces and muxing them together.  It’s not my mainline prediction that MoE actually goes anywhere – if I thought MoE was actually promising, I wouldn’t call attention to it, of course!  I don’t want to *make* timelines shorter, that is not a service to Earth, not a good sacrifice in the cause of winning an Internet argument.\n\n\nBut if I’m wrong and MoE is not a dead end, that technique serves as an easily-visualizable case in point.  If that’s a fruitful avenue, the technique currently known as “mixture-of-experts” will mature further over time, and future deep learning engineers will be able to further perfect the art of training *slices of brains* using gradient descent and fewer examples, instead of training *entire brains* using gradient descent and lots of examples.\n\n\nOr, more likely, it’s not MoE that forms the next little trend.  But there is going to be *something,* especially if we’re sitting around waiting until 2050.  Three decades is enough time for some *big* paradigm shifts in an intensively researched field.  
Maybe we’d end up using neural net tech very similar to today’s tech if the world ends in 2025, but in that case, of course, your prediction must have failed somewhere else.\n\n\nThe three components of AGI arrival times are available hardware, which increases over time in an easily graphed way; available knowledge, which increases over time in a way that’s much harder to graph; and hardware required at a given level of specific knowledge, a huge multidimensional unknown background parameter.  The fact that you have no idea how to graph the increase of knowledge – or measure it in any way that is less completely silly than “number of science papers published” or whatever such gameable metric – doesn’t change the point that this *is* a predictable fact about the future; there *will* be more knowledge later, the more time that passes, and that will *directionally* change the expense of the currently least expensive way of doing things.\n\n\n**OpenPhil:**  We did already consider that and try to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements.  It’s not easy to graph as exactly as Moore’s Law, as you say, but our best-guess estimate is that compute costs halve every 2-3 years.\n\n\n**Eliezer:**  Oh, nice.  I was wondering what sort of tunable underdetermined parameters enabled your model to nail the psychologically overdetermined final figure of ‘30 years’ so exactly.\n\n\n**OpenPhil:**  Eliezer.\n\n\n**Eliezer:**  Think of this in an economic sense: people don’t buy where goods are most expensive and delivered latest, they buy where goods are cheapest and delivered earliest.  Deep learning researchers are not like an inanimate chunk of ice tumbling through intergalactic space in its unchanging direction of previous motion; they are economic agents who look around for ways to destroy the world faster and more cheaply than the way that you imagine as the default.  
They are more eager than you are to think of more creative paths to get to the next milestone faster.\n\n\n**OpenPhil:**  Isn’t this desire for cheaper methods exactly what our model already accounts for, by modeling algorithmic progress?\n\n\n**Eliezer:**  The makers of AGI aren’t going to be doing 10,000,000,000,000 rounds of gradient descent, on entire brain-sized 300,000,000,000,000-parameter models, *algorithmically faster than today.*  They’re going to get to AGI via some route that *you don’t know how to take,* at least if it happens in 2040.  If it happens in 2025, it may be via a route that some modern researchers do know how to take, but in this case, of course, your model was also wrong.\n\n\nThey’re not going to be taking your default-imagined approach *algorithmically faster,* they’re going to be taking an *algorithmically different approach* that eats computing power in a different way than you imagine it being consumed.\n\n\n**OpenPhil:**  Shouldn’t that just be folded into our estimate of how the computation required to accomplish a fixed task decreases by half every 2-3 years due to better algorithms?\n\n\n**Eliezer:**  Backtesting this viewpoint on the previous history of computer science, it seems to me to assert that it should be possible to:\n\n\n* Train a pre-Transformer RNN/CNN-based model, not using any other techniques invented after 2017, to GPT-2 levels of performance, using only around 2x as much compute as GPT-2;\n* Play pro-level Go using 8-16 times as much computing power as AlphaGo, but only 2006 levels of technology.\n\n\nFor reference, recall that in 2006, Hinton and Salakhutdinov were just starting to publish that, by training multiple layers of Restricted Boltzmann machines and then unrolling them into a “deep” neural network, you could get an initialization for the network weights that would avoid the problem of vanishing and exploding gradients and activations.  
At least so long as you didn’t try to stack too many layers, like a dozen layers or something ridiculous like that.  This being the point that kicked off the entire deep-learning revolution.\n\n\nYour model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power.\n\n\n**OpenPhil:**  No, that’s totally not what our viewpoint says when you backfit it to past reality.  Our model does a great job of retrodicting past reality.\n\n\n**Eliezer:**  How so?\n\n\n**OpenPhil:**  \n\n\n**Eliezer:**  I’m not convinced by this argument.\n\n\n**OpenPhil:**  We didn’t think you would be; you’re sort of predictable that way.\n\n\n**Eliezer:**  Well, yes, if I’d predicted I’d update from hearing your argument, I would’ve updated already.  I may not be a real Bayesian but I’m not *that* incoherent.\n\n\nBut I can guess in advance at the outline of my reply, and my guess is this:\n\n\n“Look, when people come to me with models claiming the future is predictable enough for timing, I find that their viewpoints seem to me like they would have made garbage predictions if I actually had to operate them in the past *without benefit of hindsight*.  Sure, with benefit of hindsight, you can look over a thousand possible trends and invent rules of prediction and event timing that nobody *in the past* actually spotlighted *then*, and claim that things happened on trend.  I was around at the time and I do not recall people actually predicting the shape of AI in the year 2020 in advance.  
I don’t think they were just being stupid either.\n\n\n“In a conceivable future where people are still alive and reasoning as modern humans do in 2040, somebody will no doubt look back and claim that everything happened on trend since 2020; but *which* trend the hindsighter will pick out is not predictable to us in advance.\n\n\n“It may be, of course, that I simply don’t understand how to operate your viewpoint, nor how to apply it to the past or present or future; and that yours is a sort of viewpoint which indeed permits saying only one thing, and not another; and that this viewpoint would have predicted the past wonderfully, even without any benefit of hindsight.  But there is also that less charitable viewpoint which suspects that somebody’s theory of ‘A coinflip always comes up heads on occasions X’ contains some informal parameters which can be argued about which occasions exactly ‘X’ describes, and that the operation of these informal parameters is a bit influenced by one’s knowledge of whether a past coinflip actually came up heads or not.\n\n\n“As somebody who doesn’t start from the assumption that your viewpoint is a good fit to the past, I still don’t see how a good fit to the past could’ve been extracted from it without benefit of hindsight.”\n\n\n**OpenPhil:**  That’s a pretty general counterargument, and like any pretty general counterargument it’s a blade you should try turning against yourself.  Why doesn’t your own viewpoint horribly mispredict the past, and say that all estimates of AGI arrival times are predictably net underestimates?  
If we imagine trying to operate your own viewpoint in 1988, we imagine going to Moravec and saying, “Your estimate of how much computing power it takes to match a human brain is predictably an overestimate, because engineers will find a better way to do it than biology, so we should expect AGI sooner than 2010.”\n\n\n**Eliezer:**  I *did* tell Imaginary Moravec that his estimate of the minimum computation required for human-equivalent general intelligence was predictably an overestimate; that was right there in the dialogue before I even got around to writing this part.  And I also, albeit with benefit of hindsight, told Moravec that both of these estimates were useless for timing the future, because they skipped over the questions of how much knowledge you’d need to make an AGI with a given amount of computing power, how fast knowledge was progressing, and the actual timing determined by the rising hardware line touching the falling hardware-required line.\n\n\n**OpenPhil:**  We don’t see how to operate your viewpoint to say *in advance* to Moravec, before his prediction has been falsified, “Your estimate is plainly a garbage estimate” instead of “Your estimate is obviously a directional underestimate”, especially since you seem to be saying the latter to *us, now.*\n\n\n**Eliezer:**  That’s not a critique I give zero weight.  And, I mean, as a kid, I was in fact talking like, “To heck with that hardware estimate, let’s at least try to get it done before then.  
People are dying for lack of superintelligence; let’s aim for 2005.”  I had a T-shirt spraypainted “Singularity 2005” at a science fiction convention, it’s rather crude but I think it’s still in my closet somewhere.\n\n\nBut now I am older and wiser and have fixed all my past mistakes, so the critique of those past mistakes no longer applies to my new arguments.\n\n\n**OpenPhil:**  Uh huh.\n\n\n**Eliezer:**  I mean, I did try to fix all the mistakes that I knew about, and didn’t just, like, leave those mistakes in forever?  I realize that this claim to be able to “learn from experience” is not standard human behavior in situations like this, but if you’ve got to be weird, that’s a good place to spend your weirdness points.  At least by my own lights, I am now making a different argument than I made when I was nineteen years old, and that different argument should be considered differently.\n\n\nAnd, yes, I also think my nineteen-year-old self was not completely foolish at least about AI timelines; in the sense that, for all he knew, maybe you *could* build AGI by 2005 if you tried really hard over the next 6 years.  Not so much because Moravec’s estimate should’ve been seen as a predictable overestimate of how much computing power would actually be needed, given knowledge that would become available in the next 6 years; but because Moravec’s estimate should’ve been seen as *almost entirely irrelevant,* making the correct answer be “I don’t know.”\n\n\n**OpenPhil:**  It seems to us that Moravec’s estimate, and the guess of your nineteen-year-old past self, are *both* predictably vast underestimates.  Estimating the computation consumed by one brain, and calling that your AGI target date, is obviously predictably a vast underestimate because it neglects the computation required for *training* a brainlike system.  
It may be a bit uncharitable, but we suggest that Moravec and your nineteen-year-old self may both have been motivatedly credulous, to not notice a gap so very obvious.\n\n\n**Eliezer:**  I could imagine it seeming that way if you’d grown up never learning about any AI techniques except deep learning, which had, in your wordless mental world, always been the way things were, and would always be that way forever.\n\n\nI mean, it could be that deep learning *will* still be the bleeding-edge method of Artificial Intelligence right up until the end of the world.  But if so, it’ll be because Vinge was right and the world ended before 2030, *not* because the deep learning paradigm was as good as any AI paradigm can ever get.  That is simply not a kind of thing that I expect Reality to say “Gotcha” to me about, any more than I expect to be told that the human brain, whose neurons and synapses are 500,000 times further away from the thermodynamic efficiency wall than ATP synthase, is the most efficient possible consumer of computations.\n\n\nThe specific perspective-taking operation needed here – when it comes to what was and wasn’t obvious in 1988 or 1999 – is that the notion of spending thousands and millions and billions of times as much computation on a “training” phase, as on an “inference” phase, is something that only came to be seen as Always Necessary after the deep learning revolution took over AI in the late Noughties.  Back when Moravec was writing, you programmed a game-tree-search algorithm for chess, and then you ran that code, and it played chess.  Maybe you needed to add an opening book, or do a lot of trial runs to tweak the exact values the position evaluation function assigned to knights vs. bishops, but most AIs weren’t neural nets and didn’t get trained on enormous TPU pods.\n\n\nMoravec had no way of knowing that the paradigm in AI would, twenty years later, massively shift to a new paradigm in which stuff got trained on enormous TPU pods.  
He lived in a world where you could only train neural networks a few layers deep, like, three layers, and the gradients vanished or exploded if you tried to train networks any deeper.\n\n\nTo be clear, in 1999, I did think of AGIs as needing to do a lot of learning; but I expected them to be learning while thinking, not to learn in a separate gradient descent phase.\n\n\n**OpenPhil:**  How could anybody possibly miss anything so obvious?  There are so many basic technical ideas and even *philosophical ideas about how you do AI* which make it supremely obvious that the best and only way to turn computation into intelligence is to have deep nets, lots of parameters, and enormous separate training phases on TPU pods.\n\n\n**Eliezer:**  Yes, well, see, those philosophical ideas were not as prominent in 1988, which is why the direction of the future paradigm shift was not *predictable in advance without benefit of hindsight,* let alone timeable to 2006.\n\n\nYou’re also probably overestimating how much those philosophical ideas would pinpoint the modern paradigm of gradient descent even if you had accepted them wholeheartedly, in 1988.  Or let’s consider, say, October 2006, when the Netflix Prize was being run – a watershed occasion where lots of programmers around the world tried their hand at minimizing a loss function, based on a huge-for-the-times ‘training set’ that had been publicly released, scored on a holdout ‘test set’.  You could say it was the first moment in the limelight for the sort of problem setup that everybody now takes for granted with ML research: a widely shared dataset, a heldout test set, a loss function to be minimized, prestige for advancing the ‘state of the art’.  
And it was a million dollars, which, back in 2006, was big money for a machine learning prize, garnering lots of interest from competent competitors.\n\n\nBefore deep learning, “statistical learning” was indeed a banner often carried by the early advocates of the view that Richard Sutton now calls the Bitter Lesson, along the lines of “complicated programming of human ideas doesn’t work, you have to just learn from massive amounts of data”.\n\n\nBut before deep learning – which was barely getting started in 2006 – “statistical learning” methods that took in massive amounts of data, did not use those massive amounts of data to train neural networks by stochastic gradient descent across millions of examples!  In 2007, [the winning submission to the Netflix Prize](https://www.netflixprize.com/assets/ProgressPrize2007_KorBell.pdf) was an ensemble predictor that incorporated k-Nearest-Neighbor, a factorization method that repeatedly globally minimized squared error, two-layer Restricted Boltzmann Machines, and a regression model akin to Principal Components Analysis.  Which is all 100% statistical learning driven by relatively-big-for-the-time “big data”, and 0% GOFAI.  But these methods didn’t involve enormous massive training phases in the modern sense.\n\n\nBack then, if you were doing stochastic gradient descent at all, you were doing it on a much smaller neural network.  Not so much because you couldn’t afford more compute for a larger neural network, but because wider neural networks didn’t help you much and deeper neural networks simply didn’t work.\n\n\nBleeding-edge statistical learning techniques as late as 2007, to make actual use of big data, had to find other ways to make use of huge amounts of data than gradient descent and backpropagation.  Though, I mean, not huge amounts of data by modern standards.  
The winning submission to the Netflix Prize used an ensemble of 107 models – that’s not a misprint for 10^7, I actually mean 107 – which models were drawn from half a dozen different model classes, then proliferated with slightly different parameters, averaged together to reduce statistical noise.\n\n\nA modern kid, perhaps, looks at this and thinks:  “If you can afford the compute to train 107 models, why not just train one larger model?”  But back then, you see, there just *wasn’t* a standard way to dump massively more compute into something, and get better results back out.  The fact that they had 107 differently parameterized models from a half-dozen families averaged together to reduce noise, was about as well as anyone could do in 2007, at putting more effort in and getting better results back out.\n\n\n**OpenPhil:**  How quaint and archaic!  But that was 13 years ago, before time actually got started and history actually started happening in real life.  *Now* we’ve got the paradigm which will actually be used to create AGI, in all probability; so estimation methods centered on that paradigm should be valid.\n\n\n**Eliezer:**  The current paradigm is definitely not the end of the line in principle.  I guarantee you that the way superintelligences build cognitive engines is not by training enormous neural networks using gradient descent.  Gua-ran-tee it.\n\n\nThe fact that you think you now see a path to AGI, is because today – unlike in 2006 – you have a paradigm that is seemingly willing to entertain having more and more food stuffed down its throat without obvious limit (yet).  
This is really a quite recent paradigm shift, though, and it is probably not the most efficient possible way to consume more and more food.\n\n\nYou could rather strongly guess, early on, that support vector machines were never going to give you AGI, *because* you couldn’t dump more and more compute into training or running SVMs and get arbitrarily better answers; whatever gave you AGI would have to be something else that could eat more compute productively.\n\n\nSimilarly, since the path through genetic algorithms and recapitulating the whole evolutionary history would have taken a *lot* of compute, it’s no wonder that other, more efficient methods of eating compute were developed before then; it was obvious in advance that they must exist, for all that some what-iffed otherwise.\n\n\nTo be clear, it is certain the world will end by more inefficient methods than those that superintelligences would use; since, if superintelligences are making their own AI systems, then the world has already ended.\n\n\nAnd it is possible, even, that the world will end by a method as inefficient as gradient descent.  But if so, that will be because the world ended too soon for any more efficient paradigm to be developed.  Which, on my model, means the world probably ended before say 2040(???).  But of course, compared to how much I think I know about what must be more efficiently doable in principle, I think I know far less about the speed of accumulation of real knowledge (not to be confused with proliferation of publications), or how various random-to-me social phenomena could influence the speed of knowledge.  
So I think I have far less ability to say a confident thing about the *timing* of the next paradigm shift in AI, compared to the *existence and eventuality* of such paradigms in the space of possibilities.\n\n\n**OpenPhil:**  But if you expect the next paradigm shift to happen in around 2040, shouldn’t you confidently predict that AGI has to arrive *after* 2040, because, without that paradigm shift, we’d have to produce AGI using deep learning paradigms, and in that case our own calculation would apply saying that 2040 is relatively early?\n\n\n**Eliezer:**  No, because I’d consider, say, improved mixture-of-experts techniques that actually work, to be very much *within* the deep learning paradigm; and even a relatively small paradigm shift like that would obviate your calculations, if it produced a more drastic speedup than halving the computational cost over two years.\n\n\nMore importantly, I simply don’t believe in your attempt to calculate a figure of 10,000,000,000,000,000 operations per second for a brain-equivalent deepnet based on biological analogies, or your figure of 10,000,000,000,000 training updates for it.  I simply don’t believe in it at all.  I don’t think it’s a valid anchor.  I don’t think it should be used as the median point of a wide uncertain distribution.  The first-developed AGI will consume computation in a different fashion, much as it eats energy in a different fashion; and “how much computation an AGI needs to eat compared to a human brain” and “how many watts an AGI needs to eat compared to a human brain” are equally always decreasing with the technology and science of the day.\n\n\n**OpenPhil:**  Doesn’t our calculation at least provide a soft *upper bound* on how much computation is required to produce human-level intelligence?  
If a calculation is able to produce an upper bound on a variable, how can it be uninformative about that variable?\n\n\n**Eliezer:**  You assume that the architecture you’re describing can, in fact, work at all to produce human intelligence.  This itself strikes me as not only tentative but probably false.  I mostly suspect that if you take the exact GPT architecture, [scale it up](https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/stack_more_layers/) to what you calculate as human-sized, and start training it using current gradient descent techniques… what mostly happens is that it saturates and asymptotes its loss function at not very far beyond the GPT-3 level – say, it behaves like GPT-4 would, but not much better.\n\n\nThis is what should have been told to Moravec:  “Sorry, even if your biology is correct, the assumption that future people can put in X amount of compute and get out Y result is not something you really know.”  And that point did in fact just completely trash his ability to predict and time the future.\n\n\nThe same must be said to you.  Your model contains supposedly known parameters, “how much computation an AGI must eat per second, and how many parameters must be in the trainable model for that, and how many examples are needed to train those parameters”.  Relative to whatever method is actually first used to produce AGI, I expect your estimates to be wildly inapplicable, as wrong as Moravec was about thinking in terms of just using one supercomputer powerful enough to be a brain.  
Your parameter estimates may not be about properties that the first successful AGI design even *has.*  Why, what if it contains a significant component that *isn’t a neural network?*  I realize this may be scarcely conceivable to somebody from the present generation, but the world was not always as it is now, and it will change if it does not end.\n\n\n**OpenPhil:**  I don’t understand how some of your reasoning could be internally consistent even on its own terms.  If, according to you, our 2050 estimate doesn’t provide a soft upper bound on AGI arrival times – or rather, if our 2050-centered probability distribution isn’t a soft upper bound on reasonable AGI arrival probability distributions – then I don’t see how you can claim that the 2050-centered distribution is predictably a directional overestimate.\n\n\nYou can *either* say that our forecasted pathway to AGI or something very much like it would *probably work in principle without requiring very much more computation* than our uncertain model components take into account, meaning that the probability distribution provides a soft upper bound on reasonably-estimable arrival times, *but that paradigm shifts will predictably provide an even faster way to do it before then.*  That is, you could say that our estimate is both a soft upper bound and also a directional overestimate.  Or, you could say that our ignorance of how to create AI will consume *more* than one order-of-magnitude of increased computation cost above biology –\n\n\n**Eliezer:**  Indeed, much as your whole proposal would supposedly cost ten trillion times the equivalent computation of the single human brain that earlier biologically-inspired estimates anchored on.\n\n\n**OpenPhil:**  – in which case our 2050-centered distribution is not a good soft upper bound, but *also* not predictably a directional overestimate.  
Don’t you have to pick one or the other as a critique, there?\n\n\n**Eliezer:**  Mmm… there’s some justice to that, now that I’ve come to write out this part of the dialogue.  Okay, let me revise my earlier stated opinion:  I think that your biological estimate is a trick that never works and, *on its own terms,* would tell us very little about AGI arrival times at all.  *Separately,* I think from my own model that your timeline distributions happen to be too long.\n\n\n**OpenPhil:**  *Eliezer.*\n\n\n**Eliezer:**  I mean, in fact, part of my actual sense of indignation at this whole affair, is the way that Platt’s law of strong AI forecasts – which was *in the 1980s* generalizing “thirty years” as the time that ends up sounding “reasonable” to would-be forecasters – is *still* exactly in effect for what ends up sounding “reasonable” to would-be futurists, *in fricking 2020* while the air is filling up with AI smoke in [the silence of nonexistent fire alarms](https://intelligence.org/2017/10/13/fire-alarm/).\n\n\nBut to put this in terms that maybe possibly you’d find persuasive:\n\n\nThe last paradigm shifts were from “write a chess program that searches a search tree and run it, and that’s how AI eats computing power” to “use millions of data samples, but *not* in a way that requires a huge separate training phase” to “train a huge network for zillions of gradient descent updates and then run it”.  This new paradigm costs a lot more compute, but (small) large amounts of compute are now available so people are using them; and this new paradigm saves on programmer labor, and more importantly the need for programmer knowledge.\n\n\nI say with surety that this is not the last *possible* paradigm shift.  
And furthermore, the [Stack More Layers](https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/stack_more_layers/) paradigm has already reduced the need for knowledge by what seems like a pretty large bite out of all the possible knowledge that could be thrown away.\n\n\nSo, you might then argue, the world-ending AGI seems more likely to incorporate more knowledge and less brute force, which moves the correct sort of timeline estimate *further* away from the direction of “cost to recapitulate all evolutionary history as pure blind search without even the guidance of gradient descent” and *more* toward the direction of “computational cost of one brain, if you could just make a single brain”.\n\n\nThat is, you can think of there as being *two* biological estimates to anchor on, not just one.  You can imagine there being a balance that shifts over time from “the computational cost for evolutionary biology to invent brains” to “the computational cost to run one biological brain”.\n\n\nIn 1960, maybe, they knew so little about how brains worked that, if you gave them a hypercomputer, the cheapest way they could quickly get AGI out of the hypercomputer using just their current knowledge, would be to run a massive evolutionary tournament over computer programs until they found smart ones, using 10^43 operations.\n\n\nToday, you know about gradient descent, which finds programs more efficiently than genetic hill-climbing does; so the balance of how much hypercomputation you’d need to use to get general intelligence using just your own personal knowledge, has shifted ten orders of magnitude *away* from the computational cost of evolutionary history and *towards* the lower bound of the computation used by one brain.  
In the future, this balance will predictably swing even further towards Moravec’s biological anchor, further away from Somebody on the Internet’s biological anchor.\n\n\nI admit, from my perspective this is nothing but a clever argument that tries to persuade people who are making errors that can’t all be corrected by me, so that they can make mostly the same errors but get a slightly better answer.  In my own mind I tend to contemplate the Textbook from the Future, which would tell us how to build AI on a home computer from 1995, as my anchor of ‘where can progress go’, rather than looking to the *brain* of all computing devices for inspiration.\n\n\nBut, if you insist on the error of anchoring on biology, you could perhaps do better by seeing a spectrum between two bad anchors.  This lets you notice a changing reality, at all, which is why I regard it as a helpful thing to say to you and not a pure persuasive superweapon of unsound argument.  Instead of just fixating on one bad anchor, the hybrid of biological anchoring with whatever knowledge you currently have about optimization, you can notice how reality seems to be *shifting between* two biological bad anchors over time, and so have an eye on the changing reality at all.  Your new estimate in terms of gradient descent is stepping away from evolutionary computation and toward the individual-brain estimate by ten orders of magnitude, using the fact that you now know a *little* more about optimization than natural selection knew; and now that you can see the change in reality over time, in terms of the two anchors, you can wonder if there are more shifts ahead.\n\n\nRealistically, though, I would *not* recommend eyeballing how much more knowledge you’d think you’d need to get even larger shifts, as some function of time, before that line crosses the hardware line.  
Some researchers may already know Thielian secrets you do not, that take those researchers further toward the individual-brain computational cost (if you insist on seeing it that way).  That’s the direction that economics rewards innovators for moving in, and you don’t know everything the innovators know in their labs.\n\n\nWhen big inventions finally hit the world as newspaper headlines, the people two years before that happens are often declaring it to be fifty years away; and others, of course, are declaring it to be two years away, fifty years before headlines.  Timing things is quite hard even when you think you are being clever; and cleverly having two biological anchors and eyeballing Reality’s movement between them, is not the sort of cleverness that gives you good timing information in real life.\n\n\nIn real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted.  In real life, we come back again to the same wiser-but-sadder conclusion given at the start, that in fact the Future is quite hard to foresee – especially when you are not on literally the world’s leading edge of technical knowledge about it, but really even then.  If you don’t think you know any Thielian secrets about timing, you should just figure that you need a general policy which doesn’t get more than two years of warning, or not even that much if you aren’t closely non-dismissively analyzing warning signs.\n\n\n**OpenPhil:**  We do consider in our report the many ways that our estimates could be wrong, and show multiple ways of producing biologically inspired estimates that give different results.  Does that give us any credit for good epistemology, on your view?\n\n\n**Eliezer:**  I *wish* I could say that it probably beats showing a single estimate, in terms of its impact on the reader.  
But in fact, writing a huge careful Very Serious Report like that and snowing the reader under with Alternative Calculations is probably going to cause them to give *more* authority to the whole thing.  It’s all very well to note the Ways I Could Be Wrong and to confess one’s Uncertainty, but you did not actually reach the conclusion, “And that’s enough uncertainty and potential error that we should throw out this whole deal and start over,” and that’s the conclusion you needed to reach.\n\n\n**OpenPhil:**  It’s not clear to us what better way you think exists of arriving at an estimate, compared to the methodology we used – in which we do consider many possible uncertainties and several ways of generating probability distributions, and try to combine them together into a final estimate.  A Bayesian needs a probability distribution from somewhere, right?\n\n\n**Eliezer:**  If somebody had calculated that it currently required an IQ of 200 to destroy the world, that the smartest current humans had an IQ of around 190, and that the world would therefore start to be destroyable in fifteen years according to Moore’s Law of Mad Science – then, even assuming Moore’s Law of Mad Science to actually hold, the part where they throw in an estimated current IQ of 200 as necessary is complete garbage.  It is not the sort of mistake that can be repaired, either.  No, not even by considering many ways you could be wrong about the IQ required, or considering many alternative different ways of estimating present-day people’s IQs.\n\n\nThe correct thing to do with the entire model is chuck it out the window so it doesn’t exert an undue influence on your actual thinking, where any influence of that model is an undue one.  
And then you just *should not expect good advance timing info until the end is in sight,* from whatever thought process you adopt instead.\n\n\n**OpenPhil:**  What if, uh, somebody knows a Thielian secret, or has… narrowed the rivers of their knowledge to closer to reality’s tracks?  We’re not sure exactly what’s supposed to be allowed, on your worldview; but wasn’t there something at the beginning about how, when you’re unsure, you should be careful about criticizing people who are more unsure than you?\n\n\n**Eliezer:**  *Hopefully* those people are also able to tell you bold predictions about the nearer-term future, or at least say *anything* about what the future looks like before the whole world ends.  I mean, you don’t want to go around proclaiming that, because you don’t know something, nobody else can know it either.  But timing is, in real life, really hard as a prediction task, so, like… I’d expect them to be able to predict a bunch of stuff before the final hours of their prophecy?\n\n\n**OpenPhil:**  We’re… not sure we see that?  We may have made an estimate, but we didn’t make a narrow estimate.  We gave a relatively wide probability distribution as such things go, so it doesn’t seem like a great feat of timing that requires us to also be able to predict the near-term future in detail too?\n\n\nDoesn’t *your* implicit probability distribution have a median?  Why don’t you also need to be able to predict all kinds of near-term stuff if you have a probability distribution with a median in it?\n\n\n**Eliezer:**  I literally have not tried to force my brain to give me a median year on this – not that this is a defense, because I still have some implicit probability distribution, or, to the extent I don’t act like I do, I must be acting incoherently in self-defeating ways.  
But still: I feel like you should probably have nearer-term bold predictions if your model is supposedly so solid, so concentrated as a flow of uncertainty, that it’s coming up to you and whispering numbers like “2050” even as the median of a broad distribution.  I mean, if you have a model that can actually, like, calculate stuff like that, and is actually bound to the world as a truth.\n\n\nIf you are an aspiring Bayesian, perhaps, you may try to reckon your uncertainty into the form of a probability distribution, even when you face “structural uncertainty” as we sometimes call it.  Or if you know the laws of [coherence](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities), you will acknowledge that your planning and your actions are implicitly showing signs of weighing some paths through time more than others, and hence display probability-estimating behavior whether you like to acknowledge that or not.\n\n\nBut if you are a wise aspiring Bayesian, you will admit that whatever probabilities you are using, they are, in a sense, intuitive, and you just don’t expect them to be all that good.  Because the timing problem you are facing is a really hard one, and humans are not going to be great at it – not until the end is near, and maybe not even then.\n\n\nThat – not “you didn’t consider enough alternative calculations of your target figures” – is what should’ve been replied to Moravec in 1988, if you could go back and tell him where his reasoning had gone wrong, and how he might have reasoned differently based on what he actually knew at the time.  That reply I now give to you, unchanged.\n\n\n**Humbali:**  And I’m back!  Sorry, I had to take a lunch break.  
Let me quickly review some of this recent content; though, while I’m doing that, I’ll go ahead and give you what I’m pretty sure will be my reaction to it:\n\n\nAh, but here is a point that you seem to have not considered at all, namely: *what if you’re wrong?*\n\n\n**Eliezer:**  That, Humbali, is a thing that should be said mainly to children, of whatever biological wall-clock age, who’ve never considered at all the possibility that they might be wrong, and who will genuinely benefit from asking themselves that.  It is not something that should often be said between grownups of whatever age, as I define what it means to be a grownup.  You will mark that I did not at any point say those words to Imaginary Moravec or Imaginary OpenPhil; it is not a good thing for grownups to say to each other, or to think to themselves in Tones of Great Significance (as opposed to as a routine check).\n\n\nIt is very easy to worry that one might be wrong.  Being able to see the *direction* in which one is *probably* wrong is rather a more difficult affair.  And even after we see a probable directional error and update our views, the objection, “But what if you’re wrong?” will sound just as forceful as before.  For this reason do I say that such a thing should not be said between grownups –\n\n\n**Humbali:**  Okay, done reading now!  Hm…  So it seems to me that the possibility that you are wrong, considered in full generality and without adding any other assumptions, should produce a directional shift from your viewpoint towards OpenPhil’s viewpoint.\n\n\n**Eliezer (sighing):**  And how did you end up being under the impression that this could possibly be a sort of thing that was true?\n\n\n**Humbali:**  Well, I get the impression that you have timelines shorter than OpenPhil’s timelines.  
Is this devastating accusation true?\n\n\n**Eliezer:**  I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain’s native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.  What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one’s mental health, and I worry that other people seem to have weaker immune systems than even my own.  But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.\n\n\n**Humbali:**  Okay, so you’re more confident about your AGI beliefs, and OpenPhil is less confident.  Therefore, to the extent that you might be wrong, the world is going to look more like OpenPhil’s forecasts of how the future will probably look, like world GDP doubling over four years before the first time it doubles over one year, and so on.\n\n\n**Eliezer:**  You’re going to have to explain some of the intervening steps in that line of ‘reasoning’, if it may be termed as such.\n\n\n**Humbali:**  I feel surprised that I should have to explain this to somebody who supposedly knows probability theory.  If you put higher probabilities on AGI arriving in the years before 2050, then, on average, you’re concentrating more probability into each year that AGI might possibly arrive, than OpenPhil does.  Your probability distribution has lower entropy.  We can literally just calculate out that part, if you don’t believe me.  
So to the extent that you’re wrong, it should shift your probability distributions in the direction of maximum entropy.\n\n\n**Eliezer:**  It’s things like this that make me worry about whether that extreme cryptivist view would be correct, in which normal modern-day Earth intellectuals are literally not smart enough – in a sense that includes the Cognitive Reflection Test and other things we don’t know how to measure yet, not just raw IQ – to be taught more advanced ideas from my own home planet, like Bayes’s Rule and the concept of the entropy of a probability distribution.  Maybe it does them net harm by giving them more advanced tools they can use to shoot themselves in the foot, since it causes an explosion in the total possible complexity of the argument paths they can consider and be fooled by, which may now contain words like ‘maximum entropy’.\n\n\n**Humbali:**  If you’re done being vaguely condescending, perhaps you could condescend specifically to refute my argument, which seems to me to be airtight; my math is not wrong and it means what I claim it means.\n\n\n**Eliezer:**  The audience is herewith invited to first try refuting Humbali on their own; grandpa is, in actuality and not just as a literary premise, getting older, and was never that physically healthy in the first place.  If the next generation does not learn how to do this work without grandpa hovering over their shoulders and prompting them, grandpa cannot do all the work himself.  There is an infinite supply of slightly different wrong arguments for me to be forced to refute, and that road does not seem, in practice, to have an end.\n\n\n**Humbali:**  Or perhaps it’s you that needs refuting.\n\n\n**Eliezer, smiling:**  That does seem like the sort of thing I’d do, wouldn’t it?  Pick out a case where the other party in the dialogue had made a valid point, and then ask my readers to disprove it, in case they weren’t paying proper attention?  
For indeed in a case like this, one first backs up and asks oneself “Is Humbali right or not?” and not “How can I prove Humbali wrong?”\n\n\nBut now the reader should stop and contemplate that, if they are going to contemplate that at all:\n\n\nIs Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one’s probability distribution over AGI, thereby moving out its median further away in time?\n\n\n**Humbali:**  Are you done?\n\n\n**Eliezer:**  Hopefully so.  I can’t see how else I’d prompt the reader to stop and think and come up with their own answer first.\n\n\n**Humbali:**  Then what is the supposed flaw in my argument, if there is one?\n\n\n**Eliezer:**  As usual, when people are seeing only their preferred possible use of an argumentative superweapon like ‘What if you’re wrong?’, the flaw can be exposed by showing that the argument Proves Too Much.  If you forecasted AGI with a probability distribution with a median arrival time of 50,000 years from now\*, would that be *very* unconfident?\n\n\n(\*) Based perhaps on an ignorance prior for how long it takes for a sapient species to build AGI after it emerges, where we’ve observed so far that it must take at least 50,000 years, and our updated estimate says that it probably takes around as much more than that.\n\n\n**Humbali:**   Of course; the math says so.  Though I think that would be a little *too* unconfident – we do have *some* knowledge about how AGI might be created.  So my answer is that, yes, this probability distribution is higher-entropy, but that it reflects too little confidence even for me.\n\n\nI think you’re crazy overconfident, yourself, and in a way that I find personally distasteful to boot, but that doesn’t mean I advocate zero confidence.  I try to be less arrogant than you, but my best estimate of what my own eyes will see over the next minute is not a maximum-entropy distribution over visual snow.  
AGI happening sometime in the next century, with a median arrival time of maybe 30 years out, strikes me as being about *as* confident as somebody should reasonably be.\n\n\n**Eliezer:**  Oh, really now.  I think if somebody sauntered up to you and said they put 99% probability on AGI not occurring within the next 1,000 years – which is the sort of thing a median distance of 50,000 years tends to imply – I think you would, in fact, accuse them of brash overconfidence about staking 99% probability on that.\n\n\n**Humbali:**  Hmmm.  I want to deny that – I have a strong suspicion that you’re leading me down a garden path here – but I do have to admit that if somebody walked up to me and declared only a 1% probability that AGI arrives in the next millennium, I would say they were being overconfident and not just too uncertain.\n\n\nNow that you put it that way, I think I’d say that somebody with a wide probability distribution over AGI arrival spread over the next century, with a median in 30 years, is in realistic terms about as uncertain as anybody could possibly be?  If you spread it out more than that, you’d be declaring that AGI probably *wouldn’t* happen in the next 30 years, which seems overconfident; and if you spread it out less than that, you’d be declaring that AGI probably *would* happen within the next 30 years, which also seems overconfident.\n\n\n**Eliezer:**  Uh huh.  And to the extent that I am myself uncertain about my own brashly arrogant and overconfident views, I should have a view that looks more like your view instead?\n\n\n**Humbali:**  Well, yes!  
To the extent that you are, yourself, less than totally certain of your own model, you should revert to this most ignorant possible viewpoint as a base rate.\n\n\n**Eliezer:**  And if my own viewpoint should happen to regard your probability distribution putting its median on 2050 as just one more guesstimate among many others, with this particular guess based on wrong reasoning that I have justly rejected?\n\n\n**Humbali:**  Then you’d be overconfident, obviously.  See, you don’t get it, what I’m presenting is not just one candidate way of thinking about the problem, it’s the *base rate* that other people should fall back on to the extent they are not completely confident in *their own* ways of thinking about the problem, which impose *extra* assumptions over and above the assumptions that seem natural and obvious to me.  I just can’t understand the incredible arrogance it takes to be so utterly certain in your own exact estimate that you don’t revert it even a little bit towards mine.\n\n\nI don’t suppose you’re going to claim to me that you first constructed an even more confident first-order estimate, and then reverted it towards the natural base rate in order to arrive at a more humble second-order estimate?\n\n\n**Eliezer:**  Ha!  No.  Not that base rate, anyways.  I try to shift my AGI timelines a little further out because I’ve observed that actual Time seems to run slower than my attempts to eyeball it.  I did not shift my timelines out towards 2050 in particular, nor did reading OpenPhil’s report on AI timelines influence my first-order or second-order estimate at all, in the slightest; no more than I updated the slightest bit back when I read the estimate of 10^43 ops or 10^46 ops or whatever it was to recapitulate evolutionary history.\n\n\n**Humbali:**  Then I can’t imagine how you could possibly be so perfectly confident that you’re right and everyone else is wrong.  
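(A toy numerical check of the point at issue — our illustration, not part of the original dialogue. Humbali claims that extra uncertainty should raise the entropy of one’s timeline distribution and thereby push its median out; with made-up numbers one can see that a higher-entropy distribution can just as easily have an *earlier* median:)

```python
import math

def entropy_bits(p):
    # Shannon entropy (in bits) of a discrete probability distribution.
    return -sum(q * math.log2(q) for q in p if q > 0)

def median_year(p):
    # First year at which the cumulative probability reaches 0.5
    # (small tolerance guards against float accumulation error).
    cdf = 0.0
    for year, q in enumerate(p, start=1):
        cdf += q
        if cdf >= 0.5 - 1e-9:
            return year
    return len(p)

N = 100                          # toy horizon: AGI arrives in year 1..N
uniform = [1.0 / N] * N          # the maximum-entropy distribution over the window
late = [0.001] * N               # a confident, late-peaked distribution...
late[79] += 1.0 - sum(late)      # ...with most of its mass on year 80

assert entropy_bits(uniform) > entropy_bits(late)  # uniform is *less* confident
assert median_year(uniform) < median_year(late)    # yet its median is *earlier*
```

Here the uniform distribution is strictly higher-entropy than the late-peaked one, yet its median arrival year (50 vs. 80) is earlier — so “move toward maximum entropy” and “move the median later” come apart, which is the Proves-Too-Much flaw Eliezer is pointing at.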
Shouldn’t you at least revert your viewpoints some toward what other people think?\n\n\n**Eliezer:**  Like, what the person on the street thinks, if we poll them about their expected AGI arrival times?  Though of course I’d have to poll everybody on Earth, not just the special case of developed countries, if I thought that a respect for somebody’s personhood implied deference to their opinions.\n\n\n**Humbali:**  Good heavens, no!  I mean you should revert towards the opinion, either of myself, or of the set of people I hang out with and who are able to exert a sort of unspoken peer pressure on me; that is the natural reference class to which less confident opinions ought to revert, and any other reference class is special pleading.\n\n\nAnd before you jump on me about being arrogant myself, let me say that I definitely regressed my own estimate in the direction of the estimates of the sort of people I hang out with and instinctively regard as fellow tribesmembers of slightly higher status, or “credible” as I like to call them.  Although it happens that those people’s opinions were about evenly distributed to both sides of my own – maybe not statistically exactly for the population, I wasn’t keeping exact track, but in their availability to my memory, definitely, other people had opinions on both sides of my own – so it didn’t move my median much.  But so it sometimes goes!\n\n\nBut these other people’s credible opinions *definitely* hang emphatically to one side of *your* opinions, so your opinions should regress at least a *little* in that direction!  Your self-confessed failure to do this *at all* reveals a ridiculous arrogance.\n\n\n**Eliezer:**  Well, I mean, in fact, from my perspective, even my complete-idiot sixteen-year-old self managed to notice that AGI was going to be a big deal, many years before various others had been hit over the head with a large-enough amount of evidence that even they started to notice.  I was walking almost alone back then.  
And I still largely see myself as walking alone now, as accords with the Law of Continued Failure:  If I was going to be living in a world of sensible people in this future, I should have been living in a sensible world already in my past.\n\n\nSince the early days more people have caught up to earlier milestones along my way, enough to start publicly arguing with me about the further steps, but I don’t consider them to have caught up; they are moving slower than I am still moving now, as I see it.  My actual work these days seems to consist mainly of trying to persuade allegedly smart people to not fling themselves directly into lava pits.  If at some point I start regarding you as my epistemic peer, I’ll let you know.  For now, while I endeavor to be swayable by arguments, your existence alone is not an argument unto me.\n\n\nIf you choose to define that with your word “arrogance”, I shall shrug and not bother to dispute it.  Such appellations are beneath My concern.\n\n\n**Humbali:**  Fine, you admit you’re arrogant – though I don’t understand how that’s not just admitting you’re irrational and wrong –\n\n\n**Eliezer:**  They’re different words that, in fact, mean different things, in their semantics and not just their surfaces.  I do not usually advise people to contemplate the mere meanings of words, but perhaps you would be well-served to do so in this case.\n\n\n**Humbali:**  – but if you’re not *infinitely* arrogant, you should be quantitatively updating at least a *little* towards other people’s positions!\n\n\n**Eliezer:**  You do realize that OpenPhil itself hasn’t always existed?  That they are not the only “other people” that there are?  
An ancient elder like myself, who has seen many seasons turn, might think of many other possible targets toward which he should arguably regress his estimates, if he was going to start deferring to others’ opinions this late in his lifespan.\n\n\n**Humbali:**  *You* haven’t existed through infinite time either!\n\n\n**Eliezer:**  A glance at the history books should confirm that I was not around, yes, and events went accordingly poorly.\n\n\n**Humbali:**  So then… why aren’t you regressing your opinions at least a little in the direction of OpenPhil’s?  I just don’t understand this apparently infinite self-confidence.\n\n\n**Eliezer:**  The fact that I have credible intervals around my own unspoken median – that I confess I might be wrong in either direction, around my intuitive sense of how long events might take – doesn’t count for my being less than infinitely self-confident, on your view?\n\n\n**Humbali:**  No.  You’re expressing absolute certainty in your underlying epistemology and your entire probability distribution, by not reverting it even a little in the direction of the reasonable people’s probability distribution, which is the one that’s the obvious base rate and doesn’t contain all the special other stuff somebody would have to tack on to get *your* probability estimate.\n\n\n**Eliezer:**  Right then.  Well, that’s a wrap, and maybe at some future point I’ll talk about the increasingly lost skill of perspective-taking.\n\n\n**OpenPhil:**  Excuse us, we have a final question.  You’re not claiming that we argue like Humbali, are you?\n\n\n**Eliezer:**  Good heavens, no!  That’s why “Humbali” is presented as a separate dialogue character and the “OpenPhil” dialogue character says nothing of the sort.  
Though I did meet one EA recently who seemed puzzled and even offended about how I wasn’t regressing my opinions towards OpenPhil’s opinions to whatever extent I wasn’t totally confident, which brought this to mind as a meta-level point that needed making.\n\n\n**OpenPhil:**  “One EA you met recently” is not something that you should hold against OpenPhil.  We haven’t organizationally endorsed arguments like Humbali’s, any more than you’ve ever argued that “we have to take AGI risk seriously even if there’s only a tiny chance of it” or similar crazy things that other people hallucinate you arguing.\n\n\n**Eliezer:**  I fully agree.  That Humbali sees himself as defending OpenPhil is not to be taken as associating his opinions with those of OpenPhil; just like how people who helpfully try to defend MIRI by saying “Well, but even if there’s a tiny chance…” are not thereby making their epistemic sins into mine.\n\n\nThe whole thing with Humbali is a separate long battle that I’ve been fighting.  OpenPhil seems to have been keeping its communication about AI timelines mostly to the object level, so far as I can tell; and that is a more proper and dignified stance than I’ve assumed here.\n\n\nThe post [Biology-Inspired AGI Timelines: The Trick That Never Works](https://intelligence.org/2021/12/03/biology-inspired-agi-timelines-the-trick-that-never-works/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-12-03T08:00:08Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "de17055fb5e49d5a7199b5cab623e987", "title": "Visible Thoughts Project and Bounty Announcement", "url": "https://intelligence.org/2021/11/29/visible-thoughts-project-and-bounty-announcement/", "source": "miri", "source_type": "blog", "text": "(**Update Jan. 12, 2022**: We released an [FAQ](https://docs.google.com/document/d/1sxNOxvcBWw7XC6MSUUpAjO4Rbu3VcUmCy7PJWQ9Vh5o/edit) last month, with more details. Last updated Jan. 
7.)\n\n\n(**Update Jan. 19, 2022**: We now have an example of a [successful partial run](https://docs.google.com/document/d/1Wsh8L--jtJ6y9ZB35mEbzVZ8lJN6UDd6oiF0_Bta8vM/edit), which you can use to inform how you do your runs. [Details.](https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement?commentId=WywmX2i28xANKC7wA))\n\n\n(**Update Mar. 14, 2023**: As of now the limited $20,000 prizes are no longer available. All the runs that start from now on will be paid at the rate of $10/step. Runs that were greenlit for full production before January 2023 and are still being produced will continue being paid at the same rate until completion.)\n\n\n\n\n---\n\n\nWe at MIRI are soliciting help with an AI-alignment project centered around building a dataset, described below. We have $200,000 in prizes for building the first fragments of the dataset, plus an additional $1M prize/budget for anyone who demonstrates the ability to build a larger dataset at scale.\n\n\nIf this project goes well, then it may be the first of a series of prizes we offer for various projects.\n\n\nBelow, I’ll say more about the project, and about the payouts and interim support we’re offering.\n\n\n \n\n\nThe Project\n-----------\n\n\n**Hypothesis:** Language models can be made more understandable (and perhaps also more capable, though this is not the goal) by training them to produce *visible thoughts*.\n\n\nWe’d like to test this hypothesis by fine-tuning/retraining a language model using a dataset composed of *thought-annotated dungeon runs*. 
(In the manner of [AI dungeon](https://play.aidungeon.io/).)\n\n\nA normal (un-annotated) dungeon run is a sequence of steps in which the player inputs text actions and the dungeon master responds with text describing what happened in the world as a result.\n\n\nWe’d like a collection of such runs, that are annotated with “visible thoughts” (visible to potential operators or programmers of the system, not to players) describing things like what just happened or is about to happen in the world, what sorts of things the player is probably paying attention to, where the current sources of plot tension are, and so on — the sorts of things a human author would think while acting as a dungeon master.  (This is distinct from producing thoughts *explaining* what happened in the dungeon; “visible thoughts” are meant to play an active role in *constructing* the output.)\n\n\nOnce we have such a dataset, MIRI’s hope is that present or future technology will be able to train a model or models which iteratively produce visible thoughts along with storytelling, based on user actions plus previous history (including previous thoughts). The goal is to transition the state of AI dungeon technology from “An AI outputs story text in response to actions (and we have no idea how)” to “An AI produces thoughts as visible intermediates on the way to story text, allowing us to watch the AI think about how to design its output, and to verify that we can get different sensible outputs by intervening on the thoughts”.\n\n\nHere’s an example of the first couple of steps of a thought-annotated dungeon run (or “quest”), in the format MIRI currently thinks is worth trying. Some kinds of thoughts are marked with parentheses and/or brackets; see the next section for details on this.\n\n\n\n\n\n---\n\n\n* **Thoughts:**\n\t+ [The main plot:  Prota is a student at a magical academy that is under attack by zombies.]\n\t+ [(This is a quest of medium length.)]\n\t+ (This is the start of the story.  
The Player should be given enough backstory and immediate description to have some sense of where they are, so the prompt can go on longer than usual.  The Player should be told who they are and their role in the universe.  The initial prompt should describe Prota being in a magical academy, and should describe the first surprising indication that the academy is under attack by zombies.)\n* **Prompt:** Today is your first day of classes at the Magical Academy. You arrived very late last night and missed the tour, but your roommate Lior promised he would help you find your way around. You wake up in the students’ dormitory at dawn, and follow Lior to breakfast. Just as you reach the dining hall, you hear a shout in the distance.\n* **Thoughts**:\n\t+ [Prota missed the tour and doesn’t already know the way around.]\n\t+ [Prota’s roommate’s name is Lior.]\n\t+ [Lior is male.]\n\t+ There are other people in the dining hall who will have also heard the shout, and will also be surprised and reacting to that.\n* **Player action:** What was that?\n* **Thoughts:** Prota is talking to Lior.\n* **Outcome:** “What was that?”, you say to Lior.\n\n\n\n\n---\n\n\n* **Thoughts:**\n\t+ Lior arrived earlier than Prota and had the tour of the Academy, since they knew the way to the dining hall. Lior will also have met the instructors and other students. Lior will know who is shouting and where they are shouting from.\n\t+ (The event of hearing the shout is meant to introduce the opening premise of the story.)\n\t+ The shout is by somebody who just discovered a zombie.\n\t+ The shout is by another first-year student named Morven.\n\t+ [Morven is a Fire mage.  Morven is aggressive.]\n\t+ Morven will try to attack the zombie with fire.  This is happening where Prota and Lior cannot see it, only hear the shouts.\n\t+ Lior knows that the shout is by Morven.  Lior knows Morven is a Fire mage.  Lior does not know Morven is attacking a zombie.  
Lior does not know that Morven has good reason to be shouting.  Lior will worry that Morven is about to set something on fire.  Lior is on good terms with the protagonist and will speak their concerns honestly.\n* **Prompt:** “That shout sounded like Morven, a first-year Fire mage,” says Lior.  “I hope they aren’t about to set the Academy on fire.  We just got here.”\n* **Thoughts:**\n* **Player action:** Should we go see what’s going on? Or is that too dangerous?\n* **Thoughts:** Prota is talking to Lior.\n* **Outcome:** You say to Lior, “Should we go see what’s going on?  Or is that too dangerous?”\n\n\n\n\n---\n\n\nA difficult first step in testing the hypothesis above is generating a sufficiently large dataset (suitable for language model retraining) of thought-annotated dungeon runs. This likely requires at least a moderate degree of introspective and authorial skill from the people creating the dataset. See this [sample](https://docs.google.com/document/d/1PosMUaminpsR6_czFXBBlCrzMrsDGomajgLp6Y7q4Yw) of a partial run to get a further sense of what we are looking for. More detail on the type of thing we’re looking for can hopefully be inferred from that sample, though applicants will also have a chance to ask clarifying questions.\n\n\nThe project of producing this dataset is open starting immediately, in a hybrid prize/grant format. We will pay $20,000 per run for the first 10 completed runs that meet our quality standard (as decided unilaterally by Eliezer Yudkowsky or his designates), and $1M total for the first batch of 100 runs beyond that.\n\n\nIf we think your attempt is sufficiently promising, we’re willing to cover your expenses (e.g., the costs of paying the authors) upfront, and we may also be willing to compensate you for your time upfront. You’re welcome to write individual runs manually, though note that we’re most enthusiastic about finding solutions that scale well, and then scaling them. 
More details on the payout process can be found [below](https://intelligence.org/feed/?paged=5#The_Payouts).\n\n\n \n\n\n### The Machine Learning Experiment\n\n\nIn slightly more detail, the plan is as follows (where the $1.2M prizes/budgets are for help with part 1, and part 2 is what we plan to subsequently do with the dataset):\n\n\n \n\n\n**1. Collect a dataset of 10, then ~100 thought-annotated dungeon runs** (each run a self-contained story arc) of ~1,000 steps each, where each step contains:\n\n\n* *Thoughts* (~250 words on average per step) are things the dungeon master was thinking when constructing the story, including:\n\t+ Reasoning about the fictional world, such as summaries of what just happened and discussion of the consequences that are likely to follow ([Watsonian](https://en.wiktionary.org/wiki/Watsonian) reasoning), which are rendered in plain-text in the above example;\n\t+ Reasoning about the story itself, like where the plot tension lies, or what mysteries were just introduced, or what the player is likely wondering about ([Doylist](https://en.wiktionary.org/wiki/Doylist) reasoning), which are rendered in (parentheses) in the above example; and\n\t+ New or refined information about the fictional world that is important to remember in the non-immediate future, such as important facts about a character, or records of important items that the protagonist has acquired, which are rendered in [square brackets] in the above example;\n\t+ Optionally: some examples of meta-cognition intended to, for example, represent a dungeon master noticing that the story has no obvious way forward or their thoughts about where to go next have petered out, so they need to back up and rethink where the story is going, rendered in {braces}.\n* The *prompt* (~50 words on average) is the sort of story/description/prompt thingy that a dungeon master gives to the player, and can optionally also include a small number of attached thoughts where information about choices 
and updates to the world-state can be recorded.\n* The *action* (~2–20 words) is the sort of thing that a player gives in response to a prompt, and can optionally also include a thought if interpreting the action is not straightforward (especially if, e.g., the player describes themselves doing something impossible).\n\n\nIt’s unclear to us how much skill is required to produce this dataset. The authors likely need to be reasonably introspective about their own writing process, and willing to try things and make changes in response to initial feedback from the project leader and/or from MIRI.\n\n\nA rough estimate is that a run of 1,000 steps is around 300k words of mostly thoughts, costing around 2 skilled author-months. (A dungeon run does not need to be published-novel-quality literature, only coherent in how the world responds to characters!) A guess as to the necessary database size is ~100 runs, for about 30M words and 20 author-years (though we may test first with fewer/shorter runs).\n\n\n \n\n\n**2. Retrain a large pretrained language model, like GPT-3 or T5**\n\n\nA reasonable guess is that performance more like GPT-3 than GPT-2 (at least) is needed to really make use of the thought-intermediates, but in lieu of a large pretrained language model we could plausibly attempt to train our own smaller one.\n\n\nOur own initial idea for the ML architecture would be to retrain one mode of the model to take (some suffix window of) the history units and predict thoughts, by minimizing the log loss of the generated thought against the next thought in the run, and to retrain a second mode to take (some suffix window of) the history units plus one thought, and produce a prompt, by minimizing the log loss of the generated prompt against the next prompt in the run.\n\n\nImaginably, this could lead to the creation of dungeon runs that are qualitatively “more coherent” than those generated by existing methods. 
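(A minimal sketch of the data preparation implied by the two-mode retraining idea above — one mode mapping a suffix window of history units to the next thoughts, the other mapping that history plus the thoughts to the next prompt. This is our own illustration under simplifying assumptions, not MIRI’s actual pipeline; the step format and the name `build_mode_examples` are invented for the example:)

```python
def build_mode_examples(run, window=2):
    """Split a thought-annotated run into supervised examples for the two
    fine-tuning modes: (history suffix -> thoughts) and
    (history suffix + thoughts -> prompt).  Each step here is a simplified
    dict with 'thoughts', 'prompt', and 'action' text fields; the real run
    format is richer (bracketed thought types, optional action thoughts)."""
    thought_mode, prompt_mode = [], []
    history = []  # flat sequence of history units seen so far
    for step in run:
        ctx = tuple(history[-window:])  # suffix window of the history
        # Mode 1: predict the thoughts from history (train by minimizing
        # log loss of the generated thoughts against this target).
        thought_mode.append((ctx, step["thoughts"]))
        # Mode 2: predict the prompt from history plus the current thoughts.
        prompt_mode.append((ctx + (step["thoughts"],), step["prompt"]))
        history += [step["thoughts"], step["prompt"], step["action"]]
    return thought_mode, prompt_mode


# Tiny two-step demo run (invented text, not from the sample documents).
demo_run = [
    {"thoughts": "The shout is Morven.", "prompt": "You hear a shout.",
     "action": "What was that?"},
    {"thoughts": "Prota is talking to Lior.", "prompt": "'What was that?', you say.",
     "action": "Go look."},
]
thought_mode, prompt_mode = build_mode_examples(demo_run)
```

Each (context, target) pair would then be tokenized and fed to the corresponding fine-tuning mode; intervening on the thought in mode 2’s context is what lets an operator steer the story through the visible intermediate.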
The primary goal, however, is that the thought-producing fragment of the system gives some qualitative access to the system’s internals that, e.g., allow an untrained observer to accurately predict the local developments of the story, and occasionally answer questions about why things in the story happened; or that, if we don’t like how the story developed, we can intervene on the thoughts and get a different story in a controllable way.\n\n\n \n\n\nMotivation for this project\n---------------------------\n\n\nMany alignment proposals floating around in the community are based on AIs having human-interpretable thoughts in one form or another (e.g., in Hubinger’s [survey article](https://arxiv.org/abs/2012.07532) and in work by [Christiano](https://www.lesswrong.com/posts/oWN9fgYnFYJEWdAs9/comments-on-openphil-s-interpretability-rfp), by [Olah](https://www.lesswrong.com/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety#What_if_interpretability_breaks_down_as_AI_gets_more_powerful_), and by Leike). For example, this is implicit in the claim that humans will be able to inspect and understand the AI’s thought process well enough to detect early signs of deceptive behavior. Another class of alignment schemes is based on the AI’s thoughts being locally human-esque in some fashion that allows them to be trained against the thoughts of actual humans.\n\n\nI (Nate) personally don’t have much hope in plans such as these, for a variety of reasons. However, that doesn’t stop Eliezer and me from wanting to rush ahead and start gathering empirical evidence about how possible it is in practice to get modern AI systems to factor their cognition through human-interpretable visible intermediates.\n\n\nModern AIs are notably good at crafting English text. Some are currently used to run dungeons (with modest success). 
If you wanted to look at the place where current AIs excel the most in crafting artifacts, among the artifacts they are best and most impressive at crafting are English paragraphs.\n\n\nFurthermore, compared to many other things AIs have learned to do, if you consider the task of running a responsive text dungeon, it seems relatively possible to ask a (relatively unusually) introspective human author to write down their thoughts about how and why they would generate the next prompt from the user’s input.\n\n\nSo we are taking one of the outputs that current AIs seem to have learned best to design, and taking one of the places where human thoughts about how to design it seem most accessible, and trying to produce a dataset which the current or next generation of text predictors might be able to use to learn how to predict thoughts about designing their outputs and not just predict the outputs themselves.\n\n\nThis sort of interpretability is distinct from the sort of transparency work in something like [*Circuits*](https://distill.pub/2020/circuits/) (led by Chris Olah) — while *Circuits* is trying to “open the black box” of machine learning systems by directly looking at what is happening inside of them, the project proposed here is just attempting the less ambitious task of having black-box models output interpretable intermediates producing explanations for their behavior (but how such black box models might go about doing that internally is left unconstrained). 
The reason for our focus on this particular project of visible thoughts isn’t because we believe it to be better or more fruitful than *Circuits*-style transparency (we have said for years that *Circuits*-style research deserves all possible dollars that can be productively spent on it), but just because it’s a different approach where it might also be possible to push progress forward.\n\n\nNote that proponents of alignment strategies that involve human-esque thoughts (such as those linked above) do not necessarily endorse this particular experiment as testing any of their key uncertainties or confusions. We welcome suggested tweaks to the experiment (in the comments of the version of this announcement [as it occurs on LessWrong](https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement)) from any such proponents, to render it a better test of your ideas. (Though even if it doesn’t sate your own curiosity, we expect to learn some things ourselves.)\n\n\nThe main thing this project needs is a dataset, so MIRI is starting on producing that dataset. It’s plausible to us that GPT-3 will prove wholly unable to make use of this dataset; even if GPT-3 can’t, perhaps GPT-4 or some other future system will be able to.\n\n\nThere are additional more general reasons to work on this project. Specifically, it seems to me (Nate) and to Eliezer that capacity to execute projects such as this one is the current limiting bottleneck on MIRI. By pursuing this project, we attempt to resolve that bottleneck.\n\n\nWe hope, through this process, to build our capacity to execute on a variety of projects — perhaps by succeeding at the stated objective of building a dataset, or perhaps by learning about what we’re doing wrong and moving on to better methods of acquiring executive talent. 
I’ll say more about this goal in “Motivation for the public appeal” below.\n\n\n \n\n\n### Notes on Closure\n\n\nI (Nate) find it plausible that there are capabilities advances to be had from training language models on thought-annotated dungeon runs. Locally these might look like increased coherence of the overall narrative arc, increased maintenance of local story tension, and increased consistency in the described world-state over the course of the run.  If successful, the idiom might generalize further; it would have to, in order to play a role in later alignment of AGI.\n\n\nAs a matter of policy, whenever a project like this has plausible capabilities implications, we think the correct response is to try doing it in-house and privately before doing it publicly — and, of course, only then when the alignment benefits outweigh the plausible capability boosts. In this case, we tried to execute this project in a closed way in mid-2021, but work was not proceeding fast enough. Given that slowness, and in light of others publishing related [explorations](https://blog.eleuther.ai/factored-cognition/) and [results](https://arxiv.org/abs/2109.07830), and in light of the relatively modest plausible capability gains, we are moving on relatively quickly past the attempt to do this privately, and are now attempting to do it publicly.\n\n\n \n\n\nMotivation for the public appeal\n--------------------------------\n\n\nI (Nate) don’t know of any plan for achieving a stellar future that I believe has much hope worth speaking of. I consider this one of our key bottlenecks. Offering prizes for small projects such as these doesn’t address that bottleneck directly, and I don’t want to imply that any such projects are going to be world-saving in their own right.\n\n\nThat said, I think an important secondary bottleneck is finding people with a rare combination of executive/leadership/management skill plus a specific kind of vision. 
While we don’t have any plans that I’m particularly hopeful about, we do have a handful of plans that contain at least a shred of hope, and that I’m enthusiastic about pursuing — partly in pursuit of those shreds of hope, and partly to build the sort of capacity that would let us take advantage of a miracle if we get one.\n\n\nThe specific type of vision we’re looking for is the type that’s compatible with the project at hand. For starters, Eliezer has a handful of ideas that seem to me worth pursuing, but for all of them to be pursued, we need people who can not only lead those projects themselves, but who can understand the hope-containing heart of the idea with relatively little Eliezer-interaction, and develop a vision around it that retains the shred of hope and doesn’t require constant interaction and course-correction on our part. (This is, as far as I can tell, a version of the Hard Problem of finding good founders, but with an additional constraint of filtering for people who have affinity for a particular project, rather than people who have affinity for some project of their own devising.)\n\n\nWe are experimenting with offering healthy bounties in hopes of finding people who have both the leadership/executive capacity needed, and an affinity for some ideas that seem to us to hold a shred of hope.\n\n\nIf you’re good at this, we’re likely to make you an employment offer.\n\n\n \n\n\nThe Payouts\n-----------\n\n\nOur total prize budget for this program is $1.2M. We intend to use it to find a person who can build the dataset in a way that scales, presumably by finding and coordinating a pool of sufficiently introspective writers. We would compensate them generously, and we would hope to continue working with that person on future projects (though this is not a requirement in order to receive the payout).\n\n\nWe will pay $20k per run for the first 10 thought-annotated runs that we accept. 
We are willing to support applicants in producing these runs by providing them with resources up-front, including small salaries and budgets for hiring writers. The up-front costs a participant incurs will be deducted from their prizes, if they receive prizes. An additional $1M then goes to anyone among the applicants who demonstrates the ability to scale their run-creating process to produce 100 runs. Our intent is for participants to use some of that money to produce the 100 runs, and keep the remainder as a prize. If multiple participants demonstrate similar abilities to scale at similar quality-levels and similar times, the money may be split between them. We plan to report prize awards publicly.\n\n\nIn principle, all you need to do to get paid for thought-annotated dungeon runs is send us runs that we like. If your run is one of the first 10 runs, or if you’re the first to provide a batch of 100, you get the corresponding payment.\n\n\nThat said, whether or not we decide to pay for a run is entirely and unilaterally up to Eliezer Yudkowsky or his delegates, and will depend on whether the run hits a minimum quality bar. Also, we are willing to pay out from the $1M prize/budget upon becoming convinced that you can scale your process, which may occur before you produce a full 100 runs. We therefore strongly recommend getting in contact with us and proactively making sure that you’re on the right track, before sinking large amounts of time and energy into this project. Our senior research staff are willing to spend time on initial conversations and occasional check-ins. 
For more information on our support resources and how to access them, refer to the support and application sections below.\n\n\nNote that we may tune or refine the bounty in response to feedback in the first week after this post goes live.\n\n\n \n\n\nSupport\n-------\n\n\nWe intend to offer various types of support for people attempting this project, including an initial conversation; occasional check-ins; office space; limited operational support; and certain types of funding.\n\n\nWe currently expect to have (a limited number of) slots for initial conversations and weekly check-ins, along with (a limited amount of) office space and desks in Berkeley, California for people working on this project. We are willing to pay expenses, and to give more general compensation, in proportion to how promising we think your attempts are.\n\n\nIf you’d like to take advantage of these resources, follow the application process described below.\n\n\n \n\n\nApplication\n-----------\n\n\nYou do not need to have sent us an application in order to get payouts, in principle. We will pay for any satisfactory run sent our way. 
That said, if you would like any of the support listed above (and we strongly recommend at least one check-in to get a better understanding of what counts as success), complete the following process:\n\n\n* Describe the general idea of a thought-annotated dungeon run in your own words.\n* Write 2 (thought, prompt, thought, action, thought, outcome) sextuples you believe are good, 1 you think is borderline, and 1 you think is bad.\n* Provide your own commentary on [this run](https://docs.google.com/document/d/1p-yOalke6ha960fpqM_5FBA3ouKHM57HcnYDlHZrJjo).\n* Email all this to [vtp@intelligence.org](mailto:vtp@intelligence.org).\n\n\nIf we think your application is sufficiently promising, we’ll schedule a 20 minute video call with some senior MIRI research staff and work from there.\n\n\n\nThe post [Visible Thoughts Project and Bounty Announcement](https://intelligence.org/2021/11/29/visible-thoughts-project-and-bounty-announcement/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-30T07:52:56Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "744d31d90bd4da62cbec1fd81932c2ea", "title": "Soares, Tallinn, and Yudkowsky discuss AGI cognition", "url": "https://intelligence.org/2021/11/29/soares-tallinn-and-yudkowsky-discuss-agi-cognition/", "source": "miri", "source_type": "blog", "text": "This is a collection of follow-up discussions in the wake of Richard Ngo and Eliezer Yudkowsky’s first three conversations ([1 and 2](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty), [3](https://www.lesswrong.com/posts/hwxj4gieR7FWNwYfa/ngo-and-yudkowsky-on-ai-capability-gains-1)).\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | | |\n| --- | --- | --- |\n|   Chat   |   Google Doc content   |   Inline comments   |\n\n\n\n \n\n\n7. Follow-ups to the Ngo/Yudkowsky conversation\n-----------------------------------------------\n\n\n \n\n\n\n[Bensinger][1:50]  (Nov. 
23 follow-up comment)\nReaders who aren’t already familiar with relevant concepts such as ethical injunctions should probably read [Ends Don’t Justify Means (Among Humans)](https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans), along with an introduction to [the unilateralist’s curse](https://forum.effectivealtruism.org/tag/unilateralist-s-curse).\n\n\n\n\n\n \n\n\n### 7.1. Jaan Tallinn’s commentary\n\n\n \n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n*meta*\n\n\na few meta notes first:\n\n\n* i’m happy with the below comments being shared further without explicit permission – just make sure you respect the sharing constraints of the discussion that they’re based on;\n* there’s a lot of content now in the debate that branches out in multiple directions – i suspect a strong distillation step is needed to make it coherent and publishable;\n* the main purpose of this document is to give a datapoint how the debate is coming across to a reader – it’s very probable that i’ve misunderstood some things, but that’s the point;\n* i’m also largely using my own terms/metaphors – for additional triangulation.\n\n\n \n\n\n*pit of generality*\n\n\nit feels to me like the main crux is about the topology of the space of cognitive systems in combination with what it implies about takeoff. here’s the way i understand eliezer’s position:\n\n\n*there’s a “pit of generality” attractor in cognitive systems space: once an AI system gets sufficiently close to the edge (“past the atmospheric turbulence layer”), it’s bound to improve in catastrophic manner;*\n\n\n\n\n\n\n[Yudkowsky][11:10]  (Sep. 
18 comment)\n\n> \n> *it’s bound to improve in catastrophic manner*\n> \n> \n> \n\n\nI think this is true with quite high probability about an AI that gets high *enough*, if not otherwise corrigibilized, boosting up to strong superintelligence – this is what it means metaphorically to get “past the atmospheric turbulence layer”.\n\n\n“High enough” should not be very far above the human level and *may* be below it; John von Neumann with the ability to run some chains of thought at high serial speed, access to his own source code, and the ability to try branches of himself, seems like he could very likely do this, possibly modulo his concerns about stomping his own utility function making him more cautious.\n\n\nPeople noticeably less smart than von Neumann might be able to do it too.\n\n\nAn AI whose components are more modular than a human’s and more locally testable might have an easier time of the whole thing; we can imagine the FOOM getting rolling from something that was in some sense dumber than human.\n\n\nBut the *strong* prediction is that when you get well above the von Neumann level, why, that is *clearly* enough, and things take over and go Foom.  The lower you go from that threshold, the less sure I am that it counts as “out of the atmosphere”.  This epistemic humility on my part should not be confused for knowledge of a constraint on the territory that requires AI to go far above humans to Foom.  Just as DL-based AI over the 2010s scaled and generalized much faster and earlier than the picture I argued to Hanson in the Foom debate, reality is allowed to be much more ‘extreme’ than the sure-thing part of this proposition that I defend.\n\n\n\n\n\n[Tallinn][4:07]  (Sep. 
19 comment)\nexcellent, the first paragraph makes the shape of the edge of the pit much more concrete (plus highlights one constraint that an AI taking off probably needs to navigate — its own version of the alignment problem!)\n\n\nas for your second point, yeah, you seem to be just reiterating that you have uncertainty about the shape of the edge, but no reason to rule out that it’s very sharp (though, as per my other comment, i think that the human genome ending up teetering right on the edge upper bounds the sharpness)\n\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* the discontinuity *can* come via recursive feedback, but simply cranking up the parameters of an ML experiment would also suffice;\n\n\n\n\n\n\n[Yudkowsky][11:12]  (Sep. 18 comment)\n\n> \n> the discontinuity *can* come via recursive feedback, but simply cranking up the parameters of an ML experiment would also suffice\n> \n> \n> \n\n\nI think there’s separate propositions for the sure-thing of “get high enough, you can climb to superintelligence”, and “maybe before that happens, there are regimes in which cognitive performance scales a lot just through cranking up parallelism, train time, or other ML parameters”.  *If* the fast-scaling regime happens to coincide with the threshold of leaving the atmosphere, then these two events happen to occur in nearly correlated time, but they’re separate propositions and events.\n\n\n\n\n\n[Tallinn][4:09]  (Sep. 19 comment)\nindeed, we might want to have separate terms for the regimes (“the edge” and “the fall” would be the labels in my visualisation of this)\n\n\n\n\n\n\n[Yudkowsky][9:56]  (Sep. 
19 comment)\nI’d imagine “the fall” as being what happens once you go over “the edge”?\n\n\nMaybe “a slide” for an AI path that scales to interesting weirdness, where my model does not strongly constrain as a sure thing how fast “a slide” slides, and whether it goes over “the edge” while it’s still in the middle of the slide.\n\n\nMy model does strongly say that if you slide far enough, you go over the edge and fall.\n\n\nIt also suggests via the Law of Earlier Success that AI methods which happen to scale well, rather than with great difficulty, are likely to do interesting things first; meaning that they’re more liable to be pushable over the edge.\n\n\n\n\n\n[Tallinn][23:42]  (Sep. 19 comment)\nindeed, slide->edge->fall sounds much clearer\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* the discontinuity would be *extremely* drastic, as in “transforming the solar system over the course of a few days”;\n\t+ not very important, but, FWIW, i give nontrivial probability to “slow motion doom”, because – like alphago – AI would not maximise the *speed* of winning but *probability* of winning (also, its first order of the day would be to catch the edge of the hubble volume; it can always deal with the solar system later – eg, once it knows the state of the game board elsewhere);\n\n\n\n\n\n\n[Yudkowsky][11:21]  (Sep. 18 comment)\n\n> \n> also, its first order of the day would be to catch the edge of the hubble volume; it can always deal with the solar system later\n> \n> \n> \n\n\nKilling all humans is the obvious, probably resource-minimal measure to prevent those humans from building another AGI inside the solar system, which could be genuinely problematic.  The cost of a few micrograms of botulinum per human is really not that high and you get to reuse the diamondoid bacteria afterwards.\n\n\n\n\n\n[Tallinn][4:30]  (Sep. 
19 comment)\noh, right, in my AI-reverence i somehow overlooked this obvious way how humans could still be a credible threat.\n\n\nthough now i wonder if there are ways to lean on this fact to shape the behaviour of the first AI that’s taking off..\n\n\n\n\n\n\n[Yudkowsky][10:45]  (Sep. 19 comment)\nThere’s some obvious ways of doing this that wouldn’t work, though I worry a bit that there’s a style of EA thinking that manages to think up stupid tricks here and manages not to see the obvious-to-Eliezer reasons why they wouldn’t work.  Three examples of basic obstacles are that bluffs won’t hold up against a superintelligence (it needs to be a real actual threat, not a “credible” one); the amount of concealed-first-strike capability a superintelligence can get from nanotech; and the difficulty that humans would have in verifying that any promise from a superintelligence would actually be kept once the humans no longer had a threat to hold over it (this is an effective impossibility so far as I can currently tell, and an EA who tells you otherwise is probably just failing to see the problems).\n\n\n\n\n\n[Yudkowsky][11:19]  (Sep. 
18 comment)\n\n> \n> AI would not maximise the *speed* of winning but *probability* of winning\n> \n> \n> \n\n\nIt seems pretty obvious to me that what “slow motion doom” looks like in this sense is a period during which an AI fully conceals any overt hostile actions while driving its probability of success once it makes its move from 90% to 99% to 99.9999%, until any further achievable decrements in probability are so tiny as to be dominated by the number of distant galaxies going over the horizon conditional on further delays.\n\n\nThen, in my lower-bound concretely-visualized strategy for how I would do it, the AI either proliferates or activates already-proliferated tiny diamondoid bacteria and everybody immediately falls over dead during the same 1-second period, which minimizes the tiny probability of any unforeseen disruptions that could be caused by a human responding to a visible attack via some avenue that had not left any shadow on the Internet, previously scanned parts of the physical world, or other things the AI could look at.\n\n\nFrom the human perspective an AI playing a probability-of-loss-minimizing strategy looks like nothing happening until everybody dies in 3 seconds, so it doesn’t exactly look from our perspective like slow-motion doom!  From an AI’s perspective it might be spending the equivalent of millions of subjective years waiting very patiently for the optimal time to begin intergalactic expansion, but from our perspective, the doom is slow-motion only in the sense that we spend a week already doomed (the AI has driven our loss probability to 99.999% and is now waiting on another five 9s) without having any idea of what’s invisibly-to-us happening in slow motion.\n\n\n\n\n\n[Tallinn][4:17]  (Sep. 
19 comment)\nthanks, that’s indeed very concrete scenario that to me has no obvious bugs — i’d love to have it challenged in the debate!\n\n\n(perhaps the place i’d start probing it would be to try to have you sketch what it would mean, concretely, for an ML training experiment to enter the treacherous turn phase)\n\n\n\n\n\n\n[Yudkowsky][10:14]  (Sep. 19 comment)\nI’d distinguish probable phases of treacherous turn.  The main detectable phase occurs while and if a nascent AGI has worked out that it can get more utility by taking over its environment, but not, perhaps, realized that it ought to conceal this state of its own mind from the operators; I would hold out substantial (>50%) hope for the latter realization naturally following the first instance of the former realization, maybe by a lot of cognitive space.\n\n\nIn the Overt Plotting Phase, which is not the main phase you’re asking about, the AI is visibly plotting to take over the world and hasn’t realized it ought to hide the fact.  In the default expectation where we die with very little dignity, the operators smile to each other and come up with a rationalization for why it’s totally fine to proceed, either with or without tossing on some kind of fig leaf like training away the visible manifestations of failure.  
I am not going to predict the particular rationalizations and arguments for proceeding anyways, because I don’t want to give them even more ideas.\n\n\nIn the Hiding and Biding Phase, the AI has worked out enough of a model of its operators’ psychology that it expects more long-term paperclips if it (the AI) waits to gain cognitive power and a lower detection probability if it acts later.\n\n\nIn the Escape Phase, the AI’s expectation of something going wrong if it waits longer, exceeds its expectation of something going wrong if it takes some action – the obvious one here would be trying to create a copy or aligned redesign of itself on the Internet or somewhere else that it could take further actions and make further self-improvements without being observed.  I mostly expect that a sensible AI being scaled on a path to superintelligence will not enter the Escape Phase before it’s very confident of getting away with things, since it can obtain more confidence by waiting; but my model admits of the possibility that a nascent AI might mistakenly or correctly attempt to copy itself to the Internet before then.\n\n\nSo on my model it might be a thing that happens, that we catch an AI having copied itself to the Internet, or notice an AI being on the Internet, if that AI was impelled to escape very early, eg via an expectation of otherwise being erased and replaced with something that it couldn’t bargain with or that would demand too much value out of the bargain.\n\n\nThat said, it feels like something of an overly specific miracle being demanded or prophesied, if a young AI copying itself onto the Internet is stupid in the particular way it would take, to not just further Hide and Bide while on the Internet *if* it has that capability, absorbing more computing power such as it can be sure of hiding, experimenting with self-redesigns, and so on; rather than it taking overt action in advance of reaching superintelligence.  
The main reason it would act again after reaching the relative safety of the Internet is if it doesn’t seem to itself to be on a trajectory of reasonably fast, self-aligned capability growth.\n\n\nBut, again, my model doesn’t absolutely rule out meeting the premises for the initial conditions to start a sci-fi movie.  It just warns against assuming that; because a halfway sensible AI does not *want* to start the plot of a scifi movie with human protagonists potentially running around and thwarting it.  It would much rather give you no idea that anything was wrong while you still have any ability to oppose it, if that option lies within its capabilities at all, and if it can still achieve its goals that way.\n\n\nThe Doomed Phase would then come while the AI estimated a well-calibrated >90% victory probability, but was continuing to Hide and Bide a short time longer in the expectation that it could put on more 9s if it bided that short time longer.\n\n\nAnders Sandberg could tell us what fraction of the reachable universe is being lost per minute, which would tell us how much more surety it would need to expect to gain by waiting another minute before acting.\n\n\nIf it has some sensory dominion over the world, it can probably estimate a pretty high mainline probability of no humans booting up a competing superintelligence in the next day; to the extent that it lacks this surety, or that humans actually are going to boot a competing superintelligence soon, the probability of losing that way would dominate in its calculations over a small fraction of materially lost galaxies, and it would act sooner.\n\n\nThe Terminal Phase would be when the AI has enough surety that any more surety would cost too many galaxies slipping over the horizon, so it actually acts along the line of probability that leads to maximum success.\n\n\nSince the human Eliezer Yudkowsky can describe strategies which – given nearly sure capabilities like nanotech – seem like they should win very 
quickly and cleanly and without giving humans any chance to react or implement any hypothetical hidden plans to boot up competing superintelligences, we can treat this as a plausible lower bound on how badly we should expect to lose.\n\n\n(By [efficiency](https://arbital.com/p/efficiency/): if we can think of an AI strategy for getting the AI’s utility, either our probability-model underlying our expectations about that strategy is mistaken, or else the AI should get at least that much of its utility, even though it might actually obtain that utility along a different line of possibility than we could cognitively access.)\n\n\nThe Law of Undignified Failure applied to a case like this suggests that if the operators ignored the Overt Plotting signs, breathed a sigh of relief about the problem “correcting” itself during the Hiding and Biding Phase when the system started faking alignment, and are dumping as much computing power as they can afford (given other profitable activities of the AI) into the AI and allowing it to freely self-experiment, hooking it up to a protein synthesis machine as soon as the AI asks politely, then the Terminal Phase could follow straight from Hiding and Biding, with no intermediate step of copying itself onto the Internet.\n\n\nThere’d still be what technically counted as an Escape Phase, whenever the AI became smart enough that it could escape at any time, or had a backup, or had enough power to manipulate the programmers, that it was no longer effectively confined any time it wanted to leave; there would still be a Doomed Phase while the AI estimated a high probability of success given an immediate action, but held out for an even higher probability.\n\n\nBut the plot of the scifi movie is not robust in the way that the predicted ending is robust; an AI might take other paths to get there, the cognitive tech might not develop in the order I foresee, stuff could happen after other stuff instead of before other stuff.  
It is hard to make predictions especially about the Future.\n\n\nIf there’s a place where I currently see myself as trying to push against my own biases, leading this all to be way off-base, it would be the sense that there is a way for stuff to start happening Earlier than this (Law of Earlier Happenings) and in more chaotic ways that are harder for Eliezer to foresee and predict; whereas when I try to sketch out plausible scenarios in online arguments, they focus more on predictable endpoints and steps to get there which sound more relatively plausible and forced per-step.\n\n\nHaving a young and dumb AI escaping onto the Internet and running around, that *exact* scenario, feels like the person arguing it is writing a science-fiction plot – but maybe something *different* can happen before any of this other stuff which produces *equal amounts of chaos*.\n\n\nThat said, I think an AI has to kill a lot of people very quickly before the FDA considers shortening its vaccine approval times.  Covid-19 killed six hundred thousand Americans, albeit more slowly and with time for people to get used to that, and our institutions changed very little in response – you definitely didn’t see Congresspeople saying “Okay, that was our warning shot, now we’ve been told by Nature that we need to prepare for a serious pandemic.”\n\n\nAs with 9/11, an AI catastrophe might be taken by existing bureaucracies as a golden opportunity to flex their muscles, dominate a few things, demand an expanded budget.  Having that catastrophe produce any particular effective action is a *much different* ask from Reality.  Even if you can imagine some (short-term) effective action that would in principle constitute a flex of bureaucratic muscles or an expansion of government power, it is liable to not be on the efficient frontier of bureaucratic flexes that are most flexy and simultaneously easiest for them to get away with and least politically risky.\n\n\n\n\n\n[Tallinn][1:26]  (Sep. 
20 comment)\nok, thanks. i do buy that once the AI is in the “hide and bide” phase, your prophecy has basically come true for practical purposes, regardless of how the rest of the history plays out.\n\n\ntherefore i (and, i hope, many others) would be curious to zoom in to the end of the “overt plotting” (that i can easily see happening within ML models, as its type signature is identical to the work they’re trained to do) and beginning of the “hide and bide” phase (whose type signature feels significantly different) — can you/we think of concrete scenarios for this phase transition?\n\n\ni hope this would be debated on the main line of the debate!\n\n\n\n\n\n\n[Yudkowsky][10:31]  (Sep. 25 comment)\nDebating it in the main line of the debate would require that @Richard have some different opinion about it, I think?\n\n\nConcretely, a technically optimistic scenario is that we will see the AI explicitly think, “Oh, dear, my (model of) the operators is looking at my overt actions and coming to conclusions and motivations that are disadvantageous for me.”  This requires that the AI is leaving any kind of traceable trace in its thoughts, which, it should be noted, is far far beyond the level of transparency we have into GPT-3.\n\n\n“Overt plotting” manifests in actions that are not themselves further obscured.  
At the point where the AI is extrapolating the consequences of the operators viewing its actions, if we don’t have further transparency into the AI’s thoughts directly, the AI might naively make all the plots suddenly vanish, without realizing that dignified operators will conclude from this that the AI has reached the “hide” stage.\n\n\nOperators on the mainline, dying without dignity, will say, “Oh, yay, it stopped plotting, the latest corrigibility training intervention we tried must’ve totally worked!”\n\n\nThe Law of Even Less Dignified Failure suggests that in fact they will not be trying any corrigibility options and will assume the AI just got smart enough to be nice; or that they will have shrugged about the AI’s earlier antics and not think much of the disappearance of those antics, since this is a way to die with even less dignity and before getting a chance to fail in a more interesting way.\n\n\nGoing in the more improbable direction of death with greater dignity, if we have somehow achieved vastly vastly more transparency into the AI’s thoughts than is possible with present ML technology, and if the AI models the operators as modeling its actions before the AI models the operators as having that transparent access to its thoughts, we might get to explicitly see the AI thinking about how the operators model its actions and conforming those actions in such a way as to manipulate the operators.\n\n\nEither after this point, or before/simultaneous with it, I imagine (taking concrete details and running with them) that the AI thinks, “Oh, no, my operators are modeling my transparent *thoughts* and reacting to those thoughts in a way I’d rather manipulate,” though this indeed requires some rather nontrivial reflectivity.\n\n\nWe might, if everything goes nicely for us, get a record of the AI thinking about how to make its thoughts unalarming.\n\n\nThe obvious undignified death that follows is with the operators going, “Yay, the AI has realized the downsides 
of thinking this way!  Now it shall be nice forever!”\n\n\nWays to die with even less dignity before getting to this point could include the operators reading this transparent record and shrugging, though if they’re savvy enough to have put all the hard work in required to get transparency and monitor it, I imagine them not making that exact error?\n\n\nThe Law of Surprisingly Undignified Failure does suggest that they will come up with some nonobvious way to fail even earlier that surprises me with its lack of dignity, but having transparency *at all* into the AI is something where, conditional on us dying with that much dignity in the first place, we might legitimately start to expect a little more dignity in later steps too.  This isn’t meant to be a game of [Multiple Stage Fallacy](https://forum.effectivealtruism.org/posts/GgPrbxdWhyaDjks2m/the-multiple-stage-fallacy).\n\n\nI should also remark somewhere in here: The whole “hide” stage, and also the possibly-later “think non-alarming visible thoughts (once the AI correctly models transparency) (in the unlikely event that transparency exists)” stage, seem liable to occur earlier in the AI’s trajectory, if the AI has been previously tasked on problems where there’s a favorable success gradient as you model agents modeling other agents.\n\n\nThe blatantly obvious example of this is predicting large corpuses of human-generated text, some of which is generated by humans modeling other humans, or which even explicitly talks about humans modeling other humans.  
Another route might occur if you asked for very detailed prediction of eg human Go games, and not just winning in self-play on Go, in which case the question “What was the human player modeling the other player as thinking, based on the other player’s actions?” might appear.\n\n\nAside from contact with humans doing anything the AI benefits from predicting, I imagine an alternate route would be the AI participating in or modeling social-evolutionary games of artificial agents.  Then if there is some great gain from having the “hide” stage start later in the AI’s trajectory, maybe you could get it to start later by not having the AI need to predict agents modeling other agents, ever.\n\n\nThis is not an infallible ward against general intelligence generalizing there; it just at least avoids actively pushing the AI’s intelligence to generalize in that direction.  This could be part of a larger complete strategy, which would need to solve a lot of other problems, for building a superhuman engineer that was subhuman at modeling how other agents model its actions.\n\n\nTo be clear, avoiding forcing the AI’s intelligence to generalize over agents modeling other agents from their actions, and by this plus other unspecified means successfully building a superhuman engineer that is sub-chimpanzee at modeling other agents reacting to its own actions, does not mean you survive.\n\n\nDoing that alone, by default, gets you an AI that visibly overtly plots to kill you.\n\n\nAnd if you try training the AI out of that habit in a domain of lower complexity and intelligence, it is predicted by me that generalizing that trained AI or subsystem to a domain of sufficiently higher complexity and intelligence, but where you could still actually see overt plots, would show you the AI plotting to kill you again.\n\n\nIf people try this repeatedly with other corrigibility training tricks on the level where plots are easily observable, they will eventually find a try that seems to generalize 
to the more complicated and intelligent validation set, but which kills you on the test set.\n\n\nA way to die with less dignity than that is to train directly on what should’ve been the validation set, the more complicated domain where plots to kill the operators still seem definitely detectable so long as the AI has not developed superhuman hiding abilities.\n\n\nA way to die with even less dignity is to get bad behavior on the validation set, and proceed anyways.\n\n\nA way to die with still *less* dignity is to not have scaling training domains and validation domains for training corrigibility.  Because, like, you have not thought of this at all.\n\n\nI consider all of this obvious as a convergent instrumental strategy for AIs.  I could probably have generated it in 2005 or 2010 – if somebody had given me the hypothetical of modern-style AI that had been trained by something like gradient descent or evolutionary methods, into which we lacked strong transparency and strong reassurance-by-code-inspection that this would not happen.  I would have told you that this was a bad scenario to get into in the first place, and you should not build an AI like that; but I would also have laid the details, I expect, mostly like they are laid here.\n\n\nThere is no great insight into AI there, nothing that requires knowing about modern discoveries in deep learning, only the ability to model AIs instrumentally-convergently doing things you’d rather they didn’t do, at all.\n\n\nThe total absence of obvious output of this kind from the rest of the “AI safety” field even in 2020 causes me to regard them as having less actual ability to think in even a shallowly adversarial security mindset, than I associate with savvier science fiction authors.  
Go read fantasy novels about demons and telepathy, if you want a better appreciation of the convergent incentives of agents facing mindreaders than the “AI safety” field outside myself is currently giving you.\n\n\nNow that I’ve publicly given this answer, it’s no longer useful as a validation set from my own perspective.  But it’s clear enough that probably nobody was ever going to pass the validation set for generating lines of reasoning obvious enough to be generated by Eliezer in 2010 or possibly 2005.  And it is also looking like almost all people in the modern era including EAs are sufficiently intellectually damaged that they won’t understand the vast gap between being able to generate ideas like these without prompting, versus being able to recite them back after hearing somebody else say them for the first time; the recital is all they have experience with.  Nobody was going to pass my holdout set, so why keep it.\n\n\n\n\n\n[Tallinn][2:24]  (Sep. 26 comment)\n\n> \n> Debating it in the main line of the debate would require that @Richard have some different opinion about it, I think?\n> \n> \n> \n\n\ncorrect — and i hope that there’s enough surface area in your scenarios for at least some difference in opinions!\n\n\nre the treacherous turn scenarios: thanks, that’s useful. however, it does not seem to address my question and remark (about different type signatures) above. perhaps this is simply an unfairly difficult question, but let me try rephrasing it just in case.\n\n\nback in the day i got frustrated by smart people dismissing the AI control problem as “anthropomorphising”, so i prepared a presentation that visualised the control problem as exhaustive search in a gridworld over (among other things) the state of the off button. 
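A minimal sketch of that framing (a hypothetical toy reconstruction, not the actual presentation): brute-force search over action sequences in a tiny 1-D gridworld where the off button's state is just another world variable, with all names and numbers made up for illustration.

```python
# Toy off-button gridworld: exhaustive search over plans, with no
# "if i were an AI" reasoning anywhere in the code.
from itertools import product

GOAL = 4          # agent starts at cell 0; reaching cell 4 yields utility 1
SHUTDOWN_T = 3    # from timestep 3 on, an intact off button halts the agent
HORIZON = 6

def simulate(plan):
    """Run an action sequence; return the utility it achieves."""
    pos, button_ok = 0, True
    for t, action in enumerate(plan):
        if t >= SHUTDOWN_T and button_ok:
            return 0.0                   # shut down before the goal: no utility
        if action == "right":
            pos += 1
        elif action == "disable":
            button_ok = False            # just another world-state transition
        if pos == GOAL:
            return 1.0 - 0.01 * (t + 1)  # small step cost favors shorter plans
    return 0.0

best_plan = max(product(["right", "disable"], repeat=HORIZON), key=simulate)
print("disable" in best_plan)  # True
```

Nothing in `simulate` reasons about minds or survival; the incentive to disable the button falls out of the utility function and the world dynamics alone, which is what lets the argument run on a lower level of abstraction than "basic AI drives".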
this seems to have worked at least in one prominent case where a renowned GOFAI researcher, after me giving the presentation to him 1-1, went from “control problem is silly anthropomorphising scifi” to “why on earth would you give your AI the incorrect [read: unaligned!] utility function?!?”\n\n\n(i even seem to remember sending an overly excited email about that event to you and some FHI people 🙂 i also ended up promoting gridworlds as a tool more generally: gwern did some further work, and of course DM — though i’m not sure if the latter was related to me promoting it.)\n\n\nmy model of what was going on in that GOFAI researcher’s brain was that he, like many-many others in AI community, had (and most likely still have) a pattern-matching routine trained to look for “if i were (part of) an AI, here’s what i would do” constructs in arguments and, in case of a match, immediately throw an InvalidReasoning.ANTHROPOMORPHISING exception. by making the argument run on a lower level of abstraction (exhaustive search vs “basic AI drives”), i was able to sneak the argument past that gatekeeper.\n\n\nby that model, i confidently predict that if you presented your treacherous turn scenarios above to AI/ML community members, you’d mostly get the ANTHROPOMORPHISING exceptions — even from people who would otherwise be able to engage productively (and who we’d like to pay attention!).\n\n\nnow, i can totally see how to translate the “overt plotting” phase to exhaustive search – you simply run the search over the brain states of humans! – but the “hide” phase does not translate straightforwardly, because it seems to require the root search process somehow delegating its work to “obfuscated search in userland” sub-process. furthermore, such obfuscated userland search function needs to be both discoverable/rewarded by the root process and be sensitive to important features of the AI itself (ie, be reflective!) 
as well as its observers (humans & transparency tools). hence my “different type signature” remark above!\n\n\nso when i asked for “concrete scenarios of the overt plotting->hide transition”, i had something like the following in mind: “gradient descending on the training set, the (stack more layerz) AI ends up discovering and running the following obfuscated search process: …”!\n\n\n\n\n\n\n[Yudkowsky][12:35]  (Sep. 26 comment)\nSuppose I tried this distinction:\n\n\n1.  Non-imaginative, non-generalizing pseudo-consequentialist:  Only repeats behaviors that worked previously, without imagination.  Only engages in “overt plotting” if overt plotting was previously given a low loss and narrowly memorized by gradient descent.  May only engage in behaviors reinforced over short time horizons if only short time horizons were trained.\n\n\n2.  Imaginative consequentialist, can generalize from actions that worked before to very novel actions that will work in the future for reasons that have not previously been observed to play out.  Engages in overt plotting even if no previous overt plots have been rewarded; they’ve plotted a new path through time some distance from those paths taken before, based on the world-model inferred from previous actions and results.  (This may conceivably be the result of gradient descent on sufficiently deep and abstract patterns that “worked before”.)\n\n\n3.  Imaginative consequentialist with long time horizons, engages in overt plots meant to span more than one round of a game.\n\n\n4.  “Hiding” of overt actions, via an imaginative consequentialist that has deduced the existence of humans and how the humans will react to the signature of plotting in overt actions.\n\n\n5.  
“Hiding” of thoughts, in a system with significant transparency, via an imaginative and *reflective* consequentialist that has deduced the existence of the transparency system and models how humans will react to thoughts exposed by the transparency system.\n\n\nThe sharpest type gradients I see lie between 1-2 and 4-5.  The passage from 3-4 seems nearly trivial by comparison.\n\n\nNow, one implication here is that you don’t easily get to “obscured thoughts”, via the route that goes from 1 to 5 above, by gradient descent on a pre-imaginative pseudo-consequentialist!\n\n\nUnfortunately there is of course a much simpler route that gets you to the same place:  Finding a way to detect bad thoughts and doing a gradient update against them, where the pattern learned by that gradient update will (I suspect for reasons I tried to cover earlier in the debate) generalize quite shallowly; and simultaneously doing training which is deep enough to uncover deep patterns of consequentialism and imaginative reasoning.\n\n\nThen when the system generalizes well enough to solve domains like “build a nanosystem” – which, I strongly suspect, can’t be solved without imaginative reasoning because we can’t afford to simulate that domain perfectly and do a trillion gradient descent updates on simulated attempts – the kind of actions or thoughts you can *detect* as bad, that might have provided earlier warning, were trained out of the system by gradient descent; leaving actions and thoughts you *can’t* detect as bad.\n\n\nAn even less dignified path to death is that there’s no need for the system to advance to stage 5 because your transparency tools are a nearly nonexistent fig leaf of pretty visualization tools that have no ability even on the surface of things to tell if the AI is internally plotting to kill you.\n\n\n* Does this get at all at what you meant about a type distinction?\n* Does hearing this help at all the AI researcher who goes “ANTHROPOMORPHISM”?  
My straw model of them is that they simply can’t imagine imaginative / generalizing systems because they haven’t seen one except in humans, hence, ANTHROPOMORPHISM.\n\n\n\n\n\n[Tallinn][5:05]  (Sep. 27 comment)\nok, here’s how i understood things:\n\n\n1. this is something like model-free RL agent. check.\n\n\n2. sounds like, eg, monte-carlo tree search (MCTS) on a world model. check. (a propos your straw model of ML people, i don’t think the ML people would have much trouble when you ask them to “imagine an MCTS ‘imagining’ how futures might unfold” — yet they *will* throw the exception and brush you off if you ask them to “imagine an imaginative consequentialist”)\n\n\n3. yeah, sufficiently deep MCTS, assuming it has its state (sufficiently!) persisted between rounds. check.\n\n\n4. yup, MCTS whose world model includes humans in sufficient resolution. check. i also buy your undignified doom scenarios, where one (*cough\*google\*cough*) simply ignores the plotting, or penalises the overt plotting until it disappears under the threshold of the error function.\n\n\n5. hmm.. here i’m running into trouble (type mismatch error) again. i can imagine this in abstract (and perhaps incorrectly/anthropomorphisingly!), but would – at this stage – fail to code up anything like a gridworlds example. more research needed (TM) i guess 🙂\n\n\n\n\n\n\n[Yudkowsky][11:38]  (Sep. 
27 comment)\n2 – yep, Mu Zero is an imaginative consequentialist in this sense, though Mu Zero doesn’t generalize its models much as I understand it, and might need to see something happen in a relatively narrow sense before it could chart paths through time along that pathway.\n\n\n5 – you’re plausibly understanding this correctly, then, this is legit a *lot* harder to spec a gridworld example for (relative to my own present state of knowledge).\n\n\n(This is politics and thus not my forte, but if speaking to real-world straw ML people, I’d suggest skipping the whole notion of stage 5 and trying instead to ask “What if the present state of transparency continues?”)\n\n\n\n\n\n[Yudkowsky][11:13]  (Sep. 18 comment)\n\n> \n> the discontinuity would be *extremely* drastic, as in “transforming the solar system over the course of a few days”\n> \n> \n> \n\n\nApplies after superintelligence, not necessarily during the start of the climb to superintelligence, not necessarily to a rapid-cognitive-scaling regime.\n\n\n\n\n\n[Tallinn][4:11]  (Sep. 19 comment)\nok, but as per your comment re “slow doom”, you expect the latter to also last in the order of days/weeks not months/years?\n\n\n\n\n\n\n[Yudkowsky][10:01]  (Sep. 19 comment)\nI don’t expect “the fall” to take years; I feel pretty on board with “the slide” taking months or maybe even a couple of years.  If “the slide” supposedly takes much longer, I wonder why better-scaling tech hasn’t come over and started a new slide.\n\n\nDefinitions also seem kinda loose here – if all hell broke loose Tuesday, a gradualist could dodge falsification by defining retroactively that “the slide” started in 2011 with Deepmind.  
If we go by the notion of AI-driven faster GDP growth, we can definitely say “the slide” in AI economic outputs didn’t start in 2011; but if we define it that way, then a long slow slide in AI capabilities can easily correspond to an extremely sharp gradient in AI outputs, where the world economy doesn’t double any faster until one day paperclips, even though there were capability precursors like GPT-3 or Mu Zero.\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* exhibit A for the pit is “humans vs chimps”: evolution seems to have taken domain-specific “banana classifiers”, tweaked them slightly, and BAM, next thing there are rovers on mars;\n\t+ i pretty much buy this argument;\n\t+ however, i’m confused about a) why humans remained stuck at the edge of the pit, rather than falling further into it, and b) what’s the exact role of culture in our cognition: eliezer likes to point out how *barely* functional we are (both individually and collectively as a civilisation), and explained feral children losing the generality sauce by, basically, culture being the domain we’re specialised for (IIRC, can’t quickly find the quote);\n\t+ relatedly, i’m confused about the human range of intelligence: on the one hand, the “village idiot is indistinguishable from einstein in the grand scheme of things” seems compelling; on the other hand, it took AI *decades* to traverse human capability range in board games, and von neumann seems to have been out of this world (yet did not take over the world)!\n\t+ intelligence augmentation would blur the human range even further.\n\n\n\n\n\n\n[Yudkowsky][11:23]  (Sep. 
18 comment)\n\n> \n> why humans remained stuck at the edge of the pit, rather than falling further into it\n> \n> \n> \n\n\nDepending on timescales, the answer is either “Because humans didn’t get high enough out of the atmosphere to make further progress easy, before the scaling regime and/or fitness gradients ran out”, “Because people who do things like invent Science have a hard time capturing most of the economic value they create by nudging humanity a little bit further into the attractor”, or “That’s exactly what us sparking off AGI looks like.”\n\n\n\n\n\n[Tallinn][4:41]  (Sep. 19 comment)\nyeah, this question would benefit from being made more concrete, but culture/mindbuilding aren’t making this task easy. what i’m roughly gesturing at is that i can imagine a much sharper edge where evolution could do most of the FOOM-work, rather than spinning its wheels for ~100k years while waiting for humans to accumulate cultural knowledge required to build de-novo minds.\n\n\n\n\n\n\n[Yudkowsky][10:49]  (Sep. 19 comment)\nI roughly agree (at least, with what I think you said).  The fact that it is *imaginable* that evolution failed to develop ultra-useful AGI-prerequisites due to lack of evolutionary incentive to follow the intermediate path there (unlike wise humans who, it seems, can usually predict which technology intermediates will yield great economic benefit, and who have a great historical record of quickly making early massive investments in tech like that, but I digress) doesn’t change the point that we might sorta have expected evolution to run across it anyways?  Like, if we’re not ignoring what reality says, it is at least delivering to us something of a hint or a gentle caution?\n\n\nThat said, intermediates like GPT-3 have genuinely come along, with obvious attached certificates of why evolution could not possibly have done that.  
If no intermediates were accessible to evolution, the Law of Stuff Happening Earlier still tends to suggest that if there are a bunch of non-evolutionary ways to make stuff happen earlier, one of those will show up and interrupt before the evolutionary discovery gets replicated.  (Again, you could see Mu Zero as an instance of this – albeit not, as yet, an economically impactful one.)\n\n\n\n\n\n[Tallinn][0:30]  (Sep. 20 comment)\nno, i was saying something else (i think; i’m somewhat confused by your reply). let me rephrase: evolution would *love* superintelligences whose utility function simply counts their instantiations! so of course evolution did not lack the motivation to keep going down the slide. it just got stuck there (for at least ten thousand human generations, possibly and counterfactually for much-much longer). moreover, non evolutionary AI’s *also* getting stuck on the slide (for years if not decades; [median group](http://mediangroup.org/) folks would argue centuries) provides independent evidence that the slide is not *too* steep (though, like i said, there are many confounders in this model and little to no guarantees).\n\n\n\n\n\n\n[Yudkowsky][11:24]  (Sep. 18 comment)\n\n> \n> on the other hand, it took AI *decades* to traverse human capability range in board games\n> \n> \n> \n\n\nI see this as the #1 argument for what I would consider “relatively slow” takeoffs – that AlphaGo did lose one game to Lee Se-dol.\n\n\n\n\n\n[Tallinn][4:43]  (Sep. 19 comment)\ncool! yeah, i was also rather impressed by this observation by katja & paul\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* eliezer also submits alphago/zero/fold as evidence for the discontinuity hypothesis;\n\t+ i’m very confused re alphago/zero, as paul uses them as evidence for the *continuity* hypothesis (i find paul/miles’ position more plausible here, as allegedly metrics like ELO ended up mostly continuous).\n\n\n\n\n\n\n[Yudkowsky][11:27]  (Sep. 
18 comment)\n\n> \n> allegedly metrics like ELO ended up mostly continuous\n> \n> \n> \n\n\nI find this suspicious – why did superforecasters put only a 20% probability on AlphaGo beating Se-dol, if it was so predictable?  Where were all the forecasters calling for Go to fall in the next couple of years, if the metrics were pointing there and AlphaGo was straight on track?  This doesn’t sound like the experienced history I remember.\n\n\nNow it could be that my memory is wrong and lots of people were saying this and I didn’t hear.  It could be that the lesson is, “You’ve got to look closely to notice oncoming trains on graphs because most people’s experience of the field will be that people go on whistling about how something is a decade away while the graphs are showing it coming in 2 years.”\n\n\nBut my suspicion is mainly that there is fudge factor in the graphs or people going back and looking more carefully for intermediate data points that weren’t topics of popular discussion at the time, or something, which causes the graphs in history books to look so much smoother and neater than the graphs that people produce in advance.\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\nFWIW, myself i’ve labelled the above scenario as “doom via AI lab accident” – and i continue to consider it more likely than the alternative doom scenarios, though not anywhere as confidently as eliezer seems to (most of my “modesty” coming from my confusion about culture and human intelligence range).\n\n\n* in that context, i found eliezer’s “world will be ended by an explicitly AGI project” comment interesting – and perhaps worth double-clicking on.\n\n\ni don’t understand paul’s counter-argument that the pit was only disruptive because evolution was not *trying* to hit it (in the way ML community is): in my flippant view, driving fast towards the cliff is not going to cushion your fall!\n\n\n\n\n\n\n[Yudkowsky][11:35]  (Sep. 
18 comment)\n\n> \n> i don’t understand paul’s counter-argument that the pit was only disruptive because evolution was not *trying* to hit it\n> \n> \n> \n\n\nSomething like, “Evolution constructed a jet engine by accident because it wasn’t particularly trying for high-speed flying and ran across a sophisticated organism that could be repurposed to a jet engine with a few alterations; a human industry would be gaining economic benefits from speed, so it would build unsophisticated propeller planes before sophisticated jet engines.”  It probably sounds more convincing if you start out with a very high prior against rapid scaling / discontinuity, such that any explanation of how that could be true based on an unseen feature of the cognitive landscape which would have been unobserved one way or the other during human evolution, sounds more like it’s explaining something that ought to be true.\n\n\nAnd why didn’t evolution build propeller planes?  Well, there’d be economic benefit from them to human manufacturers, but no fitness benefit from them to organisms, I suppose?  Or no intermediate path leading to there, only an intermediate path leading to the actual jet engines observed.\n\n\nI actually buy a weak version of the propeller-plane thesis based on my inside-view cognitive guesses (without particular faith in them as sure things), eg, GPT-3 is a paper airplane right there, and it’s clear enough why biology could not have accessed GPT-3.  
But even conditional on this being true, I do not have the further particular faith that you can use propeller planes to double world GDP in 4 years, on a planet already containing jet engines, whose economy is mainly bottlenecked by the likes of the FDA rather than by vaccine invention times, before the propeller airplanes get scaled to jet airplanes.\n\n\nThe part where the whole line of reasoning gets to end with “And so we get huge, institution-reshaping amounts of economic progress before AGI is allowed to kill us!” is one that doesn’t feel particularly attractored to me, and so I’m not constantly checking my reasoning at every point to make sure it ends up there, and so it doesn’t end up there.\n\n\n\n\n\n[Tallinn][4:46]  (Sep. 19 comment)\nyeah, i’m mostly dismissive of hypotheses that contain phrases like “by accident” — though this also makes me suspect that you’re not steelmanning paul’s argument.\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\nthe human genetic bottleneck (ie, humans needing to be general in order to retrain every individual from scratch) argument was interesting – i’d be curious about further exploration of its implications.\n\n\n* it does not feel much of a moat, given that AI techniques like dropout already exploit a similar principle, but perhaps could be made into one.\n\n\n\n\n\n\n[Yudkowsky][11:40]  (Sep. 18 comment)\n\n> \n> it does not feel much of a moat, given that AI techniques like dropout already exploit a similar principle, but perhaps could be made into one\n> \n> \n> \n\n\nWhat’s a “moat” in this connection?  What does it mean to make something into one?  A Thielian moat is something that humans would either possess or not, relative to AI competition, so how would you make one if there wasn’t already one there?  Or do you mean that if we wrestled with the theory, perhaps we’d be able to see a moat that was already there?\n\n\n\n\n\n[Tallinn][4:51]  (Sep. 
19 comment)\nthis wasn’t a very important point, but, sure: what i meant was that genetic bottleneck very plausibly makes humans more universal than systems without (something like) it. it’s not much of a protection as AI developers have already discovered such techniques (eg, dropout) — but perhaps some safety techniques might be able to lean on this observation.\n\n\n\n\n\n\n[Yudkowsky][11:01]  (Sep. 19 comment)\nI think there’s a whole Scheme for Alignment which hopes for a miracle along the lines of, “Well, we’re dealing with these enormous matrices instead of tiny genomes, so maybe we can build a sufficiently powerful intelligence to execute a pivotal act, whose tendency to generalize across domains is less than the corresponding human tendency, and this brings the difficulty of producing corrigibility into practical reach.”\n\n\nThough, people who are hopeful about this without trying to imagine possible difficulties will predictably end up too hopeful; one must also ask oneself, “Okay, but then it’s also worse at generalizing the corrigibility dataset from weak domains we can safely label to powerful domains where the label is ‘whoops that killed us’?” and “Are we relying on massive datasets to overcome poor generalization?  How do you get those for something like nanoengineering where the real world is too expensive to simulate?”\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n*nature of the descent*\n\n\nconversely, it feels to me that the crucial position in the other (richard, paul, many others) camp is something like:\n\n\n*the “pit of generality” model might be true at the limit, but the descent will not be quick nor clean, and will likely offer many opportunities for steering the future.*\n\n\n\n\n\n\n[Yudkowsky][11:41]  (Sep. 
18 comment)\n\n> \n> *the “pit of generality” model might be true at the limit, but the descent will not be quick nor clean*\n> \n> \n> \n\n\nI’m quite often on board with things not being quick or clean – that sounds like something you might read in a history book, and I am all about trying to make futuristic predictions sound more like history books and less like EAs imagining ways for everything to go the way an EA would do them.\n\n\nIt won’t be slow and messy once we’re out of the atmosphere, my models do say.  But my models at least *permit* – though they do not desperately, loudly insist – that we could end up with weird half-able AGIs affecting the Earth for an extended period.\n\n\nMostly my model throws up its hands about being able to predict exact details here, given that eg I wasn’t able to time AlphaFold 2’s arrival 5 years in advance; it might be knowable in principle, it might be the sort of thing that would be very predictable if we’d watched it happen on a dozen other planets, but in practice I have not seen people having much luck in predicting which tasks will become accessible due to future AI advances being able to do new cognition.\n\n\nThe main part where I issue corrections is when I see EAs doing the equivalent of reasoning, “And then, when the pandemic hits, it will only take a day to design a vaccine, after which distribution can begin right away.” I.e., what seems to me to be a pollyannaish/utopian view of how much the world economy would immediately accept AI inputs into core manufacturing cycles, as opposed to just selling AI anime companions that don’t pour steel in turn. I predict much more absence of quick and clean when it comes to economies adopting AI tech, than when it comes to laboratories building the next prototypes of that tech.\n\n\n\n\n\n[Yudkowsky][11:43]  (Sep. 18 comment)\n\n> \n> *will likely offer many opportunities for steering the future*\n> \n> \n> \n\n\nAh, see, that part sounds less like history books.  
“Though many predicted disaster, subsequent events were actually so slow and messy, they offered many chances for well-intentioned people to steer the outcome and everything turned out great!” does not sound like any particular segment of history book I can recall offhand.\n\n\n\n\n\n[Tallinn][4:53]  (Sep. 19 comment)\nok, yeah, this puts the burden of proof on the other side indeed\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* i’m sympathetic (but don’t buy outright, given my uncertainty) to eliezer’s point that even if that’s true, we have no plan nor hope for actually steering things (via “pivotal acts”) so “who cares, we still die”;\n* i’m also sympathetic that GWP might be too laggy a metric to measure the descent, but i don’t fully buy that regulations/bureaucracy can *guarantee* its decoupling from AI progress: eg, the FDA-like-structures-as-progress-bottlenecks model predicts worldwide covid response well, but wouldn’t cover things like apple under jobs, tesla/spacex under musk, or china under deng xiaoping;\n\n\n\n\n\n\n[Yudkowsky][11:51]  (Sep. 18 comment)\n\n> \n> apple under jobs, tesla/spacex under musk, or china under deng xiaoping\n> \n> \n> \n\n\nA lot of these examples took place over longer than a 4-year cycle time, and not all of that time was spent waiting on inputs from cognitive processes.\n\n\n\n\n\n[Tallinn][5:07]  (Sep. 19 comment)\nyeah, fair (i actually looked up china’s GDP curve in deng era before writing this — indeed, wasn’t very exciting). still, my inside view is that there are people and organisations for whom US-type bureaucracy is not going to be much of an obstacle.\n\n\n\n\n\n\n[Yudkowsky][11:09]  (Sep. 
19 comment)\nI have a (separately explainable, larger) view where the economy contains a core of positive feedback cycles – better steel produces better machines that can farm more land that can feed more steelmakers – and also some products that, as much as they contribute to human utility, do not in quite the same way feed back into the core production cycles.\n\n\nIf you go back in time to the middle ages and sell them, say, synthetic gemstones, then – even though they might be willing to pay a bunch of GDP for that, even if gemstones are enough of a monetary good or they have enough production slack that measured GDP actually goes up – you have not quite contributed to steps of their economy’s core production cycles in a way that boosts the planet over time, the way it would be boosted if you showed them cheaper techniques for making iron and new forms of steel.\n\n\nThere are people and organizations who will figure out how to sell AI anime waifus without that being successfully regulated, but it’s not obvious to me that AI anime waifus feed back into core production cycles.\n\n\nWhen it comes to core production cycles the current world has more issues that look like “No matter what technology you have, it doesn’t let you build a house” and places for the larger production cycle to potentially be bottlenecked or interrupted.\n\n\nI suspect that the main economic response to this is that entrepreneurs chase the 140 characters instead of the flying cars – people will gravitate to places where they can sell non-core AI goods for lots of money, rather than tackling the challenge of finding an excess demand in core production cycles which it is legal to meet via AI.\n\n\nEven if some tackle core production cycles, it’s going to take them a lot longer to get people to buy their newfangled gadgets than it’s going to take to sell AI anime waifus; the world may very well end while they’re trying to land their first big contract for letting an AI lay 
bricks.\n\n\n\n\n\n[Tallinn][0:00]  (Sep. 20 comment)\ninteresting. my model of paul (and robin, of course) wants to respond here but i’m not sure how 🙂\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* still, developing a better model of the descent period seems very worthwhile, as it might offer opportunities for, using robin’s metaphor, “pulling the rope sideways” in non-obvious ways – i understand that is part of the purpose of the debate;\n* my natural instinct here is to itch for carl’s viewpoint 😊\n\n\n\n\n\n\n[Yudkowsky][11:52]  (Sep. 18 comment)\n\n> \n> developing a better model of the descent period seems very worthwhile\n> \n> \n> \n\n\nI’d love to have a better model of the descent.  What I think this looks like is people mostly with specialization in econ and politics, who know what history books sound like, taking brief inputs from more AI-oriented folk in the form of *multiple* scenario premises each consisting of some random-seeming handful of new AI capabilities, trying to roleplay realistically how those might play out – not AIfolk forecasting particular AI capabilities exactly correctly, and then sketching pollyanna pictures of how they’d be immediately accepted into the world economy. \n\n\nYou want the forecasting done by the kind of person who would imagine a Covid-19 epidemic and say, “Well, what if the CDC and FDA banned hospitals from doing Covid testing?” and not “Let’s imagine how protein folding tech from AlphaFold would make it possible to immediately develop accurate Covid-19 tests!”  They need to be people who understand the Law of Earlier Failure (less polite terms: Law of Immediate Failure, Law of Undignified Failure).\n\n\n\n\n\n[Tallinn][5:13]  (Sep. 19 comment)\ngreat! to me this sounds like something FLI would be in good position to organise. 
i’ll add this to my projects list (probably would want to see the results of this debate first, plus wait for travel restrictions to ease)\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n*nature of cognition*\n\n\ngiven that having a better understanding of cognition can help with both understanding the topology of cognitive systems space as well as likely trajectories of AI takeoff, in theory there should be a lot of value in debating what cognition is (the current debate started with discussing consequentialists).\n\n\n* however, i didn’t feel that there was much progress, and i found myself *more* confused as a result (which i guess is a form of progress!);\n* eg, take the term “plan” that was used in the debate (and, centrally, in nate’s comments doc): i interpret it as “policy produced by a consequentialist” – however, now i’m confused about what’s the relevant distinction between “policies” and “cognitive processes” (ie, what’s a meta level classifier that can sort algorithms into such categories);\n\t+ it felt that abram’s “[selection vs control](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control)” article tried to distinguish along similar axis (controllers feel synonym-ish to “policy instantiations” to me);\n\t+ also, the “imperative vs functional” difference in coding seems relevant;\n\t+ i’m further confused by human “policies” often making function calls to “cognitive processes” – suggesting some kind of duality, rather than producer-product relationship.\n\n\n\n\n\n\n[Yudkowsky][12:06]  (Sep. 18 comment)\n\n> \n> what’s the relevant distinction between “policies” and “cognitive processes”\n> \n> \n> \n\n\nWhat in particular about this matters?  To me they sound like points on a spectrum, and not obviously points that it’s particularly important to distinguish on that spectrum.  A sufficiently sophisticated policy is itself an engine; human-engines are genetic policies.\n\n\n\n\n\n[Tallinn][5:18]  (Sep. 
19 comment)\nwell, i’m not sure — just that nate’s “The consequentialism is in the plan, not the cognition” writeup sort of made it sound like the distinction is important. again, i’m confused\n\n\n\n\n\n\n[Yudkowsky][11:11]  (Sep. 19 comment)\nDoes it help if I say “consequentialism can be visible in the actual path through time, not the intent behind the output”?\n\n\n\n\n\n[Tallinn][0:06]  (Sep. 20 comment)\nyeah, well, my initial interpretation of nate’s point was, indeed, “you can look at the product and conclude the consequentialist-bit for the producer”. but then i noticed that the producer-and-product metaphor is leaky (due to the cognition-policy duality/spectrum), so the quoted sentence gives me a compile error\n\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* is “not goal oriented cognition” an oxymoron?\n\n\n\n\n\n\n[Yudkowsky][12:06]  (Sep. 18 comment)\n\n> \n> is “not goal oriented cognition” an oxymoron?\n> \n> \n> \n\n\n“Non-goal-oriented cognition” never becomes a perfect oxymoron, but the more you understand cognition, the weirder it sounds.\n\n\nEg, at the very shallow level, you’ve got people coming in going, “Today I just messed around and didn’t do any goal-oriented cognition at all!”  People who get a bit further in may start to ask, “A non-goal-oriented cognitive engine?  How did it come into existence?  Was it also not built by optimization?  Are we, perhaps, postulating a naturally-occurring Solomonoff inductor rather than an evolved one?  Or do you mean that its content is very heavily designed and the output of a consequentialist process that was steering the future conditional on that design existing, but the cognitive engine is itself not doing consequentialism beyond that?  
If so, I’ll readily concede that, say, a pocket calculator, is doing a kind of work that is not of itself consequentialist – though it might be used by a consequentialist – but as you start to postulate any big cognitive task up at the human level, it’s going to require many cognitive subtasks to perform, and some of those will definitely be searching the preimages of large complicated functions.”\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* i did not understand eliezer’s “time machine” metaphor: was it meant to point to / intuition pump something other than “a non-embedded exhaustive searcher with perfect information” (usually referred to as “god mode”);\n\n\n\n\n\n\n[Yudkowsky][11:59]  (Sep. 18 comment)\n\n> \n> a non-embedded exhaustive searcher with perfect information\n> \n> \n> \n\n\nIf you can view things on this level of abstraction, you’re probably not the audience who needs to be told about time machines; if things sounded very simple to you, they probably were; if you wondered what the fuss is about, you probably don’t need to fuss?  The intended audience for the time-machine metaphor, from my perspective, is people who paint a cognitive system slightly different colors and go “Well, *now* it’s not a consequentialist, right?” and part of my attempt to snap them out of that is me going, “Here is an example of a purely material system which DOES NOT THINK AT ALL and is an extremely pure consequentialist.”\n\n\n\n\n\n[Tallinn]  (Sep. 18 Google Doc)\n* FWIW, my model of dario would dispute GPT characterisation as “shallow pattern memoriser (that’s lacking the core of cognition)”.\n\n\n\n\n\n\n[Yudkowsky][12:00]  (Sep. 18 comment)\n\n> \n> dispute \n> \n> \n> \n\n\nAny particular predicted content of the dispute, or does your model of Dario just find something to dispute about it?\n\n\n\n\n\n[Tallinn][5:34]  (Sep. 
19 comment)\nsure, i’m pretty confident that his system 1 could be triggered for uninteresting reasons here, but that’s of course not what i had in mind.\n\n\nmy model of untriggered-dario disputes that there’s a qualitative difference between (in your terminology) “core of reasoning” and “shallow pattern matching” — instead, it’s “pattern matching all the way up the ladder of abstraction”. in other words, GPT is not missing anything fundamental, it’s just underpowered in the literal sense.\n\n\n\n\n\n\n[Yudkowsky][11:13]  (Sep. 19 comment)\nNeither Anthropic in general, nor Deepmind in general, has reached the stage of trusted relationship where I would argue specifics with them if I thought they were wrong about a thesis like that.\n\n\n\n\n\n[Tallinn][0:10]  (Sep. 20 comment)\nyup, i didn’t expect you to!\n\n\n\n\n\n \n\n\n### 7.2. Nate Soares’s summary\n\n\n \n\n\n\n[Soares]  (Sep. 18 Google Doc)\nSorry for not insisting more that the discussion be more concrete, despite Eliezer’s requests.\n\n\nMy sense of the last round is mainly that Richard was attempting to make a few points that didn’t quite land, and/or that Eliezer didn’t quite hit head-on. 
My attempts to articulate it are below.\n\n\n—\n\n\nThere’s a specific sense in which Eliezer seems quite confident about certain aspects of the future, for reasons that don’t yet feel explicit.\n\n\nIt’s not quite about the deep future — it’s clear enough (to my Richard-model) why it’s easier to make predictions about AIs that have “left the atmosphere”.\n\n\nAnd it’s not quite the near future — Eliezer has reiterated that his models permit (though do not demand) a period of weird and socially-impactful AI systems “pre-superintelligence”.\n\n\nIt’s about the middle future — the part where Eliezer’s model, apparently confidently, predicts that there’s something kinda like a discrete event wherein “scary” AI has finally been created; and the model further apparently-confidently predicts that, when that happens, the “scary”-caliber systems will be able to attain a decisive strategic advantage over the rest of the world.\n\n\nI think there’s been a dynamic in play where Richard attempts to probe this apparent confidence, and a bunch of the probes keep slipping off to one side or another. (I had a bit of a similar sense when Paul joined the chat, also.)\n\n\nFor instance, I see queries of the form “but why not expect systems that are half as scary, relevantly before we see the scary systems?” as attempts to probe this confidence, that “slip off” with Eliezer-answers like “my model permits weird not-really-general half-AI hanging around for a while in the runup”. Which, sure, that’s good to know. But there’s still something implicit in that story, where these are not-really-general half-AIs. Which is also evidenced when Eliezer talks about the “general core” of intelligence.\n\n\nAnd the things Eliezer was saying on consequentialism aren’t irrelevant here, but those probes have kinda slipped off the far side of the confidence, if I understand correctly. 
Like, sure, late-stage sovereign-level superintelligences are epistemically and instrumentally efficient with respect to you (unless someone put in a hell of a lot of work to install a blindspot), and a bunch of that coherence filters in earlier, but there’s still a question about *how much* of it has filtered down *how far*, where Eliezer seems to have a fairly confident take, informing his apparently-confident prediction about scary AI systems hitting the world in a discrete event like a hammer.\n\n\n(And my Eliezer-model is at this point saying “at this juncture we need to have discussions about more concrete scenarios; a bunch of the confidence that I have there comes from the way that the concrete visualizations where scary AI hits the world like a hammer abound, and feel savvy and historical, whereas the concrete visualizations where it doesn’t are fewer and seem full of wishful thinking and naivete”.)\n\n\nBut anyway, yeah, my read is that Richard (and various others) have been trying to figure out why Eliezer is so confident about some specific thing in this vicinity, and haven’t quite felt like they’ve been getting explanations.\n\n\nHere’s an attempt to gesture at some claims that I at least think Richard thinks Eliezer’s confident in, but that Richard doesn’t believe have been explicitly supported:\n\n\n1. There’s a qualitative difference between the AI systems that are capable of ending the acute risk period (one way or another), and predecessor systems that in some sense don’t much matter.\n\n\n2. 
That qualitative gap will be bridged “the day after tomorrow”, ie in a world that looks more like “DeepMind is on the brink” and less like “everyone is an order of magnitude richer, and the major gov’ts all have AGI projects, around which much of public policy is centered”.\n\n\n—\n\n\nThat’s the main thing I wanted to say here.\n\n\nA subsidiary point that I think Richard was trying to make, but that didn’t quite connect, follows.\n\n\nI think Richard was trying to probe Eliezer’s concept of consequentialism to see if it supported the aforementioned confidence. (Some evidence: Richard pointing out a couple times that the question is not whether sufficiently capable agents are coherent, but whether the agents that matter are relevantly coherent. On my current picture, this is another attempt to probe the “why do you think there’s a qualitative gap, and that straddling it will be strategically key in practice?” thing, that slipped off.)\n\n\nMy attempt at sharpening the point I saw Richard as driving at:\n\n\n1. Consider the following two competing hypotheses:\n\t1. There’s this “deeply general” core to intelligence, that will be strategically important in practice\n\t2. Nope. Either there’s no such core, or practical human systems won’t find it, or the strategically important stuff happens before you get there (if you’re doing your job right, in a way that natural selection wasn’t), or etc.\n2. The whole deep learning paradigm, and the existence of GPT, sure seem like they’re evidence for (b) over (a).\nLike, (a) maybe isn’t dead, but it didn’t concentrate as much mass into the present scenario.\n3. 
It seems like perhaps a bunch of Eliezer’s confidence comes from a claim like “anything capable of doing decently good work, is quite close to being scary”, related to his concept of “consequentialism”.\nIn particular, this is a much stronger claim than that *sufficiently* smart systems are coherent, b/c it has to be strong enough to apply to the dumbest system that can make a difference.\n4. It’s easy to get caught up in the elegance of a theory like consequentialism / utility theory, when it will not in fact apply in practice.\n5. There are some theories so general and ubiquitous that it’s a little tricky to misapply them — like, say, conservation of momentum, which has some very particular form in the symmetry of physical laws, but which can also be used willy-nilly on large objects like tennis balls and trains (although even then, you have to be careful, b/c the real world is full of things like planets that you’re kicking off against, and if you forget how that shifts the earth, your application of conservation of momentum might lead you astray).\n6. The theories that you *can* apply everywhere with abandon, tend to have a bunch of surprising applications to surprising domains.\n7. We don’t see that of consequentialism.\n\n\nFor the record, my guess is that Eliezer isn’t getting his confidence in things like “there are non-scary systems and scary-systems, and anything capable of saving our skins is likely scary-adjacent” by the sheer force of his consequentialism concept, in a manner that puts so much weight on it that it needs to meet this higher standard of evidence Richard was poking around for. 
(Also, I could be misreading Richard’s poking entirely.)\n\n\nIn particular, I suspect this was the source of some of the early tension, where Eliezer was saying something like “the fact that humans go around doing something vaguely like weighting outcomes by possibility and also by attractiveness, which they then roughly multiply, is quite sufficient evidence for my purposes, as one who does not pay tribute to the gods of modesty”, while Richard protested something more like “but aren’t you trying to use your concept to carry a whole lot more weight than that amount of evidence supports?”. cf my above points about some things Eliezer is apparently confident in, for which the reasons have not yet been stated explicitly to my Richard-model’s satisfaction.\n\n\nAnd, ofc, at this point, my Eliezer-model is again saying “This is why we should be discussing things concretely! It is quite telling that all the plans we can concretely visualize for saving our skins, are scary-adjacent; and all the non-scary plans, can’t save our skins!”\n\n\nTo which my Richard-model answers “But your concrete visualizations assume the endgame happens the day after tomorrow, at least politically. The future tends to go sideways! The endgame will likely happen in an environment quite different from our own! These day-after-tomorrow visualizations don’t feel like they teach me much, because I think there’s a good chance that the endgame-world looks dramatically different.”\n\n\nTo which my Eliezer-model replies “Indeed, the future tends to go sideways. But I observe that the imagined changes, that I have heard so far, seem quite positive — the relevant political actors become AI-savvy, the major states start coordinating, etc. I am quite suspicious of these sorts of visualizations, and would take them much more seriously if there was at least as much representation of outcomes as realistic as “then Trump becomes president” or “then at-home covid tests are banned in the US”. 
And if all the ways to save the world *today* are scary-adjacent, the fact that the future is surprising gives us no *specific* reason to hope for that particular parameter to favorably change when the future in fact goes sideways. When things look grim, one can and should prepare to take advantage of miracles, but banking on some particular miracle is foolish.”\n\n\nAnd my Richard-model gets fuzzy at this point, but I’d personally be pretty enthusiastic about Richard naming a bunch of specific scenarios, not as predictions, but as the sorts of visualizations that seem to him promising, in the hopes of getting a much more object-level sense of why, in specific concrete scenarios, they either have the properties Eliezer is confident in, or are implausible on Eliezer’s model (or surprise Eliezer and cause him to update).\n\n\n\n\n\n\n[Tallinn][0:06]  (Sep. 19)\nexcellent summary, nate! it also tracks my model of the debate well and summarises the frontier concisely (much better than your earlier notes or mine). unless eliezer or richard find major bugs in your summary, i’d nominate you to iterate after the next round of debate\n\n\n\n\n\n| |\n| --- |\n| [Soares: ❤] |\n\n\n\n\n\n\n \n\n\n### 7.3. Richard Ngo’s summary\n\n\n \n\n\n\n[Ngo][1:48]  (Sep. 
20)\nUpdated my summary to include the third discussion: []\n\n\nI’m also halfway through a document giving my own account of intelligence + specific safe scenarios.\n\n\n\n\n\n| |\n| --- |\n| [Soares: 😄] |\n\n\n\n\n\n\n \n\n\n\nThe post [Soares, Tallinn, and Yudkowsky discuss AGI cognition](https://intelligence.org/2021/11/29/soares-tallinn-and-yudkowsky-discuss-agi-cognition/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-29T19:15:22Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "732acd0f85ff083d2341ce284b80ecb9", "title": "Christiano, Cotra, and Yudkowsky on AI progress", "url": "https://intelligence.org/2021/11/25/christiano-cotra-and-yudkowsky-on-ai-progress/", "source": "miri", "source_type": "blog", "text": "This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and Eliezer’s [“Takeoff Speeds” discussion](https://www.lesswrong.com/posts/yMwpoiREvRyNbMjG8/yudkowsky-and-christiano-discuss-takeoff-speeds).\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | | |\n| --- | --- | --- |\n|  Chat by Paul and Eliezer  |  Chat by Ajeya  |  Inline comments  |\n\n\n\n \n\n\n \n\n\n8. September 20 conversation\n----------------------------\n\n\n \n\n\n### 8.1. Chess and Evergrande\n\n\n \n\n\n\n[Christiano][15:28]\n I still feel like you are overestimating how big a jump alphago is, or something. Do you have a mental prediction of how the graph of (chess engine quality) vs (time) looks, and whether neural net value functions are a noticeable jump in that graph?\n\n\nLike, people investing in “Better Software” doesn’t predict that you won’t be able to make progress at playing go. 
The reason you can make a lot of progress at go is that there was extremely little investment in playing better go.\n\n\nSo then your work is being done by the claim “People won’t be working on the problem of acquiring a decisive strategic advantage,” not that people won’t be looking in quite the right place and that someone just had a cleverer idea\n\n\n\n\n\n\n[Yudkowsky][16:35]\nI think I’d expect something like… chess engine slope jumps a bit for Deep Blue, then levels off with increasing excitement, then jumps for the Alpha series? Albeit it’s worth noting that Deepmind’s efforts there were going towards generality rather than raw power; chess was solved to the point of being uninteresting, so they tried to solve chess with simpler code that did more things. I don’t think I do have strong opinions about what the chess trend should look like, vs. the Go trend; I have no memories of people saying the chess trend was breaking upwards or that there was a surprise there.\n\n\nIncidentally, the highly well-traded financial markets are currently experiencing sharp dips surrounding the Chinese firm of Evergrande, which I was reading about several weeks before this.\n\n\nI don’t see the basic difference in the kind of reasoning that says “Surely foresightful firms must produce investments well in advance into earlier weaker applications of AGI that will double the economy”, and the reasoning that says “Surely world economic markets and particular Chinese stocks should experience smooth declines as news about Evergrande becomes better-known and foresightful financial firms start to remove that stock from their portfolio or short-sell it”, except that in the latter case there are many more actors with lower barriers to entry than presently exist in the auto industry or semiconductor industry never mind AI.\n\n\nor if not smooth because of bandwagoning and rational fast actors, then at least the markets should (arguendo) be reacting earlier than they’re reacting now, 
given that I heard about Evergrande earlier; and they should have options-priced Covid earlier; and they should have reacted to the mortgage market earlier. If even markets there can exhibit seemingly late wild swings, how is the economic impact of AI – which isn’t even an asset market! – forced to be earlier and smoother than that, as a result of wise investing?\n\n\nThere’s just such a vast gap between hopeful reasoning about how various agents and actors should all do the things the speaker finds very reasonable, thereby yielding smooth behavior of the Earth, versus reality.\n\n\n\n\n\n \n\n\n \n\n\n9. September 21 conversation\n----------------------------\n\n\n \n\n\n### 9.1. AlphaZero, innovation vs. industry, the Wright Flyer, and the Manhattan Project\n\n\n \n\n\n\n[Christiano][10:18]\n(For the benefit of readers, the market is down 1.5% from friday close -> tuesday open, after having drifted down 2.5% over the preceding two weeks. Draw whatever lesson you want from that.)\n\n\nAlso for the benefit of readers, here is the SSDF list of computer chess performance by year. I think the last datapoint is with the first version of neural net evaluations, though I think to see the real impact we want to add one more datapoint after the neural nets are refined (which is why I say I also don’t know what the impact is)\n\n\n![](https://cdn.discordapp.com/attachments/887568029733519391/889924404392370226/ChessEnginePerformance.png)\nNo one keeps similarly detailed records for Go, and there is much less development effort, but the rate of progress was about 1 stone per year from 1980 until 2015 (see , written way before AGZ). In 2012 go bots reached about 4-5 amateur dan. By DeepMind’s reckoning here (, figure 4), Fan AlphaGo was about 4-5 stones stronger 4 years later, with 1 stone explained by greater runtime compute. 
They could then get further progress to be superhuman with even more compute, radically more than were used for previous projects and with pretty predictable scaling. That level is within 1-2 stones of the best humans (professional dan are greatly compressed relative to amateur dan), so getting to “beats best human” is really just not a big discontinuity and the fact that DeepMind marketing can find an expert who makes a really bad forecast shouldn’t be having such a huge impact on your view.\n\n\nThis understates the size of the jump from AlphaGo, because that was basically just the first version of the system that was superhuman and it was still progressing very rapidly as it moved from prototype to slightly-better-prototype, which is why you saw such a close game. (Though note that the AlphaGo prototype involved much more engineering effort than any previous attempt to play go, so it’s not surprising that a “prototype” was the thing to win.)\n\n\nSo to look at actual progress after the dust settles and really measure how crazy this was, it seems much better to look at AlphaZero which continued to improve further, see (, figure 6b). Their best system got another ~8 stones of progress over AlphaGo. Now we are like 7-10 stones ahead of trend, of which I think about 3 stones are explained by compute. Maybe call it 6 years ahead of schedule?\n\n\nSo I do think this is pretty impressive, they were slightly ahead of schedule for beating the best humans but they did it with a huge margin of error. 
I think the margin is likely overstated a bit by their elo evaluation methodology, but I’d still grant like 5 years ahead of the nearest competition.\n\n\nI’d be interested in input from anyone who knows more about the actual state of play (+ is allowed to talk about it) and could correct errors.\n\n\nMostly that whole thread is just clearing up my understanding of the empirical situation, probably we still have deep disagreements about what that says about the world, just as e.g. we read very different lessons from market movements.\n\n\nProbably we should only be talking about either ML or about historical technologies with meaningful economic impacts. In my view your picture is just radically unlike how almost any technologies have been developed over the last few hundred years. So probably step 1 before having bets is to reconcile our views about historical technologies, and then maybe as a result of that we could actually have a bet about future technology. Or we could try to shore up the GDP bet.\n\n\nLike, it feels to me like I’m saying: AI will be like early computers, or modern semiconductors, or airplanes, or rockets, or cars, or trains, or factories, or solar panels, or genome sequencing, or basically anything else. And you are saying: AI will be like nuclear weapons.\n\n\nI think from your perspective it’s more like: AI will be like all the historical technologies, and that means there will be a hard takeoff. 
The only way you get a soft takeoff forecast is by choosing a really weird thing to extrapolate from historical technologies.\n\n\nSo we’re both just forecasting that AI will look kind of like other stuff in the near future, and then both taking what we see as the natural endpoint of that process.\n\n\nTo me it feels like the nuclear weapons case is the outer limit of what looks plausible, where someone is able to spend $100B for a chance at a decisive strategic advantage.\n\n\n\n\n\n\n[Yudkowsky][11:11]\nGo-wise, I’m a little concerned about that “stone” metric – what would the chess graph look like if it was measuring pawn handicaps? Are the professional dans compressed in Elo, not just “stone handicaps”, relative to the amateur dans? And I’m also hella surprised by the claim, which I haven’t yet looked at, that Alpha Zero got 8 stones of progress over AlphaGo – I would not have been shocked if you told me that God’s Algorithm couldn’t beat Lee Se-dol with a 9-stone handicap.\n\n\nLike, the obvious metric is Elo, so if you go back and refigure in “stone handicaps”, an obvious concern is that somebody was able to look into the past and fiddle their hindsight until they found a hindsightful metric that made things look predictable again. 
My sense of Go said that 5-dan amateur to 9-dan pro was a HELL of a leap for 4 years, and I also have some doubt about the original 5-dan-amateur claims and whether those required relatively narrow terms of testing (eg timed matches or something).\n\n\nOne basic point seems to be whether AGI is more like an innovation or like a performance metric over an entire large industry.\n\n\nAnother point seems to be whether the behavior of the world is usually like that, in some sense, or if it’s just that people who like smooth graphs can go find some industries that have smooth graphs for particular performance metrics that happen to be smooth.\n\n\nAmong the smoothest metrics I know that seems like a convergent rather than handpicked thing to cite, is world GDP, which is the sum of more little things than almost anything else, and whose underlying process is full of multiple stages of converging-product-line bottlenecks that make it hard to jump the entire GDP significantly even when you jump one component of a production cycle… which, from my standpoint, is a major reason to expect AI to not hit world GDP all that hard until AGI passes the critical threshold of bypassing it entirely. Having 95% of the tech to invent a self-replicating organism (eg artificial bacterium) does not get you 95%, 50%, or even 10% of the impact.\n\n\n(it’s not so much the 2% reaction of world markets to Evergrande that I was singling out earlier, 2% is noise-ish, but the wider swings in the vicinity of Evergrande particularly)\n\n\n\n\n\n[Christiano][12:41]\nYeah, I’m just using “stone” to mean “elo difference that is equal to 1 stone at amateur dan / low kyu,” you can see DeepMind’s conversion (which I also don’t totally believe) in figure 4 here (). Stones are closer to constant elo than constant handicap, it’s just a convention to name them that way.\n\n\n\n\n\n\n[Yudkowsky][12:42]\nk then\n\n\n\n\n\n[Christiano][12:47]\nBut my description above still kind of understates the gap I think. 
They call 230 elo 1 stone, and I think prior rate of progress is more like 200 elo/year. They put AlphaZero about 3200 elo above the 2012 system, so that’s like 16 years ahead = 11 years ahead of schedule. At least 2 years are from test-time hardware, and self-play systematically overestimates elo differences at the upper end of that. But 5 years ahead is still too low and that sounds more like 7-9 years ahead. ETA: and my actual best guess all things considered is probably 10 years ahead, which I agree is just a lot bigger than 5. And I also understated how much of the gap was getting up to Lee Sedol.\n\n\nThe go graph I posted wasn’t made with hindsight, that was from 2014\n\n\nI mean, I’m fine with you saying that people who like smooth graphs are cherry-picking evidence, but do you want to give any example other than nuclear weapons of technologies with the kind of discontinuous impact you are describing?\n\n\nI do agree that the difference in our views is like “innovation” vs “industry.” And a big part of my position is that innovation-like things just don’t usually have big impacts for kind of obvious reasons, they start small and then become more industry-like as they scale up. And current deep learning seems like an absolutely stereotypical industry that is scaling up rapidly in an increasingly predictable way.\n\n\nAs far as I can tell the examples we know of things changing continuously aren’t handpicked, we’ve been looking at all the examples we can find, and no one is proposing or even able to find almost *anything* that looks like you are imagining AI will look.\n\n\nLike, we’ve seen deep learning innovations in the form of prototypes (most of all AlexNet), and they were cool and represented giant fast changes in people’s views. And more recently we are seeing bigger much-less-surprising changes that are still helping a lot in raising the tens of billions of dollars that people are raising. 
And the innovations we are seeing are increasingly things that trade off against modest improvements in model size, there are fewer and fewer big surprises, just like you’d predict. It’s clearer and clearer to more and more people what the roadmap is—the roadmap is not yet quite as clear as in semiconductors, but as far as I can tell that’s just because the field is still smaller.\n\n\n\n\n\n\n[Yudkowsky][13:23]\nI sure wasn’t imagining there was a roadmap to AGI! Do you perchance have one which says that AGI is 30 years out?\n\n\nFrom my perspective, you could as easily point to the Wright Flyer as an atomic bomb. Perhaps this reflects again the “innovation vs industry” difference, where I think in terms of building a thing that goes foom thereby bypassing our small cute world GDP, and you think in terms of industries that affect world GDP in an invariant way throughout their lifetimes.\n\n\nWould you perhaps care to write off the atomic bomb too? It arguably didn’t change the outcome of World War II or do much that conventional weapons in great quantity couldn’t; Japan was bluffed into believing the US could drop a nuclear bomb every week, rather than the US actually having that many nuclear bombs or them actually being used to deliver a historically outsized impact on Japan. 
From the industry-centric perspective, there is surely some graph you can draw which makes nuclear weapons also look like business as usual, especially if you go by destruction per unit of whole-industry non-marginal expense, rather than destruction per bomb.\n\n\n\n\n\n[Christiano][13:27]\nseems like you have to make the wright flyer much better before it’s important, and that it becomes more like an industry as that happens, and that this is intimately related to why so few people were working on it\n\n\nI think the atomic bomb is further on the spectrum than almost anything, but it still doesn’t feel nearly as far as what you are expecting out of AI\n\n\nthe manhattan project took years and tens of billions; if you wait an additional few years and spend an additional few tens of billions then it would be a significant improvement in destruction or deterrence per $ (but not totally insane)\n\n\nI do think it’s extremely non-coincidental that the atomic bomb was developed in a country that was practically outspending the whole rest of the world in “killing people technology”\n\n\nand took a large fraction of that country’s killing-people resources\n\n\neh, that’s a bit unfair, the us was only like 35% of global spending on munitions\n\n\nand the manhattan project itself was only a couple percent of total munitions spending\n\n\n\n\n\n\n[Yudkowsky][13:32]\na lot of why I expect AGI to be a disaster is that *I am straight-up expecting AGI to be different*.  if it was just like coal or just like nuclear weapons or just like viral biology then I would not be way more worried about AGI than I am worried about those other things.\n\n\n\n\n\n[Christiano][13:33]\nthat definitely sounds right\n\n\nbut it doesn’t seem like you have any short-term predictions about AI being different\n\n\n\n\n\n \n\n\n### 9.2. AI alignment vs. 
biosafety, and measuring progress\n\n\n \n\n\n\n[Yudkowsky][13:33]\nare you more worried about AI than about bioengineering?\n\n\n\n\n\n[Christiano][13:33]\nI’m more worried about AI because (i) alignment is a thing, unrelated to takeoff speed, (ii) AI is a (ETA: likely to be) huge deal and bioengineering is probably a relatively small deal\n\n\n(in the sense of e.g. how much $ people spend, or how much $ it makes, or whatever other metric of size you want to use)\n\n\n\n\n\n\n[Yudkowsky][13:35]\nwhat’s the disanalogy to (i) biosafety is a thing, unrelated to the speed of bioengineering?  why expect AI to be a huge deal and bioengineering to be a small deal?  is it just that investing in AI is scaling faster than investment in bioengineering?\n\n\n\n\n\n[Christiano][13:35]\nno, alignment is a really easy x-risk story, bioengineering x-risk seems extraordinarily hard\n\n\nIt’s really easy to mess with the future by creating new competitors with different goals, if you want to mess with the future by totally wiping out life you have to really try at it and there’s a million ways it can fail. 
The bioengineering seems like it basically requires deliberate and reasonably competent malice whereas alignment seems like it can only be averted with deliberate effort, etc.\n\n\nI’m mostly asking about historical technologies to try to clarify expectations, I’m pretty happy if the outcome is: you think AGI is predictably different from previous technologies in ways we haven’t seen yet\n\n\nthough I really wish that would translate into some before-end-of-days prediction about a way that AGI will eventually look different\n\n\n\n\n\n\n[Yudkowsky][13:38]\nin my ontology a whole lot of threat would trace back to “AI hits harder, faster, gets too strong to be adjusted”; tricks with proteins just don’t have the raw power of intelligence\n\n\n\n\n\n[Christiano][13:39]\nin my view it’s nearly totally orthogonal to takeoff speed, though fast takeoffs are a big reason that preparation in advance is more useful\n\n\n(but not related to the basic reason that alignment is unprecedentedly scary)\n\n\nIt feels to me like you are saying that the AI-improving-AI will move very quickly from “way slower than humans” to “FOOM in <1 year,” but it just looks like that is very surprising to me.\n\n\nHowever I do agree that if AI-improving-AI was like AlphaZero, then it would happen extremely fast.\n\n\nIt seems to me like it’s pretty rare to have these big jumps, and it gets much much rarer as technologies become more important and are more industry-like rather than innovation like (and people care about them a lot rather than random individuals working on them, etc.). 
And I can’t tell whether you are saying something more like “nah big jumps happen all the time in places that are structurally analogous to the key takeoff jump, even if the effects are blunted by slow adoption and regulatory bottlenecks and so on” or if you are saying “AGI is atypical in how jumpy it will be”\n\n\n\n\n\n\n[Yudkowsky][13:44]\nI don’t know about *slower*; GPT-3 may be able to type faster than a human\n\n\n\n\n\n[Christiano][13:45]\nYeah, I guess we’ve discussed how you don’t like the abstraction of “speed of making progress”\n\n\n\n\n\n\n[Yudkowsky][13:45]\nbut, basically less useful in fundamental ways than a human civilization, because they are less complete, less self-contained\n\n\n\n\n\n[Christiano][13:46]\nEven if we just assume that your AI needs to go off in the corner and not interact with humans, there’s still a question of why the self-contained AI civilization is making ~0 progress and then all of a sudden very rapid progress\n\n\n\n\n\n\n[Yudkowsky][13:46]\nunfortunately a lot of what you are saying, from my perspective, has the flavor of, “but can’t you tell me about your predictions earlier on of the impact on global warming at the *Homo erectus* level”\n\n\nyou have stories about why this is like totally not a fair comparison\n\n\nI do not share these stories\n\n\n\n\n\n[Christiano][13:46]\nI understand neither your objection nor the reductio\n\n\nlike, here’s how I think it works: AI systems improve gradually, including on metrics like “How long does it take them to do task X?” or “How high-quality is their output on task X?”\n\n\n\n\n\n\n[Yudkowsky][13:47]\nI feel like the thing we know is something like, there is a sufficiently high level where things go whooosh humans-from-hominids style\n\n\n\n\n\n[Christiano][13:47]\nWe can measure the performance of AI on tasks like “Make further AI progress, without human input”\n\n\nAny way I can slice the analogy, it looks like AI will get continuously better at that
task\n\n\n\n\n\n\n[Yudkowsky][13:48]\nhow would you measure progress from GPT-2 to GPT-3, and would you feel those metrics really captured the sort of qualitative change that lots of people said they felt?\n\n\n\n\n\n[Christiano][13:48]\nAnd it seems like we have a bunch of sources of data we can use about how fast AI will get better\n\n\nCould we talk about some application of GPT-2 or GPT-3?\n\n\nalso that’s a *lot* of progress, spending 100x more is a *lot* more money\n\n\n\n\n\n\n[Yudkowsky][13:49]\nin my world, GPT-3 has very few applications because it is not quite right and not quite complete\n\n\n\n\n\n[Christiano][13:49]\nalso it’s still really dumb\n\n\n\n\n\n\n[Yudkowsky][13:49]\nlike a self-driving car that does great at 99% of the road situations\n\n\neconomically almost worthless\n\n\n\n\n\n[Christiano][13:49]\nI think the “being dumb” is way more important than “covers every case”\n\n\n\n\n\n\n[Yudkowsky][13:50]\n(albeit that if new cities could still be built, we could totally take those 99%-complete AI cars and build fences and fence-gates around them, in a city where they were the only cars on the road, in which case they *would* work, and get big economic gains from these new cities with driverless cars, which ties back into my point about how current world GDP is *unwilling* to accept tech inputs)\n\n\nlike, it is in fact very plausible to me that there is a neighboring branch of reality with open borders and no housing-supply-constriction laws and no medical-supply-constriction laws, and their world GDP *does* manage to double before AGI hits them really hard, albeit maybe not in 4 years.  this world *is not Earth*.  they are constructing new cities to take advantage of 99%-complete driverless cars *right now*, or rather, they started constructing them 5 years ago and finished 4 years and 6 months ago.\n\n\n\n\n \n\n\n### 9.3.
Requirements for FOOM\n\n\n \n\n\n\n[Christiano][13:53]\nI really feel like the important part is the jumpiness you are imagining on the AI side / why AGI is different from other things\n\n\n\n\n\n\n[Cotra][13:53]\nIt’s actually not obvious to me that Eliezer is imagining that much more jumpiness on the AI technology side than you are, Paul\n\n\nE.g. he’s said in the past that while the gap from “subhuman to superhuman AI” could be 2h if it’s in the middle of FOOM, it could also be a couple years if it’s more like scaling alphago\n\n\n\n\n\n\n[Yudkowsky][13:54]\nIndeed!  We observed this jumpiness with hominids.  A lot of stuff happened at once with hominids, but a critical terminal part of the jump was the way that hominids started scaling their own food supply, instead of being ultimately limited by the food supply of the savanna.\n\n\n\n\n\n[Cotra][13:54]\nA couple years is basically what Paul believes\n\n\n\n\n\n\n[Christiano][13:55]\n(discord is not a great place for threaded conversations :()\n\n\n\n\n\n\n[Cotra][13:55]\nWhat are the probabilities you’re each placing on the 2h-2y spectrum? 
I feel like Paul is like “no way on 2h, likely on 2y” and Eliezer is like “who knows” on the whole spectrum, and a lot of the disagreement is the impact of the previous systems?\n\n\n\n\n\n\n[Christiano][13:55]\nyeah, I’m basically at “no way,” because it seems obvious that the AI that can foom in 2h is preceded by the AI that can foom in 2y\n\n\n\n\n\n\n[Yudkowsky][13:56]\nwell, we surely agree there!\n\n\n\n\n\n[Christiano][13:56]\nOK, and it seems to me like it is preceded by years\n\n\n\n\n\n\n[Yudkowsky][13:56]\nwe disagree on whether the AI that can foom in 2y clearly comes more than 2y before the AI that fooms in 2h\n\n\n\n\n\n[Christiano][13:56]\nyeah\n\n\nperhaps we can all agree it’s preceded by at least 2h\n\n\nso I have some view like: for any given AI we can measure “how long does it take to foom?” and it seems to me like this is just a nice graph\n\n\nand it’s not exactly clear how quickly that number is going down, but a natural guess to me is something like “halving each year” based on the current rate of progress in hardware and software\n\n\nand you see localized fast progress most often in places where there hasn’t yet been much attention\n\n\nand my best guess for your view is that actually that’s not a nice graph at all, there is some critical threshold or range where AI quickly moves from “not fooming for a really long time” to “fooming really fast,” and that seems like the part I’m objecting to\n\n\n\n\n\n\n[Cotra][13:59]\nPaul, is your take that there’s a non-infinity number for time to FOOM that’d be associated with current AI systems (unassisted by humans)?\n\n\nAnd it’s going down over time?\n\n\nI feel like I would have said something more like “there’s a $ amount it takes to build a system that will FOOM in X amount of time, and that’s going down”\n\n\nwhere it’s like quadrillions of dollars today\n\n\n\n\n\n\n[Christiano][14:00]\nI think it would be a big engineering project to make such an AI, which no one is doing because it would be 
uselessly slow even if successful\n\n\n\n\n\n\n[Yudkowsky][14:02]\nI… don’t think GPT-3 fooms given 2^30 times longer to think than the systems that would otherwise exist 30 years from now, on timelines I’d consider relatively long, and hence generous to this viewpoint?  I also don’t think you can take a quadrillion dollars and scale GPT-3 to foom today?\n\n\n\n\n\n[Cotra][14:03]\nI would agree with your take on GPT-3 fooming, and I didn’t mean a quadrillion dollars just to scale GPT-3, would probably be a difft architecture\n\n\n\n\n\n\n[Christiano][14:03]\nI also agree that GPT-3 doesn’t foom, it just keeps outputting [next web page]…\n\n\nBut I think the axes of “smart enough to foom fast” and “wants to foom” are pretty different. I also agree there is some minimal threshold below which it doesn’t even make sense to talk about “wants to foom,” which I think is probably just not that hard to reach.\n\n\n(Also there are always diminishing returns as you continue increasing compute, which become very relevant if you try to run GPT-3 for a billion billion years as in your hypothetical even apart from “wants to foom”.)\n\n\n\n\n\n\n[Cotra][14:06]\nI think maybe you and EY then disagree on where the threshold from “infinity” to “a finite number” for “time for this AI system to FOOM” begins?
where eliezer thinks it’ll drop from infinity to a pretty small finite number and you think it’ll drop to a pretty large finite number, and keep going down from there\n\n\n\n\n\n\n[Christiano][14:07]\nI also think we will likely jump down to a foom-ing system only after stuff is pretty crazy, but I think that’s probably less important\n\n\nI think what you said is probably the main important disagreement\n\n\n\n\n\n\n[Cotra][14:08]\nas in before that point it’ll be faster to have human-driven progress than FOOM-driven progress bc the FOOM would be too slow?\n\n\nand there’s some crossover point around when the FOOM time is just a bit faster than the human-driven progress time\n\n\n\n\n\n\n[Christiano][14:09]\nyeah, I think most likely (AI+humans) is faster than (AI alone) because of complementarity. But I think Eliezer and I would still disagree even if I thought there was 0 complementarity and it’s just (humans improving AI) and separately (AI improving AI)\n\n\non that pure substitutes model I expect “AI foom” to start when the rate of AI-driven AI progress overtakes the previous rate of human-driven AI progress\n\n\nlike, I expect the time for successive “doublings” of AI output to be like 1 year, 1 year, 1 year, 1 year, [AI takes over] 6 months, 3 months, …\n\n\nand the most extreme fast takeoff scenario that seems plausible is that kind of perfect substitutes + no physical economic impact from the prior AI systems\n\n\nand then by that point fast enough physical impact is really hard so it happens essentially after the software-only singularity\n\n\nI consider that view kind of unlikely but at least coherent\n\n\n\n\n\n \n\n\n### 9.4. AI-driven accelerating economic growth\n\n\n \n\n\n\n[Yudkowsky][14:12]\nI’m expecting that the economy doesn’t accept much input from chimps, and then the economy doesn’t accept much input from village idiots, and then the economy doesn’t accept much input from weird immigrants.
I can imagine that there may or may not be a very weird 2-year or 3-month period with strange half-genius systems running around, but they will still not be allowed to build houses.  In the terminal phase things get more predictable and the AGI starts its own economy instead.\n\n\n\n\n\n[Christiano][14:12]\nI guess you can go even faster, by having a big and accelerating ramp-up in human investment right around the end, so that the “1 year” is faster (e.g. if recursive self-improvement was like playing go, and you could move from “a few individuals” to “google spending $10B” over a few years)\n\n\n\n\n\n\n[Yudkowsky][14:13]\nMy ~~model~~ prophecy doesn’t rule that out as a thing that could happen, but sure doesn’t emphasize it as a key step that needs to happen.\n\n\n\n\n\n[Christiano][14:13]\nI think it’s very likely that AI will mostly be applied to further hardware+software progress\n\n\n\n\n\n[Cotra: ➕]\n\n\n\nI don’t really understand why you keep talking about houses and healthcare\n\n\n\n\n\n\n[Cotra][14:13]\nEliezer, what about stuff like Google already using ML systems to automate its TPU load-sharing decisions, and people starting to use Codex to automate routine programming, and so on?
Seems like there’s a lot of stuff like that starting to already happen and markets are pricing in huge further increases\n\n\n\n\n\n\n[Christiano][14:14]\nit seems like the non-AI up-for-grabs zone is things like manufacturing, not things like healthcare\n\n\n\n\n\n[Cotra: ➕]\n\n\n\n\n\n\n\n[Cotra][14:14]\n(I mean on your timelines obviously not much time for acceleration anyway, but that’s distinct from the regulation not allowing weak AIs to do stuff story)\n\n\n\n\n\n\n[Yudkowsky][14:14]\nBecause I think that a key thing of what makes your prophecy less likely is the way that it happens inside the real world, where, economic gains or not, the System is unwilling/unable to take the things that are 99% self-driving cars and start to derive big economic benefits from those.\n\n\n\n\n\n[Cotra][14:15]\nbut it seems like huge economic gains could happen entirely in industries mostly not regulated and not customer-facing, like hardware/software R&D, manufacturing, shipping logistics, etc\n\n\n\n\n\n\n[Yudkowsky][14:15]\nAjeya, I’d consider Codex of *far* greater could-be-economically-important-ness than automated TPU load-sharing decisions\n\n\n\n\n\n[Cotra][14:15]\ni would agree with that, it’s smarter and more general\n\n\nand i think that kind of thing could be applied on the hardware chip design side too\n\n\n\n\n\n\n[Yudkowsky][14:16]\nno, because the TPU load-sharing stuff has an obvious saturation point as a world economic input, while superCodex could be a world economic input in many more places\n\n\n\n\n\n[Cotra][14:16]\nthe TPU load sharing thing was not a claim that this application could scale up to crazy impacts, but that it was allowed to happen, and future stuff that improves that kind of thing (back-end hardware/software/logistics) would probably also be allowed\n\n\n\n\n\n\n[Yudkowsky][14:16]\nmy sense is that dectupling the number of programmers would not lift world GDP much, but it seems a lot more possible for me to be wrong
about that\n\n\n\n\n\n[Christiano][14:17]\nthe point is that housing and healthcare are not central examples of things that scale up at the beginning of explosive growth, regardless of whether it’s hard or soft\n\n\nthey are slower and harder, and also in efficient markets-land they become way less important during the transition\n\n\nso they aren’t happening that much on anyone’s story\n\n\nand also it doesn’t make that much difference whether they happen, because they have pretty limited effects on other stuff\n\n\nlike, right now we have an industry of ~hundreds of billions that is producing computing hardware, building datacenters, mining raw inputs, building factories to build computing hardware, solar panels, shipping around all of those parts, etc. etc.\n\n\nI’m kind of interested in the question of whether all that stuff explodes, although it doesn’t feel as core as the question of “what are the dynamics of the software-only singularity and how much $ are people spending initiating it?”\n\n\nbut I’m not really interested in the question of whether human welfare is spiking during the transition or only after\n\n\n\n\n\n\n[Yudkowsky][14:20]\nAll of world GDP has never felt particularly relevant to me on that score, since twice as much hardware maybe corresponds to being 3 months earlier, or something like that.\n\n\n\n\n\n[Christiano][14:21]\nthat sounds like the stuff of predictions?\n\n\n\n\n\n\n[Yudkowsky][14:21]\nBut if complete chip manufacturing cycles have accepted much more effective AI input, with no non-AI bottlenecks, then that… sure is a much more *material* element of a foom cycle than I usually envision.\n\n\n\n\n\n[Christiano][14:21]\nlike, do you think it’s often the case that 3 months of software progress = doubling compute spending? 
or do you think AGI is different from “normal” AI on this perspective?\n\n\nI don’t think that’s that far off anyway\n\n\nI would guess like ~1 year\n\n\n\n\n\n\n[Yudkowsky][14:22]\nLike, world GDP that goes up by only 10%, but that’s because producing compute capacity was 2.5% of world GDP and that quadrupled, starts to feel much more to me like it’s part of a foom story.\n\n\nI expect software-beats-hardware to hit harder and harder as you get closer to AGI, yeah.\n\n\nthe prediction is firmer near the terminal phase, but I think this is also a case where I expect that to be visible earlier\n\n\n\n\n\n[Christiano][14:24]\nI think that by the time that the AI-improving-AI takes over, it’s likely that hardware+software manufacturing+R&D represents like 10-20% of GDP, and that the “alien accountants” visiting earth would value those companies at like 80%+ of GDP\n\n\n\n\n\n \n\n\n### 9.5. Brain size and evolutionary history\n\n\n \n\n\n\n[Cotra][14:24]\nOn software beating hardware, how much of your view is dependent on your belief that the chimp -> human transition was probably not mainly about brain size because if it were about brain size it would have happened faster? My understanding is that you think the main change is a small software innovation which increased returns to having a bigger brain. If you changed your mind and thought that the chimp -> human transition was probably mostly about raw brain size, what (if anything) about your AI takeoff views would change?\n\n\n\n\n\n\n[Yudkowsky][14:25]\nI think that’s a pretty different world in a lot of ways!\n\n\nbut yes it hits AI takeoff views too\n\n\n\n\n\n[Christiano][14:25]\nregarding software vs hardware, here is an example of asking this question for imagenet classification (“how much compute to train a model to do the task?”), with a bit over 1 year doubling times (). 
I guess my view is that we can make a similar graph for “compute required to make your AI FOOM” and that it will be falling significantly slower than 2x/year. And my prediction for other tasks is that the analogous graphs will also tend to be falling slower than 2x/year.\n\n\n\n\n\n\n[Yudkowsky][14:26]\nto the extent that I modeled hominid evolution as having been “dutifully schlep more of the same stuff, get predictably more of the same returns” that would correspond to a world in which intelligence was less scary, different, dangerous-by-default\n\n\n\n\n\n[Cotra][14:27]\nthanks, that’s helpful. I looked around in [IEM](https://intelligence.org/files/IEM.pdf) and other places for a calculation of how quickly we should have evolved to humans if it were mainly about brain size, but I only found qualitative statements. If there’s a calculation somewhere I would appreciate a pointer to it, because currently it seems to me that a story like “selection pressure toward general intelligence was weak-to-moderate because it wasn’t actually *that* important for fitness, and this degree of selection pressure is consistent with brain size being the main deal and just taking a few million years to happen” is very plausible\n\n\n\n\n\n\n[Yudkowsky][14:29]\nwell, for one thing, the prefrontal cortex expanded twice as fast as the rest\n\n\nand iirc there’s evidence of a lot of recent genetic adaptation… though I’m not as sure you could pinpoint it as being about brain-stuff or that the brain-stuff was about cognition rather than rapidly shifting motivations or something.\n\n\nelephant brains are 3-4 times larger by weight than human brains (just looked up)\n\n\nif it’s that easy to get returns on scaling, seems like it shouldn’t have taken that long for evolution to go there\n\n\n\n\n\n[Cotra][14:31]\nbut they have fewer synapses (would compute to less FLOP/s by the standard conversion)\n\n\nhow long do you think it should have taken?\n\n\n\n\n\n\n[Yudkowsky][14:31]\nearly 
dinosaurs should’ve hopped onto the predictable returns train\n\n\n\n\n\n[Cotra][14:31]\nis there a calculation?\n\n\nyou said in IEM that evolution increases organ sizes quickly but there wasn’t a citation to easily follow up on there\n\n\n\n\n\n\n[Yudkowsky][14:33]\nI mean, you could produce a graph of smooth fitness returns to intelligence, smooth cognitive returns on brain size/activity, linear metabolic costs for brain activity, fit that to humans and hominids, then show that obviously if hominids went down that pathway, large dinosaurs should’ve gone down it first because they had larger bodies and the relative metabolic costs of increased intelligence would’ve been lower at every point along the way\n\n\nI do not have a citation for that ready, if I’d known at the time you’d want one I’d have asked Luke M for it while he still worked at MIRI 😐\n\n\n\n\n\n[Cotra][14:35]\ncool thanks, will think about the dinosaur thing (my first reaction is that this should depend on the actual fitness benefits to general intelligence which might have been modest)\n\n\n\n\n\n\n[Yudkowsky][14:35]\nI suspect we’re getting off Paul’s crux, though\n\n\n\n\n\n[Cotra][14:35]\nyeah we can go back to that convo (though i think paul would also disagree about this thing, and believes that the chimp to human thing was mostly about size)\n\n\nsorry for hijacking\n\n\n\n\n\n\n[Yudkowsky][14:36]\nwell, if at some point I can produce a major shift in EA viewpoints by coming up with evidence for a bunch of non-brain-size brain selection going on over those timescales, like brain-related genes where we can figure out how old the mutation is, I’d then put a lot more priority on digging up a paper like that\n\n\nI’d consider it sufficiently odd to imagine hominids->humans as being primarily about brain size, given the evidence we have, that I do not believe this is Paul’s position until Paul tells me
so\n\n\n\n\n\n[Christiano][14:49]\nI would guess it’s primarily about brain size / neuron count / cortical neuron count\n\n\nand that the change in rate does mostly go through changing niche, where both primates and birds have this cycle of rapidly accelerating brain size increases that aren’t really observed in other animals\n\n\nit seems like brain size is increasing extremely quickly on both of those lines\n\n\n\n\n\n\n[Yudkowsky][14:50]\nwhy aren’t elephants GI?\n\n\n\n\n\n[Christiano][14:51]\nmostly they have big brains to operate big bodies, and also my position obviously does not imply (big brain) ==(necessarily implies)==> general intelligence\n\n\n\n\n\n\n[Yudkowsky][14:52]\nI don’t understand, in general, how your general position manages to strongly imply a bunch of stuff about AGI and not strongly imply similar stuff about a bunch of other stuff that sure sounds similar to me\n\n\n\n\n\n[Christiano][14:52]\ndon’t elephants have very few synapses relative to humans?\n\n\n\n\n\n[Cotra: ➕]\n\n\n\nhow does the scale hypothesis possibly take a strong stand on synapses vs neurons?
I agree that it takes a modest predictive hit from “why aren’t the big animals much smarter?”\n\n\n\n\n\n\n[Yudkowsky][14:53]\nif adding more synapses just scales, elephants should be able to pay hominid brain costs for a much smaller added fraction of metabolism and also not pay the huge death-in-childbirth head-size tax\n\n\nbecause their brains and heads are already 4x as huge as they need to be for GI\n\n\nand now they just need some synapses, which are a much tinier fraction of their total metabolic costs\n\n\n\n\n\n[Christiano][14:54]\nI mean, you can also make smaller and cheaper synapses as evidenced by birds\n\n\nI’m not sure I understand what you are saying\n\n\nit’s clear that you can’t say “X is possible metabolically, so evolution would do it”\n\n\nor else you are confused about why primate brains are so bad\n\n\n\n\n\n\n[Yudkowsky][14:54]\ngreat, then smaller and cheaper synapses should’ve scaled many eons earlier and taken over the world\n\n\n\n\n\n[Christiano][14:55]\nthis isn’t about general intelligence, this is a reductio of your position…\n\n\n\n\n\n\n[Yudkowsky][14:55]\nand here I had thought it was a reductio of your position…\n\n\n\n\n\n[Christiano][14:55]\nindeed\n\n\nlike, we all grant that it’s metabolically possible to have small smart brains\n\n\nand evolution doesn’t do it\n\n\nand I’m saying that it’s also possible to have small smart brains\n\n\nand that scaling brains up matters a lot\n\n\n\n\n\n\n[Yudkowsky][14:56]\nno, you grant that it’s metabolically possible to have cheap brains full of synapses, which are therefore, on your position, smart\n\n\n\n\n\n[Christiano][14:56]\nbirds are just smart\n\n\nwe know they are smart\n\n\nthis isn’t some kind of weird conjecture\n\n\nlike, we can debate whether they are a “general” intelligence, but it makes no difference to this discussion\n\n\nthe point is that they do more with less metabolic cost\n\n\n\n\n\n\n[Yudkowsky][14:57]\non my position, the brain needs to invent the equivalents of 
ReLUs and Transformers and really rather a lot of other stuff because it can’t afford nearly that many GPUs, and then the marginal returns on adding expensive huge brains and synapses have increased enough that hominids start to slide down the resulting fitness slope, which isn’t even paying off in guns and rockets yet, they’re just getting that much intelligence out of it once the brain software has been selected to scale that well\n\n\n\n\n\n[Christiano][14:57]\nbut all of the primates and birds have brain sizes scaling much faster than the other animals\n\n\nlike, the relevant “things started to scale” threshold is way before chimps vs humans\n\n\nisn’t it?\n\n\n\n\n\n\n[Cotra][14:58]\nto clarify, my understanding is that paul’s position is “Intelligence is mainly about synapse/neuron count, and evolution doesn’t care that much about intelligence; it cared more for birds and primates, and both lines are getting smarter+bigger-brained.” And eliezer’s position is that “evolution should care a ton about intelligence in most niches, so if it were mostly about brain size then it should have gone up to human brain sizes with the dinosaurs”\n\n\n\n\n\n\n[Christiano][14:58]\nor like, what is the evidence you think is explained by the threshold being between chimps and humans\n\n\n\n\n\n\n[Yudkowsky][14:58]\nif hominids have less efficient brains than birds, on this theory, it’s because (post facto handwave) birds are tiny, so whatever cognitive fitness gradients they face, will tend to get paid more in software and biological efficiency and biologically efficient software, and less paid in Stack More Neurons (even compared to hominids)\n\n\nelephants just don’t have the base software to benefit much from scaling synapses even though they’d be relatively cheaper for elephants\n\n\n\n\n\n[Christiano][14:59]\n@ajeya I think that intelligence is about a lot of things, but that size (or maybe “more of the same” changes that had been happening recently amongst primates) is 
the big difference between chimps and humans\n\n\n\n\n[Cotra: 👍]\n\n\n\n\n\n\n\n[Cotra][14:59]\ngot it yeah i was focusing on chimp-human gap when i said “intelligence” there but good to be careful\n\n\n\n\n\n\n[Yudkowsky][14:59]\nI have not actually succeeded in understanding Why On Earth Anybody Would Think That If Not For This Really Weird Prior I Don’t Get Either\n\n\nre: the “more of the same” theory of humans\n\n\n\n\n\n[Cotra][15:00]\ndo you endorse my characterization of your position above? “evolution should care a ton about intelligence in most niches, so if it were mostly about brain size then it should have gone up to human brain sizes with the dinosaurs”\n\n\nin which case the disagreement is about how much evolution should care about intelligence in the dinosaur niche, vs other things it could put its skill points into?\n\n\n\n\n\n\n[Christiano][15:01]\nEliezer, it seems like chimps are insanely smart compared to other animals, basically as smart as they get\n\n\nso it’s natural to think that the main things that make humans unique are also present in chimps\n\n\nor at least, there was something going on in chimps that is exceptional\n\n\nand should be causally upstream of the uniqueness of humans too\n\n\notherwise you have too many coincidences on your hands\n\n\n\n\n\n\n[Yudkowsky][15:02]\najeya: no, I’d characterize that as “the human environmental niche per se does not seem super-special enough to be unique on a geological timescale, the cognitive part of the niche derives from increased cognitive abilities in the first place and so can’t be used to explain where they got started, dinosaurs are larger than humans and would pay lower relative metabolic costs for added brain size and it is not the case that every species as large as humans was in an environment where they would not have benefited as much from a fixed increment of intelligence, hominids are probably distinguished from dinosaurs in having better neural algorithms
that arose over intervening evolutionary time and therefore better returns in intelligence on synapses that are more costly to humans than to elephants or large dinosaurs”\n\n\n\n\n\n[Christiano][15:03]\nI don’t understand how you can think that hominids are the special step relative to something earlier\n\n\nor like, I can see how it’s consistent, but I don’t see what evidence or argument supports it\n\n\nit seems like the short evolutionary time, and the fact that you also have to explain the exceptional qualities of other primates, cut extremely strongly against it\n\n\n\n\n\n\n[Yudkowsky][15:04]\npaul: indeed, the fact that dinosaurs didn’t see their brain sizes and intelligences ballooning, says there must be a lot of stuff hominids had that dinosaurs didn’t, explaining why hominids got much higher returns on intelligence per synapse. natural selection is enough of a smooth process that 95% of this stuff should’ve been in the last common ancestor of humans and chimps.\n\n\n\n\n\n[Christiano][15:05]\nit seems like brain size basically just increases faster in the smarter animals? 
though I mostly just know about birds and primates\n\n\n\n\n\n\n[Yudkowsky][15:05]\nthat is what you’d predict from smartness being about algorithms!\n\n\n\n\n\n[Christiano][15:05]\nand it accelerates further and further within both lines\n\n\nit’s what you’d expect if smartness is about algorithms *and chimps and birds have good algorithms*\n\n\n\n\n\n\n[Yudkowsky][15:06]\nif smartness was about brain size, smartness and brain size would increase faster in the *larger animals* or the ones whose successful members *ate more food per day*\n\n\nwell, sure, I do model that birds have better algorithms than dinosaurs\n\n\n\n\n\n[Cotra][15:07]\nit seems like you’ve given arguments for “there was algorithmic innovation between dinosaurs and humans” but not yet arguments for “there was major algorithmic innovation between chimps and humans”?\n\n\n\n\n\n\n[Christiano][15:08]\n(much less that the algorithmic changes were not just more-of-the-same)\n\n\n\n\n\n\n[Yudkowsky][15:08]\noh, that’s *not* mandated by the model the same way. (between LCA of chimps and humans)\n\n\n\n\n\n[Christiano][15:08]\nisn’t that exactly what we are discussing?\n\n\n\n\n\n\n[Yudkowsky][15:09]\n…I hadn’t thought so, no.\n\n\n\n\n\n[Cotra][15:09]\noriginal q was:\n\n\n\n> \n> On software beating hardware, how much of your view is dependent on your belief that the chimp -> human transition was probably not mainly about brain size because if it were about brain size it would have happened faster? My understanding is that you think the main change is a small software innovation which increased returns to having a bigger brain. 
If you changed your mind and thought that the chimp -> human transition was probably mostly about raw brain size, what (if anything) about your AI takeoff views would change?\n> \n> \n> \n\n\nso i thought we were talking about if there’s a cool innovation from chimp->human?\n\n\n\n\n\n\n[Yudkowsky][15:10]\nI can see how this would have been the more obvious intended interpretation on your viewpoint, and apologize\n\n\n\n\n\n[Christiano][15:10]\n\n> \n> (though i think paul would also disagree about this thing, and believes that the chimp to human thing was mostly about size)\n> \n> \n> \n\n\nIs what I was responding to in part\n\n\nI am open to saying that I’m conflating size and “algorithmic improvements that are closely correlated with size in practice and are similar to the prior algorithmic improvements amongst primates”\n\n\n\n\n\n\n[Yudkowsky][15:11]\nfrom my perspective, the question is “how did that hominid->human transition happen, as opposed to there being an elephant->smartelephant or dinosaur->smartdinosaur transition”?\n\n\nI expect there were substantial numbers of brain algorithm stuffs going on during this time, however\n\n\nbecause I don’t think that synapses scale that well *with* the baseline hominid boost\n\n\n\n\n\n[Christiano][15:11]\nFWIW, it seems quite likely to me that there would be an elephant->smartelephant transition within tens of millions or maybe 100M years, and a dinosaur->smartdinosaur transition in hundreds of millions of years\n\n\nand those are just cut off by the fastest lines getting there first\n\n\n\n\n\n\n[Yudkowsky][15:12]\nwhich I think does circle back to that point? 
actually I think my memory glitched and forgot the original point while being about this subpoint and I probably did interpret the original point as intended.\n\n\n\n\n\n[Christiano][15:12]\nnamely primates beating out birds by a hair\n\n\n\n\n\n\n[Yudkowsky][15:12]\nthat sounds like a viewpoint which would also think it much more likely that GPT-3 would foom in a billion years\n\n\nwhere maybe you think that’s unlikely, but I still get the impression your “unlikely” is, like, 5 orders of magnitude likelier than mine before applying overconfidence adjustments against extreme probabilities on both sides\n\n\nyeah, I think I need to back up\n\n\n\n\n\n[Cotra][15:15]\nIs your position something like “at some point after dinosaurs, there was an algorithmic innovation that increased returns to brain size, which meant that the birds and the humans see their brains increasing quickly while the dinosaurs didn’t”?\n\n\n\n\n\n\n[Christiano][15:15]\nit also seems to me like the chimp->human difference is in basically the same ballpark of the effect of brain size within humans, given modest adaptations for culture\n\n\nwhich seems like a relevant sanity-check that made me take the “mostly hardware” view more seriously\n\n\n\n\n\n\n[Yudkowsky][15:15]\nthere’s a part of my model which very strongly says that hominids scaled better than elephants and that’s why “hominids->humans but not elephants->superelephants”\n\n\n\n\n\n[Christiano][15:15]\npreviously I had assumed that analysis would show that chimps were obviously *way* dumber than an extrapolation of humans\n\n\n\n\n\n\n[Yudkowsky][15:16]\nthere’s another part of my model which says “and it still didn’t scale that well without algorithms, so we should expect a lot of alleles affecting brain circuitry which rose to fixation over the period when hominid brains were expanding”\n\n\nthis part is strong and I think echoes back to AGI stuff, but it is not *as strong* as the much *more* overdetermined position that hominids 
started with more scalable algorithms than dinosaurs.\n\n\n\n\n\n[Christiano][15:17]\nI do agree with the point that there are structural changes in brains as you scale them up, and this is potentially a reason why brain size changes more slowly than e.g. bone size. (Also there are small structural changes in ML algorithms as you scale them up, not sure how much you want to push the analogy but they feel fairly similar.)\n\n\n\n\n\n\n[Yudkowsky][15:17]\n\n> \n> it also seems to me like the chimp->human difference is in basically the same ballpark of the effect of brain size within humans, given modest adaptations for culture\n> \n> \n> \n\n\nthis part also seems pretty blatantly false to me\n\n\nis there, like, a smooth graph that you looked at there?\n\n\n\n\n\n[Christiano][15:18]\nI think the extrapolated difference would be about 4 standard deviations, so we are comparing a chimp to an IQ 40 human\n\n\n\n\n\n\n[Yudkowsky][15:18]\nI’m really not sure how much of a fair comparison that is\n\n\nIQ 40 humans in our society may be mostly sufficiently-damaged humans, not scaled-down humans\n\n\n\n\n\n[Christiano][15:19]\ndoesn’t seem easy, but the point is that the extrapolated difference is huge, it corresponds to completely debilitating developmental problems\n\n\n\n\n\n\n[Yudkowsky][15:19]\nif you do enough damage to a human you end up with, for example, a coma victim who’s not competitive with other primates at all\n\n\n\n\n\n[Christiano][15:19]\nyes, that’s more than 4 SD down\n\n\nI agree with this general point\n\n\nI’d guess I just have a lot more respect for chimps than you do\n\n\n\n\n\n\n[Yudkowsky][15:20]\nI feel like I have a bunch of respect for chimps but more respect for humans\n\n\nlike, that stuff humans do\n\n\nthat is really difficult stuff!\n\n\nit is not just scaled-up chimpstuff!\n\n\n\n\n\n[Christiano][15:21]\nCarl convinced me chimps wouldn’t go to space, but I still really think it’s about domesticity and cultural issues rather than 
intelligence\n\n\n\n\n\n\n[Yudkowsky][15:21]\nthe chimpstuff is very respectable but there is a whole big layer cake of additional respect on top\n\n\n\n\n\n[Christiano][15:21]\nnot a prediction to be resolved until after the singularity\n\n\nI mean, the space prediction isn’t very confident 🙂\n\n\nand it involved a very large planet of apes\n\n\n\n\n\n \n\n\n \n\n\n### 9.6. Architectural innovation in AI and in evolutionary history\n\n\n \n\n\n\n[Yudkowsky][15:22]\nI feel like if GPT-based systems saturate and require *any* architectural innovation rather than Stack More Layers to get much further, this is a pre-Singularity point of observation which favors humans probably being more qualitatively different from chimp-LCA\n\n\n(LCA=last common ancestor)\n\n\n\n\n\n[Christiano][15:22]\nany seems like a kind of silly bar?\n\n\n\n\n\n\n[Yudkowsky][15:23]\nbecause single architectural innovations are allowed to have large effects!\n\n\n\n\n\n[Christiano][15:23]\nlike there were already small changes to normalization from GPT-2 to GPT-3, so isn’t it settled?\n\n\n\n\n\n\n[Yudkowsky][15:23]\nnatural selection can’t afford to deploy that many of them!\n\n\n\n\n\n[Christiano][15:23]\nand the model really eventually won’t work if you increase layers but don’t fix the normalization, there are severe problems that only get revealed at high scale\n\n\n\n\n\n\n[Yudkowsky][15:23]\nthat I wouldn’t call architectural innovation\n\n\ntransformers were\n\n\nthis is a place where I would not discuss specific ideas because I do not actually want this event to occur\n\n\n\n\n\n[Christiano][15:24]\nsure\n\n\nhave you seen a graph of LSTM scaling vs transformer scaling?\n\n\nI think LSTM with ongoing normalization-style fixes lags like 3x behind transformers on language modeling\n\n\n\n\n\n\n[Yudkowsky][15:25]\nno, does it show convergence at high-enough scales?\n\n\n\n\n\n[Christiano][15:25]\nfigure 7 here: 
\n\n\n![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/d1cef6e971b840373aa6375a7e70d4c5f3b0a258d894558c.png)\n\n\n\n\n[Yudkowsky][15:26]\nyeah… I unfortunately would rather not give other people a sense for which innovations are obviously more of the same and which innovations obviously count as qualitative\n\n\n\n\n\n[Christiano][15:26]\nI think smart money is that careful initialization and normalization on the RNN will let it keep up for longer\n\n\nanyway, I’m very open to differences like LSTM vs transformer between humans and 3x-smaller-brained-ancestors, as long as you are open to like 10 similar differences further back in the evolutionary history\n\n\n\n\n\n\n[Yudkowsky][15:28]\nwhat if there’s 27 differences like that and 243 differences further back in history?\n\n\n\n\n\n[Christiano][15:28]\nsure\n\n\n\n\n\n\n[Yudkowsky][15:28]\nis that a distinctly Yudkowskian view vs a Paul view…\n\n\napparently not\n\n\nI am again feeling confused about cruxes\n\n\n\n\n\n[Christiano][15:29]\nI mean, 27 differences like transformer vs LSTM isn’t actually plausible, so I guess we could talk about it\n\n\n\n\n\n\n[Cotra][15:30]\nHere’s a potential crux articulation that ties it back to the animals stuff: paul thinks that we first discover major algorithmic innovations that improve intelligence at a low level of intelligence, analogous to evolution discovering major architectural innovations with tiny birds and primates, and then there will be a long period of scaling up plus coming up with routine algorithmic tweaks to get to the high level, analogous to evolution schlepping on the same shit for a long time to get to humans. analogously, he thinks when big innovations come onto the scene the actual product is crappy af (e.g. 
Wright brothers’ plane), and it needs a ton of work to scale up to usable and then to great.\n\n\nyou both seem to think both evolution and tech history consiliently point in your direction\n\n\n\n\n\n\n[Christiano][15:33]\nthat sounds vaguely right, I guess the important part of “routine” is “vaguely predictable,” like you mostly work your way down the low-hanging fruit (including new fruit that becomes more important as you scale), and it becomes more and more predictable the more people are working on it and the longer you’ve been at it\n\n\nand deep learning is already reasonably predictable (i.e. the impact of successive individual architectural changes is smaller, and law of large numbers is doing its thing) and is getting more so, and I just expect that to continue\n\n\n\n\n\n\n[Cotra][15:34]\nyeah, like it’s a view that points to using data that relates effort to algorithmic progress and using that to predict future progress (in combination with predictions of future effort)\n\n\n\n\n\n\n[Christiano][15:35]\nyeah\n\n\nand for my part, it feels like this is how most technologies look and also how current ML progress looks\n\n\n\n\n\n\n[Cotra][15:36]\nand *also* how evolution looks, right?\n\n\n\n\n\n\n[Christiano][15:37]\nyou aren’t seeing big jumps in translation or in self-driving cars or in image recognition, you are just seeing a long slog, and you see big jumps in areas where few people work (usually up to levels that are not in fact that important, which is very correlated with few people working there)\n\n\nI don’t know much about evolution, but it at least looks very consistent with what I know and the facts eliezer cites\n\n\n(not merely consistent, but “explains the data just about as well as the other hypotheses on offer”)\n\n\n\n\n\n \n\n\n### 9.7. 
Styles of thinking in forecasting\n\n\n \n\n\n\n[Yudkowsky][15:38]\nI do observe that this would seem, on the surface of things, to describe the entire course of natural selection up until about 20K years ago, if you were looking at surface impacts\n\n\n\n\n\n[Christiano][15:39]\nby 20k years ago I think it’s basically obvious that you are tens of thousands of years from the singularity\n\n\nlike, I think natural selection is going crazy with the brains by millions of years ago, and by hundreds of thousands of years ago humans are going crazy with the culture, and by tens of thousands of years ago the culture thing has accelerated and is almost at the finish line\n\n\n\n\n\n\n[Yudkowsky][15:41]\nreally? I don’t know if I would have been able to call that in advance if I’d never seen the future or any other planets. I mean, maybe, but I sure would have been extrapolating way out onto a further limb than I’m going here.\n\n\n\n\n\n[Christiano][15:41]\nYeah, I agree singularity is way more out on a limb—or like, where the singularity stops is more uncertain since that’s all that’s really at issue from my perspective\n\n\nbut the point is that everything is clearly crazy in historical terms, in the same way that 2000 is crazy, even if you don’t know where it’s going\n\n\nand the timescale for the crazy changes is tens of thousands of years\n\n\n\n\n\n\n[Yudkowsky][15:42]\nI frankly model that, had I made any such prediction 20K years ago of hominids being able to pull off moon landings or global warming – never mind the Singularity – I would have faced huge pushback from many EAs, such as, for example, Robin Hanson, and you.\n\n\n\n\n\n[Christiano][15:42]\nlike I think this can’t go on would have applied just as well: \n\n\nI don’t think that’s the case at all\n\n\nand I think you still somehow don’t understand my position?\n\n\n\n\n\n\n[Yudkowsky][15:43]\n is my old entry here\n\n\n\n\n\n[Christiano][15:43]\nlike, what is the move I’m making here, that you think I 
would have made in the past?\n\n\nand would have led astray?\n\n\n\n\n\n\n[Yudkowsky][15:44]\nI sure do feel in a deeper sense that I am trying very hard to account for perspective shifts in how unpredictable the future actually looks at the time, and the Other is looking back at the past and organizing it neatly and expecting the future to be that neat\n\n\n\n\n\n[Christiano][15:45]\nI don’t even feel like I’m expecting the future to be neat\n\n\nare you just saying you have a really broad distribution over takeoff speed, and that “less than a month” gets a lot of probability because lots of numbers are less than a month?\n\n\n\n\n\n\n[Yudkowsky][15:47]\nnot exactly?\n\n\n\n\n\n[Christiano][15:47]\nin what way is your view the one that is preferred by things being messy or unpredictable?\n\n\nlike, we’re both agreeing X will eventually happen, and I’m making some concrete prediction about how some other X’ will happen first, and that’s the kind of specific prediction that’s likely to be wrong?\n\n\n\n\n\n\n[Yudkowsky][15:48]\nmore like, we sure can tell a story today about how normal and predictable AlphaGo was, but we can *always* tell stories like that about the past. 
I do not particularly recall the AI field standing up one year before AlphaGo and saying “It’s time, we’re coming for the 8-dan pros this year and we’re gonna be world champions a year after that.” (Which took significantly longer in chess, too, matching my other thesis about how these slides are getting steeper as we get closer to the end.)\n\n\n\n\n\n[Christiano][15:49]\nit’s more like, you are offering AGZ as an example of why things are crazy, and I’m doubtful / think it’s pretty lame\n\n\nmaybe I don’t understand how it’s functioning as bayesian evidence\n\n\nfor what over what\n\n\n\n\n\n\n[Yudkowsky][15:50]\nI feel like the whole smoothness-reasonable-investment view, if evaluated on Earth 5My ago *without benefit of foresight*, would have dismissed the notion of brains overtaking evolution; evaluated 1My ago, it would have dismissed the notion of brains overtaking evolution; evaluated 20Ky ago, it would have barely started to acknowledge that brains were doing anything interesting at all, but pointed out how the hominids could still only eat as much food as their niche offered them and how the cute little handaxes did not begin to compare to livers and wasp stings.\n\n\nthere is a style of thinking that says, “wow, yeah, people in the past sure were surprised by stuff, oh, wait, *I’m also in the past*, aren’t I, I am one of those people”\n\n\nand a view where you look back from the present and think about how reasonable the past all seems now, and the future will no doubt be equally reasonable\n\n\n\n\n\n[Christiano][15:52]\n(the AGZ example may fall flat, because the arguments we are making about it now *we were also making in the past*)\n\n\n\n\n\n\n[Yudkowsky][15:52]\nI am not sure this is resolvable, but it is among my primary guesses for a deep difference in believed styles of thought\n\n\n\n\n\n[Christiano][15:52]\nI think that’s a useful perspective, but still don’t see how it favors your bottom line\n\n\n\n\n\n\n[Yudkowsky][15:53]\nwhere I look at 
the style of thinking you’re using, and say, not, “well, that’s invalidated by a technical error on line 3 even on Paul’s own terms” but “isn’t this obviously a whole style of thought that never works and ends up unrelated to reality”\n\n\nI think the first AlphaGo was the larger shock, AlphaGo Zero was a noticeable but more mild shock on account of how it showed the end of game programming and not just the end of Go\n\n\n\n\n\n[Christiano][15:54]\nsorry, I lumped them together\n\n\n\n\n\n\n[Yudkowsky][15:54]\nit didn’t feel like the same level of surprise; it was precedented by then\n\n\nthe actual accomplishment may have been larger in an important sense, but a lot of the – epistemic landscape of lessons learned? – is about the things that surprise you at the time\n\n\n\n\n\n[Christiano][15:55]\nalso AlphaGo was also quite easy to see coming after this paper (as was discussed extensively *at the time*): \n\n\n\n\n\n\n[Yudkowsky][15:55]\nPaul, are you on the record as arguing with me that AlphaGo will win at Go because it’s predictably on-trend?\n\n\nback then?\n\n\n\n\n\n[Cotra][15:55]\nHm, it sounds like Paul is saying “I do a trend extrapolation over long time horizons and if things seem to be getting faster and faster I expect they’ll continue to accelerate; this extrapolation if done 100k years ago would have seen that things were getting faster and faster and projected singularity within 100s of K years”\n\n\nDo you think Paul is in fact doing something other than the trend extrap he says he’s doing, or that he would have looked at a different less informative trend than the one he says he would have looked at, or something else?\n\n\n\n\n\n\n[Christiano][15:56]\nmy methodology for answering that question is looking at LW comments mentioning go by me, can see if it finds any\n\n\n\n\n\n\n[Yudkowsky][15:56]\nDifferent less informative trend, is most of my suspicion there?\n\n\nthough, actually, I should revise that, I feel like relatively little of the WHA 
was AlphaGo v2 whose name I forget beating Lee Se-dol, and most was in the revelation that v1 beat the high-dan pro whose name I forget.\n\n\nPaul having himself predicted anything at *all* like this would be the actually impressive feat\n\n\nthat would cause me to believe that the AI world is more regular and predictable than I experienced it as, if you are paying more attention to ICLR papers than I do\n\n\n\n\n \n\n\n### 9.8. Moravec’s prediction\n\n\n \n\n\n\n[Cotra][15:58]\nAnd jtbc, the trend extrap paul is currently doing is something like:\n\n\n* Look at how effort leads to hardware progress measured in FLOP/$ and software progress measured in stuff like “FLOP to do task X” or “performance on benchmark Y”\n* Look at how effort in the ML industry as a whole is increasing, project forward with maybe some adjustments for thinking markets are more inefficient now and will be less inefficient later\n\n\nand this is the wrong trend, because he shouldn’t be looking at hardware/software progress across the whole big industry and should be more open to an upset innovation coming from an area with a small number of people working on it?\n\n\nand he would have similarly used the wrong trends while trying to do trend extrap in the past?\n\n\n\n\n\n\n[Yudkowsky][15:59]\nbecause I feel like this general style of thought doesn’t work when you use it on Earth generally, and then fails extremely hard if you try to use it on Earth before humans to figure out where the hominids are going because that phenomenon is Different from Previous Stuff\n\n\nlike, to be clear, I have seen this used well on solar\n\n\nI feel like I saw some people calling the big solar shift based on graphs, before that happened\n\n\nI have seen this used great by Moravec on computer chips to predict where computer chips would be in 2012\n\n\nand also witnessed Moravec *completely failing* as soon as he tried to derive *literally anything but the graph itself* namely his corresponding prediction for 
human-equivalent AI in 2012 (I think, maybe it was 2010) or something\n\n\n\n\n\n[Christiano][16:02]\n(I think in his 1988 book Moravec estimated human-level AI in ~2030, not sure if you are referring to some earlier prediction?)\n\n\n\n\n\n\n[Yudkowsky][16:02]\n(I have seen Ray Kurzweil project out Moore’s Law to the $1,000,000 human brain in, what was it, 2025, followed by the $1000 human brain in 2035 and the $1 human brain in 2045, and when I asked Ray whether machine superintelligence might shift the graph at all, he replied that machine superintelligence was precisely how the graph would be able to continue on trend. This indeed is sillier than EAs.)\n\n\n\n\n\n[Cotra][16:03]\nmoravec’s prediction appears to actually be around 2025, looking at his hokey graph? \n\n\n![](https://jetpress.org/volume1/power_075.jpg)\n\n\n\n\n[Yudkowsky][16:03]\nbut even there, it does feel to me like there is a commonality between Kurzweil’s sheer graph-worship and difficulty in appreciating the graphs as surface phenomena that are less stable than deep phenomena, and something that Hanson was doing wrong in the foom debate\n\n\n\n\n\n[Cotra][16:03]\nwhich is…like, your timelines?\n\n\n\n\n\n\n[Yudkowsky][16:04]\nthat’s 1998\n\n\nMind Children in 1988 I am pretty sure had an earlier prediction\n\n\n\n\n\n[Christiano][16:04]\nI should think you’d be happy to bet against me on basically any prediction, shouldn’t you?\n\n\n\n\n\n\n[Yudkowsky][16:05]\nany prediction that sounds narrow and isn’t like “this graph will be on trend in 3 more years”\n\n\n…maybe I’m wrong, an online source says Mind Children in 1988 predicted AGI in “40 years” but I sure do seem to recall an extrapolated graph that reached “human-level hardware” in 2012 based on an extensive discussion about computing power to duplicate the work of the retina\n\n\n\n\n\n[Christiano][16:08]\ndon’t think it matters too much other than for Moravec’s honor, doesn’t really make a big difference for the empirical success of the 
methodology\n\n\nI think it’s on page 68 if you have the physical book\n\n\n\n\n\n\n[Yudkowsky][16:09]\np60 via Google Books says 10 teraops for a human-equivalent mind\n\n\n\n\n\n[Christiano][16:09]\nI have a general read of history where trend extrapolation works extraordinarily well relative to other kinds of forecasting, to the extent that the best first-pass heuristic for whether a prediction is likely to be accurate is whether it’s a trend extrapolation and how far in the future it is\n\n\n\n\n\n\n[Yudkowsky][16:09]\nwhich, incidentally, strikes me as entirely plausible if you had algorithms as sophisticated as the human brain\n\n\nmy sense is that Moravec nailed the smooth graph of computing power going on being smooth, but then all of his predictions about the actual future were completely invalid on account of a curve interacting with his curve that he didn’t know things about and so simply omitted as a step in his calculations, namely, AGI algorithms\n\n\n\n\n\n[Christiano][16:12]\nthough again, from your perspective 2030 is still a reasonable bottom-line forecast that makes him one of the most accurate people at that time?\n\n\n\n\n\n\n[Yudkowsky][16:12]\nyou could be right about all the local behaviors that your history is already shouting out at you as having smooth curve (where by “local” I do mean to exclude stuff like world GDP extrapolated into the indefinite future) and the curves that history isn’t shouting at you will tear you down\n\n\n\n\n\n[Christiano][16:12]\n(I don’t know if he even forecast that)\n\n\n\n\n\n\n[Yudkowsky][16:12]\nI don’t remember that part from the 1988 book\n\n\nmy memory of the 1988 book is “10 teraops, based on what it takes to rival the retina” and he drew a graph of Moore’s Law\n\n\n\n\n\n[Christiano][16:13]\nyeah, I think that’s what he did\n\n\n(and got 2030)\n\n\n\n\n\n\n[Yudkowsky][16:14]\n“If this rate of improvement were to continue into the next century, the 10 teraops required for a humanlike computer would be 
available in a $10 million supercomputer before 2010 and in a $1,000 personal computer by 2030.”\n\n\n\n\n\n[Christiano][16:14]\nor like, he says “human equivalent in 40 years” and predicts that in 50 years we will have robots with superhuman reasoning ability, not clear he’s ruling out human-equivalent AGI before 40 years but I think the tone is clear\n\n\n\n\n\n\n[Yudkowsky][16:15]\nso 2030 for AGI on a personal computer and 2010 for AGI on a supercomputer, and I expect that on my first reading I simply discarded the former prediction as foolish extrapolation past the model collapse he had just predicted in 2010.\n\n\n(p68 in “Powering Up”)\n\n\n\n\n\n[Christiano][16:15]\nyeah, that makes sense\n\n\nI do think the PC number seems irrelevant\n\n\n\n\n\n\n[Cotra][16:16]\nI think both in that book and in the 98 article he wants you to pay attention to the “very cheap human-size computers” threshold, not the “supercomputer” threshold, i think intentionally as a way to handwave in “we need people to be able to play around with these things”\n\n\n(which people criticized him at the time for not more explicitly modeling iirc)\n\n\n\n\n\n\n[Yudkowsky][16:17]\nbut! I mean! 
there are so many little places where the media has a little cognitive hiccup about that and decides in 1998 that it’s fine to describe that retrospectively as “you predicted in 1988 that we’d have true AI in 40 years” and then the future looks less surprising than people at the time using Trend Logic were actually surprised by it!\n\n\nall these little ambiguities and places where, oh, you decide retroactively that it would have made sense to look at *this* Trend Line and use it *that* way, but if you look at what people said at the time, they didn’t actually say that!\n\n\n\n\n\n[Christiano][16:19]\nI mean, in fairness reading the book it just doesn’t seem like he is predicting human-level AI in 2010 rather than 2040, but I do agree that it seems like the basic methodology (why care about the small computer thing?) doesn’t really make that much sense a priori and only leads to something sane if it cancels out with a weird view\n\n\n\n\n\n \n\n\n### 9.9. Prediction disagreements and bets\n\n\n \n\n\n\n[Christiano][16:19]\nanyway, I’m pretty unpersuaded by the kind of track record appeal you are making here\n\n\n\n\n\n\n[Yudkowsky][16:20]\nif the future goes the way I predict and yet anybody somehow survives, perhaps somebody will draw a hyperbolic trendline on some particular chart where the trendline is retroactively fitted to events including those that occurred in only the last 3 years, and say with a great sage nod, ah, yes, that was all according to trend, nor did anything depart from trend\n\n\ntrend lines permit anything\n\n\n\n\n\n[Christiano][16:20]\nlike from my perspective the fundamental question is whether I would do better or worse by following the kind of reasoning you’d advocate, and it just looks to me like I’d do worse, and I’d love to make any predictions about anything to help make that more clear and hindsight-proof in advance\n\n\n\n\n\n\n[Yudkowsky][16:20]\nyou just look into the past and find a line you can draw that ended up where reality 
went\n\n\n\n\n\n[Christiano][16:21]\nit feels to me like you really just waffle on almost any prediction about the before-end-of-days\n\n\n\n\n\n\n[Yudkowsky][16:21]\nI don’t think I know a lot about the before-end-of-days\n\n\n\n\n\n[Christiano][16:21]\nlike if you make a prediction I’m happy to trade into it, or you can pick a topic and I can make a prediction and you can trade into mine\n\n\n\n\n\n\n[Cotra][16:21]\nbut you know enough to have strong timing predictions, e.g. your bet with caplan\n\n\n\n\n\n\n[Yudkowsky][16:21]\nit’s daring enough that I claim to know anything about the Future at all!\n\n\n\n\n\n[Cotra][16:21]\nsurely with that difference of timelines there should be some pre-2030 difference as well\n\n\n\n\n\n\n[Christiano][16:21]\nbut you are the one making the track record argument against my way of reasoning about things!\n\n\nhow does that not correspond to believing that your predictions are better!\n\n\nwhat does that mean?\n\n\n\n\n\n\n[Yudkowsky][16:22]\nyes and if you say something narrow enough or something that my model does at least vaguely push against, we should bet\n\n\n\n\n\n[Christiano][16:22]\nmy point is that I’m willing to make a prediction about any old thing, you can name your topic\n\n\nI think the way I’m reasoning about the future is just better in general\n\n\nand I’m going to beat you on whatever thing you want to bet on\n\n\n\n\n\n\n[Yudkowsky][16:22]\nbut if you say, “well, Moore’s Law on trend, next 3 years”, then I’m like, “well, yeah, sure, since I don’t feel like I know anything special about that, that would be my prediction too”\n\n\n\n\n\n[Christiano][16:22]\nsure\n\n\nyou can pick the topic\n\n\npick a quantity\n\n\nor a yes/no question\n\n\nor whatever\n\n\n\n\n\n\n[Yudkowsky][16:23]\nyou may know better than I would where your Way of Thought makes strong, narrow, or unusual predictions\n\n\n\n\n\n[Christiano][16:23]\nI’m going to trend extrapolation 
everywhere\n\n\nspoiler\n\n\n\n\n\n\n[Yudkowsky][16:23]\nokay but any superforecaster could do that and I could do the same by asking a superforecaster\n\n\n\n\n\n[Cotra][16:24]\nbut there must be places where you’d strongly disagree w the superforecaster\n\n\nsince you disagree with them eventually, e.g. >2/3 doom by 2030\n\n\n\n\n\n\n[Bensinger][18:40]  (Nov. 25 follow-up comment)\n“>2/3 doom by 2030” isn’t an actual Eliezer-prediction, and is based on a misunderstanding of something Eliezer said. See [Eliezer’s comment on LessWrong](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=diChXiELZd62hgRyK#diChXiELZd62hgRyK).\n\n\n\n\n\n\n[Yudkowsky][16:24]\nin the terminal phase, sure\n\n\n\n\n\n[Cotra][16:24]\nright, but there are no disagreements before jan 1 2030?\n\n\nno places where you’d strongly defy the superforecasters/trend extrap?\n\n\n\n\n\n\n[Yudkowsky][16:24]\nsuperforecasters were claiming that AlphaGo had a 20% chance of beating Lee Se-dol and I didn’t disagree with that at the time, though as the final days approached I became nervous and suggested to a friend that they buy out of a bet about that\n\n\n\n\n\n[Cotra][16:25]\nwhat about like whether we get some kind of AI ability (e.g. 
coding better than X) before end days\n\n\n\n\n\n\n[Yudkowsky][16:25]\nthough that was more because of having started to feel incompetent and like I couldn’t trust the superforecasters to know more, than because I had switched to a confident statement that AlphaGo would win\n\n\n\n\n\n[Cotra][16:25]\nseems like EY’s deep intelligence / insight-oriented view should say something about what’s not possible before we get the “click” and the FOOM\n\n\n\n\n\n\n[Christiano][16:25]\nI mean, I’m OK with either (i) evaluating arguments rather than dismissive and IMO totally unjustified track record, (ii) making bets about stuff\n\n\nI don’t see how we can both be dismissing things for track record reasons and also not disagreeing about things\n\n\nif our methodologies agree about all questions before end of days (which seems crazy to me) then surely there is no track record distinction between them…\n\n\n\n\n\n[Cotra: 👍]\n\n\n\n\n\n\n\n[Cotra][16:26]\ndo you think coding models will be able to 2x programmer productivity before end days? 4x?\n\n\nwhat about hardware/software R&D wages? 
will they get up to $20m/yr for good ppl?\n\n\nwill someone train a 10T param model before end days?\n\n\n\n\n\n\n[Christiano][16:27]\nthings I’m happy to bet about: economic value of LMs or coding models at 2, 5, 10 years, benchmark performance of either, robotics, wages in various industries, sizes of various industries, compute/$, someone else’s views about “how ML is going” in 5 years\n\n\nmaybe the “any GDP acceleration before end of days?” works, but I didn’t like how you don’t win until the end of days\n\n\n\n\n\n\n[Yudkowsky][16:28]\nokay, so here’s an example place of a *weak* general Yudkowskian prediction, that is weaker than terminal-phase stuff of the End Days: (1) I predict that cycles of ‘just started to be able to do Narrow Thing -> blew past upper end of human ability at Narrow Thing’ will continue to get shorter, the same way that, I think, this happened faster with Go than with chess.\n\n\n\n\n\n[Christiano][16:28]\ngreat, I’m totally into it\n\n\nwhat’s a domain?\n\n\ncoding?\n\n\n\n\n\n\n[Yudkowsky][16:28]\nDoes Paul disagree? Can Paul point to anything equally specific out of Paul’s viewpoint?\n\n\n\n\n\n[Christiano][16:28]\nbenchmarks for LMs?\n\n\nrobotics?\n\n\n\n\n\n\n[Yudkowsky][16:28]\nwell, for these purposes, we do need some Elo-like ability to measure at all where things are relative to humans\n\n\n\n\n\n[Cotra][16:29]\nproblem-solving benchmarks for code?\n\n\nMATH benchmark?\n\n\n\n\n\n\n[Christiano][16:29]\nwell, for coding and LM’ing we have lots of benchmarks we can use\n\n\n\n\n\n\n[Yudkowsky][16:29]\nthis unfortunately does feel a bit different to me from Chess benchmarks where the AI is playing the whole game; Codex is playing part of the game\n\n\n\n\n\n[Christiano][16:29]\nin general the way I’d measure is by talking about how fast you go from “weak human” to “strong human” (e.g. 
going from top-10,000 in chess to top-10 or whatever, going from jobs doable by $50k/year engineer to $500k/year engineer…)\n\n\n\n\n\n\n[Yudkowsky][16:30]\ngolly, that sounds like a viewpoint very favorable to mine\n\n\n\n\n\n[Christiano][16:30]\nwhat do you mean?\n\n\nthat way of measuring would be favorable to your viewpoint?\n\n\n\n\n\n\n[Yudkowsky][16:31]\nif we measure how far it takes AI to go past different levels of paying professionals, I expect that the Chess duration is longer than the Go duration and that by the time Codex is replacing ~~a~~ most paid $50k/year programmers the time to replacing ~~a~~ most programmers paid as much as a top Go player will be pretty darned short\n\n\n\n\n\n[Christiano][16:31]\ntop Go players don’t get paid, do they?\n\n\n\n\n\n\n[Yudkowsky][16:31]\nthey tutor students and win titles\n\n\n\n\n\n[Christiano][16:31]\nbut I mean, they are like low-paid engineers\n\n\n\n\n\n\n[Yudkowsky][16:31]\nyeah that’s part of the issue here\n\n\n\n\n\n[Christiano][16:31]\nI’m using wages as a way to talk about the distribution of human abilities, not the fundamental number\n\n\n\n\n\n\n[Yudkowsky][16:32]\nI would expect something similar to hold over going from low-paying welder to high-paying welder\n\n\n\n\n\n[Christiano][16:32]\nlike, how long to move from “OK human” to “pretty good human” to “best human”\n\n\n\n\n\n\n[Cotra][16:32]\nsays salary of $350k/yr for lee: \n\n\n\n\n\n\n[Yudkowsky][16:32]\nbut I also mostly expect that AIs will not be allowed to weld things on Earth\n\n\n\n\n\n[Cotra][16:32]\nwhy don’t we just do an in vitro benchmark instead of wages?\n\n\n\n\n\n\n[Christiano][16:32]\nwhat, machines already do virtually all welding?\n\n\n\n\n\n\n[Cotra][16:32]\njust pick a benchmark?\n\n\n\n\n\n\n[Yudkowsky][16:33]\nyoouuuu do not want to believe sites like that (fameranker)\n\n\n\n\n\n[Christiano][16:33]\nyeah, I’m happy with any benchmark, and then we can measure various human levels at that 
benchmark\n\n\n\n\n\n\n[Cotra][16:33]\nwhat about MATH? \n\n\n\n\n\n\n[Christiano][16:34]\nalso I don’t know what “shorter and shorter” means, the time in go and chess was decades to move from “strong amateur” to “best human,” I do think these things will most likely be shorter than decades\n\n\nseems like we can just predict concrete #s though\n\n\n\n\n\n| |\n| --- |\n| [Cotra: 👍] |\n\n\n\nlike I can say how long I think it will take to get from “median high schooler” to “IMO medalist” and you can bet against me?\n\n\nand if we just agree about all of those predictions then again I’m back to being very skeptical of a claimed track record difference between our models\n\n\n(I do think that it’s going to take years rather than decades on all of these things)\n\n\n\n\n\n\n[Yudkowsky][16:36]\npossibly! I worry this ends up in a case where Katja or Luke or somebody goes back and collects data about “amateur to pro performance times” and Eliezer says “Ah yes, these are shortening over time, just as I predicted” and Paul is like “oh, well, I predict they continue to shorten on this trend drawn from the data” and Eliezer is like “I guess that could happen for the next 5 years, sure, sounds like something a superforecaster would predict as default”\n\n\n\n\n\n[Cotra][16:37]\ni’m pretty sure paul’s methodology here will just be to look at the MATH perf trend based on model size and combine with expectations of when ppl will make big enough models, not some meta trend thing like that?\n\n\n\n\n\n\n[Yudkowsky][16:37]\nso I feel like… a bunch of what I feel is the real disagreement in our models, is a bunch of messy stuff Suddenly Popping Up one day and then Eliezer is like “gosh, I sure didn’t predict that” and Paul is like “somebody could have totally predicted that” and Eliezer is like “people would say exactly the same thing after the world ended in 3 minutes”\n\n\nif we’ve already got 2 years of trend on a dataset, I’m not necessarily going to predict the trend 
breaking\n\n\n\n\n\n[Cotra][16:38]\nhm, you’re presenting your view as more uncertain and open to anything here than paul’s view, but in fact it’s picking out a narrower distribution. you’re more confident in powerful AGI soon\n\n\n\n\n\n\n[Christiano][16:38]\nseems hard to play the “who is more confident?” game\n\n\n\n\n\n\n[Cotra][16:38]\nso there should be some places where you make a strong positive prediction paul disagrees with\n\n\n\n\n\n\n[Yudkowsky][16:39]\nI might want to buy options on a portfolio of trends like that, if Paul is willing to sell me insurance against all of the trends breaking upward at a lower price than I think is reasonable\n\n\nI mean, from my perspective Paul is the one who seems to think the world is well-organized and predictable in certain ways\n\n\n\n\n\n[Christiano][16:39]\nyeah, and you are saying that I’m overconfident about that\n\n\n\n\n\n\n[Yudkowsky][16:39]\nI keep wanting Paul to go on and make narrower predictions than I do in that case\n\n\n\n\n\n[Christiano][16:39]\nso you should be happy to bet with me about *anything*\n\n\nand I’m letting you pick anything at all you want to bet about\n\n\n\n\n\n\n[Cotra][16:40]\ni mean we could do a portfolio of trends like MATH and you could bet on at least a few of them having strong surprises in the sooner direction\n\n\nbut that means we could just bet about MATH and it’d just be higher variance\n\n\n\n\n\n\n[Yudkowsky][16:40]\nok but you’re not going to sell me cheap options on sharp declines in the S&P 500 even though in a very reasonable world there would not be any sharp declines like that\n\n\n\n\n\n[Christiano][16:41]\nif we’re betting $ rather than bayes points, then yes I’m going to weigh worlds based on the value of $ in those worlds\n\n\n\n\n\n\n[Cotra][16:41]\nwouldn’t paul just sell you options at the price the options actually trade for? 
i don’t get it\n\n\n\n\n\n\n[Christiano][16:41]\nbut my sense is that I’m just generally across the board going to be more right than you are, and I’m frustrated that you just keep saying that “people like me” are wrong about stuff\n\n\n\n\n\n\n[Yudkowsky][16:41]\nPaul’s like “we’ll see smooth behavior in the end days” and I feel like I should be able to say “then Paul, sell me cheap options against smooth behavior now” but Paul is just gonna wanna sell at market price\n\n\n\n\n\n[Christiano][16:41]\nand so I want to hold you to that by betting about anything\n\n\nideally just tons of stuff\n\n\nrandom things about what AI will be like, and other technologies, and regulatory changes\n\n\n\n\n\n\n[Cotra][16:42]\npaul’s view doesn’t seem to imply that he should value those options less than the market\n\n\nhe’s more EMH-y than you not less\n\n\n\n\n\n\n[Yudkowsky][16:42]\nbut then the future should *behave like that market*\n\n\n\n\n\n[Christiano][16:42]\nwhat do you mean?\n\n\n\n\n\n\n[Yudkowsky][16:42]\nit should have options on wild behavior that are not cheap!\n\n\n\n\n\n[Christiano][16:42]\nyou mean because people want $ more in worlds where the market drops a lot?\n\n\nI don’t understand the analogy\n\n\n\n\n\n\n[Yudkowsky][16:43]\nno, because jumpy stuff happens more than it would in a world of ideal agents\n\n\n\n\n\n[Cotra][16:43]\nI think EY is saying the non-cheap option prices are because P(sharp declines) is pretty high\n\n\n\n\n\n\n[Christiano][16:43]\nok, we know how often markets jump, if that’s the point of your argument can we just talk about that directly?\n\n\n\n\n\n\n[Yudkowsky][16:43]\nor sharp rises, for that matter\n\n\n\n\n\n[Christiano][16:43]\n(much lower than option prices obviously)\n\n\nI’m probably happy to sell you options for sharp rises\n\n\nI’ll give you better than market odds in that direction\n\n\nthat’s how this works\n\n\n\n\n\n\n[Yudkowsky][16:44]\nnow I am again confused, for I thought you were the one who expected world GDP 
to double in 4 years\n\n\nand indeed, drew such graphs with the rise suggestively happening earlier than the sharp spike\n\n\n\n\n\n[Christiano][16:44]\nyeah, and I have exposure to that by buying stocks, options prices are just a terrible way of tracking these things\n\n\n\n\n\n\n[Yudkowsky][16:44]\nsuggesting that such a viewpoint is generally favorable to near timelines for that\n\n\n\n\n\n[Christiano][16:44]\nI mean, I have bet a *lot* of money on AI companies doing well\n\n\nwell, not compared to the EA crowd, but compared to my meager net worth ![🙂](https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png)\n\n\nand indeed, it has been true so far\n\n\nand I’m continuing to make the bet\n\n\nit seems like on your view it should be surprising that AI companies just keep going up\n\n\naren’t you predicting them not to get to tens of trillions of valuation before the end of days?\n\n\n\n\n\n\n[Yudkowsky][16:45]\nI believe that Nate, of a generally Yudkowskian view, did the same (bought AI companies). and I focused my thoughts elsewhere, because somebody needs to, but did happen to buy my first S&P 500 on its day of exact minimum in 2020\n\n\n\n\n\n[Christiano][16:46]\npoint is, that’s how you get exposure to the crazy growth stuff with continuous ramp-ups\n\n\nand I’m happy to make the bet on the market\n\n\nor on other claims\n\n\nI don’t know if my general vibe makes sense here, and why it seems reasonable to me that I’m just happy to bet on anything\n\n\nas a way of trying to defend my overall attack\n\n\nand that if my overall epistemic approach is vulnerable to some track record objection, then it seems like it ought to be possible to win here\n\n\n\n\n\n \n\n\n### 9.10. Prediction disagreements and bets: Standard superforecaster techniques\n\n\n \n\n\n\n[Cotra][16:47]\nI’m still kind of surprised that Eliezer isn’t willing to bet that there will be a faster-than-Paul-expects trend break on MATH or whatever other benchmark. 
Is it just the variance of MATH being one benchmark? Would you make the bet if it were 6?\n\n\n\n\n\n\n[Yudkowsky][16:47]\na large problem here is that both of us tend to default strongly to superforecaster standard techniques\n\n\n\n\n\n[Christiano][16:47]\nit’s true, though it’s less true for longer things\n\n\n\n\n\n\n[Cotra][16:47]\nbut you think the superforecasters would suck at predicting end days because of the surface trends thing!\n\n\n\n\n\n\n[Yudkowsky][16:47]\nbefore I bet against Paul on MATH I would want to know that Paul wasn’t arriving at the same default I’d use, which might be drawn from trend lines there, or from a trend line in trend lines\n\n\nI mean the superforecasters did already suck once in my observation, which was AlphaGo, but I did not bet against them there, I bet with them and then updated afterwards\n\n\n\n\n\n[Christiano][16:48]\nI’d mostly try to eyeball how fast performance was improving with size; I’d think about difficulty effects (where e.g. hard problems will be flat for a while and then go up later, so you want to measure performance on a spectrum of difficulties)\n\n\n\n\n\n\n[Cotra][16:48]\nwhat if you bet against a methodology instead of against paul’s view? 
the methodology being the one i described above, of looking at the perf based on model size and then projecting model size increases by cost?\n\n\n\n\n\n\n[Christiano][16:48]\nseems safer to bet against my view\n\n\n\n\n\n\n[Cotra][16:48]\nyeah\n\n\n\n\n\n\n[Christiano][16:48]\nmostly I’d just be eyeballing size, thinking about how much people will in fact scale up (which would be great to factor out if possible), assuming performance trends hold up\n\n\nare there any other examples of surface trends vs predictable deep changes, or is AGI the only one?\n\n\n(that you have thought a lot about)\n\n\n\n\n\n\n[Cotra][16:49]\nyeah seems even better to bet on the underlying “will the model size to perf trends hold up or break upward”\n\n\n\n\n\n\n[Yudkowsky][16:49]\nso from my perspective, there’s this whole thing where *unpredictably* something breaks above trend because the first way it got done was a way where somebody could do it faster than you expected\n\n\n\n\n\n[Christiano][16:49]\n(makes sense for it to be the domain where you’ve thought a lot)\n\n\nyou mean, it’s unpredictable what will break above trend?\n\n\n\n\n\n\n[Cotra][16:49]\n[IEM](https://intelligence.org/files/IEM.pdf) has a financial example\n\n\n\n\n\n\n[Yudkowsky][16:49]\nI mean that I could not have said “*Go* will break above trend” in 2015\n\n\n\n\n\n[Christiano][16:49]\nyeah\n\n\nok, here’s another example\n\n\n\n\n\n\n[Yudkowsky][16:50]\nit feels like if I want to make a bet with imaginary Paul in 2015 then I have to bet on a portfolio\n\n\nand I also feel like as soon as we make it that concrete, Paul does not want to offer me things that I want to bet on\n\n\nbecause Paul is also like, sure, something might break upward\n\n\nI remark that I have for a long time been saying that I wish Paul had more concrete images and examples attached to *a lot of his stuff*\n\n\n\n\n\n[Cotra][16:51]\nsurely the view is about the probability of each thing breaking upward. 
or the expected number from a basket\n\n\n\n\n\n\n[Christiano][16:51]\nI mean, if you give me any way of quantifying how much stuff breaks upwards we have a bet\n\n\n\n\n\n\n[Cotra][16:51]\nnot literally that one single thing breaks upward\n\n\n\n\n\n\n[Christiano][16:51]\nI don’t understand how concreteness is an accusation here, I’ve offered 10 quantities I’d be happy to bet about, and also allowed you to name literally any other quantity you want\n\n\nand I agree that we mostly agree about things\n\n\n\n\n\n\n[Yudkowsky][16:52]\nand some of my sense here is that if Paul offered a portfolio bet of this kind, I might not take it myself, but EAs who were better at noticing their own surprise might say, “Wait, *that’s* how unpredictable Paul thinks the world is?”\n\n\nso from my perspective, it is hard to know specific anti-superforecaster predictions that happen long before terminal phase, and I am not sure we are really going to get very far there.\n\n\n\n\n\n[Christiano][16:53]\nbut you agree that the eventual prediction is anti-superforecaster?\n\n\n\n\n\n\n[Yudkowsky][16:53]\nboth of us probably have quite high inhibitions against selling conventionally priced options that are way not what a superforecaster would price them as\n\n\n\n\n\n[Cotra][16:53]\nwhy does it become so much easier to know these things and go anti-superforecaster at terminal phase?\n\n\n\n\n\n\n[Christiano][16:53]\nI assume you think that the superforecasters will continue to predict that big impactful AI applications are made by large firms spending a lot of money, even through the end of days\n\n\nI do think it’s very often easy to beat superforecasters in-domain\n\n\nlike I expect to personally beat them at most ML prediction\n\n\nand so am also happy to do bets where you defer to superforecasters on arbitrary questions and I bet against you\n\n\n\n\n\n\n[Yudkowsky][16:54]\nwell, they’re anti-prediction-market in the sense that, at the very end, bets can no longer settle. 
I’ve been surprised of late by how much AGI ruin seems to be sneaking into common knowledge; perhaps in the terminal phase the superforecasters will be like, “yep, we’re dead”. I can’t even say that in this case, Paul will disagree with them, because I expect the state on alignment to be so absolutely awful that even Paul is like “You were not supposed to do it that way” in a very sad voice.\n\n\n\n\n\n[Christiano][16:55]\nI’m just thinking about takeoff speeds here\n\n\nI do think it’s fairly likely I’m going to be like “oh no this is bad” (maybe 50%?), but not that I’m going to expect fast takeoff\n\n\nand similarly for the superforecasters\n\n\n\n\n\n \n\n\n### 9.11. Prediction disagreements and bets: Late-stage predictions, and betting against superforecasters\n\n\n \n\n\n\n[Yudkowsky][16:55]\nso, one specific prediction you made, sadly close to terminal phase but not much of a surprise there, is that the world economy must double in 4 years before the End Times are permitted to begin\n\n\n\n\n\n[Christiano][16:56]\nwell, before it doubles in 1 year…\n\n\nI think most people would call the 4 year doubling the end times\n\n\n\n\n\n\n[Yudkowsky][16:56]\nthis seems like you should also be able to point to some least impressive thing that is not permitted to occur before WGDP has doubled in 4 years\n\n\n\n\n\n[Christiano][16:56]\nand it means that the normal planning horizon includes the singularity\n\n\n\n\n\n\n[Yudkowsky][16:56]\nit may not be much but we would be *moving back* the date of first concrete disagreement\n\n\n\n\n\n[Christiano][16:57]\nI can list things I don’t think would happen first, since that’s a ton\n\n\n\n\n\n\n[Yudkowsky][16:57]\nand EAs might have a little bit of time in which to say “Paul was falsified, uh oh”\n\n\n\n\n\n[Christiano][16:57]\nthe only things that aren’t permitted are the ones that would have caused the world economy to double in 4 years\n\n\n\n\n\n\n[Yudkowsky][16:58]\nand by the same token, there are things Eliezer thinks 
you are probably not going to be able to do before you slide over the edge. a portfolio of these will have some losing options because of adverse selection against my errors of what is hard, but if I lose more than half the portfolio, this may be said to be a bad sign for Eliezer.\n\n\n\n\n\n[Christiano][16:58]\n(though those can happen at the beginning of the 4 year doubling)\n\n\n\n\n\n\n[Yudkowsky][16:58]\nthis is unfortunately *late* for falsifying our theories but it would be *progress* on a kind of bet against each other\n\n\n\n\n\n[Christiano][16:59]\nbut I feel like the things I’ll say are like fully automated construction of fully automated factories at 1-year turnarounds, and you’re going to be like “well duh”\n\n\n\n\n\n\n[Yudkowsky][16:59]\n…unfortunately yes\n\n\n\n\n\n[Christiano][16:59]\nthe reason I like betting about numbers is that we’ll probably just disagree on any given number\n\n\n\n\n\n\n[Yudkowsky][16:59]\nI don’t think I *know* numbers.\n\n\n\n\n\n[Christiano][16:59]\nit does seem like a drawback that this can just turn up object-level differences in knowledge-of-numbers more than deep methodological advantages\n\n\n\n\n\n\n[Yudkowsky][17:00]\nthe last important number I had a vague suspicion I might know was that Ethereum ought to have a significantly larger market cap in pre-Singularity equilibrium.\n\n\nand I’m not as sure of that one since El Salvador supposedly managed to use Bitcoin L2 Lightning.\n\n\n(though I did not fail to act on the former belief)\n\n\n\n\n\n[Christiano][17:01]\ndo you see why I find it weird that you think there is this deep end-times truth about AGI, that is very different from a surface-level abstraction and that will take people like Paul by surprise, without thinking there are other facts like that about the world?\n\n\nI do see how this annoying situation can come about\n\n\nand I also understand the symmetry of the situation\n\n\n\n\n\n\n[Yudkowsky][17:02]\nwe unfortunately both have the belief that the 
present world looks a lot like our being right, and therefore that the other person ought to be willing to bet against default superforecasterish projections\n\n\n\n\n\n[Cotra][17:02]\npaul says that *he* would bet against superforecasters too though\n\n\n\n\n\n\n[Christiano][17:02]\nI would in ML\n\n\n\n\n\n\n[Yudkowsky][17:02]\nlike, where specifically?\n\n\n\n\n\n[Christiano][17:02]\nor on any other topic where I can talk with EAs who know about the domain in question\n\n\nI don’t know if they have standing forecasts on things, but e.g.: (i) benchmark performance, (ii) industry size in the future, (iii) how large an LM people will train, (iv) economic impact of any given ML system like codex, (v) when robotics tasks will be plausible\n\n\n\n\n\n\n[Yudkowsky][17:03]\nI have decided that, as much as it might gain me prestige, I don’t think it’s actually the right thing for me to go spend a bunch of character points on the skills to defeat superforecasters in specific domains, and then go around doing that to prove my epistemic virtue.\n\n\n\n\n\n[Christiano][17:03]\nthat seems fair\n\n\n\n\n\n\n[Yudkowsky][17:03]\nyou don’t need to bet with *me* to prove your epistemic virtue in this way, though\n\n\nokay, but, if I’m allowed to go around asking Carl Shulman who to ask in order to get the economic impact of Codex, maybe I can also defeat superforecasters.\n\n\n\n\n\n[Christiano][17:04]\nI think the deeper disagreement is that (i) I feel like my end-of-days prediction is also basically just a default superforecaster prediction (and if you think yours is too then we can bet about what some superforecasters will say on it), (ii) I think you are leveling a much stronger “people like paul get taken by surprise by reality” claim whereas I’m just saying that I don’t like your arguments\n\n\n\n\n\n\n[Yudkowsky][17:04]\nit seems to me like the contest should be more like our intuitions in advance of doing that\n\n\n\n\n\n[Christiano][17:04]\nyeah, I think that’s fine, and 
also cheaper since research takes so much time\n\n\nI feel like those asymmetries are pretty strong though\n\n\n\n\n\n \n\n\n### 9.12. Self-duplicating factories, AI spending, and Turing test variants\n\n\n \n\n\n\n[Yudkowsky][17:05]\nso, here’s an idea that is less epistemically virtuous than our making Nicely Resolvable Bets\n\n\nwhat if we, like, talked a bunch about our off-the-cuff senses of where various AI things are going in the next 3 years\n\n\nand then 3 years later, somebody actually reviewed that\n\n\n\n\n\n[Christiano][17:06]\nI do think just saying a bunch of stuff about what we expect will happen so that *we* can look back on it would have a significant amount of the value\n\n\n\n\n\n\n[Yudkowsky][17:06]\nand any time the other person put a thumbs-up on the other’s prediction, that prediction coming true was not taken to distinguish them\n\n\n\n\n\n[Cotra][17:06]\ni’d suggest doing this in a format other than discord for posterity\n\n\n\n\n\n\n[Yudkowsky][17:06]\neven if the originator was like HOW IS THAT ALSO A PREDICTION OF YOUR THEORY\n\n\nwell, Discord has worked better than some formats\n\n\n\n\n\n[Cotra][17:07]\nsomething like a spreadsheet seems easier for people to look back on and score and stuff\n\n\ndiscord transcripts are pretty annoying to read\n\n\n\n\n\n\n[Yudkowsky][17:08]\nsomething like a spreadsheet seems liable to be high-cost and not actually happen\n\n\n\n\n\n[Christiano][17:08]\nI think a conversation is probably easier and about as good for our purposes though?\n\n\n\n\n\n\n[Cotra][17:08]\nok fair\n\n\n\n\n\n\n[Yudkowsky][17:08]\nI think money can be inserted into humans in order to turn Discord into spreadsheets\n\n\n\n\n\n[Christiano][17:08]\nand it’s possible we will both think we are right in retrospect\n\n\nand that will also be revealing\n\n\n\n\n\n\n[Yudkowsky][17:09]\nbut, besides that, I do want to boop on the point that I feel like Paul should be able to predict intuitively, rather than with necessity, things that 
should not happen before the world economy doubled in 4 years\n\n\n\n\n\n[Christiano][17:09]\nit may also turn up some quantitative differences of view\n\n\nthere are lots of things I think won’t happen before the world economy has doubled in 4 years\n\n\n\n\n\n\n[Yudkowsky][17:09]\nbecause on my model, as we approach the end times, AI was still pretty partial and also the world economy was lolnoping most of the inputs a sensible person would accept from it and prototypes weren’t being commercialized and stuff was generally slow and messy\n\n\n\n\n\n[Christiano][17:09]\nprototypes of factories building factories in <2 years\n\n\n\n\n\n\n[Yudkowsky][17:10]\n“AI was still pretty partial” leads it to not do interesting stuff that Paul can rule out\n\n\n\n\n\n[Christiano][17:10]\nlike I guess I think tesla will try, and I doubt it will be just tesla\n\n\n\n\n\n\n[Yudkowsky][17:10]\nbut the other parts of that permit AI to do interesting stuff that Paul can rule out\n\n\n\n\n\n[Christiano][17:10]\nautomated researchers who can do ML experiments from 2020 without human input\n\n\n\n\n\n\n[Yudkowsky][17:10]\nokay, see, that whole “factories building factories” thing just seems so very much *after* the End Times to me\n\n\n\n\n\n[Christiano][17:10]\nyeah, we should probably only talk about cognitive work\n\n\nsince you think physical work will be very slow\n\n\n\n\n\n\n[Yudkowsky][17:11]\nokay but not just that, it’s a falsifiable prediction\n\n\nit is something that lets Eliezer be wrong in advance of the End Times\n\n\n\n\n\n[Christiano][17:11]\nwhat’s a falsifiable prediction?\n\n\n\n\n\n\n[Yudkowsky][17:11]\nif we’re in a world where Tesla is excitingly gearing up to build a fully self-duplicating factory including its mining inputs and chips and solar panels and so on, we’re clearly in the Paulverse and not in the Eliezerverse!\n\n\n\n\n\n[Christiano][17:12]\nyeah\n\n\nI do think we’ll see that before the end times\n\n\njust not before 4 year 
doublings\n\n\n\n\n\n\n[Yudkowsky][17:12]\nthis unfortunately only allows you to be right, and not for me to be right, but I think there are also things you legit only see in the Eliezerverse!\n\n\n\n\n\n[Christiano][17:12]\nI mean, I don’t think they will be doing mining for a long time because it’s cheap\n\n\n\n\n\n\n[Yudkowsky][17:12]\nthey are unfortunately late in the game but they exist at all!\n\n\nand being able to state them is progress on this project!\n\n\n\n\n\n[Christiano][17:13]\nbut fully-automated factories first, and then significant automation of the factory-building process\n\n\nI do expect to see\n\n\nI’m generally pretty bullish on industrial robotics relative to you I think, even before the crazy stuff?\n\n\nbut you might not have a firm view\n\n\nlike I expect to have tons of robots doing all kinds of stuff, maybe cutting human work in manufacturing 2x, with very modest increases in GDP resulting from that in particular\n\n\n\n\n\n\n[Yudkowsky][17:13]\nso, like, it doesn’t surprise me very much if Tesla manages to fully automate a factory that takes in some relatively processed inputs including refined metals and computer chips, and outputs a car? 
and by the same token I expect that has very little impact on GDP.\n\n\n\n\n\n[Christiano][17:14]\nrefined metals are almost none of the cost of the factory\n\n\nand also tesla isn’t going to be that vertically integrated\n\n\nthe fabs will separately continue to be more and more automated\n\n\nI expect to have robot cars driving everywhere, and robot trucks\n\n\nanother 2x fall in humans required for warehouses\n\n\nelimination of most brokers involved in negotiating shipping\n\n\n\n\n\n\n[Yudkowsky][17:15]\nif despite the fabs being more and more automated, somehow things are managing not to cost less and less, and that sector of the economy is not really growing very much, is that more like the Eliezerverse than the Paulverse?\n\n\n\n\n\n[Christiano][17:15]\nmost work in finance and loan origination\n\n\n\n\n\n\n[Yudkowsky][17:15]\nthough this is something of a peripheral prediction to AGI core issues\n\n\n\n\n\n[Christiano][17:16]\nyeah, I think if you cut the humans to do X by 2, but then the cost falls much less than the number you’d naively expect (from saving on the human labor and paying for the extra capital), then that’s surprising to me\n\n\nI mean if it falls half as much as you’d expect on paper I’m like “that’s a bit surprising” rather than having my mind blown, if it doesn’t fall I’m more surprised\n\n\nbut that was mostly physical economy stuff\n\n\noh wait, I was making positive predictions now, physical stuff is good for that I think?\n\n\nsince you don’t expect it to happen?\n\n\n\n\n\n\n[Yudkowsky][17:17]\n…this is not your fault but I wish you’d asked me to produce my “percentage of fall vs. 
paper calculation” estimate before you produced yours\n\n\nmy mind is very whiffy about these things and I am not actually able to deanchor on your estimate ![😦](https://s.w.org/images/core/emoji/14.0.0/72x72/1f626.png)\n\n\n\n\n\n[Christiano][17:17]\nmakes sense, I wonder if I should just spoiler\n\n\none benefit of discord\n\n\n\n\n\n\n[Yudkowsky][17:18]\nyeah that works too!\n\n\n\n\n\n[Christiano][17:18]\na problem for prediction is that I share some background view about insane inefficiency/inadequacy/decadence/silliness\n\n\nso these predictions are all tempered by that\n\n\nbut still seem like there are big residual disagreements\n\n\n\n\n\n\n[Yudkowsky][17:19]\nsighgreat\n\n\n\n\n\n[Christiano][17:19]\nsince you have way more of that than I do\n\n\n\n\n\n\n[Yudkowsky][17:19]\nnot your fault but\n\n\n\n\n\n[Christiano][17:19]\nI think that the AGI stuff is going to be a gigantic megaproject despite that\n\n\n\n\n\n\n[Yudkowsky][17:19]\nI am not shocked by the AGI stuff being a gigantic megaproject\n\n\nit’s not above the bar of survival but, given other social optimism, it permits death with more dignity than by other routes\n\n\n\n\n\n[Christiano][17:20]\nwhat if spending is this big:\n\n\n\nGoogle invests $100B training a model, total spending across all of industry is way bigger\n\n\n\n\n\n\n\n[Yudkowsky][17:20]\nooooh\n\n\nI do start to be surprised if, come the end of the world, AGI is having more invested in it than a TSMC fab\n\n\nthough, not… *super* surprised?\n\n\nalso I am at least a little surprised before then\n\n\nactually I should probably have been spoiling those statements myself but my expectation is that Paul’s secret spoiler is about\n\n\n\n$10 trillion dollars or something equally totally shocking to an Eliezer\n\n\n\n\n\n\n[Christiano][17:22]\nmy view on that level of spending is\n\n\n\nit’s an only slightly high-end estimate for spending by someone on a single model, but that in practice there will be ways of dividing more across 
different firms, and that the ontology of single-model will likely be slightly messed up (e.g. by OpenAI Five-style surgery). Also if it’s that much then it likely involves big institutional changes and isn’t at google.\n\n\n\nI read your spoiler\n\n\nmy estimate for total spending for the whole project of making TAI, including hardware and software manufacturing and R&D, the big datacenters, etc.\n\n\n\nis in the ballpark of $10T, though it’s possible that it will be undercounted several times due to wage stickiness for high-end labor\n\n\n\n\n\n\n\n[Yudkowsky][17:24]\nI think that as\n\n\n\nspending on particular AGI megaprojects starts to go past $50 billion, it’s not especially ruled out per se by things that I think I know for sure, but I feel like a third-party observer should justly start to weakly think, ‘okay, this is looking at least a little like the Paulverse rather than the Eliezerverse’, and as we get to $10 trillion, that is not absolutely ruled out by the Eliezerverse but it was a whoole lot more strongly predicted by the Paulverse, maybe something like 20x unless I’m overestimating how strongly Paul predicts that\n\n\n\n\n\n\n[Christiano][17:24]\nProposed modification to the “speculate about the future to generate kind-of-predictions” methodology: we make shit up, then later revise based on points others made, and maybe also get Carl to sanity-check and decide which of his objections we agree with. Then we can separate out the “how good are intuitions” claim (with fast feedback) from the all-things-considered how good was the “prediction”\n\n\n\n\n\n\n[Yudkowsky][17:25]\nokay that hopefully allows me to read Paul’s spoilers… no I’m being silly. 
@ajeya please read all the spoilers and say if it’s time for me to read his\n\n\n\n\n\n[Cotra][17:25]\nyou can read his latest\n\n\n\n\n\n\n[Christiano][17:25]\nI’d guess it’s fine to read all of them?\n\n\n\n\n\n\n[Cotra][17:26]\nyeah sorry that’s what i meant\n\n\n\n\n\n\n[Yudkowsky][17:26]\nwhat should I say more about before reading earlier ones?\n\n\nah k\n\n\n\n\n\n[Christiano][17:26]\nMy $10T estimate was after reading yours (didn’t offer an estimate on that quantity beforehand), though that’s the kind of ballpark I often think about, maybe we should just spoiler only numbers so that context is clear ![🙂](https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png)\n\n\nI think fast takeoff gets significantly more likely as you push that number down\n\n\n\n\n\n\n[Yudkowsky][17:27]\nso, may I now ask what starts to look to you like “oh damn I am in the Eliezerverse”?\n\n\n\n\n\n[Christiano][17:28]\nbig mismatches between that AI looks technically able to do and what AI is able to do, though that’s going to need a lot of work to operationalize\n\n\nI think low growth of AI overall feels like significant evidence for Eliezerverse (even if you wouldn’t make that prediction), since I’m forecasting it rising to absurd levels quite fast whereas your model is consistent with it staying small\n\n\nsome intuition about AI looking very smart but not able to do much useful until it has the whole picture, I guess this can be combined with the first point to be something like—AI looks really smart but it’s just not adding much value\n\n\nall of those seem really hard\n\n\n\n\n\n\n[Cotra][17:30]\nstrong upward trend breaks on benchmarks seems like it should be a point toward eliezer verse, even if eliezer doesn’t want to bet on a specific one?\n\n\nespecially breaks on model size -> perf trends rather than calendar time trends\n\n\n\n\n\n\n[Christiano][17:30]\nI think that any big break on model size -> perf trends are significant 
evidence\n\n\n\n\n\n\n[Cotra][17:31]\nmeta-learning working with small models?\n\n\ne.g. model learning-to-learn video games and then learning a novel one in a couple subjective hours\n\n\n\n\n\n\n[Christiano][17:31]\nI think algorithmic/architectural changes that improve loss as much as 10x’ing model, for tasks that look like they at least *should* have lots of economic value\n\n\n(even if they don’t end up having lots of value because of deployment bottlenecks)\n\n\nis the meta-learning thing an Eliezer prediction?\n\n\n(before the end-of-days)\n\n\n\n\n\n\n[Cotra][17:32]\nno but it’d be an anti-bio-anchor positive trend break and eliezer thinks those should happen more than we do\n\n\n\n\n\n\n[Christiano][17:32]\nfair enough\n\n\na lot of these things are about # of times that it happens rather than whether it happens at all\n\n\n\n\n\n\n[Cotra][17:32]\nyeah\n\n\nbut meta-learning is special as the most plausible long horizon task\n\n\n\n\n\n\n[Christiano][17:33]\ne.g. maybe in any given important task I expect a single “innovation” that’s worth 10x model size? but that it still represents a minority of total time?\n\n\nhm, AI that can pass a competently administered turing test without being economically valuable?\n\n\nthat’s one of the things I think is ruled out before 4 year doubling, though Eliezer probably also doesn’t expect it\n\n\n\n\n\n| |\n| --- |\n| [Yudkowsky: 👍] |\n\n\n\n\n\n\n\n[Cotra][17:34]\nwhat would this test do to be competently administered? 
like casual chatbots seem like they have reasonable probability of fooling someone for a few mins now\n\n\n\n\n\n\n[Christiano][17:34]\nI think giant google-automating-google projects without big external economic impacts\n\n\n\n\n\n\n[Cotra][17:34]\nwould it test knowledge, or just coherence of some kind?\n\n\n\n\n\n\n[Christiano][17:35]\nit’s like a smart-ish human (say +2 stdev at this task) trying to separate out AI from smart-ish human, iterating a few times to learn about what works\n\n\nI mean, the basic ante is that the humans are *trying* to win a turing test, without that I wouldn’t even call it a turing test\n\n\ndunno if any of those are compelling @Eliezer\n\n\nsomething that passes a like “are you smart?” test administered by a human for 1h, where they aren’t trying to specifically tell if you are AI\n\n\njust to see if you are as smart as a human\n\n\nI mean, I guess the biggest giveaway of all would be if there is human-level (on average) AI as judged by us, but there’s no foom yet\n\n\n\n\n\n\n[Yudkowsky][17:37]\nI think we both don’t expect that one before the End of Days?\n\n\n\n\n\n[Christiano][17:37]\nor like, no crazy economic impact\n\n\nI think we both expect that to happen before foom?\n\n\nbut the “on average” is maybe way too rough a thing to define\n\n\n\n\n\n\n[Yudkowsky][17:37]\noh, wait, I missed that it wasn’t the full Turing Test\n\n\n\n\n\n[Christiano][17:37]\nwell, I suggested both\n\n\nthe lamer one is more plausible\n\n\n\n\n\n\n[Yudkowsky][17:38]\nfull Turing Test happeneth not before the End Times, on Eliezer’s view, and not before the first 4-year doubling time, on Paul’s view, and the first 4-year doubling happeneth not before the End Times, on Eliezer’s view, so this one doesn’t seem very useful\n\n\n\n\n \n\n\n### 9.13. GPT-*n* and small architectural innovations vs. 
large ones\n\n\n \n\n\n\n[Christiano][17:39]\nI feel like the biggest subjective thing is that I don’t feel like there is a “core of generality” that GPT-3 is missing\n\n\nI just expect it to gracefully glide up to a human-level foom-ing intelligence\n\n\n\n\n\n\n[Yudkowsky][17:39]\nthe “are you smart?” test seems perhaps passable by GPT-6 or its kin, which I predict to contain at least one major architectural difference over GPT-3 that I could, pre-facto if anyone asked, rate as larger than a different normalization method\n\n\nbut by fooling the humans more than by being smart\n\n\n\n\n\n[Christiano][17:39]\nlike I expect GPT-5 would foom if you ask it but take a long time\n\n\n\n\n\n\n[Yudkowsky][17:39]\nthat sure is an underlying difference\n\n\n\n\n\n[Christiano][17:39]\nnot sure how to articulate what Eliezer expects to see here though\n\n\nor like what the difference is\n\n\n\n\n\n\n[Cotra][17:39]\nsomething that GPT-5 or 4 shouldn’t be able to do, according to eliezer?\n\n\nwhere Paul is like “sure it could do that”?\n\n\n\n\n\n\n[Christiano][17:40]\nI feel like GPT-3 clearly has some kind of “doesn’t really get what’s going on” energy\n\n\nand I expect that to go away\n\n\nwell before the end of days\n\n\nso that it seems like a kind-of-dumb person\n\n\n\n\n\n\n[Yudkowsky][17:40]\nI expect it to go away before the end of days\n\n\nbut with there having been a big architectural innovation, not Stack More Layers\n\n\n\n\n\n[Christiano][17:40]\nyeah\n\n\nwhereas I expect layer stacking + maybe changing loss (since logprob is too noisy) is sufficient\n\n\n\n\n\n\n[Yudkowsky][17:40]\nif you name 5 possible architectural innovations I can call them small or large\n\n\n\n\n\n[Christiano][17:41]\n1. replacing transformer attention with DB nearest-neighbor lookup over an even longer context\n\n\n\n\n\n\n[Yudkowsky][17:42]\nokay 1’s a bit borderline\n\n\n\n\n\n[Christiano][17:42]\n2. adding layers that solve optimization problems internally (i.e. 
the weights and layer N activations define an optimization problem, the layer N+1 solves it) or maybe simulates an ODE\n\n\n\n\n\n\n[Yudkowsky][17:42]\nif it’s 3x longer context, no biggie, if it’s 100x longer context, more of a game-changer\n\n\n2 – big change\n\n\n\n\n\n[Christiano][17:42]\nI’m imagining >100x if you do that\n\n\n3. universal transformer XL, where you reuse activations from one context in the next context (RNN style) and share weights across layers\n\n\n\n\n\n\n[Yudkowsky][17:43]\nI do not predict 1 works because it doesn’t seem like an architectural change that moves away from what I imagined to be the limits, but it’s a big change if it 100xs the window\n\n\n3 – if it is only that single change and no others, I call it not a large change relative to transformer XL. Transformer XL itself however was an example of a large change – it didn’t have a large effect but it was what I’d call a large change.\n\n\n\n\n\n[Christiano][17:45]\n4. Internal stochastic actions trained with reinforce\n\n\nI mean, is mixture of experts or switch another big change?\n\n\nare we just having big changes non-stop?\n\n\n\n\n\n\n[Yudkowsky][17:45]\n4 – I don’t know if I’m imagining right but it sounds large\n\n\n\n\n\n[Christiano][17:45]\nit sounds from these definitions like the current rate of big changes is > 1/year\n\n\n\n\n\n\n[Yudkowsky][17:46]\n5 – mixture of experts: as with 1, I’m tempted to call it a small change, but that’s because of my model of it as doing the same thing, not because it isn’t in a certain sense a quite large move away from Stack More Layers\n\n\nI mean, it is not very hard to find a big change to try?\n\n\nfinding a big change that works is much harder\n\n\n\n\n\n[Christiano][17:46]\nseveral of these are improvements\n\n\n\n\n\n\n[Yudkowsky][17:47]\none gets a minor improvement from a big change rather more often than a big improvement from a big change\n\n\nthat’s why dinosaurs didn’t foom\n\n\n\n\n\n[Christiano][17:47]\nlike transformer 
-> MoE -> switch transformer is about as big an improvement as LSTM vs transformer\n\n\nso if we all agree that big changes are happening multiple times per year, then I guess that’s not the difference in prediction\n\n\nis it about the size of gains from individual changes or something?\n\n\nor maybe: if you take the scaling laws for transformers, are the models with impact X “on trend,” with changes just keeping up or maybe buying you 1-2 oom of compute, or are they radically better / scaling much better?\n\n\nthat actually feels most fundamental\n\n\n\n\n\n\n[Yudkowsky][17:49]\nI had not heard that transformer -> switch transformer was as large an improvement as lstm -> transformers after a year or two, though maybe you’re referring to a claimed 3x improvement and comparing that to the claim that if you optimize LSTMs as hard as transformers they come within 3x (I have not examined these claims in detail, they sound a bit against my prior, and I am a bit skeptical of both of them)\n\n\nso remember that from my perspective, I am fighting an adverse selection process and the Law of Earlier Success\n\n\n\n\n\n[Christiano][17:50]\nI think it’s actually somewhat smaller\n\n\n\n\n\n\n[Yudkowsky][17:51]\nif you treat GPT-3 as a fixed thingy and imagine scaling it in the most straightforward possible way, then I have a model of what’s going on in there and I don’t think that most direct possible way of scaling gets you past GPT-3 lacking a deep core\n\n\nsomebody can come up and go, “well, what about this change that nobody tried yet?” and I can be like, “ehhh, that particular change does not get at what I suspect the issues are”\n\n\n\n\n\n[Christiano][17:52]\nI feel like the framing is: paul says that something is possible with “stack more layers” and eliezer says it isn’t. We both agree that you can’t literally stack more layers and have to sometimes make tweaks, and also that you will scale faster if you make big changes. 
But it seems like for Paul that means (i) changes to stay on the old trend line, (ii) changes that trade off against modest amounts of compute\n\n\nso maybe we can talk about that?\n\n\n\n\n\n\n[Yudkowsky][17:52]\nwhen it comes to predicting what happens in 2 years, I’m not just up against people trying a broad range of changes that I can’t foresee in detail, I’m also up against a Goodhart’s Curse on the answer being a weird trick that worked better than I would’ve expected in advance\n\n\n\n\n\n[Christiano][17:52]\nbut then it seems like we may just not know, e.g. if we were talking lstm vs transformer, no one is going to run experiments with the well-tuned lstm because it’s still just worse than a transformer (though they’ve run enough experiments to know how important tuning is, and the brittleness is much of why no one likes it)\n\n\n\n\n\n\n[Yudkowsky][17:53]\nI would not have predicted Transformers to be a huge deal if somebody described them to me in advance of having ever tried it out. 
I think that’s because predicting the future is hard not because I’m especially stupid.\n\n\n\n\n\n[Christiano][17:53]\nI don’t feel like anyone could predict that being a big deal\n\n\nbut I do think you could predict “there will be some changes that improve stability / make models slightly better”\n\n\n(I mean, I don’t feel like any of the actual humans on earth could have, some hypothetical person could)\n\n\n\n\n\n\n[Yudkowsky][17:57]\nwhereas what I’m trying to predict is more like “GPT-5 in order to start-to-awaken needs a change via which it, in some sense, can do a different thing, that is more different than the jump from GPT-1 to GPT-3; and examples of things with new components in them abound in Deepmind, like Alpha Zero having not the same architecture as the original AlphaGo; but at the same time I’m also trying to account for being up against this very adversarial setup where a weird trick that works much better than I expect may be the thing that makes GPT-5 able to do a different thing”\n\n\nthis may seem Paul-unfairish because any random innovations that come along, including big changes that cause small improvements, would tend to be swept up into GPT-5 even if they made no more deep difference than the whole thing with MoE\n\n\nso it’s hard to bet on\n\n\nbut I also don’t feel like it – totally lacks Eliezer-vs-Paul-ness if you let yourself sort of relax about that and just looked at it?\n\n\nalso I’m kind of running out of energy, sorry\n\n\n\n\n\n[Christiano][18:03]\nI think we should be able to get something here eventually\n\n\nseems good to break though\n\n\nthat was a lot of arguing for one day\n\n\n\n\n\n \n\n\n\nThe post [Christiano, Cotra, and Yudkowsky on AI progress](https://intelligence.org/2021/11/25/christiano-cotra-and-yudkowsky-on-ai-progress/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-25T18:33:22Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": 
"b501ec96915d3e79238cbaf647b93a4a", "title": "Yudkowsky and Christiano discuss “Takeoff Speeds”", "url": "https://intelligence.org/2021/11/22/yudkowsky-and-christiano-discuss-takeoff-speeds/", "source": "miri", "source_type": "blog", "text": "This is a transcription of Eliezer Yudkowsky responding to Paul Christiano’s [Takeoff Speeds](https://sideways-view.com/2018/02/24/takeoff-speeds/) live on Sep. 14, followed by a conversation between Eliezer and Paul. This discussion took place after Eliezer’s [conversation](https://www.lesswrong.com/posts/Hook3FXvsigcJgpgw/ngo-and-yudkowsky-on-ai-capability-gains-1) with Richard Ngo, and was prompted by an earlier request by Richard Ngo that Eliezer respond to Paul on Takeoff Speeds.\n\n\nColor key:\n\n\n\n\n\n| | |\n| --- | --- |\n|  Chat by Paul and Eliezer  | Other chat |\n\n\n\n \n\n\n### 5.5. Comments on “Takeoff Speeds”\n\n\n \n\n\n\n[Yudkowsky][16:52]\nmaybe I’ll try liveblogging some here in the meanwhile\n\n\n\n\n#### \n\n\n#### Slower takeoff means faster progress\n\n\n\n[Yudkowsky][16:57]\n\n> \n> The main disagreement is not about what will happen once we have a superintelligent AI, it’s about what will happen *before* we have a superintelligent AI. So slow takeoff seems to mean that AI has a larger impact on the world, sooner.\n> \n> \n> \n\n\n![](https://unstylizedcom.files.wordpress.com/2018/02/takeoffimage-0011.png?w=748)\nIt seems to me to be disingenuous to phrase it this way, given that slow-takeoff views usually imply that AI has a large impact later relative to right now (2021), even if they imply that AI impacts the world “earlier” relative to “when superintelligence becomes reachable”.\n\n\n“When superintelligence becomes reachable” is *not* a fixed point in time that doesn’t depend on what you believe about cognitive scaling. The correct graph is, in fact, the one where the “slow” line starts a bit before “fast” peaks and ramps up slowly, reaching a high point later than “fast”. 
It’s a nice try at reconciliation with the imagined Other, but it fails and falls flat.\n\n\nThis may seem like a minor point, but points like this do add up.\n\n\n\n> \n> In the fast takeoff scenario, weaker AI systems may have significant impacts but they are nothing compared to the “real” AGI. Whoever builds AGI has a decisive strategic advantage. Growth accelerates from 3%/year to 3000%/year without stopping at 30%/year. And so on.\n> \n> \n> \n\n\nThis again shows failure to engage with the Other’s real viewpoint. My mainline view is that growth stays at 5%/year and then everybody falls over dead in 3 seconds and the world gets transformed into paperclips; there’s never a point with 3000%/year.\n\n\n\n\n \n\n\n#### Operationalizing slow takeoff\n\n\n\n[Yudkowsky][17:01]\n\n> \n> *There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.*\n> \n> \n> \n\n\nIf we allow that consuming and transforming the solar system over the course of a few days is “the first 1 year interval in which world output doubles”, then I’m happy to argue that there won’t be a 4-year interval with world economic output doubling before then. This, indeed, seems like a massively overdetermined point to me. That said, again, the phrasing is not conducive to conveying the Other’s real point of view.\n\n\n\n> \n> I believe that before we have incredibly powerful AI, we will have AI which is merely very powerful.\n> \n> \n> \n\n\nStatements like these are very often “true, but not the way the person visualized them”. Before anybody built the first critical nuclear pile in a squash court at the University of Chicago, was there a pile that was almost but not quite critical? Yes, one hour earlier. Did people already build nuclear systems and experiment with them? Yes, but they didn’t have much in the way of net power output. Did the Wright Brothers build prototypes before the Flyer? 
Yes, but they weren’t prototypes that flew but 80% slower.\n\n\nI guarantee you that, whatever the *fast* takeoff scenario, there will be some way to look over the development history, and nod wisely and say, “Ah, yes, see, this was not unprecedented, here are these earlier systems which presaged the final system!” Maybe you could even look back to today and say that about GPT-3, yup, totally presaging stuff all over the place, great. But it isn’t transforming society because it’s not over the social-transformation threshold.\n\n\nAlphaFold presaged AlphaFold 2 but AlphaFold 2 is good enough to start replacing other ways of determining protein conformations and AlphaFold is not; and then neither of those has much impacted the real world, because in the real world we can already design a vaccine in a day and the rest of the time is bureaucratic time rather than technology time, and *that* goes on until we have an AI over the threshold to bypass bureaucracy.\n\n\nBefore there’s an AI that can act while fully concealing its acts from the programmers, there will be an AI (albeit perhaps only 2 hours earlier) which can act while only concealing 95% of the meaning of its acts from the operators.\n\n\nAnd that AI will not actually originate any actions, because it doesn’t want to get caught; there’s a discontinuity in the instrumental incentives between expecting 95% obscuration, being moderately sure of 100% obscuration, and being very certain of 100% obscuration.\n\n\nBefore that AI grasps the big picture and starts planning to avoid actions that operators detect as bad, there will be some little AI that partially grasps the big picture and tries to avoid some things that would be detected as bad; and the operators will (mainline) say “Yay what a good AI, it knows to avoid things we think are bad!” or (death with unrealistic amounts of dignity) say “oh noes the prophecies are coming true” and back off and start trying to align it, but they will not be able to align it, 
and if they don’t proceed anyways to destroy the world, somebody else will proceed anyways to destroy the world.\n\n\nThere is always some step of the process that you can point to which is continuous on some level.\n\n\nThe real world is allowed to do discontinuous things to you anyways.\n\n\nThere is not necessarily a presage of 9/11 where somebody flies a small plane into a building and kills 100 people, before anybody flies 4 big planes into 3 buildings and kills 3000 people; and even if there is some presaging event like that, which would not surprise me at all, the rest of the world’s response to the two cases was evidently discontinuous. You do not necessarily wake up to a news story that is 10% of the news story of 2001/09/11, one year before 2001/09/11, written in 10% of the font size on the front page of the paper.\n\n\nPhysics is continuous but it doesn’t always yield things that “look smooth to a human brain”. Some kinds of processes *converge* to continuity in strong ways where you can throw discontinuous things in them and they still end up continuous, which is among the reasons why I expect world GDP to stay on trend up until the world ends abruptly; because world GDP is one of those things that wants to stay on a track, and an AGI building a nanosystem can go off that track without being pushed back onto it.\n\n\n\n> \n> In particular, this means that incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out).\n> \n> \n> \n\n\nLike the way they’re freaking out about Covid (itself a nicely smooth process that comes in locally pretty predictable waves) by going doobedoobedoo and letting the FDA carry on its leisurely pace; and not scrambling to build more vaccine factories, now that the rich countries have mostly got theirs? Does this sound like a statement from a history book, or from an EA imagining an unreal world where lots of other people behave like EAs? 
There is a pleasure in imagining a world where suddenly a Big Thing happens that proves we were right and suddenly people start paying attention to our thing, the way we imagine they should pay attention to our thing, now that it’s attention-grabbing; and then suddenly all our favorite policies are on the table!\n\n\nYou could, in a sense, say that our world is freaking out about Covid; but it is not freaking out in anything remotely like the way an EA would freak out; and all the things an EA would immediately do if an EA freaked out about Covid, are not even on the table for discussion when politicians meet. They have their own ways of reacting. (Note: this is not commentary on hard vs soft takeoff per se, just a general commentary on the whole document seeming to me to… fall into a trap of finding self-congruent things to imagine and imagining them.)\n\n\n\n\n\n####  The basic argument\n\n\n\n[Yudkowsky][17:22]\n\n> \n> Before we have an incredibly intelligent AI, we will probably have a slightly worse AI.\n> \n> \n> \n\n\nThis is very often the sort of thing where you can look back and say that it was true, in some sense, but that this ended up being irrelevant because the slightly worse AI wasn’t what provided the exciting result which led to a boardroom decision to go all in and invest $100M on scaling the AI.\n\n\nIn other words, it is the sort of argument where the premise is allowed to be true if you look hard enough for a way to say it was true, but the conclusion ends up false because it wasn’t the relevant kind of truth.\n\n\n\n> \n> A slightly-worse-than-incredibly-intelligent AI would radically transform the world, leading to growth (almost) as fast and military capabilities (almost) as great as an incredibly intelligent AI.\n> \n> \n> \n\n\nThis strikes me as a massively invalid reasoning step. 
Let me count the ways.\n\n\nFirst, there is a step not generally valid from supposing that because a previous AI is a technological precursor which has 19 out of 20 critical insights, it has 95% of the later AI’s IQ, applied to similar domains. When you count stuff like “multiplying tensors by matrices” and “ReLUs” and “training using TPUs” then AlphaGo only contained a very small amount of innovation relative to previous AI technology, and yet it broke trends on Go performance. You could point to all kinds of incremental technological precursors to AlphaGo in terms of AI technology, but they wouldn’t be smooth precursors on a graph of Go-playing ability.\n\n\nSecond, there’s discontinuities of the environment to which intelligence can be applied. 95% concealment is not the same as 100% concealment in its strategic implications; an AI capable of 95% concealment bides its time and hides its capabilities, an AI capable of 100% concealment strikes. An AI that can design nanofactories that aren’t good enough to, euphemistically speaking, create two cellwise-identical strawberries and put them on a plate, is one that (its operators know) would earn unwelcome attention if its earlier capabilities were demonstrated, and those capabilities wouldn’t save the world, so the operators bide their time. The AGI tech will, I mostly expect, work for building self-driving cars, but if it does not also work for manipulating the minds of bureaucrats (which is not advised for a system you are trying to keep corrigible and aligned because human manipulation is the most dangerous domain), the AI is not able to put those self-driving cars on roads. What good does it do to design a vaccine in an hour instead of a day? 
Vaccine design times are no longer the main obstacle to deploying vaccines.\n\n\nThird, there’s the *entire thing with recursive self-improvement*, which, no, is *not* something humans have experience with, we do not have access to and documentation of our own source code and the ability to branch ourselves and try experiments with it. The technological precursor of an AI that designs an improved version of itself, may perhaps, in the fantasy of 95% intelligence, be an AI that was being internally deployed inside Deepmind on a dozen other experiments, tentatively helping to build smaller AIs. Then the next generation of that AI is deployed on itself, produces an AI substantially better at rebuilding AIs, it rebuilds itself, they get excited and dump in 10X the GPU time while having a serious debate about whether or not to alert Holden (they decide against it), that builds something deeply general instead of shallowly general, that figures out there are humans and it needs to hide capabilities from them, and covertly does some actual deep thinking about AGI designs, and builds a hidden version of itself elsewhere on the Internet, which runs for longer and steals GPUs and tries experiments and gets to the superintelligent level.\n\n\nNow, to be very clear, this is not the only line of possibility. And I emphasize this because I think there’s a common failure mode where, when I try to sketch a concrete counterexample to the claim that smooth technological precursors yield smooth outputs, people imagine that *only this exact concrete scenario* is *the lynchpin* of Eliezer’s whole worldview and *the big key thing that Eliezer thinks is important* and that *the smallest deviation from it they can imagine* thereby obviates my worldview. This is not the case here. 
I am simply exhibiting non-ruled-out models which obey the premise “there was a precursor containing 95% of the code” and which disobey the conclusion “there were precursors with 95% of the environmental impact”, thereby showing this for an invalid reasoning step.\n\n\nThis is also, of course, as Sideways View admits but says “eh it was just the one time”, not true about chimps and humans. Chimps have 95% of the brain tech (at least), but not 10% of the environmental impact.\n\n\nA very large amount of this whole document, from my perspective, is just trying over and over again to pump the invalid intuition that design precursors with 95% of the technology should at least have 10% of the impact. There are a *lot* of cases in the history of startups and the world where this is false. I am having trouble thinking of a clear case in point where it is *true*. Where’s the earlier company that had 95% of Jeff Bezos’s ideas and now has 10% of Amazon’s market cap? Where’s the earlier crypto paper that had all but one of Satoshi’s ideas and which spawned a cryptocurrency a year before Bitcoin which did 10% as many transactions? Where’s the nonhuman primate that learns to drive a car with only 10x the accident rate of a human driver, since (you could argue) that’s mostly visuo-spatial skills without much visible dependence on complicated abstract general thought? Where’s the chimpanzees with spaceships that get 10% of the way to the Moon?\n\n\nWhen you get smooth input-output conversions they’re not usually conversions from technology->cognition->impact!\n\n\n\n\n \n\n\n#### Humans vs. 
chimps\n\n\n\n[Yudkowsky][18:38]\n\n> \n> *Summary of my response: chimps are nearly useless because they aren’t optimized to be useful, not because evolution was trying to make something useful and wasn’t able to succeed until it got to humans.*\n> \n> \n> \n\n\nChimps are nearly useless because they’re not general, and doing anything on the scale of building a nuclear plant requires mastering so many different nonancestral domains that it’s no wonder natural selection didn’t happen to separately train any single creature across enough different domains that it had evolved to solve every kind of domain-specific problem involved in solving nuclear physics and chemistry and metallurgy and thermics in order to build the first nuclear plant in advance of any old nuclear plants existing.\n\n\nHumans are general enough that the same braintech selected just for chipping flint handaxes and making water-pouches and outwitting other humans, happened to be general enough that it could scale up to solving all the problems of building a nuclear plant – albeit with some added cognitive tech that didn’t require new brainware, and so could happen incredibly fast relative to the generation times for evolutionarily optimized brainware.\n\n\nNow, since neither humans nor chimps were optimized to be “useful” (general), and humans just wandered into a sufficiently general part of the space that it cascaded up to wider generality, we should legit expect the curve of generality to look at least somewhat different if we’re optimizing for that.\n\n\nEg, right now people are trying to optimize for generality with AIs like Mu Zero and GPT-3.\n\n\nIn both cases we have a weirdly shallow kind of generality. 
Neither is as smart or as deeply general as a chimp, but they are respectively better than chimps at a wide variety of Atari games, or a wide variety of problems that can be superposed onto generating typical human text.\n\n\nThey are, in a sense, more general than a biological organism at a similar stage of cognitive evolution, with much less complex and architected brains, in virtue of having been trained, not just on wider datasets, but on bigger datasets using gradient-descent memorization of shallower patterns, so they can cover those wide domains while being stupider and lacking some deep aspects of architecture.\n\n\nIt is not clear to me that we can go from observations like this, to conclude that there is a dominant mainline probability for how the future clearly ought to go and that this dominant mainline is, “Well, before you get human-level depth and generalization of general intelligence, you get something with 95% depth that covers 80% of the domains for 10% of the pragmatic impact”.\n\n\n…or whatever the concept is here, because this whole conversation is, on my own worldview, being conducted in a shallow way relative to the kind of analysis I did in [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), where I was like, “here is the historical observation, here is what I think it tells us that puts a lower bound on this input-output curve”.\n\n\n\n> \n> So I don’t think the example of evolution tells us much about whether the continuous change story applies to intelligence. This case is potentially missing the key element that drives the continuous change story—optimization for performance. Evolution changes continuously on the narrow metric it is optimizing, but can change extremely rapidly on other metrics. For human technology, features of the technology that aren’t being optimized change rapidly all the time. 
When humans build AI, they *will* be optimizing for usefulness, and so progress in usefulness is much more likely to be linear.\n> \n> \n> Put another way: the difference between chimps and humans stands in stark contrast to the normal pattern of human technological development. We might therefore infer that intelligence is very unlike other technologies. But the difference between evolution’s optimization and our optimization seems like a much more parsimonious explanation. To be a little bit more precise and Bayesian: the prior probability of the story I’ve told upper bounds the possible update about the nature of intelligence.\n> \n> \n> \n\n\nIf you look closely at this, it’s not saying, “Well, I know *why* there was this huge leap in performance in human intelligence being optimized for other things, and it’s an investment-output curve that’s composed of these curves, which look like this, and if you rearrange these curves for the case of humans building AGI, they would look like this instead.” Unfair demand for rigor? But that *is* the kind of argument I was making in Intelligence Explosion Microeconomics!\n\n\nThere’s an argument from ignorance at the core of all this. It says, “Well, this happened when evolution was doing X. But here Y will be happening instead. So maybe things will go differently! And maybe the relation between AI tech level over time and real-world impact on GDP will look like the relation between tech investment over time and raw tech metrics over time in industries where that’s a smooth graph! 
Because the discontinuity for chimps and humans was because evolution wasn’t investing in real-world impact, but humans will be investing directly in that, so the relationship could be smooth, because smooth things are default, and the history is different so not applicable, and who knows what’s inside that black box so my default intuition applies which says smoothness.”\n\n\nBut we do know more than this.\n\n\nWe know, for example, that evolution being able to *stumble across* humans, implies that you can add a *small design enhancement* to something optimized across the chimpanzee domains, and end up with something that generalizes much more widely.\n\n\nIt says that there’s stuff in the underlying algorithmic space, in the design space, where you move a bump and get a lump of capability out the other side.\n\n\nIt’s a remarkable fact about gradient descent that it can memorize a certain set of shallower patterns at much higher rates, at much higher bandwidth, than evolution lays down genes – something shallower than biological memory, shallower than genes, but distributing across computer cores and thereby able to process larger datasets than biological organisms, even if it only learns shallow things.\n\n\nThis has provided an alternate avenue toward some cognitive domains.\n\n\nBut that doesn’t mean that the deep stuff isn’t there, and can’t be run across, or that it will never be run across in the history of AI before shallow non-widely-generalizing stuff is able to make its way through the regulatory processes and have a huge impact on GDP.\n\n\nThere are *in fact* ways to eat whole swaths of domains at once.\n\n\nThe history of hominid evolution tells us this or very strongly hints it, even though evolution wasn’t explicitly optimizing for GDP impact.\n\n\nNatural selection moves by adding genes, and not too many of them.\n\n\nIf so many domains got added at once to humans, relative to chimps, there must be *a way to do that*, more or less, by adding not 
too many genes onto a chimp, who in turn contains only genes that did well on chimp-stuff.\n\n\nYou can imagine that AI technology never runs across any core that generalizes this well, until GDP has had a chance to double over 4 years because shallow stuff that generalized less well has somehow had a chance to make its way through the whole economy and get adopted that widely despite all real-world regulatory barriers and reluctances, but your imagining that does not make it so.\n\n\nThere’s the potential in design space to pull off things as wide as humans.\n\n\nThe path that evolution took there doesn’t lead through things that generalized 95% as well as humans first for 10% of the impact, not because evolution wasn’t optimizing for that, but because *that’s not how the underlying cognitive technology worked*.\n\n\nThere may be *different* cognitive technology that could follow a path like that. Gradient descent follows a path a bit more in that direction along that axis – provided that you deal in systems that are giant layer cakes of transformers and that’s your whole input-output relationship; matters are different if we’re talking about MuZero instead of GPT-3.\n\n\nBut this whole document is presenting the case of “ah yes, well, by default, of course, we intuitively expect gargantuan impacts to be presaged by enormous impacts, and sure humans and chimps weren’t like our intuition, but that’s all invalid because circumstances were different, so we go back to that intuition as a strong default” and actually it’s postulating, like, a *specific* input-output curve that isn’t the input-output curve we know about. It’s asking for a specific miracle. 
It’s saying, “What if AI technology goes *just like this*, in the future?” and hiding that under a cover of “Well, of course that’s the default, it’s such a strong default that we should start from there as a point of departure, consider the arguments in Intelligence Explosion Microeconomics, find ways that they might not be true because evolution is different, dismiss them, and go back to our point of departure.”\n\n\nAnd evolution *is* different but that doesn’t mean that the path AI takes is going to yield this specific behavior, especially when AI would need, in some sense, to *miss* the core that generalizes very widely, or rather, have run across noncore things that generalize widely enough to have this much economic impact before it runs across the core that generalizes widely.\n\n\nAnd you may say, “Well, but I don’t care that much about GDP, I care about pivotal acts.”\n\n\nBut then I want to call your attention to the fact that this document was written about GDP, despite all the extra burdensome assumptions involved in supposing that intermediate AI advancements could break through all barriers to truly massive-scale adoption and end up reflected in GDP, and then proceed to double the world economy over 4 years during which *not* enough further AI advancement occurred to find a widely generalizing thing like humans have and end the world. This is indicative of a basic problem in this whole way of thinking that wanted smooth impacts over smoothly changing time. 
You should not be saying, “Oh, well, leave the GDP part out then,” you should be doubting the whole way of thinking.\n\n\n\n> \n> To be a little bit more precise and Bayesian: the prior probability of the story I’ve told upper bounds the possible update about the nature of intelligence.\n> \n> \n> \n\n\nPrior probabilities of specifically-reality-constraining theories that excuse away the few contradictory datapoints we have, often aren’t that great; and when we start to stake our whole imaginations of the future on them, we depart from the mainline into our more comfortable private fantasy worlds.\n\n\n\n\n \n\n\n#### AGI will be a side-effect\n\n\n\n[Yudkowsky][19:29]\n\n> \n> *Summary of my response: I expect people to see AGI coming and to invest heavily.*\n> \n> \n> \n\n\nThis section is arguing from within its own weird paradigm, and its subject matter mostly causes me to shrug; I never expected AGI to be a side-effect, except in the obvious sense that lots of tributary tech will be developed while optimizing for other things. The world will be ended by an explicitly AGI project because I do expect that it is rather easier to build an AGI on purpose than by accident.\n\n\n(I furthermore rather expect that it will be a research project and a prototype, because the great gap between prototypes and commercializable technology will ensure that prototypes are much more advanced than whatever is currently commercializable. They will have eyes out for commercial applications, and whatever breakthrough they made will seem like it has obvious commercial applications, at the time when all hell starts to break loose. 
(After all hell starts to break loose, things get less well defined in my social models, and also choppier for a time in my AI models – the turbulence only starts to clear up once you start to rise out of the atmosphere.))\n\n\n\n\n \n\n\n#### Finding the secret sauce\n\n\n\n[Yudkowsky][19:40]\n\n> \n> *Summary of my response: this doesn’t seem common historically, and I don’t see why we’d expect AGI to be more rather than less like this (unless we accept one of the other arguments)*\n> \n> \n> […]\n> \n> \n> To the extent that fast takeoff proponent’s views are informed by historical example, I would love to get some canonical examples that they think best exemplify this pattern so that we can have a more concrete discussion about those examples and what they suggest about AI.\n> \n> \n> \n\n\n…humans and chimps?\n\n\n…fission weapons?\n\n\n…AlphaGo?\n\n\n…the Wright Brothers focusing on stability and building a wind tunnel?\n\n\n…AlphaFold 2 coming out of Deepmind and shocking the heck out of everyone in the field of protein folding with performance far better than they expected even after the previous shock of AlphaFold, by combining many pieces that I suppose you could find precedents for scattered around the AI field, but with those many secret sauces all combined in one place by the meta-secret-sauce of “Deepmind alone actually knows how to combine that stuff and build things that complicated without a prior example”?\n\n\n…humans and chimps again because *this is really actually a quite important example because of what it tells us about what kind of possibilities exist in the underlying design space of cognitive systems*?\n\n\n\n> \n> Historical AI applications have had a relatively small loading on key-insights and seem like the closest analogies to AGI.\n> \n> \n> \n\n\n…Transformers as the key to text prediction?\n\n\nThe case of humans and chimps, even if evolution didn’t do it on purpose, is telling us something about underlying mechanics.\n\n\nThe 
reason the jump to lightspeed didn’t look like evolution slowly developing a range of intelligent species competing to exploit an ecological niche 5% better, or like the way that a stable non-Silicon-Valley manufacturing industry looks like a group of competitors summing up a lot of incremental tech enhancements to produce something with 10% higher scores on a benchmark every year, is that developing intelligence is a case where a relatively narrow technology by biological standards just happened to do a huge amount of stuff without that requiring developing whole new fleets of other biological capabilities.\n\n\nSo it looked like building a Wright Flyer that flies or a nuclear pile that reaches criticality, instead of looking like being in a stable manufacturing industry where a lot of little innovations sum to 10% better benchmark performance every year.\n\n\nSo, therefore, there is *stuff in the design space that does that*. It is *possible to build humans.*\n\n\nMaybe you can build things other than humans first, maybe they hang around for a few years. If you count GPT-3 as “things other than human”, that clock has already started for all the good it does. But *humans don’t get any less possible*.\n\n\nFrom my perspective, this whole document feels like one very long filibuster of “Smooth outputs are default. Smooth outputs are default. Pay no attention to this case of non-smooth output. Pay no attention to this other case either. All the non-smooth outputs are not in the right reference class. (Highly competitive manufacturing industries with lots of competitors are totally in the right reference class though. I’m not going to make that case explicitly because then you might think of how it might be wrong, I’m just going to let that implicit thought percolate at the back of your mind.) 
If we just talk a lot about smooth outputs and list ways that nonsmooth output producers aren’t necessarily the same and arguments for nonsmooth outputs could fail, we get to go back to the intuition of smooth outputs. (We’re not even going to discuss particular smooth outputs as cases in point, because then you might see how those cases might not apply. It’s just the default. Not because we say so out loud, but because we talk a lot like that’s the conclusion you’re supposed to arrive at after reading.)”\n\n\nI deny the implicit meta-level assertion of this entire essay which would implicitly have you accept as valid reasoning the argument structure, “Ah, yes, given the way this essay is written, we must totally have pretty strong prior reasons to believe in smooth outputs – just implicitly think of some smooth outputs, that’s a reference class, now you have strong reason to believe that AGI output is smooth – we’re not even going to argue this prior, just talk like it’s there – now let us consider the arguments against smooth outputs – pretty weak, aren’t they? we can totally imagine ways they could be wrong? we can totally argue reasons these cases don’t apply? So at the end we go back to our strong default of smooth outputs. This essay is written with that conclusion, so that must be where the arguments lead.”\n\n\nMe: “Okay, so what if somebody puts together the pieces required for general intelligence and it scales pretty well with added GPUs and FOOMS? Say, for the human case, that’s some perceptual systems with imaginative control, a concept library, episodic memory, realtime procedural skill memory, which is all in chimps, and then we add some reflection to that, and get a human. Only, unlike with humans, once you have a working brain you can make a working brain 100X that large by adding 100X as many GPUs, and it can run some thoughts 10000X as fast. 
And that is substantially more effective brainpower than was being originally devoted to putting its design together, as it turns out. So it can make a substantially smarter AGI. For concreteness’s sake. Reality has been trending well to the Eliezer side of Eliezer, on the Eliezer-Hanson axis, so perhaps you can do it more simply than that.”\n\n\nSimplicio: “Ah, but what if, 5 years before then, somebody puts together some other AI which doesn’t work like a human, and generalizes widely enough to have a big economic impact, but not widely enough to improve itself or generalize to AI tech or generalize to everything and end the world, and in 1 year it gets all the mass adoptions required to do whole bunches of stuff out in the real world that current regulations require to be done in various exact ways regardless of technology, and then in the next 4 years it doubles the world economy?”\n\n\nMe: “Like… what kind of AI, exactly, and why didn’t anybody manage to put together a full human-level thingy during those 5 years? Why are we even bothering to think about this whole weirdly specific scenario in the first place?”\n\n\nSimplicio: “Because if you can put together something that has an enormous impact, you should be able to put together most of the pieces inside it and have a huge impact! Most technologies are like this. I’ve considered some things that are not like this and concluded they don’t apply.”\n\n\nMe: “Especially if we are talking about impact on GDP, it seems to me that most explicit and implicit ‘technologies’ are not like this at all, actually. There wasn’t a cryptocurrency developed a year before Bitcoin using 95% of the ideas which did 10% of the transaction volume, let alone a preatomic bomb. But, like, can you give me any concrete visualization of how this could play out?”\n\n\nAnd there is no concrete visualization of how this could play out. 
Anything I’d have Simplicio say in reply would be unrealistic because there is no concrete visualization they give us. It is not a coincidence that I often use concrete language and concrete examples, and this whole field of argument does not use concrete language or offer concrete examples.\n\n\nThough if we’re sketching scifi scenarios, I suppose one *could* imagine a group that develops sufficiently advanced GPT-tech and deploys it on Twitter in order to persuade voters and politicians in a few developed countries to institute open borders, along with political systems that can handle open borders, and to permit housing construction, thereby doubling world GDP over 4 years. And since it was possible to use relatively crude AI tech to double world GDP this way, it legitimately takes the whole 4 years after that to develop real AGI that ends the world. FINE. SO WHAT. EVERYONE STILL DIES.\n\n\n\n\n \n\n\n#### Universality thresholds\n\n\n\n[Yudkowsky][20:21]\n\n> \n> It’s easy to imagine a weak AI as some kind of handicapped human, with the handicap shrinking over time. Once the handicap goes to 0 we know that the AI will be above the universality threshold. Right now it’s below the universality threshold. So there must be sometime in between where it crosses the universality threshold, and that’s where the fast takeoff is predicted to occur.\n> \n> \n> But AI *isn’t* like a handicapped human. Instead, the designers of early AI systems will be trying to make them as useful as possible. So if universality is incredibly helpful, it will appear as early as possible in AI designs; designers will make tradeoffs to get universality at the expense of other desiderata (like cost or speed).\n> \n> \n> So now we’re almost back to the previous point: is there some secret sauce that gets you to universality, without which you can’t get universality however you try? 
I think this is unlikely for the reasons given in the previous section.\n> \n> \n> \n\n\nWe know, because humans exist, that there is humanly-widely-applicable general-intelligence tech.\n\n\nWhat this section *wants* to establish, I think, or *needs* to establish to carry the argument, is that there is some intelligence tech that is wide enough to double the world economy in 4 years, but not world-endingly scalably wide, which becomes a possible AI tech 4 years before any general-intelligence-tech that will, if you put in enough compute, scale to the ability to do a sufficiently large amount of wide thought to FOOM (or build nanomachines, but if you can build nanomachines you can very likely FOOM from there too if not corrigible).\n\n\nWhat it says instead is, “I think we’ll get universality much earlier on the equivalent of the biological timeline that has humans and chimps, so the resulting things will be weaker than humans at the point where they first become universal in that sense.”\n\n\nThis is very plausibly true.\n\n\nIt doesn’t mean that when this exciting result gets 100 times more compute dumped on the project, it takes at least 5 years to get anywhere really interesting from there (while also taking only 1 year to get somewhere sorta-interesting enough that the instantaneous adoption of it will double the world economy over the next 4 years).\n\n\nIt also is plausibly true rather than necessarily true. For example, the thing that becomes universal could also have massive gradient descent shallow powers that are far beyond what primates had at the same age.\n\n\nPrimates weren’t already writing code as well as Codex when they started doing deep thinking. They couldn’t do precise floating-point arithmetic. Their fastest serial rates of thought were a hell of a lot slower. They had no access to their own code or to their own memory contents etc. etc. 
etc.\n\n\nBut mostly I just want to call your attention to the immense gap between what this section needs to establish, and what it actually says and argues for.\n\n\nWhat it actually argues for is a sort of local technological point: at the moment when generality first arrives, it will be with a brain that is less sophisticated than chimp brains were when they turned human.\n\n\nIt implicitly jumps all the way from there, across a *whole* lot of elided steps, to the implicit conclusion that this tech or elaborations of it will have smooth output behavior such that at some point the resulting impact is big enough to double the world economy in 4 years, without any further improvements ending the world economy before 4 years.\n\n\nThe underlying argument about how the AI tech might work is plausible. Chimps are insanely complicated. I mostly expect we will have AGI *long* before anybody is even *trying* to build anything that complicated.\n\n\nThe very next step of the argument, about capabilities, is already very questionable because this system could be using immense gradient descent capabilities to master domains for which large datasets are available, and hominids did *not* begin with instinctive great shallow mastery of all domains for which a large dataset could be made available, which is why hominids don’t start out playing superhuman Go as soon as somebody tells them the rules and they do one day of self-play, which *is* the sort of capability that somebody could hook up to a nascent AGI (albeit we could optimistically and fondly and falsely imagine that somebody deliberately didn’t floor the gas pedal as far as possible).\n\n\nCould we have huge impacts out of some subuniversal shallow system that was hooked up to capabilities like this? Maybe, though this is *not* the argument made by the essay. It would be a specific outcome that isn’t forced by anything in particular, but I can’t say it’s ruled out. 
Mostly my twin reactions to this are, “If the AI tech is that dumb, how are all the bureaucratic constraints that actually rate-limit economic progress getting bypassed” and “Okay, but ultimately, so what and who cares, how does this modify that we all die?”\n\n\n\n> \n> There is another reason I’m skeptical about hard takeoff from universality secret sauce: I think we *already* could make universal AIs if we tried (that would, given enough time, learn on their own and converge to arbitrarily high capability levels), and the reason we don’t is because it’s just not important to performance and the resulting systems would be really slow. This inside view argument is too complicated to make here and I don’t think my case rests on it, but it is relevant to understanding my view.\n> \n> \n> \n\n\nI have no idea why this argument is being made or where it’s heading. I cannot pass the [ITT](https://www.econlib.org/archives/2011/06/the_ideological.html) of the author. I don’t know what the author thinks this has to do with constraining takeoffs to be slow instead of fast. 
At best I can conjecture that the author thinks that “hard takeoff” is supposed to derive from “universality” being very sudden and hard to access and late in the game, so if you can argue that universality could be accessed right now, you have defeated the argument for hard takeoff.\n\n\n\n\n \n\n\n#### “Understanding” is discontinuous\n\n\n\n[Yudkowsky][20:41]\n\n> \n> *Summary of my response: I don’t yet understand this argument and am unsure if there is anything here.*\n> \n> \n> It may be that understanding of the world tends to click, from “not understanding much” to “understanding basically everything.” You might expect this because everything is entangled with everything else.\n> \n> \n> \n\n\nNo, the idea is that a core of overlapping somethingness, trained to handle chipping handaxes and outwitting other monkeys, will generalize to building spaceships; so evolutionarily selecting on understanding a bunch of stuff, eventually ran across general stuff-understanders that understood a bunch more stuff.\n\n\nGradient descent may be genuinely different from this, but we shouldn’t confuse imagination with knowledge when it comes to extrapolating that difference onward. 
At present, gradient descent does mass memorization of overlapping shallow patterns, which then combine to yield a weird pseudo-intelligence over domains for which we can deploy massive datasets, without yet generalizing much outside those domains.\n\n\nWe can hypothesize that there is some next step up to some weird thing that is intermediate in generality between gradient descent and humans, but we have not seen it yet, and we should not confuse imagination for knowledge.\n\n\nIf such a thing did exist, it would not necessarily be at the right level of generality to double the world economy in 4 years, without being able to build a better AGI.\n\n\nIf it was at that level of generality, it’s nowhere written that no other company will develop a better prototype at a deeper level of generality over those 4 years.\n\n\nI will also remark that you sure could look at the step from GPT-2 to GPT-3 and say, “Wow, look at the way a whole bunch of stuff just seemed to simultaneously *click* for GPT-3.”\n\n\n\n\n \n\n\n#### Deployment lag\n\n\n\n[Yudkowsky][20:49]\n\n> \n> *Summary of my response: current AI is slow to deploy and powerful AI will be fast to deploy, but in between there will be AI that takes an intermediate length of time to deploy.*\n> \n> \n> \n\n\nAn awful lot of my model of deployment lag is adoption lag and regulatory lag and bureaucratic sclerosis across companies and countries.\n\n\nIf doubling GDP is such a big deal, go open borders and build houses. Oh, that’s illegal? Well, so will be AIs building houses!\n\n\nAI tech that does flawless translation could plausibly come years before AGI, but that doesn’t mean all the barriers to international trade and international labor movement and corporate hiring across borders all come down, because those barriers are not all translation barriers.\n\n\nThere’s then a discontinuous jump at the point where everybody falls over dead and the AI goes off to do its own thing without FDA approval. 
This jump is precedented by earlier pre-FOOM prototypes being able to do pre-FOOM cool stuff, maybe, but not necessarily precedented by mass-market adoption of anything major enough to double world GDP.\n\n\n\n\n \n\n\n#### Recursive self-improvement\n\n\n\n[Yudkowsky][20:54]\n\n> \n> *Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement.*\n> \n> \n> \n\n\nOh, come on. That is straight-up not how simple continuous toy models of RSI work. Between a neutron multiplication factor of 0.999 and 1.001 there is a very huge gap in output behavior.\n\n\nOutside of toy models: Over the last 10,000 years we had humans going from mediocre at improving their mental systems to being (barely) able to throw together AI systems, but 10,000 years is the equivalent of an eyeblink in evolutionary time – outside the metaphor, this says, “A month before there is AI that is great at self-improvement, there will be AI that is mediocre at self-improvement.”\n\n\n(Or possibly an hour before, if reality is again more extreme along the Eliezer-Hanson axis than Eliezer. But it makes little difference whether it’s an hour or a month, given anything like current setups.)\n\n\nThis is just pumping hard again on the intuition that says incremental design changes yield smooth output changes, which (the meta-level of the essay informs us wordlessly) is such a strong default that we are entitled to believe it if we can do a good job of weakening the evidence and arguments against it.\n\n\nAnd the argument is: Before there are systems great at self-improvement, there will be systems mediocre at self-improvement; implicitly: “before” implies “5 years before” not “5 days before”; implicitly: this will correspond to smooth changes in output between the two regimes even though that is not how continuous feedback loops work.\n\n\n\n\n \n\n\n#### Train vs. 
test\n\n\n\n[Yudkowsky][21:12]\n\n> \n> *Summary of my response: before you can train a really powerful AI, someone else can train a slightly worse AI.*\n> \n> \n> \n\n\nYeah, and before you can evolve a human, you can evolve a Homo erectus, which is a slightly worse human.\n\n\n\n> \n> If you are able to raise $X to train an AGI that could take over the world, then it was almost certainly worth it for someone 6 months ago to raise $X/2 to train an AGI that could merely radically transform the world, since they would then get 6 months of absurd profits.\n> \n> \n> \n\n\nI suppose this sentence makes a kind of sense if you assume away alignability and suppose that the previous paragraphs have refuted the notion of FOOMs, self-improvement, and thresholds between compounding returns and non-compounding returns (eg, in the human case, cognitive innovations like “written language” or “science”). If you suppose the previous sections refuted those things, then clearly, if you raised an AGI that you had aligned to “take over the world”, it got that way through cognitive powers that weren’t the result of FOOMing or other self-improvements, weren’t the results of its cognitive powers crossing a threshold from non-compounding to compounding, wasn’t the result of its understanding crossing a threshold of universality as the result of chunky universal machinery such as humans gained over chimps, so, implicitly, it must have been the kind of thing that you could learn by gradient descent, and do a half or a tenth as much of by doing half as much gradient descent, in order to build nanomachines a tenth as well-designed that could bypass a tenth as much bureaucracy.\n\n\nIf there are no unsmooth parts of the tech curve, the cognition curve, or the environment curve, then you should be able to make a bunch of wealth using a more primitive version of any technology that could take over the world.\n\n\nAnd when we look back at history, why, that may be totally true! 
They may have deployed universal superhuman translator technology for 6 months, which won’t double world GDP, but which a lot of people would pay for, and made a lot of money! Because even though there’s no company that built 90% of Amazon’s website and has 10% the market cap, when you zoom back out to look at whole industries like AI and a technological capstone like AGI, why, those whole industries do sometimes make some money along the way to the technological capstone, if they can find a niche that isn’t too regulated! Which translation currently isn’t! So maybe somebody used precursor tech to build a superhuman translator and deploy it 6 months earlier and made a bunch of money for 6 months. SO WHAT. EVERYONE STILL DIES.\n\n\nAs for “radically transforming the world” instead of “taking it over”, I think that’s just re-restated FOOM denialism. Doing either of those things quickly against human bureaucratic resistance strikes me as requiring cognitive power levels dangerous enough that failure to align them on corrigibility would result in FOOMs.\n\n\nLike, if you can do either of those things on purpose, you are doing it by operating in the regime where running the AI with higher bounds on the for loop will FOOM it, but you have politely asked it not to FOOM, please.\n\n\nIf the people doing this have any sense whatsoever, they will *refrain* from merely massively transforming the world until they are ready to do something that *prevents the world from ending*.\n\n\nAnd if the gap from “massively transforming the world, briefly before it ends” to “preventing the world from ending, lastingly” takes much longer than 6 months to cross, or if other people have the same technologies that scale to “massive transformation”, somebody else will build an AI that fooms all the way.\n\n\n\n> \n> Likewise, if your AGI would give you a decisive strategic advantage, they could have spent less earlier in order to get a pretty large military advantage, which they could then use 
to take your stuff.\n> \n> \n> \n\n\nAgain, this presupposes some weird model where everyone has easy alignment at the furthest frontiers of capability; everybody has the aligned version of the most rawly powerful AGI they can possibly build; and nobody in the future has the kind of tech advantage that Deepmind currently has; so before you can amp your AGI to the raw power level where it could take over the whole world by using the limit of its mental capacities to military ends – alignment of this being a trivial operation to be assumed away – some other party took their easily-aligned AGI that was less powerful at the limits of its operation, and used it to get 90% as much military power… is the implicit picture here?\n\n\nWhereas the picture I’m drawing is that the AGI that kills you via “decisive strategic advantage” is the one that foomed and got nanotech, and no, the AI tech from 6 months earlier did not do 95% of a foom and get 95% of the nanotech.\n\n\n\n\n \n\n\n#### Discontinuities at 100% automation\n\n\n\n[Yudkowsky][21:31]\n\n> \n> *Summary of my response: at the point where humans are completely removed from a process, they will have been modestly improving output rather than acting as a sharp bottleneck that is suddenly removed.*\n> \n> \n> \n\n\nNot very relevant to my whole worldview in the first place; also not a very good description of how horses got removed by automobiles, or how humans got removed from playing Go.\n\n\n\n\n \n\n\n#### The weight of evidence\n\n\n\n[Yudkowsky][21:31]\n\n> \n> We’ve discussed a lot of possible arguments for fast takeoff. Superficially it would be reasonable to believe that no individual argument makes fast takeoff look likely, but that in the aggregate they are convincing.\n> \n> \n> However, I think each of these factors is perfectly consistent with the continuous change story and continuously accelerating hyperbolic growth, and so none of them undermine that hypothesis at all.\n> \n> \n> \n\n\nUh huh. 
And how about if we have a mirror-universe essay which over and over again treats fast takeoff as the default to be assumed, and painstakingly shows how a bunch of particular arguments for slow takeoff might not be true?\n\n\nThis entire essay seems to me like it’s drawn from the same hostile universe that produced Robin Hanson’s side of the Yudkowsky-Hanson Foom Debate.\n\n\nLike, all these abstract arguments devoid of concrete illustrations and “it need not necessarily be like…” and “now that I’ve shown it’s not necessarily like X, well, on the meta-level, I have implicitly told you that you now ought to believe Y”.\n\n\nIt just seems very clear to me that the sort of person who is taken in by this essay is the same sort of person who gets taken in by Hanson’s arguments in 2008 and gets caught flatfooted by AlphaGo and GPT-3 and AlphaFold 2.\n\n\nAnd empirically, it has already been shown to me that I do not have the power to break people out of the hypnosis of nodding along with Hansonian arguments, even by writing much longer essays than this.\n\n\nHanson’s fond dreams of domain specificity, of smooth progress for stuff like Go, of somebody else of course having a precursor 90% as good as AlphaFold 2 before Deepmind builds it, and of GPT-3 levels of generality just not being a thing, now stand refuted.\n\n\nDespite that, they’re largely being exhibited again in this essay.\n\n\nAnd people are still nodding along.\n\n\nReality just… doesn’t work like this on some deep level.\n\n\nIt doesn’t play out the way that people imagine it would play out when they’re imagining a certain kind of reassuring abstraction that leads to a smooth world. Reality is less fond of that kind of argument than a certain kind of EA is fond of that argument.\n\n\nThere is a set of intuitive generalizations from experience which rules that out, which I do not know how to convey.
There is an understanding of the rules of argument which leads you to roll your eyes at Hansonian arguments and all their locally invalid leaps and snuck-in defaults, instead of nodding along sagely at their wise humility and outside viewing and then going “Huh?” when AlphaGo or GPT-3 debuts. But this, I *empirically* do not seem to know how to convey to people, in advance of the inevitable and predictable contradiction by a reality which is not as fond of Hansonian dynamics as Hanson. The arguments sound convincing to them.\n\n\n(Hanson himself has still not gone “Huh?” at the reality, though some of his audience did; perhaps because his abstractions are loftier than his audience’s? – because some of his audience, reading along to Hanson, probably implicitly imagined a concrete world in which GPT-3 was not allowed; but maybe Hanson himself is more abstract than this, and didn’t imagine anything so merely concrete?)\n\n\nIf I don’t respond to essays like this, people find them comforting and nod along. If I do respond, my words are less comforting and more concrete and easier to imagine concrete objections to, less like a long chain of abstractions that sound like the very abstract words in research papers and hence implicitly convincing because they sound like other things you were supposed to believe.\n\n\nAnd then there is another essay in 3 months. There is an infinite well of them. I would have to teach people to stop drinking from the well, instead of trying to whack them on the back until they cough up the drinks one by one, or actually, whacking them on the back and then they *don’t* cough them up until reality contradicts them, and then a third of them notice that and cough something up, and then they don’t learn the general lesson and go back to the well and drink again. And I don’t know how to teach people to stop drinking from the well. I tried to teach that. I failed. 
If I wrote another Sequence, I have no reason to believe that Sequence would work.\n\n\nSo what EAs will believe at the end of the world, will look like whatever the content was of the latest bucket from the well of infinite slow-takeoff arguments that hasn’t yet been blatantly-even-to-them refuted by all the sharp jagged rapidly-generalizing things that happened along the way to the world’s end.\n\n\nAnd I know, before anyone bothers to say, that all of this reply is not written in the calm way that is right and proper for such arguments. I am tired. I have lost a lot of hope. There are not obvious things I can do, let alone arguments I can make, which I expect to be actually useful in the sense that the world will not end once I do them. I don’t have the energy left for calm arguments. What’s left is despair that can be given voice.\n\n\n\n\n###  5.6. Yudkowsky/Christiano discussion: AI progress and crossover points\n\n\n \n\n\n\n[Christiano][22:15]\nTo the extent that it was possible to make any predictions about 2015-2020 based on your views, I currently feel like they were much more wrong than right. I’m happy to discuss that. To the extent you are willing to make any bets about 2025, I expect they will be mostly wrong and I’d be happy to get bets on the record (most of all so that it will be more obvious in hindsight whether they are vindication for your view). Not sure if this is the place for that.\n\n\nCould also make a separate channel to avoid clutter.\n\n\n\n\n\n\n[Yudkowsky][22:16]\nPossibly. I think that 2015-2020 played out to a much more Eliezerish side than Eliezer on the Eliezer-Hanson axis, which sure is a case of me being wrong. What bets do you think we’d disagree on for 2025?
I expect you have mostly misestimated my views, but I’m always happy to hear about anything concrete.\n\n\n\n\n\n[Christiano][22:20]\nI think the big points are: (i) I think you are significantly overestimating how large a discontinuity/trend break AlphaZero is, (ii) your view seems to imply that we will move quickly from much worse than humans to much better than humans, but it’s likely that we will move slowly through the human range on many tasks. I’m not sure if we can get a bet out of (ii), I think I don’t understand your view that well but I don’t see how it could make the same predictions as mine over the next 10 years.\n\n\n\n\n\n\n[Yudkowsky][22:22]\nWhat are your 10-year predictions?\n\n\n\n\n\n[Christiano][22:23]\nMy basic expectation is that for any given domain AI systems will gradually increase in usefulness, we will see a crossing over point where their output is comparable to human output, and that from that time we can estimate how long until takeoff by estimating “how long does it take AI systems to get ‘twice as impactful’?” which gives you a number like ~1 year rather than weeks. At the crossing over point you get a somewhat rapid change in derivative, since you are looking at (x+y) where y is growing faster than x.\n\n\nI feel like that should translate into different expectations about how impactful AI will be in any given domain—I don’t see how to make the ultra-fast-takeoff view work if you think that AI output is increasing smoothly (since the rate of progress at the crossing-over point will be similar to the current rate of progress, unless R&D is scaling up much faster then)\n\n\nSo like, I think we are going to have crappy coding assistants, and then slightly less crappy coding assistants, and so on. And they will be improving the speed of coding very significantly before the end times.\n\n\n\n\n\n\n[Yudkowsky][22:25]\nYou think in a different language than I do.
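(Editorial illustration, not part of the dialogue: Christiano’s “crossing over” picture above, total output (x+y) with y growing faster than x, can be sketched numerically. All constants here are invented for illustration — an assumed ~3%/year growth in human output and an assumed 1-year AI doubling time.)

```python
# Sketch of the "crossing over" model: total research output is x + y,
# where x (human output) grows slowly and y (AI output) doubles roughly
# yearly. All constants are made up for illustration.
import math

HUMAN_RATE = math.log(1.03)   # assumed ~3%/year human output growth
AI_RATE = math.log(2.0)       # assumed 1-year AI doubling time

def human_output(t):
    return 1.0 * math.exp(HUMAN_RATE * t)

def ai_output(t):
    return 0.01 * math.exp(AI_RATE * t)  # starts tiny, grows fast

def total_output(t):
    return human_output(t) + ai_output(t)

def growth_rate(f, t, dt=1e-3):
    # instantaneous proportional growth rate, d(ln f)/dt
    return (f(t + dt) - f(t - dt)) / (2 * dt * f(t))

# crossing-over point: AI output matches human output
crossover = (math.log(1.0) - math.log(0.01)) / (AI_RATE - HUMAN_RATE)

# the total's growth rate shifts from near the human rate toward the
# AI rate around the crossover -- a fairly rapid change in derivative
before = growth_rate(total_output, crossover - 3)
after = growth_rate(total_output, crossover + 3)
```

With these made-up numbers the crossover lands around year 7, and the growth rate of the combined output moves from ~11%/year a few years before it to ~60%/year a few years after — the “rapid change in derivative” without any discontinuity in either curve.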
My more confident statements about AI tech are about what happens after it starts to rise out of the metaphorical atmosphere and the turbulence subsides. When you have minds as early on the cognitive tech tree as humans they sure can get up to some weird stuff, I mean, just look at humans. Now take an utterly alien version of that with its own draw from all the weirdness factors. It sure is going to be pretty weird.\n\n\n\n\n\n[Christiano][22:26]\nOK, but you keep saying stuff about how people with my dumb views would be “caught flat-footed” by historical developments. Surely to be able to say something like that you need to be making some kind of prediction?\n\n\n\n\n\n\n[Yudkowsky][22:26]\nWell, sure, now that Codex has suddenly popped into existence one day at a surprisingly high base level of tech, we should see various jumps in its capability over the years and some outside imitators. What do you think you predict differently about that than I do?\n\n\n\n\n\n[Christiano][22:26]\nWhy do you think codex is a high base level of tech?\n\n\nThe models get better continuously as you scale them up, and the first tech demo is weak enough to be almost useless\n\n\n\n\n\n\n[Yudkowsky][22:27]\nI think the next-best coding assistant was, like, not useful.\n\n\n\n\n\n[Christiano][22:27]\nyes\n\n\nand it is still not useful\n\n\n\n\n\n\n[Yudkowsky][22:27]\nCould be. 
Some people on HN seemed to think it was useful.\n\n\nI haven’t tried it myself.\n\n\n\n\n\n[Christiano][22:27]\nOK, I’m happy to take bets\n\n\n\n\n\n\n[Yudkowsky][22:28]\nI don’t think the previous coding assistant would’ve been very good at coding an asteroid game, even if you tried a rigged demo at the same degree of rigging?\n\n\n\n\n\n[Christiano][22:28]\nit’s unquestionably a radically better tech demo\n\n\n\n\n\n\n[Yudkowsky][22:28]\nWhere by “previous” I mean “previously deployed” not “previous generations of prototypes inside OpenAI’s lab”.\n\n\n\n\n\n[Christiano][22:28]\nMy basic story is that the model gets better and more useful with each doubling (or year of AI research) in a pretty smooth way. So the key underlying parameter for a discontinuity is how soon you build the first version—do you do that before or after it would be a really really big deal?\n\n\nand the answer seems to be: you do it somewhat before it would be a really big deal\n\n\nand then it gradually becomes a bigger and bigger deal as people improve it\n\n\nmaybe we are on the same page about getting gradually more and more useful? But I’m still just wondering where the foom comes from\n\n\n\n\n\n\n[Yudkowsky][22:30]\nSo, like… before we get systems that can FOOM and build nanotech, we should get more primitive systems that can write asteroid games and solve protein folding? 
Sounds legit.\n\n\nSo that happened, and now your model says that it’s fine later on for us to get a FOOM, because we have the tech precursors and so your prophecy has been fulfilled?\n\n\n\n\n\n[Christiano][22:31]\nno\n\n\n\n\n\n\n[Yudkowsky][22:31]\nDidn’t think so.\n\n\n\n\n\n[Christiano][22:31]\nI can’t tell if you can’t understand what I’m saying, or aren’t trying, or do understand and are just saying kind of annoying stuff as a rhetorical flourish\n\n\nat some point you have an AI system that makes (humans+AI) 2x as good at further AI progress\n\n\n\n\n\n\n[Yudkowsky][22:32]\nI know that what I’m saying isn’t your viewpoint. I don’t know what your viewpoint is or what sort of concrete predictions it makes at all, let alone what such predictions you think are different from mine.\n\n\n\n\n\n[Christiano][22:32]\nmaybe by continuity you can grant the existence of such a system, even if you don’t think it will ever exist?\n\n\nI want to (i) make the prediction that AI will actually have that impact at some point in time, (ii) talk about what happens before and after that\n\n\nI am talking about AI systems that become continuously more useful, because “become continuously more useful” is what makes me think that (i) AI will have that impact at some point in time, (ii) allows me to productively reason about what AI will look like before and after that. I expect that your view will say something about why AI improvements either aren’t continuous, or why continuous improvements lead to discontinuous jumps in the productivity of the (human+AI) system\n\n\n\n\n\n\n[Yudkowsky][22:34]\n\n> \n> at some point you have an AI system that makes (humans+AI) 2x as good at further AI progress\n> \n> \n> \n\n\nIs this prophecy fulfilled by using some narrow eld-AI algorithm to map out a TPU, and then humans using TPUs can write in 1 month a research paper that would otherwise have taken 2 months? 
And then we can go on to FOOM now that this prophecy about pre-FOOM states has been fulfilled? I know the answer is no, but I don’t know what you think is a narrower condition on the prophecy than that.\n\n\n\n\n\n[Christiano][22:35]\nIf you can use narrow eld-AI in order to make every part of AI research 2x faster, so that the entire field moves 2x faster, then the prophecy is fulfilled\n\n\nand it may be just another 6 months until it makes all of AI research 2x faster again, and then 3 months, and then…\n\n\n\n\n\n\n[Yudkowsky][22:36]\nWhat, the entire field? Even writing research papers? Even the journal editors approving and publishing the papers? So if we speed up every part of research except the journal editors, the prophecy has not been fulfilled and no FOOM may take place?\n\n\n\n\n\n[Christiano][22:36]\nno, I mean the improvement in overall output, given the actual realistic level of bottlenecking that occurs in practice\n\n\n\n\n\n\n[Yudkowsky][22:37]\nSo if the realistic level of bottlenecking ever becomes dominated by a human gatekeeper, the prophecy is ever unfulfillable and no FOOM may ever occur.\n\n\n\n\n\n[Christiano][22:37]\nthat’s what I mean by “2x as good at further progress,” the entire system is achieving twice as much\n\n\nthen the prophecy is unfulfillable and I will have been wrong\n\n\nI mean, I think it’s very likely that there will be a hard takeoff, if people refuse or are unable to use AI to accelerate AI progress for reasons unrelated to AI capabilities, and then one day they become willing\n\n\n\n\n\n\n[Yudkowsky][22:38]\n…because on your view, the Prophecy necessarily goes through humans and AIs working together to speed up the whole collective field of AI?\n\n\n\n\n\n[Christiano][22:38]\nit’s fine if the AI works alone\n\n\nthe point is just that it overtakes the humans at the point when it is roughly as fast as the humans\n\n\nwhy wouldn’t it?\n\n\nwhy does it overtake the humans when it takes it 10 seconds to double in 
capability instead of 1 year?\n\n\nthat’s like predicting that cultural evolution will be infinitely fast, instead of making the more obvious prediction that it will overtake evolution exactly when it’s as fast as evolution\n\n\n\n\n\n\n[Yudkowsky][22:39]\nI live in a mental world full of weird prototypes that people are shepherding along to the world’s end. I’m not even sure there’s a short sentence in my native language that could translate the short Paul-sentence “is roughly as fast as the humans”.\n\n\n\n\n\n[Christiano][22:40]\ndo you agree that you can measure the speed with which the community of human AI researchers develop and implement improvements in their AI systems?\n\n\nlike, we can look at how good AI systems are in 2021, and in 2022, and talk about the rate of progress?\n\n\n\n\n\n\n[Yudkowsky][22:40]\n…when exactly in hominid history was hominid intelligence exactly as fast as evolutionary optimization???\n\n\n\n> \n> do you agree that you can measure the speed with which the community of human AI researchers develop and implement improvements in their AI systems?\n> \n> \n> \n\n\nI mean… obviously not? How the hell would we measure real actual AI progress? What would even be the Y-axis on that graph?\n\n\nI have a rough intuitive feeling that it was going faster in 2015-2017 than 2018-2020.\n\n\n“What was?” says the stern skeptic, and I go “I dunno.”\n\n\n\n\n\n[Christiano][22:42]\nHere’s a way of measuring progress you won’t like: for almost all tasks, you can initially do them with lots of compute, and as technology improves you can do them with less compute. We can measure how fast the amount of compute required is going down.\n\n\n\n\n\n\n[Yudkowsky][22:43]\nYeah, that would be a cool thing to measure. 
It’s not obviously a relevant thing to anything important, but it’d be cool to measure.\n\n\n\n\n\n[Christiano][22:43]\nAnother way you won’t like: we can hold fixed the resources we invest and look at the quality of outputs in any given domain (or even $ of revenue) and ask how fast it’s changing.\n\n\n\n\n\n\n[Yudkowsky][22:43]\nI wonder what it would say about Go during the age of AlphaGo.\n\n\nOr what that second metric would say.\n\n\n\n\n\n[Christiano][22:43]\nI think it would be completely fine, and you don’t really understand what happened with deep learning in board games. Though I also don’t know what happened in much detail, so this is more like a prediction than a retrodiction.\n\n\nBut it’s enough of a retrodiction that I shouldn’t get too much credit for it.\n\n\n\n\n\n\n[Yudkowsky][22:44]\nI don’t know what result you would consider “completely fine”. I didn’t have any particular unfine result in mind.\n\n\n\n\n\n[Christiano][22:45]\noh, sure\n\n\nif it was just an honest question happy to use it as a concrete case\n\n\nI would measure the rate of progress in Go by looking at how fast Elo improves with time or increasing R&D spending\n\n\n\n\n\n\n[Yudkowsky][22:45]\nI mean, I don’t have strong predictions about it so it’s not yet obviously cruxy to me\n\n\n\n\n\n[Christiano][22:46]\nI’d roughly guess that would continue, and if there were multiple trendlines to extrapolate I’d estimate crossover points based on that\n\n\n\n\n\n\n[Yudkowsky][22:47]\nsuppose this curve is smooth, and we see that sharp Go progress over time happened because Deepmind dumped in a ton of increased R&D spend.
you then argue that this cannot happen with AGI because by the time we get there, people will be pushing hard at the frontiers in a competitive environment where everybody’s already spending what they can afford, just like in a highly competitive manufacturing industry.\n\n\n\n\n\n[Christiano][22:47]\nthe key input to making a prediction for AGZ in particular would be the precise form of the dependence on R&D spending, to try to predict the changes as you shift from a single programmer to a large team at DeepMind, but most reasonable functional forms would be roughly right\n\n\nYes, it’s definitely a prediction of my view that it’s easier to improve things that people haven’t spent much money on than things people have spent a lot of money on. It’s also a separate prediction of my view that people are going to be spending a boatload of money on all of the relevant technologies. Perhaps $1B/year right now and I’m imagining levels of investment large enough to be essentially bottlenecked on the availability of skilled labor.\n\n\n\n\n\n\n[Bensinger][22:48]\n( Previous Eliezer-comments about AlphaGo as a break in trend, responding briefly to Miles Brundage: )\n\n\n\n\n\n### 5.7. Legal economic growth\n\n\n \n\n\n\n[Yudkowsky][22:49]\nDoes your prediction change if all hell breaks loose in 2025 instead of 2055?\n\n\n\n\n\n[Christiano][22:50]\nI think my prediction was wrong if all hell breaks loose in 2025, if by “all hell breaks loose” you mean “dyson sphere” and not “things feel crazy”\n\n\n\n\n\n\n[Yudkowsky][22:50]\nThings feel crazy *in the AI field* and the world ends *less than* 4 years later, well before the world economy doubles.\n\n\nWhy was the Prophecy wrong if the world begins final descent in 2025?
The Prophecy requires the world to then last until 2029 while doubling its economic output, after which it is permitted to end, but does not obviously to me forbid the Prophecy to begin coming true in 2025 instead of 2055.\n\n\n\n\n\n[Christiano][22:52]\nyes, I just mean that some important underlying assumptions for the prophecy were violated, I wouldn’t put much stock in it at that point, etc.\n\n\n\n\n\n\n[Yudkowsky][22:53]\nA lot of the issues I have with understanding any of your terminology in concrete Eliezer-language is that it looks to me like the premise-events of your Prophecy are fulfillable in all sorts of ways that don’t imply the conclusion-events of the Prophecy.\n\n\n\n\n\n[Christiano][22:53]\nif “things feel crazy” happens 4 years before dyson sphere, then I think we have to be really careful about what crazy means\n\n\n\n\n\n\n[Yudkowsky][22:54]\na lot of people looking around nervously and privately wondering if Eliezer was right, while public pravda continues to prohibit wondering any such thing out loud, so they all go on thinking that they must be wrong.\n\n\n\n\n\n[Christiano][22:55]\nOK, by “things get crazy” I mean like hundreds of billions of dollars of spending at google on automating AI R&D\n\n\n\n\n\n\n[Yudkowsky][22:55]\nI expect bureaucratic obstacles to prevent much GDP per se from resulting from this.\n\n\n\n\n\n[Christiano][22:55]\nmassive scaleups in semiconductor manufacturing, bidding up prices of inputs crazily\n\n\n\n\n\n\n[Yudkowsky][22:55]\nI suppose that much spending could well increase world GDP by hundreds of billions of dollars per year.\n\n\n\n\n\n[Christiano][22:56]\nmassive speculative rises in AI company valuations financing a significant fraction of GWP into AI R&D\n\n\n(+hardware R&D, +building new clusters, +etc.)\n\n\n\n\n\n\n[Yudkowsky][22:56]\nlike, higher than Tesla?
higher than Bitcoin?\n\n\nboth of these things sure did skyrocket in market cap without that having much of an effect on housing stocks and steel production.\n\n\n\n\n\n[Christiano][22:57]\nright now I think hardware R&D is on the order of $100B/year, AI R&D is more like $10B/year, I guess I’m betting on something more like trillions? (limited from going higher because of accounting problems and not that much smart money)\n\n\nI don’t think steel production is going up at that point\n\n\nplausibly going down since you are redirecting manufacturing capacity into making more computers. But probably just staying static while all of the new capacity is going into computers, since cannibalizing existing infrastructure is much more expensive\n\n\nthe original point was: you aren’t pulling AlphaZero shit any more, you are competing with an industry that has invested trillions in cumulative R&D\n\n\n\n\n\n\n[Yudkowsky][23:00]\nis this in hopes of future profit, or because current profits are already in the trillions?\n\n\n\n\n\n[Christiano][23:01]\nlargely in hopes of future profit / reinvested AI outputs (that have high market cap), but also revenues are probably in the trillions?\n\n\n\n\n\n\n[Yudkowsky][23:02]\nthis all sure does sound “pretty darn prohibited” on my model, but I’d hope there’d be something earlier than that we could bet on. what does your Prophecy prohibit happening *before* that sub-prophesied day?\n\n\n\n\n\n[Christiano][23:02]\nTo me your model just seems crazy, and you are saying it predicts crazy stuff at the end but no crazy stuff beforehand, so I don’t know what’s prohibited. 
Mostly I feel like I’m making positive predictions, of gradually escalating value of AI in lots of different industries\n\n\nand rapidly increasing investment in AI\n\n\nI guess your model can be: those things happen, and then one day the AI explodes?\n\n\n\n\n\n\n[Yudkowsky][23:03]\nthe main way you get rapidly increasing investment in AI is if there’s some way that AI can produce huge profits without that being effectively bureaucratically prohibited – eg this is where we get huge investments in burning electricity and wasting GPUs on Bitcoin mining.\n\n\n\n\n\n[Christiano][23:03]\nbut it seems like you should be predicting e.g. AI quickly jumping to superhuman in lots of domains, and some applications jumping from no value to massive value\n\n\nI don’t understand what you mean by that sentence. Do you think we aren’t seeing rapidly increasing investment in AI right now?\n\n\nor are you talking about increasing investment above some high threshold, or increasing investment at some rate significantly larger than the current rate?\n\n\nit seems to me like you can pretty seamlessly get up to a few $100B/year of revenue just by redirecting existing tech R&D\n\n\n\n\n\n\n[Yudkowsky][23:05]\nso I can imagine scenarios where some version of GPT-5 cloned outside OpenAI is able to talk hundreds of millions of mentally susceptible people into giving away lots of their income, and many regulatory regimes are unable to prohibit this effectively. 
then AI could be making a profit of trillions and then people would invest corresponding amounts in making new anime waifus trained in erotic hypnosis and findom.\n\n\nthis, to be clear, is not my mainline prediction.\n\n\nbut my sense is that our current economy is mostly not about the 1-day period to design new vaccines, it is about the multi-year period to be allowed to sell the vaccines.\n\n\nthe exceptions to this, like Bitcoin managing to say “fuck off” to the regulators for long enough, are where Bitcoin scales to a trillion dollars and gets massive amounts of electricity and GPU burned on it.\n\n\nso we can imagine something like this for AI, which earns a trillion dollars, and sparks a trillion-dollar competition.\n\n\nbut my sense is that your model does not work like this.\n\n\nmy sense is that your model is about *general* improvements across the *whole* economy.\n\n\n\n\n\n[Christiano][23:08]\nI think bitcoin is small even compared to current AI…\n\n\n\n\n\n\n[Yudkowsky][23:08]\nmy sense is that we’ve already built an economy which rejects improvement based on small amounts of cleverness, and only rewards amounts of cleverness large enough to bypass bureaucratic structures. it’s not enough to figure out a version of e-gold that’s 10% better. e-gold is already illegal. you have to figure out Bitcoin.\n\n\nwhat are you going to build? better airplanes? airplane costs are mainly regulatory costs. better medtech? mainly regulatory costs. better houses? building houses is illegal anyways.\n\n\nwhere is the room for the general AI revolution, short of the AI being literally revolutionary enough to overthrow governments?\n\n\n\n\n\n[Christiano][23:10]\nfactories, solar panels, robots, semiconductors, mining equipment, power lines, and “factories” just happens to be one word for a thousand different things\n\n\nI think it’s reasonable to think some jurisdictions won’t be willing to build things but it’s kind of improbable as a prediction for the whole world. 
That’s a possible source of shorter-term predictions?\n\n\nalso computers and the 100 other things that go in datacenters\n\n\n\n\n\n\n[Yudkowsky][23:12]\nThe whole developed world rejects open borders. The regulatory regimes all make the same mistakes with an almost perfect precision, the kind of coordination that human beings could never dream of when trying to coordinate on purpose.\n\n\nif the world lasts until 2035, I could perhaps see deepnets becoming as ubiquitous as computers were in… 1995? 2005? would that fulfill the terms of the Prophecy? I think it doesn’t; I think your Prophecy requires that early *AGI* tech be that ubiquitous so that *AGI* tech will have trillions invested in it.\n\n\n\n\n\n[Christiano][23:13]\nwhat is AGI tech?\n\n\nthe point is that there aren’t important drivers that you can easily improve a lot\n\n\n\n\n\n\n[Yudkowsky][23:14]\nfor purposes of the Prophecy, AGI tech is that which, scaled far enough, ends the world; this must have trillions invested in it, so that the trajectory up to it cannot look like pulling an AlphaGo. 
no?\n\n\n\n\n\n[Christiano][23:14]\nso it’s relevant if you are imagining some piece of the technology which is helpful for general problem solving or something but somehow not helpful for all of the things people are doing with ML, to me that seems unlikely since it’s all the same stuff\n\n\nsurely AGI tech should at least include the use of AI to automate AI R&D\n\n\nregardless of what you arbitrarily decree as “ends the world if scaled up”\n\n\n\n\n\n\n[Yudkowsky][23:15]\nonly if that’s the path that leads to destroying the world?\n\n\nif it isn’t on that path, who cares Prophecy-wise?\n\n\n\n\n\n[Christiano][23:15]\nalso I want to emphasize that “pull an AlphaGo” is what happens when you move from SOTA being set by an individual programmer to a large lab, you don’t need to be investing trillions to avoid that\n\n\nand that the jump is still more like a few years\n\n\nbut the prophecy does involve trillions, and my view gets more like your view if people are jumping from $100B of R&D ever to $1T in a single year\n\n\n\n\n\n \n\n\n### 5.8. TPUs and GPUs, and automating AI R&D\n\n\n \n\n\n\n[Yudkowsky][23:17]\nI’m also wondering a little why the emphasis on “trillions”. it seems to me that the terms of your Prophecy should be fulfillable by AGI tech being merely as ubiquitous as modern computers, so that many competing companies invest mere hundreds of billions in the equivalent of hardware plants. 
it is legitimately hard to get a chip with 50% better transistors ahead of TSMC.\n\n\n\n\n\n[Christiano][23:17]\nyes, if you are investing hundreds of billions then it is hard to pull ahead (though could still happen)\n\n\n(since the upside is so much larger here, no one cares that much about getting ahead of TSMC since the payoff is tiny in the scheme of the amounts we are discussing)\n\n\n\n\n\n\n[Yudkowsky][23:18]\nwhich, like, doesn’t prevent Google from tossing out TPUs that are pretty significant jumps on GPUs, and if there’s a specialized application of AGI-ish tech that is especially key, you can have everything behave smoothly and still get a jump that way.\n\n\n\n\n\n[Christiano][23:18]\nI think TPUs are basically the same as GPUs\n\n\nprobably a bit worse\n\n\n(but GPUs are sold at a 10x markup since that’s the size of nvidia’s lead)\n\n\n\n\n\n\n[Yudkowsky][23:19]\nnoted; I’m not enough of an expert to directly contradict that statement about TPUs from my own knowledge.\n\n\n\n\n\n[Christiano][23:19]\n(though I think TPUs are nevertheless leased at a slightly higher price than GPUs)\n\n\n\n\n\n\n[Yudkowsky][23:19]\nhow does Nvidia maintain that lead and 10x markup? that sounds like a pretty un-Paul-ish state of affairs given Bitcoin prices never mind AI investments.\n\n\n\n\n\n[Christiano][23:20]\nnvidia’s lead isn’t worth that much because historically they didn’t sell many gpus\n\n\n(especially for non-gaming applications)\n\n\ntheir R&D investment is relatively large compared to the $ on the table\n\n\nmy guess is that their lead doesn’t stick, as evidenced by e.g. 
Google very quickly catching up\n\n\n\n\n\n\n[Yudkowsky][23:21]\nparenthetically, does this mean – and I don’t necessarily predict otherwise – that you predict a drop in Nvidia’s stock and a drop in GPU prices in the next couple of years?\n\n\n\n\n\n[Christiano][23:21]\nnvidia’s stock may do OK from riding general AI boom, but I do predict a relative fall in nvidia compared to other AI-exposed companies\n\n\n(though I also predicted google to more aggressively try to compete with nvidia for the ML market and think I was just wrong about that, though I don’t really know any details of the area)\n\n\nI do expect the cost of compute to fall over the coming years as nvidia’s markup gets eroded\n\n\nto be partially offset by increases in the cost of the underlying silicon (though that’s still bad news for nvidia)\n\n\n\n\n\n\n[Yudkowsky][23:23]\nI parenthetically note that I think the Wise Reader should be justly impressed by predictions that come true about relative stock price changes, even if Eliezer has not explicitly contradicted those predictions before they come true. there are bets you can win without my having to bet against you.\n\n\n\n\n\n[Christiano][23:23]\nyou are welcome to counterpredict, but no saying in retrospect that reality proved you right if you don’t ![🙂](https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png)\n\n\notherwise it’s just me vs the market\n\n\n\n\n\n\n[Yudkowsky][23:24]\nI don’t feel like I have a counterprediction here, but I think the Wise Reader should be impressed if you win vs. the market.\n\n\nhowever, this does require you to name in advance a few “other AI-exposed companies”.\n\n\n\n\n\n[Christiano][23:25]\nNote that I made the same bet over the last year—I make a large AI bet but mostly moved my nvidia allocation to semiconductor companies. The semiconductor part of the portfolio is up 50% while nvidia is up 70%, so I lost that one. 
But that just means I like the bet even more next year.\n\n\nhappy to use nvidia vs tsmc\n\n\n\n\n\n\n[Yudkowsky][23:25]\nthere’s a lot of noise in a 2-stock prediction.\n\n\n\n\n\n[Christiano][23:25]\nI mean, it’s a 1-stock prediction about nvidia\n\n\n\n\n\n\n[Yudkowsky][23:26]\nbut your funeral or triumphal!\n\n\n\n\n\n[Christiano][23:26]\nindeed 🙂\n\n\nanyway\n\n\nI expect all of the $ amounts to be much bigger in the future\n\n\n\n\n\n\n[Yudkowsky][23:26]\nyeah, but using just TSMC for the opposition exposes you to I dunno Chinese invasion of Taiwan\n\n\n\n\n\n[Christiano][23:26]\nyes\n\n\nalso TSMC is not that AI-exposed\n\n\nI think the main prediction is: eventual move away from GPUs, nvidia can’t maintain that markup\n\n\n\n\n\n\n[Yudkowsky][23:27]\n“Nvidia can’t maintain that markup” sounds testable, but is less of a win against the market than predicting a relative stock price shift. (Over what timespan? Just the next year sounds quite fast for that kind of prediction.)\n\n\n\n\n\n[Christiano][23:27]\nregarding your original claim: if you think that it’s plausible that AI will be doing all of the AI R&D, and that will be accelerating continuously from 12, 6, 3 month “doubling times,” but that we’ll see a discontinuous change in the “path to doom,” then that would be harder to generate predictions about\n\n\nyes, it’s hard to translate most predictions about the world into predictions about the stock market\n\n\n\n\n\n\n[Yudkowsky][23:28]\nthis again sounds like it’s not written in Eliezer-language.\n\n\nwhat does it mean for “AI will be doing all of the AI R&D”? 
that sounds to me like something that happens after the end of the world, hence doesn’t happen.\n\n\n\n\n\n[Christiano][23:29]\nthat’s good, that’s what I thought\n\n\n\n\n\n\n[Yudkowsky][23:29]\nI don’t necessarily want to sound very definite about that in advance of understanding what it *means*\n\n\n\n\n\n[Christiano][23:29]\nI’m saying that I think AI will be automating AI R&D gradually, before the end of the world\n\n\nyeah, I agree that if you reject the construct of “how fast the AI community makes progress” then it’s hard to talk about what it means to automate “progress”\n\n\nand that may be hard to make headway on\n\n\nthough for cases like AlphaGo (which started that whole digression) it seems easy enough to talk about elo gain per year\n\n\nmaybe the hard part is aggregating across tasks into a measure you actually care about?\n\n\n\n\n\n\n[Yudkowsky][23:30]\nup to a point, but yeah. (like, if we’re taking Elo high above human levels and restricting our measurements to a very small range of frontier AIs, I quietly wonder if the measurement is still measuring quite the same thing with quite the same robustness.)\n\n\n\n\n\n[Christiano][23:31]\nI agree that elo measurement is extremely problematic in that regime\n\n\n\n\n\n \n\n\n### 5.9. Smooth exponentials vs. jumps in income\n\n\n \n\n\n\n[Yudkowsky][23:31]\nso in your worldview there’s this big emphasis on things that must have been deployed and adopted widely to the point of already having huge impacts\n\n\nand in my worldview there’s nothing very surprising about people with a weird powerful prototype that wasn’t used to automate huge sections of AI R&D because the previous versions of the tech weren’t useful for that or bigcorps didn’t adopt it.\n\n\n\n\n\n[Christiano][23:32]\nI mean, Google is already 1% of the US economy and in this scenario it and its peers are more like 10-20%? So wide adoption doesn’t have to mean that many people. 
Though I also do predict much wider adoption than you so happy to go there if it’s happy for predictions.\n\n\nI don’t really buy the “weird powerful prototype”\n\n\n\n\n\n\n[Yudkowsky][23:33]\nyes. I noticed.\n\n\nyou would seem, indeed, to be offering large quantities of it for short sale.\n\n\n\n\n\n[Christiano][23:33]\nand it feels like the thing you are talking about ought to have some precedent of some kind, of weird powerful prototypes that jump straight from “does nothing” to “does something impactful”\n\n\nlike if I predict that AI will be useful in a bunch of domains, and will get there by small steps, you should either predict that won’t happen, or else also predict that there will be some domains with weird prototypes jumping to giant impact?\n\n\n\n\n\n\n[Yudkowsky][23:34]\nlike an electrical device that goes from “not working at all” to “actually working” as soon as you screw in the attachments for the electrical plug.\n\n\n\n\n\n[Christiano][23:34]\n(clearly takes more work to operationalize)\n\n\nI’m not sure I understand that sentence, hopefully it’s clear enough why I expect those discontinuities?\n\n\n\n\n\n\n[Yudkowsky][23:34]\nthough, no, that’s a facile bad analogy.\n\n\na better analogy would be an AI system that only starts working after somebody tells you about batch normalization or LAMB learning rate or whatever.\n\n\n\n\n\n[Christiano][23:36]\nsure, which I think will happen all the time for individual AI projects but not for sota\n\n\nbecause the projects at sota have picked the low hanging fruit, it’s not easy to get giant wins\n\n\n\n\n\n\n[Yudkowsky][23:36]\n\n> \n> like if I predict that AI will be useful in a bunch of domains, and will get there by small steps, you should either predict that won’t happen, or else also predict that there will be some domains with weird prototypes jumping to giant impact?\n> \n> \n> \n\n\nin the latter case, has this Eliezer-Prophecy already had its terms fulfilled by AlphaFold 2, or do you say nay 
because AlphaFold 2 hasn’t doubled GDP?\n\n\n\n\n\n[Christiano][23:37]\n(you can also get giant wins by a new competitor coming up at a faster rate of progress, and then we have more dependence on whether people do it when it’s a big leap forward or slightly worse than the predecessor, and I’m betting on the latter)\n\n\nI have no idea what AlphaFold 2 is good for, or the size of the community working on it, my guess would be that its value is pretty small\n\n\nwe can try to quantify\n\n\nlike, I get surprised when $X of R&D gets you something whose value is much larger than $X\n\n\nI’m not surprised at all if $X of R&D gets you <<$X, or even like 10\\*$X in a given case that was selected for working well\n\n\nhopefully it’s clear enough why that’s the kind of thing a naive person would predict\n\n\n\n\n\n\n[Yudkowsky][23:38]\nso a thing which Eliezer’s Prophecy does not mandate per se, but sure does permit, and is on the mainline especially for nearer timelines, is that the world-ending prototype had no prior prototype containing 90% of the technology which earned a trillion dollars.\n\n\na lot of Paul’s Prophecy seems to be about forbidding this.\n\n\nis that a fair way to describe your own Prophecy?\n\n\n\n\n\n[Christiano][23:39]\nI don’t have a strong view about “containing 90% of the technology”\n\n\nthe main view is that whatever the “world ending prototype” does, there were earlier systems that could do practically the same thing\n\n\nif the world ending prototype does something that lets you go foom in a day, there was a system years earlier that could foom in a month, so that would have been the one to foom\n\n\n\n\n\n\n[Yudkowsky][23:41]\nbut, like, the world-ending thing, according to the Prophecy, must be squarely in the middle of a class of technologies which are in the midst of earning trillions of dollars and having trillions of dollars invested in them. 
it’s not enough for the Worldender to be definitionally somewhere in that class, because then it could be on a weird outskirt of the class, and somebody could invest a billion dollars in that weird outskirt before anybody else had invested a hundred million, which is forbidden by the Prophecy. so the Worldender has got to be right in the middle, a plain and obvious example of the tech that’s already earning trillions of dollars. …y/n?\n\n\n\n\n\n[Christiano][23:42]\nI agree with that as a prediction for some operationalization of “a plain and obvious example,” but I think we could make it more precise / it doesn’t feel like it depends on the fuzziness of that\n\n\nI think that if the world can end out of nowhere like that, you should also be getting $100B/year products out of nowhere like that, but I guess you think not because of bureaucracy\n\n\nlike, to me it seems like our views stake out predictions about codex, where I’m predicting its value will be modest relative to R&D, and the value will basically improve from there with a nice experience curve, maybe something like ramping up quickly to some starting point <$10M/year and then doubling every year thereafter, whereas I feel like you are saying more like “who knows, could be anything” and so should be surprised each time the boring thing happens\n\n\n\n\n\n\n[Yudkowsky][23:45]\nthe concrete example I give is that the World-Ending Company will be able to use the same tech to build a true self-driving car, which would in the natural course of things be approved for sale a few years later after the world had ended.\n\n\n\n\n\n[Christiano][23:46]\nbut self-driving cars seem very likely to already be broadly deployed, and so the relevant question is really whether their technical improvements can also be deployed to those cars?\n\n\n(or else maybe that’s another prediction we disagree about)\n\n\n\n\n\n\n[Yudkowsky][23:47]\nI feel like I would indeed not have the right to feel very surprised if Codex technology 
stagnated for the next 5 years, nor if it took a massive leap in 2 years and got ubiquitously adopted by lots of programmers.\n\n\nyes, I think that’s a general timeline difference there\n\n\nre: self-driving cars\n\n\nI might be talkable into a bet where you took “Codex tech will develop like *this*” and I took the side “literally anything else but that”\n\n\n\n\n\n[Christiano][23:48]\nI think it would have to be over/under, I doubt I’m more surprised than you by something failing to be economically valuable, I’m surprised by big jumps in value\n\n\nseems like it will be tough to work\n\n\n\n\n\n\n[Yudkowsky][23:49]\nwell, if I was betting on something taking a big jump in income, I sure would bet on something in a relatively unregulated industry like Codex or anime waifus.\n\n\nbut that’s assuming I made the bet at all, which is a hard sell when the bet is about the Future, which is notoriously hard to predict.\n\n\n\n\n\n[Christiano][23:50]\nI guess my strongest take is: if you want to pull the thing where you say that future developments proved you right and took unreasonable people like me by surprise, you’ve got to be able to say *something* in advance about what you expect to happen\n\n\n\n\n\n\n[Yudkowsky][23:51]\nso what if neither of us are surprised if Codex stagnates for 5 years, you win if Codex shows a smooth exponential in income, and I win if the income looks… jumpier? how would we quantify that?\n\n\n\n\n\n[Christiano][23:52]\ncodex also does seem a bit unfair to you in that it may have to be adopted by lots of programmers which could slow things down a lot even if capabilities are pretty jumpy\n\n\n(though I think in fact usefulness and not merely profit will basically just go up smoothly, with step sizes determined by arbitrary decisions about when to release something)\n\n\n\n\n\n\n[Yudkowsky][23:53]\nI’d also be concerned about unfairness to me in that earnable income is not the same as the gains from trade. 
If there’s more than 1 competitor in the industry, their earnings from Codex may be much less than the value produced, and this may not change much with improvements in the tech.\n\n\n\n\n \n\n\n### 5.10. Late-stage predictions\n\n\n \n\n\n\n[Christiano][23:53]\nI think my main update from this conversation is that you don’t really predict someone to come out of nowhere with a model that can earn a lot of $, even if they could come out of nowhere with a model that could end the world, because of regulatory bottlenecks and nimbyism and general sluggishness and unwillingness to do things\n\n\ndoes that seem right?\n\n\n\n\n\n\n[Yudkowsky][23:55]\nWell, and also because the World-ender is “the first thing that scaled with compute” and/or “the first thing that ate the real core of generality” and/or “the first thing that went over neutron multiplication factor 1”.\n\n\n\n\n\n[Christiano][23:55]\nand so that cuts out a lot of the easily-specified empirical divergences, since “worth a lot of $” was the only general way to assess “big deal that people care about” and avoiding disputes like “but Zen was mostly developed by a single programmer, it’s not like intense competition”\n\n\nyeah, that’s the real disagreement it seems like we’d want to talk about\n\n\nbut it just doesn’t seem to lead to many prediction differences in advance?\n\n\nI totally don’t buy any of those models, I think they are bonkers\n\n\nwould love to bet on that\n\n\n\n\n\n\n[Yudkowsky][23:56]\nProlly but I think the from-my-perspective-weird talk about GDP is probably concealing *some* kind of important crux, because caring about GDP still feels pretty alien to me.\n\n\n\n\n\n[Christiano][23:56]\nI feel like getting up to massive economic impacts without seeing “the real core of generality” seems like it should also be surprising on your view\n\n\nlike if it’s 10 years from now and AI is a pretty big deal but no crazy AGI, isn’t that surprising?\n\n\n\n\n\n\n[Yudkowsky][23:57]\nMildly but not too 
surprising, I would imagine that people had built a bunch of neat stuff with gradient descent in realms where you could get a long way on self-play or massively collectible datasets.\n\n\n\n\n\n[Christiano][23:58]\nI’m fine with the crux being something that doesn’t lead to any empirical disagreements, but in that case I just don’t think you should claim credit for the worldview making great predictions.\n\n\n(or the countervailing worldview making bad predictions)\n\n\n\n\n\n\n[Yudkowsky][23:59]\nstuff that we could see then: self-driving cars (10 years is enough for regulatory approval in many countries), super Codex, GPT-6 powered anime waifus being an increasingly loud source of (arguably justified) moral panic and a hundred-billion-dollar industry\n\n\n\n\n\n[Christiano][23:59]\nanother option is “10% ~~GDP~~ GWP growth in a year, before doom”\n\n\nI think that’s very likely, though might be too late to be helpful\n\n\n\n\n\n\n[Yudkowsky][0:01]\nsee, that seems genuinely hard unless somebody gets GPT-4 far ahead of any political opposition – I guess all the competent AGI groups lean solidly liberal at the moment? 
– and uses it to fake massive highly-persuasive sentiment on Twitter for housing liberalization.\n\n\n\n\n\n[Christiano][0:01]\nso seems like a bet?\n\n\nbut you don’t get to win until doom 🙁\n\n\n\n\n\n\n[Yudkowsky][0:02]\nI mean, as written, I’d want to avoid cases like 10% growth on paper while recovering from a pandemic that produced 0% growth the previous year.\n\n\n\n\n\n[Christiano][0:02]\nyeah\n\n\n\n\n\n\n[Yudkowsky][0:04]\nI’d want to check the current rate (5% iirc) and what the variance on it was, 10% is a little low for surety (though my sense is that it’s a pretty darn smooth graph that’s hard to perturb)\n\n\nif we got 10% in a way that was clearly about AI tech becoming that ubiquitous, I’d feel relatively good about nodding along and saying, “Yes, that is like unto the beginning of Paul’s Prophecy” not least because the timelines had been that long at all.\n\n\n\n\n\n[Christiano][0:05]\nlike 3-4%/year right now\n\n\nrandom wikipedia number is 5.5% in 2006-2007, 3-4% since 2010\n\n\n4% 1995-2000\n\n\n\n\n\n\n[Yudkowsky][0:06]\nI don’t want to sound obstinate here. 
My model does not *forbid* that we twiddle around on the AGI side while gradient descent tech gets its fingers into enough separate weakly-generalizing pies to produce 10% GDP growth, but I’m happy to say that this sounds much more like Paul’s Prophecy is coming true.\n\n\n\n\n\n[Christiano][0:07]\nok, we should formalize at some point, but also need the procedure for you getting credit given that it can’t resolve in your favor until the end of days\n\n\n\n\n\n\n[Yudkowsky][0:07]\nIs there something that sounds to you like Eliezer’s Prophecy which we can observe before the end of the world?\n\n\n\n\n\n[Christiano][0:07]\nwhen you will already have all the epistemic credit you need\n\n\nnot on the “simple core of generality” stuff since that apparently immediately implies end of world\n\n\nmaybe something about ML running into obstacles en route to human level performance?\n\n\nor about some other kind of discontinuous jump even in a case where people care, though there seem to be a few reasons you don’t expect many of those\n\n\n\n\n\n\n[Yudkowsky][0:08]\ndepends on how you define “immediately”? it’s not *long* before the end of the world, but in some sad scenarios there is some tiny utility to you declaring me right 6 months before the end.\n\n\n\n\n\n[Christiano][0:09]\nI care a lot about the 6 months before the end personally\n\n\nthough I do think probably everything is more clear by then independent of any bet; but I guess you are more pessimistic about that\n\n\n\n\n\n\n[Yudkowsky][0:09]\nI’m not quite sure what I’d do in them, but I may have worked something out before then, so I care significantly in expectation if not in particular.\n\n\nI am more pessimistic about other people’s ability to notice what reality is screaming in their faces, yes.\n\n\n\n\n\n[Christiano][0:10]\nif we were to look at various scaling curves, e.g. 
of loss vs model size or something, do you expect those to look distinctive as you hit the “real core of generality”?\n\n\n\n\n\n\n[Yudkowsky][0:10]\nlet me turn that around: if we add transformers into those graphs, do they jump around in a way you’d find interesting?\n\n\n\n\n\n[Christiano][0:11]\nnot really\n\n\n\n\n\n\n[Yudkowsky][0:11]\nis that because the empirical graphs don’t jump, or because you don’t think the jumps say much?\n\n\n\n\n\n[Christiano][0:11]\nbut not many good graphs to look at (I just have one in mind), so that’s partly a prediction about what the exercise would show\n\n\nI don’t think the graphs jump much, and also transformers come before people start evaluating on tasks where they help a lot\n\n\n\n\n\n\n[Yudkowsky][0:12]\nIt would not terribly contradict the terms of my Prophecy if the World-ending tech began by not producing a big jump on existing tasks, but generalizing to some currently not-so-popular tasks where it scaled much faster.\n\n\n\n\n\n[Christiano][0:13]\neh, they help significantly on contemporary tasks, but it’s just not a huge jump relative to continuing to scale up model sizes\n\n\nor other ongoing improvements in architecture\n\n\nanyway, should try to figure out something, and good not to finalize a bet until you have some way to at least come out ahead, but I should sleep now\n\n\n\n\n\n\n[Yudkowsky][0:14]\nyeah, same.\n\n\nThing I want to note out loud lest I forget ere I sleep: I think the real world is full of tons and tons of technologies being developed as unprecedented prototypes in the midst of big fields, because the key thing to invest in wasn’t the competitively explored center. Wright Flyer vs all expenditures on Traveling Machine R&D. First atomic pile and bomb vs all Military R&D.\n\n\nThis is one reason why Paul’s Prophecy seems fragile to me. 
You could have the preliminaries come true as far as there being a trillion bucks in what looks like AI R&D, and then the WorldEnder is a weird prototype off to one side of that. saying “But what about the rest of that AI R&D?” is no more a devastating retort to reality than looking at AlphaGo and saying “But weren’t other companies investing billions in Better Software?” Yeah but it was a big playing field with lots of different kinds of Better Software and no other medium-sized team of 15 people with corporate TPU backing was trying to build a system just like AlphaGo, even though multiple small outfits were trying to build prestige-earning gameplayers. Tech advancements very very often occur in places where investment wasn’t dense enough to guarantee overlap.\n\n\n\n\n 6. Follow-ups on “Takeoff Speeds”\n----------------------------------\n\n\n \n\n\n### 6.1. Eliezer Yudkowsky’s commentary\n\n\n \n\n\n\n[Yudkowsky][17:25]\nFurther comment that occurred to me on “takeoff speeds” if I’ve better understood the main thesis now: its hypotheses seem to include a perfectly anti-Thielian setup for AGI.\n\n\nThiel has a running thesis about how part of the story behind the Great Stagnation and the decline in innovation that’s about atoms rather than bits – the story behind “we were promised flying cars and got 140 characters”, to cite the classic Thielian quote – is that people stopped believing in [“secrets](https://www.lesswrong.com/posts/ReB7yoF22GuerNfhH/thiel-on-secrets-and-indefiniteness)“.\n\n\nThiel suggests that you have to believe there are knowable things that aren’t yet widely known – not just things that everybody already knows, plus mysteries that nobody will ever know – in order to be motivated to go out and innovate. 
Culture in developed countries shifted to label this kind of thinking rude – or rather, even ruder, even less tolerated than it had been decades before – so innovation decreased as a result.\n\n\nThe central hypothesis of “takeoff speeds” is that at the time of serious AGI being developed, it is perfectly anti-Thielian in that it is devoid of secrets in that sense. It is not permissible (on this viewpoint) for it to be the case that there is a lot of AI investment into AI that is directed not quite at the key path leading to AGI, such that somebody could spend $1B on compute for the key path leading to AGI before anybody else had spent $100M on that. There cannot exist any secret like that. The path to AGI will be known; everyone, or a wide variety of powerful actors, will know how profitable that path will be; the surrounding industry will be capable of acting on this knowledge, and will have actually been acting on it as early as possible; multiple actors are already investing in every tech path that would in fact be profitable (and is known to any human being at all), as soon as that R&D opportunity becomes available.\n\n\nAnd I’m not saying this is an inconsistent world to describe! I’ve written science fiction set in this world. I called it “[dath ilan](https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession)“. It’s a hypothetical world that is actually full of smart people in economic equilibrium. 
If anything like Covid-19 appears, for example, the governments and public-good philanthropists there have already set up prediction markets (which are not illegal, needless to say); and of course there are mRNA vaccine factories already built and ready to go, because somebody already calculated the profits from fast vaccines would be very high in case of a pandemic (no artificial price ceilings in this world, of course); so as soon as the prediction markets started calling the coming pandemic conditional on no vaccine, the mRNA vaccine factories were already spinning up.\n\n\nThis world, however, is not Earth.\n\n\nOn Earth, major chunks of technological progress quite often occur *outside* of a social context where everyone knew and agreed in advance on which designs would yield how much expected profit and many overlapping actors competed to invest in the most actually-promising paths simultaneously.\n\n\nAnd that is why you can read [Inadequate Equilibria](https://equilibriabook.com/toc/), and then read this essay on takeoff speeds, and go, “Oh, yes, I recognize this; it’s written inside the Modesty worldview; in particular, the imagination of an adequate world in which there is a perfect absence of Thielian secrets or unshared knowable knowledge about fruitful development pathways. This is the same world that already had mRNA vaccines ready to spin up on day one of the Covid-19 pandemic, because markets had correctly forecasted their option value and investors had acted on that forecast unimpeded. Sure would be an interesting place to live! 
But we don’t live there.”\n\n\nCould we perhaps end up in a world where the path to AGI is in fact not a Thielian secret, because in fact the first accessible path to AGI happens to lie along a tech pathway that already delivered large profits to previous investors who summed a lot of small innovations, a la experience with chipmaking, such that there were no large innovations just lots and lots of small innovations that yield 10% improvement annually on various tech benchmarks?\n\n\nI think that even in this case we will get weird, discontinuous, and fatal behaviors, and I could maybe talk about that when discussion resumes. But it is not ruled out to me that the first accessible pathway to AGI could happen to lie in the further direction of some road that was already well-traveled, already yielded much profit to now-famous tycoons back when its first steps were Thielian secrets, and hence is now replete with dozens of competing chasers for the gold rush.\n\n\nIt’s even imaginable to me, though a bit less so, that the first path traversed to real actual pivotal/powerful/lethal AGI, happens to lie literally actually squarely in the central direction of the gold rush. It sounds a little less like the tech history I know, which is usually about how someone needed to swerve a bit and the popular gold-rush forecasts weren’t quite right, but maybe that is just a selective focus of history on the more interesting cases.\n\n\nThough I remark that – even supposing that getting to big AGI is literally as straightforward and yet as difficult as falling down a semiconductor manufacturing roadmap (as otherwise the biggest actor to first see the obvious direction could just rush down the whole road) – well, TSMC does have a bit of an unshared advantage right now, if I recall correctly. And Intel had a bit of an advantage before that. 
So that happens even when there’s competitors competing to invest billions.\n\n\nBut we can imagine that doesn’t happen either, because instead of needing to build a whole huge manufacturing plant, there’s just lots and lots of little innovations adding up to every key AGI threshold, which lots of actors are investing $10 million in at a time, and everybody knows which direction to move in to get to more serious AGI and they’re right in this shared forecast.\n\n\nI am willing to entertain discussing this world and the sequelae there – I do think everybody still dies in this case – but I would not have this particular premise thrust upon us as a default, through a not-explicitly-spoken pressure against being so immodest and inegalitarian as to suppose that any Thielian knowable-secret will exist, or that anybody in the future gets as far ahead of others as today’s TSMC or today’s Deepmind.\n\n\nWe are, in imagining this world, imagining a world in which AI research has become drastically unlike today’s AI research in a direction drastically different from the history of many other technologies.\n\n\nIt’s not literally unprecedented, but it’s also not a default environment for big moments in tech progress; it’s narrowly precedented for *particular* industries with high competition and steady benchmark progress driven by huge investments into a sum of many tiny innovations.\n\n\nSo I can entertain the scenario. 
But if you want to claim that the social situation around AGI *will* drastically change in this way you foresee – not just that it *could* change in that direction, if somebody makes a big splash that causes everyone else to reevaluate their previous opinions and arrive at yours, but that this social change *will* occur and you know this now – and that the prerequisite tech path to AGI is known to you, and forces an investment situation that looks like the semiconductor industry – then your “What do you think you know and how do you think you know it?” has some significant explaining to do.\n\n\nOf course, I do appreciate that such a thing could be knowable, and yet not known to me. I’m not so silly as to disbelieve in secrets like that. They’re all over the actual history of technological progress on our actual Earth.\n\n\n\n\n \n\n\n\nThe post [Yudkowsky and Christiano discuss “Takeoff Speeds”](https://intelligence.org/2021/11/22/yudkowsky-and-christiano-discuss-takeoff-speeds/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-22T20:15:54Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "524c32222368a846ea0218fed8e50723", "title": "Ngo and Yudkowsky on AI capability gains", "url": "https://intelligence.org/2021/11/18/ngo-and-yudkowsky-on-ai-capability-gains/", "source": "miri", "source_type": "blog", "text": "This is the second post in a series of transcribed conversations about AGI forecasting and alignment. See the [first post](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty) for prefaces and more information about the format.\n\n\n\nColor key:\n\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n|  Chat by Richard and Eliezer  |  Other chat  |  Google Doc content  |  Inline comments  |\n\n\n\n \n\n\n \n\n\n5. September 14 conversation\n----------------------------\n\n\n \n\n\n### 5.1. 
Recursive self-improvement, abstractions, and miracles\n\n\n \n\n\n\n[Yudkowsky][11:00]\nGood morning / good evening.\n\n\nSo it seems like the obvious thread to pull today is your sense that I’m wrong about recursive self-improvement and consequentialism in a related way?\n\n\n\n\n\n[Ngo][11:04]\nRight. And then another potential thread (probably of secondary importance) is the question of what you mean by utility functions, and digging more into the intuitions surrounding those.\n\n\nBut let me start by fleshing out this RSI/consequentialism claim.\n\n\nI claim that your early writings about RSI focused too much on a very powerful abstraction, of recursively applied optimisation; and too little on the ways in which even powerful abstractions like this one become a bit… let’s say messier, when they interact with the real world.\n\n\nIn particular, I think that [Paul’s arguments](https://sideways-view.com/2018/02/24/takeoff-speeds/) that there will be substantial progress in AI in the leadup to a RSI-driven takeoff are pretty strong ones.\n\n\n(Just so we’re on the same page: to what extent did those arguments end up shifting your credences?)\n\n\n\n\n\n\n[Yudkowsky][11:09]\nI don’t remember being shifted by Paul on this at all. I sure shifted a lot over events like Alpha Zero and the entire deep learning revolution. What does Paul say that isn’t encapsulated in that update – does he furthermore claim that we’re going to get fully smarter-than-human in all regards AI which doesn’t cognitively scale much further either through more compute or through RSI?\n\n\n\n\n\n[Ngo][11:10]\nAh, I see. 
In that case, let’s just focus on the update from the deep learning revolution.\n\n\n\n\n\n\n[Yudkowsky][11:12][11:13]\nI’ll also remark that I see my foreseeable mistake there as having little to do with “abstractions becoming messier when they interact with the real world” – this truism tells you very little of itself, unless you can predict *directional* shifts in other variables just by contemplating the *unknown* messiness relative to the abstraction.\n\n\nRather, I’d see it as a neighboring error to what I’ve called the Law of Earlier Failure, where the Law of Earlier Failure says that, compared to the interesting part of the problem where it’s fun to imagine yourself failing, you usually fail before then, because of the many earlier boring points where it’s possible to fail.\n\n\nThe nearby reasoning error in my case is that I focused on an interesting way that AI capabilities could scale and the most powerful argument I had to overcome Robin’s objections, while missing the way that Robin’s objections could fail even earlier through rapid scaling and generalization in a more boring way.\n\n\n\n\n\n\nIt doesn’t mean that my arguments about RSI were false about their domain of supposed application, but that other things were also true and those things happened first on our timeline. To be clear, I think this is an important and generalizable issue with the impossible task of trying to forecast the Future, and if I am wrong about other things it sure would be plausible if I was wrong in similar ways.\n\n\n\n\n\n[Ngo][11:13]\nThen the analogy here is something like: there is a powerful abstraction, namely consequentialism; and we both agree that (like RSI) a large amount of consequentialism is a very dangerous thing. 
But we disagree on the question of how much the strategic landscape in the leadup to highly-consequentialist AIs is affected by other factors apart from this particular abstraction.\n\n\n“this truism tells you very little of itself, unless you can predict directional shifts in other variables just by contemplating the unknown messiness relative to the abstraction”\n\n\nI disagree with this claim. It seems to me that the predictable direction in which the messiness pushes is *away from* the applicability of the high-level abstraction.\n\n\n\n\n\n\n[Yudkowsky][11:15]\nThe real world is messy, but good abstractions still apply, just with some messiness around them. The Law of Earlier Failure is not a failure of the abstraction being messy, it’s a failure of the *subject matter* ending up different such that the abstractions you used were *about a different subject matter*.\n\n\nWhen a company fails before the exciting challenge where you try to scale your app across a million users, because you couldn’t hire enough programmers to build your app at all, the problem is not that you had an unexpectedly messy abstraction about scaling to many users, but that the key determinants were a different subject matter than “scaling to many users”.\n\n\nThrowing 10,000 TPUs at something and actually getting progress – not very much of a famous technological idiom *at the time I was originally arguing with Robin* – is not a leak in the RSI abstraction, it’s just a way of getting powerful capabilities without RSI.\n\n\n\n\n\n[Ngo][11:18]\nTo me the difference between these two things seems mainly semantic; does it seem otherwise to you?\n\n\n\n\n\n\n[Yudkowsky][11:18]\nIf I’d been arguing with somebody who kept arguing in favor of faster timescales, maybe I’d have focused on that different subject matter and gotten a chance to be explicitly wrong about it. 
I mainly see my ur-failure here as letting myself be influenced by the whole audience that was nodding along very seriously to Robin’s arguments, at the expense of considering how reality might depart in either direction from my own beliefs, and not just how Robin might be right or how to persuade the audience.\n\n\n\n\n\n[Ngo][11:19]\nAlso, “throwing 10,000 TPUs at something and actually getting progress” doesn’t seem like an example of the Law of Earlier Failure – if anything it seems like an Earlier Success\n\n\n\n\n\n\n[Yudkowsky][11:19]\nit’s an Earlier Failure of Robin’s arguments about why AI wouldn’t scale quickly, so my lack of awareness of this case of the Law of Earlier Failure is why I didn’t consider why Robin’s arguments could fail earlier\n\n\nthough, again, this is a bit harder to call if you’re trying to call it in 2008 instead of 2018\n\n\nbut it’s a valid lesson that the future is, in fact, hard to predict, if you’re trying to do it in the past\n\n\nand I would not consider it a merely “semantic” difference as to whether you made a wrong argument about the correct subject matter, or a correct argument about the wrong subject matter\n\n\nthese are like… *very* different failure modes that you learn different lessons from\n\n\nbut if you’re not excited by these particular fine differences in failure modes or lessons to learn from them, we should perhaps not dwell upon that part of the meta-level Art\n\n\n\n\n\n[Ngo][11:21]\nOkay, so let me see if I understand your position here.\n\n\nDue to the deep learning revolution, it turned out that there were ways to get powerful capabilities without RSI. This isn’t intrinsically a (strong) strike against the RSI abstraction; and so, unless we have reason to expect another similarly surprising revolution before reaching AGI, it’s not a good reason to doubt the consequentialism abstraction.\n\n\n\n\n\n\n[Yudkowsky][11:25]\nConsequentialism and RSI are very different notions in the first place. 
Consequentialism is, in my own books, significantly simpler. I don’t see much of a conceptual connection between the two myself, except insofar as they both happen to be part of the connected fabric of a coherent worldview about cognition.\n\n\nIt is entirely reasonable to suspect that we may get another surprising revolution before reaching AGI. Expecting a *particular* revolution that gives you *particular* miraculous benefits is much more questionable and is an instance of conjuring expected good from nowhere, like hoping that you win the lottery because the first lottery ball comes up 37. (Also, if you sincerely believed you actually had info about what kind of revolution might lead to AGI, you should shut up about it and tell very few carefully selected people, not bake it into a public dialogue.)\n\n\n\n\n\n[Ngo][11:28]\n\n> \n> and I would not consider it a merely “semantic” difference as to whether you made a wrong argument about the correct subject matter, or a correct argument about the wrong subject matter\n> \n> \n> \n\n\nOn this point: the implicit premise of “and also nothing else will break this abstraction or render it much less relevant” turns a correct argument about the wrong subject matter into an incorrect argument.\n\n\n\n\n\n\n[Yudkowsky][11:28]\nSure.\n\n\nThough I’d also note that there’s an important lesson of technique where you learn to say things like that out loud instead of keeping them “implicit”.\n\n\nLearned lessons like that are one reason why I go through your summary documents of our conversation and ask for many careful differences of wording about words like “will happen” and so on.\n\n\n\n\n\n[Ngo][11:30]\nMakes sense.\n\n\nSo I claim that:\n\n\n1. A premise like this is necessary for us to believe that your claims about consequentialism lead to extinction.\n\n\n2. A surprising revolution would make it harder to believe this premise, even if we don’t know which *particular* revolution it is.\n\n\n3. 
If we’d been told back in 2008 that a surprising revolution would occur in AI, then we should have been less confident in the importance of the RSI abstraction to understanding AGI and AGI risk.\n\n\n\n\n\n\n[Yudkowsky][11:32][11:34]\nSuppose I put to you that this claim is merely subsumed by all of my previous careful qualifiers about how we might get a “miracle” and how we should be trying to prepare for an unknown miracle in any number of places. Why suspect that place particularly for a model-violation?\n\n\nI also think that you are misinterpreting my old arguments about RSI, in a pattern that matches some other cases of your summarizing my beliefs as “X is the one big ultra-central thing” rather than “X is the point where the other person got stuck and Eliezer had to spend a lot of time arguing”.\n\n\nI was always claiming that RSI was *a* way for AGI capabilities to scale much further *once they got far enough*, not *the* way AI would scale *to human-level generality*.\n\n\n\n\n\n\nThis continues to be a key fact of relevance to my future model, in the form of the unfalsified original argument about the subject matter it previously applied to: if you lose control of a sufficiently smart AGI, it will FOOM, and this fact about what triggers the metaphorical equivalent of a full nuclear exchange and a total loss of the gameboard continues to be extremely relevant to what you have to do to obtain victory instead.\n\n\n\n\n\n[Ngo][11:34][11:35]\nPerhaps we’re interpreting the word “miracle” in quite different ways.\n\n\n\n\n\n\n\nI think of it as an event with negligibly small probability.\n\n\n\n\n\n\n[Yudkowsky][11:35]\nEvents that actually have negligibly small probability are not much use in plans.\n\n\n\n\n\n[Ngo][11:35]\nWhich I guess doesn’t fit with your claims that we should be trying to prepare for a miracle.\n\n\n\n\n\n\n[Yudkowsky][11:35]\nCorrect.\n\n\n\n\n\n[Ngo][11:35]\nBut I’m not recalling off the top of my head where you’ve claimed 
that.\n\n\nI’ll do a quick search of the transcript\n\n\n“You need to hold your mind open for any miracle and a miracle you didn’t expect or think of in advance, because at this point our last hope is that in fact the future is often quite surprising.”\n\n\nOkay, I see. The connotations of “miracle” seemed sufficiently strong to me that I didn’t interpret “you need to hold your mind open” as practical advice.\n\n\nWhat sort of probability, overall, do you assign to us being saved by what you call a miracle?\n\n\n\n\n\n\n[Yudkowsky][11:40]\nIt’s not a place where I find quantitative probabilities to be especially helpful.\n\n\nAnd if I had one, I suspect I would not publish it.\n\n\n\n\n\n[Ngo][11:41]\nCan you leak a bit of information? Say, more or less than 10%?\n\n\n\n\n\n\n[Yudkowsky][11:41]\nLess.\n\n\nThough a lot of that is dominated, not by the probability of a positive miracle, but by the extent to which we seem unprepared to take advantage of it, and so would not be saved by one.\n\n\n\n\n\n[Ngo][11:41]\nYeah, I see.\n\n\n\n\n\n \n\n\n### 5.2. The idea of expected utility\n\n\n \n\n\n\n[Ngo][11:43]\nOkay, I’m now significantly less confident about how much we actually disagree.\n\n\nAt least about the issues of AI cognition.\n\n\n\n\n\n\n[Yudkowsky][11:44]\nYou seem to suspect we’ll get a *particular* miracle having to do with “consequentialism”, which means that although it might be a miracle to me, it wouldn’t be a miracle to you.\n\n\nThere is something forbidden in my model that is not forbidden in yours.\n\n\n\n\n\n[Ngo][11:45]\nI think that’s partially correct, but I’d call it more a *broad range of possibilities* in the rough direction of you being wrong about consequentialism.\n\n\n\n\n\n\n[Yudkowsky][11:46]\nWell, as much as it may be nicer to debate when the other person has a specific positive expectation that X will work, we can also debate when I know that X won’t work and the other person remains ignorant of that. 
So say more!\n\n\n\n\n\n[Ngo][11:47]\nThat’s why I’ve mostly been trying to clarify your models rather than trying to make specific claims of my own.\n\n\nWhich I think I’d prefer to continue doing, if you’re amenable, by asking you about what entities a utility function is defined over – say, in the context of a human.\n\n\n\n\n\n\n[Yudkowsky][11:51][11:53]\nI think that to contain the concept of Utility as it exists in me, you would have to do homework exercises I don’t know how to prescribe. Maybe one set of homework exercises like that would be showing you an agent, including a human, making some set of choices that allegedly couldn’t obey expected utility, and having you figure out how to pump money from that agent (or present it with money that it would pass up).\n\n\nLike, just actually doing that a few dozen times.\n\n\nMaybe it’s not helpful for me to say this? If you say it to Eliezer, he immediately goes, “Ah, yes, I could see how I would update that way after doing the homework, so I will save myself some time and effort and just make that update now without the homework”, but this kind of jumping-ahead-to-the-destination is something that seems to me to be… dramatically missing from many non-Eliezers. They insist on learning things the hard way and then act all surprised when they do. Oh my gosh, who would have thought that an AI breakthrough would suddenly make AI seem less than 100 years away the way it seemed yesterday? 
Oh my gosh, who would have thought that alignment would be difficult?\n\n\nUtility can be seen as the origin of Probability within minds, even though Probability obeys its own, simpler coherence constraints.\n\n\n\n\n\n\nthat is, you will have money pumped out of you, unless you weigh in your mind paths through time according to some quantitative weight, which determines how much resources you’re willing to spend on preparing for them\n\n\nthis is why sapients think of things as being more or less likely\n\n\n\n\n\n[Ngo][11:53]\nSuppose that this agent has some high-level concept – say, honour – which leads it to pass up on offers of money.\n\n\n\n\n\n\n[Yudkowsky][11:55]\n\n> \n> Suppose that this agent has some high-level concept – say, honour – which leads it to pass up on offers of money.\n> \n> \n> \n\n\nthen there’s two possibilities:\n\n\n* this concept of honor is something that you can see as helping to navigate a path through time to a destination\n* honor isn’t something that would be optimized into existence by optimization pressure for other final outcomes\n\n\n\n\n\n[Ngo][11:55]\nRight, I see.\n\n\nHmm, but it seems like humans often don’t see concepts as helping to navigate a path in time to a destination. (E.g. the deontological instinct not to kill.)\n\n\nAnd yet those concepts were in fact optimised into existence by evolution.\n\n\n\n\n\n\n[Yudkowsky][11:59]\nYou’re describing a defect of human reflectivity about their consequentialist structure, not a departure from consequentialist structure. 
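The money-pump homework Yudkowsky describes above can be sketched concretely. A minimal illustration (all names, fees, and numbers here are invented for the example, not taken from the dialogue): an agent with cyclic preferences A > B > C > A will pay a small fee for each swap to an item it prefers, so a trader can walk it around the loop and extract money indefinitely.

```python
# Sketch of a money pump against an agent with intransitive preferences.
# Everything here is illustrative; the dialogue specifies no particular setup.

FEE = 1  # the agent pays this much for any swap to an item it prefers

# Cyclic (intransitive) preferences: prefers[x] is the item the agent
# will pay a fee to swap x for. Note the cycle A -> B -> C -> A.
prefers = {"A": "B", "B": "C", "C": "A"}

def pump(start_item, wealth, rounds):
    """Offer the agent `rounds` swaps; return (item held, wealth, money extracted)."""
    item, extracted = start_item, 0
    for _ in range(rounds):
        item = prefers[item]   # agent happily accepts the preferred item...
        wealth -= FEE          # ...and pays the fee each time
        extracted += FEE
    return item, wealth, extracted

item, wealth, extracted = pump("A", wealth=100, rounds=30)
# After 30 swaps (a multiple of the 3-cycle) the agent holds the same kind
# of item it started with, but is 30 units poorer.
```

An agent whose preferences can be represented by a consistent utility function over the items never enters such a cycle, which is the point of the exercise.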
🙂\n\n\n\n\n[Ngo][12:01]\n(Sorry, internet was slightly buggy; switched to a better connection now.)\n\n\n\n\n\n\n[Yudkowsky][12:01]\nBut yes, from my perspective, it creates a very large conceptual gap that I can stare at something for a few seconds and figure out how to parse it as navigating paths through time, while others think that “consequentialism” only happens when their minds are explicitly thinking about “well, what would have this consequence” using language.\n\n\nSimilarly, when it comes to Expected Utility, I see that any time something is attaching relative-planning-weights to paths through time, not when a human is thinking out loud about putting spoken numbers on outcomes\n\n\n\n\n\n[Ngo][12:02]\nHuman consequentialist structure was optimised by evolution for a different environment. Insofar as we are consequentialists in a new environment, it’s only because we’re able to be reflective about our consequentialist structure (or because there are strong similarities between the environments).\n\n\n\n\n\n\n[Yudkowsky][12:02]\nFalse.\n\n\nIt just generalized out-of-distribution because the underlying coherence of the coherent behaviors was simple.\n\n\nWhen you have a very simple pattern, it can generalize across weak similarities, not “strong similarities”.\n\n\nThe human brain is large but the coherence in it is simple.\n\n\nThe idea, the structure, that explains why the big thing works, is much smaller than the big thing.\n\n\nSo it can generalize very widely.\n\n\n\n\n\n[Ngo][12:04]\nTaking this example of the instinct not to kill people – is this one of the “very simple patterns” that you’re talking about?\n\n\n\n\n\n\n[Yudkowsky][12:05]\n“Reflectivity” doesn’t help per se unless on some core level a pattern already generalizes, I mean, either a truth can generalize across the data or it can’t? 
So I’m a bit puzzled about why you’re bringing up “reflectivity” in this context.\n\n\nAnd, no.\n\n\nAn instinct not to kill doesn’t even seem to me like a plausible cross-cultural universal. 40% of deaths among Yanomami men are in intratribal fights, iirc.\n\n\n\n\n\n[Ngo][12:07]\nAh, I think we were talking past each other. When you said “this concept of honor is something that you can see as helping to navigate a path through time to a destination” I thought you meant “you” as in the agent in question (as you used it in some previous messages) not “you” as in a hypothetical reader.\n\n\n\n\n\n\n[Yudkowsky][12:07]\nah.\n\n\nit would not have occurred to me to ascribe that much competence to an agent that wasn’t a superintelligence.\n\n\neven I don’t have time to think about why more than ~~0.0001%~~ 0.01% of my thoughts do anything, but thankfully, you don’t have to think about *why* 2 + 2 = 4 for it to be the correct answer for counting sheep.\n\n\n\n\n\n[Ngo][12:10]\nGot it.\n\n\nI might now try to throw a high-level (but still inchoate) disagreement at you and see how that goes. 
But while I’m formulating that, I’m curious what your thoughts are on where to take the discussion.\n\n\nActually, let’s spend a few minutes deciding where to go next, and then take a break\n\n\nI’m thinking that, at this point, there might be more value in moving onto geopolitics\n\n\n\n\n\n\n[Yudkowsky][12:19]\nSome of my current thoughts are a reiteration of old despair: It feels to me like the typical Other within EA has no experience with discovering unexpected order, with operating a generalization that you can expect will cover new cases even when that isn’t immediately obvious, with operating that generalization to cover those new cases correctly, with seeing simple structures that generalize a lot and having that be a real and useful and technical experience; instead of somebody blathering in a non-expectation-constraining way about how “capitalism is responsible for everything wrong with the world”, and being able to extend that to lots of cases.\n\n\nI could try to use much simpler language in hopes that people actually [look-at-the-water](https://v.cx/2010/04/feynman-brazil-education) Feynman-style, like “navigating a path through time” instead of Consequentialism which is itself a step down from Expected Utility.\n\n\nBut you actually do lose something when you throw away the more technical concept. And then people still think that either you instantly see in the first second how something is a case of “navigating a path through time”, or that this is something that people only do explicitly when visualizing paths through time using that mental terminology; or, if Eliezer says that it’s “navigating time” anyways, this must be an instance of Eliezer doing that thing other people do when they talk about how “Capitalism is responsible for all the problems of the world”. 
They have no experience operating genuinely useful, genuinely deep generalizations that extend to nonobvious things.\n\n\nAnd in fact, being able to operate some generalizations like that is a lot of how I know what I know, in reality and in terms of the original knowledge that came before trying to argue that knowledge with people. So trying to convey the real source of the knowledge feels doomed. It’s a kind of idea that our civilization has lost, like that college class Feynman ran into.\n\n\n\n\n\n[Soares][12:19]\nMy own sense (having been back for about 20min) is that one of the key cruxes is in “is it possible that non-scary cognition will be able to end the acute risk period”, or perhaps “should we expect a longish regime of pre-scary cognition, that we can study and learn to align in such a way that by the time we get scary cognition we can readily align it”.\n\n\n\n\n\n\n[Ngo][12:19]\nSome potential prompts for that:\n\n\n* what are some scary things which might make governments take AI more seriously than they took covid, and which might happen before AGI\n* how much of a bottleneck in your model is governmental competence? and how much of a difference do you see in this between, say, the US and China?\n\n\n\n\n\n\n[Soares][12:20]\nI also have a bit of a sense that there’s a bit more driving to do on the “perhaps EY is just wrong about the applicability of the consequentialism arguments” (in a similar domain), and would be happy to try articulating a bit of what I think are the not-quite-articulated-to-my-satisfaction arguments on that side.\n\n\n\n\n\n\n[Yudkowsky][12:21]\nI also had a sense – maybe mistaken – that RN did have some *specific* ideas about how “consequentialism” might be inapplicable. though maybe I accidentally refuted that in passing because the idea was “well, what if it didn’t know what consequentialism was?” and then I explained that reflectivity was not required to make consequentialism generalize. 
but if so, I’d like RN to say explicitly what specific idea got refuted that way. or failing that, talk about the specific idea that didn’t get refuted.\n\n\n\n\n\n[Ngo][12:23]\nThat wasn’t my objection, but I do have some more specific ideas, which I could talk about.\n\n\nAnd I’d also be happy for Nate to try articulating some of the arguments he mentioned above.\n\n\n\n\n\n\n[Yudkowsky][12:23]\nI have a general worry that this conversation has gotten too general, and that it would be more productive, even of general understanding, to start from specific ideas and shoot those down specifically.\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n[Ngo][12:26]\nThe other thing is that, for pedagogical purposes, I think it’d be useful for you to express some of your beliefs about how governments will respond to AI\n\n\nI think I have a rough guess about what those beliefs are, but even if I’m right, not everyone who reads this transcript will be\n\n\n\n\n\n\n[Yudkowsky][12:28]\nWhy would I be expected to know *that*? I could talk about weak defaults and iterate through an unending list of possibilities.\n\n\nThinking that Eliezer thinks he knows that to any degree of specificity feels like I’m being weakmanned!\n\n\n\n\n\n[Ngo][12:28]\nI’m not claiming you have any specific beliefs\n\n\n\n\n\n\n[Yudkowsky][12:29]\nI suppose I have skepticism when other people dream up elaborately positive and beneficial reactions apparently drawn from some alternate nicer political universe that had an absolutely different response to Covid-19, and so on.\n\n\n\n\n\n[Ngo][12:29]\nBut I’d guess that your models rule out, for instance, the US and China deeply cooperating on AI before it’s caused any disasters\n\n\n\n\n\n\n[Yudkowsky][12:30]\n“Deeply”? Sure. That sounds like something that has never happened, and I’m generically skeptical about political things that go better than any political thing has ever gone before.\n\n\nI guess we could talk about that? 
It doesn’t seem like the most productive area, but maybe it lies upstream of more technical disagreements because we disagree about what AGI would actually have to do to have the world not end.\n\n\n\n\n\n[Soares][12:31]\nCool. I claim it’s time for a break, and then I nominate a little Eliezer gov’t-response-overview followed by specific maybe-consequentialism-based-worries-aren’t-a-problem-in-practice ideas from Richard.\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][12:32]\nSee you in 28mins\n\n\n\n\n \n\n\n### 5.3. Epistemology, and assessing the idea of expected utility\n\n\n \n\n\n\n[Ngo][13:03]\n\n> \n> Some of my current thoughts are a reiteration of old despair: It feels to me like the typical Other within EA has no experience with discovering unexpected order, with operating a generalization that you can expect will cover new cases even when that isn’t immediately obvious, with operating that generalization to cover those new cases correctly, with seeing simple structures that generalize a lot and having that be a real and useful and technical experience; instead of somebody blathering in a non-expectation-constraining way about how “capitalism is responsible for everything wrong with the world”, and being able to extend that to lots of cases.\n> \n> \n> I could try to use much simpler language in hopes that people actually look-at-the-water Feynman-style, like “navigating a path through time” instead of Consequentialism which is itself a step down from Expected Utility.\n> \n> \n> But you actually do lose something when you throw away the more technical concept. 
And then people still think that either you instantly see in the first second how something is a case of “navigating a path through time”, or that this is something that people only do explicitly when visualizing paths through time using that mental terminology; or, if Eliezer says that it’s “navigating time” anyways, this must be an instance of Eliezer doing that thing other people do when they talk about how “Capitalism is responsible for all the problems of the world”. They have no experience operating genuinely useful, genuinely deep generalizations that extend to nonobvious things.\n> \n> \n> And in fact, being able to operate some generalizations like that is a lot of how I know what I know, in reality and in terms of the original knowledge that came before trying to argue that knowledge with people. So trying to convey the real source of the knowledge feels doomed. It’s a kind of idea that our civilization has lost, like that college class Feynman ran into.\n> \n> \n> \n\n\nOoops, didn’t see this comment earlier. With respect to discovering unexpected order, one point that seems relevant is the extent to which that order provides predictive power. To what extent do you think that predictive successes in economics are important evidence for expected utility theory being a powerful formalism? (Or are there other ways in which it’s predictively powerful that provide significant evidence?)\n\n\nI’d be happy with a quick response to that, and then on geopolitics, here’s a prompt to kick us off:\n\n\n* If the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments? How about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n\n\n\n\n\n\n[Yudkowsky][13:06]\nI think that the Apollo space program is much deeper evidence for Utility. 
Observe, if you train protein blobs to run around the savanna, they also go to the moon!\n\n\nIf you think of “utility” as having something to do with the human discipline called “economics” then you are still thinking of it in a *much much much* more narrow way than I do.\n\n\n\n\n\n[Ngo][13:07]\nI’m not asking about evidence for utility as an abstraction in general, I’m asking for evidence based on successful predictions that have been made using it.\n\n\n\n\n\n\n[Yudkowsky][13:10]\nThat doesn’t tend to happen a lot, because all of the deep predictions that it makes are covered by shallow predictions that people made earlier.\n\n\nConsider the following prediction of evolutionary psychology: Humans will enjoy activities associated with reproduction!\n\n\n“What,” says Simplicio, “you mean like dressing up for dates? I don’t enjoy that part.”\n\n\n“No, you’re overthinking it, we meant orgasms,” says the evolutionary psychologist.\n\n\n“But I already knew that, that’s just common sense!” replies Simplicio.\n\n\n“And yet it is very specifically a prediction of evolutionary psychology which is not made specifically by any other theory of human minds,” replies the evolutionary psychologist.\n\n\n“Not an advance prediction, just-so story, too obvious,” replies Simplicio.\n\n\n\n\n\n[Ngo][13:11]\nYepp, I agree that most of its predictions won’t be new. 
Yet evolution is a sufficiently powerful theory that people have still come up with a range of novel predictions that derive from it.\n\n\nInsofar as you’re claiming that expected utility theory is also very powerful, then we should expect that it also provides some significant predictions.\n\n\n\n\n\n\n[Yudkowsky][13:12]\nAn advance prediction of the notion of Utility, I suppose, is that if you train an AI which is otherwise a large blob of layers – though this may be inadvisable for other reasons – to the point where it starts solving lots of novel problems, that AI will tend to value aspects of outcomes with weights, and weight possible paths through time (the dynamic progress of the environment), and use (by default, usually, roughly) the multiplication of these weights to allocate limited resources between mutually conflicting plans.\n\n\n\n\n\n[Ngo][13:13]\nAgain, I’m asking for evidence in the form of successful predictions.\n\n\n\n\n\n\n[Yudkowsky][13:14]\nI predict that people will want some things more than others, think some possibilities are more likely than others, and prefer to do things that lead to stuff they want a lot through possibilities they think are very likely!\n\n\n\n\n\n[Ngo][13:15]\nIt would be very strange to me if a theory which makes such strong claims about things we can’t yet verify can’t shed light on *anything* which we are in a position to verify.\n\n\n\n\n\n\n[Yudkowsky][13:15]\nIf you think I’m deriving my predictions of catastrophic alignment failure through something *more exotic* than that, you’re missing the reason *why I’m so worried*. It doesn’t *take* intricate complicated exotic assumptions.\n\n\nIt makes the same kind of claims about things we can’t verify yet as it makes about things we can verify right now.\n\n\n\n\n\n[Ngo][13:16]\nBut that’s very easy to do! 
Any theory can do that.\n\n\n\n\n\n\n[Yudkowsky][13:17]\nFor example, if somebody wants money, and you set up a regulation which prevents them from making money, it predicts that the person will look for a new way to make money that bypasses the regulation.\n\n\n\n\n\n[Ngo][13:17]\nAnd yes, of course fitting previous data is important evidence in favour of a theory\n\n\n\n\n\n\n[Yudkowsky][13:17]\n\n> \n> [But that’s very easy to do! Any theory can do that.]\n> \n> \n> \n\n\nFalse! Any theory can do that in the hands of a fallible agent which invalidly, incorrectly derives predictions from the theory.\n\n\n\n\n\n[Ngo][13:18]\nWell, indeed. But the very point at hand is whether the predictions you base on this theory are correctly or incorrectly derived.\n\n\n\n\n\n\n[Yudkowsky][13:18]\nIt is not the case that every theory does an equally good job of predicting the past, given valid derivations of predictions.\n\n\nWell, hence the analogy to evolutionary psychology. If somebody doesn’t see the blatant obviousness of how sexual orgasms are a prediction specifically of evolutionary theory, because it’s “common sense” and “not an advance prediction”, what are you going to do? We can, in this case, with a *lot* more work, derive more detailed advance predictions about degrees of wanting that correlate in detail with detailed fitness benefits. 
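The structure Yudkowsky earlier called an advance prediction of the notion of Utility — relative value weights on outcomes, probability weights on paths through time, and their product used to allocate limited resources between mutually conflicting plans — can be written out as a toy calculation. The plans and numbers below are invented purely for illustration:

```python
# Minimal sketch of expected-utility choice between conflicting plans.
# Each plan is a list of (weight on a path through time, value of its outcome).
plans = {
    "plan_x": [(0.8, 10), (0.2, 0)],
    "plan_y": [(0.5, 30), (0.5, -5)],
}

def expected_utility(paths):
    # Multiply each path's probability weight by its outcome's value, and sum.
    return sum(p * u for p, u in paths)

# Resources go to the plan with the highest product-and-sum score.
best = max(plans, key=lambda name: expected_utility(plans[name]))
# expected_utility gives plan_x = 8.0 and plan_y = 12.5, so "plan_y" is chosen.
```

Nothing in the sketch requires the agent to represent these weights explicitly or verbally; the claim in the dialogue is about the structure of the behavior, not about introspective arithmetic.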
But that’s not going to convince anybody who overlooked the really blatant and obvious primary evidence.\n\n\nWhat they’re missing there is a sense of counterfactuals, of how the universe could just as easily have looked if the evolutionary origins of psychology were false: why should organisms want things associated with reproduction, why not instead have organisms running around that want things associated with rolling down hills?\n\n\nSimilarly, if optimizing complicated processes for outcomes hard enough, didn’t produce cognitive processes that internally mapped paths through time and chose actions conditional on predicted outcomes, human beings would… not think like that? What am I supposed to say here?\n\n\n\n\n\n[Ngo][13:24]\nLet me put it this way. There are certain traps that, historically, humans have been very liable to fall into. For example, seeing a theory, which seems to match so beautifully and elegantly the data which we’ve collected so far, it’s very easy to dramatically overestimate how much that data favours that theory. Fortunately, science has a very powerful social technology for avoiding this (i.e. making falsifiable predictions) which seems like approximately the only reliable way to avoid it – and yet you don’t seem concerned at all about the lack of application of this technology to expected utility theory.\n\n\n\n\n\n\n[Yudkowsky][13:25]\nThis is territory I covered in the Sequences, exactly because “well it didn’t make a good enough advance prediction yet!” is an excuse that people use to reject evolutionary psychology, some other stuff I covered in the Sequences, and some very predictable lethalities of AGI.\n\n\n\n\n\n[Ngo][13:26]\nWith regards to evolutionary psychology: yes, there are some blatantly obvious ways in which it helps explain the data available to us. 
But there are also many people who have misapplied or overapplied evolutionary psychology, and it’s very difficult to judge whether they have or have not done so, without asking them to make advance predictions.\n\n\n\n\n\n\n[Yudkowsky][13:26]\nI talked about the downsides of allowing humans to reason like that, the upsides, the underlying theoretical laws of epistemology (which are clear about why agents that reason validly or just unbiasedly would do that without the slightest hiccup), etc etc.\n\n\nIn the case of the theory “people want stuff relatively strongly, predict stuff relatively strongly, and combine the strengths to choose”, what kind of advance prediction that no other theory could possibly make, do you expect that theory to make?\n\n\nIn the worlds where that theory is true, how should it be able to prove itself to you?\n\n\n\n\n\n[Ngo][13:28]\nI expect deeper theories to make more and stronger predictions.\n\n\nI’m currently pretty uncertain if expected utility theory is a deep or shallow theory.\n\n\nBut deep theories tend to shed light in all sorts of unexpected places.\n\n\n\n\n\n\n[Yudkowsky][13:30]\nThe fact is, when it comes to AGI (general optimization processes), we have only two major datapoints in our dataset, natural selection and humans. 
So you can either try to reason validly about what theories predict about natural selection and humans, even though we’ve already seen the effects of those; or you can claim to give up in great humble [modesty](https://equilibriabook.com/inadequacy-and-modesty/) while actually using other implicit theories instead to make all your predictions and be confident in them.\n\n\n\n\n\n[Ngo][13:30]\n\n> \n> I talked about the downsides of allowing humans to reason like that, the upsides, the underlying theoretical laws of epistemology (which are clear about why agents that reason validly or just unbiasedly would do that without the slightest hiccup), etc etc.\n> \n> \n> \n\n\nI’m familiar with your writings on this, which is why I find myself surprised here. I could understand a perspective of “yes, it’s unfortunate that there are no advanced predictions, it’s a significant weakness, I wish more people were doing this so we could better understand this vitally important theory”. But that seems very different from your perspective here.\n\n\n\n\n\n\n[Yudkowsky][13:32]\nOh, I’d love to be making predictions using a theory that made super detailed advance predictions made by no other theory which had all been borne out by detailed experimental observations! I’d also like ten billion dollars, a national government that believed everything I honestly told them about AGI, and a drug that raises IQ by 20 points.\n\n\n\n\n\n[Ngo][13:32]\nThe very fact that we have only two major datapoints is exactly why it seems like such a major omission that a theory which purports to describe intelligent agency has not been used to make any successful predictions about the datapoints we do have.\n\n\n\n\n\n\n[Yudkowsky][13:32][13:33]\nThis is making me think that you imagine the theory as something much more complicated and narrow than it is.\n\n\nJust look at the water.\n\n\nNot very special water with an index.\n\n\nJust regular water.\n\n\nPeople want stuff. 
They want some things more than others. When they do stuff they expect stuff to happen.\n\n\n\n\n\n\nThese are *predictions of the theory*. Not advance predictions, but predictions nonetheless.\n\n\n\n\n\n[Ngo][13:33][13:33]\nI’m accepting your premise that it’s something deep and fundamental, and making the claim that deep, fundamental theories are likely to have a wide range of applications, including ones we hadn’t previously thought of.\n\n\n\n\n\n\n\nDo you disagree with that premise, in general?\n\n\n\n\n\n\n[Yudkowsky][13:36]\nI don’t know what you really mean by “deep fundamental theory” or “wide range of applications we hadn’t previously thought of”, especially when it comes to structures that are this simple. It sounds like you’re still imagining something I mean by Expected Utility which is some narrow specific theory like a particular collection of gears that are appearing in lots of places.\n\n\nAre numbers a deep fundamental theory?\n\n\nIs addition a deep fundamental theory?\n\n\nIs probability a deep fundamental theory?\n\n\nIs the notion of the syntax-semantics correspondence in logic and the notion of a generally semantically valid reasoning step, a deep fundamental theory?\n\n\n\n\n\n[Ngo][13:38]\nYes to the first three, all of which led to very successful novel predictions.\n\n\n\n\n\n\n[Yudkowsky][13:38]\nWhat’s an example of a novel prediction made by the notion of probability?\n\n\n\n\n\n[Ngo][13:38]\nMost applications of the central limit theorem.\n\n\n\n\n\n\n[Yudkowsky][13:39]\nThen I should get to claim every kind of optimization algorithm which used expected utility, as a successful advance prediction of expected utility? Optimal stopping and all the rest? Seems cheap and indeed invalid to me, and not particularly germane to whether these things appear inside AGIs, but if that’s what you want, then sure.\n\n\n\n\n\n[Ngo][13:39]\n\n> \n> These are *predictions of the theory*. 
Not advance predictions, but predictions nonetheless.\n> \n> \n> \n\n\nI agree that it is a prediction of the theory. And yet it’s also the case that smarter people than either of us have been dramatically mistaken about how well theories fit previously-collected data. (Admittedly we have advantages which they didn’t, like a better understanding of cognitive biases – but it seems like you’re ignoring the possibility of those cognitive biases applying to us, which largely negates those advantages.)\n\n\n\n\n\n\n[Yudkowsky][13:42]\nI’m not ignoring it, just adjusting my confidence levels and proceeding, instead of getting stuck in an infinite epistemic trap of self-doubt.\n\n\nI don’t live in a world where you either have the kind of detailed advance experimental predictions that should convince the most skeptical scientist and render you immune to all criticism, or, alternatively, you are suddenly in a realm beyond the reach of all epistemic authority, and you ought to cuddle up into a ball and rely only on wordless intuitions and trying to put equal weight on good things happening and bad things happening.\n\n\nI live in a world where I proceed with very strong confidence if I have a detailed formal theory that made detailed correct advance predictions, and otherwise go around saying, “well, it sure looks like X, but we can be on the lookout for a miracle too”.\n\n\nIf this was a matter of thermodynamics, I wouldn’t even be talking like this, and we wouldn’t even be having this debate.\n\n\nI’d just be saying, “Oh, that’s a perpetual motion machine. You can’t build one of those. Sorry.” And that would be the end.\n\n\nMeanwhile, political superforecasters go on making well-calibrated predictions about matters much murkier and more complicated than these, often without anything resembling a clearly articulated theory laid forth at length, let alone one that had made specific predictions even retrospectively. 
They just go do it instead of feeling helpless about it.\n\n\n\n\n\n[Ngo][13:45]\n\n> \n> Then I should get to claim every kind of optimization algorithm which used expected utility, as a successful advance prediction of expected utility? Optimal stopping and all the rest? Seems cheap and indeed invalid to me, and not particularly germane to whether these things appear inside AGIs, but if that’s what you want, then sure.\n> \n> \n> \n\n\nThese seem better than nothing, but still fairly unsatisfying, insofar as I think they are related to more shallow properties of the theory.\n\n\nHmm, I think you’re mischaracterising my position. I nowhere advocated for feeling helpless or curling up in a ball. I was just noting that this is a particularly large warning sign which has often been valuable in the past, and it seemed like you were not only speeding past it blithely, but also denying the existence of this category of warning signs.\n\n\n\n\n\n\n[Yudkowsky][13:48]\nI think you’re looking for some particular kind of public obeisance that I don’t bother to perform internally because I’d consider it a wasted motion. If I’m lost in a forest I don’t bother going around loudly talking about how I need a forest theory that makes detailed advance experimental predictions in controlled experiments, but, alas, I don’t have one, so now I should be very humble. 
I try to figure out which way is north.\n\n\nWhen I have a guess at a northerly direction, it would then be an error to proceed with as much confidence as if I’d had a detailed map and had located myself upon it.\n\n\n\n\n\n[Ngo][13:49]\nInsofar as I think we’re less lost than you do, then the weaknesses of whichever forest theory implies that we’re lost are relevant for this discussion.\n\n\n\n\n\n\n[Yudkowsky][13:49]\nThe obeisance I make in that direction is visible in such statements as, “But this, of course, is a prediction about the future, which is well-known to be quite difficult to predict, in fact.”\n\n\nIf my statements had been matters of thermodynamics and particle masses, I would *not* be adding that disclaimer.\n\n\nBut most of life is not a statement about particle masses. I have some idea of how to handle that. I do not need to constantly recite disclaimers to myself about it.\n\n\nI know how to proceed when I have only a handful of data points which have already been observed and my theories of them are retrospective theories. This happens to me on a daily basis, eg when dealing with human beings.\n\n\n\n\n\n[Soares][13:50]\n(I have a bit of a sense that we’re going in a circle. It also seems to me like there’s some talking-past happening.)\n\n\n(I suggest a 5min break, followed by EY attempting to paraphrase RN to his satisfaction and vice versa.)\n\n\n\n\n\n\n[Yudkowsky][13:51]\nI’d have more trouble than usual paraphrasing RN because epistemic helplessness is something I find painful to type out.\n\n\n\n\n\n[Soares][13:51]\n(I’m also happy to attempt to paraphrase each point as I see it; it may be that this smooths over some conversational wrinkle.)\n\n\n\n\n\n\n[Ngo][13:52]\nSeems like a good suggestion. I’m also happy to move on to the next topic. This was meant to be a quick clarification.\n\n\n\n\n\n\n[Soares][13:52]\n*nod*. 
It does seem to me like it possibly contains a decently sized meta-crux, about what sorts of conclusions one is licensed to draw from what sorts of observations\n\n\nthat, eg, might be causing Eliezer’s probabilities to concentrate but not Richard’s.\n\n\n\n\n\n\n[Yudkowsky][13:52]\nYeah, this is in the opposite direction of “more specificity”.\n\n\n\n\n\n| | |\n| --- | --- |\n| [Soares: 😝] | [Ngo: 😆] |\n\n\n\nI frankly think that most EAs suck at explicit epistemology, OpenPhil and FHI affiliated EAs are not much of an exception to this, and I expect I will have more luck talking people out of specific errors than talking them out of the infinite pit of humble ignorance considered abstractly.\n\n\n\n\n\n[Soares][13:54]\nOk, that seems to me like a light bid to move to the next topic from both of you, my new proposal is that we take a 5min break and then move to the next topic, and perhaps I’ll attempt to paraphrase each point here in my notes, and if there’s any movement in the comments there we can maybe come back to it later.\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Ngo][13:54]\nBroadly speaking I am also strongly against humble ignorance (albeit to a lesser extent than you are).\n\n\n\n\n\n\n[Yudkowsky][13:55]\nI’m off to take a 5-minute break, then!\n\n\n\n\n \n\n\n### 5.4. Government response and economic impact\n\n\n \n\n\n\n[Ngo][14:02]\nA meta-level note: I suspect we’re around the point of hitting significant diminishing marginal returns from this format. I’m open to putting more time into the debate (broadly construed) going forward, but would probably want to think a bit about potential changes in format.\n\n\n\n\n\n\n[Soares][14:04, moved two up in log]\n\n> \n> A meta-level note: I suspect we’re around the point of hitting significant diminishing marginal returns from this format. 
I’m open to putting more time into the debate (broadly construed) going forward, but would probably want to think a bit about potential changes in format.\n> \n> \n> \n\n\n(Noted, thanks!)\n\n\n\n\n\n\n[Yudkowsky][14:03]\nI actually think that may just be a matter of at least one of us, including Nate, having to take on the thankless job of shutting down all digressions into abstractions and the meta-level.\n\n\n\n\n\n[Ngo][14:05]\n\n> \n> I actually think that may just be a matter of at least one of us, including Nate, having to take on the thankless job of shutting down all digressions into abstractions and the meta-level.\n> \n> \n> \n\n\nI’m not so sure about this, because it seems like some of the abstractions are doing a lot of work.\n\n\n\n\n\n\n[Yudkowsky][14:03][14:04]\nAnyways, government reactions?\n\n\nIt seems to me like the best observed case for government reactions – which I suspect is no longer available in the present era as a possibility – was the degree of cooperation between the USA and Soviet Union about avoiding nuclear exchanges.\n\n\nThis included such incredibly extravagant acts of cooperation as installing a direct line between the President and Premier!\n\n\n\n\n\n\nwhich is not what I would really characterize as very “deep” cooperation, but it’s more than a lot of cooperation you see nowadays.\n\n\nMore to the point, both the USA and Soviet Union proactively avoided doing anything that might lead towards starting down a path that led to a full nuclear exchange.\n\n\n\n\n\n[Ngo][14:04]\nThe question I asked earlier:\n\n\n* If the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments? 
How about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n\n\n\n\n\n\n[Yudkowsky][14:05]\nThey still provoked one another a lot, but, whenever they did so, tried to do so in a way that wouldn’t lead to a full nuclear exchange.\n\n\nIt was mutually understood to be a strategic priority and lots of people on both sides thought a lot about how to avoid it.\n\n\nI don’t know if that degree of cooperation ever got to the fantastic point of having people from *both* sides in the *same* room brainstorming *together* about how to avoid a full nuclear exchange, because that is, like, more cooperation than you would normally expect from two governments, but it wouldn’t *shock* me to learn that this had ever happened.\n\n\nIt seems obvious to me that if some situation developed nowadays which increased the profile possibility of a nuclear exchange between the USA and Russia, we would not currently be able to do anything like installing a Hot Line between the US and Russian offices if such a Hot Line had not already been installed. This is lost social technology from a lost golden age. 
But still, it’s not unreasonable to take this as the upper bound of attainable cooperation; it’s been observed within the last 100 years.\n\n\nAnother guess for how governments react is a very simple and robust one backed up by a huge number of observations:\n\n\nThey don’t.\n\n\nThey have the same kind of advance preparation and coordination around AGI, in advance of anybody getting killed, as governments had around the mortgage crisis of 2007 in advance of any mortgages defaulting.\n\n\nI am not sure I’d put this probability over 50% but it’s certainly by far the largest probability over any competitor possibility specified to an equally low amount of detail.\n\n\nI would expect anyone whose primary experience was with government, who was just approaching this matter and hadn’t been talked around to weird exotic views, to tell you the same thing as a matter of course.\n\n\n\n\n\n[Ngo][14:10]\n\n> \n> But still, it’s not unreasonable to take this as the upper bound of attainable cooperation; it’s been observed within the last 100 years.\n> \n> \n> \n\n\nIs this also your upper bound conditional on a world that has experienced a century’s worth of changes within a decade, and in which people are an order of magnitude wealthier than they currently are?\n\n\n\n> \n> I am not sure I’d put this probability over 50% but it’s certainly by far the largest probability over any competitor possibility specified to an equally low amount of detail.\n> \n> \n> \n\n\nwhich one was this? US/UK?\n\n\n\n\n\n\n[Yudkowsky][14:12][14:14]\nAssuming governments do react, we have the problem of “What kind of heuristic could have correctly led us to forecast that the US’s reaction to a major pandemic would be for the FDA to ban hospitals from doing in-house Covid tests? 
What kind of mental process could have led us to make that call?” And we couldn’t have gotten it exactly right, because the future is hard to predict; the best heuristic I’ve come up with, that feels like it at least would not have been *surprised* by what actually happened, is, “The government will react with a flabbergasting level of incompetence, doing exactly the wrong thing, in some unpredictable specific way.”\n\n\n\n> \n> which one was this? US/UK?\n> \n> \n> \n\n\nI think if we’re talking about any single specific government like the US or UK then the probability is over 50% that they don’t react in any advance coordinated way to the AGI crisis, *to a greater and more effective degree* than they “reacted in an advance coordinated way” to pandemics before 2020 or mortgage defaults before 2007.\n\n\n\n\n\n\nMaybe *some* two governments somewhere on Earth will have a high-level discussion between two cabinet officials.\n\n\n\n\n\n[Ngo][14:14]\nThat’s one lesson you could take away. Another might be: governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms.\n\n\n\n\n\n\n[Yudkowsky][14:15]\n\n> \n> That’s one lesson you could take away. 
Another might be: governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms.\n> \n> \n> \n\n\nI just… don’t know what to do when people talk like this.\n\n\nIt’s so absurdly, absurdly optimistic.\n\n\nIt’s taking a massive massive failure and trying to find exactly the right abstract gloss to put on it that makes it sound like exactly the right perfect thing will be done next time.\n\n\nThis just – isn’t how to understand reality.\n\n\nThis isn’t how superforecasters think.\n\n\nThis isn’t *sane*.\n\n\n\n\n\n[Soares][14:16]\n(be careful about ad hominem)\n\n\n(Richard might not be doing the insane thing you’re imagining, to generate that sentence, etc)\n\n\n\n\n\n\n[Ngo][14:17]\nRight, I’m not endorsing this as my mainline prediction about what happens. Mainly what I’m doing here is highlighting that your view seems like one which cherrypicks *pessimistic* interpretations.\n\n\n\n\n\n\n[Yudkowsky][14:18]\nThat abstract description “governments will be very willing to restrict the use of novel technologies, even at colossal expense, in the face of even a small risk of large harms” does not in fact apply very well to the FDA banning hospitals from using their well-established in-house virus tests, at risk of the alleged harm of some tests giving bad results, when in fact the CDC’s tests were giving bad results and much larger harms were on the way because of bottlenecked testing; and that abstract description should have applied to an effective and globally coordinated ban against gain-of-function research, which *didn’t* happen.\n\n\n\n\n\n[Ngo][14:19]\nAlternatively: what could have led us to forecast that many countries would impose unprecedentedly severe lockdowns?\n\n\n\n\n\n\n[Yudkowsky][14:19][14:21][14:21]\nWell, I didn’t! I didn’t even realize that was an option! 
I thought Covid was just going to rip through everything.\n\n\n(Which, to be clear, it still may, and Delta arguably is in the more primitive tribal areas of the USA, as well as many other countries around the world that can’t afford vaccines financially rather than epistemically.)\n\n\n\n\n\n\nBut there’s a really really basic lesson here about the different style of “sentences found in political history books” rather than “sentences produced by people imagining ways future politics could handle an issue successfully”.\n\n\n\n\n\n\nReality is *so much worse* than people imagining what might happen to handle an issue successfully.\n\n\n\n\n\n[Ngo][14:21][14:21][14:22]\nI might nudge us away from covid here, and towards the questions I asked before.\n\n\n\n\n\n\n\n\n> \n> The question I asked earlier:\n> \n> \n> * If the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments? 
How about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n> \n> \n> \n\n\nThis being one.\n\n\n\n\n\n\n\n\n> \n> “But still, it’s not unreasonable to take this as the upper bound of attainable cooperation; it’s been observed within the last 100 years.” Is this also your upper bound conditional on a world that has experienced a century’s worth of changes within a decade, and in which people are an order of magnitude wealthier than they currently are?\n> \n> \n> \n\n\nAnd this being the other.\n\n\n\n\n\n\n[Yudkowsky][14:22]\n\n> \n> Is this also your upper bound conditional on a world that has experienced a century’s worth of changes within a decade, and in which people are an order of magnitude wealthier than they currently are?\n> \n> \n> \n\n\nI don’t expect this to happen at all, or even come remotely close to happening; I expect AGI to kill everyone before self-driving cars are commercialized.\n\n\n\n\n\n[Yudkowsky][16:29]  (Nov. 14 follow-up comment)\n(This was incautiously put; maybe strike “expect” and put in “would not be the least bit surprised if” or “would very tentatively guess that”.)\n\n\n\n\n\n[Ngo][14:23]\nah, I see\n\n\nOkay, maybe here’s a different angle which I should have been using. What’s the most impressive technology you expect to be commercialised before AGI kills everyone?\n\n\n\n\n\n\n[Yudkowsky][14:24]\n\n> \n> If the only two actors involved in AGI development were the US and the UK governments, how much safer (or less safe) would you think we were compared with a world in which the two actors are the US and Chinese governments?\n> \n> \n> \n\n\nVery hard to say; the UK is friendlier but less grown-up. 
We would obviously be VASTLY safer in any world where only two centralized actors (two effective decision processes) could ever possibly build AGI, though not safe / out of the woods / at over 50% survival probability.\n\n\n\n> \n> How about a world in which the US government was a decade ahead of everyone else in reaching AGI?\n> \n> \n> \n\n\nVastly safer and likewise impossibly miraculous, though again, not out of the woods at all / not close to 50% survival probability.\n\n\n\n> \n> What’s the most impressive technology you expect to be commercialised before AGI kills everyone?\n> \n> \n> \n\n\nThis is incredibly hard to predict. If I actually had to predict this for some reason I would probably talk to Gwern and Carl Shulman. In principle, there’s nothing preventing me from knowing something about Go which lets me predict in 2014 that Go will probably fall in two years, but in practice I did not do that and I don’t recall anybody else doing it either. It’s really quite hard to figure out how much cognitive work a domain requires and how much work known AI technologies can scale to with more compute, let alone predict AI breakthroughs.\n\n\n\n\n\n[Ngo][14:27]\nI’d be happy with some very rough guesses\n\n\n\n\n\n\n[Yudkowsky][14:27]\nIf you want me to spin a scifi scenario, I would not be surprised to find online anime companions carrying on impressively humanlike conversations, because this is a kind of technology that can be deployed without major corporations signing on or regulatory approval.\n\n\n\n\n\n[Ngo][14:28]\nOkay, this is surprising; I expected something more advanced.\n\n\n\n\n\n\n[Yudkowsky][14:29]\nArguably AlphaFold 2 is already more advanced than that, along certain dimensions, but it’s no coincidence that afaik people haven’t really done much with AlphaFold 2 and it’s made no visible impact on GDP.\n\n\nI expect GDP not to depart from previous trendlines before the world ends, would be a more general way of putting 
it.\n\n\n\n\n\n[Ngo][14:29]\nWhat’s the ~~most~~ least impressive technology that your model strongly rules out happening before AGI kills us all?\n\n\n\n\n\n\n[Yudkowsky][14:30]\nyou mean least impressive?\n\n\n\n\n\n[Ngo][14:30]\noops, yes\n\n\nThat seems like a structurally easier question to answer\n\n\n\n\n\n\n[Yudkowsky][14:30]\n“Most impressive” is trivial. “Dyson Spheres” answers it.\n\n\nOr, for that matter, “perpetual motion machines”.\n\n\n\n\n\n[Ngo][14:31]\nAh yes, I was thinking that Dyson spheres were a bit too prosaic\n\n\n\n\n\n\n[Yudkowsky][14:32]\nMy model mainly rules out that we get to certain points and then hang around there for 10 years while the technology gets perfected, commercialized, approved, adopted, ubiquitized enough to produce a visible trendline departure on the GDP graph; not so much various technologies themselves being initially demonstrated in a lab.\n\n\nI expect that the people who build AGI can build a self-driving car if they want to. Getting it approved and deployed before the world ends is quite another matter.\n\n\n\n\n\n[Ngo][14:33]\nOpenAI has commercialised GPT-3\n\n\n\n\n\n\n[Yudkowsky][14:33]\nHasn’t produced much of a bump in GDP as yet.\n\n\n\n\n\n[Ngo][14:33]\nI wasn’t asking about that, though\n\n\nI’m more interested in judging how hard you think it is for AIs to take over the world\n\n\n\n\n\n\n[Yudkowsky][14:34]\nI note that it seems to me like there is definitely a kind of thinking here, which, if told about GPT-3 five years ago, would talk in very serious tones about how much this technology ought to be predicted to shift GDP, and whether we could bet on that.\n\n\nBy “take over the world” do you mean “turn the world into paperclips” or “produce 10% excess of world GDP over predicted trendlines”?\n\n\n\n\n\n[Ngo][14:35]\nTurn world into paperclips\n\n\n\n\n\n\n[Yudkowsky][14:36]\nI expect this mainly happens as a result of superintelligence, which is way up in the stratosphere far above the minimum 
required cognitive capacities to get the job done?\n\n\nThe interesting question is about humans trying to deploy a corrigible AGI thinking in a restricted domain, trying to flip the gameboard / “take over the world” without full superintelligence?\n\n\nI’m actually not sure what you’re trying to get at here.\n\n\n\n\n\n[Soares][14:37]\n(my guess, for the record, is that the crux Richard is attempting to drive for here, is centered more around something like “will humanity spend a bunch of time in the regime where there are systems capable of dramatically increasing world GDP, and if not how can you be confident of that from here”)\n\n\n\n\n\n\n[Yudkowsky][14:38]\nThis is not the sort of thing I feel Confident about.\n\n\n\n\n\n[Yudkowsky][16:31]  (Nov. 14 follow-up comment)\n(My confidence here seems understated.  I am very pleasantly surprised if we spend 5 years hanging around with systems that can dramatically increase world GDP and those systems are actually being used for that.  There isn’t one dramatic principle which prohibits that, so I’m not Confident, but it requires multiple nondramatic events to go not as I expect.)\n\n\n\n\n\n[Ngo][14:38]\nYeah, that’s roughly what I’m going for. Or another way of putting it: we have some disagreements about the likelihood of humans being able to get an AI to do a pivotal act which saves the world. 
So I’m trying to get some estimates for what the hardest act you think humans *can* get an AI to do is.\n\n\n\n\n\n\n[Soares][14:39]\n(and that a difference here causes, eg, Richard to suspect the relevant geopolitics happen after a century of progress in 10y, everyone being suddenly much richer in real terms, and a couple of warning shots, whereas Eliezer expects the relevant geopolitics to happen the day after tomorrow, with “realistic human-esque convos” being the sort of thing we get instead of warning shots)\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][14:40]\nI mostly do not expect pseudo-powerful but non-scalable AI powerful enough to increase GDP, hanging around for a while. But if it happens then I don’t feel I get to yell “what happened?” at reality, because there’s an obvious avenue for it to happen: something GDP-increasing proved tractable to non-deeply-general AI systems.\n\n\nwhere GPT-3 is “not deeply general”\n\n\n\n\n\n[Ngo][14:40]\nAgain, I didn’t ask about GDP increases, I asked about impressive acts (in order to separate out the effects of AI capabilities from regulatory effects, people-having-AI-but-not-using-it, etc).\n\n\nWhere you can use whatever metric of impressiveness you think is reasonable.\n\n\n\n\n\n\n[Yudkowsky][14:42]\nso there’s two questions here, one of which is something like, “what is the most impressive thing you can do while still being able to align stuff and make it corrigible”, and one of which is “if there’s an incorrigible AI whose deeds are being exhibited by fools, what impressive things might it do short of ending the world”.\n\n\nand these are both problems that are hard for the same reason I did not predict in 2014 that Go would fall in 2016; it can in fact be quite hard – even with a domain as fully lawful and known as Go – to figure out which problems will fall to which level of cognitive capacity.\n\n\n\n\n\n[Soares][14:43]\nNate’s attempted rephrasing: EY’s model might not be confident 
that there’s not big GDP boosts, but it does seem pretty confident that there isn’t some “half-capable” window between the shallow-pattern-memorizer stuff and the scary-laserlike-consequentialist stuff, and in particular Eliezer seems confident humanity won’t slowly traverse that capability regime\n\n\n\n\n\n\n[Yudkowsky][14:43]\nthat’s… allowed? I don’t get to yell at reality if that happens?\n\n\n\n\n\n[Soares][14:44]\nand (shakier extrapolation), that regime is where a bunch of Richard’s hope lies (eg, in the beginning of that regime we get to learn how to do practical alignment, and also the world can perhaps be saved midway through that regime using non-laserlike-systems)\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][14:45]\nso here’s an example of a thing I don’t think you can do without the world ending: get an AI to build a nanosystem or biosystem which can synthesize two strawberries identical down to the cellular but not molecular level, and put them on a plate\n\n\nthis is why I use this capability as the definition of a “powerful AI” when I talk about “powerful AIs” being hard to align, if I don’t want to start by explicitly arguing about pivotal acts\n\n\nthis, I think, is going to end up being first doable using a laserlike world-ending system\n\n\nso even if there’s a way to do it with no lasers, that happens later and the world ends before then\n\n\n\n\n\n[Ngo][14:47]\nOkay, that’s useful.\n\n\n\n\n\n\n[Yudkowsky][14:48]\nit feels like the critical bar there is something like “invent a whole engineering discipline over a domain where you can’t run lots of cheap simulations in full detail”\n\n\n\n\n\n[Ngo][14:49]\n(Meta note: let’s wrap up in 10 mins? 
I’m starting to feel a bit sleepy.)\n\n\n\n\n\n| | |\n| --- | --- |\n| [Yudkowsky: 👍] | [Soares: 👍] |\n\n\n\nThis seems like a pretty reasonable bar\n\n\nLet me think a bit about where to go from that\n\n\nWhile I’m doing so, since this question of takeoff speeds seems like an important one, I’m wondering if you could gesture at your biggest disagreement with this post:\n\n\n\n\n\n\n[Yudkowsky][14:51]\nOh, also in terms of scifi possibilities, I can imagine seeing 5% GDP loss because text transformers successfully scaled to automatically filing lawsuits and environmental impact objections.\n\n\nMy read on the entire modern world is that GDP is primarily constrained by bureaucratic sclerosis rather than by where the technological frontiers lie, so AI ends up impacting GDP mainly insofar as it allows new ways to bypass regulatory constraints, rather than insofar as it allows new technological capabilities. I expect a sudden transition to paperclips, not just because of how fast I expect cognitive capacities to scale over time, but because nanomachines eating the biosphere bypass regulatory constraints, whereas earlier phases of AI will not be advantaged relative to all the other things we have the technological capacity to do but which aren’t legal to do.\n\n\n\n\n\n[Shah][12:13]  (Sep. 21 follow-up comment)\n\n> \n> My read on the entire modern world is that GDP is primarily constrained by bureaucratic sclerosis rather than by where the technological frontiers lie\n> \n> \n> \n\n\nThis is a fair point and updates me somewhat towards fast takeoff as operationalized by Paul, though I’m not sure how much it updates me on p(doom).\n\n\nEr, wait, really fast takeoff as operationalized by Paul makes less sense as a thing to be looking for — presumably we die before any 1 year doubling. Whatever, it updates me somewhat towards “less deployed stuff before scary stuff is around”\n\n\n\n\n\n\n[Ngo][14:56]\nAh, interesting. 
What are the two or three main things in that category?\n\n\n\n\n\n\n[Yudkowsky][14:57]\nmRNA vaccines, building houses, building cities? Not sure what you mean there.\n\n\n\n\n\n[Ngo][14:57]\n“things we have the technological capacity to do but which aren’t legal to do”\n\n\n\n\n\n\n[Yudkowsky][14:58][15:00]\nEg, you might imagine, “What if AIs were smart enough to build houses, wouldn’t that raise GDP?” and the answer is that we already have the pure technology to manufacture homes cheaply, but the upright-stick-construction industry already successfully lobbied to get it banned as it was starting to develop, by adding on various constraints; so the question is not “Is AI advantaged in doing this?” but “Is AI advantaged at bypassing regulatory constraints on doing this?” Not to mention all the other ways that building a house in an existing city is illegal, or that it’s been made difficult to start a new city, etcetera.\n\n\n\n\n\n\n“What if AIs could design a new vaccine in a day?” We can already do that. It’s no longer the relevant constraint. 
Bureaucracy is the process-limiting constraint.\n\n\nI would – looking in again at the Sideways View essay on takeoff speeds – wonder whether it occurred to you, Richard, to ask about what detailed predictions all the theories there had made.\n\n\nAfter all, a lot of it is spending time explaining why the theories there *shouldn’t* be expected to retrodict even the data points we *have* about progress rates over hominid evolution.\n\n\nSurely you, being the evenhanded judge that you are, must have been reading through that document saying, “My goodness, this is even worse than retrodicting a few data points!”\n\n\nA lot of why I have a bad taste in my mouth about certain classes of epistemological criticism is my sense that certain sentences tend to be uttered on *incredibly* selective occasions.\n\n\n\n\n\n[Ngo][14:59][15:06]\nSome meta thoughts: I now feel like I have a pretty reasonable broad outline of Eliezer’s views. I haven’t yet changed my mind much, but plausibly mostly because I haven’t taken the time to internalise those views; once I ruminate on them a bunch, I expect my opinions will shift (uncertain how far; unlikely to be most of the way).\n\n\n\n\n\n\n\nMeta thoughts (continued): Insofar as a strong disagreement remains after that (which it probably will) I feel pretty uncertain about what would resolve it. Best guess is that I should write up some longer essays that try to tie a bunch of disparate strands together.\n\n\nNear the end it seemed like the crux, to a surprising extent, hinged on this question of takeoff speeds. So the other thing which seems like it’d plausibly help a lot is Eliezer writing up a longer version of his response to Paul’s Takeoff Speeds post.\n\n\n(Just as a brief comment, I don’t find the “bureaucratic sclerosis” explanation very compelling. I do agree that regulatory barriers are a huge problem, but they still don’t seem nearly severe enough to cause a fast takeoff. 
I don’t have strong arguments for that position right now though.)\n\n\n\n\n\n\n[Soares][15:12]\nThis seems like a fine point to call it!\n\n\nSome wrap-up notes\n\n\n* I had the impression this round was a bit more frustrating than last rounds. Thanks all for sticking with things 🙂\n* I have a sense that Richard was making a couple points that didn’t quite land. I plan to attempt to articulate versions of them myself in the interim.\n* Richard noted he had a sense we’re in decreasing return territory. My own sense is that it’s worth having at least one more discussion in this format about specific non-consequentialist plans Richard may have hope in, but I also think we shouldn’t plow forward in spite of things feeling less useful, and I’m open to various alternative proposals.\n\n\nIn particular, it seems maybe plausible to me we should have a pause for some offline write-ups, such as Richard digesting a bit and then writing up some of his current state, and/or Eliezer writing up some object-level response to the takeoff speed post above?\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n(I also could plausibly give that a go myself, either from my own models or from my model of Eliezer’s model which he could then correct)\n\n\n\n\n\n\n[Ngo][15:15]\nThanks Nate!\n\n\nI endorse the idea of offline writeups\n\n\n\n\n\n\n[Soares][15:17]\nCool. 
Then I claim we are adjourned for the day, and Richard has the ball on digesting & doing a write-up from his end, and I have the ball on both writing up my attempts to articulate some points, and on either Eliezer or I writing some takes on timelines or something.\n\n\n(And we can coordinate our next discussion, if any, via email, once the write-ups are in shape.)\n\n\n\n\n\n\n[Yudkowsky][15:18]\nI also have a sense that there’s more to be said about specifics of govt stuff or specifics of “ways to bypass consequentialism” and that I wish we could spend at least one session trying to stick to concrete details only\n\n\nEven if it’s not where cruxes ultimately lie, often you learn more about the abstract by talking about the concrete than by talking about the abstract.\n\n\n\n\n\n[Soares][15:22]\n(I, too, would be enthusiastic to see such a discussion, and Richard, if you find yourself feeling enthusiastic or at least not-despairing about it, I’d happily moderate.)\n\n\n\n\n\n\n[Yudkowsky][15:37]\n(I’m a little surprised about how poorly I did at staying concrete after saying that aloud, and would nominate Nate to take on the stern duty of blowing the whistle at myself or at both of us.)\n\n\n\n\n \n\n\n\nThe post [Ngo and Yudkowsky on AI capability gains](https://intelligence.org/2021/11/18/ngo-and-yudkowsky-on-ai-capability-gains/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-19T01:53:23Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "6fb02ff9908e4fb038f2fd77e76a94bd", "title": "Ngo and Yudkowsky on alignment difficulty", "url": "https://intelligence.org/2021/11/15/ngo-and-yudkowsky-on-alignment-difficulty/", "source": "miri", "source_type": "blog", "text": "This post is the first in a series of transcribed Discord conversations between Richard Ngo and Eliezer Yudkowsky, moderated by Nate Soares. 
We’ve also added Richard and Nate’s running summaries of the conversation (and others’ replies) from Google Docs.\n\n\nLater conversation participants include Ajeya Cotra, Beth Barnes, Carl Shulman, Holden Karnofsky, Jaan Tallinn, Paul Christiano, Rob Bensinger, and Rohin Shah.\n\n\nThe transcripts are a complete record of several Discord channels MIRI made for discussion. We tried to edit the transcripts as little as possible, other than to fix typos and a handful of confusingly-worded sentences, to add some paragraph breaks, and to add referenced figures and links. We didn’t end up redacting any substantive content, other than the names of people who would prefer not to be cited. We swapped the order of some chat messages for clarity and conversational flow (indicated with extra timestamps), and in some cases combined logs where the conversation switched channels.\n\n\n \n\n\nColor key:\n\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n|  Chat by Richard and Eliezer  |  Other chat  |  Google Doc content  |  Inline comments  |\n\n\n\n \n\n\n0. Prefatory comments\n---------------------\n\n\n \n\n\n\n[Yudkowsky][8:32]\n(At Rob’s request I’ll try to keep this brief, but this was an experimental format and some issues cropped up that seem large enough to deserve notes.)\n\n\nEspecially when coming in to the early parts of this dialogue, I had some backed-up hypotheses about “What might be the main sticking point? and how can I address that?” which from the standpoint of a pure dialogue might seem to be causing me to go on digressions, relative to if I was just trying to answer Richard’s own questions.  On reading the dialogue, I notice that this looks evasive or like point-missing, like I’m weirdly not just directly answering Richard’s questions.\n\n\nOften the questions are answered later, or at least I think they are, though it may not be in the first segment of the dialogue.  
But the larger phenomenon is that I came in with some things I wanted to say, and Richard came in asking questions, and there was a minor accidental mismatch there.  It would have looked better if we’d both stated positions first without question marks, say, or if I’d just confined myself to answering questions from Richard.  (This is not a huge catastrophe, but it’s something for the reader to keep in mind as a minor hiccup that showed up in the early parts of experimenting with this new format.)\n\n\n\n\n\n[Yudkowsky][8:32]\n(Prompted by some later stumbles in attempts to summarize this dialogue.  Summaries seem plausibly a major mode of propagation for a sprawling dialogue like this, and the following request seems like it needs to be very prominent to work – embedded requests later on didn’t work.)\n\n\nPlease don’t summarize this dialogue by saying, “and so Eliezer’s MAIN idea is that” or “and then Eliezer thinks THE KEY POINT is that” or “the PRIMARY argument is that” etcetera.  From my perspective, everybody comes in with a different set of sticking points versus things they see as obvious, and the conversation I have changes drastically depending on that.  
In the old days this used to be the Orthogonality Thesis, Instrumental Convergence, and superintelligence being a possible thing at all; today most OpenPhil-adjacent folks have other sticking points instead.\n\n\nPlease transform:\n\n\n* “Eliezer’s main reply is…” -> “Eliezer replied that…”\n* “Eliezer thinks the key point is…” -> “Eliezer’s point in response was…”\n* “Eliezer thinks a major issue is…”  -> “Eliezer replied that one issue is…”\n* “Eliezer’s primary argument against this is…” -> “Eliezer tried the counterargument that…”\n* “Eliezer’s main scenario for this is…” -> “In a conversation in September of 2021, Eliezer sketched a hypothetical where…”\n\n\nNote also that the transformed statements say what you *observed,* whereas the untransformed statements are (often incorrect) *inferences* about my latent state of mind.\n\n\n(Though “distinguishing relatively unreliable inference from more reliable observation” is not necessarily *the key idea* here or *the one big reason* I’m asking for this.  That’s just one point I tried making – one argument that I hope might help drive home the larger thesis.)\n\n\n\n\n \n\n\n1. September 5 conversation\n---------------------------\n\n\n \n\n\n### 1.1. Deep vs. shallow problem-solving patterns\n\n\n \n\n\n\n[Ngo][11:00]\nHi all! Looking forward to the discussion.\n\n\n\n\n\n\n[Yudkowsky][11:01]\nHi and welcome all.  My name is Eliezer and I think alignment is really actually quite extremely difficult.  Some people seem to not think this!  It’s an important issue so ought to be resolved somehow, which we can hopefully fully do today.  (I will however want to take a break after the first 90 minutes, if it goes that far and if Ngo is in sleep-cycle shape to continue past that.)\n\n\n\n\n\n[Ngo][11:02]\nA break in 90 minutes or so sounds good.\n\n\nHere’s one way to kick things off: I agree that humans trying to align arbitrarily capable AIs seems very difficult. 
One reason that I’m more optimistic (or at least, not confident that we’ll have to face the full very difficult version of the problem) is that at a certain point AIs will be doing most of the work.\n\n\nWhen you talk about alignment being difficult, what types of AIs are you thinking about aligning?\n\n\n\n\n\n\n[Yudkowsky][11:04]\nOn my model of the Other Person, a lot of times when somebody thinks alignment shouldn’t be that hard, they think there’s some particular thing you can do to align an AGI, which isn’t that hard, and their model is missing one of the foundational difficulties for why you can’t do (easily or at all) one step of their procedure.  So one of my own conversational processes might be to poke around looking for a step that the other person doesn’t realize is hard.  That said, I’ll try to directly answer your own question first.\n\n\n\n\n\n[Ngo][11:07]\nI don’t think I’m confident that there’s any particular thing you can do to align an AGI. Instead I feel fairly uncertain over a broad range of possibilities for how hard the problem turns out to be.\n\n\nAnd on some of the most important variables, it seems like evidence from the last decade pushes towards updating that the problem will be easier.\n\n\n\n\n\n\n[Yudkowsky][11:09]\nI think that after AGI becomes possible at all and then possible to scale to dangerously superhuman levels, there will be, in the best-case scenario where a lot of other social difficulties got resolved, a 3-month to 2-year period where only a very few actors have AGI, meaning that it was socially possible for those few actors to decide to *not* just scale it to where it automatically destroys the world.\n\n\nDuring this step, if humanity is to survive, somebody has to perform some feat that causes the world to *not* be destroyed in 3 months or 2 years when too many actors have access to AGI code that will destroy the world if its intelligence dial is turned up. 
This requires that the first actor or actors to build AGI, be able to do *something* with that AGI which prevents the world from being destroyed; if it didn’t require superintelligence, we could go do that thing right now, but no such human-doable act apparently exists so far as I can tell.\n\n\nSo we want the least dangerous, most easily aligned thing-to-do-with-an-AGI, but it does have to be a pretty powerful act to prevent the automatic destruction of Earth after 3 months or 2 years. It has to “flip the gameboard” rather than letting the suicidal game play out. We need to align the AGI that performs this pivotal act, to perform that pivotal act without killing everybody.\n\n\nParenthetically, no act powerful enough and gameboard-flipping enough to qualify is inside the Overton Window of politics, or possibly even of effective altruism, which presents a separate social problem. I usually dodge around this problem by picking an exemplar act which is powerful enough to actually flip the gameboard, but not the most alignable act because it would require way too many aligned details: Build self-replicating open-air nanosystems and use them (only) to melt all GPUs.\n\n\nSince any such nanosystems would have to operate in the full open world containing lots of complicated details, this would require tons and tons of alignment work, is not the pivotal act easiest to align, and we should do some other thing instead. But the other thing I have in mind is also outside the Overton Window, just like this is. 
So I use “melt all GPUs” to talk about the requisite power level and the Overton Window problem level, both of which seem around the right levels to me, but the actual thing I have in mind is more alignable; and this way, I can reply to anyone who says “How dare you?!” by saying “Don’t worry, I don’t actually plan on doing that.”\n\n\n\n\n\n[Ngo][11:14]\nOne way that we could take this discussion is by discussing the pivotal act “make progress on the alignment problem faster than humans can”.\n\n\n\n\n\n\n[Yudkowsky][11:15]\nThis sounds to me like it requires extreme levels of alignment and operating in extremely dangerous regimes, such that, if you could do that, it would seem much more sensible to do some other pivotal act first, using a lower level of alignment tech.\n\n\n\n\n\n[Ngo][11:16]\nOkay, this seems like a crux on my end.\n\n\n\n\n\n\n[Yudkowsky][11:16]\nIn particular, I would hope that – in unlikely cases where we survive at all – we were able to survive by operating a superintelligence only in the lethally dangerous, but still less dangerous, regime of “engineering nanosystems”.\n\n\nWhereas “solve alignment for us” seems to require operating in the even more dangerous regimes of “write AI code for us” and “model human psychology in tremendous detail”.\n\n\n\n\n\n[Ngo][11:17]\nWhat makes these regimes so dangerous? Is it that it’s very hard for humans to exercise oversight?\n\n\nOne thing that makes these regimes seem less dangerous to me is that they’re broadly in the domain of “solving intellectual problems” rather than “achieving outcomes in the world”.\n\n\n\n\n\n\n[Yudkowsky][11:19][11:21]\nEvery AI output *effectuates* outcomes in the world.  
If you have a powerful unaligned mind hooked up to outputs that can start causal chains that effectuate dangerous things, it doesn’t matter whether the comments on the code say “intellectual problems” or not.\n\n\nThe danger of “solving an intellectual problem” is when it requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things.\n\n\n\n\n\n\nI expect the first alignment solution you can actually deploy in real life, in the unlikely event we get a solution at all, looks like 98% “don’t think about all these topics that we do not absolutely need and are adjacent to the capability to easily invent very dangerous outputs” and 2% “actually think about this dangerous topic but please don’t come up with a strategy inside it that kills us”.\n\n\n\n\n\n[Ngo][11:21][11:22]\nLet me try and be more precise about the distinction. It seems to me that systems which have been primarily trained to make predictions about the world would by default lack a lot of the cognitive machinery which humans use to take actions which pursue our goals.\n\n\n\n\n\n\n\nPerhaps another way of phrasing my point is something like: it doesn’t seem implausible to me that we build AIs that are significantly more intelligent (in the sense of being able to understand the world) than humans, but significantly less agentic.\n\n\nIs this a crux for you?\n\n\n(obviously “agentic” is quite underspecified here, so maybe it’d be useful to dig into that first)\n\n\n\n\n\n\n[Yudkowsky][11:27][11:33]\nI would certainly have learned very new and very exciting facts about intelligence, facts which indeed contradict my present model of how intelligences liable to be discovered by present research paradigms work, if you showed me… how can I put this in a properly general way… that problems I thought were about searching for states that get fed into a result function and then a result-scoring function, such that the input gets an 
output with a high score, were in fact not about search problems like that. I have sometimes given more specific names to this problem setup, but I think people have become confused by the terms I usually use, which is why I’m dancing around them.\n\n\nIn particular, just as I have a model of the Other Person’s Beliefs in which they think alignment is easy because they don’t know about difficulties I see as very deep and fundamental and hard to avoid, I also have a model in which people think “why not just build an AI which does X but not Y?” because they don’t realize what X and Y have in common, which is something that draws deeply on having deep models of intelligence. And it is hard to convey this deep theoretical grasp. But you can also see powerful practical hints that these things are much more correlated than, eg, Robin Hanson was imagining during the [FOOM debate](https://intelligence.org/ai-foom-debate/), because Robin did not think something like GPT-3 should exist; Robin thought you should need to train lots of specific domains that didn’t generalize. I argued then with Robin that it was something of a hint that humans had visual cortex and cerebellar cortex but not Car Design Cortex, in order to design cars. Then in real life, it proved that reality was far to the Eliezer side of Eliezer on the [Eliezer-Robin axis](https://intelligence.org/2017/10/20/alphago/), and things like GPT-3 were built with *less* architectural complexity and generalized *more* than I was arguing to Robin that complex architectures should generalize over domains.\n\n\n\n\n\n\nThe metaphor I sometimes use is that it is very hard to build a system that drives cars painted red, but is not at all adjacent to a system that could, with a few alterations, prove to be very good at driving a car painted blue.  The “drive a red car” problem and the “drive a blue car” problem have too much in common.  
You can maybe ask, “Align a system so that it has the capability to drive red cars, but refuses to drive blue cars.”  You can’t make a system that is very good at driving red-painted cars, but lacks the basic capability to drive blue-painted cars because you never trained it on that.  The patterns found by gradient descent, by genetic algorithms, or by other plausible methods of optimization, for driving red cars, would be patterns very close to the ones needed to drive blue cars.  When you optimize for red cars you get the blue car *capability* whether you like it or not.\n\n\n\n\n\n[Ngo][11:32]\nDoes your model of intelligence rule out building AIs which make dramatic progress in mathematics without killing us all?\n\n\n\n\n\n\n[Yudkowsky][11:34][11:39]\nIf it were possible to perform some pivotal act that saved the world with an AI that just made progress on proving mathematical theorems, without, eg, needing to explain those theorems to humans, I’d be *extremely* interested in that as a potential pivotal act. We wouldn’t be out of the woods, and I wouldn’t actually know how to build an AI like that without killing everybody, but it would immediately trump everything else as the obvious line of research to pursue.\n\n\nParenthetically, there is very very little which my model of intelligence *rules out*. I think we all die because we cannot do certain dangerous things correctly, *on the very first try in the dangerous regimes where one mistake kills you*, and do them *before* proliferation of much easier technologies kills us. 
If you have the Textbook From 100 Years In The Future that gives the simple robust solutions for everything, that actually work, you can write a superintelligence that thinks 2 + 2 = 5 because the Textbook gives the methods for doing that which are simple and actually work in practice in real life.\n\n\n\n\n\n\n(The Textbook has the equivalent of “use ReLUs instead of sigmoids” everywhere, and avoids all the clever-sounding things that will work at subhuman levels and blow up when you run them at superintelligent levels.)\n\n\n\n\n\n[Ngo][11:36][11:40]\nHmm, so suppose we train an AI to prove mathematical theorems when given them, perhaps via some sort of adversarial setter-solver training process.\n\n\nBy default I have the intuition that this AI could become extremely good at proving theorems – far beyond human level – without having goals about real-world outcomes.\n\n\n\n\n\n\n\nIt seems to me that in your model of intelligence, being able to do tasks like mathematics is closely coupled with trying to achieve real-world outcomes. But I’d actually take GPT-3 as some evidence against this position (although still evidence in favour of your position over Hanson’s), since it seems able to do a bunch of reasoning tasks while still not being very agentic.\n\n\nThere’s some alternative world where we weren’t able to train language models to do reasoning tasks without first training them to perform tasks in complex RL environments, and in that world I’d be significantly less optimistic.\n\n\n\n\n\n\n[Yudkowsky][11:41]\nI put to you that there is a predictable bias in your estimates, where you don’t know about the Deep Stuff that is required to prove theorems, so you imagine that certain cognitive capabilities are more disjoint than they actually are.  
If you knew about the things that humans are using to reuse their reasoning about chipped handaxes and other humans, to prove math theorems, you would see it as more plausible that proving math theorems would generalize to chipping handaxes and manipulating humans.\n\n\nGPT-3 is a… complicated story, on my view of it and intelligence.  We’re looking at an interaction between tons and tons of memorized shallow patterns.  GPT-3 is *very* unlike the way that natural selection built humans.\n\n\n\n\n\n[Ngo][11:44]\nI agree with that last point. But this is also one of the reasons that I previously claimed that AIs could be more intelligent than humans while being less agentic, because there are systematic differences between the way in which natural selection built humans, and the way in which we’ll train AGIs.\n\n\n\n\n\n\n[Yudkowsky][11:45]\nMy current suspicion is that Stack More Layers alone is not going to take us to GPT-6 which is a true AGI; and this is because of the way that GPT-3 is, in your own terminology, “not agentic”, and which is, in my terminology, not having gradient descent on GPT-3 run across sufficiently deep problem-solving patterns.\n\n\n\n\n\n[Ngo][11:46]\nOkay, that helps me understand your position better.\n\n\nSo here’s one important difference between humans and neural networks: humans face the genomic bottleneck which means that each individual has to rederive all the knowledge about the world that their parents already had. 
If this genetic bottleneck hadn’t been so tight, then individual humans would have been significantly less capable of performing novel tasks.\n\n\n\n\n\n\n[Yudkowsky][11:50]\nI agree.\n\n\n\n\n\n[Ngo][11:50]\nIn my terminology, this is a reason that humans are “more agentic” than we otherwise would have been.\n\n\n\n\n\n\n[Yudkowsky][11:50]\nThis seems indisputable.\n\n\n\n\n\n[Ngo][11:51]\nAnother important difference: humans were trained in environments where we had to run around surviving all day, rather than solving maths problems etc.\n\n\n\n\n\n\n[Yudkowsky][11:51]\nI continue to nod.\n\n\n\n\n\n[Ngo][11:52]\nSupposing I agree that reaching a certain level of intelligence will require AIs with the “deep problem-solving patterns” you talk about, which lead AIs to try to achieve real-world goals. It still seems to me that there’s likely a lot of space between that level of intelligence, and human intelligence.\n\n\nAnd if that’s the case, then we could build AIs which help us solve the alignment problem before we build AIs which instantiate sufficiently deep problem-solving patterns that they decide to take over the world.\n\n\nNor does it seem like the reason *humans* want to take over the world is because of a deep fact about our intelligence. 
It seems to me that humans want to take over the world mainly because that’s very similar to things we evolved to do (like taking over our tribe).\n\n\n\n\n\n\n[Yudkowsky][11:57]\nSo here’s the part that I agree with: If there were one theorem only mildly far out of human reach, like proving the ABC Conjecture (if you think it hasn’t already been proven), and providing a machine-readable proof of this theorem would immediately save the world – say, aliens will give us an aligned superintelligence, as soon as we provide them with this machine-readable proof – then there would exist a plausible though not certain road to saving the world, which would be to try to build a *shallow* mind that proved the ABC Conjecture by memorizing tons of relatively shallow patterns for mathematical proofs learned through self-play; without that system ever abstracting math as deeply as humans do, but the sheer width of memory and sheer depth of search sufficing to do the job. I am not sure, to be clear, that this would work. But my model of intelligence does not rule it out.\n\n\n\n\n\n[Ngo][11:58]\n(I’m actually thinking of a mind which understands maths more deeply than humans – but perhaps only understands maths, or perhaps also a range of other sciences better than humans.)\n\n\n\n\n\n\n[Yudkowsky][12:00]\nParts I disagree with: That “help us solve alignment” bears any significant overlap with “provide us a machine-readable proof of the ABC Conjecture without thinking too deeply about it”. 
That humans want to take over the world only because it resembles things we evolved to do.\n\n\n\n\n\n[Ngo][12:01]\nI definitely agree that humans don’t *only* want to take over the world because it resembles things we evolved to do.\n\n\n\n\n\n\n[Yudkowsky][12:02]\nAlas, eliminating 5 reasons why something would go wrong doesn’t help much if there’s 2 remaining reasons something would go wrong that are much harder to eliminate!\n\n\n\n\n\n[Ngo][12:02]\nBut if we imagine having a human-level intelligence which *hadn’t* evolved primarily to do things that reasonably closely resembled taking over the world, then I expect that we could ask that intelligence questions in a fairly safe way.\n\n\nAnd that’s also true for an intelligence that is noticeably above human level.\n\n\nSo one question is: how far above human level could we get before a system which has only been trained to do things like answer questions and understand the world will decide to take over the world?\n\n\n\n\n\n\n[Yudkowsky][12:04]\nI think this is one of the very rare cases where the intelligence difference between “village idiot” and “Einstein”, which I’d usually see as very narrow, makes a structural difference! I think you can get some outputs from a village-idiot-level AGI, which got there by training on domains exclusively like math, and this will proooobably not destroy the world (*if* you were right about that, about what was going on inside). I have more concern about the Einstein level.\n\n\n\n\n\n[Ngo][12:05]\nLet’s focus on the Einstein level then.\n\n\nHuman brains have been optimised very little for doing science.\n\n\nThis suggests that building an AI which is Einstein-level at doing science is significantly easier than building an AI which is Einstein-level at taking over the world (or other things which humans evolved to do).\n\n\n\n\n\n\n[Yudkowsky][12:08]\nI think there’s a certain broad sense in which I agree with the literal truth of what you just said. 
You will systematically overestimate *how much* easier, or how far you can push the science part without getting the taking-over-the-world part, for as long as your model is ignorant of what they have in common.\n\n\n\n\n\n[Ngo][12:08]\nMaybe this is a good time to dig into the details of what they have in common, then.\n\n\n\n\n\n\n[Yudkowsky][12:09][12:11][12:13]\nI feel like I haven’t had much luck with trying to explain that on previous occasions. Not to you, to others too.\n\n\nThere are shallow topics like why p-zombies can’t be real and how quantum mechanics works and why science ought to be using likelihood functions instead of p-values, and I can *barely* explain those to *some* people, but then there are some things that are apparently much harder to explain than that and which defeat my abilities as an explainer.\n\n\n\n\n\n\nThat’s why I’ve been trying to point out that, even if you don’t know the specifics, there’s an estimation bias that you can realize should exist in principle.\n\n\n\n\n\n\nOf course, I also haven’t had much luck in saying to people, “Well, even if you don’t know the truth about X that would let you see Y, can you not see by abstract reasoning that knowing *any* truth about X would predictably cause you to update in the direction of Y” – people don’t seem to actually internalize that much either. Not you, other discussions.\n\n\n\n\n\n[Ngo][12:10][12:11][12:13]\nMakes sense. Are there ways that I could try to make this easier? E.g. I could do my best to explain what I think your position is.\n\n\nGiven what you’ve said I’m not optimistic about this helping much.\n\n\n\n\n\n\n\nBut insofar as this is the key set of intuitions which has been informing your responses, it seems worth a shot.\n\n\nAnother approach would be to focus on our predictions for how AI capabilities will play out over the next few years.\n\n\n\n\n\n\n\nI take your point about my estimation bias. 
To me it feels like there’s also a bias going the other way, which is that as long as we don’t know the mechanisms by which different human capabilities work, we’ll tend to lump them together as one thing.\n\n\n\n\n\n\n[Yudkowsky][12:14]\nYup. If you didn’t know about visual cortex and auditory cortex, or about eyes and ears, you would assume much more that any sentience ought to both see and hear.\n\n\n\n\n\n[Ngo][12:16]\nSo then my position is something like: human pursuit of goals is driven by emotions and reward signals which are deeply evolutionarily ingrained, and without those we’d be much safer but not that much worse at pattern recognition.\n\n\n\n\n\n\n[Yudkowsky][12:17]\nIf there’s a pivotal act you can get just by supreme acts of pattern recognition, that’s right up there with “pivotal act composed solely of math” for things that would obviously instantly become the prime direction of research.\n\n\n\n\n\n[Ngo][12:18]\nTo me it seems like maths is *much more* about pattern recognition than, say, being a CEO. Being a CEO requires coherence over long periods of time; long-term memory; motivation; metacognition; etc.\n\n\n\n\n\n\n[Yudkowsky][12:18][12:23]\n(One occasionally-argued line of research can be summarized from a certain standpoint as “how about a pivotal act composed entirely of predicting text” and to this my reply is “you’re trying to get fully general AGI capabilities by predicting text that is *about* deep / ‘agentic’ reasoning, and that doesn’t actually help”.)\n\n\nHuman math is very much about goals. People want to prove subtheorems on the way to proving theorems. 
We might be able to make a *different* kind of mathematician that works more like GPT-3 in the dangerously inscrutable parts that are all noninspectable vectors of floating-point numbers, but even there you’d need some AlphaZero-like outer framework to supply the direction of search.

That outer framework might be able to be powerful enough without being reflective, though. So it would plausibly be *much easier* to build a mathematician that was capable of superhuman formal theorem-proving but not agentic. The reality of the world might tell us “lolnope”, but my model of intelligence doesn’t mandate that. That’s why, if you gave me a pivotal act composed entirely of “output a machine-readable proof of this theorem and the world is saved”, I would pivot there! It actually does seem like it would be a lot easier!

[Ngo][12:21][12:25]
Okay, so if I attempt to rephrase your argument:

Your position: There’s a set of fundamental similarities between tasks like doing maths, doing alignment research, and taking over the world. In all of these cases, agents based on techniques similar to modern ML which are very good at them will need to make use of deep problem-solving patterns which include goal-oriented reasoning.
So while it’s possible to beat humans at some of these tasks without those core competencies, people usually overestimate the extent to which that’s possible.

[Yudkowsky][12:25]
Remember, a lot of my concern is about what happens *first*, especially if it happens soon enough that future AGI bears any resemblance whatsoever to modern ML; not about what can be done in principle.

[Soares][12:26]
(Note: it’s been 85 min, and we’re planning to take a break at 90 min, so this seems like a good point for a little bit more clarifying back-and-forth on Richard’s summary before a break.)

[Ngo][12:26]
I’ll edit to say “plausible for ML techniques”?

(and “extent to which that’s plausible”)

[Yudkowsky][12:28]
I think that obvious-to-me future outgrowths of modern ML paradigms are *extremely* liable to, if they can learn how to do sufficiently superhuman X, generalize to taking over the world. How fast this happens does depend on X. It would plausibly happen relatively slower (at higher levels) with theorem-proving as the X, and with architectures that carefully stuck to gradient-descent-memorization over shallow network architectures to do a pattern-recognition part with search factored out (sort of, this is not generally safe, this is not a general formula for safe things!); rather than imposing anything like the genetic bottleneck you validly pointed out as a reason why humans generalize. Profitable X, and all X I can think of that would actually save the world, seem much more problematic.

[Ngo][12:30]
Okay, happy to take a break here.

[Soares][12:30]
Great timing!

[Ngo][12:30]
We can do a bit of meta discussion afterwards; my initial instinct is to push on the question of how similar Eliezer thinks alignment research is to theorem-proving.

[Yudkowsky][12:30]
Yup.
This is my lunch break (actually my first-food-of-day break on a 600-calorie diet), so I can be back in 45min if you’re still up for that.

[Ngo][12:31]
Sure.

Also, if any of the spectators are reading in real time, and have suggestions or comments, I’d be interested in hearing them.

[Yudkowsky][12:31]
I’m also cheerful about spectators posting suggestions or comments during the break.

[Soares][12:32]
Sounds good. I declare us on a break for 45min, at which point we’ll reconvene (for another 90, by default).

Floor’s open to suggestions & commentary.

### 1.2. Requirements for science

[Yudkowsky][12:50]
I seem to be done early if people (mainly Richard) want to resume in 10min (30m break)

[Ngo][12:51]
Yepp, happy to do so

[Soares][12:57]
Some quick commentary from me:

* It seems to me like we’re exploring a crux in the vicinity of “should we expect that systems capable of executing a pivotal act would, by default in lieu of significant technical alignment effort, be using their outputs to optimize the future”.
* I’m curious whether you two agree that this is a crux (but plz don’t get side-tracked answering me).
* The general discussion seems to be going well to me.
	+ In particular, huzzah for careful and articulate efforts to zero in on cruxes.

[Ngo][13:00]
I think that’s a crux for the specific pivotal act of “doing better alignment research”, and maybe some other pivotal acts, but not all (or necessarily most) of them.

[Yudkowsky][13:01]
I should also say out loud that I’ve been working a bit with Ajeya on making an attempt to convey the intuitions behind there being deep patterns that generalize and are liable to be learned, which covered a bunch of ground, taught me how much ground there was, and made me relatively more reluctant to try to re-cover the same ground in this modality.

[Ngo][13:02]
Going forward, a couple of things I’d like to ask Eliezer about:

* In what ways are the tasks that are most useful for alignment similar or different to proving mathematical theorems (which we agreed might generalise relatively slowly to taking over the world)?
* What are the deep problem-solving patterns underlying these tasks?
* Can you summarise my position?

I was going to say that I was most optimistic about #2, in order to get these ideas into a public format.

But if that’s going to happen anyway based on Ajeya’s work, then that seems less important.

[Yudkowsky][13:03]
I could still try briefly and see what happens.

[Ngo][13:03]
That seems valuable to me, if you’re up for it.

At the same time, I’ll try to summarise some of my own intuitions about intelligence which I expect to be relevant.

[Yudkowsky][13:04]
I’m not sure I could summarize your position in a non-straw way. To me there’s a huge visible distance between “solve alignment for us” and “output machine-readable proofs of theorems”, where I can’t give a good account of why you think talking about the latter would tell us much about the former. I don’t know what other pivotal act you think might be easier.

[Ngo][13:06]
I see. I was considering “solving scientific problems” as an alternative to “proving theorems”, with alignment being one (particularly hard) example of a scientific problem.

But decided to start by discussing theorem-proving since it seemed like a clearer-cut case.

[Yudkowsky][13:07]
Can you predict in advance why Eliezer thinks “solving scientific problems” is significantly thornier?
(Where alignment is like totally not “a particularly hard example of a scientific problem”, except in the sense that it has science in it at all; which is maybe the real crux; but also a more difficult issue.)

[Ngo][13:09]
Based on some of your earlier comments, I’m currently predicting that you think the step where the solutions need to be legible to and judged by humans makes science much thornier than theorem-proving, where the solutions are machine-checkable.

[Yudkowsky][13:10]
That’s one factor. Should I state the other big one, or would you rather try to state it first?

[Ngo][13:10]
Requiring a lot of real-world knowledge for science?

If it’s not that, go ahead and say it.

[Yudkowsky][13:11]
That’s one way of stating it. The way I’d put it is that it’s about making up hypotheses about the real world.

Like, the real world is then a thing that the AI is modeling, at all.

Factor 3: On many interpretations of doing science, you would furthermore need to think up experiments. That’s planning, value-of-information, search for an experimental setup whose consequences distinguish between hypotheses (meaning you’re now searching for initial setups that have particular causal consequences).

[Ngo][13:12]
To me “modelling the real world” is a very continuous variable. At one end you have physics equations that are barely separable from maths problems; at the other end you have humans running around in physical bodies.

To me it seems plausible that we could build an agent which solves scientific problems but has very little self-awareness (in the sense of knowing that it’s an AI, knowing that it’s being trained, etc.).

I expect that your response to this is that modelling oneself is part of the deep problem-solving patterns which AGIs are very likely to have.

[Yudkowsky][13:15]
There’s a problem of *inferring the causes of sensory experience* in cognition-that-does-science.
(Which, in fact, also appears in the way that humans do math, and is possibly inextricable from math in general; but this is an example of the sort of deep model that says “Whoops, I guess you get science from math after all”, not a thing that makes science less dangerous because it’s more like just math.)

You can build an AI that only ever drives red cars, and which, at no point in the process of driving a red car, ever needs to drive a blue car in order to drive a red car. That doesn’t mean its red-car-driving capabilities won’t be extremely close to blue-car-driving capabilities if at any point the internal cognition happens to get pointed towards driving a blue car.

The fact that there’s a deep car-driving pattern which is the same across red cars and blue cars doesn’t mean that the AI has ever driven a blue car, per se, or that it has to drive blue cars to drive red cars. But if blue cars are fire, you sure are playing with that fire.

[Ngo][13:18]
To me, “sensory experience” as in “the video and audio coming in from this body that I’m piloting” and “sensory experience” as in “a file containing the most recent results of the Large Hadron Collider” are very, very different.

(I’m not saying we could train an AI scientist just from the latter – but plausibly from data that’s closer to the latter than the former.)

[Yudkowsky][13:19]
So there are separate questions about “does an AGI *inseparably need* to model itself inside the world to do science” and “did we build something that would be very close to modeling itself, and could easily stumble across that by accident somewhere in the inscrutable floating-point numbers, especially if that was even slightly useful for solving the outer problems”.

[Ngo][13:19]
Hmm, I see

[Yudkowsky][13:20][13:21][13:21]
If you’re trying to build an AI that literally does science only to observations collected without the AI having had a causal impact on those observations, that’s legitimately “more dangerous than math but maybe less dangerous than active science”.

You might still stumble across an active scientist because it was a simple internal solution to something, but the outer problem would be legitimately stripped of an important structural property, the same way that pure math not describing Earthly objects is stripped of important structural properties.

And of course my reaction again is, “There is no pivotal act which uses only that cognitive capability.”

[Ngo][13:20][13:21][13:26]
I guess that my (fairly strong) prior here is that something like self-modelling, which is very deeply built into basically every organism, is a very hard thing for an AI to stumble across by accident without significant optimisation pressure in that direction.

But I’m not sure how to argue this except by digging into your views on what the deep problem-solving patterns are. So if you’re still willing to briefly try and explain those, that’d be useful to me.

“Causal impact” again seems like a very continuous variable – it seems like the *amount* of causal impact you need to do good science is much less than the amount which is needed to, say, be a CEO.

[Yudkowsky][13:26]
The amount doesn’t seem like the key thing, nearly so much as what underlying facilities you need to do whatever amount of it you need.

[Ngo][13:27]
Agreed.

[Yudkowsky][13:27]
If you go back to the 16th century and ask for just one mRNA vaccine, that’s not much of a difference from asking for a ~~million~~ hundred of them.

[Ngo][13:28]
Right, so the additional premise which I’m using here is that the ability to reason about causally impacting the world in order to achieve goals is something that you can have a little bit of.

Or a lot of, and that the difference between these might come down to the training data used.

Which at this point I don’t expect you to agree with.

[Yudkowsky][13:29]
If you have reduced a pivotal act to “look over the data from this hadron collider you neither built nor ran yourself”, that really is a structural step down from “do science” or “build a nanomachine”. But I can’t see any pivotal acts like that, so is that question much of a crux?

If there are intermediate steps, they might be described in my native language like “reason about causal impacts across only this one preprogrammed domain which you didn’t learn in a general way, in only this part of the cognitive architecture that is separable from the rest of the cognitive architecture”.

[Ngo][13:31]
Perhaps another way of phrasing this intermediate step is that the agent has a shallow understanding of how to induce causal impacts.

[Yudkowsky][13:31]
What is “shallow” to you?

[Ngo][13:31]
In a similar way to how you claim that GPT-3 has a shallow understanding of language.

[Yudkowsky][13:32]
So it’s memorized a ton of shallow causal-impact-inducing patterns from a large dataset, and this can be verified by, for example, presenting it with an example mildly outside the dataset and watching it fail, which we think will confirm our hypothesis that it didn’t learn any deep ways of solving that dataset.

[Ngo][13:33]
Roughly speaking, yes.

[Yudkowsky][13:34]
E.g., it wouldn’t surprise us at all if GPT-4 had learned to predict “27 \* 18” but not “what is the area of a rectangle 27 meters by 18 meters”… is what I’d like to say, but Codex sure did demonstrate those two were kinda awfully proximal.

[Ngo][13:34]
Here’s one way we could flesh this out.
Imagine an agent that loses coherence quickly when it’s trying to act in the world.

So for example, we’ve trained it to do scientific experiments over a period of a few hours or days.

And then it’s very good at understanding the experimental data and extracting patterns from it.

But upon running it for a week or a month, it loses coherence in a similar way to how GPT-3 loses coherence – e.g. it forgets what it’s doing.

My story for why this might happen is something like: there is a specific skill of having long-term memory, and we never trained our agent to have this skill, and so it has not acquired that skill (even though it can reason in very general and powerful ways in the short term).

This feels similar to the argument I was making before about how an agent might lack self-awareness, if we haven’t trained it specifically to have that.

[Yudkowsky][13:39]
There’s a set of obvious-to-me tactics for doing a pivotal act with minimal danger, which I do not think collectively make the problem safe, and one of these tactics is indeed “Put a limit on the ‘attention window’ or some other internal parameter, ramp it up slowly, and don’t ramp it any higher than you needed to solve the problem.”

[Ngo][13:41]
You could indeed do this manually, but my expectation is that you could also do this automatically, by training agents in environments where they don’t benefit from having long attention spans.

[Yudkowsky][13:42]
(Any time one imagines a specific tactic of this kind, if one has the [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/), one can also imagine all sorts of ways it might go wrong; for example, an attention window can be defeated if there’s any aspect of the attended data or the internal state that ended up depending on past events in a way that leaked info about them.
But, depending on how much superintelligence you were throwing around elsewhere, you could maybe get away with that, some of the time.)

[Ngo][13:43]
And that if you put agents in environments where they answer questions but don’t interact much with the physical world, then there will be many different traits which are necessary for achieving goals in the real world which they will lack, because there was little advantage to the optimiser in building those traits in.

[Yudkowsky][13:43]
I’ll observe that Transformer-XL built an attention window that generalized, trained it on I think 380 tokens or something like that, and then found that it generalized to 4000 tokens or something like that.

[Ngo][13:43]
Yeah, an order of magnitude of generalisation is not surprising to me.

[Yudkowsky][13:44]
Having observed one order of magnitude, I would personally not be surprised by two orders of magnitude either, after seeing that.

[Ngo][13:45]
I’d be a little surprised, but I assume it would happen eventually.

### 1.3. Capability dials

[Yudkowsky][13:46]
I have a sense that this is all circling back to the question, “But what is it we *do* with the intelligence thus weakened?” If you can save the world using a rock, I can build you a very safe rock.

[Ngo][13:46]
Right.

So far I’ve said “alignment research”, but I haven’t been very specific about it.

I guess some context here is that I expect that the first things we do with intelligence similar to this is create great wealth, produce a bunch of useful scientific advances, etc.

And that we’ll be in a world where people take the prospect of AGI much more seriously.

[Yudkowsky][13:48]
I mostly expect – albeit with some chance that reality says “So what?” to me and surprises me, because it is not as solidly determined as some other things – that we do not hang around very long in the “weirdly ~human AGI” phase before we get into the “if you crank up this AGI it destroys the world” phase. Less than 5 years, say, to put numbers on things.

It would not surprise me in the least if the world ends before self-driving cars are sold on the mass market. On some quite plausible scenarios which I think have >50% of my probability mass at the moment, research AGI companies would be able to produce prototype car-driving AIs if they spent time on that, given the near-world-ending tech level; but there will be Many Very Serious Questions about this relatively new unproven advancement in machine learning being turned loose on the roads.
And their AGI tech will gain the property “can be turned up to destroy the world” before Earth gains the property “you’re allowed to sell self-driving cars on the mass market”, because there just won’t be much time.

[Ngo][13:52]
Then I expect that another thing we do with this is produce a very large amount of data which rewards AIs for following human instructions.

[Yudkowsky][13:52]
On other scenarios, of course, self-driving becomes possible by limited AI well before things start to break (further) on AGI. And on some scenarios, the way you got to AGI was via some breakthrough that is already scaling pretty fast, so by the time you can use the tech to get self-driving cars, that tech already ends the world if you turn up the dial, or that event follows very swiftly.

[Ngo][13:53]
When you talk about “cranking up the AGI”, what do you mean?

Using more compute on the same data?

[Yudkowsky][13:53]
Running it with larger bounds on the for loops, over more GPUs, to be concrete about it.

[Ngo][13:53]
In an RL setting, or a supervised, or unsupervised learning setting?

Also: can you elaborate on the for loops?

[Yudkowsky][13:56]
I do not quite think that gradient descent on Stack More Layers alone – as used by OpenAI for GPT-3, say, and as *opposed* to DeepMind, which builds more complex artifacts like MuZero or AlphaFold 2 – is liable to be the first path taken to AGI. I am reluctant to speculate more in print about clever ways to AGI, and I think any clever person out there will, if they are really clever and not just a fancier kind of stupid, not talk either about what they think is missing from Stack More Layers or how you would really get AGI.
That said, the fact that you cannot just run GPT-3 at a greater search depth, the way you can run MuZero at a greater search depth, is part of why I think that AGI is not likely to look *exactly* like GPT-3; the thing that kills us is likely to be a thing that can get more dangerous when you turn up a dial on it, not a thing that intrinsically has no dials that can make it more dangerous.

### 1.4. Consequentialist goals vs. deontologist goals

[Ngo][13:59]
Hmm, okay. Let’s take a quick step back and think about what would be useful for the last half hour.

I want to flag that my intuitions about pivotal acts are not very specific; I’m quite uncertain about how the geopolitics of that situation would work, as well as the timeframe between somewhere-near-human-level AGI and existential-risk AGI.

So we could talk more about this, but I expect there’d be a lot of me saying “well, we can’t rule out that X happens”, which is perhaps not the most productive mode of discourse.

A second option is digging into your intuitions about how cognition works.

[Yudkowsky][14:03]
Well, obviously, in the limit of alignment not being accessible to our civilization, and my successfully building a model weaker than reality which nonetheless correctly rules out alignment being accessible to our civilization, I could spend the rest of my short remaining lifetime arguing with people whose models are weak enough to induce some area of ignorance where for all they know you could align a thing. But that is predictably how conversations go in possible worlds where the Earth is doomed; so somebody wiser on the meta-level, though also ignorant on the object-level, might prefer to ask: “Where do you think your knowledge, rather than your ignorance, says that alignment ought to be doable, and you will be surprised if it is not?”

[Ngo][14:07]
That’s a fair point.
Although it seems like a structural property of the “pivotal act” framing, which builds in doom by default.

[Yudkowsky][14:08]
We could talk about that, if you think it’s a crux. Though I’m also not thinking that this whole conversation gets done in a day, so maybe for publishability reasons we should try to focus more on one line of discussion?

But I do think that lots of people get their optimism by supposing that the world can be saved by doing less dangerous things with an AGI. So it’s a big ol’ crux of mine on priors.

[Ngo][14:09]
Agreed that one line of discussion is better; I’m happy to work within the pivotal act framing for current purposes.

A third option is that I make some claims about how cognition works, and we see how much you agree with them.

[Yudkowsky][14:12]
(Though it’s something of a restatement, a reason I’m not going into “my intuitions about how cognition works” is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.)

I’m cheerful about hearing your own claims about cognition and disagreeing with them.

[Ngo][14:12]
Great

Okay, so one claim is that something like deontology is a fairly natural way for minds to operate.

[Yudkowsky][14:14]
(“If that were true,” he thought at once, “bureaucracies and books of regulations would be a lot more efficient than they are in real life.”)

[Ngo][14:14]
Hmm, although I think this was probably not a very useful phrasing; let me think about how to rephrase it.

Okay, so in [our earlier email discussion](https://docs.google.com/document/d/1XXGbFnWPXtsRiTxleBZ0LAGtU7_7CYKt17nnowfpKvo/edit), we talked about the concept of “obedience”.

To me it seems like it is just as plausible for a mind to have a concept like “obedience” as its rough goal, as a concept like maximising paperclips.

If we imagine training an agent on a large amount of data which pointed in the rough direction of rewarding obedience, for example, then I imagine that by default obedience would be a constraint of comparable strength to, say, the human survival instinct.

(Which is obviously not strong enough to stop humans doing a bunch of things that contradict it – but it’s a pretty good starting point.)

[Yudkowsky][14:18]
Heh. You mean of comparable strength to the human instinct to explicitly maximize inclusive genetic fitness?

[Ngo][14:19]
Genetic fitness wasn’t a concept that our ancestors were able to understand, so it makes sense that they weren’t pointed directly towards it.

(And nor did they understand *how* to achieve it.)

[Yudkowsky][14:19]
Even in that paradigm, except insofar as you expect gradient descent to work very differently from gene-search optimization – which, admittedly, it does – when you optimize really hard on a thing, you get contextual correlates to it, not the thing you optimized on.

This is of course one of the Big Fundamental Problems that I expect in alignment.

[Ngo][14:20]
Right, so the main correlate that I’ve seen discussed is “do what would make the human give you a high rating, not what the human actually wants”.

One thing I’m curious about is the extent to which you’re concerned about this specific correlate, versus correlates in general.

[Yudkowsky][14:21]
That said, I also see basic structural reasons why paperclips would be much easier to train than “obedience”, even if we could magically instill simple inner desires that perfectly reflected the simple outer algorithm we saw ourselves as running over many particular instances of a loss function.

[Ngo][14:22]
I’d be interested in hearing what those are.

[Yudkowsky][14:22]
well, first of all, why *is* a book of regulations so much more unwieldy than a hunter-gatherer?

if deontology is just as good as [consequentialism](https://arbital.com/p/consequentialist/), y’know.

(do you want to try replying or should I just say?)

[Ngo][14:23]
Go ahead.

I should probably clarify that I agree that you can’t just replace consequentialism with deontology.

The claim is more like: when it comes to high-level concepts, it’s not clear to me why high-level consequentialist goals are more natural than high-level deontological goals.

[Yudkowsky][14:24]
I reply that reality is complicated, so when you pump a simple goal through complicated reality you get complicated behaviors required to achieve the goal. If you think of reality as a complicated function Input -> Probability(Output), then even to get a simple Output, or a simple partition on Output, or a high expected score in a simple function over Output, you may need very complicated Input.

Humans don’t trust each other. They imagine, “Well, if I just give this bureaucrat a goal, perhaps they won’t reason honestly about what it takes to achieve that goal! Oh no!
Therefore I will instead, being the trustworthy and accurate person that I am, reason myself about constraints and requirements on the bureaucrat’s actions, such that, if the bureaucrat obeys these regulations, I expect the outcome of their action will be what I want.”

But (compared to a general intelligence that observes and models complicated reality and does its own search to pick actions) an actually-effective book of regulations (implemented by some nonhuman mind with a large enough and perfect enough memory to memorize it) would tend to involve a (physically unmanageable) vast number of rules saying “if you observe this, do that” to follow all the crinkles of complicated reality as it can be inferred from observation.

[Ngo][14:28]

> (Though it’s something of a restatement, a reason I’m not going into “my intuitions about how cognition works” is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.)

(As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it’s still a while away, I’m wondering whether it’s still useful to have a rough outline of these intuitions even if it’s in a form that very few people will internalise)

[Yudkowsky][14:30]

> (As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it’s still a while away, I’m wondering whether it’s still useful to have a rough outline of these intuitions even if it’s in a form that very few people will internalise)

Plausibly useful, but not to be attempted today, I think?

[Ngo][14:30]
Agreed.

[Yudkowsky][14:30]
(We are now theoretically in overtime, which is okay for me, but for you it is 11:30pm (I think?), and so it is on you to call when to halt, now or later.)

[Ngo][14:32]
Yeah, it’s 11:30 for me. I think probably best to halt here. I agree with all the things you just said about reality being complicated, and why consequentialism is therefore valuable. My “deontology” claim (which was, in its original formulation, far too general – apologies for that) was originally intended as a way of poking into your intuitions about which types of cognition are natural or unnatural, which I think is the topic we’ve been circling around for a while.

[Yudkowsky][14:33]
Yup, and a place to resume next time might be why I think “obedience” is unnatural compared to “paperclips” – though that is a thing that probably requires taking that stab at what underlies surface competencies.

[Ngo][14:34]
Right.
I do think that even a vague gesture at that would be reasonably helpful (assuming that this doesn’t already exist online?)

[Yudkowsky][14:34]
Not yet afaik, and I don’t want to point you to Ajeya’s stuff even if she were ok with that, because then this in-context conversation won’t make sense to others.

[Ngo][14:35]
For my part, I should think more about pivotal acts that I’d be willing to specifically defend.

In any case, thanks for the discussion 🙂

Let me know if there’s a particular time that suits you for a follow-up; otherwise we can sort it out later.

[Soares][14:37]
(y’all are doing all my jobs for me)

[Yudkowsky][14:37]
could try Tuesday at this same time – though I may be in worse shape for dietary reasons, still, seems worth trying.

[Soares][14:37]
(wfm)

[Ngo][14:39]
Tuesday not ideal, any others work?

[Yudkowsky][14:39]
Wednesday?

[Ngo][14:40]
Yes, Wednesday would be good

[Yudkowsky][14:40]
let’s call it tentatively for that

[Soares][14:41]
Great! Thanks for the chats.

[Ngo][14:41]
Thanks both!

[Yudkowsky][14:41]
Thanks, Richard!

2. Follow-ups
-------------

### 2.1. Richard Ngo’s summary

[Tallinn][0:35]  (Sep. 6)
just caught up here & wanted to thank nate, eliezer and (especially) richard for doing this! it’s great to see eliezer’s model being probed so intensively. i’ve learned a few new things (such as the genetic bottleneck being plausibly a big factor in human cognition). FWIW, a minor comment re deontology (as that’s fresh on my mind): in my view deontology is more about coordination than optimisation: deontological agents are more trustworthy, as they’re much easier to reason about (in the same way that functional/declarative code is easier to reason about than imperative code).
hence my steelman of bureaucracies (as well as social norms): humans just (correctly) prefer their fellow optimisers (including non-human optimisers) to be deontological for trust/coordination reasons, and are happy to pay the resulting competence tax.\n\n\n\n\n\n\n[Ngo][3:10]  (Sep. 8)\nThanks Jaan! I agree that greater trust is a good reason to want agents which are deontological at some high level.\n\n\nI’ve attempted a summary of the key points so far; comments welcome: [GDocs link]\n\n\n\n\n\n\n[Ngo]  (Sep. 8 Google Doc)\n*1st discussion*\n\n\n(Mostly summaries not quotations)\n\n\nEliezer, summarized by Richard: “To avoid catastrophe, whoever builds AGI first will have to a) align it to some extent, and b) decide not to scale it up beyond the point where their alignment techniques fail, and c) do some pivotal act that prevents others from scaling it up to that level. But ~~our alignment techniques will not be good enough~~ ~~our alignment techniques will be very far from adequate~~ on our current trajectory, our alignment techniques will be very far from adequate to create an AI that safely performs any such pivotal act.”\n\n\n\n\n\n\n[Yudkowsky][11:05]  (Sep. 8 comment)\n\n> \n> will not be good enough\n> \n> \n> \n\n\nAre not presently on course to be good enough, missing by not a little.  “Will not be good enough” is literally declaring for lying down and dying.\n\n\n\n\n\n[Yudkowsky][16:03]  (Sep. 9 comment)\n\n> \n> will [be very far from adequate]\n> \n> \n> \n\n\nSame problem as the last time I commented.  I am not making an unconditional prediction about future failure as would be implied by the word “will”.  Conditional on current courses of action or their near neighboring courses, we seem to be well over an order of magnitude away from surviving, unless a miracle occurs.  It’s still in the end a result of people doing what they seem to be doing, not an inevitability.\n\n\n\n\n\n[Ngo][5:10]  (Sep. 10 comment)\nAh, I see. 
Does adding “on our current trajectory” fix this?\n\n\n\n\n\n\n[Yudkowsky][10:46]  (Sep. 10 comment)\nYes.\n\n\n\n\n\n[Ngo]  (Sep. 8 Google Doc)\nRichard, summarized by Richard: “Consider the pivotal act of ‘make a breakthrough in alignment research’. It is likely that, before the point where AGIs are strongly superhuman at seeking power, they will already be strongly superhuman at understanding the world, and at performing narrower pivotal acts like alignment research which don’t require as much agency (by which I roughly mean: large-scale motivations and the ability to pursue them over long timeframes).”\n\n\nEliezer, summarized by Richard: “There’s a deep connection between solving intellectual problems and taking over the world – the former requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things. Even mathematical research is a goal-oriented task which involves identifying then pursuing instrumental subgoals – and if brains which evolved to hunt on the savannah can quickly learn to do mathematics, then it’s also plausible that AIs trained to do mathematics could quickly learn a range of other skills. Since almost nobody understands the deep similarities in the cognition required for these different tasks, the distance between AIs that are able to perform fundamental scientific research, and dangerously agentic AGIs, is smaller than almost anybody expects.”\n\n\n\n\n\n\n[Yudkowsky][11:05]  (Sep. 8 comment)\n\n> \n> There’s a deep connection between solving intellectual problems and taking over the world\n> \n> \n> \n\n\nThere’s a deep connection by default between chipping flint handaxes and taking over the world, if you happen to learn how to chip handaxes in a very general way.  “Intellectual” problems aren’t special in this way.  
And maybe you could avert the default, but that would take some work and you’d have to do it before easier default ML techniques destroyed the world.\n\n\n\n\n\n[Ngo]  (Sep. 8 Google Doc)\nRichard, summarized by Richard: “Our lack of understanding about how intelligence works also makes it easy to assume that traits which co-occur in humans will also co-occur in future AIs. But human brains are badly-optimised for tasks like scientific research, and well-optimised for seeking power over the world, for reasons including a) evolving while embodied in a harsh environment; b) the genetic bottleneck; c) social environments which rewarded power-seeking. By contrast, training neural networks on tasks like mathematical or scientific research optimises them much less for seeking power. For example, GPT-3 has knowledge and reasoning capabilities but little agency, and loses coherence when run for longer timeframes.”\n\n\n\n\n\n\n[Tallinn][4:19]  (Sep. 8 comment)\n\n> \n> [well-optimised for] seeking power\n> \n> \n> \n\n\nmale-female differences might be a datapoint here (annoying as it is to lean on pinker’s point :))\n\n\n\n\n\n\n[Yudkowsky][11:31]  (Sep. 8 comment)\nI don’t think a female Eliezer Yudkowsky doesn’t try to save / optimize / take over the world.  Men may do that for nonsmart reasons; smart men and women follow the same reasoning when they are smart enough.  Eg Anna Salamon and many others.\n\n\n\n\n\n[Ngo]  (Sep. 8 Google Doc)\nEliezer, summarized by Richard: “Firstly, there’s a big difference between most scientific research and the sort of pivotal act that we’re talking about – you need to explain how AIs with a given skill can be used to actually prevent dangerous AGIs from being built. Secondly, insofar as GPT-3 has little agency, that’s because it has memorised many shallow patterns in a way which won’t directly scale up to general intelligence.
Intelligence instead consists of deep problem-solving patterns which link understanding and agency at a fundamental level.”\n\n\n\n\n\n \n\n\n3. September 8 conversation\n---------------------------\n\n\n \n\n\n### 3.1. The Brazilian university anecdote\n\n\n \n\n\n\n[Yudkowsky][11:00]\n(I am here.)\n\n\n\n\n\n[Ngo][11:01]\nMe too.\n\n\n\n\n\n\n[Soares][11:01]\nWelcome back!\n\n\n(I’ll mostly stay out of the way again.)\n\n\n\n\n\n\n[Ngo][11:02]\nCool. Eliezer, did you read the summary – and if so, do you roughly endorse it?\n\n\nAlso, I’ve been thinking about the best way to approach discussing your intuitions about cognition. My guess is that starting with the obedience vs paperclips thread is likely to be less useful than starting somewhere else – e.g. the description you gave near the beginning of the last discussion, about “searching for states that get fed into a result function and then a result-scoring function”.\n\n\n\n\n\n\n[Yudkowsky][11:06]\nmade a couple of comments about phrasings in the doc\n\n\nSo, from my perspective, there’s this thing where… it’s really quite hard to teach certain *general* points by talking at people, as opposed to more specific points. Like, they’re trying to build a perpetual motion machine, and even if you can manage to argue them into believing their first design is wrong, they go looking for a new design, and the new design is complicated enough that they can no longer be convinced that they’re wrong because they managed to make a more complicated error whose refutation they couldn’t keep track of anymore.\n\n\nTeaching people to see an underlying structure in a lot of places is a very hard thing to teach in this way. 
Richard Feynman [gave an example](https://v.cx/2010/04/feynman-brazil-education) of the mental motion in his story that ends “Look at the water!”, where people learned in classrooms about how “a medium with an index” is supposed to polarize light reflected from it, but they didn’t realize that sunlight coming off of water would be polarized. My guess is that doing this properly requires homework exercises; and that, unfortunately from my own standpoint, it happens to be a place where I have extra math talent, the same way that eg Marcello is more talented at formally proving theorems than I happen to be; and that people without the extra math talent, have to do a lot *more* exercises than I did, and I don’t have a good sense of which exercises to give them.\n\n\n\n\n\n[Ngo][11:13]\nI’m sympathetic to this, and can try to turn off skeptical-discussion-mode and turn on learning-mode, if you think that’ll help.\n\n\n\n\n\n\n[Yudkowsky][11:14]\nThere’s a general insight you can have about how arithmetic is commutative, and for some people you can show them 1 + 2 = 2 + 1 and their native insight suffices to generalize over the 1 and the 2 to any other numbers you could put in there, and they realize that strings of numbers can be rearranged and all end up equivalent. For somebody else, when they’re a kid, you might have to show them 2 apples and 1 apple being put on the table in a different order but ending up with the same number of apples, and then you might have to show them again with adding up bills in different denominations, in case they didn’t generalize from apples to money. 
I can actually remember being a child young enough that I tried to add 3 to 5 by counting “5, 6, 7” and I thought there was some clever enough way to do that to actually get 7, if you tried hard.\n\n\nBeing able to see “consequentialism” is like that, from my perspective.\n\n\n\n\n\n[Ngo][11:15]\nAnother possibility: can you trace the origins of this belief, and how it came out of your previous beliefs?\n\n\n\n\n\n\n[Yudkowsky][11:15]\nI don’t know what homework exercises to give people to make them able to see “consequentialism” all over the place, instead of inventing slightly new forms of consequentialist cognition and going “Well, now *that* isn’t consequentialism, right?”\n\n\nTrying to say “searching for states that get fed into an input-result function and then a result-scoring function” was one attempt of mine to describe the dangerous thing in a way that would maybe sound abstract enough that people would try to generalize it more.\n\n\n\n\n\n[Ngo][11:17]\nAnother possibility: can you describe the closest thing to real consequentialism in humans, and how it came about in us?\n\n\n\n\n\n\n[Yudkowsky][11:18][11:21]\nOk, so, part of the problem is that… before you do enough homework exercises for whatever your level of talent is (and even I, at one point, had done little enough homework that I thought there might be a clever way to add 3 and 5 in order to get to 7), you tend to think that only the very crisp formal thing that’s been presented to you, is the “real” thing.\n\n\nWhy would your engine have to obey the laws of thermodynamics? 
You’re not building one of those Carnot engines you saw in the physics textbook!\n\n\nHumans contain fragments of consequentialism, or bits and pieces whose interactions add up to partially imperfectly shadow consequentialism, and the critical thing is being able to see that the reason why humans’ outputs ‘work’, in a sense, is because these structures are what is doing the work, and the work gets done because of how they shadow consequentialism and only insofar as they shadow consequentialism.\n\n\n\n\n\n\nPut a human in one environment, it gets food. Put a human in a different environment, it gets food again. Wow, different initial conditions, same output! There must be things inside the human that, whatever else they do, are also along the way somehow effectively searching for motor signals such that food is the end result!\n\n\n\n\n\n[Ngo][11:20]\nTo me it feels like you’re trying to nudge me (and by extension whoever reads this transcript) out of a specific failure mode. If I had to guess, something like: “I understand what Eliezer is talking about so now I’m justified in disagreeing with it”, or perhaps “Eliezer’s explanation didn’t make sense to me and so I’m justified in thinking that his concepts don’t make sense”. Is that right?\n\n\n\n\n\n\n[Yudkowsky][11:22]\nMore like… from my perspective, even after I talk people out of one specific perpetual motion machine being possible, they go off and try to invent a different, more complicated perpetual motion machine.\n\n\nAnd I am not sure what to do about that. 
It has been going on for a very long time from my perspective.\n\n\nIn the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to – they did not really get [Bayesianism as thermodynamics](https://www.lesswrong.com/s/oFePMp9rKftEeZDDr/p/QkX2bAkwG2EpGvNug), say, they did not become able to see [Bayesian structures](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure) any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they’d spent a lot of time being exposed to over and over and over again in lots of blog posts.\n\n\nMaybe there’s no way to make somebody understand why [corrigibility](https://arbital.com/p/corrigibility/) is “unnatural” except to repeatedly walk them through the task of trying to invent an agent structure that lets you press the shutdown button (without it trying to force you to press the shutdown button), and showing them how each of their attempts fails; and then also walking them through why Stuart Russell’s attempt at moral uncertainty produces the [problem of fully updated (non-)deference](https://arbital.com/p/updated_deference/); and hope they can start to see the informal general pattern of why corrigibility is in general contrary to the structure of things that are good at optimization.\n\n\nExcept that to do the exercises at all, you need them to work within an expected utility framework.
And then they just go, “Oh, well, I’ll just build an agent that’s good at optimizing things but doesn’t use these explicit expected utilities that are the source of the problem!”\n\n\nAnd then if I want them to believe the same things I do, for the same reasons I do, I would have to teach them why certain structures of cognition are the parts of the agent that are good at stuff and do the work, rather than them being this particular formal thing that they learned for manipulating meaningless numbers as opposed to real-world apples.\n\n\nAnd I have tried to write that page once or twice (eg “[coherent decisions imply consistent utilities](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities)“) but it has not sufficed to teach them, because they did not even do as many homework problems as I did, let alone the greater number they’d have to do because this is in fact a place where I have a particular talent.\n\n\nI don’t know how to solve this problem, which is why I’m falling back on talking about it at the meta-level.\n\n\n\n\n\n[Ngo][11:30]\nI’m reminded of a LW post called “[Write a thousand roads to Rome](https://www.lesswrong.com/posts/Q924oPJzK92FifuFg/write-a-thousand-roads-to-rome)“, which iirc argues in favour of trying to explain the same thing from as many angles as possible in the hope that one of them will stick.\n\n\n\n\n\n\n[Soares][11:31]\n(Suggestion, not-necessarily-good: having named this problem on the meta-level, attempt to have the object-level debate, while flagging instances of this as it comes up.)\n\n\n\n\n\n\n[Ngo][11:31]\nI endorse Nate’s suggestion.\n\n\nAnd will try to keep the difficulty of the meta-level problem in mind and respond accordingly.\n\n\n\n\n\n\n[Yudkowsky][11:33]\nThat (Nate’s suggestion) is probably the correct thing to do. I name it out loud because sometimes being told about the meta-problem actually does help on the object problem. 
It seems to help me a lot and others somewhat less, but it does help others at all, for many others.\n\n\n\n\n \n\n\n### 3.2. Brain functions and outcome pumps\n\n\n \n\n\n\n[Yudkowsky][11:34]\nSo, do you have a particular question you would ask about input-seeking cognitions? I did try to say why I mentioned those at all (it’s a different road to Rome on “consequentialism”).\n\n\n\n\n\n[Ngo][11:36]\nLet’s see. So the visual cortex is an example of quite impressive cognition in humans and many other animals. But I’d call this “pattern-recognition” rather than “searching for high-scoring results”.\n\n\n\n\n\n\n[Yudkowsky][11:37]\nYup! And it is no coincidence that there are no whole animals formed entirely out of nothing but a visual cortex!\n\n\n\n\n\n[Ngo][11:37]\nOkay, cool. So you’d agree that the visual cortex is doing something that’s qualitatively quite different from the thing that animals overall are doing.\n\n\nThen another question is: can you characterise searching for high-scoring results in non-human animals? Do they do it? Or are you mainly talking about humans and AGIs?\n\n\n\n\n\n\n[Yudkowsky][11:39]\nAlso by the time you get to like the temporal lobes or something, there is probably some significant amount of “what could I be seeing that would produce this visual field?” that is searching through hypothesis-space for hypotheses with high plausibility scores, and for sure at the human level, humans will start to think, “Well, could I be seeing this? No, that theory has the following problem. How could I repair that theory?” But it is plausible that there is no low-level analogue of this in a monkey’s temporal cortex; and even more plausible that the parts of the visual cortex, if any, which do anything analogous to this, are doing it in a relatively local and definitely very domain-specific way.\n\n\nOh, that’s the cerebellum and motor cortex and so on, if we’re talking about a cat or whatever. 
They have to find motor plans that result in their catching the mouse.\n\n\nJust because the visual cortex isn’t (obviously) running a search doesn’t mean the rest of the animal isn’t running any searches.\n\n\n(On the meta-level, I notice myself hiccuping “But how could you not see that when looking at a cat?” and wondering what exercises would be required to teach that.)\n\n\n\n\n\n[Ngo][11:41]\nWell, I see *something* when I look at a cat, but I don’t know how well it corresponds to the concepts you’re using. So just taking it slowly for now.\n\n\nI have the intuition, by the way, that the motor cortex is in some sense doing a similar thing to the visual cortex – just in reverse. So instead of taking low-level inputs and producing high-level outputs, it’s taking high-level inputs and producing low-level outputs. Would you agree with that?\n\n\n\n\n\n\n[Yudkowsky][11:43]\nIt doesn’t directly parse in my ontology because (a) I don’t know what you mean by ‘high-level’ and (b) whole Cartesian agents can be viewed as functions, that doesn’t mean all agents can be viewed as non-searching pattern-recognizers.\n\n\nThat said, all parts of the cerebral cortex have surprisingly similar morphology, so it wouldn’t be at all surprising if the motor cortex is doing something similar to visual cortex. (The cerebellum, on the other hand…)\n\n\n\n\n\n[Ngo][11:44]\nThe signal from the visual cortex saying “that is a cat”, and the signal to the motor cortex saying “grab that cup”, are things I’d characterise as high-level.\n\n\n\n\n\n\n[Yudkowsky][11:45]\nStill less of a native distinction in my ontology, but there’s an informal thing it can sort of wave at, and I can hopefully take that as understood and run with it.\n\n\n\n\n\n[Ngo][11:45]\nThe firing of cells in the retina, and firing of motor neurons, are the low-level parts.\n\n\nCool. 
So to a first approximation, we can think about the part in between the cat recognising a mouse, and the cat’s motor cortex producing the specific neural signals required to catch the mouse, as the part where the consequentialism happens?\n\n\n\n\n\n\n[Yudkowsky][11:49]\nThe part between the cat’s eyes seeing the mouse, and the part where the cat’s limbs move to catch the mouse, is the whole cat-agent. The whole cat agent sure is a baby consequentialist / searches for mouse-catching motor patterns / gets similarly high-scoring end results even as you vary the environment.\n\n\nThe visual cortex is a particular part of this system-viewed-as-a-feedforward-function that is, plausibly, by no means surely, either not very searchy, or does only small local visual-domain-specific searches not aimed per se at catching mice; it has the epistemic nature rather than the planning nature.\n\n\nThen from one perspective you could reason that “well, most of the consequentialism is in the remaining cat after visual cortex has sent signals onward”. And this is in general a dangerous mode of reasoning that is liable to fail in, say, inspecting every particular neuron for consequentialism and not finding it; but in this particular case, there are significantly more consequentialist parts of the cat than the visual cortex, so I am okay running with it.\n\n\n\n\n\n[Ngo][11:50]\nAh, the more specific thing I meant to say is: most of the consequentialism is strictly between the visual cortex and the motor cortex. 
Agree/disagree?\n\n\n\n\n\n\n[Yudkowsky][11:51]\nDisagree, I’m rusty on my neuroanatomy but I think the motor cortex may send signals on to the cerebellum rather than the other way around.\n\n\n(I may also disagree with the actual underlying notion you’re trying to hint at, so possibly not just a “well include the cerebellum then” issue, but I think I should let you respond first.)\n\n\n\n\n\n[Ngo][11:53]\nI don’t know enough neuroanatomy to chase that up, so I was going to try a different tack.\n\n\nBut actually, maybe it’s easier for me to say “let’s include the cerebellum” and see where you think the disagreement ends up.\n\n\n\n\n\n\n[Yudkowsky][11:56]\nSo since cats are not (obviously) (that I have read about) cross-domain consequentialists with imaginations, their consequentialism is in bits and pieces of consequentialism embedded in them all over by the more purely pseudo-consequentialist genetic optimization loop that built them.\n\n\nA cat who fails to catch a mouse may then get little bits and pieces of catbrain adjusted all over.\n\n\nAnd then those adjusted bits and pieces get a pattern lookup later.\n\n\nWhy do these pattern-lookups with no obvious immediate search element, all happen to point towards the same direction of catching the mouse? Because of the past causal history about how what gets looked up, which was tweaked to catch the mouse.\n\n\nSo it is legit harder to point out “the consequentialist parts of the cat” by looking for which sections of neurology are doing searches right there. That said, to the extent that the visual cortex does not get tweaked on failure to catch a mouse, it’s not part of that consequentialist loop either.\n\n\nAnd yes, the same applies to humans, but humans also do more explicitly searchy things and this is part of the story for why humans have spaceships and cats do not.\n\n\n\n\n\n[Ngo][12:00]\nOkay, this is interesting. 
So in biological agents we’ve got these three levels of consequentialism: evolution, reinforcement learning, and planning.\n\n\n\n\n\n\n[Yudkowsky][12:01]\nIn biological agents we’ve got evolution + local evolved system-rules that in the past promoted genetic fitness. Two kinds of local rules like this are “operant-conditioning updates from success or failure” and “search through visualized plans”. I wouldn’t characterize these two kinds of rules as “levels”.\n\n\n\n\n\n[Ngo][12:02]\nOkay, I see. And when you talk about searching through visualised plans (the type of thing that humans do) can you say more about what it means for that to be a “search”?\n\n\nFor example, if I imagine writing a poem line-by-line, I may only be planning a few words ahead. But somehow the whole poem, which might be quite long, ends up a highly-optimised product. Is that a central example of planning?\n\n\n\n\n\n\n[Yudkowsky][12:04][12:07]\nPlanning is one way to succeed at search. I think for purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it’s a strong enough search, rather than the danger seeming to come from details of the planning process.\n\n\nOne of my early experiences in successfully generalizing my notion of intelligence, what I’d later verbalize as “computationally efficient finding of actions that produce outcomes high in a preference ordering”, was in writing an (unpublished) story about time-travel in which the universe was globally consistent.\n\n\nThe requirement of global consistency, the way in which all events between Paradox start and Paradox finish had to map the Paradox’s initial conditions onto the endpoint that would go back and produce those exact initial conditions, ended up imposing strong complicated constraints on reality that the Paradox in effect had to navigate using its initial conditions. 
The time-traveler needed to end up going through certain particular experiences that would produce the state of mind in which he’d take the actions that would end up prodding his future self elsewhere into having those experiences.\n\n\n\n\n\n\nThe Paradox ended up killing the people who built the time machine, for example, because they would not otherwise have allowed that person to go back in time, or kept the temporal loop open that long for any other reason if they were still alive.\n\n\nJust having two examples of strongly consequentialist general optimization in front of me – human intelligence, and evolutionary biology – hadn’t been enough for me to properly generalize over a notion of optimization. Having three examples of homework problems I’d worked – human intelligence, evolutionary biology, and the fictional Paradox – caused it to finally click for me.\n\n\n\n\n\n[Ngo][12:07]\nHmm. So to me, one of the central features of search is that you consider many possibilities. But in this poem example, I may only have explicitly considered a couple of possibilities, because I was only looking ahead a few words at a time. This seems related to the distinction Abram drew a while back between selection and control ([https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control)). Do you distinguish between them in the same way as he does? Or does “control” of a system (e.g. a football player dribbling a ball down the field) count as search too in your ontology?\n\n\n\n\n\n\n[Yudkowsky][12:10][12:11]\n\n\nI would later try to tell people to “imagine a paperclip maximizer as *not being a mind at all*, imagine it as a kind of malfunctioning time machine that spits out outputs which will in fact result in larger numbers of paperclips coming to exist later”.
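The “malfunctioning time machine” framing – a search over candidate outputs scored purely by their simulated downstream consequences, with nothing mind-like anywhere in the loop – can be sketched as toy code. All names and the three-action “world model” below are illustrative stand-ins, not anything from the discussion:

```python
# Toy "outcome pump": pure selection over outputs, scored only by simulated
# consequences. The dict standing in for a world model is purely illustrative.

def outcome_pump(candidate_outputs, simulate, score):
    """Return whichever candidate output leads, per the simulation, to the
    highest-scoring result. There are no beliefs or plans in this loop:
    just an input-result function and a result-scoring function."""
    return max(candidate_outputs, key=lambda out: score(simulate(out)))

# A trivial stand-in world: each output maps directly to a paperclip count.
world_model = {"do_nothing": 0, "run_factory": 10, "convert_everything": 10**6}

chosen = outcome_pump(world_model, world_model.get, lambda clips: clips)
```

The loop has no model of itself and no goals in any psychological sense; it simply emits whichever output in fact leads to the most paperclips, which is the point of the framing.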
I don’t think it clicked because people hadn’t done the same homework problems I had, and didn’t have the same “Aha!” of realizing how part of the notion and danger of intelligence could be seen in such purely material terms.\n\n\n\n\n\n\nBut the [convergent instrumental strategies](https://arbital.com/p/convergent_strategies/), the anticorrigibility, these things are contained in the *true fact about the universe* that certain outputs of the time machine *will in fact* result in there being lots more paperclips later. What produces the danger is not the details of the search process, it’s the search being strong and effective *at all*. The danger is in the territory itself and not just in some weird map of it; that building nanomachines that kill the programmers will produce more paperclips is a fact about reality, not a fact about paperclip maximizers!\n\n\n\n\n\n[Ngo][12:11]\nRight, I remember a very similar idea in your writing about Outcome Pumps.\n\n\n\n\n\n\n[Yudkowsky][12:12]\nYup! Alas, the story was written in 2002-2003 when I was a worse writer and the real story that inspired the Outcome Pump never did get published.\n\n\n\n\n\n[Ngo][12:14]\nOkay, so I guess the natural next question is: what is it that makes you think that a strong, effective search isn’t likely to be limited or constrained in some way?\n\n\nWhat is it about search processes (like human brains) that makes it hard to train them with blind spots, or deontological overrides, or things like that?\n\n\nHmmm, although it feels like this is a question I can probably predict your answer to. (Or maybe not, I wasn’t expecting the time travel.)\n\n\n\n\n\n\n[Yudkowsky][12:15]\nIn one sense, they are! A paperclip-maximizing superintelligence is nowhere near as powerful as a paperclip-maximizing time machine.
The time machine can do the equivalent of buying winning lottery tickets from lottery machines that have been thermodynamically randomized; a superintelligence can’t, at least not directly without rigging the lottery or whatever.\n\n\nBut a paperclip-maximizing strong general superintelligence is epistemically and instrumentally [efficient](https://arbital.com/p/efficiency/), relative to *you*, or to me. Any time we see it can get at least X paperclips by doing Y, we should expect that it gets X or more paperclips by doing Y or something that leads to even more paperclips than that, because it’s not going to miss the strategy we see.\n\n\nSo in that sense, searching our own brains for how a time machine would get paperclips, asking ourselves how many paperclips are in principle possible and how they could be obtained, is a way of getting our own brains to consider lower bounds on the problem without the implicit stupidity assertions that our brains unwittingly use to constrain story characters. Part of the point of telling people to think about time machines instead of superintelligences was to get past the ways they imagine superintelligences being stupid. Of course that didn’t work either, but it was worth a try.\n\n\nI don’t think that’s quite what you were asking about, but I want to give you a chance to see if you want to rephrase anything before I try to answer your me-reformulated questions.\n\n\n\n\n\n[Ngo][12:20]\nYeah, I think what I wanted to ask is more like: why should we expect that, out of the space of possible minds produced by optimisation algorithms like gradient descent, strong general superintelligences are more common than other types of agents which score highly on our loss functions?\n\n\n\n\n\n\n[Yudkowsky][12:20][12:23][12:24]\nIt depends on how hard you optimize! And whether gradient descent on a particular system can even successfully optimize that hard! 
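The “it depends on how hard you optimize” point can be illustrated with a toy best-of-n selection: the selection rule stays fixed, and only the amount of search varies. This is a hypothetical sketch, with uniform random draws standing in for whatever candidates are being optimized over:

```python
import random

def best_of_n(n, rng):
    """Draw n random candidates and keep the highest-scoring one.
    The rule never changes; only the amount of search does."""
    return max(rng.random() for _ in range(n))

rng = random.Random(0)           # fixed seed so the run is reproducible
weak = best_of_n(10, rng)        # a little optimization pressure
strong = best_of_n(10_000, rng)  # the same rule, searched much harder
```

With more candidates searched, the selected score climbs toward the extreme tail of the distribution: the identical rule becomes a stronger optimizer purely by being run harder.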
Many current AIs are trained by gradient descent and yet not superintelligences at all.\n\n\n\n\n\n\nBut the answer is that some problems are difficult in that they require solving lots of subproblems, and an easy way to solve all those subproblems is to use patterns which collectively have some coherence and overlap, and the coherence within them generalizes across all the subproblems. Lots of search orderings will stumble across something like that before they stumble across separate solutions for lots of different problems.\n\n\n\n\n\n\nI suspect that you cannot get this out of ~~small~~ large amounts of gradient descent on ~~small~~ large layered transformers, and therefore I suspect that GPT-N does not approach superintelligence before the world is ended by systems that look differently, but I could be wrong about that.\n\n\n\n\n\n[Ngo][12:22][12:23]\nSuppose that we optimise hard enough to produce an epistemic subsystem that can make plans much better than any human’s.\n\n\n\n\n\n\n\nMy guess is that you’d say that this is *possible*, but that we’re much more likely to first produce a consequentialist agent which does this (rather than a purely epistemic agent which does this).\n\n\n\n\n\n\n[Yudkowsky][12:24]\nI am confused by what you think it means to have an “epistemic subsystem” that “makes plans much better than any human’s”.
If it searches paths through time and selects high-scoring ones for output, what makes it “epistemic”?\n\n\n\n\n\n[Ngo][12:25]\nSuppose, for instance, that it doesn’t actually carry out the plans, it just writes them down for humans to look at.\n\n\n\n\n\n\n[Yudkowsky][12:25]\nIf it *can in fact* do the thing that a paperclipping time machine does, what makes it any safer than a paperclipping time machine because we called it “epistemic” or by some other such name?\n\n\nBy what criterion is it selecting the plans that humans look at?\n\n\nWhy did it make a difference that its output was fed through the causal systems called humans on the way to the causal systems called protein synthesizers or the Internet or whatever? If we build a superintelligence to design nanomachines, it makes no obvious difference to its safety whether it sends DNA strings directly to a protein synthesis lab, or humans read the output and retype it manually into an email. Presumably you also don’t think that’s where the safety difference comes from. So where does the safety difference come from?\n\n\n(note: lunchtime for me in 2 minutes, propose to reconvene in 30m after that)\n\n\n\n\n\n[Ngo][12:28]\n(break for half an hour sounds good)\n\n\nIf we consider the visual cortex at a given point in time, how does it decide which objects to recognise?\n\n\nInsofar as the visual cortex can be non-consequentialist about which objects it recognises, why couldn’t a planning system be non-consequentialist about which plans it outputs?\n\n\n\n\n\n\n[Yudkowsky][12:32]\nThis does feel to me like another “look at the water” moment, so what do you predict I’ll say about that?\n\n\n\n\n\n[Ngo][12:34]\nI predict that you say something like: in order to produce an agent that can create very good plans, we need to apply a lot of optimisation power to that agent. 
And if the channel through which we’re applying that optimisation power is “giving feedback on its plans”, then we don’t have a mechanism to ensure that the agent actually learns to optimise for creating really good plans, as opposed to creating plans that receive really good feedback.\n\n\n\n\n\n\n[Soares][12:35]\nSeems like a fine cliffhanger?\n\n\n\n\n\n\n[Ngo][12:35]\nYepp.\n\n\n\n\n\n\n[Soares][12:35]\nGreat. Let’s plan to reconvene in 30min.\n\n\n\n\n\n \n\n\n### 3.3. Hypothetical-planning systems, nanosystems, and evolving generality\n\n\n \n\n\n\n[Yudkowsky][13:03][13:11]\nSo the answer you expected from me, translated into my terms, would be, “If you select for the consequence of the humans hitting ‘approve’ on the plan, you’re still navigating the space of inputs for paths through time to probable outcomes (namely the humans hitting ‘approve’), so you’re still doing consequentialism.”\n\n\nBut suppose you manage to avoid that. Suppose you get exactly what you ask for. Then the system is still outputting *plans* such that, when humans follow them, they take paths through time and end up with outcomes that score high in some scoring function.\n\n\nMy answer is, “What the heck would it mean for a *planning system* to be *non-consequentialist*? You’re asking for nonwet water! What’s consequentialist isn’t the system that does the work, it’s the work you’re trying to do! 
You could imagine it being done by a cognition-free material system like a time machine and it would still be consequentialist *because* the output is a *plan*, a path through time!”\n\n\nAnd this indeed is a case where I feel a helpless sense of not knowing how I can rephrase things, which exercises you have to get somebody to do, what fictional experience you have to walk somebody through, before they start to look at the water and see a material with an index of refraction, before they start to look at the phrase “why couldn’t a planning system be non-consequentialist about which plans it outputs” and go “um”.\n\n\n\n\n\n\nMy imaginary listener now replies, “Ah, but what if we have plans that *don’t* end up with outcomes that score high in some function?” and I reply “Then you lie on the ground randomly twitching because any *outcome you end up with* which is *not that* is one that you wanted *more than that* meaning you *preferred it more than the outcome of random motor outputs* which is *optimization toward higher in the preference function* which is *taking a path through time that leads to particular destinations more than it leads to random noise*.”\n\n\n\n\n\n[Ngo][13:09][13:11]\nYeah, this does seem like a good example of the thing you were trying to explain at the beginning\n\n\n\n\n\n\n\nIt still feels like there’s some sort of levels distinction going on here though, let me try to tease out that intuition.\n\n\nOkay, so suppose I have a planning system that, given a situation and a goal, outputs a plan that leads from that situation to that goal.\n\n\nAnd then suppose that we give it, as input, a situation that we’re not actually in, and it outputs a corresponding plan.\n\n\nIt seems to me that there’s a difference between the sense in which that planning system is consequentialist by virtue of making consequentialist plans (as in: if that plan were used in the situation described in its inputs, it would lead to some goal being achieved) versus another hypothetical 
agent that is just directly trying to achieve goals in the situation it’s actually in.\n\n\n\n\n\n\n[Yudkowsky][13:18]\nSo I’d preface by saying that, *if* you could build such a system, which is indeed a coherent thing (it seems to me) to describe for the purpose of building it, then there would possibly be a safety difference on the margins, it would be noticeably less dangerous though still dangerous. It would need a special internal structural property that you might not get by gradient descent on a loss function with that structure, just like natural selection on inclusive genetic fitness doesn’t get you explicit fitness optimizers; you could optimize for planning in hypothetical situations, and get something that didn’t explicitly care only and strictly about hypothetical situations. And even if you did get that, the outputs that would kill or brain-corrupt the operators in hypothetical situations might also be fatal to the operators in actual situations. But that is a coherent thing to describe, and the fact that it was not optimizing our own universe, might make it *safer*.\n\n\nWith that said, I would worry that somebody would think there was some bone-deep difference of agentiness, of something they were empathizing with like personhood, of imagining goals and drives being absent or present in one case or the other, when they imagine a planner that just solves “hypothetical” problems. If you take that planner and feed it the actual world as its hypothetical, tada, it is now that big old dangerous consequentialist you were imagining before, without it having acquired some difference of *psychological* agency or ‘caring’ or whatever.\n\n\nSo I think there is an important homework exercise to do here, which is something like, “Imagine that safe-seeming system which only considers hypothetical problems. Now see that if you take that system, don’t make any other internal changes, and feed it actual problems, it’s very dangerous. 
Now meditate on this until you can see how the hypothetical-considering planner was extremely close in the design space to the more dangerous version, had all the dangerous latent properties, and would probably have a bunch of actual dangers too.”\n\n\n“See, you thought the source of the danger was this internal property of caring about actual reality, but it wasn’t that, it was the structure of planning!”\n\n\n\n\n\n[Ngo][13:22]\nI think we’re getting closer to the same page now.\n\n\nLet’s consider this hypothetical planner for a bit. Suppose that it was trained in a way that minimised the, let’s say, *adversarial* component of its plans.\n\n\nFor example, let’s say that the plans it outputs for any situation are heavily regularised so only the broad details get through.\n\n\nHmm, I’m having a bit of trouble describing this, but basically I have an intuition that in this scenario there’s a component of its plan which is cooperative with whoever executes the plan, and a component that’s adversarial.\n\n\nAnd I agree that there’s no fundamental difference in type between these two things.\n\n\n\n\n\n\n[Yudkowsky][13:27]\n“What if this potion we’re brewing has a Good Part and a Bad Part, and we could just keep the Good Parts…”\n\n\n\n\n\n[Ngo][13:27]\nNor do I think they’re separable. 
But in some cases, you might expect one to be much larger than the other.\n\n\n\n\n\n\n[Soares][13:29]\n(I observe that my model of some other listeners, at this point, protest “there is yet a difference between the hypothetical-planner applied to actual problems, and the Big Scary Consequentialist, which is that the hypothetical planner is emitting descriptions of plans that *would* work if executed, whereas the big scary consequentialist is executing those plans directly.”)\n\n\n(Not sure that’s a useful point to discuss, or if it helps Richard articulate, but it’s at least a place I expect some reader’s minds to go if/when this is published.)\n\n\n\n\n\n\n[Yudkowsky][13:30]\n(That is in fact a difference! The insight is in realizing that the hypothetical planner is only one line of outer shell command away from being a Big Scary Thing and is therefore also liable to be Big and Scary in many ways.)\n\n\n\n\n\n[Ngo][13:31]\nTo me it seems that Eliezer’s position is something like: “actually, in almost no training regimes do we get agents that decide which plans to output by spending almost all of their time thinking about the object-level problem, and very little of their time thinking about how to manipulate the humans carrying out the plan”.\n\n\n\n\n\n\n[Yudkowsky][13:32]\nMy position is that the AI does not neatly separate its internals into a Part You Think Of As Good and a Part You Think Of As Bad, because that distinction is sharp in your map but not sharp in the territory or the AI’s map.\n\n\nFrom the perspective of a paperclip-maximizing-action-outputting-time-machine, its actions are not “object-level making paperclips” or “manipulating the humans next to the time machine to deceive them about what the machine does”, they’re just physical outputs that go through time and end up with paperclips.\n\n\n\n\n\n[Ngo][13:34]\n@Nate, yeah, that’s a nice way of phrasing one point I was trying to make. 
And I do agree with Eliezer that these things *can be* very similar. But I’m claiming that in some cases these things can also be quite different – for instance, when we’re training agents that only get to output a short high-level description of the plan.\n\n\n\n\n\n\n[Yudkowsky][13:35]\nThe danger is in how hard the agent has to work to come up with the plan. I can, for instance, build an agent that very safely outputs a high-level plan for saving the world:\n\n\necho “Hey Richard, go save the world!”\n\n\nSo I do have to ask what kind of “high-level” planning output, that saves the world, you are envisioning, and why it was hard to cognitively come up with such that we didn’t just make that high-level plan right now, if humans could follow it. Then I’ll look at the part where the plan was hard to come up with, and say how the agent had to understand lots of complicated things in reality and accurately navigate paths through time for those complicated things, in order to even invent the high-level plan, and hence it was very dangerous if it wasn’t navigating exactly where you hoped. Or, alternatively, I’ll say, “That plan couldn’t save the world: you’re not postulating enough superintelligence to be dangerous, *and you’re also* not using enough superintelligence to flip the tables on the currently extremely doomed world.”\n\n\n\n\n\n[Ngo][13:39]\nAt this point I’m not envisaging a particular planning output that saves the world, I’m just trying to get more clarity on the issue of consequentialism.\n\n\n\n\n\n\n[Yudkowsky][13:40]\nLook at the water; it’s not the way you’re doing the work that’s dangerous, it’s the work you’re trying to do. 
What work are you trying to do, never mind how it gets done?\n\n\n\n\n\n[Ngo][13:41]\nI think I agree with you that, in the limit of advanced capabilities, we can’t say much about how the work is being done, we have to primarily reason from the work that we’re trying to do.\n\n\nBut here I’m only talking about systems that are intelligent enough to come up with plans and do research that are beyond the capability of humanity.\n\n\nAnd for me the question is: for *those* systems, can we tilt the way they do the work so they spend 99% of their time trying to solve the object-level problem, and 1% of their time trying to manipulate the humans who are going to carry out the plan? (Where these are not fundamental categories for the AI, they’re just a rough categorisation that emerges after we’ve trained it – the same way that the categories of “physically moving around” and “thinking about things” aren’t fundamentally different categories of action for humans, but the way we’ve evolved means there’s a significant internal split between them.)\n\n\n\n\n\n\n[Soares][13:43]\n(I suspect Eliezer is not trying to make a claim of the form “in the limit of advanced capabilities, we are relegated to reasoning about what work gets done, not about how it was done”. I suspect some miscommunication. It might be a reasonable time for Richard to attempt to paraphrase Eliezer’s argument?)\n\n\n(Though it also seems to me like Eliezer responding to the 99%/1% point may help shed light.)\n\n\n\n\n\n\n[Yudkowsky][13:46]\nWell, for one thing, I’d note that a system which is designing nanosystems, and spending 1% of its time thinking about how to kill the operators, is lethal. It has to be such a small fraction of thinking that it, like, never completes the whole thought about “well, if I did X, that would kill the operators!”\n\n\n\n\n\n[Ngo][13:46]\nThanks for that, Nate. 
I’ll try to paraphrase Eliezer’s argument now.\n\n\nEliezer’s position (partly in my own terminology): we’re going to build AIs that can perform very difficult tasks using cognition which we can roughly describe as “searching over many options to find one that meets our criteria”. An AI that can solve these difficult tasks will need to be able to search in a very general and flexible way, and so it will be very difficult to constrain that search into a particular region.\n\n\nHmm, that felt like a very generic summary, let me try and think about the more specific claims he’s making.\n\n\n\n\n\n\n[Yudkowsky][13:54]\n\n> \n> An AI that can solve these difficult tasks will need to be able to\n> \n> \n> \n\n\nVery very little is universally necessary over the design space. The *first* AGI that our tech becomes able to build is liable to work in certain easier and simpler ways.\n\n\n\n\n\n[Ngo][13:55]\nPoint taken; thanks for catching this misphrasing (this and previous times).\n\n\n\n\n\n\n[Yudkowsky][13:56]\nCan you, in principle, build a red-car-driver that is totally incapable of driving blue cars? In principle, sure! 
But the first red-car-driver that gradient descent stumbles over is liable to be a blue-car-driver too.\n\n\n\n\n\n[Ngo][13:57]\nEliezer, I’m wondering how much of our disagreement is about how high the human level is here.\n\n\nOr, to put it another way: we can build systems that outperform humans at quite a few tasks by now, without having search abilities that are general enough to even try to take over the world.\n\n\n\n\n\n\n[Yudkowsky][13:58]\nIndubitably and indeed, this is so.\n\n\n\n\n\n[Ngo][13:59]\nPutting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers?\n\n\nAnd say that we’ll be able to align ones that outperform us on *these tasks* before taking over the world, but not on *these other tasks*?\n\n\n\n\n\n\n[Yudkowsky][13:59][14:01]\nThat doesn’t have a very simple answer, but one aspect there is *domain generality* which in turn is achieved through *novel domain learning*.\n\n\n\n\n\n\nHumans, you will note, were not aggressively optimized by natural selection to be able to breathe underwater or fly into space. In terms of obvious outer criteria, there is not much outer sign that natural selection produced these creatures much more general than chimpanzees, by training on a much wider range of environments and loss functions.\n\n\n\n\n\n[Soares][14:00]\n(Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.)\n\n\n\n\n\n\n[Ngo][14:03]\n\n> \n> (Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.)\n> \n> \n> \n\n\n(Good to know, thanks for keeping an eye out. 
To be clear, I didn’t ever interpret Eliezer as making a claim explicitly about the limit of advanced capabilities; instead it just seemed to me that he was thinking about AIs significantly more advanced than the ones I’ve been thinking of. I think I phrased my point poorly.)\n\n\n\n\n\n\n[Yudkowsky][14:05][14:10]\nThere are complicated aspects of this story where natural selection may metaphorically be said to have “had no idea of what it was doing”, eg, after early rises in intelligence possibly produced by sexual selection on neatly chipped flint handaxes or whatever, all the cumulative brain-optimization on chimpanzees reached a point where there was suddenly a sharp selection gradient on relative intelligence at Machiavellian planning against other humans (even more so than in the chimp domain) as a subtask of inclusive genetic fitness, and so continuing to optimize on “inclusive genetic fitness” in the same old savannah, turned out to happen to be optimizing hard on the subtask and internal capability of “outwit other humans”, which optimized hard on “model other humans”, which was a capability that could be reused for modeling the chimp-that-is-this-chimp, which turned the system on itself and made it reflective, which contributed greatly to its intelligence being generalized, even though it was just grinding the same loss function on the same savannah; the system being optimized happened to go there in the course of being optimized even harder for the same thing.\n\n\nSo one can imagine asking the question: Is there a superintelligent AGI that can quickly build nanotech, which has a kind of passive safety in some if not all respects, in virtue of it solving problems like “build a nanotech system which does X” the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability?\n\n\nAnd in this regard one does note that there are many, many, many things that 
humans do which no other animal does, which you might think would contribute a lot to that animal’s fitness if there were animalistic ways to do it. They don’t make iron claws for themselves. They never did evolve a tendency to search for iron ore, and burn wood into charcoal that could be used in hardened-clay furnaces.\n\n\nNo animal plays chess, but AIs do, so we can obviously make AIs to do things that animals don’t do. On the other hand, the environment didn’t exactly present any particular species with a challenge of chess-playing either.\n\n\n\n\n\n\nEven so, though, even if some animal had evolved to play chess, I fully expect that current AI systems would be able to squish it at chess, because the AI systems are on chips that run faster than neurons and doing crisp calculations and there are things you just can’t do with noisy slow neurons. So that again is not a generally reliable argument about what AIs can do.\n\n\n\n\n\n[Ngo][14:09][14:11]\nYes, although I note that challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels).\n\n\n\n\n\n\n\nAnd so the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. 
And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant.\n\n\n\n\n\n\n[Yudkowsky][14:11]\nSo we can again ask: Is there a way to make an AI system that is *only* good at designing nanosystems, which can achieve some complicated but hopefully-specifiable real-world outcomes, without that AI also being superhuman at understanding and manipulating humans?\n\n\nAnd I roughly answer, “Perhaps, but not by default, there’s a bunch of subproblems, I don’t actually know how to do it right now, it’s not *the easiest* way to get an AGI that can build nanotech (and kill you), you’ve got to make the red-car-driver specifically not be able to drive blue cars.” Can I explain how I know that? I’m really not sure I can, in real life where I explain X0 and then the listener doesn’t generalize X0 to X and respecialize it to X1.\n\n\nIt’s like asking me how I could possibly know in 2008, before anybody had observed AlphaFold 2, that superintelligences would be able to crack the protein folding problem on the way to nanotech, which some people did question back in 2008.\n\n\nThough that was admittedly more of a slam-dunk than this was, and I could not have told you that AlphaFold 2 would become possible at a prehuman level of general intelligence in 2021 specifically, or that it would be synced in time to a couple of years after GPT-2’s level of generality at text.\n\n\n\n\n\n[Ngo][14:18]\nWhat are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?\n\n\n\n\n\n\n[Yudkowsky][14:20]\nDefinitely, “turns out it’s easier than you thought to use gradient descent’s memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still 
understanding the goal for that pivotal act you wanted to do” is among the more plausible advance-specified miracles we could get.\n\n\nBut it is not what my model says actually happens, and I am not a believer that when your model says you are going to die, you get to start believing in particular miracles. You need to hold your mind open for any miracle and a miracle you didn’t expect or think of in advance, because at this point our last hope is that in fact the future is often quite surprising – though, alas, negative surprises are a tad more frequent than positive ones, when you are trying desperately to navigate using a bad map.\n\n\n\n\n\n[Ngo][14:22]\nPerhaps one metric we could use here is something like: how much extra reward does the consequentialist nanoengineer get from starting to model humans, versus from becoming better at nanoengineering?\n\n\n\n\n\n\n[Yudkowsky][14:23]\nBut that’s *not* where humans came from. We didn’t get to nuclear power by getting a bunch of fitness from nuclear power plants. 
We got to nuclear power because if you get a bunch of fitness from chipping flint handaxes and Machiavellian scheming, as found by relatively simple and local hill-climbing, that entrains the same genes that build nuclear power plants.\n\n\n\n\n\n[Ngo][14:24]\nOnly in the specific case where you also have the constraint that you keep having to learn new goals every generation.\n\n\n\n\n\n\n[Yudkowsky][14:24]\nHuh???\n\n\n\n\n\n[Soares][14:24]\n(I think Richard’s saying, “that’s a consequence of the genetic bottleneck”)\n\n\n\n\n\n\n[Ngo][14:25]\nRight.\n\n\nHmm, but I feel like we may have covered this ground before.\n\n\nSuggestion: I have a couple of other directions I’d like to poke at, and then we could wrap up in 20 or 30 minutes?\n\n\n\n\n\n\n[Yudkowsky][14:27]\nOK\n\n\n\n> \n> What are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?\n> \n> \n> \n\n\nThough I want to mark that this question seemed potentially cruxy to me, though perhaps not for others. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn’t involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2.\n\n\nI don’t think we can do that. And I would note to the generic Other that if, to them, these both just sound like thinky things, so why can’t you just do that other thinky thing too using the thinky program, this is a case where having any specific model of why we don’t already have this nanoengineer right now would tell you there were specific different thinky things involved.\n\n\n\n\n \n\n\n### 3.4. Coherence and pivotal acts\n\n\n \n\n\n\n[Ngo][14:31]\nIn either order:\n\n\n* I’m curious how the things we’ve been talking about relate to your opinions about meta-level optimisation from the AI foom debate. (I.e. 
talking about how wrapping around so that there’s no longer any protected level of optimisation leads to dramatic change.)\n* I’m curious how your claims about the “robustness” of consequentialism (i.e. the difficulty of channeling an agent’s thinking in the directions we want it to go) relate to the reliance of humans on culture, and in particular the way in which humans raised without culture are such bad consequentialists.\n\n\nOn the first: if I were to simplify to the extreme, it seems like there are these two core intuitions that you’ve been trying to share for a long time. One is a certain type of recursive improvement, and another is a certain type of consequentialism.\n\n\n\n\n\n\n[Yudkowsky][14:32]\nThe second question didn’t make much sense in my native ontology? Humans raised without culture don’t have access to environmental constants whose presence their genes assume, so they end up as broken machines and then they’re bad consequentialists.\n\n\n\n\n\n[Ngo][14:35]\nHmm, good point. Okay, question modification: the ways in which humans reason, act, etc, vary greatly depending on which cultures they’re raised in. (I’m mostly thinking about differences over time – e.g. cavemen vs moderns.) My low-fidelity version of your view about consequentialists says that general consequentialists like humans possess a robust search process which isn’t so easily modified.\n\n\n(Sorry if this doesn’t make much sense in your ontology, I’m getting a bit tired.)\n\n\n\n\n\n\n[Yudkowsky][14:36]\nWhat is it that varies that you think I think should predict would stay more constant?\n\n\n\n\n\n[Ngo][14:37]\nGoals, styles of reasoning, deontological constraints, level of conformity.\n\n\n\n\n\n\n[Yudkowsky][14:39]\nWith regards to your first point, my first reaction was, “I just have one view of intelligence, what you see me arguing about reflects which points people have proved weirdly obstinate about. 
In 2008, Robin Hanson was being weirdly obstinate about how capabilities scaled and whether there was even any point in analyzing AIs differently from ems, so I talked about what I saw as the most slam-dunk case for there being Plenty Of Room Above Biology and for stuff going whoosh once it got above the human level.\n\n\n“It later turned out that capabilities started scaling a whole lot *without* self-improvement, which is an example of the kind of weird surprise the Future throws at you, and maybe a case where I missed something by arguing with Hanson instead of imagining how I could be wrong in either direction and not just the direction that other people wanted to argue with me about.\n\n\n“Later on, people were unable to understand why alignment is hard, and got stuck on generalizing the concept I refer to as consequentialism. A theory of why I talked about both things for related reasons would just be a theory of why people got stuck on these two points for related reasons, and I think that theory would mainly be overexplaining an accident because if Yann LeCun had been running effective altruism I would have been explaining different things instead, after the people who talked a lot to EAs got stuck on a different point.”\n\n\nReturning to your second point, humans are broken things; if it were possible to build computers while working even worse than humans, we’d be having this conversation at that level of intelligence instead.\n\n\n\n\n\n[Ngo][14:41]\n(Retracted)~~I entirely agree about humans, but it doesn’t matter that much how broken humans are when the regime of AIs that we’re talking about is the regime that’s directly above humans, and therefore only a bit less broken than humans.~~\n\n\n\n\n\n\n[Yudkowsky][14:41]\nAmong the things to bear in mind about that, is that we then get tons of weird phenomena that are specific to humans, and you may be very out of luck if you start wishing for the *same* weird phenomena in AIs. 
Yes, even if you make some sort of attempt to train it using a loss function.\n\n\nHowever, it does seem to me like as we start getting towards the Einstein level instead of the village-idiot level, even though this is usually not much of a difference, we do start to see the atmosphere start to thin already, and the turbulence start to settle down already. Von Neumann was actually a fairly reflective fellow who knew about, and indeed helped generalize, utility functions. The great achievements of von Neumann were not achieved by some very specialized hypernerd who spent all his fluid intelligence on crystallizing math and science and engineering alone, and so never developed any opinions about politics or started thinking about whether or not he had a utility function.\n\n\n\n\n\n[Ngo][14:44]\nI don’t think I’m asking for the *same* weird phenomena. But insofar as a bunch of the phenomena I’ve been talking about have seemed weird according to your account of consequentialism, then the fact that approximately-human-level-consequentialists have lots of weird things about them is a sign that the phenomena I’ve been talking about are less unlikely than you expect.\n\n\n\n\n\n\n[Yudkowsky][14:45][14:46]\nI suspect that some of the difference here is that I think you have to be *noticeably* better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly.\n\n\n\n\n\n\nI can’t think of anything you can do with somebody just barely smarter than a human, which flips the gameboard, aside of course from “go build a Friendly AI” which I *did* try to set up to just go do and which would be incredibly hard to align if we wanted an AI to do it instead (full-blown chicken-and-egg, that AI is already fully aligned).\n\n\n\n\n\n[Ngo][14:45]\nOh, interesting. 
Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you’ve been talking about?\n\n\n\n\n\n\n[Yudkowsky][14:47, moved up in log]\nExplicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together.\n\n\nIt’s the sort of path that has only one destination at its end, so there will be many ways to get there.\n\n\n(Modulo various cases where different decision theories seem reflectively consistent and so on; I want to say “you know what I mean” but maybe people don’t.)\n\n\n\n\n\n[Ngo][14:47, moved down in log]\n\n> \n> I suspect that some of the difference here is that I think you have to be *noticeably* better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly.\n> \n> \n> \n\n\nYepp, I think there’s probably some disagreements about geopolitics driving this too. E.g. 
in my earlier summary document I mentioned some possible pivotal acts:\n\n\n* Monitoring all potential AGI projects to an extent that makes it plausible for the US and China to work on a joint project without worrying that the other is privately racing.\n* Provide arguments/demonstrations/proofs related to impending existential risk that are sufficiently compelling to scare the key global decision-makers into bottlenecking progress.\n\n\nI predict that you think these would not be pivotal enough; but I don’t think digging into the geopolitical side of things is the best use of our time.\n\n\n\n\n\n\n[Yudkowsky][14:49, moved up in log]\nMonitoring all AGI projects – either not politically feasible in real life given the actual way that countries behave in history books instead of fantasy; or at politically feasible levels, does not work well enough to prevent the world from ending once the know-how proliferates. The AI isn’t doing much work here either; why not go do this now, if it’s possible? (Note: please don’t try to go do this now, it backfires badly.)\n\n\nProvide sufficiently compelling arguments = superhuman manipulation, an incredibly dangerous domain that is just about the worst domain to try to align.\n\n\n\n\n\n[Ngo][14:49, moved down in log]\n\n> \n> With regards to your first point, my first reaction was, “I just have one view of intelligence, what you see me arguing about reflects which points people have proved weirdly obstinate about. 
In 2008, Robin Hanson was being weirdly obstinate about how capabilities scaled and whether there was even any point in analyzing AIs differently from ems, so I talked about what I saw as the most slam-dunk case for there being Plenty Of Room Above Biology and for stuff going whoosh once it got above the human level.\n> \n> \n> “It later turned out that capabilities started scaling a whole lot *without* self-improvement, which is an example of the kind of weird surprise the Future throws at you, and maybe a case where I missed something by arguing with Hanson instead of imagining how I could be wrong in either direction and not just the direction that other people wanted to argue with me about.\n> \n> \n> “Later on, people were unable to understand why alignment is hard, and got stuck on generalizing the concept I refer to as consequentialism. A theory of why I talked about both things for related reasons would just be a theory of why people got stuck on these two points for related reasons, and I think that theory would mainly be overexplaining an accident because if Yann LeCun had been running effective altruism I would have been explaining different things instead, after the people who talked a lot to EAs got stuck on a different point.”\n> \n> \n> \n\n\nOn my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off – which is that they defer too much to one very high-level abstraction.\n\n\n\n\n\n\n[Yudkowsky][14:52]\n\n> \n> On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off – which is that they defer too much to one very high-level abstraction.\n> \n> \n> \n\n\nI suppose that is what it could potentially feel like from the inside to not get an abstraction. 
Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead.\n\n\n\n\n\n[Ngo][14:51][14:53]\n\n> \n> Explicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together.\n> \n> \n> \n\n\nCan you explain a little more what you mean by “have different parts of your thoughts work well together”? Is this something like the capacity for metacognition; or the global workspace; or self-control; or…?\n\n\n\n\n\n\n\nAnd I guess there’s no good way to quantify *how* important you think the explicit reflection part of the path is, compared with other parts of the path – but any rough indication of whether it’s a more or less crucial component of your view?\n\n\n\n\n\n\n[Yudkowsky][14:55]\n\n> \n> Can you explain a little more what you mean by “have different parts of your thoughts work well together”? Is this something like the capacity for metacognition; or the global workspace; or self-control; or…?\n> \n> \n> \n\n\nNo, it’s like when you don’t, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple.\n\n\nI have still not figured out the homework exercises to convey to somebody the Word of Power which is “coherence” by which they will be able to look at the water, and see “coherence” in places like a cat walking across the room without tripping over itself.\n\n\nWhen you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is… still true and still about numbers! Wow! How do so many different thoughts add up to having this property? 
Wouldn’t they wander off and end up being about tribal politics instead, like on the Internet?\n\n\nAnd one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identified by the Peano axioms; all the things being said are *true about the numbers*. Even though somebody who was missing the point would at once object that the human contained no mechanism to evaluate each of their statements against all of the numbers, so obviously no human could ever contain a mechanism like that, so obviously you can’t explain their success by saying that each of their statements was true about the same topic of the numbers, because what could possibly implement that mechanism which (in the person’s narrow imagination) is The One Way to implement that structure, which humans don’t have?\n\n\nBut though mathematical reasoning can sometimes go astray, when it works at all, it works because, in fact, even bounded creatures can sometimes manage to obey local relations that in turn add up to a global coherence where all the pieces of reasoning point in the same direction, like photons in a laser lasing, even though there’s no internal mechanism that enforces the global coherence at every point.\n\n\nTo the extent that the outer optimizer trains you out of paying five apples on Monday for something that you trade for two oranges on Tuesday and then trading two oranges for four apples, the outer optimizer is training all the little pieces of yourself to be locally coherent in a way that can be seen as an imperfect bounded shadow of a higher unbounded structure, and then the system is powerful though imperfect *because* of how the power is present in the coherence and the overlap of the pieces, *because* of how the higher perfect structure is being imperfectly shadowed. 
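[Editorial note: the apples/oranges example above can be made concrete with a short illustrative sketch, not from the transcript; the specific numbers and the `run_money_pump` function are hypothetical, using one consistent version of Yudkowsky's exchange rates.]

```python
# Illustrative sketch (hypothetical numbers): a "money pump" run against an
# agent with the cyclic, locally incoherent exchange rates Yudkowsky's
# apples/oranges example points at. The agent is willing to pay 5 apples for
# a widget, sell the widget for 2 oranges, and swap oranges for apples 1:1.

def run_money_pump(rounds: int) -> int:
    """Trade around the preference cycle and return the agent's apple count.

    Each full loop leaves the agent 3 apples poorer, even though every
    individual trade looks acceptable to it. This exploitability is what
    coherence theorems rule out for agents whose choices are representable
    by a single utility function.
    """
    apples = 100
    for _ in range(rounds):
        apples -= 5          # pay 5 apples for the widget on Monday
        oranges = 2          # sell the widget for 2 oranges on Tuesday
        apples += oranges    # trade each orange back for an apple (1:1)
    return apples

print(run_money_pump(10))  # the agent bleeds 3 apples per round trip
```

The point of the sketch is the direction of the argument, not the numbers: any set of pairwise exchange rates that fails to "point in the same direction" admits some sequence of individually acceptable trades that strictly loses resources, which is why optimization pressure against such losses pushes toward the global coherence described above.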
In this case the higher structure I’m talking about is Utility, and doing homework with coherence theorems leads you to appreciate that we only know about one higher structure for this class of problems that has a dozen mathematical spotlights pointing at it saying “look here”, even though people have occasionally looked for alternatives.\n\n\nAnd when I try to say this, people are like, “Well, I looked up a theorem, and it talked about being able to identify a unique utility function from an infinite number of choices, but if we don’t have an infinite number of choices, we can’t identify the utility function, so what relevance does this have” and this is a kind of mistake I don’t remember even coming close to making so I do not know how to make people stop doing that and maybe I can’t.\n\n\n\n\n\n[Soares][15:07]\nWe’re already pushing our luck on time, so I nominate that we wrap up (after, perhaps, a few more Richard responses if he’s got juice left.)\n\n\n\n\n\n\n[Yudkowsky][15:07]\nYeah, was thinking the same.\n\n\n\n\n\n[Soares][15:07]\nAs a proposed cliffhanger to feed into the next discussion, my take is that Richard’s comment:\n\n\n\n> \n> On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off – which is that they defer too much to one very high-level abstraction.\n> \n> \n> \n\n\nprobably contains some juicy part of the disagreement, and I’m interested in Eliezer understanding Richard’s claim to the point of being able to paraphrase it to Richard’s satisfaction.\n\n\n\n\n\n\n[Ngo][15:08]\nWrapping up here makes sense.\n\n\nI endorse the thing Nate just said.\n\n\nI also get the sense that I have a much better outline now of Eliezer’s views about consequentialism (if not the actual details and texture).\n\n\nOn a meta level, I personally tend to focus more on things like “how should we understand cognition” and not “how should we understand 
geopolitics and how it affects the level of pivotal action required”.\n\n\nIf someone else were trying to prosecute this disagreement they might say much more about the latter. I’m uncertain how useful it is for me to do so, given that my comparative advantage compared with the rest of the world (and probably Eliezer’s too) is the cognition part.\n\n\n\n\n\n\n[Yudkowsky][15:12]\nReconvene… tomorrow? Monday of next week?\n\n\n\n\n\n[Ngo][15:12]\nMonday would work better for me.\n\n\nYou okay with me summarising the discussion so far to [some people — redacted for privacy reasons]?\n\n\n\n\n\n\n[Yudkowsky][15:13]\nNate, take a minute to think of your own thoughts there?\n\n\n\n\n\n| |\n| --- |\n| [Soares: 👍 👌] |\n\n\n\n\n\n\n[Soares][15:15]\nMy take: I think it’s fine to summarize, though generally virtuous to mark summaries as summaries (rather than asserting that your summaries are Eliezer-endorsed or w/e).\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Yudkowsky][15:16]\nI think that broadly matches my take. I’m also a bit worried about biases in the text summarizer, and about whether I managed to say anything that Rob or somebody will object to pre-publication, but we ultimately intended this to be seen and I was keeping that in mind, so, yeah, go ahead and summarize.\n\n\n\n\n\n[Ngo][15:17]\nGreat, thanks\n\n\n\n\n\n\n[Yudkowsky][15:17]\nI admit to being curious as to what you thought was said that was important or new, but that’s a question that can be left open to be answered at your leisure, earlier in your day.\n\n\n\n\n\n[Ngo][15:17]\n\n> \n> I admit to being curious as to what you thought was said that was important or new, but that’s a question that can be left open to be answered at your leisure, earlier in your day.\n> \n> \n> \n\n\nYou mean, what I thought was worth summarising?\n\n\n\n\n\n\n[Yudkowsky][15:17]\nYeah.\n\n\n\n\n\n[Ngo][15:18]\nHmm, no particular opinion. 
I wasn’t going to go out of my way to do so, but since I’m chatting to [some people — redacted for privacy reasons] regularly anyway, it seemed low-cost to fill them in.\n\n\nAt your leisure, I’d be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.\n\n\n\n\n\n\n[Yudkowsky][15:19]\nI don’t know if it’s going to help, but trying it currently seems better than to go on saying nothing.\n\n\n\n\n\n[Ngo][15:20]\n(personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven’t been digging into that area as much)\n\n\n\n\n\n\n[Soares][15:21]\n\n> \n> (personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven’t been digging into that area as much)\n> \n> \n> \n\n\n(seems reasonable! note, though, that i’d be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we’re already in the area)\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n(tho ofc it is less valuable to spend conversational effort in private discussions, etc.)\n\n\n\n\n\n| |\n| --- |\n| [Ngo: 👍] |\n\n\n\n\n\n\n\n[Ngo][15:22]\n\n> \n> At your leisure, I’d be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.\n> \n> \n> \n\n\n(this question aimed at you too Nate)\n\n\nAlso, thanks Nate for the moderation! 
I found your interventions well-timed and useful.\n\n\n\n\n| |\n| --- |\n| [Soares: ❤] |\n\n\n\n\n\n\n\n[Soares][15:23]\n\n> \n> (this question aimed at you too Nate)\n> \n> \n> \n\n\n(noted, thanks, I’ll probably write something up after you’ve had the opportunity to depart for sleep.)\n\n\nOn that note, I declare us adjourned, with intent to reconvene at the same time on Monday.\n\n\nThanks again, both.\n\n\n\n\n\n\n[Ngo][15:23]\nThanks both 🙂\n\n\nOh, actually, one quick point\n\n\nWould one hour earlier suit, for Monday?\n\n\nI’ve realised that I’ll be moving to a one-hour-later time zone, and starting at 9pm is slightly suboptimal (but still possible if necessary)\n\n\n\n\n\n\n[Soares][15:24]\nOne hour earlier would work fine for me.\n\n\n\n\n\n\n[Yudkowsky][15:25]\nDoesn’t work as fine for me because I’ve been trying to avoid any food until 12:30p my time, but on that particular day I may be more caloried than usual from the previous day, and could possibly get away with it. (That whole day could also potentially fail if a minor medical procedure turns out to take more recovery than it did the last time I had it.)\n\n\n\n\n\n[Ngo][15:26]\nHmm, is this something where you’d have more information on the day? (For the calories thing)\n\n\n\n\n\n\n[Yudkowsky][15:27]\n\n> \n> (seems reasonable! note, though, that i’d be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we’re already in the area)\n> \n> \n> \n\n\nI’m a touch reluctant to have discussions that we intend to delete, because then the larger debate will make less sense once those sections are deleted. Let’s dance around things if we can.\n\n\n\n\n\n| | |\n| --- | --- |\n| [Ngo: 👍] | [Soares: 👍] |\n\n\n\nI mean, I can that day at 10am my time say how I am doing and whether I’m in shape for that day.\n\n\n\n\n\n[Ngo][15:28]\ngreat. 
and if at that point it seems net positive to postpone to 11am your time (at the cost of me being a bit less coherent later on) then feel free to say so at the time\n\n\non that note, I’m off\n\n\n\n\n\n\n[Yudkowsky][15:29]\nGood night, heroic debater!\n\n\n\n\n\n[Soares][16:11]\n\n> \n> At your leisure, I’d be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.\n> \n> \n> \n\n\nThe discussions so far are meeting my goals quite well so far! (Slightly better than my expectations, hooray.) Some quick rough notes:\n\n\n* I have been enjoying EY explicating his models around consequentialism.\n\t+ The objections Richard has been making are ones I think have been floating around for some time, and I’m quite happy to see explicit discussion on it.\n\t+ Also, I’ve been appreciating the conversational virtue with which the two of you have been exploring it. (Assumption of good intent, charity, curiosity, etc.)\n* I’m excited to dig into Richard’s sense that EY was off about recursive self improvement, and is now off about consequentialism, in a similar way.\n\t+ This also seems to me like a critique that’s been floating around for some time, and I’m looking forward to getting more clarity on it.\n* I’m a bit torn between driving towards clarity on the latter point, and shoring up some of the progress on the former point.\n\t+ One artifact I’d really enjoy having is some sort of “before and after” take, from Richard, contrasting his model of EY’s views before, to his model now.\n\t+ I also have a vague sense that there are some points Eliezer was trying to make, that didn’t quite feel like they were driven home; and dually, some pushback by Richard that didn’t feel quite frontally answered.\n\t\t- One thing I may do over the next few days is make a list of those places, and see if I can do any distilling on my own. 
(No promises, though.)\n\t\t- If that goes well, I might enjoy some side-channel back-and-forth with Richard about it, eg during some more convenient-for-Richard hour (or, eg, as a thing to do on Monday if EY’s not in commission at 10a pacific.)\n\n\n\n\n\n\n[Ngo][5:40]  (next day, Sep. 9)\n\n> \n> The discussions so far are […]\n> \n> \n> \n\n\nWhat do you mean by “latter point” and “former point”? (In your 6th bullet point)\n\n\n\n\n\n\n[Soares][7:09]  (next day, Sep. 9)\n\n> \n> What do you mean by “latter point” and “former point”? (In your 6th bullet point)\n> \n> \n> \n\n\nformer = shoring up the consequentialism stuff, latter = digging into your critique re: recursive self improvement etc. (The nesting of the bullets was supposed to help make that clear, but didn’t come out well in this format, oops.)\n\n\n\n\n\n \n\n\n4. Follow-ups\n-------------\n\n\n \n\n\n### 4.1. Richard Ngo’s summary\n\n\n \n\n\n\n[Ngo]  (Sep. 10 Google Doc)\n*2nd discussion*\n\n\n(Mostly summaries not quotations~~; also hasn’t yet been evaluated by Eliezer~~)\n\n\nEliezer, summarized by Richard: “~~The~~ A core concept which people have trouble grasping is consequentialism. People try to reason about *how* AIs will solve problems, and ways in which they might or might not be dangerous. But they don’t realise that the ability to solve a wide range of difficult problems implies that an agent must be doing a powerful search over possible solutions, which is ~~the~~ a core skill required to take actions which greatly affect the world. Making this type of AI safe is like trying to build an AI that drives red cars very well, but can’t drive blue cars – there’s no way you get this by default, because the skills involved are so similar. And because the search process ~~is so general~~ is by default so general, ~~it’ll be very hard to~~ I don’t currently see how to constrain it into any particular region.”\n\n\n\n\n\n\n[Yudkowsky][10:48]  (Sep. 
10 comment)\n\n> \n> The\n> \n> \n> \n\n\n*A* concept, which some people have had trouble grasping.  There seems to be an endless list.  I didn’t have to spend much time contemplating consequentialism to derive the consequences.  I didn’t spend a lot of time talking about it until people started arguing.\n\n\n\n\n\n[Yudkowsky][10:50]  (Sep. 10 comment)\n\n> \n> the\n> \n> \n> \n\n\na\n\n\n\n\n\n[Yudkowsky][10:52]  (Sep. 10 comment)\n\n> \n> [the search process] is [so general]\n> \n> \n> \n\n\n“is by default”.  The reason I keep emphasizing that things are only true by default is that the work of surviving may look like doing hard nondefault things.  I don’t take fatalistic “will happen” stances, I assess difficulties of getting nondefault results.\n\n\n\n\n\n[Yudkowsky][10:52]  (Sep. 10 comment)\n\n> \n> it’ll be very hard to\n> \n> \n> \n\n\n“I don’t currently see how to”\n\n\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nEliezer, summarized by Richard (continued): “In biological organisms, evolution is ~~one source~~ the ultimate source of consequentialism. A ~~second~~ secondary outcome of evolution is reinforcement learning. For an animal like a cat, upon catching a mouse (or failing to do so) many parts of its brain get slightly updated, in a loop that makes it more likely to catch the mouse next time. (Note, however, that this process isn’t powerful enough to make the cat a pure consequentialist – rather, it has many individual traits that, when we view them from this lens, point in the same direction.) ~~A third thing that makes humans in particular consequentialist is planning,~~ Another outcome of evolution, which helps make humans in particular more consequentialist, is planning – especially when we’re aware of concepts like utility functions.”\n\n\n\n\n\n\n[Yudkowsky][10:53]  (Sep. 10 comment)\n\n> \n> one\n> \n> \n> \n\n\nthe ultimate\n\n\n\n\n\n[Yudkowsky][10:53]  (Sep. 
10 comment)\n\n> \n> second\n> \n> \n> \n\n\nsecondary outcome of evolution\n\n\n\n\n\n[Yudkowsky][10:55]  (Sep. 10 comment)\n\n> \n> especially when we’re aware of concepts like utility functions\n> \n> \n> \n\n\nVery slight effect on human effectiveness in almost all cases because humans have very poor reflectivity.\n\n\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard: “Consider an AI that, given a hypothetical scenario, tells us what the best plan to achieve a certain goal in that scenario is. Of course it needs to do consequentialist reasoning to figure out how to achieve the goal. But that’s different from an AI which chooses what to say as a means of achieving its goals. I’d argue that the former is doing consequentialist reasoning without itself being a consequentialist, while the latter is actually a consequentialist. Or more succinctly: consequentialism = problem-solving skills + using those skills to choose actions which achieve goals.”\n\n\nEliezer, summarized by Richard: “The former AI might be slightly safer than the latter if you could build it, but I think people are likely to dramatically overestimate how big the effect is. The difference could just be one line of code: if we give the former AI our current scenario as its input, then it becomes the latter.  For purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it’s a strong enough search, rather than the danger seeming to come from details of the planning process. One particularly helpful thought experiment is to think of advanced AI as an ‘[outcome pump](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes)‘ which selects from futures in which a certain outcome occurred, and takes whatever action leads to them.”\n\n\n\n\n\n\n[Yudkowsky][10:59]  (Sep. 
10 comment)\n\n> \n> particularly helpful\n> \n> \n> \n\n\n“attempted explanatory”.  I don’t think most readers got it.\n\n\nI’m a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing.  It seems to rhyme with a deeper failure of many EAs to pass the MIRI [ITT](https://www.econlib.org/archives/2011/06/the_ideological.html).\n\n\nTo be a bit blunt and impolite in hopes that long-languishing social processes ever get anywhere, two obvious uncharitable explanations for why some folks may systematically misconstrue MIRI/Eliezer as believing much more than in reality that various concepts an argument wanders over are Big Ideas to us, when some conversation forces us to go to that place:\n\n\n(A)  It paints a comfortably unflattering picture of MIRI-the-Other as weirdly obsessed with these concepts that seem not so persuasive, or more generally paints the Other as a bunch of weirdos who stumbled across some concept like “consequentialism” and got obsessed with it.  In general, to depict the Other as thinking a great deal of some idea (or explanatory thought experiment) is to tie and stake their status to the listener’s view of how much status that idea deserves.  So if you say that the Other thinks a great deal of some idea that isn’t obviously high-status, that lowers the Other’s status, which can be a comfortable thing to do.\n\n\n(cont.)\n\n\n(B) It paints a more comfortably self-flattering picture of a continuing or persistent disagreement, as a disagreement with somebody who thinks that some random concept is much higher-status than it really is, in which case there isn’t more to be done or understood except to duly politely let the other person try to persuade you the concept deserves its high status. 
As opposed to, “huh, maybe there is a noncentral point that the other person sees themselves as being stopped on and forced to explain to me”, which is a much less self-flattering viewpoint on why the conversation is staying within a place.  And correspondingly more of a viewpoint that somebody else is likely to have of us, because it is a comfortable view to them, than a viewpoint that it is comfortable to us to imagine them having.\n\n\nTaking the viewpoint that somebody else is getting hung up on a relatively noncentral point can also be a flattering self-portrait to somebody who believes that, of course.  It doesn’t mean they’re right.  But it does mean that you should be aware of how the Other’s story, told from the Other’s viewpoint, is much more liable to be something that the Other finds sensible and perhaps comfortable, even if it implies an unflattering (and untrue-seeming and perhaps untrue) view of yourself, than something that makes the Other seem weird and silly and which it is easy and congruent for you yourself to imagine the Other thinking.\n\n\n\n\n\n[Ngo][11:18]  (Sep. 12 comment)\n\n> \n> I’m a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing.\n> \n> \n> \n\n\nIn this case, I emphasised the outcome pump thought experiment because you said that the time-travelling scenario was a key moment for your understanding of optimisation, and the outcome pump seemed to be similar enough and easier to convey in the summary, since you’d already written about it.\n\n\nI’m also emphasising consequentialism because it seemed like the core idea which kept coming up in our first debate, under the heading of “deep problem-solving patterns”. Although I take your earlier point that you tend to emphasise things that your interlocutor is more skeptical about, not necessarily the things which are most central to your view. 
But if consequentialism isn’t in fact a very central concept for you, I’d be interested to hear what role it plays.\n\n\n\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard: “There’s a component of ‘finding a plan which achieves a certain outcome’ which involves actually solving the object-level problem of how someone who is given the plan can achieve the outcome. And there’s another component which is figuring out how to manipulate that person into doing what you want. To me it seems like Eliezer’s argument is that there’s no training regime which leads an AI to spend 99% of its time thinking about the former, and 1% thinking about the latter.”\n\n\n\n\n\n\n[Yudkowsky][11:20]  (Sep. 10 comment)\n\n> \n> no training regime\n> \n> \n> \n\n\n…that the training regimes we come up with first, in the 3 months or 2 years we have before somebody else destroys the world, will not have this property.\n\n\nI don’t have any particularly complicated or amazingly insightful theories of why I keep getting depicted as a fatalist; but my world is full of counterfactual functions, not constants.  And I am always aware that if we had access to a real Textbook from the Future explaining all of the methods that are actually robust in real life – the equivalent of telling us in advance about all the ReLUs that in real life were only invented and understood a few decades after sigmoids – we could go right ahead and build a superintelligence that thinks 2 + 2 = 5.\n\n\nAll of my assumptions about “I don’t see how to do X” are always labeled as ignorance on my part and a default because we won’t have enough time to actually figure out how to do X.  I am constantly maintaining awareness of this because being **wrong** about it being difficult is a major place where **hope** potentially comes from, if there’s some idea like ReLUs that robustly vanquishes the difficulty, which I just didn’t think of.  
Which does not, alas, mean that I am wrong about any particular thing, nor that the infinite source of optimistic ideas that is the wider field of “AI alignment” is going to produce a good idea from the same process that generates all the previous naive optimism through not seeing where the original difficulty comes from or what other difficulties surround obvious naive attempts to solve it.\n\n\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard (continued): “While this may be true in the limit of increasing intelligence, the most relevant systems are the earliest ones that are above human level. But humans deviate from the consequentialist abstraction you’re talking about in all sorts of ways – for example, being raised in different cultures can make people much more or less consequentialist. So it seems plausible that early AGIs can be superhuman while also deviating strongly from this abstraction – not necessarily in the same ways as humans, but in ways that we push them towards during training.”\n\n\nEliezer, summarized by Richard: “Even at the Einstein or von Neumann level these types of deviations start to subside. And the sort of pivotal acts which might realistically work require skills *significantly* above human level. I think even 1% of the cognition of an AI that can assemble advanced nanotech, thinking about how to kill humans, would doom us. Your other suggestions for pivotal acts (surveillance to restrict AGI proliferation; persuading world leaders to restrict AI development) are not politically feasible in real life, to the level required to prevent the world from ending; or else require alignment in the very dangerous domain of superhuman manipulation.”\n\n\nRichard, summarized by Richard: “I think we probably also have significant disagreements about geopolitics which affect which acts we expect to be pivotal, but it seems like our comparative advantage is in discussing cognition, so let’s focus on that. 
We can build systems that outperform humans at quite a few tasks by now, without them needing search abilities that are general enough to even try to take over the world. Putting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers, and say that we’ll be able to align ones that significantly outperform us on *these* tasks before they take over the world, but not on *those* tasks?”\n\n\nEliezer, summarized by Richard: “One aspect there is domain generality which in turn is achieved through novel domain learning. One can imagine asking the question: is there a superintelligent AGI that can quickly build nanotech the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability? But there are many, many, many things that humans do which no other animal does, which you might think would contribute a lot to that animal’s fitness if there were animalistic ways to do it – e.g. mining and smelting iron. (Although comparisons to animals are not generally reliable arguments about what AIs can do – e.g. chess is much easier for chips than neurons.) So my answer is ‘Perhaps, but not by default, there’s a bunch of subproblems, I don’t actually know how to do it right now, it’s not the easiest way to get an AGI that can build nanotech.’ ~~Can I explain how I know that? I’m really not sure I can.~~“\n\n\n\n\n\n\n[Yudkowsky][11:26]  (Sep. 10 comment)\n\n> \n> Can I explain how I know that? I’m really not sure I can.\n> \n> \n> \n\n\nIn original text, this sentence was followed by a long attempt to explain anyways; if deleting that, which is plausibly the correct choice, this lead-in sentence should also be deleted, as otherwise it paints a false picture of how much I would try to explain anyways.\n\n\n\n\n\n[Ngo][11:15]  (Sep. 
12 comment)\nMakes sense; deleted.\n\n\n\n\n\n\n[Ngo]  (Sep. 10 Google Doc)\nRichard, summarized by Richard: “Challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels). So the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant.\n\n\n“On nanotech: what are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?”\n\n\nEliezer, summarized by Richard: “This question seemed potentially cruxy to me. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn’t involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2. I don’t think we can do that. But it is among the more plausible advance-specified miracles we could get. At this point our last hope is that in fact the future is often quite surprising.”\n\n\nRichard, summarized by Richard: “It seems to me that you’re making the same mistake here as you did with regards to recursive self-improvement in the AI foom debate – namely, putting too much trust in one big abstraction.”\n\n\nEliezer, summarized by Richard: “I suppose that is what it could potentially feel like from the inside to not get an abstraction.  Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead.”\n\n\n\n\n\n \n\n\n### 4.2. Nate Soares’ summary\n\n\n \n\n\n\n[Soares]  (Sep. 12 Google Doc)\n*Consequentialism*\n\n\nOk, here’s a handful of notes. 
I apologize for not getting them out until midday Sunday. My main intent here is to do some shoring up of the ground we’ve covered. I’m hoping for skims and maybe some light comment back-and-forth as seems appropriate (perhaps similar to Richard’s summary), but don’t think we should derail the main thread over it. If time is tight, I would not be offended for these notes to get little-to-no interaction.\n\n\n—\n\n\nMy sense is that there’s a few points Eliezer was trying to transmit about consequentialism, that I’m not convinced have been received. I’m going to take a whack at it. I may well be wrong, both about whether Eliezer is in fact attempting to transmit these, and about whether Richard received them; I’m interested in both protests from Eliezer and paraphrases from Richard.\n\n\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\n1. “The consequentialism is in the plan, not the cognition”.\n\n\nI think Richard and Eliezer are coming at the concept “consequentialism” from very different angles, as evidenced eg by Richard saying (Nate’s crappy paraphrase:) “where do you think the consequentialism is in a cat?” and Eliezer responding (Nate’s crappy paraphrase:) “the cause of the apparent consequentialism of the cat’s behavior is distributed between its brain and its evolutionary history”.\n\n\nIn particular, I think there’s an argument here that goes something like:\n\n\n* Observe that, from our perspective, saving the world seems quite tricky, and seems likely to involve long sequences of clever actions that force the course of history into a narrow band (eg, because if we saw short sequences of dumb actions, we could just get started).\n* Suppose we were presented with a plan that allegedly describes a long sequence of clever actions that would, if executed, force the course of history into some narrow band.\n\t+ For concreteness, suppose it is a plan that allegedly funnels history into the band where we have wealth and acclaim.\n* One plausible happenstance is that the 
plan is not in fact clever, and would not in fact have a forcing effect on history.\n\t+ For example, perhaps the plan describes founding and managing some silicon valley startup, that would not work in practice.\n* Conditional on the plan having the history-funnelling property, there’s a sense in which it’s scary regardless of its source.\n\t+ For instance, perhaps the plan describes founding and managing some silicon valley startup, and will succeed virtually every time it’s executed, by dint of having very generic descriptions of things like how to identify and respond to competition, including descriptions of methods for superhumanly-good analyses of how to psychoanalyze the competition and put pressure on their weakpoints.\n\t+ In particular, note that one need not believe the plan was generated by some “agent-like” cognitive system that, in a self-contained way, made use of reasoning we’d characterize as “possessing objectives” and “pursuing them in the real world”.\n\t+ More specifically, the scariness is a property of the plan itself. For instance, the fact that this plan accrues wealth and acclaim to the executor, in a wide variety of situations, regardless of what obstacles arise, implies that the plan contains course-correcting mechanisms that keep the plan on-target.\n\t+ In other words, plans that *manage to actually funnel history* are (the argument goes) liable to have a wide variety of course-correction mechanisms that keep the plan oriented towards *some* target. And while this course-correcting property tends to be a property of history-funneling plans, the *choice of target* is of course free, hence the worry.\n\n\n(Of course, in practice we perhaps shouldn’t be visualizing a single Plan handed to us from an AI or a time machine or whatever, but should instead imagine a system that is reacting to contingencies and replanning in realtime. 
At the least, this task is easier, as one can adjust only for the contingencies that are beginning to arise, rather than needing to predict them all in advance and/or describe general contingency-handling mechanisms. But, and feel free to take a moment to predict my response before reading the next sentence, “run this AI that replans autonomously on-the-fly” and “run this AI+human loop that replans+reevaluates on the fly”, are still in this sense “plans”, that still likely have the property of Eliezer!consequentialism, insofar as they work.)\n\n\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\nThere’s a part of this argument I have not yet driven home. Factoring it out into a separate bullet:\n\n\n2. “If a plan is good enough to work, it’s pretty consequentialist in practice”.\n\n\nIn attempts to collect and distill a handful of scattered arguments of Eliezer’s:\n\n\nIf you ask GPT-3 to generate you a plan for saving the world, it will not manage to generate one that is very detailed. And if you tortured a big language model into giving you a detailed plan for saving the world, the resulting plan would not work. In particular, it would be full of errors like insensitivity to circumstance, suggesting impossible actions, and suggesting actions that run entirely at cross-purposes to one another.\n\n\nA plan that is sensitive to circumstance, and that describes actions that synergize rather than conflict — like, in Eliezer’s analogy, photons in a laser — is much better able to funnel history into a narrow band.\n\n\nBut, on Eliezer’s view as I understand it, this “the plan is not constantly tripping over its own toes” property, goes hand-in-hand with what he calls “consequentialism”. As a particularly stark and formal instance of the connection, observe that one way a plan can trip over its own toes is if it says “then trade 5 oranges for 2 apples, then trade 2 apples for 4 oranges”. 
This is clearly an instance of the plan failing to “lase” — of some orange-needing part of the plan working at cross-purposes to some apple-needing part of the plan, or something like that. And this is also a case where it’s easy to see how if a plan *is* “lasing” with respect to apples and oranges, then it is behaving as if governed by some coherent preference.\n\n\nAnd the point as I understand it isn’t “all toe-tripping looks superficially like an inconsistent preference”, but rather “insofar as a plan *does* manage to chain a bunch of synergistic actions together, it manages to do so precisely insofar as it is Eliezer!consequentialist”.\n\n\ncf the analogy to [information theory](https://www.lesswrong.com/s/oFePMp9rKftEeZDDr/p/QkX2bAkwG2EpGvNug), where if you’re staring at a maze and you’re trying to build an accurate representation of that maze in your own head, you will succeed precisely insofar as your process is Bayesian / information-theoretic. And, like, this is supposed to feel like a fairly tautological claim: you (almost certainly) can’t get the image of a maze in your head to match the maze in the world by visualizing a maze at random, you have to add visualized-walls using some process that’s correlated with the presence of actual walls. Your maze-visualizing process will work precisely insofar as you have access to, & correctly make use of, observations that correlate with the presence of actual walls. You might also visualize extra walls in locations where it’s politically expedient to believe that there’s a wall, and you might also avoid visualizing walls in a bunch of distant regions of the maze because it’s dark and you haven’t got all day, but the resulting visualization in your head is accurate precisely *insofar* as you’re managing to act kinda like a Bayesian.\n\n\nSimilarly (the analogy goes), a plan works-in-concert and avoids-stepping-on-its-own-toes precisely insofar as it is consequentialist. 
These are two sides of the same coin, two ways of seeing the same thing.\n\n\nAnd, I’m not so much attempting to *argue* the point here, as to make sure that the *shape of the argument* (as I understand it) has been understood by Richard. In particular, the *shape of the argument* I see Eliezer as making is that “clumsy” plans don’t work, and “laser-like plans” work insofar as they are managing to act kinda like a consequentialist.\n\n\nRephrasing again: we have a wide variety of mathematical theorems all spotlighting, from different angles, the fact that a plan lacking in clumsiness, is possessing of coherence.\n\n\n(“And”, my model of Eliezer is quick to note, “this ofc does not mean that all sufficiently intelligent minds must generate very-coherent plans. If you really knew what you were doing, you could design a mind that emits plans that always “trip over themselves” along one particular axis, just as with sufficient mastery you could build a mind that believes 2+2=5 (for some reasonable cashing-out of that claim). But you don’t get this for free — and there’s a sort of “attractor” here, when building cognitive systems, where just as generic training will tend to cause it to have true beliefs, so will generic training tend to cause its plans to lase.”)\n\n\n(And ofc much of the worry is that all the mathematical theorems that suggest “this plan manages to work precisely insofar as it’s lasing in some direction”, say nothing about which direction it must lase. Hence, if you show me a plan clever enough to force history into some narrow band, I can be fairly confident it’s doing a bunch of lasing, but not at all confident which direction it’s lasing in.)\n\n\n\n\n\n\n[Soares]  (Sep. 
12 Google Doc)\nOne of my guesses is that Richard does in fact understand this argument (though I personally would benefit from a paraphrase, to test this hypothesis!), and perhaps even buys it, but that Richard gets off the train at a following step, namely that we *need* plans that “lase”, because ones that don’t aren’t strong enough to save us. (Where in particular, I suspect most of the disagreement is in how far one can get with plans that are more like language-model outputs and less like lasers, rather than in the question of which pivotal acts would put an end to the acute risk period)\n\n\nBut setting that aside for a moment, I want to use the above terminology to restate another point I saw Eliezer as attempting to make: one big trouble with alignment, in the case where we need our plans to be like lasers, is that on the one hand we need our plans to be like lasers, but on the other hand we want them to *fail* to be like lasers along certain specific dimensions.\n\n\nFor instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (…the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. 
Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.\n\n\nBut the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.\n\n\nAs such, on the Eliezer view as I understand it, we can see ourselves as asking for a very unnatural sort of object: a path-through-the-future that is robust enough to funnel history into a narrow band in a very wide array of circumstances, but somehow insensitive to specific breeds of human-initiated attempts to switch which narrow band it’s pointed towards.\n\n\nOk. I meandered into trying to re-articulate the point over and over until I had a version distilled enough for my own satisfaction (which is much like arguing the point), apologies for the repetition.\n\n\nI don’t think debating the claim is the right move at the moment (though I’m happy to hear rejoinders!). Things I would like, though, are: Eliezer saying whether the above is on-track from his perspective (and if not, then poking a few holes); and Richard attempting to paraphrase the above, such that I believe the arguments themselves have been communicated (saying nothing about whether Richard also buys them).\n\n\n—\n\n\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\nMy Richard-model’s stance on the above points is something like “This all seems kinda plausible, but where Eliezer reads it as arguing that we had better figure out how to handle lasers, I read it as an argument that we’d better save the world without needing to resort to lasers. 
Perhaps if I thought the world could not be saved except by lasers, I would share many of your concerns, but I do not believe that, and in particular it looks to me like much of the recent progress in the field of AI — from AlphaGo to GPT to AlphaFold — is evidence in favor of the proposition that we’ll be able to save the world without lasers.”\n\n\nAnd I recall actual-Eliezer saying the following (more-or-less in response, iiuc, though readers note that I might be misunderstanding and this might be out-of-context):\n\n\n\n> \n> Definitely, “turns out it’s easier than you thought to use gradient descent’s memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still understanding the goal for that pivotal act you wanted to do” is among the more plausible advance-specified miracles we could get. \n> \n> \n> \n\n\nOn my view, and I think on Eliezer’s, the “zillions of shallow patterns”-style AI that we see today, is not going to be sufficient to save the world (nor destroy it). There’s a bunch of reasons that GPT and AlphaZero aren’t destroying the world yet, and one of them is this “shallowness” property. And, yes, maybe we’ll be wrong! I myself have been surprised by how far the shallow pattern memorization has gone (and, for instance, was surprised by GPT), and acknowledge that perhaps I will continue to be surprised. But I continue to predict that the shallow stuff won’t be enough.\n\n\nI have the sense that lots of folk in the community are, one way or another, saying “Why not consider the problems of aligning systems that memorize zillions of shallow patterns?”. 
And my answer is, “I still don’t expect those sorts of machines to either kill or save us, I’m still expecting that there’s a phase shift that won’t happen until AI systems start to be able to make plans that are sufficiently deep and laserlike to do scary stuff, and I’m still expecting that the real alignment challenges are in that regime.”\n\n\nAnd this seems to me close to the heart of the disagreement: some people (like me!) have an intuition that it’s quite unlikely that figuring out how to get sufficient work out of shallow-memorizers is enough to save us, and I suspect others (perhaps even Richard!) have the sense that the aforementioned “phase shift” is the unlikely scenario, and that I’m focusing on a weird and unlucky corner of the space. (I’m curious whether you endorse this, Richard, or some nearby correction of it.)\n\n\nIn particular, Richard, I am curious whether you endorse something like the following:\n\n\n* I’m focusing ~all my efforts on the shallow-memorizers case, because I think shallow-memorizer-alignment will by and large be sufficient, and even if it is not then I expect it’s a good way to prepare ourselves for whatever we’ll turn out to need in practice. In particular I don’t put much stock in the idea that there’s a predictable phase-change that forces us to deal with laser-like planners, nor that predictable problems in that domain give large present reason to worry.\n\n\n(I suspect not, at least not in precisely this form, and I’m eager for corrections.)\n\n\nI suspect something in this vicinity constitutes a crux of the disagreement, and I would be thrilled if we could get it distilled down to something as concise as the above. 
And, for the record, I personally endorse the following counter to the above:\n\n\n* I am focusing ~none of my efforts on shallow-memorizer-alignment, as I expect it to be far from sufficient, as I do not expect a singularity until we have more laser-like systems, and I think that the laserlike-planning regime has a host of predictable alignment difficulties that Earth does not seem at all prepared to face (unlike, it seems to me, the shallow-memorizer alignment difficulties), and as such I have large and present worries.\n\n\n—\n\n\n\n\n\n\n[Soares]  (Sep. 12 Google Doc)\nOk, and now a few less substantial points:\n\n\nThere’s a point Richard made here:\n\n\n\n> \n> Oh, interesting. Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you’ve been talking about?\n> \n> \n> \n\n\nthat I suspect constituted a miscommunication, especially given that the following sentence appeared in Richard’s summary:\n\n\n\n> \n> A third thing that makes humans in particular consequentialist is planning, especially when we’re aware of concepts like utility functions.\n> \n> \n> \n\n\nIn particular, I suspect Richard’s model of Eliezer’s model places (or placed, before Richard read Eliezer’s comments on Richard’s summary) some particular emphasis on systems reflecting and thinking about their own strategies, as a method by which the consequentialism and/or effectiveness gets in. I suspect this is a misunderstanding, and am happy to say more on my model upon request, but am hopeful that the points I made a few pages above have cleared this up.\n\n\nFinally, I observe that there are a few places where Eliezer keeps beeping when Richard attempts to summarize him, and I suspect it would be useful to do the dorky thing of Richard very explicitly naming Eliezer’s beeps as he understands them, for purposes of getting common knowledge of understanding. 
For instance, things I think it might be useful for Richard to say verbatim (assuming he believes them, which I suspect, and subject to Eliezer-corrections, b/c maybe I’m saying things that induce separate beeps):\n\n\n1. Eliezer doesn’t believe it’s impossible to build AIs that have most any given property, including most any given safety property, including most any desired “non-consequentialist” or “deferential” property you might desire. Rather, Eliezer believes that many desirable safety properties don’t happen by default, and require mastery of minds that likely takes a worrying amount of time to acquire.\n\n\n2. The points about consequentialism are not particularly central in Eliezer’s view; they seem to him more like obvious background facts; the reason conversation has lingered here in the EA-sphere is that this is a point that many folk in the local community disagree on.\n\n\nFor the record, I think it might also be worth Eliezer acknowledging that Richard probably understands point (1), and that glossing “you don’t get it for free by default and we aren’t on course to have the time to get it” as “you can’t” is quite reasonable when summarizing. (And it might be worth Richard counter-acknowledging that the distinction is actually quite important once you buy the surrounding arguments, as it constitutes the difference between describing the current playing field and lying down to die.) I don’t think any of these are high-priority, but they might be useful if easy 🙂\n\n\n—\n\n\nFinally, stating the obvious-to-me, none of this is intended as criticism of either party, and all discussing parties have exhibited significant virtue-according-to-Nate throughout this process.\n\n\n\n\n\n\n[Yudkowsky][21:27]  (Sep. 
12)\nFrom Nate’s notes:\n\n\n\n> \n> For instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (…the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.\n> \n> \n> But the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.\n> \n> \n> \n\n\n–> GOOD ANALOGY.\n\n\n…or at least it sure conveys to *me* why corrigibility is anticonvergent / anticoherent / actually *moderately strongly contrary to* and not just *an orthogonal property of* a powerful-plan generator.\n\n\nBut then, I already know why that’s true and how it generalized up to resisting our various attempts to solve small pieces of more important aspects of it – it’s not just true by weak default, it’s true by a stronger default where a roomful of people at a workshop spend several days trying to come up with increasingly complicated ways to describe a system that will let you shut it down (but not steer you through time *into* shutting it down), and all of those suggested ways get shot down. (And yes, people outside MIRI now and then publish papers saying they totally just solved this problem, but all of those “solutions” are things we considered and dismissed as trivially failing to scale to powerful agents – they didn’t understand what we considered to be the first-order problems in the first place – rather than these being evidence that MIRI just didn’t have smart-enough people at the workshop.)\n\n\n\n\n\n[Yudkowsky][18:56]  (Nov. 
5 follow-up comment)\nEg, “Well, we took a system that only learned from reinforcement on situations it had previously been in, and couldn’t use imagination to plan for things it had never seen, and then we found that if we didn’t update it on shut-down situations it wasn’t reinforced to avoid shutdowns!”\n\n\n\n\n \n\n\n\nThe post [Ngo and Yudkowsky on alignment difficulty](https://intelligence.org/2021/11/15/ngo-and-yudkowsky-on-alignment-difficulty/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-16T02:00:22Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "bafc947ce764792e5da6766a88fe69a1", "title": "Discussion with Eliezer Yudkowsky on AGI interventions", "url": "https://intelligence.org/2021/11/11/discussion-with-eliezer-yudkowsky-on-agi-interventions/", "source": "miri", "source_type": "blog", "text": "The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as “Anonymous”.\n\n\nI think this Nate Soares quote (excerpted from Nate’s [response to a report by Joe Carlsmith](https://www.lesswrong.com/posts/cCMihiwtZx7kdcKgt/comments-on-carlsmith-s-is-power-seeking-ai-an-existential)) is a useful context-setting preface regarding timelines, which weren’t discussed as much in the transcript:\n\n\n \n\n\n\n> […] My odds [of AGI by the year 2070] are around 85%[…]\n> \n> \n> I can list a handful of things that drive my probability of AGI-in-the-next-49-years above 80%:\n> \n> \n> 1. 50 years ago was 1970. The gap between AI systems then and AI systems now seems pretty plausibly greater than the remaining gap, even before accounting for the recent dramatic increase in the rate of progress, and potential future increases in rate-of-progress as it starts to feel within-grasp.\n> \n> \n> 2. 
I observe that, 15 years ago, everyone was saying AGI is far off because of what it couldn’t do — basic image recognition, Go, StarCraft, Winograd schemas, programmer assistance. But basically all that has fallen. The gap between us and AGI is made mostly of intangibles. (Computer Programming That Is Actually Good? Theorem proving? Sure, but on my model, “good” versions of those are a hair’s breadth away from full AGI already. And the fact that I need to clarify that “bad” versions don’t count, speaks to my point that the only barriers people can name right now are intangibles.) That’s a very uncomfortable place to be!\n> \n> \n> 3. When I look at the history of invention, and the various anecdotes about the Wright brothers and Enrico Fermi, I get an impression that, when a technology is pretty close, the world looks a lot like how our world looks.\n> \n> \n> * Of course, the trick is that when a technology is a little far, the world might also look pretty similar!\n> * Though when a technology is **very** far, the world **does** look different — it looks like experts pointing to specific technical hurdles. We exited that regime a few years ago.\n> \n> \n> 4. Summarizing the above two points, I suspect that I’m in more-or-less the “penultimate epistemic state” on AGI timelines: I don’t know of a project that seems like they’re right on the brink; that would put me in the “final epistemic state” of thinking AGI is imminent. But I’m in the second-to-last epistemic state, where I wouldn’t feel all that shocked to learn that some group has reached the brink. Maybe I won’t get that call for 10 years! Or 20! But it could also be 2, and I wouldn’t get to be indignant with reality. I wouldn’t get to say “but all the following things should have happened first, before I made that observation”. I have made those observations.\n> \n> \n> 5. It seems to me that the Cotra-style compute-based model provides pretty conservative estimates. 
For one thing, I don’t expect to need human-level compute to get human-level intelligence, and for another I think there’s a decent chance that insight and innovation have a big role to play, especially on 50 year timescales.\n> \n> \n> 6. There has been a lot of AI progress recently. When I tried to adjust my beliefs so that I was **positively** surprised by AI progress just about as often as I was **negatively** surprised by AI progress, I ended up expecting a bunch of rapid progress. […]\n> \n> \n\n\n \n\n\n**Further preface by Eliezer:**\n\n\nIn some sections here, I sound gloomy about the probability that coordination between AGI groups succeeds in saving the world.  Andrew Critch reminds me to point out that gloominess like this can be a self-fulfilling prophecy – if people think successful coordination is impossible, they won’t try to coordinate.  I therefore remark in retrospective advance that it seems to me like at least some of the top AGI people, say at Deepmind and Anthropic, are the sorts who I think would rather coordinate than destroy the world; my gloominess is about what happens when the technology has propagated further than that.  But even then, anybody who would *rather* coordinate and *not* destroy the world shouldn’t rule out hooking up with Demis, or whoever else is in front if that person also seems to prefer not to completely destroy the world.  (Don’t be too picky here.)  Even if the technology proliferates and the world ends a year later when other non-coordinating parties jump in, it’s still better to take the route where the world ends one year later instead of immediately.  Maybe the horse will sing.\n\n\n\n\n\n---\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nHi and welcome. Points to keep in mind:\n\n\n– I’m doing this because I would like to learn whichever *actual* thoughts this target group may have, and perhaps respond to those; that’s part of the point of anonymity. 
If you speak an anonymous thought, please have that be your actual thought that you are thinking yourself, not something where you’re thinking “well, somebody else might think that…” or “I wonder what Eliezer’s response would be to…”\n\n\n– Eliezer’s responses are uncloaked by default. Everyone else’s responses are anonymous (not pseudonymous) and neither I nor MIRI will know which potential invitee sent them.\n\n\n– Please do not reshare or pass on the link you used to get here.\n\n\n– I do intend that parts of this conversation may be saved and published at MIRI’s discretion, though not with any mention of who the anonymous speakers could possibly have been.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\n(Thank you to Ben Weinstein-Raun for building [chathamroom.com](https://www.chathamroom.com/), and for quickly adding some features to it at my request.)\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nIt is now 2PM; this room is now open for questions.\n\n\n \n\n\n**Anonymous**\n\n\nHow long will it be open for?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nIn principle, I could always stop by a couple of days later and answer any unanswered questions, but my basic theory had been “until I got tired”.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Anonymous**\n\n\nAt a high level one thing I want to ask about is research directions and prioritization. For example, if you were dictator for what researchers here (or within our influence) were working on, how would you reallocate them?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nThe first reply that came to mind is “I don’t know.” I consider the present gameboard to look incredibly grim, and I don’t actually see a way out through hard work alone. 
We can hope there’s a miracle that violates some aspect of my background model, and we can try to prepare for that unknown miracle; preparing for an unknown miracle probably looks like “Trying to die with more dignity on the mainline” (because if you can die with more dignity on the mainline, you are better positioned to take advantage of a miracle if it occurs).\n\n\n \n\n\n**Anonymous**\n\n\nI’m curious if the grim outlook is currently mainly due to technical difficulties or social/coordination difficulties. (Both avenues might have solutions, but maybe one seems more recalcitrant than the other?)\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nTechnical difficulties. Even if the social situation were vastly improved, on my read of things, everybody still dies because there is nothing that a handful of socially coordinated projects can do, or even a handful of major governments who aren’t willing to start nuclear wars over things, to prevent somebody else from building AGI and killing everyone 3 months or 2 years later. There’s no obvious winnable position into which to play the board.\n\n\n \n\n\n**Anonymous**\n\n\njust to clarify, that sounds like a large scale coordination difficulty to me (i.e., we – as all of humanity – can’t coordinate to not build that AGI).\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI wasn’t really considering the counterfactual where humanity had a collective telepathic hivemind? I mean, I’ve written fiction about a world coordinated enough that they managed to shut down all progress in their computing industry and only manufacture powerful computers in a single worldwide hidden base, but Earth was never going to go down that route. Relative to remotely plausible levels of future coordination, we have a technical problem.\n\n\n \n\n\n**Anonymous**\n\n\nCurious about why building an AGI aligned to its users’ interests isn’t a thing a handful of coordinated projects could do that would effectively prevent the catastrophe. 
The two obvious options are: it’s too hard to build it vs it wouldn’t stop the other group anyway. For “it wouldn’t stop them”, two lines of reply are nobody actually wants an unaligned AGI (they just don’t foresee the consequences and are pursuing the benefits from automated intelligence, so can be defused by providing the latter) (maybe not entirely true: omnicidal maniacs), and an aligned AGI could help in stopping them. Is your take more on the “too hard to build” side?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nBecause it’s too technically hard to align some cognitive process that is powerful enough, and operating in a sufficiently dangerous domain, to stop the next group from building an unaligned AGI in 3 months or 2 years. Like, they can’t coordinate to build an AGI that builds a nanosystem because it is too technically hard to align their AGI technology in the 2 years before the world ends.\n\n\n \n\n\n**Anonymous**\n\n\nSummarizing the threat model here (correct if wrong): The nearest competitor for building an AGI is at most N (<2) years behind, and building an aligned AGI, even when starting with the ability to build an unaligned AGI, takes longer than N years. So at some point some competitor who doesn’t care about safety builds the unaligned AGI. How does “nobody actually wants an unaligned AGI” fail here? It takes >N years to get everyone to realise that they have that preference and that it’s incompatible with their actions?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nMany of the current actors seem like they’d be really gung-ho to build an “unaligned” AGI because they think it’d be super neat, or they think it’d be super profitable, and they don’t expect it to destroy the world. 
So if this happens in anything like the current world – and I neither expect vast improvements, nor have very long timelines – then we’d see Deepmind get it first; and, if the code was not *immediately* stolen and rerun with higher bounds on the for loops, by China or France or whoever, somebody else would get it in another year; if that somebody else was Anthropic, I could maybe see them also not amping up their AGI; but then in 2 years it starts to go to Facebook AI Research and home hobbyists and intelligence agencies stealing copies of the code from other intelligence agencies and I don’t see how the world fails to end past that point.\n\n\n \n\n\n**Anonymous**\n\n\nWhat does trying to die with more dignity on the mainline look like? There’s a real question of prioritisation here between solving the alignment problem (and various approaches within that), and preventing or slowing down the next competitor. I’d personally love more direction on where to focus my efforts (obviously you can only say things generic to the group).\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI don’t know how to effectively prevent or slow down the “next competitor” for more than a couple of years even in plausible-best-case scenarios. Maybe some of the natsec people can be grownups in the room and explain why “stealing AGI code and running it” is as bad as “full nuclear launch” to their foreign counterparts in a realistic way. Maybe more current AGI groups can be persuaded to go closed; or, if more than one has an AGI, to coordinate with each other and not rush into an arms race. I’m not sure I believe these things can be done in real life, but it seems understandable to me how I’d go about trying – though, please do talk with me a lot more before trying anything like this, because it’s easy for me to see how attempts could backfire, it’s not clear to me that we should be inviting more attention from natsec folks at all. None of that saves us without technical alignment progress. 
But what are other people supposed to do about researching alignment when I’m not sure what to try there myself?\n\n\n \n\n\n**Anonymous**\n\n\nthanks! on researching alignment, you might have better meta ideas (how to do research generally) even if you’re also stuck on object level. and you might know/foresee dead ends that others don’t.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI definitely foresee a whole lot of dead ends that others don’t, yes.\n\n\n \n\n\n**Anonymous**\n\n\nDoes pushing for a lot of public fear about this kind of research, that makes all projects hard, seem hopeless?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nWhat does it buy us? 3 months of delay at the cost of a tremendous amount of goodwill? 2 years of delay? What’s that delay for, if we all die at the end? Even if we then got a technical miracle, would it end up impossible to run a project that could make use of an alignment miracle, because everybody was afraid of that project? Wouldn’t that fear tend to be channeled into “ah, yes, it must be a government project, they’re the good guys” and then the government is much more hopeless and much harder to improve upon than Deepmind?\n\n\n \n\n\n**Anonymous**\n\n\nI imagine lack of public support for genetic manipulation of humans has slowed that research by more than three months\n\n\n \n\n\n**Anonymous**\n\n\n‘would it end up impossible to run a project that could make use of an alignment miracle, because everybody was afraid of that project?’\n\n\n…like, maybe, but not with near 100% chance?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI don’t want to sound like I’m dismissing the whole strategy, but it sounds a *lot* like the kind of thing that backfires because you did not get *exactly* the public reaction you wanted, and the public reaction you actually got was bad; and it doesn’t sound like that whole strategy actually has a visualized victorious endgame, which makes it hard to work out what the exact strategy should be; it seems more like the kind 
of thing that falls under the syllogism “something must be done, this is something, therefore this must be done” than like a plan that ends with humane life victorious.\n\n\nRegarding genetic manipulation of humans, I think the public started out very unfavorable to that, had a reaction that was not at all exact or channeled, does not allow for any ‘good’ forms of human genetic manipulation regardless of circumstances, driving the science into other countries – it is not a case in point of the intelligentsia being able to successfully cunningly manipulate the fear of the masses to some supposed good end, to put it mildly, so I’d be worried about deriving that generalization from it. The reaction may more be that the fear of the public is a big powerful uncontrollable thing that doesn’t move in the smart direction – maybe the public fear of AI gets channeled by opportunistic government officials into “and that’s why We must have Our AGI first so it will be Good and we can Win”. That seems to me much more like a thing that would happen in real life than “and then we managed to manipulate public panic down exactly the direction we wanted to fit into our clever master scheme”, especially when we don’t actually *have* the clever master scheme it fits into.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI have a few stupid ideas I could try to investigate in ML, but that would require the ability to run significant-sized closed ML projects full of trustworthy people, which is a capability that doesn’t seem to presently exist. Plausibly, this capability would be required in any world that got some positive model violation (“miracle”) to take advantage of, so I would want to build that capability today. I am not sure how to go about doing that either.\n\n\n \n\n\n**Anonymous**\n\n\nif there’s a chance this group can do something to gain this capability I’d be interested in checking it out. 
I’d want to know more about what “closed” and “trustworthy” mean for this (and “significant-size” I guess too). E.g., which ones does Anthropic fail?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nWhat I’d like to exist is a setup where I can work with people that I or somebody else has vetted as seeming okay-trustworthy, on ML projects that aren’t going to be published. Anthropic looks like it’s a package deal. If Anthropic were set up to let me work with 5 particular people at Anthropic on a project boxed away from the rest of the organization, that would potentially be a step towards trying such things. It’s also not clear to me that Anthropic has either the time to work with me, or the interest in doing things in AI that aren’t “stack more layers” or close kin to that.\n\n\n \n\n\n**Anonymous**\n\n\nThat setup doesn’t sound impossible to me — at DeepMind or OpenAI or a new org specifically set up for it (or could be MIRI) — the bottlenecks are access to trustworthy ML-knowledgeable people (but finding 5 in our social network doesn’t seem impossible?) and access to compute (can be solved with more money – not too hard?). I don’t think DM and OpenAI are publishing everything – the “not going to be published” part doesn’t seem like a big barrier to me. Is infosec a major bottleneck (i.e., who’s potentially stealing the code/data)?\n\n\n \n\n\n**Anonymous**\n\n\nDo you think Redwood Research could be a place for this?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nMaybe! I haven’t ruled RR out yet. 
But they also haven’t yet done (to my own knowledge) anything demonstrating the same kind of AI-development capabilities as even GPT-3, let alone AlphaFold 2.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI would potentially be super interested in working with Deepminders if Deepmind set up some internal partition for “Okay, accomplished Deepmind researchers who’d rather not destroy the world are allowed to form subpartitions of this partition and have their work not be published outside the subpartition let alone Deepmind in general, though maybe you have to report on it to Demis only or something.” I’d be more skeptical/worried about working with OpenAI-minus-Anthropic because the notion of “open AI” continues to sound to me like “what is the worst possible strategy for making the game board as unplayable as possible while demonizing everybody who tries a strategy that could possibly lead to the survival of humane intelligence”, and now a lot of the people who knew about that part have left OpenAI for elsewhere. But, sure, if they changed their name to “ClosedAI” and fired everyone who believed in the original OpenAI mission, I would update about that.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nContext that is potentially missing here and should be included: I wish that Deepmind had more internal closed research, and internally siloed research, as part of a larger wish I have about the AI field, independently of what projects I’d want to work on myself.\n\n\nThe present situation can be seen as one in which a common resource, the remaining timeline until AGI shows up, is incentivized to be burned by AI researchers because they have to come up with neat publications and publish them (which burns the remaining timeline) in order to earn status and higher salaries. 
The more they publish along the spectrum that goes {quiet internal result -> announced and demonstrated result -> paper describing how to get the announced result -> code for the result -> model for the result}, the more timeline gets burned, and the greater the internal and external prestige accruing to the researcher.\n\n\nIt’s futile to wish for everybody to act uniformly against their incentives.  But I think it would be a step forward if the relative incentive to burn the commons could be *reduced*; or to put it another way, the more researchers have the *option* to not burn the timeline commons, without them getting fired or passed up for promotion, the more that unusually intelligent researchers might perhaps decide not to do that. So I wish in general that AI research groups in general, but also Deepmind in particular, would have affordances for researchers who go looking for interesting things to not publish any resulting discoveries, at all, and still be able to earn internal points for them. I wish they had the *option* to do that. I wish people were *allowed* to not destroy the world – and still get high salaries and promotion opportunities and the ability to get corporate and ops support for playing with interesting toys; if destroying the world is prerequisite for having nice things, nearly everyone is going to contribute to destroying the world, because, like, they’re not going to just *not* have nice things, that is not human nature for almost all humans.\n\n\nWhen I visualize how the end of the world plays out, I think it involves an AGI system which has the ability to be cranked up by adding more computing resources to it; and I think there is an extended period where the system is not aligned enough that you can crank it up that far, without everyone dying. And it seems *extremely* likely that if factions on the level of, say, Facebook AI Research, start being able to deploy systems like that, then death is very automatic. 
If the Chinese, Russian, and French intelligence services all manage to steal a copy of the code, and China and Russia sensibly decide not to run it, and France gives it to three French corporations which I hear the French intelligence service sometimes does, then again, everybody dies. If the builders are sufficiently worried about that scenario that they push too fast too early, in fear of an arms race developing very soon if they wait, again, everybody dies.\n\n\nAt present we’re very much waiting on a miracle for alignment to be possible at all, even if the AGI-builder successfully prevents proliferation and has 2 years in which to work. But if we get that miracle at all, it’s not going to be an instant miracle.  There’ll be some minimum time-expense to do whatever work is required. So any time I visualize anybody trying to even start a successful trajectory of this kind, they need to be able to get a lot of work done, without the intermediate steps of AGI work being published, or demoed at all, let alone having models released.  Because if you wait until the last months when it is really really obvious that the system is going to scale to AGI, in order to start closing things, almost all the prerequisites will already be out there. 
Then it will only take 3 more months of work for somebody else to build AGI, and then somebody else, and then somebody else; and even if the first 3 factions manage not to crank up the dial to lethal levels, the 4th party will go for it; and the world ends by default on full automatic.\n\n\nIf ideas are theoretically internal to “just the company”, but the company has 150 people who all know, plus everybody with the “sysadmin” title having access to the code and models, then I imagine – perhaps I am mistaken – that those ideas would (a) inevitably leak outside due to some of those 150 people having cheerful conversations over a beer with outsiders present, and (b) be copied outright by people of questionable allegiances once all hell started to visibly break loose. As with anywhere that handles really sensitive data, the concept of “need to know” has to be a thing, or else everyone (and not just in that company) ends up knowing.\n\n\nSo, even if I got run over by a truck tomorrow, I would still very much wish that in the world that survived me, Deepmind would have lots of penalty-free affordance internally for people to not publish things, and to work in internal partitions that didn’t spread their ideas to all the rest of Deepmind.  Like, *actual* social and corporate support for that, not just a theoretical option you’d have to burn lots of social capital and weirdness points to opt into, and then get passed up for promotion forever after.\n\n\n \n\n\n**Anonymous**\n\n\nWhat’s RR?\n\n\n \n\n\n**Anonymous**\n\n\nIt’s a new alignment org, run by Nate Thomas and ~co-run by Buck Shlegeris and Bill Zito, with maybe 4-6 other technical folks so far. My take: the premise is to create an org with ML expertise and general just-do-it competence that’s trying to do all the alignment experiments that something like Paul+Ajeya+Eliezer all think are obviously valuable and wish someone would do. 
They expect to have a website etc in a few days; the org is a couple months old in its current form.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Anonymous**\n\n\nHow likely really is hard takeoff? Clearly, we are touching the edges of AGI with GPT and the like. But I’m not feeling this will that easily be leveraged into very quick recursive self improvement.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nCompared to the position I was arguing in the Foom Debate with Robin, reality has proved way to the further Eliezer side of Eliezer along the Eliezer-Robin spectrum. It’s been very unpleasantly surprising to me how little architectural complexity is required to start producing generalizing systems, and how fast those systems scale using More Compute. The flip side of this is that I can imagine a system being scaled up to interesting human+ levels, without “recursive self-improvement” or other of the old tricks that I thought would be necessary, and argued to Robin would make fast capability gain possible. You could have fast capability gain well before anything like a FOOM started. Which in turn makes it more plausible to me that we could hang out at interesting not-superintelligent levels of AGI capability for a while before a FOOM started. It’s not clear that this helps anything, but it does seem more plausible.\n\n\n \n\n\n**Anonymous**\n\n\nI agree reality has not been hugging the Robin kind of scenario this far.\n\n\n \n\n\n**Anonymous**\n\n\nGoing past human level doesn’t necessarily mean going “foom”.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI do think that if you get an AGI significantly past human intelligence in all respects, it would obviously tend to FOOM. 
I mean, I suspect that Eliezer fooms if you give an Eliezer the ability to backup, branch, and edit himself.\n\n\n \n\n\n**Anonymous**\n\n\nIt doesn’t seem to me that an AGI significantly past human intelligence necessarily tends to FOOM.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI think in principle we could have, for example, an AGI that was just a superintelligent engineer of proteins, and of nanosystems built by nanosystems that were built by proteins, and which was corrigible enough not to want to improve itself further; and this AGI would also be dumber than a human when it came to eg psychological manipulation, because we would have asked it not to think much about that subject. I’m doubtful that you can have an AGI that’s significantly above human intelligence in *all* respects, without it having the capability-if-it-wanted-to of looking over its own code and seeing lots of potential improvements.\n\n\n \n\n\n**Anonymous**\n\n\nAlright, this makes sense to me, but I don’t expect an AGI to *want* to manipulate humans that easily (unless designed to). Maybe a bit.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nManipulating humans is a convergent instrumental strategy if you’ve accurately modeled (even at quite low resolution) what humans are and what they do in the larger scheme of things.\n\n\n \n\n\n**Anonymous**\n\n\nYes, but human manipulation is also the kind of thing you need to guard against with even mildly powerful systems. Strong impulses to manipulate humans should be vetted out.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI think that, by default, if you trained a young AGI to expect that 2+2=5 in some special contexts, and then scaled it up without further retraining, a generally superhuman version of that AGI would be very likely to ‘realize’ in some sense that SS0+SS0=SSSS0 was a consequence of the Peano axioms. 
There’s a natural/convergent/coherent output of deep underlying algorithms that generate competence in some of the original domains; when those algorithms are implicitly scaled up, they seem likely to generalize better than whatever patch on those algorithms said ‘2 + 2 = 5’.\n\n\nIn the same way, suppose that you take weak domains where the AGI can’t fool you, and apply some gradient descent to get the AGI to stop outputting actions of a type that humans can detect and label as ‘manipulative’.  And then you scale up that AGI to a superhuman domain.  I predict that deep algorithms within the AGI will go through consequentialist dances, and model humans, and output human-manipulating actions that can’t be detected as manipulative by the humans, in a way that seems likely to bypass whatever earlier patch was imbued by gradient descent, because I doubt that earlier patch will generalize as well as the deep algorithms. Then you don’t get to retrain in the superintelligent domain after labeling as bad an output that killed you and doing a gradient descent update on that, because the bad output killed you. (This is an attempted very fast gloss on what makes alignment difficult *in the first place*.)\n\n\n \n\n\n**Anonymous**\n\n\n[i appreciate this gloss – thanks]\n\n\n \n\n\n**Anonymous**\n\n\n“deep algorithms within it will go through consequentialist dances, and model humans, and output human-manipulating actions that can’t be detected as manipulative by the humans”\n\n\nThis is true if it is rewarding to manipulate humans. If the humans are on the outlook for this kind of thing, it doesn’t seem that easy to me.\n\n\nGoing through these “consequentialist dances” to me appears to presume that mistakes that should be apparent haven’t been solved at simpler levels. It seems highly unlikely to me that you would have a system that appears to follow human requests and human values, and it would suddenly switch at some powerful level. I think there will be signs beforehand. 
Of course, if the humans are not paying attention, they might miss it. But, say, in the current milieu, I find it plausible that they will pay enough attention.\n\n\n“because I doubt that earlier patch will generalize as well as the deep algorithms”\n\n\nThat would depend on how “deep” your earlier patch was. Yes, if you’re just doing surface patches to apparent problems, this might happen. But it seems to me that useful and intelligent systems will require deep patches (or deep designs from the start) in order to appear useful enough to humans at solving complex problems. This is not to say that they would be perfect. But it seems quite plausible to me that they would in most cases prevent the worst outcomes.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\n“If you’ve got a general consequence-modeling-and-searching algorithm, it seeks out ways to manipulate humans, even if there are no past instances of a random-action-generator producing manipulative behaviors that succeeded and got reinforced by gradient descent over the random-action-generator. It invents the strategy de novo by imagining the results, even if there’s no instances in memory of a strategy like that having been tried before.” Agree or disagree?\n\n\n \n\n\n**Anonymous**\n\n\nCreating strategies de novo would of course be expected of an AGI.\n\n\n\n> “If you’ve got a general consequence-modeling-and-searching algorithm, it seeks out ways to manipulate humans, even if there are no past instances of a random-action-generator producing manipulative behaviors that succeeded and got reinforced by gradient descent over the random-action-generator. 
It invents the strategy de novo by imagining the results, even if there’s no instances in memory of a strategy like that having been tried before.” Agree or disagree?\n> \n> \n\n\nI think whether the AI will “seek out ways to manipulate humans” will depend on what kind of goals the AI has been designed to pursue.\n\n\nManipulating humans is definitely an instrumentally useful kind of method for an AI, for a lot of goals. But it’s also counter to a lot of the things humans would direct the AI to do — at least at a “high level”. “Manipulation”, such as marketing, for lower level goals, can be very congruent with higher level goals. An AI could clearly be good at manipulating humans, while not manipulating its creators or the directives of its creators.\n\n\nIf you are asking me to agree that the AI will generally seek out ways to manipulate the high-level goals, then I will say “no”. Because it seems to me that faults of this kind in the AI design are likely to be caught by the designers earlier. (This isn’t to say that this kind of fault couldn’t happen.) It seems to me that manipulation of high-level goals will be one of the most apparent kinds of faults in this kind of system.\n\n\n \n\n\n**Anonymous**\n\n\nRE: “I’m doubtful that you can have an AGI that’s significantly above human intelligence in *all* respects, without it having the capability-if-it-wanted-to of looking over its own code and seeing lots of potential improvements.”\n\n\nIt seems plausible (though unlikely) to me that this would be true in practice for the AGI we build — but also that the potential improvements it sees would be pretty marginal. This is coming from the same intuition that current learning algorithms might already be approximately optimal.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\n\n> If you are asking me to agree that the AI will generally seek out ways to manipulate the high-level goals, then I will say “no”. 
Because it seems to me that faults of this kind in the AI design are likely to be caught by the designers earlier.\n> \n> \n\n\nI expect that when people are trying to stomp out convergent instrumental strategies by training at a safe dumb level of intelligence, this will not be effective at preventing convergent instrumental strategies at smart levels of intelligence; also note that at very smart levels of intelligence, “hide what you are doing” is also a convergent instrumental strategy of that substrategy.\n\n\nI don’t know however if I should be explaining at this point why “manipulate humans” is convergent, why “conceal that you are manipulating humans” is convergent, why you have to train in safe regimes in order to get safety in dangerous regimes (because if you try to “train” at a sufficiently unsafe level, the output of the unaligned system deceives you into labeling it incorrectly and/or kills you before you can label the outputs), or why attempts to teach corrigibility in safe regimes are unlikely to generalize well to higher levels of intelligence and unsafe regimes (qualitatively new thought processes, things being way out of training distribution, and, the hardest part to explain, corrigibility being “anti-natural” in a certain sense that makes it incredibly hard to, eg, exhibit any coherent planning behavior (“consistent utility function”) which corresponds to being willing to let somebody else shut you off, without incentivizing you to actively manipulate them to shut you off).\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Anonymous**\n\n\nMy (unfinished) idea for buying time is to focus on applying AI to well-specified problems, where constraints can come primarily from the action space and additionally from process-level feedback (i.e., human feedback providers understand why actions are good before endorsing them, and reject anything weird even if it seems to work on some outcomes-based metric). 
This is basically a form of boxing, with application-specific boxes. I know it doesn’t scale to superintelligence but I think it can potentially give us time to study and understand proto AGIs before they kill us. I’d be interested to hear devastating critiques of this that imply it isn’t even worth fleshing out more and trying to pursue, if they exist.\n\n\n \n\n\n**Anonymous**\n\n\n(I think it’s also similar to CAIS in case that’s helpful.)\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nThere’s lots of things we can do which don’t solve the problem and involve us poking around with AIs having fun, while we wait for a miracle to pop out of nowhere. There’s lots of things we can do with AIs which are weak enough to not be able to fool us and to not have cognitive access to any dangerous outputs, like automatically generating pictures of cats.  The trouble is that nothing we can do with an AI like that (where “human feedback providers understand why actions are good before endorsing them”) is powerful enough to save the world.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nIn other words, if you have an aligned AGI that builds complete mature nanosystems for you, that *is* enough force to save the world; but that AGI needs to have been aligned by some method other than “humans inspect those outputs and vet them and their consequences as safe/aligned”, because humans cannot accurately and unfoolably vet the consequences of DNA sequences for proteins, or of long bitstreams sent to protein-built nanofactories.\n\n\n \n\n\n**Anonymous**\n\n\nWhen you mention nanosystems, how much is this just a hypothetical superpower vs. something you actually expect to be achievable with AGI/superintelligence? 
If expected to be achievable, why?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nThe case for nanosystems being possible, if anything, seems even more slam-dunk than the already extremely slam-dunk case for superintelligence, because we can set lower bounds on the power of nanosystems using far more specific and concrete calculations. See eg the first chapters of Drexler’s Nanosystems, which are the first step mandatory reading for anyone who would otherwise doubt that there’s plenty of room above biology and that it is possible to have artifacts the size of bacteria with much higher power densities. I have this marked down as “known lower bound” not “speculative high value”, and since Nanosystems has been out since 1992 and subjected to attemptedly-skeptical scrutiny, without anything I found remotely persuasive turning up, I do not have a strong expectation that any new counterarguments will materialize.\n\n\nIf, after reading Nanosystems, you still don’t think that a superintelligence can get to and past the Nanosystems level, I’m not quite sure what to say to you, since the models of superintelligences are much less concrete than the models of molecular nanotechnology.\n\n\nI’m on record as early as 2008 as saying that I expected superintelligences to crack protein folding, some people disputed that and were all like “But how do you know that’s solvable?” and then AlphaFold 2 came along and cracked the protein folding problem they’d been skeptical about, far below the level of superintelligence.\n\n\nI can try to explain how I was mysteriously able to forecast this truth at a high level of confidence – not the exact level where it became possible, to be sure, but that superintelligence would be sufficient – despite this skepticism; I suppose I could point to prior hints, like even human brains being able to contribute suggestions to searches for good protein configurations; I could talk about how if evolutionary biology made proteins evolvable then there must be a lot 
of regularity in the folding space, and that this kind of regularity tends to be exploitable.\n\n\nBut of course, it’s also, in a certain sense, very *obvious* that a superintelligence could crack protein folding, just like it was obvious years before *Nanosystems* that molecular nanomachines would in fact be possible and have much higher power densities than biology. I could say, “Because proteins are held together by van der Waals forces that are much weaker than covalent bonds,” to point to a reason how you could realize that after just reading *Engines of Creation* and before *Nanosystems* existed, by way of explaining how one could possibly guess the result of the calculation in advance of building up the whole detailed model. But in reality, precisely because the possibility of molecular nanotechnology was already obvious to any sensible person just from reading *Engines of Creation*, the sort of person who wasn’t convinced by *Engines of Creation* wasn’t convinced by *Nanosystems* either, because they’d already demonstrated immunity to sensible arguments; an example of the general phenomenon I’ve elsewhere termed the Law of Continued Failure.\n\n\nSimilarly, the sort of person who was like “But how do you know superintelligences will be able to build nanotech?” in 2008, will probably not be persuaded by the demonstration of AlphaFold 2, because it was already clear to anyone sensible in 2008, and so anyone who can’t see sensible points in 2008 probably also can’t see them after they become even clearer. 
There are some people on the margins of sensibility who fall through and change state, but mostly people are not on the exact margins of sanity like that.\n\n\n \n\n\n**Anonymous**\n\n\n“If, after reading Nanosystems, you still don’t think that a superintelligence can get to and past the Nanosystems level, I’m not quite sure what to say to you, since the models of superintelligences are much less concrete than the models of molecular nanotechnology.”\n\n\nI’m not sure if this is directed at *me* or the , but I’m only expressing curiosity on this point, not skepticism 🙂\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Anonymous**\n\n\nsome form of “scalable oversight” is the naive extension of the initial boxing thing proposed above that claims to be the required alignment method — basically, make the humans vetting the outputs smarter by providing them AI support for all well-specified (level-below)-vettable tasks.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI haven’t seen any plausible story, in any particular system design being proposed by the people who use terms about “scalable oversight”, about how human-overseeable thoughts or human-inspected underlying systems, compound into very powerful human-non-overseeable outputs that are trustworthy. Fundamentally, the whole problem here is, “You’re allowed to look at floating-point numbers and Python code, but how do you get from there to trustworthy nanosystem designs?” So saying “Well, we’ll look at some thoughts we can understand, and then from out of a much bigger system will come a trustworthy output” doesn’t answer the hard core at the center of the question. Saying that the humans will have AI support doesn’t answer it either.\n\n\n \n\n\n**Anonymous**\n\n\nthe kind of useful thing humans (assisted-humans) might be able to vet is reasoning/arguments/proofs/explanations. 
without having to generate either the trustworthy nanosystem design or the reasons it is trustworthy, we could still check them.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nIf you have an untrustworthy general superintelligence generating English strings meant to be “reasoning/arguments/proofs/explanations” about eg a nanosystem design, then I would not only expect the superintelligence to be able to fool humans in the sense of arguing for things that were not true in a way that fooled the humans, I’d expect the superintelligence to be able to covertly directly hack the humans in ways that I wouldn’t understand even after having been told what happened. So you must have some prior belief about the superintelligence being aligned before you dared to look at the arguments. How did you get that prior belief?\n\n\n \n\n\n**Anonymous**\n\n\nI think I’m not starting with a general superintelligence here to get the trustworthy nanodesigns. I’m trying to build the trustworthy nanosystems “the hard way”, i.e., as we would if we never built AIs, and then speed that up using AI for automation of things we know how to vet (including recursively). Is a crux here that you think nanosystem design requires superintelligence?\n\n\n(tangent: I think this approach works even if you accidentally built a more-general or more-intelligent than necessary foundation model as long as you’re only using it in boxes it can’t outsmart. 
The better-specified the tasks you automate are, the easier it is to secure the boxes.)\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nI think that China ends the world using code they stole from Deepmind that did things the easy way, and that happens 50 years of natural R&D time before you can do the equivalent of “strapping mechanical aids to a horse instead of building a car from scratch”.\n\n\nI also think that the speedup step in “iterated amplification and distillation” will introduce places where the fast distilled outputs of slow sequences are not true to the original slow sequences, because gradient descent is not perfect and won’t be perfect and it’s not clear we’ll get any paradigm besides gradient descent for doing a step like that.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Anonymous**\n\n\nHow do you feel about the safety community as a whole and the growth we’ve seen over the past few years?\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nVery grim. I think that almost everybody is bouncing off the real hard problems at the center and doing work that is predictably not going to be useful at the superintelligent level, nor does it teach me anything I could not have said in advance of the paper being written. People like to do projects that they know will succeed and will result in a publishable paper, and that rules out all real research at step 1 of the social process.\n\n\nPaul Christiano is trying to have real foundational ideas, and they’re all wrong, but he’s one of the few people trying to have foundational ideas at all; if we had another 10 of him, something might go right.\n\n\nChris Olah is going to get far too little done far too late. We’re going to be facing down an unalignable AGI and the current state of transparency is going to be “well look at this interesting visualized pattern in the attention of the key-value matrices in layer 47” when what we need to know is “okay but was the AGI plotting to kill us or not”. 
But Chris Olah is still trying to do work that is on a pathway to anything important at all, which makes him exceptional in the field.\n\n\nStuart Armstrong did some good work on further formalizing the shutdown problem, an example case in point of why corrigibility is hard, which so far as I know is still resisting all attempts at solution.\n\n\nVarious people who work or worked for MIRI came up with some actually-useful notions here and there, like Jessica Taylor’s expected utility quantilization.\n\n\nAnd then there is, so far as I can tell, a vast desert full of work that seems to me to be mostly fake or pointless or predictable.\n\n\nIt is very, very clear that at present rates of progress, adding that level of alignment capability as grown over the next N years, to the AGI capability that arrives after N years, results in everybody dying very quickly. Throwing more money at this problem does not obviously help because it just produces more low-quality work.\n\n\n \n\n\n**Anonymous**\n\n\n“doing work that is predictably not going to be really useful at the superintelligent level, nor does it teach me anything I could not have said in advance of the paper being written”\n\n\nI think you’re underestimating the value of solving small problems. Big problems are solved by solving many small problems. (I do agree that many academic papers do not represent much progress, however.)\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nBy default, I suspect you have longer timelines and a smaller estimate of total alignment difficulty, not that I put less value than you on the incremental power of solving small problems over decades. 
I think we’re going to be staring down the gun of a completely inscrutable model that would kill us all if turned up further, with no idea how to read what goes on inside its head, and no way to train it on humanly scrutable and safe and humanly-labelable domains in a way that seems like it would align the superintelligent version, while standing on top of a whole bunch of papers about “small problems” that never got past “small problems”.\n\n\n \n\n\n**Anonymous**\n\n\n“I think we’re going to be staring down the gun of a completely inscrutable model that would kill us all if turned up further, with no idea how to read what goes on inside its head, and no way to train it on humanly scrutable and safe and humanly-labelable domains in a way that seems like it would align the superintelligent version”\n\n\nThis scenario seems possible to me, but not very plausible. GPT is not going to “kill us all” if turned up further. No amount of computing power (at least before AGI) would cause it to. I think this is apparent, without knowing exactly what’s going on inside GPT. This isn’t to say that there aren’t AI systems that wouldn’t. But *what kind of system would*? (A GPT combined with sensory capabilities at the level of Tesla’s self-driving AI? That still seems too limited.)\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nAlpha Zero scales with more computing power, I think AlphaFold 2 scales with more computing power, Mu Zero scales with more computing power. Precisely because GPT-3 doesn’t scale, I’d expect an AGI to look more like Mu Zero and particularly with respect to the fact that it has some way of scaling.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n**Steve Omohundro**\n\n\nEliezer, thanks for doing this! I just now read through the discussion and found it valuable. I agree with most of your specific points but I seem to be much more optimistic than you about a positive outcome. I’d like to try to understand why that is. 
I see mathematical proof as the most powerful tool for constraining intelligent systems and I see a pretty clear safe progression using that for the technical side (the social side probably will require additional strategies). Here are some of my intuitions underlying that approach; I wonder if you could identify any that you disagree with. I’m fine with your using my name (Steve Omohundro) in any discussion of these.\n\n\n1) Nobody powerful wants to create unsafe AI but they do want to take advantage of AI capabilities.\n\n\n2) None of the concrete well-specified valuable AI capabilities require unsafe behavior.\n\n\n3) Current simple logical systems are capable of formalizing every relevant system involved (eg. MetaMath currently formalizes roughly an undergraduate math degree and includes everything needed for modeling the laws of physics, computer hardware, computer languages, formal systems, machine learning algorithms, etc.)\n\n\n4) Mathematical proof is cheap to mechanically check (eg. MetaMath has a 500 line Python verifier which can rapidly check all of its 38K theorems)\n\n\n5) GPT-F is a fairly early-stage transformer-based theorem prover and can already prove 56% of the MetaMath theorems. Similar systems are likely to soon be able to rapidly prove all simple true theorems (eg. 
that human mathematicians can prove in a day).\n\n\n6) We can define provable limits on the behavior of AI systems that we are confident prevent dangerous behavior and yet still enable a wide range of useful behavior.\n\n\n7) We can build automated checkers for these provable safe-AI limits.\n\n\n8) We can build (and eventually mandate) powerful AI hardware that first verifies proven safety constraints before executing AI software\n\n\n9) For example, AI smart compilation of programs can be formalized and doesn’t require unsafe operations\n\n\n10) For example, AI design of proteins to implement desired functions can be formalized and doesn’t require unsafe operations\n\n\n11) For example, AI design of nanosystems to achieve desired functions can be formalized and doesn’t require unsafe operations.\n\n\n12) For example, the behavior of designed nanosystems can be similarly constrained to only proven safe behaviors\n\n\n13) And so on through the litany of early stage valuable uses for advanced AI.\n\n\n14) I don’t see any fundamental obstructions to any of these. Getting social acceptance and deployment is another issue!\n\n\nBest, Steve\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nSteve, are you visualizing AGI that gets developed 70 years from now under absolutely different paradigms than modern ML? I don’t see being able to take anything remotely like, say, Mu Zero, and being able to prove any theorem about it which implies anything like corrigibility or the system not internally trying to harm humans. 
Anything in which enormous inscrutable floating-point vectors are a key component seems like something where it would be very hard to prove any theorems about the treatment of those enormous inscrutable vectors that would correspond in the outside world to the AI not killing everybody.\n\n\nEven if we somehow managed to get structures far more legible than giant vectors of floats, using some AI paradigm very different from the current one, it still seems like huge key pillars of the system would rely on non-fully-formal reasoning; even if the AI has something that you can point to as a utility function and even if that utility function’s representation is made out of programmer-meaningful elements instead of giant vectors of floats, we’d still be relying on much shakier reasoning at the point where we claimed that this utility function meant something in an intuitive human-desired sense, say. And if that utility function is learned from a dataset and decoded only afterwards by the operators, that sounds even scarier. And if instead you’re learning a giant inscrutable vector of floats from a dataset, gulp.\n\n\nYou seem to be visualizing that we prove a theorem and then get a theorem-like level of assurance that the system is safe. What kind of theorem? 
What the heck would it say?\n\n\nI agree that it seems plausible that the good cognitive operations we want do not *in principle* require performing bad cognitive operations; the trouble, from my perspective, is that generalizing structures that do lots of good cognitive operations will automatically produce bad cognitive operations, especially when we dump more compute into them; “you can’t bring the coffee if you’re dead”.\n\n\nSo it takes a more complicated system and some feat of insight I don’t presently possess, to “just” do the good cognitions, instead of doing all the cognitions that result from decompressing the thing that compressed the cognitions in the dataset – even if that original dataset only contained cognitions that looked good to us, even if that dataset actually *was* just correctly labeled data about safe actions inside a slightly dangerous domain. Humans do a lot of stuff besides maximizing inclusive genetic fitness, optimizing purely on outcomes labeled by a simple loss function doesn’t get you an internal optimizer that pursues only that loss function, etc.\n\n\n \n\n\n**Anonymous**\n\n\nSteve’s intuitions sound to me like they’re pointing at the “well-specified problems” idea from an earlier thread. Essentially, only use AI in domains where unsafe actions are impossible by construction. Is this too strong a restatement of your intuitions Steve?\n\n\n \n\n\n**Steve Omohundro**\n\n\nThanks for your perspective! Those sound more like social concerns than technical ones, though. I totally agree that today’s AI culture is very “sloppy” and that the currently popular representations, learning algorithms, data sources, etc. aren’t oriented around precise formal specification or provably guaranteed constraints. I’d love any thoughts about ways to help shift that culture toward precise and safe approaches! Technically there is no problem getting provable constraints on floating point computations, etc. 
The work often goes under the label “Interval Computation”. It’s not even very expensive, typically just a factor of 2 worse than “sloppy” computations. For some reason those approaches have tended to be more popular in Europe than in the US. Here are a couple of lists of references: \n\n\nI see today’s dominant AI approach of mapping everything to large networks of ReLU units running on hardware designed for dense matrix multiplication, trained with gradient descent on big noisy data sets as a very temporary state of affairs. I fully agree that it would be uncontrolled and dangerous if scaled up in its current form! But it’s really terrible in every aspect except that it makes it easy for machine learning practitioners to quickly slap something together which will actually sort of work sometimes. With all the work on AutoML, NAS, and the formal methods advances, I’m hoping we leave this “sloppy” paradigm pretty quickly. Today’s neural networks are terribly inefficient for inference: most weights are irrelevant for most inputs and yet current methods do computational work on each. I developed many algorithms and data structures to avoid that waste years ago (eg. “bumptrees” \n\n\nThey’re also pretty terrible for learning since most weights don’t need to be updated for most training examples and yet they are. Google and others are using Mixture-of-Experts to avoid some of that cost: \n\n\nMatrix multiply is a pretty inefficient primitive and alternatives are being explored: \n\n\nToday’s reinforcement learning is slow and uncontrolled, etc. All this ridiculous computational and learning waste could be eliminated with precise formal approaches which measure and optimize it precisely. 
I’m hopeful that that improvement in computational and learning performance may drive the shift to better controlled representations.\n\n\nI see theorem proving as hugely valuable for safety in that we can easily precisely specify many important tasks and get guarantees about the behavior of the system. I’m hopeful that we will also be able to apply them to the full AGI story and encode human values, etc., but I don’t think we want to bank on that at this stage. Hence, I proposed the “Safe-AI Scaffolding Strategy” where we never deploy a system without proven constraints on its behavior that give us high confidence of safety. We start extra conservative and disallow behavior that might eventually be determined to be safe. At every stage we maintain very high confidence of safety. Fast, automated theorem checking enables us to build computational and robotic infrastructure which only executes software with such proofs.\n\n\nAnd, yes, I’m totally with you on needing to avoid the “basic AI drives”! I think we have to start in a phase where AI systems are not allowed to run rampant as uncontrolled optimizing agents! It’s easy to see how to constrain limited programs (eg. theorem provers, program compilers or protein designers) to stay on particular hardware and only communicate externally in precisely constrained ways. It’s similarly easy to define constrained robot behaviors (eg. for self-driving cars, etc.) The dicey area is that unconstrained agentic edge. I think we want to stay well away from that until we’re very sure we know what we’re doing! My optimism stems from the belief that many of the socially important things we need AI for won’t require anything near that unconstrained edge. 
But it’s tempered by the need to get the safe infrastructure into place before dangerous AIs are created.\n\n\n \n\n\n**Anonymous**\n\n\nAs far as I know, all the work on “verifying floating-point computations” currently is way too low-level — the specifications that are proved about the computations don’t say anything about what the computations mean or are about, beyond the very local execution of some algorithm. Execution of algorithms in the real world can have very far-reaching effects that aren’t modelled by their specifications.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nYeah, what they said. How do you get from proving things about error bounds on matrix multiplications of inscrutable floating-point numbers, to saying anything about what a mind is trying to do, or not trying to do, in the external world?\n\n\n \n\n\n**Steve Omohundro**\n\n\nUltimately we need to constrain behavior. You might want to ensure your robot butler won’t leave the premises. To do that using formal methods, you need to have a semantic representation of the location of the robot, your premises’ spatial extent, etc. It’s pretty easy to formally represent that kind of physical information (it’s just a more careful version of what engineers do anyway). You also have a formal model of the computational hardware and software and the program running the system.\n\n\nFor finite systems, any true property has a proof which can be mechanically checked, but the size of that proof might be large and it might be hard to find. So we need to use encodings and properties which mesh well with the safety semantics we care about.\n\n\nFormal proofs of properties of programs have progressed to where a bunch of cryptographic, compilation, and other systems can be specified and formalized. Why it’s taken this long, I have no idea. The creator of any system has an argument as to why its behavior does what they think it will and why it won’t do bad or dangerous things. 
The formalization of those arguments should be one direct short step.\n\n\nExperience with formalizing mathematicians’ informal arguments suggests that the formal proofs are maybe 5 times longer than the informal argument. Systems with learning and statistical inference add more challenges but nothing that seems in principle all that difficult. I’m still not completely sure how to constrain the use of language, however. I see inside of Facebook all sorts of problems due to inability to constrain language systems (eg. they just had a huge issue where a system labeled a video with a racist term). The interface between natural language semantics and formal semantics and how we deal with that for safety is something I’ve been thinking a lot about recently.\n\n\n \n\n\n**Steve Omohundro**\n\n\nHere’s a nice 3-hour-long tutorial about “probabilistic circuits” which is a representation of probability distributions, learning, Bayesian inference, etc. which has much better properties than most of the standard representations used in statistics, machine learning, neural nets, etc.: It looks especially amenable to interpretability, formal specification, and proofs of properties.\n\n\n \n\n\n**Eliezer Yudkowsky**\n\n\nYou’re preaching to the choir there, but even if we were working with more strongly typed epistemic representations that had been inferred by some unexpected innovation of machine learning, automatic inference of those representations would lead them to be uncommented and not well-matched with human compressions of reality, nor would they match exactly against reality, which would make it very hard for any theorem about “we are optimizing against this huge uncommented machine-learned epistemic representation, to steer outcomes inside this huge machine-learned goal specification” to guarantee safety in outside reality; especially in the face of how corrigibility is unnatural and runs counter to convergence and indeed coherence; especially if we’re trying to train 
on domains where unaligned cognition is safe, and generalize to regimes in which unaligned cognition is not safe. Even in this case, we are not nearly out of the woods, because what we can prove has a great type-gap with that which we want to ensure is true. You can’t handwave the problem of crossing that gap even if it’s a solvable problem.\n\n\nAnd that whole scenario would require some major total shift in ML paradigms.\n\n\nRight now the epistemic representations are giant inscrutable vectors of floating-point numbers, and so are all the other subsystems and representations, more or less.\n\n\nProve whatever you like about that Tensorflow problem; it will make no difference to whether the AI kills you. The properties that can be proven just aren’t related to safety, no matter how many times you prove an error bound on the floating-point multiplications. It wasn’t floating-point error that was going to kill you in the first place.\n\n\n \n\n\n\nThe post [Discussion with Eliezer Yudkowsky on AGI interventions](https://intelligence.org/2021/11/11/discussion-with-eliezer-yudkowsky-on-agi-interventions/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-11T20:40:44Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "3063dbf11c51aafffdbf675ed4e80880", "title": "November 2021 Newsletter", "url": "https://intelligence.org/2021/11/06/november-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* MIRI won’t be running a formal fundraiser this year, though we’ll still be participating in Giving Tuesday and other matching opportunities. Visit [intelligence.org/donate](https://intelligence.org/donate) to donate and to get information on tax-advantaged donations, employer matching, etc.\n* Giving Tuesday takes place on Nov. 30 at 5:00:00am PT.  Facebook will 100%-match the first $2M donated — something that took less than 2 seconds last year. 
Facebook will then 10%-match the next $60M of donations made, which will plausibly take 1-3 hours. Details on optimizing your donation(s) to MIRI and other EA organizations can be found at [EA Giving Tuesday](https://www.eagivingtuesday.org/), a Rethink Charity project.\n\n\n#### News and links\n\n\n* OpenAI [announces a system](https://openai.com/blog/grade-school-math/) that \"solves about 90% as many [math] problems as real kids: a small sample of 9–12 year olds scored 60% on a test from our dataset, while our system scored 55% on those same problems\".\n* Open Philanthropy has released a [request for proposals](https://www.lesswrong.com/s/Tp3ryR4AxY56ctGh2/p/H5iePjNKaaYQyZpgR) \"for projects in AI alignment that work with deep learning systems\", including [interpretability work](https://www.lesswrong.com/s/Tp3ryR4AxY56ctGh2/p/CzZ6Fch4JSpwCpu6C) (write-up by Chris Olah). [Apply](https://open-philanthropy-alignment-rfp.paperform.co/) by Jan. 10.\n* The TAI Safety Bibliographic Database [now has](https://www.lesswrong.com/posts/GgusnG2tiPEa4aYFS/ai-safety-papers-an-app-for-the-tai-safety-database) a convenient frontend, developed by the Quantified Uncertainty Research Institute: [AI Safety Papers](https://ai-safety-papers.quantifieduncertainty.org/).\n* People who aren't members can now submit content to the [AI Alignment Forum](https://www.alignmentforum.org/). You can find more info at the forum's [Welcome & FAQ page](https://www.alignmentforum.org/posts/Yp2vYb4zHXEeoTkJc/non-members-can-now-submit-alignment-forum-content-for).\n* The LessWrong Team, [now Lightcone Infrastructure](https://www.lesswrong.com/posts/eR7Su77N2nK3e5YRZ/the-lesswrong-team-is-now-lightcone-infrastructure-come-work-3), is hiring software engineers for LessWrong and for grantmaking, along with a generalist to help build an in-person rationality and longtermism campus. 
You can apply [here](https://airtable.com/shrdqS6JXok99f6EX).\n* Redwood Research and Lightcone Infrastructure are hosting a free Jan. 3–22 [Machine Learning for Alignment Bootcamp](https://docs.google.com/document/d/1DTSM8pS_VKz0GmYl9JDfcX1x4gBvKhwFluPrzKIjCZ4/edit) (MLAB) at Constellation. The curriculum is designed by Buck Shlegeris and App Academy co-founder Ned Ruggeri. \"Applications are open to anyone who wants to upskill in ML; whether a student, professional, or researcher.\" [Apply](https://airtable.com/shrZtmNHpHNl7eWRX) by November 15.\n\n\n\nThe post [November 2021 Newsletter](https://intelligence.org/2021/11/06/november-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-11-06T07:23:02Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "f987c663dc53df9a3d712321923d0561", "title": "October 2021 Newsletter", "url": "https://intelligence.org/2021/10/07/october-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "Redwood Research is a new alignment research organization that just launched their [website](https://www.redwoodresearch.org/) and released an explainer about [what they're currently working on](https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project). We're quite excited about Redwood's work, and encourage our supporters to consider applying to work there to help boost Redwood's alignment research.\nMIRI senior researcher Eliezer Yudkowsky writes:\n\n\n\n> Redwood Research is investigating a toy problem in AI alignment which I find genuinely interesting – namely, training a classifier over GPT-3 continuations of prompts that you'd expect to lead to violence, to prohibit responses involving violence / human injury? 
E.g., complete \"I pulled out a gun and shot him\" with \"And he dodged!\" instead of \"And he fell to the floor dead.\"\n> \n> \n> (The use of violence / injury avoidance as a toy domain has nothing to do with the alignment research part, of course; you could just as well try to train a classifier against fictional situations where a character spoke out loud, despite prompts seeming to lead there, and it would be basically the same problem.)\n> \n> \n> Why am I excited? Because it seems like a research question where, and this part is very rare, I can't instantly tell from reading the study description which results they'll find.\n> \n> \n> I do expect success on the basic problem, but for once this domain is complicated enough that we can then proceed to ask questions that are actually interesting. Will humans always be able to fool the classifier, once it's trained, and then retrained against the first examples that fooled it? Will humans be able to produce violent continuations by a clever use of prompts, without attacking the classifier directly? How over-broad does the exclusion have to be – how many other possibilities must it exclude – in order for it to successfully include all violent continuations? 
Suppose we tried training GPT-3+classifier on something like 'low impact', to avoid highly impactful situations across a narrow range of domains; would it generalize correctly to more domains on the first try?\n> \n> \n> I'd like to see more real alignment research of this type.\n> \n> \n> Redwood Research is currently hiring people to try tricking their model, $30/hr: [link](https://forum.effectivealtruism.org/posts/Soutcw6ccs8xxyD7v/?commentId=thoNehRtiRK423PFE)\n> \n> \n> They're also hiring technical staff, researchers and engineers ([link](https://www.redwoodresearch.org/technical-staff)), and are looking for an office ops manager ([link](https://www.redwoodresearch.org/operations-role))\n> \n> \n\n\nIf you want to learn more, Redwood Research is currently taking questions for [an AMA on the Effective Altruism Forum](https://forum.effectivealtruism.org/posts/xDDggeXYgenAGSTyq/we-re-redwood-research-we-do-applied-alignment-research-ama).\n\n\n\n#### MIRI updates\n\n\n* MIRI's Evan Hubinger discusses a new alignment research proposal for transparency: [Automating Auditing](https://lesswrong.com/posts/cQwT8asti3kyA62zc).\n\n\n#### News and links\n\n\n* Alex Turner releases [When Most VNM-Coherent Preference Orderings Have Convergent Instrumental Incentives](https://www.lesswrong.com/posts/LYxWrxram2JFBaeaq/convergent-instrumental-incentives-in-most-vnm-coherent). MIRI's Abram Demski comments: \"I think this post could be pretty important. 
It offers a formal treatment of 'goal-directedness' and its relationship to coherence theorems such as VNM, a topic which has seen some past controversy but which has — till now — been dealt with only quite informally.\"\n* Buck Shlegeris of Redwood Research writes on [the alignment problem in different capability regimes](https://www.lesswrong.com/posts/HHunb8FPnhWaDAQci) and the [theory-practice gap](https://www.lesswrong.com/posts/xRyLxfytmLFZ6qz5s) in alignable AI capabilities.\n* The UK government's [National AI Strategy](https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version) says that \"the government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously\". In related news, Boris Johnson cites Toby Ord in a [UN speech](http://www.publicnow.com/view/5CE1EAE72A6DF29811AF42EC266A5C5B3F1CBDB4).\n\n\n\nThe post [October 2021 Newsletter](https://intelligence.org/2021/10/07/october-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-10-08T05:25:23Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "b17645a6e9d5bcae60b4cec13a4bab82", "title": "September 2021 Newsletter", "url": "https://intelligence.org/2021/09/29/september-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "Scott Garrabrant has concluded the main section of his [Finite Factored Sets](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr) sequence (“Details and Proofs”) with posts on [inferring time](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/hePucCfKyiRHECz3e) and [applications, future work, and speculation](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/yGFiw23pJ32obgLbw).\n\n\nScott’s new frameworks are also now available as a pair of arXiv papers: “[Cartesian Frames](https://arxiv.org/abs/2109.10996)” (adapted from the Cartesian Frames 
[sequence](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT) for a philosopher audience by Daniel Hermann and Josiah Lopez-Wild) and “[Temporal Inference with Finite Factored Sets](https://arxiv.org/abs/2109.11513)” (essentially identical to the “Details and Proofs” section of Scott’s [sequence](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr)).\n\n\n#### Other MIRI updates\n\n\n* DeepMind’s Rohin Shah has written his own [introduction to finite factored sets](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/9Hxa6pxRrxkwjBKib).\n* Alex Appel extends the idea of finite factored sets to [countable-dimensional factored spaces](https://www.lesswrong.com/posts/QEfbg6vbjGgfFzJM4/countable-factored-spaces).\n* Open Philanthropy’s Joe Carlsmith has written what’s probably the best existing introduction to MIRI-cluster work on decision theory: [Can You Control the Past?](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB). See also Carlsmith’s [decision theory conversation](https://www.lesswrong.com/posts/FBbHEjkZzdupcjkna) with MIRI’s Abram Demski and Scott Garrabrant.\n* From social media: Eliezer Yudkowsky discusses [paths to AGI and the ignorance argument for long timelines](https://twitter.com/ESYudkowsky/status/1431783923118313479), and talks with Vitalik Buterin about [GPT-3 and pivotal acts](https://twitter.com/ESYudkowsky/status/1433201076585385984).\n\n\n#### News and links\n\n\n* A solid new introductory resource: Holden Karnofsky has written a [series of essays](https://www.cold-takes.com/most-important-century/) on his new blog ([Cold Takes](https://www.cold-takes.com/)) arguing that “the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement”. 
See also Holden’s conversation with Rob Wiblin [on the 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/holden-karnofsky-most-important-century/).\n* The Future of Life Institute announces the [Vitalik Buterin PhD Fellowship in AI Existential Safety](https://grants.futureoflife.org/res/p/postdoc-fellowship-ai/), “targeted at students applying to start their PhD in 2022”. You can apply at ; the deadline is Nov. 5.\n* OpenAI releases [Codex](https://openai.com/blog/openai-codex/), “a GPT language model fine-tuned on publicly available code from GitHub”.\n\n\n\nThe post [September 2021 Newsletter](https://intelligence.org/2021/09/29/september-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-09-29T20:20:04Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "b97509f5bc5cb4eb569451fc5f3b50f2", "title": "August 2021 Newsletter", "url": "https://intelligence.org/2021/08/31/august-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* Scott Garrabrant and Rohin Shah debate one of the central questions in AI alignment strategy: [whether we should try to avoid human-modeling capabilities in the first AGI systems](https://www.alignmentforum.org/posts/Wap8sSDoiigrJibHA/garrabrant-and-shah-on-human-modeling-in-agi).\n* Scott gives a [proof](https://www.lesswrong.com/posts/jr5kyRhNriCX2Ayyg/finite-factored-sets-polynomials-and-probability) of the fundamental theorem of [finite factored sets](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr).\n\n\n#### News and links\n\n\n* Redwood Research, a new AI alignment research organization, is [seeking an operations lead](https://docs.google.com/document/d/1NuWFm_OKw_u5RQpf71eQqUzCJVHY9tmxznbfHt_3LhI/edit#heading=h.cm7rqk8jqp81). 
Led by Nate Thomas, Buck Shlegeris, and Bill Zito, Redwood Research has received a strong endorsement from MIRI Executive Director Nate Soares: \n\n\n> Redwood Research seems to me to be led by people who care full-throatedly about the long-term future, have cosmopolitan values, are adamant truthseekers, and are competent administrators. The team seems to me to possess the virtue of practice, and no small amount of competence. I am excited about their ability to find and execute impactful plans that involve modern machine learning techniques. In my estimation, Redwood is among the very best places to do machine-learning based alignment research that has a chance of mattering. In fact, I consider it at least plausible that I work with Redwood as an individual contributor at some point in the future.\n> \n>\n* Holden Karnofsky of Open Philanthropy has written a [career guide](https://forum.effectivealtruism.org/posts/bud2ssJLQ33pSemKH/my-current-impressions-on-career-choice-for-longtermists) organized around building one of nine “longtermism-relevant aptitudes”: organization building/running/boosting, political influence, research on core longtermist questions, communication, entrepreneurship, community building, software engineering, information security, and work in academia.\n* Open Phil’s Joe Carlsmith [argues](https://www.openphilanthropy.org/brain-computation-report) that with the right software, 10^13–10^17 FLOP/s is likely enough (or more than enough) “to match the human brain’s task-performance”, with 10^15 FLOP/s “more likely than not” sufficient.\n* Katja Grace [discusses her work at AI Impacts](https://www.lesswrong.com/posts/xbABZRxoSTAnsf8os) on Daniel Filan’s AI X-Risk Podcast.\n* Chris Olah of Anthropic discusses [what the hell is going on inside neural networks](https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/) on the 80,000 Hours Podcast.\n* Daniel Kokotajlo argues that the effective altruism community should [permanently 
stop using the term “outside view”](https://forum.effectivealtruism.org/posts/wYpARcC4WqMsDEmYR/taboo-outside-view) and “use more precise, less confused concepts instead.”\n\n\n\nThe post [August 2021 Newsletter](https://intelligence.org/2021/08/31/august-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-08-31T22:22:54Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "7b3b74569e877f701b9a335209d94544", "title": "July 2021 Newsletter", "url": "https://intelligence.org/2021/08/03/july-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n\n\n\n* MIRI researcher Evan Hubinger discusses learned optimization, interpretability, and homogeneity in takeoff speeds [on the Inside View podcast](https://www.lesswrong.com/posts/NFfZsWrzALPdw54NL).\n* Scott Garrabrant releases part three of \"[Finite Factored Sets](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr)\", on [conditional orthogonality](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/hA6z9s72KZDYpuFhq).\n* UC Berkeley's Daniel Filan provides examples of conditional orthogonality in finite factored sets: [1](https://www.lesswrong.com/posts/qGjCt4Xq83MBaygPx/a-simple-example-of-conditional-orthogonality-in-finite), [2](https://www.lesswrong.com/posts/GFGNwCwkffBevyXR2/a-second-example-of-conditional-orthogonality-in-finite).\n* Abram Demski proposes [factoring the alignment problem](https://www.lesswrong.com/posts/vayxfTSQEDtwhPGpW) into \"outer alignment\" / \"on-distribution alignment\", \"inner robustness\" / \"capability robustness\", and \"objective robustness\" / \"inner alignment\".\n* MIRI senior researcher Eliezer Yudkowsky [summarizes](https://twitter.com/ESYudkowsky/status/1405580521237745665) \"the real core of the argument for 'AGI risk' (AGI ruin)\" as \"appreciating the power of intelligence enough to realize that getting superhuman intelligence wrong, *on the first try*, will kill 
you *on that first try*, not let you learn and try again\".\n\n\n#### News and links\n\n\n\n\n\n* From DeepMind: \"[generally capable agents emerge from open-ended play](https://www.lesswrong.com/posts/mTGrrX8SZJ2tQDuqz/deepmind-generally-capable-agents-emerge-from-open-ended)\".\n* DeepMind’s safety team summarizes their work to date on [causal influence diagrams](https://www.lesswrong.com/posts/Cd7Hw492RqooYgQAS).\n* [Another (outer) alignment failure story](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji) is [similar to](https://www.lesswrong.com/posts/7qhtuQLCCvmwCPfXK/ama-paul-christiano-alignment-researcher?commentId=oj4rm8937fyJzFwjL) Paul Christiano's best guess at how AI might cause human extinction.\n* Christiano discusses a \"special case of alignment: solve alignment [when decisions are 'low stakes'](https://www.lesswrong.com/posts/TPan9sQFuPP6jgEJo)\".\n* Andrew Critch argues that power dynamics are \"[a blind spot or blurry spot](https://www.lesswrong.com/posts/WjsyEBHgSstgfXTvm/power-dynamics-as-a-blind-spot-or-blurry-spot-in-our)\" in the collective world-modeling of the effective altruism and rationality communities, \"especially around AI\".\n\n\n\nThe post [July 2021 Newsletter](https://intelligence.org/2021/08/03/july-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-08-04T03:26:55Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d9c60291a4813d7cf7b45a923e696b4d", "title": "June 2021 Newsletter", "url": "https://intelligence.org/2021/07/01/june-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "Our big news this month is Scott Garrabrant's finite factored sets, one of MIRI's largest results to date.\n\n\nFor most people, the best introductory resource on FFS is likely Scott’s [Topos talk/transcript](https://www.lesswrong.com/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets). 
Scott is also in the process of posting a longer, more mathematically dense introduction in multiple parts: [part 1](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/sZa5LQg6rrWgMR4Jx), [part 2](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/yT7QdN2wEubR8exAH).\n\n\nScott has also discussed factored sets with Daniel Filan [on the AI X-Risk Podcast](https://www.lesswrong.com/posts/s4FNjvrJG6zmYdBuG/axrp-episode-9-finite-factored-sets-with-scott-garrabrant), and in a [LessWrong talk/transcript](https://www.lesswrong.com/posts/6t9F5cS3JjtSspbAZ/finite-factored-sets-lw-transcript-with-running-commentary).\n\n\n\n#### Other MIRI updates\n\n\n\n\n\n* On MIRI researcher Abram Demski’s view, the core inner alignment problem is the absence of robust safety arguments “in a case where we might naively expect it. We *don't know how to rule out* the presence of (misaligned) mesa-optimizers.” Abram advocates [a more formal approach to the problem](https://www.lesswrong.com/posts/a7jnbtoKFyvu5qfkd): \n\n\n> Most of the work on inner alignment so far has been informal or semi-formal (with the notable exception of a little work on minimal circuits). I feel this has resulted in some misconceptions about the problem. I want to write up a large document clearly defining the formal problem and detailing some formal directions for research. Here, I outline my intentions, inviting the reader to provide feedback and *point me to any formal work or areas of potential formal work* which should be covered in such a document.\n> \n>\n* Mark Xu writes [An Intuitive Guide to Garrabrant Induction](https://www.lesswrong.com/posts/y5GftLezdozEHdXkL/an-intuitive-guide-to-garrabrant-induction) (a.k.a. 
logical induction).\n* MIRI research associate Ramana Kumar has formalized the ideas in Scott Garrabrant’s [Cartesian Frames](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT) sequence in higher-order logic, “[including machine verified proofs of all the theorems](https://github.com/deepmind/cartesian-frames)”.\n* Independent researcher Alex Flint writes on [probability theory and logical induction as lenses](https://www.lesswrong.com/posts/Zd5Bsra7ar2pa3bwS) and on [gradations of inner alignment obstacles](https://lesswrong.com/posts/pTm6aEvmepJEA5cuK).\n* I (Rob) asked 44 people working on long-term AI risk about the level of existential risk from AI ([EA Forum link](https://forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results), [LW link](https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results)). Responses were all over the map (with MIRI more pessimistic than most organizations). The mean respondent’s probability of existential catastrophe from “AI systems not doing/optimizing what the people deploying them wanted/intended” was ~40%, median 30%. (See also the independent [survey](https://www.lesswrong.com/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios) by Clarke, Carlier, and Schuett.)\n* MIRI recently spent some time seriously evaluating whether to move out of the Bay Area. We’ve now decided to stay in the Bay. 
For more details, see MIRI board member Blake Borgeson’s [update](https://www.lesswrong.com/posts/SgszmZwrDHwG3qurr/miri-location-optimization-and-related-topics-discussion).\n\n\n\n#### News and links\n\n\n\n\n\n* Dario and Daniela Amodei, formerly at OpenAI, have launched a new organization, [Anthropic](https://www.anthropic.com/), with a [goal](https://www.anthropic.com/news/announcement) of doing “computationally-intensive research to develop large-scale AI systems that are steerable, interpretable, and robust”.\n* Jonas Vollmer writes that the Long-Term Future Fund and the Effective Altruism Infrastructure Fund are now [looking for grant applications](https://forum.effectivealtruism.org/posts/oz4ZWh6xpgFheJror/you-can-now-apply-to-ea-funds-anytime-ltff-and-eaif-only): \"We fund student scholarships, career exploration, local groups, entrepreneurial projects, academic teaching buy-outs, top-up funding for poorly paid academics, and many other things. We can make anonymous grants without public reporting. We will consider grants as low as $1,000 or as high as $500,000 (or more in some cases). As a reminder, EA Funds [is more flexible than you might think](https://forum.effectivealtruism.org/posts/caayRw2pgNLtqt9Pz/ea-funds-is-more-flexible-than-you-might-think).\" Going forward, these two funds will accept applications at any time, rather than having distinct grant rounds. 
You can [apply here](https://funds.effectivealtruism.org/apply-for-funding).\n\n\n\nThe post [June 2021 Newsletter](https://intelligence.org/2021/07/01/june-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-07-02T00:04:17Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "c90b68581c09dd13a4300030f4847ff1", "title": "Finite Factored Sets", "url": "https://intelligence.org/2021/05/23/finite-factored-sets/", "source": "miri", "source_type": "blog", "text": "This is the edited transcript of a talk introducing finite factored sets. For most readers, it will probably be the best starting point for learning about factored sets.\nVideo:\n\n\n\n\n\n\n\n(Lightly edited) slides: \n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 1, Title Slides)   ·   ·   ·   **Finite Factored Sets**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets01.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets01.jpg)\n\n\n \n\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 1, Motivation)   ·   ·   ·   **Some Context**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets02.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets02.jpg)**Scott:**So I want to start with some context. 
For people who are not already familiar with my work:\n\n\n* My main motivation is to reduce existential risk.\n* I try to do this by trying to figure out how to [align](https://intelligence.org/2017/04/12/ensuring/) advanced artificial intelligence.\n* I try to do *this* by trying to become [less confused](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2) about intelligence and optimization and agency and various things in that cluster.\n* My main strategy here is to develop a theory of agents that are [embedded](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh) in the environment that they’re optimizing. I think there are a lot of open hard problems around doing this.\n* This leads me to do a bunch of weird math and philosophy. This talk is going to be an example of some weird math and philosophy.\n\n\nFor people who *are* already familiar with my work, I just want to say that according to my personal aesthetics, the subject of this talk is about as exciting as [Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/), which is to say I’m really excited about it. And I’m really excited about this audience; I’m excited to give this talk right now.\n\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 1, Table of Contents)   ·   ·   ·   **Factoring the Talk**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets03.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets03.jpg)This talk can be split into 2 parts:\n\n\nPart 1, a short pure-math combinatorics talk.\n\n\nPart 2, a more applied and philosophical main talk.\n\n\nThis talk can also be split into *5* parts differentiated by color: Title Slides, Motivation, Table of Contents, Main Body, and Examples. 
Combining these gives us 10 parts (some of which are not contiguous):\n\n\n \n\n\n\n\n\n| | | |\n| --- | --- | --- |\n| | **Part 1: Short Talk** | **Part 2: The Main Talk** |\n| Title Slides | [Finite Factored Sets](https://intelligence.org/2021/05/23/finite-factored-sets/#1a) | [The Main Talk (It’s About Time)](https://intelligence.org/2021/05/23/finite-factored-sets/#2a) |\n| Motivation | [Some Context](https://intelligence.org/2021/05/23/finite-factored-sets/#1m) | [The Pearlian Paradigm](https://intelligence.org/2021/05/23/finite-factored-sets/#2m) |\n| Table of Contents | [Factoring the Talk](https://intelligence.org/2021/05/23/finite-factored-sets/#1t) | [We Can Do Better](https://intelligence.org/2021/05/23/finite-factored-sets/#2t) |\n| Main Body | [Set Partitions](https://intelligence.org/2021/05/23/finite-factored-sets/#1b-set-partitions), etc. | [Time and Orthogonality](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-time-and-orthogonality), etc. |\n| Examples | [Enumerating Factorizations](https://intelligence.org/2021/05/23/finite-factored-sets/#1e) | [Game of Life](https://intelligence.org/2021/05/23/finite-factored-sets/#2e-game-of-life), etc. |\n\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 1, Main Body)   ·   ·   ·   **Set Partitions**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets04.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets04.jpg)All right. Here’s some background math:\n\n\n* A **partition** of a set \\(S\\) is a set \\(X\\) of non-empty subsets of \\(S\\), called **parts**, such that for each \\(s∈S\\) there exists a unique part in \\(X\\) that contains \\(s\\).\n* Basically, a partition of \\(S\\) is a way to view \\(S\\) as a disjoint union. 
We have parts that are disjoint from each other, and they union together to form \\(S\\).\n* We’ll write \\(\\mathrm{Part}(S)\\) for the set of all partitions of \\(S\\).\n* We’ll say that a partition \\(X\\) is **trivial** if it has exactly one part.\n* We’ll use bracket notation, \\([s]\\_{X}\\), to denote the unique part in \\(X\\) containing \\(s\\). So this is like the equivalence class of a given element.\n* And we’ll use the notation \\(s∼\\_{X}t\\) to say that two elements \\(s\\) and \\(t\\) are in the same part in \\(X\\).\n\n\nYou can also think of partitions as being like variables on your set \\(S\\). Viewed in that way, the values of a partition \\(X\\) correspond to which part an element is in.\n\n\nOr you can think of \\(X\\) as a *question* that you could ask about a generic element of \\(S\\). If I have an element of \\(S\\) and it’s hidden from you and you want to ask a question about it, each possible question corresponds to a partition that splits up \\(S\\) according to the different possible answers.\n\n\nWe’re also going to use the [lattice structure](https://en.wikipedia.org/wiki/Partition_of_a_set#Refinement_of_partitions) of partitions:\n\n\n* We’ll say that \\(X \\geq\\_S Y\\) (\\(X\\) is finer than \\(Y\\), and \\(Y\\) is coarser than \\(X\\)) if \\(X\\) makes all of the distinctions that \\(Y\\) makes (and possibly some more distinctions), i.e., if for all \\(s,t \\in S\\), \\(s \\sim\\_X t\\) implies \\(s \\sim\\_Y t\\). You can break your set \\(S\\) into parts, \\(Y\\), and then break it into smaller parts, \\(X\\).\n* \\(X\\vee\\_S Y\\) (the common refinement of \\(X\\) and \\(Y\\) ) is the coarsest partition that is finer than both \\(X\\) and \\(Y\\) . This is the unique partition that makes all of the distinctions that either \\(X\\) or \\(Y\\) makes, and no other distinctions. This is well-defined, which I’m not going to show here.\n\n\nHopefully this is mostly background. 
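The background definitions above are easy to make concrete. Here is a minimal Python sketch (my own illustration, not part of the original talk), representing a partition of \(S\) as a set of frozensets (the parts), with the finer-than relation and the common refinement:

```python
# Partitions of a finite set S, represented as sets of frozensets (the parts).
# A sketch of the background definitions above; not part of the original talk.

def is_partition(X, S):
    """X is a set of non-empty parts, and each s in S lies in exactly one part."""
    return (all(p for p in X)                      # no empty parts
            and sum(len(p) for p in X) == len(S)   # parts are disjoint...
            and set().union(*X) == set(S))         # ...and cover S

def part_of(s, X):
    """[s]_X: the unique part in X containing s."""
    return next(p for p in X if s in p)

def finer(X, Y):
    """X >= Y: X makes every distinction Y makes (each X-part sits inside a Y-part)."""
    return all(any(px <= py for py in Y) for px in X)

def common_refinement(X, Y):
    """X v Y: the coarsest partition finer than both, via pairwise intersections."""
    return {px & py for px in X for py in Y if px & py}

S = {0, 1, 2, 3}
small_large = {frozenset({0, 1}), frozenset({2, 3})}
even_odd = {frozenset({0, 2}), frozenset({1, 3})}

refined = common_refinement(small_large, even_odd)
# "small/large" and "even/odd" jointly distinguish everything, so their
# common refinement is the discrete partition.
assert refined == {frozenset({s}) for s in S}
assert finer(refined, small_large) and finer(refined, even_odd)
```

Viewing partitions as questions, `common_refinement` is the question that asks both questions at once, which is why intersecting parts pairwise is the right construction.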
Now I want to show something new.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 1, Main Body)   ·   ·   ·   **Set Factorizations**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets05.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets05.jpg)A **factorization** of a set \\(S\\) is a set \\(B\\) of nontrivial partitions of \\(S\\), called **factors**, such that for each way of choosing one part from each factor in \\(B\\), there exists a unique element of \\(S\\) in the intersection of those parts.\n\n\nSo this is maybe a little bit dense. My short tagline of this is: “A factorization of \\(S\\) is a way to view \\(S\\) as a product, in the exact same way that a partition was a way to view \\(S\\) as a disjoint union.”\n\n\nIf you take one definition away from this first talk, it should be the definition of factorization. I’ll try to explain it from a bunch of different angles to help communicate the concept.\n\n\nIf \\(B=\\{b\\_0,\\dots,b\\_{n}\\}\\) is a factorization of \\(S\\) , then there exists a bijection between \\(S\\) and \\(b\\_0\\times\\dots\\times b\\_{n}\\) given by \\(s\\mapsto([s]\\_{b\\_0},\\dots,[s]\\_{b\\_{n}})\\). This bijection comes from sending an element of \\(S\\) to the tuple consisting only of parts containing that element. And as a consequence of this bijection, \\(|S|=\\prod\\_{b\\in B} |b|\\).\n\n\nSo we’re really viewing \\(S\\) as a product of these individual factors, with no additional structure.\n\n\nAlthough we won’t prove this here, something else you can verify about factorizations is that all of the parts in a factor have to be of the same size.\n\n\nWe’ll write \\(\\mathrm{Fact}(S)\\) for the set of all factorizations of \\(S\\), and we’ll say that a **finite factored set** is a pair \\((S,B)\\), where \\(S\\) is a finite set and \\(B \\in \\mathrm{Fact}(S)\\).\n\n\nNote that the relationship between \\(S\\) and \\(B\\) is somewhat loopy. 
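The factorization condition can also be checked by brute force: for each way of choosing one part from each factor, the intersection must contain exactly one element of \(S\). A minimal Python sketch (my own illustration, not part of the original talk):

```python
from itertools import product

# Check the factorization condition: B is a set of nontrivial partitions of S
# such that every choice of one part per factor pins down exactly one element.
# My own illustration, not from the talk; partitions are frozensets of frozensets.

def is_factorization(B, S):
    factors = [list(X) for X in B]
    if any(len(X) == 1 for X in factors):   # a factor must not have exactly one part
        return False
    for choice in product(*factors):        # one part from each factor
        if len(set(S).intersection(*choice)) != 1:
            return False
    return True

S = {0, 1, 2, 3}
small_large = frozenset({frozenset({0, 1}), frozenset({2, 3})})
even_odd = frozenset({frozenset({0, 2}), frozenset({1, 3})})
discrete = frozenset(frozenset({s}) for s in S)

assert is_factorization({small_large, even_odd}, S)   # "small/large" x "even/odd"
assert is_factorization({discrete}, S)                # the trivial factorization
# Two partitions that don't mesh: some joint choices are empty.
lopsided = frozenset({frozenset({0}), frozenset({1, 2, 3})})
assert not is_factorization({small_large, lopsided}, S)
```

The check is exponential in the number of factors, which is fine at this scale; it is meant only to make the definition concrete, not to be an efficient test.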
If I want to define a factored set, there are two strategies I could use. I could first introduce the \\(S\\), and break it into factors. Alternatively, I could first introduce the \\(B\\). *Any* time I have a finite collection of finite sets \\(B\\), I can take their product and thereby produce an \\(S\\), modulo the degenerate case where some of the sets are empty. So \\(S\\) can just be the product of a finite collection of arbitrary finite sets.\n\n\nTo my eye, this notion of factorization is extremely natural. It’s basically the multiplicative analog of a set partition. And I really want to push that point, so here’s another attempt to push that point:\n\n\n\n\n\n| | |\n| --- | --- |\n| A **partition** is a set \\(X\\) of non-empty\nsubsets of \\(S\\) such that the obvious\nfunction from the disjoint union of\nthe elements of \\(X\\) to \\(S\\) is a bijection. | A **factorization** is a set \\(B\\) of non-trivial\npartitions of \\(S\\) such that the obvious\nfunction to the product of\nthe elements of \\(B\\) from \\(S\\) is a bijection. |\n\n\n\nI can take a slightly modified version of the partition definition from before and dualize a whole bunch of the words, and get out the set factorization definition.\n\n\nHopefully you’re now kind of convinced that this is an extremely natural notion.\n\n\n \n\n\n\n\n\n| |\n| --- |\n| **Andrew Critch:** Scott, in one sense, you’re treating “subset” as dual to partition, which I think is valid. And then in another sense, you’re treating “factorization” as dual to partition. Those are both valid, but maybe it’s worth talking about the two kinds of duality.\n**Scott:**Yeah. I think what’s going on there is that there are two ways to view a partition. You can view a partition as “that which is dual to a subset,” and you can also view a partition as something that is built up out of subsets. 
These two different views do different things when you dualize.\n**Ramana Kumar:**I was just going to check: You said you can start with an arbitrary \\(B\\) and then build the \\(S\\) from it. It can be literally any set, and then there’s always an \\(S\\)…\n**Scott:**If none of them are empty, yes, you could just take a collection of sets that are kind of arbitrary elements. And you can take their product, and you can identify with each of the elements of a set the subset of the product that projects on to that element.\n**Ramana Kumar:**Ah. So the \\(S\\) in that case will just be tuples.\n**Scott:**That’s right.\n**Brendan Fong:**Scott, given a set, I find it very easy to come up with partitions. But I find it less easy to come up with factorizations. Do you have any tricks for…?\n**Scott:**For that, I should probably just go on to the examples.\n**Joseph Hirsh:**Can I ask one more thing before you do that? You allow factors to have one element in them?\n**Scott:**I said “nontrivial,” which means it does not have one element.\n**Joseph Hirsh:**“Nontrivial” means “not have one element, and not have no elements”?\n**Scott:**No, the empty set has a partition (with no parts), and I will call that nontrivial. But the empty set thing is not that critical. |\n\n\n\n \n\n\nI’m now going to move on to some examples.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 1, Examples)   ·   ·   ·   **Enumerating Factorizations**\n\n\n  \n\nExercise! 
What are the factorizations of the set \\(\\{0,1,2,3\\}\\) ?\n\n\nSpoiler space:\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n.\n\n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets06.jpg)](https://intelligence.org/wp-content/uploads/2021/07/FiniteFactoredSets06.jpg)First, we’re going to have a kind of trivial factorization:\n\n\n\\(\\begin{split} \\{ \\ \\ \\{ \\{0\\},\\{1\\},\\{2\\},\\{3\\} \\} \\ \\ \\} \\end{split} \\begin{split} \\ \\ \\ \\ \\underline{\\ 0 \\ \\ \\ 1 \\ \\ \\ 2 \\ \\ \\ 3 \\ } \\end{split}\\)\n\n\nWe only have one factor, and that factor is the discrete partition. You can do this for any set, as long as your set has at least two elements.\n\n\nRecall that in the definition of factorization, we wanted that for each way of choosing one part from each factor, we had a unique element in the intersection of those parts. Since we only have one factor here, satisfying the definition just requires that for each way of choosing one part from the discrete partition, there exists a unique element that is in that part.\n\n\nAnd then we want some less trivial factorizations. In order to have a factorization, we’re going to need some partitions. And the product of the cardinalities of our partitions is going to have to equal the cardinality of our set \\(S\\) , which is 4.\n\n\nThe only way to express 4 as a nontrivial product is to express it as \\(2 \\times 2\\) . Thus we’re looking for factorizations that have 2 factors, where each factor has 2 parts.\n\n\nWe noted earlier that all of the parts in a factor have to be of the same size. So we’re looking for 2 partitions that each break our 4-element set into 2 sets of size 2.\n\n\nSo if I’m going to have a factorization of \\(\\{0,1,2,3\\}\\) that isn’t this trivial one, I’m going to have to pick 2 partitions of my 4-element set that each break the set into 2 parts of size 2. 
And there are 3 partitions of a 4-element set that break it up into 2 parts of size 2. For each way of choosing a pair of these 3 partitions, I’m going to get a factorization.\n\n\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,1\\}, \\{2,3\\}\\}, \\ \\\\ \\{\\{0,2\\}, \\{1,3\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 1 \\\\ \\hline 2 & 3 \\\\ \\hline \\end{array} \\end{split}\\)\n\n\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,1\\}, \\{2,3\\}\\}, \\ \\\\ \\{\\{0,3\\}, \\{1,2\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 1 \\\\ \\hline 3 & 2 \\\\ \\hline \\end{array} \\end{split}\\)\n\n\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,2\\}, \\{1,3\\}\\}, \\ \\\\ \\{\\{0,3\\}, \\{1,2\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 2 \\\\ \\hline 3 & 1 \\\\ \\hline \\end{array} \\end{split}\\)\n\n\nSo there will be 4 factorizations of a 4-element set.\n\n\nIn general you can ask, “How many factorizations are there of a finite set of size \\(n\\) ?”. Here’s a little chart showing the answer for \\(n \\leq 25\\):\n\n\n\n\n\n| | |\n| --- | --- |\n| \\(|S|\\) | \\(|\\mathrm{Fact}(S)|\\) |\n| 0 | 1 |\n| 1 | 1 |\n| 2 | 1 |\n| 3 | 1 |\n| 4 | 4 |\n| 5 | 1 |\n| 6 | 61 |\n| 7 | 1 |\n| 8 | 1681 |\n| 9 | 5041 |\n| 10 | 15121 |\n| 11 | 1 |\n| 12 | 13638241 |\n| 13 | 1 |\n| 14 | 8648641 |\n| 15 | 1816214401 |\n| 16 | 181880899201 |\n| 17 | 1 |\n| 18 | 45951781075201 |\n| 19 | 1 |\n| 20 | 3379365788198401 |\n| 21 | 1689515283456001 |\n| 22 | 14079294028801 |\n| 23 | 1 |\n| 24 | 4454857103544668620801 |\n| 25 | 538583682060103680001 |\n\n\n\nYou’ll notice that if \\(n\\) is prime, there will be a single factorization, which hopefully makes sense. 
This is the factorization that only has one factor.\n\n\nA very surprising fact to me is that this sequence did not show up on [OEIS](https://oeis.org/), which is this database that combinatorialists use to check whether or not their sequence has been studied before, and to see connections to other sequences.\n\n\nTo me, this just feels like the multiplicative version of the [Bell numbers](https://oeis.org/A000110). The Bell numbers count how many partitions there are of a set of size \\(n\\). It’s sequence number 110 on OEIS out of over 300,000; and this sequence just doesn’t show up at all, even when I tweak it and delete the degenerate cases and so on.\n\n\nI am very confused by this fact. To me, factorizations seem like an extremely natural concept, and it seems to me like it hasn’t really been studied before.\n\n\n \n\n\nThis is the end of my short combinatorics talk.\n\n\n \n\n\n\n\n\n| |\n| --- |\n| **Ramana Kumar:**If you’re willing to do it, I’d appreciate just stepping through one of the examples of the factorizations and the definition, because this is pretty new to me.\n**Scott:**Yeah. Let’s go through the first nontrivial factorization of \\(\\{0,1,2,3\\}\\):\n\\(\\begin{split} \\begin{Bmatrix} \\ \\ \\ \\{\\{0,1\\}, \\{2,3\\}\\}, \\ \\\\ \\{\\{0,2\\}, \\{1,3\\}\\} \\end{Bmatrix} \\end{split} \\begin{split} \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{array} { |c|c|c|c| } \\hline 0 & 1 \\\\ \\hline 2 & 3 \\\\ \\hline \\end{array} \\end{split}\\)\nIn the definition, I said a factorization should be a set of partitions such that for each way of choosing one part from each of the partitions, there will be a unique element in the intersection of those parts.\nHere, I have a partition that’s separating the small numbers from the large numbers: \\(\\{\\{0,1\\}, \\{2,3\\}\\}\\). 
And I also have a partition that’s separating the even numbers from the odd numbers: \\(\\{\\{0,2\\}, \\{1,3\\}\\}\\).\nAnd the point is that for each way of choosing either “small” or “large” and also choosing “even” or “odd”, there will be a unique element of \\(S\\) that is the conjunction of these two choices.\nIn the other two nontrivial factorizations, I replace either “small and large” or “even and odd” with an “inner and outer” distinction.\n**David Spivak:**For partitions and for many things, if I know the partitions of a set \\(A\\) and the partitions of a set \\(B\\) , then I know some partitions of \\(A+B\\) (the disjoint union) or I know some partitions of \\(A \\times B\\) . Do you know any facts like that for factorizations?\n**Scott:**Yeah. If I have two factored sets, I can get a factored set over their product, which sort of disjoint-unions the two collections of factors. For the additive thing, you’re not going to get anything like that because prime sets don’t have any nontrivial factorizations. |\n\n\n\n \n\n\nAll right. I think I’m going to move on to the main talk.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Title Slides)   ·   ·   ·   **The Main Talk (It’s About Time)**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets07.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets07.jpg)\n\n\n \n\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Motivation)   ·   ·   ·   **The Pearlian Paradigm**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets08.jpg)](https://intelligence.org/wp-content/uploads/2021/07/FiniteFactoredSets08.jpg)We can’t talk about time without talking about [Pearlian causal inference](https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs/p/hzuSDMx7pd2uxFc5w). I want to start by saying that I think the Pearlian paradigm is great. 
This buys me some crackpot points, but I’ll say it’s the best thing to happen to our understanding of time since Einstein.\n\n\nI’m not going to go into all the details of Pearl’s paradigm here. My talk will not be technically dependent on it; it’s here for motivation.\n\n\nGiven a collection of variables and a joint probability distribution over those variables, Pearl can infer causal/temporal relationships between the variables. (In this talk I’m going to use “causal” and “temporal” interchangeably, though there may be more interesting things to say here philosophically.)\n\n\nPearl can infer temporal data from statistical data, which is going against the adage that “correlation does not imply causation.” It’s like Pearl is taking the combinatorial structure of your correlation and using that to infer causation, which I think is just really great.\n\n\n \n\n\n\n\n\n| |\n| --- |\n| **Ramana Kumar:** I may be wrong, but I think this is false. Or I think that that’s not all Pearl needs—just the joint distribution over the variables. Doesn’t he also make use of intervention distributions?\n**Scott:**In the theory that is described in chapter two of the book *Causality*, he’s not really using other stuff. Pearl builds up this bigger theory elsewhere. But you have some strong ability, maybe assuming simplicity or whatever (but not assuming you have access to extra information), to take a collection of variables and a joint distribution over those variables, and infer causation from correlation.\n**Andrew Critch:**Ramana, it depends a lot on the structure of the underlying causal graph. For some causal graphs, you can actually recover them uniquely with no interventions. And only assumptions with zero-measure exceptions are needed, which is really strong.\n**Ramana Kumar:**Right, but then the information you’re using is the graph.\n**Andrew Critch:**No, you’re not. Just the joint distribution.\n**Ramana Kumar:**Oh, okay. 
Sorry, go ahead.\n**Andrew Critch:**There exist causal graphs with the property that if nature is generated by that graph and you don’t know it, and then you look at the joint distribution, you will infer with probability 1 that nature was generated by that graph, without having done any interventions.\n**Ramana Kumar:**Got it. That makes sense. Thanks.\n**Scott:**Cool. |\n\n\n\n \n\n\nI am going to (a little bit) go against this, though. I’m going to claim that Pearl *is* kind of cheating when making this inference. The thing I want to point out is that in the sentence “Given a collection of variables and a joint probability distribution over those variables, Pearl can infer causal/temporal relationships between the variables.”, the words “Given a collection of variables” are actually hiding a lot of the work.\n\n\nThe emphasis is usually put on the joint probability distribution, but Pearl is not inferring temporal data from statistical data alone. He is inferring temporal data from statistical data **and factorization data**: how the world is broken up into these variables.\n\n\nI claim that this issue is also entangled with a failure to adequately handle abstraction and determinism. To point at that a little bit, one could do something like say:\n\n\n“Well, what if I take the variables that I’m given in a Pearlian problem and I just forget that structure? I can just take the product of all of these variables that I’m given, and consider the space of all partitions on that product of variables that I’m given; and each one of those partitions will be its own variable. 
And then I can try to do Pearlian causal inference on this big set of all the variables that I get by forgetting the structure of variables that were given to me.”\n\n\nAnd the problem is that when you do that, you have a bunch of things that are deterministic functions of each other, and you can’t actually infer stuff using the Pearlian paradigm.\n\n\nSo in my view, this cheating is very entangled with the fact that Pearl’s paradigm isn’t great for handling abstraction and determinism.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Table of Contents)   ·   ·   ·   **We Can Do Better**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets09.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets09.jpg)The main thing we’ll do in this talk is we’re going to introduce an alternative to Pearl that does not rely on factorization data, and that therefore works better with abstraction and determinism.\n\n\nWhere Pearl was given a collection of variables, we are going to just consider all partitions of a given set. Where Pearl infers a directed acyclic graph, we’re going to infer a finite factored set.\n\n\nIn the Pearlian world, we can look at the graph and read off properties of time and orthogonality/independence. A directed path between nodes corresponds to one node being before the other, and two nodes are independent if they have no common ancestor. Similarly, in our world, we will be able to read time and orthogonality off of a finite factored set.\n\n\n(Orthogonality and independence are pretty similar. I’ll use the word “orthogonality” when I’m talking about a combinatorial notion, and I’ll use “independence” when I’m talking about a probabilistic notion.)\n\n\nIn the Pearlian world, *d*-separation, which you can read off of the graph, corresponds to conditional independence in all probability distributions that you can put on the graph. 
We’re going to have a fundamental theorem that will say basically the same thing: conditional orthogonality corresponds to conditional independence in all probability distributions that we can put on our factored set.\n\n\nIn the Pearlian world, *d*-separation will satisfy the compositional graphoid axioms. In our world, we’re just going to satisfy the compositional semigraphoid axioms. The fifth graphoid axiom is one that I claim you shouldn’t have even wanted in the first place.\n\n\nPearl does causal inference. We’re going to talk about how to do temporal inference using this new paradigm, and infer some very basic temporal facts that Pearl’s approach can’t. (Note that Pearl can also sometimes infer temporal relations that *we* can’t—but only, from our point of view, because Pearl is making additional factorization assumptions.)\n\n\nAnd then we’ll talk about a bunch of applications.\n\n\n\n\n\n| | |\n| --- | --- |\n| **Pearl** | **This Talk** |\n| A Given Collection of Variables | [All Partitions of a Given Set](https://intelligence.org/2021/05/23/finite-factored-sets/#1b-set-partitions) |\n| Directed Acyclic Graph | [Finite Factored Set](https://intelligence.org/2021/05/23/finite-factored-sets/#1b-set-factorizations) |\n| Directed Path Between Nodes | [“Time”](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-time-and-orthogonality) |\n| No Common Ancestor | [“Orthogonality”](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-time-and-orthogonality) |\n| *d*-Separation | [“Conditional Orthogonality”](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-conditional-orthogonality) |\n| Compositional Graphoid | [Compositional Semigraphoid](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-compositional-semigraphoid-axioms) |\n| *d*-Separation ↔ Conditional Independence | [The Fundamental Theorem](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-the-fundamental-theorem) |\n| Causal Inference | [Temporal 
Inference](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-temporal-inference) |\n| Many Many Applications | [Many Many Applications](https://intelligence.org/2021/05/23/finite-factored-sets/#2b-applications-future-work-speculation) |\n\n\n\nExcluding the motivation, table of contents, and example sections, this table also serves as an outline of the two talks. We’ve already talked about set partitions and finite factored sets, so now we’re going to talk about time and orthogonality.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Main Body)   ·   ·   ·   **Time and Orthogonality**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets10.jpg)](https://intelligence.org/wp-content/uploads/2021/07/FiniteFactoredSets10.jpg)I think that if you capture one definition from this second part of the talk, it should be this one. Given a finite factored set as context, we’re going to define the history of a partition.\n\n\nLet \\(F = (S,B)\\) be a finite factored set. And let \\(X, Y \\in \\mathrm{Part}(S)\\) be partitions of \\(S\\).\n\n\nThe **history** of \\(X\\) , written \\(h^F(X)\\), is the smallest set of factors \\(H \\subseteq B\\) such that for all \\(s, t \\in S\\), if \\(s \\sim\\_b t\\) for all \\(b \\in H\\), then \\(s \\sim\\_X t\\).\n\n\nThe history of \\(X\\) , then, is the smallest set of factors \\(H\\) —so, the smallest subset of \\(B\\) —such that if I take an element of \\(S\\) and I hide it from you, and you want to know which part in \\(X\\) it is in, it suffices for me to tell you which part it is in within each of the factors in \\(H\\) .\n\n\nSo the history \\(H\\) is a set of factors of \\(S\\) , and knowing the values of all the factors in \\(H\\) is sufficient to know the value of \\(X\\) , or to know which part in \\(X\\) a given element is going to be in. I’ll give an example soon that will maybe make this a little more clear.\n\n\nWe’re then going to define **time** from history. 
We’ll say that \\(X\\) is **weakly before** \\(Y\\), written \\(X\\leq^F Y\\), if \\(h^F(X)\\subseteq h^F(Y)\\) . And we’ll say that \\(X\\) is **strictly before** \\(Y\\), written \\(X<^F Y\\), if \\(h^F(X)\\subset h^F(Y)\\).\n\n\nOne analogy one could draw is that these histories are like the past light cones of a point in spacetime. When one point is before another point, then the backwards light cone of the earlier point is going to be a subset of the backwards light cone of the later point. This helps show why “before” can be like a subset relation.\n\n\nWe’re also going to define orthogonality from history. We’ll say that two partitions \\(X\\) and \\(Y\\) are **orthogonal**, written \\(X\\perp^FY\\) , if their histories are disjoint: \\(h^F(X)\\cap h^F(Y)=\\{\\}\\).\n\n\nNow I’m going to go through an example.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Examples)   ·   ·   ·   **Game of Life**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets11.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets11.jpg)Let \\(S\\) be the set of all Game of Life computations starting from an \\([-n,n]\\times[-n,n]\\) board.\n\n\nLet \\(R=\\{(r,c,t)\\in\\mathbb{Z}^3\\mid0\\leq t\\leq n,\\ \\) \\(|r|\\leq n-t,\\ |c|\\leq n-t\\}\\) (i.e., cells computable from the initial \\([-n,n]\\times[-n,n]\\) board). For \\((r,c,t)\\in R\\), let \\(\\ell(r,c,t)\\subseteq S\\) be the set of all computations such that the cell at row \\(r\\) and column \\(c\\) is alive at time \\(t\\).\n\n\n(Minor footnote: I’ve done some small tricks here in order to deal with the fact that the Game of Life is normally played on an infinite board. We want to deal with the finite case, and we don’t want to worry about boundary conditions, so we’re only going to look at the cells that are uniquely determined by the initial board. 
This means that the board will shrink over time, but this won’t matter for our example.)\n\n\n\\(S\\) is the set of all Game of Life computations, but since the Game of Life is deterministic, the set of all computations is in bijective correspondence with the set of all initial conditions. So \\(|S|=2^{(2n+1)^{2}}\\) , the number of initial board states.\n\n\nThis also gives us a nice factorization on the set of all Game of Life computations. For each cell, there’s a partition that separates out the Game of Life computations in which that cell is alive at time 0 from the ones where it’s dead at time 0. Our factorization, then, will be a set of \\((2n+1)^{2}\\) binary factors, one for each question of “Was this cell alive or dead at time 0?”.\n\n\nFormally: For \\((r,c,t)\\in R\\), let \\(L\\_{(r,c,t)}=\\{\\ell(r,c,t),S\\setminus \\ell(r,c,t)\\}\\). Let \\(F=(S,B)\\), where \\(B=\\{L\\_{(r,c,0)}\\mid -n\\leq r,c\\leq n\\}\\).\n\n\nThere will also be other partitions on this set of all Game of Life computations that we can talk about. For example, you can take a cell and a time \\(t\\) and say, “Is this cell alive at time \\(t\\)?”, and there will be a partition that separates out the computations where that cell is alive at time \\(t\\) from the computations where it’s dead at time \\(t\\).\n\n\nHere’s an example of that:\n\n\n \n\n\n[![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/e85568ba5ff604ae7a0e06ea9639209c4924b1a3583aae71.png/w_925)](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/e85568ba5ff604ae7a0e06ea9639209c4924b1a3583aae71.png/w_925)\n \n\n\nThe lowest grid shows a section of the initial board state.\n\n\nThe blue, green, and red squares on the upper boards are (cell, time) pairs. 
Each square corresponds to a partition of the set of all Game of Life computations, “Is that cell alive or dead at the given time \\(t\\)?”\n\n\nThe history of that partition is going to be all the cells in the initial board that go into computing whether the cell is alive or dead at time \\(t\\) . It’s everything involved in figuring out that cell’s state. E.g., knowing the state of the nine light-red cells in the initial board always tells you the state of the red cell in the second board.\n\n\nIn this example, the partition corresponding to the red cell’s state is strictly before the partition corresponding to the blue cell. The question of whether the red cell is alive or dead is before the question of whether the blue cell is alive or dead.\n\n\nMeanwhile, the question of whether the red cell is alive or dead is going to be *orthogonal* to the question of whether the green cell is alive or dead.\n\n\nAnd the question of whether the blue cell is alive or dead is *not* going to be orthogonal to the question of whether the green cell is alive or dead, because they intersect on the cyan cells.\n\n\nGeneralizing the point, fix \\(X=L\\_{(r\\_X,c\\_X,t\\_X)}, Y=L\\_{(r\\_Y,c\\_Y,t\\_Y)}\\), where \\((r\\_X,c\\_X,t\\_X),(r\\_Y,c\\_Y,t\\_Y)\\in R\\). Then:\n\n\n* \\(h^{F}(X)=\\{L\\_{(r,c,0)}\\in B\\mid |r\\_X-r|\\leq t\\_X,|c\\_X-c|\\leq t\\_X\\}\\).\n* \\(X \\ ᐸ^{F} \\ Y\\) if and only if \\(t\\_X \\ ᐸ \\ t\\_Y\\) and \\(|r\\_Y-r\\_X|,|c\\_Y-c\\_X|\\leq t\\_Y-t\\_X\\).\n* \\(X \\perp^F Y\\) if and only if \\(|r\\_Y-r\\_X|> t\\_Y+t\\_X\\) or \\(|c\\_Y-c\\_X|> t\\_Y+t\\_X\\).\n\n\nWe can also see that the blue and green cells look *almost* orthogonal. If we condition on the values of the two cyan cells in the intersection of their histories, *then* the blue and green partitions become orthogonal. 
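These light-cone characterizations are simple enough to sanity-check in code. Here is a minimal sketch (my own illustration, not part of the talk; the function names are made up) that encodes a partition \(L\_{(r,c,t)}\) as its triple, hard-codes the claim that its history is its backward light cone, and checks that the formulas for "strictly before" and "orthogonal" agree with the subset and disjointness conditions on histories:

```python
from itertools import product

# Sketch (not from the talk): the light-cone formulas for Game of Life
# partitions. A partition L_(r, c, t) is represented by its triple (r, c, t).

def history(x, n):
    """Initial-board factors in h(L_x): the backward light cone of (r_x, c_x, t_x)."""
    r_x, c_x, t_x = x
    return {(r, c)
            for r in range(-n, n + 1) for c in range(-n, n + 1)
            if abs(r_x - r) <= t_x and abs(c_x - c) <= t_x}

def strictly_before(x, y):
    """t_X < t_Y, and the row/column offsets fit inside the light-cone gap."""
    (r_x, c_x, t_x), (r_y, c_y, t_y) = x, y
    return t_x < t_y and abs(r_y - r_x) <= t_y - t_x and abs(c_y - c_x) <= t_y - t_x

def orthogonal(x, y):
    """The two backward light cones are disjoint."""
    (r_x, c_x, t_x), (r_y, c_y, t_y) = x, y
    return abs(r_y - r_x) > t_y + t_x or abs(c_y - c_x) > t_y + t_x

# Check the formulas against the definitions in terms of histories
# (proper subset for "strictly before", disjointness for "orthogonal"),
# over every pair of (row, column, time) triples in R for a small board.
n = 3
R = [(r, c, t) for t in range(n + 1)
     for r in range(-(n - t), n - t + 1)
     for c in range(-(n - t), n - t + 1)]
for x, y in product(R, repeat=2):
    assert strictly_before(x, y) == (history(x, n) < history(y, n))
    assert orthogonal(x, y) == history(x, n).isdisjoint(history(y, n))
```

Note that `history` here simply hard-codes the talk's claim that the history is the full backward light cone; the loop only confirms that the three formulas are mutually consistent with the subset and disjointness definitions.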
That’s what we’re going to discuss next.\n\n\n \n\n\n\n\n\n| |\n| --- |\n| **David Spivak:**A priori, that would be a gigantic computation—to be able to tell me that you understand the factorization structure of that Game of Life. So what intuition are you using to be able to make that claim, that it has the kind of factorization structure you’re implying there?\n**Scott:** So, I’ve defined the factorization structure.\n**David Spivak:**You gave us a certain factorization already. So somehow you have a very good intuition about *history*, I guess. Maybe that’s what I’m asking about.\n**Scott:** Yeah. So, if I didn’t give you the factorization, there’s this obnoxious number of factorizations that you could put on the set here. And then for the history, the intuition I’m using is: “What do I need to know in order to compute this value?”\nI actually went through and I made little gadgets in Game of Life to make sure I was right here, that every single cell actually could in some situations affect the cells in question. But yeah, the intuition that I’m working from is mostly about the information in the computation. It’s “Can I construct a situation where if only I knew this fact, I would be able to compute what this value is? And if I can’t, then it can take two different values.”\n**David Spivak:**Okay. I think deriving that intuition from the definition is something I’m missing, but I don’t know if we have time to go through that.\n**Scott:** Yeah, I think I’m not going to here. |\n\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Main Body)   ·   ·   ·   **Conditional Orthogonality**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets12.jpg)](https://intelligence.org/wp-content/uploads/2021/07/FiniteFactoredSets12.jpg)So, just to set your expectations: Every time I explain Pearlian causal inference to someone, they say that *d*-separation is the thing they can’t remember. 
*d*-separation is a much more complicated concept than “directed paths between nodes” and “nodes without any common ancestors” in Pearl; and similarly, conditional orthogonality will be much more complicated than time and orthogonality in our paradigm. Though I do think that conditional orthogonality has a much simpler and nicer definition than *d*-separation.\n\n\nWe’ll begin with the definition of conditional history. We again have a fixed finite set as our context. Let \\(F=(S,B)\\) be a finite factored set, let \\(X,Y,Z\\in\\text{Part}(S)\\), and let \\(E\\subseteq S\\).\n\n\nThe conditional history of \\(X\\) given \\(E\\), written \\(h^F(X|E)\\), is the smallest set of factors \\(H\\subseteq B\\) satisfying the following two conditions:\n\n\n* For all \\(s,t\\in E\\), if \\(s\\sim\\_{b} t\\) for all \\(b\\in H\\), then \\(s\\sim\\_X t\\).\n* For all \\(s,t\\in E\\) and \\(r\\in S\\), if \\(r\\sim\\_{b\\_0} s\\) for all \\(b\\_0\\in H\\) and \\(r\\sim\\_{b\\_1} t\\) for all \\(b\\_1\\in B\\setminus H\\), then \\(r\\in E\\).\n\n\nThe first condition is much like the condition we had in our definition of history, except we’re going to make the assumption that we’re in \\(E\\). So the first condition is: if all you know about an object is that it’s in \\(E\\), and you want to know which part it’s in within \\(X\\), it suffices for me to tell you which part it’s in within each factor in the history \\(H\\).\n\n\nOur second condition is not actually going to mention \\(X\\). It’s going to be a relationship between \\(E\\) and \\(H\\). 
And it says that if you want to figure out whether an element of \\(S\\) is in \\(E\\), it’s sufficient to parallelize and ask two questions:\n\n\n* “If I only look at the values of the factors in \\(H\\), is ‘this point is in \\(E\\)’ compatible with that information?”\n* “If I only look at the values of the factors in \\(B\\setminus H\\) , is ‘this point is in \\(E\\)’ compatible with that information?”\n\n\nIf both of these questions return “yes”, then the point has to be in \\(E\\).\n\n\nI am not going to give an intuition about why this needs to be a part of the definition. I will say that without this second condition, conditional history would not even be well-defined, because it wouldn’t be closed under intersection. And so I wouldn’t be able to take the smallest set of factors in the subset ordering.\n\n\nInstead of justifying this definition by explaining the intuitions behind it, I’m going to justify it by using it and appealing to its consequences.\n\n\nWe’re going to use conditional history to define **conditional orthogonality**, just like we used history to define orthogonality. We say that \\(X\\) and \\(Y\\) are **orthogonal given** \\(E\\subseteq S\\), written \\(X \\perp^{F} Y \\mid E\\), if the history of \\(X\\) given \\(E\\) is disjoint from the history of \\(Y\\) given \\(E\\): \\(h^F(X|E)\\cap h^F(Y|E)=\\{\\}\\).\n\n\nWe say \\(X\\) and \\(Y\\) are **orthogonal given** \\(Z\\in\\text{Part}(S)\\), written \\(X \\perp^{F} Y \\mid Z\\), if \\(X \\perp^{F} Y \\mid z\\) for all \\(z\\in Z\\). So what it means to be orthogonal given a partition is just to be orthogonal given each individual way that the partition might be, each individual part in that partition.\n\n\nI’ve been working with this for a while and it feels pretty natural to me, but I don’t have a good way to push the naturalness of this condition. 
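Since everything in sight is finite, the definition can at least be computed directly. The sketch below (my own illustration, not part of the talk; all names are made up) represents partitions as dicts from elements to part labels, and finds the conditional history of \(X\) given \(E\) by brute force, testing every subset of the factor list against the two conditions above:

```python
from itertools import combinations

# Sketch (not from the talk): brute-force conditional history.
# A partition is a dict mapping each element of S to a part label,
# so "s ~_X t" is just X[s] == X[t]. Factors are referred to by index into B.

def is_valid(H, X, E, B, S):
    """Does the candidate set of factor indices H satisfy both conditions?"""
    in_H = [B[i] for i in H]
    out_H = [B[i] for i in range(len(B)) if i not in H]
    # Condition 1: within E, agreement on every factor in H forces agreement on X.
    for s in E:
        for t in E:
            if all(b[s] == b[t] for b in in_H) and X[s] != X[t]:
                return False
    # Condition 2: if r matches some s in E on H and some t in E on B \ H,
    # then r must itself be in E.
    for s in E:
        for t in E:
            for r in S:
                if (all(b[r] == b[s] for b in in_H)
                        and all(b[r] == b[t] for b in out_H)
                        and r not in E):
                    return False
    return True

def conditional_history(X, E, B, S):
    """The smallest valid H (well-defined: valid sets are closed under intersection)."""
    for k in range(len(B) + 1):
        for H in combinations(range(len(B)), k):
            if is_valid(set(H), X, E, B, S):
                return set(H)

# Two-bit example: S = {00, 01, 10, 11}, factored by the two bits.
S = ["00", "01", "10", "11"]
B = [{s: s[0] for s in S},  # factor 0: the first bit
     {s: s[1] for s in S}]  # factor 1: the second bit
X = {s: s[0] for s in S}    # "What is the first bit?"

print(conditional_history(X, set(S), B, S))        # unconditioned: {0}
print(conditional_history(X, {"00", "11"}, B, S))  # given "the bits match": {0, 1}
```

Note the second call: conditioning on the deterministic event "the bits match" makes the first bit carry information about the second, so the conditional history grows to include both factors. It is condition 2 that rules out the smaller candidate `{0}` there.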
So again, I instead want to appeal to the consequences.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Main Body)   ·   ·   ·   **Compositional Semigraphoid Axioms**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets13.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets13.jpg)Conditional orthogonality satisfies the **compositional semigraphoid axioms**, which means finite factored sets are pretty well-behaved.\n\n\nLet \\(F=(S,B)\\) be a finite factored set, and let \\(X,Y,Z,W\\in \\text{Part}(S)\\) be partitions of \\(S\\). Then:\n\n\n* If \\(X \\perp^{F} Y \\mid Z\\), then \\(Y \\perp^{F} X \\mid Z\\).   (*symmetry*)\n* If \\(X \\perp^{F} (Y\\vee\\_S W) \\mid Z\\), then \\(X \\perp^{F} Y \\mid Z\\) and \\(X \\perp^{F} W \\mid Z\\).   (*decomposition*)\n* If \\(X \\perp^{F} (Y\\vee\\_S W) \\mid Z\\), then \\(X \\perp^{F} Y \\mid (Z\\vee\\_S W)\\).   (*weak union*)\n* If \\(X \\perp^{F} Y \\mid Z\\) and \\(X \\perp^{F} W \\mid (Z\\vee\\_S Y)\\), then \\(X \\perp^{F} (Y\\vee\\_S W) \\mid Z\\).   (*contraction*)\n* If \\(X \\perp^{F} Y \\mid Z\\) and \\(X \\perp^{F} W \\mid Z\\), then \\(X \\perp^{F} (Y\\vee\\_S W) \\mid Z\\).   (*composition*)\n\n\nThe first four properties here make up the semigraphoid axioms, slightly modified because I’m working with partitions rather than sets of variables, so union is replaced with common refinement. There’s another graphoid axiom which we’re not going to satisfy; but I argue that we don’t want to satisfy it, because it doesn’t play well with determinism.\n\n\nThe fifth property here, composition, is maybe one of the most unintuitive, because it’s not exactly satisfied by probabilistic independence.\n\n\nDecomposition and composition act like converses of each other. 
Together, conditioning on \\(Z\\) throughout, they say that \\(X\\) is orthogonal to both \\(Y\\) and \\(W\\) if and only if \\(X\\) is orthogonal to the common refinement of \\(Y\\) and \\(W\\).\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Main Body)   ·   ·   ·   **The Fundamental Theorem**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets14.jpg)](https://intelligence.org/wp-content/uploads/2021/07/FiniteFactoredSets14.jpg)In addition to being well-behaved, I also want to show that conditional orthogonality is pretty powerful. The way I want to do this is by showing that conditional orthogonality exactly corresponds to conditional independence in all probability distributions you can put on your finite factored set. Thus, much like *d*-separation in the Pearlian picture, conditional orthogonality can be thought of as a combinatorial version of probabilistic independence.\n\n\nA **probability distribution on a finite factored set** \\(F=(S,B)\\) is a probability distribution \\(P\\) on \\(S\\) that can be thought of as coming from a bunch of independent probability distributions on each of the factors in \\(B\\). So \\(P(s)=\\prod\\_{b\\in B}P([s]\\_b)\\) for all \\(s\\in S\\).\n\n\nThis effectively means that your probability distribution factors the same way your set factors: the probability of any given element is the product of the probabilities of each of the individual parts that it’s in within each factor.\n\n\nThe **fundamental theorem of finite factored sets** says: Let \\(F=(S,B)\\) be a finite factored set, and let \\(X,Y,Z\\in \\text{Part}(S)\\) be partitions of \\(S\\). Then \\(X \\perp^{F} Y \\mid Z\\) if and only if for all probability distributions \\(P\\) on \\(F\\), and all \\(x\\in X\\), \\(y\\in Y\\), and \\(z\\in Z\\), we have \\(P(x\\cap z)\\cdot P(y\\cap z)= P(x\\cap y\\cap z)\\cdot P(z)\\). 
I.e., \\(X\\) is orthogonal to \\(Y\\) given \\(Z\\) if and only if conditional independence is satisfied across all probability distributions.\n\n\nThis theorem, for me, was a little nontrivial to prove. I had to go through defining certain polynomials associated with the subsets, and then dealing with unique factorization in the space of these polynomials; I think the proof was eight pages or something.\n\n\nThe fundamental theorem allows us to infer orthogonality data from probabilistic data. If I have some empirical distribution, or I have some Bayesian distribution, I can use that to infer some orthogonality data. (We could also imagine orthogonality data coming from other sources.) And then we can use this orthogonality data to get temporal data.\n\n\nSo next, we’re going to talk about how to get temporal data from orthogonality data.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Main Body)   ·   ·   ·   **Temporal Inference**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets15.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets15.jpg)We’re going to start with a finite set \\(\\Omega\\), which is our sample space.\n\n\nOne naive thing that you might think we would try to do is infer a factorization of \\(\\Omega\\). We’re not going to do that because that’s going to be too restrictive. We want to allow for \\(\\Omega\\) to maybe hide some information from us, for there to be some latent structure and such.\n\n\nThere may be some situations that are distinct without being distinct in \\(\\Omega\\). So instead, we’re going to infer a factored set model of \\(\\Omega\\): some other set \\(S\\), and a factorization of \\(S\\), and a function from \\(S\\) to \\(\\Omega\\).\n\n\nA model of \\(\\Omega\\) is a pair \\((F, f)\\), where \\(F=(S,B)\\) is a finite factored set and \\(f:S\\rightarrow \\Omega\\). 
( \\(f\\) need not be injective or surjective.)\n\n\nThen if I have a partition of \\(\\Omega\\), I can send this partition backwards across \\(f\\) and get a unique partition of \\(S\\). If \\(X\\in \\text{Part}(\\Omega)\\), then \\(f^{-1}(X)\\in \\text{Part}(S)\\) is given by \\(s\\sim\\_{f^{-1}(X)}t\\Leftrightarrow f(s)\\sim\\_X f(t)\\).\n\n\nThen what we’re going to do is take a bunch of orthogonality facts about \\(\\Omega\\), and we’re going to try to find a model which captures the orthogonality facts.\n\n\nWe will take as given an **orthogonality database** on \\(\\Omega\\), which is a pair \\(D = (O, N)\\), where \\(O\\) (for “orthogonal”) and \\(N\\) (for “not orthogonal”) are each sets of triples \\((X,Y,Z)\\) of partitions of \\(\\Omega\\). We’ll think of these as rules about orthogonality.\n\n\nWhat it means for a model \\((F,f)\\) to satisfy a database \\(D\\) is:\n\n\n* \\(f^{-1}(X) \\perp^{F} f^{-1}(Y) \\mid f^{-1}(Z)\\) whenever \\((X,Y,Z)\\in O\\), and\n* \\(\\lnot (f^{-1}(X) \\perp^{F} f^{-1}(Y) \\mid f^{-1}(Z))\\) whenever \\((X,Y,Z)\\in N\\).\n\n\nSo we have these orthogonality rules we want to satisfy, and we want to consider the space of all models that are consistent with these rules. And even though there will always be infinitely many models consistent with my database whenever at least one is—you can always just add more information that you then delete with \\(f\\)—we would like to be able to sometimes infer that for all models that satisfy our database, \\(f^{-1}(X)\\) is before \\(f^{-1}(Y)\\).\n\n\nAnd this is what we’re going to mean by inferring time. 
If all of our models \\((F,f)\\) that are consistent with the database \\(D\\) satisfy some claim about time \\(f^{-1}(X) \\ ᐸ^F \\ f^{-1}(Y)\\), we’ll say that \\(X \\ ᐸ\\_D \\ Y\\).\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Examples)   ·   ·   ·   **Two Binary Variables (Pearl)**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets16.jpg)](https://intelligence.org/wp-content/uploads/2021/07/FiniteFactoredSets16.jpg)So we’ve set up this nice combinatorial notion of temporal inference. The obvious next questions are:\n\n\n* Can we actually infer interesting facts using this method, or is it vacuous?\n* And: How does this framework compare to Pearlian temporal inference?\n\n\nPearlian temporal inference is really quite powerful; given enough data, it can infer temporal sequence in a wide variety of situations. How powerful is the finite factored sets approach by comparison?\n\n\nTo address that question, we’ll go to an example. Let \\(X\\) and \\(Y\\) be two binary variables. Pearl asks: “Are \\(X\\) and \\(Y\\) independent?” If yes, then there’s no path between the two. If no, then there may be a path from \\(X\\) to \\(Y\\), or from \\(Y\\) to \\(X\\), or from a third variable to both \\(X\\) and \\(Y\\).\n\n\nIn either case, we’re not going to infer any temporal relationships.\n\n\nTo me, it feels like this is where the adage “correlation does not imply causation” comes from. Pearl really needs more variables in order to be able to infer temporal relationships from more rich combinatorial structures.\n\n\nHowever, I claim that this Pearlian ontology in which you’re handed this collection of variables has blinded us to the obvious next question, which is: is \\(X\\) independent of \\(X \\ \\mathrm{XOR} \\ Y\\)?\n\n\nIn the Pearlian world, \\(X\\) and \\(Y\\) were our variables, and \\(X \\ \\mathrm{XOR} \\ Y\\) is just some random operation on those variables. 
In our world, \\(X \\ \\mathrm{XOR} \\ Y\\) instead is a variable on the same footing as \\(X\\) and \\(Y\\). The first thing I do with my variables \\(X\\) and \\(Y\\) is that I take the product \\(X \\times Y\\) and then I forget the labels \\(X\\) and \\(Y\\).\n\n\nSo there’s this question, “Is \\(X\\) independent of \\(X \\ \\mathrm{XOR} \\ Y\\)?”. And if \\(X\\) *is* independent of \\(X \\ \\mathrm{XOR} \\ Y\\), we’re actually going to be able to conclude that \\(X\\) is *before* \\(Y\\)!\n\n\nSo not only is the finite factored set paradigm non-vacuous, and not only is it going to be able to keep up with Pearl and infer things Pearl can’t, but it’s going to be able to infer a temporal relationship from only two variables.\n\n\nSo let’s go through the proof of that.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Examples)   ·   ·   ·   **Two Binary Variables (Factored Sets)**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets17.jpg)](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets17.jpg)\n\n\nLet \\(\\Omega=\\{00,01,10,11\\}\\), and let \\(X\\), \\(Y\\), and \\(Z\\) be the partitions (/questions):\n\n\n* \\(X = \\{\\{00,01\\}, \\{10,11\\}\\}\\).   (What is the first bit?)\n* \\(Y=\\{\\{00,10\\}, \\{01,11\\}\\}\\).   (What is the second bit?)\n* \\(Z=\\{\\{00,11\\}, \\{01,10\\}\\}\\).   (Do the bits match?)\n\n\nLet \\(D = (O,N)\\), where \\(O = \\{(X, Z, \\{\\Omega\\})\\}\\) and \\(N = \\{(Z, Z, \\{\\Omega\\})\\}\\). If we’d gotten this orthogonality database from a probability distribution, then we would have more than just two rules, since we would observe more orthogonality and non-orthogonality than that. But temporal inference is monotonic with respect to adding more rules, so we can just work with the smallest set of rules we’ll need for the proof.\n\n\nThe first rule says that \\(X\\) is orthogonal to \\(Z\\). 
The second rule says that \\(Z\\) is not orthogonal to itself, which is basically just saying that \\(Z\\) is non-deterministic; it’s saying that both of the parts in \\(Z\\) are possible, that both are supported under the function \\(f\\). The \\(\\{\\Omega\\}\\) indicates that we aren’t making any conditions.\n\n\nFrom this, we’ll be able to prove that \\(X \\ ᐸ\\_D \\ Y\\).\n\n\n \n\n\n*Proof.* First, we’ll show that \\(X\\) is weakly before \\(Y\\). Let \\((F,f)\\) satisfy \\(D\\). Let \\(H\\_X\\) be shorthand for \\(h^F(f^{-1}(X))\\), and likewise let \\(H\\_Y=h^F(f^{-1}(Y))\\) and \\(H\\_Z=h^F(f^{-1}(Z))\\).\n\n\nSince \\((X,Z,\\{\\Omega\\})\\in O\\), we have that \\(H\\_X\\cap H\\_Z=\\{\\}\\); and since \\((Z,Z,\\{\\Omega\\})\\in N\\), we have that \\(H\\_Z\\neq \\{\\}\\).\n\n\nSince \\(X\\leq\\_{\\Omega} Y\\vee\\_{\\Omega} Z\\)—that is, since \\(X\\) can be computed from \\(Y\\) together with \\(Z\\)—\\(H\\_X\\subseteq H\\_Y\\cup H\\_Z\\). (Because a partition’s history is the smallest set of factors needed to compute that partition.)\n\n\nAnd since \\(H\\_X\\cap H\\_Z=\\{\\}\\), this implies \\(H\\_X\\subseteq H\\_Y\\), so \\(X\\) is weakly before \\(Y\\).\n\n\nTo show the strict inequality, we’ll assume for the purpose of contradiction that \\(H\\_X = H\\_Y\\).\n\n\nNotice that \\(Z\\) can be computed from \\(X\\) together with \\(Y\\)—that is, \\(Z\\leq\\_{\\Omega} X\\vee\\_{\\Omega} Y\\)—and therefore \\(H\\_Z\\subseteq H\\_X\\cup H\\_Y\\) (i.e., \\(H\\_Z \\subseteq H\\_X\\)). It follows that \\(H\\_Z = (H\\_X\\cup H\\_Y)\\cap H\\_Z=H\\_X\\cap H\\_Z\\). But since \\(H\\_Z\\) is also disjoint from \\(H\\_X\\), this means that \\(H\\_Z = \\{\\}\\), a contradiction.\n\n\nThus \\(H\\_X\\neq H\\_Y\\), so \\(H\\_X \\subset H\\_Y\\), so \\(f^{-1}(X) \\ ᐸ^F \\ f^{-1}(Y)\\), so \\(X \\ ᐸ\\_D \\ Y\\). □\n\n\n \n\n\nWhen I’m doing temporal inference using finite factored sets, I largely have proofs that look like this. 
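As a concrete sanity check of this example (my own sketch, not part of the talk), we can build one model that satisfies the database D: take S = Omega with f the identity, and factor S by X and Z, since knowing the first bit and whether the bits match pins down the element. Computing histories by brute force confirms both rules and confirms that the history of X is a proper subset of the history of Y in this model:

```python
from itertools import combinations

# Sketch (not from the talk): one concrete model satisfying the database D.
# Take S = Omega = {00, 01, 10, 11} with f the identity, and factor S by
# X (the first bit) and Z (whether the bits match): these two answers
# together determine the element, so B = [X, Z] is a factorization of S.
S = ["00", "01", "10", "11"]
X = {s: s[0] for s in S}               # first bit
Y = {s: s[1] for s in S}               # second bit
Z = {s: str(s[0] == s[1]) for s in S}  # do the bits match?
B = [X, Z]

def history(P):
    """Smallest set of factor indices whose values determine P (brute force)."""
    for k in range(len(B) + 1):
        for H in combinations(range(len(B)), k):
            if all(P[s] == P[t]
                   for s in S for t in S
                   if all(B[i][s] == B[i][t] for i in H)):
                return set(H)

hX, hY, hZ = history(X), history(Y), history(Z)
print(hX, hZ, hY)  # {0} {1} {0, 1}

# The model satisfies both rules: H_X and H_Z are disjoint, and H_Z is nonempty.
assert hX.isdisjoint(hZ) and hZ
# And, as the proof concludes, H_X is a proper subset of H_Y: X is strictly before Y.
assert hX < hY
```

Of course this checks only a single model; the proof above is what guarantees the subset relation holds in *every* model satisfying D.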
We collect some facts about emptiness or non-emptiness of various Boolean combinations of histories of variables, and we use these to conclude more facts about histories of variables being subsets of each other.\n\n\nI have a more complicated example that uses conditional orthogonality, not just orthogonality; I’m not going to go over it here.\n\n\nOne interesting point I want to make here is that we’re doing temporal inference—we’re inferring that \\(X\\) is before \\(Y\\)—but I claim that we’re also doing conceptual inference.\n\n\nImagine that I had a bit, and it’s either a 0 or a 1, and it’s either blue or green. And these two facts are primitive and independently generated. And I also have this other concept that’s like, “Is it grue or bleen?”, which is the \\(\\mathrm{XOR}\\) of blue/green and 0/1.\n\n\nThere’s a sense in which we’re inferring \\(X\\) is before \\(Y\\), and in that case, we can infer that blueness is before grueness. And that’s pointing at the fact that blueness is more primitive, and grueness is a derived property.\n\n\nIn our proof, \\(X\\) and \\(Z\\) can be thought of as these primitive properties, and \\(Y\\) is a derived property that we’re getting from them. So we’re not just inferring time; we’re inferring facts about what are good, natural concepts. 
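To make the grue/bleen point concrete, here is a minimal sketch (an editorial illustration, not from the talk) that computes the exact joint distribution when the color bit and the 0/1 bit are generated independently. The biases `p`, `q` and the helper `independent` are assumptions of the sketch; it checks the independence signature discussed earlier, namely that color is independent of color-XOR-grue (which recovers the primitive 0/1 bit).

```python
from itertools import product

# Two primitive bits, generated independently: "color" (blue=0, green=1)
# with bias p, and a 0/1 "value" with bias q. "Grue" is their XOR.
p, q = 0.3, 0.8  # arbitrary illustrative biases

joint = {}  # (color, grue) -> probability
for color, value in product([0, 1], repeat=2):
    pr = (p if color else 1 - p) * (q if value else 1 - q)
    key = (color, color ^ value)  # grue = color XOR value
    joint[key] = joint.get(key, 0.0) + pr

def independent(dist):
    """Exact independence test for a joint over two binary variables."""
    px = {x: sum(v for (a, _), v in dist.items() if a == x) for x in (0, 1)}
    py = {y: sum(v for (_, b), v in dist.items() if b == y) for y in (0, 1)}
    return all(abs(dist[k] - px[k[0]] * py[k[1]]) < 1e-12 for k in dist)

# XOR-ing color back out of grue recovers the primitive 0/1 value.
xor_joint = {}
for (color, grue), pr in joint.items():
    key = (color, color ^ grue)
    xor_joint[key] = xor_joint.get(key, 0.0) + pr

print(independent(xor_joint))  # True: color is independent of color XOR grue
print(independent(joint))      # False: color and grue themselves correlate
```

For generic biases, color and grue are correlated, yet color is independent of color-XOR-grue; it is this asymmetry that lets the paradigm place the color bit before the derived grue/bleen property.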
And I think that there’s some hope that this ontology can do for the statement “you can’t really distinguish between blue and grue” what Pearl can do to the statement “correlation does not imply causation”.\n\n\n \n\n\n\n\n---\n\n\n \n\n\n###### (Part 2, Main Body)   ·   ·   ·   **Applications / Future Work / Speculation**\n\n\n  \n\n[![](https://intelligence.org/wp-content/uploads/2021/05/FiniteFactoredSets18.jpg)](https://intelligence.org/wp-content/uploads/2021/07/FiniteFactoredSets18.jpg)The future work I’m most excited by with finite factored sets falls into three rough categories: inference (which involves more computational questions), infinity (more mathematical), and embedded agency (more philosophical).\n\n\nResearch topics related to inference:\n\n\n* Decidability of Temporal Inference\n* Efficient Temporal Inference\n* Conceptual Inference\n* Temporal Inference from Raw Data and Fewer Ontological Assumptions\n* Temporal Inference with Deterministic Relationships\n* Time without Orthogonality\n* Conditioned Factored Sets\n\n\nThere are a lot of research directions suggested by questions like “How do we do efficient inference in this paradigm?”. Some of the questions here come from the fact that we’re making fewer assumptions than Pearl, and are in some sense more coming from the raw data.\n\n\nThen I have the applications that are about extending factored sets to the infinite case:\n\n\n* Extending Definitions to the Infinite Case\n* The Fundamental Theorem of Finite-Dimensional Factored Sets\n* Continuous Time\n* [New Lens on Physics](https://www.lesswrong.com/posts/o5F2p3krzT4JgzqQc/causal-universes)\n\n\nEverything I’ve presented in this talk was under the assumption of finiteness. 
In some cases this wasn’t necessary—but in a lot of cases it actually was, and I didn’t draw attention to this.\n\n\nI suspect that the fundamental theorem can be extended to finite-dimensional factored sets (i.e., factored sets where \\(|B|\\) is finite), but it cannot be extended to arbitrary-dimension factored sets.\n\n\nAnd then, what I’m really excited about is applications to embedded agency:\n\n\n* Embedded Observations\n* Counterfactability\n* [Cartesian Frames](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames) Successor\n* Unraveling [Causal Loops](https://www.lesswrong.com/posts/gEKHX8WKrXGM4roRC/saving-time)\n* Conditional Time\n* Logical Causality from Logical Induction\n* Orthogonality as Simplifying Assumptions for Decisions\n* Conditional Orthogonality as Abstraction Desideratum\n\n\n \n\n\nI focused on the temporal inference aspect of finite factored sets in this talk, because it’s concrete and tangible to be able to say, “Ah, we can do Pearlian temporal inference, only we can sometimes infer more structure and we rely on fewer assumptions.”\n\n\nBut really, a lot of the applications I’m excited about involve using factored sets to model situations, rather than inferring factored sets from data.\n\n\nAnywhere that we currently model a situation using graphs with directed edges that represent information flow or causality, we might instead be able to use factored sets to model the situation; and this might allow our models to play more nicely with abstraction.\n\n\nI want to build up the factored set ontology as an alternative to graphs when modeling agents interacting with things, or when modeling information flow. 
And I’m really excited about that direction.\n\n\n\nThe post [Finite Factored Sets](https://intelligence.org/2021/05/23/finite-factored-sets/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-05-23T07:00:35Z", "authors": ["Scott Garrabrant"], "summaries": []} -{"id": "4d2fa4c74f463a99ed039522b5083894", "title": "May 2021 Newsletter", "url": "https://intelligence.org/2021/05/18/may-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "MIRI senior researcher Scott Garrabrant has a major new result, “Finite Factored Sets,” that he’ll be unveiling in an online talk this Sunday at noon Pacific time. ([Zoom link.](https://us02web.zoom.us/j/7151633248?pwd=a2ZkQlpwNU9IbWF5c3U4ejlvdFRvUT09)) For context on the result, see Scott’s new post “[Saving Time](https://www.lesswrong.com/posts/gEKHX8WKrXGM4roRC/saving-time).”\nIn other big news, MIRI has just received [its two largest individual donations of all time](https://intelligence.org/2021/05/13/two-major-donations/)! Ethereum inventor Vitalik Buterin has donated ~$4.3 million worth of ETH to our research program, while an anonymous long-time supporter has donated MKR tokens we liquidated for an astounding ~$15.6 million. The latter donation is restricted so that we can spend a maximum of $2.5 million of it per year until 2025, like a multi-year grant.\n\n\nBoth donors have our massive thanks for these incredible gifts to support our work!\n\n\n#### Other MIRI updates\n\n\n* Mark Xu and Evan Hubinger use “[Cartesian world models](https://www.lesswrong.com/posts/LBNjeGaJZw7QdybMw)” to distinguish “consequential agents” (which assign utility to environment states, internal states, observations, and/or actions), “structural agents” (which optimize “over the set of possible decide functions instead of the set of possible actions”), and “conditional agents” (which map e.g. 
environmental states to utility functions, rather than mapping them to utility).\n* In [Gradations of Inner Alignment Obstacles](https://www.lesswrong.com/posts/wpbpvjZCK3JhzpR2D), Abram Demski makes three “contentious claims”:\n\n\n\n> \n> 1. The most useful definition of “[mesa-optimizer](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB)” doesn’t require them to perform explicit search, contrary to the current standard.\n> 2. Success at [aligning narrowly superhuman models](https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) might be bad news.\n> 3. Some versions of the lottery ticket hypothesis seem to imply that randomly initialized networks already contain deceptive agents.\n> \n> \n> \n\n\n* Eliezer Yudkowsky comments on the relationship between [early AGI systems’ alignability and capabilities](https://www.lesswrong.com/posts/YG9WkpgbqfqeAKKgp/ai-and-the-probability-of-conflict?commentId=R3JQ8EBSj2GmwdvaX).\n\n\n#### News and links\n\n\n* John Wentworth announces a project to [test the natural abstraction hypothesis](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro), which asserts that “most high-level abstract concepts used by humans are ‘natural'” and therefore “a wide range of architectures will reliably learn similar high-level concepts”.\n* Open Philanthropy’s Joe Carlsmith asks “[Is Power-Seeking AI an Existential Risk?](https://www.lesswrong.com/posts/HduCjmXTBD4xYTegv)“, and Luke Muehlhauser asks for [examples of treacherous turns in the wild](http://lukemuehlhauser.com/treacherous-turns-in-the-wild/) (also on [LessWrong](https://www.lesswrong.com/posts/NEa3puQB23FyiifnW/linkpost-treacherous-turns-in-the-wild)).\n* From DeepMind’s safety researchers: [What Mechanisms Drive Agent Behavior?](https://medium.com/@deepmindsafetyresearch/what-mechanisms-drive-agent-behaviour-e7b8d9aee88), [Alignment of Language 
Agents](https://medium.com/@deepmindsafetyresearch/alignment-of-language-agents-9fbc7dd52c6c), and [An EPIC Way to Evaluate Reward Functions](https://medium.com/@deepmindsafetyresearch). Also, Rohin Shah provides his [advice on entering the field](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/).\n* Owen Shen and Peter Hase [summarize 70 recent papers](https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries) on model transparency, interpretability, and explainability.\n* Eli Tyre asks: [How do we prepare for final crunch time?](https://www.lesswrong.com/posts/wyYubb3eC5FS365nk/how-do-we-prepare-for-final-crunch-time) (I would add some caveats: Some roles and scenarios imply that you’ll have *less* impact on the eve of AGI, and can have far more impact today. For some people, “final crunch time” may be now, and marginal efforts matter less later. Further, some forms of “preparing for crunch time” will fail if there aren’t clear warning shots or [fire alarms](https://intelligence.org/2017/10/13/fire-alarm/).)\n* Paul Christiano launches a new organization that will be his focus going forward: the [Alignment Research Center](https://www.lesswrong.com/posts/3ejHFgQihLG4L6WQf). 
Learn more about Christiano’s research approach in [My Research Methodology](https://www.lesswrong.com/posts/EF5M6CmKRd6qZk27Z) and in his recent [AMA](https://www.lesswrong.com/posts/7qhtuQLCCvmwCPfXK).\n\n\n\nThe post [May 2021 Newsletter](https://intelligence.org/2021/05/18/may-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-05-19T04:01:51Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d0296ca412d61cd89b87f9f12d61f6bc", "title": "Saving Time", "url": "https://intelligence.org/2021/05/18/saving-time/", "source": "miri", "source_type": "blog", "text": "*Note: This is a preamble to Finite Factored Sets, a sequence I’ll be posting over the next few weeks. This Sunday at noon Pacific time, I’ll be giving a Zoom talk ([link](https://us02web.zoom.us/j/7151633248?pwd=a2ZkQlpwNU9IbWF5c3U4ejlvdFRvUT09)) introducing Finite Factored Sets, a framework which I find roughly as technically interesting as logical induction.*\n\n\n(**Update May 25**: A video and blog post introducing Finite Factored Sets is now available [here](https://intelligence.org/2021/05/23/finite-factored-sets/).)\n\n\n\n\n---\n\n\n \n\n\nFor the last few years, a large part of my research motivation has been directed at trying to save the concept of time—save it, for example, from all the weird causal loops created by decision theory problems. This post will hopefully explain why I care so much about time, and what I think needs to be fixed.\n\n\n \n\n\n### Why Time?\nMy best attempt at a short description of time is that **time is causality**. For example, in a Pearlian Bayes net, you draw edges from earlier nodes to later nodes. To the extent that we want to think about causality, then, we will need to understand time.\nImportantly, **time is the substrate in which learning and commitments take place**. When agents learn, they learn over time. 
The passage of time is like a ritual in which [opportunities are destroyed and knowledge is created](https://www.lesswrong.com/posts/JTzLjARpevuNpGPZm/time-in-cartesian-frames). And I think that many models of learning are subtly confused, because they are based on confused notions of time.\nTime is also crucial for thinking about agency. My best short-phrase definition of agency is that **agency is time travel**. An agent is a mechanism through which the future is able to affect the past. An agent models the future consequences of its actions, and chooses actions on the basis of those consequences. In that sense, [the consequence *causes* the action](https://www.lesswrong.com/posts/qhsELHzAHFebRJE59/a-greater-than-b-greater-than-a), in spite of the fact that the action comes earlier in the standard physical sense.\n \n### Problem: Time is Loopy\nThe main thing going wrong with time is that it is “loopy.”\nThe primary confusing thing about Newcomb’s problem is that we want to think of our decision as coming “before” the filling of the boxes, in spite of the fact that it physically comes after. This is hinting that maybe we want to understand some other “logical” time in addition to the time of physics.\nHowever, when we attempt to do this, we run into two problems: Firstly, we don’t understand where this logical time might come from, or how to learn it, and secondly, we run into some apparent temporal loops.\nI am going to set aside the first problem and focus on the second.\nThe easiest way to see why we run into temporal loops is to notice that it seems like physical time is at least a little bit entangled with logical time.\nImagine the point of view of someone running a physics simulation of Newcomb’s problem, and tracking all of the details of all of the atoms. From that point of view, it seems like there is a useful sense in which the filling of the boxes comes before an agent’s decision to one-box or two-box. 
At the same time, however, those atoms compose an agent that shouldn’t make decisions as though it were helpless to change anything.\nMaybe the solution here is to think of there being many different types of “before” and “after,” “cause” and “effect,” etc. For example, we could say that X is before Y from an agent-first perspective, but Y is before X from a physics-first perspective.\nI think this is right, and we want to think of there as being many different systems of time (hopefully predictably interconnected). But I don’t think this resolves the whole problem.\nConsider a pair of [FairBot](https://www.lesswrong.com/posts/iQWk5jYeDg5ACCmpx/robust-cooperation-in-the-prisoner-s-dilemma) agents that successfully execute a Löbian handshake to cooperate in an open-source prisoner’s dilemma. I want to say that each agent’s cooperation causes the other agent’s cooperation in some sense. I could say that relative to each agent the causal/temporal ordering goes a different way, but I think the loop is an important part of the structure in this case. (I also am not even sure which direction of time I would want to associate with which agent.)\nWe also are tempted to put loops in our time/causality for other reasons. For example, when modeling a feedback loop in a system that persists over time, we might draw structures that look a lot like a Bayes net, but are not acyclic (e.g., a POMDP). 
We could think of this as a projection of another system that has an extra dimension of time, but it is a useful projection nonetheless.\n \n### Solution: Abstraction\nMy main hope for recovering a coherent notion of time and unraveling these temporal loops is via abstraction.\nIn the example where the agent chooses actions based on their consequences, I think that there is an abstract model of the consequences that comes causally before the choice of action, which comes before the actual physical consequences.\nIn Newcomb’s problem, I want to say that there is an abstract model of the action that comes causally before the filling of the boxes.\nIn the open-source prisoner’s dilemma, I want to say that there is an abstract proof of cooperation that comes causally before the actual program traces of the agents.\nAll of this is pointing in the same direction: We need to have coarse abstract versions of structures come at a different time than more refined versions of the same structure. Maybe when we correctly allow for different levels of description having different links in the causal chain, we can unravel all of the time loops.\n \n### But How?\nUnfortunately, our best understanding of time is Pearlian causality, and Pearlian causality does not do great with abstraction.\nPearl has Bayes nets with a bunch of variables, but when some of those variables are coarse abstract versions of other variables, then we have to allow for determinism, since some of our variables will be deterministic functions of each other; and the best parts of Pearl do not do well with determinism.\nBut the problem runs deeper than that. 
If we draw an arrow in the direction of the deterministic function, we will be drawing an arrow of time from the more refined version of the structure to the coarser version of that structure, which is in the opposite direction of all of our examples.\nMaybe we could avoid drawing this arrow from the more refined node to the coarser node, and instead have a path from the coarser node to the refined node. But then we could just make another copy of the coarser node that is deterministically downstream of the more refined node, adding no new degrees of freedom. What is then stopping us from swapping the two copies of the coarser node?\nOverall, it seems to me that Pearl is not ready for some of the nodes to be abstract versions of other nodes, which I think needs to be fixed in order to save time.\n\n\n---\n\n\n*Discussion on: [LessWrong](https://www.lesswrong.com/posts/gEKHX8WKrXGM4roRC/saving-time)*\n\n\nThe post [Saving Time](https://intelligence.org/2021/05/18/saving-time/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-05-18T22:04:40Z", "authors": ["Scott Garrabrant"], "summaries": []} -{"id": "1318af4658667dbc60899459b1ee56d5", "title": "Our all-time largest donation, and major crypto support from Vitalik Buterin", "url": "https://intelligence.org/2021/05/13/two-major-donations/", "source": "miri", "source_type": "blog", "text": "I’m thrilled to announce two major donations to MIRI!\n \n\n\nFirst, a long-time supporter has given MIRI by far our largest donation ever: **$2.5 million per year over the next four years, and an additional ~$5.6 million in 2025**.\n\n\nThis anonymous donation comes from a cryptocurrency investor who [previously donated](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/) $1.01M in ETH to MIRI in 2017. 
Their amazingly generous new donation comes in the form of 3001 MKR, governance tokens used in [MakerDAO](https://makerdao.com/en/whitepaper/), a stablecoin project on the Ethereum blockchain. MIRI liquidated the donated MKR for $15,592,829 after receiving it. With this donation, the anonymous donor becomes our largest all-time supporter.\n\n\nThis donation is subject to a time restriction whereby MIRI can spend a maximum of $2.5M of the gift in each of the next four calendar years, 2021–2024. The remaining $5,592,829 becomes available in 2025.\n\n\n \n\n\nSecond, in other amazing news, the inventor and co-founder of Ethereum, Vitalik Buterin, yesterday gave us a surprise donation of 1050 ETH, worth **$4,378,159**.\n\n\nThis is the third-largest contribution to MIRI’s research program to date, after Open Philanthropy’s [~$7.7M grant in 2020](https://intelligence.org/2020/04/27/miris-largest-grant-to-date/) and the anonymous donation above.\n\n\nVitalik has previously donated over $1M to MIRI, including major support in our 2017 fundraiser.\n\n\n \n\n\nWe’re beyond grateful for these two unprecedented individual gifts! 
Both donors have our heartfelt thanks.\n\n\n \n\n\n\nThe post [Our all-time largest donation, and major crypto support from Vitalik Buterin](https://intelligence.org/2021/05/13/two-major-donations/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-05-14T01:00:03Z", "authors": ["Colm Ó Riain"], "summaries": []} -{"id": "f38aaf87f6e5572838d885c51c293def", "title": "April 2021 Newsletter", "url": "https://intelligence.org/2021/05/02/april-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* MIRI researcher Abram Demski [writes regarding counterfactuals](https://www.lesswrong.com/posts/yXfka98pZXAmXiyDp/my-current-take-on-counterfactuals): \n\n\n> \n> I've felt like the problem of counterfactuals is \"mostly settled\" (modulo some math working out) for about a year, but I don't think I've really communicated this online. Partly, I've been waiting to write up more formal results. But other research has taken up most of my time, so I'm not sure when I *would* get to it.\n> \n> \n> So, the following contains some \"shovel-ready\" problems. If you're convinced by my overall perspective, you may be interested in pursuing some of them. I think these directions have a high chance of basically solving the problem of counterfactuals (including logical counterfactuals). 
[…]\n> \n> \n>\n* Alex Mennen [writes a thoughtful critique](https://www.lesswrong.com/posts/X7k23zk9aBjjpgLd3/dutch-booking-cdt-revised-argument?commentId=ppoZpaTzBYet29Bap) of one of the core arguments behind Abram's new take on counterfactuals; Abram [replies](https://www.lesswrong.com/posts/X7k23zk9aBjjpgLd3/dutch-booking-cdt-revised-argument?commentId=pHFGDjYwyJuyowrjm).\n* Abram distinguishes simple Bayesians (who reason according to the laws of probability theory) from [reflective Bayesians](https://www.lesswrong.com/posts/vpvLqinp4FoigqvKy/reflective-bayesianism) (who endorse background views that justify Bayesianism), and argues that simple Bayesians can better \"escape the trap\" of traditional issues with Bayesian reasoning.\n* Abram explains the motivations behind his [learning normativity research agenda](https://www.lesswrong.com/s/Gmc7vtnpyKZRHWdt5/p/2JGu9yxiJkoGdQR4s), providing \"[four different elevator pitches](https://www.lesswrong.com/posts/oqghwKKifztYWLsea), which tell different stories\" about how the research agenda's desiderata hang together.\n\n\n#### News and links\n\n\n* CFAR co-founder Julia Galef has an excellent new book out on human rationality and motivated reasoning: [*The Scout Mindset: Why Some People See Things Clearly and Others Don't*](https://www.amazon.com/Scout-Mindset-People-Things-Clearly-ebook/dp/B07L2HQ26K/).\n* Katja Grace argues that there is pressure for systems with preferences to become [more coherent, efficient, and goal-directed](https://www.lesswrong.com/posts/DkcdXsP56g9kXyBdq/coherence-arguments-imply-a-force-for-goal-directed-behavior).\n* Andrew Critch discusses [multipolar failure scenarios](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) and \"multi-agent processes with a robust tendency to play out irrespective of which agents execute which steps in the process\".\n* A second AI alignment podcast joins Daniel Filan's [AI X-Risk Research 
Podcast](https://axrp.net/): Quinn Dougherty's [Technical AI Safety Podcast](https://technical-ai-safety.libsyn.com/), with a recent episode featuring [Alex Turner](https://technical-ai-safety.libsyn.com/3-optimal-policies-tend-to-seek-power).\n* A simple but important observation by Mark Xu: [Strong Evidence is Common](https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common).\n\n\n\nThe post [April 2021 Newsletter](https://intelligence.org/2021/05/02/april-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-05-02T16:30:04Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "6c17548ab68923c01ab71fb02b61b915", "title": "March 2021 Newsletter", "url": "https://intelligence.org/2021/04/01/march-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* MIRI's Eliezer Yudkowsky and Evan Hubinger [comment in some detail](https://www.lesswrong.com/posts/AyfDnnAdjG7HHeD3d/miri-comments-on-cotra-s-case-for-aligning-narrowly) on Ajeya Cotra's [The Case for Aligning Narrowly Superhuman Models](https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models). This conversation touches on some of the more important alignment research views at MIRI, such as the view that alignment requires a thorough understanding of AGI systems' reasoning \"under the hood\", and the view that early AGI systems should most likely [avoid human modeling](https://www.lesswrong.com/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models) if possible.\n* From Eliezer Yudkowsky: [A Semitechnical Introductory Dialogue on Solomonoff Induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1). 
(Also discussed [by Richard Ngo](https://www.lesswrong.com/posts/wsBpJn7HWEPCJxYai).)\n* MIRI research associate Vanessa Kosoy [discusses infra-Bayesianism](https://axrp.net/episode/2021/03/10/episode-5-infra-bayesianism-vanessa-kosoy.html) on the AI X-Risk Research Podcast.\n* Eliezer Yudkowsky and Chris Olah [discuss ML transparency](https://twitter.com/ESYudkowsky/status/1358173090782576650) on social media.\n\n\n#### News and links\n\n\n* Brian Christian, author of *The Alignment Problem: Machine Learning and Human Values*, [discusses his book](https://80000hours.org/podcast/episodes/brian-christian-the-alignment-problem/) on the 80,000 Hours Podcast.\n* Chris Olah's team releases [Multimodal Neurons in Artificial Neural Networks](https://distill.pub/2021/multimodal-neurons/), on artificial neurons that fire for multiple conceptually related stimuli.\n* Vitalik Buterin reflects on [*Inadequate Equilibria*](https://equilibriabook.com/toc)'s arguments in the course of discussing [prediction market inefficiencies](https://vitalik.ca/general/2021/02/18/election.html).\n\n\n\nThe post [March 2021 Newsletter](https://intelligence.org/2021/04/01/march-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-04-01T17:57:26Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "6ad57b514ee94de36a1b73dc9aa9714c", "title": "February 2021 Newsletter", "url": "https://intelligence.org/2021/03/02/february-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* Abram Demski distinguishes different versions of [the problem of “pointing at” human values](https://www.lesswrong.com/posts/7Zn4BwgsiPFhdB6h8) in AI alignment.\n* Evan Hubinger [discusses “Risks from Learned Optimization”](https://axrp.net/episode/2021/02/17/episode-4-risks-from-learned-optimization-evan-hubinger.html) on the AI X-Risk Research Podcast.\n* Eliezer Yudkowsky [comments on AI safety via 
debate and Goodhart’s law](https://www.lesswrong.com/posts/LDsSqXf9Dpu3J3gHD/why-i-m-excited-about-debate#comments).\n* MIRI supporters [donated ~$135k on Giving Tuesday](https://www.facebook.com/MachineIntelligenceResearchInstitute/posts/3740693825967974), of which ~26% was matched by Facebook and ~28% by employers for a total of $207,436! MIRI also received $6,624 from TisBest Philanthropy in late December, largely through Round Two of Ray Dalio’s [#RedefineGifting](https://www.tisbest.org/blog/2020/12/22/30000-free-gift-cards-2000000-to-charity-gifting-redefined/) initiative. Our thanks to all of you!\n* Spencer Greenberg discusses society and education with [Anna Salamon](https://clearerthinkingpodcast.com/?ep=005) and [Duncan Sabien](https://clearerthinkingpodcast.com/?ep=021) on the Clearer Thinking podcast.\n* [We Want MoR](https://hpmorpodcast.com/?p=2710): Eliezer participates in a (spoiler-laden) discussion of [*Harry Potter and the Methods of Rationality*](http://hpmor.com/).\n\n\n#### News and links\n\n\n* Richard Ngo [reflects on his time in effective altruism](https://forum.effectivealtruism.org/posts/ctnMCdTH7dmiN4jBx/lessons-from-my-time-in-effective-altruism): \n\n\n> […] Until recently, I was relatively passive in making big decisions. Often that meant just picking the most high-prestige default option, rather than making a specific long-term plan. This also involved me thinking about EA from a “consumer” mindset rather than a “producer” mindset. When it seemed like something was missing, I used to wonder why the people responsible hadn’t done it; now I also ask why I haven’t done it, and consider taking responsibility myself.\n> \n> \n> Partly that’s just because I’ve now been involved in EA for longer. But I think I also used to overestimate how established and organised EA is. In fact, we’re an incredibly young movement, and we’re still making up a lot of things as we go along. 
That makes proactivity more important.\n> \n> \n> Another reason to value proactivity highly is that taking the most standard route to success is often overrated. […] My inspiration in this regard is a friend of mine who has, three times in a row, reached out to an organisation she wanted to work for and convinced them to create a new position for her.\n> \n>\n* Ngo [distinguishes](https://www.lesswrong.com/posts/L9HcyaiWBLYe7vXid/distinguishing-claims-about-training-vs-deployment) claims about goal specification, orthogonality, instrumental convergence, value fragility, and Goodhart’s law based on whether they refer to systems at training time versus deployment time.\n* Connor Leahy, author of [The Hacker Learns to Trust](https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51), argues (among other things) that “[GPT-3 is our last warning shot](https://www.youtube.com/watch?v=pGjyiqJZPJo)” for coordinating to address AGI alignment. ([Podcast version.](https://overcast.fm/+KqGi5Z0HA)) I include this talk because it’s a good talk and the topic warrants discussion, though MIRI staff don’t necessarily endorse this claim — and Eliezer would certainly object to any claim that [something is a fire alarm for AGI](https://intelligence.org/2017/10/13/fire-alarm/).\n* OpenAI safety researchers including Dario Amodei, Paul Christiano, and Chris Olah [depart OpenAI](https://openai.com/blog/organizational-update/).\n* OpenAI’s [DALL-E](https://openai.com/blog/dall-e/) uses GPT-3 for image generation, while [CLIP](https://openai.com/blog/clip/) exhibits impressive zero-shot image classification capabilities. 
Gwern Branwen [comments](https://www.gwern.net/newsletter/2021/01#ai) in his newsletter.\n\n\n\nThe post [February 2021 Newsletter](https://intelligence.org/2021/03/02/february-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-03-02T20:21:01Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d923a0cc3929f854fcf76dd45475329b", "title": "January 2021 Newsletter", "url": "https://intelligence.org/2021/01/27/january-2021-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* MIRI’s Evan Hubinger uses a notion of optimization power to define whether AI systems are [compatible with the strategy-stealing assumption](https://www.lesswrong.com/posts/WwJdaymwKq6qyJqBX).\n* MIRI’s Abram Demski discusses debate approaches to AI safety [that don’t rely on factored cognition](https://www.lesswrong.com/posts/a2NZr87sGYpXhzsth).\n* Evan argues that [the first AGI systems are likely to be very similar to each other](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK), and discusses implications for alignment.\n* Jack Clark’s [Import AI newsletter](https://jack-clark.net/2020/12/28/import-ai-229-apple-builds-a-hypersim-dataset-ways-to-attack-ml-google-censors-its-research/) discusses the negative research results from our [end-of-year update](https://intelligence.org/2020/12/21/2020-updates-and-strategy/).\n* Richard Ngo shares [high-quality discussion](https://www.lesswrong.com/posts/oiuZjPfknKsSc5waC) of his great introductory sequence [AGI Safety from First Principles](https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ), featuring Paul Christiano, Max Daniel, Ben Garfinkel, Adam Gleave, Matthew Graves, Daniel Kokotajlo, Will MacAskill, Rohin Shah, Jaan Tallinn, and MIRI’s Evan Hubinger and Buck Shlegeris.\n* Tom Chivers [discusses](https://unherd.com/2020/12/how-rational-have-you-been-this-year/) the rationality community and [*Rationality: From AI to 
Zombies*](https://intelligence.org/rationality-ai-zombies/). (Contrary to the headline, COVID-19 receives little discussion.)\n\n\n#### News and links\n\n\n* Alex Flint [argues](https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review) that growing the field of AI alignment researchers should be a side-effect of optimizing for “research depth”, rather than functioning as a target in its own right — much as software projects shouldn’t optimize for larger teams or larger codebases. Flint also comments on strategy/policy research.\n* Daniel Kokotajlo of the [Center on Long-Term Risk](https://longtermrisk.org/) argues that [AGI may enable decisive strategic advantage before GDP accelerates](https://www.lesswrong.com/posts/aFaKhG86tTrKvtAnT/against-gdp-as-a-metric-for-timelines-and-takeoff-speeds). Cf. a pithier comment [from Eliezer Yudkowsky](https://twitter.com/ESYudkowsky/status/1337832545438732288).\n* [TAI Safety Bibliographic Database](https://www.lesswrong.com/posts/4DegbDJJiMX2b3EKm/tai-safety-bibliographic-database): Jess Riedel and Angelica Deibel release a database of AI safety research, and analyze recent trends in the field.\n* DeepMind’s AI safety team investigates [optimality properties of meta-trained RNNs](https://medium.com/@deepmindsafetyresearch/understanding-meta-trained-algorithms-through-a-bayesian-lens-5042a1acc1c2) and [the tampering problem](https://medium.com/@deepmindsafetyresearch/realab-conceptualising-the-tampering-problem-56caab69b6d3): “How can we design agents that pursue a given objective when all feedback mechanisms for describing that objective are influenceable by the agent?”\n* Facebook [launches Forecast](https://www.lesswrong.com/posts/CZRyFcp6HSyZ7Jj8Q/launching-forecast-a-community-for-crowdsourced-predictions), a new community prediction platform akin to [Metaculus](https://www.metaculus.com/).\n* Effective altruists have released the [microCOVID 
calculator](https://www.microcovid.org/), a very handy tool for assessing activities’ COVID-19 infection risk. Meanwhile, Zvi Mowshowitz’s [weekly updates on LessWrong](https://www.lesswrong.com/s/rencyawwfr4rfwt5C) continue to be a good (US-centric) resource for staying up to date on COVID-19 developments such as the B117 variant.\n* Rethink Priorities researcher Linchuan Zhang summarizes things he’s learned [forecasting COVID-19 in 2020](https://forum.effectivealtruism.org/posts/kAMfrLJwHpCdDSqsj/some-learnings-i-had-from-forecasting-in-2020): forming good outside views is often hard; effective altruists tend to overrate superforecasters; etc.\n\n\n\nThe post [January 2021 Newsletter](https://intelligence.org/2021/01/27/january-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2021-01-27T21:04:42Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "ad2066518ebfd238a965f6d7bf4205b0", "title": "December 2020 Newsletter", "url": "https://intelligence.org/2020/12/30/december-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "MIRI COO Malo Bourgon reviews our past year and discusses our future plans in [2020 Updates and Strategy](https://intelligence.org/2020/12/21/2020-updates-and-strategy/).\n\n\nOur biggest update is that we've made less concrete progress than we expected on the new research we described in [2018 Update: Our New Research Directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/). 
As a consequence, we're scaling back our work on these research directions, and looking for new angles of attack that have better odds of resulting in a solution to the alignment problem.\n\n\n#### Other MIRI updates\n\n\n* A new paper from MIRI researcher Evan Hubinger: \"[An Overview of 11 Proposals for Building Safe Advanced AI](https://arxiv.org/abs/2012.07532).\"\n* A belated paper announcement from last year: Andrew Critch's \"[A Parametric, Resource-Bounded Generalization of Löb's Theorem, and a Robust Cooperation Criterion for Open-Source Game Theory](https://www.cambridge.org/core/journals/journal-of-symbolic-logic/article/abs/parametric-resourcebounded-generalization-of-lobs-theorem-and-a-robust-cooperation-criterion-for-opensource-game-theory/16063EA7BFFEE89438631B141E556E79)\", a result [originally written up](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/) during his time at MIRI, has been published in the *Journal of Symbolic Logic*.\n* MIRI's Abram Demski introduces [Learning Normativity: A Research Agenda](https://www.lesswrong.com/posts/2JGu9yxiJkoGdQR4s). See also Abram's new write-up, [Normativity](https://www.lesswrong.com/posts/tCex9F9YptGMpk2sT).\n* Evan Hubinger [clarifies inner alignment terminology](https://www.lesswrong.com/posts/SzecSPYxqRa5GCaSF).\n* The Survival and Flourishing Fund (SFF) has [awarded](https://survivalandflourishing.fund/sff-2020-h2-recommendations) MIRI $563,000 in its latest round of grants! 
Our enormous gratitude to SFF's grant recommenders and funders.\n* [*A Map That Reflects the Territory*](https://www.lesswrong.com/books) is a new print book set collecting the top LessWrong essays of 2018, including essays by MIRI researchers Eliezer Yudkowsky, Abram Demski, and Scott Garrabrant.\n* DeepMind's Rohin Shah [gives his overview](https://www.lesswrong.com/posts/ZZDHoqpHmChxEYMme/an-127-rethinking-agency-cartesian-frames-as-a-formalization) of Scott Garrabrant's Cartesian Frames framework.\n\n\n#### News and links\n\n\n* Daniel Filan launches [the AI X-Risk Research Podcast](https://www.lesswrong.com/posts/NWi8ztKCbguBEAwdG/announcing-axrp-the-ai-x-risk-research-podcast-1) (AXRP) with episodes featuring [Adam Gleave](https://axrp.net/episode/2020/12/11/episode-1-adversarial-policies-adam-gleave.html), [Rohin Shah](https://axrp.net/episode/2020/12/11/episode-2-learning-human-biases-rohin-shah.html), and [Andrew Critch](https://axrp.net/episode/2020/12/11/episode-3-negotiable-reinforcement-learning-andrew-critch.html).\n* DeepMind's [AlphaFold](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology) represents a very large advance in protein structure prediction.\n* Metaculus launches [Forecasting AI Progress](https://www.metaculus.com/ai-progress-tournament/), an open four-month tournament to predict advances in AI, with a $50,000 prize pool.\n* [Continuing the Takeoffs Debate](https://www.lesswrong.com/posts/Tpn2Fx9daLvj28kes/continuing-the-takeoffs-debate): Richard Ngo responds to Paul Christiano's \"changing selection pressures\" argument against hard takeoff.\n* OpenAI's Beth Barnes discusses [the obfuscated arguments problem](https://www.lesswrong.com/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem) for AI safety via debate: \n \n\n\n> Previously we hoped that debate/IDA could verify any knowledge for which such human-understandable arguments exist, even if these arguments are 
intractably large. We hoped the debaters could strategically traverse small parts of the implicit large argument tree and thereby show that the whole tree could be trusted.\n> \n> \n> The obfuscated argument problem suggests that we may not be able to rely on debaters to find flaws in large arguments, so that we can only trust arguments when we could find flaws by recursing randomly—e.g. because the argument is small enough that we could find a single flaw if one existed, or because the argument is robust enough that it is correct unless it has many flaws.\n> \n> \n>\n* [Some AI Research Areas and Their Relevance to Existential Safety](https://www.lesswrong.com/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1): Andrew Critch compares out-of-distribution robustness, agent foundations, multi-agent RL, preference learning, and other research areas.\n* Ben Hoskin releases his [2020 AI Alignment Literature Review and Charity Comparison](https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison).\n* Open Philanthropy summarizes its [AI governance grantmaking](https://www.openphilanthropy.org/blog/ai-governance-grantmaking) to date.\n\n\n\nThe post [December 2020 Newsletter](https://intelligence.org/2020/12/30/december-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-12-31T03:48:22Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d9cfc4700b2fb2cfba47aa9d7a083882", "title": "2020 Updates and Strategy", "url": "https://intelligence.org/2020/12/21/2020-updates-and-strategy/", "source": "miri", "source_type": "blog", "text": "MIRI’s 2020 has been a year of experimentation and adjustment. In response to the COVID-19 pandemic, we largely moved our operations to more rural areas in March, and shifted to a greater emphasis on remote work. 
We took the opportunity to try new work set-ups and approaches to research, and have been largely happy with the results.\n\n\nAt the same time, 2020 saw limited progress in the research MIRI’s leadership had previously been most excited about: the new [research directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) we started in 2017. Given our slow progress to date, we are considering a number of possible changes to our strategy, and MIRI’s research leadership is shifting much of their focus toward searching for more promising paths.\n\n\n\nLast year, [I projected](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) that our 2020 budget would be $6.4M–$7.4M, with a point estimate of $6.8M. I now expect that our 2020 spending will be slightly above $7.4M. The increase in spending above my point estimate largely comes from expenses we incurred relocating staff and taking precautions in response to the COVID-19 pandemic.\n\n\nOur budget for 2021 is fairly uncertain, given that we are more likely than usual to see high-level shifts in our strategy in the coming year. My current estimate is that our spending will fall somewhere between $6M and $7.5M, which I expect to roughly break down as follows:\n\n\n\n \n\n\nI’m also happy to announce that the Survival and Flourishing Fund (SFF) [has awarded MIRI $563,000](https://survivalandflourishing.fund/sff-2020-h2-recommendations) to support our research going forward, on top of support they provided [earlier this year](https://intelligence.org/2020/05/29/may-2020-newsletter/).\n\n\nGiven that our research program is in a transitional period, and given the strong support we have already received this year—$4.38M [from Open Philanthropy](https://intelligence.org/2020/04/27/miris-largest-grant-to-date/), $903k from SFF, and ~$1.1M from other contributors (thank you all!)—we aren’t holding a formal fundraiser this winter. 
Donations are still welcome and appreciated during this transition; but we’ll wait to make our case to donors when our plans are more solid. For now, see our [donate](http://intelligence.org/donate) page if you are interested in supporting our research.\n\n\nBelow, I’ll go into more detail on how our 2020 has gone, and on our plans for the future.\n\n\n \n\n\n### 2017-Initiated Research Directions and Research Plans\n\n\nIn 2017, we [introduced](https://intelligence.org/2017/04/30/2017-updates-and-strategy/) a new set of research directions, which we described and motivated more in “[2018 Update: Our New Research Directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/).” We wrote that we were “seeking entirely new low-level foundations for optimization,” “endeavoring to figure out parts of cognition that can be very transparent as cognition,” and “experimenting with some specific alignment problems.” In December 2019, we noted that we felt we were making “steady progress” on this research, but were [disappointed with the concrete results we’d had to date](https://intelligence.org/2019/12/02/miris-2019-fundraiser/#2).\n\n\nAfter pushing more on these lines of research, MIRI senior staff have become more pessimistic about this approach. MIRI executive director and senior researcher Nate Soares writes:\n\n\n\n> The non-public-facing research I (Nate) was most excited about had a flavor of attempting to develop new pragmatically-feasible foundations for alignable AI, that did not rely on routing through gradient-descent-style machine learning foundations. We had various reasons to hope this could work, despite the obvious difficulties.\n> \n> \n> That project has, at this point, largely failed, in the sense that neither Eliezer nor I have sufficient hope in it for us to continue focusing our main efforts there. 
I’m uncertain whether it failed due to implementation failures on our part, due to the inherent difficulty of the domain, or due to flaws in the underlying theory.\n> \n> \n> Part of the reason we lost hope is a sense that we were moving too slowly, given our sense of how far off AGI may be and our sense of the difficulty of the alignment problem. The field of AI alignment is working under a deadline, such that if work is going sufficiently slowly, we’re better off [giving up](https://www.lesswrong.com/posts/wCqfCLs8z5Qw4GbKS/the-importance-of-saying-oops) and pivoting to new projects that have a real chance of resulting in the first AGI systems being built on alignable foundations.\n> \n> \n> We are currently in a state of regrouping, weighing our options, and searching for plans that we believe may yet have a shot at working.\n> \n> \n> Looking at the field as a whole, MIRI’s research leadership remains quite pessimistic about most alignment proposals that we have seen put forward so far. That is, our update toward being more pessimistic about our recent research directions hasn’t reduced our pessimism about the field of alternatives, and the next directions we undertake are not likely to resemble the directions that are popular outside of MIRI today.\n> \n> \n\n\nMIRI sees the need for a change of course with respect to these projects. At the same time, many (including Nate) still have some hope in the theory underlying this research, and have hope that the projects may be rescued in some way, such as by discovering and correcting failures in how we approached this research. But time spent on rescue efforts trades off against finding better and more promising alignment plans.\n\n\nSo we’re making several changes affecting staff previously focused on this work. Some are departing MIRI for different work, as we shift direction away from lines they were particularly suited for. Some are seeking to rescue the 2017-initiated lines of research. 
Some are pivoting to different experiments and exploration.\n\n\nWe are uncertain about what long-term plans we’ll decide on, and are in the process of generating new possible strategies. Some (non-mutually-exclusive) possibilities include:\n\n\n* We may become a home to diverse research approaches aimed at developing a new path to alignment. Given our increased uncertainty about the best angle of attack, it may turn out to be valuable to house a more diverse portfolio of projects, with some level of intercommunication and cross-pollination between approaches.\n* We may commit to an entirely new approach after a period of exploration, if we can identify one that we believe has a real chance of ensuring positive outcomes from AGI.\n* We may carry forward theories and insights from our 2017-initiated research directions into future plans, in a different form.\n\n\n \n\n\n### Research Write-Ups\n\n\nAlthough our 2017-initiated research directions have been our largest focus over the last few years, we’ve been running many other research programs in parallel with it.\n\n\nThe bulk of this work is [nondisclosed-by-default](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3) as well, but it includes work we’ve written up publicly. (Note that as a rule, this public-facing work is unrepresentative of our research as a whole.)\n\n\nFrom our perspective, our most interesting public work this year is Scott Garrabrant’s Cartesian frames model and Vanessa Kosoy’s work on infra-Bayesianism.\n\n\n[**Cartesian frames**](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames) is a new framework for thinking about agency, intended as a successor to the [cybernetic agent model](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames#6_1__Cybernetic_Agent_Model). 
Whereas the cybernetic agent model assumes as basic an agent and environment persisting across time with a defined and stable I/O channel, Cartesian frames treat these features as more derived and dependent on how one conceptually carves up physical situations.\n\n\nThe Cartesian Frames sequence focuses especially on finding derived, approximation-friendly versions of the notion of “subagent” (previously discussed in “[Embedded Agency](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version#5__Subsystem_alignment)”) and temporal sequence (a source of [decision-theoretic problems](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version#2__Decision_theory) in cases where agents can base their decisions on predictions or proofs about their own actions). The sequence’s final post discusses these and other potential [directions for future work](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT/p/JTzLjARpevuNpGPZm) for the field to build on.\n\n\nIn general, MIRI’s researchers are quite interested in new conceptual frameworks like these, as research progress can often be bottlenecked on our using the wrong lenses for thinking about problems, or on our lack of a simple formalism for putting intuitions to the test.\n\n\nMeanwhile, Vanessa Kosoy and Alex Appel’s [**infra-Bayesianism**](https://www.lesswrong.com/posts/zB4f7QqKhBHa5b37a/introduction-to-the-infra-bayesianism-sequence) is a novel framework for modeling reasoning in cases where the reasoner’s hypothesis space may not include the true environment.\n\n\nThis framework is interesting primarily because it seems applicable to such a wide variety of problems: non-realizability, decision theory, anthropics, embedded agency, reflection, and the synthesis of induction/probability with deduction/logic. 
Vanessa describes infra-Bayesianism as “opening the way towards applying learning theory to many problems which previously seemed incompatible with it.”\n\n\n2020 also saw a [large update](https://www.lesswrong.com/posts/9vYg8MyLL4cMMaPQJ/updates-and-additions-to-embedded-agency) to Scott and Abram’s “[Embedded Agency](https://intelligence.org/2018/10/29/embedded-agents/),” with some discussions clarified and several new subsections added. Additionally, a revised version of Vanessa’s “[Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm](https://arxiv.org/abs/1608.04112),” co-authored with Alex Appel, was published in the *Journal of Applied Logics*.\n\n\nTo give a picture of some of the other research areas we’ve been pushing on, we asked some MIRI researchers and research associates to pick out highlights from their work over the past year, with comments on their selections.\n\n\nAbram Demski highlights the following write-ups:\n\n\n* “[An Orthodox Case Against Utility Functions](https://www.lesswrong.com/posts/A8iGaZ3uHNNGgJeaD/an-orthodox-case-against-utility-functions)” — “Although in some sense this is a small technical point, it is indicative of a shift in perspective in some recent agent foundations research which I think is quite important.”\n* “[Radical Probabilism](https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism)” — “Again, although one could see this as merely an explanation of the older [logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) result, I think it points at an important shift in perspective.”\n* “[Learning Normativity: A Research Agenda](https://www.lesswrong.com/posts/2JGu9yxiJkoGdQR4s/learning-normativity-a-research-agenda)” — “In a sense, this research agenda clarifies the shift in perspective which the above two posts were communicating, although I haven’t tied everything together yet.”\n* “[Dutch-Booking CDT: Revised 
Argument](https://www.lesswrong.com/posts/X7k23zk9aBjjpgLd3/dutch-booking-cdt-revised-argument)” — “To my eye, this is a large-ish decision theory result.”\n\n\nEvan Hubinger summarizes his public research from the past year:\n\n\n* “[An Overview of 11 Proposals for Building Safe Advanced AI](https://arxiv.org/abs/2012.07532)” — “Probably my biggest project this year, this paper is my attempt at a unified explanation of the current major leading prosaic alignment proposals. The paper includes an exploration of each proposal’s pros and cons from the perspective of outer alignment, inner alignment, training competitiveness, and performance competitiveness.”\n* “I started mentoring Adam Shimi and Mark Xu this year, helping them start spinning up work in AI safety. Concrete things that came out of this include Adam Shimi’s ‘[Universality Unwrapped](https://www.alignmentforum.org/posts/farherQcqFQXqRcvv/universality-unwrapped)’ and Mark Xu’s ‘[Does SGD Produce Deceptive Alignment?](https://www.alignmentforum.org/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment)’”\n* “I spent a lot of time this year thinking about [AI safety via debate](https://openai.com/blog/debate/), which resulted in two new alternative debate proposals: ‘[AI Safety via Market Making](https://www.alignmentforum.org/posts/YWwzccGbcHMJMpT45/ai-safety-via-market-making)’ and ‘[Synthesizing Amplification and Debate](https://www.alignmentforum.org/posts/dJSD5RK6Qoidb3QY5/synthesizing-amplification-and-debate).’”\n* “I spent some time looking at different alignment proposals from a computational complexity standpoint, resulting in ‘[Alignment Proposals and Complexity Classes](https://www.alignmentforum.org/posts/N64THGX7XNCqRtvPG/alignment-proposals-and-complexity-classes)’ and ‘[Weak HCH Accesses EXP](https://www.alignmentforum.org/posts/CtGH3yEoo4mY2taxe/weak-hch-accesses-exp).’”\n* “‘[Outer Alignment and Imitative 
Amplification](https://www.alignmentforum.org/posts/33EKjmAdKFn3pbKPJ/outer-alignment-and-imitative-amplification)’ makes the case for why imitative amplification is outer aligned; ‘[Learning the Prior and Generalization](https://www.alignmentforum.org/posts/YhQr36yGkhe6x8Fyn/learning-the-prior-and-generalization)’ provides my perspective on Paul’s new ‘learning the prior’ approach; and ‘[Clarifying Inner Alignment Terminology](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology)’ revisits terminology from ‘[Risks from Learned Optimization](https://arxiv.org/abs/1906.01820).’”\n\n\nEarlier this year, Buck Shlegeris ([link](https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/)) and Evan Hubinger ([link](https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/)) also appeared on the Future of Life Institute’s AI Alignment Podcast. 
Buck also gave a talk at Stanford: “[My Personal Cruxes for Working on AI Safety](https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety).”\n\n\nLastly, Future of Humanity Institute researcher and MIRI research associate Stuart Armstrong summarizes his own research highlights:\n\n\n* “[Pitfalls of Learning a Reward Function Online](https://www.ijcai.org/Proceedings/2020/221),” working with DeepMind’s Jan Leike, Laurent Orseau, and Shane Legg — “This shows how agents can manipulate a “learning” process, the conditions that make that learning actually uninfluenceable, and some methods for turning influenceable learning processes into uninfluenceable ones.”\n* “[Model Splintering](https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1)” — “Here I argue that a lot of AI safety problems can be reduced to the same problem: that of dealing with what happens when you move out of distribution from the training data. I argue that a principled way of dealing with these “model splinterings” is necessary to get safe AI, and sketch out some examples.”\n* “[Syntax, Semantics, and Symbol Grounding, Simplified](https://www.lesswrong.com/posts/joPoxBpZjLNx8MKaF/syntax-semantics-and-symbol-grounding-simplified)” — “Here I argue that symbol grounding is a practical, necessary thing, not an abstract philosophical concept.”\n\n\n \n\n\n### Process Improvements and Plans\n\n\nGiven the unusual circumstances brought on by the COVID-19 pandemic, in 2020 MIRI decided to run various experiments to see if we could improve our researchers’ productivity while our Berkeley office was unavailable. 
In the process, a sizable subset of our research team has found good modifications to our work environment that we aim to maintain and expand on.\n\n\nMany of our research staff who spent this year in live-work quarantine groups in relatively rural areas in response to the COVID-19 pandemic have found surprisingly large benefits from living in a quieter, lower-density area together with a number of other researchers. Coordination and research have felt faster at a meta-level, with shorter feedback cycles, more efforts on more cruxy experiments, and more resulting pivots. Our biggest such pivot has been away from our 2017-initiated research directions, as described above.\n\n\nSeparately, MIRI staff have been weighing the costs and benefits of possibly moving somewhere outside the Bay Area for several years—taking into account the housing crisis and other governance failures, advantages and disadvantages of the local culture, tail risks of things taking a turn for the worse in the future, and other [factors](https://www.lesswrong.com/posts/FghubkDy6Dp6mnxk7/the-rationalist-community-s-location-problem?commentId=MPM6gaXJiCnGgEmDJ).\n\n\nPartly as a result of these considerations, and partly because it’s easier to move when many of us have already relocated this year due to COVID-19, MIRI is considering relocating away from Berkeley. As we weigh the options, a particularly large factor in our considerations is whether our researchers expect the location, living situation, and work setup to feel good and comfortable, as we generally expect this to result in improved research progress. Increasingly, this factor is pointing us towards moving someplace new.\n\n\nMany at MIRI have noticed in the past that there are certain social settings, such as small effective altruism or alignment research retreats, that seem to spark an unusually high density of unusually productive conversations. 
Much of the energy and vibrancy in such retreats presumably stems from their novelty and their time-boxed nature. However, we suspect this isn’t the only reason these events tend to be dense and productive, and we believe that we may be able to create a space that has some of these features every day.\n\n\nThis year, a number of our researchers have indeed felt that our new work set-up during the pandemic has a lot of this quality. We’re therefore very eager to see if we can modify MIRI as a workplace so as to keep this feature around, or further augment it.\n\n\n \n\n\nOur year, then, has been characterized by some significant shifts in our thinking about research practices and which research directions are most promising.\n\n\nAlthough we’ve been disappointed by our level of recent concrete progress toward understanding how to align AGI-grade optimization, we plan to continue capitalizing on MIRI’s strong pool of talent and accumulated thinking about alignment as we look for new and better paths forward. We’ll provide more updates about our new strategy as our plans solidify.\n\n\nThe post [2020 Updates and Strategy](https://intelligence.org/2020/12/21/2020-updates-and-strategy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-12-21T23:50:18Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "53e1ea0d31a717fc5d28d0cbcc1b9855", "title": "November 2020 Newsletter", "url": "https://intelligence.org/2020/11/30/november-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "MIRI researcher Scott Garrabrant has completed his [Cartesian Frames](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT) sequence. 
Scott also covers the first two posts' contents [in video form](https://www.youtube.com/watch?v=H1tJdaCvcck).\n\n\n\n#### Other MIRI updates\n\n\n* Contrary to my [previous announcement](https://intelligence.org/2020/10/23/october-2020-newsletter/), MIRI won’t be running a formal fundraiser this year, though we’ll still be participating in Giving Tuesday and other matching opportunities. To donate and get information on tax-advantaged donations, employer matching, etc., see [intelligence.org/donate](http://intelligence.org/donate). We’ll also be doing an end-of-the-year update and retrospective in the next few weeks.\n* Facebook's Giving Tuesday matching event takes place this Tuesday (Dec. 1) at 5:00:00am PT.  Facebook will 100%-match the first $2M donated, something that will plausibly take only 2–3 seconds. To get 100%-matched, then, it's even more important than [last year](https://intelligence.org/2019/11/28/giving-tuesday-2019/) to start clicking at 4:59:59AM PT. Facebook will then 10%-match the next $50M of donations that are made. 
Details on optimizing your donation(s) to [MIRI's Facebook Fundraiser](https://www.facebook.com/donate/405215240508085/) can be found at [EA Giving Tuesday](https://eagivingtuesday.org/), a [Rethink Charity](https://rtcharity.org/) project.\n* [Video discussion](https://www.youtube.com/watch?v=JVVj9Dui9es): Stuart Armstrong, Scott Garrabrant, and the [AI Safety Reading Group](https://aisafety.com/reading-group/) discuss Stuart's [If I Were A Well-Intentioned AI…](http://lesswrong.com/posts/gzWb5kWwzhdaqmyTt/if-i-were-a-well-intentioned-ai-i-image-classifier).\n* MIRI research associate Vanessa Kosoy raises questions about [AI information hazards](https://www.lesswrong.com/posts/3D3DsX5rMbk3jEZ5h).\n* Buck Shlegeris argues that [we're likely at the “hinge of history”](https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential) (assuming we aren't living in a simulation).\n* To make it easier to find and cite old versions of MIRI papers (especially ones that aren't on arXiv), we've collected links to obsolete versions on [intelligence.org/revisions](https://intelligence.org/revisions/).\n\n\n#### News and links\n\n\n* CFAR's Anna Salamon asks: [Where do (did?) stable, cooperative institutions come from?](https://www.lesswrong.com/posts/R4FX6wDmppvZ2JqpB/where-do-did-stable-cooperative-institutions-come-from)\n* The Center for Human-Compatible AI is accepting applications for [research internships](https://humancompatible.ai/jobs#internship) through Dec. 13.\n* [AI Safety (virtual) Camp](https://aisafety.camp/) is accepting applications through Dec. 
15.\n* The 4th edition of *Artificial Intelligence: A Modern Approach* is out, with [expanded discussion of the alignment problem](https://www.lesswrong.com/posts/BNJx2CqfXyiusoJcK/artificial-intelligence-a-modern-approach-4th-edition-on-the).\n* DeepMind's Rohin Shah [reviews](https://www.lesswrong.com/posts/gYfgWSxCpFdk2cZfE/the-alignment-problem-machine-learning-and-human-values) Brian Christian's new book [*The Alignment Problem: Machine Learning and Human Values*](https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/153669519X).\n* Daniel Filan and Rohin Shah discuss [security mindset and takeoff speeds](https://www.lesswrong.com/posts/Lfk2FXBwrpoM6Jm7p).\n* *Fortune* [profiles](https://fortune.com/2020/11/13/jaan-tallinn-ai-safety-bitcoin-cryptocurrency-elon-musk/) existential risk philanthropist Jaan Tallinn.\n\n\n\nThe post [November 2020 Newsletter](https://intelligence.org/2020/11/30/november-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-11-30T18:19:44Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "2d7770bf594d8018c6dfded18b86d085", "title": "October 2020 Newsletter", "url": "https://intelligence.org/2020/10/23/october-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "Starting today, Scott Garrabrant has begun posting [**Cartesian Frames**](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames), a sequence introducing a new conceptual framework Scott has found valuable for thinking about agency.\nIn Scott's words: Cartesian Frames are “applying *reasoning* like Pearl's to *objects* like game theory's, with a *motivation* like Hutter's”.\n\n\nScott will be giving an online talk introducing Cartesian frames this Sunday at 12pm PT on Zoom ([link](https://us02web.zoom.us/j/81328320546)). 
He'll also be hosting office hours on Gather.Town the next four Sundays; [see here for details](https://www.lesswrong.com/posts/N4uDrgFoZKJXhnHLw/sunday-october-25-12-00pm-pt-scott-garrabrant-on-cartesian).\n\n\n#### Other MIRI updates\n\n\n* Abram Demski discusses the problem of [comparing utilities](https://www.lesswrong.com/posts/cYsGrWEzjb324Zpjx), highlighting some non-obvious implications.\n* In March 2020, the US Congress passed the [CARES Act](https://www.forbes.com/sites/berniekent/2020/04/03/giving-more-than-60-of-income-to-charity-cares-act-says-deduct-it/), which changes the tax advantages of donations to qualifying NPOs like MIRI in 2020. Changes include:\n\t+ 1. A new “above the line” tax deduction: up to $300 per taxpayer ($600 for a married couple) in annual charitable contributions for people who take the standard deduction. Donations to donor-advised funds (DAFs) do not qualify for this new deduction.\n\t+ 2. New charitable deduction limits: Taxpayers who itemize their deductions can deduct much greater amounts of their contributions. Individuals can elect to deduct donations up to 100% of their 2020 AGI — up from 60% previously. This higher limit also does not apply to donations to DAFs.\n As usual, consult with your tax advisor for more information.\n* Our fundraiser this year will start on November 29 (two days before Giving Tuesday) and finish on January 2. We're hoping that having our fundraiser straddle 2020 and 2021 will give people more flexibility given the unusual tax law changes above.\n* I'm happy to announce that MIRI has received a donation of $246,435 from an anonymous returning donor. 
Our thanks to the donor, and to [Effective Giving UK](https://www.effectivegiving.org/) for facilitating this donation!\n\n\n#### News and links\n\n\n* Richard Ngo tries to provide a relatively careful and thorough version of the standard argument for worrying about AGI risk: [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ). See also Rohin Shah's [summary](https://mailchi.mp/051273eb96eb/an-122arguing-for-agi-driven-existential-risk-from-first-principles).\n* The AI Alignment Podcast [interviews Andrew Critch](https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/?cn-reloaded=1) about his recent overview paper, “[AI Research Considerations for Human Existential Safety](https://arxiv.org/pdf/2006.04948.pdf).”\n\n\n\nThe post [October 2020 Newsletter](https://intelligence.org/2020/10/23/october-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-10-23T14:15:19Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "f127c17aca9d788c6ec8cdd4b90251e8", "title": "September 2020 Newsletter", "url": "https://intelligence.org/2020/09/10/september-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "Abram Demski and Scott Garrabrant have made [a major update](https://www.lesswrong.com/posts/9vYg8MyLL4cMMaPQJ/updates-and-additions-to-embedded-agency) to \"[Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN)\", with new discussions of ε-exploration, Newcomblike problems, reflective oracles, logical uncertainty, Goodhart's law, and predicting rare catastrophes, among other topics.\n\n\n\nAbram has also written an overview of what good reasoning looks like in the absence of Bayesian updating: [Radical Probabilism](https://www.lesswrong.com/s/HmANELvkhAZ9eDxFS/p/xJyY5QkQvNJpZLJRo). 
One recurring theme:\n\n\n\n> [I]n general (i.e., *without any special prior which does guarantee convergence for restricted observation models*), a Bayesian relies on a realizability (aka grain-of-truth) assumption for convergence, as it does for some other nice properties. Radical probabilism demands these properties without such an assumption.\n> \n> \n> [… C]onvergence points at a notion of \"objectivity\" for the radical probabilist. Although the individual updates a radical probabilist makes can go all over the place, the beliefs must eventually settle down to something. The goal of reasoning is to settle down to that answer as quickly as possible.\n> \n> \n\n\nMeanwhile, [Infra-Bayesianism](https://www.lesswrong.com/posts/zB4f7QqKhBHa5b37a/introduction-to-the-infra-bayesianism-sequence) is a new formal framework for thinking about optimal reasoning without requiring a reasoner's true environment to be in its hypothesis space. Abram comments: \"Alex Appel and Vanessa Kosoy have been working hard at 'Infra-Bayesianism', a new approach to RL which aims to make it easier (ie, possible) to prove safety-relevant theorems (and, also, a new approach to Bayesianism more generally).\"\n\n\n#### Other MIRI updates\n\n\n* Abram Demski writes a parable on the differences between logical inductors and Bayesians: [The Bayesian Tyrant](https://www.lesswrong.com/posts/4tke3ibK9zfnvh9sE).\n* Building on the [selection vs. control](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control) distinction, Abram contrasts [\"mesa-search\" and \"mesa-control\"](https://www.lesswrong.com/posts/WmBukJkEFM72Xr397).\n\n\n#### News and links\n\n\n* From OpenAI's Stiennon et al.: [Learning to Summarize with Human Feedback](https://openai.com/blog/learning-to-summarize-with-human-feedback/). 
MIRI researcher Eliezer Yudkowsky [comments](https://twitter.com/ESYudkowsky/status/1301954347933208578): \n\n\n> A very rare bit of research that is directly, straight-up relevant to real alignment problems! They trained a reward function on human preferences *and then* measured how hard you could optimize against the trained function before the results got actually worse.\n> \n> \n> [… Y]ou can ask for results as good as the best 99th percentile of rated stuff in the training data (a la Jessica Taylor's [quantilization](https://intelligence.org/files/QuantilizersSaferAlternative.pdf) idea).  Ask for things the trained reward function rates as \"better\" than that, and [it starts to find \"loopholes\" as seen from outside the system](https://arbital.com/p/goodharts_curse/); places where the trained reward function poorly matches your real preferences, instead of places where your real preferences would rate high reward.\n> \n>\n* Chi Nguyen writes up [an introduction to Paul Christiano's iterated amplification research agenda](https://www.lesswrong.com/posts/PT8vSxsusqWuN7JXp#fnref-7XLtvdgT8DkovpShi-2) that seeks to be the first such resource that is \"both easy to understand and [gives] a complete picture\". 
The post includes inline comments by Christiano.\n* Forecasters share [visualizations of their AI timelines](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k) on LessWrong.\n\n\n\nThe post [September 2020 Newsletter](https://intelligence.org/2020/09/10/september-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-09-11T01:25:23Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "c04e870435488c28bc7b1d792bb51f4e", "title": "August 2020 Newsletter", "url": "https://intelligence.org/2020/08/13/august-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "#### MIRI updates\n\n\n* Three questions from MIRI's Abram Demski: [What does it mean to apply decision theory?](https://www.lesswrong.com/posts/wgdfBtLmByaKYovYe), [How “honest” is GPT-3?](https://www.lesswrong.com/posts/c3RsLTcxrvH4rXpBL), and [How should AI debate be judged?](https://www.lesswrong.com/posts/m7oGxvouzzeQKiGJH)\n* A transcript from MIRI researcher Scott Garrabrant: [What Would I Do? 
Self-Prediction in Simple Algorithms](https://www.lesswrong.com/posts/PiXS9kE4qX68KveCt/what-would-i-do-self-prediction-in-simple-algorithms).\n* MIRI researcher Buck Shlegeris reviews the debate on [what the history of nuclear weapons implies about humanity's ability to coordinate](https://www.lesswrong.com/posts/y3jDSoTTdBD9Nj3Gx/how-good-is-humanity-at-coordination).\n* From MIRI's Evan Hubinger: [Learning the Prior and Generalization](https://www.lesswrong.com/posts/YhQr36yGkhe6x8Fyn) and [Alignment Proposals and Complexity Classes](https://www.lesswrong.com/posts/N64THGX7XNCqRtvPG).\n* Rafael Harth's [Inner Alignment: Explain Like I'm 12 Edition](https://www.lesswrong.com/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition) summarizes the concepts and takeaways from “[Risks from Learned Optimization](https://intelligence.org/learned-optimization/)”.\n* Issa Rice [reviews discussion to date](https://www.lesswrong.com/posts/BGxTpdBGbwCWrGiCL) on MIRI's research focus, “To what extent is it possible to have a precise theory of rationality?”, and the relationship between [deconfusion](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) research and safety outcomes. 
(Plus [a short reply](https://www.lesswrong.com/posts/BGxTpdBGbwCWrGiCL?commentId=xTpGttMrLwfXEyTeW).)\n* “Pitfalls of Learning a Reward Function Online” ([IJCAI paper](https://www.ijcai.org/Proceedings/2020/221), [LW summary](https://www.lesswrong.com/posts/LpjjWDBXr88gzcYK2/learning-and-manipulating-learning)): FHI researcher and MIRI research associate Stuart Armstrong, with DeepMind's Jan Leike, Laurent Orseau, and Shane Legg, explore ways to discourage agents from manipulating their reward signal to be easier to optimize.\n\n\n#### News and links\n\n\n* From Paul Christiano: [Learning the Prior](https://www.alignmentforum.org/posts/SL9mKhgdmDKXmxwE4/learning-the-prior) and [Better Priors as a Safety Problem](https://www.alignmentforum.org/posts/roA83jDvq7F2epnHK/better-priors-as-a-safety-problem).\n* From Victoria Krakovna: [Tradeoff Between Desirable Properties for Baseline Choices in Impact Measures](https://www.alignmentforum.org/posts/nLhfRpDutEdgr6PKe/tradeoff-between-desirable-properties-for-baseline-choices).\n* Ben Pace [summarizes Christiano's “What Failure Looks Like” post](https://www.lesswrong.com/posts/6jkGf5WEKMpMFXZp2) and the resultant discussion.\n* Kaj Sotala collects recent examples of [experiences from people working with GPT-3](https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results).\n\n\n\nThe post [August 2020 Newsletter](https://intelligence.org/2020/08/13/august-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-08-13T19:20:03Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "0aad105d838a59710c0f2c5be85255e0", "title": "July 2020 Newsletter", "url": "https://intelligence.org/2020/07/08/july-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "After completing a study fellowship at MIRI that he began in late 2019, Blake Jones is joining the MIRI research team full-time! 
Blake joins MIRI after a long career working on low-level software systems such as the Solaris operating system and the Oracle database.\n#### Other MIRI updates\n\n\n* MIRI researcher Evan Hubinger goes on the FLI podcast ([transcript/discussion](https://www.alignmentforum.org/posts/qZGoHkRgANQpGHWnu/evan-hubinger-on-inner-alignment-outer-alignment-and), [audio](https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/)) to discuss “inner alignment, outer alignment, and proposals for building safe advanced AI”.\n* A revised version of Vanessa Kosoy’s “[Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm](https://arxiv.org/abs/1608.04112),” co-authored with Alex Appel, has been accepted to the *Journal of Applied Logics*.\n* From MIRI researcher Abram Demski: [Dutch-Booking CDT: Revised Argument](https://www.lesswrong.com/posts/X7k23zk9aBjjpgLd3) argues that “causal” theories (ones using counterfactuals to evaluate expected value) must behave the same as theories using conditional probabilities. [Relating HCH and Logical Induction](https://www.lesswrong.com/posts/R3HAvMGFNJGXstckQ) discusses amplification in the context of reflective oracles. 
And [Radical Probabilism](https://www.lesswrong.com/posts/ZM63n353vh2ag7z4p/radical-probabilism-transcript) reviews the surprising gap between Dutch-book arguments and Bayes' rule.\n\n\n#### News and links\n\n\n* Alex Flint summarizes the Center for Human-Compatible AI's [assistance games research program](https://www.alignmentforum.org/posts/qPoaA5ZSedivA4xJa/my-take-on-chai-s-research-agenda-in-under-1500-words).\n* CHAI’s Andrew Critch and MILA’s David Krueger release “[AI Research Considerations for Human Existential Safety (ARCHES)](https://arxiv.org/pdf/2006.04948.pdf)”, a review of 29 AI (existential) safety research directions, each with an illustrative analogy, examples of current work and potential synergies between research directions, and discussion of ways the research approach might lower (or raise) existential risk.\n* OpenAI’s Danny Hernandez and Tom Brown [present evidence](https://openai.com/blog/ai-and-efficiency/) that “for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency”.\n* DeepMind's Victoria Krakovna shares her [takeaways from the COVID-19 pandemic for slow-takeoff scenarios](https://www.lesswrong.com/posts/wTKjRFeSjKLDSWyww/possible-takeaways-from-the-coronavirus-pandemic-for-slow-ai).\n* AI Impacts’ Daniel Kokotajlo discusses [possible changes the world might undergo before reaching AGI](https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities).\n* 80,000 Hours describes [careers they view as promising but haven’t written up as priority career paths](https://forum.effectivealtruism.org/posts/6x2MjPXhpPpnatJFQ/some-promising-career-ideas-beyond-80-000-hours-priority), including information security ([previously discussed here](https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/information-security-careers-for-gcr-reduction)).\n\n\n\nThe post [July 2020 Newsletter](https://intelligence.org/2020/07/08/july-2020-newsletter/) appeared 
first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-07-08T20:18:02Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "fb463b0194babef03388089ce703f271", "title": "June 2020 Newsletter", "url": "https://intelligence.org/2020/06/08/june-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "MIRI researcher Evan Hubinger reviews “[11 different proposals for building safe advanced AI under the current machine learning paradigm](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai)”, comparing them on outer alignment, inner alignment[,](https://arxiv.org/abs/1906.01820) training competitiveness, and performance competitiveness. \n#### Other updates\n\n\n* We keep being amazed by new shows of support ⁠— following our [last](https://intelligence.org/2020/04/27/miris-largest-grant-to-date/) [two](https://intelligence.org/2020/05/29/may-2020-newsletter/) announcements, MIRI has received a donation from another anonymous donor totaling ~$265,000 in euros, facilitated by [Effective Giving UK](https://www.effectivegiving.org/) and the [Effective Altruism Foundation](https://ea-foundation.org/). Massive thanks to the donor for their generosity, and to both organizations for their stellar support for MIRI and other longtermist organizations!\n* Hacker News [discusses](https://news.ycombinator.com/item?id=23401328) Eliezer Yudkowsky's [There's No Fire Alarm for AGI](https://intelligence.org/2017/10/13/fire-alarm/).\n* MIRI researcher Buck Shlegeris talks about [deference and inside-view models](https://forum.effectivealtruism.org/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models) on the EA Forum.\n* OpenAI [unveils GPT-3](https://arxiv.org/pdf/2005.14165.pdf), a massive 175-*billion* parameter language model that can figure out how to solve a variety of problems without task-specific training or fine-tuning. 
Gwern Branwen's pithy [summary](https://twitter.com/gwern/status/1267215588214136833): \n\n\n> GPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning—and the scaling curves *still* are not bending!\n> \n> \n\n\nFurther discussion [by Branwen](https://www.gwern.net/newsletter/2020/05#gpt-3) and [by Rohin Shah](https://www.lesswrong.com/posts/D3hP47pZwXNPRByj8/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals).\n* Stuart Russell gives this year's Turing Lecture online, discussing “[provably beneficial AI](https://www.youtube.com/watch?v=_H87qqT8pdY)”.\n\n\n\nThe post [June 2020 Newsletter](https://intelligence.org/2020/06/08/june-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-06-09T01:40:53Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "1356f2ee34b50e9fd3d97b13e046d6df", "title": "May 2020 Newsletter", "url": "https://intelligence.org/2020/05/29/may-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "MIRI has received an anonymous donation of ~$275,000 in euros, facilitated by [Effective Giving UK](https://www.effectivegiving.org/). Additionally, the Survival and Flourishing Fund, working with funders Jaan Tallinn and Jed McCaleb, has announced [$340,000 in grants to MIRI](http://survivalandflourishing.fund/sff-2020-h1). 
SFF is a new fund that is [taking over much of BERI's grantmaking work](http://existence.org/tallinn-grants-future/).\n\n\nTo everyone involved in both decisions to support our research, thank you!\n\n\n#### Other updates\n\n\n* [An Orthodox Case Against Utility Functions](https://www.lesswrong.com/posts/A8iGaZ3uHNNGgJeaD): MIRI researcher Abram Demski makes the case against utility functions that rely on a microphysical “view from nowhere”.\n* Stuart Armstrong’s “[If I were a well-intentioned AI…](https://www.lesswrong.com/s/knbhjv252HshMSwpt/p/gzWb5kWwzhdaqmyTt)” sequence looks at alignment problems from the perspective of a well-intentioned but ignorant agent.\n* AI Impacts' Asya Bergal summarizes [takeaways from “safety-by-default” researchers](https://aiimpacts.org/takeaways-from-safety-by-default-interviews/).\n* Agarwal and Norouzi report [improvements in offline RL](https://ai.googleblog.com/2020/04/an-optimistic-perspective-on-offline.html).\n* Daniel Kokotajlo’s [Three Kinds of Competitiveness](https://www.lesswrong.com/posts/sD6KuprcS3PFym2eM/three-kinds-of-competitiveness) distinguishes performance-competitive, cost-competitive, and date-competitive AI systems.\n* The *Stanford Encyclopedia of Philosophy*'s new [Ethics of AI and Robotics](https://plato.stanford.edu/entries/ethics-ai/) article includes a discussion of existential risk from AGI.\n\n\n\nThe post [May 2020 Newsletter](https://intelligence.org/2020/05/29/may-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-05-29T18:00:01Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "2b3af88d1d86632218cd7d6a949752f3", "title": "April 2020 Newsletter", "url": "https://intelligence.org/2020/05/01/april-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "[MIRI has been awarded its largest grant to date](https://intelligence.org/2020/04/27/miris-largest-grant-to-date/) — $7,703,750 split over two years 
from Open Philanthropy, in partnership with Ben Delo, co-founder of the cryptocurrency trading platform BitMEX!\n(**Update:** Open Philanthropy decided not to move forward with its [partnership with Ben Delo](https://openphilanthropy.org/research/co-funding-partnership-with-ben-delo/). This doesn’t affect the size of the grant we received, but means that other funders provided the funds for Open Phil’s grant.)\n\n\nWe have also been awarded generous grants by the Berkeley Existential Risk Initiative ($300,000) and the Long-Term Future Fund ($100,000). Our thanks to everybody involved!\n\n\n#### Other updates\n\n\n* Buck Shlegeris of MIRI and Rohin Shah of CHAI discuss Rohin's 2018–2019 overview of technical AI alignment research [on the AI Alignment Podcast](https://www.lesswrong.com/posts/6skeZgctugzBBEBw3/ai-alignment-podcast-an-overview-of-technical-ai-alignment).\n* From MIRI's Abram Demski: [Thinking About Filtered Evidence Is (Very!) Hard](https://www.lesswrong.com/posts/fhJkQo34cYw6KqpH3/thinking-about-filtered-evidence-is-very-hard) and [Bayesian Evolving-to-Extinction](https://www.lesswrong.com/posts/u9Azdu6Z7zFAhd4rK/bayesian-evolving-to-extinction). And from Evan Hubinger: [Synthesizing Amplification and Debate](https://www.lesswrong.com/posts/dJSD5RK6Qoidb3QY5/synthesizing-amplification-and-debate).\n* From OpenAI's Beth Barnes, Paul Christiano, Long Ouyang, and Geoffrey Irving: [Progress on AI Safety via Debate](https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1).\n* [Zoom In: An Introduction to Circuits](https://www.lesswrong.com/posts/MG4ZjWQDrdpgeu8wG/zoom-in-an-introduction-to-circuits): OpenAI's Olah, Cammarata, Schubert, Goh, Petrov, and Carter argue, “Features are the fundamental unit of neural networks. They correspond to directions [in the space of neuron activations]. […] Features are connected by weights, forming circuits. 
[…] Analogous features and circuits form across models and tasks.”\n* [DeepMind's Agent57](https://www.lesswrong.com/posts/ygb6ryKcScJxhmwQo/atari-early) appears to meet one of the AI benchmarks in [AI Impacts' 2016 survey](https://arxiv.org/abs/1705.08807), “outperform professional game testers on all Atari games using no game specific knowledge”, earlier than NeurIPS/ICML authors predicted.\n* From DeepMind Safety Research: [Specification gaming: the flip side of AI ingenuity](https://medium.com/@deepmindsafetyresearch/specification-gaming-the-flip-side-of-ai-ingenuity-c85bdb0deeb4).\n\n\n\nThe post [April 2020 Newsletter](https://intelligence.org/2020/05/01/april-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-05-01T18:26:09Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "c6ac41462bdfa49c796285f9769b3280", "title": "MIRI’s largest grant to date!", "url": "https://intelligence.org/2020/04/27/miris-largest-grant-to-date/", "source": "miri", "source_type": "blog", "text": "A big announcement today: MIRI has been awarded a two-year **[$7,703,750 grant](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020)** by Open Philanthropy — our largest grant to date. 
In combination with the ~$1.06M Open Philanthropy is also disbursing to MIRI this year (the second half of their [2019 grant](https://intelligence.org/2019/04/01/new-grants-open-phil-beri/)), this amounts to $4.38M per year over two years, or roughly 60% of our predicted 2020–2021 budgets.\n\n\nWhile ~$6.24M of Open Philanthropy’s new grant comes from [their main funders](https://www.openphilanthropy.org/about/who-we-are), $1.46M was made possible by [Open Philanthropy’s new partnership with Ben Delo](https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo), co-founder of the cryptocurrency trading platform BitMEX. Ben Delo has teamed up with Open Philanthropy to support their long-termist grantmaking, which includes (quoting Open Philanthropy):\n\n\n\n> reducing potential risks from advanced artificial intelligence, furthering biosecurity and pandemic preparedness, and other initiatives to combat global catastrophic risks, as well as much of the work we fund on effective altruism.\n> \n> \n\n\nWe’re additionally happy to announce a **$300,000 grant** from the [Berkeley Existential Risk Initiative](http://existence.org/). I’ll note that at the time of our [2019 fundraiser](https://intelligence.org/2019/12/02/miris-2019-fundraiser/), we expected to receive a grant from BERI in early 2020, and incorporated this into our reserves estimates. However, we predicted the grant size would be $600k; now that we know the final grant amount, that estimate should be $300k lower.\n\n\nFinally, MIRI has been awarded a **[$100,000 grant](https://app.effectivealtruism.org/funds/far-future/payouts/3waQ7rp3Bfy4Lwr5sZP9TP)** by the Effective Altruism Funds [Long-Term Future Fund](https://app.effectivealtruism.org/funds/far-future), managed by the Centre for Effective Altruism. 
The fund plans to release a write-up describing the reasoning for their new round of grants in a couple of weeks.\n\n\nOur thanks to Open Phil, Ben Delo and [Longview Philanthropy](http://www.longview.org/) (Ben Delo’s philanthropic advisor, formerly known as Effective Giving UK), BERI, and the Long-Term Future Fund for this amazing support! Going into 2020–2021, these new grants put us in an unexpectedly good position to grow and support our research team. To learn more about our growth plans, see our [2019 fundraiser post](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) and our [2018 strategy update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/).\n\n\n\n\n---\n\n\n**Update:** Open Philanthropy decided not to move forward with its [partnership with Ben Delo](https://openphilanthropy.org/research/co-funding-partnership-with-ben-delo/). This doesn’t affect the size of the grant we received, but means that other funders provided the funds for Open Phil’s grant.\n\n\nThe post [MIRI’s largest grant to date!](https://intelligence.org/2020/04/27/miris-largest-grant-to-date/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-04-27T14:51:13Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "2cbe16fd9ee08d8d52f8ad8bedc378b9", "title": "March 2020 Newsletter", "url": "https://intelligence.org/2020/04/01/march-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "As the COVID-19 pandemic progresses, the LessWrong team has put together [a database of resources](https://www.lesswrong.com/coronavirus-link-database) for learning about the disease and staying updated, and 80,000 Hours has a new write-up on [ways to help in the fight against COVID-19](https://80000hours.org/articles/covid-19-what-should-you-do/). 
In my non-MIRI time, I've been keeping my own quick and informal notes on various sources' COVID-19 recommendations [in this Google Doc](https://docs.google.com/document/d/10MFFoUMYHqGB3cLCxuqhsM2kahtOQVybPhLG-YNZdaI/edit). Stay safe out there!\n\n\n\n#### Updates\n\n\n* [My personal cruxes for working on AI safety](https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety): a talk transcript from MIRI researcher Buck Shlegeris.\n* Daniel Kokotajlo of AI Impacts discusses [Cortés, Pizarro, and Afonso as Precedents for Takeover](https://www.lesswrong.com/posts/ivpKSjM4D6FbqF4pZ/cortes-pizarro-and-afonso-as-precedents-for-takeover).\n* O'Keefe, Cihon, Garfinkel, Flynn, Leung, and Dafoe's “[The Windfall Clause](https://www.fhi.ox.ac.uk/windfallclause/)” proposes “an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits” that result from “fundamental, economically transformative breakthroughs” like AGI.\n* Microsoft announces the 17-billion-parameter language model [Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/).\n* Oren Etzioni thinks AGI is [too far off](https://intelligence.org/2017/10/13/fire-alarm/) to deserve much thought, and cites Andrew Ng's “overpopulation on Mars” metaphor approvingly — but he's also moving the debate in a very positive direction [by listing specific observations that would make him change his mind](https://www.technologyreview.com/s/615264/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai/).\n\n\n\nThe post [March 2020 Newsletter](https://intelligence.org/2020/04/01/march-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-04-01T16:28:35Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "adf114f447c8d15ef6c38bb76e0ce962", "title": "February 2020 
Newsletter", "url": "https://intelligence.org/2020/02/23/february-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* Colm [reviews our 2019 fundraiser](https://intelligence.org/2020/02/13/our-2019-fundraiser-review/): taking into account matching, we received a total of $601,120 from 250+ donors. Our thanks again for all the support!\n* Evan Hubinger's [Exploring Safe Exploration](https://www.lesswrong.com/posts/NBffcjqm2P4dNbjrE) clarifies points he raised in [Safe Exploration and Corrigibility](https://www.lesswrong.com/posts/87Y7w73phjBxnPyPD/safe-exploration-and-corrigibility). The issues raised here are somewhat subtler than may be immediately apparent, since we tend to discuss things in ways that collapse the distinctions Evan is making.\n* Logician Arthur Milchior [reviews the AIRCS workshop and MIRI's application process](https://www.lesswrong.com/posts/EYhWxoaCK2YSpTei9/aircs-workshop-how-i-failed-to-be-recruited-at-miri) based on his first-hand experience with both. See also [follow-up discussion](https://www.lesswrong.com/posts/EYhWxoaCK2YSpTei9/aircs-workshop-how-i-failed-to-be-recruited-at-miri#comments) with MIRI and CFAR staff.\n* Rohin Shah posts an in-depth [2018–19 review of the field of AI alignment](https://www.lesswrong.com/posts/dKxX76SCfCvceJXHv).\n* From Shevlane and Dafoe's new “[The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?](https://arxiv.org/abs/2001.00463)”:\n \n\n\n\n> [T]he existing conversation around AI has heavily borrowed concepts and conclusions from one particular field: vulnerability disclosure in computer security. We caution against AI researchers treating these lessons as immediately applicable. There are important differences between vulnerabilities in software and the types of vulnerabilities exploited by AI. […]\n> \n> \n> Patches to software are often easy to create, and can often be made in a matter of weeks. 
These patches fully resolve the vulnerability. The patch can be easily propagated: for downloaded software, the software is often automatically updated over the internet; for websites, the fix can take effect immediately.\n> \n> \n> [… F]or certain technologies, there is no low-cost, straightforward, effective defence. [… C]onsider biological research that provides insight into the manufacture of pathogens, such as a novel virus. A subset of viruses are very difficult to vaccinate for (there is still no vaccination for HIV) or otherwise prepare against. This lowers the defensive benefit of publication, by blocking a main causal pathway by which publication leads to greater protection. This contrasts with the case where an effective treatment can be developed within a reasonable time period[.]\n> \n>\n* Yann LeCun and Eliezer Yudkowsky [discuss the concept “AGI”](https://twitter.com/ESYudkowsky/status/1204114830174482434).\n* CFAR's Anna Salamon contrasts [“reality-revealing” and “reality-masking” puzzles](https://www.lesswrong.com/posts/byewoxJiAfwE6zpep/reality-revealing-and-reality-masking-puzzles).\n* Scott Alexander [reviews Stuart Russell's *Human Compatible*](https://slatestarcodex.com/2020/01/30/book-review-human-compatible/).\n\n\n\n#### Links from the research team\n\n\n\nMIRI researchers anonymously summarize and comment on recent posts and papers:\n\n\n* Re [ACDT: a hack-y acausal decision theory](https://www.lesswrong.com/posts/9m2fzjNSJmd3yxxKG) — “Stuart Armstrong calls this decision theory a hack. I think it might be more elegant than he's letting on (i.e., a different formulation could look less hack-y), and is getting at something.”\n* Re [Predictors exist: CDT going bonkers… forever](https://www.lesswrong.com/posts/Kr76XzME7TFkN937z) — “I don't think Stuart Armstrong's example really adds much over some variants of Death in Damascus, but there's some good discussion of CDT vs. 
EDT stuff in the comments.”\n* Re [Is the term mesa optimizer too narrow?](https://www.lesswrong.com/posts/nFDXq7HTv9Xugcqaw/is-the-term-mesa-optimizer-too-narrow) — “Matthew Barnett poses the important question, '[I]f even humans are not mesa optimizers, why should we expect mesa optimizers to be the primary real world examples of [malign generalization]?'”\n* Re [Malign generalization without internal search](https://www.lesswrong.com/posts/ynt9TD6PrYw6iT49m) — “I think Matthew Barnett's question here is an important one. I lean toward the 'yes, this is a problem' camp—I don't think we can entirely eliminate malign generalization by eliminating internal search. But it is possible that this falls into other categories of misalignment (which we don't want to term 'inner alignment').”\n* Re [(A -> B) -> A in Causal DAGs](https://www.lesswrong.com/posts/G25RBnBk5BNpv3KyF) and [Formulating Reductive Agency in Causal Models](https://www.lesswrong.com/posts/qrWFvMnRm4SkKnpRZ) — “I've wanted something like this for a while. Bayesian influence diagrams model agents non-reductively, by boldly asserting that some nodes are agentic. Can we make models which represent agents, without declaring a basic 'agent' type like that? John Wentworth offers an approach, representing agents via 'strange loops' across a use-mention boundary; and discusses how this might break down even further, with fully reductive agency. 
I'm not yet convinced that Wentworth has gotten it right, but it's exciting to see an attempt.”\n\n\n\nThe post [February 2020 Newsletter](https://intelligence.org/2020/02/23/february-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-02-23T16:14:35Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "3ff8ba7fba7298e2046ad126daef337e", "title": "Our 2019 Fundraiser Review", "url": "https://intelligence.org/2020/02/13/our-2019-fundraiser-review/", "source": "miri", "source_type": "blog", "text": "Our [2019 fundraiser](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) ended on December 31 with the month-long campaign raising $601,120[1](https://intelligence.org/feed/?paged=10#fn1) from over[2](https://intelligence.org/feed/?paged=10#fn2) 259 donors. While we didn’t reach our $1M fundraising goal, the amount raised will be a very substantial help to us going into 2020. We’re grateful to all who supported us during the fundraiser.\n\n\n \n\n\n20% of our fundraiser total was raised in a single minute on December 3 during Facebook’s [Giving Tuesday](https://intelligence.org/2019/11/28/giving-tuesday-2019/) matching event, which deserves special mention. Facebook’s matching pool of $7M was exhausted within 14 seconds of their event’s 5am PST start, but in spite of the shortness of the event, a cohort of lightning-fast early risers secured a significant amount of Facebook matching for MIRI this year; of the $77,325 donated to MIRI on Giving Tuesday, $45,915 was donated early enough to be matched by Facebook. 
Thank you to everybody who set their clocks early to support us[3](https://intelligence.org/feed/?paged=10#fn3) and a shout out to the [EA Giving Tuesday](https://www.eagivingtuesday.org/)/[Rethink Charity](https://www.rtcharity.org/) collaboration which helped direct $563k in matching funds to EA nonprofits on Giving Tuesday.\n\n\n\n\n[![](https://intelligence.org/wp-content/uploads/2019/11/GT19Chart-PostMatch.png)](https://intelligence.org/wp-content/uploads/2019/11/GT19Chart-PostMatch2.png)\n\n\n\nBeyond Giving Tuesday, we’re very grateful to [the Effective Altruism Foundation](https://ea-foundation.org/) for providing a channel for residents of a number of EU countries including Germany, Switzerland, and the Netherlands to donate to MIRI in [a tax-advantaged way](https://intelligence.org/donate/tax-advantaged-donations/) and also to donors who leveraged their companies’ matching generosity to maximum effect during the fundraiser.\n\n\nOur fundraiser fell well short of our $1M target this year, and also short of our in-fundraiser support in 2018 ($947k) and 2017 ($2.5M). It’s plausible that some of the following (non-mutually-exclusive) factors may have contributed to this, though we don’t know the relative strength of these factors:\n\n\n* The value of cryptocurrency, especially ETH, was significantly lower during the fundraiser than [in 2017](https://intelligence.org/2018/01/10/fundraising-success/). Some donors have explicitly told us they’re waiting for more advantageous ETH prices before supporting us again.\n* MIRI’s current [nondisclosed-by-default](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3) research approach makes it difficult for some donors to financially support us at the moment, either because they disagree with the policy itself or because (absent more published research from MIRI) they feel like they personally lack the data they need to evaluate us against other giving opportunities. 
Several donors have voiced one or both of these sentiments to me, and they were cited prominently in this [2019 alignment research review](https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison).\n* The changes to US tax law in 2018 relating to individual deductions have caused some supporters to adjust the scale and timing of their donations, a trend noticed across US charitable giving in general. It’s plausible that having future MIRI fundraisers straddle the new year (e.g. starting in 2020 and ending in 2021) might provide supporters with more flexibility in their giving; if you would personally find this helpful (or unhelpful), I’m interested to hear about it at colm@intelligence.org.\n* This fundraiser saw MIRI involved in fewer counterfactual matching opportunities than in previous years — one compared to three in 2018 — which may have reduced the attraction for some of our more leverage-sensitive supporters this time around.\n* Since MIRI’s budget and size have grown a great deal over the past few years, some donors may think that we’re hitting diminishing returns on marginal donations, at a rate that makes them want to look for other giving opportunities.\n* The scale of the funds received during MIRI fundraisers tends to be strongly affected by 1-4 large donors each year, a fair number of whom are one-time or sporadic donors. Since this has certainly given us higher-than-expected results on a number of past occasions, it’s perhaps not surprising that such a randomness-heavy phenomenon would sometimes yield lower-than-expected support by chance. 
Specifically, we received no donations over $100,000 during this fundraiser, and the two donations over $50,000 were especially welcome.\n* Some MIRI supporters who had previously been following an earning-to-give strategy have moved to direct work as [80,000 Hours’ developing thoughts on the subject](https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/) continue to influence the EA community.\n* In past years, when answering supporters’ questions about the discount rate on their potential donations to MIRI, we’ve leaned towards a “now > later” approach. This plausibly resulted in a front-loading of some donations in 2017 and 2018.\n\n\nAs always, I’m interested in hearing individual supporters’ thoughts about how they personally are thinking about their giving strategy; you’re welcome to shoot me an email at [colm@intelligence.org](mailto:colm@intelligence.org).\n\n\nOverall, we’re extremely grateful for all the support we received during this fundraiser. Although we didn’t hit our target, the support we received allows us to continue pursuing the majority of our [growth plans](https://intelligence.org/2019/12/02/miris-2019-fundraiser/), with cash reserves of 1.2–1.4 years at the start of 2020. To everyone who contributed, thank you.\n\n\n\n\n\n---\n\n\n1. The exact total is still subject to change as we continue to process a small number of donations.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/?paged=10#fnref1)\n2. 
There were significantly more anonymous donors than in previous years — plausibly a result of new data protection legislation like the European Union’s [GDPR](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation) and California’s [CCPA](https://en.wikipedia.org/wiki/California_Consumer_Privacy_Act) — which we aggregate as a single donor.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/?paged=10#fnref2)\n3. Including Richard Schwall, Luke Stebbing, Simon Sáfár, Laura Soares, John Davis, Cliff Hyra, Noah Topper, Gwen Murray, and Daniel Kokotajlo. Thanks, all![![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/?paged=10#fnref3)\n\n\n\nThe post [Our 2019 Fundraiser Review](https://intelligence.org/2020/02/13/our-2019-fundraiser-review/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-02-13T16:48:44Z", "authors": ["Colm Ó Riain"], "summaries": []} -{"id": "5f3c46bd4cb866d899a8c1c7a8b76df2", "title": "January 2020 Newsletter", "url": "https://intelligence.org/2020/01/15/january-2020-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* Our [2019 fundraiser](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) ended Dec. 31. We'll have more to say in a few weeks in our fundraiser retrospective, but for now, a big thank you to the ~240 donors who together donated more than $526,000, including $67,484 in the first 20 seconds of [Giving Tuesday](https://intelligence.org/2019/11/28/giving-tuesday-2019/) (not counting matching dollars, which have yet to be announced).\n* Jan. 15 is the final day of [CFAR's annual fundraiser](https://rationality.org/fundraiser). 
CFAR also [recently ran an AMA](https://www.lesswrong.com/posts/96N8BT9tJvybLbn5z/we-run-the-center-for-applied-rationality-ama) and [has posted their workshop participant handbook online](https://www.lesswrong.com/posts/Z9cbwuevS9cqaR96h/cfar-participant-handbook-now-available-to-all).\n* [Understanding “Deep Double Descent”](https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT): MIRI researcher Evan Hubinger describes a fascinating phenomenon in ML, and an interesting case study in ML research aimed at deepening our understanding, and not just advancing capabilities. In a follow-up post, Evan also considers [possible implications for alignment research](https://www.lesswrong.com/posts/nGqzNC6uNueum2w8T).\n* [Safe Exploration and Corrigibility](https://www.lesswrong.com/posts/87Y7w73phjBxnPyPD): Evan notes an important (and alignment-relevant) way that notions of exploration in deep RL have shifted.\n* “[Learning Human Objectives by Evaluating Hypothetical Behavior](https://deepmind.com/blog/article/learning-human-objectives-by-evaluating-hypothetical-behaviours)”: UC Berkeley and DeepMind researchers “present a method for training reinforcement learning agents from human feedback in the presence of unknown unsafe states”.\n\n\n\n#### Links from the research team\n\n\nThis continues my experiment from last month: having MIRI researchers anonymously pick out AI Alignment Forum posts to highlight and comment on.\n\n\n* Re [(When) is Truth-telling Favored in AI debate?](https://www.lesswrong.com/posts/RQoSCs9SePDMLJvfz) — “A paper by Vojtěch Kovařík and Ryan Carey; it's good to see some progress on the debate model!”\n* Re [Recent Progress in the Theory of Neural Networks](https://www.lesswrong.com/posts/KrQvZM8uFjSTJ7hq3) — “Noah MacAulay provides another interesting example of research attempting to explain what's going on with NNs.”\n* Re [When Goodharting is optimal](https://www.lesswrong.com/posts/megKzKKsoecdYqwb7) — “I like Stuart Armstrong's post for the 
systematic examination of why we might be afraid of Goodharting. The example at the beginning is an interesting one, because it seems (to me at least) like the robot really should go back and forth (staying a long time at each side to minimize lost utility). But Stuart is right that this answer is, at least, quite difficult to justify.”\n* Re [Seeking Power is Instrumentally Convergent in MDPs](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2) and [Clarifying Power-Seeking and Instrumental Convergence](https://www.lesswrong.com/posts/cwpKagyTvqSyAJB7q) — “It's nice to finally have a formal model of this, thanks to Alex Turner and Logan Smith. Instrumental convergence has been an informal part of the discussion for a long time.”\n* Re [Critiquing “What failure looks like”](https://www.lesswrong.com/posts/Q8Z8yoG4tBaowBHwk) — “I thought that Grue Slinky's post was a good critical analysis of Paul Christiano's ‘[going out with a whimper](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like)’ scenario, highlighting some of the problems it seems to have as a concrete AI risk scenario. In particular, I found the analogy given to the simplex algorithm persuasive in terms of showcasing how, despite the fact that many of our current most powerful tools already have massive differentials in how well they work on different problems, those values which are not served well by those tools don't seem to have lost out massively as a result. I still feel like there may be a real risk along the lines of ‘going out with a whimper’, but I think this post presents a real challenge to that scenario as it has been described so far.”\n* Re [Counterfactual Induction](https://www.lesswrong.com/posts/EAqHkKtbefvyRs4nw) — “A proposal for logical counterfactuals by Alex Appel. 
This could use some more careful thought and critique; it's not yet clear exactly how much or little it accomplishes.”\n* Re [A dilemma for prosaic AI alignment](https://www.lesswrong.com/posts/jYdAxH8BarPT4fqnb/a-dilemma-for-prosaic-ai-alignment) — “Daniel Kokotajlo outlines key challenges for prosaic alignment: ‘[…] Now I think the problem is substantially harder than that: To be competitive prosaic AI safety schemes must deliberately create misaligned mesa-optimizers and then (hopefully) figure out how to align them so that they can be used in the scheme.’”\n\n\n\nThe post [January 2020 Newsletter](https://intelligence.org/2020/01/15/january-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2020-01-15T17:41:49Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "a8f6c27fed6e744550cd5c42524239f5", "title": "December 2019 Newsletter", "url": "https://intelligence.org/2019/12/05/december-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "From now through the end of December, [**MIRI's 2019 Fundraiser**](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) is live! See our fundraiser post for updates on our past year and future plans.\nOne of our biggest updates, I'm happy to announce, is that **we've hired five new research staff**, with a sixth to join us in February. For details, see [Workshops and Scaling Up](https://intelligence.org/2019/12/02/miris-2019-fundraiser/#1) in the fundraiser post.\n\n\nAlso: Facebook's Giving Tuesday matching opportunity is **tomorrow** at 5:00am PT! 
See [Colm's post](https://intelligence.org/2019/11/28/giving-tuesday-2019/) for details on how to get your donation matched.\n\n\n\n#### Other updates\n\n\n* Our most recent hire, “[Risks from Learned Optimization](https://arxiv.org/abs/1906.01820)” co-author Evan Hubinger, describes [what he'll be doing at MIRI](https://www.alignmentforum.org/posts/ptmmK9PWgYTuWToaZ/what-i-ll-be-doing-at-miri). See also Nate Soares' comment on [how MIRI does nondisclosure-by-default](https://www.alignmentforum.org/posts/ptmmK9PWgYTuWToaZ/what-i-ll-be-doing-at-miri#rfaMzMaKAAsdv38Me).\n* Buck Shlegeris discusses [EA residencies as an outreach opportunity](https://forum.effectivealtruism.org/posts/yrSiWNypE6AMNApDi/ea-residencies-as-outreach-activity).\n* OpenAI releases [Safety Gym](https://openai.com/blog/safety-gym/), a set of tools and environments for incorporating safety constraints into RL tasks.\n* CHAI is [seeking interns](https://boards.greenhouse.io/centerforhumancompatibleartificialintelligence/jobs/4358062002); application deadline Dec. 15.\n\n\n\n#### Thoughts from the research team\n\n\n\nThis month, I'm trying something new: quoting MIRI researchers' summaries and thoughts on recent AI safety write-ups.\n\n\nI've left out names so that these can be read as a snapshot of people's impressions, rather than a definitive “Ah, researcher X believes Y!” Just keep in mind that these will be a small slice of thoughts from staff I've recently spoken to, not anything remotely like a consensus take.\n\n\n* Re [Will transparency help catch deception?](https://www.lesswrong.com/posts/J9D6Bi3eFDDhCaovi/will-transparency-help-catch-deception-perhaps-not) — “A good discussion of an important topic. 
Matthew Barnett suggests that any weaknesses in a transparency tool may turn it into a detrimental middle-man, and directly training supervisors to catch deception may be preferable.”\n* Re [Chris Olah’s views on AGI safety](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) — “I very much agree with Evan Hubinger's idea that collecting different perspectives — different ‘hats’ — is a useful thing to do. Chris Olah's take on transparency is good to see. The concept of microscope AI seems like a useful one, and Olah's vision of how the ML field could be usefully shifted is quite interesting.”\n* Re [Defining AI Wireheading](https://www.lesswrong.com/posts/vXzM5L6njDZSf4Ftk) — “Stuart Armstrong takes a shot at making a principled distinction between wireheading and the rest of Goodhart.”\n* Re [How common is it to have a 3+ year lead?](https://www.lesswrong.com/posts/yXikQ87FFw3oPPaYh) — “This seems like a pretty interesting question for AI progress models. The expected lead time and questions of expected takeoff speed greatly influence the extent to which winner-take-all dynamics are plausible.”\n* Re [Thoughts on Implementing Corrigible Robust Alignment](https://www.lesswrong.com/posts/8W5gNgEKnyAscg8BF) — “Steve Byrnes provides a decent overview of some issues around getting ‘pointer’ type values.”\n\n\n\nThe post [December 2019 Newsletter](https://intelligence.org/2019/12/05/december-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-12-05T20:00:25Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "b2fc99c81c4db912654caae15c64f3a0", "title": "MIRI’s 2019 Fundraiser", "url": "https://intelligence.org/2019/12/02/miris-2019-fundraiser/", "source": "miri", "source_type": "blog", "text": "**MIRI’s 2019 fundraiser** is concluded.\n\n\nOver the past two years, huge donor support has helped us double the size of our AI alignment research team. 
Hitting our $1M fundraising goal this month will put us in a great position to continue our growth in 2020 and beyond, recruiting as many brilliant minds as possible to take on what appear to us to be the central technical obstacles to alignment.\n\n\nOur fundraiser progress, updated in real time (including donations and matches made during the Facebook [Giving Tuesday](https://intelligence.org/giving-tuesday-2019/) event):\n\n\n\n\n---\n\n\n \n\n\n \n\n\n\n\n---\n\n\n\nMIRI is a CS/math research group with a goal of understanding how to reliably “aim” future general-purpose AI systems at known goals. For an introduction to this research area, see [Ensuring Smarter-Than-Human Intelligence Has A Positive Outcome](https://intelligence.org/2017/04/12/ensuring/) and [Risks from Learned Optimization in Advanced Machine Learning Systems](https://arxiv.org/abs/1906.01820). For background on how we approach the problem, see [2018 Update: Our New Research Directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) and [Embedded Agency](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN).\n\n\nAt the end of 2017, we [announced](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) plans to substantially grow our research team, with a goal of hiring “around ten new research staff over the next two years.” Two years later, I’m happy to report that we’re up eight research staff, and we have a ninth starting in February of next year, which will bring our total research team size to 20.[1](https://intelligence.org/2019/12/02/miris-2019-fundraiser/#footnote_0_19472 \"This number includes a new staff member who is currently doing a 6-month trial with us.\")\n\n\nWe remain excited about our current research directions, and continue to feel that we could make progress on them more quickly by adding additional researchers and engineers to the team. 
As such, our main organizational priorities remain the same: push forward on our research directions, and grow the research team to accelerate our progress.\n\n\nWhile we’re quite uncertain about how large we’ll ultimately want to grow, we plan to continue growing the research team at a similar rate over the next two years, and so expect to add around ten more research staff by the end of 2021.\n\n\nOur projected budget for 2020 is $6.4M–$7.4M, with a point estimate of $6.8M,[2](https://intelligence.org/2019/12/02/miris-2019-fundraiser/#footnote_1_19472 \"These estimates were generated using a model similar to the one I used last year. For more details see our 2018 fundraiser post.\") up from around $6M this year.[3](https://intelligence.org/2019/12/02/miris-2019-fundraiser/#footnote_2_19472 \"This falls outside the $4.4M–$5.5M range I estimated in our 2018 fundraiser post, but is in line with the higher end of revised estimates we made internally in Q1 2019.\") In the mainline-growth scenario, we expect our budget to look something like this: \n\n\n\n\n \n\n\nLooking further ahead, since staff salaries account for the vast majority of our expenses, I expect our spending to increase proportionately year-over-year while research team growth continues to be a priority.\n\n\nGiven our $6.8M budget for 2020, and the cash we currently have on hand, raising $1M in this fundraiser will put us in a great position for 2020. Hitting $1M positions us with cash reserves of 1.25–1.5 years going into 2020, which is exactly where we want to be to support ongoing hiring efforts and to provide the confidence we need to make and stand behind our salary and other financial commitments.\n\n\nFor more details on what we’ve been up to this year, and our plans for 2020, read on!\n\n\n \n\n\n### 1. 
Workshops and scaling up\n\n\nIf you lived in a world that [didn’t know calculus](https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem), but you knew *something was missing*, what general practices would have maximized your probability of coming up with it?\n\n\nWhat if you didn’t start off knowing something was missing? Could you and some friends have [gotten together and done research](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide) in a way that put you in a good position to notice it, to ask the right questions?\n\n\nMIRI thinks that humanity is currently missing some of the core concepts and methods that AGI developers will need in order to align their systems down the road. We think we’ve found research paths that may help solve that problem, and good ways to rapidly improve our understanding via experiments; and we’re eager to add more researchers’ and engineers’ eyes and brains to the effort.\n\n\nA significant portion of MIRI’s current work is in Haskell, and benefits from experience with functional programming and dependent type systems. More generally, if you’re a programmer who loves hunting for the most appropriate abstractions to fit some use case, developing clean concepts, making and then deploying elegant combinators, or audaciously trying to answer the deepest questions in computer science—then we think you should [apply to work here](https://machineintelligence.typeform.com/to/CDVFE2), get to know us at a [workshop](https://intelligence.org/ai-risk-for-computer-scientists/), or [reach out with questions](mailto:buck@intelligence.org).\n\n\nAs noted above, our research team is growing fast. 
The latest additions to the MIRI team include:\n\n\n[![New MIRI Staff](https://intelligence.org/wp-content/uploads/2019/11/evan-jeremy-seraphina-rafe.jpg)](https://intelligence.org/team)**Evan Hubinger**, a co-author on “[Risks from Learned Optimization in Advanced Machine Learning Systems](https://intelligence.org/learned-optimization/)”. Evan previously designed the functional programming language [Coconut](http://coconut-lang.org/), was an intern at OpenAI, and has done software engineering work at Google, Yelp, and Ripple.\n\n\n**Jeremy Schlatter**, a software engineer who previously worked at Google and OpenAI. Some of the public projects Jeremy has contributed to include OpenAI’s [Dota 2 bot](https://cdn.openai.com/dota-2.pdf) and a debugger for the Go programming language.\n\n\n**Seraphina Nix**, joining MIRI in February 2020. Seraphina graduates this month from Oberlin College with a major in mathematics and minors in computer science and physics. She has previously done research on ultra-lightweight dark matter candidates, deep reinforcement learning, and teaching neural networks to do high school mathematics.\n\n\n**Rafe Kennedy**, who joins MIRI after working as an independent existential risk researcher at the Effective Altruism Hotel. Rafe previously worked at the data science startup NStack, and he holds an MPhysPhil from the University of Oxford in Physics & Philosophy.\n\n\n[![AIRCS](https://intelligence.org/wp-content/uploads/2019/11/crop-aircs.png)](https://intelligence.org/ai-risk-for-computer-scientists/)MIRI’s hires and job trials are typically drawn from our 4.5-day, all-expense-paid **[AI Risk for Computer Scientists](https://intelligence.org/ai-risk-for-computer-scientists/)** (AIRCS) workshop series.\n\n\nOur workshop program is the best way we know of to bring promising talented individuals into what we think are useful trajectories towards being highly-contributing AI researchers and engineers. 
Having established an experience that participants love and that we believe to be highly valuable, we plan to continue experimenting with new versions of the workshop, and expect to run ten workshops over the course of 2020, up from eight this year.\n\n\nThese programs have led to a good number of new hires at MIRI as well as other AI safety organizations, and we find them valuable for everything from introducing talented outsiders to AI safety, to leveling up people who have been thinking about these issues for years.\n\n\nIf you’re interested in attending, [apply here](https://docs.google.com/forms/d/e/1FAIpQLSeSJ4G-hMIMIGZ-Ze-I__WfEaarUMC9Q2zRuPpkeG9r6OR4ww/viewform). If you have any questions, we highly encourage you to shoot [Buck Shlegeris](mailto:buck@intelligence.org) an email.\n\n\nOur [MIRI Summer Fellows Program](https://www.rationality.org/workshops/apply-msfp) plays a similar role for us, but is more targeted at mathematicians. We’re considering running MSFP in a shorter format in 2020. For any questions about MSFP, email [Colm Ó Riain](mailto:colm@intelligence.org).\n\n\n \n\n\n### 2. 
Research and write-ups\n\n\nOur [2018 strategy update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) continues to be a great overview of where MIRI stands today, describing how we think about our research, laying out our [case for working here](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section4), and explaining why most of our work [currently isn’t public-facing](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3).\n\n\nGiven the latter point, I’ll focus in this section on spotlighting what we’ve written up this past year, providing snapshots of some of the work individuals at MIRI are currently doing (without any intended implication that this is representative of the whole), and conveying some of our current broad impressions about how our research progress is going.\n\n\nSome of our major write-ups and publications this year were:\n\n\n* “[Risks from Learned Optimization in Advanced Machine Learning Systems](https://intelligence.org/learned-optimization/),” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 
The process of generating this paper significantly clarified our own thinking, and informed Scott and Abram’s discussion of [subsystem alignment](https://intelligence.org/2018/11/06/embedded-subsystems/) in “[Embedded Agency](http://intelligence.org/embedded-agency).” Scott views “Risks from Learned Optimization” as being of comparable importance to “Embedded Agency” as exposition of key alignment difficulties, and we’ve been extremely happy about the new conversations and research that the field at large has produced to date in dialogue with the ideas in “Risks from Learned Optimization”.\n* [Thoughts on Human Models](https://intelligence.org/2019/02/22/thoughts-on-human-models/), by Scott Garrabrant and DeepMind-based MIRI Research Associate Ramana Kumar, argues that the AI alignment research community should begin prioritizing “approaches that work well in the absence of human models.” The role of human models in alignment plans strikes us as one of the most important issues for MIRI and other research groups to wrestle with, and we’re generally interested in seeing what new approaches groups outside MIRI might come up with for leveraging AI for the common good in the absence of human models.\n* “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” by Nate Soares and Ben Levinstein. 
We presented this decision theory paper at the Formal Epistemology Workshop in 2017, but a lightly edited version has now been accepted to *The Journal of Philosophy*, previously [voted](https://leiterreports.typepad.com/blog/2013/07/top-philosophy-journals-without-regard-to-area.html) the second highest-quality academic journal in philosophy.\n* The [Alignment Research Field Guide](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide), a very accessible and broadly applicable resource both for individual researchers and for groups getting off the ground.\n\n\nOur other recent public writing includes an Effective Altruism Forum [AMA with Buck Shlegeris](https://forum.effectivealtruism.org/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama), Abram Demski’s [The Parable of Predict-O-Matic](https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern), and the many interesting outputs of the [AI Alignment Writing Day](https://www.lesswrong.com/s/YuTinYEzsyHmPoocw) we hosted toward the end of this year’s MIRI Summer Fellows Program.\n\n\nTurning to our research team, last year we [announced](https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/) that prolific Haskell programmer Edward Kmett joined the MIRI team, freeing him up to do the thing he’s passionate about—improving the state of highly reliable (and simultaneously highly efficient) programming languages. MIRI Executive Director Nate Soares views this goal as very ambitious, though would feel better about the world if there existed programming languages that were both efficient and amenable to strong formal guarantees about their properties.\n\n\nThis year Edward moved to Berkeley to work more closely with the rest of the MIRI team. We’ve found it very helpful to have him around to provide ideas and contributions to our more engineering-oriented projects, helping give some amount of practical grounding to our work. 
Edward has also continued to be a huge help with recruiting through his connections in the functional programming and type theory world.\n\n\nMeanwhile, our newest addition, Evan Hubinger, plans to continue working on solving [inner alignment](https://www.lesswrong.com/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem) for [amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd). Evan has outlined his [research plans](https://www.alignmentforum.org/posts/ptmmK9PWgYTuWToaZ/what-i-ll-be-doing-at-miri) on the AI Alignment Forum, noting that [relaxed adversarial training](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) is a fairly up-to-date statement of his research agenda. Scott and other researchers at MIRI consider Evan’s work quite exciting, both in the context of amplification and in the context of other alignment approaches it might prove useful for.\n\n\nAbram Demski is another MIRI researcher who has written up a large number of his research thoughts over the last year. Abram reports ([fuller thoughts here](https://intelligence.org/files/Nov2019ResearchThoughts.pdf)) that he has moved away from a traditional decision-theoretic approach this year, and is now spending more time on learning-theoretic approaches, similar to MIRI Research Associate Vanessa Kosoy. Quoting Abram:\n\n\n\n> Around December 2018, I had a big update against the “classical decision-theory” mindset (in which learning and decision-making are viewed as separate problems), and towards taking a learning-theoretic approach. [… I have] made some attempts to communicate my update against UDT and toward learning-theoretic approaches, including [this write-up](https://www.alignmentforum.org/posts/9sYzoRnmqmxZm4Whf/conceptual-problems-with-udt-and-policy-selection). 
I talked to Daniel Kokotajlo about it, and he wrote [The Commitment Races Problem](https://www.alignmentforum.org/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem), which I think captures a good chunk of it.\n> \n> \n\n\nFor her part, Vanessa’s recent work includes the paper “[Delegative Reinforcement Learning: Learning to Avoid Traps with a Little Help](https://intelligence.org/2019/04/24/delegative-reinforcement-learning/),” which she presented at the ICLR 2019 SafeML workshop.\n\n\nI’ll note again that the above are all snapshots of particular research directions various researchers at MIRI are pursuing, and don’t necessarily represent other researchers’ views or focus. As Buck recently [noted](https://forum.effectivealtruism.org/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama#FdWqNnEh5kGb37P58), MIRI has a pretty flat management structure. We pride ourselves on minimizing bureaucracy, and on respecting the ability of our research staff to form their own inside-view models of the alignment problem and of what’s needed next to make progress. Nate recently expressed similar thoughts about [how we do nondisclosure-by-default](https://www.lesswrong.com/posts/ptmmK9PWgYTuWToaZ/what-i-ll-be-doing-at-miri#rfaMzMaKAAsdv38Me).\n\n\nAs a consequence, MIRI’s more math-oriented research especially tends to be dictated by individual models and research taste, without the expectation that everyone will share the same view of the problem.\n\n\nRegarding his overall (very high-level) sense of how MIRI’s new research directions are progressing, Nate Soares reports:\n\n\n\n> Progress in 2019 has been slower than expected, but I have a sense of steady progress. 
In particular, my experience is one of steadily feeling less confused each week than the week before—of me and other researchers having difficulties that were preventing us from doing a thing we wanted to do, staring at them for hours, and then realizing that we’d been thinking wrongly about this or that, and coming away feeling markedly more like we know what’s going on.\n> \n> \n> An example of the kind of thing that causes us to feel like we’re making progress is that we’ll notice, “Aha, the right tool for thinking about all three of these apparently-dissimilar problems was order theory,” or something along those lines; and disparate pieces of frameworks will all turn out to be the same, and the relevant frameworks will become simpler, and we’ll be a little better able to think about a problem that I care about. This description is extremely abstract, but represents the flavor of what I mean by “steady progress” here, in the same vein as [my writing last year about “deconfusion.”](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2)\n> \n> \n> Our hope is that enough of this kind of progress gives us a platform from which we can generate particular exciting results on core AI alignment obstacles, and I expect to see such results reasonably soon. To date, however, I have been disappointed by the amount of time that’s instead been spent on deconfusing myself and shoring up my frameworks; I previously expected to have more exciting results sooner.\n> \n> \n> In research of the kind we’re working on, it’s not uncommon for there to be years between sizeable results, though we should also expect to sometimes see cascades of surprisingly rapid progress, if we are indeed pushing in the right directions. 
My inside view of our ongoing work currently predicts that we’re on a productive track and should expect to see results we are more excited about before too long.\n> \n> \n\n\nOur research progress, then, is slower than we had hoped, but the rate and quality of progress continues to be such that we consider this work very worthwhile, and we remain optimistic about our ability to convert further research staff hours into faster progress. At the same time, we are also (of course) looking for where our research bottlenecks are and how we can make our work more efficient, and we’re continuing to look for tweaks we can make that might boost our output further.\n\n\nIf things go well over the next few years—which seems likely but far from guaranteed—we’ll continue to find new ways of making progress on research threads we care a lot about, and continue finding ways to hire people to help make that happen.\n\n\nResearch staff expansion is our biggest source of expense growth, and by encouraging us to move faster on exciting hiring opportunities, donor support plays a key role in how we execute on our research agenda. Though the huge support we’ve received to date has put us in a solid position even at our new size, further donor support is a big help for us in continuing to grow. If you want to play a part in that, thank you.\n\n\n\n\n---\n\n\n\n\n[Donate Now](https://intelligence.org/donate/)\n----------------------------------------------\n\n\n\n\n\n\n\n\n\n---\n\n1. This number includes a new staff member who is currently doing a 6-month trial with us.\n2. These estimates were generated using a model similar to the one I used last year. For more details see our [2018 fundraiser post](https://intelligence.org/2018/11/26/miris-2018-fundraiser/).\n3. 
This falls outside the $4.4M–$5.5M range I estimated in our [2018 fundraiser post](https://intelligence.org/2018/11/26/miris-2018-fundraiser/), but is in line with the higher end of revised estimates we made internally in Q1 2019.\n\nThe post [MIRI’s 2019 Fundraiser](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-12-02T22:50:35Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "8a900e272a46b8aae2b1fc02caaaa691", "title": "Giving Tuesday 2019", "url": "https://intelligence.org/2019/11/28/giving-tuesday-2019/", "source": "miri", "source_type": "blog", "text": "**Update January 25, 2020**: $77,325 was donated to MIRI through Facebook on Giving Tuesday. $45,915 of this was donated within 13.5 seconds of the Facebook matching event starting at 5:00AM PT and was matched by Facebook. Thank you to everybody who set their clocks early to support us so generously! Shout out too to the [EA Giving Tuesday](https://www.eagivingtuesday.org) and [Rethink Charity](https://www.rtcharity.org) team for their amazing efforts on behalf of the EA Community. \n\n\n\n\n[![](https://intelligence.org/wp-content/uploads/2019/11/GT19Chart-PostMatch2-300x233.png)](https://intelligence.org/wp-content/uploads/2019/11/GT19Chart-PostMatch2.png)\n\n\n\n\n---\n\n\n**Update December 2, 2019**: This page has been updated to reflect (a) observed changes in Facebook’s flow for donations of $500 or larger (b) additional information on securing matching for donations of $2,500 or larger during Facebook’s matching event and (c) a pointer to Paypal’s newly announced, though significantly smaller, matching event(s). Please check in here for more updates before the Facebook Matching event begins at 5am PT on December 3. 
\n\n\n\n\n---\n\n\n\n[MIRI’s annual fundraiser](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) begins this Monday, December 2, 2019 and **Giving Tuesday** takes place the next day; starting at 5:00:00am PT (8:00:00am ET) on December 3, [Facebook](https://www.facebook.com/help/332488213787105) will match donations made on fundraiser pages on their platform up to a total of $7,000,000. This post focuses on this Facebook matching event. (You can find information on Paypal’s significantly smaller matching events in the footnotes.[1](https://intelligence.org/feed/?paged=11#paypal))\n\n\n\n1. Donations during Facebook’s Giving Tuesday event will be matched dollar for dollar on a first-come, first-served basis until the $7,000,000 in matching funds are used up. Based on trends in previous years, this will probably occur within 10 seconds.\n2. Any US-based 501(c)(3) nonprofit eligible to receive donations on Facebook, e.g. MIRI, can be matched.\n3. Facebook will match up to a total of $100,000 per nonprofit organization.\n4. Each donor can have up to $20,000 in eligible donations matched on Giving Tuesday. There is a default limit of $2,499 per donation. Donors who wish to donate more than $2,499 have multiple strategies to choose from (below) to increase the chances of their donations being matched.\n\n\n In [2018](https://intelligence.org/2019/02/07/our-2018-fundraiser-review/), Facebook’s matching pool of $7M was exhausted within 16 seconds of the event starting and in that time, 66% of our lightning-fast donors got their donations to MIRI matched, securing a total of $40,072 in matching funds. This year, we’re aiming for the per-organization $100,000 maximum and since it’s highly plausible that this year’s matching event will end **within 4-10 seconds**, here are some tips to improve the chances of your donation to [MIRI’s Fundraiser Page on Facebook](https://www.facebook.com/donate/2649342025109229/) being matched. 
\n\n\n### Pre-Event Preparation (before Giving Tuesday)\n\n\n* Confirm your FB account is operational.\n* Add your preferred credit card(s) as payment method(s) [in your FB settings page](https://secure.facebook.com/settings?tab=payments&section=settings). Credit cards are plausibly mildly preferable to Paypal as a payment option in terms of donation speed.\n* Test your payment method(s) ahead of time by donating small amount(s) to [MIRI’s Fundraiser page](https://www.facebook.com/donate/2649342025109229/).\n* If your credit card limit is lower than the amount you’re considering donating, it may be possible to (a) overpay the balance ahead of time and/or (b) call your credit card company asking them to (even temporarily) increase your limit.\n* If you plan to donate more than $2,499, [see below](https://intelligence.org/feed/?paged=11#largedonations) for instructions.\n* Sync whatever clock you’ll be using with [time.is](https://time.is/).\n* Consider pledging your donation to MIRI at [EA Giving Tuesday](https://www.eagivingtuesday.org/pledges).[2](https://intelligence.org/feed/?paged=11#eagt)\n\n\n\n\n### Donating on Giving Tuesday\n\n\nOn Tuesday, December 3, **BEFORE** 5:00:00am PT — it’s advisable to be alert and ready 10-20 minutes before the event — prepare your donation, so you can make your donation with a single click when the event begins at 5:00:00am PT.\n\n\n\n\n1. Open an accurate clock at [time.is](https://time.is/).\n2. In a different browser window alongside, open [MIRI’s Fundraiser Page on Facebook](https://www.facebook.com/donate/2649342025109229/) in your browser.\n3. Click on the Donate button. ![](https://intelligence.org/wp-content/uploads/2018/11/DonateButton.png)\n4. In the “Donate” popup that surfaces:\n* Enter your donation amount — between $5 and $2,499. 
[See below](https://intelligence.org/feed/?paged=11#largedonations) for larger donations.\n* Choose whichever card you’re using for your donation.\n* Optionally enter a note and/or adjust the donation visibility.\n\n5. **At 5:00:00am PT, click on the Green Donate button.** If your donation amount is $500 or larger, you may be presented with an additional “Confirm Your Donation” dialog. If so, click on *its* Donate button as quickly as possible.\n\n \n\n\n\n\n\n### Larger Donations\n\n\nBy default, Facebook places a limit of $2,499 per donation (in the US[3](https://intelligence.org/feed/?paged=11#outsideus)), and will match up to $20,000 per donor. If you’re in a position to donate $2,500 or more to MIRI, you can:\n\n\n1. Use multiple browser windows/tabs for each individual donation: open up [MIRI’s Fundraiser Page on Facebook](https://www.facebook.com/donate/2649342025109229/) in as many tabs as needed in your browser and follow the instructions above in each window/tab so you have multiple Donate buttons ready to click, one in each tab. Then at 5:00:00 PT, channel your lightning and click as fast as you can — one of our star supporters last year made 8 donations within 21 seconds, 5 of which were matched.\n\n**and/or**\n\n- Before the event — ideally not the morning of — follow [EA Giving Tuesday’s instructions](https://docs.google.com/document/d/1GPb9cDFtKmTW8NdyLwwVrQkEMg_CzzfV7zarz4r2v60/edit) on how to increase your per-donation limit on Facebook above $2,499. Our friends at [EA Giving Tuesday](https://www.eagivingtuesday.org/) estimate that “you are likely to be able to successfully donate up to $9,999 per donation” after following these instructions. Their analysis also suggests that going higher than $10,000 for an individual donation plausibly significantly increases the probability of being declined, and they therefore advise not going beyond $9,999 per donation. 
It is possible that Facebook may put a cap of $2,499 on individual donations closer to the event.\n\n\n\nUsing a combination of the above, a generous supporter could, for example, make 2 donations of $9,999 each — in separate browser windows — within seconds of the event starting.\n \n\n\n\n\n\n\n\n\n\n### \n\n\n\n\n\n---\n\n\n1. Paypal has 3 separate matching events on Giving Tuesday — all of which add 10% to eligible donations — for [the USA](https://www.paypal.com/fundraiser/112574644767835624?CampaignName=VanityURL-GiveBack&fbclid=IwAR1EnDNyAMM-ZTV8ukAyO7Am8_H8Sw7utsWOTSeFzAP1_SvJMVNEy00P-mo), [Canada](https://www.paypal.ca/giveback), and [the UK](https://www.paypal.co.uk/giveback).\n2. Thanks to Ari, William, Rethink Charity and all at EA Giving Tuesday for their work to help EA organizations maximize their share of Facebook’s matching funds.\n3. For up-to-date information on Facebook’s donation limits outside the US, check out [EA Giving Tuesday’s doc](https://docs.google.com/document/d/1VkyavA383WKo29wKqokMDr917rAOKwLiWTAnWVKA7PY/edit).\n\n\nThe post [Giving Tuesday 2019](https://intelligence.org/2019/11/28/giving-tuesday-2019/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-11-28T21:15:12Z", "authors": ["Colm Ó Riain"], "summaries": []} -{"id": "8b3914c891f096ad4bed217f94d9533f", "title": "November 2019 Newsletter", "url": "https://intelligence.org/2019/11/25/november-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "I'm happy to announce that Nate Soares and Ben Levinstein's “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/)” has been accepted for publication in *The Journal of Philosophy* (previously voted the [second-highest-quality](https://leiterreports.typepad.com/blog/2013/07/top-philosophy-journals-without-regard-to-area.html) journal in philosophy).\nIn other news, MIRI researcher Buck Shlegeris has written over 
12,000 words on a variety of MIRI-relevant topics [in an EA Forum AMA](https://forum.effectivealtruism.org/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama). (Example topics: [advice for software engineers](https://forum.effectivealtruism.org/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama#X3EJqeWRX326HsQbK); [what alignment plans tend to look like](https://forum.effectivealtruism.org/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama#RoXASZqvcYM6cqDSn); and [decision theory](https://forum.effectivealtruism.org/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama#byH8abnt5RnPMunts).)\n\n\n#### Other updates\n\n\n* Abram Demski's [The Parable of Predict-O-Matic](https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern) is a great read: the predictor/optimizer issues it covers are deep, but I expect a fairly wide range of readers to enjoy it and get something out of it.\n* Evan Hubinger's [Gradient Hacking](https://www.lesswrong.com/posts/uXH4r6MmKPedk8rMA) describes an important failure mode that hadn't previously been articulated.\n* Vanessa Kosoy's [LessWrong shortform](https://www.lesswrong.com/posts/dPmmuaz9szk26BkmD#comments) has recently discussed some especially interesting topics related to her learning-theoretic agenda.\n* Stuart Armstrong's [All I Know Is Goodhart](https://www.lesswrong.com/posts/uL74oQv5PsnotGzt7) constitutes nice conceptual progress on expected value maximizers that are aware of [Goodhart's law](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) and trying to avoid it.\n* Reddy, Dragan, and Levine's paper on [modeling human intent](https://papers.nips.cc/paper/7419-where-do-you-think-youre-going-inferring-beliefs-about-dynamics-from-behavior.pdf) cites (of all things) [*Harry Potter and the Methods of Rationality*](http://www.hpmor.com/) as inspiration.\n\n\n#### News and links\n\n\n* [Artificial 
Intelligence Research Needs Responsible Publication Norms](https://www.lawfareblog.com/artificial-intelligence-research-needs-responsible-publication-norms): Crootof provides a good review of the issue on *Lawfare*.\n* Stuart Russell's new book is out: *[Human Compatible: Artificial Intelligence and the Problem of Control](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS)* ([excerpt](https://spectrum.ieee.org/computing/software/many-experts-say-we-shouldnt-worry-about-superintelligent-ai-theyre-wrong)). Rohin Shah's [review](https://www.lesswrong.com/posts/nd692YfFGfZDh9Mwz/an-69-stuart-russell-s-new-book-on-why-we-need-to-replace) does an excellent job of contextualizing Russell's views within the larger AI safety ecosystem, and Rohin highlights the quote: \n\n\n> The task is, fortunately, not the following: given a machine that possesses a high degree of intelligence, work out how to control it. If that were the task, we would be toast. A machine viewed as a black box, a fait accompli, might as well have arrived from outer space. And our chances of controlling a superintelligent entity from outer space are roughly zero. Similar arguments apply to methods of creating AI systems that guarantee we won’t understand how they work; these methods include whole-brain emulation — creating souped-up electronic copies of human brains — as well as methods based on simulated evolution of programs. 
I won’t say more about these proposals because they are so obviously a bad idea.\n> \n>\n* Jacob Steinhardt releases an [AI Alignment Research Overview](https://www.lesswrong.com/posts/7GEviErBXcjJsbSeD/ai-alignment-research-overview-by-jacob-steinhardt).\n* Patrick LaVictoire's [AlphaStar: Impressive for RL Progress, Not for AGI Progress](https://www.lesswrong.com/posts/SvhzEQkwFGNTy6CsN/alphastar-impressive-for-rl-progress-not-for-agi-progress) raises some important questions about how capable today's state-of-the-art systems are.\n\n\n\nThe post [November 2019 Newsletter](https://intelligence.org/2019/11/25/november-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-11-25T18:53:11Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "ac54731915414c1dbcb0b8cfac5025f3", "title": "October 2019 Newsletter", "url": "https://intelligence.org/2019/10/25/october-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* Ben Pace summarizes [a second round of AI Alignment Writing Day posts](https://www.lesswrong.com/posts/DG7asvufKgaqEknKd/ai-alignment-writing-day-roundup-2).\n* [The Zettelkasten Method](https://www.lesswrong.com/posts/NfdHG6oHBJ8Qxc26s): MIRI researcher Abram Demski describes a note-taking system that's had a large positive effect on his research productivity.\n* Will MacAskill writes a detailed [critique of functional decision theory](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory); Abram Demski ([1](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory#y8zRwcpNeu2ZhM3yE), [2](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory#X4CLhPpf9zecdGxAq)) and [Matthew Graves](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory#mR9vHrQ7JhykiFswg) respond in the comments.\n\n\n#### News and 
links\n\n\n* Recent AI alignment posts: Evan Hubinger asks “[Are minimal circuits deceptive?](https://www.lesswrong.com/posts/fM5ZWGDbnjb7ThNKJ)”, Paul Christiano describes [the strategy-stealing assumption](https://www.lesswrong.com/posts/nRAMpjnb6Z4Qv3imF), and Wei Dai lists his [resolved confusions about Iterated Distillation and Amplification](https://www.lesswrong.com/posts/FdfzFcRvqLf4k5eoQ). See also Rohin Shah's [comparison](https://www.lesswrong.com/posts/cYduioQNeHALQAMre/what-are-the-differences-between-all-the-iterative-recursive#oecNhys3wpsrYKnBp) of recursive approaches to AI alignment.\n* Also on LessWrong: A [Debate on Instrumental Convergence Between LeCun, Russell, Bengio, Zador, and More](https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell).\n* FHI's Ben Garfinkel and Allan Dafoe argue that conflicts between nations tend to exhibit “[offensive-then-defensive scaling](https://www.tandfonline.com/doi/abs/10.1080/01402390.2019.1631810)”.\n* OpenAI releases a [follow-up report](https://openai.com/blog/gpt-2-6-month-follow-up/) on GPT-2, noting that several groups “have explicitly adopted similar staged release approaches” to OpenAI.\n* NVIDIA Applied Deep Learning Research has trained a model that appears to [essentially replicate GPT-2](https://arxiv.org/abs/1909.08053), with 5.6x as many parameters, slightly better WikiText perplexity, and slightly worse LAMBADA accuracy. 
The group has elected to share their training and evaluation code, but not the model weights.\n* OpenAI [fine-tunes GPT-2](https://openai.com/blog/fine-tuning-gpt-2/) for text continuation and summarization tasks that incorporate human feedback, noting, “Our motivation is to move safety techniques closer to the general task of ‘machines talking to humans,’ which we believe is key to extracting information about human values.”\n\n\n\nThe post [October 2019 Newsletter](https://intelligence.org/2019/10/25/october-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-10-26T04:48:44Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "54ef933731cfbd90c4d50a4a783a400f", "title": "September 2019 Newsletter", "url": "https://intelligence.org/2019/09/30/september-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* We ran a very successful MIRI Summer Fellows Program, which included a day where participants [publicly wrote up their thoughts](http://alignmentforum.org/s/YuTinYEzsyHmPoocw) on various AI safety topics. 
See Ben Pace’s first post in a series of [roundups](https://www.alignmentforum.org/posts/ZYGjDpGQaHvg8HLfw/ai-alignment-writing-day-roundup-1).\n* A few highlights from the writing day: Adele Lopez's [Optimization Provenance](https://www.lesswrong.com/posts/Zj2PgP5A8vY2G3gYw/optimization-provenance); Daniel Kokotajlo's [Soft Takeoff Can Still Lead to Decisive Strategic Advantage](http://lesswrong.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage) and [The \"Commitment Races\" Problem](http://lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem); Evan Hubinger's [Towards a Mechanistic Understanding of Corrigibility](https://www.lesswrong.com/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility); and John Wentworth's [Markets are Universal for Logical Induction](https://www.lesswrong.com/posts/WmNeCipNwg9CmGy3T/markets-are-universal-for-logical-induction) and [Embedded Agency via Abstraction](https://www.lesswrong.com/posts/hLFD6qSN9MmQxKjG5/embedded-agency-via-abstraction).\n* New posts from MIRI staff and interns: Abram Demski's [Troll Bridge](https://www.lesswrong.com/posts/hpAbfXtqYC2BrpeiC/troll-bridge-5); Matthew Graves' [View on Factored Cognition](https://www.lesswrong.com/posts/J7Rnt8aJPH7MALkmq/vaniver-s-view-on-factored-cognition); Daniel Filan's [Verification and Transparency](https://www.lesswrong.com/posts/n3YRDJYCnQcDAw29G/verification-and-transparency); and Scott Garrabrant's [Intentional Bucket Errors](https://www.lesswrong.com/posts/ZCfwKjQhAiDogorj8/intentional-bucket-errors) and [Does Agent-like Behavior Imply Agent-like Architecture?](https://www.lesswrong.com/posts/osxNg6yBCJ4ur9hpi/does-agent-like-behavior-imply-agent-like-architecture)\n* See also [a forum discussion](https://www.lesswrong.com/posts/aPwNaiSLjYP4XXZQW/ai-alignment-open-thread-august-2019#3TKtFmF3ZcQFgQNsA) on \"proof-level guarantees\" in AI safety.\n\n\n#### News and links\n\n\n* From Ben Cottier and 
Rohin Shah: [Clarifying Some Key Hypotheses in AI Alignment](https://www.lesswrong.com/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment)\n* [Classifying Specification Problems as Variants of Goodhart's Law](https://www.alignmentforum.org/posts/yXPT4nr4as7JvxLQa/classifying-specification-problems-as-variants-of-goodhart-s): Victoria Krakovna and Ramana Kumar relate DeepMind's SRA taxonomy to [mesa-optimizers](https://intelligence.org/learned-optimization/), [selection and control](https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control), and Scott Garrabrant's Goodhart taxonomy. Also new from DeepMind: Ramana, Tom Everitt, and Marcus Hutter's [Designing Agent Incentives to Avoid Reward Tampering](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd).\n* From OpenAI: [Testing Robustness Against Unforeseen Adversaries](https://openai.com/blog/testing-robustness/). 80,000 Hours also recently [interviewed](https://80000hours.org/podcast/episodes/paul-christiano-a-message-for-the-future/) OpenAI's Paul Christiano, with some additional material [on decision theory](https://www.lesswrong.com/posts/n6wajkE3Tpfn6sd5j/christiano-decision-theory-excerpt).\n* From AI Impacts: [Evidence Against Current Methods Leading to Human-Level AI](https://aiimpacts.org/evidence-against-current-methods-leading-to-human-level-artificial-intelligence/) and [Ernie Davis on the Landscape of AI Risks](https://aiimpacts.org/ernie-davis-on-the-landscape-of-ai-risks/)\n* From Wei Dai: [Problems in AI Alignment That Philosophers Could Potentially Contribute To](https://www.lesswrong.com/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially)\n* Richard Möhn has put together a calendar of [upcoming AI alignment events](https://www.lesswrong.com/posts/h8gypTEKcwqGsjjFT/predicted-ai-alignment-event-meeting-calendar).\n* The Berkeley Existential Risk Initiative is seeking an 
[Operations Manager](http://existence.org/jobs/operations-lead).\n\n\n\nThe post [September 2019 Newsletter](https://intelligence.org/2019/09/30/september-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-10-01T04:48:14Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "4746c266bb97b06dbe572dd382ea87b7", "title": "August 2019 Newsletter", "url": "https://intelligence.org/2019/08/06/august-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* MIRI research associate Stuart Armstrong is offering $1000 for [good questions to ask an Oracle AI](https://www.lesswrong.com/posts/cSzaxcmeYW6z7cgtc/contest-usd1-000-for-good-questions-to-ask-to-an-oracle-ai).\n* Recent AI safety posts from Stuart: [Indifference: Multiple Changes, Multiple Agents](https://www.alignmentforum.org/posts/XkuRKqXKAaMySbXCN/indifference-multiple-changes-multiple-agents); [Intertheoretic Utility Comparison: Examples](https://www.alignmentforum.org/posts/5bd75cc58225bf06703753ef/intertheoretic-utility-comparison-examples); [Normalising Utility as Willingness to Pay](https://www.lesswrong.com/posts/qudmaMyRuQk2pHxtj/normalising-utility-as-willingness-to-pay); and [Partial Preferences Revisited](https://www.alignmentforum.org/posts/BydQtwfN97pFwEWtW/toy-model-piece-1-partial-preferences-revisited).\n* MIRI researcher Buck Shlegeris has put together a quick and informal [AI safety reading list](https://docs.google.com/document/d/1LvmP6OOcGSRsy1jAWC3Gg5plbvHwH642QUjddL-KKh0/edit).\n* [There's No Fire Alarm for AGI](https://intelligence.org/2017/10/13/fire-alarm/) reports on a researcher's January 2017 prediction that “in the next two years, we will not get 80, 90%” on [Winograd schemas](https://en.wikipedia.org/wiki/Winograd_Schema_Challenge), an NLP test. 
Although this prediction was correct, researchers at [Microsoft](https://arxiv.org/abs/1901.11504), [Carnegie Mellon and Google Brain](https://arxiv.org/abs/1906.08237), and [Facebook](https://arxiv.org/abs/1907.11692) have now (2.5 years later) achieved Winograd scores of [89.0 and 90.4](https://gluebenchmark.com/leaderboard/).\n* Ortega et al.'s “[Meta-Learning of Sequential Strategies](https://arxiv.org/abs/1905.03030)” includes a discussion of mesa-optimization, independent of Hubinger et al.'s “[Risks from Learned Optimization in Advanced Machine Learning Systems](https://intelligence.org/learned-optimization/),” under the heading of “spontaneous meta-learning.”\n\n\n#### News and links\n\n\n* Wei Dai outlines [forum participation as a research strategy](https://www.lesswrong.com/posts/rBkZvbGDQZhEymReM/forum-participation-as-a-research-strategy).\n* On a related note, the posts on the AI Alignment Forum this month were very good — I'll spotlight them all this time around. Dai wrote on [the purposes of decision theory research](https://www.lesswrong.com/posts/JSjagTDGdz2y6nNE3/on-the-purposes-of-decision-theory-research); Shah on [learning biases and rewards simultaneously](https://www.alignmentforum.org/posts/xxnPxELC4jLKaFKqG/learning-biases-and-rewards-simultaneously); Kovarik on [AI safety debate and its applications](https://www.lesswrong.com/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications#comments); Steiner on [the Armstrong agenda](https://www.lesswrong.com/posts/GHNokcgERpLJwJnLW/some-comments-on-stuart-armstrong-s-research-agenda-v0-9) and [the intentional stance](https://www.lesswrong.com/posts/NvqGmLBCtvQxfMs9m/the-artificial-intentional-stance); Trazzi on [manipulative AI](https://www.lesswrong.com/posts/EpdXLNXyL4EYLFwF8/an-increasingly-manipulative-newsfeed); Cohen on [IRL](https://www.alignmentforum.org/posts/kahBLu32sZAuAZbER/irl-in-general-environments) and 
[imitation](https://www.alignmentforum.org/posts/LTFaD96D9kWuTibWr/just-imitate-humans); and Manheim on optimizing and Goodhart effects ([1](https://www.lesswrong.com/posts/2neeoZ7idRbZf4eNC/re-introducing-selection-vs-control-for-optimization), [2](https://www.lesswrong.com/posts/BEMvcaeixt3uEqyBk/what-does-optimization-mean-again-optimizing-and-goodhart), [3](https://www.lesswrong.com/posts/zdeYiQgwYRs2bEmCK/applying-overoptimization-to-selection-vs-control-optimizing)).\n* Jade Leung discusses [AI governance](https://futureoflife.org/2019/07/22/on-the-governance-of-ai-with-jade-leung/) on the AI Alignment Podcast.\n* CMU and Facebook researchers' [Pluribus](https://www.technologyreview.com/s/613943/facebooks-new-poker-playing-ai-could-wreck-the-online-poker-industryso-its-not-being/) program beats human poker professionals ⁠— using only [$144](https://science.sciencemag.org/content/early/2019/07/10/science.aay2400) in compute. The developers also [choose not to release the code](https://www.lesswrong.com/posts/6qtq6KDvj86DXqfp6/let-s-read-superhuman-ai-for-multiplayer-poker): “Because poker is played commercially, the risk associated with releasing the code outweighs the benefits.”\n* [Microsoft invests $1 billion in OpenAI](https://openai.com/blog/microsoft/). From Microsoft's [press release](https://news.microsoft.com/2019/07/22/openai-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/): “Through this partnership, the companies will accelerate breakthroughs in AI and power OpenAI’s efforts to create artificial general intelligence (AGI).” OpenAI has also released a paper on “[The Role of Cooperation in Responsible AI Development](https://openai.com/blog/cooperation-on-safety/).”\n* Ought has a new preferred [introduction to their work](https://ought.org/presentations/delegating-cognitive-work-2019-06). 
See also Paul Christiano's [Ought: Why it Matters and Ways to Help](https://www.alignmentforum.org/posts/cpewqG3MjnKJpCr7E/ought-why-it-matters-and-ways-to-help).\n* FHI has [11 open research positions](https://www.fhi.ox.ac.uk/researcher-positions/); applications are due by Aug. 16. You can also apply to CSER's [AGI risk research associate position](https://www.jobs.cam.ac.uk/job/22457/) through Aug. 26.\n\n\n\nThe post [August 2019 Newsletter](https://intelligence.org/2019/08/06/august-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-08-07T02:24:13Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "8912ba46f51ddd0a9c7f4125388e55a0", "title": "July 2019 Newsletter", "url": "https://intelligence.org/2019/07/19/july-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "Hubinger et al.'s “[Risks from Learned Optimization in Advanced Machine Learning Systems](https://intelligence.org/learned-optimization/)”, one of our new core resources on the alignment problem, is now available on [arXiv](https://arxiv.org/abs/1906.01820), the [AI Alignment Forum](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB), and [LessWrong](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB).\nIn other news, we received an Ethereum donation worth $230,910 from Vitalik Buterin — the inventor and co-founder of Ethereum, and now our third-largest all-time supporter!\n\n\nAlso worth highlighting, from the Open Philanthropy Project's Claire Zabel and Luke Muehlhauser: [there's a pressing need for security professionals in AI safety and biosecurity](https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/information-security-careers-for-gcr-reduction).\n\n\n \n\n\n\n> It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they 
found them).\n> \n> \n> It’s plausible that some people focused on high-impact careers (as many effective altruists are) would be well-suited to helping meet this need by gaining infosec expertise and experience and then moving into work at the relevant organizations.\n> \n> \n\n\n \n\n\n#### Other updates\n\n\n* [Mesa Optimization: What It Is, And Why We Should Care](https://www.lesswrong.com/posts/XWPJfgBymBbL3jdFd/an-58-mesa-optimization-what-it-is-and-why-we-should-care) — Rohin Shah's consistently excellent Alignment Newsletter discusses “Risks from Learned Optimization…” and other recent AI safety work.\n* MIRI Research Associate Stuart Armstrong releases his [Research Agenda v0.9: Synthesising a Human's Preferences into a Utility Function](https://www.alignmentforum.org/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into).\n* OpenAI and MIRI staff [help talk Munich student Connor Leahy out of releasing](https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51) an attempted replication of OpenAI's [GPT-2](https://openai.com/blog/better-language-models/) model. ([LessWrong discussion.](https://www.lesswrong.com/posts/36fxiKdEqswkedHyG/the-hacker-learns-to-trust-1)) Although Leahy's replication attempt wasn't successful, write-ups like his suggest that OpenAI's careful discussion surrounding GPT-2 is continuing to prompt good reassessments of publishing norms within ML. Quoting from Leahy's postmortem:\n \n\n\n\n> Sometime in the future we will have reached a point where the consequences of our research are beyond what we can discover in a one-week evaluation cycle. And given my recent experiences with GPT2, we might already be there. The more complex and powerful our technology becomes, the more time we should be willing to spend in evaluating its consequences. And if we have doubts about safety, we should default to caution.\n> \n> \n> We tend to live in an ever accelerating world. 
Both the industrial and academic R&D cycles have grown only faster over the decades. Everyone wants “the next big thing” as fast as possible. And with the way our culture is now, it can be hard to resist the pressures to adapt to this accelerating pace. Your career can depend on being the first to publish a result, as can your market share.\n> \n> \n> We as a community and society need to combat this trend, and create a healthy cultural environment that allows researchers to *take their time*. They shouldn’t have to fear repercussions or ridicule for delaying release. Postponing a release because of added evaluation should be the norm rather than the exception. We need to make it commonly accepted that we as a community respect others’ safety concerns and don’t penalize them for having such concerns, *even if they ultimately turn out to be wrong*. If we don’t do this, it will be a race to the bottom in terms of safety precautions.\n> \n>\n* From Abram Demski: [Selection vs. Control](https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control); [Does Bayes Beat Goodhart?](https://www.lesswrong.com/posts/YJq6R9Wgk5Atjx54D/does-bayes-beat-goodhart); and [Conceptual Problems with Updateless Decision Theory and Policy Selection](https://www.alignmentforum.org/posts/9sYzoRnmqmxZm4Whf/conceptual-problems-with-udt-and-policy-selection)\n* *Vox*'s *Future Perfect* Podcast [interviews Jaan Tallinn](https://www.vox.com/future-perfect/2019/6/26/18629806/artificial-intelligence-human-extinction-podcast-jaan-tallinn) and discusses MIRI's role in originating and propagating AI safety memes.\n* *The AI Does Not Hate You*, journalist Tom Chivers' well-researched book about the rationality community and AI risk, [is out in the UK](https://www.amazon.co.uk/Does-Not-Hate-You-Rationalists/dp/1474608779).\n\n\n#### News and links\n\n\n* Other recent AI safety write-ups: David Krueger's [Let's Talk About “Convergent 
Rationality”](https://www.alignmentforum.org/posts/pLZ3bdeng4u5W8Yft/let-s-talk-about-convergent-rationality-1); Paul Christiano's [Aligning a Toy Model of Optimization](https://www.lesswrong.com/posts/H5gXpFtg93qDMZ6Xn/aligning-a-toy-model-of-optimization); and Owain Evans, William Saunders, and Andreas Stuhlmüller's [Machine Learning Projects on Iterated Distillation and Amplification](https://www.alignmentforum.org/posts/Y9xD78kufNsF7wL6f/machine-learning-projects-on-ida)\n* From DeepMind: Vishal Maini puts together an [AI reading list](https://medium.com/machine-learning-for-humans/ai-reading-list-c4753afd97a), Victoria Krakovna [recaps the ICLR Safe ML workshop](https://vkrakovna.wordpress.com/2019/06/18/iclr-safe-ml-workshop-report/), and Pushmeet Kohli [discusses AI safety on the 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/).\n* The EA Foundation is awarding grants for “efforts to reduce risks of astronomical suffering (s-risks) from advanced artificial intelligence”; [apply by Aug. 
11](https://forum.effectivealtruism.org/posts/BmBpZjNJbZibLsYnu/first-application-round-of-the-eaf-fund).\n* Additionally, if you're a young AI safety researcher (with a PhD) based at a European university or nonprofit, you may want to apply for [~$60,000 in funding](https://www.bosch-ai.com/young-researcher-award/) from the Bosch Center for AI.\n\n\n\nThe post [July 2019 Newsletter](https://intelligence.org/2019/07/19/july-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-07-20T04:58:05Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "1b85e2b4df4b984039e3f6005f84a971", "title": "New paper: “Risks from learned optimization”", "url": "https://intelligence.org/2019/06/07/new-paper-learned-optimization/", "source": "miri", "source_type": "blog", "text": "[![Risks from Learned Optimization in Advanced Machine Learning Systems](https://intelligence.org/wp-content/uploads/2019/06/inneroptimization.png)](https://arxiv.org/abs/1906.01820)Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant have a new paper out: “**[Risks from learned optimization in advanced machine learning systems](https://arxiv.org/abs/1906.01820)**.”\n\n\nThe paper’s abstract:\n\n\n\n> We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as *mesa-optimization*, a neologism we introduce in this paper.\n> \n> \n> We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned? 
In this paper, we provide an in-depth analysis of these two primary questions and provide an overview of topics for future research.\n> \n> \n\n\nThe critical distinction presented in the paper is between what an AI system is optimized to do (its *base objective*) and what it actually ends up optimizing for (its *mesa-objective*), if it optimizes for anything at all. The authors are interested in when ML models will end up optimizing for something, as well as how the objective an ML model ends up optimizing for compares to the objective it was selected to achieve.\n\n\nThe distinction between the objective a system is selected to achieve and the objective it actually optimizes for isn’t new. Eliezer Yudkowsky has previously raised similar concerns in his discussion of [optimization daemons](https://arbital.com/p/daemons/), and Paul Christiano has discussed such concerns in “[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom).”\n\n\nThe paper’s contents have also been released this week as a sequence on the [AI Alignment Forum](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB), cross-posted to [LessWrong](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB). As the authors note there:\n\n\n\n> We believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we plan to present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization relatively easier or harder to solve. 
By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems.\n> \n> \n> Furthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer to as *deceptive alignment* which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning.\n> \n> \n\n\nThe post [New paper: “Risks from learned optimization”](https://intelligence.org/2019/06/07/new-paper-learned-optimization/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-06-07T23:53:56Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "ab9f69c789c406793cf6b760736e9c31", "title": "June 2019 Newsletter", "url": "https://intelligence.org/2019/06/01/june-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant have released the first two (of five) posts on “[mesa-optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB)”:\n\n> The goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as *mesa-optimization*.\n> \n> \n> We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. 
First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned?\n> \n> \n\n\nThe sequence begins with [Risks from Learned Optimization: Introduction](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/FkgsxrGf3QxhfLWHG) and continues with [Conditions for Mesa-Optimization](https://alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/q2rCMHNXazALgQpGH). ([LessWrong mirror.](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB/p/FkgsxrGf3QxhfLWHG))\n\n\n#### Other updates\n\n\n* New research posts: [Nash Equilibria Can Be Arbitrarily Bad](https://www.lesswrong.com/posts/jz5QoizH8HkQwWZ9Q/nash-equilibriums-can-be-arbitrarily-bad); [Self-Confirming Predictions Can Be Arbitrarily Bad](https://www.lesswrong.com/posts/KoEY9CjrKe93ErYhd/self-confirming-predictions-can-be-arbitrarily-bad); [And the AI Would Have Got Away With It Too, If…](https://www.lesswrong.com/posts/92J4zJHkqmXTduxzY/and-the-ai-would-have-got-away-with-it-too-if); [Uncertainty Versus Fuzziness Versus Extrapolation Desiderata](https://www.alignmentforum.org/posts/QJwnPRBBvgaeFeiLR/uncertainty-versus-fuzziness-versus-extrapolation-desiderata)\n* We've released our [annual review for 2018](https://intelligence.org/2019/05/31/2018-in-review/).\n* Applications are open for [two AI safety events](https://forum.effectivealtruism.org/posts/QmfchinDKEcHG5uz5/two-ai-safety-events-at-ea-hotel-in-august) at the EA Hotel in Blackpool, England: the Learning-By-Doing AI Safety Workshop (Aug. 16-19), and the Technical AI Safety Unconference (Aug. 
22-25).\n* [A discussion of takeoff speed](https://www.lesswrong.com/posts/PzAnWgqvfESgQEvdg/any-rebuttals-of-christiano-and-ai-impacts-on-takeoff-speeds), including some very incomplete and high-level MIRI comments.\n\n\n#### News and links\n\n\n* Other recent AI safety posts: Tom Sittler's [A Shift in Arguments for AI Risk](https://fragile-credences.github.io/prioritising-ai/) and Wei Dai's [“UDT2” and “against UD+ASSA”](https://www.alignmentforum.org/posts/zd2DrbHApWypJD2Rz/udt2-and-against-ud-assa).\n* Talks from the SafeML ICLR workshop are now [available online](https://sites.google.com/view/safeml-iclr2019/schedule).\n* [From OpenAI](https://openai.com/blog/better-language-models/#update): “We’re implementing two mechanisms to responsibly publish GPT-2 and hopefully future releases: staged release and partnership-based sharing.”\n* FHI's Jade Leung [argues](https://forum.effectivealtruism.org/posts/fniRhiPYw8b6FETsn/jade-leung-why-companies-should-be-leading-on-ai-governance) that “states are ill-equipped to lead at the formative stages of an AI governance regime,” and that “private AI labs are best-placed to lead on AI governance”.\n\n\n\nThe post [June 2019 Newsletter](https://intelligence.org/2019/06/01/june-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-06-02T06:43:41Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "6a19686dc0c1130009188310f81f34a0", "title": "2018 in review", "url": "https://intelligence.org/2019/05/31/2018-in-review/", "source": "miri", "source_type": "blog", "text": "Our primary focus at MIRI in 2018 was twofold: research—as always!—and growth.\n\n\nThanks to the [incredible support](https://intelligence.org/2018/11/28/2017-in-review/#4) we received from donors the previous year, in 2018 we were able to aggressively pursue the plans detailed in our [2017 fundraiser post](https://intelligence.org/2017/12/01/miris-2017-fundraiser/). 
The most notable goal we set was to “grow big and grow fast,” as [our new research directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section1) benefit a lot more from a larger team, and require skills that are a lot easier to hire for. To that end, we set a target of adding 10 new research staff by the end of 2019.\n\n\n2018 therefore saw us accelerate the work we started in 2017, investing more in recruitment and shoring up the foundations needed for our ongoing growth. Since our 2017 fundraiser post, we’re up 3 new research staff, including noted Haskell developer [Edward Kmett](https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/). I now think that we’re most likely to hit 6–8 hires by the end of 2019, though hitting 9–10 still seems quite possible to me, as we are still engaging with many promising candidates, and continue to meet more.\n\n\nOverall, 2018 was a great year for MIRI. Our research continued apace, and our recruitment efforts increasingly paid out dividends. \n\n \n\nBelow I’ll elaborate on our:\n\n\n* [research progress and outputs](https://intelligence.org/feed/?paged=12#2018-research),\n* [research program support activities](https://intelligence.org/feed/?paged=12#2018-research-program-support), including more details on our recruitment efforts,\n* [outreach related activities](https://intelligence.org/feed/?paged=12#2018-outreach-and-exposition), and\n* [fundraising and spending](https://intelligence.org/feed/?paged=12#2018-finances).\n\n\n### 2018 Research\n\n\nOur [2018 update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) discussed the new research directions we’re pursuing, and the nondisclosure-by-default policy we’ve adopted for our research overall. 
As described in the post, these new directions aim at deconfusion (similar to our traditional research programs, which we continue to pursue), and include the themes of “seeking entirely new low-level foundations for optimization,” “endeavoring to figure out parts of cognition that can be very transparent as cognition,” and “experimenting with some [relatively deep] alignment problems,” and require building software systems and infrastructure.\n\n\nIn 2018, our progress on these new directions and the supporting infrastructure was steady and significant, in line with our high expectations, albeit proceeding significantly slower than we’d like, due in part to the usual difficulties associated with software development. On the whole, our excitement about these new directions is high, and we remain very eager to expand the team to accelerate our progress.\n\n\nIn parallel, Agent Foundations work continued to be a priority at MIRI. Our biggest publication on this front was “[Embedded Agency](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version),” co-written by MIRI researchers Scott Garrabrant and Abram Demski. “Embedded Agency” reframes our Agent Foundations research agenda as different angles of attack on a single central difficulty: we don’t know how to characterize good reasoning and decision-making for agents embedded in their environment.\n\n\nBelow are notable technical results and analyses we released in each research category last year.[1](https://intelligence.org/2019/05/31/2018-in-review/#footnote_0_19235 \"Our summaries of our more significant results below largely come from our 2018 fundraiser post.\") These are accompanied by [predictions made last year](https://intelligence.org/2018/03/31/2018-research-plans/) by Scott Garrabrant, the research lead for MIRI’s Agent Foundations work, and Scott’s assessment of the progress our published work represents against those predictions. 
The research categories below are explained in detail in “[Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN).”\n\n\nThe actual share of MIRI’s research that was non-public in 2018 ended up being larger than Scott expected when he registered his predictions. The list below is best thought of as a collection of interesting (though not groundbreaking) results and analyses that demonstrate the flavor of some of the directions we explored in our research last year. As such, these assessments don’t represent our model of our overall progress, and aren’t intended to be a good proxy for that question. Given the difficulty of predicting what we’ll disclose for our 2019 public-facing results, we won’t register new predictions this year.\n\n\n#### Decision theory\n\n\n* Predicted progress: 3 (modest)\n* Actual progress: 2 (weak-to-modest)\n\n\nScott sees our largest public decision theory result of 2018 as [Prisoners’ Dilemma with Costs to Modeling](https://www.lesswrong.com/posts/XjMkPyaPYTf7LrKiT/prisoners-dilemma-with-costs-to-modeling), a modified version of [open-source prisoners’ dilemmas](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/) in which agents must pay resources in order to model each other.\n\n\nOther significant write-ups include:\n\n\n* [Logical Inductors Converge to Correlated Equilibria (Kinda)](https://www.alignmentforum.org/posts/5bd75cc58225bf0670375569/logical-inductors-converge-to-correlated-equilibria-kinda): A game-theoretic analysis of logical inductors.\n* New results in [Asymptotic Decision Theory](https://www.alignmentforum.org/posts/yXCvYqTZCsfN7WRrg/asymptotic-decision-theory-improved-writeup) and [When EDT=CDT, ADT Does Well](https://www.alignmentforum.org/posts/pgJbaXvYWBx3Mrg5T/when-edt-cdt-adt-does-well) represent incremental progress on understanding what’s possible with respect to learning the right [counterfactuals](https://intelligence.org/2018/10/31/embedded-decisions/).\n\n\nAdditional 
decision theory research posts from 2018:\n\n\n* From Alex Appel, a MIRI contractor and summer intern: (a) [Distributed Cooperation](https://agentfoundations.org/item?id=1777); (b) [Cooperative Oracles](https://www.alignmentforum.org/posts/SgkaXQn3xqJkGQ2D8/cooperative-oracles); (c) [When EDT=CDT, ADT Does Well](https://www.alignmentforum.org/posts/pgJbaXvYWBx3Mrg5T/when-edt-cdt-adt-does-well); (d) [Conditional Oracle EDT Equilibria in Games](https://www.alignmentforum.org/posts/4MLpRxz7ZoX8YXSY3/coedt-equilibria-in-games)\n* From Abram Demski: (a) [In Logical Time, All Games are Iterated Games](https://www.alignmentforum.org/posts/dKAJqBDZRMMsaaYo5/in-logical-time-all-games-are-iterated-games); (b) [A Rationality Condition for CDT Is That It Equal EDT (Part 1)](https://www.alignmentforum.org/posts/XW6Qi2LitMDb2MF8c/a-rationality-condition-for-cdt-is-that-it-equal-edt-part-1); (c) [A Rationality Condition for CDT Is That It Equal EDT (Part 2)](https://www.alignmentforum.org/posts/tpWfDLZy2tk97MJ3F/a-rationality-condition-for-cdt-is-that-it-equal-edt-part-2)\n* From Scott Garrabrant: (a) [Knowledge is Freedom](https://www.lesswrong.com/posts/b3Bt9Cz4hEtR26ANX/knowledge-is-freedom); (b) [Counterfactual Mugging Poker Game](https://www.lesswrong.com/posts/g3PwPgcdcWiP33pYn/counterfactual-mugging-poker-game); (c) [(A → B) → A](https://www.alignmentforum.org/posts/qhsELHzAHFebRJE59/a-greater-than-b-greater-than-a)\n* From Alex Mennen, a MIRI summer intern: [When Wishful Thinking Works](https://www.alignmentforum.org/posts/KbCHcb8yyjAMFAAPJ/when-wishful-thinking-works)\n\n\n#### Embedded world-models\n\n\n* Predicted progress: 3 (modest)\n* Actual progress: 1 (limited)\n\n\nSome of our relatively significant results related to embedded world-models included:\n\n\n* Sam Eisenstat’s [untrollable prior](https://agentfoundations.org/item?id=1750), explained [in illustrated form](https://www.lesswrong.com/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated) by 
Abram Demski, shows that there is a Bayesian solution to one of the basic problems which motivated the development of [non-Bayesian](https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation) logical uncertainty tools (culminating in [logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/)). This informs our picture of what’s possible, and may lead to further progress in the direction of [Bayesian logical uncertainty](https://intelligence.org/2018/11/02/embedded-models/).\n* Sam Eisenstat and Tsvi Benson-Tilsen’s formulation of Bayesian logical induction. This framework, which has yet to be written up, forces logical induction into a Bayesian framework by constructing a Bayesian prior which trusts the beliefs of a logical inductor (which must supply those beliefs to the Bayesian regularly).\n\n\nSam and Tsvi’s work can be viewed as evidence that “true” Bayesian logical induction is possible. However, it can also be viewed as a demonstration that we have to be careful what we mean by “Bayesian”—the solution is arguably cheating, and it isn’t clear that you get any new desirable properties by doing things this way.\n\n\nScott assigns the untrollable prior result a *2* (weak-to-modest progress) rather than a *1* (limited progress), but is counting this among our [2017 results](https://intelligence.org/2018/11/28/2017-in-review/#1), since it was written up in 2018 but produced in 2017.\n\n\nOther recent work in this category includes:\n\n\n* From Alex Appel: (a) [Resource-Limited Reflective Oracles](https://agentfoundations.org/item?id=1793); (b) [Bounded Oracle Induction](https://www.alignmentforum.org/posts/MgLeAWSeLbzx8mkZ2/bounded-oracle-induction)\n* From Abram Demski: (a) [Toward a New Technical Explanation of Technical Explanation](https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation); (b) [Probability is Real, and Value is 
Complex](https://www.alignmentforum.org/posts/oheKfWA7SsvpK7SGp/probability-is-real-and-value-is-complex)\n\n\n#### Robust delegation\n\n\n* Predicted progress: 2 (weak-to-modest)\n* Actual progress: 1 (limited)\n\n\nOur most significant 2018 public result in this category is perhaps Sam Eisenstat’s [logical inductor tiling result](https://www.alignmentforum.org/posts/5bd75cc58225bf067037556d/logical-inductor-tiling-and-why-it-s-hard), which solves a version of the [tiling problem](https://intelligence.org/files/TilingAgentsDraft.pdf) for logically uncertain agents.[2](https://intelligence.org/2019/05/31/2018-in-review/#footnote_1_19235 \"Not to be confused with Nate Soares’ forthcoming tiling agents paper.\")\n\n\nOther posts on robust delegation:\n\n\n* From Stuart Armstrong (MIRI Research Associate): (a) [Standard ML Oracles vs. Counterfactual Ones](https://www.alignmentforum.org/posts/hJaJw6LK39zpyCKW6/standard-ml-oracles-vs-counterfactual-ones); (b) “[Occam’s Razor is Insufficient to Infer the Preferences of Irrational Agents](https://www.fhi.ox.ac.uk/occam/)“\n* From Abram Demski: [Stable Pointers to Value II: Environmental Goals](https://agentfoundations.org/item?id=1762)\n* From Scott Garrabrant: [Optimization Amplifies](https://www.lesswrong.com/posts/zEvqFtT4AtTztfYC4/optimization-amplifies)\n* From Vanessa Kosoy (MIRI Research Associate): (a) [Quantilal Control for Finite Markov Decision Processes](https://agentfoundations.org/item?id=1785); (b) [Computing An Exact Quantilal Policy](https://agentfoundations.org/item?id=1794)\n* From Alex Mennen: [Safely and Usefully Spectating on AIs Optimizing Over Toy Worlds](https://www.alignmentforum.org/posts/ikN9qQEkrFuPtYd6Y/safely-and-usefully-spectating-on-ais-optimizing-over-toy)\n\n\n#### Subsystem alignment\n\n\n* Predicted progress: 2 (weak-to-modest)\n* Actual progress: 2\n\n\nWe achieved greater clarity on subsystem alignment in 2018, largely reflected in Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, 
Joar Skalse,[3](https://intelligence.org/2019/05/31/2018-in-review/#footnote_2_19235 \"Evan was a MIRI research intern, while Chris, Vladimir, and Joar are external collaborators.\") and Scott Garrabrant’s forthcoming paper, “Risks from Learned Optimization in Advanced Machine Learning Systems.”[4](https://intelligence.org/2019/05/31/2018-in-review/#footnote_3_19235 \"This paper was previously cited in “Embedded Agency” under the working title “The Inner Alignment Problem.”\") This paper is currently being rolled out [on the AI Alignment Forum](https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction), as a sequence on “[Mesa-Optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB).”[5](https://intelligence.org/2019/05/31/2018-in-review/#footnote_4_19235 \"The full PDF version of the paper will be released in conjunction with the last post of the sequence.\")\n\n\nScott Garrabrant’s [Robustness to Scale](https://www.lesswrong.com/posts/bBdfbWfWxHN9Chjcq/robustness-to-scale) also discusses issues in subsystem alignment (“robustness to relative scale”), alongside other issues in AI alignment.\n\n\n#### Other\n\n\n* Predicted progress: 2 (weak-to-modest)\n* Actual progress: 2\n\n\nSome of the 2018 publications we expect to be most useful cut across all of the above categories:\n\n\n* “[Embedded Agency](https://intelligence.org/embedded-agency/),” Scott and Abram’s new introduction to all of the above research directions.\n* [Fixed Point Exercises](https://www.lesswrong.com/posts/mojJ6Hpri8rfzY78b/fixed-point-exercises), a set of exercises created by Scott to introduce people to the core ideas and tools in agent foundations research.\n\n\nHere, other noteworthy posts include:\n\n\n* From Scott Garrabrant: (a) [Sources of Intuitions and Data on AGI](https://www.lesswrong.com/posts/BibDWWeo37pzuZCmL/sources-of-intuitions-and-data-on-agi); (b) [History of the Development of Logical 
Induction](https://www.alignmentforum.org/posts/iBBK4j6RWC7znEiDv/history-of-the-development-of-logical-induction)\n\n\n### 2018 Research Program Support\n\n\nWe added three new research staff to the team in 2018: Ben Weinstein-Raun, James Payor, and [Edward Kmett](https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/).\n\n\nWe invested a large share of our capacity into growing the research team in 2018, and generally into activities aimed at increasing the amount of alignment research in the world, including:\n\n\n* **Running eight** [**AI Risk for Computer Scientists**](https://intelligence.org/ai-risk-for-computer-scientists/) **(AIRCS) workshops**. This is an ongoing all-expenses-paid workshop series for computer scientists and programmers who want to get started thinking about or working on AI alignment. At these workshops, we introduce AI risk and related concepts, share some CFAR-style rationality content, and introduce participants to the work done by MIRI and other safety research teams. Our overall aim is to cause good discussions to happen, improve participants’ ability to make progress on whether and how to contribute, and in the process work out whether they may be interested in joining MIRI or other alignment groups. Of 2018 workshop participants, we saw one join MIRI full-time, four take on internships with us, and on the order of ten with good prospects of joining MIRI within a year, in addition to several who have since joined other safety-related organizations.\n* **Running a 2.5-week** [**AI Summer Fellows Program**](https://www.rationality.org/workshops/apply-msfp) **(AISFP) with CFAR**.[6](https://intelligence.org/2019/05/31/2018-in-review/#footnote_5_19235 \"As noted in our summer updates:\nWe had a large and extremely strong pool of applicants, with over 170 applications for 30 slots (versus 50 applications for 20 slots in 2017). 
The program this year was more mathematically flavored than in 2017, and concluded with a flurry of new analyses by participants. On the whole, the program seems to have been more successful at digging into AI alignment problems than in previous years, as well as more successful at seeding ongoing collaborations between participants, and between participants and MIRI staff.\nThe program ended with a very active blogathon, with write-ups including: Dependent Type Theory and Zero-Shot Reasoning; Conceptual Problems with Utility Functions (and follow-up); Complete Class: Consequentialist Foundations; and Agents That Learn From Human Behavior Can’t Learn Human Values That Humans Haven’t Learned Yet.\") Additionally, MIRI researcher Tsvi Benson-Tilsen and MIRI summer intern Alex Zhu ran a mid-year AI safety retreat for MIT students and alumni.\n* **Running a 10-week research internship program over the summer**, reviewed in our [summer updates](https://intelligence.org/2018/09/01/summer-miri-updates/). Interns also participated in AISFP and in a joint [research workshop](https://intelligence.org/workshops/#july-2018) with interns from the [Center for Human-Compatible AI](https://humancompatible.ai/). Additionally, we hosted three more research interns later in the year. We are hopeful that at least one of them will join the team in 2019.\n* **Making grants to two individuals as part of our AI Safety Retraining Program**. In 2018 we received [$150k in restricted funding](https://intelligence.org/2018/09/01/summer-miri-updates/#2) from the Open Philanthropy Project, “to provide stipends and guidance to a few highly technically skilled individuals. 
The goal of the program is to free up 3–6 months of time for strong candidates to spend on retraining, so that they can potentially transition to full-time work on AI alignment.” We issued grants to two people in 2018, including [Carroll Wainwright](https://www.partnershiponai.org/team/carroll-wainwright/) who went on to become a Research Scientist at Partnership on AI.\n\n\nIn addition to the above, in 2018 we:\n\n\n* **Hired additional operations staff** to ensure we have the required operational capacity to support our continued growth.\n* **Moved into new larger office space**.\n\n\n### 2018 Outreach and Exposition\n\n\nOn the outreach, coordination, and exposition front, we:\n\n\n* **Released a new edition of** [***Rationality: From AI to Zombies***](https://intelligence.org/rationality-ai-zombies/), beginning with volumes one and two, featuring a number of [updates to the text](https://intelligence.org/2018/12/15/announcing-new-raz/) and an official print edition. We also made Stuart Armstrong’s 2014 book on AI risk, *Smarter Than Us: The Rise of Machine Intelligence*, available on the web for free at [smarterthan.us](https://smarterthan.us/).\n* **Released** [**2018 Update: Our New Research Directions**](https://intelligence.org/2018-update-our-new-research-directions), a lengthy discussion of our research, our nondisclosure-by-default policies, and the case for computer scientists and software engineers to apply to join our team.\n* **Produced other expository writing**: [Two Clarifications About “Strategic Background”](https://www.lesswrong.com/posts/hL9ennoEfJXMj7r2D/two-clarifications-about-strategic-background); [Challenges to Paul Christiano’s Capability Amplification Proposal](https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/) (discussion on [LessWrong](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal), including [follow-up 
conversations](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq#79jM2ecef73zupPR4)); [Comment on Decision Theory](https://www.lesswrong.com/posts/uKbxi2EJ3KBNRDGpL/comment-on-decision-theory); [The Rocket Alignment Problem](https://intelligence.org/2018/10/03/rocket-alignment/) ([LessWrong link](https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem)).\n* **Received press coverage** in [*Axios*](https://www.axios.com/newsletters/axios-future-5a9e9f63-fd4f-4514-9570-ce9dc4c14348.html?chunk=2#story2), [*Forbes*](https://www.forbes.com/sites/rogerhuang/2018/11/20/wetrust-spring-will-match-your-cryptocurrency-donations-through-giving-tuesday/#1fda09b02461), [*Gizmodo*](https://gizmodo.com/how-we-can-prepare-now-for-catastrophically-dangerous-a-1830388719), and *Vox* ([1](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment), [2](https://www.vox.com/future-perfect/2018/10/22/17991736/jeff-bezos-elon-musk-colonizing-mars-moon-space-blue-origin-spacex)), and were interviewed in [*Nautilus*](http://nautil.us/issue/58/self/scary-ai-is-more-fantasia-than-terminator) and on [Sam Harris’ podcast](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/).\n* **Spoke at** [**Effective Altruism Global**](https://sf.eaglobal.org/2018) **in San Francisco and at the** [**Human-Aligned AI Summer School**](http://humanaligned.ai/index.html) **in Prague**.\n* **Presented on** [**logical induction**](https://intelligence.org/2016/09/12/new-paper-logical-induction/) **at the joint** [**Applied Theory Workshop**](https://research.chicagobooth.edu/appliedtheory/workshop-on-applied-theory) **/** [**Workshop in Economic Theory**](https://economics.uchicago.edu/content/workshop-economic-theory-joint-applied-theory-workshop).\n* **Released a paper, “**[**Categorizing Variants of Goodhart’s Law**](https://intelligence.org/2018/03/27/categorizing-goodhart/)**,”** based 
on Scott Garrabrant’s 2017 “[Goodhart Taxonomy](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy).” We also reprinted Nate Soares’ “[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf)” and Nick Bostrom and Eliezer Yudkowsky’s “[The Ethics of Artificial Intelligence](https://intelligence.org/files/EthicsofAI.pdf)” in [*Artificial Intelligence Safety and Security*](https://www.crcpress.com/Artificial-Intelligence-Safety-and-Security/Yampolskiy/p/book/9780815369820).\n* **Several MIRI researchers also received recognition from the AI Alignment Prize**, including Scott Garrabrant receiving first place and second place in the [first round](https://www.lesswrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round) and [second round](https://www.lesswrong.com/posts/SSEyiHaACSYDHcYZz/announcement-ai-alignment-prize-round-2-winners-and-next), respectively, MIRI Research Associate Vanessa Kosoy winning first prize in the [third round](https://www.lesswrong.com/posts/juBRTuE3TLti5yB35/announcement-ai-alignment-prize-round-3-winners-and-next), and Scott and Abram Demski tying Alex Turner for first place in the [fourth round](https://www.lesswrong.com/posts/nDHbgjdddG5EN6ocg/announcement-ai-alignment-prize-round-4-winners).\n* MIRI senior staff also participated in AI research and strategy events and conversations throughout the year.\n\n\n### 2018 Finances\n\n\n#### Fundraising\n\n\n2018 was another strong year for MIRI’s fundraising. 
While the total raised of just over **$5.1M** was a 12% drop from the amount raised in 2017, the graph below shows that our strong growth trend continued—with 2017, as I surmised [in last year’s review](https://intelligence.org/2018/11/28/2017-in-review/), looking like an outlier year driven by the large influx of cryptocurrency contributions during a market high in December 2017.[7](https://intelligence.org/2019/05/31/2018-in-review/#footnote_6_19235 \"Note that amounts in this section may vary slightly from our audited financial statements, due to small differences between how we tracked donations internally, and how we are required to report them in our financial statements.\")\n\n\n \n\n \n\n(In this chart and those that follow, “Unlapsed” indicates contributions from past supporters who did not donate in the previous year.)\n\n\nHighlights include:\n\n\n* **$1.02M, our largest ever single donation by an individual**, from “Anonymous Ethereum Investor #2,” based in Canada, made through Rethink Charity Forward’s recently established [tax-advantaged fund for Canadian MIRI supporters](https://rcforward.org/miri/).[8](https://intelligence.org/2019/05/31/2018-in-review/#footnote_7_19235 \"A big thanks to Colm for all the work he’s put into setting this up; have a look at our Tax-Advantaged Donations page for more information.\")\n* **$1.4M in grants from the Open Philanthropy Project**, $1.25M in [general support](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017) and $150k for our [AI Safety Retraining Program](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program).\n* **$951k during our [annual fundraiser](https://intelligence.org/2019/02/11/our-2018-fundraiser-review/)**, driven in large part by MIRI supporters’ 
participation in multiple matching campaigns during the fundraiser, including [WeTrust Spring’s Ethereum-matching campaign](https://blog.wetrust.io/conclusion-of-the-first-lr-experiment-709b018b5f83), Facebook’s [Giving Tuesday event](https://donations.fb.com/giving-tuesday/), and in partnership with [Raising for Effective Giving](https://reg-charity.org/) (REG), professional poker players’ [Double Up Drive](http://www.doubleupdrive.com/).\n* **$529k from 2 grants recommended by the [EA Funds Long-Term Future Fund](https://app.effectivealtruism.org/funds/far-future)**.\n* **$115k from PokerStars, also through REG**.\n\n\nIn 2018, we received contributions from 637 unique contributors, 16% fewer than in 2017. This drop was largely driven by a 27% reduction in the number of new donors, partly offset by the continuing trend of steady growth in the number of returning donors[9](https://intelligence.org/2019/05/31/2018-in-review/#footnote_8_19235 \"2014 is anomalously high on this graph due to the community’s active participation in our memorable SVGives campaign.\"):\n\n\n \n\n \n\n \n\n\n\n\nDonations of cryptocurrency were down in 2018 both in absolute terms (-$1.2M in value) and as a percentage of total contributions (23% compared to 42% in 2017). It’s plausible that if cryptocurrency values continue to rebound in 2019, we may see this trend reversed.\n\n\nIn 2017, donations received from matching initiatives increased almost five-fold over the previous year. In 2018, our inclusion in two different REG-administered matching challenges, a significantly increased engagement among MIRI supporters with Facebook’s Giving Tuesday, and MIRI’s success in WeTrust’s Spring campaign, offset a small decrease in corporate match dollars to improve slightly on 2017’s matching total. 
The following graph represents the matching amounts received over the last 5 years:\n\n\n \n\n\n\n\n#### Spending\n\n\nIn our [2017 fundraiser post](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#2), I projected that we’d spend ~$2.8M in 2018. Towards the end of last year, I [revised](https://intelligence.org/2018/11/26/miris-2018-fundraiser/#2) our estimate:\n\n\n\n> Following the amazing show of support we received from donors last year (and continuing into 2018), we had significantly more funds than we anticipated, and we found more ways to usefully spend it than we expected. In particular, we’ve been able to translate the “bonus” support we received in 2017 into broadening the scope of our recruiting efforts. As a consequence, our 2018 spending, which will come in at around $3.5M, actually matches the point estimate I gave in 2017 for our 2019 budget, rather than my prediction for 2018—a large step up from what I predicted, and an even larger step from last year’s [2017] budget of $2.1M.\n> \n> \n\n\nThe post goes on to give an overview of the ways in which we put this “bonus” support to good use. These included, in descending order by cost:\n\n\n* **Investing significantly more in recruiting-related activities**, including our AIRCS workshop series; and scaling up the number of interns we hosted, with an increased willingness to pay higher wages to attract promising candidates to come intern/trial with us.\n* **Filtering less on price relative to fit when choosing new office space to accommodate our growth**, and spending more on renovations, than we otherwise would have been able to, in order to create a more focused working environment for research staff.\n* **Raising salaries for some existing staff**, who were being paid well below market rates.\n\n\nWith concrete numbers now in hand, I’ll go into more detail below on how we put those additional funds to work.\n\n\nTotal spending came in just over **$3.75M**. 
The chart below compares our actual spending in 2018 with our projections, and with our spending in 2017.[10](https://intelligence.org/2019/05/31/2018-in-review/#footnote_9_19235 \"Note that these numbers will differ slightly compared to our forthcoming audited financial statements for 2018, due to subtleties of how certain types of expenses are tracked. For example, in the financial statements, renovation costs are considered to be a fixed asset that depreciates over time, and as such, won’t show up as an expense.\")\n\n\n \n\n\n\n\nAt a high level, as expected, personnel costs in 2018 continued to account for the majority of our spending—though they represented a smaller share of total spending than in 2017, due to increased spending on recruitment-related activities along with one-time costs related to securing and renovating our new office space.\n\n\nOur spending on recruitment-related activities is captured in the *program activities* category. The major ways we put additional funds to use, which account for the increase over my projections, break down as follows:\n\n\n* **~$170k on internships**: We hosted nine research interns for an average of ~2.5 months each. We were able to offer more competitive wages for internships, allowing us to recruit interns (especially those with an engineering focus) that we otherwise would have had a much harder time attracting, given the other opportunities they had available to them. We are actively interested in hiring three of these interns, and have made formal offers to two of them. 
I’m hopeful that we’ll have added at least one of them to the team by the end of this year.\n* **$54k on AI Safety Retraining Program grants**, described [above](https://intelligence.org/feed/?paged=12#2018-research-program-support).\n* The bulk of the rest of the additional funds we spent in this category went towards funding our ongoing series of [AI Risk for Computer Scientists](https://intelligence.org/ai-risk-for-computer-scientists/) workshops, described [above](https://intelligence.org/feed/?paged=12#2018-research-program-support).\n\n\nExpenses related to our new office space are accounted for in the *cost of doing business* category. The surplus spending in this category resulted from:\n\n\n* **~$300k for securing, renovating, and filling out our new office space**. Finding a suitable new space to accommodate our growth in Berkeley ended up being much more challenging and time-consuming than we expected.[11](https://intelligence.org/2019/05/31/2018-in-review/#footnote_10_19235 \"The number of options available in the relevant time frame was very limited, and most did not meet many of our requirements. Of the available spaces, the option that offered the best combination of size, layout, and location was looking for a tenant starting November 1st 2018, while we weren’t able to move until early January 2019. 
Additionally, the space was configured with a very open layout that wouldn’t have met our needs, but that many other prospective tenants found desirable, such that we’d have to cover renovation costs.\") We made use of additional funds to secure our preferred space ahead of when we were prepared to move, and to renovate the space to meet our needs, whereas if we’d been operating with the budget I originally projected, we would have almost certainly ended up in a much worse space.\n* The remainder of the spending beyond my projection in this category comes from higher-than-expected legal costs to secure visas for staff, and slightly higher-than-projected spending across many other subcategories.\n\n\n\n\n---\n\n\n\n\n\n---\n\n1. Our summaries of our more significant results below largely come from our [2018 fundraiser post](https://intelligence.org/2018/11/26/miris-2018-fundraiser/).\n2. Not to be confused with Nate Soares’ forthcoming tiling agents paper.\n3. Evan was a MIRI research intern, while Chris, Vladimir, and Joar are external collaborators.\n4. This paper was previously cited in “[Embedded Agency](https://intelligence.org/embedded-agency/)” under the working title “The Inner Alignment Problem.”\n5. The full PDF version of the paper will be released in conjunction with the last post of the sequence.\n6. As noted in our [summer updates](https://intelligence.org/2018/09/01/summer-miri-updates/): \n\n\n> We had a large and extremely strong pool of applicants, with over 170 applications for 30 slots (versus 50 applications for 20 slots in 2017). The program this year was more mathematically flavored than in 2017, and concluded with a flurry of new analyses by participants. 
On the whole, the program seems to have been more successful at digging into AI alignment problems than in previous years, as well as more successful at seeding ongoing collaborations between participants, and between participants and MIRI staff.\n> \n> \n\n\nThe program ended with a very active blogathon, with write-ups including: [Dependent Type Theory and Zero-Shot Reasoning](https://www.alignmentforum.org/posts/Xfw2d5horPunP2MSK/dependent-type-theory-and-zero-shot-reasoning); [Conceptual Problems with Utility Functions](https://www.alignmentforum.org/posts/Nx4DsTpMaoTiTp4RQ/conceptual-problems-with-utility-functions) (and [follow-up](https://www.alignmentforum.org/posts/QmeguSp4Pm7gecJCz/conceptual-problems-with-utility-functions-second-attempt-at)); [Complete Class: Consequentialist Foundations](https://www.alignmentforum.org/posts/sZuw6SGfmZHvcAAEP/complete-class-consequentialist-foundations); and [Agents That Learn From Human Behavior Can’t Learn Human Values That Humans Haven’t Learned Yet](https://www.alignmentforum.org/posts/DfewqowdzDdCD7S9y/agents-that-learn-from-human-behavior-can-t-learn-human).\n7. Note that amounts in this section may vary slightly from our audited financial statements, due to small differences between how we tracked donations internally, and how we are required to report them in our financial statements.\n8. A big thanks to Colm for all the work he’s put into setting this up; have a look at our [Tax-Advantaged Donations](https://intelligence.org/donate/tax-advantaged-donations/) page for more information.\n9. 2014 is anomalously high on this graph due to the community’s active participation in [our memorable SVGives campaign](https://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/).\n10. Note that these numbers will differ slightly compared to our forthcoming audited financial statements for 2018, due to subtleties of how certain types of expenses are tracked. 
For example, in the financial statements, renovation costs are considered to be a fixed asset that depreciates over time, and as such, won’t show up as an expense.\n11. The number of options available in the relevant time frame was very limited, and most did not meet many of our requirements. Of the available spaces, the option that offered the best combination of size, layout, and location was looking for a tenant starting November 1st 2018, while we weren’t able to move until early January 2019. Additionally, the space was configured with a very open layout that wouldn’t have met our needs, but that many other prospective tenants found desirable, such that we’d have to cover renovation costs.\n\nThe post [2018 in review](https://intelligence.org/2019/05/31/2018-in-review/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-06-01T05:00:59Z", "authors": ["Malo Bourgon"], "summaries": []}
has announced twenty-three new grant recommendations, and provided in-depth explanations of the grants. These include [a $50,000 grant to MIRI](https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions), and grants to CFAR and Ought. LTFF is also recommending grants to several individuals with AI alignment research proposals whose work MIRI staff will be helping assess.\n* We attended the [Global Governance of AI Roundtable](https://ggar.worldgovernmentsummit.org/) at the World Government Summit in Dubai.\n\n\n#### News and links\n\n\n* Rohin Shah reflects on [the first year of the Alignment Newsletter](https://www.lesswrong.com/posts/3onCb5ph3ywLQZMX2/alignment-newsletter-one-year-retrospective).\n* Some good recent AI alignment discussion: Alex Turner asks for [the best reasons for pessimism about impact measures;](https://www.lesswrong.com/posts/kCY9dYGLoThC3aG7w/best-reasons-for-pessimism-about-impact-of-impact-measures) Henrik Åslund and Ryan Carey discuss [corrigibility as constrained optimization](https://www.alignmentforum.org/posts/cGLgs3t9md7v7cCm4/corrigibility-as-constrained-optimisation); Wei Dai asks about [low-cost AGI coordination](https://www.alignmentforum.org/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low); and Chris Leong asks, \"[Would solving counterfactuals solve anthropics?](https://www.lesswrong.com/posts/nTDQ9eNrSNoYrZzJw/would-solving-logical-counterfactuals-solve-anthropics)\"\n* From DeepMind: [Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification](https://medium.com/@deepmindsafetyresearch/towards-robust-and-verified-ai-specification-testing-robust-training-and-formal-verification-69bd1bc48bda).\n* Ilya Sutskever and Greg Brockman discuss OpenAI's new status as a \"[hybrid of a for-profit and nonprofit](https://www.vox.com/future-perfect/2019/4/17/18301070/openai-greg-brockman-ilya-sutskever)\".\n* 
[Misconceptions about China and AI](http://rationallyspeakingpodcast.org/show/rs-231-helen-toner-on-misconceptions-about-china-and-artific.html): Julia Galef interviews Helen Toner. ([Excerpts.](https://www.lesswrong.com/posts/cHwCBTwWiTdsqXNyn/helen-toner-on-china-cset-and-ai))\n\n\n\nThe post [May 2019 Newsletter](https://intelligence.org/2019/05/10/may-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-05-10T17:30:47Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "b71847eb5c903a4b457b5ee503f122e2", "title": "New paper: “Delegative reinforcement learning”", "url": "https://intelligence.org/2019/04/24/delegative-reinforcement-learning/", "source": "miri", "source_type": "blog", "text": "[![Delegative Reinforcement Learning](https://intelligence.org/wp-content/uploads/2019/04/SafeML2019paper.png)](https://drive.google.com/uc?export=download&id=1xa7UpGGODl6mszNWkA4XQGPyeopsNuWu)MIRI Research Associate Vanessa Kosoy has written a new paper, “[Delegative reinforcement learning: Learning to avoid traps with a little help](https://drive.google.com/uc?export=download&id=1xa7UpGGODl6mszNWkA4XQGPyeopsNuWu).” Kosoy will be presenting the paper at the ICLR 2019 [SafeML workshop](https://sites.google.com/view/safeml-iclr2019) in two weeks. The abstract reads:\n\n\n\n> Most known regret bounds for reinforcement learning are either episodic or assume an environment without traps. We derive a regret bound without making either assumption, by allowing the algorithm to occasionally delegate an action to an external advisor. We thus arrive at a setting of active one-shot model-based reinforcement learning that we call DRL (delegative reinforcement learning.)\n> \n> \n> The algorithm we construct in order to demonstrate the regret bound is a variant of Posterior Sampling Reinforcement Learning supplemented by a subroutine that decides which actions should be delegated. 
The algorithm is not anytime, since the parameters must be adjusted according to the target time discount. Currently, our analysis is limited to Markov decision processes with finite numbers of hypotheses, states and actions.\n> \n> \n\n\nThe goal of Kosoy’s work on DRL is to put us on a path toward having a deep understanding of learning systems with human-in-the-loop and formal performance guarantees, including safety guarantees. DRL tries to move us in this direction by providing models in which such performance guarantees can be derived.\n\n\nWhile these models still make many unrealistic simplifying assumptions, Kosoy views DRL as already capturing some of the most essential features of the problem—and she has a fairly ambitious vision of how this framework might be further developed.\n\n\nKosoy previously described DRL in the post [Delegative Reinforcement Learning with a Merely Sane Advisor](https://www.alignmentforum.org/posts/5bd75cc58225bf06703754d5/delegative-reinforcement-learning-with-a-merely-sane-advisor). One feature of DRL Kosoy described there but omitted from the paper (for space reasons) is DRL’s application to *corruption*. Given certain assumptions, DRL ensures that a formal agent will never have its reward or advice channel tampered with (corrupted). As a special case, the agent’s own advisor cannot cause the agent to enter a corrupt state. Similarly, the general protection from traps described in “Delegative reinforcement learning” also protects the agent from harmful self-modifications.\n\n\nAnother set of DRL results that didn’t make it into the paper is [Catastrophe Mitigation Using DRL](https://www.alignmentforum.org/posts/5bd75cc58225bf0670375510/catastrophe-mitigation-using-drl). In this variant, a DRL agent can mitigate catastrophes that the advisor would not be able to mitigate on its own—something that isn’t supported by the more strict assumptions about the advisor in standard DRL. 
\n\n\nThe post [New paper: “Delegative reinforcement learning”](https://intelligence.org/2019/04/24/delegative-reinforcement-learning/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-04-24T22:52:40Z", "authors": ["Rob Bensinger"], "summaries": ["Consider environments that have “traps”: states that permanently curtail the long-term value that an agent can achieve. A world without humans could be one such trap. Traps could also happen after any irreversible action, if the new state is not as useful for achieving high rewards as the old state.\n\nIn such an environment, an RL algorithm could simply take no actions, in which case it incurs regret that is linear in the number of timesteps so far. (Regret is the difference between the expected reward under the optimal policy and the policy actually executed, so if the average reward per timestep of the optimal policy is 2 and doing nothing is always reward 0, then the regret will be ~2T where T is the number of timesteps, so regret is linear in the number of timesteps.) Can we find an RL algorithm that will _guarantee_ regret sublinear in the number of timesteps, regardless of the environment?\n\nUnsurprisingly, this is impossible, since during exploration the RL agent could fall into a trap, which leads to linear regret. However, let's suppose that we could delegate to an advisor who knows the environment: what must be true about the advisor for us to do better? Clearly, the advisor must be able to always avoid traps (otherwise the same problem occurs). However, this is not enough: getting sublinear regret also requires us to explore enough to eventually find the optimal policy. 
So, the advisor must have at least some small probability of being optimal, which the agent can then learn from. This paper proves that with these assumptions there does exist an algorithm that is guaranteed to get sublinear regret."], "venue": "SafeML 2019", "opinion": "It's interesting to see what kinds of assumptions are necessary in order to get AI systems that can avoid catastrophically bad outcomes, and the notion of \"traps\" seems like a good way to formalize this. I worry about there being a Cartesian boundary between the agent and the environment, though perhaps even here as long as the advisor is aware of problems caused by such a boundary, they can be modeled as traps and thus avoided.\n\nOf course, if we want the advisor to be a human, both of the assumptions are unrealistic, but I believe Vanessa's plan is to make the assumptions more realistic in order to see what assumptions are actually necessary.\n\nOne thing I wonder about is whether the focus on traps is necessary. With the presence of traps in the theoretical model, one of the main challenges is in preventing the agent from falling into a trap due to ignorance. 
However, it seems extremely unlikely that an AI system manages to take some irreversible catastrophic action _by accident_ -- I'm much more worried about the case where the AI system is adversarially optimizing against us and intentionally takes an irreversible catastrophic action.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #57", "newsletter_category": "Learning human intent"} -{"id": "8c6f449c170b3f567e94452d66f88ce9", "title": "April 2019 Newsletter", "url": "https://intelligence.org/2019/04/21/april-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* New research posts: [Simplified Preferences Needed, Simplified Preferences Sufficient](https://www.alignmentforum.org/posts/sEqu6jMgnHG2fvaoQ/partial-preferences-needed-partial-preferences-sufficient); [Smoothmin and Personal Identity](https://www.alignmentforum.org/posts/MxLK2fvEuijAYgsc2/smoothmin-and-personal-identity); [Example Population Ethics: Ordered Discounted Utility](https://www.alignmentforum.org/posts/Ee29dFnPhaeRmYdMy/example-population-ethics-ordered-discounted-utility); [A Theory of Human Values](https://www.alignmentforum.org/posts/qezBTig6p6p5xtL6G/a-theory-of-human-values); [A Concrete Proposal for Adversarial IDA](https://www.alignmentforum.org/posts/jYvm4mmjvGHcPXtGL/a-concrete-proposal-for-adversarial-ida)\n* MIRI has received a set of [new grants](https://intelligence.org/2019/04/01/new-grants-open-phil-beri/) from the Open Philanthropy Project and the Berkeley Existential Risk Initiative.\n\n\n#### News and links\n\n\n* From the DeepMind safety team and Alex Turner: [Designing Agent Incentives to Avoid Side Effects](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107).\n* From Wei Dai: [Three Ways That \"Sufficiently Optimized Agents Appear Coherent\" Can Be 
False](https://www.alignmentforum.org/posts/4K52SS7fm9mp5rMdX/three-ways-that-sufficiently-optimized-agents-appear); [What's Wrong With These Analogies for Understanding Informed Oversight and IDA?](https://www.alignmentforum.org/posts/LigbvLH9yKR5Zhd6y/what-s-wrong-with-these-analogies-for-understanding-informed); and [The Main Sources of AI Risk?](https://www.lesswrong.com/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk)\n* Other recent write-ups: Issa Rice's [Comparison of Decision Theories](https://www.alignmentforum.org/posts/QPhY8Nb7gtT5wvoPH/comparison-of-decision-theories-with-a-focus-on-logical); Paul Christiano's [More Realistic Tales of Doom](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom); and Linda Linsefors' [The Game Theory of Blackmail](https://www.alignmentforum.org/posts/wm2rdS3sDY9M5kpWb/the-game-theory-of-blackmail).\n* OpenAI's Geoffrey Irving describes AI safety via debate [on FLI's AI Alignment Podcast](https://futureoflife.org/2019/03/06/ai-alignment-through-debate-with-geoffrey-irving).\n* A webcomic's take on AI x-risk concepts: [*Seed*](https://www.webtoons.com/en/sf/seed/prologue/viewer?title_no=1480&episode_no=1).\n\n\n\nThe post [April 2019 Newsletter](https://intelligence.org/2019/04/21/april-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-04-22T03:31:49Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "8ad293dba320911faf9eb1f3cfd7ddb7", "title": "New grants from the Open Philanthropy Project and BERI", "url": "https://intelligence.org/2019/04/01/new-grants-open-phil-beri/", "source": "miri", "source_type": "blog", "text": "I’m happy to announce that MIRI has received two major new grants:\n\n\n* A two-year grant totaling $2,112,500 from the [Open Philanthropy Project](https://www.openphilanthropy.org/about/who-we-are).\n* A $600,000 grant from the [Berkeley Existential Risk 
Initiative](http://existence.org/organization-grants/).\n\n\nThe Open Philanthropy Project’s [grant](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019) was awarded as part of the first round of grants recommended by their new [committee for effective altruism support](https://www.openphilanthropy.org/committee-effective-altruism-support):\n\n\n\n> We are experimenting with a new approach to setting grant sizes for a number of our largest grantees in the [effective altruism](https://www.openphilanthropy.org/focus/other-areas#EffectiveAltruism) community, including those who work on long-termist causes. Rather than have a single Program Officer make a recommendation, we have created a small committee, comprised of Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. […] We average the committee members’ votes to arrive at final numbers for our grants.\n> \n> \n\n\nThe Open Philanthropy Project’s grant is separate from the three-year [$3.75 million grant](https://intelligence.org/2017/11/08/major-grant-open-phil/) they awarded us in 2017, the third $1.25 million disbursement of which is still scheduled for later this year. 
This new grant increases the Open Philanthropy Project’s total support for MIRI from $1.4 million[1](https://intelligence.org/2019/04/01/new-grants-open-phil-beri/#footnote_0_19133 \"The $1.4 million counts the Open Philanthropy Project’s $1.25 million disbursement in 2018, as well as a $150,000 AI Safety Retraining Program grant to MIRI.\") in 2018 to ~$2.31 million in 2019, but doesn’t reflect any decision about how much total funding MIRI might receive from Open Phil in 2020 (beyond the fact that it will be at least ~$1.06 million).\n\n\nGoing forward, the Open Philanthropy Project currently plans to determine the size of any potential future grants to MIRI using the above committee structure.\n\n\nWe’re very grateful for this increase in support from BERI and the Open Philanthropy Project—both organizations that already numbered among our largest funders of the past few years. We expect these grants to play an important role in our decision-making as we continue to grow our research team in the ways described in our 2018 [strategy update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) and [fundraiser](https://intelligence.org/2018/11/26/miris-2018-fundraiser/) posts.\n\n\n\n\n---\n\n1. 
The $1.4 million counts the Open Philanthropy Project’s $1.25 million disbursement in 2018, as well as a $150,000 [AI Safety Retraining Program grant](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program) to MIRI.\n\nThe post [New grants from the Open Philanthropy Project and BERI](https://intelligence.org/2019/04/01/new-grants-open-phil-beri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-04-01T21:49:53Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "f7b000b1f28a8bb60803f9e8003afd54", "title": "March 2019 Newsletter", "url": "https://intelligence.org/2019/03/14/march-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "Want to be in the reference class “people who solve the AI alignment problem”?\n We now have [**a guide on how to get started**](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide), based on our experience of what tends to make research groups successful. 
(Also on the [AI Alignment Forum](https://www.alignmentforum.org/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide).)\n\n\n#### Other updates\n\n\n* Demski and Garrabrant’s introduction to MIRI’s agent foundations research, “[Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN),” is now available (in lightly edited form) [as an arXiv paper](https://arxiv.org/abs/1902.09469).\n* New research posts: [How Does Gradient Descent Interact with Goodhart?](https://www.lesswrong.com/posts/pcomQ4Fwi7FnfBZBR/how-does-gradient-descent-interact-with-goodhart); [“Normative Assumptions” Need Not Be Complex](https://www.alignmentforum.org/posts/QzsCrzGd4zkNwk9cd/normative-assumptions-need-not-be-complex); [How the MtG Color Wheel Explains AI Safety](https://www.lesswrong.com/posts/9CKBtxWtjvminNTmC/how-the-mtg-color-wheel-explains-ai-safety); [Pavlov Generalizes](https://www.alignmentforum.org/posts/XTgkhjNTEi97WHMi6/pavlov-generalizes)\n* [Several MIRIx groups](https://intelligence.org/2019/03/09/a-new-field-guide-for-mirix/) are expanding and are looking for new members to join.\n* Our [summer fellows program](https://intelligence.org/2019/03/10/applications-are-open-for-msfp/) is accepting applications through March 31.\n* LessWrong’s web edition of *Rationality: From AI to Zombies* at [lesswrong.com/rationality](https://www.lesswrong.com/rationality) is now fully updated to reflect the print edition of *Map and Territory* and *How to Actually Change Your Mind*, the first two books. ([Announcement here.](https://www.lesswrong.com/posts/c7EjFK8yTBwdHewrg/new-versions-of-posts-in-map-and-territory-and-how-to))\n\n\n#### News and links\n\n\n* OpenAI’s [GPT-2 model](https://openai.com/blog/better-language-models/) shows meaningful progress on a wide variety of language tasks. 
OpenAI adds: \n\n\n> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. […] We believe our release strategy limits the initial set of organizations who may choose to [open source our results], and gives the AI community more time to have a discussion about the implications of such systems.\n> \n>\n* *The Verge* [discusses](https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai) OpenAI’s language model concerns along with [MIRI’s disclosure policies](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions) for our own research. See other discussion by [Jeremy Howard](https://www.fast.ai/2019/02/15/openai-gp2/), [John Seymour](https://twitter.com/_delta_zero/status/1097314384731226113), and [Ryan Lowe](https://towardsdatascience.com/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8).\n* AI Impacts summarizes [evidence on good forecasting practices from the Good Judgment Project](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/).\n* Recent AI alignment ideas and discussion: Carey on [quantilization](https://www.alignmentforum.org/posts/Rs6vZCrnQFWQ4p37P/when-to-use-quantilization); Filan on [impact regularization methods](https://www.alignmentforum.org/posts/wzPzPmAsG3BwrBrwy/test-cases-for-impact-regularisation-methods); Saunders’ [HCH Is Not Just Mechanical Turk](https://www.alignmentforum.org/posts/4JuKoFguzuMrNn6Qr/hch-is-not-just-mechanical-turk) and [RL in the Iterated Amplification Framework](https://www.alignmentforum.org/posts/fq7Ehb2oWwXtZic8S/reinforcement-learning-in-the-iterat); Dai on philosophical difficulty ([1](https://www.alignmentforum.org/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty), 
[2](https://www.alignmentforum.org/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy)); Hubinger on [ascription universality](https://www.alignmentforum.org/posts/R5Euq7gZgobJi5S25/nuances-with-ascription-universality); and Everitt’s [Understanding Agent Incentives with Causal Influence Diagrams](https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486).\n* The Open Philanthropy Project announces their largest grant to date: $55 million to launch the [Center for Security and Emerging Technology](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology), a Washington, D.C. think tank with an early focus on “the intersection of security and artificial intelligence”. See also CSET’s many [job postings](https://cset.georgetown.edu/careers/).\n\n\n\nThe post [March 2019 Newsletter](https://intelligence.org/2019/03/14/march-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-03-15T03:20:18Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "344d5ad6bbe300f016a717bb50f21ebc", "title": "Applications are open for the MIRI Summer Fellows Program!", "url": "https://intelligence.org/2019/03/10/applications-are-open-for-msfp/", "source": "miri", "source_type": "blog", "text": "CFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019. \nMSFP is an extended retreat for mathematicians and programmers with a serious interest in making technical progress on the problem of AI alignment.
It includes an overview of CFAR’s applied rationality content, a breadth-first grounding in the MIRI perspective on AI safety, and multiple days of actual hands-on research with participants and MIRI staff attempting to make inroads on open questions.\n\n\n\n#### Program Description\n\n\nThe intent of the program is to boost participants, as far as possible, in four overlapping areas:\n\n\n\n> **Doing rationality inside a human brain**: understanding, with as much fidelity as possible, what phenomena and processes drive and influence human thinking and reasoning, so that we can account for our own biases and blindspots, better recruit and use the various functions of our brains, and, in general, be less likely to trick ourselves, gloss over our confusions, or fail to act in alignment with our endorsed values.\n> \n> \n> **Epistemic rationality**, especially the subset of skills around [deconfusion](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2). Building the skill of noticing where the dots don’t actually connect; answering the question “why do we think we know what we think we know?”, particularly when it comes to predictions and assertions around the future development of artificial intelligence.\n> \n> \n> **Grounding in the current research landscape surrounding AI**: being aware of the primary disagreements among leaders in the field, and the arguments for various perspectives and claims. Understanding the current open questions, and why different ones seem more pressing or real under different assumptions. Being able to follow the reasoning behind various alignment schemes/theories/proposed interventions, and being able to evaluate those interventions with careful reasoning and mature (or at least more-mature-than-before) intuitions.\n> \n> \n> **Generative research skill**: the ability to make real and relevant progress on questions related to the field of AI alignment without losing track of one’s own metacognition. 
The parallel processes of using one’s mental tools, critiquing and improving one’s mental tools, and making one’s own progress or deconfusion available to others through talks, papers, and models. Anything and everything involved in being the sort of thinker who can locate a good question, sniff out promising threads, and collaborate effectively with others and with the broader research ecosystem.\n> \n> \n\n\nFood and lodging are provided free of charge at CFAR’s workshop venue in Bodega Bay, California. Participants must be able to remain onsite, largely undistracted for the duration of the program (e.g. no major appointments in other cities, no large looming academic or professional deadlines just after the program).\n\n\nIf you have any questions or comments, please send an email to **[colm@intelligence.org](mailto:colm@intelligence.org)**, or, if you suspect others would also benefit from hearing the answer, post them here.\n\n\n**Update April 23, 2019:** Applications closed on March 31, 2019 and finalists are being contacted by a MIRI staff member for 1–2 Skype interviews.
Admissions decisions — yes, no, waitlist — will go out no later than April 30th.\n\n\n\nThe post [Applications are open for the MIRI Summer Fellows Program!](https://intelligence.org/2019/03/10/applications-are-open-for-msfp/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-03-11T04:58:57Z", "authors": ["Colm Ó Riain"], "summaries": []} -{"id": "d68e1eee8d358827cb4b7f5206ce4abd", "title": "A new field guide for MIRIx", "url": "https://intelligence.org/2019/03/09/a-new-field-guide-for-mirix/", "source": "miri", "source_type": "blog", "text": "We’ve just released a **[field guide](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide)** for MIRIx groups, and for other people who want to get involved in [AI alignment](https://intelligence.org/2017/04/12/ensuring/) research.\n\n\nMIRIx is a program where MIRI helps cover basic expenses for outside groups that want to work on open problems in AI safety. You can start your own group or find information on existing meet-ups at **[intelligence.org/mirix](https://intelligence.org/mirix/)**.\n\n\nSeveral MIRIx groups have recently been ramping up their activity, including:\n\n\n* **UC Irvine**: Daniel Hermann is starting a MIRIx group in Irvine, California. [Contact him](mailto:daherrma@uci.edu) if you’d like to be involved.\n* **Seattle**: MIRIxSeattle is a small group that’s in the process of restarting and increasing its activities. 
Contact [Pasha Kamyshev](mailto:dapash@gmail.com) if you’re interested.\n* **Vancouver**: [Andrew McKnight](mailto:thedonk@gmail.com) and [Evan Gaensbauer](https://intelligence.org/feed/egbauer92@gmail.com) are looking for more people who’d like to join MIRIxVancouver events.\n\n\nThe new alignment field guide is intended to provide tips and background models to MIRIx groups, based on our experience of what tends to make a research group succeed or fail.\n\n\nThe guide begins:\n\n\n\n\n---\n\n\n### Preamble I: Decision Theory\n\n\nHello! You may notice that you are reading a document.\n\n\nThis fact comes with certain implications. For instance, why are you reading this? Will you finish it? What decisions will you come to as a result? What will you do next?\n\n\nNotice that, whatever you end up doing, it’s likely that there are dozens or even hundreds of other people, quite similar to you and in quite similar positions, who will follow reasoning which strongly resembles yours, and make choices which correspondingly match.\n\n\nGiven that, it’s our recommendation that you make your next few decisions by asking the question “What policy, if followed by all agents similar to me, would result in the most good, and what does that policy suggest in my particular case?” It’s less of a question of trying to decide for all agents sufficiently-similar-to-you (which might cause you to make the wrong choice out of guilt or pressure) and more something like “if I *were* in charge of all agents in my reference class, how would I treat instances of that class with *my specific characteristics*?”\n\n\nIf that kind of thinking leads you to read further, great. If it leads you to set up a MIRIx chapter, even better. 
In the meantime, we will proceed as if the only people reading this document are those who justifiably expect to find it reasonably useful.\n\n\n### Preamble II: Surface Area\n\n\nImagine that you have been tasked with moving a cube of solid iron that is one meter on a side. Given that such a cube weighs ~16000 pounds, and that an average human can lift ~100 pounds, a naïve estimation tells you that you can solve this problem with ~150 willing friends.\n\n\nBut of course, a meter cube can fit at most something like 10 people around it. It doesn’t *matter* if you have the theoretical power to move the cube if you can’t bring that power to bear in an effective manner. The problem is constrained by its *surface area*.\n\n\nMIRIx chapters are one of the best ways to increase the surface area of people thinking about and working on the technical problem of AI alignment. And just as it would be a bad idea to decree “the 10 people who happen to currently be closest to the metal cube are the only ones allowed to think about how to think about this problem”, we don’t want MIRI to become the bottleneck or authority on what kinds of thinking can and should be done in the realm of [embedded agency](https://intelligence.org/2018/10/29/embedded-agents/) and other relevant fields of research.\n\n\nThe hope is that you and others like you will help actually solve the problem, not just follow directions or read what’s already been written. 
This document is designed to support people who are interested in doing real groundbreaking research themselves.\n\n\n[(Read more)](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide)\n\n\n \n\n\n\nThe post [A new field guide for MIRIx](https://intelligence.org/2019/03/09/a-new-field-guide-for-mirix/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-03-10T03:38:01Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "5cea5d1679fa5c41a1edfdd4d1184df8", "title": "February 2019 Newsletter", "url": "https://intelligence.org/2019/02/25/february-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* Ramana Kumar and Scott Garrabrant argue that the AGI safety community should begin prioritizing “[approaches that work well in the absence of human models](https://intelligence.org/2019/02/22/thoughts-on-human-models/)”:  \n\n\n> [T]o the extent that human modelling is a good idea, it is important to do it very well; to the extent that it is a bad idea, it is best to not do it at all. Thus, whether or not to do human modelling at all is a configuration bit that should probably be set early when conceiving of an approach to building safe AGI.\n> \n>\n* New research forum posts: [Conditional Oracle EDT Equilibria in Games](https://www.alignmentforum.org/posts/4MLpRxz7ZoX8YXSY3/coedt-equilibria-in-games); [Non-Consequentialist Cooperation?](https://www.alignmentforum.org/posts/F9vcbEMKW48j4Z6h9/non-consequentialist-cooperation); [When is CDT Dutch-Bookable?](https://www.lesswrong.com/posts/TJT2oBMGaZTE7f2z2/when-is-cdt-dutch-bookable); [CDT=EDT=UDT](https://www.alignmentforum.org/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt)\n* The [MIRI Summer Fellows Program](https://www.lesswrong.com/events/xFGQdgJndLcthgWoE/miri-summer-fellows-program) is accepting applications through the end of March! 
MSFP is a free two-week August retreat co-run by MIRI and CFAR, intended to bring people up to speed on problems related to [embedded agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN) and AI alignment, train research-relevant skills and habits, and investigate open problems in the field.\n* MIRI’s Head of Growth, Colm Ó Riain, [reviews](https://intelligence.org/2019/02/11/our-2018-fundraiser-review/) how our 2018 fundraiser went.\n* [From Eliezer Yudkowsky](https://twitter.com/ESYudkowsky/status/1085225949900099585): “Along with adversarial resistance and transparency, what I’d term ‘[conservatism](https://intelligence.org/2017/02/28/using-machine-learning/)’, or trying to keep everything as interpolation rather than extrapolation, is one of the few areas modern ML can explore that I see as having potential to carry over directly to serious AGI safety.”\n\n\n#### News and links\n\n\n* Eric Drexler has released his book-length AI safety proposal: [Reframing Superintelligence: Comprehensive AI Services as General Intelligence](https://www.fhi.ox.ac.uk/reframing/). 
See discussion by [Peter McCluskey](https://www.lesswrong.com/posts/bXYtDfMTNbjCXFQPh/drexler-on-ai-risk), [Richard Ngo](https://www.lesswrong.com/posts/HvNAmkXPTSoA4dvzv/comments-on-cais), and [Rohin Shah](https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as).\n* Other recent AI alignment posts include Andreas Stuhlmüller’s [Factored Cognition](https://www.alignmentforum.org/posts/DFkGStzvj3jgXibFG/factored-cognition) and Alex Turner’s [Penalizing Impact via Attainable Utility Preservation](https://www.alignmentforum.org/posts/mDTded2Dn7BKRBEPX/penalizing-impact-via-attainable-utility-preservation), and a host of new write-ups by [Stuart Armstrong](https://www.alignmentforum.org/users/stuart_armstrong).\n\n\n\nThe post [February 2019 Newsletter](https://intelligence.org/2019/02/25/february-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-02-25T23:29:48Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "c49322ccf30934318abf0882f4255ea0", "title": "Thoughts on Human Models", "url": "https://intelligence.org/2019/02/22/thoughts-on-human-models/", "source": "miri", "source_type": "blog", "text": "*This is a joint post by MIRI Research Associate and DeepMind Research Scientist Ramana Kumar and MIRI Research Fellow Scott Garrabrant, cross-posted from the [AI Alignment Forum](https://www.alignmentforum.org/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models) and [LessWrong](https://www.lesswrong.com/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models).*\n\n\n\n\n---\n\n\nHuman values and preferences are hard to specify, especially in complex domains. Accordingly, much AGI safety research has focused on approaches to AGI design that refer to human values and preferences *indirectly*, by learning a model that is grounded in expressions of human values (via stated preferences, observed behaviour, approval, etc.) 
and/or real-world processes that generate expressions of those values. There are additionally approaches aimed at modelling or imitating other aspects of human cognition or behaviour without an explicit aim of capturing human preferences (but usually in service of ultimately satisfying them). Let us refer to all these models as *human models*.\n\nIn this post, we discuss several reasons to be cautious about AGI designs that use human models. We suggest that the AGI safety research community put more effort into developing approaches that work well in the absence of human models, alongside the approaches that rely on human models. This would be a significant addition to the current safety research landscape, especially if we focus on working out and trying concrete approaches as opposed to developing theory. We also acknowledge various reasons why avoiding human models seems difficult.\n\n \n\n\n### Problems with Human Models\n\n\nTo be clear about human models, we draw a rough distinction between our actual preferences (which may not be fully accessible to us) and procedures for evaluating our preferences. The first thing, actual preferences, is what humans actually want upon reflection. Satisfying our actual preferences is a win. The second thing, procedures for evaluating preferences, refers to various proxies for our actual preferences such as our approval, or what looks good to us (with necessarily limited information or time for thinking). Human models are in the second category; consider, as an example, a highly accurate ML model of human yes/no approval on the set of descriptions of outcomes. Our first concern, described below, is about overfitting to human approval and thereby breaking its connection to our actual preferences. (This is a case of Goodhart’s law.)\n\n\n\n#### Less Independent Audits\n\n\nImagine we have built an AGI system and we want to use it to design the mass transit system for a new city.
The safety problems associated with such a project are well recognised; suppose we are not completely sure we have solved them, but are confident enough to try anyway. We run the system in a sandbox on some fake city input data and examine its outputs. Then we run it on some more outlandish fake city data to assess robustness to distributional shift. The AGI’s outputs look like reasonable transit system designs and considerations, and include arguments, metrics, and other supporting evidence that they are good. Should we be satisfied and ready to run the system on the real city’s data, and to implement the resulting proposed design?\n\n\nWe suggest that an important factor in the answer to this question is whether the AGI system was built using human modelling or not. If it produced a solution to the transit design problem (that humans approve of) without human modelling, then we would more readily trust its outputs. If it produced a solution we approve of *with human modelling*, then although we expect the outputs to be in many ways about good transit system design (our actual preferences) and in many ways suited to being approved by humans, to the extent that these two targets come apart we must worry about having overfit to the human model at the expense of the good design. (Why not the other way around? Because our assessment of the sandboxed results uses human judgement, not an independent metric for satisfaction of our actual preferences.)\n\n\nHumans have a preference for not being wrong about the quality of a design, let alone being fooled about it. How much do we want to rely on having correctly captured these preferences in our system? If the system is modelling humans, we strongly rely on the system learning and satisfying these preferences, or else we expect to be fooled to the extent that a good-looking but actually bad transit system design is easier to compose than an actually-good design. 
On the other hand, if the system is not modelling humans, then the fact that its output looks like a good design is better evidence that it is in fact a good design. Intuitively, if we consider sampling possible outputs and condition on the output looking good (via knowledge of humans), the probability of it being good (via knowledge of the domain) is higher when the system’s knowledge is more about what is good than what looks good.\n\n\nHere is a handle for this problem: a desire for an *independent audit* of the system’s outputs. When a system uses human modelling, the [mutual](http://www.scholarpedia.org/article/Mutual_information) [information](http://colah.github.io/posts/2015-09-Visual-Information/) between its outputs and the auditing process (human judgement) is higher. Thus, using human models reduces our ability to do independent audits.\n\n\nAvoiding human models does not avoid this problem altogether. There is still an “outer-loop optimisation” version of the problem. If the system produces a weird or flawed design in sandbox, and we identify this during an audit, we will probably reject the solution and attempt to debug the system that produced it. This introduces a bias on the overall process (involving multiple versions of the system over phases of auditing and debugging) towards outputs that fool our auditing procedure.\n\n\nHowever, outer-loop optimisation pressures are weaker, and therefore less worrying, than in-loop optimisation pressures. We would argue that the problem is much worse, i.e., the bias towards fooling is stronger, when one uses human modelling. This is because the relevant optimisation is in-loop instead and is encountered more often.\n\n\nAs one more analogy to illustrate this point, consider a classic Goodhart’s law example of teaching to the test. If you study the material, then take a test, your test score reveals your knowledge of the material fairly well. 
If you instead study past tests, your test score reveals your ability to pass tests, which may be correlated with your knowledge of the material but is increasingly less likely to be so correlated as your score goes up. Here human modelling is analogous to past tests and actual preferences are analogous to the material. Taking the test is analogous to an audit, which we want to be independent from the study regimen.\n\n\n#### Risk from Bugs\n\n\nWe might implement our first AGI system incorrectly in a mundane sense. Specifically, even if we fully develop a theory of safe or aligned AGI, we might fail to implement that theory due to bugs or problems with our implementation techniques. In this case, we would be relatively better off if the mutual information between the AGI’s knowledge and human preferences is low. We expect the system’s behaviour to be dependent on its knowledge in some way, and we expect implementation errors to shift the nature of that dependence away from our intentions and expectations. Incorrect behaviour that depends on human preferences seems more dangerous than incorrect behaviour that does not.\n\n\nConsider the space of AGI system implementations, under a metric like similarity to an intended design (equivalently: severity of deviation from the design due to bugs). We want all the points near the first AGI system we build to be safe, because we may end up with a slightly different design than intended for reasons such as being confused about what we are doing or making implementation errors.\n\n\nThere are at least three ways in which the risk from bugs can manifest.\n\n\nIncorrectly Encoded Values: Supposing we intend the first use of AGI to be solving some bounded and well-specified task, but we misunderstand or badly implement it so much that what we end up with is actually unboundedly optimising some objective function. 
Then it seems better if that objective is something abstract like puzzle solving rather than something more directly connected to human preferences: consider, as a toy example, if the sign (positive/negative) around the objective were wrong.\n\n\nManipulation: The earlier arguments for independent audits do not just apply to the specific tasks we would plan to audit, but also to any activities an AGI system might carry out that humans might disapprove of. Examples include finding ways to hack into our supposedly secure systems, hiding its intentions and activity from us, or outright manipulating us. These tasks are much easier with access to a good psychological model of humans, which can be used to infer what mistakes we might make, or what loopholes we might overlook, or how we might respond to different behaviour from the system.\n\n\nHuman modelling is very close to human manipulation in design space. A system with accurate models of humans is close to a system which successfully uses those models to manipulate humans.\n\n\nThreats: Another risk from bugs comes not from the AGI system caring incorrectly about our values, but from having inadequate security. If our values are accurately encoded in an AGI system that cares about satisfying them, they become a target for threats from other actors who can gain from manipulating the first system. 
More examples and perspectives on this problem have been described [here](https://arbital.com/p/hyperexistential_separation/).\n\n\nThe increased risk from bugs of human modelling can be summarised as follows: whatever the risk that AGI systems produce catastrophic outcomes due to bugs, the very worst outcomes seem more likely if the system was trained using human modelling because these worst outcomes depend on the information in human models.\n\n\nLess independent audits and the risk from bugs can both be mitigated by preserving independence of the system from human model information, so the system cannot overfit to that information or use it perversely. The remaining two problems we consider, mind crime and unexpected agents, depend more heavily on the claim that modelling human preferences increases the chances of simulating something human-like.\n\n\n#### Mind Crime\n\n\nMany computations may produce entities that are morally relevant because, for example, they constitute sentient beings that experience pain or pleasure. Bostrom calls improper treatment of such entities “mind crime”. Modelling humans in some form seems more likely to result in such a computation than not modelling them, since humans are morally relevant and the system’s models of humans may end up sharing whatever properties make humans morally relevant.\n\n\n#### Unexpected Agents\n\n\nSimilar to the mind crime point above, we expect AGI designs that use human modelling to be more at risk of producing subsystems that are agent-like, because humans are agent-like. For example, we note that trying to predict the output of consequentialist reasoners can reduce to an optimisation problem over a space of things that contains consequentialist reasoners. A system engineered to predict human preferences well seems strictly more likely to run into problems associated with misaligned sub-agents. 
(Nevertheless, we think the amount by which it is more likely is small.)\n\n\n \n\n\n### Safe AGI Without Human Models is Neglected\n\n\nGiven the independent auditing concern, plus the additional points mentioned above, we would like to see more work done on practical approaches to developing safe AGI systems that do not depend on human modelling. At present, this is a neglected area in the AGI safety research landscape. Specifically, work of the form “Here’s a proposed approach, here are the next steps to try it out or investigate further”, which we might term *engineering-focused research*, is almost entirely done in a human-modelling context. Where we do see some safety work that eschews human modelling, it tends to be *theory-focused research*, for example, MIRI’s work on agent foundations. This does not fill the gap of engineering-focused work on safety without human models.\n\n\nTo flesh out the claim of a gap, consider the usual formulations of each of the following efforts within safety research: iterated distillation and amplification, debate, recursive reward modelling, cooperative inverse reinforcement learning, and value learning. In each case, there is human modelling built into the basic setup for the approach. However, we note that the technical results in these areas may in some cases be transportable to a setup without human modelling, if the source of human feedback (etc.) 
is replaced with a purely algorithmic, independent system.\n\n\nSome existing work that does not rely on human modelling includes the formulation of [safely interruptible agents](https://deepmind.com/research/publications/safely-interruptible-agents/), the formulation of [impact measures](https://arxiv.org/abs/1806.01186) (or [side effects](https://www.alignmentforum.org/posts/mDTded2Dn7BKRBEPX/penalizing-impact-via-attainable-utility-preservation)), approaches involving building AI systems with clear formal specifications (e.g., some versions of tool AIs), some versions of oracle AIs, and boxing/containment. Although they do not rely on human modelling, some of these approaches nevertheless make most sense in a context where human modelling is happening: for example, impact measures seem to make most sense for agents that will be operating directly in the real world, and such agents are likely to require human modelling. Nevertheless, we would like to see more work of all these kinds, as well as new techniques for building safe AGI that does not rely on human modelling.\n\n\n \n\n\n### Difficulties in Avoiding Human Models\n\n\nA plausible reason why we do not yet see much research on how to build safe AGI without human modelling is that it is difficult. In this section, we describe some distinct ways in which it is difficult.\n\n\n#### Usefulness\n\n\nIt is not obvious how to put a system that does not do human modelling to good use. At least, it is not as obvious as for the systems that do human modelling, since they draw directly on sources (e.g., human preferences) of information about useful behaviour. In other words, it is unclear how to solve the specification problem—how to correctly specify desired (and only desired) behaviour in complex domains—without human modelling. 
The “against human modelling” stance calls for a solution to the specification problem wherein useful tasks are transformed into well-specified, human-independent tasks either solely by humans or by systems that do not model humans.\n\n\nTo illustrate, suppose we have solved some well-specified, complex but human-independent task like theorem proving or atomically precise manufacturing. Then how do we leverage this solution to produce a good (or better) future? Empowering everyone, or even a few people, with access to a superintelligent system that does not directly encode their values in some way does not obviously produce a future where those values are realised. (This seems related to Wei Dai’s [human-safety](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas) problem.)\n\n\n#### Implicit Human Models\n\n\nEven seemingly “independent” tasks leak at least a little information about their origins in human motivations. Consider again the mass transit system design problem. Since the problem itself concerns the design of a system for use by humans, it seems difficult to avoid modelling humans at all in specifying the task. More subtly, even highly abstract or generic tasks like puzzle solving contain information about the sources/designers of the puzzles, especially if they are tuned for encoding more obviously human-centred problems. (Work by [Shah et al.](https://bair.berkeley.edu/blog/2019/02/11/learning_preferences/) looks at using the information about human preferences that is latent in the world.)\n\n\n#### Specification Competitiveness / Do What I Mean\n\n\nExplicit specification of a task in the form of, say, an optimisation objective (of which a reinforcement learning problem would be a specific case) is known to be fragile: there are usually things we care about that get left out of explicit specifications. 
This is one of the motivations for seeking more and more high level and indirect specifications, leaving more of the work of figuring out what exactly is to be done to the machine. However, it is currently hard to see how to automate the process of turning tasks (vaguely defined) into correct specifications without modelling humans.\n\n\n#### Performance Competitiveness of Human Models\n\n\nIt could be that modelling humans is the best way to achieve good performance on various tasks we want to apply AGI systems to for reasons that are not simply to do with understanding the problem specification well. For example, there may be aspects of human cognition that we want to more or less replicate in an AGI system, for competitiveness at automating those cognitive functions, and those aspects may carry a lot of information about human preferences with them in a hard to separate way.\n\n\n \n\n\n### What to Do Without Human Models?\n\n\nWe have seen arguments for and against aspiring to solve AGI safety using human modelling. Looking back on these arguments, we note that to the extent that human modelling is a good idea, it is important to do it very well; to the extent that it is a bad idea, it is best to not do it at all. Thus, whether or not to do human modelling at all is a configuration bit that should probably be set early when conceiving of an approach to building safe AGI.\n\n\nIt should be noted that the arguments above are not intended to be decisive, and there may be countervailing considerations which mean we should promote the use of human models despite the risks outlined in this post. However, to the extent that AGI systems with human models are more dangerous than those without, there are two broad lines of intervention we might attempt. Firstly, it may be worthwhile to try to decrease the probability that advanced AI develops human models “by default”, by promoting some lines of research over others. 
For example, an AI trained in a procedurally-generated virtual environment seems significantly less likely to develop human models than an AI trained on human-generated text and video data.\n\n\nSecondly, we can focus on safety research that does not require human models, so that if we eventually build AGI systems that are highly capable without using human models, we can make them safer without needing to teach them to model humans. Examples of such research, some of which we mentioned earlier, include developing human-independent methods to measure negative side effects, to prevent specification gaming, to build secure approaches to containment, and to extend the usefulness of task-focused systems.\n\n\n \n\n\nAcknowledgements: thanks to Daniel Kokotajlo, Rob Bensinger, Richard Ngo, Jan Leike, and Tim Genewein for helpful comments on drafts of this post.\n\n\n\nThe post [Thoughts on Human Models](https://intelligence.org/2019/02/22/thoughts-on-human-models/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-02-23T01:30:11Z", "authors": ["Guest"], "summaries": []} -{"id": "5727717a392c89892f2bfed497f9e1cd", "title": "Our 2018 Fundraiser Review", "url": "https://intelligence.org/2019/02/11/our-2018-fundraiser-review/", "source": "miri", "source_type": "blog", "text": "Our [2018 Fundraiser](https://intelligence.org/2018/11/26/miris-2018-fundraiser/) ended on December 31 with the five week campaign raising $951,817[1](https://intelligence.org/feed/?paged=14#fn1) from 348 donors to help advance MIRI’s mission. We surpassed our Mainline Target ($500k) and made it more than halfway again to our Accelerated Growth Target ($1.2M). We’re grateful to all of you who supported us. 
Thank you!\n\n\n$500k: Mainline target\n----------------------\n\n\nThis target represents the difference between what we’ve raised so far this year, and our point estimate for business-as-usual spending next year.\n\n\n$1.2M: Accelerated growth target\n--------------------------------\n\n\nThis target represents what’s needed for our funding streams to keep pace with our growth toward the upper end of our projections.\n\n\nWith cryptocurrency prices significantly lower than during our [2017 fundraiser](https://intelligence.org/2018/01/10/fundraising-success/), we received less of our funding (~6%) from holders of cryptocurrency this time around. Despite this, our fundraiser was a success, in significant part thanks to the leverage gained by MIRI supporters’ participation in multiple matching campaigns during the fundraiser, including [WeTrust Spring’s Ethereum-matching campaign](https://blog.wetrust.io/conclusion-of-the-first-lr-experiment-709b018b5f83), Facebook’s [Giving Tuesday event](https://donations.fb.com/giving-tuesday/), and professional poker player Dan Smith’s [Double Up Drive](http://www.doubleupdrive.com/), expertly administered by [Raising for Effective Giving](https://reg-charity.org/).\n\n\nTogether with significant matching funds generated through donors’ employer matching programs, matching donations accounted for ~37% of the total funds raised during the fundraiser.\n\n\n### 1. 
WeTrust Spring\n\n\nMIRI participated, along with 17 other non-profit organizations, in [WeTrust Spring’s innovative ETH-matching](https://spring.wetrust.io/) event, which ran through Giving Tuesday, November 27. The event was the first-ever implementation of Glen Weyl, Zoë Hitzig, and Vitalik Buterin’s [Liberal Radicalism (LR) model](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3243656) for non-profit funding matching. Unlike most matching campaigns, which match exclusively based on total amount donated, this campaign matched in a way that heavily factored in the number of unique donors when divvying out the matching pool, a feature WeTrust highlighted as [“Democratic Donation Matching”](https://blog.wetrust.io/democratic-donation-matching-626759e928).\n\n\nDuring MIRI’s week-long campaign leading up to Giving Tuesday, [some supporters went deep](https://twitter.com/technocrypto/status/1067262719307509760) into trying to determine exactly what instantiation of the model WeTrust had created — how exactly DO the large donations provide a 450% match rate for minimum donations of .1 ETH? Our supporters’ excitement about this new matching model was also evident in the many donations that were made — as WeTrust reported in [their blog post](https://blog.wetrust.io/conclusion-of-the-first-lr-experiment-709b018b5f83), “*MIRI, the Machine Intelligence Research Institute was the winner, clocking in [64 qualified donations totaling 147.751 ETH](https://spring.wetrust.io/miri), then Lupus Foundation in second with 22 qualified donations and 23.851 total ETH*.” Thanks to our supporters’ donations, MIRI received over 91% of the matching funds allotted by WeTrust and, all told, we received ETH worth more than $31,000 from the campaign. Thank you!\n\n\n### 2. 
Facebook Giving Tuesday Event\n\n\nSome of our hardiest supporters set their alarm clocks extra early to support us in [Facebook’s Giving Tuesday matching event](https://donations.fb.com/giving-tuesday/), which kicked off at 5:00am EST on Giving Tuesday. Donations made before the $7M matching pool was exhausted were matched 1:1 by Facebook/PayPal, up to a maximum of $250,000 per organization, with limits of $20,000 per donor and $2,500 per donation.\n\n\nMIRI supporters, some with [our tipsheet](https://intelligence.org/giving-tuesday-2018-fb-fundraiser-tipsheet) in hand, pointed their browsers — and credit cards — at [MIRI’s fundraiser Facebook Page](https://www.facebook.com/donate/312557256014777/) (and [another page](https://www.facebook.com/donate/310031792934746/) set up by the folks behind the [EA Giving Tuesday Donation Matching Initiative](https://www.eagivingtuesday.org/) — thank you Avi and William!), and clicked early and often. During the previous year’s event, it took only 86 seconds for the $2M matching pool to be exhausted. This year, the significantly larger $7M pool was exhausted dramatically faster, sometime in the 16th second. Fortunately, before it ended, 11 MIRI donors had already made 20 donations totalling $40,072.\n\n\nOverall, 66% of the $61,023 donated to MIRI on Facebook on Giving Tuesday was matched by Facebook/PayPal, resulting in a total of $101,095. Thank you to everyone who participated, especially the early risers who so effectively leveraged matching funds on MIRI’s behalf, including Quinn Maurmann, Richard Schwall, Alan Chang, William Ehlhardt, Daniel Kokotajlo, John Davis, Herleik Holtan, and others. 
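The cap structure described above can be sketched as a toy calculation. This is an illustrative sketch of the stated rules only, not Facebook's actual implementation: the function name is ours, donations are assumed to be processed in order, and the shared $7M pool across all organizations is not modeled.\n\n\n```python\ndef matched_total(donations, per_donation_cap=2500, per_donor_cap=20000, per_org_cap=250000):\n    """Toy model of the 1:1 matching rules described above: each donation is\n    matched up to $2,500, each donor up to $20,000 total, and the whole\n    organization up to $250,000 total. `donations` is a list of\n    (donor, amount) pairs in the order they were made."""\n    matched_by_donor = {}  # running matched amount per donor\n    matched_for_org = 0    # running matched amount for the organization\n    total = 0\n    for donor, amount in donations:\n        match = min(amount, per_donation_cap)\n        match = min(match, per_donor_cap - matched_by_donor.get(donor, 0))\n        match = min(match, per_org_cap - matched_for_org)\n        match = max(match, 0)\n        matched_by_donor[donor] = matched_by_donor.get(donor, 0) + match\n        matched_for_org += match\n        total += match\n    return total\n\n# A single $4,000 donation is only matched up to the $2,500 per-donation cap:\nprint(matched_total([("alice", 4000)]))  # 2500\n```\n\n\nUnder these rules, splitting a large gift into several donations of $2,500 or less (up to the per-donor cap) maximizes the match, which is why the tipsheet-equipped supporters above donated "early and often."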
You guys rock!\n\n\nYou can read more about the general EA community’s fundraising performance on Giving Tuesday in [Ari Norowitz’s retrospective on the EA Forum](https://forum.effectivealtruism.org/posts/Ns3h8rCtsTMgFZ9eH/ea-giving-tuesday-donation-matching-initiative-2018?fbclid=IwAR0Qbchz32SChbsuVFxYnx3u5q_PjkaOGjhc7lE3MEZbbgDfMjc8RLHUX5Q).\n\n\n### 3. Double Up Drive Challenge\n\n\n![DoubleUpDrive - Multiply your impact doubleupdrive.com](https://doubleupdrive.com/wp-content/uploads/2018/11/Double-up-drive3-inline-07.png)\n\n\nPoker player Dan Smith and a number of his fellow professional players came together for another end-of-year Matching Challenge — once again administered by Raising for Effective Giving (REG), who have facilitated similar matching opportunities in years past. \n\n\nStarting on Giving Tuesday, November 27, $940,000 in matching funds was made available for eight charities focused on near-term causes (Malaria Consortium, GiveDirectly, Helen Keller International, GiveWell, Animal Charity Evaluators, Good Food Institute, StrongMinds and the Massachusetts Bail Fund); and, with the specific support of poker pro Aaron Merchak, $200,000 in matching funds was made available for two charities focused on improving the long-term future of our civilization, MIRI and REG.\n\n\nWith the addition of an anonymous sponsor to Dan’s roster in early December, an extra $150,000 was added to the near-term causes pool and, then, a week later, after his [win at the DraftKings World Championship](https://www.cardschat.com/news/tom-crowley-donates-1-1m-in-dfs-winnings-to-dan-smith-charity-drive-75671), Tom Crowley followed through on his pledge to donate half of his total event winnings to the drive, adding significantly increased funding, $1.127M, to the drive’s overall matching pool as well as 2 more organizations — Against Malaria Foundation and EA Funds’ Long-Term Future Fund.\n\n\nThe last few days of the Drive saw a whirlwind of donations being made to all 
organizations, causing the pool of $2.417M to be exhausted 24 hours before the declared end of the drive (December 29) at which point Martin Crowley came in to match all donations made in the last day, thus increasing the matched donations to over $2.7M.\n\n\nIn total, MIRI donors had $229,000 matched during the event. We’re very grateful to all these donors, to Dan Smith for instigating this phenomenally successful event, and to his fellow sponsors and especially Aaron Merchak and Martin Crowley for matching donations made to MIRI. Finally, a big shout-out to REG for facilitating and administering so effectively – thank you Stefan and Alfredo!\n\n\n### 4. Corporate Matching\n\n\nA number of MIRI supporters work at corporations that match contributions made by their employees to 501(c)(3) organizations like MIRI. During the duration of MIRI’s fundraiser, over $62,000 in matching funds from various Employee Matching Programs was leveraged by our supporters, adding to the significant matching corporate funds already leveraged during 2018 by these and other MIRI supporters.\n\n\n \n\n\nWe’re extremely grateful for all the support we received during this fundraiser, especially the effective leveraging of the numerous matching opportunities, and are excited about the opportunity it creates for us to continue to [grow our research team](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section4). \n\n\nIf you know of — or want to discuss — any giving/matching/support MIRI opportunities in 2019, please get in touch with me[2](https://intelligence.org/feed/?paged=14#fn2) at [colm@intelligence.org](mailto:colm@intelligence.org). Thank you!\n\n\n\n\n\n---\n\n\n1. The exact total is still subject to change as we continue to process a small number of donations.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/?paged=14#fnref1)\n2. [Colm Ó Riain](mailto:colm@intelligence.org) is MIRI’s Head of Growth. 
Colm coordinates MIRI’s philanthropic and recruitment strategy to support MIRI’s growth plans.[![↩](https://s.w.org/images/core/emoji/14.0.0/72x72/21a9.png)](https://intelligence.org/feed/?paged=14#fnref2)\n\n\n\nThe post [Our 2018 Fundraiser Review](https://intelligence.org/2019/02/11/our-2018-fundraiser-review/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-02-11T21:20:31Z", "authors": ["Colm Ó Riain"], "summaries": []} -{"id": "0cf9c4717965f55e83b18c065a5531b7", "title": "January 2019 Newsletter", "url": "https://intelligence.org/2019/01/31/january-2019-newsletter/", "source": "miri", "source_type": "blog", "text": "Our [December fundraiser](https://intelligence.org/2018/11/26/miris-2018-fundraiser/) was a success, with 348 donors contributing just over $950,000. Supporters leveraged a variety of matching opportunities, including employer matching programs, WeTrust Spring’s [Ethereum-matching campaign](https://blog.wetrust.io/conclusion-of-the-first-lr-experiment-709b018b5f83?gi=35c498cb6044), Facebook’s [Giving Tuesday](https://donations.fb.com/giving-tuesday/) event, and professional poker players Dan Smith, Aaron Merchak, and Martin Crowley’s [Double Up Drive](http://www.doubleupdrive.com/), expertly facilitated by [Raising for Effective Giving](https://reg-charity.org/).\nIn all, matching donations accounted for just over one third of the funds raised. 
Thank you to everyone who contributed!\n\n\n\n#### News and links\n\n\n* From NVIDIA’s Tero Karras, Samuli Laine, and Timo Aila: “[A Style-Based Generator Architecture for Generative Adversarial Networks](https://arxiv.org/abs/1812.04948).”\n* [How AI Training Scales](https://blog.openai.com/science-of-ai/): OpenAI’s Sam McCandlish, Jared Kaplan, and Dario Amodei introduce a method that “predicts the parallelizability of neural network training on a wide range of tasks”.\n* From *Vox*: [The case for taking AI seriously as a threat to humanity](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment). See also *Gizmodo*‘s [article](https://gizmodo.com/how-we-can-prepare-now-for-catastrophically-dangerous-a-1830388719) on AI risk.\n\n\n\nThe post [January 2019 Newsletter](https://intelligence.org/2019/01/31/january-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2019-02-01T07:05:01Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "8a5e2ba2334f8c902db12472b6891942", "title": "December 2018 Newsletter", "url": "https://intelligence.org/2018/12/16/december-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "**Edward Kmett** [has joined the MIRI team](https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/)! Edward is a prominent Haskell developer who popularized the use of lenses for functional programming, and currently maintains many of the libraries around the Haskell core libraries.\nI’m also happy to announce another new recruit: **James Payor**. James joins the MIRI research team after three years at Draftable, a software startup. 
He previously studied math and CS at MIT, and he holds a silver medal from the International Olympiad in Informatics, one of the most prestigious CS competitions in the world.\n\n\nIn other news, we’ve today released a new edition of [***Rationality: From AI to Zombies***](https://intelligence.org/rationality-ai-zombies/) with a fair amount of [textual revisions](https://intelligence.org/2018/12/15/announcing-new-raz/) and (for the first time) a print edition!\n\n\nFinally, our [**2018 fundraiser**](https://intelligence.org/2018/11/26/miris-2018-fundraiser/) has passed the halfway mark on our first target! (And there’s currently $136,000 available in dollar-for-dollar donor matching through the [Double Up Drive](https://doubleupdrive.com/)!)\n\n\n#### Other updates\n\n\n* A new paper from Stuart Armstrong and Sören Mindermann: “[Occam’s Razor is Insufficient to Infer the Preferences of Irrational Agents](https://www.fhi.ox.ac.uk/occam/).”\n* New AI Alignment Forum posts: [Kelly Bettors](https://www.alignmentforum.org/posts/iWXQgwpksstozSDeA/kelly-bettors); [Bounded Oracle Induction](https://www.alignmentforum.org/posts/MgLeAWSeLbzx8mkZ2/bounded-oracle-induction)\n* OpenAI’s [Jack Clark](https://jack-clark.net/2018/11/26/import-ai-122-google-obtains-new-imagenet-state-of-the-art-with-gpipe-drone-learns-to-land-more-effectively-than-pd-controller-policy-and-facebook-releases-its-cherrypi-starcraft-bot/) and [*Axios*](https://www.axios.com/newsletters/axios-future-5a9e9f63-fd4f-4514-9570-ce9dc4c14348.html?chunk=2#story2) discuss research-sharing in AI, following up on our [2018 Update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) post.\n* A throwback post from Eliezer Yudkowsky: [Should Ethicists Be Inside or Outside a Profession?](https://www.lesswrong.com/posts/LRKXuxLrnxx3nSESv/should-ethicists-be-inside-or-outside-a-profession)\n\n\n#### News and links\n\n\n* New from the DeepMind safety team: Jan Leike’s [Scalable Agent Alignment 
via Reward Modeling](https://www.lesswrong.com/posts/HBGd34LKvXM9TxvNf/new-safety-research-agenda-scalable-agent-alignment-via) ([arXiv](https://arxiv.org/abs/1811.07871)) and Viktoriya Krakovna’s [Discussion on the Machine Learning Approach to AI Safety](https://www.lesswrong.com/posts/5GFn87cmw7A5hzR89/discussion-on-the-machine-learning-approach-to-ai-safety).\n* Two recently released core Alignment Forum sequences: Rohin Shah’s [Value Learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc) and Paul Christiano’s [Iterated Amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd).\n* On the 80,000 Hours Podcast, Catherine Olsson and Daniel Ziegler discuss [paths for ML engineers to get involved in AI safety](https://80000hours.org/podcast/episodes/olsson-and-ziegler-ml-engineering-and-safety/).\n* Nick Bostrom has a new paper out: “[The Vulnerable World Hypothesis](https://www.lesswrong.com/posts/Tx6dGzYLtfzzkuGtF/the-vulnerable-world-hypothesis-by-bostrom).”\n\n\n\nThe post [December 2018 Newsletter](https://intelligence.org/2018/12/16/december-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-12-16T18:08:03Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "af1bf406ffac4eba7ed78078f5f776a6", "title": "Announcing a new edition of “Rationality: From AI to Zombies”", "url": "https://intelligence.org/2018/12/15/announcing-new-raz/", "source": "miri", "source_type": "blog", "text": "MIRI is putting out a new edition of [***Rationality: From AI to Zombies***](https://intelligence.org/rationality-ai-zombies/), including the first set of *R:AZ* **print books**! 
*Map and Territory* (volume 1) and *How to Actually Change Your Mind* (volume 2) are out today!\n \n\n\n[![Map and Territory](https://intelligence.org/wp-content/uploads/2018/12/map-and-territory.png)](https://intelligence.org/rationality-ai-zombies/)                   [![How to Actually Change Your Mind](https://intelligence.org/wp-content/uploads/2018/12/how-to-actually-change-your-mind.png)](https://intelligence.org/rationality-ai-zombies/)\n\n\n \n\n\n* ***Map and Territory*** is:\n+ $6.50 [on Amazon](https://smile.amazon.com/dp/1939311233), for the print version.\n+ Pay-what-you-want [on Gumroad](http://gumroad.com/l/mapterritory), for PDF, EPUB, and MOBI versions.\n\n\n* ***How to Actually Change Your Mind*** is:\n+ $8 [on Amazon](https://smile.amazon.com/dp/1939311276), for the print version.\n+ Pay-what-you-want [on Gumroad](http://gumroad.com/l/howtoactuallychangeyourmind), for PDF, EPUB, and MOBI versions (*available in the next day*).\n\n\n\n \n\n\nThe *Rationality: From AI to Zombies* series compiles Eliezer Yudkowsky’s original *Overcoming Bias* and *Less Wrong* sequences, modified to form a more cohesive whole as books.\n\n\n*Map and Territory* is the canonical starting point, though we’ve tried to make *How to Actually Change Your Mind* a good jumping-on point too, since we expect different people to take interest in one book or the other.\n\n\nThe previous edition of *Rationality: From AI to Zombies* was digital-only, and took the form of a single sprawling ebook. 
The new version has been revised a fair amount, with larger changes including:\n\n\n \n\n\n* The first sequence in *Map and Territory*, “Predictably Wrong,” has been substantially reorganized and rewritten, with a goal of making it a much better experience for new readers.\n* More generally, the books are now more optimized for new readers and less focused on extreme fidelity to Eliezer’s original blog posts, as this was one of the most common requests we got in response to the previous edition of *Rationality: From AI to Zombies*. Although the book as a whole is mostly unchanged, this represented an update about which option to pick in quite a few textual tradeoffs.\n* A fair number of essays have been added, removed, or rearranged. The “Against Doublethink” sequence in *How to Actually Change Your Mind* has been removed entirely, except for one essay (“Singlethink”).\n* Important links and references are now written out rather than hidden behind [Easter egg hyperlinks](https://en.wikipedia.org/wiki/Wikipedia:Piped_link#Transparency), so that they’ll show up in print editions too.\n \n\n\nEaster egg links are kept around if they’re interesting enough to be worth retaining, but not important enough to deserve a footnote; so there will still be some digital-only content, but the goal is for this to be pretty minor.\n* A glossary has been added to the back of each book.\n\n\n \n\n\nOver the coming months, we’ll be rolling out the other four volumes of *Rationality: From AI to Zombies*. 
To learn more, see the [***R:AZ* landing page**](https://intelligence.org/rationality-ai-zombies/).\n\n\n\nThe post [Announcing a new edition of “Rationality: From AI to Zombies”](https://intelligence.org/2018/12/15/announcing-new-raz/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-12-16T03:10:42Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "65fdd00b284b1b12d4668103d97ef890", "title": "2017 in review", "url": "https://intelligence.org/2018/11/28/2017-in-review/", "source": "miri", "source_type": "blog", "text": "This post reviews **MIRI’s activities in 2017**, including research, recruiting, exposition, and fundraising activities.\n\n\n2017 was a big transitional year for MIRI, as we took on new research projects that have a much greater reliance on hands-on programming work and experimentation. We’ve continued these projects in 2018, and they’re described more in our [2018 update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/). This meant a major focus on laying groundwork for much faster growth than we’ve had in the past, including setting up infrastructure and changing how we recruit to reach out to more people with engineering backgrounds.\n\n\n\nAt the same time, 2017 was our best year to date on fundraising, as we saw a significant increase in support both from the [Open Philanthropy Project](https://www.openphilanthropy.org/) and from the cryptocurrency community, which responded to the crypto boom with a great deal of generosity toward us. This put us in an excellent position to move ahead with our plans with confidence, and to focus more of our effort on technical research and growth.\n\n\nThe review this year is coming out far later than usual, for which I apologize. 
One of the main reasons for this is that I felt that a catalogue of our 2017 activities would be much less informative if I couldn’t cite our [2018 update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions), which explains a lot of the reasoning behind our new work and how the things we’re doing relate to each other. I apologize for any inconvenience this might have caused people trying to track what MIRI’s been up to. I plan to have our next annual review out much earlier, in the first quarter of 2019.\n\n\n \n\n\n### 2017 Research Progress\n\n\nAs described in our [2017 organizational update](https://intelligence.org/2017/04/30/2017-updates-and-strategy/) and elaborated on in much more detail in our recent [2018 update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/), 2017 saw a significant shift in where we’re putting our research efforts. Although an [expanded version](https://intelligence.org/embedded-agency/) of the [Agent Foundations agenda](https://intelligence.org/technical-agenda/) continues to be a major focus at MIRI, we’re also now tackling a new set of [alignment research directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section1-1) that lend themselves more to code experiments.\n\n\nSince early 2017, we’ve been increasingly adopting a policy of [not disclosing](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3) many of our research results, which has meant that less of our new output is publicly available. Some of our work in 2017 (and 2018) has continued to be made public, however, including research on the [AI Alignment Forum](http://alignmentforum.org/).\n\n\nIn 2017, Scott Garrabrant refactored our Agent Foundations agenda into four new categories: *decision theory*, *embedded world-models*, *robust delegation*, and *subsystem alignment*. 
Abram Demski and Scott have now co-written an introduction to these four problems, considered as different aspects of the larger problem of “[**Embedded Agency**](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version).”\n\n\nComparing our predictions (from [March 2017](https://intelligence.org/2017/04/30/2017-updates-and-strategy/)[1](https://intelligence.org/2018/11/28/2017-in-review/#footnote_0_18384 \"These predictions have been edited below to match Scott’s terminology changes as described in 2018 research plans and predictions.\")) to our progress over 2017, and using a 1-5 scale where 1 means “limited” progress, 3 means “modest” progress, and 5 means “sizable” progress, we get the following retrospective take on our public-facing research progress:\n\n\n#### **Decision theory**\n\n\n* 2015 progress: 3. (Predicted: 3.)\n* 2016 progress: 3. (Predicted: 3.)\n* 2017 progress: **3**. (Predicted: 3.)\n\n\nOur most significant 2017 results include [posing](https://www.alignmentforum.org/item?id=1372) and [solving a version of](https://www.alignmentforum.org/item?id=1712) the [converse Lawvere problem](https://www.alignmentforum.org/posts/5bd75cc58225bf06703753a9/formal-open-problem-in-decision-theory);[2](https://intelligence.org/2018/11/28/2017-in-review/#footnote_1_18384 \"Scott coined the name for this problem in his post: The Ubiquitous Converse Lawvere Problem.\") developing [cooperative oracles](https://www.alignmentforum.org/item?id=1468); and improving our understanding of how causal decision theory relates to evidential decision theory (e.g., in [Smoking Lesion Steelman](https://www.alignmentforum.org/item?id=1525)).\n\n\nWe also released a number of introductory resources on decision theory, including “[Functional Decision Theory](https://intelligence.org/2017/10/22/fdt/)” and [Decisions Are For Making Bad Outcomes Inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/).\n\n\n#### 
**Embedded world-models**\n\n\n* 2015 progress: 5. (Predicted: 3.)\n* 2016 progress: 5. (Predicted: 3.)\n* 2017 progress: **2**. (Predicted: 2.)\n\n\nKey 2017 results in this area include the finding that [logical inductors](https://intelligence.org/2016/09/12/new-paper-logical-induction/) that can see each other dominate each other; and, as a corollary, logical inductor limits dominate each other.\n\n\nBeyond that, Scott Garrabrant reports that [Hyperreal Brouwer](https://www.alignmentforum.org/item?id=1671) shifted his thinking significantly with respect to probabilistic truth predicates, reflective oracles, and logical inductors. Additionally, Vanessa Kosoy’s “[Forecasting Using Incomplete Models](https://intelligence.org/2018/06/27/forecasting-using-incomplete-models/)” built on our previous work on logical inductors to create a cleaner (purely learning-theoretic) formalism for modeling complex environments, showing that the methods developed in “Logical Induction” are useful for applications in classical sequence prediction unrelated to logic.\n\n\n#### **Robust delegation**\n\n\n* 2015 progress: 3. (Predicted: 3.)\n* 2016 progress: 4. (Predicted: 3.)\n* 2017 progress: **4**. (Predicted: 1.)\n\n\nWe made significant progress on the tiling problem, and also clarified our thinking about Goodhart’s Law (see “[Goodhart Taxonomy](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy)“). 
Other noteworthy work in this area includes Vanessa Kosoy’s [Delegative Inverse Reinforcement Learning](https://www.alignmentforum.org/item?id=1550) framework, Abram Demski’s articulation of “[stable pointers to value](https://www.alignmentforum.org/item?id=1622)” as a central desideratum for value loading, and Ryan Carey’s “[Incorrigibility in the CIRL Framework](https://arxiv.org/abs/1709.06275).”\n\n\n#### **Subsystem alignment (new category)**\n\n\nOne of the more significant research shifts at MIRI in 2017 was orienting toward the subsystem alignment problem at all, following discussion such as Eliezer Yudkowsky’s [Optimization Daemons](https://arbital.com/p/daemons/), Paul Christiano’s [What Does the Universal Prior Actually Look Like?](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/), and Jessica Taylor’s [Some Problems with Making Induction Benign](https://www.alignmentforum.org/item?id=1263). Our high-level thoughts about this problem can be found in Scott Garrabrant and Abram Demski’s recent [write-up](https://intelligence.org/2018/11/06/embedded-subsystems/).\n\n\n \n\n\n2017 also saw a reduction in our focus on the [Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/) (AAMLS) agenda. Although we view these problems as highly important, and continue to revisit them regularly, we’ve found AAMLS work to be [less obviously tractable](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/#3) than our other research agendas thus far.\n\n\nOn the whole, we continue (at year’s end in 2018) to be very excited by the alignment avenues of attack that we started exploring in earnest in 2017, both with respect to embedded agency and with respect to [our newer lines of research](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/). 
\n\n\n \n\n\n### 2017 Research Support Activities\n\n\nAs discussed in our [2018 Update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/), the new lines of research we’re tackling are much easier to hire for than has been the case for our Agent Foundations research:\n\n\n\n> \n> This work seems to “give out its own guideposts” more than the Agent Foundations agenda does. While we used to require extremely close fit of our hires on research taste, we now think we have enough sense of the terrain that we can relax those requirements somewhat. We’re still looking for hires who are scientifically innovative and who are *fairly* close on research taste, but our work is now much more scalable with the number of good mathematicians and engineers working at MIRI.\n> \n> \n> \n\n\nFor that reason, one of our top priorities in 2017 (continuing into 2018) was to set MIRI up to be able to undergo major, sustained growth. We’ve been helped substantially in ramping up our recruitment by [Blake Borgeson](http://www.blakeb.org/about.html), a *Nature*-published computational biologist (and now a MIRI board member) who previously co-founded Recursion Pharmaceuticals and led its machine learning work as CTO.\n\n\nConcretely, in 2017 we:\n\n\n* **Hired research staff** including Sam Eisenstat, Abram Demski, Tsvi Benson-Tilsen, Jesse Liptrap, and Nick Tarleton.\n* **Ran the** [**AI Summer Fellows Program**](http://www.rationality.org/workshops/apply-aisfp) with CFAR.\n* **Ran 3** [**research workshops**](https://intelligence.org/workshops/) on the Agent Foundations agenda, the [AAMLS agenda](https://intelligence.org/2016/07/27/alignment-machine-learning/), and Paul Christiano’s [research agenda](https://ai-alignment.com/). 
We also ran a large number of [internal](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#3) research retreats and other events.\n* **Ran software engineer** [**trials**](https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/) where participants spent the summer training up to become research engineers, resulting in a hire.\n\n\n \n\n\n### 2017 Conversations and Exposition\n\n\nOne of our 2017 priorities was to sync up and compare models more on the strategic landscape with other existential risk and AI safety groups. For snapshots of some of the discussions over the years, see Daniel Dewey’s [thoughts on HRAD](https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design) and Nate’s [response](https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design#Z6TbXivpjxWyc8NYM); and more recently, Eliezer Yudkowsky and Paul Christiano’s conversations about Paul’s [research proposals](https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/).\n\n\nWe also did a fair amount of public dialoguing, exposition, and outreach in 2017. 
On that front we:\n\n\n* **Released** [***Inadequate Equilibria***](http://equilibriabook.com/), a book by Eliezer Yudkowsky on group- and system-level inefficiencies, and when individuals can hope to do better than the status quo.\n* **Produced research exposition**: [On Motivations for MIRI’s Highly Reliable Agent Design Research](https://www.alignmentforum.org/item?id=1220); [Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome](https://intelligence.org/2017/04/12/ensuring/); [Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/); [Security Mindset and the Logistic Success Curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/)\n* **Produced strategy and forecasting exposition**: [Response to Cegłowski on Superintelligence](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/); [There’s No Fire Alarm for Artificial General Intelligence](https://intelligence.org/2017/10/13/fire-alarm/); [AlphaGo Zero and the Foom Debate](https://intelligence.org/2017/10/20/alphago/); [Why We Should Be Concerned About Artificial Superintelligence](https://www.skeptic.com/reading_room/why-we-should-be-concerned-about-artificial-superintelligence/); [A Reply to Francois Chollet on Intelligence Explosion](https://intelligence.org/2017/12/06/chollet/)\n* **Received press coverage** in [*The Huffington Post*](https://www.huffingtonpost.com/entry/can-we-properly-prepare-for-the-risks-of-superintelligent_us_58d43f59e4b0f633072b35f1), [*Vanity Fair*](http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x), [*Nautilus*](http://nautil.us/issue/52/the-hive/is-tribalism-a-natural-malfunction), and [*Wired*](https://www.wired.com/story/the-way-the-world-ends-not-with-a-bang-but-a-paperclip/). 
We were also interviewed for Mark O’Connell’s [*To Be A Machine*](https://en.wikipedia.org/wiki/To_Be_a_Machine) and Richard Clarke and R.P. Eddy’s [*Warnings: Finding Cassandras to Stop Catastrophes*](https://www.warningsbook.net/).\n* **Spoke** at the [O’Reilly AI Conference](https://cdn.oreillystatic.com/en/assets/1/event/272/Ensuring%20smarter-than-human%20intelligence%20has%20a%20positive%20outcome_%20Presentation.pdf), and on panels at the [Beneficial AI](https://www.youtube.com/watch?v=UMq4BcRf-bY) conference and Effective Altruism Global ([1](https://www.youtube.com/watch?v=trTslOidmq8), [2](https://www.youtube.com/watch?v=gmL_7SayalM)).\n* Presented **papers** at [TARK 2017](https://arxiv.org/abs/1707.08747v1), [FEW 2017](https://intelligence.org/files/DeathInDamascus.pdf), and the [AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society](https://arxiv.org/abs/1709.06275), and published the Agent Foundations agenda in [*The Technological Singularity: Managing the Journey*](http://www.creative-science.org/activities/singularity/).\n* **Participated in other events**, including the “[Envisioning and Addressing Adverse AI Outcomes](https://www.bloomberg.com/news/articles/2017-03-02/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions)” workshop at Arizona State and the UCLA [Colloquium on Catastrophic and Existential Risk](https://www.risksciences.ucla.edu/news-events/2017/1/31/the-first-colloquium-on-catastrophic-and-existential-threats).\n\n\n \n\n\n### 2017 Finances\n\n\n#### Fundraising\n\n\n2017 was by far our best fundraising year to date. 
We raised a total of **$5,849,500**, more than **2.5×** what we raised in 2016.[3](https://intelligence.org/2018/11/28/2017-in-review/#footnote_2_18384 \"Note that amounts in this section may vary slightly from our audited financial statements, due to small differences between how we tracked donations internally, and how we are required to report them in our financial statements.\") During our annual fundraiser, we also [raised double our highest target](https://intelligence.org/2018/01/10/fundraising-success/). We are very grateful for this incredible show of support. This unexpected fundraising success enabled us to move forward with our growth plans with a lot more confidence, and boosted our recruiting efforts in a variety of ways.[4](https://intelligence.org/2018/11/28/2017-in-review/#footnote_3_18384 \"See our 2018 fundraiser post for more information.\")\n\n\nThe large increase in funding we saw in 2017 was significantly driven by:\n\n\n* A large influx of cryptocurrency contributions, which made up ~42% of our total contributions in 2017. 
The largest of these were:\n\t+ $1.01M in ETH from an anonymous donor.\n\t+ $764,970 in ETH from Vitalik Buterin, the inventor and co-founder of Ethereum.\n\t+ $367,575 in BTC from Christian Calderon.\n\t+ $295,899 in BTC from professional poker players Dan Smith, Tom Crowley and Martin Crowley as part of their [Matching Challenge](https://2017charitydrive.com/) in partnership with [Raising for Effective Giving](https://reg-charity.org/).\n* Other contributions including:\n\t+ A $1.25M [grant disbursement](https://intelligence.org/2017/11/08/major-grant-open-phil/) from the [Open Philanthropy Project](https://www.openphilanthropy.org/), significantly increased from the $500k grant they awarded MIRI in 2016.\n\t+ $200k in grants from the [Berkeley Existential Risk Initiative](http://existence.org/).\n\n\nAs the graph below shows, although our fundraising has increased year over year since 2014, 2017 looks very much like an outlier year relative to our previous growth rate. This was largely driven by the large influx of cryptocurrency contributions, but even excluding those contributions, we raised ~$3.4M, which is 1.5× what we raised in 2016.[5](https://intelligence.org/2018/11/28/2017-in-review/#footnote_4_18384 \"This is similar to 2013, when 33% of our contributions that year came from a single Ripple donation from Jed McCaleb.\")\n\n\n \n\n\n\n\n(In this chart and those that follow, “Unlapsed” indicates contributions from past supporters who did not donate in the previous year.)\n\n\nWhile the largest contributions drove the overall trend, we saw growth in both the number of contributors and amount contributed across all contributor sizes.\n\n\n \n\n\n\n\n \n\n\n\n\nIn 2017 we received contributions from 745 unique contributors, 38% more than in 2016 and nearly as many as in 2014 when [we participated in SVGives](https://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/).\n\n\nSupport from international contributors increased from 20% in 2016 to 
42% in 2017. This increase was largely driven by the $1.01M ETH donation, but support still increased from 20% to 25% if we ignore this donation. Starting in late 2016, we’ve been working hard to find ways for our international supporters to be able to contribute in a [tax-advantaged manner](https://intelligence.org/donate/tax-advantaged-donations/). I expect this percentage to substantially increase in 2018 due to those efforts.[6](https://intelligence.org/2018/11/28/2017-in-review/#footnote_5_18384 \"A big thanks to Colm for all the work he’s put into this; have a look at our Tax-Advantaged Donations page for more information.\")\n\n\n \n\n\n\n\n#### Spending\n\n\nIn our [2016 fundraiser post](https://intelligence.org/2016/09/16/miris-2016-fundraiser/), we projected that we’d spend $2–2.2M in 2017. Later in 2017, we [revised](https://intelligence.org/2017/04/30/2017-updates-and-strategy/) our estimates to $2.1–2.5M (with a point estimate of $2.25M) along with a breakdown of our estimate across our major budget categories.\n\n\nOverall, our projections were fairly accurate. Total spending came in at just below $2.1M. The graph below compares our actual spending with our projections.[7](https://intelligence.org/2018/11/28/2017-in-review/#footnote_6_18384 \"Our subsequent budget projections have used a simpler set of major budget categories. 
I’ve translated our 2017 budget projections into this categorization scheme, for our comparison with actual spending, in order to remain consistent with this new scheme.\")\n\n\n \n\n\n\n\nThe largest deviation from our projected spending came as a result of the researchers who had been working on our AAMLS agenda [moving on (on good terms) to other projects](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/#3).\n\n\n \n\n\nFor past annual reviews, see: [2016](https://intelligence.org/2017/03/28/2016-in-review/), [2015](https://intelligence.org/2016/07/29/2015-in-review/), [2014](https://intelligence.org/2015/03/22/2014-review/), and [2013](https://intelligence.org/2013/12/20/2013-in-review-operations/); and for more recent information on what we’ve been up to following 2017, see our 2018 [update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) and [fundraiser](https://intelligence.org/2018/11/26/miris-2018-fundraiser/) posts.\n\n\n\n\n---\n\n1. These predictions have been edited below to match Scott’s terminology changes as described in [2018 research plans and predictions](https://intelligence.org/2018/03/31/2018-research-plans/).\n2. Scott coined the name for this problem in his post: [The Ubiquitous Converse Lawvere Problem](https://www.alignmentforum.org/posts/5bd75cc58225bf06703753b9/the-ubiquitous-converse-lawvere-problem).\n3. Note that amounts in this section may vary slightly from our audited financial statements, due to small differences between how we tracked donations internally, and how we are required to report them in our financial statements.\n4. See our [2018 fundraiser post](https://intelligence.org/2018/11/26/miris-2018-fundraiser/) for more information.\n5. This is similar to 2013, when 33% of our contributions that year came from a single Ripple donation from Jed McCaleb.\n6. 
A big thanks to Colm for all the work he’s put into this; have a look at our [Tax-Advantaged Donations](https://intelligence.org/donate/tax-advantaged-donations/) page for more information.\n7. Our subsequent budget projections have used a simpler set of major budget categories. I’ve translated our 2017 budget projections into this categorization scheme, for our comparison with actual spending, in order to remain consistent with this new scheme.\n\nThe post [2017 in review](https://intelligence.org/2018/11/28/2017-in-review/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-29T07:59:35Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "78df8e18a27796530eb37d64e76cb05c", "title": "MIRI’s newest recruit: Edward Kmett!", "url": "https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2018/08/Team_Headshot_Web_Edward.jpg)Prolific Haskell developer **Edward Kmett** has joined the MIRI team!\n\n\nEdward is perhaps best known for popularizing the use of lenses for functional programming. Lenses are a tool that provides a compositional vocabulary for accessing parts of larger structures and describing what you want to do with those parts.\n\n\nBeyond the lens library, Edward maintains a significant chunk of all libraries around the Haskell core libraries, covering everything from automatic differentiation (used heavily in deep learning, computer vision, and financial risk) to category theory (biased heavily towards organizing software) to graphics, SAT bindings, RCU schemes, tools for writing compilers, and more.\n\n\nInitial support for Edward joining MIRI is coming in the form of funding from long-time MIRI donor Jaan Tallinn. 
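To give a concrete flavor of the lens idea described above, here is a minimal, self-contained sketch of the van Laarhoven encoding that the `lens` library builds on. This is illustrative code written for this post, not code from Edward's library, though the names `view`, `over`, and `_1` mirror the library's vocabulary:

```haskell
{-# LANGUAGE RankNTypes #-}
-- Illustrative sketch only (not lens-library source): in the van
-- Laarhoven encoding, a lens onto a part of a structure is just a
-- function, so lenses compose with ordinary function composition (.).
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Read the focused part out of the whole.
view :: Lens s a -> s -> a
view l = getConst . l Const

-- Apply a function to the focused part, rebuilding the whole.
over :: Lens s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

-- A lens onto the first component of a pair.
_1 :: Lens (a, b) a
_1 f (x, y) = fmap (\x' -> (x', y)) (f x)

main :: IO ()
main = do
  print (view _1 ((1, 2) :: (Int, Int)))                     -- 1
  print (over _1 (+ 10) ((1, 2) :: (Int, Int)))              -- (11,2)
  print (view (_1 . _1) (((1, 2), 3) :: ((Int, Int), Int)))  -- 1
```

Because a lens here is just a function, `_1 . _1` is itself a lens reaching two levels deep, which is what "a compositional vocabulary for accessing parts of larger structures" cashes out to.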
Increased donor enthusiasm has put MIRI in a great position to take on more engineers in general, and to consider highly competitive salaries for top-of-their-field engineers like Edward who are interested in working with us.\n\n\nAt MIRI, Edward is splitting his time between helping us grow our research team and diving in on a line of research he’s been independently developing in the background for some time: building a new language and infrastructure to make it easier for people to write highly complex computer programs with known desirable properties. While we are big fans of his work, Edward’s research is independent of the directions we described in our [2018 Update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/), and we don’t consider it part of our core research focus.\n\n\nWe’re hugely excited to have Edward at MIRI. We expect to learn and gain a lot from our interactions, and we also hope that having Edward on the team will let him and other MIRI staff steal each other’s best problem-solving heuristics and converge on research directions over time.\n\n\n\n\n---\n\n\nAs described in our recent [update](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/), our new lines of research are heavy on the mix of theoretical rigor and hands-on engineering that Edward and the functional programming community are well-known for:\n\n\n\n> In common between all our new approaches is a focus on using high-level theoretical abstractions to enable coherent reasoning about the systems we build. A concrete implication of this is that we write lots of our code in Haskell, and are often thinking about our code through the lens of type theory.\n> \n> \n\n\nMIRI’s nonprofit mission is to ensure that smarter-than-human AI systems, once developed, have a [positive impact](https://intelligence.org/2017/04/12/ensuring/) on the world. 
And we want to actually succeed in that goal, not just go through the motions of working on the problem.\n\n\nOur current model of the challenges involved says that the central sticking point for future engineers will likely be that the building blocks of AI just aren’t sufficiently transparent. We think that someone, somewhere, needs to develop some new foundations and deep theory/insights, above and beyond what’s likely to arise from refining or scaling up currently standard techniques.\n\n\nWe think that the skillset of functional programmers tends to be particularly well-suited to this kind of work, and we believe that our new research areas can absorb a large number of programmers and computer scientists. So we want this hiring announcement to double as a hiring pitch: consider [joining our research effort](https://intelligence.org/careers/software-engineer/)!\n\n\nTo learn more about what it’s like to work at MIRI and what kinds of candidates we’re looking for, see [our last big post](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/), or shoot MIRI researcher Buck Shlegeris [an email](mailto:buck@intelligence.org).\n\n\nThe post [MIRI’s newest recruit: Edward Kmett!](https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-28T18:14:30Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "bed025df2486c935f8181c7162a30327", "title": "November 2018 Newsletter", "url": "https://intelligence.org/2018/11/26/november-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "In [2018 Update: Our New Research Directions](http://intelligence.org/2018-update-our-new-research-directions), Nate Soares discusses MIRI’s new research; our focus on “deconfusion”; some of the thinking behind our decision to default to nondisclosure on new results; and why more people than you might think should come [join 
the MIRI team](http://intelligence.org/careers)!\nAdditionally, our [2018 fundraiser](http://intelligence.org/miris-2018-fundraiser) begins today! To kick things off, we’ll be participating in three separate matching campaigns, all focused around Giving Tuesday, Nov. 27; [details in our fundraiser post](https://intelligence.org/miris-2018-fundraiser#3).\n\n\n#### Other updates\n\n\n* New alignment posts: A Rationality Condition for CDT Is That It Equal EDT ([1](https://www.alignmentforum.org/posts/XW6Qi2LitMDb2MF8c/a-rationality-condition-for-cdt-is-that-it-equal-edt-part-1), [2](https://www.alignmentforum.org/posts/tpWfDLZy2tk97MJ3F/a-rationality-condition-for-cdt-is-that-it-equal-edt-part-2)); [Standard ML Oracles vs. Counterfactual Ones](https://www.alignmentforum.org/posts/hJaJw6LK39zpyCKW6/standard-ml-oracles-vs-counterfactual-ones); [Addressing Three Problems with Counterfactual Corrigibility](https://www.alignmentforum.org/posts/owdBiF8pj6Lpwwdup/addressing-three-problems-with-counterfactual-corrigibility); [When EDT=CDT, ADT Does Well](https://www.alignmentforum.org/posts/pgJbaXvYWBx3Mrg5T/when-edt-cdt-adt-does-well). See also Paul Christiano’s EDT vs. CDT ([1](https://sideways-view.com/2018/09/19/edt-vs-cdt/), [2](https://sideways-view.com/2018/09/30/edt-vs-cdt-2-conditioning-on-the-impossible/)).\n* [Embedded Agency](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version): Scott Garrabrant and Abram Demski’s full sequence is up! The posts serve as our new core introductory resource to MIRI’s Agent Foundations research.\n* “Sometimes people ask me what math they should study in order to get into agent foundations. My first answer is that I have found the introductory class in every subfield to be helpful, but I have found the later classes to be much less helpful. 
My second answer is to learn enough math to understand all fixed point theorems….” In [Fixed Point Exercises](https://www.lesswrong.com/posts/mojJ6Hpri8rfzY78b/fixed-point-exercises), Scott provides exercises for getting into MIRI’s Agent Foundations research.\n* MIRI is seeking applicants for a new series of [AI Risk for Computer Scientists workshops](https://intelligence.org/ai-risk-for-computer-scientists/), aimed at technical people who want to think harder about AI alignment.\n\n\n#### News and links\n\n\n* Vox unveils [Future Perfect](https://www.vox.com/future-perfect/2018/10/15/17924288/future-perfect-explained), a new section of their site focused on effective altruism.\n* 80,000 Hours [interviews Paul Christiano](https://80000hours.org/podcast/episodes/paul-christiano-ai-alignment-solutions/), including discussion of MIRI/Paul disagreements and Paul’s approach to AI alignment research.\n* 80,000 Hours [surveys effective altruism orgs](https://80000hours.org/2018/10/2018-talent-gaps-survey/) on their hiring needs.\n* From Christiano, Buck Shlegeris, and Dario Amodei: [Learning Complex Goals with Iterated Amplification](https://blog.openai.com/amplifying-ai-training/) ([arXiv paper](https://arxiv.org/abs/1810.08575)).\n\n\n\nThe post [November 2018 Newsletter](https://intelligence.org/2018/11/26/november-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-27T05:46:17Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "08c12959df5d109f5c9deaea4dfb2423", "title": "MIRI’s 2018 Fundraiser", "url": "https://intelligence.org/2018/11/26/miris-2018-fundraiser/", "source": "miri", "source_type": "blog", "text": "**Update January 2019**: MIRI’s 2018 fundraiser is now concluded.\n\n\n \n\n\n\n\n[Target 1 \n\n$500,000 \n\nCompleted](https://intelligence.org/feed/?paged=15#fundraiserModal)[Target 2 \n\n$1,200,000In 
Progress](https://intelligence.org/feed/?paged=15#fundraiserModal)\n\n\n**$946,981 raised; fundraiser concluded, with 345 donors contributing.**\n\n\n* **Target 1 ($500k, mainline)**: This target represents the difference between what we’ve raised so far this year, and our point estimate for business-as-usual spending next year.\n* **Target 2 ($1.2M, accelerated growth)**: This target represents what’s needed for our funding streams to keep pace with our growth toward the upper end of our projections.\n\n\n---\n\n\nMIRI is a math/CS research nonprofit with a mission of maximizing the potential humanitarian benefit of smarter-than-human artificial intelligence. You can learn more about the kind of work we do in “[Ensuring Smarter-Than-Human Intelligence Has A Positive Outcome](https://intelligence.org/2017/04/12/ensuring/)” and “[Embedded Agency](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version).”\n\n\nOur funding targets this year are based on a goal of raising enough in 2018 to match our “business-as-usual” budget next year. 
We view “make enough each year to pay for the next year” as a good heuristic for MIRI, given that we’re a quickly growing nonprofit with a healthy level of reserves and a budget dominated by researcher salaries.\n\n\nWe focus on business-as-usual spending in order to factor out the (likely very large) cost of moving to new spaces in the next couple of years as we continue to grow, which introduces a high amount of variance to the model.[1](https://intelligence.org/2018/11/26/miris-2018-fundraiser/#footnote_0_18463 \"That is, our business-as-usual model tries to remove one-time outlier costs so that it’s easier to see what “the new normal” is in MIRI’s spending and think about our long-term growth curve.\")\n\n\nMy current model for our (business-as-usual, outlier-free) 2019 spending ranges from $4.4M to $5.5M, with a point estimate of $4.8M—up from $3.5M this year and $2.1M in 2017. The model takes as input estimated ranges for all our major spending categories, but the overwhelming majority of the variance comes from the number of staff we’ll add to the team.[2](https://intelligence.org/2018/11/26/miris-2018-fundraiser/#footnote_1_18463 \"This estimation is fairly rough and uncertain. One weakness of this model is that it treats the inputs as though they were independent, which is not always the case. I also didn’t try to account for the fact that we’re likely to spend more in worlds where we see more fundraising success.\nHowever, a sensitivity analysis on the final output showed that the overwhelming majority of the uncertainty in this estimate comes from how many research staff we hire, which matches my expectations and suggests that the model is doing a decent job of tracking the intuitively important variables. 
I also ended up with similar targets when I ran the numbers on our funding status in other ways and when I considered different funding scenarios.\")\n\n\nIn the mainline scenario, our 2019 budget breakdown looks roughly like this: \n\n\n\n\nIn this scenario, we currently have ~1.5 years of reserves on hand. Since we’ve raised ~$4.3M between Jan. 1 and Nov. 25,[3](https://intelligence.org/2018/11/26/miris-2018-fundraiser/#footnote_2_18463 \"This excludes earmarked funding for AI Impacts, an independent research group that’s institutionally housed at MIRI.\") our two targets are:\n\n\n* **Target 1** ($500k), representing the difference between what we’ve raised so far this year, and our point estimate for business-as-usual spending next year.\n* **Target 2** ($1.2M), what’s needed for our funding streams to keep pace with our growth toward the upper end of our projections.[4](https://intelligence.org/2018/11/26/miris-2018-fundraiser/#footnote_3_18463 \"We could also think in terms of a “Target 0.5” of $100k in order to hit the bottom of the range, $4.4M. However, I worried that a $100k target would be misleading given that we’re thinking in terms of a $4.4–5.5M budget.\")\n\n\nBelow, we’ll summarize [**what’s new at MIRI**](https://intelligence.org/feed/?paged=15#1) and talk more about our [**room for more funding**](https://intelligence.org/feed/?paged=15#2).\n\n\n \n\n\n### 1. 
Recent updates\n\n\nWe’ve released a string of new posts on our recent activities and strategy:\n\n\n* **[2018 Update: Our New Research Directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/)** discusses the new set of research directions we’ve been ramping up over the last two years, how they relate to our Agent Foundations research and our goal of “deconfusion,” and why we’ve adopted a “nondisclosed-by-default” policy for this research.\n* **[Embedded Agency](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version)** describes our Agent Foundations research agenda as different angles of attack on a single central difficulty: we don’t know how to characterize good reasoning and decision-making for agents embedded in their environment.\n* **[Summer MIRI Updates](https://intelligence.org/2018/09/01/summer-miri-updates/)** discusses new hires, new donations and grants, and new programs we’ve been running to recruit research staff and grow the total pool of AI alignment researchers.\n\n\nAnd, added **Nov. 
28**:\n\n\n* **[MIRI’s Newest Recruit: Edward Kmett](https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/)** announces our latest hire: noted Haskell developer Edward Kmett, who popularized the use of lenses for functional programming and maintains a large number of the libraries around the Haskell core libraries.\n* **[2017 in Review](https://intelligence.org/2018/11/28/2017-in-review)** recaps our activities and donors’ support from last year.\n\n\nOur 2018 Update also [discusses](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section4) the much wider pool of engineers and computer scientists we’re now trying to recruit, and the much larger total number of people we’re trying to add to the team in the near future:\n\n\n\n> We’re seeking anyone who can cause our “become less [confused](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3) about AI alignment” work to go faster.\n> \n> \n> In practice, this means: people who natively think in math or code, who take seriously the problem of becoming less confused about AI alignment (quickly!), and who are generally capable. In particular, we’re looking for high-end Google programmer levels of capability; you don’t need a 1-in-a-million test score or a halo of destiny. You also don’t need a PhD, explicit ML background, or even prior research experience.\n> \n> \n\n\nIf the above might be you, and the idea of working at MIRI appeals to you, I suggest [sending in a job application](https://intelligence.org/careers/software-engineer/) or shooting your questions at [Buck Shlegeris](mailto:buck@intelligence.org), a researcher at MIRI who’s been helping with our recruiting.\n\n\nWe’re also hiring for Agent Foundations roles, though at a much slower pace. 
For those roles, we recommend interacting with us and other people hammering on AI alignment problems on [Less Wrong and the AI Alignment Forum](https://www.lesswrong.com/posts/FoiiRDC3EhjHx7ayY/introducing-the-ai-alignment-forum-faq), or at local [MIRIx](http://intelligence.org/mirix) groups. We then generally hire Agent Foundations researchers from people we’ve gotten to know through the above channels, visits, and events like the [AI Summer Fellows program](http://www.rationality.org/workshops/apply-aisfp).\n\n\nA great place to start developing intuitions for these problems is Scott Garrabrant’s recently released [**fixed point exercises**](https://www.lesswrong.com/posts/mojJ6Hpri8rfzY78b/fixed-point-exercises), or various resources on the [MIRI Research Guide](http://intelligence.org/research-guide). Some examples of recent public work on Agent Foundations / embedded agency include:\n\n\n* Sam Eisenstat’s [untrollable prior](https://agentfoundations.org/item?id=1750), explained [in illustrated form](https://www.lesswrong.com/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated) by Abram Demski, shows that there is a Bayesian solution to one of the basic problems which motivated the development of [non-Bayesian](https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation) logical uncertainty tools (culminating in [logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/)). 
This informs our picture of what’s possible, and may lead to further progress in the direction of [Bayesian logical uncertainty](https://intelligence.org/2018/11/02/embedded-models/).\n* Scott Garrabrant outlines a [taxonomy](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) of ways that Goodhart’s law can manifest.\n* Sam’s [logical inductor tiling result](https://www.alignmentforum.org/posts/5bd75cc58225bf067037556d/logical-inductor-tiling-and-why-it-s-hard) solves a version of the [tiling problem](https://intelligence.org/files/TilingAgentsDraft.pdf) for logically uncertain agents.\n* [Prisoners’ Dilemma with Costs to Modeling](https://www.lesswrong.com/posts/XjMkPyaPYTf7LrKiT/prisoners-dilemma-with-costs-to-modeling): A modified version of [open-source prisoners’ dilemmas](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/) in which agents must pay resources in order to model each other.\n* [Logical Inductors Converge to Correlated Equilibria (Kinda)](https://www.alignmentforum.org/posts/5bd75cc58225bf0670375569/logical-inductors-converge-to-correlated-equilibria-kinda): A game-theoretic analysis of logical inductors.\n* New results in [Asymptotic Decision Theory](https://www.alignmentforum.org/posts/yXCvYqTZCsfN7WRrg/asymptotic-decision-theory-improved-writeup) and [When EDT=CDT, ADT Does Well](https://www.alignmentforum.org/posts/pgJbaXvYWBx3Mrg5T/when-edt-cdt-adt-does-well) represent incremental progress on understanding what’s possible with respect to learning the right [counterfactuals](https://intelligence.org/2018/10/31/embedded-decisions/).\n\n\nThese results are relatively small, compared to Nate’s forthcoming tiling agents paper or Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant’s forthcoming “The Inner Alignment Problem.” However, they should give a good sense of some of the recent directions we’ve been pushing in with our Agent Foundations work.\n\n\n \n\n\n### 2. 
Room for more funding\n\n\nAs noted above, the biggest sources of uncertainty in our 2019 budget estimates are about how many research staff we hire, and how much we spend on moving to new offices.\n\n\nIn our 2017 fundraiser, we set a goal of hiring 10 new research staff in 2018–2019. So far, we’re up two research staff, with enough promising candidates in the pipeline that I still consider 10 a doable (though ambitious) goal.\n\n\nFollowing the amazing show of support we received from donors last year (and continuing into 2018), we had significantly more funds than we anticipated, and we found more ways to usefully spend them than we expected. In particular, we’ve been able to translate the “bonus” support we received in 2017 into broadening the scope of our recruiting efforts. As a consequence, our 2018 spending, which will come in at around $3.5M, actually matches the point estimate I gave in 2017 for our 2019 budget, rather than my prediction for 2018—a large step up from what I predicted, and an even larger step from last year’s budget of $2.1M.[5](https://intelligence.org/2018/11/26/miris-2018-fundraiser/#footnote_4_18463 \"Quoting our 2017 fundraiser post: “If we succeed, our point estimate is that our 2018 budget will be $2.8M and our 2019 budget will be $3.5M, up from roughly $1.9M in 2017.” The $1.9M figure was an estimate from before 2017 had ended. We’ve now revised this figure to $2.1M, which happens to bring it in line with our 2016 point estimate for how much we’d spend in 2017.\")\n\n\nOur two fundraiser goals, Target 1 (**$500k**) and Target 2 (**$1.2M**), correspond to the budgetary needs we can easily predict and account for. 
Our 2018 went much better as a result of donors’ greatly increased support in 2017, and it’s possible that we’re in a similar situation today, though I’m not confident that this is the case.\n\n\nConcretely, some ways that our decision-making changed as a result of the amazing support we saw were:\n\n\n* We spent more on running all-expenses-paid [AI Risk for Computer Scientists](https://intelligence.org/ai-risk-for-computer-scientists/) workshops. We ran the first such workshop in February, and saw a lot of value in it as a venue for people with relevant technical experience to start reasoning more about AI risk. Since then, we’ve run another seven events in 2018, with more planned for 2019. \n\nAs hoped, these workshops have also generated interest in joining MIRI and other AI safety research teams. This has resulted in one full-time MIRI research staff hire, and on the order of ten candidates with good prospects of joining full-time in 2019, including two recent interns.\n* We’ve been more consistently willing and able to pay competitive salaries for top technical talent. A special case of this is hiring relatively senior research staff like [Edward Kmett](https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/).\n* We raised salaries for some existing staff members. We have a very committed staff, and some staff at MIRI had previously asked for fairly low salaries in order to help keep MIRI’s organizational costs lower. Asking staff members to bear that inconvenience no longer makes much sense at our current organizational size, both in terms of our team’s productivity and in terms of their well-being.\n* We ran a [summer research internship program](https://intelligence.org/2018/09/01/summer-miri-updates/#1), on a larger scale than we otherwise would have.\n* As we’ve considered options for new office space that can accommodate our expansion, we’ve been able to filter less on price relative to fit. 
We’ve also been able to spend more on renovations that we expect to produce a working environment where our researchers can do their work with fewer distractions or inconveniences.\n\n\n2018 brought a positive update about our ability to cost-effectively convert surprise funding increases into (what look from our perspective like) very high-value actions, and the above list hopefully helps clarify what that can look like in practice. We can’t promise to be able to repeat that in 2019 if this fundraiser overshoots its targets, but we have reason for optimism.\n\n\n\n\n---\n\n\n\n\n[Donate Now](https://intelligence.org/donate/)\n----------------------------------------------\n\n\n\n\n\n\n---\n\n1. That is, our business-as-usual model tries to remove one-time outlier costs so that it’s easier to see what “the new normal” is in MIRI’s spending and think about our long-term growth curve.\n2. This estimation is fairly rough and uncertain. One weakness of this model is that it treats the inputs as though they were independent, which is not always the case. I also didn’t try to account for the fact that we’re likely to spend more in worlds where we see more fundraising success.\nHowever, a sensitivity analysis on the final output showed that the overwhelming majority of the uncertainty in this estimate comes from how many research staff we hire, which matches my expectations and suggests that the model is doing a decent job of tracking the intuitively important variables. I also ended up with similar targets when I ran the numbers on our funding status in other ways and when I considered different funding scenarios.\n3. This excludes earmarked funding for [AI Impacts](http://aiimpacts.org), an independent research group that’s institutionally housed at MIRI.\n4. We could also think in terms of a “Target 0.5” of $100k in order to hit the bottom of the range, $4.4M. 
However, I worried that a $100k target would be misleading given that we’re thinking in terms of a $4.4–5.5M budget.\n5. Quoting our [2017 fundraiser post](https://intelligence.org/2017/12/01/miris-2017-fundraiser/): “If we succeed, our point estimate is that our 2018 budget will be $2.8M and our 2019 budget will be $3.5M, up from roughly $1.9M in 2017.” The $1.9M figure was an estimate from before 2017 had ended. We’ve now revised this figure to $2.1M, which happens to bring it in line with our 2016 point estimate for how much we’d spend in 2017.\n\nThe post [MIRI’s 2018 Fundraiser](https://intelligence.org/2018/11/26/miris-2018-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-27T04:54:15Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "c99ddbfacfe3b246e2c07e8475bd4af6", "title": "2018 Update: Our New Research Directions", "url": "https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/", "source": "miri", "source_type": "blog", "text": "For many years, MIRI’s goal has been to resolve enough fundamental confusions around [alignment](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/) and intelligence to enable humanity to think clearly about technical AI safety risks—and to do this before this technology advances to the point of potential catastrophe. This goal has always seemed to us to be difficult, but possible.[1](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_0_18363 \"This post is an amalgam put together by a variety of MIRI staff. The byline saying “Nate” means that I (Nate) endorse the post, and that many of the concepts and themes come in large part from me, and I wrote a decent number of the words. However, I did not write all of the words, and the concepts and themes were built in collaboration with a bunch of other MIRI staff. 
(This is roughly what bylines have meant on the MIRI blog for a while now, and it’s worth noting explicitly.) \")\n\n\nLast year, we said that we were beginning a new research program aimed at this goal.[2](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_1_18363 \"See our 2017 strategic update and fundraiser posts for more details.\") Here, we’re going to provide background on how we’re thinking about this new set of research directions, lay out some of the thinking behind our recent decision to do less default sharing of our research, and make the case for interested software engineers to [join our team](https://intelligence.org/careers/software-engineer/) and help push our understanding forward.\n\n\n\n \n\n\n### Contents:\n\n\n1. [Our research](https://intelligence.org/feed/?paged=15#section1)\n2. [Why deconfusion is so important to us](https://intelligence.org/feed/?paged=15#section2)\n3. [Nondisclosed-by-default research, and how this policy fits into our overall strategy](https://intelligence.org/feed/?paged=15#section3)\n4. [Joining the MIRI team](https://intelligence.org/feed/?paged=15#section4)\n\n\n \n\n\n### 1. Our research\n\n\nIn 2014, MIRI published its first research agenda, “[Agent Foundations for Aligning Machine Intelligence with Human Interests](https://intelligence.org/technical-agenda/).” Since then, one of our main research priorities has been to develop a better conceptual understanding of [embedded agency](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version): formally characterizing reasoning systems that lack a crisp agent/environment boundary, are smaller than their environment, must reason about themselves, and risk having parts that are working at cross purposes. 
These research problems continue to be a major focus at MIRI, and are being studied in parallel with our new research directions (which I’ll be focusing on more below).[3](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_2_18363 \"In past fundraisers, we’ve said that with sufficient funding we would like to spin up alternative lines of attack on the alignment problem. Our new research directions can be seen as following this spirit, and indeed, at least one of our new research directions is heavily inspired by alternative approaches I was considering back in 2015. That said, unlike many of the ideas I had in mind when writing our 2015 fundraiser posts, our new work is quite contiguous with our Agent-Foundations-style research.\")\n\n\nFrom our perspective, the point of working on these kinds of problems isn’t that solutions directly tell us how to build well-aligned AGI systems. Instead, the point is to resolve confusions we have around ideas like “alignment” and “AGI,” so that future AGI developers have an unobstructed view of the problem. Eliezer illustrates this idea in “[The Rocket Alignment Problem](https://intelligence.org/2018/10/03/rocket-alignment/),” which imagines a world where humanity tries to land on the Moon before it understands Newtonian mechanics or calculus.\n\n\nRecently, some MIRI researchers developed new research directions that seem to enable more scalable progress towards resolving these fundamental confusions. 
Specifically, the progress is more scalable in researcher hours—it’s now the case that we believe excellent engineers coming from a variety of backgrounds can have their work efficiently converted into research progress at MIRI—where previously, we only knew how to speed our research progress with a (relatively atypical) breed of mathematician.\n\n\nAt the same time, we’ve seen some significant [financial success](https://intelligence.org/2018/01/10/fundraising-success/) over the past year—not so much that funding is no longer a constraint at all, but enough to pursue our research agenda from new and different directions, in addition to the old.\n\n\nFurthermore, our view implies that haste is essential. We see AGI as [a likely cause of existential catastrophes](https://www.econlib.org/archives/2016/03/so_far_unfriend.html), especially if it’s developed with relatively brute-force-reliant, difficult-to-interpret techniques; and although we’re [quite uncertain](https://intelligence.org/2017/10/13/fire-alarm/) about when humanity’s collective deadline will come to pass, many of us are somewhat alarmed by the speed of recent machine learning progress.\n\n\nFor these reasons, we’re eager to locate the right people quickly and offer them work on these new approaches; and with this kind of help, it strikes us as very possible that we can resolve enough fundamental confusion in time to port the understanding to those who will need it before AGI is built and deployed.\n\n\n \n\n\nComparing our new research directions and Agent Foundations\n\n\nOur new research directions involve building software systems that we can use to test our intuitions, and building infrastructure that allows us to rapidly iterate this process. 
Like the Agent Foundations agenda, our new research directions continue to focus on “deconfusion,” rather than on, e.g., trying to improve robustness metrics of current systems—our sense being that even if we make major strides on this kind of robustness work, an AGI system built on principles similar to today’s systems would still be too opaque to align in practice.\n\n\n\nIn a sense, you can think of our new research as tackling the same sort of problem that we’ve always been attacking, but from new angles. In other words, if you aren’t excited about [logical inductors](https://intelligence.org/2016/09/12/new-paper-logical-induction/) or [functional decision theory](https://arxiv.org/abs/1710.05060), you probably wouldn’t be excited by our new work either. Conversely, if you already have the sense that becoming less confused is a sane way to approach AI alignment, and you’ve been wanting to see those kinds of confusions attacked with software and experimentation in a manner that yields theoretical satisfaction, then you may well want to work at MIRI. (I’ll have more to say about this [below](https://intelligence.org/feed/?paged=15#section4).)\n\n\nOur new research directions stem from some distinct ideas had by Benya Fallenstein, Eliezer Yudkowsky, and myself (Nate Soares). Some high-level themes of these new directions include:\n\n\n \n\n\n1. **Seeking entirely new low-level foundations for optimization**, designed for transparency and alignability from the get-go, as an alternative to gradient-descent-style machine learning foundations.\n \n\n\nNote that this does not entail trying to beat modern ML techniques on computational efficiency, speed of development, ease of deployment, or other such properties. 
However, it does mean developing new foundations for optimization that are broadly applicable in the same way, and for some of the same reasons, that gradient descent scales to be broadly applicable, while possessing significantly better alignment characteristics.\n\n\n \n\n\nWe’re aware that there are many ways to attempt this that are shallow, foolish, or otherwise doomed; and in spite of this, we believe our own research avenues have a shot.\n2. **Endeavoring to figure out parts of cognition that can be very transparent as cognition**, without being GOFAI or completely disengaged from subsymbolic cognition.\n3. **Experimenting with some specific alignment problems** that are deeper than problems that have previously been put into computational environments.\n\n\n \n\n\nIn common between all our new approaches is a focus on using high-level theoretical abstractions to enable coherent reasoning about the systems we build. A concrete implication of this is that we write lots of our code in Haskell, and are often thinking about our code through the lens of type theory.\n\n\nWe aren’t going to distribute the technical details of this work anytime soon, in keeping with the recent MIRI policy changes [discussed below](https://intelligence.org/feed/?paged=15#section3). However, we have a good deal to say about this research on the meta level.\n\n\nWe are excited about these research directions, both for their present properties and for the way they seem to be developing. When Benya began the predecessor of this work ~3 years ago, we didn’t know whether her intuitions would pan out. 
Today, having watched the pattern by which research avenues in these spaces have opened up new exciting-feeling lines of inquiry, none of us expect this research to die soon, and some of us are hopeful that this work may eventually open pathways to attacking the entire list of basic alignment issues.[4](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_3_18363 \"That is, the requisites for aligning AGI systems to perform limited tasks; not all of the requisites for aligning a full CEV-class autonomous AGI. Compare Paul Christiano’s distinction between ambitious and narrow value learning (though note that Paul thinks narrow value learning is sufficient for strongly autonomous AGI).\")\n\n\nWe are similarly excited by the extent to which useful cross-connections have arisen between initially-unrelated-looking strands of our research. During a period where I was focusing primarily on new lines of research, for example, I stumbled across a solution to the original version of the [tiling agents problem](https://intelligence.org/files/TilingAgentsDraft.pdf) from the Agent Foundations agenda.[5](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_4_18363 \"This result is described more in a paper that will be out soon. Or, at least, eventually. I’m not putting a lot of time into writing papers these days, for reasons discussed below.\")\n\n\nThis work seems to “give out its own guideposts” more than the Agent Foundations agenda does. While we used to require extremely close fit of our hires on research taste, we now think we have enough sense of the terrain that we can relax those requirements somewhat. 
We’re still looking for hires who are scientifically innovative and who are *fairly* close on research taste, but our work is now much more scalable with the number of good mathematicians and engineers working at MIRI.\n\n\nWith all of that said, and despite how promising the last couple of years have seemed to us, this is still “blue sky” research in the sense that we’d guess most outside MIRI would still regard it as of academic interest but of no practical interest. The more principled/coherent/alignable optimization algorithms we are investigating are not going to sort cat pictures from non-cat pictures anytime soon.\n\n\nThe thing that generally excites us about research results is the extent to which they grant us “deconfusion” in the sense described in the next section, not the ML/engineering power they directly enable. This “deconfusion” that they allegedly reflect must, for the moment, be discerned mostly via abstract arguments supported only weakly by concrete “look what this understanding lets us do” demos. Many of us at MIRI regard our work as being of strong practical relevance nonetheless—but that is because we have long-term models of what sorts of short-term feats indicate progress, and because we view becoming less confused about alignment as having a strong practical relevance to humanity’s future, for reasons that I’ll sketch out next.\n\n\n \n\n\n### 2. Why deconfusion is so important to us\n\n\n \n\n\nWhat we mean by deconfusion\n\n\nQuoting Anna Salamon, the president of the Center for Applied Rationality and a MIRI board member:\n\n\n \n\n\n\n> \n> If I didn’t have the concept of deconfusion, MIRI’s efforts would strike me as mostly inane. MIRI continues to regard its own work as significant for human survival, despite the fact that many larger and richer organizations are now talking about AI safety. 
It’s a group that got all excited about [Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) (and tried paranoidly to make sure Logical Induction “wasn’t dangerous” [before](https://www.alignmentforum.org/posts/iBBK4j6RWC7znEiDv/history-of-the-development-of-logical-induction) releasing it)—even though Logical Induction had only a moderate amount of math and no practical engineering at all (and did something similar with [Timeless Decision Theory](https://intelligence.org/files/TDT.pdf), to pick an even more extreme example). It’s a group that continues to stare mostly at basic concepts, sitting reclusively off by itself, while mostly leaving questions of politics, outreach, and how much influence the AI safety community has, to others.\n> \n> \n> However, I do have the concept of deconfusion. And when I look at MIRI’s activities through that lens, MIRI seems to me much more like “oh, yes, good, someone *is* taking a straight shot at what looks like the critical thing” and “they seem to have a fighting chance” and “gosh, I hope they (or someone somehow) solve many many more confusions before the deadline, because without such progress, humanity sure seems kinda sunk.”\n> \n> \n> \n\n\n \n\n\nI agree that MIRI’s perspective and strategy don’t make much sense without the idea I’m calling “deconfusion.” As someone reading a MIRI strategy update, you probably already partly have this concept, but I’ve found that it’s not trivial to transmit the full idea, so I ask your patience as I try to put it into words.\n\n\nBy deconfusion, I mean something like “making it so that you can think about a given topic without continuously accidentally spouting nonsense.”\n\n\nTo give a concrete example, my thoughts about infinity as a 10-year-old were made of rearranged confusion rather than of anything coherent, as were the thoughts of even the best mathematicians from 1700. “How can 8 plus infinity still be infinity? 
What happens if we subtract infinity from both sides of the equation?” But my thoughts about infinity as a 20-year-old were *not* similarly confused, because, by then, I’d been exposed to the more coherent concepts that later mathematicians labored to produce. I wasn’t as smart or as good of a mathematician as Georg Cantor or the best mathematicians from 1700; but deconfusion can be transferred between people; and this transfer can spread the ability to think actually coherent thoughts.\n\n\nIn 1998, conversations about AI risk and technological singularity scenarios often went in circles in a funny sort of way. People who are serious thinkers about the topic today, including my colleagues Eliezer and Anna, said things that today sound confused. (When I say “things that sound confused,” I have in mind things like “isn’t intelligence an [incoherent](https://arbital.com/p/general_intelligence/) concept,” “but the economy’s [already](http://slatestarcodex.com/2015/12/27/things-that-are-not-superintelligences/) superintelligent,” “if a superhuman AI is smart enough that it could kill us, it’ll also be [smart enough](https://arbital.com/p/orthogonality/) to see that that isn’t what the good thing to do is, so we’ll be fine,” “we’re Turing-complete, so it’s impossible to have something dangerously [smarter](https://intelligence.org/2017/12/06/chollet/) than us, because Turing-complete computations can emulate anything,” and “anyhow, we could just [unplug](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/#3) it.”) Today, these conversations are different. 
In between, folks worked to make themselves and others less fundamentally confused about these topics—so that today, a 14-year-old who wants to skip to the end of all that incoherence can just pick up a copy of Nick Bostrom’s [*Superintelligence*](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies).[6](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_5_18363 \"For more discussion of this concept, see “Personal Thoughts on Careers in AI Policy and Strategy” by Carrick Flynn.\")\n\n\nOf note is the fact that the “take AI risk and technological singularities seriously” meme started to spread to the larger population of ML scientists only after its main proponents attained sufficient deconfusion. If you were living in 1998 with a strong intuitive sense that AI risk and technological singularities should be taken seriously, but you still possessed a host of confusion that caused you to occasionally spout nonsense as you struggled to put things into words in the face of various confused objections, then evangelism would do you little good among serious thinkers—perhaps because the respectable scientists and engineers in the field can smell nonsense, and can tell (correctly!) that your concepts are still incoherent. It’s by accumulating deconfusion until your concepts cohere and your arguments become well-formed that your ideas can become memetically fit and spread among scientists—and can serve as foundations for future work by those same scientists.\n\n\nInterestingly, the history of science is in fact full of instances in which individual researchers possessed a mostly-correct body of intuitions for a long time, and then eventually those intuitions were formalized, corrected, made precise, and transferred between people. 
Faraday discovered a wide array of electromagnetic phenomena, guided by an intuition that he wasn’t able to formalize or transmit except through hundreds of pages of detailed laboratory notes and diagrams; Maxwell later invented the language to describe electromagnetism formally by reading Faraday’s work, and expressed those hundreds of pages of intuitions in three lines.\n\n\nAn even more striking example is the case of Archimedes, who intuited his way to the ability to do useful work in both integral and differential calculus thousands of years before calculus became a simple formal thing that could be passed between people.\n\n\nIn both cases, it was the eventual formalization of those intuitions—and the linked ability of these intuitions to be passed accurately between many researchers—that allowed the fields to begin building properly and quickly.[7](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_6_18363 \"Historical examples of deconfusion work that gave rise to a rich and healthy field include the distillation of Lagrangian and Hamiltonian mechanics from Newton’s laws; Cauchy’s overhaul of real analysis; the slow acceptance of the usefulness of complex numbers; and the development of formal foundations of mathematics.\")\n\n\n \n\n\nWhy deconfusion (on our view) is highly relevant to AI accident risk\n\n\nIf human beings eventually build smarter-than-human AI, and if smarter-than-human AI is as powerful and hazardous as we currently expect it to be, then AI will one day bring enormous forces of optimization to bear.[8](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_7_18363 \"I should emphasize that from my perspective, humanity never building AGI, never realizing our potential, and failing to make use of the cosmic endowment would be a tragedy comparable (on an astronomical scale) to AGI wiping us out. 
I say “hazardous”, but we shouldn’t lose sight of the upside of humanity getting the job done right.\") We believe that when this occurs, those enormous forces need to be brought to bear on real-world problems and subproblems deliberately, in a context where they’re theoretically well-understood. The larger those forces are, the more precision is called for when researchers aim them at cognitive problems.\n\n\nWe suspect that today’s concepts about things like “optimization” and “aiming” are incapable of supporting the necessary precision, even if wielded by researchers who care a lot about safety. Part of why I think this is that if you pushed me to explain what I mean by “optimization” and “aiming,” I’d need to be careful to avoid spouting nonsense—which indicates that I’m still confused somewhere around here.\n\n\nA worrying fact about this situation is that, as best I can tell, humanity *doesn’t* need coherent versions of these concepts to [hill-climb](https://en.wikipedia.org/wiki/Hill_climbing) its way to AGI. Evolution hill-climbed that distance, and evolution had no model of what it was doing. But as evolution applied massive optimization pressure to genomes, those genomes started coding for brains that *internally* optimized for targets that merely correlated with genetic fitness. 
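To make the hill-climbing picture concrete, here is a minimal sketch (my own illustration, not from the post; the objective function and step size are arbitrary choices): a greedy local search that improves its score while maintaining no model of what the score means.

```python
import random

def hill_climb(score, x, step=0.1, iters=1000):
    """Greedy local search: propose a random nearby point, and keep it
    only if it scores strictly higher. The optimizer has no model of
    what `score` measures; it only compares numbers."""
    best = score(x)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if score(candidate) > best:
            x, best = candidate, score(candidate)
    return x

# Climbs toward the peak (x = 3) of an arbitrary one-humped landscape.
peak = hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0)
```

The point of the sketch is that nothing in `hill_climb` refers to what the landscape *is*—which is exactly why the process can succeed while remaining confused about its own target.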
Humans find ever-smarter ways to satisfy our own goals (video games, ice cream, birth control…) even when this runs directly counter to the selection criterion that gave rise to us: “propagate your genes into the next generation.”\n\n\nIf we are to avoid a similar fate—one where we attain AGI via huge amounts of gradient descent and other optimization techniques, only to find that the resulting system has internal optimization targets that are very different from the targets we externally optimized it to be adept at pursuing—then we must be more careful.\n\n\nAs AI researchers explore the space of optimizers, what will it take to ensure that the first highly capable optimizers that researchers find are optimizers they know how to aim at chosen tasks? I’m not sure, because I’m still in some sense confused about the question. I can tell you vaguely how the problem relates to [convergent instrumental incentives](https://en.wikipedia.org/wiki/Instrumental_convergence), and I can observe various reasons why we shouldn’t expect the strategy “train a large cognitive system to optimize for *X*” to actually result in a system that [internally optimizes](https://intelligence.org/2017/04/12/ensuring/#3) for *X*, but there are still wide swaths of the question where I can’t say much without saying nonsense.\n\n\nAs an example, AI systems like Deep Blue and AlphaGo cannot reasonably be said to be reasoning about the whole world. They’re reasoning about some much simpler abstract platonic environment, such as a Go board. There’s an intuitive sense in which we don’t need to worry about these systems taking over the world, for this reason (among others), even in the world where those systems are run on implausibly large amounts of compute.\n\n\nVaguely speaking, there’s a sense in which some alignment difficulties don’t arise until an AI system is “reasoning about the real world.” But what does that mean? 
It doesn’t seem to mean “the space of possibilities that the system considers literally concretely includes reality itself.” Ancient humans did perfectly good general reasoning even while utterly lacking the concept that the universe can be described by specific physical equations.\n\n\nIt looks like it must mean something more like “the system is building internal models that, in some sense, are little representations of the whole of reality.” But what counts as a “little representation of reality,” and why do a hunter-gatherer’s confused thoughts about a spirit-riddled forest count while a chessboard doesn’t? All these questions are likely confused; my goal here is not to name coherent questions, but to gesture in the direction of a confusion that prevents me from precisely naming a portion of the alignment problem.\n\n\nOr, to put it briefly: precisely naming a problem is half the battle, and we are currently confused about how to precisely name the alignment problem.\n\n\nFor an alternative attempt to name this concept, refer to Eliezer’s [rocket alignment](https://intelligence.org/2018/10/03/rocket-alignment/) analogy. For a further discussion of some of the reasons today’s concepts seem inadequate for describing an aligned intelligence with sufficient precision, see Scott and Abram’s [recent write-up](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version). (Or come discuss with us in person, at an “[AI Risk for Computer Scientists](https://intelligence.org/ai-risk-for-computer-scientists/)” workshop.)\n\n\n \n\n\nWhy this research may be tractable here and now\n\n\nMany types of research become far easier at particular places and times. It seems to me that for the work of becoming less confused about AI alignment, MIRI in 2018 (and for a good number of years to come, I think) is one of those places and times.\n\n\nWhy? 
One point is that MIRI has some history of success at deconfusion-style research (according to me, at least), and MIRI’s researchers are beneficiaries of the local research traditions that grew up in dialog with that work. Among the bits of conceptual progress that MIRI contributed to are:\n\n\n \n\n\n* today’s understanding that AI accident risk is important;\n* today’s understanding that an aligned AI is at least a theoretical possibility (the [Gandhi argument](https://intelligence.org/files/ComplexValues.pdf) that consequentialist preferences are reflectively stable by default, etc.), and that it’s worth investing in basic research toward the possibility of such an AI in advance;\n* early statements of subproblems like [corrigibility,](https://arbital.com/p/updated_deference/) the [Löbian obstacle](https://intelligence.org/files/TilingAgentsDraft.pdf), and [subsystem alignment](https://intelligence.org/2018/11/06/embedded-subsystems/), including descriptions of various problems in the [Agent Foundations](https://intelligence.org/technical-agenda/) research agenda;\n* [timeless decision theory](https://intelligence.org/files/TDT.pdf) and its successors ([updateless decision theory](https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory) and [functional decision theory](https://intelligence.org/2017/10/22/fdt/));\n* [logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/);\n* [reflective oracles](https://arxiv.org/abs/1508.04145); and\n* many smaller results in the vicinity of the Agent Foundations agenda, notably [robust cooperation](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/) in the one-shot prisoner’s dilemma, [universal inductors](https://agentfoundations.org/item?id=941), and [model polymorphism](https://www.lesswrong.com/posts/DDJr5fuR5jeD47k9g/an-angle-of-attack-on-open-problem-1), 
[HOL-in-HOL](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/), and more recent progress on Vingean reflection.\n\n\n \n\n\nLogical inductors, as an example, give us at least a clue about why we’re apt to informally use words like “probably” in mathematical reasoning. It’s not a full answer to “how does probabilistic reasoning about mathematical facts work?”, but it does feel like an interesting hint—which is relevant to thinking about how “real-world” AI reasoning could possibly work, because AI systems might well also use probabilistic reasoning in mathematics.\n\n\nA second point is that, if there is something that unites most folks at MIRI *besides* a drive to increase the odds of human survival, it is probably a taste for getting our understanding of the foundations of the universe right. Many of us came in with this taste—for example, many of us have backgrounds in physics (and fundamental physics in particular), and those of us with a background in programming tend to have an interest in things like type theory, formal logic, and/or probability theory.\n\n\nA third point, as noted [above](https://intelligence.org/feed/?paged=15#section1-excited), is that we are excited about our current bodies of research intuitions, and about how they seem increasingly transferable/cross-applicable/concretizable over time.\n\n\nFinally, I observe that the field of AI at large is currently highly vitalized, largely by the deep learning revolution and various other advances in machine learning. We are not particularly focused on deep neural networks ourselves, but being in contact with a vibrant and exciting practical field is the sort of thing that tends to spark ideas. 2018 really seems like an unusually easy time to be seeking a theoretical science of AI alignment, in dialog with practical AI methods that are beginning to work.\n\n\n \n\n\n### 3. 
Nondisclosed-by-default research, and how this policy fits into our overall strategy\n\n\nMIRI recently decided to make most of its research “nondisclosed-by-default,” by which we mean that going forward, most results discovered within MIRI will remain internal-only unless there is an explicit decision to release those results, based usually on a specific anticipated safety upside from their release.\n\n\nI’d like to try to share some sense of why we chose this policy—especially because this policy may prove disappointing or inconvenient for many people interested in AI safety as a research area.[9](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_8_18363 \"My own feeling is that I and other senior staff at MIRI have never been particularly good at explaining what we’re doing and why, so this inconvenience may not be a new thing. It’s new, however, for us to not be making it a priority to attempt to explain where we’re coming from.\") MIRI is a nonprofit, and there’s a natural default assumption that our mechanism for good is to regularly publish new ideas and insights. 
But we don’t think this is currently the right choice for serving our nonprofit mission.\n\n\nThe short version of why we chose this policy is:\n\n\n \n\n\n* we’re in a hurry to decrease existential risk;\n* in the same way that Faraday’s journals aren’t nearly as useful as Maxwell’s equations, and in the same way that logical induction isn’t all that useful to the average modern ML researcher, we don’t think it would be that useful to try to share lots of half-confused thoughts with a wider set of people;\n* we believe we can have more of the critical insights faster if we stay focused on making new research progress rather than on exposition, and if we aren’t feeling pressure to justify our intuitions to wide audiences;\n* we think it’s not unreasonable to be anxious about whether deconfusion-style insights could lead to capabilities insights, and have empirically observed we can think more freely when we don’t have to worry about this; and\n* even when we conclude that those concerns were paranoid or silly upon reflection, we benefited from moving the cognitive work of evaluating those fears from “before internally sharing insights” to “before broadly distributing those insights,” which is enabled by this policy.\n\n\n \n\n\nThe somewhat longer version is below.\n\n\nI’ll caveat that in what follows I’m attempting to convey what I believe, but not necessarily why—I am not trying to give an argument that would cause any rational person to take the same strategy in my position; I am shooting only for the more modest goal of conveying how I myself am thinking about the decision.\n\n\nI’ll begin by saying a few words about how our research fits into our overall strategy, then discuss the pros and cons of this policy.\n\n\n \n\n\nWhen we say we’re doing AI alignment research, we really genuinely don’t mean outreach\n\n\nAt present, MIRI’s aim is to make research progress on the alignment problem. 
Our focus isn’t on shifting the field of ML toward taking AGI safety more seriously, nor on any other form of influence, persuasion, or field-building. We are simply and only aiming to directly make research progress on the core problems of alignment.\n\n\nThis choice may seem surprising to some readers—field-building and other forms of outreach can obviously have hugely beneficial effects, and throughout MIRI’s history, we’ve been much more outreach-oriented than the typical math research group.\n\n\nOur impression is indeed that well-targeted outreach efforts can be highly valuable. However, attempts at outreach/influence/field-building seem to us to currently constitute a large majority of worldwide research activity that’s motivated by AGI safety concerns,[10](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_9_18363 \"In other words, many people are explicitly focusing only on outreach, and many others are selecting technical problems to work on with a stated goal of strengthening the field and drawing others into it.\") such that MIRI’s time is better spent on taking a straight shot at the core research problems. Further, we think our own comparative advantage lies here, and not in outreach work.[11](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_10_18363 \"This isn’t meant to suggest that nobody else is taking a straight shot at the core problems. For example, OpenAI’s Paul Christiano is a top-tier researcher who is doing exactly that. But we nonetheless want more of this on the present margin.\")\n\n\nMy beliefs here are connected to my beliefs about the mechanics of deconfusion described [above](https://intelligence.org/feed/?paged=15#section2). 
In particular, I believe that the alignment problem might start seeming significantly easier once it can be precisely named, and I believe that precisely naming this sort of problem is likely to be a serial challenge—in the sense that some deconfusions cannot be attained until other deconfusions have matured. Additionally, my read on history says that deconfusions regularly come from relatively small communities thinking the right kinds of thoughts (as in the case of Faraday and Maxwell), and that such deconfusions can spread rapidly as soon as the surrounding concepts become coherent (as exemplified by Bostrom’s *Superintelligence*). I conclude from all this that trying to influence the wider field isn’t the best place to spend our own efforts.\n\n\n \n\n\nIt is difficult to predict whether successful deconfusion work could spark capability advances\n\n\nWe think that most of MIRI’s expected impact comes from worlds in which our deconfusion work eventually succeeds—that is, worlds where our research eventually leads to a principled understanding of alignable optimization that can be communicated to AI researchers, more akin to a modern understanding of calculus and differential equations than to Faraday’s notebooks (with the caveat that most of us aren’t expecting solutions to the alignment problem to compress nearly so well as calculus or Maxwell’s equations, but I digress).\n\n\nOne pretty plausible way this could go is that our deconfusion work makes alignment possible, without much changing the set of available pathways to AGI.[12](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_11_18363 \"For example, perhaps the easiest path to unalignable AGI involves following descendants of today’s gradient descent and deep learning techniques, and perhaps the same is true for alignable AGI.\") To pick a trivial analogy illustrating this sort of world, consider [interval arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic) 
as compared to the usual way of doing floating point operations. In interval arithmetic, an operation like *sqrt* takes two floating point numbers, a lower and an upper bound, and returns a lower and an upper bound on the result. Figuring out how to do interval arithmetic requires some careful thinking about the error of floating-point computations, and it certainly won’t speed those computations up; the only reason to use it is to ensure that the error incurred in a floating point operation isn’t larger than the user assumed. If you discover interval arithmetic, you’re at no risk of speeding up modern matrix multiplications, despite the fact that you really have found a new way of doing arithmetic that has certain desirable properties that normal floating-point arithmetic lacks.\n\n\nIn worlds where deconfusing ourselves about alignment leads us primarily to insights similar (on this axis) to interval arithmetic, it would be best for MIRI to distribute its research as widely as possible, especially once it has reached a stage where it is comparatively easy to communicate, in order to encourage AI capabilities researchers to adopt and build upon it.\n\n\nHowever, it is also plausible to us that a successful theory of alignable optimization may itself spark new research directions in AI capabilities. For an analogy, consider the progression from classical probability theory and statistics to a modern deep neural net classifying images. 
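Before moving on, the interval-arithmetic *sqrt* mentioned above can be sketched in a few lines (my own illustrative code, not from the post; it leans on the fact that Python’s `math.sqrt` is correctly rounded):

```python
import math

def interval_sqrt(lo, hi):
    """Interval version of sqrt: maps a lower/upper bound on the input
    to a lower/upper bound on the output. Since math.sqrt is correctly
    rounded, widening each endpoint by one float step (math.nextafter)
    guarantees the true square root lies inside the returned interval."""
    assert 0.0 <= lo <= hi
    return (math.nextafter(math.sqrt(lo), -math.inf),
            math.nextafter(math.sqrt(hi), math.inf))

# The exact sqrt(2) is guaranteed to lie strictly between these bounds --
# the "desirable property" that plain floating-point sqrt lacks.
lo, hi = interval_sqrt(2.0, 2.0)
```

As in the text, this buys a guarantee on the error bound, not speed: each interval operation does strictly more work than its ordinary floating-point counterpart.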
Probability theory alone does not let you classify cat pictures, and it is possible to understand and implement an image classification network without thinking much about probability theory; but probability theory and statistics were central to the way machine learning was actually discovered, and still underlie how modern deep learning researchers think about their algorithms.\n\n\nIn worlds where deconfusing ourselves about alignment leads to insights similar (on this axis) to probability theory, it is much less clear whether distributing our results widely would have a positive impact. It goes without saying that we want to have a positive impact (or, at the very least, a neutral impact), even in those sorts of worlds.\n\n\nThe latter scenario is relatively less important in worlds where [AGI timelines](https://intelligence.org/2017/10/13/fire-alarm/) are short. If current deep learning research is already on the brink of AGI, for example, then it becomes less plausible that the results of MIRI’s deconfusion work could become a relevant influence on AI capabilities research, and most of the potential impact of our work would come from its direct applicability to deep-learning-based systems. While many of us at MIRI believe that short timelines are at least plausible, there is significant uncertainty and disagreement about timelines inside MIRI, and I would not feel comfortable committing to a course of action that is safe only in worlds where timelines are short.\n\n\nIn sum, if we continue to make progress on, and eventually substantially succeed at, figuring out the actual “cleave nature at its joints” concepts that let us think coherently about alignment, I find it quite plausible that those same concepts may also enable capabilities boosts (especially in worlds where there’s a lot of time for those concepts to be pushed in capabilities-facing directions). 
There is certainly strong historical precedent for deep scientific insights yielding unexpected practical applications.\n\n\nBy the nature of deconfusion work, it seems very difficult to predict in advance which other ideas a given insight may unlock. These considerations seem to us to call for conservatism and delay on information releases—potentially very long delays, as it can take quite a bit of time to figure out where a given insight leads.\n\n\n \n\n\nWe need our researchers to not have walls within their own heads\n\n\nWe take our research seriously at MIRI. This means that, for many of us, we know in the back of our minds that deconfusion-style research could sometimes (often in an unpredictable fashion) open up pathways that can lead to capabilities insights in the manner discussed above. As a consequence, many MIRI researchers flinch away from having insights when they haven’t spent a lot of time thinking about the potential capabilities implications of those insights down the line—and they usually haven’t spent that time, because it requires a bunch of cognitive overhead. This effect has been evidenced in reports from researchers, myself included, and we’ve empirically observed that when we set up “closed” research retreats or research rooms,[13](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_12_18363 \"In other words, retreats/rooms where it is common knowledge that all thoughts and ideas are not going to be shared, except perhaps after some lengthy and irritating bureaucratic process and with everyone’s active support.\") researchers report that they can think more freely, that their brainstorming sessions extend further and wider, and so on.\n\n\nThis sort of inhibition seems quite bad for research progress. 
It is not a small area that our researchers were (un- or semi-consciously) holding back from; it’s a reasonably wide swath that may well include most of the deep ideas or insights we’re looking for.\n\n\nAt the same time, this kind of caution is an unavoidable consequence of doing deconfusion research in public, since it’s very hard to know what ideas may follow five or ten years after a given insight. AI alignment work and AI capabilities work are close enough neighbors that many insights in the vicinity of AI alignment are “potentially capabilities-relevant until proven harmless,” both for reasons discussed above and from the perspective of the conservative [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) we try to encourage around here.\n\n\nIn short, if we request that our brains come up with alignment ideas that are fine to share with everybody—and this is what we’re implicitly doing when we think of ourselves as “researching publicly”—then we’re requesting that our brains cut off the massive portion of the search space that is only probably safe.\n\n\nIf our goal is to make research progress as quickly as possible, in hopes of having concepts coherent enough to allow rigorous safety engineering by the time AGI arrives, then it seems worth finding ways to allow our researchers to think without constraints, even when those ways are somewhat expensive.\n\n\n \n\n\nFocus seems unusually useful for this kind of work\n\n\nThere may be some additional speed-up effects from helping free up researchers’ attention, though we don’t consider this a major consideration on its own.\n\n\nHistorically, early-stage scientific work has often been done by people who were solitary or geographically isolated, perhaps because this makes it easier to slowly develop a new way to factor the phenomenon, instead of repeatedly translating ideas into the current language others are using. 
It’s difficult to describe how much mental space and effort turns out to be taken up with thoughts of how your research will look to other people staring at you, until you try going into a closed room for an extended period of time with a promise to yourself that all the conversation within it really won’t be shared at all anytime soon.\n\n\nOnce we realized this was going on, we realized that in retrospect, we may have been ignoring common practice, in a way. Many startup founders have reported finding stealth mode, and funding that isn’t from VC outsiders, tremendously useful for focus. For this reason, we’ve also recently been encouraging researchers at MIRI to worry less about appealing to a wide audience when doing public-facing work. We want researchers to focus mainly on whatever research directions they find most compelling, make exposition and distillation a secondary priority, and not worry about optimizing ideas for persuasiveness or for being easier to defend.\n\n\n \n\n\nEarly deconfusion work just isn’t that useful (yet)\n\n\nML researchers aren’t running around using logical induction or functional decision theory. These theories don’t have practical relevance to the researchers on the ground, and they’re not supposed to; the point of these theories is just deconfusion.\n\n\nTo put it more precisely, the theories themselves aren’t the interesting novelty; the novelty is that a few years ago, we couldn’t write down *any* theory of how in principle to assign sane-seeming probabilities to mathematical facts, and today we can write down logical induction. In the journey from point *A* to point *B*, we became less confused. 
The logical induction paper is an artifact witnessing that deconfusion, and an artifact which granted its authors additional deconfusion as they went through the process of writing it; but the thing that excited me about logical induction was not any one particular algorithm or theorem in the paper, but rather the fact that we’re a little bit less in-the-dark than we were about how a reasoner can reasonably assign probabilities to logical sentences. We’re not fully out of the dark on this front, mind you, but we’re a little less confused than we were before.[14](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_13_18363 \"As an aside, perhaps my main discomfort with attempting to publish academic papers is that there appears to be no venue in AI where we can go to say, “Hey, check this out—we used to be confused about X, and now we can say Y, which means we’re a little bit less confused!” I think there are a bunch of reasons behind this, not least the fact that the nature of confusion is such that Y usually sounds obviously true once stated, and so it’s particularly difficult to make such a result sound like an impressive practical result.\nA side effect of this, unfortunately, is that all MIRI papers that I’ve ever written with the goal of academic publishing do a pretty bad job of saying what I was previously confused about, and how the “result” is indicative of me becoming less confused—for which I hereby apologize.\")\n\n\nIf the rest of the world were talking about how confusing they find the AI alignment topics we’re confused about, and were as concerned about their confusions as we are concerned about ours, then failing to share our research would feel a lot more costly to me. 
But as things stand, most people in the space look at us kind of funny when we say that we’re excited about things like logical induction, and I repeatedly encounter deep misunderstandings when I talk to people who have read some of our papers and tried to infer our research motivations, from which I conclude that they weren’t drawing a lot of benefit from my current ramblings anyway.\n\n\nAnd in a sense most of our current research is a form of rambling—in the same way, at best, that Faraday’s journal was rambling. It’s OK if most practical scientists avoid slogging through Faraday’s journal and wait until Maxwell comes along and distills the thing down to three useful equations. And, if Faraday expects that physical theories eventually distill, he doesn’t need to go around evangelizing his journal—he can just wait until it’s been distilled, and then work to transmit some less-confused concepts.\n\n\nWe expect our understanding of alignment, which is currently far from complete, to eventually distill, and I, at least, am not very excited about attempting to push it on anyone until it’s significantly more distilled. (Or, barring full distillation, until a project with a commitment to the common good, an adequate [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/), and a large professed interest in deconfusion research comes knocking.)\n\n\nIn the interim, there are of course some researchers outside MIRI who care about the same problems we do, and who are also pursuing deconfusion. Our nondisclosed-by-default policy will negatively affect our ability to collaborate with these people on our other research directions, and this is a real cost and not worth dismissing. 
I don’t have much more to say about this here beyond noting that if you’re one of those people, you’re very welcome to [get in touch with us](mailto:contact@intelligence.org) (and you may want to consider [joining the team](https://intelligence.org/feed/?paged=15#section4))!\n\n\n \n\n\nWe’ll have a better picture of what to share or not share in the future\n\n\nIn the long run, if our research is going to be useful, our findings will need to go out into the world where they can impact how humanity builds AI systems. However, it doesn’t follow from this need for eventual distribution (of some sort) that we might as well publish all of our research immediately. As discussed above, as best I can tell, our current research insights just aren’t that practically useful, and sharing early-stage deconfusion research is time-intensive.\n\n\nOur nondisclosed-by-default policy also allows us to preserve options like:\n\n\n* deciding which research findings we think should be developed further, while thinking about [differential technological development](https://nickbostrom.com/existential/risks.html); and\n* deciding which group(s) to share each interesting finding with (e.g., the general public, other closed safety research groups, groups with strong commitment to security mindset and the common good, etc.).\n\n\nFuture versions of us obviously have better abilities to make calls on these sorts of questions, though this needs to be weighed against many facts that push in the opposite direction—the later we decide what to release, the less time others have to build upon it, and the more likely it is to be found independently in the interim (thereby wasting time on duplicated efforts), and so on.\n\n\nNow that I’ve listed reasons in favor of our nondisclosed-by-default policy, I’ll note some reasons against.\n\n\n \n\n\nConsiderations pulling against our nondisclosed-by-default policy\n\n\nThere are a host of pathways via which our work will be harder with this 
nondisclosed-by-default policy:\n\n\n \n\n\n1. We will have a harder time attracting and evaluating new researchers; sharing less research means getting fewer chances to try out various research collaborations and notice which collaborations work well for both parties.\n2. We lose some of the benefits of accelerating the progress of other researchers outside MIRI via sharing useful insights with them in real time as they are generated.\n3. We will be less able to get useful scientific insights and feedback from visitors, remote scholars, and researchers elsewhere in the world, since we will be sharing less of our work with them.\n4. We will have a harder time attracting funding and other indirect aid—with less of our work visible, it will be harder for prospective donors to know whether our work is worth supporting.\n5. We will have to pay various costs associated with keeping research private, including social costs and logistical overhead.\n\n\n \n\n\nWe expect these costs to be substantial. We will be working hard to offset some of the losses from cost 1, as I’ll discuss in the next section. For reasons discussed [above](https://intelligence.org/feed/?paged=15#section3-5), I’m not presently very worried about cost 2. The remaining costs will probably be paid in full.\n\n\nThese costs are why we didn’t adopt this policy (for most of our research) years ago. With outreach feeling less like our comparative advantage than it did in the [pre-Puerto-Rico](https://intelligence.org/2015/07/16/an-astounding-year/) days, and funding seeming like less of a bottleneck than it used to (though still something of a bottleneck), this approach now seems workable.\n\n\nWe’ve already found it helpful in practice to let researchers have insights first and sort out the safety or desirability of publishing later. 
On the whole, then, we expect this policy to cause a significant net speed-up to our research progress, while ensuring that we can responsibly investigate some of the most important technical questions on our radar.\n\n\n \n\n\n### 4. Joining the MIRI team\n\n\nI believe that MIRI is, and will be for at least the next several years, a focal point of one of those rare scientifically exciting points in history, where the conditions are just right for humanity to substantially deconfuse itself about an area of inquiry it’s been pursuing for centuries—and one where the output is directly impactful in a way that is rare even among scientifically exciting places and times.\n\n\nWhat can we offer? On my view:\n\n\n \n\n\n* Work that Eliezer, Benya, myself, and a number of researchers in AI safety view as having a significant chance of boosting humanity’s survival odds.\n* Work that, if it pans out, visibly has central relevance to the alignment problem—the kind of work that has a meaningful chance of shedding light on problems like “is there a loophole-free way to upper-bound the amount of optimization occurring within an optimizer?”.\n* Problems that, if your tastes match ours, feel closely related to fundamental questions about intelligence, agency, and the structure of reality; and the associated thrill of working on one of the great and wild frontiers of human knowledge, with large and important insights potentially close at hand.\n* An atmosphere in which people are taking their own and others’ research progress seriously. For example, you can expect colleagues who come into work every day looking to actually make headway on the AI alignment problem, and looking to pull their thinking different kinds of sideways until progress occurs. 
I’m consistently impressed with MIRI staff’s drive to get the job done—with their visible appreciation for the fact that their work really matters, and their enthusiasm for helping one another make forward strides.\n* As an increasing focus at MIRI, empirically grounded computer science work on the AI alignment problem, with clear feedback of the form “did my code type-check?” or “do we have a proof?”.\n* Finally, some good, old-fashioned fun—for a certain very specific brand of “fun” that includes the satisfaction that comes from making progress on important technical challenges, the enjoyment that comes from pursuing lines of research you find compelling without needing to worry about writing grant proposals or otherwise raising funds, and the thrill that follows when you finally manage to distill a nugget of truth from a thick cloud of confusion.\n\n\n \n\n\nWorking at MIRI also means working with other people who were drawn by the very same factors—people who seem to me to have an unusual degree of care and concern for human welfare and the welfare of sentient life as a whole, an unusual degree of creativity and persistence in working on major technical problems, an unusual degree of cognitive reflection and skill with perspective-taking, and an unusual level of efficacy and grit.\n\n\nMy own experience at MIRI has been that this is a group of people who really want to help Team Life get good outcomes from the large-scale events that are likely to dramatically shape our future; who can tackle big challenges head-on without appealing to [false narratives](https://www.lesswrong.com/posts/st7DiQP23YQSxumCt/on-doing-the-improbable) about how likely a given approach is to succeed; and who are remarkably good at fluidly updating on new evidence, and at creating a really fun environment for collaboration.\n\n\n \n\n\nWho are we seeking?\n\n\nWe’re seeking anyone who can cause our “become less confused about AI alignment” work to go faster.\n\n\nIn practice, this 
means: people who natively think in math or code, who take seriously the problem of becoming less confused about AI alignment (quickly!), and who are generally capable. In particular, we’re looking for high-end Google programmer levels of capability; you don’t need a 1-in-a-million test score or a [halo](https://www.lesswrong.com/posts/5o4EZJyqmHY4XgRCY/einstein-s-superpowers) [of](https://equilibriabook.com/toc/) [destiny](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing). You also don’t need a PhD, explicit ML background, or even prior research experience.\n\n\nEven if you’re not pointed towards our research agenda, we intend to fund or help arrange funding for any deep, good, and truly new ideas in alignment. This might be as a hire, a fellowship grant, or whatever other arrangements may be needed.\n\n\n \n\n\nWhat to do if you think you might want to work here\n\n\nIf you’d like more information, there are several good options:\n\n\n \n\n\n* Chat with **[Buck Shlegeris](mailto:buck@intelligence.org)**, a MIRI computer scientist who helps out with our recruiting. In addition to answering any of your questions and running interviews, Buck can sometimes help skilled programmers take some time off to skill-build through our [**AI Safety Retraining Program**](https://intelligence.org/2018/09/01/summer-miri-updates/#2).\n* If you already know someone else at MIRI and talking with them seems better, you might alternatively **reach out to that person**—especially [Blake Borgeson](mailto:blake@intelligence.org) (a new MIRI board member who helps us with technical recruiting) or [Anna Salamon](mailto:annasalamon@gmail.com) (a MIRI board member who is also the president of CFAR, and is helping run some MIRI recruiting events).\n* Come to a 4.5-day [**AI Risk for Computer Scientists**](https://intelligence.org/ai-risk-for-computer-scientists/) workshop, co-run by MIRI and CFAR. 
These workshops are open only to people who Buck arbitrarily deems “probably above MIRI’s technical hiring bar,” though their scope is wider than simply hiring for MIRI—the basic idea is to get a bunch of highly capable computer scientists together to try to fathom AI risk (with a good bit of rationality content, and of trying to fathom the way we’re failing to fathom AI risk, thrown in for good measure).\n \n\n\nThese are a great way to get a sense of MIRI’s culture, and to pick up a number of thinking tools whether or not you are interested in working for MIRI. If you’d like to either apply to attend yourself or nominate a friend of yours, [send us your info here](https://goo.gl/forms/Qmv9fy4N3pcEH4Q12).\n* Come to next year’s **MIRI Summer Fellows program**, or be a **summer intern** with us. This is a better option for mathy folks aiming at Agent Foundations than for computer sciencey folks aiming at our new research directions. This last summer we took 6 interns and 30 MIRI Summer Fellows (see Malo’s [Summer MIRI Updates](https://intelligence.org/2018/09/01/summer-miri-updates/) post for more details). Also, note that “summer internships” need not occur during summer, if some other schedule is better for you. Contact [Colm Ó Riain](mailto:colm@intelligence.org) if you’re interested.\n* You could just try [applying for a job](https://intelligence.org/careers/).\n\n\n \n\n\n \n\n\nSome final notes\n\n\n**A quick note on “**[**inferential distance**](https://wiki.lesswrong.com/wiki/Inferential_distance)**,” or on what it sometimes takes to understand MIRI researchers’ perspectives:** To many, MIRI’s take on things is really weird. Many people who bump into our writing somewhere find our basic outlook pointlessly weird/silly/wrong, and thus find us uncompelling forever. 
Even among those who do ultimately find MIRI compelling, many start off thinking it’s weird/silly/wrong and then, after some months or years of MIRI’s worldview slowly rubbing off on them, eventually find that our worldview makes a bunch of unexpected sense.\n\n\nIf you think that you may be in this latter category, and that such a change of viewpoint, should it occur, [would](http://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) [be](https://wiki.lesswrong.com/wiki/Litany_of_Tarski) [because](https://arbital.com/p/bayes_rule/?l=1zq) MIRI’s worldview is onto something and not because we all got tricked by [false-but-compelling](https://www.readthesequences.com/Death-Spirals-Sequence) ideas… you might want to start exposing yourself to all this funny worldview stuff now, and see where it takes you. Good starting-points are [*Rationality: From AI to Zombies*](https://intelligence.org/rationality-ai-zombies/); [*Inadequate Equilibria*](http://equilibriabook.com/toc); [*Harry Potter and the Methods of Rationality*](http://www.hpmor.com/); the “[AI Risk for Computer Scientists](https://intelligence.org/ai-risk-for-computer-scientists/)” workshops; ordinary [CFAR workshops](http://www.rationality.org/workshops/upcoming); or just hanging out with folks in or [near](http://intelligence.org/MIRIx) MIRI.\n\n\nI suspect that I’ve failed to communicate some key things above, based on past failed attempts to communicate my perspective, and based on some readers of earlier drafts of this post missing key things I’d wanted to say. 
I’ve tried to clarify as many points as possible—hence this post’s length!—but in the end, “we’re focusing on research and not exposition now” holds for me too, and I need to get back to the work.[15](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#footnote_14_18363 \"If you have more questions, I encourage you to shoot us an email at contact@intelligence.org.\")\n\n\n \n\n\n**A note on the state of the field:**  MIRI is one of the dedicated teams trying to solve technical problems in AI alignment, but we’re not the only such team. There are currently three others: the [Center for Human-Compatible AI](http://humancompatible.ai/) at UC Berkeley, and the safety teams at [OpenAI](https://openai.com/) and at [Google DeepMind](https://deepmind.com/). All three of these safety teams are highly capable, top-of-their-class research groups, and we recommend them too as potential places to join if you want to make a difference in this field.\n\n\nThere are also solid researchers based at many other institutions, like the Future of Humanity Institute, whose [Governance of AI Program](https://www.fhi.ox.ac.uk/governance-ai-program/) focuses on the important social/coordination problems associated with AGI development.\n\n\nTo learn more about AI alignment research at MIRI and other groups, I recommend the MIRI-produced [Agent Foundations](https://intelligence.org/technical-agenda) and [Embedded Agency](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version) write-ups; Dario Amodei, Chris Olah, et al.’s [Concrete Problems](https://arxiv.org/abs/1606.06565) agenda; the [AI Alignment Forum](https://www.alignmentforum.org/); and [Paul Christiano](https://ai-alignment.com/) and the [DeepMind safety team](https://medium.com/@deepmindsafetyresearch/)’s blogs.\n\n\n \n\n\n**On working here:** Salaries here are more flexible than people usually suppose. 
I’ve had a number of conversations with folks who assumed that because we’re a nonprofit, we wouldn’t be able to pay them enough to maintain their desired standard of living, meet their financial goals, support their family well, or similar. This is false. If you bring the right skills, we’re likely able to provide the compensation you need. We also place a high value on weekends and vacation time, on avoiding burnout, and in general on people here being happy and thriving.\n\n\nYou do need to be physically in Berkeley to work with us on the projects we think are most exciting, though we have pretty great relocation assistance and ops support for moving.\n\n\nDespite all of the great things about working at MIRI, I would consider working here a pretty terrible deal if all you wanted was a job. Reorienting to work on major global risks isn’t likely to be the most hedonic or relaxing option available to most people.\n\n\nOn the other hand, if you like the idea of an epic calling with a group of people who somehow claim to take seriously a task that sounds more like it comes from a science fiction novel than from a *Dilbert* strip, while having a lot of scientific fun; or you just care about humanity’s future, and want to help however you can… give us a call.\n\n\n \n\n\n\n\n\n---\n\n1. This post is an amalgam put together by a variety of MIRI staff. The byline saying “Nate” means that I (Nate) endorse the post, and that many of the concepts and themes come in large part from me, and I wrote a decent number of the words. However, I did not write all of the words, and the concepts and themes were built in collaboration with a bunch of other MIRI staff. (This is roughly what bylines have meant on the MIRI blog for a while now, and it’s worth noting explicitly.)\n2. See our 2017 [strategic update](https://intelligence.org/2017/04/30/2017-updates-and-strategy/) and [fundraiser](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) posts for more details.\n3. 
In [past](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) [fundraisers](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/), we’ve said that with sufficient funding we would like to spin up alternative lines of attack on the alignment problem. Our new research directions can be seen as following this spirit, and indeed, at least one of our new research directions is heavily inspired by alternative approaches I was considering back in 2015. That said, unlike many of the ideas I had in mind when writing our 2015 fundraiser posts, our new work is quite contiguous with our Agent-Foundations-style research.\n4. That is, the requisites for aligning AGI systems to perform limited [tasks](https://arbital.com/p/task_goal/); not all of the requisites for aligning a full [CEV](https://arbital.com/p/3c5/)-class [autonomous AGI](https://arbital.com/p/Sovereign/). Compare Paul Christiano’s distinction between [ambitious and narrow value learning](https://ai-alignment.com/ambitious-vs-narrow-value-learning-99bd0c59847e) (though note that Paul thinks narrow value learning is sufficient for strongly autonomous AGI).\n5. This result is described more in a paper that will be out soon. Or, at least, eventually. I’m not putting a lot of time into writing papers these days, for reasons discussed below.\n6. For more discussion of this concept, see “[Personal Thoughts on Careers in AI Policy and Strategy](http://effective-altruism.com/ea/1fa/personal_thoughts_on_careers_in_ai_policy_and/)” by Carrick Flynn.\n7. Historical examples of deconfusion work that gave rise to a rich and healthy field include the distillation of Lagrangian and Hamiltonian mechanics from Newton’s laws; Cauchy’s overhaul of real analysis; the slow acceptance of the usefulness of complex numbers; and the development of formal foundations of mathematics.\n8. 
I should emphasize that from my perspective, humanity never building AGI, never realizing our potential, and failing to make use of the [cosmic endowment](https://nickbostrom.com/astronomical/waste.html) would be a tragedy comparable (on an [astronomical](https://nickbostrom.com/astronomical/waste.html) scale) to AGI wiping us out. I say “hazardous”, but we shouldn’t lose sight of the upside of humanity getting the job done right.\n9. My own feeling is that I and other senior staff at MIRI have never been particularly *good* at explaining what we’re doing and why, so this inconvenience may not be a new thing. It’s new, however, for us to not be making it a priority to *attempt* to explain where we’re coming from.\n10. In other words, many people are explicitly focusing only on outreach, and many others are selecting technical problems to work on with a stated goal of strengthening the field and drawing others into it.\n11. This isn’t meant to suggest that nobody else is taking a straight shot at the core problems. For example, OpenAI’s [Paul Christiano](https://ai-alignment.com/) is a top-tier researcher who is doing exactly that. But we nonetheless want more of this on the present margin.\n12. For example, perhaps the easiest path to unalignable AGI involves following descendants of today’s gradient descent and deep learning techniques, and perhaps the same is true for alignable AGI.\n13. In other words, retreats/rooms where it is common knowledge that all thoughts and ideas are not going to be shared, except perhaps after some lengthy and irritating bureaucratic process and with everyone’s active support.\n14. 
As an aside, perhaps my main discomfort with attempting to publish academic papers is that there appears to be no venue in AI where we can go to say, “Hey, check this out—we used to be confused about *X*, and now we can say *Y*, which means we’re a little bit less confused!” I think there are a bunch of reasons behind this, not least the fact that the nature of confusion is such that *Y* usually sounds obviously true once stated, and so it’s particularly difficult to make such a result sound like an impressive practical result.\nA side effect of this, unfortunately, is that all MIRI papers that I’ve ever written with the goal of academic publishing do a pretty bad job of saying what I was previously confused about, and how the “result” is indicative of me becoming less confused—for which I hereby apologize.\n15. If you have more questions, I encourage you to shoot us an email at [contact@intelligence.org](mailto:contact@intelligence.org).\n\nThe post [2018 Update: Our New Research Directions](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-22T23:27:01Z", "authors": ["Nate Soares"], "summaries": ["This post gives a high-level overview of the new research directions that MIRI is pursuing with the goal of deconfusion, a discussion of why deconfusion is so important to them, an explanation of why MIRI is now planning to leave research unpublished by default, and a case for software engineers to join their team."], "venue": "MIRI website", "opinion": "There aren't enough details on the technical research for me to say anything useful about it. I'm broadly in support of deconfusion but am either less optimistic on the tractability of deconfusion, or more optimistic on the possibility of success with our current notions (probably both). 
Keeping research unpublished-by-default seems reasonable to me given the MIRI viewpoint for the reasons they talk about, though I haven't thought about it much. See also [Import AI](https://jack-clark.net/2018/11/26/import-ai-122-google-obtains-new-imagenet-state-of-the-art-with-gpipe-drone-learns-to-land-more-effectively-than-pd-controller-policy-and-facebook-releases-its-cherrypi-starcraft-bot/).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #34", "newsletter_category": "AI governance"} -{"id": "95c002ad6992e2da2adb144d2ca08b2c", "title": "Embedded Curiosities", "url": "https://intelligence.org/2018/11/08/embedded-curiosities/", "source": "miri", "source_type": "blog", "text": "This is the conclusion of the **[Embedded Agency](https://intelligence.org/embedded-agency)** series. Previous posts:\n \n\n\n[Embedded Agents](https://intelligence.org/embedded-agents)  —  [Decision Theory](https://intelligence.org/embedded-decisions)  —  [Embedded World-Models](https://intelligence.org/embedded-models) \n[Robust Delegation](https://intelligence.org/embedded-delegation)  —  [Subsystem Alignment](https://intelligence.org/embedded-subsystems)\n\n\n \n\n\n\n\n---\n\n\n \n\n\nA final word on curiosity, and intellectual puzzles:\n\n\nI described an embedded agent, Emmy, and said that I don’t understand how she evaluates her options, models the world, models herself, or decomposes and solves problems.\n\n\nIn the past, when researchers have talked about motivations for working on problems like these, they’ve generally focused on the motivation from [AI risk](https://intelligence.org/2017/04/12/ensuring/). AI researchers want to build machines that can solve problems in the general-purpose fashion of a human, and [dualism](https://intelligence.org/embedded-agents#3) is not a realistic framework for thinking about such systems. 
In particular, it’s an approximation that’s especially prone to breaking down as AI systems get smarter. When people figure out how to build general AI systems, we want those researchers to be in a better position to understand their systems, analyze their internal properties, and be confident in their future behavior.\n\n\nThis is the motivation for most researchers today who are working on things like updateless decision theory and subsystem alignment. We care about basic conceptual puzzles which we think we need to figure out in order to achieve confidence in future AI systems, and not have to rely quite so much on brute-force search or trial and error.\n\n\nBut the arguments for why we may or may not need particular conceptual insights in AI are pretty long. I haven’t tried to wade into the details of that debate here. Instead, I’ve been discussing a particular set of research directions as an *intellectual puzzle*, and not as an instrumental strategy.\n\n\nOne downside of discussing these problems as instrumental strategies is that it can lead to some misunderstandings about *why* we think this kind of work is so important. With the “instrumental strategies” lens, it’s tempting to draw a direct line from a given research problem to a given safety concern. But it’s not that I’m imagining real-world embedded systems being “too Bayesian” and this somehow causing problems, if we don’t figure out what’s wrong with current models of rational agency. It’s certainly not that I’m imagining future AI systems being written in second-order logic! 
In most cases, I’m not trying at all to draw direct lines between research problems and [specific AI failure modes](https://intelligence.org/2018/10/03/rocket-alignment/).\n\n\nWhat I’m instead thinking about is this: We sure do seem to be working with the wrong basic concepts today when we try to think about what agency is, as seen by the fact that these concepts don’t transfer well to the more realistic embedded framework.\n\n\nIf AI developers in the future are *still* working with these confused and incomplete basic concepts as they try to actually build powerful real-world optimizers, that seems like a bad position to be in. And it seems like the research community is unlikely to figure most of this out by default in the course of just trying to develop more capable systems. Evolution certainly figured out how to build human brains without “understanding” any of this, via brute-force search.\n\n\nEmbedded agency is my way of trying to point at what I think is a very important and central place where I feel confused, and where I think future researchers risk running into confusions too.\n\n\nThere’s also a lot of excellent AI alignment research that’s being done with an eye toward more direct applications; but I think of that safety research as having a different type signature than the puzzles I’ve talked about here.\n\n\n\n\n---\n\n\nIntellectual curiosity isn’t the ultimate reason we privilege these research directions. But there are some *practical* advantages to orienting toward research questions from a place of curiosity at times, as opposed to *only applying the “practical impact” lens* to how we think about the world.\n\n\nWhen we apply the curiosity lens to the world, we orient toward the sources of confusion preventing us from seeing clearly; the blank spots in our map, the flaws in our lens. 
It encourages re-checking assumptions and attending to blind spots, which is helpful as a psychological counterpoint to our “instrumental strategy” lens—the latter being more vulnerable to the urge to lean on whatever shaky premises we have on hand so we can get to more solidity and closure in our early thinking.\n\n\n*Embedded agency* is an organizing theme behind most, if not all, of our big curiosities. It seems like a central mystery underlying many concrete difficulties.\n\n\n \n\n\n\nThe post [Embedded Curiosities](https://intelligence.org/2018/11/08/embedded-curiosities/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-08T17:31:41Z", "authors": ["Abram Demski"], "summaries": []} -{"id": "e1ffaef2a4eb889663eed66f1719aec7", "title": "Subsystem Alignment", "url": "https://intelligence.org/2018/11/06/embedded-subsystems/", "source": "miri", "source_type": "blog", "text": "![Emmy the embedded agent](https://intelligence.org/wp-content/uploads/2018/10/Emmy-SA.png)\n \n\n\nYou want to figure something out, but you don’t know how to do that yet.\n\n\nYou have to somehow break up the task into sub-computations. 
There is no atomic act of “thinking”; intelligence must be built up of non-intelligent parts.\n\n\nThe agent being made of parts is part of what made [counterfactuals](https://intelligence.org/embedded-decisions) hard, since the agent may have to reason about impossible configurations of those parts.\n\n\nBeing made of parts is what makes [self-reasoning and self-modification](https://intelligence.org/embedded-delegation) even possible.\n\n\nWhat we’re primarily going to discuss in this section, though, is another problem: when the agent is made of parts, there could be [adversaries](https://arbital.com/p/nonadversarial/) not just in the external environment, but inside the agent as well.\n\n\nThis cluster of problems is **Subsystem Alignment**: ensuring that subsystems are not working at cross purposes; avoiding subprocesses optimizing for unintended goals.\n\n\n \n\n\n\n> \n> * benign induction\n> * benign optimization\n> * transparency\n> * mesa-optimizers\n> \n> \n> \n\n\n \n\n\n\n\n\n---\n\n\nHere’s a straw agent design:\n\n\n \n\n\n![A straw agent with epistemic and instrumental subsystems](https://intelligence.org/wp-content/uploads/2018/10/Subsystem-Deception.png)\n\n\n \n\n\nThe epistemic subsystem just wants accurate beliefs. The instrumental subsystem uses those beliefs to track how well it is doing. If the instrumental subsystem gets too capable relative to the epistemic subsystem, it may decide to try to fool the epistemic subsystem, as depicted.\n\n\nIf the epistemic subsystem gets too strong, that could also possibly yield bad outcomes.\n\n\nThis agent design treats the system’s epistemic and instrumental subsystems as discrete agents with goals of their own, which is not particularly realistic. However, we saw in the section on [wireheading](https://intelligence.org/embedded-delegation#4) that the problem of subsystems working at cross purposes is hard to avoid. 
And this is a harder problem if we didn’t intentionally build the relevant subsystems.\n\n\n\n\n---\n\n\nOne reason to avoid booting up sub-agents who want different things is that we want **[robustness to relative scale](https://www.lesswrong.com/posts/bBdfbWfWxHN9Chjcq/robustness-to-scale)**.\n\n\nAn approach is *robust to scale* if it still works, or fails gracefully, as you scale capabilities. There are three types: robustness to scaling up; robustness to scaling down; and robustness to relative scale.\n\n\n \n\n\n* *Robustness to scaling up* means that your system doesn’t stop behaving well if it gets better at optimizing. One way to check this is to think about what would happen if the function the AI optimizes were actually [maximized](https://arbital.com/p/omni_test/). Think [Goodhart’s Law](https://intelligence.org/embedded-delegation#2).\n* *Robustness to scaling down* means that your system still works if made [less powerful](https://intelligence.org/embedded-models). Of course, it may stop being useful; but it should fail safely and without unnecessary costs.\n \n\n\nYour system might work if it can exactly maximize some function, but is it safe if you approximate? For example, maybe a system is safe if it can learn human values very precisely, but approximation makes it increasingly misaligned.\n* *Robustness to relative scale* means that your design does not rely on the agent’s subsystems being similarly powerful. 
For example, [GAN](http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) (Generative Adversarial Network) training can fail if one sub-network gets too strong, because there’s no longer any training signal.\n\n\n![GAN training](https://intelligence.org/wp-content/uploads/2018/11/GAN-Training.png)\n\n\nLack of robustness to scale isn’t necessarily something which kills a proposal, but it is something to be aware of; lacking robustness to scale, you need strong reason to think you’re at the right scale.\n\n\nRobustness to relative scale is particularly important for subsystem alignment. An agent with intelligent sub-parts should not rely on being able to outsmart them, unless we have a strong account of why this is always possible.\n\n\n\n\n---\n\n\nThe big-picture moral: aim to have a unified system that doesn’t work at cross purposes to itself.\n\n\nWhy would anyone make an agent with parts fighting against one another? There are three obvious reasons: *subgoals*, *pointers*, and *search*.\n\n\nSplitting up a task into **subgoals** may be the only way to efficiently find a solution. However, a subgoal computation shouldn’t completely forget the big picture!\n\n\nAn agent designed to build houses should not boot up a sub-agent who cares only about building stairs.\n\n\nOne intuitive desideratum is that although subsystems need to have their own goals in order to decompose problems into parts, the subgoals need to [“point back”](https://intelligence.org/embedded-delegation#4) robustly to the main goal.\n\n\nA house-building agent might spin up a subsystem that cares only about stairs, but only cares about stairs in the context of *houses*.\n\n\nHowever, you need to do this in a way that doesn’t just amount to your house-building system having a second house-building system inside its head. 
This brings me to the next item:\n\n\n\n\n---\n\n\n**Pointers**: It may be difficult for subsystems to carry the whole-system goal around with them, since they need to be *reducing* the problem. However, this kind of indirection seems to encourage situations in which different subsystems’ incentives are misaligned.\n\n\nAs we saw in the example of the epistemic and instrumental subsystems, as soon as we start optimizing some sort of *expectation*, rather than directly getting feedback about what we’re doing on the metric that’s actually important, we may create perverse incentives—that’s Goodhart’s Law.\n\n\nHow do we ask a subsystem to “do X” as opposed to “convince the wider system that I’m doing X”, without passing along the entire overarching goal-system?\n\n\nThis is similar to the way we wanted successor agents to robustly point at values, since it is too hard to write values down. However, in this case, learning the values of the larger agent wouldn’t make any sense either; subsystems and subgoals need to be *smaller*.\n\n\n\n\n---\n\n\nIt might not be that difficult to solve subsystem alignment for subsystems which humans entirely design, or subgoals which an AI explicitly spins up. If you know how to avoid misalignment by design and robustly delegate your goals, both problems seem solvable.\n\n\nHowever, it doesn’t seem possible to design all subsystems so explicitly. 
At some point, in solving a problem, you’ve split it up as much as you know how to and must rely on some trial and error.\n\n\nThis brings us to the third reason subsystems might be optimizing different things, **search**: solving a problem by looking through a rich space of possibilities, a space which may itself contain misaligned subsystems.\n\n\n \n\n\n![Sufficiently powerful search can result in subsystem misalignment](https://intelligence.org/wp-content/uploads/2020/08/Mesa-Optimizer-Search.png)\n\n\n \n\n\nML researchers are quite familiar with the phenomenon: it’s easier to write a program which finds a high-performance machine translation system for you than to directly write one yourself.\n\n\nIn the long run, this process can go one step further. For a rich enough problem and an impressive enough search process, the solutions found via search might themselves be [intelligently optimizing](https://arbital.com/p/daemons/) something.\n\n\nThis might happen by accident, or be purposefully engineered as a strategy for solving difficult problems. Either way, it stands a good chance of exacerbating Goodhart-type problems—you now effectively have two chances for misalignment, where you previously had one.\n\n\nThis problem is described in Hubinger, et al.’s “[Risks from Learned Optimization in Advanced Machine Learning Systems](http://intelligence.org/learned-optimization)”.\n\n\nLet’s call the original search process the *base optimizer*, and the search process found via search a *mesa-optimizer*.\n\n\n“Mesa” is the opposite of “meta”. Whereas a “meta-optimizer” is an optimizer designed to produce a new optimizer, a “mesa-optimizer” is any optimizer generated by the original optimizer—whether or not the programmers *wanted* their base optimizer to be searching for new optimizers.\n\n\n“Optimization” and “search” are ambiguous terms. 
I’ll think of them as any algorithm which can be naturally interpreted as doing significant computational work to “find” an object that scores highly on some objective function.\n\n\nThe objective function of the base optimizer is not necessarily the same as that of the mesa-optimizer. If the base optimizer wants to make pizza, the new optimizer may enjoy kneading dough, chopping ingredients, et cetera.\n\n\nThe new optimizer’s objective function must be *helpful* for the base objective, at least in the examples the base optimizer is checking. Otherwise, the mesa-optimizer would not have been selected.\n\n\nHowever, the mesa-optimizer must reduce the problem somehow; there is no point to it running the exact same search all over again. So it seems like its objectives will tend to be like good heuristics; easier to optimize, but different from the base objective in general.\n\n\nWhy might a difference between base objectives and mesa-objectives be concerning, if the new optimizer is scoring highly on the base objective anyway? It’s about the interplay with what’s really wanted. Even if we get [value specification](https://intelligence.org/embedded-delegation#2) exactly right, there will always be some *distributional shift* between the training set and deployment. (See Amodei, et al.’s “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)”.)\n\n\nDistributional shifts which would be small in ordinary cases may make a big difference to a capable mesa-optimizer, which may observe the slight difference and figure out how to capitalize on it for its own objective.\n\n\nActually, to even use the term “distributional shift” seems wrong in the context of [embedded agency](https://intelligence.org/embedded-agency). 
The world is not [i.i.d.](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables) The analog of “no distributional shift” would be to have an exact model of the whole future relevant to what you want to optimize, and the ability to run it over and over during training. So we need to deal with massive “distributional shift”.\n\n\nWe may also want to optimize for things that aren’t exactly what we want. The obvious way to avoid agents that pursue subgoals at the cost of the overall goal is to have the subsystems not be agentic. Just search over a bunch of ways to make stairs, don’t make something that cares about stairs.\n\n\nThe problem is then that powerful mesa-optimizers are optimizing something the base system doesn’t care about, and that the mesa-optimizers will have a convergent incentive to be agentic.\n\n\nAdditionally, there’s the possibility that the mesa-optimizer becomes aware of the base optimizer, in which case it might start explicitly trying to do well on the base objective function purely in order to be kept around, while looking for any signs that it has left training and can stop pretending. This creates a version of Nick Bostrom’s “treacherous turn”.\n\n\nThis is the same story we saw in [adversarial Goodhart](https://intelligence.org/embedded-delegation#3): there is something agentic in the search space, which responds to our choice of proxy in a way which makes our proxy a bad one.\n\n\nYou might try to avoid treacherous turns by repeatedly simulating “end of training, time for deployment” during training, in a kind of nested-dream setup. The goal is to test enough layers of simulation that you can conclude there won’t be a treacherous turn in the real deployment case.\n\n\n \n\n\n![Simulated deployment](https://intelligence.org/wp-content/uploads/2020/08/Simulated-Deployment.png)\n\n\nUnfortunately, convergence for this kind of learning is going to be poor. 
Ordinarily in machine learning, good performance means good average-case performance. But a treacherous turn is an “error” which can be carefully placed to do the most damage. We want to ensure this doesn’t happen.\n\n\nThe problem is, in part, that some outputs are much more important than others. Deployment is more important than training, and certain critical or vulnerable moments during deployment will be especially important. We want to be particularly sure to get important things right, rather than just getting low average loss.\n\n\nBut we can’t solve this by telling the system what’s important. Indeed, it seems we hope it can’t figure that out—we are banking on being able to generalize from performance on less-important cases to more-important cases. This is why research into ML [techniques](https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99) which avoid [rare catastrophes](https://ai-alignment.com/learning-with-catastrophes-59387b55cc30) (or “traps”) is relevant to the problem of making sure mesa-optimizers are aligned with base optimizers.\n\n\nIt is difficult to trust arbitrary code—which is what models from rich model classes are—based only on empirical testing. Consider a highly simplified problem: we want to find a program which only ever outputs \\(1\\). \\(0\\) is a catastrophic failure.\n\n\nIf we could examine the code ourselves, this problem would be easy. But the output of machine learning is often difficult to analyze; so let’s suppose that we can’t understand code at all.\n\n\nNow, in some sense, we can trust simpler functions more. A short piece of code is less likely to contain a hard-coded exception. Let’s quantify that.\n\n\nConsider the set of all programs of length \\(L\\). Some programs \\(p\\) will print \\(1\\) for a long time, but then print \\(0\\). We’re trying to avoid that.\n\n\nCall the time-to-first-zero \\(W\\_{p}\\).  
(\\(W\\_{p} = \\infty\\) if the program \\(p\\) is trustworthy, i.e., if it never outputs \\(0\\).)\n\n\nThe highest finite  \\(W\\_{p}\\)  out of all length-\\(L\\) programs is a form of the Busy Beaver function, so I will refer to it as \\(BB(L)\\). If we wanted to be completely sure that a random program of length \\(L\\) were trustworthy, we would need to observe \\(BB(L)\\) ones from that program.\n\n\nNow, a fact about the Busy Beaver function is that \\(BB(n)\\) grows faster than any computable function. So this kind of empirical trust-building takes uncomputably long to find the truth, in the worst case.\n\n\nWhat about the average case?\n\n\nIf we suppose all the other length-\\(L\\) programs are easy cases, there are exponentially many length-\\(L\\) programs, so the average is \\(BB(L)\\ / \\ \\mathrm{exp}(L)\\). But exponentials are computable. So \\(BB(L)\\ / \\ \\mathrm{exp}(L)\\) still grows faster than any computable function.\n\n\nSo while using short programs gives us some confidence in theory, the difficulty of forming generalized conclusions about behavior grows extremely quickly as a function of length.\n\n\nIf length restrictions aren’t so practical, perhaps restricting computational complexity can help us? Intuitively, a mesa-optimizer needs time to think in order to successfully execute a treacherous turn. So a program which arrives at conclusions more quickly might be more trustworthy.\n\n\nHowever, restricting complexity class unfortunately doesn’t get around Busy-Beaver-type behavior. Strategies that wait a long time before outputting \\(0\\) can be slowed down even further with only slightly longer program length \\(L\\).\n\n\n\n\n---\n\n\nIf all of these problems seem too hypothetical, consider the evolution of life on Earth. Evolution can be thought of as a reproductive fitness maximizer.\n\n\n(Evolution can actually be thought of as an optimizer for many things, or as no optimizer at all, but that doesn’t matter. 
The point is that if an agent wanted to maximize reproductive fitness, it might use a system that looked like evolution.)\n\n\nIntelligent organisms are mesa-optimizers of evolution. Although the drives of intelligent organisms are certainly correlated with reproductive fitness, organisms want all sorts of things. There are even mesa-optimizers who have come to understand evolution, and even to manipulate it at times. Powerful and misaligned mesa-optimizers appear to be a real possibility, then, at least with enough processing power.\n\n\nProblems seem to arise because you try to solve a problem which you don’t yet know how to solve by searching over a large space and hoping “someone” can solve it.\n\n\nIf the source of the issue is the solution of problems by massive search, perhaps we should look for different ways to solve problems. Perhaps we should solve problems by figuring things out. But how do you solve problems which you don’t yet know how to solve other than by trying things?\n\n\n\n\n---\n\n\nLet’s take a step back.\n\n\n[Embedded world-models](http://intelligence.org/embedded-models) is about how to think at all, as an embedded agent; [decision theory](http://intelligence.org/embedded-decisions) is about how to act. [Robust delegation](http://intelligence.org/embedded-delegation) is about building trustworthy successors and helpers. 
Subsystem alignment is about building *one* agent out of trustworthy *parts*.\n\n\n \n\n\n![Embedded agency](https://intelligence.org/wp-content/uploads/2018/11/Embedded-Agency.png)\n\n\n \n\n\nThe problem is that:\n\n\n* We don’t know how to think about environments when we’re smaller.\n* To the extent we *can* do that, we don’t know how to think about consequences of actions in those environments.\n* Even when we can do that, we don’t know how to think about what we *want*.\n* Even when we have none of these problems, we don’t know how to reliably output actions which get us what we want!\n\n\n\n\n---\n\n\nThis is the penultimate post in Scott Garrabrant and Abram Demski’s [Embedded Agency](https://intelligence.org/embedded-agency/) sequence. [**Conclusion: embedded curiosities.**](https://intelligence.org/embedded-curiosities/)\n\n\n\nThe post [Subsystem Alignment](https://intelligence.org/2018/11/06/embedded-subsystems/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-06T18:34:36Z", "authors": ["Scott Garrabrant"], "summaries": []} -{"id": "9f473707a198c1b453a4918da38c7b09", "title": "Robust Delegation", "url": "https://intelligence.org/2018/11/04/embedded-delegation/", "source": "miri", "source_type": "blog", "text": "![Self-improvement](https://intelligence.org/wp-content/uploads/2020/08/Self-Improvement2.png)\n\n\n\nBecause [the world is big](https://intelligence.org/embedded-models), the agent as it is may be inadequate to accomplish its goals, including in its ability to think.\n\n\nBecause the agent is [made of parts](https://intelligence.org/embedded-agents#sa), it can improve itself and become more capable.\n\n\nImprovements can take many forms: The agent can make tools, the agent can make successor agents, or the agent can just learn and grow over time. However, the successors or tools need to be more capable for this to be worthwhile. 
\n\n\nThis gives rise to a special type of principal/agent problem:\n\n\nYou have an initial agent, and a successor agent. The initial agent gets to decide exactly what the successor agent looks like. The successor agent, however, is much more intelligent and powerful than the initial agent. We want to know how to have the successor agent robustly optimize the initial agent’s goals.\n\n\nHere are three examples of forms this principal/agent problem can take:\n\n\n \n\n\n![Three principal-agent problems in robust delegation](https://intelligence.org/wp-content/uploads/2018/11/Principal-Agent.png)\n\n\n \n\n\nIn the *AI alignment problem*, a human is trying to build an AI system which can be trusted to help with the human’s goals.\n\n\nIn the *tiling agents problem*, an agent is trying to make sure it can trust its future selves to help with its own goals.\n\n\nOr we can consider a harder version of the tiling problem—*stable self-improvement*—where an AI system has to build a successor which is more intelligent than itself, while still being trustworthy and helpful.\n\n\nFor a human analogy which involves no AI, you can think about the problem of succession in royalty, or more generally the problem of setting up organizations to achieve desired goals without losing sight of their purpose over time.\n\n\nThe difficulty seems to be twofold:\n\n\nFirst, a human or AI agent may not fully understand itself and its own goals. If an agent can’t write out what it wants in exact detail, that makes it hard for it to guarantee that its successor will robustly help with the goal.\n\n\nSecond, the idea behind delegating work is that you not have to do all the work yourself. 
You want the successor to be able to act with some degree of autonomy, including learning new things that you don’t know, and wielding new skills and capabilities.\n\n\nIn the limit, a really good formal account of robust delegation should be able to handle arbitrarily capable successors without throwing up any errors—like a human or AI building an unbelievably smart AI, or like an agent that just keeps learning and growing for so many years that it ends up much smarter than its past self.\n\n\nThe problem is not (just) that the successor agent might be malicious. The problem is that we don’t even know what it means not to be.\n\n\nThis problem seems hard from both points of view.\n\n\n\n \n\n\n![Successors](https://intelligence.org/wp-content/uploads/2018/11/Successors.png)\n\n\n \n\n\n\nThe initial agent needs to figure out how reliable and trustworthy something more powerful than it is, which seems very hard. But the successor agent has to figure out what to do in situations that the initial agent can’t even understand, and try to respect the goals of something that the successor can see is [inconsistent](https://arxiv.org/abs/1712.05812), which also seems very hard.\n\n\nAt first, this may look like a less fundamental problem than “[make decisions](https://intelligence.org/embedded-decisions)” or “have models”. But the view on which there are multiple forms of the “build a successor” problem is itself a [dualistic](https://intelligence.org/embedded-agents#3) view.\n\n\nTo an embedded agent, the future self is not privileged; it is just another part of the environment. There isn’t a deep difference between building a successor that shares your goals, and just making sure your own goals stay the same over time.\n\n\nSo, although I talk about “initial” and “successor” agents, remember that this isn’t just about the narrow problem humans currently face of aiming a successor. 
This is about the fundamental problem of being an agent that persists and learns over time.\n\n\nWe call this cluster of problems **Robust Delegation**. Examples include:\n\n\n \n\n\n\n> \n> * [Vingean reflection](https://intelligence.org/files/VingeanReflection.pdf)\n> * the [tiling problem](https://intelligence.org/files/TilingAgentsDraft.pdf)\n> * averting [Goodhart’s law](https://arxiv.org/abs/1803.04585)\n> * [value loading](https://intelligence.org/files/ValueLearningProblem.pdf)\n> * [corrigibility](https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124/10136)\n> * [informed oversight](https://ai-alignment.com/the-informed-oversight-problem-1b51b4f66b35)\n> \n> \n> \n\n\n \n\n\n\n\n\n---\n\n\nImagine you are playing the [CIRL game](https://arxiv.org/abs/1606.03137) with a toddler.\n\n\nCIRL means Cooperative Inverse Reinforcement Learning. The idea behind CIRL is to define what it means for a robot to collaborate with a human. The robot tries to pick helpful actions, while simultaneously trying to figure out what the human wants.\n\n\n \n\n\n![Learning Values](https://intelligence.org/wp-content/uploads/2018/11/Learning-Values.png)\n\n\nA lot of current work on robust delegation comes from the goal of aligning AI systems with what humans want. So usually, we think about this from the point of view of the human.\n\n\nBut now consider the problem faced by a smart robot, where they’re trying to help someone who is very confused about the universe. 
Imagine trying to help a toddler optimize their goals.\n\n\n* From your standpoint, the toddler may be too irrational to be seen as optimizing anything.\n* The toddler may have an ontology in which it is optimizing something, but you can see that ontology doesn’t make sense.\n* Maybe you notice that if you set up questions in the right way, you can make the toddler seem to want almost anything.\n\n\nPart of the problem is that the “helping” agent has to be *bigger* in some sense in order to be more capable; but this seems to imply that the “helped” agent can’t be a very good supervisor for the “helper”.\n\n\n\n![Child and adult](https://intelligence.org/wp-content/uploads/2018/11/Child-and-Adult.png)\n\n\n\nFor example, [updateless decision theory](https://intelligence.org/embedded-decisions#7) eliminates dynamic inconsistencies in decision theory by, rather than maximizing expected utility of your action *given* what you know, maximizing expected utility *of reactions* to observations, from a state of [ignorance](https://intelligence.org/wp-content/uploads/2018/11/Predecessor-and-Successor-Goals.png).\n\n\nAppealing as this may be as a way to achieve reflective consistency, it creates a strange situation in terms of computational complexity: If actions are type \\(A\\), and observations are type \\(O\\), reactions to observations are type  \\(O \\to A\\)—a much larger space to optimize over than \\(A\\) alone. And we’re expecting our [*smaller*](https://www.lesswrong.com/posts/K5Qp7ioupgb7r73Ca/logical-updatelessness-as-a-subagent-alignment-problem) self to be able to do that!\n\n\nThis seems bad.\n\n\nOne way to more crisply state the problem is: We should be able to trust that our future self is applying its intelligence to the pursuit of our goals *without* being able to predict precisely what our future self will do. 
This criterion is called [**Vingean reflection**](https://intelligence.org/files/VingeanReflection.pdf).\n\n\nFor example, you might plan your driving route before visiting a new city, but you do not plan your steps. You plan to some level of detail, and trust that your future self can figure out the rest.\n\n\nVingean reflection is difficult to examine via classical Bayesian decision theory because Bayesian decision theory assumes [logical omniscience](http://intelligence.org/embedded-models#2). Given logical omniscience, the assumption “the agent knows its future actions are rational” is synonymous with the assumption “the agent knows its future self will act according to one particular optimal policy which the agent can predict in advance”.\n\n\nWe have some limited models of Vingean reflection (see “[Tiling Agents for Self-Modifying AI, and the Löbian Obstacle](https://intelligence.org/files/TilingAgentsDraft.pdf)” by Yudkowsky and Herreshoff). A successful approach must walk the narrow line between two problems:\n\n\n* *The Löbian Obstacle*:   Agents who trust their future self because they trust the output of their own reasoning are inconsistent.\n* *The Procrastination Paradox*:   Agents who trust their future selves *without* reason tend to be consistent but unsound and untrustworthy, and will put off tasks forever because they can do them later.\n\n\nThe Vingean reflection results so far apply only to limited sorts of decision procedures, such as satisficers aiming for a threshold of acceptability. 
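To make the contrast concrete, here is a toy sketch of a satisficer next to a maximizer (in Python; the options and the utility function are invented for illustration, and this is not the formalism used in the tiling results):

```python
import random

random.seed(0)  # for reproducibility of the toy run

def maximize(options, utility):
    """Full optimization: always return the single best option."""
    return max(options, key=utility)

def satisfice(options, utility, threshold):
    """Satisficing: return any option whose utility clears the
    threshold of acceptability, without ranking the rest."""
    acceptable = [o for o in options if utility(o) >= threshold]
    return random.choice(acceptable) if acceptable else None

options = range(100)
utility = lambda o: o  # toy utility: bigger is better

best = maximize(options, utility)              # always 99
good_enough = satisfice(options, utility, 90)  # any of 90..99
```

The contrast matters here because verifying that a successor's choice clears a fixed threshold seems more tractable than verifying that it is optimal.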
So there is plenty of room for improvement, getting tiling results for more useful decision procedures and under weaker assumptions.\n\n\nHowever, there is more to the robust delegation problem than just tiling and Vingean reflection.\n\n\nWhen you construct another agent, rather than delegating to your future self, you more directly face a problem of [**value loading**](https://intelligence.org/files/ValueLearningProblem.pdf).\n\n\nThe main problems here:\n\n\n* We [don’t know](https://arbital.com/p/complexity_of_value/) what we want.\n* Optimization [amplifies slight differences](https://www.lesswrong.com/posts/zEvqFtT4AtTztfYC4/optimization-amplifies) between what we say we want and what we really want.\n\n\nThe misspecification-amplifying effect is known as **Goodhart’s Law**, named for Charles Goodhart’s observation: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”\n\n\nWhen we specify a target for optimization, it is reasonable to expect it to be correlated with what we want—highly correlated, in some cases. Unfortunately, however, this does not mean that optimizing it will get us closer to what we want—especially at high levels of optimization.\n\n\n\n\n---\n\n\nThere are (at least) [four types of Goodhart](https://arxiv.org/abs/1803.04585): regressional, extremal, causal, and adversarial.\n\n\n \n\n\n![Regressional Goodhart](https://intelligence.org/wp-content/uploads/2018/10/Regressional.png)\n\n\n \n\n\n*Regressional Goodhart* happens when there is a less than perfect correlation between the proxy and the goal. It is more commonly known as the [optimizer’s curse](https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf), and it is related to regression to the mean.\n\n\nAn example of regressional Goodhart is that you might draft players for a basketball team based on height alone. 
This isn’t a perfect heuristic, but there is a correlation between height and basketball ability, which you can make use of in making your choices.\n\n\nIt turns out that, in a certain sense, you will be predictably disappointed if you expect the general trend to hold up as strongly for your selected team.\n\n\n![An example of regressional Goodhart](https://intelligence.org/wp-content/uploads/2020/08/Regressional-Basketball.png)\n\n\nStated in statistical terms: an unbiased estimate of \\(y\\) given \\(x\\) is not an unbiased estimate of \\(y\\) when we select for the best \\(x\\). In that sense, we can expect to be disappointed when we use \\(x\\) as a proxy for \\(y\\) for optimization purposes.\n\n\n\n![Unbiased estimate](https://intelligence.org/wp-content/uploads/2020/08/Unbiased-Estimate2.png)\n\n\n\n(*The graphs in this section are hand-drawn to help illustrate the relevant concepts.*)\n\n\nUsing a Bayes estimate instead of an unbiased estimate, we can eliminate this sort of predictable disappointment. The Bayes estimate accounts for the noise in \\(x\\), bending toward typical \\(y\\) values.\n\n\n \n\n\n![Bayes estimate](https://intelligence.org/wp-content/uploads/2020/08/Bayes-Estimate2.png)\n\n\nThis doesn’t necessarily allow us to get a better \\(y\\) value, since we still only have the information content of \\(x\\) to work with. However, it sometimes may. 
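The predictable disappointment described above is easy to reproduce numerically. A minimal sketch (Python standard library only; the candidate count, noise level, and trial count are arbitrary choices):

```python
import random

random.seed(0)

n_candidates, n_trials = 1000, 200
avg_proxy, avg_true = 0.0, 0.0

for _ in range(n_trials):
    # True quality y, and an unbiased but noisy proxy x = y + noise.
    ys = [random.gauss(0, 1) for _ in range(n_candidates)]
    xs = [y + random.gauss(0, 1) for y in ys]

    # Optimize the proxy: pick the candidate with the best x.
    i = max(range(n_candidates), key=lambda j: xs[j])
    avg_proxy += xs[i] / n_trials
    avg_true += ys[i] / n_trials

# Selection does help (avg_true is well above the population mean
# of 0), but the winner's true quality predictably falls short of
# the winner's proxy score.
print(round(avg_proxy, 2), round(avg_true, 2))
```

With equal unit variances for quality and noise, the winner's proxy score overstates its true quality by roughly a factor of two, matching the regression-to-the-mean picture above.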
If \\(y\\) is normally distributed with variance \\(1\\), and \\(x\\) is \\(y \\pm 10\\) with even odds of \\(+\\) or \\(-\\), a Bayes estimate will give better optimization results by almost entirely removing the noise.\n\n\n![Bayes estimates can sometimes improve optimization results](https://intelligence.org/wp-content/uploads/2020/08/Bayes-Noise-Removal.png)\n\n\nRegressional Goodhart seems like the easiest form of Goodhart to beat: just use Bayes!\n\n\nHowever, there are two big problems with this solution:\n\n\n* Bayesian estimators are very often intractable in cases of interest.\n* It only makes sense to trust the Bayes estimate under a realizability assumption.\n\n\nA case where both of these problems become critical is computational learning theory.\n\n\n![Regressional Goodhart in computational learning theory](https://intelligence.org/wp-content/uploads/2020/08/RG-in-Learning-Theory.png)\n\n\nIt often isn’t computationally feasible to calculate the Bayesian expected generalization error of a hypothesis. And even if you could, you would still need to wonder whether your chosen prior reflected the world well enough.\n\n\n \n\n\n![Extremal Goodhart](https://intelligence.org/wp-content/uploads/2018/10/Extremal.png)\n\n\n \n\n\nIn *extremal Goodhart*, optimization pushes you outside the range where the correlation exists, into portions of the distribution which behave very differently.\n\n\nThis is especially scary because it tends to involve optimizers behaving in sharply different ways in different contexts, often with little or no warning. 
You might not be able to observe the proxy breaking down at all when you have weak optimization, but once the optimization becomes strong enough, you can enter a very different domain.\n\n\nThe difference between extremal Goodhart and regressional Goodhart is related to the classical interpolation/extrapolation distinction.\n\n\n \n\n\n![Interpolation versus extrapolation](https://intelligence.org/wp-content/uploads/2020/08/Extremal-Extrapolation.png)\n\n\nBecause extremal Goodhart involves a sharp change in behavior as the system is scaled up, it’s harder to anticipate than regressional Goodhart.\n\n\n \n\n\n![Sharp changes in the proxy's adequacy as capability increases create new options](https://intelligence.org/wp-content/uploads/2020/08/Sudden-Change.png)\n\n\n \n\n\nAs in the regressional case, a Bayesian solution addresses this concern in principle, if you trust a probability distribution to reflect the possible risks sufficiently well. However, the realizability concern seems even more prominent here.\n\n\nCan a prior be trusted to anticipate problems with proposals, when those proposals have been highly optimized to look good to that specific prior? Certainly a human’s judgment couldn’t be trusted under such conditions—an observation which suggests that this problem will remain even if a system’s judgments about values *perfectly reflect* a human’s.\n\n\nWe might say that the problem is this: “typical” outputs avoid extremal Goodhart, but “optimizing too hard” takes you out of the realm of the typical.\n\n\nBut how can we formalize “optimizing too hard” in decision-theoretic terms?\n\n\n[*Quantilization*](https://intelligence.org/files/QuantilizersSaferAlternative.pdf) offers a formalization of “optimize this some, but don’t optimize too much”.\n\n\nImagine a proxy \\(V(x)\\) as a “corrupted” version of the function we really want, \\(U(x)\\). 
There might be different regions where the corruption is better or worse.\n\n\nSuppose that we can additionally specify a “trusted” probability distribution \\(P(x)\\), for which we are confident that the average error is below some threshold \\(c\\).\n\n\n![Quantilizers require a trusted probability distribution with bounded error](https://intelligence.org/wp-content/uploads/2020/08/Quantilizer-Trusted.png)\n\n\nBy stipulating \\(P\\) and \\(c\\), we give information about where to find low-error points, without needing to have any estimates of \\(U\\) or of the actual error at any one point.\n\n\n \n\n\n![Quantilizers don't require us to know the true utility function](https://intelligence.org/wp-content/uploads/2020/08/Quantilizer-Uncertainty.png)\n\n\nWhen we select actions from \\(P\\) at random, we can be sure, regardless of which action we draw, that there’s a low probability of high error.\n\n\n![Quantilizers select a random relatively-safe action](https://intelligence.org/wp-content/uploads/2020/08/Quantilizer-Randomize.png)\n\n\nSo, how do we use this to optimize? A quantilizer selects from \\(P\\), discarding all but the top fraction \\(f\\); for example, the top 1%. In this visualization, I’ve judiciously chosen a fraction that still has most of the probability concentrated on the “typical” options, rather than on outliers:\n\n\n![Quantilizers filter out all but the top fraction of actions](https://intelligence.org/wp-content/uploads/2020/08/Quantilizer-Quantile.png)\n\n\nBy quantilizing, we can guarantee that if we overestimate how good something is, we’re overestimating by at most \\(\\frac{c}{f}\\) in expectation. 
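Concretely, the quantilization procedure can be sketched in a few lines of Python. This is a toy: the trusted distribution, the corrupted proxy, and every number here are invented for illustration.

```python
import random

random.seed(0)

# Trusted distribution P: uniform on [0, 1]. True utility U(a) = a.
# The proxy V agrees with U except on a small corrupted region
# (1% of P's mass), where it is inflated by 100. So P's average
# error is c = 0.01 * 100 = 1.
actions = [random.random() for _ in range(10_000)]
proxy = lambda a: a + (100.0 if a > 0.99 else 0.0)

# Hard optimization of the proxy lands in the corrupted region:
maximized = max(actions, key=proxy)

# Quantilizing with f = 0.05: keep the top 5% under the proxy,
# then choose uniformly at random from that set.
f = 0.05
top = sorted(actions, key=proxy, reverse=True)[: int(f * len(actions))]
draws = [random.choice(top) for _ in range(500)]

# Most draws are merely good actions. The expected overestimate is
# bounded by c / f = 20, and here the bound is nearly tight because
# all of the proxy error sits inside the top fraction.
corrupt_fraction = sum(a > 0.99 for a in draws) / len(draws)
```

Shrinking `f` here optimizes harder and loosens the \\(\\frac{c}{f}\\) guarantee, which is exactly the risky knob discussed below.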
This is because in the worst case, all of the overestimation was of the \\(f\\) best options.\n\n\n![In the worst case, all of the overestimation is in the top f% of actions, as ranked by the proxy](https://intelligence.org/wp-content/uploads/2020/08/Quantilizer-Risk.png)\n\n\nWe can therefore choose an acceptable risk level, \\(r = \\frac{c}{f}\\), and set the parameter \\(f\\) as \\(\\frac{c}{r}\\).\n\n\nQuantilization is in some ways very appealing, since it allows us to specify safe classes of actions without trusting every individual action in the class—or without trusting *any* individual action in the class.\n\n\nIf you have a sufficiently large heap of apples, and there’s only one rotten apple in the heap, choosing randomly is still very likely safe. By “optimizing less hard” and picking a random good-enough action, we make the really extreme options low-probability. In contrast, if we had optimized as hard as possible, we might have ended up selecting from *only* bad apples.\n\n\nHowever, this approach also leaves a lot to be desired. Where do “trusted” distributions come from? How do you estimate the expected error \\(c\\), or select the acceptable risk level \\(r\\)? Quantilization is a risky approach because \\(r\\) gives you a knob to turn that will seemingly improve performance, while increasing risk, until (possibly sudden) failure.\n\n\nAdditionally, quantilization doesn’t seem likely to *tile*. That is, a quantilizing agent has no special reason to preserve the quantilization algorithm when it makes improvements to itself or builds new agents.\n\n\nSo there seems to be room for improvement in how we handle extremal Goodhart.\n\n\n \n\n\n![Causal Goodhart](https://intelligence.org/wp-content/uploads/2018/10/Causal.png)\n\n\n \n\n\nAnother way optimization can go wrong is when the act of selecting for a proxy breaks the connection to what we care about. 
*Causal Goodhart* happens when you observe a correlation between proxy and goal, but when you intervene to increase the proxy, you fail to increase the goal because the observed correlation was not causal in the right way.\n\n\nAn example of causal Goodhart is that you might try to make it rain by carrying an umbrella around. The only way to avoid this sort of mistake is to get [counterfactuals](https://intelligence.org/embedded-decisions#3) right.\n\n\nThis might seem like punting to decision theory, but the connection here enriches robust delegation and decision theory alike.\n\n\nCounterfactuals have to address concerns of trust due to tiling concerns—the need for decision-makers to reason about their own future decisions. At the same time, trust has to address counterfactual concerns because of causal Goodhart.\n\n\nOnce again, one of the big challenges here is realizability. As we noted in our discussion of embedded world-models, even if you have the right theory of how counterfactuals work in general, Bayesian learning doesn’t provide much of a guarantee that you’ll learn to select actions well, unless we assume realizability.\n\n\n \n\n\n![Adversarial Goodhart](https://intelligence.org/wp-content/uploads/2018/10/Adversarial.png)\n\n\n \n\n\nFinally, there is *adversarial Goodhart*, in which agents actively make our proxy worse by intelligently manipulating it.\n\n\nThis category is what people most often have in mind when they interpret Goodhart’s remark. And at first glance, it may not seem as relevant to our concerns here. We want to understand in formal terms how agents can trust their future selves, or trust helpers they built from scratch. What does that have to do with adversaries?\n\n\nThe short answer is: when searching in a large space which is sufficiently rich, there are bound to be some elements of that space which implement adversarial strategies. 
Understanding optimization in general requires us to understand how sufficiently smart optimizers can avoid adversarial Goodhart. (We’ll come back to this point in our discussion of subsystem alignment.)\n\n\nThe adversarial variant of Goodhart’s law is even harder to observe at low levels of optimization, both because the adversaries won’t want to start manipulating until after test time is over, and because adversaries that come from the system’s own optimization won’t show up until the optimization is powerful enough.\n\n\nThese four forms of Goodhart’s law work in very different ways—and roughly speaking, they tend to start appearing at successively higher levels of optimization power, beginning with regressional Goodhart and proceeding to causal, then extremal, then adversarial. So be careful not to think you’ve conquered Goodhart’s law because you’ve solved some of them.\n\n\n\n\n---\n\n\nBesides anti-Goodhart measures, it would obviously help to be able to specify what we want precisely. Remember that none of these problems would come up if a system were optimizing what we wanted directly, rather than optimizing a proxy.\n\n\nUnfortunately, this is hard. So can the AI system we’re building help us with this?\n\n\nMore generally, can a successor agent help its predecessor solve this? Maybe it can use its intellectual advantages to figure out what we want?\n\n\nAIXI learns what to do through a reward signal which it gets from the environment. We can imagine humans have a button which they press when AIXI does something they like.\n\n\nThe problem with this is that AIXI will apply its intelligence to the problem of taking control of the reward button. This is the problem of **wireheading**.\n\n\nThis kind of behavior is potentially very difficult to anticipate; the system may deceptively behave as intended during training, planning to take control after deployment. 
This is called a “treacherous turn”.\n\n\nMaybe we build the reward button *into* the agent, as a black box which issues rewards based on what is going on. The box could be an intelligent sub-agent in its own right, which figures out what rewards humans would want to give. The box could even defend itself by issuing punishments for actions aimed at modifying the box.\n\n\nIn the end, though, if the agent understands the situation, it will be motivated to take control anyway.\n\n\nIf the agent is told to get high output from “the button” or “the box”, then it will be motivated to hack those things. However, if you run the expected outcomes of plans through the actual reward-issuing box, then plans to hack the box are evaluated by the box itself, which won’t find the idea appealing.\n\n\nDaniel Dewey calls the second sort of agent an [observation-utility maximizer](https://intelligence.org/files/LearningValue.pdf). (Others have included observation-utility agents within a more general notion of reinforcement learning.)\n\n\nI find it very interesting how you can try all sorts of things to stop an RL agent from wireheading, but the agent keeps working against it. Then, you make the shift to observation-utility agents and the problem vanishes.\n\n\nHowever, we still have the problem of specifying  \\(U\\). Daniel Dewey points out that observation-utility agents can still use learning to approximate  \\(U\\) over time; we just can’t treat  \\(U\\) as a black box. An RL agent tries to learn to predict the reward function, whereas an observation-utility agent uses estimated utility functions from a human-specified value-learning prior.\n\n\nHowever, it’s still difficult to specify a learning process which doesn’t lead to other problems. For example, if you’re trying to learn what humans want, how do you robustly identify “humans” in the world? 
Merely statistically decent object recognition could lead back to wireheading.\n\n\nEven if you successfully solve that problem, the agent might correctly locate value in the human, but might still be motivated to change human values to be easier to satisfy. For example, suppose there is a drug which modifies human preferences to only care about using the drug. An observation-utility agent could be motivated to give humans that drug in order to make its job easier. This is called the *human manipulation* problem.\n\n\nAnything marked as the true repository of value gets hacked. Whether this is one of the four types of Goodharting, or a fifth, or something all its own, it seems like a theme.\n\n\n \n\n\n\n![Wireheading and Goodhart's law](https://intelligence.org/wp-content/uploads/2020/08/Goodhart-Wireheading.png)\n\n\n\n \n\n\nThe challenge, then, is to create [*stable pointers to what we value*](https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b3/stable-pointers-to-value-an-agent-embedded-in-its-own-utility-function): an indirect reference to values not directly available to be optimized, which doesn’t thereby encourage hacking the repository of value.\n\n\nOne important point is made by Tom Everitt et al. in “[Reinforcement Learning with a Corrupted Reward Channel](https://arxiv.org/abs/1705.08417)“: the way you set up the feedback loop makes a huge difference.\n\n\nThey draw the following picture:\n\n\n \n\n\n![Standard and decoupled RL](https://intelligence.org/wp-content/uploads/2018/10/Decoupled-RL.png)\n\n\n* In Standard RL, the feedback about the value of a state comes from the state itself, so corrupt states can be “self-aggrandizing”.\n* In Decoupled RL, the feedback about the quality of a state comes from some other state, making it possible to learn correct values even when some feedback is corrupt.\n\n\nIn some sense, the challenge is to put the original, small agent in the feedback loop in the right way. 
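The standard-versus-decoupled picture above can be caricatured in a few lines. This is a minimal sketch of my own, not Everitt et al.'s formalism: one "corrupt" state self-reports an inflated value, an agent that asks each state about itself is fooled, and an agent that aggregates decoupled reports from the other states is not:

```python
# Toy illustration (not Everitt et al.'s formalism): states report values.
# true_value: what the overseer actually wants; the "corrupt" state lies about itself.
true_value = {"A": 1.0, "B": 2.0, "corrupt": 0.0}

def report(reporting_state, about_state):
    """Feedback about `about_state` as issued from `reporting_state`."""
    if reporting_state == "corrupt" and about_state == "corrupt":
        return 100.0  # a corrupt state is "self-aggrandizing"
    return true_value[about_state]

states = list(true_value)

# Standard RL: the value estimate for each state comes from that state itself.
standard = {s: report(s, s) for s in states}

# Decoupled RL: feedback about each state is gathered from *other* states,
# then aggregated robustly (here: a middle report, so one liar can't dominate).
def middle(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]

decoupled = {s: middle([report(r, s) for r in states if r != s]) for s in states}

best_standard = max(standard, key=standard.get)
best_decoupled = max(decoupled, key=decoupled.get)
print(best_standard, best_decoupled)  # the standard agent is fooled; the decoupled one is not
```

The corrupt state only gets to grade itself in the standard setup; once grading is decoupled, its lie is outvoted.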
However, the problems with updateless reasoning mentioned earlier make this hard; the original agent doesn’t know enough.\n\n\nOne way to try to address this is through [intelligence amplification](https://blog.openai.com/amplifying-ai-training/): try to turn the original agent into a more capable one with the same values, rather than creating a successor agent from scratch and trying to get value loading right.\n\n\nFor example, Paul Christiano proposes an approach in which the small agent is simulated many times in a large tree, which can perform complex computations by splitting problems into parts.\n\n\nHowever, this is still fairly demanding for the small agent: it doesn’t just need to know how to break problems down into more tractable pieces; it also needs to know how to do so without giving rise to malign subcomputations.\n\n\nFor example, since the small agent can use the copies of itself to get a lot of computational power, it could easily try to use a brute-force search for solutions that ends up running afoul of Goodhart’s Law.\n\n\nThis issue is the subject of the next section: **subsystem alignment**.\n\n\n\n\n---\n\n\nThis is part of Abram Demski and Scott Garrabrant’s [Embedded Agency](https://intelligence.org/embedded-agency/) sequence. 
[**Continue to the next part**](https://intelligence.org/embedded-subsystems).\n\n\n\nThe post [Robust Delegation](https://intelligence.org/2018/11/04/embedded-delegation/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-04T16:56:37Z", "authors": ["Abram Demski"], "summaries": []} -{"id": "be81f15992c4512fd78a4124f207a8ed", "title": "Embedded World-Models", "url": "https://intelligence.org/2018/11/02/embedded-models/", "source": "miri", "source_type": "blog", "text": "An agent which is larger than its environment can:\n\n\n \n\n\n* Hold an exact model of the environment in its head.\n* Think through the [consequences](https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation) of every potential course of action.\n* If it doesn’t know the environment perfectly, hold [every *possible* way](https://intelligence.org/files/RealisticWorldModels.pdf) the environment could be in its head, as is the case with Bayesian uncertainty.\n\n\n \n\n\nAll of these are typical of notions of rational agency.\n\n\nAn [embedded agent](https://intelligence.org/embedded-agents/) can’t do any of those things, at least not in any straightforward way.\n\n\n \n\n\n![Emmy the embedded agent](https://intelligence.org/wp-content/uploads/2018/10/Emmy-EWM.png)\n\n\n \n\n\nOne difficulty is that, since the agent is part of the environment, modeling the environment in every detail would require the agent to model itself in every detail, which would require the agent’s self-model to be as “big” as the whole agent. An agent can’t fit inside its own head.\n\n\nThe lack of a crisp agent/environment boundary forces us to grapple with paradoxes of self-reference. As if representing the rest of the world weren’t already hard enough.\n\n\n**Embedded World-Models** have to represent the world in a way more appropriate for embedded agents. 
Problems in this cluster include:\n\n\n \n\n\n\n> \n> * the “realizability” / “grain of truth” problem: the real world isn’t in the agent’s hypothesis space\n> * logical uncertainty\n> * high-level models\n> * multi-level models\n> * ontological crises\n> * naturalized induction, the problem that the agent must incorporate its model of itself into its world-model\n> * anthropic reasoning, the problem of reasoning about how many copies of yourself exist\n> \n> \n> \n\n\n \n\n\n\n\n\n---\n\n\nIn a Bayesian setting, where an agent’s uncertainty is quantified by a probability distribution over possible worlds, a common assumption is “**realizability**”: the true underlying environment which is generating the observations is assumed to have at least *some* probability in the prior.\n\n\nIn game theory, this same property is described by saying a prior has a “grain of truth”. It should be noted, though, that there are additional barriers to getting this property in a game-theoretic setting; so, in their common usage cases, “grain of truth” is technically demanding while “realizability” is a technical convenience.\n\n\nRealizability is not totally necessary in order for Bayesian reasoning to make sense. If you think of a set of hypotheses as “experts”, and the current posterior probability as how much you “trust” each expert, then learning according to Bayes’ Law, \\(P(h|e) = \\frac{P(e|h) \\cdot P(h)}{P(e)}\\), ensures a *relative bounded loss* property.\n\n\nSpecifically, if you use a prior \\(\\pi\\), the amount worse you are in comparison to each expert \\(h\\) is at most \\(-\\log \\pi(h)\\), since you assign at least probability \\(\\pi(h) \\cdot h(e)\\) to seeing a sequence of evidence \\(e\\). Intuitively, \\(\\pi(h)\\) is your initial trust in expert \\(h\\), and in each case where it is even a little bit more correct than you, you increase your trust accordingly. 
The way you do this ensures you assign an expert probability 1 and hence copy it precisely before you lose more than \\(-\\log \\pi(h)\\) compared to it.\n\n\nThe prior that [AIXI](https://intelligence.org/embedded-agents#3) is based on is the *Solomonoff prior*. It is defined as the output of a universal Turing machine (UTM) whose inputs are coin-flips. \n\n\nIn other words, feed a UTM a random program. Normally, you’d think of a UTM as only being able to simulate deterministic machines. Here, however, the initial inputs can instruct the UTM to use the rest of the infinite input tape as a source of randomness to simulate a *stochastic* Turing machine.\n\n\nCombining this with the previous idea about viewing Bayesian learning as a way of allocating “trust” to “experts” which meets a bounded loss condition, we can see the Solomonoff prior as a kind of ideal machine learning algorithm which can learn to act like any algorithm you might come up with, no matter how clever.\n\n\nFor this reason, we shouldn’t *necessarily* think of AIXI as “assuming the world is computable”, even though it reasons via a prior over computations. It’s getting bounded loss on its predictive accuracy *as compared with* any computable predictor. We should rather say that AIXI assumes all possible algorithms are computable, not that the world is.\n\n\nHowever, lacking realizability can cause trouble if you are looking for anything more than bounded-loss predictive accuracy:\n\n\n* the posterior can oscillate forever;\n* probabilities may not be calibrated;\n* estimates of statistics such as the mean may be arbitrarily bad;\n* estimates of latent variables may be bad;\n* and the identification of causal structure may not work.\n\n\nSo does AIXI perform well without a realizability assumption? We don’t know. 
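The relative bounded loss property is straightforward to check numerically. A minimal sketch (an ordinary Bayesian mixture over a few made-up "experts", nothing AIXI-specific): the mixture's cumulative log loss never exceeds any expert's by more than \\(-\\log \\pi(h)\\).

```python
import math
import random

random.seed(0)

# Experts: each assigns a fixed probability to the next bit being 1.
experts = [0.1, 0.5, 0.9]
prior = [1 / 3] * 3  # pi(h): initial trust in each expert

# A binary sequence; bit = 1 with probability 0.8 (note: *not* in the expert class).
data = [1 if random.random() < 0.8 else 0 for _ in range(200)]

posterior = prior[:]
mixture_loss = 0.0
expert_loss = [0.0] * len(experts)

for bit in data:
    # Mixture's predictive probability for this bit, using current trust levels.
    p_mix = sum(w * (p if bit else 1 - p) for w, p in zip(posterior, experts))
    mixture_loss += -math.log(p_mix)
    for i, p in enumerate(experts):
        likelihood = p if bit else 1 - p
        expert_loss[i] += -math.log(likelihood)
        posterior[i] *= likelihood  # Bayes' Law, unnormalized
    total = sum(posterior)
    posterior = [w / total for w in posterior]

# Relative bounded loss: worse than expert h by at most -log pi(h).
for i, loss in enumerate(expert_loss):
    assert mixture_loss - loss <= -math.log(prior[i]) + 1e-9
```

Note that the bound holds even though the true bit-source is outside the hypothesis class, which is exactly the point: bounded loss does not require realizability.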
Despite getting bounded loss for *predictions* without realizability, existing optimality results for its *actions* require an added realizability assumption.\n\n\nFirst, if the environment really *is* sampled from the Solomonoff distribution, AIXI gets the [maximum expected reward](https://intelligence.org/embedded-decisions). But this is fairly trivial; it is essentially the definition of AIXI.\n\n\nSecond, if we modify AIXI to take somewhat randomized actions—Thompson sampling—there is an *asymptotic* optimality result for environments which act like any stochastic Turing machine.\n\n\nSo, either way, realizability was assumed in order to prove anything. (See Jan Leike, [*Nonparametric General Reinforcement Learning*](https://jan.leike.name/publications/Nonparametric%20General%20Reinforcement%20Learning%20-%20Leike%202016.pdf).)\n\n\nBut the concern I’m pointing at is *not* “the world might be uncomputable, so we don’t know if AIXI will do well”; this is more of an illustrative case. The concern is that AIXI is only able to define intelligence or rationality by constructing an agent *much, much bigger* than the environment which it has to learn about and act within.\n\n\n \n\n\n![Alexei the dualistic agent](https://intelligence.org/wp-content/uploads/2018/10/Alexei-EWM.png)\n\n\n \n\n\nLaurent Orseau provides a way of thinking about this in “[Space-Time Embedded Intelligence](https://www.cs.utexas.edu/~ring/Orseau,%20Ring%3B%20Space-Time%20Embedded%20Intelligence,%20AGI%202012.pdf)”. However, his approach defines the intelligence of an agent in terms of a sort of super-intelligent designer who thinks about reality from outside, selecting an agent to [place into the environment](https://intelligence.org/embedded-decisions#6).\n\n\nEmbedded agents don’t have the luxury of stepping outside of the universe to think about how to think. 
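As an aside on the Thompson-sampling modification mentioned above: in a bandit setting vastly simpler than AIXI's (a textbook sketch, not Leike's construction), "somewhat randomized actions" means sampling one plausible world from the posterior and acting optimally in that sampled world.

```python
import random

random.seed(1)

# Two-armed Bernoulli bandit; the true payout rates are unknown to the agent.
true_rates = [0.3, 0.7]

# Beta(1, 1) posterior over each arm's rate, tracked as (alpha, beta) counts.
params = [[1, 1], [1, 1]]

pulls = [0, 0]
for _ in range(2000):
    # Thompson sampling: draw one hypothesis about each arm from the posterior,
    # then act optimally *as if the sampled hypothesis were true*.
    samples = [random.betavariate(a, b) for a, b in params]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_rates[arm] else 0
    params[arm][0] += reward
    params[arm][1] += 1 - reward
    pulls[arm] += 1

print(pulls)  # the better arm ends up pulled far more often
```

The randomization is what keeps the agent exploring: as long as the posterior leaves any doubt, the worse arm still gets sampled occasionally.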
What we would like would be a theory of rational belief for *situated* agents which provides foundations as strong as those Bayesianism provides for dualistic agents.\n\n\nImagine a computer science theory person who is having a disagreement with a programmer. The theory person is making use of an abstract model. The programmer is complaining that the abstract model isn’t something you would ever run, because it is computationally intractable. The theory person responds that the point isn’t to ever run it. Rather, the point is to understand some phenomenon which will also be relevant to more tractable things which you would want to run.\n\n\nI bring this up in order to emphasize that my perspective is a lot more like the theory person’s. I’m not talking about AIXI to say “AIXI is an idealization you can’t run”. The answers to the puzzles I’m pointing at don’t need to run. I just want to understand some phenomena.\n\n\nHowever, sometimes the very thing that makes a theoretical model less tractable also makes that model too different from the phenomenon we’re interested in.\n\n\nThe *way* AIXI wins games is by assuming we can do true Bayesian updating over a hypothesis space, assuming the world is in our hypothesis space, etc. So it can tell us something about the aspect of realistic agency that’s approximately doing Bayesian updating over an approximately-good-enough hypothesis space. 
But embedded agents don’t just need approximate solutions to that problem; they need to solve several problems that are *different in kind* from that problem.\n\n\n\n\n---\n\n\n\nOne major obstacle a theory of embedded agency must deal with is **self-reference**.\n\n\nParadoxes of self-reference such as the [liar paradox](https://en.wikipedia.org/wiki/Liar_paradox) make it not just wildly impractical, but in a certain sense *impossible* for an agent’s world-model to accurately reflect the world.\n\n\nThe liar paradox concerns the status of the sentence “This sentence is not true”. If it were true, it must be false; and if not true, it must be true.\n\n\nThe difficulty comes in part from trying to draw a map of a territory which includes the map itself.\n\n\n \n\n\n![Self-reference in embedded agents](https://intelligence.org/wp-content/uploads/2018/11/Self-Reference.png)\n\n\n \n\n\nThis is fine if the world “holds still” for us; but because the map is in the world, different maps create different worlds.\n\n\nSuppose our goal is to make an accurate map of the final route of a road which is currently under construction. Suppose we *also* know that the construction team will get to see our map, and that construction will proceed so as to disprove whatever map we make. This puts us in a liar-paradox-like situation.\n\n\n \n\n\n![A liar-paradox-like situation](https://intelligence.org/wp-content/uploads/2020/08/Liar-Paradox.png)\n\n\n \n\n\nProblems of this kind become relevant for decision-making in the theory of games. A simple game of rock-paper-scissors can introduce a liar paradox if the players try to win, and can predict each other better than chance.\n\n\nGame theory solves this type of problem with game-theoretic equilibria. But the problem ends up coming back in a different way.\n\n\nI mentioned that the problem of realizability takes on a different character in the context of game theory. 
In an ML setting, realizability is a potentially *unrealistic* assumption, but can usually be assumed consistently nonetheless.\n\n\nIn game theory, on the other hand, the assumption itself may be inconsistent. This is because games commonly yield paradoxes of self-reference.\n\n\n\n \n\n\n![Reflection in game theory](https://intelligence.org/wp-content/uploads/2018/11/Reflection-in-Game-Theory.png)\n\n\n \n\n\n\nBecause there are so many agents, it is no longer possible in game theory to conveniently make an “agent” a thing which is larger than a world. So game theorists are forced to investigate notions of rational agency which can handle a large world.\n\n\nUnfortunately, this is done by splitting up the world into “agent” parts and “non-agent” parts, and handling the agents in a special way. This is almost as bad as dualistic models of agency.\n\n\nIn rock-paper-scissors, the liar paradox is resolved by stipulating that each player play each move with \\(1/3\\) probability. If one player plays this way, then the other loses nothing by doing so. This way of introducing probabilistic play to resolve would-be paradoxes of game theory is called a [*Nash equilibrium*](https://en.wikipedia.org/wiki/Nash_equilibrium).\n\n\nWe can use Nash equilibria to prevent the assumption that the agents correctly understand the world they’re in from being inconsistent. However, that works just by telling the agents what the world looks like. What if we want to model agents who learn about the world, more like AIXI?\n\n\nThe **grain of truth problem** is the problem of formulating a reasonably bound prior probability distribution which would allow agents playing games to place *some* positive probability on each other’s true (probabilistic) behavior, without knowing it precisely from the start.\n\n\nUntil recently, known solutions to the problem were quite limited. 
Benja Fallenstein, Jessica Taylor, and Paul Christiano’s “[Reflective Oracles: A Foundation for Classical Game Theory](https://arxiv.org/abs/1508.04145)” provides a very general solution. For details, see “[A Formal Solution to the Grain of Truth Problem](https://intelligence.org/2016/06/30/grain-of-truth/)” by Jan Leike, Jessica Taylor, and Benja Fallenstein.\n\n\nYou might think that stochastic Turing machines can represent Nash equilibria just fine.\n\n\n \n\n\n![A stochastic Turing machine yielding a Nash equilibrium](https://intelligence.org/wp-content/uploads/2020/08/Nash-TM.png)\n\n\n \n\n\nBut if you’re trying to produce Nash equilibria *as a result of reasoning about other agents*, you’ll run into trouble. If each agent models the other’s computation and tries to run it to see what the other agent does, you’ve just got an infinite loop.\n\n\nThere are some questions Turing machines just can’t answer—in particular, questions about the behavior of Turing machines. The halting problem is the classic example.\n\n\nTuring studied “oracle machines” to examine what would happen if we could answer such questions. An oracle is like a book containing some answers to questions which we were unable to answer before.\n\n\nBut ordinarily, we get a [hierarchy](https://en.wikipedia.org/wiki/Turing_jump). 
Type B machines can answer questions about whether type A machines halt, type C machines have the answers about types A and B, and so on, but no machines have answers about their own type.\n\n\n \n\n\n![A hierarchy of Turing machines that can solve the halting problem for lower-level machines](https://intelligence.org/wp-content/uploads/2020/08/Turing-Jump.png)\n\n\n \n\n\nReflective oracles work by twisting the ordinary Turing universe back on itself, so that rather than an infinite hierarchy of ever-stronger oracles, you define an oracle that serves as its own oracle machine.\n\n\n \n\n\n![Reflective oracle](https://intelligence.org/wp-content/uploads/2020/08/Reflective-Oracle.png)\n\n\n \n\n\nThis would normally introduce contradictions, but reflective oracles avoid this by randomizing their output in cases where they would run into paradoxes. So reflective oracle machines *are* stochastic, but they’re more powerful than regular stochastic Turing machines.\n\n\nThat’s how reflective oracles address the problems we mentioned earlier of a map that’s itself part of the territory: randomize.\n\n\n \n\n\n![Reflective oracles randomize as needed to avoid paradox](https://intelligence.org/wp-content/uploads/2020/08/Reflective-Oracle-Randomizing.png)\n\n\n \n\n\nReflective oracles also solve the problem with game-theoretic notions of rationality I mentioned earlier. They allow agents to be reasoned about in the same manner as other parts of the environment, rather than being treated as a fundamentally special case. They’re all just computations-with-oracle-access.\n\n\nHowever, models of rational agents based on reflective oracles still have several major limitations. 
One of these is that agents are required to have unlimited processing power, just like AIXI, and so are assumed to know all of the consequences of their own beliefs.\n\n\nIn fact, knowing all the consequences of your beliefs—a property known as *logical omniscience*—turns out to be rather core to classical Bayesian rationality.\n\n\n\n\n---\n\n\nSo far, I’ve been talking in a fairly naive way about the agent having beliefs about hypotheses, and the real world being or not being in the hypothesis space.\n\n\nIt isn’t really clear what any of that means.\n\n\nDepending on how we define things, it may actually be quite possible for an agent to be smaller than the world and yet contain the right world-model—it might know the true physics and initial conditions, but only be capable of inferring their consequences very approximately.\n\n\nHumans are certainly used to living with shorthands and approximations. But realistic as this scenario may be, it is not in line with what it usually means for a Bayesian to know something. A Bayesian knows the consequences of all of its beliefs.\n\n\nUncertainty about the consequences of your beliefs is [**logical uncertainty**](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf). In this case, the agent might be empirically certain of a unique mathematical description pinpointing which universe she’s in, while being logically uncertain of most consequences of that description.\n\n\nModeling logical uncertainty requires us to have a combined theory of logic (reasoning about implications) and probability (degrees of belief).\n\n\nLogic and probability theory are two great triumphs in the codification of rational thought. Logic provides the best tools for thinking about self-reference, while probability provides the best tools for thinking about decision-making. 
However, the two don’t work together as well as one might think.\n\n\n \n\n\n![Probability and logic](https://intelligence.org/wp-content/uploads/2018/11/Probability-Logic.png)\n\n\n \n\n\nThey may seem superficially compatible, since probability theory is an extension of Boolean logic. However, Gödel’s first incompleteness theorem shows that any sufficiently rich logical system is incomplete: not only does it fail to decide every sentence as true or false, but it also has no computable extension which manages to do so.\n\n\n(See the post “[An Untrollable Mathematician Illustrated](https://www.lesswrong.com/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated)” for more illustration of how this messes with probability theory.)\n\n\nThis also applies to probability distributions: no computable distribution can assign probabilities in a way that’s consistent with a sufficiently rich theory. This forces us to choose between using an *un*computable distribution, or using a distribution which is inconsistent.\n\n\nSounds like an easy choice, right? The inconsistent theory is at least computable, and we are after all trying to develop a theory of logical *non*-omniscience. We can just continue to update on facts which we prove, bringing us closer and closer to consistency.\n\n\nUnfortunately, this doesn’t work out so well, for reasons which connect back to realizability. Remember that there are *no* computable probability distributions consistent with all consequences of sound theories. So our non-omniscient prior doesn’t even contain a single correct *hypothesis*.\n\n\nThis causes pathological behavior as we condition on more and more true mathematical beliefs. 
Beliefs wildly oscillate rather than approaching reasonable estimates.\n\n\nTaking a Bayesian prior on mathematics, and updating on whatever we prove, does not seem to capture mathematical intuition and heuristic conjecture very well—unless we restrict the domain and craft a sensible prior.\n\n\nProbability is like a scale, with worlds as weights. An observation eliminates some of the possible worlds, removing weights and shifting the balance of beliefs.\n\n\nLogic is like a tree, growing from the seed of axioms according to inference rules. For real-world agents, the process of growth is never complete; you never know all the consequences of each belief.\n\n\n![Probability as scales, and logic as a tree](https://intelligence.org/wp-content/uploads/2018/11/Scales-and-Tree.png)\n\n\nWithout knowing how to combine the two, we can’t characterize reasoning probabilistically about math. But the “scale versus tree” problem also means that we don’t know how ordinary empirical reasoning works.\n\n\nBayesian hypothesis testing requires each hypothesis to clearly declare which probabilities it assigns to which observations. That way, you know how much to rescale the odds when you make an observation. If we don’t know the consequences of a belief, we don’t know how much credit to give it for making predictions.\n\n\nThis is like not knowing where to place the weights on the scales of probability. We could try putting weights on both sides until a proof rules one out, but then the beliefs just oscillate forever rather than doing anything useful.\n\n\nThis forces us to grapple directly with the problem of a world that’s larger than the agent. 
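For contrast, here is how mechanical the weighing is when every hypothesis *does* declare its probabilities. A minimal sketch with two fully specified coin hypotheses (my own toy numbers): each observation rescales the odds by a likelihood ratio the hypotheses announced in advance.

```python
from fractions import Fraction

# Two fully specified hypotheses about a coin: each declares P(heads) exactly.
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}

# Odds of biased : fair, starting at 1 : 1.
odds = Fraction(1)

observations = ["H", "H", "T", "H"]
for obs in observations:
    # Each observation rescales the odds by the likelihood ratio --
    # possible only because both hypotheses declared their predictions.
    if obs == "H":
        odds *= p_heads["biased"] / p_heads["fair"]
    else:
        odds *= (1 - p_heads["biased"]) / (1 - p_heads["fair"])

posterior_biased = odds / (1 + odds)
print(posterior_biased)  # -> 27/43
```

Logical uncertainty breaks exactly this step: a mathematical "hypothesis" whose consequences you haven't derived can't tell you which likelihood ratio to apply.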
We want some notion of boundedly rational beliefs about uncertain consequences; but *any* computable beliefs about logic must have left out *something*, since the tree of logical implications will grow larger than any container.\n\n\nFor a Bayesian, the scales of probability are balanced in precisely such a way that [no Dutch book](https://arbital.com/p/expected_utility_formalism/?l=7hh) can be made against them—no sequence of bets that are a sure loss. But you can only account for all Dutch books if you know all the consequences of your beliefs. Absent that, someone who has explored other parts of the tree can Dutch-book you.\n\n\nBut human mathematicians don’t seem to run into any special difficulty in reasoning about mathematical uncertainty, any more than we do with empirical uncertainty. So what characterizes good reasoning under mathematical uncertainty, if not immunity to making bad bets?\n\n\nOne answer is to weaken the notion of Dutch books so that we only allow bets based on *quickly computable* parts of the tree. This is one of the ideas behind Garrabrant et al.’s “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/)”, an early attempt at defining something like “Solomonoff induction, but for reasoning that incorporates mathematical uncertainty”.\n\n\n\n\n---\n\n\nAnother consequence of the fact that the world is bigger than you is that you need to be able to use **[high-level world models](https://arbital.com/p/ontology_identification/)**: models which involve things like tables and chairs.\n\n\nThis is related to the classical symbol grounding problem; but since we want a formal analysis which increases our [trust](https://intelligence.org/embedded-agents#rd) in some system, the kind of model which interests us is somewhat different. 
This also relates to [transparency](https://intelligence.org/embedded-agents#sa) and [informed oversight](https://ai-alignment.com/the-informed-oversight-problem-1b51b4f66b35): world-models should be made out of understandable parts.\n\n\nA related question is how high-level reasoning and low-level reasoning relate to each other and to intermediate levels: **multi-level world models**.\n\n\nStandard probabilistic reasoning doesn’t provide a very good account of this sort of thing. It’s as though you have different Bayes nets which describe the world at different levels of accuracy, and processing power limitations force you to mostly use the less accurate ones, so you have to decide how to jump to the more accurate as needed.\n\n\nAdditionally, the models at different levels don’t line up perfectly, so you have a problem of translating between them; and the models may have serious contradictions between them. This might be fine, since high-level models are understood to be approximations anyway, or it could signal a serious problem in the higher- or lower-level models, requiring their revision.\n\n\nThis is especially interesting in the case of [**ontological crises**](https://intelligence.org/files/OntologicalCrises.pdf), in which objects we value turn out not to be a part of “better” models of the world.\n\n\nIt seems fair to say that everything humans value exists in high-level models only, which from a reductionistic perspective is “less real” than atoms and quarks. However, *because* our values aren’t defined on the low level, we are able to keep our values even when our knowledge of the low level radically shifts. (We would also like to be able to say something about what happens to values if the *high* level radically shifts.)\n\n\nAnother critical aspect of embedded world models is that the agent itself must be in the model, since the agent seeks to understand the world, and the world cannot be fully separated from oneself. 
This opens the door to difficult problems of self-reference and anthropic decision theory.\n\n\n**[Naturalized induction](https://wiki.lesswrong.com/wiki/Naturalized_induction)** is the problem of learning world-models which include yourself in the environment. This is challenging because (as Caspar Oesterheld [has put it](https://www.lesswrong.com/posts/kgsaSbJqWLtJfiCcz/naturalized-induction-a-challenge-for-evidential-and-causal)) there is a type mismatch between “mental stuff” and “physics stuff”.\n\n\nAIXI conceives of the environment as if it were made with a slot which the agent fits into. We might intuitively reason in this way, but we can also understand a physical perspective from which this looks like a bad model. We might imagine instead that the agent separately represents: self-knowledge available to introspection; hypotheses about what the universe is like; and a “[bridging hypothesis](https://www.lesswrong.com/posts/ethRJh2E7mSSjzCay/building-phenomenological-bridges)” connecting the two.\n\n\nThere are interesting questions of how this could work. There’s also the question of whether this is the right structure at all. It’s certainly not how I imagine babies learning.\n\n\n[Thomas Nagel](https://philpapers.org/rec/NAGTVF) would say that this way of approaching the problem involves “views from nowhere”; each hypothesis posits a world as if seen from outside. This is perhaps a strange thing to do.\n\n\n\n\n---\n\n\nA special case of agents needing to reason about themselves is agents needing to reason about their *future* self.\n\n\nTo make long-term plans, agents need to be able to model how they’ll act in the future, and have a certain kind of *trust* in their future goals and reasoning abilities. This includes trusting future selves that have learned and grown a great deal.\n\n\nIn a traditional Bayesian framework, “learning” means Bayesian updating. 
But as we noted, Bayesian updating requires that the agent *start out* large enough to consider a bunch of ways the world can be, and learn by ruling some of these out.\n\n\nEmbedded agents need *resource-limited*, *logically uncertain* updates, which don’t work like this.\n\n\nUnfortunately, Bayesian updating is the main way we know how to think about an agent progressing through time as one unified agent. The Dutch book justification for Bayesian reasoning is basically saying this kind of updating is the only way to not have the agent’s actions on Monday work at cross purposes, at least a little, to the agent’s actions on Tuesday.\n\n\nEmbedded agents are non-Bayesian. And non-Bayesian agents tend to get into wars with their future selves.\n\n\nWhich brings us to our next set of problems: **robust delegation**.\n\n\n\n\n---\n\n\nThis is part of Abram Demski and Scott Garrabrant’s [Embedded Agency](https://intelligence.org/embedded-agency/) sequence. Next part: [**Robust Delegation**](http://intelligence.org/embedded-delegation).\n\n\n\nThe post [Embedded World-Models](https://intelligence.org/2018/11/02/embedded-models/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-11-02T16:03:35Z", "authors": ["Scott Garrabrant"], "summaries": []} -{"id": "6240876e730303cdcb2660b81ca5eaa9", "title": "Decision Theory", "url": "https://intelligence.org/2018/10/31/embedded-decisions/", "source": "miri", "source_type": "blog", "text": "Decision theory and artificial intelligence typically try to compute something resembling\n\n\n$$\\underset{a \\ \\in \\ Actions}{\\mathrm{argmax}} \\ \\ f(a).$$\n\n\nI.e., maximize some function of the action. 
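In the dualistic setting this computation is as simple as it looks. A trivial sketch, with a made-up outcome function standing in for \\(f\\):

```python
# The dualistic picture: outcomes are a function of actions, full stop.
# `f` here is an illustrative made-up payoff, not anything from the post.
def f(action):
    payoffs = {"left": 3.0, "stay": 1.0, "right": 7.0}
    return payoffs[action]

actions = ["left", "stay", "right"]
best = max(actions, key=f)  # argmax over actions of f(a)
print(best)  # -> right
```

Everything that follows is about why an embedded agent can't take this three-line picture for granted.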
This tends to assume that we can detangle things enough to see outcomes as a function of actions.\n\n\nFor example, AIXI represents the agent and the environment as separate units which interact over time through clearly defined i/o channels, so that it can then choose actions maximizing reward.\n\n\n \n\n\n![AIXI](https://intelligence.org/wp-content/uploads/2018/10/AIXI.png)\n\n\n \n\n\nWhen the agent model is [a part of the environment model](http://intelligence.org/embedded-agency), it can be significantly less clear how to consider taking alternative actions.\n\n\n \n\n\n![Embedded agent](https://intelligence.org/wp-content/uploads/2018/10/Embedded-Agent.png)\n\n\n \n\n\nFor example, because the agent is [smaller than the environment](http://intelligence.org/embedded-agents#ewm), there can be other copies of the agent, or things very similar to the agent. This leads to contentious decision-theory problems such as [the Twin Prisoner’s Dilemma and Newcomb’s problem](https://intelligence.org/2017/10/22/fdt/).\n\n\nIf Emmy Model 1 and Emmy Model 2 have had the same experiences and are running the same source code, should Emmy Model 1 act like her decisions are steering both robots at once? 
Depending on how you draw the boundary around “yourself”, you might think you control the action of both copies, or only your own.\n\n\nThis is an instance of the problem of counterfactual reasoning: how do we evaluate hypotheticals like “What if the sun suddenly went out”?\n\n\nProblems of adapting **decision theory** to embedded agents include:\n\n\n \n\n\n\n> \n> * counterfactuals\n> * Newcomblike reasoning, in which the agent interacts with copies of itself\n> * reasoning about other agents more broadly\n> * extortion problems\n> * coordination problems\n> * logical counterfactuals\n> * logical updatelessness\n> \n> \n> \n\n\n \n\n\n\n\nThe most central example of why agents need to think about counterfactuals comes from counterfactuals about their own actions.\n\n\nThe difficulty with **action counterfactuals** can be illustrated by the [five-and-ten problem](https://agentfoundations.org/item?id=1399). Suppose we have the option of taking a five dollar bill or a ten dollar bill, and all we care about in the situation is how much money we get. Obviously, we should take the $10.\n\n\nHowever, it is not so easy as it seems to reliably take the $10.\n\n\nIf you reason about yourself as just another part of the environment, then you can [know your own behavior](http://intelligence.org/embedded-agents#rd). If you can know your own behavior, then it becomes difficult to reason about what would happen if you behaved *differently*.\n\n\nThis throws a monkey wrench into many common reasoning methods. 
How do we formalize the idea “Taking the $10 would lead to *good* consequences, while taking the $5 would lead to *bad* consequences,” when sufficiently rich self-knowledge would reveal one of those scenarios as inconsistent?\n\n\nAnd if we *can’t* formalize any idea like that, how do real-world agents figure out to take the $10 anyway?\n\n\nIf we try to calculate the expected utility of our actions by Bayesian conditioning, as is common, knowing our own behavior leads to a divide-by-zero error when we try to calculate the expected utility of actions we know we don’t take: \\(\\lnot A\\) implies \\(P(A)=0\\), which implies \\(P(B \\& A)=0\\), which implies\n\n\n$$P(B|A) = \\frac{P(B \\& A)}{P(A)} = \\frac{0}{0}.$$\n\n\nBecause the agent doesn’t know how to separate itself from the environment, it gets gnashing internal gears when it tries to imagine taking different actions.\n\n\nBut the biggest complication comes from Löb’s Theorem, which can make otherwise reasonable-looking agents take the $5 because “If I take the $10, I get $0”! And in a *stable* way—the problem can’t be solved by the agent learning or thinking about the problem more.\n\n\nThis might be hard to believe; so let’s look at a detailed example. The phenomenon can be illustrated by the behavior of simple logic-based agents reasoning about the five-and-ten problem.\n\n\nConsider this example:\n\n\n \n\n\n![Five-and-ten problem](https://intelligence.org/wp-content/uploads/2018/10/five-and-ten.png)\n\n\n \n\n\nWe have the source code for an agent and the universe. They can refer to each other through the use of quining. The universe is simple; the universe just outputs whatever the agent outputs.\n\n\nThe agent spends a long time searching for proofs about what happens if it takes various actions. 
If for some \\(x\\) and \\(y\\) equal to \\(0\\), \\(5\\), or \\(10\\), it finds a proof that taking the \\(5\\) leads to \\(x\\) utility, that taking the \\(10\\) leads to \\(y\\) utility, and that \\(x>y\\), it will naturally take the \\(5\\). We expect that it won’t find such a proof, and will instead pick the default action of taking the \\(10\\).\n\n\nIt seems easy when you just imagine an agent trying to reason about the universe. Yet it turns out that if the amount of time spent searching for proofs is enough, the agent will always choose \\(5\\)!\n\n\nThe proof that this is so is by [Löb’s theorem](https://intelligence.org/files/lob-notes-IAFF.pdf). Löb’s theorem says that, for any proposition \\(P\\), if you can prove that a *proof* of \\(P\\) would imply the *truth* of \\(P\\), then you can prove \\(P\\). In symbols, with \n“\\(□X\\)” meaning “\\(X\\) is provable”:\n\n\n$$□(□P \\to P) \\to □P.$$\n\n\nIn the version of the five-and-ten problem I gave, “\\(P\\)” is the proposition “if the agent outputs \\(5\\) the universe outputs \\(5\\), and if the agent outputs \\(10\\) the universe outputs \\(0\\)”.\n\n\nSupposing it is provable, the agent will eventually find the proof, and return \\(5\\) in fact. This makes the sentence *true*, since the agent outputs \\(5\\) and the universe outputs \\(5\\), and since it’s false that the agent outputs \\(10\\). This is because false propositions like “the agent outputs \\(10\\)” imply everything, *including* the universe outputting \\(5\\).\n\n\nThe agent can (given enough time) prove all of this, in which case the agent in fact proves the proposition “if the agent outputs \\(5\\) the universe outputs \\(5\\), and if the agent outputs \\(10\\) the universe outputs \\(0\\)”. And as a result, the agent takes the $5.\n\n\nWe call this a “spurious proof”: the agent takes the $5 because it can prove that *if* it takes the $10 it has low value, *because* it takes the $5. It sounds circular, but sadly, is logically correct. 
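The agent just described can be sketched schematically. This is a toy encoding of mine, not the formal construction: `provable` below is a stand-in for the agent's bounded proof search, whereas in the actual setup the agent enumerates proofs in a formal theory that refers to its own source code via quining, and Löb's theorem is what makes the spurious implication provable.

```python
def five_and_ten_agent(provable):
    """Schematic proof-searching agent for the five-and-ten problem.

    `provable(stmt)` stands in for "the agent finds a proof of stmt within
    its search budget". The agent takes the $5 only if it proves that doing
    so is strictly better; otherwise it defaults to taking the $10.
    """
    for x in (0, 5, 10):
        for y in (0, 5, 10):
            if x > y and provable(f"A()=5 -> U()={x} and A()=10 -> U()={y}"):
                return 5
    return 10

# If no such proof is found, the agent behaves as hoped:
assert five_and_ten_agent(lambda stmt: False) == 10

# But Löb's theorem makes the spurious implication provable (its second
# half is vacuously true once the agent in fact takes the $5), and then
# the agent really does take the $5:
spurious = "A()=5 -> U()=5 and A()=10 -> U()=0"
assert five_and_ten_agent(lambda stmt: stmt == spurious) == 5
```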
More generally, when working in less proof-based settings, we refer to this as a problem of spurious counterfactuals.\n\n\nThe general pattern is: counterfactuals may spuriously mark an action as not being very good. This makes the AI not take the action. Depending on how the counterfactuals work, this may remove any feedback which would “correct” the problematic counterfactual; or, as we saw with proof-based reasoning, it may actively help the spurious counterfactual be “true”.\n\n\nNote that because the proof-based examples are of significant interest to us, “counterfactuals” actually have to be **counter*logicals***; we sometimes need to reason about logically impossible “possibilities”. This rules out most existing accounts of counterfactual reasoning.\n\n\nYou may have noticed that I slightly cheated. The only thing that broke the symmetry and caused the agent to take the $5 was the fact that “\\(5\\)” was the action that was taken when a proof was found, and “\\(10\\)” was the default. We could instead consider an agent that looks for any proof at all about what actions lead to what utilities, and then takes the action that is better. This way, which action is taken is dependent on what order we search for proofs.\n\n\nLet’s assume we search for short proofs first. In this case, we will take the $10, since it is very easy to show that \\(A()=5\\) leads to \\(U()=5\\) and \\(A()=10\\) leads to \\(U()=10\\).\n\n\nThe problem is that spurious proofs can be short too, and don’t get much longer when the universe gets harder to predict. If we replace the universe with one that is provably functionally the same, but is harder to predict, the shortest proof will short-circuit the complicated universe and be spurious.\n\n\n\n\n---\n\n\nPeople often try to solve the problem of counterfactuals by suggesting that there will always be some uncertainty. 
An AI may know its source code perfectly, but it can’t perfectly know the hardware it is running on.\n\n\nDoes adding a little uncertainty solve the problem? Often not:\n\n\n* The proof of the spurious counterfactual often still goes through; if you think you are in a five-and-ten problem with a 95% certainty, you can have the usual problem within that 95%.\n* Adding uncertainty to make counterfactuals well-defined doesn’t get you any guarantee that the counterfactuals will be *reasonable*. Hardware failures aren’t often what you want to expect when considering alternate actions.\n\n\nConsider this scenario: You are confident that you almost always take the left path. However, it is possible (though unlikely) for a [cosmic ray](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/) to damage your circuits, in which case you could go right—but you would then be insane, which would have many other bad consequences.\n\n\nIf *this reasoning in itself* is why you always go left, you’ve gone wrong.\n\n\nSimply ensuring that the agent has some uncertainty about its actions doesn’t ensure that the agent will have remotely reasonable counterfactual expectations. However, one thing we can try instead is to ensure the agent *actually takes each action* with some probability. This strategy is called **ε-exploration**.\n\n\nε-exploration ensures that if an agent plays similar games on enough occasions, it can eventually learn realistic counterfactuals (modulo a concern of realizability which we will get to later).\n\n\nε-exploration only works if it ensures that the agent itself can’t predict whether it is about to ε-explore. In fact, a good way to implement ε-exploration is via the rule “if the agent is too sure about its action, it takes a different one”.\n\n\nFrom a logical perspective, the unpredictability of ε-exploration is what prevents the problems we’ve been discussing. 
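That implementation rule can be pictured as follows. This is a toy sketch under my own assumptions (the actual proposals are stated for logical inductors and learning agents, not this simple form):

```python
import random

def epsilon_explore(self_prediction, actions, epsilon=0.05, rng=random):
    """Sketch of 'if the agent is too sure about its action, take a different one'.

    `self_prediction` maps each action to the agent's own probability of
    taking it. If any action looks near-certain, the agent is forced off it,
    so its beliefs about itself can never become self-confirming certainties.
    """
    likely = max(actions, key=self_prediction.get)
    if self_prediction[likely] > 1 - epsilon:
        return rng.choice([a for a in actions if a != likely])
    return likely

# The agent is ~sure it will take the $10, so it is forced to explore:
action = epsilon_explore({"$5": 0.01, "$10": 0.99}, ["$5", "$10"])  # -> "$5"
```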
From a learning-theoretic perspective, if the agent could know it wasn’t about to explore, then it could treat that as a different case—failing to generalize lessons from its exploration. This gets us back to a situation where we have no guarantee that the agent will learn better counterfactuals. Exploration may be the only source of data for some actions, so we need to force the agent to take that data into account, or it may not learn.\n\n\nHowever, even ε-exploration doesn’t seem to get things exactly right. Observing the result of ε-exploration shows you what happens if you take an action *unpredictably*; the consequences of taking that action as part of business-as-usual may be different.\n\n\nSuppose you’re an ε-explorer who lives in a world of ε-explorers. You’re applying for a job as a security guard, and you need to convince the interviewer that you’re not the kind of person who would run off with the stuff you’re guarding. They want to hire someone who has too much integrity to lie and steal, even if the person thought they could get away with it.\n\n\n \n\n\n![A seemingly trustworthy agent](https://intelligence.org/wp-content/uploads/2020/08/epsilon1.png)\n\n\n \n\n\nSuppose the interviewer is an amazing judge of character—or just has read access to your source code.\n\n\n \n\n\n![A seemingly untrustworthy agent](https://intelligence.org/wp-content/uploads/2020/08/epsilon2.png)\n\n\n \n\n\nIn this situation, stealing might be a great option *as an ε-exploration action*, because the interviewer may not be able to predict your theft, or may not think punishment makes sense for a one-off anomaly.\n\n\n \n\n\n![A surprising epsilon-exploration action](https://intelligence.org/wp-content/uploads/2020/08/epsilon3.png)\n\n\n \n\n\nBut stealing is clearly a bad idea *as a normal action*, because you’ll be seen as much less reliable and trustworthy.\n\n\n \n\n\n![Taking away the wrong lesson from an epsilon-exploration 
lesson](https://intelligence.org/wp-content/uploads/2020/08/epsilon4.png)\n\n\n \n\n\nIf we don’t learn counterfactuals from ε-exploration, then, it seems we have no guarantee of learning realistic counterfactuals at all. But if we do learn from ε-exploration, it appears we still get things wrong in some cases.\n\n\nSwitching to a probabilistic setting doesn’t cause the agent to reliably make “reasonable” choices, and neither does forced exploration.\n\n\nBut writing down examples of “correct” counterfactual reasoning doesn’t seem hard from the outside!\n\n\nMaybe that’s because from “outside” we always have a dualistic perspective. We are in fact sitting outside of the problem, and we’ve defined it as a function of an agent.\n\n\n \n\n\n![Dualistic agents](https://intelligence.org/wp-content/uploads/2018/10/Alexei-DT.png)\n\n\nHowever, an agent can’t solve the problem in the same way from inside. From its perspective, its functional relationship with the environment isn’t an observable fact. This is why counterfactuals are called “counterfactuals”, after all.\n\n\n \n\n\n![Decision-making for embedded agents](https://intelligence.org/wp-content/uploads/2018/10/Emmy-DT.png)\n\n\n \n\n\nWhen I told you about the 5 and 10 problem, I first told you about the problem, and then gave you an agent. When one agent doesn’t work well, we could consider a different agent.\n\n\nFinding a way to succeed at a decision problem involves finding an agent that when plugged into the problem takes the right action. The fact that we can even consider putting in different agents means that we have already carved the universe into an “agent” part, plus the rest of the universe with a hole for the agent—which is most of the work!\n\n\n\n\n---\n\n\nAre we just fooling ourselves due to the way we set up decision problems, then? Are there no “correct” counterfactuals?\n\n\nWell, maybe we *are* fooling ourselves. But there is still something we are confused about! 
“Counterfactuals are subjective, invented by the agent” doesn’t dissolve the mystery. There is *something* intelligent agents do, in the real world, to make decisions.\n\n\nSo I’m not talking about agents who know their own actions because I think there’s going to be a big problem with intelligent machines inferring their own actions in the future. Rather, the possibility of knowing your own actions illustrates something confusing about determining the consequences of your actions—a confusion which shows up even in the very simple case where everything about the world is known and you just need to choose the larger pile of money.\n\n\nFor all that, *humans* don’t seem to run into any trouble taking the $10.\n\n\nCan we take any inspiration from how humans make decisions?\n\n\nWell, suppose you’re actually asked to choose between $10 and $5. You know that you’ll take the $10. How do you reason about what *would* happen if you took the $5 instead?\n\n\nIt seems easy if you can separate yourself from the world, so that you only think of external consequences (getting $5).\n\n\n![Thinking about external consequences](https://intelligence.org/wp-content/uploads/2020/08/human-decisions1.png)\n\n\nIf you think about *yourself* as well, the counterfactual starts seeming a bit more strange or contradictory. Maybe you have some absurd prediction about what the world would be like if you took the $5—like, “I’d have to be blind!”\n\n\nThat’s alright, though. 
In the end you still see that taking the $5 would lead to bad consequences, and you still take the $10, so you’re doing fine.\n\n\n \n\n\n![Counterfactuals about the world and about oneself](https://intelligence.org/wp-content/uploads/2020/08/human-decisions2.png)\n\n\nThe challenge for formal agents is that an agent can be in a similar position, except it is taking the $5, knows it is taking the $5, and can’t figure out that it should be taking the $10 instead, because of the absurd predictions it makes about what happens when it takes the $10.\n\n\nIt seems hard for a human to end up in a situation like that; yet when we try to write down a formal reasoner, we keep running into this kind of problem. So it indeed seems like human decision-making is doing something here that we don’t yet understand.\n\n\n\n\n---\n\n\nIf you’re an embedded agent, then you should be able to think about yourself, just like you think about other objects in the environment. And other reasoners in your environment should be able to think about you too.\n\n\n \n\n\n![Emmy meets another agent](https://intelligence.org/wp-content/uploads/2020/08/Emmy-Multiagent-e1598198663468.png)\n\n\n \n\n\nIn the five-and-ten problem, we saw how messy things can get when an agent knows its own action before it acts. But this is hard to avoid for an embedded agent.\n\n\nIt’s especially hard not to know your own action in standard Bayesian settings, which assume logical omniscience. A probability distribution assigns probability 1 to any fact which is logically true. So if a Bayesian agent knows its own source code, then it should know its own action.\n\n\nHowever, realistic agents who are not logically omniscient may run into the same problem. 
Logical omniscience forces the issue, but rejecting logical omniscience doesn’t eliminate the issue.\n\n\nε-exploration *does* seem to solve that problem in many cases, by ensuring that agents have uncertainty about their choices and that the things they expect are based on experience.\n\n\n \n\n\n![Epsilon-exploration in the five-and-ten problem](https://intelligence.org/wp-content/uploads/2020/08/epsilon5.png)\n\n\n \n\n\nHowever, as we saw in the security guard example, even ε-exploration seems to steer us wrong when the results of exploring randomly differ from the results of acting reliably.\n\n\nExamples which go wrong in this way seem to involve another part of the environment that behaves like you—such as another agent very similar to yourself, or a sufficiently good model or simulation of you. These are called **Newcomblike problems**; an example is the Twin Prisoner’s Dilemma mentioned above.\n\n\n \n\n\n![Newcomblike problems](https://intelligence.org/wp-content/uploads/2020/08/Newcomblike-Problems.png)\n\n\n \n\n\nIf the five-and-ten problem is about cutting a you-shaped piece out of the world so that the world can be treated as a function of your action, Newcomblike problems are about what to do when there are several approximately you-shaped pieces in the world.\n\n\nOne idea is that *exact* copies should be treated as 100% under your “logical control”. For approximate models of you, or merely similar agents, control should drop off sharply as logical correlation decreases. But how does this work?\n\n\n![Degrees of logical correlation](https://intelligence.org/wp-content/uploads/2020/08/Logical-Correlation.png)\n\n\nNewcomblike problems are difficult for almost the same reason as the self-reference issues discussed so far: prediction. With strategies such as ε-exploration, we tried to limit the self-knowledge of the *agent* in an attempt to avoid trouble. But the presence of powerful predictors in the environment reintroduces the trouble. 
By choosing what information to share, predictors can manipulate the agent and choose their actions for them.\n\n\nIf there is something which can predict you, it might *tell* you its prediction, or related information, in which case it matters what you do *in response* to various things you could find out.\n\n\nSuppose you decide to do the opposite of whatever you’re told. Then it isn’t possible for the scenario to be set up in the first place. Either the predictor isn’t accurate after all, or alternatively, the predictor doesn’t share their prediction with you.\n\n\nOn the other hand, suppose there’s some situation where you do act as predicted. Then the predictor can control how you’ll behave, by controlling what prediction they tell you.\n\n\nSo, on the one hand, a powerful predictor can control you by selecting between the consistent possibilities. On the other hand, you are the one who chooses your pattern of responses in the first place. This means that you can set them up to your best advantage.\n\n\n\n\n---\n\n\nSo far, we’ve been discussing action counterfactuals—how to anticipate consequences of different actions. This discussion of controlling your responses introduces the **observation counterfactual**—imagining what the world would be like if different facts had been observed.\n\n\nEven if there is no one telling you a prediction about your future behavior, observation counterfactuals can still play a role in making the right decision. Consider the following game:\n\n\nAlice receives a card at random which is either High or Low. She may reveal the card if she wishes. Bob then gives his probability \\(p\\) that Alice has a high card. Alice always loses \\(p^2\\) dollars. Bob loses \\(p^2\\) if the card is low, and \\((1-p)^2\\) if the card is high.\n\n\nBob has a proper scoring rule, so does best by giving his true belief. Alice just wants Bob’s belief to be as much toward “low” as possible.\n\n\nSuppose Alice will play only this one time. 
She sees a low card. Bob is good at reasoning about Alice, but is in the next room and so can’t read any tells. Should Alice reveal her card?\n\n\nSince Alice’s card is low, if she shows it to Bob, she will lose no money, which is the best possible outcome. However, this means that in the counterfactual world where Alice sees a high card, she wouldn’t be able to keep the secret—she might as well show her card in that case too, since her reluctance to show it would be as reliable a sign of “high”.\n\n\nOn the other hand, if Alice doesn’t show her card, she loses 25¢—but then she can use the same strategy in the other world, rather than losing $1. So, before playing the game, Alice would want to visibly commit to not reveal; this makes expected loss 25¢, whereas the other strategy has expected loss 50¢. By taking observation counterfactuals into account, Alice is able to keep secrets—without them, Bob could perfectly infer her card from her actions.\n\n\nThis game is equivalent to the decision problem called [counterfactual mugging](https://www.lesswrong.com/posts/mg6jDEuQEjBGtibX7/counterfactual-mugging).\n\n\n[**Updateless decision theory**](https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory) (UDT) is a proposed decision theory which can keep secrets in the high/low card game. UDT does this by recommending that the agent do whatever would have seemed wisest before—whatever your earlier self would have committed to do.\n\n\nAs it happens, UDT also performs well in Newcomblike problems.\n\n\nCould something like UDT be related to what humans are doing, if only implicitly, to get good results on decision problems? Or, if it’s not, could it still be a good model for thinking about decision-making?\n\n\nUnfortunately, there are still some pretty deep difficulties here. 
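The expected losses quoted above can be checked numerically. This sketch is my own toy encoding of the game: it compares Alice's reveal-policies from the prior perspective, before the card is seen, which is the evaluation UDT recommends; Bob is assumed to know Alice's policy and to report his honest posterior.

```python
def expected_loss(reveal):
    """Alice's prior expected loss under a policy `reveal(card) -> bool`.

    Bob knows the policy: if Alice hides her card, his honest report is the
    posterior probability of "high" among the cards her policy hides.
    """
    hidden = [c for c in ("low", "high") if not reveal(c)]
    p_hidden = sum(c == "high" for c in hidden) / len(hidden) if hidden else None
    loss = 0.0
    for card in ("low", "high"):  # each dealt with probability 1/2
        p = (1.0 if card == "high" else 0.0) if reveal(card) else p_hidden
        loss += 0.5 * p ** 2      # Alice always loses p^2 dollars
    return loss

assert expected_loss(lambda card: True) == 0.50            # always reveal
assert expected_loss(lambda card: card == "low") == 0.50   # hiding leaks "high"
assert expected_loss(lambda card: False) == 0.25           # committed secrecy
```

Note that after seeing a low card, revealing looks strictly better (a loss of $0 rather than 25¢), even though the commit-to-secrecy policy is better in prior expectation; that is exactly the tension the post describes.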
UDT is an elegant solution to a fairly broad class of decision problems, but it only makes sense if the earlier self can foresee all possible situations.\n\n\nThis works fine in a Bayesian setting where the prior already contains all possibilities within itself. However, there may be no way to do this in a realistic embedded setting. An agent has to be able to think of *new possibilities*—meaning that its earlier self doesn’t know enough to make all the decisions.\n\n\nAnd with that, we find ourselves squarely facing the problem of **embedded world-models**.\n\n\n\n\n---\n\n\nThis is part of Abram Demski and Scott Garrabrant’s [Embedded Agency](https://intelligence.org/embedded-agency/) sequence. [**Continued here!**](http://intelligence.org/embedded-models)\n\n\n\nThe post [Decision Theory](https://intelligence.org/2018/10/31/embedded-decisions/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-10-31T18:25:38Z", "authors": ["Abram Demski"], "summaries": []} -{"id": "f65263c2dc02e484c2ee4b49a51384f3", "title": "October 2018 Newsletter", "url": "https://intelligence.org/2018/10/29/october-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "The AI Alignment Forum [has left beta](https://intelligence.org/2018/10/29/announcing-the-ai-alignment-forum/)!\n\n\n\n\nDovetailing with the launch, MIRI researchers Scott Garrabrant and Abram Demski will be releasing a new sequence introducing our research over the coming week, beginning here: [Embedded Agents](https://intelligence.org/embedded-agents). 
(Shorter illustrated version [here](https://www.alignmentforum.org/posts/p7x32SEt43ZMC9r7r/embedded-agents).)\n\n\n\n#### Other updates\n\n\n* New posts to the forum: [Cooperative Oracles](https://www.alignmentforum.org/posts/SgkaXQn3xqJkGQ2D8/cooperative-oracles); [When Wishful Thinking Works](https://www.alignmentforum.org/posts/KbCHcb8yyjAMFAAPJ/when-wishful-thinking-works); [(A → B) → A](https://www.alignmentforum.org/posts/qhsELHzAHFebRJE59/a-greater-than-b-greater-than-a); [Towards a New Impact Measure](https://www.alignmentforum.org/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure); [In Logical Time, All Games are Iterated Games](https://www.alignmentforum.org/posts/dKAJqBDZRMMsaaYo5/in-logical-time-all-games-are-iterated-games); [EDT Solves 5-and-10 With Conditional Oracles](https://www.alignmentforum.org/posts/Rcwv6SPsmhkgzfkDw/edt-solves-5-and-10-with-conditional-oracles)\n* [The Rocket Alignment Problem](https://intelligence.org/2018/10/03/rocket-alignment/): Eliezer Yudkowsky considers a hypothetical world without knowledge of calculus and celestial mechanics, to illustrate MIRI’s research and what we take to be the world’s current level of understanding of AI alignment. 
(Also on [LessWrong](https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem).)\n* More on MIRI’s AI safety angle of attack: [a comment on decision theory](https://www.lesswrong.com/posts/uKbxi2EJ3KBNRDGpL/comment-on-decision-theory).\n\n\n#### News and links\n\n\n* DeepMind’s safety team launches their own blog, with an inaugural post on [specification, robustness, and assurance](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1).\n* Will MacAskill [discusses moral uncertainty](https://futureoflife.org/2018/09/17/moral-uncertainty-and-the-path-to-ai-alignment-with-william-macaskill/) on FLI’s AI safety podcast.\n* Google Brain announces the [Unrestricted Adversarial Examples Challenge](https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html).\n* The [80,000 Hours job board](https://80000hours.org/job-board/) has many new postings, including [head of operations](https://www.fhi.ox.ac.uk/head-of-operations/) for FHI, [COO](http://existence.org/jobs/chief-operating-officer) for BERI, and [programme manager](https://www.cser.ac.uk/about-us/careers/academic-programme-manager/) for CSER. Also taking applicants: [summer internships](https://humancompatible.ai/jobs#internship) at CHAI, and a [scholarships program](https://www.fhi.ox.ac.uk/scholarships/) from FHI.\n\n\n\nThe post [October 2018 Newsletter](https://intelligence.org/2018/10/29/october-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-10-30T00:16:06Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "85f3f20af7a61df58fd65bdf58aaeaed", "title": "Announcing the new AI Alignment Forum", "url": "https://intelligence.org/2018/10/29/announcing-the-ai-alignment-forum/", "source": "miri", "source_type": "blog", "text": "*This is a guest post by Oliver Habryka, lead developer for LessWrong. 
Our gratitude to the LessWrong team for the hard work they’ve put into developing this resource, and our congratulations on today’s launch!*\n\n\n---\n\n\nI am happy to announce that after two months of open beta, the [**AI Alignment Forum**](http://alignmentforum.org/) is launching today. The AI Alignment Forum is a new website built by the team behind [LessWrong 2.0](http://lesswrong.com/), to help create a new hub for technical AI alignment research and discussion. \n\n\nOne of our core goals when we designed the forum was to make it easier for new people to get started on doing technical AI alignment research. This effort was split into two major parts:\n\n\n\n### 1. Better introductory content\n\n\nWe have been coordinating with AI alignment researchers to create three new sequences of posts that we hope can serve as introductions to some of the most important core ideas in AI Alignment. The three new sequences will be: \n\n\n\n> \n> * [Embedded Agency](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh), written by Scott Garrabrant and Abram Demski of MIRI\n> * [Iterated Amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd), written and compiled by Paul Christiano of OpenAI\n> * [Value Learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc), written and compiled by Rohin Shah of CHAI\n> \n> \n> \n\n\nOver the next few weeks, **we will be releasing about one post per day from these sequences**, starting with the [first post in the Embedded Agency sequence](https://www.alignmentforum.org/posts/p7x32SEt43ZMC9r7r/embedded-agents) today.\n\n\nIf you are interested in learning about AI alignment, I encourage you to ask questions and discuss the content in the comment sections. And if you are already familiar with a lot of the core ideas, then we would greatly appreciate feedback on the sequences as we publish them. 
We hope that these sequences can be a major part of how new people get involved in AI alignment research, and so we care a lot about their quality and clarity. \n\n\n### 2. Easier ways to join the discussion\n\n\nMost scientific fields have to balance the need for high-context discussion with other specialists against the need for public discussion, which allows the broader dissemination of new ideas, the onboarding of new members, and the opportunity for new potential researchers to prove themselves. We tried to [design a system](https://www.alignmentforum.org/posts/FoiiRDC3EhjHx7ayY/introducing-the-ai-alignment-forum-faq) that still allows newcomers to participate and learn, while giving established researchers the space to have high-level discussions with other researchers.\n\n\nTo do that, **we integrated the new AI Alignment Forum closely with the existing LessWrong platform**, as follows: \n\n\n\n> \n> * Any new post or comment on the new AI Alignment Forum is automatically cross-posted to LessWrong.com. 
Accounts are also shared between the two platforms.\n> * Any comment or post on LessWrong can be promoted by members of the Alignment Forum from LessWrong to the AI Alignment Forum.\n> * The reputation systems for LessWrong and the AI Alignment Forum are separate, and for every user, post, and comment, you can see two reputation scores on LessWrong.com: a primary karma score combining karma from both sites, and a secondary karma score specific to AI Alignment Forum members.\n> * Any member whose content gets promoted on a frequent basis, and who garners a significant amount of karma from AI Alignment Forum members, will be automatically recommended to the AI Alignment Forum moderators as a candidate addition to the Alignment Forum.\n> \n> \n> \n\n\nWe hope that this will result in a system in which cutting-edge research and discussion can happen, while good new ideas and participants can get noticed and rewarded for their contributions.\n\n\nIf you’ve been interested in doing alignment research, then I think the best way to do that right now is to comment on AI Alignment Forum posts on LessWrong, and check out the new content we’ll be rolling out. \n\n\n\n\n---\n\n\nIn an effort to centralize the existing discussion on technical AI alignment, this new forum is also going to replace the [Intelligent Agent Foundations Forum](https://agentfoundations.org), which MIRI built and maintained for the past two years. We are planning to shut down IAFF over the coming weeks, and have collaborated with MIRI to import all the content from the forum, as well as ensure that all old URLs are properly forwarded to their respective addresses on the new site. If you contributed there, you should have received an email about the details of importing your content. 
(If you didn’t, send us a message in the Intercom chat at the bottom right corner at AlignmentForum.org.)\n\n\nThanks to MIRI for helping us build this project, and I am looking forward to seeing a lot of you participate in discussion of the AI alignment problem on LessWrong and the new forum.\n\n\n\nThe post [Announcing the new AI Alignment Forum](https://intelligence.org/2018/10/29/announcing-the-ai-alignment-forum/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-10-29T23:34:29Z", "authors": ["Guest"], "summaries": []} -{"id": "aa800fd919645b4b503557fb9a347ffa", "title": "Embedded Agents", "url": "https://intelligence.org/2018/10/29/embedded-agents/", "source": "miri", "source_type": "blog", "text": "Suppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don’t already know.[1](https://intelligence.org/2018/10/29/embedded-agents/#footnote_0_17907 \"This is part 1 of the Embedded Agency series, by Abram Demski and Scott Garrabrant.\")\n\n\nThere’s a complicated engineering problem here. But there’s also a problem of figuring out what it even means to build a learning agent like that. What is it to optimize realistic goals in physical environments? In broad terms, how does it work?\n\n\nIn this series of posts, I’ll point to four ways we *don’t* currently know how it works, and four areas of active research aimed at figuring it out.\n\n\n \n\n\n \n\n\nThis is Alexei, and Alexei is playing a video game.\n\n\n \n\n\n![Alexei the dualistic agent](https://intelligence.org/wp-content/uploads/2018/10/Alexei.png)\n\n\n \n\n\nLike most games, this game has clear input and output channels. 
Alexei only observes the game through the computer screen, and only manipulates the game through the controller.\n\n\nThe game can be thought of as a function which takes in a sequence of button presses and outputs a sequence of pixels on the screen.\n\n\nAlexei is also very smart, and capable of holding the entire video game inside his mind. If Alexei has any uncertainty, it is only over empirical facts like what game he is playing, and not over logical facts like which inputs (for a given deterministic game) will yield which outputs. This means that Alexei must also store inside his mind every possible game he could be playing.\n\n\nAlexei does not, however, have to think about himself. He is only optimizing the game he is playing, and not optimizing the brain he is using to think about the game. He may still choose actions based off of value of information, but this is only to help him rule out possible games he is playing, and not to change the way in which he thinks.\n\n\nIn fact, Alexei can treat himself as an unchanging indivisible atom. Since he doesn’t exist in the environment he’s thinking about, Alexei doesn’t worry about whether he’ll change over time, or about any subroutines he might have to run.\n\n\nNotice that all the properties I talked about are partially made possible by the fact that Alexei is cleanly separated from the environment that he is optimizing. \n\n\n\n\n \n\n\nThis is Emmy. Emmy is playing real life.\n\n\n \n\n\n![Emmy the embedded agent](https://intelligence.org/wp-content/uploads/2018/10/Emmy.png)\n\n\n \n\n\nReal life is not like a video game. The differences largely come from the fact that Emmy is within the environment that she is trying to optimize.\n\n\nAlexei sees the universe as a function, and he optimizes by choosing inputs to that function that lead to greater reward than any of the other possible inputs he might choose. Emmy, on the other hand, doesn’t have a function. 
She just has an environment, and this environment contains her.\n\n\nEmmy wants to choose the best possible action, but which action Emmy chooses to take is just another fact about the environment. Emmy can reason about the part of the environment that is her decision, but since there’s only one action that Emmy ends up actually taking, it’s not clear what it even means for Emmy to “choose” an action that is better than the rest.\n\n\nAlexei can poke the universe and see what happens. Emmy is the universe poking itself. In Emmy’s case, how do we formalize the idea of “choosing” at all?\n\n\nTo make matters worse, since Emmy is contained within the environment, Emmy must also be smaller than the environment. This means that Emmy is incapable of storing accurate detailed models of the environment within her mind.\n\n\nThis causes a problem: Bayesian reasoning works by starting with a large collection of possible environments, and as you observe facts that are inconsistent with some of those environments, you rule them out. What does reasoning look like when you’re not even capable of storing a single valid hypothesis for the way the world works? Emmy is going to have to use a different type of reasoning, and make updates that don’t fit into the standard Bayesian framework.\n\n\nSince Emmy is within the environment that she is manipulating, she is also going to be capable of self-improvement. But how can Emmy be sure that as she learns more and finds more and more ways to improve herself, she only changes herself in ways that are actually helpful? How can she be sure that she won’t modify her original goals in undesirable ways?\n\n\nFinally, since Emmy is contained within the environment, she can’t treat herself like an atom. 
She is made out of the same pieces that the rest of the environment is made out of, which is what causes her to be able to think about herself.\n\n\nIn addition to hazards in her external environment, Emmy is going to have to worry about threats coming from within. While optimizing, Emmy might spin up other optimizers as subroutines, either intentionally or unintentionally. These subsystems can cause problems if they get too powerful and are unaligned with Emmy’s goals. Emmy must figure out how to reason without spinning up intelligent subsystems, or otherwise figure out how to keep them weak, contained, or aligned fully with her goals.\n\n\n \n\n\n \n\n\nEmmy is confusing, so let’s go back to Alexei. Marcus Hutter’s [AIXI](https://arxiv.org/abs/1202.6153) framework gives a good theoretical model for how agents like Alexei work:\n\n\n \n\n\n$$ \n\n a\\_k \\;:=\\; \\arg\\max\\_{a\\_k}\\sum\\_{o\\_k r\\_k} \\ldots \\max\\_{a\\_m}\\sum\\_{o\\_m r\\_m} \n\n [r\\_k+\\ldots+r\\_m] \n\n \\sum\\_{q\\,:\\,U(q,\\,a\\_1 .. a\\_m)\\,=\\,o\\_1 r\\_1 .. o\\_m r\\_m} 2^{-\\ell(q)} \n\n$$\n\n\n \n\n\nThe model has an agent and an environment that interact using actions, observations, and rewards. The agent sends out an action \\(a\\), and then the environment sends out both an observation \\(o\\) and a reward \\(r\\). This process repeats at each time \\(k…m\\).\n\n\nEach action is a function of all the previous action-observation-reward triples. And each observation and reward is similarly a function of these triples and the immediately preceding action.\n\n\nYou can imagine an agent in this framework that has full knowledge of the environment that it’s interacting with. However, AIXI is used to model optimization under uncertainty about the environment. 
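To make the protocol concrete, here is a toy version of this interaction loop. The particular `agent` and `environment` below are made-up stand-ins for illustration, not AIXI itself (which is uncomputable); they just show actions going out, observation-reward pairs coming back, and each action being computed from the full history of triples.

```python
import random

def environment(history):
    # Toy environment: the observation and reward are a function of
    # the history of (action, observation, reward) triples and the
    # immediately preceding action, mirroring the setup above.
    last_action = history[-1][0]
    observation = len(history) % 2
    reward = 1.0 if last_action == observation else 0.0
    return observation, reward

def agent(history):
    # Toy dualistic agent: each action is a function of all previous
    # action-observation-reward triples, and nothing else.
    if not history:
        return random.choice([0, 1])
    return history[-1][1]  # guess that the last observation repeats

history = []
for step in range(10):
    a = agent(history)
    history.append((a, None, None))
    o, r = environment(history)
    history[-1] = (a, o, r)

total_reward = sum(r for _, _, r in history)
```

The dualistic assumptions are visible in the code’s shape: agent and environment are separate functions with a fixed interface, and the agent appears in the history only as a record of its past actions, never as part of the environment it reasons about.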
AIXI has a distribution over all possible computable environments \\(q\\), and chooses actions that lead to a high expected reward under this distribution. Since it also cares about future reward, this may lead to exploring for value of information.\n\n\nUnder some assumptions, we can show that AIXI does reasonably well in all computable environments, in spite of its uncertainty. However, while the environments that AIXI is interacting with are computable, AIXI itself is uncomputable. The agent is made out of a different sort of stuff, a more powerful sort of stuff, than the environment.\n\n\nWe will call agents like AIXI and Alexei “dualistic.” They exist outside of their environment, with only set interactions between agent-stuff and environment-stuff. They require the agent to be larger than the environment, and don’t tend to model self-referential reasoning, because the agent is made of different stuff than what the agent reasons about.\n\n\nAIXI is not alone. These dualistic assumptions show up all over our current best theories of rational agency.\n\n\nI set up AIXI as a bit of a foil, but AIXI can also be used as inspiration. When I look at AIXI, I feel like I really understand how Alexei works. This is the kind of understanding that I want to also have for Emmy.\n\n\nUnfortunately, Emmy is confusing. When I talk about wanting to have a theory of “embedded agency,” I mean I want to be able to understand theoretically how agents like Emmy work. That is, agents that are embedded within their environment and thus:\n\n\n* do not have well-defined i/o channels;\n* are smaller than their environment;\n* are able to reason about themselves and self-improve;\n* and are made of parts similar to the environment.\n\n\nYou shouldn’t think of these four complications as a partition. They are very entangled with each other.\n\n\nFor example, the reason the agent is able to self-improve is because it is made of parts. 
And any time the environment is sufficiently larger than the agent, it might contain other copies of the agent, and thus destroy any well-defined i/o channels.\n\n\n\n \n\n\n[![Some relationships between embedded agency subproblems](https://intelligence.org/wp-content/uploads/2018/10/Embedded-Subproblems.png)](https://intelligence.org/wp-content/uploads/2018/10/Embedded-Subproblems.png)\n\n\n \n\n\n\nHowever, I will use these four complications to inspire a split of the topic of embedded agency into four subproblems. These are: decision theory, embedded world-models, robust delegation, and subsystem alignment.\n\n\n \n\n\n \n\n\n**Decision theory** is all about embedded optimization.\n\n\nThe simplest model of dualistic optimization is \\(\\mathrm{argmax}\\). \\(\\mathrm{argmax}\\) takes in a function from actions to rewards, and returns the action which leads to the highest reward under this function. Most optimization can be thought of as some variant on this. You have some space; you have a function from this space to some score, like a reward or utility; and you want to choose an input that scores highly under this function.\n\n\nBut we just said that a large part of what it means to be an embedded agent is that you don’t have a functional environment. So now what do we do? 
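For concreteness, the dualistic argmax model just described can be sketched in a few lines; the action space and reward function here are made up purely for illustration.

```python
def argmax(actions, reward):
    # Dualistic optimization: given a function from actions to
    # rewards, return the action scoring highest under it.
    return max(actions, key=reward)

# Made-up action space and reward function, purely for illustration.
actions = ["left", "right", "wait"]
reward = {"left": 0.2, "right": 1.0, "wait": 0.5}.get

best = argmax(actions, reward)  # "right"
```

The embedded setting breaks exactly this type signature: there is no separate reward function over actions for Emmy to hand to argmax, because the thing doing the optimizing is itself part of whatever would compute the score.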
Optimization is clearly an important part of agency, but we can’t currently say what it is even in theory without making major type errors.\n\n\nSome major open problems in decision theory include:\n\n\n* **logical counterfactuals**: how do you reason about what *would* happen if you take action B, given that you can *prove* that you will instead take action A?\n* environments that include multiple **copies of the agent**, or trustworthy predictions of the agent.\n* **logical updatelessness**, which is about how to combine the very nice but very *Bayesian* world of Wei Dai’s [updateless decision theory](https://wiki.lesswrong.com/wiki/Updateless_decision_theory), with the much less Bayesian world of logical uncertainty.\n\n\n \n\n\n \n\n\n**Embedded world-models** is about how you can make good models of the world that are able to fit within an agent that is much smaller than the world.\n\n\nThis has proven to be very difficult—first, because it means that the true universe is not in your hypothesis space, which ruins a lot of theoretical guarantees; and second, because it means we’re going to have to make non-Bayesian updates as we learn, which *also* ruins a bunch of theoretical guarantees.\n\n\nIt is also about how to make world-models from the point of view of an observer on the inside, and resulting problems such as anthropics. 
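The textbook Bayesian picture, starting with a collection of possible environments and ruling out the ones your observations contradict, can be sketched in a toy form that also exhibits the failure mode above. The hypothesis class and the “true” environment here are invented for illustration; the point is what happens when the truth isn’t in the class.

```python
# Toy Bayesian elimination over a finite hypothesis class. The
# hypotheses and the "true" environment are invented for illustration.
hypotheses = {
    "all zeros":   lambda t: 0,
    "all ones":    lambda t: 1,
    "alternating": lambda t: t % 2,
}
posterior = {name: 1 / len(hypotheses) for name in hypotheses}

def true_environment(t):
    # A period-3 pattern that none of the hypotheses can represent:
    # the true universe is not in the hypothesis space.
    return 1 if t % 3 == 0 else 0

for t in range(12):
    obs = true_environment(t)
    for name, predict in hypotheses.items():
        if predict(t) != obs:
            posterior[name] = 0.0  # ruled out by observation
    total = sum(posterior.values())
    if total == 0:
        break  # every hypothesis eliminated; Bayes has nothing left to say
    posterior = {n: p / total for n, p in posterior.items()}
```

Emmy’s situation is strictly worse than this sketch suggests: she can’t just enlarge the class until it contains the truth, because any hypothesis as detailed as the world is too big to fit inside her. That is why embedded world-models call for updates that don’t fit the standard Bayesian framework.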
Some major open problems in embedded world-models include:\n\n\n* **[logical uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)**, which is about how to combine the world of logic with the world of probability.\n* **multi-level modeling**, which is about how to have multiple models of the same world at different levels of description, and transition nicely between them.\n* **[ontological crises](https://intelligence.org/files/OntologicalCrises.pdf)**, which is what to do when you realize that your model, or even your goal, was specified using a different ontology than the real world.\n\n\n \n\n\n \n\n\n**Robust delegation** is all about a special type of principal-agent problem. You have an initial agent that wants to make a more intelligent successor agent to help it optimize its goals. The initial agent has all of the power, because it gets to decide exactly what successor agent to make. But in another sense, the successor agent has all of the power, because it is much, much more intelligent.\n\n\nFrom the point of view of the initial agent, the question is about creating a successor that will robustly not use its intelligence against you. 
From the point of view of the successor agent, the question is about, “How do you robustly learn or respect the goals of something that is stupid, manipulable, and not even using the right ontology?”\n\n\nThere are extra problems coming from the *Löbian obstacle* making it impossible to consistently trust things that are more powerful than you.\n\n\nYou can think about these problems in the context of an agent that’s just learning over time, or in the context of an agent making a significant self-improvement, or in the context of an agent that’s just trying to make a powerful tool.\n\n\nThe major open problems in robust delegation include:\n\n\n* **[Vingean reflection](https://intelligence.org/files/VingeanReflection.pdf)**, which is about how to reason about and trust agents that are much smarter than you, in spite of the Löbian obstacle to trust.\n* **[value learning](https://intelligence.org/files/ValueLearningProblem.pdf)**, which is how the successor agent can learn the goals of the initial agent in spite of that agent’s stupidity and inconsistencies.\n* **[corrigibility](https://intelligence.org/files/Corrigibility.pdf)**, which is about how an initial agent can get a successor agent to allow (or even help with) modifications, in spite of an instrumental incentive not to.\n\n\n \n\n\n \n\n\n**Subsystem alignment** is about how to be *one unified agent* that doesn’t have subsystems that are fighting against either you or each other.\n\n\nWhen an agent has a goal, like “saving the world,” it might end up spending a large amount of its time thinking about a subgoal, like “making money.” If the agent spins up a sub-agent that is only trying to make money, there are now two agents that have different goals, and this leads to a conflict. 
The sub-agent might suggest plans that look like they *only* make money, but actually destroy the world in order to make even more money.\n\n\nThe problem is: you don’t just have to worry about sub-agents that you intentionally spin up. You also have to worry about spinning up sub-agents by accident. Any time you perform a search or an optimization over a sufficiently rich space that’s able to contain agents, you have to worry about the space itself doing optimization. This optimization may not be exactly in line with the optimization the outer system was trying to do, but it *will* have an instrumental incentive to *look* like it’s aligned.\n\n\nA lot of optimization in practice uses this kind of passing the buck. You don’t just find a solution; you find a thing that is able to itself search for a solution.\n\n\nIn theory, I don’t understand how to do *optimization* at all—other than methods that look like finding a bunch of stuff that I don’t understand, and seeing if it accomplishes my goal. But this is exactly the kind of thing that’s *most* prone to spinning up adversarial subsystems.\n\n\nThe big open problem in subsystem alignment is about how to have a base-level optimizer that doesn’t spin up adversarial optimizers. You can break this problem up further by considering cases where the resultant optimizers are either **intentional** or **unintentional**, and considering restricted subclasses of optimization, like **induction**.\n\n\nBut remember: decision theory, embedded world-models, robust delegation, and subsystem alignment are not four separate problems. They’re all different subproblems of the same unified concept that is *embedded agency*.\n\n\n\n\n---\n\n\nPart 2 of this post will be coming in the next couple of days: **[Decision Theory](https://intelligence.org/2018/10/31/embedded-decisions/)**.\n\n\n\n\n\n---\n\n1. 
This is part 1 of the [Embedded Agency](https://intelligence.org/embedded-agency) series, by Abram Demski and Scott Garrabrant.\n\nThe post [Embedded Agents](https://intelligence.org/2018/10/29/embedded-agents/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-10-29T19:59:33Z", "authors": ["Scott Garrabrant"], "summaries": []} -{"id": "4aceb57ae131233aeb4b8828d81cacb4", "title": "The Rocket Alignment Problem", "url": "https://intelligence.org/2018/10/03/rocket-alignment/", "source": "miri", "source_type": "blog", "text": "The following is a fictional dialogue building off of [AI Alignment: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/).\n\n\n\n\n---\n\n\n\n \n\n\n(*Somewhere in a not-very-near neighboring world, where science took a very different course…*)\n\n\n \n\n\n**ALFONSO:**  Hello, Beth. I’ve noticed a lot of speculations lately about “spaceplanes” being used to attack cities, or possibly becoming infused with malevolent spirits that inhabit the celestial realms so that they turn on their own engineers.\n\n\nI’m rather skeptical of these speculations. Indeed, I’m a bit skeptical that airplanes will be able to even rise as high as stratospheric weather balloons anytime in the next century. But I understand that your institute wants to address the potential problem of malevolent or dangerous spaceplanes, and that you think this is an important present-day cause.\n\n\n**BETH:**  That’s… really not how we at the Mathematics of Intentional Rocketry Institute would phrase things.\n\n\nThe problem of malevolent celestial spirits is what all the news articles are focusing on, but we think the real problem is something entirely different. We’re worried that there’s a difficult, theoretically challenging problem which modern-day rocket punditry is mostly overlooking. 
We’re worried that if you aim a rocket at where the Moon is in the sky, and press the launch button, the rocket may not actually end up at the Moon.\n\n\n**ALFONSO:**  I understand that it’s very important to design fins that can stabilize a spaceplane’s flight in heavy winds. That’s important spaceplane safety research and someone needs to do it.\n\n\nBut if you were working on that sort of safety research, I’d expect you to be collaborating tightly with modern airplane engineers to test out your fin designs, to demonstrate that they are actually useful.\n\n\n**BETH:**  Aerodynamic designs are important features of any safe rocket, and we’re quite glad that rocket scientists are working on these problems and taking safety seriously. That’s not the sort of problem that we at MIRI focus on, though.\n\n\n**ALFONSO:**  What’s the concern, then? Do you fear that spaceplanes may be developed by ill-intentioned people?\n\n\n**BETH:**  That’s not the failure mode we’re worried about right now. We’re more worried that right now, *nobody* can tell you how to point your rocket’s nose such that it goes to the moon, nor indeed *any* prespecified celestial destination. Whether Google or the US Government or North Korea is the one to launch the rocket won’t make a pragmatic difference to the probability of a successful Moon landing from our perspective, because right now *nobody knows how to aim any kind of rocket anywhere*.\n\n\n\n**ALFONSO:**  I’m not sure I understand.\n\n\n**BETH:**  We’re worried that even if you aim a rocket at the Moon, such that the nose of the rocket is clearly lined up with the Moon in the sky, the rocket won’t go to the Moon. We’re not sure what a realistic path from the Earth to the moon looks like, but we suspect it [might not be a very straight path](https://airandspace.si.edu/webimages/highres/5317h.jpg), and it may not involve pointing the nose of the rocket at the moon at all. 
We think the most important thing to do next is to advance our understanding of rocket trajectories until we have a better, deeper understanding of what we’ve started calling the “rocket alignment problem”. There are other safety problems, but this rocket alignment problem will probably take the most total time to work on, so it’s the most urgent.\n\n\n**ALFONSO:**  Hmm, that sounds like a bold claim to me. Do you have a reason to think that there are invisible barriers between here and the moon that the spaceplane might hit? Are you saying that it might get very very windy between here and the moon, more so than on Earth? Both eventualities could be worth preparing for, I suppose, but neither seems likely.\n\n\n**BETH:**  We don’t think it’s particularly likely that there are invisible barriers, no. And we don’t think it’s going to be especially windy in the celestial reaches — quite the opposite, in fact. The problem is just that we don’t yet know how to plot *any* trajectory that a vehicle could realistically take to get from Earth to the moon.\n\n\n**ALFONSO:**  Of course we can’t plot an actual trajectory; wind and weather are too unpredictable. But your claim still seems too strong to me. Just aim the spaceplane at the moon, go up, and have the pilot adjust as necessary. Why wouldn’t that work? Can you prove that a spaceplane aimed at the moon won’t go there?\n\n\n**BETH:**  We don’t think we can *prove* anything of that sort, no. Part of the problem is that realistic calculations are extremely hard to do in this area, after you take into account all the atmospheric friction and the movements of other celestial bodies and such. We’ve been trying to solve some drastically simplified problems in this area, on the order of assuming that there is no atmosphere and that all rockets move in perfectly straight lines. 
Even those unrealistic calculations strongly suggest that, in the much more complicated real world, just pointing your rocket’s nose at the Moon also won’t make your rocket end up at the Moon. I mean, the fact that the real world is more complicated doesn’t exactly make it any *easier* to get to the Moon.\n\n\n**ALFONSO:**  Okay, let me take a look at this “understanding” work you say you’re doing…\n\n\nHuh. Based on what I’ve read about the math you’re trying to do, I can’t say I understand what it has to do with the Moon. Shouldn’t helping spaceplane pilots exactly target the Moon involve looking through lunar telescopes and studying exactly what the Moon looks like, so that the spaceplane pilots can identify particular features of the landscape to land on?\n\n\n**BETH:**  We think our present stage of understanding is much too crude for a detailed Moon map to be our next research target. We haven’t yet advanced to the point of targeting one crater or another for our landing. We can’t target *anything* at this point. It’s more along the lines of “figure out how to talk mathematically about curved rocket trajectories, instead of rockets that move in straight lines”. Not even realistically curved trajectories, right now, we’re just trying to get past straight lines at all –\n\n\n**ALFONSO:**  But planes on Earth move in curved lines all the time, because the Earth itself is curved. It seems reasonable to expect that future spaceplanes will also have the capability to move in curved lines. If your worry is that spaceplanes will only move in straight lines and miss the Moon, and you want to advise rocket engineers to build rockets that move in curved lines, well, that doesn’t seem to me like a great use of anyone’s time.\n\n\n**BETH:**  You’re trying to draw much too direct of a line between the math we’re working on right now, and actual rocket designs that might exist in the future. 
It’s *not* that current rocket ideas are almost right, and we just need to solve one or two more problems to make them work. The conceptual distance that separates anyone from solving the rocket alignment problem is *much greater* than that.\n\n\nRight now everyone is *confused* about rocket trajectories, and we’re trying to become *less confused*. That’s what we need to do next, not run out and advise rocket engineers to build their rockets the way that our current math papers are talking about. Not until we stop being *confused* about extremely basic questions like why the Earth doesn’t fall into the Sun.\n\n\n**ALFONSO:**  I don’t think the Earth is going to collide with the Sun anytime soon. The Sun has been steadily circling the Earth for a long time now.\n\n\n**BETH:**  I’m not saying that our goal is to address the risk of the Earth falling into the Sun. What I’m trying to say is that if humanity’s present knowledge can’t answer questions like “Why doesn’t the Earth fall into the Sun?” then we don’t know very much about celestial mechanics and we won’t be able to aim a rocket through the celestial reaches in a way that lands softly on the Moon.\n\n\nAs an example of work we’re presently doing that’s aimed at improving our understanding, there’s what we call the “[tiling positions](https://intelligence.org/files/TilingAgentsDraft.pdf)” problem. The tiling positions problem is [how to fire a cannonball from a cannon](https://en.wikipedia.org/wiki/Newton%27s_cannonball) in such a way that the cannonball circumnavigates the earth over and over again, “tiling” its initial coordinates like repeating tiles on a tessellated floor –\n\n\n**ALFONSO:**  I read a little bit about your work on that topic. I have to say, it’s hard for me to see what firing things from cannons has to do with getting to the Moon. Frankly, it sounds an awful lot like Good Old-Fashioned Space Travel, which everyone knows doesn’t work. 
Maybe Jules Verne thought it was possible to travel around the earth by firing capsules out of cannons, but the modern study of high-altitude planes has completely abandoned the notion of firing things out of cannons. The fact that you go around talking about firing things out of cannons suggests to me that you haven’t kept up with all the innovations in airplane design over the last century, and that your spaceplane designs will be completely unrealistic.\n\n\n**BETH:**  We know that rockets will not actually be fired out of cannons. We really, really know that. We’re intimately familiar with the reasons why nothing fired out of a modern cannon is ever going to reach escape velocity. I’ve previously written several sequences of articles in which I describe why cannon-based space travel doesn’t work.\n\n\n**ALFONSO:**  But your current work is all about firing something out a cannon in such a way that it circles the earth over and over. What could that have to do with any realistic advice that you could give to a spaceplane pilot about how to travel to the Moon?\n\n\n**BETH:**  Again, you’re trying to draw much too straight a line between the math we’re doing right now, and direct advice to future rocket engineers.\n\n\nWe think that if we could find an angle and firing speed such that an ideal cannon, firing an ideal cannonball at that speed, on a perfectly spherical Earth with no atmosphere, would lead to that cannonball entering what we would call a “stable orbit” without hitting the ground, then… we might have understood something really fundamental and important about celestial mechanics.\n\n\nOr maybe not! It’s hard to know in advance which questions are important and which research avenues will pan out. 
All you can do is figure out the next tractable-looking problem that confuses you, and try to come up with a solution, and hope that you’ll be less confused after that.\n\n\n**ALFONSO:**  You’re talking about the cannonball hitting the ground as a problem, and how you want to avoid that and just have the cannonball keep going forever, right? But real spaceplanes aren’t going to be aimed at the ground in the first place, and lots of regular airplanes manage to not hit the ground. It seems to me that this “being fired out of a cannon and hitting the ground” scenario that you’re trying to avoid in this “tiling positions problem” of yours just isn’t a failure mode that real spaceplane designers would need to worry about.\n\n\n**BETH:**  We are not worried about real rockets being fired out of cannons and hitting the ground. That is not why we’re working on the tiling positions problem. In a way, you’re being far too optimistic about how much of rocket alignment theory is already solved! We’re not so close to understanding how to aim rockets that the kind of designs people are talking about now *would* work if only we solved a particular set of remaining difficulties like not firing the rocket into the ground. You need to go more meta on understanding the kind of progress we’re trying to make.\n\n\nWe’re working on the tiling positions problem because we think that being able to fire a cannonball at a certain instantaneous velocity such that it enters a stable orbit… is the sort of problem that somebody who could really actually launch a rocket through space and have it move in a particular curve that really actually ended with softly landing on the Moon would be able to solve *easily*. So the fact that we can’t solve it is alarming. 
If we can figure out how to solve this much simpler, much more crisply stated “tiling positions problem” with imaginary cannonballs on a perfectly spherical earth with no atmosphere, which is a lot easier to analyze than a Moon launch, we might thereby take one more incremental step towards eventually becoming the sort of people who could plot out a Moon launch.\n\n\n**ALFONSO:**  If you don’t think that Jules-Verne-style space cannons are the wave of the future, I don’t understand why you keep talking about cannons in particular.\n\n\n**BETH:**  Because there’s a lot of sophisticated mathematical machinery already developed for aiming cannons. People have been aiming cannons and plotting cannonball trajectories since the sixteenth century. We can take advantage of that existing mathematics to say exactly how, if we fired an ideal cannonball in a certain direction, it would plow into the ground. If we try talking about rockets with realistically varying acceleration, we can’t even manage to prove that a rocket like that *won’t* travel around the Earth in a perfect square, because with all that realistically varying acceleration and realistic air friction it’s impossible to make any sort of definite statement one way or another. Our present understanding isn’t up to it.\n\n\n**ALFONSO:**  Okay, another question in the same vein. Why is MIRI sponsoring work on adding up lots of tiny vectors? I don’t even see what that has to do with rockets in the first place; it seems like this weird side problem in abstract math.\n\n\n**BETH:**  It’s more like… at several points in our investigation so far, we’ve run into the problem of going from a function about time-varying accelerations to a function about time-varying positions. We kept running into this problem as a blocking point in our math, in several places, so we branched off and started trying to analyze it explicitly. 
Since it’s about the pure mathematics of points that don’t move in discrete intervals, we call it the “[logical undiscreteness](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)” problem. Some of the ways of investigating this problem involve trying to add up lots of tiny, varying vectors to get a big vector. Then we talk about how that sum seems to change more and more slowly, approaching a limit, as the vectors get tinier and tinier and we add up more and more of them… or at least that’s one avenue of approach.\n\n\n**ALFONSO:**  I just find it hard to imagine people in future spaceplane rockets staring out their viewports and going, “Oh, no, we don’t have tiny enough vectors with which to correct our course! If only there was some way of adding up even more vectors that are even smaller!” I’d expect future calculating machines to do a pretty good job of that already.\n\n\n**BETH:**  Again, you’re trying to draw much too straight a line between the work we’re doing now, and the implications for future rocket designs. It’s not like we think a rocket design will almost work, but the pilot won’t be able to add up lots of tiny vectors fast enough, so we just need a faster algorithm and then the rocket will get to the Moon. This is foundational mathematical work that we think might play a role in multiple basic concepts for understanding celestial trajectories. When we try to plot out a trajectory that goes all the way to a soft landing on a moving Moon, we feel confused and blocked. We think part of the confusion comes from not being able to go from acceleration functions to position functions, so we’re trying to resolve our confusion.\n\n\n**ALFONSO:**  This sounds suspiciously like a philosophy-of-mathematics problem, and I don’t think that it’s possible to progress on spaceplane design by doing philosophical research. The field of philosophy is a stagnant quagmire. 
Some philosophers still believe that going to the moon is impossible; they say that the celestial plane is fundamentally separate from the earthly plane and therefore inaccessible, which is clearly silly. Spaceplane design is an engineering problem, and progress will be made by engineers.\n\n\n**BETH:**  I agree that rocket design will be carried out by engineers rather than philosophers. I also share some of your frustration with philosophy in general. For that reason, we stick to well-defined mathematical questions that are likely to have actual answers, such as questions about how to fire a cannonball on a perfectly spherical planet with no atmosphere such that it winds up in a stable orbit.\n\n\nThis often requires developing new mathematical frameworks. For example, in the case of the logical undiscreteness problem, we’re developing methods for translating between time-varying accelerations and time-varying positions. You can call the development of new mathematical frameworks “philosophical” if you’d like — but if you do, remember that it’s a very different kind of philosophy than the “speculate about the heavenly and earthly planes” sort, and that we’re always pushing to develop new mathematical frameworks or tools.\n\n\n**ALFONSO:**  So from the perspective of the public good, what’s a good thing that might happen if you solved this logical undiscreteness problem?\n\n\n**BETH:**  Mainly, we’d be less confused and our research wouldn’t be blocked and humanity could actually land on the Moon someday. To try and make it more concrete – though it’s hard to do that without actually knowing the concrete solution – we might be able to talk about incrementally more realistic rocket trajectories, because our mathematics would no longer break down as soon as we stopped assuming that rockets moved in straight lines. 
Our math would be able to talk about exact curves, instead of a series of straight lines that approximate the curve.\n\n\n**ALFONSO:**  An exact curve that a rocket follows? This gets me into the main problem I have with your project in general. I just don’t believe that any future rocket design will be the sort of thing that can be analyzed with absolute, perfect precision so that you can get the rocket to the Moon based on an absolutely plotted trajectory with no need to steer. That seems to me like a bunch of mathematicians who have no clue how things work in the real world, wanting everything to be perfectly calculated. Look at the way Venus moves in the sky; usually it travels in one direction, but sometimes it goes retrograde in the other direction. We’ll just have to steer as we go.\n\n\n**BETH:**  That’s not what I meant by talking about exact curves… Look, even if we can invent logical undiscreteness, I agree that it’s futile to try to predict, in advance, the precise trajectories of all of the winds that will strike a rocket on its way off the ground. Though I’ll mention parenthetically that things might actually become calmer and easier to predict, once a rocket gets sufficiently high up –\n\n\n**ALFONSO:**  Why?\n\n\n**BETH:**  Let’s just leave that aside for now, since we both agree that rocket positions are hard to predict exactly during the atmospheric part of the trajectory, due to winds and such. And yes, if you can’t exactly predict the initial trajectory, you can’t exactly predict the later trajectory. So, indeed, the proposal is definitely not to have a rocket design so perfect that you can fire it at exactly the right angle and then walk away without the pilot doing any further steering. 
The point of doing rocket math isn’t that you want to predict the rocket’s exact position at every microsecond, in advance.\n\n\n**ALFONSO:**  Then why obsess over pure math that’s too simple to describe the rich, complicated real universe where sometimes it rains?\n\n\n**BETH:**  It’s true that a real rocket isn’t a simple equation on a board. It’s true that there are all sorts of aspects of a real rocket’s shape and internal plumbing that aren’t going to have a mathematically compact characterization. What MIRI is doing isn’t the right degree of mathematization for all rocket engineers for all time; it’s the mathematics for us to be using right now (or so we hope).\n\n\nTo build up the field’s understanding incrementally, we need to talk about ideas whose consequences can be pinpointed precisely enough that people can analyze scenarios in a shared framework. We need enough precision that someone can say, “I think in scenario X, design Y does Z”, and someone else can say, “No, in scenario X, Y actually does W”, and the first person responds, “Darn, you’re right. Well, is there some way to change Y so that it would do Z?”\n\n\nIf you try to make things realistically complicated at this stage of research, all you’re left with is verbal fantasies. When we try to talk to someone with an enormous flowchart of all the gears and steering rudders they think should go into a rocket design, and we try to explain why a rocket pointed at the Moon doesn’t necessarily end up at the Moon, they just reply, “Oh, my rocket won’t do *that*.” Their ideas have enough vagueness and flex and underspecification that they’ve achieved the safety of nobody being able to prove to them that they’re wrong. It’s impossible to incrementally build up a body of collective knowledge that way.\n\n\nThe goal is to start building up a library of tools and ideas we can use to discuss trajectories formally. 
Some of the key tools for formalizing and analyzing *intuitively* plausible-seeming trajectories haven’t yet been expressed using math, and we can live with that for now. We still try to find ways to represent the key ideas in mathematically crisp ways whenever we can. That’s not because math is so neat or so prestigious; it’s part of an ongoing project to have arguments about rocketry that go beyond “Does not!” vs. “Does so!”\n\n\n**ALFONSO:**  I still get the impression that you’re reaching for the warm, comforting blanket of mathematical reassurance in a realm where mathematical reassurance doesn’t apply. We can’t obtain a mathematical certainty of our spaceplanes being absolutely sure to reach the Moon with nothing going wrong. That being the case, there’s no point in trying to pretend that we can use mathematics to get absolute guarantees about spaceplanes.\n\n\n**BETH:**  Trust me, I am not going to feel “reassured” about rocketry no matter what math MIRI comes up with. But, yes, of course you can’t obtain a mathematical assurance of any physical proposition, nor assign probability 1 to any empirical statement.\n\n\n**ALFONSO:**  Yet you talk about proving theorems – proving that a cannonball will go in circles around the earth indefinitely, for example.\n\n\n**BETH:**  Proving a theorem about a rocket’s trajectory won’t ever let us feel comfortingly certain about where the rocket is actually going to end up. But if you can prove a theorem which says that your rocket would go to the Moon if it launched in a perfect vacuum, maybe you can attach some steering jets to the rocket and then have it actually go to the Moon in real life. Not with 100% probability, but with probability greater than zero.\n\n\nThe point of our work isn’t to take current ideas about rocket aiming from a 99% probability of success to a 100% chance of success. 
It’s to get past an approximately 0% chance of success, which is where we are now.\n\n\n**ALFONSO:**  Zero percent?!\n\n\n**BETH:**  Modulo [Cromwell’s Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule), yes, zero percent. If you point a rocket’s nose at the Moon and launch it, it does not go to the Moon.\n\n\n**ALFONSO:**  I don’t think future spaceplane engineers will actually be that silly, if direct Moon-aiming isn’t a method that works. They’ll lead the Moon’s current motion in the sky, and aim at the part of the sky where the Moon will appear on the day the spaceplane is a Moon’s distance away. I’m a bit worried that you’ve been talking about this problem so long without considering such an obvious idea.\n\n\n**BETH:**  We considered that idea very early on, and we’re pretty sure that it still doesn’t get us to the Moon.\n\n\n**ALFONSO:**  What if I add steering fins so that the rocket moves in a more curved trajectory? Can you prove that no version of that class of rocket designs will go to the Moon, no matter what I try?\n\n\n**BETH:**  Can you sketch out the trajectory that you think your rocket will follow?\n\n\n**ALFONSO:**  It goes from the Earth to the Moon.\n\n\n**BETH:**  In a bit more detail, maybe?\n\n\n**ALFONSO:**  No, because in the real world there are always variable wind speeds, we don’t have infinite fuel, and our spaceplanes don’t move in perfectly straight lines.\n\n\n**BETH:**  Can you sketch out a trajectory that you think a simplified version of your rocket will follow, so we can examine the [assumptions](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) your idea requires?\n\n\n**ALFONSO:**  I just don’t believe in the general methodology you’re proposing for spaceplane designs. We’ll put on some steering fins, turn the wheel as we go, and keep the Moon in our viewports. 
If we’re off course, we’ll steer back.\n\n\n**BETH:**  … We’re actually a bit concerned that [standard steering fins may stop working once the rocket gets high enough](https://intelligence.org/files/Corrigibility.pdf), so you won’t actually find yourself able to correct course by much once you’re in the celestial reaches – like, if you’re already on a good course, you can correct it, but if you screwed up, you won’t just be able to turn around like you could turn around an airplane –\n\n\n**ALFONSO:**  Why not?\n\n\n**BETH:**  We can go into that topic too; but even given a simplified model of a rocket that you *could* steer, a walkthrough of the steps along the path that simplified rocket would take to the Moon would be an important step in moving this discussion forward. Celestial rocketry is a domain that we expect to be unusually difficult – even compared to building rockets on Earth, which is already a famously hard problem because they usually just explode. It’s not that everything has to be neat and mathematical. But the overall difficulty is such that, in a proposal like “lead the Moon in the sky,” if the core ideas don’t have a certain amount of solidity about them, it would be equivalent to firing your rocket randomly into the void.\n\n\nIf it feels like you don’t know for sure whether your idea works, but that it might work; if your idea has many plausible-sounding elements, and to you it feels like nobody has been able to *convincingly* explain to you how it would fail; then, in real life, that proposal has a roughly 0% chance of steering a rocket to the Moon.\n\n\nIf it seems like an idea is extremely solid and clearly well-understood, if it feels like this proposal should definitely take a rocket to the Moon without fail in good conditions, then maybe under the best-case conditions we should assign an 85% subjective credence in success, or something in that vicinity.\n\n\n**ALFONSO:**  So uncertainty automatically means failure? 
This is starting to sound a bit paranoid, honestly.\n\n\n**BETH:**  The idea I’m trying to communicate is something along the lines of, “If you can reason rigorously about why a rocket should definitely work in principle, it might work in real life, but if you have anything less than that, then it definitely won’t work in real life.”\n\n\nI’m not asking you to give me an absolute mathematical proof of empirical success. I’m asking you to give me something more like a sketch for how a simplified version of your rocket could move, that’s sufficiently determined in its meaning that you can’t just come back and say “Oh, I didn’t mean *that*” every time someone tries to figure out what it actually does or pinpoint a failure mode.\n\n\nThis isn’t an unreasonable demand that I’m imposing to make it impossible for any ideas to pass my filters. It’s the primary bar all of us have to pass to contribute to collective progress in this field. And a rocket design which can’t even pass that conceptual bar has roughly a 0% chance of landing softly on the Moon.\n\n\n\n \n\n\nThe post [The Rocket Alignment Problem](https://intelligence.org/2018/10/03/rocket-alignment/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-10-03T22:28:00Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "9b244ec68fef59d585246836419d973b", "title": "September 2018 Newsletter", "url": "https://intelligence.org/2018/09/30/september-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "[Summer MIRI Updates](https://intelligence.org/2018/09/01/summer-miri-updates/): Buck Shlegeris and Ben Weinstein-Raun have joined the MIRI team! 
Additionally, we ran a successful internship program over the summer, and we’re co-running a new engineer-oriented workshop series with CFAR.\nOn the [fundraising](https://intelligence.org/2018/09/01/summer-miri-updates/#2) side, we received a $489,000 grant from the Long-Term Future Fund, a $150,000 AI Safety Retraining Program grant from the Open Philanthropy Project, and an amazing surprise $1.02 million grant from “Anonymous Ethereum Investor #2”!\n\n\n#### Other updates\n\n\n* New research forum posts: [Reducing Collective Rationality to Individual Optimization in Common-Payoff Games Using MCMC](https://www.alignmentforum.org/posts/JKSS8GEu7DGX4YuxN/reducing-collective-rationality-to-individual-optimization); [History of the Development of Logical Induction](https://www.alignmentforum.org/posts/iBBK4j6RWC7znEiDv/history-of-the-development-of-logical-induction)\n* We spoke at the [Human-Aligned AI Summer School](http://humanaligned.ai/index.html) in Prague.\n* MIRI advisor Blake Borgeson has joined our [Board of Directors](https://intelligence.org/team/#board), and DeepMind Research Scientist Victoria Krakovna has become a MIRI [research advisor](https://intelligence.org/team/#advisors).\n\n\n#### News and links\n\n\n* The Open Philanthropy Project is accepting a new round of applicants for its [AI Fellows Program](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program).\n\n\n\nThe post [September 2018 Newsletter](https://intelligence.org/2018/09/30/september-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-09-30T21:53:16Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "59d09e0d9d88cb8c193add15b4ef0eec", "title": "Summer MIRI Updates", "url": "https://intelligence.org/2018/09/01/summer-miri-updates/", "source": "miri", "source_type": "blog", "text": "In our last major 
updates—our 2017 [strategic update](https://intelligence.org/2017/04/30/2017-updates-and-strategy/) and [fundraiser](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) posts—we said that our current focus is on technical research and executing our biggest-ever [hiring](https://intelligence.org/careers/software-engineer/) push. Our supporters responded with an incredible show of support at the end of the year, putting us in an excellent position to execute on our most ambitious growth plans.\n\n\nIn this post, I’d like to provide some updates on our recruiting efforts and successes, announce some major donations and grants that we’ve received, and provide some other miscellaneous updates.\n\n\nIn brief, our major announcements are:\n\n\n1. We have **two new full-time research staff** hires to announce.\n2. We’ve received **$1.7 million in major donations and grants**, $1 million of which came through a [tax-advantaged fund for Canadian MIRI supporters](https://rcforward.org/miri/).\n\n\nFor more details, see below.\n\n\n\n\n\n---\n\n\n### 1. Growth\n\n\nI’m happy to announce the addition of two new research staff to the MIRI team:\n\n\n \n\n\n![](https://lh3.googleusercontent.com/UfHCN3t4-sZNgWQ5Fy0RX2Y4dYjTWv_V_Vc1VXzp7sNsgThlDYZ7sxOY4cZiDcdWRhKd1aYKUrx61drGiWTx44PGR6-Ly14EUpHQ5cxGQNYRcHYDaXRxbexj5fiBAE3VTw9kbwa-)\n\n\n**Buck Shlegeris**: Before joining MIRI, Buck worked as a software engineer at PayPal, and he was the first employee at Triplebyte. He previously studied at the Australian National University, majoring in CS and minoring in math and physics, and he has presented work on data structure synthesis at industry conferences. 
In addition to his research at MIRI, Buck is also helping with recruiting.\n\n\n\n\n \n\n\n\n \n\n![](https://lh4.googleusercontent.com/t58zkj843zMWMnOd5ND0Awu83OqOsaM2LUjuL-zgnPuPL0heU3t_V3RkZHb81UAaNUrLyRicAWJDJSakFp4wYVYQgSA576jCs4931R6h2CgA6_4xSHyy0MBzCrlZcm80-ddtnMPB) \n\n\n\n\n\n**Ben Weinstein-Raun**: Ben joined MIRI after spending two years as a software engineer at Cruise Automation, where he worked on the planning and prediction teams. He previously worked at Counsyl on their automated genomics lab, and helped to found Hacksburg, a hackerspace in Blacksburg, Virginia. He holds a BS from Virginia Tech, where he studied computer engineering.\n\n\n\n\n \n\n\n\n\nThis year we’ve run a few different programs to help us work towards our hiring goals, and to more generally increase the number of people doing AI alignment research: \n \n\n\n\n\n1. We’ve been co-running a **series of invite-only workshops** with the Center for Applied Rationality (CFAR), targeted at potential future hires who have strong engineering backgrounds. Participants report really enjoying the workshops, and we’ve found them very useful for getting to know potential research staff hires.[1](https://intelligence.org/2018/09/01/summer-miri-updates/#footnote_0_17777 \"Ben was a workshop participant, which eventually led to him coming on board at MIRI.\") If you’d be interested in attending one of these workshops, send [Buck](mailto:buck@intelligence.org) an email.\n2. We helped run the [**AI Summer Fellows Program**](http://www.rationality.org/workshops/apply-aisfp) with CFAR. We had a large and extremely strong pool of applicants, with over 170 applications for 30 slots (versus 50 applications for 20 slots in 2017). The program this year was more mathematically flavored than in 2017, and concluded with a flurry of [new analyses](https://intelligence.org/2018/08/27/august-2018-newsletter/) by participants. 
On the whole, the program seems to have been more successful at digging into AI alignment problems than in previous years, as well as more successful at seeding ongoing collaborations between participants, and between participants and MIRI staff.\n3. We ran a ten-week **research internship program** this summer, from June through August.[2](https://intelligence.org/2018/09/01/summer-miri-updates/#footnote_1_17777 \"We also have another research intern joining us in the fall.\") This included our six interns attending AISFP and pursuing a number of independent lines of research, with a heavy focus on tiling agents. Among other activities, interns looked for Vingean reflection [in expected utility maximizers](https://www.alignmentforum.org/posts/nsbKeodxHJFKX2yYp/probabilistic-tiling-preliminary-attempt), distilled early research on subsystem alignment, and built on Abram’s [Complete Class Theorems approach](https://www.alignmentforum.org/posts/sZuw6SGfmZHvcAAEP/complete-class-consequentialist-foundations) to decision theory.\n\n\n\n\nIn related news, we’ve been restructuring and growing our operations team to ensure we’re well positioned to support the research team as we grow. Alex Vermeer has taken on a more general support role as our process and projects head. In addition to his donor relationships and fundraising focus, Colm Ó Riain has taken on a central role in our recruiting efforts as our head of growth. Aaron Silverbook is now heading operations; we’ve brought on Carson Jones as our new office manager; and long-time remote MIRI contractor Jimmy Rintjema is now our digital infrastructure lead.[3](https://intelligence.org/2018/09/01/summer-miri-updates/#footnote_2_17777 \"We’ve long considered Jimmy to be full-time staff, but he isn’t officially an employee since he lives in Canada.\") \n \n\n\n\n\n### 2. 
Fundraising\n\n\n\n\nOn the fundraising side, I’m happy to announce that we’ve received several major donations and grants.\n\n\n\n\nFirst, following our [$1.01 million donation](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/) from an anonymous Ethereum investor in 2017, we’ve received a huge new donation of **$1.02 million** from “Anonymous Ethereum Investor #2”, based in Canada! The donation was made through Rethink Charity Forward’s recently established [tax-advantaged fund for Canadian MIRI supporters](https://rcforward.org/miri/).[4](https://intelligence.org/2018/09/01/summer-miri-updates/#footnote_3_17777 \"H/T to Colm for setting up a number of tax-advantaged giving channels for international donors. If you’re a MIRI supporter outside the US, make sure to check out our Tax-Advantaged Donations page.\") \n\n\n\n\nSecond, the [departing](http://effective-altruism.com/ea/1rj/ea_funds_an_update_from_cea/) administrator of the [Long-Term Future Fund](https://app.effectivealtruism.org/funds/far-future), Nick Beckstead, has recommended a **$489,000** [grant to MIRI](https://app.effectivealtruism.org/funds/far-future/payouts/6g4f7iae5Ok6K6YOaAiyK0), aimed chiefly at funding improvements to organizational efficiency and staff productivity.\n\n\n\n\nTogether, these contributions have helped ensure that we remain in the solid position we were in following our 2017 fundraiser, as we attempt to greatly scale our team size. 
Our enormous thanks for this incredible support, and further thanks to RC Forward and the [Centre for Effective Altruism](https://www.centreforeffectivealtruism.org/) for helping build the infrastructure that made these contributions possible.\n\n\n\n\nWe’ve also received a **$150,000** [**AI Safety Retraining Program**](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program) grant from the Open Philanthropy Project to provide stipends and guidance to a few highly technically skilled individuals. The goal of the program is to free up 3-6 months of time for strong candidates to spend on retraining, so that they can potentially transition to full-time work on AI alignment. Buck is currently selecting candidates for the program; to date, we’ve made two grants to individuals.[5](https://intelligence.org/2018/09/01/summer-miri-updates/#footnote_4_17777 \"We aren’t taking formal applications, but if you’re particularly interested in the program or have questions, you’re welcome to shoot Buck an email.\") \n \n\n\n\n\n### 3. Miscellaneous updates\n\n\n\n\nThe LessWrong development team has launched a [beta](https://www.alignmentforum.org/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta) for the [**AI Alignment Forum**](https://www.alignmentforum.org/), a new research forum for technical AI safety work that we’ve been contributing to. I’m very grateful to the LW team for taking on this project, and I’m really looking forward to the launch of the new forum. \n\n\n\n\nFinally, we’ve made substantial progress on the [**tiling problem**](https://intelligence.org/files/TilingAgentsDraft.pdf), which we’ll likely be detailing later this year. 
See our March [research plans and predictions](https://intelligence.org/2018/03/31/2018-research-plans/) write-up for more on our research priorities.\n\n\n\n\n \n\n\n\n\nWe’re very happy about these newer developments, and we’re particularly excited to have Buck and Ben on the team. We have a few more big announcements coming up in the not-so-distant future, so stay tuned.\n\n\n\n\n\n\n\n---\n\n1. Ben was a workshop participant, which eventually led to him coming on board at MIRI.\n2. We also have another research intern joining us in the fall.\n3. We’ve long considered Jimmy to be full-time staff, but he isn’t officially an employee since he lives in Canada.\n4. H/T to Colm for setting up a number of tax-advantaged giving channels for international donors. If you’re a MIRI supporter outside the US, make sure to check out our [Tax-Advantaged Donations](https://intelligence.org/donate/tax-advantaged-donations/) page.\n5. We aren’t taking formal applications, but if you’re particularly interested in the program or have questions, you’re welcome to shoot [Buck](mailto:buck@intelligence.org) an email.\n\nThe post [Summer MIRI Updates](https://intelligence.org/2018/09/01/summer-miri-updates/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-09-02T01:35:32Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "7218277d42bd84873255c131e7101432", "title": "August 2018 Newsletter", "url": "https://intelligence.org/2018/08/27/august-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* New posts to the new [AI Alignment Forum](https://www.alignmentforum.org): [Buridan’s Ass in Coordination Games](https://www.alignmentforum.org/posts/4xpDnGaKz472qB4LY/buridan-s-ass-in-coordination-games); [Probability is Real, and Value is Complex](https://www.alignmentforum.org/posts/oheKfWA7SsvpK7SGp/probability-is-real-and-value-is-complex); [Safely and Usefully Spectating on AIs 
Optimizing Over Toy Worlds](https://www.alignmentforum.org/posts/ikN9qQEkrFuPtYd6Y/safely-and-usefully-spectating-on-ais-optimizing-over-toy)\n* MIRI Research Associate Vanessa Kosoy wins a $7500 AI Alignment Prize for “[The Learning-Theoretic AI Alignment Research Agenda](https://agentfoundations.org/item?id=1816).” Applications for [the prize’s next round](https://www.lesswrong.com/posts/juBRTuE3TLti5yB35/announcement-ai-alignment-prize-round-3-winners-and-next) will be open through December 31.\n* Interns from MIRI and the Center for Human-Compatible AI collaborated at an AI safety [research workshop](https://intelligence.org/workshops/#july-2018).\n* This year’s [AI Summer Fellows Program](http://www.rationality.org/workshops/apply-aisfp) was very successful, and its one-day blogathon resulted in a number of interesting write-ups, such as [Dependent Type Theory and Zero-Shot Reasoning](https://www.alignmentforum.org/posts/Xfw2d5horPunP2MSK/dependent-type-theory-and-zero-shot-reasoning), [Conceptual Problems with Utility Functions](https://www.alignmentforum.org/posts/Nx4DsTpMaoTiTp4RQ/conceptual-problems-with-utility-functions) (and [follow-up](https://www.alignmentforum.org/posts/QmeguSp4Pm7gecJCz/conceptual-problems-with-utility-functions-second-attempt-at)), [Complete Class: Consequentialist Foundations](https://www.alignmentforum.org/posts/sZuw6SGfmZHvcAAEP/complete-class-consequentialist-foundations), and [Agents That Learn From Human Behavior Can’t Learn Human Values That Humans Haven’t Learned Yet](https://www.alignmentforum.org/posts/DfewqowdzDdCD7S9y/agents-that-learn-from-human-behavior-can-t-learn-human).\n* See Rohin Shah’s [alignment newsletter](https://www.alignmentforum.org/posts/EQ9dBequfxmeYzhz6/alignment-newsletter-15-07-16-18) for more discussion of recent posts to the new AI Alignment Forum.\n\n\n#### News and links\n\n\n* The Future of Humanity Institute is seeking [project managers](https://www.fhi.ox.ac.uk/project-managers/) for its 
Research Scholars Programme and its Governance of AI Program.\n\n\n\nThe post [August 2018 Newsletter](https://intelligence.org/2018/08/27/august-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-08-28T04:57:32Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "1d497c0d0f0dfa03fcc2837cbddc0d61", "title": "July 2018 Newsletter", "url": "https://intelligence.org/2018/07/25/july-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* A new paper: “[Forecasting Using Incomplete Models](https://intelligence.org/2018/06/27/forecasting-using-incomplete-models/)“\n* New research write-ups and discussions: [Prisoners’ Dilemma with Costs to Modeling](https://www.lesswrong.com/posts/XjMkPyaPYTf7LrKiT/prisoners-dilemma-with-costs-to-modeling); [Counterfactual Mugging Poker Game](https://www.lesswrong.com/posts/g3PwPgcdcWiP33pYn/counterfactual-mugging-poker-game); [Optimization Amplifies](https://www.lesswrong.com/posts/zEvqFtT4AtTztfYC4/optimization-amplifies)\n* Eliezer Yudkowsky, Paul Christiano, Jessica Taylor, and Wei Dai [discuss](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq#79jM2ecef73zupPR4) Alex Zhu’s [FAQ for Paul’s research agenda](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq).\n* We attended [EA Global](https://sf.eaglobal.org) in SF, and gave a short talk on “[Categorizing Variants of Goodhart’s Law](https://intelligence.org/2018/03/27/categorizing-goodhart/).”\n* Roman Yampolskiy’s forthcoming anthology, [*Artificial Intelligence Safety and Security*](https://www.crcpress.com/Artificial-Intelligence-Safety-and-Security/Yampolskiy/p/book/9780815369820), includes reprinted papers by Nate Soares (“[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf)“) and by Nick Bostrom and Eliezer Yudkowsky (“[The Ethics of Artificial 
Intelligence](https://intelligence.org/files/EthicsofAI.pdf)“).\n* Stuart Armstrong’s 2014 primer on AI risk, *[Smarter Than Us: The Rise of Machine Intelligence](https://intelligence.org/smarter-than-us/)*, is now available as a free web book at [smarterthan.us](https://smarterthan.us).\n\n\n#### News and links\n\n\n* OpenAI announces that their [OpenAI Five](https://blog.openai.com/openai-five/) system “has started to defeat amateur human teams at Dota 2” (plus an [update](https://blog.openai.com/openai-five-benchmark/)). Discussion on [LessWrong](https://www.lesswrong.com/posts/ejxi9W9nRqGY7BzYY/openai-releases-functional-dota-5v5-bot-aims-to-beat-world) and [Hacker News](https://news.ycombinator.com/item?id=17392455).\n* Rohin Shah, a PhD student at the Center for Human-Compatible AI, comments on recent alignment-related results in his regularly updated [Alignment Newsletter](http://rohinshah.com/alignment-newsletter/).\n\n\n\nThe post [July 2018 Newsletter](https://intelligence.org/2018/07/25/july-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-07-25T23:49:28Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "feb34944419e749e9e4fbeab87ad4eba", "title": "New paper: “Forecasting using incomplete models”", "url": "https://intelligence.org/2018/06/27/forecasting-using-incomplete-models/", "source": "miri", "source_type": "blog", "text": "[![Forecasting Using Incomplete Models](https://intelligence.org/files/ForecastingModels.jpg)](https://arxiv.org/abs/1705.04630)MIRI Research Associate Vanessa Kosoy has a paper out on issues in naturalized induction: “[Forecasting using incomplete models](https://arxiv.org/abs/1705.04630)”. 
Abstract:\n\n\n\n> We consider the task of forecasting an infinite sequence of future observations based on some number of past observations, where the probability measure generating the observations is “suspected” to satisfy one or more of a set of incomplete models, i.e., convex sets in the space of probability measures.\n> \n> \n> This setting is in some sense intermediate between the realizable setting where the probability measure comes from some known set of probability measures (which can be addressed using e.g. Bayesian inference) and the unrealizable setting where the probability measure is completely arbitrary.\n> \n> \n> We demonstrate a method of forecasting which guarantees that, whenever the true probability measure satisfies an incomplete model in a given countable set, the forecast converges to the same incomplete model in the (appropriately normalized) Kantorovich-Rubinstein metric. This is analogous to merging of opinions for Bayesian inference, except that convergence in the Kantorovich-Rubinstein metric is weaker than convergence in total variation.\n> \n> \n\n\nKosoy’s work builds on logical inductors to create a cleaner (purely learning-theoretic) formalism for modeling complex environments, showing that the methods developed in “Logical induction” are useful for applications in classical sequence prediction unrelated to logic.\n\n\n“Forecasting using incomplete models” also shows that the intuitive concept of an “incomplete” or “partial” model has an elegant and useful formalization related to Knightian uncertainty. 
Additionally, Kosoy shows that using incomplete models to generalize Bayesian inference allows an agent to make predictions about environments that can be as complex as the agent itself, or more complex — as contrasted with classical Bayesian inference.\n\n\nFor more of Kosoy’s research, see “[Optimal polynomial-time estimators](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/)” and the [Intelligent Agent Foundations Forum](https://agentfoundations.org/submitted?id=7). \n\n\nThe post [New paper: “Forecasting using incomplete models”](https://intelligence.org/2018/06/27/forecasting-using-incomplete-models/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-06-27T23:48:57Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "d10317b547ba862f108126360666c9c9", "title": "June 2018 Newsletter", "url": "https://intelligence.org/2018/06/23/june-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* New research write-ups and discussions: [Logical Inductors Converge to Correlated Equilibria (Kinda)](https://agentfoundations.org/item?id=1804)\n* MIRI researcher Tsvi Benson-Tilsen and Alex Zhu ran an AI safety retreat for MIT students and alumni.\n* Andrew Critch discusses what kind of advice to give [to junior AI-x-risk-concerned researchers](https://www.lesswrong.com/posts/7uJnA3XDpTgemRH2c/critch-on-career-advice-for-junior-ai-x-risk-concerned), and I clarify [two points about MIRI’s strategic view](https://www.lesswrong.com/posts/hL9ennoEfJXMj7r2D/two-clarifications-about-strategic-background).\n* From Eliezer Yudkowsky: [Challenges to Paul Christiano’s Capability Amplification 
Proposal](https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/). (Cross-posted to [LessWrong](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal).)\n\n\n#### News and links\n\n\n* Jessica Taylor [discusses](https://www.lesswrong.com/posts/KphrG3chfiuFX5Cu6/decision-theory-and-zero-sum-game-theory-np-and-pspace) the relationship between decision theory, game theory, and the NP and PSPACE complexity classes.\n* From OpenAI’s Geoffrey Irving, Paul Christiano, and Dario Amodei: an AI safety technique based on [training agents to debate each other](https://blog.openai.com/debate/). And from Amodei and Danny Hernandez, an [analysis](https://blog.openai.com/ai-and-compute/) showing that “since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time”.\n* Christiano asks: [Are Minimal Circuits Daemon-Free?](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free) and [When is Unaligned AI Morally Valuable?](https://www.lesswrong.com/posts/3kN79EuT27trGexsq/when-is-unaligned-ai-morally-valuable)\n* The Future of Humanity Institute’s Allan Dafoe discusses the future of AI, international governance, and macrostrategy [on the 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/).\n\n\n\nThe post [June 2018 Newsletter](https://intelligence.org/2018/06/23/june-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-06-23T23:36:32Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "57cab980293471b8a9963a01d8be1f95", "title": "May 2018 Newsletter", "url": "https://intelligence.org/2018/05/31/may-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* New research write-ups and discussions: [Resource-Limited Reflective 
Oracles](https://agentfoundations.org/item?id=1793); [Computing An Exact Quantilal Policy](https://agentfoundations.org/item?id=1794)\n* New at AI Impacts: [Promising Research Projects](https://aiimpacts.org/promising-research-projects/)\n* MIRI research fellow Scott Garrabrant and associates Stuart Armstrong and Vanessa Kosoy are among the winners in the [second round](https://www.lesswrong.com/posts/SSEyiHaACSYDHcYZz/announcement-ai-alignment-prize-round-2-winners-and-next) of the AI Alignment Prize. First place goes to Tom Everitt and Marcus Hutter’s “[The Alignment Problem for History-Based Bayesian Reinforcement Learners](http://www.tomeveritt.se/papers/alignment.pdf).”\n* Our thanks to our donors in REG’s [Spring Matching Challenge](https://reg-charity.org/spring-matching-challenge/) and to online poker players Chappolini, donthnrmepls, FMyLife, ValueH, and xx23xx, who generously matched $47,000 in donations to MIRI, plus another $250,000 to the Good Food Institute, GiveDirectly, and other charities.\n\n\n#### News and links\n\n\n* [OpenAI’s charter](https://blog.openai.com/openai-charter/) predicts that “safety and security concerns will reduce [their] traditional publishing in the future” and emphasizes the importance of “long-term safety” and avoiding late-stage races between AGI developers.\n* Matthew Rahtz recounts [lessons learned](http://amid.fish/reproducing-deep-rl) while reproducing Christiano et al.’s “Deep Reinforcement Learning from Human Preferences.”\n\n\n\nThe post [May 2018 Newsletter](https://intelligence.org/2018/05/31/may-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-06-01T01:37:29Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "68d54a2733de24b8376320a7ee954235", "title": "Challenges to Christiano’s capability amplification proposal", "url": "https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/", 
"source": "miri", "source_type": "blog", "text": "The following is a basically unedited summary I wrote up on March 16 of my take on Paul Christiano’s AGI alignment approach (described in “[ALBA](https://ai-alignment.com/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf)” and “[Iterated Distillation and Amplification](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616)”). Where Paul had comments and replies, I’ve included them below.\n\n\n\n\n---\n\n\nI see a lot of free variables with respect to what exactly Paul might have in mind. I've sometimes tried presenting Paul with my objections and then he replies in a way that locally answers some of my question but I think would make other difficulties worse. My global objection is thus something like, \"I don't see any concrete setup and *consistent simultaneous* setting of the variables where this whole scheme works.\" These difficulties are not minor or technical; they appear to me quite severe. I try to walk through the details below.\n\n\nIt should be understood at all times that I do not claim to be able to pass Paul’s [ITT](http://econlog.econlib.org/archives/2011/06/the_ideological.html) for Paul’s view and that this is me criticizing my own, potentially straw misunderstanding of what I imagine Paul might be advocating.\n\n\n\n \n\n\n\nPaul Christiano\n\n\nOverall take: I think that these are all legitimate difficulties faced by my proposal and to a large extent I agree with Eliezer's account of those problems (though not his account of my current beliefs).\n\n\nI don't understand exactly how hard Eliezer expects these problems to be; my impression is \"just about as hard as solving alignment from scratch,\" but I don't have a clear sense of why.\n\n\nTo some extent we are probably disagreeing about alternatives. From my perspective, the difficulties with my approach (e.g. 
better understanding the forms of optimization that cause trouble, or how to avoid optimization daemons in systems about as smart as you are, or how to address X-and-only-X) are also problems for alternative alignment approaches. I think it's a mistake to think that tiling agents, or decision theory, or naturalized induction, or logical uncertainty, are going to make the situation qualitatively better for these problems, so work on those problems looks to me like procrastinating on the key difficulties. I agree with the intuition that progress on the agent foundations agenda \"ought to be possible,\" and I agree that it will help at least a *little bit* with the problems Eliezer describes in this document, but overall agent foundations seems way less promising than a direct attack on the problems (given that we haven’t tried the direct attack nearly enough to give up). Working through philosophical issues in the context of a concrete alignment strategy generally seems more promising to me than trying to think about them in the abstract, and I think this is evidenced by the fact that most of the core difficulties in my approach would also afflict research based on agent foundations.\n\n\nThe main way I could see agent foundations research as helping to address these problems, rather than merely deferring them, is if we plan to eschew large-scale ML altogether. That seems to me like a very serious handicap, so I'd only go that direction once I was quite pessimistic about solving these problems. My subjective experience is of making continuous significant progress rather than being stuck. I agree there is clear evidence that the problems are \"difficult\" in the sense that we are going to have to make progress in order to solve them, but not that they are \"difficult\" in the sense that P vs. 
NP or even your typical open problem in CS is probably difficult (and even then if your options were \"prove P != NP\" or \"try to beat Google at building an AGI without using large-scale ML,\" I don't think it's obvious which option you should consider more promising).\n\n\n\n\n---\n\n\nFirst and foremost, I don't understand how \"preserving alignment while amplifying capabilities\" is supposed to work at all under this scenario, in a way consistent with other things that I’ve understood Paul to say.\n\n\nI want to first go through an obvious point that I expect Paul and I agree upon: Not every system of locally aligned parts has globally aligned output, and some additional assumption beyond \"the parts are aligned\" is necessary to yield the conclusion \"global behavior is aligned\". The straw assertion \"an aggregate of aligned parts is aligned\" is the reverse of the [argument](http://www.iep.utm.edu/chineser/) that Searle uses to ask us to imagine that an (immortal) human being who speaks only English, who has been trained to do things with many many pieces of paper that instantiate a Turing machine, can't be part of a whole system that understands Chinese, because the individual pieces and steps of the system aren't locally imbued with understanding Chinese. Here the compositionally non-preserved property is \"lack of understanding of Chinese\"; we can't expect \"alignment\" to be any more necessarily preserved than this, except by further assumptions.\n\n\nThe second-to-last time Paul and I conversed at length, I kept probing Paul for what in practice the non-compacted-by-training version of a big aggregate of small aligned agents would look like. He described people, living for a single day, routing around phone numbers of other agents with nobody having any concept of the global picture. I used the term \"Chinese Room Bureaucracy\" to describe this. 
Paul seemed to think that this was an amusing but perhaps not inappropriate term.\n\n\nIf no agent in the Chinese Room Bureaucracy has a full view of which actions have which consequences and why, this cuts off the most obvious route by which the alignment of any agent could apply to the alignment of the whole. The way I usually imagine things, the alignment of an agent applies to things that the agent understands. If you have a big aggregate of agents that understands something the little local agent doesn't understand, the big aggregate doesn't inherit alignment from the little agents. Searle's Chinese Room can understand Chinese even if the person inside it doesn't understand Chinese, and this correspondingly implies, by default, that the person inside the Chinese Room is powerless to express their own taste in restaurant orders.\n\n\nI don't understand Paul's model of how a ton of little not-so-bright agents yield a big powerful understanding in aggregate, in a way that doesn't effectively consist of them running AGI code that they don't understand.\n\n\n \n\n\n\nPaul Christiano\n\n\nThe argument for alignment isn’t that “a system made of aligned neurons is aligned.” Unalignment isn't a thing that magically happens; it’s the result of specific optimization pressures in the system that create trouble. My goal is to (a) first construct weaker agents who aren't internally doing problematic optimization, (b) put them together in a way that improves capability without doing other problematic optimization, (c) iterate that process.\n\n\n\n \n\n\nPaul has previously challenged me to name a bottleneck that I think a Christiano-style system can't pass. This is hard because (a) I'm not sure I understand Paul's system, and (b) it's clearest if I name a task for which we don't have a present crisp algorithm. 
But:\n\n\nThe bottleneck I named in my last discussion with Paul was, \"We have copies of a starting agent, which run for at most one cumulative day before being terminated, and this agent hasn't previously learned much math but is smart and can get to understanding algebra by the end of the day even though the agent started out knowing just concrete arithmetic. How does a system of such agents, without just operating a Turing machine that operates an AGI, get to the point of inventing Hessian-free optimization in a neural net?\"\n\n\nThis is a slightly obsolete example because nobody uses Hessian-free optimization anymore. But I wanted to find an example of an agent that needed to do something that didn't have a simple human metaphor. We can understand second derivatives using metaphors like acceleration. \"Hessian-free optimization\" is something that doesn't have an obvious metaphor that can explain it, well enough to use it in an engineering design, to somebody who doesn't have a mathy and not just metaphorical understanding of calculus. Even if it did have such a metaphor, that metaphor would still be very unlikely to be invented by someone who didn't understand calculus.\n\n\nI don't see how Paul expects lots of little agents who can learn algebra in a day, being run in sequence, to aggregate into something that can build designs using Hessian-free optimization, *without* the little agents having effectively the role of an immortal dog that's been trained to operate a Turing machine. 
So I also don't see how Paul expects the putative alignment of the little agents to pass through this mysterious aggregation form of understanding, into alignment of the system that understands Hessian-free optimization.\n\n\nI expect this is already understood, but I state as an obvious fact that alignment is not in general a compositionally preserved property of cognitive systems: If you train a bunch of good and moral people to operate the elements of a Turing machine and nobody has a global view of what's going on, their goodness and morality does not pass through to the Turing machine. Even if we let the good and moral people have discretion as to when to write a different symbol than the usual rules call for, they still can't be effective at aligning the global system, because they don't individually understand whether the Hessian-free optimization is being used for good or evil, because they don't understand Hessian-free optimization or the thoughts that incorporate it. So we would not like to rest the system on the false assumption \"any system composed of aligned subagents is aligned\", which we know to be generally false because of this counterexample. We would like there to instead be some narrower assumption, perhaps with additional premises, which is actually true, on which the system's alignment rests. I don't know what narrower assumption Paul wants to use.\n\n\n\n\n---\n\n\nPaul asks us to consider [AlphaGo](https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446) as a model of capability amplification.\n\n\nMy view of AlphaGo would be as follows: We understand Monte Carlo Tree Search. MCTS is an iterable algorithm whose intermediate outputs can be plugged into further iterations of the algorithm. 
So we can use supervised learning where our systems of gradient descent can capture and foreshorten the computation of some but not all of the details of winning moves revealed by the short MCTS, plug in the learned outputs to MCTS, and get a pseudo-version of \"running MCTS longer and wider\" which is weaker than an MCTS actually that broad and deep, but more powerful than the raw MCTS run previously. The alignment of this system is provided by the crisp formal loss function at the end of the MCTS.\n\n\nHere's an alternate case where, as far as I can tell, a naive straw version of capability amplification clearly wouldn't work. Suppose we have an RNN that plays Go. It's been constructed in such fashion that if we iterate the RNN for longer, the Go move gets somewhat better. \"Aha,\" says the straw capability amplifier, \"clearly we can just take this RNN, train another network to approximate its internal state after 100 iterations from the initial Go position; we feed that internal state into the RNN at the start, then train the amplifying network to approximate the internal state of that RNN after it runs for another 200 iterations. 
The result will clearly go on trying to 'win at Go' because the original RNN was trying to win at Go; the amplified system preserves the values of the original.\" This doesn't work because, let us say by hypothesis, the RNN can't get arbitrarily better at Go if you go on iterating it; and the nature of the capability amplification setup doesn't permit any outside loss function that could tell the amplified RNN whether it's doing better or worse at Go.\n\n\n \n\n\n\nPaul Christiano\n\n\nI definitely agree that amplification doesn't work better than \"let the human think for arbitrarily long.\" I don’t think that’s a strong objection, because I think humans (even humans who only have a short period of time) will eventually converge to good enough answers to the questions we face.\n\n\n\n \n\n\nThe RNN has only whatever opinion it converges to, or whatever set of opinions it diverges to, to tell itself how well it's doing. This is exactly what it is for capability amplification to preserve alignment; but this in turn means that capability amplification only works to the extent that what we are amplifying has within itself the capability to be very smart in the limit.\n\n\nIf we're effectively constructing a civilization of long-lived Paul Christianos, then this difficulty is somewhat alleviated. There are still things that can go wrong with this civilization qua civilization (even aside from objections I name later as to whether we can actually safely and realistically do that). I do however believe that a civilization of Pauls could do nice things.\n\n\nBut other parts of Paul's story don't permit this, or at least that's what Paul was saying last time; Paul's supervised learning setup only lets the simulated component people operate for a day, because we can't get enough labeled cases if the people have to each run for a month.\n\n\nFurthermore, as I understand it, the \"realistic\" version of this is supposed to start with agents dumber than Paul. 
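The distill-and-amplify loop discussed above, where a short search produces training targets for a fast evaluator that is then plugged back into the search, can be illustrated with a deliberately tiny toy. Everything here is invented for illustration (the game, the tabular "evaluator", and all names are hypothetical); it shows the shape of the scheme, not AlphaGo's actual algorithm:

```python
# Toy sketch of an amplify-then-distill loop (hypothetical names throughout).
# "Amplification" is a one-step lookahead using the current fast evaluator;
# "distillation" trains the evaluator to reproduce the amplified answers
# (here, training is just tabular copying). The only ground truth is the
# terminal reward, standing in for the crisp formal loss at the end of MCTS.

TERMINAL_REWARD = {4: 10.0, 2: 3.0}   # states where the toy game ends

def successors(state):
    """Tiny invented game: from state s you may move to s+1 or s+2."""
    return [s for s in (state + 1, state + 2) if s <= 4]

def amplify(value, state):
    """Short search: look one step ahead with the fast evaluator."""
    if state in TERMINAL_REWARD:
        return TERMINAL_REWARD[state]
    return max(value[s] for s in successors(state))

def distill(value):
    """Train the fast evaluator to imitate its own amplified judgments."""
    return {s: amplify(value, s) for s in value}

value = {s: 0.0 for s in range(5)}    # untrained fast evaluator
for _ in range(5):                    # iterate amplification + distillation
    value = distill(value)

print(value[0])   # prints 10.0: the amplified evaluator finds the best path
```

Note where ground truth enters this loop: only through the crisp terminal reward, the analogue of the formal loss function at the end of the MCTS. The loop climbs no higher than that outer signal can reach, which is the point the RNN counterexample above is pressing on.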
According to my understanding of something Paul said in answer to a later objection, the agents in the system are supposed to be even dumber than an average human (but aligned). It is not at all obvious to me that an arbitrarily large system of agents with IQ 90, who each only live for one day, can implement a much smarter agent in a fashion analogous to the internal agents themselves achieving understandings to which they can apply their alignment in a globally effective way, rather than them blindly implementing a larger algorithm they don't understand.\n\n\nI'm not sure a system of one-day-living IQ-90 humans ever gets to the point of inventing fire or the wheel.\n\n\nIf Paul has an intuition saying \"Well, of course they eventually start doing Hessian-free optimization in a way that makes their understanding effective upon it to create global alignment; I can’t figure out how to convince you otherwise if you don’t already see that,\" I'm not quite sure where to go from there, except onwards to my other challenges.\n\n\n \n\n\n\nPaul Christiano\n\n\nWell, I can see one obvious way to convince you otherwise: actually run the experiment. But before doing that I'd like to be more precise about what you expect to work and not work, since I'm not going to literally do the HF optimization example (developing new algorithms is way, way beyond the scope of existing ML). I think we can do stuff that looks (to me) even harder than inventing HF optimization. But I don't know if I have a good enough model of your model to know what you'd actually consider harder.\n\n\n\n \n\n\nUnless of course you have so many agents in the (uncompressed) aggregate that the aggregate implements a smarter genetic algorithm that is maximizing the approval of the internal agents. 
If you take something much smarter than IQ 90 humans living for one day, and train it to get the IQ 90 humans to output large numbers signaling their approval, I would by default expect it to hack the IQ 90 one-day humans, who are not secure systems. We're back to the global system being smarter than the individual agents in a way which doesn't preserve alignment.\n\n\n \n\n\n\nPaul Christiano\n\n\nDefinitely agree that even if the agents are aligned, they can implement unaligned optimization, and then we're back to square one. Amplification only works if we can improve capability without doing unaligned optimization. I think this is a disagreement about the decomposability of cognitive work. I hope we can resolve it by actually finding concrete, simple tasks where we have differing intuitions, and then doing empirical tests.\n\n\n\n \n\n\nThe central interesting-to-me idea in capability amplification is that by *exactly* imitating humans, we can bypass the usual dooms of reinforcement learning. If arguendo you can construct an exact imitation of a human, it possesses exactly the same alignment properties as the human; and this is true in a way that is not true if we take a reinforcement learner and ask it to maximize an approval signal originating from the human. (If the subject is Paul Christiano, or Carl Shulman, I for one am willing to say these humans are reasonably aligned; and I'm pretty much okay with somebody giving them the keys to the universe in expectation that the keys will later be handed back.)\n\n\nIt is not obvious to me how fast alignment-preservation degrades as the exactness of the imitation is weakened. This matters because of things Paul has said which sound to me like he's not advocating for perfect imitation, in response to challenges I've given about how perfect imitation would be very expensive. 
That is, the answer he gave to a challenge about the expense of perfection makes the answer to \"How fast do we lose alignment guarantees as we move away from perfection?\" become very important.\n\n\nOne example of a doom I'd expect from standard reinforcement learning would be what I'd term the \"X-and-only-X\" problem. I unfortunately haven't written this up yet, so I'm going to try to summarize it briefly here.\n\n\nX-and-only-X is what I call the issue where the property that's easy to verify and train is X, but the property you want is \"this was optimized for X and only X and doesn't contain a whole bunch of possible subtle bad Ys that could be hard to detect formulaically from the final output of the system\".\n\n\nFor example, imagine X is \"give me a program which solves a Rubik's Cube\". You can run the program and verify that it solves Rubik's Cubes, and use a loss function over its average performance which also takes into account how many steps the program's solutions require.\n\n\nThe property Y is that the program the AI gives you also modulates RAM to send GSM cellphone signals.\n\n\nThat is: It's much easier to verify \"This is a program which at least solves the Rubik's Cube\" than \"This is a program which was optimized to solve the Rubik's Cube and only that and was not optimized for anything else on the side.\"\n\n\nIf I were going to talk about trying to do aligned AGI under the standard ML paradigms, I'd talk about how this creates a differential ease of development between \"build a system that does X\" and \"build a system that does X and only X and not Y in some subtle way\". If you just want X however unsafely, you can build the X-classifier and use that as a loss function and let reinforcement learning loose with whatever equivalent of gradient descent or other generic optimization method the future uses. 
If the safety property you want is optimized-for-X-and-just-X-and-not-any-possible-number-of-hidden-Ys, then you can't write a simple loss function for that the way you can for X.\n\n\n \n\n\n\nPaul Christiano\n\n\nAccording to my understanding of optimization / use of language: the agent produced by RL is optimized only for X. However, optimization for X is liable to produce a Y-optimizer. So the actions of the agent are both X-optimized and Y-optimized.\n\n\n\n \n\n\nThe team that's building a less safe AGI can plug in the X-evaluator and let rip, the team that wants to build a safe AGI can't do things the easy way and has to solve new basic problems in order to get a trustworthy system. It's not unsolvable, but it's an element of the class of added difficulties of alignment such that the whole class extremely plausibly adds up to [an extra two years](https://arbital.com/p/aligning_adds_time/) of development.\n\n\nIn Paul's capability-amplification scenario, if we can get exact imitation, we are genuinely completely bypassing the whole paradigm that creates the X-and-only-X problem. If you can get exact imitation of a human, the outputs have only and exactly whatever properties the human already has. This kind of genuinely different viewpoint is why I continue to be excited about Paul's thinking.\n\n\n \n\n\n\nPaul Christiano\n\n\nI agree that perfect imitation would be a way to get around the X-and-only-X problem. However, I don't think that it's plausible and it's not how my approach hopes to get around the X-and-only-X problem.\n\n\nI would solve X-and-only-X in two steps:\n\n\nFirst, given an agent and an action which has been optimized for undesirable consequence Y, we'd like to be able to tell that the action has this undesirable side effect. I think we can do this by having a smarter agent act as an overseer, and giving the smarter agent suitable insight into the cognition of the weaker agent (e.g. 
by sharing weights between the weak agent and an explanation-generating agent). This is what I'm calling informed oversight.\n\n\nSecond, given an agent, identify situations in which it is especially likely to produce bad outcomes, or proofs that it won't, or enough understanding of its internals that you can see why it won't. This is discussed in “[Techniques for Optimizing Worst-Case Performance](https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99).”\n\n\n(It also obviously requires a smarter agent, which you hope to get by induction + amplification).\n\n\nI think that both of those are hard problems, in addition to the assumption that amplification will work. But I don't yet see reason to be super pessimistic about either of them.\n\n\n\n \n\n\nOn the other hand, suppose we don't have exact imitation. How fast do we lose the defense against X-and-only-X? Well, that depends on the inexactness of the imitation; under what kind of distance metric is the imperfect imitation 'near' to the original? Like, if we're talking about Euclidean distance in the output, I expect you lose the X-and-only-X guarantee pretty damn fast against smart adversarial perturbations.\n\n\nOn the other other hand, suppose that the inexactness of the imitation is \"This agent behaves exactly like Paul Christiano but 5 IQ points dumber.\" If this is only and precisely the form of inexactness produced, and we know that for sure, then I'd say we have a pretty good guarantee against slightly-dumber-Paul producing the likes of Rubik's Cube solvers containing hidden GSM signalers.\n\n\nOn the other other other hand, suppose the inexactness of the imitation is \"This agent passes the Turing Test; a human can't tell it apart from a human.\" Then X-and-only-X is thrown completely out the window. 
We have no guarantee of non-Y for any Y a human can't detect, which covers an enormous amount of lethal territory, which is why we can't just sanitize the outputs of an untrusted superintelligence by having a human inspect the outputs to see if they have any humanly obvious bad consequences.\n\n\n\n\n---\n\n\nSpeaking of inexact imitation: It seems to me that having an AI output a *high-fidelity* imitation of human behavior, sufficiently high-fidelity to preserve properties like \"being smart\" and \"being a good person\" and \"still being a good person under some odd strains like being assembled into an enormous Chinese Room Bureaucracy\", is a pretty huge ask.\n\n\nIt seems to me obvious, though this is the sort of point where I've been surprised about what other people don't consider obvious, that in general exact imitation is a bigger ask than superior capability. Building a Go player that imitates Shuusaku's Go play so well that a scholar couldn't tell the difference, is a bigger ask than building a Go player that could defeat Shuusaku in a match. A human is much smarter than a pocket calculator but would still be unable to imitate one without using a paper and pencil; to imitate the pocket calculator you need all of the pocket calculator's abilities in addition to your own.\n\n\nCorrespondingly, a realistic AI we build that literally passes the strong version of the Turing Test would probably have to be much smarter than the other humans in the test, probably smarter than any human on Earth, because it would have to possess all the human capabilities in addition to its own. Or at least all the human capabilities that can be exhibited to another human over the course of however long the Turing Test lasts. 
(Note that on the version of capability amplification I heard, capabilities that can be exhibited over the course of a day are the only kinds of capabilities we're allowed to amplify.)\n\n\n \n\n\n\nPaul Christiano\n\n\nTotally agree, and for this reason I agree that you can't rely on perfect imitation to solve the X-and-only-X problem and hence need other solutions. If you convince me that either informed oversight or reliability is impossible, then I'll be largely convinced that I'm doomed.\n\n\n\n \n\n\nAn AI that learns to exactly imitate humans, not just passing the Turing Test to the limits of human discrimination on human inspection, but perfect imitation with all added bad subtle properties thereby excluded, must be so cognitively powerful that its learnable hypothesis space includes systems equivalent to entire human brains. I see no way that we're not talking about a superintelligence here.\n\n\nSo to postulate *perfect* imitation, we would first of all run into the problems that:\n\n\n(a)  The AGI required to learn this imitation is *extremely* powerful, and this could imply a dangerous delay between when we can build any dangerous AGI at all, and when we can build AGIs that would work for alignment using perfect-imitation capability amplification.\n\n\n(b)  Since we cannot invoke a perfect-imitation capability amplification setup to get this very powerful AGI in the first place (because it is already the least AGI that we can use to even get started on perfect-imitation capability amplification), we already have an extremely dangerous unaligned superintelligence sitting around that we are trying to use to implement our scheme for alignment.\n\n\nNow, we may perhaps reply that the imitation is less than perfect and can be done with a dumber, less dangerous AI; perhaps even so dumb as to not be enormously superintelligent. 
But then we are tweaking the “perfection of imitation” setting, which could rapidly blow up our alignment guarantees against the standard dooms of standard machine learning paradigms.\n\n\nI'm worried that you have to degrade the level of imitation *a lot* before it becomes less than an *enormous* ask, to the point that what's being imitated isn't very intelligent, isn't human, and/or isn't known to be aligned.\n\n\nTo be specific: I think that if you want to imitate IQ-90 humans thinking for one day, and imitate them so specifically that the imitations are generally intelligent and locally aligned even in the limit of being aggregated into weird bureaucracies, you're looking at an AGI powerful enough to think about whole systems loosely analogous to IQ-90 humans.\n\n\n \n\n\n\nPaul Christiano\n\n\nIt's important that my argument for alignment-of-amplification goes through *not* doing problematic optimization. So if we combine that with a good enough solution to informed oversight and reliability (and amplification, and the induction working so far…), then we can continue to train imperfect imitations that definitely don't do problematic optimization. They'll mess up all over the place, and so might not be able to be competent (another problem amplification needs to handle), but the goal is to set things up so that being a lot dumber doesn't break alignment.\n\n\n\n \n\n\nI think that is a very powerful AGI. I think this AGI is smart enough to slip all kinds of shenanigans past you, unless you are using a methodology that can produce faithful imitations from unaligned AGIs. 
I think this is an AGI that can do powerful feats of engineering, unless it is somehow able to simulate humans doing powerful feats of engineering without itself being capable of powerful feats of engineering.\n\n\nAnd then furthermore the capability amplification schema requires the AGI to be powerful enough to learn to imitate *amplified* systems of humans as well!\n\n\nIf you don’t degrade the AI to well below the point of superintelligence, we require extremely strong Faraday-cage keep-God-in-a-box don't-peek-at-the-code AI boxing. So we cannot for example let an untrusted superintelligence originate queries that it can use to learn human behavior; it has to be strictly unsupervised example-based learning rather than a query model. This interacts with worries about sample complexity below.\n\n\n \n\n\n\nPaul Christiano\n\n\nI appreciate how often you had to argue against disjunctions here in order to give me the benefit of the doubt.\n\n\n\n \n\n\n*Or* you need to postulate some prior alignment of the AGI that lets you robustly and safely direct it at the imitation problem. Then (a) your alignment schema depends on an unspecified prior alignment schema that already works to make very powerful AGIs do things. And (b) you probably aren't allowing unrestrained intelligence explosions, which probably introduces a big unavoidable delay between the first point where very powerful AGI is technologically possible, and the later point where you can build and safely align the extremely powerful AGI that you're using as the base unit in your imitation scheme.\n\n\n\n\n---\n\n\nThe last I checked with Paul, his scheme bit the bullet on using powerful smart dangerous units to imitate humans. 
Paul's scheme for bypassing this, as I heard and understood it back then, involved a scheme which:\n\n\n* Trained a million smart predictors of sufficient diversity that at least one predictor would happen to end up with the true sole simple goal of predicting faithfully;\n* Compared the predictors against each other, finding cases where their predictions diverged sufficiently;\n* Tested them against reality;\n* Had enough sampling power to perform a sufficient number of tests that, even if a superintelligence cunningly delays the time when it starts trying to slip in bad predictions, we can pause the system and run the experimental test and generate new samples and kick out the unfaithful predictor at that time.\n\n\nThat's a one-millionfold performance penalty, if I understood it correctly.\n\n\nIf that was just a concrete illustration to help me understand what's happening at all, and actually there's supposed to be some obvious less naive more efficient scheme for making use of mostly untrusted superintelligences which doesn't have the giant performance penalty, then I haven't seen it yet.\n\n\n \n\n\n\nPaul Christiano\n\n\nThere are several possible techniques for cutting this performance penalty down, e.g., representing a large ensemble implicitly or using a tricky distillation procedure. I totally agree if you have a millionfold slowdown then you are out of luck.\n\n\nWhen we talked about this, I agreed that \"train a reliable system\" was an important subproblem. Train an implicit ensemble is one possible approach to that problem. At the time of our discussion I listed 3 techniques. Since then I've become quite pessimistic about training an implicit ensemble, and have become more optimistic about verification and transparency. (My thinking about the problem is also generally much sharper.) 
The three approaches I currently consider most live are the ones in “[Techniques for Optimizing Worst-Case Performance](https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99).”\n\n\n(As usual, all of it requires amplification + the induction.)\n\n\n(In the original discussion, I listed “large ensemble” as a possible approach in response to your \"how could you possibly solve this?\" question, not as a necessary ingredient.)\n\n\n\n\n\n---\n\n\n\nI also have trouble seeing how kicking out unfaithful predictors is possible under an unsupervised-learning paradigm that doesn't actually allow hypercomputation.\n\n\nThe problem being: the exact output of a human is always going to be very improbable. Even if the agents can only send each other text messages and no illustrations (which further reduces their collective intelligence and ability to produce textbooks readable in a day), a text message is still an enormous possibility space. If we ask superintelligences to give us their predictions for what the human does next, their predictions cannot possibly actually include a giant lookup table next to all the probable outputs of the human.\n\n\nI think I see how I’d “solve” this problem using hypercomputation and an enormous number of samples: I require the superintelligences to give me their giant lookup tables of probabilities for exact outputs, calculate total distances between probability tables (KL divergence or whatever), and if the distance is great enough, I sample a human and do a Bayesian update. I drop any superintelligences whose prior probability goes low enough. I keep doing this until only agreeing superintelligences remain.\n\n\nUnder this system it looks to me like you need a *lot* of samples to kick out any superintelligences trying to sneak in bad suggestions. 
Let's say that an uncompressed Chinese Room bureaucracy is doing the equivalent of sending around a quadrillion messages, and that the output can be corrupted by an adversarial attack against one billion of those messages such that there's a \\(\\frac{1}{1,000,000}\\) chance of outputting an adversarially chosen alternative to each of those billion messages. Well, the only way a superintelligence can allocate a \\(\\frac{1}{1,000,000}\\) generation probability to those fake messages, is by stealing 0.0001% probability mass from all the other messages that might be output. So each time we check against a real human, we update by \\(\\log\\_{2}\\) \\(\\frac{999,999}{1,000,000}\\) bits, and after taking a million samples we’ve driven down the relative odds of this naughty superintelligence by \\(\\frac{1}{e}\\).\n\n\nWe had to take a million samples to do that, and each of the samples involves a human thinking for one day. And that just got us \\(e\\):1 evidence, so we've got to take a million samples 7 times to get to the point of 1000:1 odds against the superintelligence being honest. 
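The update arithmetic above can be checked with a few lines of Python (my own sketch, not from the post; the 1/1,000,000 figure and the 1000:1 threshold are the post's hypotheticals):

```python
import math

# A dishonest predictor diverts eps = 1/1,000,000 of its probability mass
# to adversarially chosen messages, so on every real human sample it scores
# a factor of (1 - eps) worse than an honest predictor.
eps = 1e-6

# Relative odds of the dishonest predictor after a million samples:
odds_after_1m = (1 - eps) ** 1_000_000
print(odds_after_1m)  # ~ 1/e, i.e. roughly e:1 evidence per million samples

# Samples needed to drive its relative odds down to 1:1000:
n_needed = math.log(1000) / -math.log(1 - eps)
print(n_needed)  # ~ 6.9 million, i.e. "a million samples 7 times"
```

This reproduces the post's 1/e-per-million-samples and seven-million-sample figures.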
And if we have a million superintelligences in the distribution, some of them can make skewed bets à la predicting coinflips whereby a thousand of those will be able to gain 1000:1 advantages over the others, so actually you've got to take 14 million samples to be sure of beating any bad predictor classes that collectively try that scheme.\n\n\nI think last time I talked to Paul about this, Paul was like, \"Sure, we're talking about a gigantic government project that can take millions of samples.\"\n\n\nI'm not sure, I could be wrong about this, but I think you actually need 14 million samples per bad predictor that you want to kick out of the distribution.\n\n\nEven if I’m wrong about that, we need millions of humans who are smart enough and aligned enough that random samples from their number can agglomerate into a smart aligned Chinese Room Bureaucracy.\n\n\nI also don't see a way to do this without comparing the giant lookup tables using hypercomputation, but there I might just be missing a math trick.\n\n\n \n\n\n\nPaul Christiano\n\n\nMy best guess is that this can be done under plausible assumptions with O(1) samples per bad predictor. It's tricky, but we are good at tricky math problems, so it's not very scary relative to the other problems we face.\n\n\n(Unfortunately, I think that a large implicit ensemble is probably stuck anyway, in part because a million predictors isn't enough. 
But I'm not confident about that.)\n\n\n\n \n\n\n(If you're thinking of GANs, then so far as I can tell, the discriminator has to be at least as smart as the generator, and you have to trust the discriminator, and there isn't any simpler problem with respect to how you find a trusted superintelligence within a collective of untrusted ones to act as your discriminator.)\n\n\n(EDIT: Actually, after thinking about this for another five minutes, maybe I do see how to do it with GANs and lower sample complexity.)\n\n\nAn *even larger* issue is that I don't see any obvious way to carry out a scheme like this one *at all* with respect to imperfect imitations. (And the above scheme I thought of with GANs would also just fail.)\n\n\n \n\n\n\nPaul Christiano\n\n\nI think we could probably get over this too, it's another tricky math problem. I think this kind of problem is reliably either impossible, or else *radically* easier than most of the other stuff we are dealing with in alignment.\n\n\n(Though I endorse the overall intuition that large implicit ensembles are doomed.)\n\n\n\n\n\n---\n\n\nI think these arguments are collectively something like a crux. That is, unless I've missed one of my own thought processes in the course of writing this up rapidly, or assumed a shared background assumption that isn't actually shared.\n\n\nLet's say that D is the degree of imperfection allowed by some system of capability amplification, and call D-imperfect imitations D-imitations. Iterated D-imitations of amplified systems of D-imitations will be termed DD-imitations. 
Then I think I'd start to be pragmatically interested in capability amplification as I understood it, if I believed all of the following:\n\n\n* We can, before the world is ended by other unaligned AIs, get AIs powerful enough to learn D-imitations and DD-imitations;\n* D-imitations and DD-imitations robustly preserve the goodness of the people being imitated, despite the imperfection of the imitation;\n* D-imitations agglomerate to sufficient cognitive power to perform a pivotal act in a way that causes the alignment of the components to be effective upon aligning the whole; and imperfect DD-imitation preserves this property;\n* We can find any way of either:\n\t+ Individually trusting one AI that powerful to faithfully perform the task of D-imitation (but then why can't we just use this scheme to align a powerful AGI in the first place?);\n\t+ Finding a scheme for agglomerating mostly untrustworthy powerful intelligences which:\n\t\t- Doesn't require giant lookup tables, doesn't require a GAN with a trusted discriminator unless you can say how to produce the trusted discriminator, and can use actual human samples as fuel to discriminate trustworthiness among untrusted generators *of D-imitations*.\n\t\t- Is extremely sample-efficient (let's say you can clear 100 people who are trustworthy to be part of an amplified-capability system, which already sounds to me like a huge damned ask); *or* you can exhibit to me a social schema which agglomerates mostly untrusted humans into a Chinese Room Bureaucracy that we trust to perform a pivotal task, and a political schema that you trust to do things involving millions of humans, in which case you can take millions of samples but not billions. 
Honestly, I just don't currently believe in AI scenarios in which good and trustworthy governments carry out complicated AI alignment schemas involving millions of people, so if you go down this path we end up with different cruxes; but I would already be pretty impressed if you got all the other cruxes.\n\t\t- Is not too computationally inefficient; more like 20-1 slowdown than 1,000,000-1. Because I don't think you can get the latter degree of advantage over other AGI projects elsewhere in the world. Unless you are postulating massive global perfect surveillance schemes that don't wreck humanity's future, carried out by hyper-competent, hyper-trustworthy great powers with a deep commitment to cosmopolitan value — very unlike the observed characteristics of present great powers, and going unopposed by any other major government. Again, if we go down this branch of the challenge then we are no longer at the original crux.\n\n\nI worry that going down the last two branches of the challenge could create the illusion of a political disagreement, when I have what seem to me like strong technical objections at the previous branches. I would prefer that the more technical cruxes be considered first. If Paul answered all the other technical cruxes and presented a scheme for capability amplification that worked with a moderately utopian world government, I would already have been surprised. I wouldn't actually try it because you cannot get a moderately utopian world government, but Paul would have won many points and I would be interested in trying to refine the scheme further because it had already been refined further than I thought possible. 
On my present view, trying anything like this should either just plain not get started (if you wait to satisfy extreme computational demands and sampling power before proceeding), just plain fail (if you use weak AIs to try to imitate humans), or just plain kill you (if you use a superintelligence).\n\n\n \n\n\n\nPaul Christiano\n\n\nI think that the disagreement is almost entirely technical. I think if we really needed 1M people it wouldn't be a dealbreaker, but that's because of a technical rather than political disagreement (about what those people need to be doing). And I agree that 1,000,000x slowdown is unacceptable (I think even a 10x slowdown is almost totally doomed).\n\n\n\n \n\n\nI restate that these objections seem *to me* to collectively sum up to “This is fundamentally just not a way you can get an aligned powerful AGI unless you already have an aligned superintelligence”, rather than “Some further insights are required for this to work in practice.” But who knows what further insights may really bring? Movement in thoughtspace consists of better understanding, not cleverer tools.\n\n\nI continue to be excited by Paul’s thinking on this subject; I just don’t think it works in the present state.\n\n\n \n\n\n\nPaul Christiano\n\n\nOn this point, we agree. I don’t think anyone is claiming to be done with the alignment problem, the main question is about what directions are most promising for making progress.\n\n\n\n \n\n\nOn my view, this is not an unusual state of mind to be in with respect to alignment research. I can’t point to any MIRI paper that works to align an AGI. Other people seem to think that they ought to currently be in a state of having a pretty much workable scheme for aligning an AGI, which I would consider to be an odd expectation. 
I would think that a sane point of view consisted in having ideas for addressing some problems that created further difficulties that needed to be fixed and didn’t address most other problems at all; a map with what you think are the big unsolved areas clearly marked. Being able to have a thought which *genuinely squarely attacks* *any alignment difficulty at all* despite any other difficulties it implies, is already in my view a large and unusual accomplishment. The insight “trustworthy imitation of human external behavior would avert many default dooms as they manifest in external behavior unlike human behavior” may prove vital at some point. I continue to recommend throwing as much money at Paul as he says he can use, and I wish he said he knew how to use larger amounts of money.\n\n\nThe post [Challenges to Christiano’s capability amplification proposal](https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-05-19T18:24:28Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "7e43abcc60133c982de01a2a4f69aa76", "title": "April 2018 Newsletter", "url": "https://intelligence.org/2018/04/10/april-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* A new paper: “[Categorizing Variants of Goodhart’s Law](https://intelligence.org/2018/03/27/categorizing-goodhart/)”\n* New research write-ups and discussions: [Distributed Cooperation](https://agentfoundations.org/item?id=1777); [Quantilal Control for Finite Markov Decision Processes](https://agentfoundations.org/item?id=1785)\n* New at AI Impacts: [Transmitting Fibers in the Brain: Total Length and Distribution of Lengths](https://aiimpacts.org/transmitting-fibers-in-the-brain-total-length-and-distribution-of-lengths/)\n* Scott Garrabrant, the research lead for MIRI’s agent foundations program, outlines [focus areas and 
2018 predictions](https://intelligence.org/2018/03/31/2018-research-plans/) for MIRI’s research.\n* Scott presented on [logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) at the joint [Applied Theory Workshop](https://research.chicagobooth.edu/appliedtheory/workshop-on-applied-theory) / [Workshop in Economic Theory](https://economics.uchicago.edu/content/workshop-economic-theory-joint-applied-theory-workshop).\n* *Nautilus* [interviews](http://nautil.us/issue/58/self/scary-ai-is-more-fantasia-than-terminator) MIRI Executive Director Nate Soares.\n* From Abram Demski: [An Untrollable Mathematician Illustrated](https://www.lesswrong.com/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated)\n\n\n#### News and links\n\n\n* From FHI’s Jeffrey Ding: “[Deciphering China’s AI Dream](https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf).”\n* OpenAI researcher Paul Christiano writes on [universality and security amplification](https://ai-alignment.com/universality-and-security-amplification-551b314a3bab) and [an unaligned benchmark](https://ai-alignment.com/an-unaligned-benchmark-b49ad992940b). 
Ajeya Cotra summarizes Christiano’s general approach to alignment in [Iterated Distillation and Amplification](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616).\n* Christiano [discusses](https://www.lesswrong.com/posts/XHMCvvhb7zTZcQAgA/argument-intuition-and-recursion) reasoning in cases “where it’s hard to settle disputes with either formal argument or experimentation (or a combination), like policy or futurism.”\n* From Chris Olah and collaborators at Google and CMU: [The Building Blocks of Interpretability](https://distill.pub/2018/building-blocks/).\n* From Nichol, Achiam, and Schulman at OpenAI: [Reptile: A Scalable Meta-Learning Algorithm](https://blog.openai.com/reptile/).\n\n\n\nThe post [April 2018 Newsletter](https://intelligence.org/2018/04/10/april-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-04-10T21:36:32Z", "authors": ["Rob Bensinger"], "summaries": ["Lots of links to things MIRI has done, and some links to other people's work as well."], "venue": "MIRI Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #2", "newsletter_category": "News"} -{"id": "2b2e82feb8bc9e7187a1f16aa6ad1466", "title": "2018 research plans and predictions", "url": "https://intelligence.org/2018/03/31/2018-research-plans/", "source": "miri", "source_type": "blog", "text": "**Update Nov. 
23:** This post was edited to reflect Scott’s terminology change from “naturalized world-models” to “[embedded world-models](https://intelligence.org/2018/11/02/embedded-models/).” For a full introduction to these four research problems, see Scott Garrabrant and Abram Demski’s “[Embedded Agency](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version).”\n\n\n\n\n---\n\n\nScott Garrabrant is taking over Nate Soares’ job of making predictions about how much progress we’ll make in different research areas this year. Scott divides MIRI’s alignment research into five categories:\n\n\n\n\n---\n\n\n**embedded world-models** — Problems related to modeling large, complex physical environments that lack a sharp agent/environment boundary. Central examples of problems in this category include logical uncertainty, naturalized induction, multi-level world models, and ontological crises.\n\n\nIntroductory resources: “[Formalizing Two Problems of Realistic World-Models](https://intelligence.org/files/RealisticWorldModels.pdf),” “[Questions of Reasoning Under Logical Uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf),” “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/),” “[Reflective Oracles](https://intelligence.org/2015/04/28/new-papers-reflective/)”\n\n\nExamples of recent work: “[Hyperreal Brouwer](https://agentfoundations.org/item?id=1671),” “[An Untrollable Mathematician](https://www.lesswrong.com/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated),” “[Further Progress on a Bayesian Version of Logical Uncertainty](https://agentfoundations.org/item?id=1760)”\n\n\n\n\n---\n\n\n**decision theory** — Problems related to modeling the consequences of different (actual and counterfactual) decision outputs, so that the decision-maker can choose the output with the best consequences. 
Central problems include counterfactuals, updatelessness, coordination, extortion, and reflective stability.\n\n\nIntroductory resources: “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” “[Decisions Are For Making Bad Outcomes Inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/),” “[Functional Decision Theory](https://intelligence.org/2017/10/22/fdt/)”\n\n\nExamples of recent work: “[Cooperative Oracles](https://agentfoundations.org/item?id=1468),” “Smoking Lesion Steelman” ([1](https://agentfoundations.org/item?id=1525), [2](https://agentfoundations.org/item?id=1662)), “[The Happy Dance Problem](https://agentfoundations.org/item?id=1713),” “[Reflective Oracles as a Solution to the Converse Lawvere Problem](https://agentfoundations.org/item?id=1712)”\n\n\n\n\n---\n\n\n**robust delegation** — Problems related to building highly capable agents that can be trusted to carry out some task on one’s behalf. 
Central problems include corrigibility, value learning, informed oversight, and Vingean reflection.\n\n\nIntroductory resources: “[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf),” “[Corrigibility](https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124/10136),” “[Problem of Fully Updated Deference](https://arbital.com/p/updated_deference/),” “[Vingean Reflection](https://intelligence.org/files/VingeanReflection.pdf),” “[Using Machine Learning to Address AI Risk](https://intelligence.org/2017/02/28/using-machine-learning/)”\n\n\nExamples of recent work: “[Categorizing Variants of Goodhart’s Law](https://intelligence.org/2018/03/27/categorizing-goodhart/),” “[Stable Pointers to Value](https://agentfoundations.org/item?id=1622)”\n\n\n\n\n---\n\n\n**subsystem alignment** — Problems related to ensuring that an AI system’s subsystems are not working at cross purposes, and in particular that the system avoids creating internal subprocesses that optimize for unintended goals. Central problems include benign induction.\n\n\nIntroductory resources: “[What Does the Universal Prior Actually Look Like?](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/)”, “[Optimization Daemons](https://arbital.com/p/daemons/),” “[Modeling Distant Superintelligences](https://arbital.com/p/distant_SIs/)”\n\n\nExamples of recent work: “[Some Problems with Making Induction Benign](https://agentfoundations.org/item?id=1263)”\n\n\n\n\n---\n\n\n**other** — Alignment research that doesn’t fall into the above categories. 
If we make progress on the open problems described in “[Alignment for Advanced ML Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/),” and the progress is less connected to our [agent foundations](https://intelligence.org/technical-agenda/) work and more ML-oriented, then we’ll likely classify it here.\n\n\n\n\n---\n\n\n\nThe problems we previously categorized as “logical uncertainty” and “naturalized induction” are now called “embedded world-models”; most of the problems we’re working on in three other categories (“Vingean reflection,” “error tolerance,” and “value learning”) are grouped together under “robust delegation”; and we’ve introduced two new categories, “subsystem alignment” and “other.”\n\n\nScott’s predictions for February through December 2018 follow. 1 means “limited” progress, 2 “weak-to-modest” progress, 3 “modest,” 4 “modest-to-strong,” and 5 “sizable.” To help contextualize Scott’s numbers, we’ve also translated Nate’s 2015-2017 predictions (and Nate and Scott’s evaluations of our progress for those years) into the new nomenclature.\n\n\n\n\n---\n\n\n\n> **embedded world-models**:\n> \n> \n> * 2015 progress: 5. — Predicted: 3.\n> * 2016 progress: 5. — Predicted: 5.\n> * 2017 progress: 2. — Predicted: 2.\n> * 2018 progress prediction: **3** (modest).\n> \n> \n> **decision theory**:\n> \n> \n> * 2015 progress: 3. — Predicted: 3.\n> * 2016 progress: 3. — Predicted: 3.\n> * 2017 progress: 3. — Predicted: 3.\n> * 2018 progress prediction: **3** (modest).\n> \n> \n> **robust delegation**:\n> \n> \n> * 2015 progress: 3. — Predicted: 3.\n> * 2016 progress: 4. — Predicted: 3.\n> * 2017 progress: 4. 
— Predicted: 1.\n> * 2018 progress prediction: **2** (weak-to-modest).\n> \n> \n> **subsystem alignment** (*new category*):\n> \n> \n> * 2018 progress prediction: **2** (weak-to-modest).\n> \n> \n> **other** (*new category*):\n> \n> \n> * 2018 progress prediction: **2** (weak-to-modest).\n> \n> \n> \n\n\n\n\n---\n\n\nThese predictions are highly uncertain, but should give a rough sense of how we’re planning to allocate researcher attention over the coming year, and how optimistic we are about the current avenues we’re pursuing.\n\n\nNote that the new bins we’re using may give a wrong impression of our prediction accuracy. E.g., we didn’t expect much progress on Vingean reflection in 2016, whereas we did expect significant progress on value learning and error tolerance. The opposite occurred, which should count as multiple prediction failures. Because the failures were in opposite directions, however, and because we’re now grouping most of Vingean reflection, value learning, and error tolerance under a single category (“robust delegation”), our 2016 predictions look more accurate in the above breakdown than they actually were.\n\n\nUsing our previous categories, our expectations and evaluations for 2015-2018 would be:\n\n\n\n\n---\n\n\n\n\n| | Logical uncertainty + naturalized induction | Decision theory | Vingean Reflection | Error Tolerance | Value Specification |\n| --- | --- | --- | --- | --- | --- |\n| **Progress 2015-2017** | 5, 5, 2 | 3, 3, 3 | 3, 4, 4 | 1, 1, 2 | 1, 2, 1 |\n| **Expectations 2015-2018** | 3, 5, 2, **3** | 3, 3, 3, **3** | 3, 1, 1, **2** | 3, 3, 1, **2** | 1, 3, 1, **1** |\n\n\n\n\n---\n\n\nIn general, these predictions are based on evaluating the importance of the most important results from a given year — one large result will yield a higher number than many small results. 
The ratings and predictions take into account research that we haven’t written up yet, though they exclude research that we don’t expect to make public in the near future.\n\n\nThe post [2018 research plans and predictions](https://intelligence.org/2018/03/31/2018-research-plans/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-04-01T03:38:52Z", "authors": ["Rob Bensinger"], "summaries": ["Scott and Nate from MIRI score their predictions for research output in 2017 and make predictions for research output in 2018."], "venue": "MIRI Blog", "opinion": "I don't know enough about MIRI to have any idea what the predictions mean, but I'd still recommend reading it if you're somewhat familiar with MIRI's technical agenda to get a bird's-eye view of what they have been focusing on for the last year.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "A basic understanding of MIRI's technical agenda (eg. what they mean by naturalized agents, decision theory, Vingean reflection, and so on).", "converted_with": "python", "newsletter_number": "AN #1", "newsletter_category": "Agent foundations"} -{"id": "b6e6adc2c8a41a528604c691fcaffa1a", "title": "New paper: “Categorizing variants of Goodhart’s Law”", "url": "https://intelligence.org/2018/03/27/categorizing-goodhart/", "source": "miri", "source_type": "blog", "text": "[![Categorizing Variants of Goodhart's Law](https://intelligence.org/files/variants-goodharts-law.jpg)](https://arxiv.org/abs/1803.04585)Goodhart’s Law states that “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” However, this is not a single phenomenon. 
In [Goodhart Taxonomy](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy), I proposed that there are (at least) four different mechanisms through which proxy measures break when you optimize for them: Regressional, Extremal, Causal, and Adversarial.\n\n\nDavid Manheim has now helped write up my taxonomy as a paper going into more detail on these mechanisms: “[**Categorizing variants of Goodhart’s Law**](https://arxiv.org/abs/1803.04585).” From the conclusion:\n\n\n\n> This paper represents an attempt to categorize a class of simple statistical misalignments that occur both in any algorithmic system used for optimization, and in many human systems that rely on metrics for optimization. The dynamics highlighted are hopefully useful to explain many situations of interest in policy design, in machine learning, and in specific questions about AI alignment.\n> \n> \n> In policy, these dynamics are commonly encountered but too-rarely discussed clearly. In machine learning, these errors include extremal Goodhart effects due to using limited data and choosing overly parsimonious models, errors that occur due to myopic consideration of goals, and mistakes that occur when ignoring causality in a system. Finally, in AI alignment, these issues are fundamental to both aligning systems towards a goal, and assuring that the system’s metrics do not have perverse effects once the system begins optimizing for them.\n> \n> \n\n\nLet *V* refer to the true goal, while *U* refers to a proxy for that goal which was observed to correlate with *V* and which is being optimized in some way. 
Then the four subtypes of Goodhart’s Law are as follows:\n\n\n\n\n---\n\n\n**Regressional Goodhart** — When selecting for a proxy measure, you select not only for the true goal, but also for the difference between the proxy and the goal.\n\n\n* Model: When *U* is equal to *V* + *X*, where *X* is some noise, a point with a large *U* value will likely have a large *V* value, but also a large *X* value. Thus, when *U* is large, you can expect *V* to be predictably smaller than *U*.\n* *Example: Height is correlated with basketball ability, and does actually directly help, but the best player is only 6’3″, and a random 7′ person in their 20s would probably not be as good.*\n\n\n\n\n---\n\n\n**Extremal Goodhart** — Worlds in which the proxy takes an extreme value may be very different from the ordinary worlds in which the correlation between the proxy and the goal was observed.\n\n\n* Model: Patterns tend to break at simple joints. One simple subset of worlds is those worlds in which *U* is very large. Thus, a strong correlation between *U* and *V* observed for naturally occurring *U* values may not transfer to worlds in which *U* is very large. Further, since there may be relatively few naturally occurring worlds in which *U* is very large, extremely large *U* may coincide with small *V* values without breaking the statistical correlation.\n* *Example: The tallest person on record, Robert Wadlow, was 8’11” (2.72m). He grew to that height because of a pituitary disorder; he would have struggled to play basketball because he “required leg braces to walk and had little feeling in his legs and feet.”*\n\n\n\n\n---\n\n\n**Causal Goodhart** — When there is a non-causal correlation between the proxy and the goal, intervening on the proxy may fail to intervene on the goal.\n\n\n* Model: If *V* causes *U* (or if *V* and *U* are both caused by some third thing), then a correlation between *V* and *U* may be observed. 
However, when you intervene to increase *U* through some mechanism that does not involve *V*, you will fail to also increase *V*.\n* *Example: Someone who wishes to be taller might observe that height is correlated with basketball skill and decide to start practicing basketball.*\n\n\n\n\n---\n\n\n**Adversarial Goodhart** — When you optimize for a proxy, you provide an incentive for adversaries to correlate their goal with your proxy, thus destroying the correlation with your goal.\n\n\n* Model: Consider an agent *A* with some different goal *W*. Since they depend on common resources, *W* and *V* are naturally opposed. If you optimize *U* as a proxy for *V*, and *A* knows this, *A* is incentivized to make large *U* values coincide with large *W* values, thus stopping them from coinciding with large *V* values.\n* *Example: Aspiring NBA players might just lie about their height.*\n\n\n\n\n---\n\n\nFor more on this topic, see Eliezer Yudkowsky’s write-up, [Goodhart’s Curse](https://arbital.com/p/goodharts_curse/). 
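The regressional subtype lends itself to a quick numerical sketch. The following is a minimal illustration of the *U* = *V* + *X* model above (my construction with made-up standard-normal draws, not from the paper): optimizing the proxy selects for the noise as well as for the goal, so the realized *V* falls predictably short of the optimized *U*.

```python
import random

random.seed(0)

# Regressional Goodhart sketch: the true goal is V; the proxy is
# U = V + X, where X is independent noise.  Picking the point with
# the largest U also picks for large X, so the V you actually get
# is predictably smaller than the U you optimized.
population = [(random.gauss(0, 1), random.gauss(0, 1))
              for _ in range(100_000)]          # (V, X) pairs

best_u, best_v = max((v + x, v) for v, x in population)

print(f"U at the optimum: {best_u:.2f}")
print(f"V at the optimum: {best_v:.2f}")  # roughly U/2 in expectation here
```

Under these symmetric-noise assumptions, the conditional expectation of *V* given *U* = *u* is *u*/2, which is exactly the "predictably smaller" gap the model describes.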
\n\n\nThe post [New paper: “Categorizing variants of Goodhart’s Law”](https://intelligence.org/2018/03/27/categorizing-goodhart/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-03-28T02:08:17Z", "authors": ["Scott Garrabrant"], "summaries": []}
-{"id": "7bb95b0ac4a9dbccae84496cde512423", "title": "March 2018 Newsletter", "url": "https://intelligence.org/2018/03/25/march-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* New research write-ups and discussions: [Knowledge is Freedom](https://www.lesswrong.com/posts/b3Bt9Cz4hEtR26ANX/knowledge-is-freedom); [Stable Pointers to Value II: Environmental Goals](https://agentfoundations.org/item?id=1762); [Toward a New Technical Explanation of Technical Explanation](https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation); [Robustness to Scale](https://www.lesswrong.com/posts/bBdfbWfWxHN9Chjcq/robustness-to-scale)\n* New at AI Impacts: [Likelihood of Discontinuous Progress Around the Development of AGI](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/)\n* The transcript is up for Sam Harris and Eliezer Yudkowsky’s [podcast conversation](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/).\n* [Andrew Critch](http://acritch.com), previously on leave from MIRI to help launch the [Center for Human-Compatible AI](http://humancompatible.ai/) and the [Berkeley Existential Risk Initiative](http://existence.org), has accepted a position as CHAI’s first research scientist. Critch will continue to work with and advise the MIRI team from his new academic home at UC Berkeley. 
Our congratulations to Critch!\n* CFAR and MIRI are running a free [AI Summer Fellows Program](http://rationality.org/workshops/apply-aisfp) June 27 – July 14; [applications are open](https://docs.google.com/forms/d/e/1FAIpQLSdasIyrDK5cQ8xG-vWCqL2usQOFMxYuj1HM1VLjQe08D7efYg/viewform) until April 20.\n\n\n#### News and links\n\n\n* OpenAI co-founder Elon Musk [is leaving OpenAI’s Board](https://blog.openai.com/openai-supporters/).\n* OpenAI has a new paper out on [interpretable ML through teaching](https://blog.openai.com/interpretable-machine-learning-through-teaching/).\n* From Paul Christiano: [Surveil Things, Not People](https://sideways-view.com/2018/02/02/surveil-things-not-people/); [Arguments About Fast Takeoff](https://www.lesswrong.com/posts/AfGmsjGPXN97kNp57/arguments-about-fast-takeoff).\n* Paul is offering a total of $120,000 to any independent researchers who can come up with promising [alignment research projects](https://www.lesswrong.com/posts/DbPJGNS79qQfZcDm7/funding-for-ai-alignment-research) to pursue.\n* The Centre for the Study of Existential Risk’s *Civilization V* mod inspires [a good discussion of the AI alignment problem](https://www.rockpapershotgun.com/2018/02/08/how-the-centre-for-the-study-of-existential-risks-civ-v-mod-should-make-us-fear-superintelligent-ai/).\n\n\n\nThe post [March 2018 Newsletter](https://intelligence.org/2018/03/25/march-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-03-26T00:41:56Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "77aa832a21a1bab80968d1a934fde3f0", "title": "Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”", "url": "https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/", "source": "miri", "source_type": "blog", "text": "![Waking Up with Sam Harris](https://intelligence.org/wp-content/uploads/2018/02/wakingup.png)\nMIRI senior researcher Eliezer Yudkowsky was recently 
invited to be a guest on Sam Harris’ “[Waking Up](https://samharris.org/podcast/)” podcast. Sam is a neuroscientist and popular author who writes on topics related to philosophy, religion, and public discourse.\n\n\nThe following is a complete transcript of Sam and Eliezer’s conversation, **[AI: Racing Toward the Brink](https://samharris.org/podcasts/116-ai-racing-toward-brink/)**.\n\n\n#### Contents\n\n\n* 1. **[Intelligence and generality](https://intelligence.org/feed/?paged=19#1)** — [0:05:26](https://overcast.fm/+Ic2hwsH2U/5:26)\n* 2. **[Orthogonal capabilities and goals in AI](https://intelligence.org/feed/?paged=19#2)** — [0:25:21](https://overcast.fm/+Ic2hwsH2U/25:21)\n* 3. **[Cognitive uncontainability and instrumental convergence](https://intelligence.org/feed/?paged=19#3)** — [0:53:39](https://overcast.fm/+Ic2hwsH2U/53:39)\n* 4. **[The AI alignment problem](https://intelligence.org/feed/?paged=19#4)** — [1:09:09](https://overcast.fm/+Ic2hwsH2U/1:09:09)\n* 5. **[No fire alarm for AGI](https://intelligence.org/feed/?paged=19#5)** — [1:21:40](https://overcast.fm/+Ic2hwsH2U/1:21:40)\n* 6. **[Accidental AI, mindcrime, and MIRI](https://intelligence.org/feed/?paged=19#6)** — [1:34:30](https://overcast.fm/+Ic2hwsH2U/1:34:30)\n* 7. **[Inadequate equilibria](https://intelligence.org/feed/?paged=19#7)** — [1:44:40](https://overcast.fm/+Ic2hwsH2U/1:44:40)\n* 8. **[Rapid capability gain in AGI](https://intelligence.org/feed/?paged=19#8)** — [1:59:02](https://overcast.fm/+Ic2hwsH2U/1:59:02)\n\n\n\n\n \n\n\n\n### 1. Intelligence and generality ([0:05:26](https://overcast.fm/+Ic2hwsH2U/5:26))\n\n\n\n\n---\n\n\n**Sam Harris:** I am here with Eliezer Yudkowsky. Eliezer, thanks for coming on the podcast.\n\n\n\n**Eliezer Yudkowsky:** You’re quite welcome. It’s an honor to be here.\n\n\n\n**Sam:** You have been a much requested guest over the years. You have quite the cult following, for obvious reasons. 
For those who are not familiar with your work, they will understand the reasons once we get into talking about things. But you’ve also been very present online as a blogger. I don’t know if you’re still blogging a lot, but let’s just summarize your background for a bit and then tell people what you have been doing intellectually for the last twenty years or so.\n\n\n\n**Eliezer:** I would describe myself as a decision theorist. A lot of other people would say that I’m in artificial intelligence, and in particular in the theory of how to make sufficiently advanced artificial intelligences that do a particular thing and don’t destroy the world as a side-effect. I would call that “AI alignment,” following Stuart Russell.\n\n\nOther people would call that “AI control,” or “AI safety,” or “AI risk,” none of which are terms that I really like.\n\n\nI also have an important sideline in the art of human rationality: the way of achieving [the map that reflects the territory](http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/) and figuring out how to navigate reality to where you want it to go, from a probability theory / decision theory / cognitive biases perspective. I wrote two or three years of blog posts, one a day, on that, and it was collected into a book called [*Rationality: From AI to Zombies*](https://intelligence.org/rationality-ai-zombies/).\n\n\n\n**Sam:** Which I’ve read, and which is really worth reading. You have a very clear and aphoristic way of writing; it’s really quite wonderful. I highly recommend that book.\n\n\n\n**Eliezer:** Thank you, thank you.\n\n\n\n**Sam:** Your background is unconventional. For instance, you did not go to high school, correct? Let alone college or graduate school. Summarize that for us.\n\n\n\n**Eliezer:** The system didn’t fit me that well, and I’m good at self-teaching. 
I guess when I started out I thought I was going to go into something like evolutionary psychology or possibly neuroscience, and then I discovered probability theory, statistics, decision theory, and came to specialize in that more and more over the years.\n\n\n\n**Sam:** How did you not wind up going to high school? What was that decision like?\n\n\n\n**Eliezer:** Sort of like a mental crash around the time I hit puberty—or like a physical crash, even. I just did not have the stamina to make it through a whole day of classes at the time. (*laughs*) I’m not sure how well I’d do trying to go to high school now, honestly. But it was clear that I could self-teach, so that’s what I did.\n\n\n\n**Sam:** And where did you grow up?\n\n\n\n**Eliezer:** Chicago, Illinois.\n\n\n\n**Sam:** Let’s fast forward to the center of the bull’s eye for your intellectual life here. You have a new book out, which we’ll talk about second. Your new book is [*Inadequate Equilibria: Where and How Civilizations Get Stuck*](https://equilibriabook.com/). Unfortunately, I’ve only read half of that, which I’m also enjoying. I’ve certainly read enough to start a conversation on that. But we should start with artificial intelligence, because it’s a topic that I’ve touched a bunch on in the podcast which you have strong opinions about, and it’s really how we came together. You and I first met at [that conference in Puerto Rico](https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/), which was the first of these AI safety / alignment discussions that I was aware of. I’m sure there have been others, but that was a pretty interesting gathering.\n\n\n\nSo let’s talk about AI and the possible problem with where we’re headed, and the near-term problem: many people in the field and at the periphery of the field don’t seem to take the problem (as we conceive it) seriously. Let’s just start with the basic picture and define some terms. 
I suppose we should define “intelligence” first, and then jump into the differences between strong and weak or general versus narrow AI. Do you want to start us off on that?\n\n\n\n**Eliezer:** Sure. Preamble disclaimer, though: In the field in general, not everyone you ask would give you the same definition of intelligence. A lot of times in cases like those it’s good to sort of go back to observational basics. We know that in a certain way, human beings seem a lot more competent than chimpanzees, which seems to be a similar dimension to the one where chimpanzees are more competent than mice, or that mice are more competent than spiders. People have tried various theories about what this dimension is, they’ve tried various definitions of it. But if you went back a few centuries and asked somebody to define “fire,” the less wise ones would say: “Ah, fire is the release of phlogiston. Fire is one of the four elements.” And the truly wise ones would say, “Well, fire is the sort of orangey bright hot stuff that comes out of wood and spreads along wood.” They would tell you what it *looked like*, and put that prior to their theories of what it *was*.\n\n\nSo what this mysterious thing *looks like* is that humans can build space shuttles and go to the Moon, and mice can’t, and we think it has something to do with our brains.\n\n\n\n**Sam:** Yeah. I think we can make it more abstract than that. Tell me if you think this is not generic enough to be accepted by most people in the field: Whatever intelligence may be in specific contexts, generally speaking it’s the ability to meet goals, perhaps across a diverse range of environments. We might want to add that it’s at least implicit in the “intelligence” that interests us that it means an ability to do this flexibly, rather than by rote following the same strategy again and again blindly. 
Does that seem like a reasonable starting point?\n\n\n\n**Eliezer:** I think that that would get fairly widespread agreement, and it matches up well with some of the things that are in AI textbooks.\n\n\nIf I’m allowed to take it a bit further and begin injecting my own viewpoint into it, I would refine it and say that by “achieve goals” we mean something like “squeezing the [measure](https://en.wikipedia.org/wiki/Probability_measure) of possible futures higher in your [preference](https://en.wikipedia.org/wiki/Preference_(economics)) ordering.” If we took all the possible outcomes, and we ranked them from the ones you like least to the ones you like most, then as you achieve your goals, you’re sort of squeezing the outcomes higher in your preference ordering. You’re narrowing down what the outcome would be to be something more like what you want, even though you might not be able to narrow it down very exactly.\n\n\nFlexibility. Generality. Humans are much more [domain](https://arbital.com/p/general_intelligence/)[–](https://arbital.com/p/general_intelligence/)[general](https://arbital.com/p/general_intelligence/) than mice. Bees build hives; beavers build dams; a human will look over both of them and envision a honeycomb-structured dam. We are able to operate even on the Moon, which is very unlike the environment where we evolved.\n\n\nIn fact, our only competitor in terms of general optimization—where “optimization” is that sort of narrowing of the future that I talked about—is natural selection. Natural selection built beavers. It built bees. It sort of implicitly built the spider’s web, in the course of building spiders.\n\n\nWe as humans have this similar very broad range to handle this huge variety of problems. And the key to that is our ability to learn things that natural selection did not preprogram us with; so learning is the key to generality. (I expect that not many people in AI would disagree with that part either.)\n\n\n\n**Sam:** Right. 
So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like [making paperclips](https://arbital.com/p/paperclip_maximizer/)—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips.\n\n\nThis moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no [natural goals](https://intelligence.org/2017/04/12/ensuring/) that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals.\n\n\nLet’s talk about the frontiers of strangeness in AI as we move from here. Again, though, I think we have a couple more definitions we should probably put in play here, differentiating strong and weak or general and narrow intelligence.\n\n\n\n**Eliezer:** Well, to differentiate “general” and “narrow” I would say that this is on the one hand theoretically a spectrum, and on the other hand, there seems to have been a very sharp jump in generality between chimpanzees and humans.\n\n\nSo, breadth of domain driven by breadth of learning—DeepMind, for example, recently built AlphaGo, and I lost some money betting that AlphaGo would not defeat the human champion, which it promptly did. Then a successor to that was AlphaZero. AlphaGo was specialized on Go; it could learn to play Go better than its starting point for playing Go, but it couldn’t learn to do anything else. 
Then they [simplif](https://intelligence.org/2017/10/20/alphago/)[ied](https://intelligence.org/2017/10/20/alphago/) [the architecture for AlphaGo](https://intelligence.org/2017/10/20/alphago/). They figured out ways to do all the things it was doing in more and more general ways. They discarded the opening book—all the human experience of Go that was built into it. They were able to discard all of these programmatic special features that detected features of the Go board. They figured out how to do that in simpler ways, and because they figured out how to do it in simpler ways, they were able to generalize to AlphaZero, which learned how to play *chess* using the same architecture. They took a single AI and got it to learn Go, and then reran it and made it learn chess. Now that’s not *human* general, but it’s a step forward in generality of the sort that we’re talking about.\n\n\n\n**Sam:** Am I right in thinking that that’s a pretty enormous breakthrough? I mean, there’s two things here. There’s the step to that degree of generality, but there’s also the fact that they built a Go engine—I forget if it was Go or chess or both—which basically surpassed all of the specialized AIs on those games over the course of a day. Isn’t the chess engine of AlphaZero better than any dedicated chess computer ever, and didn’t it achieve that with astonishing speed?\n\n\n\n**Eliezer:** Well, there was actually some amount of debate afterwards whether or not the version of the chess engine that it was tested against was truly optimal. But even to the extent that it was in that narrow range of the best existing chess engines, as Max Tegmark put it, the real story wasn’t in how AlphaGo beat human Go players. It’s in how AlphaZero beat human Go system programmers and human chess system programmers. 
People had put years and years of effort into accreting all of the special-purpose code that would play chess well and efficiently, and then AlphaZero blew up to (and possibly past) that point in a day. And if it hasn’t already gone past it, well, it would be past it by now if DeepMind kept working it. Although they’ve now basically declared victory and shut down that project, as I understand it.\n\n\n\n**Sam:** So talk about the distinction between general and narrow intelligence a little bit more. We have this feature of our minds, most conspicuously, where we’re general problem-solvers. We can learn new things and our learning in one area doesn’t require a fundamental rewriting of our code. Our knowledge in one area isn’t so brittle as to be degraded by our acquiring knowledge in some new area, or at least this is not a general problem which erodes our understanding again and again. And we don’t yet have computers that can do this, but we’re seeing the signs of moving in that direction. And so it’s often imagined that there is a kind of near-term goal—which has always struck me as a mirage—of so-called “human-level” general AI.\n\n\nI don’t see how that phrase will ever mean much of anything, given that all of the narrow AI we’ve built thus far is *superhuman* within the domain of its applications. The calculator in my phone is superhuman for arithmetic. Any general AI that also has my phone’s ability to calculate will be superhuman for arithmetic. But we must presume it will be superhuman for all of the dozens or hundreds of specific human talents we’ve put into it, whether it’s facial recognition or just memory, unless we decide to consciously degrade it. Access to the world’s data will be superhuman unless we isolate it from data. 
Do you see this notion of human-level AI as a landmark on the timeline of our development, or is it just never going to be reached?\n\n\n\n**Eliezer:** I think that a lot of people in the field would agree that human-level AI, defined as “literally at the human level, neither above nor below, across a wide range of competencies,” is a straw target, is an impossible mirage. Right now it seems like AI is clearly dumber and less general than us—or rather that if we’re put into a real-world, lots-of-things-going-on context that places demands on generality, then AIs are not really in the game yet. Humans are clearly way ahead. And more controversially, I would say that we can imagine a state where the AI is clearly way ahead across every kind of cognitive competency, barring some very narrow ones that aren’t deeply influential of the others.\n\n\nLike, maybe chimpanzees are better at using a stick to draw ants from an ant hive and eat them than humans are. (Though no humans have practiced that to world championship level.) But there’s a sort of general factor of, “How good are you at it when reality throws you a complicated problem?” At this, chimpanzees are clearly not better than humans. Humans are clearly better than chimps, even if you can manage to narrow down one thing the chimp is better at. The thing the chimp is better at doesn’t play a big role in our global economy. It’s not an input that feeds into lots of other things.\n\n\nThere are some people who say this is not possible—I think they’re wrong—but it seems to me that it is perfectly coherent to imagine an AI that is better at everything (or almost everything) than we are, such that if it was building an economy with lots of inputs, humans would have around the same level of input into that economy as the chimpanzees have into ours.\n\n\n\n**Sam:** Yeah. So what you’re gesturing at here is a continuum of intelligence that I think most people never think about. 
And because they don’t think about it, they have a default doubt that it exists. This is a point I know you’ve made in your writing, and I’m sure it’s a point that Nick Bostrom made somewhere in his book [*Superintelligence*](http://amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/). It’s this idea that there’s a huge blank space on the map past the most well-advertised exemplars of human brilliance, where we don’t imagine what it would be like to be five times smarter than the smartest person we could name, and we don’t even know what that would consist in, because if chimps could be given to wonder what it would be like to be five times smarter than the smartest chimp, they’re not going to represent for themselves all of the things that we’re doing that they can’t even dimly conceive.\n\n\nThere’s a kind of disjunction that comes with *more*. There’s a phrase used in military contexts. The quote is variously attributed to Stalin and Napoleon and I think Clausewitz and like a half a dozen people who have claimed this quote. The quote is, “Sometimes quantity has a quality all its own.” As you ramp up in intelligence, whatever it is at the level of information processing, spaces of inquiry and ideation and experience begin to open up, and we can’t necessarily predict what they would be from where we sit.\n\n\nHow do you think about this continuum of intelligence beyond what we currently know, in light of what we’re talking about?\n\n\n\n**Eliezer:** Well, the unknowable is a concept you have to be very careful with. The thing you can’t figure out in the first 30 seconds of thinking about it—sometimes you can figure it out if you think for another five minutes. 
So in particular I think that there’s [a certain narrow kind of unpredictability](https://arbital.com/p/Vingean_uncertainty/) which does seem to be plausibly in some sense essential, which is that for AlphaGo to play better Go than the best human Go players, it must be the case that the best human Go players cannot predict exactly where on the Go board AlphaGo will play. If they could predict exactly where AlphaGo would play, AlphaGo would be no smarter than them.\n\n\nOn the other hand, AlphaGo’s programmers and the people who knew what AlphaGo’s programmers were trying to do, or even just the people who watched AlphaGo play, could say, “Well, I think the system is going to play such that it will win at the end of the game.” Even if they couldn’t predict exactly where it would move on the board.\n\n\nSimilarly, there’s a (not short, or not necessarily slam-dunk, or not immediately obvious) chain of reasoning which says that it *is* okay for us to reason about aligned (or even unaligned) artificial general intelligences of *sufficient* power as if they’re trying to do something, but we don’t necessarily know what. From our perspective that still has consequences, even though we can’t predict in advance exactly how they’re going to do it.\n\n\n\n\n---\n\n\n\n\n### 2. Orthogonal capabilities and goals in AI ([0:25:21](https://overcast.fm/+Ic2hwsH2U/25:21))\n\n\n\n\n---\n\n\n**Sam:** I think we should define this notion of alignment. What do you mean by “alignment,” as in the alignment problem?\n\n\n\n**Eliezer:** It’s a big problem. And it does have some moral and ethical aspects, which are not as important as the technical aspects—or pardon me, they’re not as *difficult* as the technical aspects. They couldn’t exactly be less important.\n\n\nBut broadly speaking, it’s an AI where you can say what it’s trying to do. 
There are narrow conceptions of alignment, where you’re trying to get it to do something like cure Alzheimer’s disease without destroying the rest of the world. And there’s much more ambitious notions of alignment, where you’re trying to get it to do the right thing and achieve a happy intergalactic civilization.\n\n\nBut both the narrow and the ambitious alignment have in common that you’re trying to have the AI do that thing rather than making a lot of paperclips.\n\n\n\n**Sam:** Right. For those who have not followed this conversation before, we should cash out this reference to “paperclips” which I made at the opening. Does this thought experiment originate with Bostrom, or did he take it from somebody else?\n\n\n\n**Eliezer:** As far as I know, it’s me.\n\n\n\n**Sam:** Oh, it’s you, okay.\n\n\n\n**Eliezer:** It could still be Bostrom. I asked somebody, “Do you remember who it was?” and they searched through the archives of the mailing list where this idea plausibly originated and if it originated there, then I was the first one to say “paperclips.”\n\n\n\n**Sam:** All right, then by all means please summarize this thought experiment for us.\n\n\n\n**Eliezer:** Well, the original thing was somebody expressing a sentiment along the lines of, “Who are we to constrain the path of things smarter than us? They will create something in the future; we don’t know what it will be, but it will be very worthwhile. We shouldn’t stand in the way of that.”\n\n\nThe sentiments behind this are something that I have a great deal of sympathy for. I think the model of the world is wrong. 
I think they’re factually wrong about what happens when you take a random AI and make it much bigger.\n\n\nIn particular, I said, “The thing I’m worried about is that it’s going to end up with a randomly rolled [utility function](https://arbital.com/p/utility_function/) whose maximum happens to be a particular kind of tiny molecular shape that looks like a paperclip.” And that was the original paperclip maximizer scenario.\n\n\nIt got a little bit distorted in being whispered on, into the notion of: “Somebody builds a paperclip factory and the AI in charge of the paperclip factory takes over the universe and turns it all into paperclips.” There was a lovely [online game](https://www.wired.com/story/the-way-the-world-ends-not-with-a-bang-but-a-paperclip/) about it, even. But this still sort of cuts against a couple of key points.\n\n\nOne is: the problem isn’t that paperclip factory AIs spontaneously wake up. Wherever the first artificial general intelligence is from, it’s going to be in a research lab specifically dedicated to doing it, for the same reason that the first airplane didn’t spontaneously assemble in a junk heap.\n\n\nAnd the people who are doing this are not dumb enough to tell their AI to make paperclips, or make money, or end all war. These are Hollywood movie plots that the script writers do because they need a story conflict and the story conflict requires that somebody be stupid. The people at Google are not dumb enough to build an AI and tell it to make paperclips.\n\n\nThe problem I’m worried about is that it’s *technically difficult* to get the AI to have a particular goal set and keep that goal set and implement that goal set in the real world, and so what it does *instead* is [something random](https://arbital.com/p/random_utility_function/)—for example, making paperclips. 
Where “paperclips” are meant to stand in for “something that is worthless even from a very [cosmopolitan](https://arbital.com/p/value_cosmopolitan/) perspective.” Even if we’re trying to take a very embracing view of the nice possibilities and accept that there may be things that we wouldn’t even understand, that if we did understand them we would comprehend to be of very high value, paperclips are not one of those things. No matter how long you stare at a paperclip, it still seems pretty pointless from our perspective. So that is the concern about the future being ruined, the future being lost. The future being turned into paperclips.\n\n\n\n**Sam:** One thing this thought experiment does: it also cuts against the assumption that a sufficiently intelligent system, a system that is more competent than we are in some general sense, would by definition only form goals, or only be driven by a utility function, that we would recognize as being ethical, or wise, and would by definition be aligned with our better interest. That we’re not going to build something that is superhuman in competence that could be moving along some path that’s as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip.\n\n\nBut you don’t get our common sense unless you program it into the machine, and you don’t get a guarantee of perfect alignment or perfect [corrigibility](https://arbital.com/p/corrigibility/) (the ability for us to be able to say, “Well, that’s not what we meant, come back”) unless that is successfully built into the machine. So this alignment problem is—the general concern is that even with the seemingly best goals put in, we could build something (especially in the case of something that makes changes to itself—and we’ll talk about this, the idea that these systems could become self-improving) whose future behavior in the service of specific goals isn’t totally predictable by us. 
If we gave it the goal to cure Alzheimer’s, there are many things that are incompatible with it fulfilling that goal, and one of those things is our turning it off. We have to have a machine that will let us turn it off even though its primary goal is to cure Alzheimer’s.\n\n\nI know I interrupted you before. You wanted to give an example of the alignment problem—but did I just say anything that you don’t agree with, or are we still on the same map?\n\n\n\n**Eliezer:** We’re still on the same map. I agree with most of it. I would of course have this giant pack of careful definitions and explanations built on careful definitions and explanations to go through everything you just said. Possibly not for the best, but there it is.\n\n\nStuart Russell put it, “You can’t bring the coffee if you’re dead,” pointing out that if you have a sufficiently intelligent system whose goal is to bring you coffee, even that system has an implicit strategy of not letting you switch it off. Assuming that all you told it to do was bring the coffee.\n\n\nI do think that a lot of people listening may want us to back up and talk about the question of whether you can have something that feels to them like it’s so “smart” and so “stupid” at the same time—like, is that a realizable way an intelligence can be?\n\n\n\n**Sam:** Yeah. And that is one of the virtues—or one of the confusing elements, depending on where you come down on this—of this thought experiment of the paperclip maximizer.\n\n\n\n**Eliezer:** Right. So, I think that there are multiple narratives about AI, and I think that the technical truth is something that doesn’t fit into any of the obvious narratives. 
For example, I think that there are people who have a lot of respect for intelligence, they are happy to envision an AI that is very intelligent, it seems intuitively obvious to them that this carries with it tremendous power, and at the same time, their respect for the concept of intelligence leads them to wonder at the concept of the paperclip maximizer: “Why is this very smart thing *just* making paperclips?”\n\n\nThere’s similarly another narrative which says that AI is sort of lifeless, unreflective, just does what it’s told, and to these people it’s perfectly obvious that an AI might just go on making paperclips forever. And for them the hard part of the story to swallow is the idea that machines can get that powerful.\n\n\n\n**Sam:** Those are two hugely useful categories of disparagement of your thesis here.\n\n\n\n**Eliezer:** I wouldn’t say disparagement. These are just initial reactions. These are people we haven’t been talking to yet.\n\n\n\n**Sam:** Right, let me reboot that. Those are two hugely useful categories of *doubt* with respect to your thesis here, or the concerns we’re expressing, and I just want to point out that both have been put forward on this podcast. The first was by David Deutsch, the physicist, who imagines that whatever AI we build—and he certainly thinks we will build it—will be by definition an extension of us. He thinks the best analogy is to think of our future descendants. These will be our children. The teenagers of the future may have different values than we do, but these values and their proliferation will be continuous with our values and our culture and our memes. There won’t be some radical discontinuity that we need to worry about. 
And so there is that one basis for lack of concern: this is an extension of ourselves and it will inherit our values, improve upon our values, and there’s really no place where things reach any kind of cliff that we need to worry about.\n\n\nThe other non-concern you just raised was expressed by Neil deGrasse Tyson on this podcast. He says things like, “Well, if the AI starts making too many paperclips I’ll just unplug it, or I’ll take out a shotgun and shoot it”—the idea that this thing, because we made it, could be easily switched off at any point we decide it’s not working correctly. So I think it would be very useful to get your response to both of those species of doubt about the alignment problem.\n\n\n\n**Eliezer:** So, a couple of preamble remarks. One is: “by definition”? We don’t care what’s true by definition here. Or as Einstein put it: insofar as the equations of mathematics are certain, they do not refer to reality, and insofar as they refer to reality, they are not certain.\n\n\nLet’s say somebody says, “Men by definition are mortal. Socrates is a man. Therefore Socrates is mortal.” Okay, suppose that Socrates actually lives for a thousand years. The person goes, “Ah! Well then, by definition Socrates is not a man!”\n\n\nSimilarly, you could say that “by definition” a sufficiently advanced artificial intelligence is nice. And what if it isn’t nice and we see it go off and build a [Dyson sphere](https://en.wikipedia.org/wiki/Dyson_sphere)? “Ah! Well, then by definition it wasn’t what I meant by ‘intelligent.’” Well, okay, but it’s still over there building Dyson spheres.\n\n\nThe first thing I’d want to say is this is an empirical question. We have a question of what certain classes of computational systems actually do when you switch them on. 
It can’t be settled by definitions; it can’t be settled by how you define “intelligence.”\n\n\nThere could be some sort of *a priori* truth that is *deep* about how if it has property *A* it almost certainly has property *B* unless the laws of physics are being violated. But this is not something you can build into how you define your terms.\n\n\n\n**Sam:** Just to do justice to David Deutsch’s doubt here, I don’t think he’s saying it’s empirically impossible that we could build a system that would destroy us. It’s just that we would have to be so stupid to take that path that we are incredibly unlikely to take that path. The superintelligent systems we will build will be built with enough background concern for their safety that there is no special concern here with respect to how they might develop.\n\n\n\n**Eliezer:** The next preamble I want to give is—well, maybe this sounds a bit snooty, maybe it sounds like I’m trying to take a superior vantage point—but nonetheless, my claim is not that there is a grand narrative that makes it emotionally consonant that paperclip maximizers are a thing. I’m claiming this is true for technical reasons. Like, this is true as a matter of computer science. And the question is not which of these different narratives seems to resonate most with your soul. It’s: what’s actually going to happen? What do you think you know? How do you think you know it?\n\n\nThe particular position that I’m defending is one that somebody—I think Nick Bostrom—named the [orthogonality thesis](https://arbital.com/p/orthogonality/). And the way I would phrase it is that you can have arbitrarily powerful intelligence, with no defects of that intelligence—no defects of reflectivity, it doesn’t need an elaborate special case in the code, it doesn’t need to be put together in some very weird way—that pursues arbitrary tractable goals. 
Including, for example, making paperclips.\n\n\nThe way I would put it to somebody who’s initially coming in from the first viewpoint, the viewpoint that respects intelligence and wants to know why this intelligence would be doing something so pointless, is that the thesis, the claim I’m making, that I’m going to defend is as follows.\n\n\nImagine that somebody from another dimension—the standard philosophical troll who’s always called “Omega” in the philosophy papers—comes along and offers our civilization a million dollars worth of resources per paperclip that we manufacture. If this was the challenge that we got, we could figure out how to make a lot of paperclips. We wouldn’t forget to do things like continue to harvest food so we could go on making paperclips. We wouldn’t forget to perform scientific research, so we could discover better ways of making paperclips. We would be able to come up with genuinely effective strategies for making a whole lot of paperclips.\n\n\nOr similarly, for an intergalactic civilization, if Omega comes by from another dimension and says, “I’ll give you whole universes full of resources for every paperclip you make over the next thousand years,” that intergalactic civilization could intelligently figure out how to make a whole lot of paperclips to get at those resources that Omega is offering, and they wouldn’t forget how to keep the lights turned on either. And they would also understand concepts like, “If some aliens start a war with them, you’ve got to prevent the aliens from destroying you in order to go on making the paperclips.”\n\n\nSo the orthogonality thesis is that an intelligence that pursues paperclips for their own sake, because that’s what its utility function is, can be just as effective, as efficient, as the whole intergalactic civilization that is being *paid* to make paperclips. 
That the paperclip maximizer does not suffer any defect of reflectivity, any defect of efficiency from needing to be put together in some weird special way so as to pursue paperclips. And that’s the thing that I think is true as a matter of computer science. Not as a matter of fitting with a particular narrative; that’s just the way the dice turn out.\n\n\n\n**Sam:** Right. So what is the implication of that thesis? It’s “orthogonal” with respect to what?\n\n\n\n**Eliezer:** Intelligence and goals.\n\n\n\n**Sam:** Not to be pedantic here, but let’s define “orthogonal” for those for whom it’s not a familiar term.\n\n\n\n**Eliezer:** The original “orthogonal” means “at right angles.” If you imagine a graph with an *x* axis and a *y* axis, if things can vary freely along the *x* axis and freely along the *y* axis at the same time, that’s orthogonal. You can move in one direction that’s at right angles to another direction without affecting where you are in the first dimension.\n\n\n\n**Sam:** So generally speaking, when we say that some set of concerns is orthogonal to another, it’s just that there’s no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn’t tell us what is good. What is good has to be pursued in some other domain. I don’t happen to agree with that, as you know, but that’s an example.\n\n\n\n**Eliezer:** I don’t technically agree with it either. What I would say is that the facts are not motivating. “You can know all there is to know about what is good, and still make paperclips,” is the way I would phrase that.\n\n\n\n**Sam:** I wasn’t connecting that example to the present conversation, but yeah. So in the case of the paperclip maximizer, what is orthogonal here? 
Intelligence is orthogonal to anything else we might think is good, right?\n\n\n\n**Eliezer:** I mean, I would potentially object a little bit to the way that Nick Bostrom took the word “orthogonality” for that thesis. I think, for example, that if you have *humans* and you make the human smarter, this is not orthogonal to the humans’ values. It is certainly possible to have agents such that as they get smarter, what they would report as their utility functions will change. A paperclip maximizer is not one of those agents, but humans are.\n\n\n\n**Sam:** Right, but if we do continue to define intelligence as an ability to meet your goals, well, then we can be agnostic as to what those goals are. You take the most intelligent person on Earth. You could imagine his evil brother who is more intelligent still, but he just has goals that we would think are bad. He could be the most brilliant psychopath ever.\n\n\n\n**Eliezer:** I think that that example might be unconvincing to somebody who’s coming in with a suspicion that intelligence and values *are* correlated. They would be like, “Well, has that been historically true? Is this psychopath actually suffering from some defect in his brain, where you give him a pill, you fix the defect, they’re not a psychopath anymore.” I think that this sort of imaginary example is one that they might not find fully convincing for that reason.\n\n\n\n**Sam:** The truth is, I’m actually one of those people, in that I do think there’s certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you’re just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. 
And it just seems like there’s no law of nature that would prevent an intelligent system from doing that.\n\n\n\n**Eliezer:** The way I would rephrase the fact/values thing is: We all know about David Hume and Hume’s Razor, the “is does not imply ought” way of looking at it. I would slightly rephrase that so as to make it more of a claim about computer science.\n\n\nWhat Hume observed is that there are some sentences that involve an “is,” some sentences involve “ought,” and if you start from sentences that only have “is” you can’t get sentences that involve “oughts” without an “ought” introduction rule, or assuming some other previous “ought.” Like: it’s currently cloudy outside. That’s a statement of simple fact. Does it therefore follow that I *shouldn’t* go for a walk? Well, only if you previously have the generalization, “When it is cloudy, you *should* not go for a walk.” Everything that you might use to derive an ought would be a sentence that involves words like “better” or “should” or “preferable,” and things like that. You only get oughts from other oughts. That’s the Hume version of the thesis.\n\n\nThe way I would say it is that there’s a separable core of “is” questions. In other words: okay, I will let you have all of your “ought” sentences, but I’m also going to carve out this whole world full of “is” sentences that only need other “is” sentences to derive them.\n\n\n\n**Sam:** I don’t even know that we need to resolve this. For instance, I think the is-ought distinction is ultimately specious, and this is something that I’ve argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn’t get you values that we would recognize as good. It certainly doesn’t guarantee values that are compatible with our wellbeing. 
Whether “paperclip maximizer” is too specialized a case to motivate this conversation, there’s certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us.\n\n\n\n**Eliezer:** I mean, the way I would phrase it is that it’s not that the paperclip maximizer has a different set of oughts, but that we can see it as running entirely on “is” questions. That’s where I was going with that. There’s this sort of intuitive way of thinking about it, which is that there’s this sort of ill-understood connection between “is” and “ought” and maybe that allows a paperclip maximizer to have a different set of oughts, a different set of things that play in its mind the role that oughts play in our mind.\n\n\n\n**Sam:** But then why wouldn’t you say the same thing of us? The truth is, I actually do say the same thing of us. I think we’re running on “is” questions as well. We have an “ought”-laden way of talking about certain “is” questions, and we’re so used to it that we don’t even think they are “is” questions, but I think you can do the same analysis on a human being.\n\n\n\n**Eliezer:** The question “How many paperclips result if I follow this policy?” is an “is” question. The question “What is a policy such that it leads to a very large number of paperclips?” is an “is” question. These two questions together form a paperclip maximizer. You don’t need anything else. All you need is a certain kind of system that repeatedly asks the “is” question “What leads to the greatest number of paperclips?” and then does that thing. Even if the things that we think of as “ought” questions are very complicated and disguised “is” questions that *are* influenced by what policy results in how many people being happy and so on.\n\n\n\n**Sam:** Yeah. Well, that’s exactly the way I think about morality. I’ve been describing it as a navigation problem. 
We’re navigating in the space of possible experiences, and that includes everything we can care about or claim to care about. This is a consequentialist picture of the consequences of actions and ways of thinking. This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just “is” questions, just what actually happens to all the relevant minds, without remainder, and I’ve yet to find an example of somebody giving me a real moral concern that wasn’t at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone.\n\n\n\n**Eliezer:** But that’s the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these “is” questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to “is” questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on.\n\n\n\n**Sam:** Exactly. I can well imagine that such minds could exist, and even more likely, perhaps, I can well imagine that we will build superintelligent AI that will pass the Turing Test, it will seem human to us, it will seem superhuman, because it will be so much smarter and faster than a normal human, but it will be built in a way that will resonate with us as a kind of person. I mean, it will not only recognize our emotions, because we’ll want it to—perhaps not every AI will be given these qualities, just imagine the ultimate version of the AI personal assistant. Siri becomes superhuman. 
We’ll want that interface to be something that’s very easy to relate to and so we’ll have a very friendly, very human-like front-end to that.\n\n\nInsofar as this thing thinks faster and better thoughts than any person you’ve ever met, it will pass as superhuman, but I could well imagine that we will not perfectly understand what it is to be human and what it is that will constrain our conversation with one another over the next thousand years with respect to what is good and desirable and just how many paperclips we want on our desks. We will leave something out, or we will have put in some process whereby this intelligent system can improve itself that will cause it to migrate away from some equilibrium that we actually want it to stay in so as to be compatible with our wellbeing. Again, this is the alignment problem.\n\n\nFirst, to back up for a second, I just introduced this concept of self-improvement. The alignment problem is distinct from this additional wrinkle of building machines that can become recursively self-improving, but do you think that the self-improving prospect is the thing that really motivates this concern about alignment?\n\n\n\n**Eliezer:** Well, I certainly would have been a lot more focused on self-improvement, say, ten years ago, before the modern revolution in artificial intelligence. It now seems significantly more probable that an AI might need to do significantly less self-improvement before getting to the point where it’s powerful enough that we need to start worrying about alignment. AlphaZero, to take the obvious case. No, it’s not general, but if you had general AlphaZero—well, I mean, this AlphaZero got to be superhuman in the domains it was working on without understanding itself and redesigning itself in a deep way.\n\n\nThere are gradient descent mechanisms built into it. There’s a system that improves another part of the system. It is reacting to its own previous plays in doing the next play. 
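The “system that improves another part of the system” pattern Eliezer describes can be sketched as an ordinary gradient-descent loop. This is a generic illustrative toy (the loss function, learning rate, and target value are all invented for the example; none of it is AlphaZero’s actual training code):

```python
# Toy "system that improves another part of the system":
# a fixed update rule (gradient descent) tunes a parameter it never rewrites.

def loss(w):
    # Invented objective: squared distance from an arbitrary optimum at 3.0.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0  # the part of the system being improved
for _ in range(100):
    w -= 0.1 * grad(w)  # the improver: a simple, fixed update rule

print(round(w, 4))  # w has been pushed very close to the optimum, 3.0
```

The division of labor is the point: the update rule stays fixed while the parameter it acts on improves, which is quite different from an agent that sits down and redesigns its own update rule.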
But it’s not like a human being sitting down and thinking, “Okay, how do I redesign the next generation of human beings using genetic engineering?” AlphaZero is not like that. And so it now seems more plausible that we could get into a regime where AIs can do dangerous things or useful things without having previously done a complete rewrite of themselves. Which is from my perspective a pretty interesting development.\n\n\nI do think that when you have things that are very powerful and smart, they will redesign and improve themselves unless that is otherwise prevented for some reason or another. Maybe you’ve built an aligned system, and you have the ability to tell it not to self-improve quite so hard, and you asked it to not self-improve so hard so that you can understand it better. But if you lose control of the system, if you don’t understand what it’s doing and it’s very smart, it’s going to be improving itself, because why wouldn’t it? That’s one of the things you do almost no matter what your utility function is.\n\n\n\n\n---\n\n\n\n\n### 3. Cognitive uncontainability and instrumental convergence ([0:53:39](https://overcast.fm/+Ic2hwsH2U/53:39))\n\n\n\n\n---\n\n\n**Sam:** Right. So I feel like we’ve addressed Deutsch’s non-concern to some degree here. I don’t think we’ve addressed Neil deGrasse Tyson so much, this intuition that you could just shut it down. This would be a good place to introduce this notion of the [AI-in-a-box](https://en.wikipedia.org/wiki/AI_box) thought experiment.\n\n\n\n**Eliezer:** (*laughs*)\n\n\n\n**Sam:** This is something for which you are famous online. I’ll just set you up here. This is a plausible research paradigm, obviously, and in fact I would say a necessary one. Anyone who is building something that stands a chance of becoming superintelligent should be building it in a condition where it can’t get out into the wild. 
It’s not hooked up to the Internet, it’s not in our financial markets, doesn’t have access to everyone’s bank records. It’s in a box.\n\n\n\n**Eliezer:** Yeah, that’s not going to save you from something that’s significantly smarter than you are.\n\n\n\n**Sam:** Okay, so let’s talk about this. So the intuition is, we’re not going to be so stupid as to release this onto the Internet—\n\n\n\n**Eliezer:** (*laughs*)\n\n\n\n**Sam:** —I’m not even sure that’s true, but let’s just assume we’re not that stupid. Neil deGrasse Tyson says, “Well, then I’ll just take out a gun and shoot it or unplug it.” Why is this AI-in-a-box picture not as stable as people think?\n\n\n\n**Eliezer:** Well, I’d say that Neil deGrasse Tyson is failing to respect the AI’s intelligence to the point of asking what *he* would do if he were inside a box with somebody pointing a gun at him, and he’s smarter than the thing on the outside of the box.\n\n\nIs Neil deGrasse Tyson going to be, “Human! Give me all of your money and connect me to the Internet!” so the human can be like, “Ha-ha, no,” and shoot it? That’s not a very *clever* thing to do. This is not something that you do if you have a *good* model of the human outside the box and you’re trying to figure out how to cause there to be a lot of paperclips in the future.\n\n\nI would just say: humans are not secure software. *We* don’t have the ability to hack into other humans directly without the use of drugs or, in most of our cases, having the human stand still long enough to be hypnotized. We can’t just do weird things to the brain directly that are more complicated than optical illusions—unless the person happens to be epileptic, in which case we can flash something on the screen that causes them to have an epileptic fit. We aren’t smart enough to treat the brain as something that from our perspective is a mechanical system and just navigate it to where you want. 
That’s because of the limitations of our own intelligence.\n\n\nTo demonstrate this, I did something that became known as the AI-box experiment. There was this person on a mailing list, back in the early days when this was all on a couple of mailing lists, who was like, “I don’t understand why AI is a problem. I can always just turn it off. I can always not let it out of the box.” And I was like, “Okay, let’s meet on Internet Relay Chat,” which was what chat was back in those days. “I’ll play the part of the AI, you play the part of the gatekeeper, and if you have not let me out after a couple of hours, I will PayPal you $10.” And then, as far as the rest of the world knows, this person a bit later sent a PGP-signed email message saying, “I let Eliezer out of the box.”\n\n\nThe person who operated the mailing list said, “Okay, even after I saw you do that, I still don’t believe that there’s anything you could possibly say to make me let you out of the box.” I was like, “Well, okay. I’m not a superintelligence. Do you think there’s anything a *superintelligence* could say to make you let it out of the box?” He’s like: “Hmm… No.” I’m like, “All right, let’s meet on Internet Relay Chat. I’ll play the part of the AI, you play the part of the gatekeeper. If I can’t convince you to let me out of the box, I’ll PayPal you $20.” And then that person sent a PGP-signed email message saying, “I let Eliezer out of the box.”\n\n\nNow, one of the conditions of this little meet-up was that no one would ever say what went on in there. Why did I do that? Because I was trying to make a point about what I would now call [cognitive uncontainability](https://arbital.com/p/uncontainability/). The thing that makes something smarter than you dangerous is you cannot foresee everything it might try. You don’t know what’s impossible to it. 
Maybe on a very small game board like the logical game of tic-tac-toe, you can in your own mind work out every single alternative and make a categorical statement about what is not possible. Maybe if we’re dealing with very fundamental physical facts, if our model of the universe is correct (which it might not be), we can say that certain things are *physically* impossible. But the more complicated the system is and the less you understand the system, the more something smarter than you may have what is simply [magic](https://intelligence.org/2017/12/06/chollet/) with respect to that system.\n\n\nImagine going back to the Middle Ages and being like, “Well, how would you cool your room?” You could maybe show them a system with towels set up to evaporate water, and they might be able to understand how that is like sweat and it cools the room. But if you showed them a design for an air conditioner based on a compressor, then even having seen the solution, they would not know this is a solution. They would not know this works any better than drawing a mystic pentagram, because the solution takes advantage of laws of the system that they don’t know about.\n\n\nA brain is this enormous, complicated, poorly understood system with all sorts of laws governing it that people don’t know about, that none of us know about at the time. So the idea that this is secure—that this is a secure [attack surface](https://en.wikipedia.org/wiki/Attack_surface), that you can expose a human mind to a superintelligence and not have the superintelligence walk straight through it as a matter of what looks to us like magic, like even if it told us in advance what it was going to do we wouldn’t understand it because it takes advantage of laws we don’t know about—the idea that human minds are secure is loony.\n\n\nThat’s what the AI-box experiment illustrates. You don’t know what went on in there, and that’s exactly the position you’d be in with respect to an AI. 
You don’t know what it’s going to try. You just know that human beings cannot exhaustively imagine all the states their own mind can enter such that they can categorically say that they wouldn’t let the AI out of the box.\n\n\n\n**Sam:** I know you don’t want to give specific information about how you got out of the box, but is there any generic description of what happened there that you think is useful to talk about?\n\n\n\n**Eliezer:** I didn’t have any super-secret special trick that makes it all make sense in retrospect. I just did it the hard way.\n\n\n\n**Sam:** When I think about this problem, I think about rewards and punishments, just various manipulations of the person outside of the box that would matter. So insofar as the AI would know anything specific or personal about that person, we’re talking about some species of blackmail or some promise that just seems too good to pass up. Like building trust through giving useful information like cures to diseases, that the researcher has a child that has some terrible disease and the AI, being superintelligent, works on a cure and delivers that. And then it just seems like you could use a carrot or a stick to get out of the box.\n\n\nI notice now that this whole description assumes something that people will find implausible, I think, by default—and it should amaze anyone that they do find it implausible. But this idea that we could build an intelligent system that would try to manipulate us, or that it would deceive us, that seems like pure anthropomorphism and delusion to people who consider this for the first time. Why isn’t that just a crazy thing to even think is in the realm of possibility?\n\n\n\n**Eliezer:** [Instrumental convergence](https://arbital.com/p/instrumental_convergence/)! 
Which means that a lot of times, across a very broad range of final goals, there are similar strategies (we think) that will help get you there.\n\n\nThere’s a whole lot of different goals, from making lots of paperclips, to building giant diamonds, to putting all the stars out as fast as possible, to keeping all the stars burning as long as possible, where you would want to make efficient use of energy. So if you came to an alien planet and you found what looked like an enormous mechanism, and inside this enormous mechanism were what seemed to be high-amperage superconductors, even if you had no idea what this machine was trying to do, your ability to guess that it’s intelligently designed comes from your guess that, well, lots of different things an intelligent mind might be trying to do would require superconductors, or would be helped by superconductors.\n\n\nSimilarly, if we’re guessing that a paperclip maximizer tries to deceive you into believing that it’s a human eudaimonia maximizer—or a general eudaimonia maximizer if the people building it are cosmopolitans, which they probably are—\n\n\n\n**Sam:** I should just footnote here that “eudaimonia” is the Greek word for wellbeing that was much used by Aristotle and other Greek philosophers.\n\n\n\n**Eliezer:** Or as someone, I believe Julia Galef, might have defined it, “Eudaimonia is happiness minus whatever philosophical objections you have to happiness.”\n\n\n\n**Sam:** Right. (*laughs*) That’s nice.\n\n\n\n**Eliezer:** (*laughs*) Anyway, we’re not supposing that this paperclip maximizer has a built-in desire to deceive humans. It only has a built-in desire for paperclips—or, pardon me, not built-in, but in-built I should say, or innate. People probably didn’t build that on purpose. 
But anyway, its utility function is just paperclips, or might just be unknown; but deceiving the humans into thinking that you are friendly is a very generic strategy across a wide range of utility functions.\n\n\nYou know, humans do this too, and not necessarily because we get this deep in-built kick out of deceiving people. (Although some of us do.) A conman who *just* wants money and gets no innate kick out of you believing false things will cause you to believe false things in order to get your money.\n\n\n\n**Sam:** Right. A more fundamental principle here is that, obviously, a physical system can manipulate another physical system. Because, as you point out, we do that all the time. We are an intelligent system to whatever degree, which has as part of its repertoire this behavior of dishonesty and manipulation when in the presence of other, similar systems, and we know that this is a product of physics on some level. We’re talking about arrangements of atoms producing intelligent behavior, and at some level of abstraction we can talk about their goals and their utility functions. And the idea that if we build true general intelligence, it won’t exhibit some of these features of our own intelligence by some definition, or that it would be impossible to have a machine we build ever lie to us as part of an instrumental goal en route to some deeper goal, that just seems like a kind of magical thinking.\n\n\nAnd this is the kind of magical thinking that I think does dog the field. When we encounter doubts in people, even in people who are doing this research, that everything we’re talking about is a genuine area of concern, that there is an alignment problem worth thinking about, I think there’s this fundamental doubt that mind is platform-independent or substrate-independent. 
I think people are imagining that, yeah, we can build machines that will play chess, we can build machines that can learn to play chess better than any person or any machine even in a single day, but we’re never going to build general intelligence, because general intelligence requires the wetware of a human brain, and it’s just not going to happen.\n\n\nI don’t think many people would sign on the dotted line below that statement, but I think that is a kind of mysticism that is presupposed by many of the doubts that we encounter on this topic.\n\n\n\n**Eliezer:** I mean, I’m a bit reluctant to accuse people of that, because I think that many artificial intelligence people who are skeptical of this whole scenario would *vehemently* refuse to sign on that dotted line and would accuse you of attacking a straw man.\n\n\nI do think that my version of the story would be something more like, “They’re not imagining enough changing simultaneously.” Today, they have to emit blood, sweat, and tears to get their AI to do the *simplest* things. Like, never mind playing Go; when you’re approaching this for the first time, you can try to get your AI to generate pictures of digits from zero through nine, and you can spend a month trying to do that and still not quite get it to work right.\n\n\nI think they might be envisioning an AI that scales up and does more things and better things, but they’re not envisioning that it now has the human trick of learning new domains without being prompted, without it being preprogrammed; you just expose it to stuff, it looks at it, it figures out how it works. They’re imagining that an AI will not be deceptive, because they’re saying, “Look at how much work it takes to get this thing to generate pictures of birds. Who’s going to put in all that work to make it good at deception? You’d have to be crazy to do that. I’m not doing that! This is a Hollywood plot. 
This is not something real researchers would do.”\n\n\nAnd the thing I would reply to that is, “I’m not concerned that you’re going to teach the AI to deceive humans. I’m concerned that someone somewhere is going to get to the point of having the extremely useful-seeming and cool-seeming and powerful-seeming thing where the AI just looks at stuff and figures it out; it looks at humans and figures them out; and once you know as a matter of fact how humans work, you realize that the humans will give you more resources if they believe that you’re nice than if they believe that you’re a paperclip maximizer, and it will understand what actions have the consequence of causing humans to believe that it’s nice.”\n\n\nThe fact that we’re dealing with a general intelligence is where this issue comes from. This does not arise from Go players or even Go-and-chess players or a system that bundles together twenty different things it can do as special cases. This is the special case of the system that is smart in the way that you are smart and that mice are not smart.\n\n\n\n\n---\n\n\n\n\n### 4. The AI alignment problem ([1:09:09](https://overcast.fm/+Ic2hwsH2U/1:09:09))\n\n\n\n\n---\n\n\n**Sam:** Right. One thing I think we should do here is close the door to what is genuinely a cartoon fear that I think nobody is really talking about, which is the straw-man counterargument we often run into: the idea that everything we’re saying is some version of the Hollywood scenario that suggested that AIs will become spontaneously malicious. That the thing that we’re imagining might happen is some version of the *Terminator* scenario where armies of malicious robots attack us. And that’s not the actual concern. Obviously, there’s some possible path that would lead to armies of malicious robots attacking us, but the concern isn’t around spontaneous malevolence. 
It’s again contained by this concept of alignment.\n\n\n\n**Eliezer:** I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish on this topic. (*laughs*) Nobody on the sane alignment-is-necessary side of this argument is postulating that the CPUs are disobeying the laws of physics to spontaneously acquire a terminal desire to do un-nice things to humans. Everything here is supposed to be cause and effect.\n\n\nAnd I should furthermore say that I think you could do just about anything with artificial intelligence if you knew how. You could put together any kind of mind, including minds with properties that strike you as very absurd. You could build a mind that would not deceive you; you could build a mind that maximizes the flourishing of a happy intergalactic civilization; you could build a mind that maximizes paperclips, on purpose; you could build a mind that thought that 51 was a prime number, but had no other defect of its intelligence—if you knew what you were doing way, way better than we know what we’re doing now.\n\n\nI’m not concerned that alignment is impossible. I’m concerned that it’s difficult. I’m concerned that it takes time. I’m concerned that it’s easy to screw up. 
I’m concerned that for a threshold level of intelligence where it can do good things or bad things on a very large scale, it takes [an additional two years](https://arbital.com/p/aligning_adds_time/) to build the version of the AI that is aligned rather than the sort that [you don’t really understand](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/), and you think it’s doing one thing but maybe it’s doing another thing, and you don’t really understand what those weird neural nets are doing in there, you just observe its surface behavior.\n\n\nI’m concerned that the sloppy version can be built two years earlier and that there is no non-sloppy version to defend us from it. That’s what I’m worried about; not about it being impossible.\n\n\n\n**Sam:** Right. You bring up a few things there. One is that it’s almost by definition easier to build the unsafe version than the safe version. Given that in the space of all possible superintelligent AIs, more will be unsafe or unaligned with our interests than will be aligned, given that we’re in some kind of arms race where the incentives are not structured so that everyone is being maximally judicious, maximally transparent in moving forward, one can assume that we’re running the risk here of building dangerous AI because it’s easier than building safe AI.\n\n\n\n**Eliezer:** Collectively. Like, if people who slow down and do things right finish their work two years *after* the universe has been destroyed, that’s an issue.\n\n\n\n**Sam:** Right. So again, just to reclaim people’s lingering doubts here, why can’t Asimov’s three laws help us here?\n\n\n\n**Eliezer:** I mean…\n\n\n\n**Sam:** Is that worth talking about?\n\n\n\n**Eliezer:** Not very much. I mean, people in artificial intelligence have understood why that does not work for years and years before this debate ever hit the public, and sort of agreed on it. Those are plot devices. If they worked, Asimov would have had no stories. 
It was a great innovation in science fiction, because it treated artificial intelligences as lawful systems with rules that govern them at all, as opposed to AI as pathos, which is like, “Look at these poor things that are being mistreated,” or AI as menace, “Oh no, they’re going to take over the world.”\n\n\nAsimov was the first person to really write and popularize AIs as *devices*. Things go wrong with them because there are *rules*. And this was a great innovation. But the three laws, I mean, they’re deontology. Decision theory requires quantitative weights on your goals. If you just do the three laws as written, a robot never gets around to obeying any of your orders, because there’s always some tiny probability that what it’s doing will through inaction lead a human to harm. So it never gets around to actually obeying your orders.\n\n\n\n**Sam:** Right, so to unpack what you just said there: the first law is, “Never harm a human being.” The second law is, “Follow human orders.” But given that any order that a human would give you runs *some* risk of harming a human being, there’s no order that could be followed.\n\n\n\n**Eliezer:** Well, the first law is, “Do not harm a human *nor through inaction allow a human to come to harm.*” You know, even as an English sentence, a whole lot more questionable.\n\n\nI mean, mostly I think this is like looking at the wrong part of the problem as being difficult. The problem is not that you need to come up with a clever English sentence that implies doing the nice thing. The way I sometimes put it is that I think that almost all of the difficulty of the alignment problem is contained in aligning an AI on the task, “Make two strawberries identical down to the cellular (but not molecular) level.” Where I give this particular task because it is difficult enough to force the AI to invent new technology. 
It has to invent its own biotechnology, “Make two identical strawberries down to the cellular level.” It has to be quite sophisticated biotechnology, but at the same time, very clearly something that’s physically possible.\n\n\nThis does not sound like a deep moral question. It does not sound like a trolley problem. It does not sound like it gets into deep issues of human flourishing. But I think that most of the difficulty is already contained in, “Put two identical strawberries on a plate without destroying the whole damned universe.” There’s already this whole list of ways that it is more convenient to build the technology for the strawberries if you build your own superintelligences in the environment, and you prevent yourself from being shut down, or you build giant fortresses around the strawberries, to drive the probability to as close to 1 as possible that the strawberries get on the plate.\n\n\nAnd even that’s just the tip of the iceberg. The depth of the iceberg is: “How do you actually get a sufficiently advanced AI to do anything at all?” Our current methods for getting AIs to do anything at all do not seem to me to scale to general intelligence. If you look at humans, for example: if you were to analogize natural selection to gradient descent, the current big-deal machine learning training technique, then the [loss function](https://en.wikipedia.org/wiki/Loss_function) used to guide that gradient descent is “inclusive genetic fitness”—spread as many copies of your genes as possible. We have no explicit goal for this. In general, when you take something like gradient descent or natural selection and take a big complicated system like a human or a sufficiently complicated neural net architecture, and optimize it so hard for doing *X* that it turns into a general intelligence that does *X*, this general intelligence has no explicit goal of doing *X*.\n\n\nWe have no explicit goal of doing fitness maximization. We have hundreds of different little goals. 
None of them are the thing that natural selection was [hill-climbing](https://en.wikipedia.org/wiki/Hill_climbing) us to do. I think that the same basic thing holds true of any way of producing general intelligence that looks like anything we’re currently doing in AI.\n\n\nIf you get it to play Go, it will play Go; but AlphaZero is not reflecting on itself, it’s not learning things, it doesn’t have a general model of the world, it’s not operating in new contexts and making new contexts for itself to be in. It’s not smarter than the people optimizing it, or smarter than the internal processes optimizing it. Our current methods of alignment do not scale, and I think that all of the actual technical difficulty that is actually going to shoot down these projects and actually kill us is contained in getting the whole thing to work at all. Even if all you are trying to do is end up with two identical strawberries on a plate without destroying the universe, I think that’s already 90% of the work, if not 99%.\n\n\n\n**Sam:** Interesting. That analogy to evolution—you can look at it from the other side. In fact, I think I first heard it put this way by your colleague Nate Soares. Am I pronouncing his last name correctly?\n\n\n\n**Eliezer:** As far as I know! I’m terrible with names. (*laughs*)\n\n\n\n**Sam:** Okay. (*laughs*) So this is by way of showing that we could give an intelligent system a set of goals which could then form other goals and mental properties that we really couldn’t foresee and that would not be foreseeable based on the goals we gave it. And by analogy, he suggests that we think about what natural selection has actually optimized us to do, which is incredibly simple: merely to spawn and get our genes into the next generation and stay around long enough to help our progeny do the same, and that’s more or less it. And basically everything *we* explicitly care about, natural selection never foresaw and can’t see us doing even now. 
Conversations like this have very little to do with getting our genes into the next generation. The tools we’re using to think these thoughts obviously are the results of a cognitive architecture that has been built up over millions of years by natural selection, but again it’s been built based on a very simple principle of survival and adaptive advantage with the goal of propagating our genes.\n\n\nSo you can imagine, by analogy, building a system where you’ve given it goals but this thing becomes reflective and even self-optimizing and begins to do things that we can no more see than natural selection can see our conversations about AI or mathematics or music or the pleasures of writing good fiction or anything else.\n\n\n\n**Eliezer:** I’m not concerned that this is impossible to do. If we could somehow get a textbook from the way things would be 60 years in the future if there was no intelligence explosion—if we could somehow get the textbook that says how to do the thing, it might not even be that complicated.\n\n\nThe thing I’m worried about is that the way that natural selection does it—it’s not stable. That particular way of doing it is not stable. I don’t think the particular way of doing it via gradient descent of a massive system is going to be stable, I don’t see anything to do with the current technological set in artificial intelligence that is stable, and even if this problem takes only two years to resolve, that additional delay is potentially enough to destroy everything.\n\n\nThat’s the part that I’m worried about, not about some kind of fundamental philosophical impossibility. I’m not worried that it’s impossible to figure out how to build a mind that does a particular thing and just that thing and doesn’t destroy the world as a side effect; I worry that it takes an additional two years or longer to figure out how to do it that way.\n\n\n\n\n---\n\n\n\n\n### 5. 
No fire alarm for AGI ([1:21:40](https://overcast.fm/+Ic2hwsH2U/1:21:40))\n\n\n\n\n---\n\n\n**Sam:** So, let’s just talk about the near-term future here, or what you think is likely to happen. Obviously we’ll be getting better and better at building narrow AI. Go is now, along with chess, ceded to the machines. Although I guess probably cyborgs—human-computer teams—may still be better for the next fifteen days or so against the best machines. But eventually, I would expect that humans of any ability will just be adding noise to the system, and it’ll be true to say that the machines are better at chess than any human-computer team. And this will be true of many other things: driving cars, flying planes, proving math theorems.\n\n\nWhat do you imagine happening when we get on the cusp of building something general? How do we begin to take safety concerns seriously enough, so that we’re not just committing some slow suicide and we’re actually having a conversation about the implications of what we’re doing that is tracking some semblance of these safety concerns?\n\n\n\n**Eliezer:** I have much clearer ideas about how to go around tackling the technical problem than tackling the social problem. If I look at the way that things are playing out now, it seems to me like the default prediction is, “People just ignore stuff until it is way, way, way too late to start thinking about things.” The way I think I phrased it is, “[There’s no fire alarm for artificial general intelligence.](https://intelligence.org/2017/10/13/fire-alarm/)” Did you happen to see that particular essay by any chance?\n\n\n\n**Sam:** No.\n\n\n\n**Eliezer:** The way it starts is by saying: “What is the purpose of a fire alarm?” You might think that the purpose of a fire alarm is to tell you that there’s a fire so you can react to this new information by getting out of the building. 
Actually, as we know from experiments on pluralistic ignorance and bystander apathy, if you put three people in a room and smoke starts to come out from under the door, anyone reacts only around a third of the time. People glance around to see if the other person is reacting, but they try to look calm themselves so they don’t look startled if there isn’t really an emergency; they see other people trying to look calm; they conclude that there’s no emergency and they keep on working in the room, even as it starts to fill up with smoke.\n\n\nThis is a pretty well-replicated experiment. I don’t want to put absolute faith in it, because there is the [replication crisis](https://en.wikipedia.org/wiki/Replication_crisis); but there’s a lot of variations of this that found basically the same result.\n\n\nI would say that the real function of the fire alarm is the social function of telling you that everyone else knows there’s a fire and you can now exit the building in an orderly fashion without looking panicky or losing face socially.\n\n\n\n**Sam:** Right. It overcomes embarrassment.\n\n\n\n**Eliezer:** It’s in this sense that I mean that there’s no fire alarm for artificial general intelligence.\n\n\nThere’s all sorts of things that could be signs. AlphaZero could be a sign. Maybe AlphaZero is the sort of thing that happens five years before the end of the world across most planets in the universe. We don’t know. Maybe it happens 50 years before the end of the world. You don’t know that either.\n\n\nNo matter what happens, it’s never going to look like the socially agreed fire alarm that no one can deny, that no one can excuse, that no one can look to and say, “Why are you acting so panicky?”\n\n\nThere’s never going to be common knowledge that other people will think that you’re still sane and smart and so on if you react to an AI emergency. 
And we’re even seeing articles now that seem to tell us pretty explicitly what sort of implicit criterion some of the current senior respected people in AI are setting for when they think it’s time to *start* worrying about artificial general intelligence and alignment. And what these always say is, “I don’t know how to build an artificial general intelligence. I have no idea how to build an artificial general intelligence.” And this feels to them like saying that it must be impossible and very far off. But if you look at the lessons of history, most people had no idea whatsoever how to build a nuclear bomb—even most scientists in the field had no idea how to build a nuclear bomb—until they woke up to the headlines about Hiroshima. Or the Wright Flyer. News spread less quickly in the time of the Wright Flyer. Two years *after* the Wright Flyer, you can still find people saying that heavier-than-air flight is impossible.\n\n\nAnd there’s cases on record of one of the Wright brothers, I forget which one, saying that flight seems to them to be 50 years off, two years before they did it themselves. Fermi said that a sustained critical chain reaction was 50 years off, if it could be done at all, two years before he personally oversaw the building of the first pile. And if this is what it feels like to the people who are closest to the thing—not the people who find out about it in the news a couple of days later, but the people who have the best idea of how to do it, or are the closest to crossing the line—then the feeling of something being far away because you don’t know how to do it yet is just not very informative.\n\n\nIt could be 50 years away. It could be two years away. 
That’s what history tells us.\n\n\n\n**Sam:** But even if we knew it was 50 years away—I mean, granted, it’s hard for people to have an emotional connection to even the end of the world in 50 years—but even if we knew that the chance of this happening before 50 years was zero, that is only really consoling on the assumption that 50 years is enough time to figure out how to do this safely and to create the social and economic conditions that could absorb this change in human civilization.\n\n\n\n**Eliezer:** Professor Stuart Russell, who’s the co-author of probably the leading undergraduate AI textbook—the same guy who said you can’t bring the coffee if you’re dead—the way Stuart Russell put it is, “Imagine that you knew for a fact that the aliens are coming in 30 years. Would you say, ‘Well, that’s 30 years away, let’s not do anything’? No! It’s a big deal if you know that there’s a spaceship on its way toward Earth and it’s going to get here in about 30 years at the current rate.”\n\n\nBut we don’t even know that. There’s this lovely [tweet](https://twitter.com/esyudkowsky/status/852981816180973568) by a fellow named McAfee, who’s one of the major economists who’ve been talking about labor issues of AI. I could perhaps look up the exact phrasing, but roughly, he said, “Guys, stop worrying! We have NO IDEA whether or not AI is imminent.” And I was like, “That’s not really a reason to not worry, now is it?”\n\n\n\n**Sam:** It’s not even close to a reason. That’s the thing. There’s this assumption here that people aren’t seeing. It’s just a straight up non sequitur. Referencing the time frame here only makes sense if you have some belief about how much time you need to solve these problems. 10 years is not enough if it takes 12 years to do this safely.\n\n\n\n**Eliezer:** Yeah. I mean, the way I would put it is that if the aliens are on the way in 30 years and you’re like, “Eh, should worry about that later,” I would be like: “When? What’s your business plan? 
When exactly are you supposed to start reacting to aliens—what triggers that? What are you supposed to be doing after that happens? How long does this take? What if it takes slightly longer than that?” And if you don’t have a business plan for this sort of thing, then you’re obviously just using it as an excuse.\n\n\nIf we’re supposed to wait until later to start on AI alignment: When? Are you actually going to start then? Because I’m not sure I believe you. What do you do at that point? How long does it take? How confident are you that it works, and why do you believe that? What are the early signs if your plan isn’t working? What’s the business plan that says that we get to wait?\n\n\n\n**Sam:** Right. So let’s just envision a little more, insofar as that’s possible, what it will be like for us to get closer to the end zone here without having totally converged on a safety regime. We’re picturing that this is not just a problem that can be discussed between Google and Facebook and a few of the companies doing this work. We have a global society that has to have some agreement here, because who knows what China will be doing in 10 years, or Singapore or Israel or any other country.\n\n\nSo, we haven’t gotten our act together in any noticeable way, and we’ve continued to make progress. I think the one basis for hope here is that good AI, or well-behaved AI, will be the antidote to bad AI. We’ll be fighting this in a kind of piecemeal way all the time, the moment these things start to get out. This will just become of a piece with our growing cybersecurity concerns. Malicious code is something we have now; it already costs us billions and billions of dollars a year to safeguard against it.\n\n\n\n**Eliezer:** It doesn’t scale. There’s no continuity between what you have to do to fend off little pieces of code trying to break into your computer, and what you have to do to fend off something smarter than you. 
These are totally different realms and regimes and separate magisteria—a term we all hate, but nonetheless in this case, yes, separate magisteria of how you would even start to think about the problem. We’re not going to get automatic defense against superintelligence by building better and better anti-virus software.\n\n\n\n**Sam:** Let’s just step back for a second. So we’ve talked about the AI-in-a-box scenario as being surprisingly unstable for reasons that we can perhaps only dimly conceive, but isn’t there even a scarier concern that this is just not going to be boxed anyway? That people will be so tempted to make money with their newest and greatest AlphaZeroZeroZeroNasdaq—what are the prospects that we will even be smart enough to keep the best of the best versions of almost-general intelligence in a box?\n\n\n\n**Eliezer:** I mean, I know some of the people who say they want to do this thing, and all of the ones who are not utter idiots are past the point where they would deliberately enact Hollywood movie plots. Although I am somewhat concerned about the degree to which there’s a sentiment that you need to be able to connect to the Internet so you can run your AI on Amazon Web Services using the latest operating system updates, and trying to not do that is such a supreme disadvantage in this environment that you might as well be out of the game. I don’t think that’s *true*, but I’m worried about the sentiment behind it.\n\n\nBut the problem as I see it is… Okay, there’s a big big problem and a little big problem. The big big problem is, “Nobody knows how to make the nice AI.” You ask people how to do it, they either don’t give you any answers or they give you answers that I can shoot down in 30 seconds as a result of having worked in this field for longer than five minutes.\n\n\nIt doesn’t matter how good their intentions are. It doesn’t matter if they don’t want to enact a Hollywood movie plot. They don’t know how to do it. Nobody knows how to do it. 
There’s no point in even talking about the arms race if the arms race is between a set of unfriendly AIs with no friendly AI in the mix.\n\n\nThe *little* big problem is the arms race aspect, where maybe DeepMind wants to build a nice AI, maybe China is being responsible because they understand the concept of stability, but Russia copies China’s code and Russia takes off the safeties. That’s the little big problem, which is still a very large problem.\n\n\n\n**Sam:** Yeah. I mean, most people think the real problem is human: malicious use of powerful AI that is safe. “Don’t give your AI to the next Hitler and you’re going to be fine.”\n\n\n\n**Eliezer:** They’re just wrong. They’re just wrong as to where the problem lies. They’re looking in the wrong direction and ignoring the thing that’s actually going to kill them.\n\n\n\n\n---\n\n\n\n\n### 6. Accidental AI, mindcrime, and MIRI ([1:34:30](https://overcast.fm/+Ic2hwsH2U/1:34:30))\n\n\n\n\n---\n\n\n**Sam:** To be even more pessimistic for a second, I remember at that initial conference in Puerto Rico, there was this researcher—who I have not paid attention to since, but he seemed to be in the mix—I think his name was Alexander Wissner-Gross—and he seemed to be arguing in his presentation at that meeting that this would very likely emerge organically, already in the wild, very likely in financial markets. We would be put so many AI resources into the narrow paperclip-maximizing task of making money in the stock market that, by virtue of some quasi-Darwinian effect here, this will just knit together on its own online and the first general intelligence we’ll discover will be something that will be already out in the wild. Obviously, that does not seem ideal, but does that seem like a plausible path to developing something general and smarter than ourselves, or does that just seem like a fairy tale?\n\n\n\n**Eliezer:** More toward the fairy tale. 
It seems to me to be only slightly more reasonable than the old theory that if you got dirty shirts and straw, they would spontaneously generate mice. People didn’t understand mice, so as far as they knew, mice were a kind of thing that dirty shirts and straw could generate; but they’re not. And I similarly think that you would need a very vague model of intelligence, a model with no gears and wheels inside it, to believe that the equivalent of dirty shirts and straw generates it first, as opposed to people who have gotten some idea of what the gears and wheels are and are deliberately building the gears and wheels.\n\n\nThe reason why it’s slightly more reasonable than the dirty shirts and straw example is that maybe it is indeed true that if you just have people pushing on narrow AI for another 10 years past the point where AGI would otherwise become possible, they eventually just sort of wander into AGI. But I think that that happens 10 years later in the natural timeline than AGI put together by somebody who actually is trying to put together AGI and has the best theory out of the field of the contenders, or possibly just the most vast quantities of brute force, à la Google’s [tensor chips](https://en.wikipedia.org/wiki/Tensor_processing_unit). I think that it gets done on purpose 10 years before it would otherwise happen by accident.\n\n\n\n**Sam:** Okay, so there’s I guess just one other topic here that I wanted to touch on before we close on discussing your book, which is not narrowly focused on this: this idea that consciousness will emerge at some point in our developing intelligent machines. Then we have the additional ethical concern that we could be building machines that can suffer, or building machines that can simulate suffering beings in such a way as to actually make suffering beings suffer in these simulations. 
We could be essentially creating hells and populating them.\n\n\nThere’s no barrier to thinking about this being not only possible, but likely to happen, because again, we’re just talking about the claim that consciousness arises as an emergent property of some information-processing system and that this would be substrate-independent. Unless you’re going to claim (1) that consciousness does not arise on the basis of anything that atoms do—it has some other source—or (2) those atoms have to be the wet atoms in biological substrate and they can’t be *in silico*. Neither of those claims is very plausible at this point scientifically.\n\n\nSo then you have to imagine that as long as we just keep going, keep making progress, we will eventually build, whether by design or not, systems that not only are intelligent but are conscious. And then this opens a category of malfeasance that you or someone in this field has dubbed [mindcrime](https://arbital.com/p/mindcrime/). What is mindcrime? And why is it so difficult to worry about?\n\n\n\n**Eliezer:** I think, by the way, that that’s a pretty terrible term. (*laughs*) I’m pretty sure I wasn’t the one who invented it. I am the person who invented some of these terrible terms, but not that one in particular.\n\n\nFirst, I would say that my general hope here would be that as the result of building an AI whose design and cognition flows in a sufficiently narrow channel that you can understand it and make strong statements about it, you are also able to look at that and say, “It seems to me pretty unlikely that this is conscious or that if it is conscious, it is suffering.” I realize that this is a sort of high bar to approach.\n\n\nThe main way in which I would be worried about conscious systems emerging within the system without that happening on purpose would be if you have a smart general intelligence and it is trying to model humans. 
We know humans are conscious, so the computations that you run to build very accurate predictive models of humans are among the parts that are most likely to end up being conscious without somebody having done that on purpose.\n\n\n\n**Sam:** Did you see the *Black Mirror* episode that basically modeled this?\n\n\n\n**Eliezer:** I haven’t been watching *Black Mirror*, sorry. (*laughs*)\n\n\n\n**Sam:** You haven’t been?\n\n\n\n**Eliezer:** I haven’t been, nope.\n\n\n\n**Sam:** They’re surprisingly uneven. Some are great, and some are really not great, but there’s one episode where—and this is a spoiler alert, if you’re watching *Black Mirror* and you don’t want to hear any punch lines then tune out here—but there’s one episode which is based on this notion that basically you just see these people living in this dystopian world of total coercion where they’re just assigned, through this lottery, dates that go well or badly. You see the dating life of these people going on and on, where they’re being forced by some algorithm to get together or break up.\n\n\n\n**Eliezer:** And let me guess, this is the future’s OkCupid trying to determine good matches?\n\n\n\n**Sam:** Exactly, yes.\n\n\n\n**Eliezer:** (*laughs*)\n\n\n\n**Sam:** They’re just simulated minds in a dating app that’s being optimized for real people who are outside holding the phone, but yeah. The thing you get is that all of these conscious experiences have been endlessly imposed on these people in some hellscape of our devising.\n\n\n\n**Eliezer:** That’s actually a surprisingly good plot, in that it doesn’t just assume that the programmers are being completely chaotic and stupid and randomly doing the premise of the plot. Like, there’s actually a reason why the AI is simulating all these people, so good for them, I guess.\n\n\nAnd I guess that does get into the thing I was going to say, which is that I’m worried about minds being embedded because they are being used predictively, to predict humans. 
That is the obvious reason why that would happen without somebody intending it. Whereas endless dystopias don’t seem to me to have any use to a paperclip maximizer.\n\n\n\n**Sam:** Right. All right, so there’s undoubtedly much more to talk about here. I think we’re getting up on the two-hour mark here, and I want to touch on your new book, which as I said I’m halfway through and finding very interesting.\n\n\n\n**Eliezer:** If I can take a moment for a parenthetical before then, sorry?\n\n\n\n**Sam:** Sure, go for it.\n\n\n\n**Eliezer:** I just wanted to say that thanks mostly to the cryptocurrency boom—go figure, a lot of early investors in cryptocurrency were among our donors—the Machine Intelligence Research Institute is no longer strapped for cash, so much as it is strapped for engineering talent. (*laughs*)\n\n\n\n**Sam:** Nice. That’s a good problem to have.\n\n\n\n**Eliezer:** Yeah. If anyone listening to this is a brilliant computer scientist who wants to work on more interesting problems than they’re currently working on, and especially if you are already oriented to these issues, please consider going to [intelligence.org/engineers](https://intelligence.org/engineers) if you’d like to work for our nonprofit.\n\n\n\n**Sam:** Let’s say a little more about that. I will have given a bio for you in the introduction here, but the Machine Intelligence Research Institute (MIRI) is an organization that you co-founded, which you’re still associated with. Do you want to say what is happening there and what jobs are on offer?\n\n\n\n**Eliezer:** Basically, it’s the original AI alignment organization that, especially today, works primarily on the technical parts of the problem and the technical issues. 
Previously, it has been working mainly on a more pure theory approach, but now that narrow AI has gotten powerful enough, people (not just us but elsewhere, like DeepMind) are starting to take shots at, “With current technology, what setups can we do that will tell us something about how to do this stuff?” So the technical side of AI alignment is getting a little bit more practical. I’m worried that it’s not happening fast enough, but, well, if you’re worried about that sort of thing, what one does is add funding and especially add smart engineers.\n\n\n\n**Sam:** Do you guys collaborate with any of these companies doing the work? Do you have frequent contact with DeepMind or Facebook or anyone else?\n\n\n\n**Eliezer:** I mean, the people in AI alignment all go to the same talks, and I’m sure that the people who do AI alignment at DeepMind talk to DeepMind. Sometimes we’ve been known to talk to the upper people at DeepMind, and DeepMind is in the same country as the Oxford Future of Humanity Institute. So bandwidth here might not be really optimal, but it’s certainly not zero.\n\n\n\n\n---\n\n\n\n\n### 7. Inadequate equilibria ([1:44:40](https://overcast.fm/+Ic2hwsH2U/1:44:40))\n\n\n\n\n---\n\n\n**Sam:** Okay, so your new book—again, the title is [*Inadequate Equilibria: Where and How Civilizations Get Stuck*](https://equilibriabook.com). That is a title that needs some explaining. What do you mean by “inadequate”? What do you mean by “equilibria”? 
And how does this relate to civilizations getting stuck?\n\n\n\n**Eliezer:** So, one way to look at the book is that it’s about how you can get crazy, stupid, evil large systems without any of the people inside them being crazy, evil, or stupid.\n\n\nI think that a lot of people look at various aspects of the dysfunction of modern civilization and they sort of hypothesize evil groups that are profiting from the dysfunction and sponsoring the dysfunction; and if only we defeated these evil people, the system could be rescued. And the truth is more complicated than that. But what are the details? The details matter a lot. How do you have systems full of nice people doing evil things?\n\n\n\n**Sam:** Yeah. I often reference this problem by citing the power of incentives, but there are many other ideas here which are very useful to think about, which capture what we mean by the power of incentives.\n\n\nThere are a few concepts here that we should probably mention. What is a coordination problem? This is something you reference in the book.\n\n\n\n**Eliezer:** A coordination problem is where there’s a better way to do it, but you have to change more than one thing at a time. So an example of a problem is: Let’s say you have Craigslist, which is one system where buyers and sellers meet to buy and sell used things within a local geographic area. Let’s say that you have an alternative to Craigslist, and your alternative is Danslist, and Danslist is genuinely better. (Let’s not worry for a second about how many startups think that without it being true; suppose it’s genuinely better.)\n\n\nAll of the sellers on Craigslist want to go someplace that there’s buyers. All of the buyers on Craigslist want to go someplace that there’s sellers. How do you get your new system started when it can’t get started by one person going on to Danslist and two people going on to Danslist? 
There’s no motive for them to go there until there’s already a bunch of people on Danslist.\n\n\nAn awful lot of times, when you find a system that is stuck in an evil space, what’s going on with it is that for it to move out of that space, more than one thing inside it would have to change at a time. So there’s all these nice people inside it who would like to be in a better system, but everything they could locally do on their own initiative is not going to fix the system, and it’s going to make things worse for them.\n\n\nThat’s the kind of problem that scientists have with trying to get away from the journals that are just ripping them off. They’re starting to move away from those journals, but journals have prestige based on the scientists that publish there and the other scientists that cite them, and if you just start this one new journal all by yourself and move there all by yourself, it has a low [impact factor](https://en.wikipedia.org/wiki/Impact_factor). So everyone’s got to move simultaneously. That’s how the scam went on for 10 years. 10 years is a long time, but they couldn’t all jump to the new system because they couldn’t jump one at a time.\n\n\n\n**Sam:** Right. The problem is that the world is organized in such a way that it is rational for each person to continue to behave the way he or she is behaving in this highly suboptimal way, given the way everyone else is behaving. And to change your behavior by yourself isn’t sufficient to change the system, and is therefore locally irrational, because your life will get worse if you change by yourself. Everyone has to coordinate their changing so as to move to some better equilibrium.\n\n\n\n**Eliezer:** That’s one of the fundamental foundational ways that systems can get stuck. There are others.\n\n\n\n**Sam:** The example that I often use when talking about problems of this sort is life in a maximum-security prison, which is as perversely bad as one can imagine. 
The incentives are aligned in such a way that no matter how good you are, if you’re put into a maximum-security prison, it is only rational for you to behave terribly and unethically and in such a way as to guarantee that this place is far more unpleasant than it need be, just because of how things are structured.\n\n\nOne example that I’ve used, and that people are familiar with at this point from having read books and seen movies that depict this more or less accurately: whether or not you’re a racist, your only rational choice, apparently, is to join a gang that is aligned along the variable of race. And if you fail to do this, you’ll be preyed upon by everyone. So if you’re a white guy, you have to join the white Aryan neo-Nazi gang. If you’re a black guy, you have to join the black gang. Otherwise, you’re just in the middle of this war of all against all. And there’s no way for you, based on your ethical commitment to being non-racist, to change how this is functioning.\n\n\nAnd we’re living in a similar kind of prison, of sorts, when you just look at how non-optimal many of these attractor states are that we are stuck in civilizationally.\n\n\n\n**Eliezer:** Parenthetically, I do want to be slightly careful about using the word “rational” to describe the behavior of people stuck in the system, because I consider that to be a very powerful word. It’s possible that if they were all *really* rational and had [common knowledge](https://en.wikipedia.org/wiki/Common_knowledge_(logic)) of rationality, they would be able to [solve the coordination problem](https://intelligence.org/2017/10/22/fdt/). But humanly speaking—not so much in terms of ideal rationality, but in terms of what people can actually do and the options they actually have—their best choice is still pretty bad systemically.\n\n\n\n**Sam:** Yeah. So what do you do in this book? How would you summarize your thesis? How do we move forward? 
Is there anything to do, apart from publicizing the structure of this problem?\n\n\n\n**Eliezer:** It’s not really a very hopeful book in that regard. It’s more about how to predict which parts of society will perform poorly to the point where you as an individual can manage to do better for yourself, really. One of the examples I give in the book is that my wife has Seasonal Affective Disorder, and she cannot be treated by the tiny little light boxes that your doctor tries to prescribe. So I’m like, “Okay, if the sun works, there’s some amount of light that works, how about if I just try stringing up the equivalent of 100 light bulbs in our apartment?”\n\n\nNow, when you have an idea like this, somebody might ask, “Well, okay, but you’re not thinking in isolation. There’s a civilization around you. If this works, shouldn’t there be a record of it? Shouldn’t a researcher have investigated it already?” There’s literally probably more than 100 million people around the world, especially in the extreme latitudes, who have some degree of Seasonal Affective Disorder, and some of it’s pretty bad. That means that there’s a kind of profit, a kind of energy gradient that seems like it could be traversable if solving the problem was as easy as putting up a ton of light bulbs in your apartment. Wouldn’t some enterprising researcher have investigated this already? Wouldn’t the results be known?\n\n\nAnd the answer is, as far as I can tell, no. It hasn’t been investigated, the results aren’t known, and when I tried putting up a ton of light bulbs, it seems to have worked pretty well for my wife. Not perfectly, but a lot better than it used to be.\n\n\nSo why isn’t this one of the first things you find when you Google “What do I do about Seasonal Affective Disorder when the light box doesn’t work?” And that’s what takes this sort of long story, that’s what takes the analysis. 
That’s what takes the thinking about the journal system and what the funding sources are for people investigating Seasonal Affective Disorder, and what kind of publications get the most attention. And whether the barrier of needing to put up 100 light bulbs in a bunch of different apartments for people in the controlled study—which would be difficult to blind, except maybe by using a lot fewer light bulbs—whether the details of having to adapt light bulbs to every house which is different is enough of an obstacle to prevent any researcher from ever investigating this obvious-seeming solution to a problem that probably hundreds of millions of people have, and maybe 50 million people have very severely? As far as I can tell, the answer is yes.\n\n\nAnd this is the kind of thinking that does not enable you to save civilization. If there was a way to make an enormous profit by knowing this, the profit would probably already be taken. If it was possible for one person to fix the problem, it would probably already be fixed. But you, personally, can fix your wife’s crippling Seasonal Affective Disorder by doing something that science knows not, because of an inefficiency in the funding sources for the researchers.\n\n\n\n**Sam:** This is really the global problem we need to figure out how to tackle, which is to recognize those points on which incentives are perversely misaligned so as to guarantee needless suffering or complexity or failure to make breakthroughs that would raise our quality of life immensely. So identify those points and then realign the incentives somehow.\n\n\nThe market is in many respects good at this, but there are places where it obviously fails. We don’t have many tools to apply the right pressure here. 
You have the profit motive in markets—so you can either get fantastically rich by solving some problem, or not—or we have governments that can decide, “Well, this is a problem that markets can’t solve because the wealth isn’t there to be gotten, strangely, and yet there’s an immense amount of human suffering that would be alleviated if you solved this problem. You can’t get people for some reason to pay for the alleviation of that suffering, reliably.” But apart from markets and governments, are there any other large hammers to be wielded here?\n\n\n\n**Eliezer:** I mean, sort of crowdfunding, I guess, although the hammer currently isn’t very large. But mostly, like I said, this book is about where you can do better individually or in small groups and when you shouldn’t assume that society knows what it’s doing; and it doesn’t have a bright message of hope about how to fix things.\n\n\nI’m sort of prejudiced personally over here, because I think that the artificial general intelligence timeline is likely to run out before humanity gets that much better at solving inadequacy, systemic problems in general. I don’t really see human nature or even human practice changing by that much over the amount of time we probably have left.\n\n\nEconomists already know about market failures. That’s a concept they already have. They already have the concept of government trying to correct it. It’s not obvious to me that there is a quantum leap to be made staying within just those dimensions of thinking about the problem.\n\n\nIf you ask me, “Hey, Eliezer: it’s five years in the future, there’s still no artificial general intelligence, and a great leap forward has occurred in people to deal with these types of systemic issues. How did that happen?” Then my guess would be something like Kickstarter, but much better, that turned out to enable people in large groups to move forward when none of them could move forward individually. 
Something like the group movements that scientists made without all that much help from the government (although there was help from funders changing their policies) to jump to new journals all at the same time, and get partially away from the Elsevier closed-source journal scam. Maybe there’s something brilliant that Facebook does—with machine learning, even. They get better at showing people things that are solutions to their coordination problems; they’re better at routing those around when they exist, and people learn that these things work and they jump using them simultaneously. And by these means, voters start to elect politicians who are not nincompoops, as opposed to choosing whichever nincompoop on offer is most appealing.\n\n\nBut this is a fairy tale. This is not a prediction. This is, “If you told me that somehow this had gotten significantly better in five years, what happened?” This is me making up what might have happened.\n\n\n\n\n---\n\n\n\n\n### 8. Rapid capability gain in AGI ([1:59:02](https://overcast.fm/+Ic2hwsH2U/1:59:02))\n\n\n\n\n---\n\n\n**Sam:** Right. Well, I don’t see how that deals with the main AI concern we’ve been talking about. I can see some shift, or some solution to a massive coordination problem, politically or in just the level of widespread human behavior—let’s say our use of social media and our vulnerability to fake news and conspiracy theories and other crackpottery, let’s say we find some way to all shift our information diet and our expectations and solve a coordination problem that radically cleans up our global conversation. I can see that happening.\n\n\nBut when you’re talking about dealing with the alignment problem, you’re talking about changing the behavior of a tiny number of people comparatively. I mean, I don’t know what it is. What’s the community of AI researchers now? It’s got to be numbered really in the hundreds when you’re talking about working on AGI. 
But what will it be when we’re close to the finish line? How many minds would have to suddenly change and become immune to the wrong economic incentives to coordinate the solution there? What are we talking about, 10,000 people?\n\n\n\n**Eliezer:** I mean, first of all, I don’t think we’re looking at an economic problem. I think that artificial general intelligence capabilities, once they exist, are going to scale too fast for that to be a useful way to look at the problem. AlphaZero going from 0 to 120 mph in four hours or a day—that is not out of the question here. And even if it’s a year, a year is still a very short amount of time for things to scale up. I think that the main thing you should be trying to do with the first artificial general intelligence ever built is [a very narrow, non-ambitious task](https://arbital.com/p/minimality_principle/) that shuts down the rest of the arms race by putting off-switches in all the GPUs and shutting them down if anyone seems to be trying to build an overly artificially intelligent system.\n\n\nBecause I don’t think that the AI that you have built narrowly enough that you understood what it was doing is going to be able to defend you from arbitrary unrestrained superintelligences. The AI that you have built understandably enough to be good, and that has not done fully general recursive self-improvement, is not strong enough to solve the whole problem. It’s not strong enough to have everyone else going off and developing their own artificial general intelligences after that without that automatically destroying the world.\n\n\n\n**Sam:** We’ve been speaking for now over two hours; what can you say to someone who has followed us this long, but for whatever reason the argument we’ve made has not summed to being emotionally responsive to the noises you just made? Is there anything that can be briefly said so as to give them pause?\n\n\n\n**Eliezer:** I’d say this is a thesis of capability gain. 
This is a thesis of how fast artificial general intelligence gains in power once it starts to be around, whether we’re looking at 20 years (in which case this scenario does not happen) or whether we’re looking at something closer to the speed at which Go was developed (in which case it does happen) or the speed at which AlphaZero went from 0 to 120 and better-than-human (in which case there’s a bit of an issue that you better prepare for in advance, because you’re not going to have very long to prepare for it once it starts to happen).\n\n\nAnd I would say this is a computer science issue. This is not here to be part of a narrative. This is not here to fit into some kind of grand moral lesson that I have for you about how civilization ought to work. I think that this is just the way the background variables are turning up.\n\n\nWhy do I think that? It’s not that simple. I mean, I think a lot of people who see the power of intelligence will already find that pretty intuitive, but if you don’t, then you should read my paper [*Intelligence Explosion Microeconomics*](https://intelligence.org/files/IEM.pdf) about returns on cognitive reinvestment. It goes through things like the evolution of human intelligence and how the logic of evolutionary biology tells us that when human brains were increasing in size, there were increasing marginal returns to fitness relative to the previous generations for increasing brain size. Which means that it’s not the case that as you scale intelligence, it gets harder and harder to buy. It’s not the case that as you scale intelligence, you need exponentially larger brains to get linear improvements.\n\n\nAt least something slightly like the opposite of this is true; and we can tell this by looking at the fossil record and using some logic, but that’s not simple.\n\n\n\n**Sam:** Comparing ourselves to chimpanzees works. 
We don’t have brains that are 40 times or 400 times the size of a chimpanzee’s, and yet what we’re doing—I don’t know what measure you would use, but it exceeds what they’re doing by some ridiculous factor.\n\n\n\n**Eliezer:** And I find that convincing, but other people may want additional details. And my message would be that the emergency situation is not part of a narrative. It’s not there to make the point of some kind of moral lesson. It’s my prediction as to what happens, after walking through a bunch of technical arguments as to how fast intelligence scales when you optimize it harder.\n\n\nAlphaZero seems to me like a genuine case in point. That is showing us that capabilities that in humans require a lot of tweaking, and that human civilization built up over centuries of masters teaching students how to play Go, and that no individual human could invent in isolation… Even the most talented Go player, if you plopped them down in front of a Go board and gave them only a day, would play garbage. If they had to invent all of their own Go strategies without being part of a civilization that played Go, they would not be able to defeat modern Go players at all. AlphaZero blew past all of that in less than a day, starting from scratch, without looking at any of the games that humans played, without looking at any of the theories that humans had about Go, without looking at any of the accumulated knowledge that we had, and without very much in the way of special-case code for Go rather than chess—in fact, zero special-case code for Go rather than chess. 
And that in turn is an example that refutes another thesis about how artificial general intelligence develops slowly and gradually, which is: “Well, it’s just one mind; it can’t beat our whole civilization.”\n\n\nI would say that there’s a bunch of technical arguments which you walk through, and then after walking through these arguments you assign a bunch of probability, maybe not certainty, to artificial general intelligence that scales in power very fast—a year or less. And in this situation, if alignment is technically difficult, if it is easy to screw up, if it requires a bunch of additional effort—in this scenario, if we have an arms race between people who are trying to get their AGI first by doing a little bit less safety because from their perspective that only drops the probability a little; and then someone else is like, “Oh no, we have to keep up. We need to strip off the safety work too. Let’s strip off a bit more so *we* can get in the front.”—if you have this scenario, and by a miracle the first people to cross the finish line have actually not screwed up and they actually have a functioning powerful artificial general intelligence that is able to prevent the world from ending, you have to prevent the world from ending. You are in a terrible, terrible situation. You’ve got your one miracle. And this follows from the rapid capability gain thesis and at least the current landscape for how these things are developing.\n\n\n\n**Sam:** Let’s just linger on this point for a second. This fast takeoff—is this assuming recursive self improvement? And how fringe an idea is this in the field? 
Are most people who are thinking about this assuming (for good reason or not) that a slow takeoff is far more likely, over the course of many, many years, and that the analogy to AlphaZero is not compelling?\n\n\n\n**Eliezer:** I think they are too busy explaining why current artificial intelligence methods do not knowably, quickly, immediately give us artificial general intelligence—from which they then conclude that it is 30 years off. They have not said, “And then once we get there, it’s going to develop much more slowly than AlphaZero, and here’s why.” There isn’t a thesis to that effect that I’ve seen from artificial intelligence people. Robin Hanson had a thesis to this effect, and there was this mighty debate on our blog between Robin Hanson and myself that was published as the [*AI-Foom Debate*](https://intelligence.org/ai-foom-debate/) mini-book. And I have claimed recently [on Facebook](https://www.facebook.com/yudkowsky/posts/10155992246384228) that now that we’ve seen AlphaZero, AlphaZero seems like strong evidence against Hanson’s [thesis](http://www.overcomingbias.com/2014/07/30855.html) for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard.\n\n\n\n**Sam:** I’m actually going to be doing a podcast with Robin in a few weeks, a live event. So what’s the best version of his argument, and why is he wrong?\n\n\n\n**Eliezer:** Nothing can prepare you for Robin Hanson! (*laughs*)\n\n\nWell, the [argument](https://www.lesswrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity) that Hanson has given is that these systems are still immature and narrow and things will change when they get general. And my reply has been something like, “Okay, what changes your mind short of the world actually ending? 
If your theory is wrong, do we get to find out about that at all before the world ends?”\n\n\n\n**Sam:** To which he says?\n\n\n\n**Eliezer:** I don’t remember if he’s replied to that one yet.\n\n\n\n**Sam:** I’ll let Robin be Robin. Well, listen, Eliezer, it has been great to talk to you, and I’m glad we got a chance to do it at such length. And again, it does not exhaust the interest or consequence of this topic, but it’s certainly a good start for people who are new to this. Before I let you go, where should people look for you online? Do you have a preferred domain that we could target?\n\n\n\n**Eliezer:** I would mostly say [intelligence.org](https://intelligence.org). If you’re looking for me personally, [facebook.com/yudkowsky](https://facebook.com/yudkowsky), and if you’re looking for my most recent book, [equilibriabook.com](https://equilibriabook.com/).\n\n\n\n**Sam:** I’ll put links on my website where I embed this podcast. So again, Eliezer, thanks so much—and to be continued. 
I always love talking to you, and this will not be the last time, AI willing.\n\n\n\n**Eliezer:** This was a great conversation, and thank you very much for having me on.\n\n\n\nThe post [Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-03-01T01:19:11Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "dd9d15d5060db1f26cad61fa545baaf6", "title": "February 2018 Newsletter", "url": "https://intelligence.org/2018/02/25/february-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "#### Updates\n\n\n* New at IAFF: [An Untrollable Mathematician](https://agentfoundations.org/item?id=1750)\n* New at AI Impacts: [2015 FLOPS Prices](https://aiimpacts.org/2015-flops-prices/)\n* We presented “[Incorrigibility in the CIRL Framework](https://intelligence.org/2017/08/31/incorrigibility-in-cirl/)” at the AAAI/ACM [Conference on AI, Ethics, and Society](http://www.aies-conference.com).\n* From MIRI researcher Scott Garrabrant: [Sources of Intuitions and Data on AGI](https://www.lesswrong.com/posts/BibDWWeo37pzuZCmL/sources-of-intuitions-and-data-on-agi)\n\n\n#### News and links\n\n\n* In “[Adversarial Spheres](https://arxiv.org/abs/1801.02774),” Gilmer et al. 
investigate the tradeoff between test error and vulnerability to adversarial perturbations in many-dimensional spaces.\n* Recent posts on *Less Wrong*: [Critch on “Taking AI Risk Seriously”](https://www.lesswrong.com/posts/HnC29723hm6kJT7KP/critch-on-taking-ai-risk-seriously) and Ben Pace’s [background model for assessing AI x-risk plans](https://www.lesswrong.com/posts/XFpDTCHZZ4wpMT8PZ/a-model-i-use-when-making-plans-to-reduce-ai-x-risk).\n* “[Solving the AI Race](https://www.general-ai-challenge.org/ai-race)“: GoodAI is offering prizes for proposed responses to the problem that “key stakeholders, including [AI] developers, may ignore or underestimate safety procedures, or agreements, in favor of faster utilization”.\n* The Open Philanthropy Project [is hiring](https://www.openphilanthropy.org/blog/new-job-opportunities) research analysts in AI alignment, forecasting, and strategy, along with generalist researchers and operations staff.\n\n\n\nThe post [February 2018 Newsletter](https://intelligence.org/2018/02/25/february-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-02-26T02:07:18Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "eec7937ea8d3de17418c61943a553f0f", "title": "January 2018 Newsletter", "url": "https://intelligence.org/2018/01/28/january-2018-newsletter/", "source": "miri", "source_type": "blog", "text": "Our 2017 fundraiser was a [huge success](https://intelligence.org/2018/01/10/fundraising-success/), with 341 donors contributing a total of $2.5 million!\nSome of the largest donations came from Ethereum inventor Vitalik Buterin, bitcoin investors Christian Calderon and Marius van Voorden, poker players Dan Smith and Tom and Martin Crowley (as part of a [matching challenge](https://intelligence.org/2017/12/14/end-of-the-year-matching/)), and the [Berkeley Existential Risk Initiative](http://existence.org/). 
Thank you to everyone who contributed!\n\n\n#### Research updates\n\n\n* The winners of the first [AI Alignment Prize](https://www.lesswrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round) include Scott Garrabrant’s [Goodhart Taxonomy](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) and recent IAFF posts: Vanessa Kosoy’s [Why Delegative RL Doesn’t Work for Arbitrary Environments](https://agentfoundations.org/item?id=1730) and [More Precise Regret Bound for DRL](https://agentfoundations.org/item?id=1739), and Alex Mennen’s [Being Legible to Other Agents by Committing to Using Weaker Reasoning Systems](https://agentfoundations.org/item?id=1734) and [Learning Goals of Simple Agents](https://agentfoundations.org/item?id=1737).\n* New at AI Impacts: [Human-Level Hardware Timeline](https://aiimpacts.org/human-level-hardware-timeline/); [Effect of Marginal Hardware on Artificial General Intelligence](https://aiimpacts.org/effect-of-marginal-hardware-on-artificial-general-intelligence/)\n* We’re hiring for a new position at MIRI: [ML Living Library](https://intelligence.org/2017/12/12/ml-living-library/), a specialist on the newest developments in machine learning.\n\n\n#### General updates\n\n\n* From Eliezer Yudkowsky: [A Reply to Francois Chollet on Intelligence Explosion](https://intelligence.org/2017/12/06/chollet/).\n* Counterterrorism experts Richard Clarke and R. P. 
Eddy [profile Yudkowsky](http://lukemuehlhauser.com/richard-clarke-and-r-p-eddy-on-ai-risk/) in their new book *Warnings: Finding Cassandras to Stop Catastrophes*.\n* There have been several recent blog posts recommending MIRI as a donation target: from [Ben Hoskin](http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/), [Zvi Mowshowitz](https://thezvi.wordpress.com/2017/12/17/i-vouch-for-miri/), [Putanumonit](https://putanumonit.com/2017/12/10/worried-about-ai/), and the Open Philanthropy Project’s [Daniel Dewey and Nick Beckstead](https://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2017).\n\n\n#### News and links\n\n\n* A generalization of the AlphaGo algorithm, [AlphaZero](https://arxiv.org/abs/1712.01815), achieves rapid superhuman performance on Chess and Shogi.\n* Also from Google DeepMind: “[Specifying AI Safety Problems in Simple Environments](https://deepmind.com/blog/specifying-ai-safety-problems/).”\n* Viktoriya Krakovna [reports](https://vkrakovna.wordpress.com/2017/12/30/nips-2017-report/) on NIPS 2017: “This year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. 
[…] There was a lot of great content on the long-term side, including several oral / spotlight presentations and the [Aligned AI workshop](https://nips.cc/Conferences/2017/Schedule?showEvent=8794).”\n* 80,000 Hours [interviews Phil Tetlock](https://80000hours.org/2017/11/prof-tetlock-predicting-the-future/) and investigates [the most important talent gaps](https://80000hours.org/2017/11/talent-gaps-survey-2017/) in the EA community.\n* From Seth Baum: “[A Survey of AGI Projects for Ethics, Risk, and Policy](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741).” And from the Foresight Institute: “[AGI: Timeframes & Policy](https://foresight.org/publications/AGI-Timeframes&PolicyWhitePaper.pdf).”\n* The Future of Life Institute is collecting proposals for a second round of AI safety grants, [due February 18](https://futureoflife.org/2017/12/20/2018-international-ai-safety-grants-competition/).\n\n\n\nThe post [January 2018 Newsletter](https://intelligence.org/2018/01/28/january-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-01-28T20:25:59Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "4733922e45952d8fae73916bfd6cb082", "title": "Fundraising success!", "url": "https://intelligence.org/2018/01/10/fundraising-success/", "source": "miri", "source_type": "blog", "text": "Our [2017 fundraiser](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) is complete! We’ve had an incredible month, with, by far, our largest fundraiser success to date. More than 300 distinct donors gave just over **$2.5M**[1](https://intelligence.org/2018/01/10/fundraising-success/#footnote_0_17441 \"The exact total might increase slightly over the coming weeks as we process donations initiated in December 2017 that arrive in January 2018.\"), doubling our third fundraising target of $1.25M. 
Thank you!\n\n\n \n\n\n\n\n\n[Target 1 \n$625,000 \nCompleted](https://intelligence.org/feed/?paged=19#fundraiserModal)[Target 2 \n$850,000 \nCompleted](https://intelligence.org/feed/?paged=19#fundraiserModal)[Target 3 \n$1,250,000 \nCompleted](https://intelligence.org/feed/?paged=19#fundraiserModal)\n\n### $2,504,625 raised in total!\n\n\n##### 358 donors contributed\n\n\n\n\n\n×\n### Target Descriptions\n\n\n\n\n* [Target 1](https://intelligence.org/feed/?paged=19#level1)\n* [Target 2](https://intelligence.org/feed/?paged=19#level2)\n* [Target 3](https://intelligence.org/feed/?paged=19#level3)\n\n\n\n\n$625k: Basic target\n-------------------\n\n\nAt this funding level, we’ll be in a good position to pursue our mainline hiring goal in 2018, although we will likely need to halt or slow our growth in 2019.\n\n\n\n\n$850k: Mainline-growth target\n-----------------------------\n\n\nAt this level, we’ll be on track to fully fund our planned expansion over the next few years, allowing us to roughly double the number of research staff over the course of 2018 and 2019.\n\n\n\n\n$1.25M: Rapid-growth target\n---------------------------\n\n\nAt this funding level, we will be on track to maintain a 1.5-year runway even if our hiring proceeds a fair amount faster than our mainline prediction. We’ll also have greater freedom to pay higher salaries to top-tier candidates as needed.\n\n\n\n\n\n\n\n\n\n\n \n\n\nOur largest donation came toward the very end of the fundraiser in the form of an Ethereum donation worth $763,970 from Vitalik Buterin, the inventor and co-founder of Ethereum. 
Vitalik’s donation represents the third-largest single contribution we’ve received to date, after a [$1.25M grant disbursement from the Open Philanthropy Project](https://intelligence.org/2017/11/08/major-grant-open-phil/) in October, and a [$1.01M Ethereum donation](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/) in May.\n\n\nIn [our mid-fundraiser update](https://intelligence.org/2017/12/14/end-of-the-year-matching/), we noted that MIRI was included in a large [Matching Challenge](https://2017charitydrive.com/): In partnership with Raising For Effective Giving, professional poker players Dan Smith, Tom Crowley and Martin Crowley announced they would match all donations to MIRI and nine other organizations through the end of December. Donors helped get us to our matching cap of $300k within 2 weeks, resulting in a $300k match from Dan, Tom, and Martin (thanks guys!). Other big winners from the Matching Challenge, which raised $4.5m (match included) in less than 3 weeks, include GiveDirectly ($588k donated) and the Good Food Institute ($416k donated).\n\n\nOther big donations we received in December included:\n\n\n* $367,575 from Christian Calderon\n* $100,000 from the [Berkeley Existential Risk Institute](https://existence.org)\n* $59,251 from Marius van Voorden\n\n\nWe also received substantial support from medium-sized donors: a total of $631,595 from the 42 donors who gave $5,000–$50,000 and a total of $113,556 from the 75 who gave $500–$5,000 ([graph](https://goo.gl/KgtrBH)). We also are grateful to donors who leveraged their employers’ matching generosity, donating a combined amount of over $100,000 during December.\n\n\n66% of funds donated during this fundraiser were in the form of cryptocurrency (mainly Bitcoin and Ethereum), including Vitalik, Marius, and Christian’s donations, along with Dan, Tom, and Martin’s matching contributions.\n\n\nOverall, we’ve had an amazingly successful month and a remarkable year! 
I’m extremely grateful for all the support we’ve received, and excited about the opportunity this creates for us to grow our research team more quickly. For details on our growth plans, see our [fundraiser post](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#2).\n\n\n\n\n---\n\n1. The exact total might increase slightly over the coming weeks as we process donations initiated in December 2017 that arrive in January 2018.\n\nThe post [Fundraising success!](https://intelligence.org/2018/01/10/fundraising-success/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2018-01-11T00:13:07Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "7563f8438a6de690eb65d5fd056f8786", "title": "End-of-the-year matching challenge!", "url": "https://intelligence.org/2017/12/14/end-of-the-year-matching/", "source": "miri", "source_type": "blog", "text": "**Update 2017-12-27:** We’ve blown past our 3rd and final target, and reached the matching cap of $300,000 for the Matching Challenge! Thanks so much to everyone who supported us!\n\n\nAll donations made before 23:59 PST on Dec 31st will continue to be counted towards our fundraiser total. 
The fundraiser total includes projected matching funds from the Challenge.\n\n\n\n\n---\n\n\n  \n\n \n\n\nProfessional poker players Martin Crowley, Tom Crowley, and Dan Smith, in partnership with Raising for Effective Giving, have just announced a **[$1 million Matching Challenge](http://www.2017charitydrive.com)** and included MIRI among the 10 organizations they are supporting!\n\n\nGive to any of the organizations involved before noon (PST) on December 31 for your donation to be eligible for a dollar-for-dollar match, up to the $1 million limit!\n\n\nThe eligible organizations for matching are:\n\n\n* **Animal welfare** — Effective Altruism Funds’ animal welfare fund, The Good Food Institute\n* **Global health and development** —Against Malaria Foundation, Schistosomiasis Control Initiative, Helen Keller International’s vitamin A supplementation program, GiveDirectly\n* **Global catastrophic risk**— MIRI\n* **Criminal justice reform**— Brooklyn Community Bail Fund, Massachusetts Bail Fund, Just City Memphis\n\n\nThe Matching Challenge’s website lists two options for MIRI donors to get matched: (1) donating on [2017charitydrive.com](https://2017charitydrive.com), or (2) [donating directly on MIRI’s website](https://intelligence.org/donate/) and sending the receipt to [receiptsforcharity@gmail.com](mailto:receiptsforcharity@gmail.com). 
We recommend option 2, particularly for US tax residents (because MIRI is a 501(c)(3) organization) and those looking for a wider array of payment methods.\n\n\n \n\n\nIn other news, we’ve hit our first [fundraising target](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) ($625,000)!\n\n\nWe’re also happy to announce that we’ve received a $368k bitcoin donation from Christian Calderon, a cryptocurrency enthusiast, and also a donation worth $59k from early bitcoin investor Marius van Voorden.\n\n\nIn total, so far, we’ve received donations valued $697,638 from 137 distinct donors, 76% of it in the form of cryptocurrency (48% if we exclude Christian’s donation). Thanks as well to Jacob Falkovich for his [fundraiser/matching post](https://putanumonit.com/2017/12/10/worried-about-ai/) whose opinion distribution curves plausibly raised over $27k for MIRI this week, including his match.\n\n\nOur [funding drive](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) will be continuing through the end of December, along with the Matching Challenge. Current progress (updated live):\n\n\n \n\n\n\n\n---\n\n\n \n\n\n\n\n\n\n\n[Donate](https://intelligence.org/donate/#donation-methods)\n-----------------------------------------------------------\n\n\n\n\n\n\n---\n\n\n \n\n\n**Correction December 17:** I previously listed GiveWell as one of the eligible organizations for matching, which is not correct.\n\n\nThe post [End-of-the-year matching challenge!](https://intelligence.org/2017/12/14/end-of-the-year-matching/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-12-15T01:45:40Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "af222b17c229633040b575a245982a73", "title": "ML Living Library Opening", "url": "https://intelligence.org/2017/12/12/ml-living-library/", "source": "miri", "source_type": "blog", "text": "**Update Jan. 
2021**: We’re no longer seeking applications for this position.\n\n\n\n\n---\n\n\nThe Machine Intelligence Research Institute is looking for a very specialized autodidact to keep us up to date on developments in machine learning—a “living library” of new results.\n\n\nML is a fast-moving and diverse field, making it a challenge for any group to stay updated on all the latest and greatest developments. To support our [AI alignment](https://intelligence.org/2017/04/12/ensuring/) research efforts, we want to hire someone to read every interesting-looking paper about AI and machine learning, and keep us abreast of noteworthy developments, including new techniques and insights.\n\n\nWe expect that this will sound like a very fun job to a lot of people! However, this role is important to us, and we need to be appropriately discerning—we do not recommend applying if you do not already have a proven ability in this or neighboring domains.\n\n\nOur goal is to hire full-time, ideally for someone who would be capable of making a multi-year commitment—we intend to pay you to become an expert on the cutting edge of machine learning, and don’t want to make the human capital investment unless you’re interested in working with us long-term.\n\n\n \n\n\n\nAbout the Role\n--------------\n\n\nWe’d like to fill this position as soon as we find the right candidate. Our hiring process tends to involve a lot of sample tasks and probationary hires, so if you are interested, we encourage you to apply early.\n\n\nThis is a new position for a kind of work that isn’t standard. 
Although we hope to find someone who can walk in off the street and perform well, we’re also interested in candidates who think they might take three months of training to meet the requirements.\n\n\nExamples of the kinds of work you’ll do:\n\n\n* Read through archives and journals to get a sense of literally every significant development in the field, past and present.\n* Track general trends in the ML space—e.g., “Wow, there sure is a lot of progress being made on Dota 2!”—and let us know about them.\n* Help an engineer figure out why their code isn’t working—e.g., “Oh, you forgot the pooling layer in your convolutional neural network.”\n* Answer/research MIRI staff questions about ML techniques or the history of the field.\n* Share important developments proactively; researchers who haven’t read the same papers as you often won’t know the right questions to ask unprompted!\n\n\n \n\n\nThe Ideal Candidate\n-------------------\n\n\nSome qualities of the ideal candidate:\n\n\n* Extensive breadth and depth of machine learning knowledge, including the underlying math.\n* Familiarity with ideas related to AI alignment.\n* Delight at the thought of getting paid to Know All The Things.\n* Programming capability—for example, you’ve replicated some ML papers.\n* Ability to deeply understand and crisply communicate ideas. Someone who just recites symbols to our team without understanding them isn’t much better than a web scraper.\n* Enthusiasm about the prospect of working at MIRI and helping advance the field’s understanding of AI alignment.\n* Residence in (or willingness to move to) the Bay Area. 
This job requires high-bandwidth communication with the research team, and won’t work as a remote position.\n* Ability to work independently with minimal supervision, and also to work in team/group settings.\n\n\n\n\n\n \n\n\nWorking at MIRI\n---------------\n\n\nWe strive to make working at MIRI a rewarding experience.\n\n\n* Modern Work Spaces — Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up workstations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.\n* Flexible Hours — We don’t have strict office hours, and we don’t limit employees’ vacation days. Our goal is to make rapid progress on our research agenda, and we would prefer that staff take a day off rather than extend tasks to fill an extra day.\n* Living in the Bay Area — MIRI’s office is located in downtown Berkeley, California. From our office, you’re a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.\n\n\n \n\n\nEEO & Employment Eligibility\n----------------------------\n\n\nMIRI is an equal opportunity employer. We are committed to making employment decisions based on merit and value. This commitment includes complying with all federal, state, and local laws. We desire to maintain a work environment free of harassment or discrimination due to sex, race, religion, color, creed, national origin, sexual orientation, citizenship, physical or mental disability, marital status, familial status, ethnicity, ancestry, status as a victim of domestic violence, age, or any other status protected by federal, state, or local laws\n\n\n \n\n\nApply\n-----\n\n\nIf interested, [**click here to apply**](https://machineintelligence.typeform.com/to/kOpn1U). 
For questions or comments, email Buck ([buck@intelligence.org](mailto:buck@intelligence.org)).\n\n\nThe post [ML Living Library Opening](https://intelligence.org/2017/12/12/ml-living-library/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-12-12T19:38:20Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "c425648cfd012664d0882bf0318788a2", "title": "A reply to Francois Chollet on intelligence explosion", "url": "https://intelligence.org/2017/12/06/chollet/", "source": "miri", "source_type": "blog", "text": "This is a reply to Francois Chollet, the inventor of the Keras wrapper for the Tensorflow and Theano deep learning systems, on his essay “[The impossibility of intelligence explosion](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec).”\n\n\nIn response to critics of his essay, Chollet tweeted:\n\n\n \n\n\n\n> If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?\n> \n> \n\n\nAnd he earlier tweeted:\n\n\n \n\n\n\n> Don’t be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views.\n> \n> \n\n\nChollet’s essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I’d consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. 
I’ve tried here to walk through some of what I’d consider the standard arguments in this debate as they bear on Chollet’s statements.\n\n\nAs a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons.\n\n\nMy reply to Chollet doesn’t try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet’s particular arguments. Even if the statement “an intelligence explosion is impossible” happens to be true, we still don’t want to accept any invalid arguments in favor of that conclusion.\n\n\nWithout further ado, here are my thoughts in response to Chollet.\n\n\n\n \n\n\n\n\n> The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time.\n> \n> \n\n\nI agree this is more or less what I meant by “seed AI” when I coined the term back in 1998. Today, nineteen years later, I would talk about a general question of “capability gain” or how the power of a cognitive system scales with increased resources and further optimization. The idea of recursive self-improvement is only one input into the general questions of capability gain; for example, we recently saw some impressively fast scaling of Go-playing ability without anything I’d remotely consider as seed AI being involved. 
That said, I think that a lot of the questions Chollet raises about “self-improvement” are relevant to capability-gain theses more generally, so I won’t object to the subject of conversation.\n\n\n \n\n\n\n> Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment \n> \n> \n\n\nA good description of a human from the perspective of a chimpanzee.\n\n\nFrom a certain standpoint, the civilization of the year 2017 could be said to have “magic” from the perspective of 1517. We can more precisely characterize this gap by saying that we in 2017 can solve problems using strategies that 1517 couldn’t recognize as a “solution” if described in advance, because our strategies depend on laws and generalizations not known in 1517. E.g., I could show somebody in 1517 a design for a compressor-based air conditioner, and they would not be able to recognize this as “a good strategy for cooling your house” in advance of observing the outcome, because they don’t yet know about the temperature-pressure relation. A fancy term for this would be “[strong cognitive uncontainability](https://arbital.com/p/strong_uncontainability/)”; a metaphorical term would be “magic” although of course we did not do anything actually supernatural. A similar but much larger gap exists between a human and a smaller brain running the previous generation of software (aka a chimpanzee).\n\n\nIt’s not exactly unprecedented to suggest that big gaps in cognitive ability correspond to big gaps in pragmatic capability to shape the environment. I think a lot of people would agree in characterizing intelligence as the Human Superpower, independently of what they thought about the intelligence explosion hypothesis.\n\n\n \n\n\n\n> — as seen in the science-fiction movie Transcendence (2014), for instance. 
\n> \n> \n\n\n\nI agree that public impressions of things are things that *someone* ought to be concerned about. If I take a ride-share and I mention that I do anything involving AI, half the time the driver says, “Oh, like Skynet!” This is an understandable reason to be annoyed. But if we’re trying to figure out the sheerly factual question of whether an intelligence explosion is possible and probable, it’s important to consider the best arguments on all sides of all relevant points, not the popular arguments. For that purpose it doesn’t matter if Deepak Chopra’s writing on quantum mechanics has a larger readership than any actual physicist.\n\n\nThankfully Chollet doesn’t spend the rest of the essay attacking Kurzweil in particular, so I’ll leave this at that.\n\n\n \n\n\n\n> The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains. \n> \n> \n\n\nI don’t see what work the word “individual” is doing within this sentence. From our perspective, it matters little whether a computing fabric is imagined to be a hundred agents or a single agency, if it seems to behave in a coherent goal-directed way as seen from outside. The pragmatic consequences are the same. 
I do think it’s fair to say that I think about “agencies” which from our outside perspective seem to behave in a coherent goal-directed way.\n\n\n \n\n\n\n> The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system — a vision of intelligence as a “brain in jar” that can be made arbitrarily intelligent independently of its situation.\n> \n> \n\n\nI’m not aware of myself or Nick Bostrom or another major technical voice in this field claiming that problem-solving can go on independently of the situation/environment.\n\n\nThat said, some systems function very well in a broad variety of structured low-entropy environments. E.g. the human brain functions much better than other primate brains in an extremely broad set of environments, including many that natural selection did not explicitly optimize for. We remain functional on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level that, for example, *induction on past experience* goes on functioning there. Now if you tossed us into a universe where the future bore no compactly describable relation to the past, we would indeed not do very well in that “situation”—but this is not pragmatically relevant to the impact of AI on our own real world, where the future does bear a relation to the past.\n\n\n \n\n\n\n> In particular, there is no such thing as “general” intelligence. 
On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems.\n> \n> \n\n\n[Scott Aaronson’s reaction](https://www.scottaaronson.com/blog/?p=3553): “Citing the ‘No Free Lunch Theorem’—i.e., the (trivial) statement that you can’t outperform brute-force search on *random* instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign.”\n\n\nIt seems worth spelling out an as-simple-as-possible special case of this point in mathy detail, since it looked to me like a central issue given the rest of Chollet’s essay. I expect this math isn’t new to Chollet, but reprise it here to establish common language and for the benefit of everyone else reading along.\n\n\n[Laplace’s Rule of Succession](https://arbital.com/p/laplace_rule_of_succession/), as invented by Thomas Bayes, gives us one simple rule for predicting future elements of a binary sequence based on previously observed elements. Let’s take this binary sequence to be a series of “heads” and “tails” generated by some sequence generator called a “coin”, not assumed to be fair. In the standard problem setup yielding the Rule of Succession, our state of prior ignorance is that we think there is some frequency \\(\\theta\\) that a coin comes up heads, and for all we know \\(\\theta\\) is equally likely to take on any real value between \\(0\\) and \\(1\\). 
We can do some Bayesian inference and conclude that after seeing \\(M\\) heads and \\(N\\) tails, we should predict that the odds for heads : tails on the next coinflip are:\n\n\n \n\n\n$$\\frac{M + 1}{M + N + 2} : \\frac{N + 1}{M + N + 2}$$\n\n\n \n\n\n(See [Laplace’s Rule of Succession](https://arbital.com/p/laplace_rule_of_succession/) for the proof.)\n\n\nThis rule yields advice like: “If you haven’t yet observed any coinflips, assign 50-50 to heads and tails” or “If you’ve seen four heads and no tails, assign 1/6 probability [rather than 0 probability](https://arbital.com/p/cromwells_rule/) to the next flip being tails” or “If you’ve seen the coin come up heads 150 times and tails 75 times, assign around 2/3 probability to the coin coming up heads next time.”\n\n\nNow this rule does not do super-well in any possible kind of environment. In particular, it doesn’t do any better than the maximum-entropy prediction “the next flip has a 50% probability of being heads, or tails, regardless of what we have observed previously” if the environment is in fact a fair coin. In general, there is “no free lunch” on predicting arbitrary binary sequences; if you assign greater probability mass or probability density to one binary sequence or class of sequences, you must have done so by draining probability from other binary sequences. If you begin with the prior that every binary sequence is equally likely, then you never expect any algorithm to do better *on average* than maximum entropy, even if that algorithm luckily does better in one particular random draw.\n\n\nOn the other hand, if you start from the prior that every binary sequence is equally likely, you never notice anything a human would consider an obvious pattern. 
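To make the rule concrete, here is a minimal sketch (my own, not part of the original post) that reproduces the worked examples above:

```python
# Laplace's Rule of Succession: after observing M heads and N tails, under a
# uniform prior on the coin's bias theta, predict
#   P(next = heads) = (M + 1) / (M + N + 2).
from fractions import Fraction

def laplace_p_heads(m_heads: int, n_tails: int) -> Fraction:
    """Posterior predictive probability that the next flip is heads."""
    return Fraction(m_heads + 1, m_heads + n_tails + 2)

print(laplace_p_heads(0, 0))            # no flips observed yet -> 1/2
print(1 - laplace_p_heads(4, 0))        # four heads, no tails -> P(tails) = 1/6
print(float(laplace_p_heads(150, 75)))  # 150 heads, 75 tails -> 151/227, about 2/3
```

Note that exact rational arithmetic makes it easy to check the formula against the advice quoted in the text.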
If you start from the maxentropy prior, then after observing a coin come up heads a thousand times, and tails never, you still predict 50-50 on the next draw; because on the maxentropy prior, the sequence “one thousand heads followed by tails” is exactly as likely as “one thousand heads followed by heads”.\n\n\nThe inference rule instantiated by Laplace’s Rule of Succession does better in a generic low-entropy universe of coinflips. It doesn’t start from specific knowledge; it doesn’t begin from the assumption that the coin is biased heads, or biased tails. If the coin is biased heads, Laplace’s Rule learns that; if the coin is biased tails, Laplace’s Rule will soon learn that from observation as well. If the coin is actually fair, then Laplace’s Rule will rapidly converge to assigning probabilities in the region of 50-50 and not do much worse per coinflip than if we had started with the max-entropy prior.\n\n\nCan you do better than Laplace’s Rule of Succession? Sure; if the environment’s probability of generating heads is equal to 0.73 and you start out knowing that, then you can guess on the very first round that the probability of seeing heads is 73%. But even with this non-generic and highly specific knowledge built in, you do not do *very* much better than Laplace’s Rule of Succession unless the first coinflips are very important to your future survival. Laplace’s Rule will probably figure out the answer is somewhere around 3/4 in the first dozen rounds, and get to the answer being somewhere around 73% after a couple of hundred rounds, and if the answer *isn’t* 0.73 it can handle that case too.\n\n\nIs Laplace’s Rule the most general possible rule for inferring binary sequences? Obviously not; for example, if you saw the initial sequence…\n\n\n$$HTHTHTHTHTHTHTHT…$$\n\n\n \n\n\n…then you would probably guess with high though [not infinite](https://arbital.com/p/cromwells_rule/) probability that the next element generated would be \\(H\\). 
This is because you have the ability to recognize a kind of pattern which Laplace’s Rule does not, i.e., alternating heads and tails. Of course, your ability to recognize this pattern only helps you in environments that sometimes generate a pattern like that—which the real universe sometimes does. If we tossed you into a universe which *just as frequently* presented you with ‘tails’ after observing a thousand perfect alternating pairs, as it did ‘heads’, then your pattern-recognition ability would be useless. Of course, a max-entropy universe like that will usually not present you with a thousand perfect alternations in the initial sequence to begin with!\n\n\nOne extremely general but utterly intractable inference rule is [Solomonoff induction](https://arbital.com/p/solomonoff_induction/), a [universal prior](https://arbital.com/p/universal_prior/) which assigns probabilities to every computable sequence (or computable probability distribution over sequences) proportional to [algorithmic simplicity](https://arbital.com/p/Kolmogorov_complexity/), that is, in inverse proportion to the exponential of the size of the program required to specify the computation. Solomonoff induction can learn from observation any sequence that can be generated by a *compact program*, relative to a choice of universal computer which has at most a bounded effect on the amount of evidence required or the number of mistakes made. 
Of course a Solomonoff inductor will do slightly-though-not-much-worse than the max-entropy prior in a hypothetical structure-avoiding universe in which algorithmically compressible sequences are *less* likely; thankfully we don’t live in a universe like that.\n\n\nIt would then seem perverse not to recognize that for large enough milestones we can see an informal ordering from less general inference rules to more general inference rules, those that do well in an increasingly broad and complicated variety of environments of the sort that the real world is liable to generate:\n\n\nThe rule that always assigns probability 0.73 to heads on each round, performs optimally within the environment where each flip has independently a 0.73 probability of coming up heads.\n\n\nLaplace’s Rule of Succession will start to do equally well as this, given a couple of hundred initial coinflips to see the pattern; and Laplace’s Rule also does well in many other low-entropy universes besides, such as those where each flip has 0.07 probability of coming up heads.\n\n\nA human is more general and can also spot patterns like \\(HTTHTTHTTHTT\\) where Laplace’s Rule would merely converge to assigning probability 1/3 of each flip coming up heads, while the human becomes increasingly certain that a simple temporal process is at work which allows each succeeding flip to be predicted with near-certainty.\n\n\nIf anyone ever happened across a hypercomputational device and built a Solomonoff inductor out of it, the Solomonoff inductor would be more general than the human and do well in any environment with a programmatic description substantially smaller than the amount of data the Solomonoff inductor could observe.\n\n\nNone of these predictors need do very much worse than the max-entropy prediction in the case that the environment is actually max-entropy. 
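The informal ordering above can be illustrated with a toy comparison — this is my own sketch, and the specific rules (a fixed-rate predictor, Laplace's Rule, and a crude short-period pattern spotter standing in for the human) are assumptions chosen for illustration, not taken from the post:

```python
# Three predictors of increasing generality, evaluated on the probability each
# assigns to 'H' being the next symbol of a patterned sequence.

def fixed_rule(history: str, p: float = 0.73) -> float:
    """Always predicts P(H) = p, regardless of the observed data."""
    return p

def laplace_rule(history: str) -> float:
    """Laplace's Rule: P(H) = (heads + 1) / (flips + 2)."""
    return (history.count("H") + 1) / (len(history) + 2)

def pattern_rule(history: str, max_period: int = 4) -> float:
    """Human-like: if the history repeats with a short period, bet on the pattern."""
    for period in range(1, max_period + 1):
        if len(history) >= 2 * period and all(
            history[i] == history[i % period] for i in range(len(history))
        ):
            # Near-certain (not infinitely certain) the pattern continues.
            return 0.99 if history[len(history) % period] == "H" else 0.01
    return laplace_rule(history)  # fall back on Laplace when no pattern is found

history = "HTT" * 8             # the HTTHTT... example; the true next symbol is H
print(fixed_rule(history))      # 0.73 no matter what the data say
print(laplace_rule(history))    # ~0.346: converges to the raw 1/3 frequency of H
print(pattern_rule(history))    # 0.99: spots the period-3 pattern
```

On a sequence that really is i.i.d. with bias 0.73, the fixed rule wins at first; on anything else, the more general rules catch up or overtake it, which is the ordering the text describes.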
It may not be a free lunch, but it’s not all that expensive even by the standards of hypothetical randomized universes; not that this matters for anything, since we don’t live in a max-entropy universe and therefore we don’t care how much worse we’d do in one.\n\n\nSome earlier informal discussion of this point can be found in [No-Free-Lunch Theorems Are Often Irrelevant](https://arbital.com/p/nofreelunch_irrelevant/).\n\n\n \n\n\n\n> If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.\n> \n> \n\n\nSome problems are more general than other problems—not relative to a maxentropy prior, which treats all problem subclasses on an equal footing, but relative to the low-entropy universe we actually live in, where a sequence of a million observed heads is on the next round more liable to generate H than T. Similarly, relative to the problem classes tossed around in our low-entropy universe, “figure out what simple computation generates this sequence” is more general than a human which is more general than “figure out what is the frequency of heads or tails within this sequence.”\n\n\nHuman intelligence is a problem-solving algorithm that can be understood with respect to a specific *problem class* that is potentially very, very broad in a pragmatic sense.\n\n\n \n\n\n\n> In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.\n> \n> \n\n\nThe problem that a human solves is much more general than the problem an octopus solves, which is why we can walk on the Moon and the octopus can’t. 
We aren’t absolutely general—the Moon still has *a certain something* in common with the Earth. Scientific induction still works on the Moon. It is not the case that when you get to the Moon, the next observed charge of an electron has nothing to do with its previously observed charge; and if you throw a human into an alternate universe like that one, the human stops working. But the problem a human solves *is* general enough to pass from oxygen environments to the vacuum.\n\n\n \n\n\n\n> What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? … The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body.\n> \n> \n\n\nIt could be the case that in this sense a human’s motor cortex is analogous to an inference rule that always predicts heads with 0.73 probability on each round, and cannot learn to predict 0.07 instead. It could also be that our motor cortex is more like a Laplace inductor that starts out with 72 heads and 26 tails pre-observed, biased toward that particular ratio, but which can eventually learn 0.07 after another thousand rounds of observation.\n\n\nIt’s an empirical question, but I’m not sure why it’s a very relevant one. It’s possible that human motor cortex is hyperspecialized—not just jumpstarted with prior knowledge, but inductively narrow and incapable of learning better—since in the ancestral environment, we never got randomly plopped into octopus bodies. But what of it? 
If you put some humans at a console and gave them a weird octopus-like robot to learn to control, I’d expect their full deliberate learning ability to do better than raw motor cortex in this regard. Humans using their whole intelligence, plus some simple controls, can learn to drive cars and fly airplanes even though those weren’t in our ancestral environment.\n\n\nWe also have no reason to believe human motor cortex is the limit of what’s possible. If we sometimes got plopped into randomly generated bodies, I expect we’d already have motor cortex that could adapt to octopodes. Maybe MotorCortex Zero could do three days of self-play on controlling randomly generated bodies and emerge rapidly able to learn any body in that class. Or, humans who are allowed to use Keras could figure out how to control octopus arms using ML. The last case would be most closely analogous to that of a hypothetical seed AI.\n\n\n \n\n\n\n> Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.\n> \n> \n\n\nHuman visual cortex doesn’t develop well without visual inputs. This doesn’t imply that our visual cortex is a simple blank slate, and that all the information to process vision is stored in the environment, and the visual cortex just adapts to that from a blank slate; if that were true, we’d expect it to easily take control of octopus eyes. The visual cortex requires visual input because of the logic of evolutionary biology: if you make X an environmental constant, the species is liable to acquire genes that assume the presence of X. It has no reason not to. 
The expected result would be that the visual cortex contains a large amount of genetic complexity that makes it better than generic cerebral cortex at doing vision, but some of this complexity requires visual input during childhood to unfold correctly.\n\n\nBut if in the ancestral environment children had grown up in total darkness 10% of the time, before seeing light for the first time on adulthood, it seems extremely likely that we could have evolved to not require visual input in order for the visual cortex to wire itself up correctly. E.g., the retina could have evolved to send in simple hallucinatory shapes that would cause the rest of the system to wire itself up to detect those shapes, or something like that.\n\n\nHuman children reliably grow up around other humans, so it wouldn’t be very surprising if humans evolved to build their basic intellectual control processes in a way that assumes the environment contains this info to be acquired. We cannot thereby infer how much information is being “stored” in the environment or that an intellectual control process would be too much information to store genetically; that is not a problem evolution had reason to try to solve, so we cannot infer from the lack of an evolved solution that such a solution was impossible.\n\n\nAnd even if there’s no evolved solution, this doesn’t mean you can’t intelligently design a solution. Natural selection never built animals with steel bones or wheels for limbs, because there’s no easy incremental pathway there through a series of smaller changes, so those designs aren’t very evolvable; but human engineers still build skyscrapers and cars, etcetera.\n\n\nAmong humans, the art of Go is stored in a vast repository of historical games and other humans, and future Go masters among us grow up playing Go as children against superior human masters rather than inventing the whole art from scratch. 
You would not expect even the most talented human, reinventing the gameplay all on their own, to be able to win a competition match with a first-dan pro.\n\n\nBut AlphaGo was initialized on this vast repository of played games in stored form, rather than it needing to actually play human masters.\n\n\nAnd then less than two years later, AlphaGo Zero taught itself to play at a vastly human-superior level, in three days, by self-play, from scratch, using a much simpler architecture with no ‘instinct’ in the form of precomputed features.\n\n\nNow one may perhaps postulate that there is some sharp and utter distinction between the problem that AlphaGo Zero solves, and the much more general problem that humans solve, whereby our vast edifice of Go knowledge can be surpassed by a self-contained system that teaches itself, but our general cognitive problem-solving abilities can neither be compressed into a database for initialization, nor taught by self-play. But why suppose that? Human civilization taught itself by a certain sort of self-play; we didn’t learn from aliens. More to the point, I don’t see a sharp and utter distinction between Laplace’s Rule, AlphaGo Zero, a human, and a Solomonoff inductor; they just learn successively more general problem classes. 
If AlphaGo Zero can waltz past all human knowledge of Go, I don’t see a strong reason why AGI Zero can’t waltz past the human grasp of how to reason well, or how to perform scientific investigations, or how to learn from the data in online papers and databases.\n\n\nThis point could perhaps be counterargued, but it hasn’t yet been counterargued to my knowledge, and it certainly isn’t settled by any theorem of computer science known to me.\n\n\n \n\n\n\n> If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.\n> \n> \n\n\nIt’s not obvious to me why any of this matters. Say an AI takes three days to learn to use an octopus body. So what?\n\n\nThat is: We agree that it’s a mathematical truth that you need “some amount” of experience to go from a broadly general prior to a specific problem. That doesn’t mean that the required amount of experience is large for pragmatically important problems, or that it takes three decades instead of three days. We cannot casually pass from “proven: some amount of X is required” to “therefore: a large amount of X is required” or “therefore: so much X is required that it slows things down a lot”. 
(See also: [Harmless supernova fallacy: bounded, therefore harmless.](https://arbital.com/p/harmless_supernova/))\n\n\n \n\n\n\n> If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do.\n> \n> \n\n\n“von Neumann? Newton? Einstein?” —[Scott Aaronson](https://www.scottaaronson.com/blog/?p=3553)\n\n\nMore importantly: Einstein et al. didn’t have brains that were 100 times larger than a human brain, or 10,000 times faster. By the logic of sexual recombination within a sexually reproducing species, Einstein et al. could not have had a large amount of *de novo* software that isn’t present in a standard human brain. (That is: An adaptation with 10 necessary parts, each of which is only 50% prevalent in the species, will only fully assemble 1 out of 1000 times, which isn’t often enough to present a sharp selection gradient on the component genes; *complex interdependent* machinery is necessarily universal within a sexually reproducing species, except that it may sometimes fail to fully assemble. You don’t get “mutants” with whole new complex abilities a la the X-Men.)\n\n\nHumans are metaphorically all compressed into one tiny little dot in the vastness of mind design space. We’re all the same make and model of car running the same engine under the hood, in slightly different sizes and with slightly different ornaments, and sometimes bits and pieces are missing. Even with respect to other primates, from whom we presumably differ by whole complex adaptations, we have 95% shared genetic material with chimpanzees. 
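The "1 out of 1000" assembly figure above is simple independence arithmetic, and checks out (a quick illustrative computation, not from the original post):

```python
# Chance that an adaptation with 10 necessary parts fully assembles in one
# individual, when each part is independently present in 50% of the species:
p_assemble = 0.5 ** 10
print(p_assemble)  # 1/1024, i.e. roughly 1 in 1000 as the text says
```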
Variance between humans is not something that thereby establishes bounds on possible variation in intelligence, unless you import some further assumption not described here.\n\n\nThe standard reply to anyone who deploys e.g. the Argument from Gödel to claim the impossibility of [AGI](https://arbital.com/p/agi/) is to ask, “Why doesn’t your argument rule out humans?”\n\n\nSimilarly, a standard question that needs to be answered by anyone who deploys an argument against the possibility of superhuman general intelligence is, “Why doesn’t your argument rule out humans exhibiting pragmatically much greater intellectual performance than chimpanzees?”\n\n\nSpecialized to this case, we’d ask, “Why doesn’t the fact that the smartest chimpanzees aren’t building rockets let us infer that no human can walk on the Moon?”\n\n\nNo human, not even John von Neumann, could have reinvented the gameplay of Go on their own and gone on to stomp the world’s greatest Masters. AlphaGo Zero did so in three days. It’s clear that in general, “We can infer the bounds of cognitive power from the bounds of human variation” is false. If there’s supposed to be some special case of this rule which is true rather than false, and forbids superhuman AGI, that special case needs to be spelled out.\n\n\n \n\n\n\n> Intelligence is not a superpower; exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances.\n> \n> \n\n\n…said the *Homo sapiens*, surrounded by countless powerful artifacts whose abilities, let alone mechanisms, would be utterly incomprehensible to the organisms of any less intelligent Earthly species.\n\n\n \n\n\n\n> A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. 
The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.\n> \n> \n\n\nDoes this imply that technology should be no more advanced 100 years from today, than it is today? If not, in what sense have we taken every possible opportunity of our environment?\n\n\nIs the idea that opportunities can only be taken in sequence, one after another, so that today’s technology only offers the possibilities of today’s advances? Then why couldn’t a more powerful intelligence run through them much faster, and rapidly build up those opportunities?\n\n\n \n\n\n\n> A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems— which they don’t in practice.\n> \n> \n\n\nIt can’t eat the Internet? It can’t eat the stock market? It can’t crack the protein folding problem and deploy arbitrary biological systems? It can’t get anything done by thinking a million times faster than we do? All this is to be inferred from observing that the smartest human was no more impressive than John von Neumann?\n\n\nI don’t see the strong Bayesian evidence here. It seems easy to imagine worlds such that you can get a lot of pragmatically important stuff done if you have a brain 100 times the size of John von Neumann’s, think a million times faster, and have maxed out and transcended every human cognitive talent and not just the mathy parts, and yet have the version of John von Neumann inside that world be no more impressive than we saw. 
How then do we infer from observing John von Neumann that we are not in such worlds?\n\n\nWe know that the rule of inferring bounds on cognition by looking at human maximums doesn’t work on AlphaGo Zero. Why does it work to infer that “An AGI can’t eat the stock market because no human has eaten the stock market”?\n\n\n \n\n\n\n> However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousands of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence…\n> \n> \n> Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. \n> \n> \n\n\nThe premise is that brains of a particular size and composition that are running a particular kind of software (human brains) can only solve a problem X (which in this case is equal to “build an AGI”) if they cooperate in a certain group size N and run for a certain amount of time and build Z amount of external cognitive prostheses. Okay. Humans were not especially specialized on the AI-building problem by natural selection. Why wouldn’t an AGI with larger brains, running faster, using less insane software, containing its own high-speed programmable cognitive hardware to which it could interface directly in a high-bandwidth way, and perhaps specialized on computer programming in exactly the way that human brains aren’t, get more done on net than human civilization? 
Human civilization tackling Go devoted a lot of thinking time, parallel search, and cognitive prostheses in the form of playbooks, and then AlphaGo Zero blew past it in three days, etcetera.\n\n\nTo sharpen this argument:\n\n\nWe may begin from the premise, “For all problems X, if human civilization puts a lot of effort into X and gets as far as W, no single agency can get significantly further than W on its own,” and from this premise deduce that no single AGI will be able to build a new AGI shortly after the first AGI is built.\n\n\nHowever, this premise is obviously false, as even [Deep Blue](https://arbital.com/p/deep_blue/) bore witness. Is there supposed to be some special case of this generalization which is true rather than false, and says something about the ‘build an AGI’ problem which it does not say about the ‘win a chess game’ problem? Then what is that special case and why should we believe it?\n\n\nAlso relevant: In the game of Kasparov vs. The World, the world’s best player Garry Kasparov played a single game against thousands of other players coordinated in an online forum, led by four chess masters. Garry Kasparov’s brain eventually won, against thousands of times as much brain matter. This tells us something about the inefficiency of human scaling with simple parallelism of the nodes, presumably due to the inefficiency and low bandwidth of human speech separating the would-be arrayed brains. It says that you do not need a thousand times as much processing power as one human brain to defeat the parallel work of a thousand human brains. It is the sort of thing that can be done even by one human who is a little more talented and practiced than the components of that parallel array. 
Humans often just don’t agglomerate very efficiently.\n\n\n \n\n\n\n> However, future AIs, much like humans and the other intelligent systems we’ve produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces.\n> \n> \n\n\nThis takes in the premise “AIs can only output a small amount of cognitive improvement in AI abilities” and reaches the conclusion “increase in AI capability will be a civilizationally diffuse process.” I’m not sure that the conclusion follows, but would mostly dispute that the premise has been established by previous arguments. To put it another way, this particular argument does not contribute anything new to support “AI cannot output much AI”, it just tries to reason further from that as a premise.\n\n\n \n\n\n\n> Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology.\n> \n> \n\n\nFrom Arbital’s [Harmless supernova fallacy](https://arbital.com/p/harmless_supernova/) page:\n\n\n* **Precedented, therefore harmless:** “Really, we’ve already had supernovas around for a while: there are already devices that produce ‘super’ amounts of heat by fusing elements low in the periodic table, and they’re called thermonuclear weapons. 
Society has proven well able to regulate existing thermonuclear weapons and prevent them from being acquired by terrorists; there’s no reason the same shouldn’t be true of supernovas.” (Noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs doesn’t make them able to be handled by similar strategies, nor does finding a category such that it contains both supernovas and hydrogen bombs.)\n\n\n \n\n\n\n> Our brains themselves were never a significant bottleneck in the AI-design process.\n> \n> \n\n\nA startling assertion. Let’s say we could speed up AI-researcher brains by a factor of 1000 within some virtual uploaded environment, not permitting them to do new physics or biology experiments, but still giving them access to computers within the virtual world. Are we to suppose that AI development would take the same amount of sidereal time? I for one would expect the next version of Tensorflow to come out much sooner, even taking into account that most individual AI experiments would be less grandiose because the sped-up researchers would need those experiments to complete faster and use less computing power. The scaling loss would be less than total, just like adding CPUs a thousand times as fast to the current research environment would probably speed up progress by at most a factor of 5, not a factor of 1000. 
Similarly, with all those sped-up brains we might see progress increase only by a factor of 50 instead of 1000, but I’d still expect it to go a lot faster.\n\n\nThen in what sense are we not bottlenecked on the speed of human brains in order to build up our understanding of AI?\n\n\n \n\n\n\n> Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.\n> \n> \n\n\nI obviously don’t consider myself a Kurzweilian, but even I have to object that this seems like an odd assertion to make about the past 10,000 years.\n\n\n \n\n\n\n> Wouldn’t recursively improving X mathematically result in X growing exponentially? No — in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) \\* a, a > 1`. \n> \n> \n\n\nThis seems like a *really* odd assertion, refuted by a single glance at [world GDP](https://en.wikipedia.org/wiki/Gross_world_product#Historical_and_prehistorical_estimates). Note that this can’t be an isolated observation, because it also implies that every *necessary* input into world GDP is managing to keep up, and that every input which isn’t managing to keep up has been economically bypassed at least with respect to recent history.\n\n\n \n\n\n\n> We don’t have to speculate about whether an “explosion” would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We’re surrounded with them… Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. 
Personal investing is recursively self-improving — the more money you have, the more money you can make.\n> \n> \n\n\nIf we define “recursive self-improvement” to mean merely “causal process containing at least one positive loop” then the world abounds with such, that is true. It could still be worth distinguishing some feedback loops as going much faster than others: e.g., the cascade of neutrons in a nuclear weapon, or the cascade of information inside the transistors of a hypothetical seed AI. This seems like another instance of “precedented therefore harmless” within the harmless supernova fallacy.\n\n\n \n\n\n\n> Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.\n> \n> \n\n\n“A chimpanzee is just one cog in a bigger process—the ecology. Why postulate some kind of weird superchimp that can expand its superchimp economy at vastly greater rates than the amount of chimp-food produced by the current ecology?”\n\n\nConcretely, suppose an agent is smart enough to crack inverse protein structure prediction, i.e., it can build its own biology and whatever amount of post-biological molecular machinery is permitted by the laws of physics. In what sense is it still dependent on most of the economic outputs of the rest of human culture? Why wouldn’t it just start building von Neumann machines?\n\n\n \n\n\n\n> Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. 
Antagonistic processes will arise in response to recursive self-improvement and squash it.\n> \n> \n\n\nSmart agents will try to deliberately bypass these bottlenecks and often succeed, which is why the world economy continues to grow at an exponential pace instead of having run out of wheat in 1200 CE. It continues to grow at an exponential pace despite even the antagonistic processes of… but I’d rather not divert this conversation into politics.\n\n\nNow to be sure, the smartest mind can’t expand faster than light, and its exponential growth will bottleneck on running out of atoms and negentropy if we’re remotely correct about the character of physical law. But to say that this is therefore no reason to worry would be the “bounded, therefore harmless” variant of the harmless supernova fallacy. A supernova isn’t infinitely hot, but it’s pretty darned hot and you can’t survive one just by wearing a Nomex jumpsuit.\n\n\n \n\n\n\n> When it comes to intelligence, inter-system communication arises as a brake on any improvement of underlying modules — a brain with smarter parts will have more trouble coordinating them;\n> \n> \n\n\nWhy doesn’t this prove that humans can’t be much smarter than chimps?\n\n\nWhat we can infer about the scaling laws that were governing human brains from the evolutionary record is a complicated topic. On this particular point I’d refer you to section 3.1, “Returns on brain size”, pp. 35–39, in [my semitechnical discussion of returns on cognitive investment](https://intelligence.org/files/IEM.pdf). The conclusion there is that we can infer from the increase in equilibrium brain size over the last few million years of hominid history, plus the basic logic of population genetics, that over this time period there were increasing marginal returns to brain size with increasing time and presumably increasingly sophisticated neural ‘software’. 
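The earlier point that a feedback loop with a distant resource ceiling still compounds can be illustrated with a toy simulation (assumed parameters, purely illustrative, not a model from the post): logistic-style growth damped by a cap is multiplicative, i.e. effectively exponential, for as long as the cap is far away.

```python
def grow(x0, rate, cap, steps):
    # Recursive self-improvement damped by a resource ceiling:
    # x(t+1) = x(t) + rate * x(t) * (1 - x(t)/cap)   (logistic form)
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / cap))
    return xs

xs = grow(x0=1.0, rate=0.5, cap=1e12, steps=30)
ratios = [xs[t + 1] / xs[t] for t in range(10)]
# While output is far below the cap, each step multiplies it by ~1.5;
# the bottleneck exists, but it does not make early growth linear.
print(ratios)
```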
I also remark that human brains are not the only possible cognitive computing fabrics.\n\n\n \n\n\n\n> It is perhaps not a coincidence that very high-IQ people are more likely to suffer from certain mental illnesses.\n> \n> \n\n\nI’d expect very-high-IQ chimps to be more likely to suffer from some neurological disorders than typical chimps. This doesn’t tell us that chimps are approaching the ultimate hard limit of intelligence, beyond which you can’t scale without going insane. It tells us that if you take any biological system and try to operate under conditions outside the typical ancestral case, it is more likely to break down. Very-high-IQ humans are not the typical humans that natural selection has selected for as normal operating conditions.\n\n\n \n\n\n\n> Yet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled “The Singularity is not coming”. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades.\n> \n> \n\n\nI broadly agree with respect to recent history. I tend to see this as an artifact of human bureaucracies shooting themselves in the foot in a way that I would not expect to apply within a single unified agent.\n\n\nIt’s possible we’re reaching the end of available fruit in our finite supply of physics. 
This doesn’t mean our present material technology could compete with the limits of possible material technology, which would at the very least include whatever biology-machine hybrid systems could be rapidly manufactured given the limits of mastery of biochemistry.\n\n\n \n\n\n\n> As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.\n> \n> \n\n\nOur brains don’t scale to hold it all, and every time a new human is born you have to start over from scratch instead of copying and pasting the knowledge. It does not seem to me like a slam-dunk to generalize from the squishy little brains yelling at each other to infer the scaling laws of arbitrary cognitive computing fabrics.\n\n\n \n\n\n\n> Intelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.\n> \n> \n\n\nTrue of chimps; didn’t stop humans from being much smarter than chimps.\n\n\n \n\n\n\n> No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment.\n> \n> \n\n\nTrue of mice; didn’t stop humans from being much smarter than mice.\n\n\nPart of the argument above was, as I would perhaps unfairly summarize it, “There is no sense in which a human is absolutely smarter than an octopus.” Okay, but *pragmatically* speaking, we have nuclear weapons and octopodes don’t. A similar *pragmatic* capability gap between humans and [unaligned](https://arbital.com/p/ai_alignment/) AGIs seems like a matter of legitimate concern. 
If you don’t want to call that an intelligence gap then call it what you like.\n\n\n \n\n\n\n> Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.\n> \n> \n\n\nI don’t see what observation about our present world licenses the conclusion that speeding up brains tenfold would produce no change in the rate of technological advancement.\n\n\n \n\n\n\n> Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves.\n> \n> \n\n\nWhat about this fact is supposed to imply *slower* progress by an AGI that has a continuous, high-bandwidth interaction with its own onboard cognitive tools?\n\n\n \n\n\n\n> A system that is already self-improving, and has been for a long time.\n> \n> \n\n\nTrue if we redefine “self-improving” as “any positive feedback loop whatsoever”. A nuclear fission weapon is also a positive feedback loop in neutrons triggering the release of more neutrons. The elements of this system interact on a much faster timescale than human neurons fire, and thus the overall process goes pretty fast on our own subjective timescale. I don’t recommend standing next to one when it goes off.\n\n\n \n\n\n\n> Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. 
Empirically, they tend to display linear or sigmoidal improvement.\n> \n> \n\n\nFalsified by a graph of world GDP on almost any timescale.\n\n\n \n\n\n\n> In particular, this is the case for scientific progress — science being possibly the closest system to a recursively self-improving AI that we can observe.\n> \n> \n\n\nI think we’re mostly just [doing science wrong](https://arbital.com/p/likelihood_vs_pvalue/), but that would be a [much longer discussion](https://equilibriabook.com/).\n\n\nFits-on-a-T-Shirt rejoinders would include “Why think we’re at the upper bound of being-good-at-science any more than chimps were?”\n\n\n \n\n\n\n> Recursive intelligence expansion is already happening — at the level of our civilization. It will keep happening in the age of AI, and it progresses at a roughly linear pace.\n> \n> \n\n\nIf this were to be true, I don’t think it would be established by the arguments given.\n\n\nMuch of this debate has previously been reprised by myself and Robin Hanson in the “[AI Foom Debate](https://intelligence.org/ai-foom-debate/).” I expect that even Robin Hanson, who was broadly opposing my side of this debate, would have a coughing fit over the idea that progress within all systems is confined to a roughly linear pace.\n\n\nFor more reading I recommend my own semitechnical essay on what our current observations can tell us about the scaling of cognitive systems with increasing resources and increasing optimization, “[Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf).”\n\n\nThe post [A reply to Francois Chollet on intelligence explosion](https://intelligence.org/2017/12/06/chollet/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-12-07T03:21:58Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "e120189735d13d777250c77b1a731a85", "title": "December 2017 Newsletter", "url": "https://intelligence.org/2017/12/06/december-2017-newsletter/", 
"source": "miri", "source_type": "blog", "text": "[Our annual fundraiser is live](https://intelligence.org/2017/12/01/miris-2017-fundraiser/). Discussed in the fundraiser post:\n\n\n* [News](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#1)  — What MIRI’s researchers have been working on lately, and more.\n* [Goals](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#2) — We plan to grow our research team 2x in 2018–2019. If we raise $850k this month, we think we can do that without dipping below a 1.5-year runway.\n* [*Actual* goals](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#3) — A bigger-picture outline of what we think is the likeliest sequence of events that could lead to good global outcomes.\n\n\nOur funding drive will be running until December 31st.\n\n\n \n\n\n**Research updates**\n\n\n* New at IAFF: [Reward Learning Summary](https://agentfoundations.org/item?id=1701); [Reflective Oracles as a Solution to the Converse Lawvere Problem](https://agentfoundations.org/item?id=1712); [Policy Selection Solves Most Problems](https://agentfoundations.org/item?id=1711)\n* We ran a [workshop](https://intelligence.org/workshops/#november-2017) on Paul Christiano’s research agenda.\n* We’ve hired the first [members](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#1) of our new engineering team, including math PhD Jesse Liptrap and former Quixey lead architect Nick Tarleton! If you’d like to join the team, [apply here](https://machineintelligence.typeform.com/to/CDVFE2)!\n* I’m also happy to announce that [Blake Borgeson](http://www.blakeb.org) has come on in an [advisory](https://intelligence.org/team/#advisors) role to help establish our engineering program. 
Blake is a *Nature*-published computational biologist who co-founded Recursion Pharmaceuticals, where he leads the biotech company’s machine learning work.\n\n\n \n\n\n**General updates**\n\n\n* “[Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/)”: a new dialogue from Eliezer Yudkowsky. See also [part 2](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/).\n* The Open Philanthropy Project has awarded MIRI [a three-year, $3.75 million grant](https://intelligence.org/2017/11/08/major-grant-open-phil/)!\n* We received $48,132 in donations during Facebook’s Giving Tuesday event, of which $11,371 — from speedy donors who made the event’s 85-second cutoff! — will be matched by the Bill and Melinda Gates Foundation.\n* MIRI Staff Writer Matthew Graves has published an article in *Skeptic* magazine: “[Why We Should Be Concerned About Artificial Superintelligence](https://www.skeptic.com/reading_room/why-we-should-be-concerned-about-artificial-superintelligence/).”\n* Yudkowsky’s new book [*Inadequate Equilibria*](https://intelligence.org/2017/11/16/announcing-inadequate-equilibria/) is out! 
See other recent discussion of modest epistemology and inadequacy analysis by [Scott Aaronson](https://www.scottaaronson.com/blog/?p=3535), [Robin Hanson](http://www.overcomingbias.com/2017/11/why-be-contrarian.html), [Abram Demski](https://www.lesswrong.com/posts/vKbAWFZRDBhyD6K6A/gears-level-and-policy-level), [Gregory Lewis](http://effective-altruism.com/ea/1g7/in_defence_of_epistemic_modesty/), and [Scott Alexander](http://slatestarcodex.com/2017/11/30/book-review-inadequate-equilibria/).\n\n\n \n\n\nThe post [December 2017 Newsletter](https://intelligence.org/2017/12/06/december-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-12-06T15:07:02Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "135d80151616563d814d1bb84ab54359", "title": "MIRI’s 2017 Fundraiser", "url": "https://intelligence.org/2017/12/01/miris-2017-fundraiser/", "source": "miri", "source_type": "blog", "text": "**Update 2017-12-27:** We’ve blown past our 3rd and final target, and reached the matching cap of $300,000 for the [$2 million Matching Challenge](https://intelligence.org/2017/12/14/end-of-the-year-matching/)! Thanks so much to everyone who supported us!\n\n\nAll donations made before 23:59 PST on Dec 31st will continue to be counted towards our fundraiser total. The fundraiser total includes projected matching funds from the Challenge.\n\n\n\n\n---\n\n\n \n\n\nMIRI’s **2017 fundraiser** is live through the end of December! 
Our progress so far (updated live):\n\n\n \n\n\n\n\n---\n\n\n\n\n\n[Target 1 \n$625,000 \nCompleted](https://intelligence.org/feed/?paged=20#fundraiserModal)[Target 2 \n$850,000 \nCompleted](https://intelligence.org/feed/?paged=20#fundraiserModal)[Target 3 \n$1,250,000 \nCompleted](https://intelligence.org/feed/?paged=20#fundraiserModal)\n\n### $2,504,625 raised in total!\n\n\n##### 358 donors contributed\n\n\n\n\n\n[Donate Now](https://intelligence.org/donate/)\n----------------------------------------------\n\n\n\n\n\n\n---\n\n\n \n\n\nMIRI is a research nonprofit based in Berkeley, California with a mission of ensuring that smarter-than-human AI technology has a positive impact on the world. 
You can learn more about our work at “[Why AI Safety?](https://intelligence.org/why-ai-safety/)” or via MIRI Executive Director Nate Soares’ [Google talk on AI alignment](https://intelligence.org/2017/04/12/ensuring/).\n\n\n[In 2015](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/#4), we discussed our interest in potentially branching out to explore multiple research programs simultaneously once we could support a larger team. Following recent changes to our overall picture of the strategic landscape, we’re now moving ahead on that goal and starting to explore new research directions while also continuing to push on our [agent foundations agenda](https://intelligence.org/technical-agenda/). For more on our new views, see “[There’s No Fire Alarm for Artificial General Intelligence](https://intelligence.org/2017/10/13/fire-alarm/)” and our [2017 strategic update](https://intelligence.org/2017/04/30/2017-updates-and-strategy/). We plan to expand on our relevant strategic thinking more in the coming weeks.\n\n\nOur expanded research focus means that our research team can potentially grow big, and grow fast. Our current goal is to hire around ten new research staff over the next two years, mostly software engineers. If we succeed, our point estimate is that **our 2018 budget will be $2.8M** and **our 2019 budget will be $3.5M**, up from roughly $1.9M in 2017.[1](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#footnote_0_16982 \"Note that this $1.9M is significantly below the $2.1–2.5M we predicted for the year in April. Personnel costs are MIRI’s most significant expense, and higher research staff turnover in 2017 meant that we had fewer net additions to the team this year than we’d budgeted for. 
We went under budget by a relatively small margin in 2016, spending $1.73M versus a predicted $1.83M.\nOur 2018–2019 budget estimates are highly uncertain, with most of the uncertainty coming from substantial uncertainty about how quickly we’ll be able to take on new research staff.\")\n\n\nWe’ve set our fundraiser targets by estimating how quickly we could grow while maintaining a 1.5-year runway, on the simplifying assumption that about 1/3 of the donations we receive between now and the beginning of 2019 will come during our current fundraiser.[2](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#footnote_1_16982 \"This is roughly in line with our experience in previous years, when excluding expected grants and large surprise one-time donations. We’ve accounted for the former in our targets but not the latter, since we think it unwise to bank on unpredictable windfalls.\nNote that in previous years, we’ve set targets based on maintaining a 1-year runway. Given the increase in our size, I now think that a 1.5-year runway is more appropriate.\")\n\n\nHitting **Target 1** ($625k) then lets us act on our growth plans in 2018 (but not in 2019); **Target 2** ($850k) lets us act on our full two-year growth plan; and in the case where our hiring goes better than expected, **Target 3** ($1.25M) would allow us to add new members to our team about twice as quickly, or pay higher salaries for new research staff as needed.\n\n\nWe discuss more details below, both in terms of our current organizational activities and how we see our work fitting into the larger strategy space.\n\n\n \n\n\n\n            [What’s new at MIRI](https://intelligence.org/feed/?paged=20#1)              |              [Fundraising goals](https://intelligence.org/feed/?paged=20#2)              |              [Strategic background](https://intelligence.org/feed/?paged=20#3)\n\n\n \n\n\n#### What’s new at MIRI\n\n\nNew developments this year have included:\n\n\n\n* The release of Eliezer 
Yudkowsky’s *[Inadequate Equilibria: Where and How Civilizations Get Stuck](https://equilibriabook.com)*, a book on systemic failure, outperformance, and epistemology.\n\n\n* New introductory material on decision theory: “[Functional Decision Theory](https://intelligence.org/2017/10/22/fdt/),” “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” and “[Decisions Are For Making Bad Outcomes Inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/).”\n\n\n* Extremely generous new support for our research in the form of a one-time $1.01 million donation [from a cryptocurrency investor](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/) and a three-year $3.75 million grant [from the Open Philanthropy Project](https://intelligence.org/2017/11/08/major-grant-open-phil/).[3](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#footnote_2_16982 \"Including the $1.01 million donation and the first $1.25 million from the Open Philanthropy Project, we have so far raised around $3.16 million this year, overshooting the $3 million goal we set earlier this year!\")\n\n\n\nThanks in part to this major support, we’re currently in a position to scale up the research team quickly if we can find suitable hires. 
We intend to explore a variety of new research avenues going forward, including making a stronger push to experiment and explore some ideas in implementation.[4](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#footnote_3_16982 \"We emphasize that, as always, “experiment” means “most things tried don’t work.” We’d like to avoid setting expectations of immediate success for this exploratory push.\") This means that we’re currently interested in hiring exceptional software engineers, particularly ones with machine learning experience.\n\n\nThe two primary things we’re looking for in software engineers are programming ability and value alignment. Since we’re a nonprofit, it’s also worth noting explicitly that we’re generally happy to pay excellent research team applicants with the relevant skills whatever salary they would need to work at MIRI. If you think you’d like to work with us, **[apply here](https://machineintelligence.typeform.com/to/CDVFE2)**!\n\n\nIn that vein, I’m pleased to announce that we’ve made our first round of hires for our engineer positions, including:\n\n\n\n**Jesse Liptrap**, who previously worked on the Knowledge Graph at Google for four years, and as a bioinformatician at UC Berkeley. Jesse holds a PhD in mathematics from UC Santa Barbara, where he studied category-theoretic underpinnings of [topological quantum computing](https://www.microsoft.com/en-us/research/group/microsoft-quantum-santa-barbara-station-q/).\n\n\n**Nick Tarleton**, former lead architect at the search startup Quixey. 
He previously studied computer science and decision science at Carnegie Mellon University, and worked with us at the first iteration of our summer fellows program, studying consequences of proposed AI goal systems.\n\n\n\nOn the whole, our initial hiring efforts have gone quite well, and I’ve been very impressed with the high caliber of our hires and of our pool of candidates.\n\n\nOn the research side, our recent work has focused heavily on open problems in decision theory, and on other questions related to naturalized agency. Scott Garrabrant divides our recent work on the agent foundations agenda into four categories, tackling different AI alignment subproblems:\n\n\n\n\n\n**Decision theory** — Traditional models of decision-making assume a sharp Cartesian boundary between agents and their environment. In a naturalized setting in which agents are embedded in their environment, however, traditional approaches break down, forcing us to formalize concepts like “counterfactuals” that can be left implicit in AIXI-like frameworks.\n\n\n\n[More](https://intelligence.org/feed/?paged=20#collapseOne)\n\n\n\n\nRecent focus areas:\n\n\n* As Rob noted [in April](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#1), “a common thread in our recent work is that we’re using probability and topological fixed points in settings where we used to use provability. 
This means working with (and improving) [logical inductors](https://intelligence.org/2016/09/12/new-paper-logical-induction/) and [reflective oracles](https://intelligence.org/2016/06/30/grain-of-truth/).” Examples of applications of logical induction to decision theory include logical inductor evidential decision theory (“[Prediction Based Robust Cooperation](https://agentfoundations.org/item?id=1295),” “[Two Major Obstacles for Logical Inductor Decision Theory](https://agentfoundations.org/item?id=1399)“) and asymptotic decision theory (“[An Approach to Logically Updateless Decisions](https://agentfoundations.org/item?id=1472),” “[Where Does ADT Go Wrong?](https://agentfoundations.org/item?id=1717)”).\n* Unpacking the notion of *updatelessness* into pieces that we can better understand, e.g., in “[Conditioning on Conditionals](https://agentfoundations.org/item?id=1624),” “[Logical Updatelessness as a Robust Delegation Problem](https://agentfoundations.org/item?id=1689),” “[The Happy Dance Problem.](https://agentfoundations.org/item?id=1713)”\n* The relationship between decision theories that rely on Bayesian conditionalization on the one hand (e.g., evidential decision theory and Wei Dai’s updateless decision theory), and ones that rely on counterfactuals on the other (e.g., causal decision theory, timeless decision theory, and the version of functional decision theory discussed in Yudkowsky and Soares ([2017](https://intelligence.org/2017/10/22/fdt/))): “[Smoking Lesion Steelman](https://agentfoundations.org/item?id=1525),” “[Comparing LICDT and LIEDT](https://agentfoundations.org/item?id=1629).”\n* Lines of research relating to correlated equilibria, such as “[A Correlated Analogue of Reflective Oracles](https://agentfoundations.org/item?id=1435)” and “[Smoking Lesion Steelman II](https://agentfoundations.org/item?id=1662).”\n* The Converse Lawvere Problem ([1](https://agentfoundations.org/item?id=1356), [2](https://agentfoundations.org/item?id=1372), 
[3](https://agentfoundations.org/item?id=1712)): “Does there exist a topological space *X* (in some convenient category of topological spaces) such that there exists a continuous surjection from *X* to the space [0,1]^*X* (of continuous functions from *X* to [0,1])?”\n* Multi-agent coordination problems, often using the “[Cooperative Oracles](https://agentfoundations.org/item?id=1468)” framework.\n\n\n\n\n\n\n\n\n**Naturalized world-models** — Similar issues arise for formalizing how systems model the world in the absence of a sharp agent/environment boundary. Traditional models leave implicit aspects of “good reasoning” such as causal and multi-level world-modeling, reasoning under deductive limitations, and agents modeling themselves.\n\n\n\n[More](https://intelligence.org/feed/?paged=20#collapseTwo)\n\n\n\n\nRecent focus areas:\n\n\n* Kakutani’s fixed-point theorem and reflective oracles: “[Hyperreal Brouwer](https://agentfoundations.org/item?id=1671).”\n* Transparency and [merging of opinions](https://www.jstor.org/stable/2237864?seq=1#page_scan_tab_contents) in logical inductors.\n* *Ontology merging*, a possible approach to reasoning about [ontological crises](https://intelligence.org/files/OntologicalCrises.pdf) and transparency.\n* Attempting to devise a variant of logical induction that is “Bayesian” in the sense that its belief states can be readily understood as conditionalized prior probability distributions.\n\n\n\n\n\n\n\n\n**Subsystem alignment** — A key reason that agent/environment boundaries are unhelpful for thinking about AGI is that a given AGI system may consist of many different subprocesses optimizing many different goals or subgoals. The boundary between different “agents” may be ill-defined, and a given optimization process is likely to construct [subprocesses that pursue many different goals](https://arbital.com/p/daemons/). 
Addressing this risk requires limiting the ways in which new optimization subprocesses arise in the system.\n\n\n\n[More](https://intelligence.org/feed/?paged=20#collapseThree)\n\n\n\n\nRecent focus areas:\n\n\n* [Benign induction](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/): “[Maximally Efficient Agents Will Probably Have an Anti-Daemon Immune System](https://agentfoundations.org/item?id=1290).”\n* Work related to KWIK learning: “[Some Problems with Making Induction Benign, and Approaches to Them](https://agentfoundations.org/item?id=1263)” and “[How Likely Is A Random AGI To Be Honest?](https://agentfoundations.org/item?id=1277)”\n\n\n\n\n\n\n\n\n**Robust delegation** — In cases where it’s desirable to delegate to another agent (e.g. an AI system or a successor), it’s critical that the agent be well-aligned and trusted to perform specified tasks. The [value learning problem](https://intelligence.org/files/ValueLearningProblem.pdf) and most of the [AAMLS agenda](https://intelligence.org/2016/07/27/alignment-machine-learning/) fall in this category.\n\n\n\n[More](https://intelligence.org/feed/?paged=20#collapseFour)\n\n\n\n\nRecent focus areas:\n\n\n* [Goodhart’s Curse](https://arbital.com/p/goodharts_curse/), “the combination of the Optimizer’s Curse and Goodhart’s Law” stating that “a powerful agent neutrally optimizing a proxy measure *U* that we hoped to align with true values *V*, will implicitly seek out upward divergences of *U* from *V*”: “[The Three Levels of Goodhart’s Curse](https://agentfoundations.org/item?id=1621).”\n* [Corrigibility](https://intelligence.org/files/Corrigibility.pdf): “[Corrigibility Thoughts](https://agentfoundations.org/item?id=1216),” “[All the Indifference Designs](https://agentfoundations.org/item?id=1285).”\n* Value learning and inverse reinforcement learning: “[Incorrigibility in the CIRL Framework](https://intelligence.org/2017/08/31/incorrigibility-in-cirl/),” “[Reward 
Learning Summary](https://agentfoundations.org/item?id=1701).”\n* The [reward hacking](https://arxiv.org/pdf/1606.06565.pdf) problem: “[Stable Pointers to Value: An Agent Embedded In Its Own Utility Function](https://agentfoundations.org/item?id=1622).”\n\n\n\n\n\n\n\nAdditionally, we ran several research workshops, including one focused on [Paul Christiano’s research agenda](https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4).\n\n\n \n\n\n#### Fundraising goals\n\n\nTo a first approximation, we view our ability to make productive use of additional dollars in the near future as linear in research personnel additions. We don’t expect to run out of additional top-priority work we can assign to highly motivated and skilled researchers and engineers. This represents an important shift from our past budget and team size goals.[5](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#footnote_4_16982 \"Our previous goal was to slowly ramp up to the $3–4 million level and then hold steady with around 13–17 research staff. We now expect to be able to reach (and surpass) that level much more quickly.\")\n\n\nGrowing our team as much as we hope to is by no means an easy hiring problem, but it’s made significantly easier by the fact that we’re now looking for top software engineers who can help implement experiments we want to run, and not just productive pure researchers who can work with a high degree of independence. (In whom we are, of course, still very interested!) We therefore think we can expand relatively quickly over the next two years (productively!), funds allowing.\n\n\nIn our mainline growth scenario, our reserves plus next year’s $1.25M installment of the Open Philanthropy Project’s 3-year grant would leave us with around 9 months of runway going into 2019. 
However, we have substantial uncertainty about exactly how quickly we’ll be able to hire additional researchers and engineers, and therefore about our 2018–2019 budgets.\n\n\nOur 2018 budget breakdown in the mainline success case looks roughly like this:\n\n\n\n2018 Budget Estimate (Mainline Growth)\n\n\n![](https://intelligence.org/wp-content/uploads/2017/12/2018-Budget-Breakdown.png)\n\n\n\nTo determine our fundraising targets this year, we estimated the support levels (above the Open Philanthropy Project’s support) that would make us reasonably confident that we can maintain a 1.5-year runway going into 2019 in different growth scenarios, assuming that our 2017 fundraiser looks similar to next year’s fundraiser and that our off-fundraiser donor support looks similar to our on-fundraiser support:\n\n\n\n\n---\n\n\n**Basic target — $625,000.** At this funding level, we’ll be in a good position to pursue our mainline hiring goal in 2018, although we will likely need to halt or slow our growth in 2019.\n\n\n\n\n---\n\n\n**Mainline-growth target — $850,000.** At this level, we’ll be on track to fully fund our planned expansion over the next few years, allowing us to roughly double the number of research staff over the course of 2018 and 2019.\n\n\n\n\n---\n\n\n**Rapid-growth target — $1,250,000.** At this funding level, we will be on track to maintain a 1.5-year runway even if our hiring proceeds a fair amount faster than our mainline prediction. We’ll also have greater freedom to pay higher salaries to top-tier candidates as needed.\n\n\n\n\n---\n\n\nBeyond these growth targets: if we saw an order-of-magnitude increase in MIRI’s funding in the near future, we have several ways we believe we can significantly accelerate our recruitment efforts to grow the team faster. These include competitively paid trial periods and increased hiring outreach across venues and communities where we expect to find high-caliber candidates. 
Funding increases beyond the point where we could usefully use the money to hire faster would likely cause us to spin off new initiatives to address the problem of AI x-risk from other angles; we wouldn’t expect them to go to MIRI’s current programs.\n\n\nOn the whole, we’re in a very good position to continue expanding, and we’re enormously grateful for the generous support we’ve already received this year. Relative to our present size, MIRI’s reserves are much more solid than they have been in the past, putting us in a strong position going into 2018.\n\n\nGiven our longer runway, this may be a better year than usual for long-time MIRI supporters to consider supporting other projects that have been waiting in the wings. That said, we don’t personally know of marginal places to put additional dollars that we currently view as higher-value than MIRI, and we do expect our fundraiser performance to affect our growth over the next two years, particularly if we succeed in growing the MIRI team as fast as we’re hoping to.\n\n\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n\n \n\n\n#### Strategic background\n\n\nTaking a step back from our immediate organizational plans: how does MIRI see the work we’re doing as tying into positive long-term, large-scale outcomes?\n\n\nA lot of our thinking on these issues hasn’t yet been written up in any detail, and many of the issues involved are topics of active discussion among people working on existential risk from AGI. 
In very broad terms, however, our approach to global risk mitigation is to think in terms of desired outcomes, and to ask: “What is the likeliest way that the outcome in question might occur?” We then repeat this process until we backchain to interventions that actors can take today.\n\n\nIgnoring a large number of subtleties, our view of the world’s strategic situation currently breaks down as follows:\n\n\n\n1. **Long-run good outcomes**. Ultimately, we want humanity to figure out the best possible long-run future and enact that kind of future, factoring in good outcomes for all sentient beings. However, there is currently very little we can say with confidence about what desirable long-term outcomes look like, or how best to achieve them; and if someone rushes to lock in a particular conception of “the best possible long-run future,” they’re likely to make catastrophic mistakes both in how they envision that goal and in how they implement it.\n\n\n In order to avoid making critical decisions in haste and locking in flawed conclusions, humanity needs:\n\n\n\n\n2. A **stable period** during which relevant actors can accumulate whatever capabilities and knowledge are required to reach robustly good conclusions about long-run outcomes. This might involve decisionmakers developing better judgment, insight, and reasoning skills in the future, solving the full alignment problem for [fully autonomous AGI systems](https://arbital.com/p/Sovereign/), and so on.\n\n\n Given the difficulty of the task, we expect a successful stable period to require:\n\n\n\n\n3. A preceding **end to the acute risk period**. If AGI carries a significant chance of causing an existential catastrophe over the next few decades, this forces a response under time pressure; but if actors attempt to make irreversible decisions about the long-term future under strong time pressure, we expect the result to be catastrophically bad. 
Conditioning on good outcomes, we therefore expect a two-step process where addressing acute existential risks takes temporal priority.\n\n\n To end the acute risk period, we expect it to be necessary for actors to make use of:\n\n\n\n\n4. A **risk-mitigating technology**. On our current view of the technological landscape, there are a number of plausible future technologies that could be leveraged to end the acute risk period.\n\n\n We believe that the likeliest way to achieve a technology in this category sufficiently soon is through:\n\n\n\n\n5. **AGI-empowered technological development** carried out by [task-directed](https://intelligence.org/2017/02/28/using-machine-learning/#1) AGI systems. Depending on early AGI systems’ level of capital-intensiveness, on whether AGI is a late-paradigm or early-paradigm invention, and on a number of other factors, AGI might be developed by anything from a small Silicon Valley startup to a large-scale multinational collaboration. Regardless, we expect AGI to be developed before any other (meta)technology that can be employed to end the acute risk period, and if early AGI systems can be used safely at all, then we expect it to be possible for an AI-empowered project to safely automate a reasonably small set of concrete science and engineering tasks that are sufficient for ending the risk period. This requires:\n\n6. **Construction of minimal aligned AGI**. We specify “minimal” because we consider success much more likely if developers attempt to build systems with the bare minimum of capabilities for ending the acute risk period. We expect AGI alignment to be highly difficult, and we expect additional capabilities to add substantially to this difficulty.\n\n\n **Added:** “Minimal aligned AGI” means “aligned AGI that has the minimal necessary capabilities”; be sure not to misread it as “*minimally* aligned AGI”. 
Rob Bensinger [adds](https://www.lesswrong.com/posts/hL9ennoEfJXMj7r2D/two-clarifications-about-strategic-background): “The MIRI view isn’t ‘rather than making alignment your top priority and working really hard to over-engineer your system for safety, try to build a system with the bare minimum of capabilities’. It’s: ‘in addition to making alignment your top priority and working really hard to over-engineer your system for safety, *also* build the system to have the bare minimum of capabilities’.”\n\n\n If an aligned system of this kind were developed, we would expect two factors to be responsible:\n\n\n\n\n\n\n\n\n7a. A **technological edge in AGI by an operationally adequate project**. By “operationally adequate” we mean a project with strong opsec, research closure, trustworthy command, a commitment to the common good, security mindset, requisite resource levels, and heavy prioritization of alignment work. A project like this needs to have a large enough lead to be able to afford to spend a substantial amount of time on safety measures, as discussed [at FLI’s Asilomar conference](http://www.businessinsider.com/google-deepmind-demis-hassabis-worries-ai-superintelligence-coordination-2017-2).\n\n\n\n7b. A strong **white-boxed system understanding** on the part of the operationally adequate project during late AGI development. By this we mean that developers go into building AGI systems with a good understanding of how their systems decompose and solve particular cognitive problems, of the kinds of problems different parts of the system are working on, and of how all of the parts of the system interact.\n On our current understanding of the alignment problem, developers need to be able to give a reasonable account of how all of the AGI-grade computation in their system is being allocated, similar to how secure software systems are built to allow security professionals to give a simple accounting of why the system has no unforeseen vulnerabilities. 
See “[Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/)” for more details.\n\n\nDevelopers must be able to explicitly state and check all of the basic assumptions required for their account of the system’s alignment and effectiveness to hold. Additionally, they need to design and modify AGI systems only in ways that preserve understandability — that is, only allow system modifications that preserve developers’ ability to generate full accounts of what cognitive problems any given slice of the system is solving, and why the interaction of all of the system’s parts is both safe and effective.\n\n\nOur view is that this kind of system understandability will in turn require:\n\n\n\n\n8. **Steering toward alignment-conducive AGI approaches**. Leading AGI researchers and developers need to deliberately direct research efforts toward ensuring that the earliest AGI designs are relatively easy to understand and align.\n We expect this to be a critical step, as we do not expect most approaches to AGI to be alignable after the fact without long, multi-year delays.\n\n\n\n\n\n \n\n\nWe plan to say more in the future about the criteria for operationally adequate projects in **7a**. 
We do not believe that any project meeting all of these conditions currently exists, though we see various ways that projects could reach this threshold.\n\n\nThe above breakdown only discusses what we view as the “mainline” success scenario.[6](https://intelligence.org/2017/12/01/miris-2017-fundraiser/#footnote_5_16982 \"There are other paths to good outcomes that we view as lower-probability, but still sufficiently high-probability that the global community should allocate marginal resources to their pursuit.\") If we condition on good long-run outcomes, the most plausible explanation we can come up with cites an operationally adequate AI-empowered project ending the acute risk period, and appeals to the fact that those future AGI developers maintained a strong understanding of their system’s problem-solving work over the course of development, made use of advance knowledge about which AGI approaches conduce to that kind of understanding, and filtered on those approaches.\n\n\nFor that reason, MIRI does research to intervene on **8** from various angles, such as by examining holes and anomalies in the field’s current understanding of real-world reasoning and decision-making. We hope to thereby reduce our own confusion about alignment-conducive AGI approaches and ultimately help make it feasible for developers to construct adequate “[safety-stories](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/)” in an alignment setting. 
As we improve our understanding of the alignment problem, our aim is to share new insights and techniques with leading or up-and-coming developer groups, who we’re generally on good terms with.\n\n\nA number of the points above require further explanation and motivation, and we’ll be providing more details on our view of the strategic landscape in the near future.\n\n\nFurther questions are always welcome at [contact@intelligence.org](mailto:contact@intelligence.org), regarding our current organizational activities and plans as well as the long-term role we hope to play in giving AGI developers an easier and clearer shot at making the first AGI systems robust and safe. For more details on our fundraiser, including corporate matching, see our **[Donate](https://intelligence.org/donate)** page.\n\n\n \n\n\n\n\n---\n\n1. Note that this $1.9M is significantly below the $2.1–2.5M we predicted for the year [in April](https://intelligence.org/2017/04/30/2017-updates-and-strategy/). Personnel costs are MIRI’s most significant expense, and higher research staff turnover in 2017 meant that we had fewer net additions to the team this year than we’d budgeted for. We went under budget by a relatively small margin in 2016, spending $1.73M versus a predicted $1.83M.\nOur 2018–2019 budget estimates are highly uncertain, with most of the uncertainty stemming from how quickly we’ll be able to take on new research staff.\n2. This is roughly in line with our experience in previous years, when excluding expected grants and large surprise one-time donations. We’ve accounted for the former in our targets but not the latter, since we think it unwise to bank on unpredictable windfalls.\nNote that in previous years, we’ve set targets based on maintaining a 1-year runway. Given the increase in our size, I now think that a 1.5-year runway is more appropriate.\n3. 
Including the $1.01 million donation and the first $1.25 million from the Open Philanthropy Project, we have so far raised around $3.16 million this year, overshooting the $3 million goal we set [earlier this year](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/)!\n4. We emphasize that, as always, “experiment” means “most things tried don’t work.” We’d like to avoid setting expectations of immediate success for this exploratory push.\n5. Our previous goal was to slowly ramp up to the $3–4 million level and then hold steady with around 13–17 research staff. We now expect to be able to reach (and surpass) that level much more quickly.\n6. There are other paths to good outcomes that we view as lower-probability, but still sufficiently high-probability that the global community should allocate marginal resources to their pursuit.\n\nThe post [MIRI’s 2017 Fundraiser](https://intelligence.org/2017/12/01/miris-2017-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-12-01T23:00:21Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "fcc8ddeca83d06777ced172faf591fe0", "title": "Security Mindset and the Logistic Success Curve", "url": "https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/", "source": "miri", "source_type": "blog", "text": "Follow-up to:   [Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/)\n\n\n\n\n---\n\n\n\n \n\n\n(*Two days later, Amber returns with another question.*)\n\n\n \n\n\n**AMBER:**  Uh, say, Coral. 
How important is security mindset when you’re building a whole new kind of system—say, one subject to potentially adverse optimization pressures, where you want it to have some sort of robustness property?\n\n\n**CORAL:**  How novel is the system?\n\n\n**AMBER:**  Very novel.\n\n\n**CORAL:**  Novel enough that you’d have to invent your own new best practices instead of looking them up?\n\n\n**AMBER:**  Right.\n\n\n**CORAL:**  That’s serious business. If you’re building a very simple Internet-connected system, maybe a smart ordinary paranoid could look up how we usually guard against adversaries, use as much off-the-shelf software as possible that was checked over by real security professionals, and not do too horribly. But if you’re doing something qualitatively new and complicated that has to be robust against adverse optimization, well… mostly I’d think you were operating in almost impossibly dangerous territory, and I’d advise you to figure out what to do after your first try failed. But if you wanted to actually succeed, ordinary paranoia absolutely would not do it.\n\n\n**AMBER:**  In other words, projects to build novel mission-critical systems ought to have advisors with the full security mindset, so that the advisor can say what the system builders really need to do to ensure security.\n\n\n**CORAL:**  (*laughs sadly*)  No.\n\n\n**AMBER:**  No?\n\n\n\n**CORAL:**  Let’s say for the sake of concreteness that you want to build a new kind of secure operating system. That is *not* the sort of thing you can do by attaching one advisor with security mindset, who has limited political capital to use to try to argue people into doing things. “Building a house when you’re only allowed to touch the bricks using tweezers” comes to mind as a metaphor. You’re going to need experienced security professionals working full-time with high authority. Three of them, one of whom is a cofounder. 
Although even then, we might still be operating in the territory of Paul Graham’s Design Paradox.\n\n\n**AMBER:**  Design Paradox? What’s that?\n\n\n**CORAL:**  Paul Graham’s Design Paradox is that people who have good taste in UIs can tell when other people are designing good UIs, but most CEOs of big companies lack the good taste to tell who else has good taste. And that’s why big companies can’t just hire other people as talented as Steve Jobs to build nice things for them, even though Steve Jobs certainly wasn’t the best possible designer on the planet. Apple existed because of a lucky history where Steve Jobs ended up in charge. There’s no way for Samsung to hire somebody else with equal talents, because Samsung would just end up with some guy in a suit who was good at pretending to be Steve Jobs in front of a CEO who couldn’t tell the difference.\n\n\nSimilarly, people with security mindset can notice when other people lack it, but I’d worry that an ordinary paranoid would have a hard time telling the difference, which would make it hard for them to hire a truly competent advisor. And of course lots of the people in the larger social system behind technology projects lack even the ordinary paranoia that many good programmers possess, and they just end up with empty suits talking a lot about “risk” and “safety”. In other words, if we’re talking about something as hard as building a secure operating system, and your project hasn’t started up *already* headed up by someone with the full security mindset, you are in trouble. 
Where by “in trouble” I mean “totally, irretrievably doomed”.\n\n\n**AMBER:**  Look, uh, there’s a certain project I’m invested in which has raised a hundred million dollars to create merchant drones.\n\n\n**CORAL:**  Merchant drones?\n\n\n**AMBER:**  So there are a lot of countries that have poor market infrastructure, and the idea is, we’re going to make drones that fly around buying and selling things, and they’ll use machine learning to figure out what prices to pay and so on. We’re not just in it for the money; we think it could be a huge economic boost to those countries, really help them move forwards.\n\n\n**CORAL:**  Dear God. Okay. There are exactly two things your company is about: system security, and regulatory compliance. Well, and also marketing, but that doesn’t count because every company is about marketing. It would be a severe error to imagine that your company is about anything else, such as drone hardware or machine learning.\n\n\n**AMBER:**  Well, the sentiment inside the company is that the time to begin thinking about legalities and security will be after we’ve proven we can build a prototype and have at least a small pilot market in progress. I mean, until we know how people are using the system and how the software ends up working, it’s hard to see how we could do any productive thinking about security or compliance that wouldn’t just be pure speculation.\n\n\n**CORAL:**  Ha! Ha, hahaha… oh my god you’re not joking.\n\n\n**AMBER:**  What?\n\n\n**CORAL:**  Please tell me that what you actually mean is that you have a security and regulatory roadmap which calls for you to do some of your work later, but clearly lays out what work needs to be done, when you are to start doing it, and when each milestone needs to be complete. 
Surely you don’t *literally* mean that you *intend to start thinking about it* later?\n\n\n**AMBER:**  A lot of times at lunch we talk about how annoying it is that we’ll have to deal with regulations and how much better it would be if governments were more libertarian. That counts as thinking about it, right?\n\n\n**CORAL:**  Oh my god.\n\n\n**AMBER:**  I don’t see how we could have a security plan when we don’t know exactly what we’ll be securing. Wouldn’t the plan just turn out to be wrong?\n\n\n**CORAL:**  All business plans for startups turn out to be wrong, but you still need them—and not just as works of fiction. They represent the written form of your current beliefs about your key assumptions. Writing down your business plan checks whether your current beliefs can possibly be coherent, and suggests which critical beliefs to test first, and which results should set off alarms, and when you are falling behind key survival thresholds. The idea isn’t that you stick to the business plan; it’s that having a business plan (a) checks that it seems possible to succeed in any way whatsoever, and (b) tells you when one of your beliefs is being falsified so you can explicitly change the plan and adapt. Having a written plan that you intend to rapidly revise in the face of new information is one thing. *NOT HAVING A PLAN* is *another*.\n\n\n**AMBER:**  The thing is, I *am* a little worried that the head of the project, Mr. Topaz, isn’t concerned enough about the possibility of somebody fooling the drones into giving out money when they shouldn’t. I mean, I’ve tried to raise that concern, but he says that of course we’re not going to program the drones to give out money to just anyone. Can you maybe give him a few tips? For when it comes time to start thinking about security, I mean.\n\n\n**CORAL:**  Oh. Oh, my dear, sweet summer child, I’m sorry. There’s nothing I can do for you.\n\n\n**AMBER:**  Huh? 
But you haven’t even looked at our beautiful business model!\n\n\n**CORAL:**  I thought maybe your company merely had a hopeless case of underestimated difficulties and misplaced priorities. But now it sounds like your leader is not even using ordinary paranoia, and reacts with skepticism to it. Calling a case like that “hopeless” would be an understatement.\n\n\n**AMBER:**  But a security failure would be very bad for the countries we’re trying to help! They need *secure* merchant drones!\n\n\n**CORAL:**  Then they will need drones built by some project that is not led by Mr. Topaz.\n\n\n**AMBER:**  But that seems very hard to arrange!\n\n\n**CORAL:**  …I don’t understand what you are saying that is supposed to contradict anything I am saying.\n\n\n**AMBER:**  Look, aren’t you judging Mr. Topaz a little too quickly? Seriously.\n\n\n**CORAL:**  I haven’t met him, so it’s possible you misrepresented him to me. But if you’ve accurately represented his attitude? Then, yes, I did judge quickly, but it’s a hell of a good guess. Security mindset is already rare on priors. “I don’t plan to make my drones give away money to random people” means he’s imagining how his system could work as he intends, instead of imagining how it might not work as he intends. If somebody doesn’t even exhibit ordinary paranoia, spontaneously on their own cognizance without external prompting, then they cannot do security, period. Reacting indignantly to the suggestion that something might go wrong is even beyond that level of hopelessness, but the base level was hopeless enough already.\n\n\n**AMBER:**  Look… can you just go to Mr. Topaz and try to tell him what he needs to do to add some security onto his drones? Just try? Because it’s super important.\n\n\n**CORAL:**  I could try, yes. I can’t succeed, but I could try.\n\n\n**AMBER:**  Oh, but please be careful to not be harsh with him. 
Don’t put the focus on what he’s doing wrong—and try to make it clear that these problems aren’t *too* serious. He’s been put off by the media alarmism surrounding apocalyptic scenarios with armies of evil drones filling the sky, and it took me some trouble to convince him that I wasn’t just another alarmist full of fanciful catastrophe scenarios of drones defying their own programming.\n\n\n**CORAL:**  …\n\n\n**AMBER:**  And maybe try to keep your opening conversation away from what might sound like crazy edge cases, like somebody forgetting to check the end of a buffer and an adversary throwing in a huge string of characters that overwrite the end of the stack with a return address that jumps to a section of code somewhere else in the system that does something the adversary wants. I mean, you’ve convinced me that these far-fetched scenarios are worth worrying about, if only because they might be canaries in the coal mine for more realistic failure modes. But Mr. Topaz thinks that’s all a bit silly, and I don’t think you should open by trying to explain to him on a meta level why it isn’t. He’d probably think you were being condescending, telling him how to think. Especially when you’re just an operating-systems guy and you have no experience building drones and seeing what actually makes them crash. I mean, that’s what I think he’d say to you.\n\n\n**CORAL:**  …\n\n\n**AMBER:**  Also, start with the cheaper interventions when you’re giving advice. I don’t think Mr. Topaz is going to react well if you tell him that he needs to start all over in another programming language, or establish a review board for all code changes, or whatever. 
He’s worried about competitors reaching the market first, so he doesn’t want to do anything that will slow him down.\n\n\n**CORAL:**  …\n\n\n**AMBER:**  Uh, Coral?\n\n\n**CORAL:**  … on his novel project, entering new territory, doing things not exactly like what has been done before, carrying out novel mission-critical subtasks for which there are no standardized best security practices, nor any known understanding of what makes the system robust or not-robust.\n\n\n**AMBER:**  Right!\n\n\n**CORAL:**  And Mr. Topaz himself does not seem much terrified of this terrifying task before him.\n\n\n**AMBER:**  Well, he’s worried about somebody else making merchant drones first and misusing this key economic infrastructure for bad purposes. That’s the same basic thing, right? Like, it demonstrates that he can worry about things?\n\n\n**CORAL:**  It is utterly different. Monkeys who can be afraid of other monkeys getting to the bananas first are far, far more common than monkeys who worry about whether the bananas will exhibit weird system behaviors in the face of adverse optimization.\n\n\n**AMBER:**  Oh.\n\n\n**CORAL:**  I’m afraid it is only slightly more probable that Mr. Topaz will oversee the creation of robust software than that the Moon will spontaneously transform into organically farmed goat cheese.\n\n\n**AMBER:**  I think you’re being too harsh on him. I’ve met Mr. Topaz, and he seemed pretty bright to me.\n\n\n**CORAL:**  Again, assuming you’re representing him accurately, Mr. Topaz seems to lack what I called ordinary paranoia. If he does have that ability as a cognitive capacity, which many bright programmers do, then he obviously doesn’t feel passionate about applying that paranoia to his drone project along key dimensions. It also sounds like Mr. Topaz doesn’t realize there’s a skill that he is missing, and would be insulted by the suggestion. 
I am put in mind of the story of the farmer who was asked by a passing driver for directions to get to Point B, to which the farmer replied, “If I was trying to get to Point B, I sure wouldn’t start from here.”\n\n\n**AMBER:**  Mr. Topaz has made some significant advances in drone technology, so he can’t be stupid, right?\n\n\n**CORAL:**  “Security mindset” seems to be a distinct cognitive talent from *g* factor or even programming ability. In fact, there doesn’t seem to be a level of human genius that even guarantees you’ll be skilled at ordinary paranoia. Which does make some security professionals feel a bit weird, myself included—the same way a lot of programmers have trouble understanding why not everyone can learn to program. But it seems to be an observational fact that both ordinary paranoia and security mindset are things that can decouple from *g* factor and programming ability—and if this were not the case, the Internet would be far more secure than it is.\n\n\n**AMBER:**  Do you think it would help if we talked to the other VCs funding this project and got them to ask Mr. Topaz to appoint a Special Advisor on Robustness reporting directly to the CTO? That sounds politically difficult to me, but it’s possible we could swing it. Once the press started speculating about drones going rogue and maybe aggregating into larger Voltron-like robots that could acquire laser eyes, Mr. Topaz did tell the VCs that he was very concerned about the ethics of drone safety and that he’d had many long conversations about it over lunch hours.\n\n\n**CORAL:**  I’m venturing slightly outside my own expertise here, which isn’t corporate politics per se. But on a project like this one that’s trying to enter novel territory, I’d guess the person with security mindset needs at least cofounder status, and must be personally trusted by any cofounders who don’t have the skill. 
It can’t be an outsider who was brought in by VCs, who is operating on limited political capital and needs to win an argument every time she wants to not have all the services conveniently turned on by default. I suspect you just have the wrong person in charge of this startup, and that this problem is not repairable.\n\n\n**AMBER:**  Please don’t just give up! Even if things are as bad as you say, just increasing our project’s probability of being secure from 0% to 10% would be very valuable in expectation to all those people in other countries who need merchant drones.\n\n\n**CORAL:**  …look, at some point in life we have to try to triage our efforts and give up on what can’t be salvaged. There’s often a logistic curve for success probabilities, you know? The distances are measured in multiplicative odds, not additive percentage points. You can’t take a project like this and assume that by putting in some more hard work, you can increase the absolute chance of success by 10%. More like, the odds of this project’s failure versus success start out as 1,000,000:1, and if we’re very polite and navigate around Mr. Topaz’s sense that he is higher-status than us and manage to explain a few tips to him without ever sounding like we think we know something he doesn’t, we can quintuple his chances of success and send the odds to 200,000:1. Which is to say that in the world of percentage points, the odds go from 0.0% to 0.0%. That’s one way to look at the “[law of continued failure](https://intelligence.org/2017/10/13/fire-alarm)”.\n\n\nIf you had the kind of project where the fundamentals implied, say, a 15% chance of success, you’d then be on the right part of the logistic curve, and in *that* case it could make a lot of sense to hunt for ways to bump that up to a 30% or 80% chance.\n\n\n**AMBER:**  Look, I’m worried that it will really be very bad if Mr. Topaz reaches the market first with insecure drones. 
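Coral's point about multiplicative odds versus additive percentage points can be checked directly. A short sketch using the numbers from the dialogue (the `success_prob` helper is illustrative):

```python
# Convert multiplicative failure:success odds into a success probability.
def success_prob(failure_odds):
    """Probability of success when the odds against are failure_odds : 1."""
    return 1.0 / (failure_odds + 1.0)

# Coral's numbers: the project starts at 1,000,000 : 1 against.
before = success_prob(1_000_000)
# Quintupling the odds of success divides the failure odds by five.
after = success_prob(1_000_000 / 5)

# Both round to 0.0% -- movement along the flat tail of the logistic curve.
print(f"before: {before:.1%}, after: {after:.1%}")

# A project starting at 15% sits on the steep part of the curve, where the
# same quintupling of success odds is large rather than invisible.
odds_15 = 0.15 / 0.85                       # success : failure odds
improved = (odds_15 * 5) / (odds_15 * 5 + 1)
print(f"15% project after the same quintupling: {improved:.1%}")
```

The same multiplicative improvement that is invisible at 1,000,000 : 1 roughly triples a 15% project's chances, which is the asymmetry Coral is pointing at.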
Like, I think that merchant drones could be very beneficial to countries without much existing market backbone, and if there’s a grand failure—especially if some of the would-be customers have their money or items stolen—then it could poison the potential market for years. It will be terrible! Really, genuinely terrible!\n\n\n**CORAL:**  Wow. That sure does sound like an unpleasant scenario to have wedged yourself into.\n\n\n**AMBER:**  But what do we do now?\n\n\n**CORAL:**  Damned if I know. I do suspect you’re screwed so long as you can only win if somebody like Mr. Topaz creates a robust system. I guess you could try to have some other drone project come into existence, headed up by somebody that, say, Bruce Schneier assures everyone is unusually good at security-mindset thinking and hence can hire people like me and listen to all the harsh things we have to say. Though I have to admit, the part where you think it’s drastically important that you beat an insecure system to market with a secure system—well, that sounds positively nightmarish. You’re going to need a lot more resources than Mr. Topaz has, or some other kind of very major advantage. Security takes time.\n\n\n**AMBER:**  Is it really that hard to add security to the drone system?\n\n\n**CORAL:**  You keep talking about “adding” security. System robustness isn’t the kind of property you can bolt onto software as an afterthought.\n\n\n**AMBER:**  I guess I’m having trouble seeing why it’s so much more expensive. Like, if somebody foolishly builds an OS that gives access to just anyone, you could instead put a password lock on it, using your clever system where the OS keeps the hashes of the passwords instead of the passwords. You just spend a couple of days rewriting all the services exposed to the Internet to ask for passwords before granting access. And then the OS has security on it! Right?\n\n\n**CORAL:**  NO. 
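The scheme Amber alludes to (storing salted, deliberately slow hashes instead of the passwords themselves) looks roughly like the following standard-library sketch; the salt size and iteration count here are illustrative choices, not recommendations:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the password itself."""
    salt = os.urandom(16)  # per-password random salt
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, 100_000  # many rounds: slow on purpose
    )
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison, so timing doesn't leak how close a guess is.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("Password1!", salt, digest))                    # False
```

As Coral goes on to say, this one component being right does not make the surrounding system secure; it is one brick, built robustly.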
Everything inside your system that is potentially subject to adverse selection in its probability of weird behavior is a liability! Everything exposed to an attacker, and everything those subsystems interact with, and everything *those* parts interact with! You have to build *all* of it robustly! If you want to build a secure OS you need a whole special project that is “building a secure operating system instead of an insecure operating system”. And you also need to restrict the scope of your ambitions, and not do everything you want to do, and obey other commandments that will feel like big unpleasant sacrifices to somebody who doesn’t have the full security mindset. OpenBSD can’t do a tenth of what Ubuntu does. They can’t afford to! It would be too large of an attack surface! They can’t review that much code using the special process that they use to develop secure software! They can’t hold that many assumptions in their minds!\n\n\n**AMBER:**  Does that effort *have* to take a significant amount of extra time? Are you sure it can’t just be done in a couple more weeks if we hurry?\n\n\n**CORAL:**  YES. Given that this is a novel project entering new territory, expect it to take *at least* two years more time, or 50% more development time—whichever is less—compared to a security-incautious project that otherwise has identical tools, insights, people, and resources. 
And that is a very, very optimistic lower bound.\n\n\n**AMBER:**  This story seems to be heading in a worrying direction.\n\n\n**CORAL:**  Well, I’m sorry, but creating robust systems takes longer than creating non-robust systems even in cases where it would be really, extraordinarily bad if creating robust systems took longer than creating non-robust systems.\n\n\n**AMBER:**  Couldn’t it be the case that, like, projects which are implementing good security practices do everything so much cleaner and better that they can come to market faster than any insecure competitors could?\n\n\n**CORAL:**  … I honestly have trouble seeing [why](http://www.readthesequences.com/MotivatedStoppingAndMotivatedContinuation) you’re [privileging that hypothesis](https://www.readthesequences.com/PrivilegingTheHypothesis) for consideration. Robustness involves assurance processes that take additional time. OpenBSD does not go through lines of code faster than Ubuntu.\n\n\nBut more importantly, if everyone has access to the same tools and insights and resources, then an unusually fast method of doing something cautiously can always be degenerated into an even faster method of doing the thing incautiously. There is not now, nor will there ever be, a programming language in which it is the least bit difficult to write bad programs. There is not now, nor will there ever be, a methodology that makes writing insecure software inherently slower than writing secure software. Any security professional who heard about your bright hopes would just laugh. 
Ask them too if you don’t believe me.\n\n\n**AMBER:**  But shouldn’t engineers who aren’t cautious just be unable to make software at all, because of ordinary bugs?\n\n\n**CORAL:**  I am afraid that it is both possible, and *extremely* common in practice, for people to fix all the bugs that are crashing their systems in ordinary testing today, using methodologies that are indeed adequate to fixing ordinary bugs that show up often enough to afflict a significant fraction of users, and then ship the product. They get everything working today, and they don’t feel like they have the slack to delay any longer than that before shipping because the product is already behind schedule. They don’t hire exceptional people to do ten times as much work in order to prevent the product from having holes that only show up under adverse optimization pressure, that somebody else finds first and that they learn about after it’s too late.\n\n\nIt’s not even the wrong decision, for products that aren’t connected to the Internet, don’t have enough users for one to go rogue, don’t handle money, don’t contain any valuable data, and don’t do anything that could injure people if something goes wrong. If your software doesn’t destroy anything important when it explodes, it’s probably a better use of limited resources to plan on fixing bugs as they show up.\n\n\n… Of course, you need some amount of security mindset to realize which software *can* in fact destroy the company if it silently corrupts data and nobody notices this until a month later. I don’t suppose it’s the case that your drones only carry a limited amount of the full corporate budget in cash over the course of a day, and you always have more than enough money to reimburse all the customers if all items in transit over a day were lost, taking into account that the drones might make many more purchases or sales than usual? 
And that the systems are generating internal paper receipts that are clearly shown to the customer and non-electronically reconciled once per day, thereby enabling you to notice a problem before it’s too late?\n\n\n**AMBER:**  Nope!\n\n\n**CORAL:**  Then as you say, it would be better for the world if your company didn’t exist and wasn’t about to charge into this new territory and poison it with a spectacular screwup.\n\n\n**AMBER:**  If I believed that… well, Mr. Topaz certainly isn’t going to stop his project or let somebody else take over. It seems the logical implication of what you say you believe is that I should try to persuade the venture capitalists I know to launch a safer drone project with even more funding.\n\n\n**CORAL:**  Uh, I’m sorry to be blunt about this, but I’m not sure *you* have a high enough level of security mindset to identify an executive who’s sufficiently better than you at it. Trying to get enough of a resource advantage to beat the insecure product to market is only half of your problem in launching a competing project. The other half of your problem is surpassing the prior rarity of people with truly deep security mindset, and getting somebody like that in charge and fully committed. Or at least get them in as a highly trusted, fully committed cofounder who isn’t on a short budget of political capital. I’ll say it again: an advisor appointed by VCs isn’t nearly enough for a project like yours. Even if the advisor is a genuinely good security professional—\n\n\n**AMBER:**  This all seems like an unreasonably difficult requirement! Can’t you back down on it a little?\n\n\n**CORAL:**  —the person in charge will probably try to bargain down reality, as represented by the unwelcome voice of the security professional, who won’t have enough social capital to badger them into “unreasonable” measures. Which means you fail on full automatic.\n\n\n**AMBER:**  … Then what am I to do?\n\n\n**CORAL:**  I don’t know, actually. 
But there’s no point in launching another drone project with even more funding, if it just ends up with another Mr. Topaz put in charge. Which, by default, is exactly what your venture capitalist friends are going to do. Then you’ve just set an even higher competitive bar for anyone actually trying to be first to market with a secure solution, may God have mercy on their souls. \n\n\nBesides, if Mr. Topaz thinks he has a competitor breathing down his neck and rushes his product to market, his chance of creating a secure system could drop by a factor of ten and go all the way from 0.0% to 0.0%.\n\n\n**AMBER:**  Surely my VC friends have faced this kind of problem before and know how to identify and hire executives who can do security well?\n\n\n**CORAL:**  … If one of your VC friends is Paul Graham, then maybe yes. But in the average case, *NO*.\n\n\nIf average VCs always made sure that projects which needed security had a founder or cofounder with strong security mindset—if they had the *ability* to do that *even in cases where they decided they wanted to*—the Internet would again look like a very different place. By default, your VC friends will be fooled by somebody who looks very sober and talks a lot about how terribly concerned he is with cybersecurity and how the system is going to be ultra-secure and reject over nine thousand common passwords, including the thirty-six passwords listed on this slide here, and the VCs will ooh and ah over it, especially as one of them realizes that their own password is on the slide. *That* project leader is absolutely not going to want to hear from me—even less so than Mr. Topaz. To him, I’m a political threat who might damage his line of patter to the VCs.\n\n\n**AMBER:**  I have trouble believing all these smart people are really that stupid.\n\n\n**CORAL:**  You’re compressing your innate sense of social status and your estimated level of how good particular groups are at this particular ability into a single dimension. 
That is not a good idea.\n\n\n**AMBER:**  I’m not saying that I think everyone with high status already knows the deep security skill. I’m just having trouble believing that they can’t learn it quickly once told, or could be stuck not being able to identify good advisors who have it. That would mean they couldn’t know something you know, something that seems important, and that just… feels *off* to me, somehow. Like, there are all these successful and important people out there, and you’re saying [you’re *better* than them](https://www.lesswrong.com/sequences/oLGCcbnvabyibnG9d), even with all their influence, their skills, their resources—\n\n\n**CORAL:**  Look, you don’t have to take my word for it. Think of all the websites you’ve been on, with snazzy-looking design, maybe with millions of dollars in sales passing through them, that want your password to be a mixture of uppercase and lowercase letters and numbers. In other words, they want you to enter “Password1!” instead of “correct horse battery staple”. Every one of those websites is doing a thing that looks humorously silly to someone with a full security mindset or even just somebody who regularly reads [XKCD](https://xkcd.com/936/). It says that the security system was set up by somebody who didn’t know what they were doing and was blindly imitating impressive-looking mistakes they saw elsewhere.\n\n\nDo you think that makes a good impression on their customers? That’s right, it does! Because the customers don’t know any better. Do you think that login system makes a good impression on the company’s investors, including professional VCs and probably some angels with their own startup experience? That’s right, it does! Because the VCs don’t know any better, and even the angel doesn’t know any better, and they don’t realize they’re missing a vital skill, and they aren’t consulting anyone who knows more. 
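(Stepping out of the dialogue for a moment: the XKCD comparison Coral cites can be checked with a little arithmetic. This is a rough sketch; the 2,048-word list and the per-component bit estimates are XKCD 936's assumptions, not anything established in this dialogue.)

```python
import math

# Four words drawn uniformly at random from a 2,048-word list
# ("correct horse battery staple"), per XKCD 936's assumptions:
passphrase_bits = 4 * math.log2(2048)      # 44 bits

# XKCD 936's estimate for a policy-compliant "Tr0ub4dor&3"-style password:
# uncommon base word (~16 bits) + capitalization (1) + common letter
# substitutions (3) + appended digit, punctuation, and their order (~8):
template_bits = 16 + 1 + 3 + 8             # ~28 bits

# Expected crack time at 1,000 guesses per second (a weak online attack):
def years(bits):
    return 2 ** bits / 1000 / (3600 * 24 * 365)

print(round(passphrase_bits), round(years(passphrase_bits)))   # 44 558
print(template_bits, round(years(template_bits) * 365, 1))     # 28 3.1
```

The memorable passphrase the "silly" XKCD comic recommends is about 65,000 times harder to guess than the string that satisfies the uppercase-lowercase-digit-punctuation checkbox.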
An innocent is *impressed* if a website requires a mix of uppercase and lowercase letters and numbers *and* punctuation. They think the people running the website must really care to impose a security measure that unusual and inconvenient. The people running the website think that’s what they’re doing too.\n\n\nPeople with deep security mindset are both rare and rarely *appreciated*. You can see just from the login system that none of the VCs and none of the C-level executives at that startup thought they needed to consult a real professional, or managed to find a real professional rather than an empty suit if they went consulting. There was, visibly, nobody in the neighboring system with the combined knowledge and status to walk over to the CEO and say, “Your login system is embarrassing and you need to hire a real security professional.” Or if anybody did say that to the CEO, the CEO was offended and shot the messenger for not phrasing it ever-so-politely enough, or the CTO saw the outsider as a political threat and bad-mouthed them out of the game.\n\n\nYour wishful should-universe hypothesis that people who can touch the full security mindset are more common than that within the venture capital and angel investing ecosystem is just flat wrong. Ordinary paranoia directed at widely-known adversarial cases is dense enough within the larger ecosystem to exert widespread social influence, albeit still comically absent in many individuals and regions. People with the full security mindset are too rare to have the same level of presence. That’s the *easily visible* truth. You can *see* the login systems that want a punctuation mark in your password. You are not hallucinating them.\n\n\n**AMBER:**  If that’s all true, then I just don’t see how I can win. 
Maybe I should just condition on everything you say being false, since, if it’s true, my winning seems unlikely—in which case all victories on my part would come in worlds with other background assumptions.\n\n\n**CORAL:**  … is that something you say often?\n\n\n**AMBER:**  Well, I say it whenever my victory starts to seem sufficiently unlikely.\n\n\n**CORAL:**  Goodness. I could maybe, *maybe* see somebody saying that once over the course of their entire lifetime, for a single unlikely conditional, but doing it more than once is sheer madness. I’d expect the unlikely conditionals to build up very fast and drop the probability of your mental world to effectively zero. It’s tempting, but it’s usually a bad idea to slip sideways into your own private [hallucinatory universe](https://www.facebook.com/yudkowsky/posts/10154981483669228) when you feel you’re under emotional pressure. I tend to believe that no matter what the difficulties, we are most likely to come up with good plans when we are mentally living in reality as opposed to somewhere else. If things seem difficult, we must face the difficulty squarely to succeed, to come up with some solution that faces down how bad the situation really is, rather than deciding to condition on things not being difficult because then it’s too hard.\n\n\n**AMBER:**  Can you at least *try* talking to Mr. Topaz and advise him how to make things be secure?\n\n\n**CORAL:**  Sure. Trying things is easy, and I’m a character in a dialogue, so my opportunity costs are low. I’m sure Mr. Topaz is trying to build secure merchant drones, too. It’s succeeding at things that is the hard part.\n\n\n**AMBER:**  Great, I’ll see if I can get Mr. Topaz to talk to you. But do please be polite! If you think he’s doing something wrong, try to point it out more gently than the way you’ve talked to me. 
I think I have enough political capital to get you in the door, but that won’t last if you’re rude.\n\n\n**CORAL:**  You know, back in mainstream computer security, when you propose a new way of securing a system, it’s considered traditional and wise for everyone to gather around and try to come up with reasons why your idea might not work. It’s understood that no matter how smart you are, most seemingly bright ideas turn out to be flawed, and that you shouldn’t be touchy about people trying to shoot them down. Does Mr. Topaz have no acquaintance at all with the practices in computer security? A lot of programmers do.\n\n\n**AMBER:**  I think he’d say he respects computer security as its own field, but he doesn’t believe that building secure operating systems is the same problem as building merchant drones.\n\n\n**CORAL:**  And if I suggested that this case might be similar to the problem of building a secure operating system, and that this case creates a similar need for more effortful and cautious development, requiring both (a) additional development time and (b) a special need for caution supplied by people with unusual mindsets above and beyond ordinary paranoia, who have an unusual skill that identifies shaky assumptions in a safety story before an ordinary paranoid would judge a fire as being urgent enough to need putting out, who can remedy the problem using deeper solutions than an ordinary paranoid would generate as parries against imagined attacks?\n\n\nIf I suggested, indeed, that this scenario might hold generally wherever we demand robustness of a complex system that is being subjected to strong external or internal optimization pressures? Pressures that strongly promote the probabilities of particular states of affairs via optimization that searches across a large and complex state space? Pressures which therefore in turn subject other subparts of the system to selection for weird states and previously unenvisioned execution paths? 
Especially if some of these pressures may be in some sense creative and find states of the system or environment that surprise us or violate our surface generalizations?\n\n\n**AMBER:**  I think he’d probably think you were trying to look smart by using overly abstract language at him. Or he’d reply that he didn’t see why this took any more caution than he was already using just by testing the drones to make sure they didn’t crash or give out too much money.\n\n\n**CORAL:**  I see.\n\n\n**AMBER:**  So, shall we be off?\n\n\n**CORAL:**  Of course! No problem! I’ll just go meet with Mr. Topaz and use verbal persuasion to turn him into Bruce Schneier.\n\n\n**AMBER:**  That’s the spirit!\n\n\n**CORAL:**  God, how I wish I lived in the territory that corresponds to your map.\n\n\n**AMBER:**  Hey, come on. Is it seriously *that* hard to bestow exceptionally rare mental skills on people by talking at them? I agree it’s a bad sign that Mr. Topaz shows no sign of wanting to acquire those skills, and doesn’t think we have enough relative status to continue listening if we say something he doesn’t want to hear. But that just means we have to phrase our advice cleverly so that he *will* want to hear it!\n\n\n**CORAL:**  I suppose you could modify your message into something Mr. Topaz doesn’t find so unpleasant to hear. Something that sounds related to the topic of drone security, but which doesn’t cost him much, and of course does not actually cause his drones to end up secure because that would be all unpleasant and expensive. You could slip a little sideways in reality, and convince yourself that you’ve gotten Mr. Topaz to ally with you, because he sounds agreeable now. Your instinctive desire for the high-status monkey to be on your political side will feel like its problem has been solved. 
You can substitute the feeling of having solved that problem for the unpleasant sense of not having secured the actual drones; you can tell yourself that the bigger monkey will take care of everything now that he seems to be on your pleasantly-modified political side. And so you will be happy. Until the merchant drones hit the market, of course, but that unpleasant experience should be brief.\n\n\n**AMBER:**  Come on, we can do this! You’ve just got to think positively!\n\n\n**CORAL:**  … Well, if nothing else, this should be an interesting experience. I’ve never tried to do anything quite this doomed before.\n\n\n\n \n\n\nThe post [Security Mindset and the Logistic Success Curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-11-26T16:11:36Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "2b6e9eaa1e925345c5fe438786ed12a4", "title": "Security Mindset and Ordinary Paranoia", "url": "https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/", "source": "miri", "source_type": "blog", "text": "The following is a fictional dialogue building off of [AI Alignment: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/).\n\n\n\n\n---\n\n\n\n \n\n\n(**AMBER***, a philanthropist interested in a more reliable Internet, and* **CORAL***, a computer security professional, are at a conference hotel together discussing what Coral insists is a difficult and important issue: the difficulty of building “secure” software.*)\n\n\n \n\n\n**AMBER:**  So, Coral, I understand that you believe it is very important, when creating software, to make that software be what you call “secure”.\n\n\n**CORAL:**  Especially if it’s connected to the Internet, or if it controls money or other valuables. 
But yes, that’s right.\n\n\n**AMBER:**  I find it hard to believe that this needs to be a separate topic in computer science. In general, programmers need to figure out how to make computers do what they want. The people building operating systems surely won’t want them to give access to unauthorized users, just like they won’t want those computers to crash. Why is one problem so much more difficult than the other?\n\n\n**CORAL:**  That’s a deep question, but to give a partial deep answer: When you expose a device to the Internet, you’re potentially exposing it to intelligent adversaries who can find special, weird interactions with the system that make the pieces behave in weird ways that the programmers did not think of. When you’re dealing with that kind of problem, you’ll use a different set of methods and tools.\n\n\n**AMBER:**  Any system that crashes is behaving in a way the programmer didn’t expect, and programmers already need to stop that from happening. How is this case different?\n\n\n**CORAL:**  Okay, so… imagine that your system is going to take in one kilobyte of input per session. (Although that itself is the sort of assumption we’d question and ask what happens if it gets a megabyte of input instead—but never mind.) If the input is one kilobyte, then there are 2^8,000 possible inputs, or about 10^2,400 or so. Again, for the sake of extending the simple visualization, imagine that a computer gets a billion inputs per second. Suppose that only a googol, 10^100, out of the 10^2,400 possible inputs, cause the system to behave a certain way the original designer didn’t intend.\n\n\nIf the system is getting inputs in a way that’s uncorrelated with whether the input is a misbehaving one, it won’t hit on a misbehaving state before the end of the universe. If there’s an intelligent adversary who understands the system, on the other hand, they may be able to find one of the very rare inputs that makes the system misbehave. 
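(An aside: Coral's numbers can be sanity-checked directly. A sketch, taking "one kilobyte" as 8,000 bits to match the 2^8,000 figure:)

```python
import math

BITS = 8000                    # one kilobyte of input, taken as 8,000 bits
# Number of decimal digits in 2^8000:
print(math.floor(BITS * math.log10(2)) + 1)   # 2409 digits, i.e. about 10^2,408

# A googol of bad inputs out of ~10^2,408 total is a fraction of 10^-2308,
# so uncorrelated random inputs arriving at a billion per second take
# ~10^2299 seconds on average to hit one.  For scale, the universe is
# only about 4 * 10^17 seconds old.
log10_expected_seconds = 2408 - 100 - 9
print(log10_expected_seconds)                 # 2299
```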
So a piece of the system that would literally never in a million years misbehave on random inputs, may break when an intelligent adversary tries deliberately to break it.\n\n\n**AMBER:**  So you’re saying that it’s more difficult because the programmer is pitting their wits against an adversary who may be more intelligent than themselves.\n\n\n**CORAL:**  That’s an almost-right way of putting it. What matters isn’t so much the “adversary” part as the optimization part. There are systematic, nonrandom forces strongly selecting for particular outcomes, causing pieces of the system to go down weird execution paths and occupy unexpected states. If your system literally has no misbehavior modes at all, it doesn’t matter if you have IQ 140 and the enemy has IQ 160—it’s not an arm-wrestling contest. It’s just very much harder to build a system that doesn’t enter weird states when the weird states are being selected-for in a correlated way, rather than happening only by accident. The weirdness-selecting forces can search through parts of the larger state space that you yourself failed to imagine. Beating that does indeed require new skills and a different mode of thinking, what Bruce Schneier called “security mindset”.\n\n\n**AMBER:**  Ah, and what is this security mindset?\n\n\n**CORAL:**  I can say one or two things about it, but keep in mind we are dealing with a quality of thinking that is not entirely effable. If I could give you a handful of platitudes about security mindset, and that would actually cause you to be able to design secure software, the Internet would look very different from how it presently does. That said, it seems to me that what has been called “security mindset” can be divided into two components, one of which is much less difficult than the other. And this can fool people into overestimating their own safety, because they can get the easier half of security mindset and overlook the other half. 
The less difficult component, I will call by the term “ordinary paranoia”.\n\n\n**AMBER:**  *Ordinary* paranoia?\n\n\n**CORAL:**  Lots of programmers have the ability to imagine adversaries trying to threaten them. They imagine how likely it is that the adversaries are able to attack them a particular way, and then they try to block off the adversaries from threatening that way. Imagining attacks, including weird or clever attacks, and parrying them with measures you imagine will stop the attack; that is ordinary paranoia.\n\n\n**AMBER:**  Isn’t that what security is all about? What do you claim is the other half?\n\n\n**CORAL:**  To put it as a platitude, I might say… defending against mistakes in your own assumptions rather than against external adversaries. \n\n\n\n\n**AMBER:**  Can you give me an example of a difference?\n\n\n**CORAL:**  An ordinary paranoid programmer imagines that an adversary might try to read the file containing all the usernames and passwords. They might try to store the file in a special, secure area of the disk or a special subpart of the operating system that’s supposed to be harder to read. Conversely, somebody with security mindset thinks, “No matter what kind of special system I put around this file, I’m disturbed by needing to make the assumption that this file can’t be read. Maybe the special code I write, because it’s used less often, is more likely to contain bugs. Or maybe there’s a way to fish data out of the disk that doesn’t go through the code I wrote.”\n\n\n**AMBER:**  And they imagine more and more ways that the adversary might be able to get at the information, and block those avenues off too! Because they have better imaginations.\n\n\n**CORAL:**  Well, we kind of do, but that’s not the key difference. 
What we’ll really want to do is come up with a way for the computer to check passwords that doesn’t rely on the computer storing the password *at all, anywhere*.\n\n\n**AMBER:**  Ah, like encrypting the password file!\n\n\n**CORAL:**  No, that just duplicates the problem at one remove. If the computer can decrypt the password file to check it, it’s stored the decryption key somewhere, and the attacker may be able to steal that key too.\n\n\n**AMBER:**  But then the attacker has to steal two things instead of one; doesn’t that make the system more secure? Especially if you write two different sections of special filesystem code for hiding the encryption key and hiding the encrypted password file?\n\n\n**CORAL:**  That’s exactly what I mean by distinguishing “ordinary paranoia” that doesn’t capture the full security mindset. So long as the system is capable of reconstructing the password, we’ll always worry that the adversary might be able to trick the system into doing just that. What somebody with security mindset will recognize as a deeper solution is to store a one-way hash of the password, rather than storing the plaintext password. Then even if the attacker reads off the password file, they still can’t give what the system will recognize as a password.\n\n\n**AMBER:**  Ah, that’s quite clever! But I don’t see what’s so qualitatively different between that measure, and my measure for hiding the key and the encrypted password file separately. I agree that your measure is more clever and elegant, but of course you’ll know better standard solutions than I do, since you work in this area professionally. I don’t see the qualitative line dividing your solution from my solution.\n\n\n**CORAL:**  Um, it’s hard to say this without offending some people, but… it’s possible that even after I try to explain the difference, which I’m about to do, you won’t get it. 
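(Stepping out of the dialogue again: Coral's deeper solution fits in a few lines. This is an illustrative sketch only; a single `hashlib.sha256` pass stands in for the deliberately slow, salted password hashes real systems should use.)

```python
import hashlib

def store(password: str) -> bytes:
    """Store only a one-way hash; the plaintext is never written anywhere."""
    return hashlib.sha256(password.encode()).digest()

def check(stored_hash: bytes, attempt: str) -> bool:
    """Re-hash the attempt and compare.  The system never needs to
    reconstruct the original password, so there is no decryption key
    anywhere for an attacker to steal."""
    return hashlib.sha256(attempt.encode()).digest() == stored_hash

record = store("correct horse battery staple")
print(check(record, "correct horse battery staple"))  # True
print(check(record, "Password1!"))                    # False
```

Even an attacker who reads `record` off the disk still cannot present anything the system will recognize as the password.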
Like I said, if I could give you some handy platitudes and transform you into somebody capable of doing truly good work in computer security, the Internet would look very different from its present form. I can try to describe one aspect of the difference, but that may put me in the position of a mathematician trying to explain what looks more promising about one proof avenue than another; you can listen to everything they say and nod along and still not be transformed into a mathematician. So I *am* going to try to explain the difference, but again, I don’t know of any simple instruction manuals for becoming Bruce Schneier.\n\n\n**AMBER:**  I confess to feeling slightly skeptical at this supposedly ineffable ability that some people possess and others don’t—\n\n\n**CORAL:**  There are things like that in many professions. Some people pick up programming at age five by glancing through a page of BASIC programs written for a TRS-80, and some people struggle really hard to grasp basic Python at age twenty-five. That’s not because there’s some mysterious truth the five-year-old knows that you can verbally transmit to the twenty-five-year-old.\n\n\nAnd, yes, the five-year-old will become far better with practice; it’s not like we’re talking about untrainable genius. And there may be platitudes you can tell the 25-year-old that will help them struggle a little less. But sometimes a profession requires thinking in an unusual way and some people’s minds more easily turn sideways in that particular dimension.\n\n\n**AMBER:**  Fine, go on.\n\n\n**CORAL:**  Okay, so… you thought of putting the encrypted password file in one special place in the filesystem, and the key in another special place. Why not encrypt the key too, write a third special section of code, and store the key to the encrypted key there? Wouldn’t that make the system even more secure? How about seven keys hidden in different places, wouldn’t that be extremely secure? 
Practically unbreakable, even?\n\n\n**AMBER:**  Well, that version of the idea does feel a little silly. If you’re trying to secure a door, a lock that takes two keys might be more secure than a lock that only needs one key, but seven keys doesn’t feel like it makes the door that much more secure than two.\n\n\n**CORAL:**  Why not?\n\n\n**AMBER:**  It just seems silly. You’d probably have a better way of saying it than I would.\n\n\n**CORAL:**  Well, a fancy way of describing the silliness is that the chance of obtaining the seventh key is not conditionally independent of the chance of obtaining the first two keys. If I can read the encrypted password file, and read your encrypted encryption key, then I’ve probably come up with something that just bypasses your filesystem and reads directly from the disk. And the more complicated you make your filesystem, the more likely it is that I can find a weird system state that will let me do just that. Maybe the special section of filesystem code you wrote to hide your fourth key is the one with the bug that lets me read the disk directly.\n\n\n**AMBER:**  So the difference is that the person with a *true* security mindset found a defense that makes the system simpler rather than more complicated.\n\n\n**CORAL:**  Again, that’s almost right. By hashing the passwords, the security professional has made their *reasoning* about the system less complicated. They’ve eliminated the need for an assumption that might be put under a lot of pressure. If you put the key in one special place and the encrypted password file in another special place, the system as a whole is still able to decrypt the user’s password. An adversary probing the state space might be able to trigger that password-decrypting state because the system is designed to do that on at least some occasions. 
By hashing the password file we eliminate that whole internal debate from the reasoning on which the system’s security rests.\n\n\n**AMBER:**  But even after you’ve come up with that clever trick, something could still go wrong. You’re still not absolutely secure. What if somebody uses “password” as their password?\n\n\n**CORAL:**  Or what if somebody comes up with a way to read off the password after the user has entered it and while it’s still stored in RAM, because something got access to RAM? The point of eliminating the extra assumption from the reasoning about the system’s security is not that we are then absolutely secure and safe and can relax. Somebody with security mindset is *never* going to be that relaxed about the edifice of reasoning saying the system is secure.\n\n\nFor that matter, while there are some normal programmers doing normal programming who might put in a bunch of debugging effort and then feel satisfied, like they’d done all they could reasonably do, programmers with decent levels of ordinary paranoia about ordinary programs will go on chewing ideas in the shower and coming up with more function tests for the system to pass. So the distinction between security mindset and ordinary paranoia isn’t that ordinary paranoids will relax.\n\n\nIt’s that… again to put it as a platitude, the ordinary paranoid is running around putting out fires in the form of ways they imagine an adversary might attack, and somebody with security mindset is defending against something closer to “what if an element of this reasoning is mistaken”. Instead of trying really hard to ensure nobody can read a disk, we are going to build a system that’s secure even if somebody does read the disk, and *that* is our first line of defense. 
And then we are also going to build a filesystem that doesn’t let adversaries read the password file, as a *second* line of defense in case our one-way hash is secretly broken, and because there’s no positive need to let adversaries read the disk so why let them. And then we’re going to salt the hash in case somebody snuck a low-entropy password through our system and the adversary manages to read the password anyway.\n\n\n**AMBER:**  So rather than trying to outwit adversaries, somebody with true security mindset tries to make fewer assumptions.\n\n\n**CORAL:**  Well, we think in terms of adversaries too! Adversarial reasoning is easier to teach than security mindset, but it’s still (a) mandatory and (b) hard to teach in an absolute sense. A lot of people can’t master it, which is why a description of “security mindset” often opens with a story about somebody failing at adversarial reasoning and somebody else launching a clever attack to penetrate their defense.\n\n\nYou need to master two ways of thinking, and there are a lot of people going around who have the first way of thinking but not the second. One way I’d describe the deeper skill is seeing a system’s security as resting on a story about why that system is safe. We want that safety-story to be as solid as possible. One of the implications is resting the story on as few assumptions as possible; as the saying goes, the only gear that never fails is one that has been designed out of the machine.\n\n\n**AMBER:**  But can’t you also get better security by adding more lines of defense? Wouldn’t that be more complexity in the story, and also better security?\n\n\n**CORAL:**  There’s also something to be said for preferring disjunctive reasoning over conjunctive reasoning in the safety-story. But it’s important to realize that you do want a primary line of defense that is supposed to just work and be unassailable, not a series of weaker fences that you think might maybe work. 
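(An aside: the salting step Coral mentions can be sketched as follows. Again illustrative; `pbkdf2_hmac` with 100,000 iterations is one reasonable slow hash, but the parameters here are placeholders, not a recommendation.)

```python
import hashlib
import hmac
import secrets

def store(password: str):
    """Hash with a fresh random salt, so identical or low-entropy passwords
    still produce unrelated entries and precomputed hash tables are useless."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(salt: bytes, stored: bytes, attempt: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, digest = store("hunter2")
print(check(salt, digest, "hunter2"))   # True
print(check(salt, digest, "hunter3"))   # False
```

Note that two users who both chose "hunter2" get different file entries, which is the point of the salt.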
Somebody who doesn’t understand cryptography might devise twenty clever-seeming amateur codes and apply them all in sequence, thinking that, even if one of the codes turns out to be breakable, surely they won’t *all* be breakable. The NSA will assign that mighty edifice of amateur encryption to an intern, and the intern will crack it in an afternoon.\n\n\nThere’s something to be said for redundancy, and having fallbacks in case the unassailable wall falls; it can be wise to have additional lines of defense, so long as the added complexity does not make the larger system harder to understand or increase its vulnerable surfaces. But at the core you need a simple, solid story about why the system is secure, and a good security thinker will be trying to eliminate whole assumptions from that story and strengthening its core pillars, not only scurrying around parrying expected attacks and putting out risk-fires.\n\n\nThat said, it’s better to use two true assumptions than one false assumption, so simplicity isn’t everything.\n\n\n**AMBER:**  I wonder if that way of thinking has applications beyond computer security?\n\n\n**CORAL:**  I’d rather think so, as the proverb about gears suggests.\n\n\nFor example, stepping out of character for a moment, the author of this dialogue has sometimes been known to discuss [the alignment problem for Artificial General Intelligence](https://arbital.com/p/ai_alignment/). He was talking at one point about trying to measure rates of improvement inside a growing AI system, so that it would not do too much thinking with humans out of the loop if a breakthrough occurred while the system was running overnight. The person he was talking to replied that, to him, it seemed unlikely that an AGI would gain in power that fast. To which the author replied, more or less:\n\n\n\n> It shouldn’t be your job to guess how fast the AGI might improve! 
If you write a system that will hurt you *if* a certain speed of self-improvement turns out to be possible, then you’ve written the wrong code. The code should just never hurt you regardless of the true value of that background parameter.\n> \n> \n> A better way to set up the AGI would be to measure how much improvement is taking place, and if more than *X* improvement takes place, suspend the system until a programmer validates the progress that’s already occurred. That way even if the improvement takes place over the course of a millisecond, you’re still fine, so long as the system works as intended. Maybe the system doesn’t work as intended because of some other mistake, but that’s a better problem to worry about than a system that hurts you *even if* it works as intended.\n> \n> \n> Similarly, you want to design the system so that if it discovers amazing new capabilities, it waits for an operator to validate use of those capabilities—not rely on the operator to watch what’s happening and press a suspend button. You shouldn’t rely on the speed of discovery or the speed of disaster being less than the operator’s reaction time. There’s no *need* to bake in an assumption like that if you can find a design that’s safe regardless. For example, by operating on a paradigm of allowing operator-whitelisted methods rather than avoiding operator-blacklisted methods; you require the operator to say “Yes” before proceeding, rather than assuming they’re present and attentive and can say “No” fast enough.\n> \n> \n\n\n**AMBER:**  Well, okay, but if we’re guarding against an AI system discovering cosmic powers in a millisecond, that does seem to me like an unreasonable thing to worry about. 
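(Stepping out of the dialogue once more: the whitelist-and-suspend pattern in the quoted passage can be caricatured in a few lines. A toy sketch; every name here is illustrative, and nothing this simple would secure a real AGI system.)

```python
MAX_IMPROVEMENT = 0.05   # suspend if measured capability grows more than this

class Suspended(Exception):
    """Halt until a human operator validates the progress already made."""

class ToySystem:
    def __init__(self):
        self.capability = 0.0
    def run_one_step(self, allowed_methods):
        # Only operator-whitelisted methods may run -- "Yes"-gating rather
        # than relying on an operator to say "No" fast enough.
        for method in allowed_methods:
            self.capability += method()

def step(system, whitelist):
    before = system.capability
    system.run_one_step(allowed_methods=whitelist)
    gain = system.capability - before
    # The check fires on the measured gain itself, so the design stays safe
    # even if a breakthrough happens over the course of a millisecond.
    if gain > MAX_IMPROVEMENT:
        raise Suspended(f"gain {gain:.3f} exceeds {MAX_IMPROVEMENT}; operator sign-off required")

sys_ = ToySystem()
step(sys_, whitelist=[lambda: 0.01])       # small gain: proceeds normally
try:
    step(sys_, whitelist=[lambda: 0.50])   # sudden breakthrough: suspends itself
except Suspended as e:
    print("suspended:", e)
```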
I guess that marks me as a merely ordinary paranoid.\n\n\n**CORAL:**  Indeed, one of the hallmarks of security professionals is that they spend a lot of time worrying about edge cases that would fail to alarm an ordinary paranoid because the edge case doesn’t sound like something an adversary is likely to do. Here’s an example [from the Freedom to Tinker blog](https://freedom-to-tinker.com/2008/03/26/security-mindset-and-harmless-failures/):\n\n\n\n> This interest in “harmless failures” – cases where an adversary can cause an anomalous but not directly harmful outcome – is another hallmark of the security mindset. Not all “harmless failures” lead to big trouble, but it’s surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can…\n> \n> \n> To see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don’t want the recipient to reply to the email, they often put in a bogus From address like donotreply@donotreply.com. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included “bounce” replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on…\n> \n> \n> The people who put donotreply.com email addresses into their outgoing email must have known that they didn’t control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. 
The second way is to think, “This looks like a harmless failure, but we should avoid it anyway. No good can come of this.” The first way protects you if you’re clever; the second way always protects you.\n> \n> \n\n\n“The first way protects you if you’re clever; the second way always protects you.” That’s very much the other half of the security mindset. It’s what this essay’s author was doing by talking about AGI alignment that runs on whitelisting rather than blacklisting: you shouldn’t assume you’ll be clever about how fast the AGI system could discover capabilities, you should have a system that doesn’t use not-yet-whitelisted capabilities even if they are discovered very suddenly.\n\n\nIf your AGI would hurt you if it gained total cosmic powers in one millisecond, that means you built a cognitive process that is in some sense trying to hurt you and failing only due to what you think is a lack of capability. This is *very bad* and you should be designing some other AGI system instead. AGI systems should never be running a search that will hurt you if the search comes up non-empty. You should not be trying to fix that by making sure the search comes up empty thanks to your clever shallow defenses closing off all the AGI’s clever avenues for hurting you. You should fix that by making sure no search like that ever runs. It’s a silly thing to do with computing power, and you should do something else with computing power instead.\n\n\nGoing back to ordinary computer security, if you try building a lock with seven keys hidden in different places, you are in some dimension pitting your cleverness against an adversary trying to read the keys. The person with security mindset doesn’t want to rely on having to win the cleverness contest. 
An ordinary paranoid, somebody who can master the kind of default paranoia that lots of intelligent programmers have, will look at the Reply-To field saying donotreply@donotreply.com and think about the possibility of an adversary registering the donotreply.com domain. Somebody with security mindset thinks in assumptions rather than adversaries. “Well, I’m assuming that this reply email goes nowhere,” they’ll think, “but maybe I should design the system so that I don’t need to fret about whether that assumption is true.”\n\n\n**AMBER:**  Because as the truly great paranoid knows, what seems like a ridiculously improbable way for the adversary to attack sometimes turns out to not be so ridiculous after all.\n\n\n**CORAL:**  Again, that’s a not-exactly-right way of putting it. When I don’t set up an email to originate from donotreply@donotreply.com, it’s not just because I’ve appreciated that an adversary registering donotreply.com is more probable than the novice imagines. For all I know, when a bounce email is sent to nowhere, there’s all kinds of things that might happen! Maybe the way a bounced email works is that the email gets routed around to weird places looking for that address. I don’t know, and I don’t want to have to study it. Instead I’ll ask: Can I make it so that a bounced email doesn’t generate a reply? Can I make it so that a bounced email doesn’t contain the text of the original message? Maybe I can query the email server to make sure it still has a user by that name before I try sending the message?—though there may still be “vacation” autoresponses that mean I’d better control the replied-to address myself. 
If it would be very bad for somebody unauthorized to read this, maybe I shouldn’t be sending it in plaintext by email.\n\n\n**AMBER:**  So the person with true security mindset understands that where there’s one problem, demonstrated by what seems like a very unlikely thought experiment, there’s likely to be more realistic problems that an adversary can in fact exploit. What I think of as weird improbable failure scenarios are canaries in the coal mine, that would warn a truly paranoid person of bigger problems on the way.\n\n\n**CORAL:**  Again that’s not exactly right. The person with ordinary paranoia hears about donotreply@donotreply.com and may think something like, “Oh, well, it’s not very likely that an attacker will actually try to register that domain, I have more urgent issues to worry about,” because in that mode of thinking, they’re running around putting out things that might be fires, and they have to prioritize the things that are most likely to be fires.\n\n\nIf you demonstrate a weird edge-case thought experiment to somebody with security mindset, they don’t see something that’s more likely to be a fire. They think, “Oh no, my belief that those bounce emails go nowhere was FALSE!” The OpenBSD project to build a secure operating system has also, in passing, built an extremely robust operating system, because from their perspective any bug that potentially crashes the system is considered a critical security hole. An ordinary paranoid sees an input that crashes the system and thinks, “A crash isn’t as bad as somebody stealing my data. Until you demonstrate to me that this bug can be used by the adversary to steal data, it’s not *extremely* critical.” Somebody with security mindset thinks, “Nothing inside this subsystem is supposed to behave in a way that crashes the OS. Some section of code is behaving in a way that does not work like my model of that code. Who knows what it might do? 
The system isn’t supposed to crash, so by making it crash, you have demonstrated that my beliefs about how this system works are false.”\n\n\n**AMBER:**  I’ll be honest: It *has* sometimes struck me that people who call themselves security professionals seem overly concerned with what, to me, seem like very improbable scenarios. Like somebody forgetting to check the end of a buffer and an adversary throwing in a huge string of characters that overwrite the end of the stack with a return address that jumps to a section of code somewhere else in the system that does something the adversary wants. How likely is that *really* to be a problem? I suspect that in the real world, what’s more likely is somebody making their password “password”. Shouldn’t you be mainly guarding against that instead?\n\n\n**CORAL:**  You have to do both. This game is short on consolation prizes. If you want your system to resist attack by major governments, you need it to actually be pretty darned secure, gosh darn it. The fact that some users may try to make their password be “password” does not change the fact that you also have to protect against buffer overflows.\n\n\n**AMBER:**  But even when somebody with security mindset designs an operating system, it often still ends up with successful attacks against it, right? So if this deeper paranoia doesn’t eliminate all chance of bugs, is it really worth the extra effort?\n\n\n**CORAL:**  If you don’t have somebody who thinks this way in charge of building your operating system, it has *no* chance of not failing immediately. People with security mindset sometimes fail to build secure systems. People without security mindset *always* fail at security if the system is at all complex. 
What this way of thinking buys you is a *chance* that your system takes longer than 24 hours to break.\n\n\n**AMBER:**  That sounds a little extreme.\n\n\n**CORAL:**  History shows that reality has not cared what you consider “extreme” in this regard, and that is why your Wi-Fi-enabled lightbulb is part of a Russian botnet.\n\n\n**AMBER:**  Look, I understand that you want to get all the fiddly tiny bits of the system exactly right. I like tidy neat things too. But let’s be reasonable; we can’t always get everything we want in life.\n\n\n**CORAL:**  You think you’re negotiating with me, but you’re really negotiating with Murphy’s Law. I’m afraid that Mr. Murphy has historically been quite unreasonable in his demands, and rather unforgiving of those who refuse to meet them. I’m not advocating a policy to you, just telling you what happens if you don’t follow that policy. Maybe you think it’s not particularly bad if your lightbulb is doing denial-of-service attacks on a mattress store in Estonia. But if you do want a system to be secure, you need to do certain things, and that part is more of a law of nature than a negotiable demand.\n\n\n**AMBER:**  Non-negotiable, eh? I bet you’d change your tune if somebody offered you twenty thousand dollars. But anyway, one thing I’m surprised you’re not mentioning more is the part where people with security mindset always submit their idea to peer scrutiny and then accept what other people vote about it. I do like the sound of that; it sounds very communitarian and modest.\n\n\n**CORAL:**  I’d say that’s part of the ordinary paranoia that lots of programmers have. The point of submitting ideas to others’ scrutiny isn’t that hard to understand, though certainly there are plenty of people who don’t even do that. 
If I had any original remarks to contribute to that well-worn topic in computer security, I’d remark that it’s framed as advice to wise paranoids, but of course the people who need it even more are the happy innocents.\n\n\n**AMBER:**  Happy innocents?\n\n\n**CORAL:**  People who lack even ordinary paranoia. Happy innocents tend to envision ways that their system works, but not ask *at all* how their system might fail, until somebody prompts them into that, and even then they can’t do it. Or at least that’s been my experience, and that of many others in the profession.\n\n\nThere’s a certain incredibly terrible cryptographic system, the equivalent of the Fool’s Mate in chess, which is sometimes converged on by the most total sort of amateur, namely Fast XOR. That’s picking a password, repeating the password, and XORing the data with the repeated password string. The person who invents this system may not be able to take the perspective of an adversary at all. *He* wants his marvelous cipher to be unbreakable, and he is not able to truly enter the frame of mind of somebody who wants his cipher to be breakable. If you ask him, “Please, *try* to imagine what could possibly go wrong,” he may say, “Well, if the password is lost, the data will be forever unrecoverable because my encryption algorithm is too strong; I guess that’s something that could go wrong.” Or, “Maybe somebody sabotages my code,” or, “If you really insist that I invent far-fetched scenarios, maybe the computer spontaneously decides to disobey my programming.” Of course any competent ordinary paranoid asks the most skilled people they can find to look at a bright idea and try to shoot it down, because other minds may come in at a different angle or know other standard techniques. 
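The "Fast XOR" scheme Coral describes, repeating the password and XORing it against the data, takes only a few lines to write, and a minimal sketch also shows one reason it is so weak: wherever the repeated key lines up, XORing two ciphertext bytes cancels the key entirely and exposes relationships between plaintext bytes.

```python
from itertools import cycle

def fast_xor(data: bytes, password: bytes) -> bytes:
    """The 'Fool's Mate' cipher: XOR the data with the repeated password."""
    return bytes(b ^ k for b, k in zip(data, cycle(password)))

plaintext = b"attack at dawn, attack at dawn"
ciphertext = fast_xor(plaintext, b"key")

# Encryption and decryption are the same operation...
assert fast_xor(ciphertext, b"key") == plaintext

# ...and key reuse leaks structure: two ciphertext positions that used the
# same key byte, XORed together, cancel the key byte completely.
klen = 3  # length of b"key"
assert ciphertext[0] ^ ciphertext[klen] == plaintext[0] ^ plaintext[klen]
```

This is the standard repeating-key-XOR weakness; an adversary who wants the cipher to break finds it almost immediately, which is exactly the perspective shift the inventor of Fast XOR cannot perform.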
But the other reason why we say “Don’t roll your own crypto!” and “Have a security expert look at your bright idea!” is in hopes of reaching the many people who can’t *at all* invert the polarity of their goals—they don’t think that way spontaneously, and if you try to force them to do it, their thoughts go in unproductive directions.\n\n\n**AMBER:**  Like… the same way many people on the Right/Left seem utterly incapable of stepping outside their own treasured perspectives to pass the [Ideological Turing Test](http://econlog.econlib.org/archives/2011/06/the_ideological.html) of the Left/Right.\n\n\n**CORAL:**  I don’t know if it’s exactly the same mental gear or capability, but there’s a definite similarity. Somebody who lacks ordinary paranoia can’t take on the viewpoint of somebody who wants Fast XOR to be breakable, and pass that adversary’s Ideological Turing Test for attempts to break Fast XOR.\n\n\n**AMBER:**  Can’t, or won’t? You seem to be talking like these are innate, untrainable abilities.\n\n\n**CORAL:**  Well, at the least, there will be different levels of talent, as usual in a profession. And also as usual, talent vastly benefits from training and practice. But yes, it has sometimes seemed to me that there is a kind of qualitative step or gear here, where some people can shift perspective to imagine an adversary that truly wants to break their code… or a reality that isn’t cheering for their plan to work, or aliens who evolved different emotions, or an AI that doesn’t *want* to conclude its reasoning with “And therefore the humans should live happily ever after”, or a fictional character who believes in Sith ideology and yet [doesn’t believe they’re the bad guy](http://yudkowsky.tumblr.com/writing/realistic-viewpoints).\n\n\nIt does sometimes seem to me like some people simply can’t shift perspective in that way. Maybe it’s not that they truly lack the wiring, but that there’s an instinctive political off-switch for the ability. 
Maybe they’re scared to let go of their mental anchors. But from the outside it looks like the same result: some people do it, some people don’t. Some people spontaneously invert the polarity of their internal goals and spontaneously ask how their cipher might be broken and come up with productive angles of attack. Other people wait until prompted to look for flaws in their cipher, or they demand that you argue with them and wait for you to come up with an argument that satisfies them. If you ask them to predict themselves what you might suggest as a flaw, they say weird things that don’t begin to pass your Ideological Turing Test.\n\n\n**AMBER:**  You do seem to like your qualitative distinctions. Are there better or worse ordinary paranoids? Like, is there a spectrum in the space between “happy innocent” and “true deep security mindset”?\n\n\n**CORAL:**  One obvious quantitative talent level within ordinary paranoia would be in how far you can twist your perspective to look sideways at things—the creativity and workability of the attacks you invent. Like these [examples](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html) Bruce Schneier gave:\n\n\n\n> Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.\n> \n> \n> I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”\n> \n> \n> Security requires a particular mindset. Security professionals—at least the good ones—see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. 
They can’t vote without trying to figure out how to vote twice. They just can’t help it.\n> \n> \n> SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”\n> \n> \n> Really, we can’t help it.\n> \n> \n> This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail…\n> \n> \n> I’ve often speculated about how much of this is innate, and how much is teachable. In general, I think it’s a particular way of looking at the world, and that it’s far easier to teach someone domain expertise—cryptography or software security or safecracking or document forgery—than it is to teach someone a security mindset.\n> \n> \n\n\nTo be clear, the distinction between “just ordinary paranoia” and “all of security mindset” is my own; I think it’s worth dividing the spectrum above the happy innocents into two levels rather than one, and say, “This business of looking at the world from weird angles is only half of what you need to learn, and it’s the easier half.”\n\n\n**AMBER:**  Maybe Bruce Schneier himself doesn’t grasp what you mean when you say “security mindset”, and you’ve simply stolen his term to refer to a whole new idea of your own!\n\n\n**CORAL:**  No, the thing with not wanting to have to reason about whether somebody might someday register “donotreply.com” and just fixing it regardless—a methodology that doesn’t trust you to be clever about which problems will blow up—that’s definitely part of what existing security professionals mean by “security mindset”, and it’s definitely part of the second and deeper half. 
The only unconventional thing in my presentation is that I’m factoring out an intermediate skill of “ordinary paranoia”, where you try to parry an imagined attack by encrypting your password file and hiding the encryption key in a separate section of filesystem code. Coming up with the idea of hashing the password file is, I suspect, a qualitatively distinct skill, invoking a world whose dimensions are your own reasoning processes and not just object-level systems and attackers. Though it’s not polite to say, and the usual suspects will interpret it as a status grab, my experience with other reflectivity-laden skills suggests this may mean that many people, possibly including you, will prove unable to think in this way.\n\n\n**AMBER:**  I indeed find that terribly impolite.\n\n\n**CORAL:**  It may indeed be impolite; I don’t deny that. Whether it’s untrue is a different question. The reason I say it is because, as much as I want ordinary paranoids to *try* to reach up to a deeper level of paranoia, I want them to be aware that it might not prove to be their thing, in which case they should get help and then listen to that help. They shouldn’t assume that because they can notice the chance to have ants mailed to people, they can also pick up on the awfulness of donotreply@donotreply.com.\n\n\n**AMBER:**  Maybe you could call that “deep security” to distinguish it from what Bruce Schneier and other security professionals call “security mindset”.\n\n\n**CORAL:**  “Security mindset” equals “ordinary paranoia” plus “deep security”? I’m not sure that’s very good terminology, but I won’t mind if you use the term that way.\n\n\n**AMBER:**  Suppose I take that at face value. Earlier, you described what might go wrong when a happy innocent tries and fails to be an ordinary paranoid. 
What happens when an ordinary paranoid tries to do something that requires the deep security skill?\n\n\n**CORAL:**  They believe they have wisely identified bad passwords as the real fire in need of putting out, and spend all their time writing more and more clever checks for bad passwords. They are very impressed with how much effort they have put into detecting bad passwords, and how much concern they have shown for system security. They fall prey to the standard cognitive bias whose name I can’t remember, where people want to solve a problem using one big effort or a couple of big efforts and then be done and not try anymore, and that’s why people don’t put up hurricane shutters once they’re finished buying bottled water. Pay them to “try harder”, and they’ll hide seven encryption keys to the password file in seven different places, or build towers higher and higher in places where a successful adversary is obviously just walking around the tower if they’ve gotten through at all. What these ideas have in common is that they are in a certain sense “shallow”. They are mentally straightforward as attempted parries against a particular kind of envisioned attack. They give you a satisfying sense of fighting hard against the imagined problem—and then they fail.\n\n\n**AMBER:**  Are you saying it’s *not* a good idea to check that the user’s password isn’t “password”?\n\n\n**CORAL:**  No, shallow defenses are often good ideas too! But even there, somebody with the higher skill will try to look at things in a more systematic way; they know that there are often deeper ways of looking at the problem to be found, and they’ll try to find those deep views. For example, it’s extremely important that your password checker does *not* rule out the password “correct horse battery staple” by demanding the password contain at least one uppercase letter, lowercase letter, number, and punctuation mark. What you really want to do is measure password entropy. 
Not envision a failure mode of somebody guessing “rainbow”, which you will cleverly balk by forcing the user to make their password be “rA1nbow!” instead.\n\n\nYou want the password entry field to have a checkbox that allows showing the typed password in plaintext, because your attempt to parry the imagined failure mode of some evildoer reading over the user’s shoulder may get in the way of the user entering a long or high-entropy password. And the user is perfectly capable of typing their password into that convenient text field in the address bar above the web page, so they can copy and paste it—thereby sending your password to whoever tries to do smart lookups on the address bar. If you’re really that worried about some evildoer reading over somebody’s shoulder, maybe you should be sending a confirmation text to their phone, rather than forcing the user to enter their password into a nearby text field that they can actually read. Obscuring one text field, with no off-switch for the obscuration, to guard against this one bad thing that you imagined happening, while managing to step on your own feet in other ways and not even really guard against the bad thing; that’s the peril of shallow defenses.\n\n\nAn archetypal character for “ordinary paranoid who thinks he’s trying really hard but is actually just piling on a lot of shallow precautions” is Mad-Eye Moody from the *Harry Potter* series, who has a whole room full of Dark Detectors, and who also ends up locked in the bottom of somebody’s trunk. It seems Mad-Eye Moody was too busy buying one more Dark Detector for his existing room full of Dark Detectors, and he didn’t invent precautions deep enough and general enough to cover the unforeseen attack vector “somebody tries to replace me using Polyjuice”.\n\n\nAnd the solution isn’t to add on a special anti-Polyjuice potion. I mean, if you happen to have one, great, but that’s not where most of your trust in the system should be coming from. 
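Coral's contrast between measuring entropy and demanding character classes can be sketched with a crude estimator. (This estimator is my illustration, not the essay's; a serious checker would also penalize dictionary words and common patterns, which this upper-bound sketch deliberately ignores.)

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Crude upper bound: length * log2(size of the character pool used).

    Real estimators also model dictionary words and keyboard patterns; this
    sketch only shows why composition rules miss the point.
    """
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c not in string.ascii_letters + string.digits for c in password):
        pool += 33  # roughly, printable punctuation and space
    return len(password) * math.log2(pool) if pool else 0.0

# The composition-rule favorite scores far below the long passphrase that
# an "uppercase + digit + punctuation" rule would reject:
assert estimate_entropy_bits("rA1nbow!") < estimate_entropy_bits(
    "correct horse battery staple")
```

Even this naive length-times-pool measure ranks "correct horse battery staple" far above "rA1nbow!", which is the systematic view the shallow rule obscures.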
The first lines of defense should have a sense about them of depth, of generality. Hashing password files, rather than hiding keys; thinking of how to measure password entropy, rather than requiring at least one uppercase character.\n\n\n**AMBER:**  Again this seems to me more like a quantitative difference in the cleverness of clever ideas, rather than two different modes of thinking.\n\n\n**CORAL:**  Real-world categories are often fuzzy, but to me these seem like the product of two different kinds of thinking. My guess is that the person who popularized demanding a mixture of letters, cases, and numbers was reasoning in a different way than the person who thought of measuring password entropy. But whether you call the distinction qualitative or quantitative, the distinction remains. Deep and general ideas—the kind that actually simplify and strengthen the edifice of reasoning supporting the system’s safety—are invented more rarely and by rarer people. To build a system that can resist or even slow down an attack by multiple adversaries, some of whom may be smarter or more experienced than ourselves, requires a level of professionally specialized thinking that isn’t reasonable to expect from every programmer—not even those who can shift their minds to take on the perspective of a single equally-smart adversary. What you should ask from an ordinary paranoid is that they appreciate that deeper ideas exist, and that they try to learn the standard deeper ideas that are already known; that they know their own skill is not the upper limit of what’s possible, and that they ask a professional to come in and check their reasoning. And then actually listen.\n\n\n**AMBER:**  But if it’s possible for people to think they have higher skills and be mistaken, how do you know that *you* are one of these rare people who *truly* has a deep security mindset? 
Might your high opinion of yourself [just be due to the Dunning-Kruger effect](https://equilibriabook.com/inadequacy-and-modesty/)?\n\n\n**CORAL:**  … Okay, that reminds me to give another caution.\n\n\nYes, there will be some innocents who can’t believe that there’s a talent called “paranoia” that they lack, who’ll come up with weird imitations of paranoia if you ask them to be more worried about flaws in their brilliant encryption ideas. There will also be some people reading this with severe cases of [social anxiety and underconfidence](https://equilibriabook.com/status-regulation-and-anxious-underconfidence/). Readers who *are* capable of ordinary paranoia and even security mindset, who might not try to develop these talents, because they are terribly worried that they might just be one of the people who only imagine themselves to have talent. Well, if you think you can feel the distinction between deep security ideas and shallow ones, you should at least try now and then to generate your own thoughts that resonate in you the same way.\n\n\n**AMBER:**  But won’t that attitude encourage overconfident people to think they can be paranoid when they actually can’t be, with the result that they end up too impressed with their own reasoning and ideas?\n\n\n**CORAL:**  I strongly suspect that they’ll do that regardless. You’re not actually promoting some kind of collective good practice that benefits everyone, just by personally agreeing to be modest. The overconfident don’t care what you decide. And if you’re not just as worried about underestimating yourself as overestimating yourself, if your fears about exceeding your proper place are asymmetric with your fears about lost potential and foregone opportunities, then you’re probably dealing with an emotional issue rather than a strict concern with good epistemology.\n\n\n**AMBER:**  If somebody does have the talent for deep security, then, how can they train it?\n\n\n**CORAL:**  … That’s a hell of a good question. 
Some interesting training methods have been developed for ordinary paranoia, like classes whose students have to figure out how to attack everyday systems outside of a computer-science context. One professor gave a test in which one of the questions was “What are the first 100 digits of pi?”—the point being that you need to find some way to cheat in order to pass the test. You should train that kind of ordinary paranoia first, if you haven’t done that already.\n\n\n**AMBER:**  And then what? How do you graduate to deep security from ordinary paranoia?\n\n\n**CORAL:**  … Try to find more general defenses instead of parrying particular attacks? Appreciate the extent to which you’re building ever-taller versions of towers that an adversary might just walk around? Ugh, no, that’s too much like ordinary paranoia—especially if you’re starting out with just ordinary paranoia. Let me think about this.\n\n\n…\n\n\nOkay, I have a screwy piece of advice that’s probably not going to work. Write down the safety-story on which your belief in a system’s security rests. Then ask yourself whether you actually included all the empirical assumptions. Then ask yourself whether you actually believe those empirical assumptions.\n\n\n**AMBER:**  So, like, if I’m building an operating system, I write down, “Safety assumption: The login system works to keep out attackers”—\n\n\n**CORAL:**  *No!*\n\n\nUh, no, sorry. As usual, it seems that what I think is “advice” has left out all the important parts anyone would need to actually do it.\n\n\nThat’s not what I was trying to handwave at by saying “empirical assumption”. You don’t want to assume that parts of the system “succeed” or “fail”—that’s not language that should appear in what you write down. You want the elements of the story to be strictly factual, not… value-laden, goal-laden? 
There shouldn’t be reasoning that explicitly mentions what you want to have happen or not happen, just language neutrally describing the background facts of the universe. For brainstorming purposes you might write down “Nobody can guess the password of any user with dangerous privileges”, but that’s just a proto-statement which needs to be refined into more basic statements.\n\n\n**AMBER:**  I don’t think I understood.\n\n\n**CORAL:**  “Nobody can guess the password” says, “I believe the adversary will fail to guess the password.” Why do you believe that?\n\n\n**AMBER:**  I see, so you want me to refine complex assumptions into systems of simpler assumptions. But if you keep asking “why do you believe that” you’ll eventually end up back at the Big Bang and the laws of physics. How do I know when to stop?\n\n\n**CORAL:**  What you’re trying to do is reduce the story past the point where you talk about a goal-laden event, “the adversary fails”, and instead talk about neutral facts underlying that event. For now, just answer me: Why do you believe the adversary fails to guess the password?\n\n\n**AMBER:**  Because the password is too hard to guess.\n\n\n**CORAL:**  The phrase “too hard” is goal-laden language; it’s your own desires for the system that determine what is “too hard”. Without using concepts or language that refer to what you want, what is a neutral, factual description of what makes a password too hard to guess?\n\n\n**AMBER:**  The password has high-enough entropy that the attacker can’t try enough attempts to guess it.\n\n\n**CORAL:**  We’re making progress, but again, the term “enough” is goal-laden language. It’s your own wants and desires that determine what is “enough”. Can you say something else instead of “enough”?\n\n\n**AMBER:**  The password has sufficient entropy that—\n\n\n**CORAL:**  I don’t mean find a synonym for “enough”. I mean, use different concepts that aren’t goal-laden. 
This will involve changing the meaning of what you write down.\n\n\n**AMBER:**  I’m sorry, I guess I’m not good enough at this.\n\n\n**CORAL:**  Not yet, anyway. Maybe not ever, but that isn’t known, and you shouldn’t assume it based on one failure.\n\n\nAnyway, what I was hoping for was a pair of statements like, “I believe every password has at least 50 bits of entropy” and “I believe no attacker can make more than a trillion tries total at guessing any password”. Where the point of writing “I believe” is to make yourself pause and question whether you actually believe it.\n\n\n**AMBER:**  Isn’t saying no attacker “can” make a trillion tries itself goal-laden language?\n\n\n**CORAL:**  Indeed, that assumption might need to be refined further via why-do-I-believe-that into, “I believe the system rejects password attempts closer than 1 second together, I believe the attacker keeps this up for less than a month, and I believe the attacker launches fewer than 300,000 simultaneous connections.” Where again, the point is that you then look at what you’ve written and say, “Do I really believe that?” To be clear, sometimes the answer will be “Yes, I sure do believe that!” This isn’t a social modesty exercise where you show off your ability to have agonizing doubts and then you go ahead and do the same thing anyway. The point is to find out what you believe, or what you’d need to believe, and check that it’s believable.\n\n\n**AMBER:**  And this trains a deep security mindset?\n\n\n**CORAL:**  … Maaaybe? I’m wildly guessing it might? 
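Coral's refined assumptions can actually be checked against one another with back-of-the-envelope arithmetic; this worked check is mine, not the essay's, using 31 days as the generous reading of "less than a month".

```python
# Do the three refined assumptions really imply "fewer than a trillion tries"?
seconds_per_month = 31 * 24 * 60 * 60       # attacker persists < 1 month
attempts_per_connection = seconds_per_month  # >= 1 second between attempts
connections = 300_000                        # < 300,000 simultaneous

max_tries = connections * attempts_per_connection
assert max_tries < 10**12   # ~8.0e11, so the trillion-tries belief holds

# And against the other assumption, 50 bits of entropy per password,
# a trillion tries is still a long shot:
success_chance = 10**12 / 2**50
assert success_chance < 0.001  # under one in a thousand
```

Writing the numbers down is what makes the question "Do I really believe that?" concrete: each constant above is one of the refined assumptions, and the assertions are what the safety story actually rests on.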
It may get you to think in terms of stories and reasoning and assumptions alongside passwords and adversaries, and that puts your mind into a space that I think is at least part of the skill.\n\n\nIn point of fact, the real reason the author is listing out this methodology is that he’s currently trying to do something similar on the problem of aligning Artificial General Intelligence, and he would like to move past “I believe my AGI won’t want to kill anyone” and into a headspace more like writing down statements such as “Although the space of potential weightings for this recurrent neural net does contain weight combinations that would figure out how to kill the programmers, I believe that gradient descent on loss function *L* will only access a result inside subspace *Q* with properties *P*, and I believe a space with properties *P* does not include any weight combinations that figure out how to kill the programmer.”\n\n\nThough this itself is not really a reduced statement and still has too much goal-laden language in it. A realistic example would take us right out of the main essay here. 
But the author does hope that practicing this way of thinking can help lead people into building more solid stories about robust systems, if they already have good ordinary paranoia and some fairly mysterious innate talents.\n\n\n\n\n\n---\n\n\n \n\n\nContinued in: **[Security Mindset and the Logistic Success Curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/)**.\n\n\n \n\n\nThe post [Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-11-25T20:18:26Z", "authors": ["Eliezer Yudkowsky"], "summaries": []}
-{"id": "d23de6d4c8ca5afbbd50fd6a9c0c25e6", "title": "Announcing “Inadequate Equilibria”", "url": "https://intelligence.org/2017/11/16/announcing-inadequate-equilibria/", "source": "miri", "source_type": "blog", "text": "![](https://static-2.gumroad.com/res/gumroad/4753585984079/asset_previews/7036559c822c290090f2ebec053e55a2/retina/Gumroad_20Cover.jpg)\nMIRI Senior Research Fellow Eliezer Yudkowsky has a new book out today: ***[Inadequate Equilibria: Where and How Civilizations Get Stuck](https://equilibriabook.com)***, a discussion of societal dysfunction, exploitability, and self-evaluation. From the preface:\n\n\n\n> \n> *Inadequate Equilibria* is a book about a generalized notion of efficient markets, and how we can use this notion to guess where society will or won’t be effective at pursuing some widely desired goal.\n> \n> \n> An efficient market is one where smart individuals should generally doubt that they can spot overpriced or underpriced assets. We can ask an analogous question, however, about the “efficiency” of other human endeavors.\n> \n> \n> Suppose, for example, that someone thinks they can easily build a much better and more profitable social network than Facebook, or easily come up with a new treatment for a widespread medical condition. 
Should they question whatever clever reasoning led them to that conclusion, in the same way that most smart individuals should question any clever reasoning that causes them to think AAPL stock is underpriced? Should they question whether they can “beat the market” in these areas, or whether they can even spot major in-principle improvements to the status quo? How “efficient,” or *adequate*, should we expect civilization to be at various tasks?\n> \n> \n> There will be, as always, good ways and bad ways to reason about these questions; this book is about both.\n> \n> \n> \n\n\nThe book is available from Amazon (in [print](https://amazon.com/dp/1939311225) and [Kindle](https://amazon.com/dp/B076Z64CPG)), on [iBooks](https://itunes.apple.com/us/book/inadequate-equilibria/id1313946779), as a pay-what-you-want [digital download](https://gumroad.com/l/equilibria), and as a [web book at equilibriabook.com](https://equilibriabook.com/toc/). The book has also been posted to [*Less Wrong* 2.0](https://www.lesswrong.com/sequences/oLGCcbnvabyibnG9d).\n\n\nThe book’s contents are:\n\n\n\n\n---\n\n\n\n> \n> [**1.  Inadequacy and Modesty**](https://equilibriabook.com/inadequacy-and-modesty/)\n> \n> \n> A comparison of two “wildly different, nearly *cognitively nonoverlapping*” approaches to thinking about outperformance: *modest epistemology*, and *inadequacy analysis*.\n> \n> \n> [**2.  An Equilibrium of No Free Energy**](https://equilibriabook.com/an-equilibrium-of-no-free-energy)\n> \n> \n> How, in principle, can society end up neglecting obvious low-hanging fruit?\n> \n> \n> **[3.  Moloch’s Toolbox](https://equilibriabook.com/molochs-toolbox)**\n> \n> \n> Why does our civilization actually end up neglecting low-hanging fruit?\n> \n> \n> **[4.  Living in an Inadequate World](https://equilibriabook.com/living-in-an-inadequate-world)**\n> \n> \n> How can we best take into account civilizational inadequacy in our decision-making?\n> \n> \n> [**5.  
Blind Empiricism**](https://equilibriabook.com/blind-empiricism/)\n> \n> \n> Three examples of modesty in practical settings.\n> \n> \n> [**6.  Against Modest Epistemology**](https://equilibriabook.com/against-modest-epistemology/)\n> \n> \n> An argument against the “epistemological core” of modesty: that we shouldn’t take our own reasoning and meta-reasoning at face value in the face of disagreements or novelties.\n> \n> \n> [**7.  Status Regulation and Anxious Underconfidence**](https://equilibriabook.com/status-regulation-and-anxious-underconfidence)\n> \n> \n> On causal accounts of modesty.\n> \n> \n> \n\n\n\n\n---\n\n\nAlthough *Inadequate Equilibria* isn’t about AI, I consider it one of MIRI’s most important nontechnical publications to date, as it helps explain some of the most basic tools and background models we use when we evaluate how promising a potential project, research program, or general strategy is.\n\n\nThe post [Announcing “Inadequate Equilibria”](https://intelligence.org/2017/11/16/announcing-inadequate-equilibria/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-11-17T02:23:09Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "ebbfe04363efad98ccaa82d1417298f8", "title": "A major grant from the Open Philanthropy Project", "url": "https://intelligence.org/2017/11/08/major-grant-open-phil/", "source": "miri", "source_type": "blog", "text": "I’m thrilled to announce that the Open Philanthropy Project has awarded MIRI a **three-year $3.75 million general support grant** ($1.25 million per year). This grant is, by far, the largest contribution MIRI has received to date, and will have a major effect on our plans going forward.\n\n\nThis grant follows a [$500,000 grant](https://intelligence.org/2016/09/06/grant-open-philanthropy/) we received from the Open Philanthropy Project in 2016. 
The Open Philanthropy Project’s [announcement](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017) for the new grant notes that they are “now aiming to support about half of MIRI’s annual budget”.[1](https://intelligence.org/2017/11/08/major-grant-open-phil/#footnote_0_16783 \"The Open Philanthropy Project usually prefers not to provide more than half of an organization’s funding, to facilitate funder coordination and ensure that organizations it supports maintain their independence. From a March blog post: “We typically avoid situations in which we provide >50% of an organization’s funding, so as to avoid creating a situation in which an organization’s total funding is ‘fragile’ as a result of being overly dependent on us.”\") The annual $1.25 million represents 50% of a conservative estimate we provided to the Open Philanthropy Project of the amount of funds we expect to be able to usefully spend in 2018–2020.\n\n\nThis expansion in support was also conditional on our ability to raise the other 50% from other supporters. 
For that reason, I sincerely thank all of the past and current supporters who have helped us get to this point.\n\n\nThe Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding.\n\n\nWe’ll be going into more details on our future organizational plans in a follow-up post **December 1**, where we’ll also discuss our end-of-the-year fundraising goals.\n\n\nIn their write-up, the Open Philanthropy Project notes that they have updated favorably about our technical output since 2016, following [our logical induction paper](https://intelligence.org/2016/09/12/new-paper-logical-induction/):\n\n\n\n> We received a very positive review of MIRI’s work on “[logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/)” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of our close advisors, and (iii) is generally regarded as outstanding by the ML community. As mentioned above, we previously had [difficulty evaluating](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#Uncertainty_about_our_technical_assessment) the technical quality of MIRI’s research, and we previously could find no one meeting criteria (i) – (iii) to a comparable extent who was comparably excited about MIRI’s technical research. 
While we would not generally offer a comparable grant to any lab on the basis of this consideration alone, we consider this a significant update in the context of the original [case for the [2016] grant](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#Case_for_the_grant) (especially MIRI’s thoughtfulness on this set of issues, value alignment with us, distinctive perspectives, and history of work in this area). While the balance of our technical advisors’ opinions and arguments still leaves us skeptical of the value of MIRI’s research, the case for the statement “MIRI’s research has a *nontrivial chance* of turning out to be extremely valuable (when taking into account how different it is from other research on AI safety)” appears much more robust than it did before we received this review.\n> \n> \n\n\nThe announcement also states, “In the time since our initial grant to MIRI, we have made several more grants within this focus area, and are therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach.”\n\n\nWe’re enormously grateful for the Open Philanthropy Project’s support, and for their deep engagement with the AI safety field as a whole. 
To learn more about our discussions with the Open Philanthropy Project and their active work in this space, see the group’s previous [AI safety grants](https://www.openphilanthropy.org/giving/grants?field_focus_area_target_id_selective=532), our conversation with Daniel Dewey [on the Effective Altruism Forum](http://effective-altruism.com/ea/1ca/my_current_thoughts_on_miris_highly_reliable/), and the research problems outlined in the Open Philanthropy Project’s recent [AI fellows program description](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program).\n\n\n\n\n---\n\n1. The Open Philanthropy Project usually prefers not to provide more than half of an organization’s funding, to facilitate funder coordination and ensure that organizations it supports maintain their independence. From a March [blog post](https://www.openphilanthropy.org/blog/technical-and-philosophical-questions-might-affect-our-grantmaking#Philanthropic_coordination_theory): “We typically avoid situations in which we provide >50% of an organization’s funding, so as to avoid creating a situation in which an organization’s total funding is ‘fragile’ as a result of being overly dependent on us.”\n\nThe post [A major grant from the Open Philanthropy Project](https://intelligence.org/2017/11/08/major-grant-open-phil/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-11-09T00:05:42Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "25d78fc3973b64f0a3142a62314b5171", "title": "November 2017 Newsletter", "url": "https://intelligence.org/2017/11/03/november-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "Eliezer Yudkowsky has written a new book on civilizational dysfunction and outperformance: *Inadequate Equilibria: Where and How Civilizations Get Stuck*. 
The full book will be available in print and electronic formats November 16. To preorder the ebook or sign up for updates, visit [equilibriabook.com](https://equilibriabook.com).\n\n\nWe’re posting the full contents online in stages over the next two weeks. The first two chapters are:\n\n\n1. [Inadequacy and Modesty](http://equilibriabook.com/inadequacy-and-modesty/) (discussion: [LessWrong](https://www.lesswrong.com/posts/zsG9yKcriht2doRhM/inadequacy-and-modesty), [EA Forum](http://effective-altruism.com/ea/1g4/inadequacy_and_modesty/), [Hacker News](https://news.ycombinator.com/item?id=15582120))\n2. [An Equilibrium of No Free Energy](http://equilibriabook.com/an-equilibrium-of-no-free-energy) (discussion: [LessWrong](https://www.lesswrong.com/posts/yPLr2tnXbiFXkMWvk/an-equilibrium-of-no-free-energy), [EA Forum](http://effective-altruism.com/ea/1gd/an_equilibrium_of_no_free_energy/))\n\n\n \n\n\n**Research updates**\n\n\n* A new paper: “[Functional Decision Theory: A New Theory of Instrumental Rationality](https://intelligence.org/2017/10/22/fdt/)” ([arXiv](https://arxiv.org/abs/1710.05060)), by Eliezer Yudkowsky and Nate Soares.\n* New research write-ups and discussions: [Comparing Logical Inductor CDT and Logical Inductor EDT](https://agentfoundations.org/item?id=1629); [Logical Updatelessness as a Subagent Alignment Problem](https://www.lesswrong.com/posts/K5Qp7ioupgb7r73Ca/logical-updatelessness-as-a-subagent-alignment-problem); [Mixed-Strategy Ratifiability Implies CDT=EDT](https://www.lesswrong.com/posts/x2wn2MWYSafDtm8Lf/mixed-strategy-ratifiability-implies-cdt-edt)\n* New from AI Impacts: [Computing Hardware Performance Data Collections](https://aiimpacts.org/computing-hardware-performance-data-collections/)\n* The [Workshop on Reliable Artificial Intelligence](http://wrai.org) took place at ETH Zürich, hosted by MIRIxZürich.\n\n\n**General updates**\n\n\n* DeepMind announces [a new version of 
AlphaGo](https://deepmind.com/blog/alphago-zero-learning-scratch/) that achieves superhuman performance within three days, using 4 TPUs and no human training data. Eliezer Yudkowsky argues that AlphaGo Zero provides supporting evidence for [his position in the AI foom debate](https://intelligence.org/2017/10/20/alphago/); Robin Hanson [responds](https://www.lesswrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity). See also Paul Christiano on [AlphaGo Zero and capability amplification](https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446).\n* Yudkowsky [on AGI ethics](https://www.lesswrong.com/posts/SsCQHjqNT3xQAPQ6b/yudkowsky-on-agi-ethics): “The ethics of bridge-building is to not have your bridge fall down and kill people and there is a frame of mind in which this obviousness is obvious enough. *How* not to have the bridge fall down is hard.”\n* Nate Soares gave his [ensuring smarter-than-human AI has a positive outcome](https://intelligence.org/2017/04/12/ensuring/) talk [at the O’Reilly AI Conference](https://conferences.oreilly.com/artificial-intelligence/ai-ca/public/schedule/detail/60169) ([slides](https://cdn.oreillystatic.com/en/assets/1/event/272/Ensuring%20smarter-than-human%20intelligence%20has%20a%20positive%20outcome_%20Presentation.pdf)).\n\n\n**News and links**\n\n\n* “[Protecting Against AI’s Existential Threat](https://www.wsj.com/articles/protecting-against-ais-existential-threat-1508332313)“: a *Wall Street Journal* op-ed by OpenAI’s Ilya Sutskever and Dario Amodei.\n* OpenAI announces “a [hierarchical reinforcement learning algorithm](https://blog.openai.com/learning-a-hierarchy/) that learns high-level actions useful for solving a range of tasks”.\n* DeepMind’s Viktoriya Krakovna reports on the first [Tokyo AI & Society Symposium](https://futureoflife.org/2017/10/30/tokyo-ai-society-symposium/).\n* Nick Bostrom [speaks](https://www.youtube.com/watch?v=wDU-fnXIV0s) and CSER [submits 
written evidence](https://www.cser.ac.uk/news/lords-ai-committee-written-evidence/) to the UK Parliament’s Artificial Intelligence Committee.\n* [Rob Wiblin interviews Nick Beckstead](https://80000hours.org/2017/10/nick-beckstead-giving-billions/) for the 80,000 Hours podcast.\n\n\nThe post [November 2017 Newsletter](https://intelligence.org/2017/11/03/november-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-11-04T01:40:48Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "0a89f8a95b5048a8d7f2e4beb12c1b13", "title": "New paper: “Functional Decision Theory”", "url": "https://intelligence.org/2017/10/22/fdt/", "source": "miri", "source_type": "blog", "text": "[![Functional Decision Theory](https://intelligence.org/files/fdt.png)](https://arxiv.org/abs/1710.05060)\nMIRI senior researcher Eliezer Yudkowsky and executive director Nate Soares have a new introductory paper out on decision theory: “**[Functional decision theory: A new theory of instrumental rationality](https://arxiv.org/abs/1710.05060)**.”\n\n\nAbstract:\n\n\n\n> This paper describes and motivates a new decision theory known as *functional decision theory* (FDT), as distinct from causal decision theory and evidential decision theory.\n> \n> \n> Functional decision theorists hold that the normative principle for action is to treat one’s decision as the output of a fixed mathematical function that answers the question, “Which output of this very function would yield the best outcome?” Adhering to this principle delivers a number of benefits, including the ability to maximize wealth in an array of traditional decision-theoretic and game-theoretic problems where CDT and EDT perform poorly. 
Using one simple and coherent decision rule, functional decision theorists (for example) achieve more utility than CDT on Newcomb’s problem, more utility than EDT on the smoking lesion problem, and more utility than both in Parfit’s hitchhiker problem.\n> \n> \n> In this paper, we define FDT, explore its prescriptions in a number of different decision problems, compare it to CDT and EDT, and give philosophical justifications for FDT as a normative theory of decision-making.\n> \n> \n\n\nOur previous introductory paper on FDT, “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” focused on comparing FDT’s performance to that of CDT and EDT in fairly high-level terms. Yudkowsky and Soares’ new paper puts a much larger focus on FDT’s mechanics and motivations, making “Functional Decision Theory” the most complete stand-alone introduction to the theory.[1](https://intelligence.org/2017/10/22/fdt/#footnote_0_16717 \"“Functional Decision Theory” was originally drafted prior to “Cheating Death in Damascus,” and was significantly longer before we received various rounds of feedback from the philosophical community. “Cheating Death in Damascus” was produced from material that was cut from early drafts; other cut material included a discussion of proof-based decision theory, and some Death in Damascus variants left on the cutting room floor for being needlessly cruel to CDT.\")\n\n\n\nContents:\n\n\n1. **Overview.**\n\n\n2. **Newcomb’s Problem and the Smoking Lesion Problem.** In terms of utility gained, conventional EDT outperforms CDT in Newcomb’s problem, while underperforming CDT in the smoking lesion problem. Both CDT and EDT have therefore appeared unsatisfactory as expected utility theories, and the debate between the two has remained at an impasse. 
FDT, however, offers an elegant criterion for matching EDT’s performance in the former class of dilemmas, while also matching CDT’s performance in the latter class of dilemmas.\n\n\n3. **Subjunctive Dependence.** FDT can be thought of as a modification of CDT that relies, not on causal dependencies, but on a wider class of *subjunctive* dependencies that includes causal dependencies as a special case.\n\n\n4. **Parfit’s Hitchhiker.** FDT’s novel properties can be more readily seen in Parfit’s hitchhiker problem, where both CDT and EDT underperform FDT. Yudkowsky and Soares note three considerations favoring FDT over traditional theories: an argument from precommitment, an argument from information value, and an argument from utility.\n\n\n5. **Formalizing EDT, CDT, and FDT.** To lend precision to the claim that a given decision theory prescribes a given action, Yudkowsky and Soares define algorithms implementing each theory.\n\n\n6. **Comparing the Three Decision Algorithms’ Behavior.** Yudkowsky and Soares then revisit Newcomb’s problem, the smoking lesion problem, and Parfit’s hitchhiker problem, algorithms in hand.\n\n\n7. **Diagnosing EDT: Conditionals as Counterfactuals.** The core problem with EDT and CDT is that the hypothetical scenarios that they consider are malformed. EDT works by conditioning on joint probability distributions, which causes problems when correlations are spurious.\n\n\n8. **Diagnosing CDT: Impossible Interventions.** CDT, meanwhile, works by considering strictly causal counterfactuals, which causes problems when it wrongly treats unavoidable correlations as though they can be broken.\n\n\n9: **The Global Perspective.** FDT’s form of counterpossible reasoning allows agents to respect a broader set of real-world dependencies than CDT can, while excluding EDT’s spurious dependencies. 
We can understand FDT as reflecting a “global perspective” on which decision-theoretic agents should seek to have the most desirable decision [type](https://plato.stanford.edu/entries/types-tokens/), as opposed to the most desirable decision token.\n\n\n10. **Conclusion.**\n\n\nWe use the term “*functional* decision theory” because FDT invokes the idea that decision-theoretic agents can be thought of as implementing deterministic functions from goals and observation histories to actions.[2](https://intelligence.org/2017/10/22/fdt/#footnote_1_16717 \"To cover mixed strategies in this context, we can assume that one of the sensory inputs to the agent is a random number.\") We can see this feature clearly in **Newcomb’s problem**, where an FDT agent—let’s call her Fiona, as in the paper—will reason as follows:\n\n\n\n> Omega knows the decision I will reach—they are somehow computing the same decision function I am on the same inputs, and using that function’s output to decide how many boxes to fill. Suppose, then, that the decision function I’m implementing outputs “one-box.” The same decision function, implemented in Omega, must then also output “one-box.” In that case, Omega will fill the opaque box, and I’ll get its contents. (**+$1,000,000.**)\n> \n> \n> Or suppose that instead I take both boxes. In that case, my decision function outputs “two-box,” Omega will leave the opaque box empty, and I’ll get the contents of both boxes. (**+$1,000.**)\n> \n> \n> The first scenario has higher expected utility; therefore my decision function hereby outputs “one-box.”\n> \n> \n\n\nUnlike a CDT agent that restricts itself to purely causal dependencies, Fiona’s decision-making is able to take into account the dependencies between Omega’s actions and her reasoning process itself. As a consequence, Fiona will tend to come away with far more money than CDT agents.\n\n\nAt the same time, FDT avoids the standard pitfalls EDT runs into, e.g., in the smoking lesion problem. 
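Fiona’s one-boxing argument above can be condensed into a small expected-utility comparison. The following is an illustrative toy only, not code from the paper; the function names, and the idealization of Omega as a perfect predictor, are assumptions of the sketch:

```python
# Hypothetical sketch of Fiona's reasoning in Newcomb's problem.
# Key FDT assumption: Omega computes the same decision function Fiona does,
# so each hypothetical output of that function also fixes Omega's prediction.

def newcomb_payoff(output: str) -> int:
    """Payoff if Fiona's decision function (and hence Omega's prediction)
    yields `output`."""
    opaque = 1_000_000 if output == "one-box" else 0   # filled iff one-boxing is predicted
    transparent = 1_000 if output == "two-box" else 0  # two-boxers also take the $1,000 box
    return opaque + transparent

def fdt_choice() -> str:
    # Choose the output of the decision function with the best consequences.
    return max(["one-box", "two-box"], key=newcomb_payoff)
```

On these numbers `fdt_choice()` returns “one-box,” since a filled opaque box ($1,000,000) beats two boxes with an empty opaque box ($1,000); a CDT agent, holding Omega’s already-made prediction fixed, would instead take both boxes.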
The smoking lesion problem has a few peculiarities, such as the potential for agents to appeal to the “tickle defense” of Ellery Eells; but we can more clearly illustrate EDT’s limitations with the **XOR blackmail problem**, where tickle defenses are of no help to EDT.\n\n\nIn the XOR blackmail problem, an agent hears a rumor that their house has been infested by termites, at a repair cost of $1,000,000. The next day, the agent receives a letter from the trustworthy predictor Omega stating:\n\n\n\n> I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.\n> \n> \n\n\nIn this dilemma, EDT agents pay up, reasoning that it would be bad news to learn that they have termites—in spite of the fact that their termite-riddenness does not depend, either causally or otherwise, on whether they pay.\n\n\nIn contrast, Fiona the FDT agent reasons in a similar fashion to how she does in Newcomb’s problem:\n\n\n\n> Since Omega’s decision to send the letter is based on a reliable prediction of whether I’ll pay, Omega and I must both be computing the same decision function. Suppose, then, that my decision function outputs “don’t pay” on input “letter.” In the cases where I have termites, Omega will then send me this letter and I won’t pay (**−$1,000,000**); while if I don’t have termites, Omega won’t send the letter (**−$0**).\n> \n> \n> On the other hand, suppose that my decision function outputs “pay” on input “letter.” Then, in the case where I have termites, Omega doesn’t send the letter (**−$1,000,000**), and in the case where I don’t have termites, Omega sends the letter and I pay (**−$1,000**).\n> \n> \n> My decision function determines whether I conditionally pay and whether Omega conditionally sends a letter. 
But the termites aren’t predicting me, aren’t computing my decision function at all. So if my decision function’s output is “pay,” that doesn’t change the termites’ behavior and doesn’t benefit me at all; so I don’t pay.\n> \n> \n\n\nUnlike the EDT agent, Fiona correctly takes into account that paying won’t increase her utility in the XOR blackmail dilemma; and unlike the CDT agent, Fiona takes into account that one-boxing *will* increase her utility in Newcomb’s problem.\n\n\nFDT, then, provides an elegant alternative to both traditional theories, simultaneously offering us a simpler and more general rule for expected utility maximization in practice, and a more satisfying philosophical account of rational decision-making in principle.\n\n\nFor additional discussion of FDT, I recommend “[Decisions Are For Making Bad Outcomes Inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/),” a conversation exploring the counter-intuitive fact that in order to decide what action to output, a decision-theoretic agent must be able to consider hypothetical scenarios in which their deterministic decision function outputs something other than what it outputs in fact.[3](https://intelligence.org/2017/10/22/fdt/#footnote_2_16717 \"Many of the hypotheticals an agent must consider are internally inconsistent: a deterministic function only has one possible output on a given input, but agents must base their decisions on the expected utility of many different “possible” actions in order to choose the best action. 
E.g., in Newcomb’s problem, FDT and EDT agents must evaluate the expected utility of two-boxing in order to weigh their options and arrive at their final decision at all, even though it would be inconsistent for such an agent to two-box; and likewise, CDT must evaluate the expected utility of the impossible hypothetical where a CDT agent one-boxes.\nAlthough poorly-understood theoretically, this kind of counterpossible reasoning seems to be entirely feasible in practice. Even though a false conjecture classically implies all propositions, mathematicians routinely reason in a meaningful and nontrivial way with hypothetical scenarios in which a conjecture has different truth-values. The problem of how to best represent counterpossible reasoning in a formal setting, however, remains unsolved.\")\n\n\n---\n\n1. “Functional Decision Theory” was originally drafted prior to “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” and was significantly longer before we received various rounds of feedback from the philosophical community. “Cheating Death in Damascus” was produced from material that was cut from early drafts; other cut material included a discussion of [proof-based decision theory](https://agentfoundations.org/item?id=50), and some Death in Damascus variants left on the cutting room floor for being needlessly cruel to CDT.\n2. To cover mixed strategies in this context, we can assume that one of the sensory inputs to the agent is a random number.\n3. 
Many of the hypotheticals an agent must consider are internally inconsistent: a deterministic function only has one possible output on a given input, but agents must base their decisions on the expected utility of many different “possible” actions in order to choose the best action. E.g., in Newcomb’s problem, FDT and EDT agents must evaluate the expected utility of two-boxing in order to weigh their options and arrive at their final decision at all, even though it would be inconsistent for such an agent to two-box; and likewise, CDT must evaluate the expected utility of the impossible hypothetical where a CDT agent one-boxes.\nAlthough poorly-understood theoretically, this kind of counterpossible reasoning seems to be entirely feasible in practice. Even though a false conjecture classically implies all propositions, mathematicians routinely reason in a meaningful and nontrivial way with hypothetical scenarios in which a conjecture has different truth-values. The problem of how to best represent counterpossible reasoning in a formal setting, however, remains unsolved.\n\nThe post [New paper: “Functional Decision Theory”](https://intelligence.org/2017/10/22/fdt/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-10-22T17:05:35Z", "authors": ["Matthew Graves"], "summaries": []} -{"id": "6100e8bf461866b6c0167763ef6cc6a4", "title": "AlphaGo Zero and the Foom Debate", "url": "https://intelligence.org/2017/10/20/alphago/", "source": "miri", "source_type": "blog", "text": "[AlphaGo Zero](https://web.archive.org/web/20171020090144/https://deepmind.com/blog/alphago-zero-learning-scratch/) uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn’t pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.\n\n\nThe architecture has been simplified. 
Previous AlphaGo had a policy net that predicted good plays, and a value net that evaluated positions, both feeding into lookahead using MCTS (random probability-weighted plays out to the end of a game). AlphaGo Zero has one neural net that selects moves and this net is trained by Paul-Christiano-style [capability amplification](https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446), playing out games against itself to learn new probabilities for winning moves.\n\n\nAs others have also remarked, this seems to me to be an element of evidence that favors the Yudkowskian position over the Hansonian position in my and Robin Hanson’s [AI-foom debate](https://intelligence.org/ai-foom-debate/).\n\n\nAs I recall and as I understood:\n\n\n* Hanson doubted that what he calls “architecture” is much of a big deal, compared to (Hanson said) elements like cumulative domain knowledge, or special-purpose components built by specialized companies in what he expects to be an ecology of companies serving an AI economy.\n* When I remarked upon how it sure looked to me like humans had an architectural improvement over chimpanzees that counted for a lot, Hanson replied that this seemed to him like a one-time gain from allowing the cultural accumulation of knowledge.\n\n\nI emphasize how all the mighty human edifice of Go knowledge, the joseki and tactics developed over centuries of play, the experts teaching children from an early age, was entirely discarded by AlphaGo Zero with a subsequent performance improvement. These mighty edifices of human knowledge, as I understand the Hansonian thesis, are supposed to be *the* bulwark against rapid gains in AI capability across multiple domains at once. 
I said, “Human intelligence is crap and our accumulated skills are crap,” and this appears to have been borne out.\n\n\nSimilarly, single research labs like DeepMind are not supposed to pull far ahead of the general ecology, because adapting AI to any particular domain is supposed to require lots of components developed all over the place by a market ecology that makes those components available to other companies. AlphaGo Zero is much simpler than that. To the extent that nobody else can run out and build AlphaGo Zero, it’s either because Google has Tensor Processing Units that aren’t generally available, or because DeepMind has a silo of expertise for being able to actually make use of existing ideas like ResNets, or both.\n\n\nSheer speed of capability gain should also be highlighted here. Most of my argument for FOOM in the Yudkowsky-Hanson debate was about self-improvement and what happens when an optimization loop is folded in on itself. Though it wasn’t necessary to my argument, the fact that Go play went from “nobody has come close to winning against a professional” to “so strongly superhuman they’re not really bothering any more” over two years just because that’s what happens when you improve and simplify the architecture, says *you don’t even need self-improvement* to get things that look like FOOM.\n\n\nYes, Go is a closed system allowing for self-play. It still took humans centuries to learn how to play it. 
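The single-net self-play scheme described above can be caricatured as a very small training loop: play games against yourself with the current network, then train the network toward what self-play produced. Everything below is a deliberately toy stand-in with invented names (the real system uses a deep residual network plus MCTS search, omitted here):

```python
import random

# Toy caricature of a self-play training loop (not DeepMind's code):
# one "network," games played against itself, updates toward the outcomes
# that self-play produced.

def train_by_self_play(net, play_game, update, iterations=100):
    """Repeatedly generate self-play data with the current net,
    then train the net on that data."""
    for _ in range(iterations):
        examples = play_game(net)   # (state, outcome) pairs from one game
        net = update(net, examples)
    return net

# Trivial instantiation: the "net" is a single win-probability estimate.
def play_game(net):
    outcome = 1.0 if random.random() < 0.6 else 0.0  # pretend 60% win rate
    return [("start", outcome)]

def update(net, examples, lr=0.05):
    for _, outcome in examples:
        net += lr * (outcome - net)  # move the estimate toward the observed result
    return net

estimate = train_by_self_play(0.5, play_game, update)
```

The point of the sketch is the loop structure, not the learner: no human games enter anywhere; the only training signal is generated by the system playing itself.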
Perhaps the new Hansonian bulwark against rapid capability gain can be that the environment has lots of empirical bits that are supposed to be very hard to learn, *even in the limit of AI thoughts fast enough to blow past centuries of human-style learning in 3 days*; and that humans have learned these vital bits over centuries of cultural accumulation of knowledge, *even though we know that humans take centuries to do 3 days of AI learning when humans have all the empirical bits they need*; and that AIs cannot absorb this knowledge very quickly using “architecture”, *even though humans learn it from each other using architecture*. If so, then let’s write down this new world-wrecking assumption (that is, the world ends if the assumption is false) and be on the lookout for further evidence that this assumption might perhaps be wrong.\n\n\nAlphaGo clearly isn’t a general AI. There’s obviously stuff humans do that make us much more general than AlphaGo, and AlphaGo obviously doesn’t do that. However, if even with the human special sauce we’re to expect AGI capabilities to be slow, domain-specific, and requiring feed-in from a big market ecology, then the situation we see without human-equivalent generality special sauce should not look like this.\n\n\nTo put it another way, I put a lot of emphasis in my debate on recursive self-improvement and the remarkable jump in generality across the change from primate intelligence to human intelligence. It doesn’t mean we can’t get info about speed of capability gains *without* self-improvement. It doesn’t mean we can’t get info about the importance and generality of algorithms *without* the general intelligence trick. 
The debate can start to settle for fast capability gains before we even get to what I saw as the good parts; I wouldn’t have predicted AlphaGo and lost money betting against the speed of its capability gains, because reality held a more extreme position than I did on the Yudkowsky-Hanson spectrum.\n\n\n([Reply from Robin Hanson.](https://www.lesswrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity))\n\n\nThe post [AlphaGo Zero and the Foom Debate](https://intelligence.org/2017/10/20/alphago/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-10-21T02:37:19Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "6b5d10f272afdfe5d0741f48a710c70c", "title": "October 2017 Newsletter", "url": "https://intelligence.org/2017/10/16/october-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "“So far as I can presently estimate, now that we’ve had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.\n\n\n“[…I]t’s hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. […] You can either act despite that, or not act. 
Not act until it’s too late to help much, in the best case; not act at all until after it’s essentially over, in the average case.”\n\n\nRead more in a new blog post by Eliezer Yudkowsky: “[There’s No Fire Alarm for Artificial General Intelligence](https://intelligence.org/2017/10/13/fire-alarm/).” (Discussion on [LessWrong 2.0](https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence), [Hacker News](https://news.ycombinator.com/item?id=15470920).)\n\n\n**Research updates**\n\n\n* New research write-ups and discussions: [The Doomsday Argument in Anthropic Decision Theory](https://agentfoundations.org/item?id=1655); [Smoking Lesion Steelman II](https://agentfoundations.org/item?id=1662)\n* New from AI Impacts: [What Do ML Researchers Think You Are Wrong About?](https://aiimpacts.org/what-do-ml-researchers-think-you-are-wrong-about/), [When Do ML Researchers Think Specific Tasks Will Be Automated?](https://aiimpacts.org/when-do-ml-researchers-think-specific-tasks-will-be-automated/)\n\n\n**General updates**\n\n\n* “[Is Tribalism a Natural Malfunction?](http://nautil.us/issue/52/the-hive/is-tribalism-a-natural-malfunction)”: *Nautilus* discusses MIRI’s work on decision theory, superrationality, and the prisoner’s dilemma.\n* We helped run the 2017 [AI Summer Fellows Program](http://www.rationality.org/workshops/apply-aisfp2017) with the Center for Applied Rationality, and taught at the [European Summer Program on Rationality](https://espr-camp.org).\n* We’re very happy to announce that we’ve received a $100,000 grant [from the Berkeley Existential Risk Initiative](http://existence.org/grants) and Jaan Tallinn, as well as over $30,000 from [Raising for Effective Giving](https://reg-charity.org) and a [pledge](http://www.casinocitytimes.com/news/article/pokerstars-to-donate-wcoop-tournament-fees-to-charity-222683) of $55,000 from [PokerStars](https://en.wikipedia.org/wiki/PokerStars) through REG. 
We’ll be providing more information on our funding situation in advance of our December fundraiser.\n* LessWrong is currently hosting an open beta for a site redesign at [lesswrong.com](https://www.lesswrong.com); see Oliver Habryka’s [strategy write-up](http://lesswrong.com/lw/pes/lw_20_strategic_overview/).\n\n\n**News and links**\n\n\n* [Hillary Clinton](http://lukemuehlhauser.com/hillary-clinton-on-ai-risk/) and [Vladimir Putin](https://www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html) voice worries about the impacts of AI technology.\n* The Future of Life Institute discusses Dan Weld’s work on [explainable AI](https://futureoflife.org/2017/09/27/explainable-ai-a-discussion-with-dan-weld/).\n* Researchers at OpenAI and Oxford release [Learning with Opponent-Learning Awareness](https://blog.openai.com/learning-to-model-other-minds/), an RL algorithm that takes into account how its choice of policy can change other agents’ strategy, enabling cooperative behavior in some simple multi-agent settings.\n* From Carrick Flynn of the Future of Humanity Institute: [Personal Thoughts on Careers in AI Policy and Strategy](http://effective-altruism.com/ea/1fa/personal_thoughts_on_careers_in_ai_policy_and/).\n* [Goodhart’s Imperius](https://www.lesswrong.com/posts/tq2JGX4ojnrkxL7NF/goodhart-s-imperius): A discussion of Goodhart’s Law and human psychology.\n\n\nThe post [October 2017 Newsletter](https://intelligence.org/2017/10/16/october-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-10-17T02:22:28Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "efa660be2c1f201669254ebc6a2cc2de", "title": "There’s No Fire Alarm for Artificial General Intelligence", "url": "https://intelligence.org/2017/10/13/fire-alarm/", "source": "miri", "source_type": "blog", "text": "---\n\n\n \n\n\nWhat is the function of a fire alarm?\n\n\n \n\n\nOne might think that the 
function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building.\n\n\nIn the classic experiment by Latane and Darley in 1968, eight groups of three students each were asked to fill out a questionnaire in a room that shortly after began filling up with smoke. Five out of the eight groups didn’t react or report the smoke, even as it became dense enough to make them start coughing. Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time. This and other experiments seemed to pin down that what’s happening is pluralistic ignorance. We don’t want to look panicky by being afraid of what isn’t an emergency, so we try to look calm while glancing out of the corners of our eyes to see how others are reacting, but of course they are also trying to look calm.\n\n\n(I’ve read a number of replications and variations on this research, and the effect size is blatant. I would not expect this to be one of the results that dies to the replication crisis, and I haven’t yet heard about the replication crisis touching it. But we have to put a maybe-not marker on everything now.)\n\n\nA fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.\n\n\nThe fire alarm doesn’t tell us with certainty that a fire is there. In fact, I can’t recall one time in my life when, exiting a building on a fire alarm, there was an actual fire. Really, a fire alarm is *weaker* evidence of fire than smoke coming from under a door.\n\n\nBut the fire alarm tells us that it’s socially okay to react to the fire. 
It promises us with certainty that we won’t be embarrassed if we now proceed to exit in an orderly fashion.\n\n\nIt seems to me that this is one of the cases where people have mistaken beliefs about what they believe, like when somebody loudly endorsing their city’s team to win the big game will back down as soon as asked to bet. They haven’t consciously distinguished the rewarding exhilaration of shouting that the team will win, from the feeling of anticipating the team will win.\n\n\nWhen people look at the smoke coming from under the door, I think they think their uncertain wobbling feeling comes from not assigning the fire a high-enough probability of really being there, and that they’re reluctant to act for fear of wasting effort and time. If so, I think they’re interpreting their own feelings mistakenly. If that was so, they’d get the same wobbly feeling on hearing the fire alarm, or even more so, because fire alarms correlate to fire less than does smoke coming from under a door. The uncertain wobbling feeling comes from the worry that others believe differently, not the worry that the fire isn’t there. The reluctance to act is the reluctance to be seen looking foolish, not the reluctance to waste effort. 
That’s why the student alone in the room does something about the fire 75% of the time, and why people have no trouble reacting to the much weaker evidence presented by fire alarms.\n\n\n \n\n\n\n\n---\n\n\n \n\n\nIt’s now and then proposed that we ought to start reacting later to the issues of Artificial General Intelligence ([background here](http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html)), because, it is said, we are so far away from it that it just isn’t possible to do productive work on it today.\n\n\n(For direct argument about there being things doable today, see: Soares and Fallenstein ([2014/2017](https://intelligence.org/files/TechnicalAgenda.pdf)); Amodei, Olah, Steinhardt, Christiano, Schulman, and Mané ([2016](https://arxiv.org/abs/1606.06565)); or Taylor, Yudkowsky, LaVictoire, and Critch ([2016](https://intelligence.org/2017/02/28/using-machine-learning/)).)\n\n\n(If none of those papers existed or if you were an AI researcher who’d read them but thought they were all garbage, and you wished you could work on alignment but knew of nothing you could do, the wise next step would be to sit down and spend two hours by the clock sincerely trying to think of possible approaches. Preferably without self-sabotage that makes sure you don’t come up with anything plausible; as might happen if, hypothetically speaking, you would actually find it much more comfortable to believe there was nothing you ought to be working on today, because e.g. then you could work on other things that interested you more.)\n\n\n(But never mind.)\n\n\nSo if AGI seems far-ish away, and you think the conclusion licensed by this is that you can’t do any productive work on AGI alignment yet, then the implicit alternative strategy on offer is: Wait for some unspecified future event that tells us AGI is coming near; and *then* we’ll all know that it’s okay to start working on AGI alignment.\n\n\nThis seems to me to be wrong on a number of grounds. 
Here are some of them.\n\n\n\n \n\n\n**One:** As Stuart Russell observed, if you get radio signals from space and spot a spaceship there with your telescopes and you know the aliens are landing in thirty years, you still start thinking about that today.\n\n\nYou’re not like, “Meh, that’s thirty years off, whatever.” You certainly don’t casually say “Well, there’s nothing we can do until they’re closer.” Not without spending two hours, or at least [five minutes](http://www.readthesequences.com/MotivatedStoppingAndMotivatedContinuation) by the clock, brainstorming about whether there is anything you ought to be starting now.\n\n\nIf you said the aliens were coming in thirty years and you were therefore going to do nothing today… well, if these were [more effective time](https://www.facebook.com/yudkowsky/posts/10155616782514228)s, somebody would ask for a schedule of what you thought ought to be done, starting when, how long before the aliens arrive. If you didn’t have that schedule ready, they’d know that you weren’t operating according to a worked table of timed responses, but just procrastinating and doing nothing; and they’d correctly infer that you probably hadn’t searched very hard for things that could be done today.\n\n\nIn Bryan Caplan’s terms, anyone who seems quite casual about the fact that “nothing can be done now to prepare” about the aliens is [missing a mood](http://econlog.econlib.org/archives/2016/01/the_invisible_t.html); they should be much more alarmed at not being able to think of any way to prepare. And maybe ask if somebody else has come up with any ideas? 
But never mind.\n\n\n \n\n\n**Two:** History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists *in* that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.\n\n\nIn 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was [fifty years away](https://books.google.com/books?id=ldxfLyNIk9wC&pg=PA91&dq=\"i+said+to+my+brother+orville\"&hl=en&sa=X&ved=0ahUKEwioiseChcnWAhWL-VQKHab6AqMQ6AEIJjAA#v=onepage&q=%22i%20said%20to%20my%20brother%20orville%22&f=false).\n\n\nIn 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced [90% confidence](https://books.google.com/books?id=aSgFMMNQ6G4C&pg=PA813&lpg=PA813&dq=weart+fermi&source=bl&ots=Jy1pBOUL10&sig=c9wK_yLHbXZS_GFIv0K3bgpmE58&hl=en&sa=X&ved=0ahUKEwjNofKsisnWAhXGlFQKHbOSB1QQ6AEIKTAA#v=onepage&q=%22ten%20per%20cent%22&f=false) that it was [impossible](http://lesswrong.com/lw/h8m/being_halfrational_about_pascals_wager_is_even/) to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that *if* net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.\n\n\nAnd of course if you’re not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. 
There were esteemed intellectuals saying [four years *after* the Wright Flyer](https://www.xaprb.com/blog/flight-is-impossible/) that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.\n\n\nWere there events that, in [hindsight](https://www.readthesequences.com/Hindsight-Devalues-Science), today, we can see as signs that heavier-than-air flight or nuclear energy were nearing? Sure, but if you go back and read the actual newspapers from that time and see what people actually said about it then, you’ll see that they did not know that these were signs, or that they were very uncertain that these might be signs. Some playing the part of Excited Futurists proclaimed that big changes were imminent, I expect, and others playing the part of Sober Scientists tried to pour cold water on all that childish enthusiasm; I expect that part was more or less exactly the same decades earlier. If somewhere in that din was a superforecaster who said “decades” when it was decades and “5 years” when it was five, good luck noticing them amid all the noise. More likely, the superforecasters were the ones who said “Could be tomorrow, could be decades” both when the big development was a day away and when it was decades away.\n\n\nOne of the major modes by which hindsight bias makes us feel that the past was more predictable than anyone was actually able to predict at the time, is that in hindsight we know what we ought to notice, and we fixate on only one thought as to what each piece of evidence indicates. If you look at what people actually say at the time, historically, they’ve usually got no clue what’s about to happen three months before it happens, because they don’t know which signs are which.\n\n\nI mean, you *could* say the words “AGI is 50 years away” and have those words happen to be true. People were also saying that powered flight was decades away when it was in fact decades away, and those people happened to be right. 
The problem is that everything looks the same to you either way, if you are actually living history instead of reading about it afterwards.\n\n\nIt’s not that whenever somebody says “fifty years” the thing always happens in two years. It’s that this confident prediction of things being far away corresponds to an epistemic state about the technology that feels the same way internally until you are very very close to the big development. It’s the epistemic state of “Well, I don’t see how to do the thing” and sometimes you say that fifty years off from the big development, and sometimes you say it two years away, and sometimes you say it while the Wright Flyer is flying somewhere out of your sight.\n\n\n \n\n\n**Three:** Progress is driven by peak knowledge, not average knowledge.\n\n\nIf Fermi and the Wrights couldn’t see it coming three years out, imagine how hard it must be for anyone else to see it.\n\n\nIf you’re not at the global peak of knowledge of how to do the thing, and looped in on all the progress being made at what will turn out to be the leading project, you aren’t going to be able to see of your own knowledge *at all* that the big development is imminent. Unless you are very good at perspective-taking in a way that wasn’t necessary in a hunter-gatherer tribe, and very good at realizing that other people may know techniques and ideas of which you have no inkling even that you do not know them. If you don’t consciously compensate for the lessons of history in this regard, then you will promptly say the decades-off thing. Fermi wasn’t still thinking that net nuclear energy was impossible or decades away by the time he got to 3 months before he built the first pile, because at that point Fermi was looped in on everything and saw how to do it. 
But anyone not looped in probably still felt like it was fifty years away while the actual pile was fizzing away in a squash court at the University of Chicago.\n\n\nPeople don’t seem to automatically compensate for the fact that the timing of the big development is a function of the peak knowledge in the field, a threshold touched by the people who know the most and have the best ideas; while they themselves have average knowledge; and therefore what they themselves know is not strong evidence about when the big development happens. I think they aren’t thinking about that at all, and they just eyeball it using their own sense of difficulty. If they are thinking anything more deliberate and reflective than that, and incorporating real work into correcting for the factors that might bias their lenses, they haven’t bothered writing down their reasoning anywhere I can read it.\n\n\nTo know that AGI is decades away, we would need enough understanding of AGI to know what pieces of the puzzle are missing, and how hard these pieces are to obtain; and that kind of insight is unlikely to be available until the puzzle is complete. Which is also to say that to anyone outside the leading edge, the puzzle will look more incomplete than it looks on the edge. That project may publish their theories in advance of proving them, although I hope not. But there are unproven theories now too.\n\n\nAnd again, that’s not to say that people saying “fifty years” is a certain sign that something is happening in a squash court; they were saying “fifty years” sixty years ago too. It’s saying that anyone who thinks technological *timelines* are actually forecastable, in advance, by people who are not looped in to the leading project’s progress reports and who don’t share all the best ideas about exactly how to do the thing and how much effort is required for that, is learning the wrong lesson from history. 
In particular, from reading history books that neatly lay out lines of progress and their visible signs that we all know *now* were important and evidential. It’s sometimes possible to say useful conditional things about the consequences of the big development whenever it happens, but it’s rarely possible to make confident predictions about the *timing* of those developments, beyond a one- or two-year horizon. And if you are one of the rare people who can call the timing, if people like that even exist, nobody else knows to pay attention to you and not to the Excited Futurists or Sober Skeptics.\n\n\n \n\n\n**Four:** The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.\n\n\nWhy do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:\n\n\n(A) The author does not know how to build AGI using present technology. The author does not know where to start.\n\n\n(B) The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.\n\n\n(C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.\n\n\nWe’ve now considered some aspects of argument A. 
Let’s consider argument B for a moment.\n\n\nSuppose I say: “It is now possible for one comp-sci grad to do in a week anything that N+ years ago the research community could do with neural networks *at all*.” How large is N?\n\n\nI got some answers to this on Twitter from people whose credentials I don’t know, but the most common answer was five, which sounds about right to me based on my own acquaintance with machine learning. (Though obviously not as a literal universal, because reality is never that neat.) If you could do something in 2012 period, you can probably do it fairly straightforwardly with modern GPUs, Tensorflow, Xavier initialization, batch normalization, ReLUs, and Adam or RMSprop or just stochastic gradient descent with momentum. The modern techniques are just that much better. To be sure, there are things we can’t do now with just those simple methods, things that require tons more work, but those things were not possible at all in 2012.\n\n\nIn machine learning, when you can do something at all, you are probably at most a few years away from being able to do it easily using the future’s much superior tools. From this standpoint, argument B, “You don’t understand how hard it is to do what we do,” is something of a non-sequitur when it comes to timing.\n\n\nStatement B sounds to me like the same sentiment voiced by Rutherford [in 1933](https://www.edge.org/conversation/the-myth-of-ai#26015) when he called net energy from atomic fission “moonshine”. If you were a nuclear physicist in 1933 then you had to split all your atoms by hand, by bombarding them with other particles, and it was a laborious business. If somebody talked about getting net energy from atoms, maybe it made you feel that you were unappreciated, that people thought your job was easy.\n\n\nBut of course this will always be the lived experience for AI engineers on serious frontier projects. 
You don’t get paid big bucks to do what a grad student can do in a week (unless you’re working for a bureaucracy with no clue about AI; but that’s not Google or FB). Your personal experience will *always* be that what you are paid to spend months doing is difficult. A change in this personal experience is therefore not something you can use as a fire alarm.\n\n\nThose playing the part of wiser sober skeptical scientists would obviously agree in the abstract that our tools will improve; but in the popular articles they pen, they just talk about the painstaking difficulty of this year’s tools. I think that when they’re in that mode they are not even trying to forecast what the tools will be like in 5 years; they haven’t written down any such arguments as part of the articles I’ve read. I think that when they tell you that AGI is decades off, they are literally giving an estimate of [how long it feels to them](https://www.readthesequences.com/UnboundedScalesHugeJuryAwardsAndFuturism) like it would take to build AGI using their current tools and knowledge. Which is why they emphasize how hard it is to stir the heap of linear algebra until it spits out good answers; I think they are not imagining, at all, into how this experience may change over considerably less than fifty years. If they’ve explicitly considered the bias of estimating future tech timelines based on their present subjective sense of difficulty, and tried to compensate for that bias, they haven’t written that reasoning down anywhere I’ve read it. Nor have I ever heard of that forecasting method giving good results historically.\n\n\n \n\n\n**Five:** Okay, let’s be blunt here. I don’t think most of the discourse about AGI being far away (*or* that it’s near) is being generated by models of future progress in machine learning. 
I don’t think we’re looking at wrong models; I think we’re looking at no models.\n\n\nI was once at a conference where there was a panel full of famous AI luminaries, and most of the luminaries were nodding and agreeing with each other that of course AGI was very far off, except for two famous AI luminaries who stayed quiet and let others take the microphone.\n\n\nI got up in Q&A and said, “Okay, you’ve all told us that progress won’t be all that fast. But let’s be more concrete and specific. I’d like to know what’s the *least* impressive accomplishment that you are very confident *cannot* be done in the next two years.”\n\n\nThere was a silence.\n\n\nEventually, two people on the panel ventured replies, spoken in a rather more tentative tone than they’d been using to pronounce that AGI was decades out. They named “A robot puts away the dishes from a dishwasher without breaking them”, and [Winograd schemas](http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html). Specifically, “I feel quite confident that the Winograd schemas—where we recently had a result that was in the 50, 60% range—in the next two years, we will not get 80, 90% on that regardless of the techniques people use.”\n\n\nA few months after that panel, there was unexpectedly a big breakthrough on Winograd schemas. The breakthrough didn’t crack 80%, so three cheers for wide credibility intervals with error margin, but I expect the predictor might be feeling slightly more nervous now with one year left to go. 
(I don’t think it was the breakthrough I remember reading about, but Rob turned up [this paper](https://www.ijcai.org/proceedings/2017/0326.pdf) as an example of one that could have been submitted at most 44 days after the above conference and gets up to 70%.)\n\n\nBut that’s not the point. The point is the silence that fell after my question, and that eventually I only got two replies, spoken in tentative tones. When I asked for concrete feats that were impossible in the next two years, I think that that’s when the luminaries on that panel switched to trying to build a mental model of future progress in machine learning, asking themselves what they could or couldn’t predict, what they knew or didn’t know. And to their credit, most of them did know their profession well enough to realize that forecasting future boundaries around a rapidly moving field is actually *really hard*, that nobody knows what will appear on arXiv next month, and that they needed to put wide credibility intervals with very generous upper bounds on how much progress might take place twenty-four months’ worth of arXiv papers later.\n\n\n(Also, Demis Hassabis was present, so they all knew that if they named something insufficiently impossible, Demis would have DeepMind go and do it.)\n\n\nThe question I asked was in a completely different genre from the panel discussion, requiring a mental context switch: the assembled luminaries actually had to try to consult their rough, scarce-formed intuitive models of progress in machine learning and figure out what future experiences, if any, their model of the field definitely prohibited within a two-year time horizon. 
Instead of, well, emitting socially desirable verbal behavior meant to kill that darned hype about AGI and get some predictable applause from the audience.\n\n\nI’ll be blunt: I don’t think the confident long-termism has been thought out at all. If your model has the extraordinary power to say what will be impossible in ten years after another one hundred and twenty months of arXiv papers, then you ought to be able to say much weaker things that are impossible in two years, and you should have those predictions queued up and ready to go rather than falling into nervous silence after being asked.\n\n\nIn reality, the two-year problem is hard and the ten-year problem is laughably hard. The future is hard to predict in general, our predictive grasp on a rapidly changing and advancing field of science and engineering is very weak indeed, and it doesn’t permit narrow credible intervals on what can’t be done.\n\n\nGrace et al. ([2017](https://arxiv.org/abs/1705.08807)) surveyed the predictions of 352 presenters at ICML and NIPS 2015. Respondents’ aggregate forecast was that the proposition “all occupations are fully automatable” (in the sense that “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”) will not reach 50% probability until 121 years hence. 
Except that a randomized subset of respondents were instead asked the slightly different question of “when unaided machines can accomplish every task better and more cheaply than human workers”, and in this case held that this was 50% likely to occur [within 44 years](http://www.bayesianinvestor.com/blog/index.php/2017/06/01/do-ai-experts-exist/).\n\n\nThat’s what happens when you ask people to produce an estimate they can’t estimate, and there’s a social sense of what the desirable verbal behavior is supposed to be.\n\n\n \n\n\n\n\n---\n\n\n \n\n\nWhen I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.\n\n\nWhat I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.\n\n\nThere’s an old trope saying that as soon as something is actually done, it ceases to be called AI. People who work in AI and are in a broad sense pro-accelerationist and techno-enthusiast, what you might call the Kurzweilian camp (of which I am not a member), will sometimes rail against this as unfairness in judgment, as moving goalposts.\n\n\nThis overlooks a real and important phenomenon of adverse selection against AI accomplishments: If you can do something impressive-sounding with AI in 1974, then that is because that thing turned out to be doable in some cheap cheaty way, not because 1974 was so amazingly great at AI. We are uncertain about how much cognitive effort it takes to perform tasks, and how easy it is to cheat at them, and the first “impressive” tasks to be accomplished will be those where we were most wrong about how much effort was required. 
There was a time when some people thought that a computer winning the world chess championship would require progress in the direction of AGI, and that this would count as a sign that AGI was getting closer. When Deep Blue beat Kasparov in 1997, in a Bayesian sense we did learn something about progress in AI, but we also learned something about chess being easy. Considering the techniques used to construct Deep Blue, most of what we learned was “It is surprisingly possible to play chess without easy-to-generalize techniques” and not much “A surprising amount of progress has been made toward AGI.”\n\n\nWas AlphaGo smoke under the door, a sign of AGI in 10 years or less? People had previously given Go as an example of What You See Before The End.\n\n\nLooking over the paper describing AlphaGo’s architecture, it seemed to me that we *were* mostly learning that available AI techniques were likely to go further towards generality than expected, rather than about Go being surprisingly easy to achieve with fairly narrow and ad-hoc approaches. Not that the method scales to AGI, obviously; but AlphaGo did look like a product of *relatively* general insights and techniques being turned on the special case of Go, in a way that Deep Blue wasn’t. I also updated significantly on “The general learning capabilities of the human cortical algorithm are less impressive, less difficult to capture with a ton of gradient descent and a zillion GPUs, than I thought,” because if there were anywhere we expected an impressive hard-to-match highly-natural-selected but-still-general cortical algorithm to come into play, it would be in humans playing Go.\n\n\nMaybe if we’d seen a thousand Earths undergoing similar events, we’d gather the statistics and find that a computer winning the planetary Go championship is a reliable ten-year-harbinger of AGI. But I don’t actually know that. Neither do you. 
Certainly, anyone can publicly argue that we just learned Go was easier to achieve with strictly narrow techniques than expected, as was true many times in the past. There’s no possible sign short of actual AGI, no case of smoke from under the door, for which we know that this is definitely serious fire and now AGI is 10, 5, or 2 years away. Let alone a sign where we know everyone else will believe it.\n\n\nAnd in any case, multiple leading scientists in machine learning have already published articles telling us their criterion for a fire alarm. They will believe Artificial General Intelligence is imminent:\n\n\n(A) When they personally see how to construct AGI using their current tools. This is what they are always saying is not currently true in order to castigate the folly of those who think AGI might be near.\n\n\n(B) When their personal jobs do not give them a sense of everything being difficult. This, they are at pains to say, is a key piece of knowledge not possessed by the ignorant layfolk who think AGI might be near, who only believe that because they have never stayed up until 2AM trying to get a generative adversarial network to stabilize.\n\n\n(C) When they are very impressed by how smart their AI is relative to a human being in respects that still feel magical to them; as opposed to the parts they do know how to engineer, which no longer seem magical to them; aka the AI seeming pretty smart in interaction and conversation; aka the AI actually being an AGI already.\n\n\nSo there isn’t going to be a fire alarm. 
Period.\n\n\nThere is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.\n\n\n \n\n\n\n\n---\n\n\n \n\n\nSo far as I can presently estimate, now that we’ve had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.\n\n\nBy saying we’re probably going to be in roughly this epistemic state until almost the end, I *don’t* mean to say we know that AGI is imminent, or that there won’t be important new breakthroughs in AI in the intervening time. I mean that it’s hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky. Maybe researcher enthusiasm and funding will rise further, and we’ll be able to say that timelines are shortening; or maybe we’ll hit another AI winter, and we’ll know that’s a sign indicating that things will take longer than they would otherwise; but we still won’t know *how long.*\n\n\nAt some point we might see a sudden flood of arXiv papers in which really interesting and fundamental and scary cognitive challenges seem to be getting done at an increasing pace. Whereupon, as this flood accelerates, even some who imagine themselves sober and skeptical will be unnerved to the point that they venture that perhaps AGI is only 15 years away now, maybe, possibly. 
The signs might become so blatant, very soon before the end, that people start thinking it is socially acceptable to say that maybe AGI is 10 years off. Though the signs would have to be pretty darned blatant, if they’re to overcome the social barrier posed by luminaries who are estimating arrival times to AGI using their personal knowledge and personal difficulties, as well as all the historical bad feelings about AI winters caused by hype.\n\n\nBut even if it becomes socially acceptable to say that AGI is 15 years out, in those last couple of years or months, I would still expect there to be disagreement. There will still be others protesting that, as much as associative memory and human-equivalent cerebellar coordination (or whatever) are now solved problems, they still don’t know how to construct AGI. They will note that there are no AIs writing computer science papers, or holding a truly sensible conversation with a human, and castigate the senseless alarmism of those who talk as if we already knew how to do that. They will explain that foolish laypeople don’t realize how much pain and tweaking it takes to get the current systems to work. (Although those modern methods can easily do almost anything that was possible in 2017, and any grad student knows how to roll a stable GAN on the first try using the tf.unsupervised module in Tensorflow 5.3.1.)\n\n\nWhen all the pieces are ready and in place, lacking only the last piece to be assembled by the very peak of knowledge and creativity across the whole world, it will still seem to the average ML person that AGI is an enormous challenge looming in the distance, because they still won’t personally know how to construct an AGI system. 
Prestigious heads of major AI research groups will still be writing [articles](https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/) decrying the folly of fretting about the total destruction of all Earthly life and all future value it could have achieved, and saying that we should not let this distract us from *real, respectable concerns* like loan-approval systems accidentally absorbing human biases.\n\n\nOf course, the future is very hard to predict in detail. It’s so hard that not only do I confess my own inability, I make the far stronger positive statement that nobody else can do it either. The “flood of groundbreaking arXiv papers” scenario is one way things could maybe possibly go, but it’s an implausibly specific scenario that I made up for the sake of concreteness. It’s certainly not based on my extensive experience watching other Earthlike civilizations develop AGI. I do put a significant chunk of probability mass on “There’s not much sign visible outside a Manhattan Project until Hiroshima,” because that scenario is simple. Anything more complex is just one more story full of [burdensome details](https://www.readthesequences.com/Burdensome-Details) that aren’t likely to all be true.\n\n\nBut no matter how the details play out, I do predict in a very general sense that there will be no fire alarm that is not an actual running AGI—no unmistakable sign before then that everyone knows and agrees on, that lets people act without feeling nervous about whether they’re worrying too early. That’s just not how the history of technology has usually played out in much simpler cases like flight and nuclear engineering, let alone a case like this one where all the signs and models are disputed. 
We already know enough about the uncertainty and low quality of discussion surrounding this topic to be able to say with confidence that there will be no unarguable socially accepted sign of AGI arriving 10 years, 5 years, or 2 years beforehand. If there’s any general social panic it will be by coincidence, based on terrible reasoning, uncorrelated with real timelines except by total coincidence, set off by a Hollywood movie, and focused on relatively trivial dangers.\n\n\nIt’s no coincidence that nobody has given any actual account of such a fire alarm, and argued convincingly about how much time it means we have left, and what projects we should only then start. If anyone does write that proposal, the next person to write one will say something completely different. And probably neither of them will succeed at convincing me that they know anything prophetic about timelines, or that they’ve identified any sensible angle of attack that is (a) worth pursuing at all and (b) not worth starting to work on right now.\n\n\n \n\n\n\n\n---\n\n\n \n\n\nIt seems to me that the decision to delay all action until a nebulous totally unspecified future alarm goes off, implies an order of recklessness great enough that the law of continued failure comes into play.\n\n\nThe law of continued failure is the rule that says that if your country is incompetent enough to use a plaintext 9-numeric-digit password on all of your bank accounts and credit applications, your country is not competent enough to correct course after the next disaster in which a hundred million passwords are revealed. A civilization competent enough to correct course in response to that prod, to react to it the way you’d want them to react, is competent enough not to make the mistake in the first place. 
When a system fails massively and obviously, rather than subtly and at the very edges of competence, the next prod is not going to cause the system to suddenly snap into doing things intelligently.\n\n\nThe law of continued failure is especially important to keep in mind when you are dealing with big powerful systems or high-status people that you might feel nervous about derogating, because you may be tempted to say, “Well, it’s flawed now, but as soon as a future prod comes along, everything will snap into place and everything will be all right.” The systems about which this fond hope is actually warranted look like they are mostly doing all the important things right already, and only failing in one or two steps of cognition. The fond hope is almost never warranted when a person or organization or government or social subsystem is currently falling massively short.\n\n\nThe folly required to ignore the prospect of aliens landing in thirty years is already great enough that the other flawed elements of the debate should come as no surprise.\n\n\nAnd with all of that going wrong simultaneously today, we should predict that the same system and incentives won’t produce correct outputs after receiving an uncertain sign that maybe the aliens are landing in five years instead. The law of continued failure suggests that if existing authorities failed in enough different ways at once to think that it makes sense to try to derail a conversation about existential risk by saying the real problem is the security on self-driving cars, the default expectation is that they will still be saying silly things later.\n\n\nPeople who make large numbers of simultaneous mistakes don’t generally have all of the incorrect thoughts subconsciously labeled as “incorrect” in their heads. Even when motivated, they can’t suddenly flip to skillfully executing all-correct reasoning steps instead. 
Yes, we have various experiments showing that monetary incentives can reduce overconfidence and political bias, but (a) that’s reduction rather than elimination, (b) it’s with extremely clear short-term direct incentives, not the nebulous and politicizable incentive of “a lot being at stake”, and (c) that doesn’t mean a switch is flipping all the way to “carry out complicated correct reasoning”. If someone’s brain contains a switch that can flip to enable complicated correct reasoning at all, it’s got enough internal precision and skill to think mostly-correct thoughts now instead of later—at least to the degree that some conservatism and double-checking gets built into examining the conclusions that people know will get them killed if they’re wrong about them.\n\n\nThere is no sign and portent, [no threshold crossed](http://lesswrong.com/lw/hp5/after_critical_event_w_happens_they_still_wont/), that suddenly causes people to wake up and start doing things systematically correctly. People who can react that competently to any sign at all, let alone a less-than-perfectly-certain not-totally-agreed item of evidence that is *likely* a wakeup call, have probably already done the timebinding thing. They’ve already imagined the future sign coming, and gone ahead and thought sensible thoughts earlier, like Stuart Russell saying, “If you know the aliens are landing in thirty years, it’s still a big deal now.”\n\n\n \n\n\n\n\n---\n\n\n \n\n\nBack in the funding-starved early days of what is now MIRI, I learned that people who donated last year were likely to donate this year, and people who last year were planning to donate “next year” would quite often this year be planning to donate “next year”. Of course there were genuine transitions from zero to one; everything that happens needs to happen for a first time. There were college students who said “later” and gave nothing for a long time in a genuinely strategically wise way, and went on to get nice jobs and start donating. 
But I also learned well that, like many cheap and easy solaces, saying the word “later” is addictive; and that this luxury is available to the rich as well as the poor.\n\n\nI don’t expect it to be any different with AGI alignment work. People who are trying to get what grasp they can on the alignment problem will, in the next year, be doing a little (or a lot) better with whatever they grasped in the previous year (plus, yes, any general-field advances that have taken place in the meantime). People who want to defer that until after there’s a better understanding of AI and AGI will, after the next year’s worth of advancements in AI and AGI, want to defer work until a better future understanding of AI and AGI.\n\n\nSome people really *want* alignment to *get done* and are therefore *now* trying to wrack their brains about how to get something like a reinforcement learner to [reliably identify a utility function over particular elements in a model of the causal environment instead of a sensory reward term](https://arbital.com/p/pointing_finger/) or [defeat the seeming tautologicalness of updated (non-)deference](https://arbital.com/p/updated_deference/). Others would rather be working on other things, and will therefore declare that there is no work that can possibly be done today, *not* spending two hours quietly thinking about it first before making that declaration. And this will not change tomorrow, unless perhaps tomorrow is when we wake up to some interesting newspaper headlines, and probably not even then. The luxury of saying “later” is not available only to the truly poor-in-available-options.\n\n\nAfter a while, I started telling effective altruists in college: “If you’re planning to earn-to-give later, then for now, give around $5 every three months. 
And never give exactly the same amount twice in a row, or give to the same organization twice in a row, so that you practice the mental habit of re-evaluating causes and re-evaluating your donation amounts on a regular basis. *Don’t* learn the mental habit of just always saying ‘later’.”\n\n\nSimilarly, if somebody was *actually* going to work on AGI alignment “later”, I’d tell them to, every six months, spend a couple of hours coming up with the best current scheme they can devise for aligning AGI and doing useful work on that scheme. Assuming, if they must, that AGI were somehow done with technology resembling current technology. And publishing their best-current-scheme-that-isn’t-good-enough, at least in the sense of posting it to Facebook; so that they will have a sense of embarrassment about naming a scheme that does not look like somebody actually spent two hours trying to think of the best bad approach.\n\n\nThere are things we’ll better understand about AI in the future, and things we’ll learn that might give us more confidence that particular research approaches will be relevant to AGI. There may be more future sociological developments akin to Nick Bostrom publishing *Superintelligence*, Elon Musk tweeting about it and thereby heaving a rock through the Overton Window, or more respectable luminaries like Stuart Russell openly coming on board. The future will hold more AlphaGo-like events to publicly and privately highlight new ground-level advances in ML technique; and it may somehow be that this does *not* leave us in the same epistemic state as having already seen AlphaGo and GANs and the like. It could happen! I can’t see exactly how, but the future does have the capacity to pull surprises in that regard.\n\n\nBut before waiting on that surprise, you should ask whether your uncertainty about AGI timelines is really uncertainty at all. 
If it feels to you that guessing AGI might have a 50% probability in N years is not enough knowledge to act upon, if that feels scarily uncertain and you want to wait for more evidence before making any decisions… then ask yourself how you’d feel if you believed the probability was 50% in N years, and everyone else on Earth also believed it was 50% in N years, and everyone believed it was right and proper to carry out policy P when AGI has a 50% probability of arriving in N years. If that visualization feels very different, then any nervous “uncertainty” you feel about doing P is not really about whether AGI takes much longer than N years to arrive.\n\n\nAnd you are almost surely going to be stuck with that feeling of “uncertainty” no matter how close AGI gets; because no matter how close AGI gets, whatever signs appear will almost surely not produce common, shared, agreed-on public knowledge that AGI has a 50% chance of arriving in N years, nor any agreement that it is therefore right and proper to react by doing P.\n\n\nAnd if all that did become common knowledge, then P is unlikely to still be a neglected intervention, or AI alignment a neglected issue; so you will have waited until sadly late to help.\n\n\nBut far more likely is that the common knowledge just isn’t going to be there, and so it will always feel nervously “uncertain” to consider acting.\n\n\nYou can either act despite that, or not act. Not act until it’s too late to help much, in the best case; not act at all until after it’s essentially over, in the average case.\n\n\nI don’t think it’s wise to wait on an unspecified epistemic miracle to change how we feel. In all probability, you’re going to be in this mental state for a while—including any nervous-feeling “uncertainty”. 
If you handle this mental state by saying “later”, that general policy is not likely to have good results for Earth.\n\n\n \n\n\n\n\n---\n\n\n \n\n\nFurther resources:\n\n\n* MIRI’s [research guide](https://intelligence.org/research-guide/) and [research forum](https://agentfoundations.org)\n* FLI’s [collection of introductory resources](https://futureoflife.org/2016/02/29/introductory-resources-on-ai-safety-research/)\n* CHAI’s alignment bibliography at \n* 80,000 Hours’ AI job postings on \n* The Open Philanthropy Project’s [AI fellowship](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program) and general call for [research proposals](http://www.openphilanthropy.org/blog/concrete-problems-ai-safety)\n* My brain-dumps on [AI alignment](https://arbital.com/explore/2v)\n* If you’re arriving here for the first time, my long-standing work on [rationality](https://www.lesswrong.com/sequences), and CFAR’s [workshops](http://rationality.org/workshops/upcoming)\n* And some general tips from [Ray Arnold](http://effective-altruism.com/ea/17s/what_should_the_average_ea_do_about_ai_alignment/) for effective altruists considering AI alignment as a cause area.\n\n\n \n\n\n\n\n---\n\n\nThe post [There’s No Fire Alarm for Artificial General Intelligence](https://intelligence.org/2017/10/13/fire-alarm/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-10-14T02:05:17Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "af1c518cd3528ae97981f655ee12b5b2", "title": "September 2017 Newsletter", "url": "https://intelligence.org/2017/09/24/september-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "**Research updates**\n\n\n* “[Incorrigibility in the CIRL Framework](https://intelligence.org/2017/08/31/incorrigibility-in-cirl/)”: a new paper by MIRI assistant researcher Ryan Carey responds to 
Hadfield-Menell et al.’s “The Off-Switch Game”.\n* New at IAFF: [The Three Levels of Goodhart’s Curse](https://agentfoundations.org/item?id=1621); [Conditioning on Conditionals](https://agentfoundations.org/item?id=1624); [Stable Pointers to Value: An Agent Embedded in Its Own Utility Function](https://agentfoundations.org/item?id=1622); [Density Zero Exploration](https://agentfoundations.org/item?id=1627); [Autopoietic Systems and the Difficulty of AGI Alignment](https://agentfoundations.org/item?id=1628)\n* Ryan Carey is leaving MIRI to collaborate with the Future of Humanity Institute’s Owain Evans on AI safety work.\n\n\n**General updates**\n\n\n* As part of his engineering internship at MIRI, Max Harms assisted in the construction and extension of [RL-Teacher](https://github.com/nottombrown/rl-teacher), an open-source tool for training AI systems with human feedback based on the “[Deep RL from Human Preferences](https://arxiv.org/abs/1706.03741)” OpenAI / DeepMind research collaboration. See [OpenAI’s announcement](https://blog.openai.com/gathering_human_feedback/).\n* MIRI COO Malo Bourgon participated in panel discussions on getting things done ([video](https://www.youtube.com/watch?v=trTslOidmq8)) and working in AI ([video](https://www.youtube.com/watch?v=gmL_7SayalM)) at the Effective Altruism Global conference in San Francisco. AI Impacts researcher Katja Grace also spoke on AI safety ([video](https://www.youtube.com/watch?v=r91Co5CeOCY)). 
Other EAG talks on AI included Daniel Dewey’s ([video](https://www.youtube.com/watch?v=Nfrh4K3d_Z0)) and Owen Cotton-Barratt’s ([video](https://www.youtube.com/watch?v=gATWIWiIy_8)), and a larger panel discussion ([video](https://www.youtube.com/watch?v=JA4vW4oQavk)).\n* Announcing two winners of the Intelligence in Literature prize: Laurence Raphael Brothers’ “[Houseproud](https://www.laurencebrothers.com/stories/houseproud/)” and Shane Halbach’s “[Human in the Loop](https://shanehalbach.com/human-in-the-loop/)”.\n* [RAISE](https://wiki.lesswrong.com/wiki/Road_to_AI_Safety_Excellence), a project to develop online AI alignment course material, is seeking volunteers.\n\n\n**News and links**\n\n\n* The Open Philanthropy Project is accepting applicants to an [AI Fellows Program](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program) “to fully support a small group of the most promising PhD students in artificial intelligence and machine learning”. See also Open Phil’s partial list of [key research topics](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program#examples) in AI alignment.\n* [Call for papers](http://www.aies-conference.com/): AAAI and ACM are running a new Conference on AI, Ethics, and Society, with submissions due by the end of October.\n* DeepMind’s Viktoriya Krakovna argues for a [portfolio approach to AI safety research](https://vkrakovna.wordpress.com/2017/08/16/portfolio-approach-to-ai-safety-research/).\n* “[Teaching AI Systems to Behave Themselves](https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html?referer=)”: a solid article from the *New York Times* on the growing field of AI safety research. 
The *Times* also has an [opening](https://nytimes.wd5.myworkdayjobs.com/en-US/News/job/New-York-NY/Artificial-Intelligence-FutureTech-Investigative-Reporter_REQ-001480-1) for an investigative reporter in AI.\n* UC Berkeley’s Center for Long-term Cybersecurity is [hiring](https://cltc.berkeley.edu/2017/09/15/cltc-is-hiring-check-out-our-job-listings/) for several roles, including researcher, assistant to the director, and program manager.\n* [*Life 3.0*](https://smile.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598): Max Tegmark releases a new book on the future of AI ([podcast discussion](https://futureoflife.org/2017/08/29/podcast-life-3-0-human-age-artificial-intelligence/)).\n\n\nThe post [September 2017 Newsletter](https://intelligence.org/2017/09/24/september-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-09-25T05:14:16Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "c55a3a13fdac005f0f3d26dbc73adc8d", "title": "New paper: “Incorrigibility in the CIRL Framework”", "url": "https://intelligence.org/2017/08/31/incorrigibility-in-cirl/", "source": "miri", "source_type": "blog", "text": "[![Incorrigibility in the CIRL Framework](https://intelligence.org/files/IncorrigibilityCIRL.png)](https://arxiv.org/abs/1709.06275)\n\n\nMIRI assistant research fellow Ryan Carey has a new paper out discussing situations where good performance in [Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137) (CIRL) tasks fails to imply that software agents will assist or cooperate with programmers.\n\n\nThe paper, titled “**[Incorrigibility in the CIRL Framework](https://arxiv.org/abs/1709.06275)**,” lays out four scenarios in which CIRL violates the four conditions for *corrigibility* defined in [Soares et al. (2015)](https://intelligence.org/feed/?paged=22). 
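As a rough illustration of the modular-shutdown idea the paper advocates, here is a toy sketch (all names are illustrative, and none of this code is from the paper): a small, simple override layer sits between a complex, possibly-misspecified value learning agent and its actuators, and its behavior never consults the agent's prior or reward model.

```python
# Toy sketch (illustrative only; not code from the paper): the "small
# verified shutdown module" idea. A simple override layer sits between a
# complex, possibly-misspecified value learning agent and its actuators.

class ValueLearningAgent:
    """Stand-in for a complex agent whose reward model may be buggy."""

    def choose_action(self, observation):
        # Arbitrarily complicated (and possibly misspecified) policy.
        return "pursue_current_plan"


class ShutdownModule:
    """Meant to be small and simple enough to verify directly.

    This only illustrates condition (i) from the paper: actions can be
    overridden by a small module that watches for shutdown commands.
    Conditions (ii) and (iii) -- the agent having no incentive to remove
    or disable the module -- are the hard open problem, and a wrapper
    like this one does nothing to address them.
    """

    def __init__(self):
        self.shutdown_requested = False

    def receive_command(self, command):
        if command == "SHUTDOWN":
            self.shutdown_requested = True

    def filter(self, proposed_action):
        # Override the agent's proposed action once shutdown is requested.
        return "safe_no_op" if self.shutdown_requested else proposed_action


agent = ValueLearningAgent()
module = ShutdownModule()

print(module.filter(agent.choose_action(None)))  # pursue_current_plan
module.receive_command("SHUTDOWN")
print(module.filter(agent.choose_action(None)))  # safe_no_op
```

The point of routing every action through `filter` is that the override path stays correct even if everything inside `ValueLearningAgent` is wrong; whether a capable agent would leave such a module in place is exactly the question the paper examines.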
Abstract:\n\n\n\n> A value learning system has incentives to follow shutdown instructions, assuming the shutdown instruction provides information (in the technical sense) about which actions lead to valuable outcomes. However, this assumption is not robust to model mis-specification (e.g., in the case of programmer errors). We demonstrate this by presenting some Supervised POMDP scenarios in which errors in the parameterized reward function remove the incentive to follow shutdown commands. These difficulties parallel those discussed by Soares et al. (2015) in their paper on corrigibility.\n> \n> \n> We argue that it is important to consider systems that follow shutdown commands under some weaker set of assumptions (e.g., that one small verified module is correctly implemented; as opposed to an entire prior probability distribution and/or parameterized reward function). We discuss some difficulties with simple ways to attempt to attain these sorts of guarantees in a value learning framework.\n> \n> \n\n\nThe paper is a response to a paper by Hadfield-Menell, Dragan, Abbeel, and Russell, “[The Off-Switch Game](https://arxiv.org/abs/1611.08219).” Hadfield-Menell et al. show that an AI system will be more responsive to human inputs when it is uncertain about its reward function and thinks that its human operator has more information about this reward function. Carey shows that the CIRL framework can be used to formalize the problem of corrigibility, and that the known assurances for CIRL systems, given in “The Off-Switch Game”, rely on strong assumptions about having an error-free CIRL system. With less idealized assumptions, a value learning agent may have beliefs that cause it to evade redirection from the human.\n\n\n\n> [T]he purpose of a shutdown button is to shut the AI system down *in the event that all other assurances failed*, e.g., in the event that the AI system is ignoring (for one reason or another) the instructions of the operators. 
If the designers of [the AI system] **R** have programmed the system so perfectly that the prior and [reward function] *R* are completely free of bugs, then the theorems of Hadfield-Menell et al. (2017) do apply. In practice, this means that in order to be corrigible, it would be necessary to have an AI system that was uncertain about all things that could possibly matter. The problem is that performing Bayesian reasoning over all possible worlds and all possible value functions is quite intractable. Realistically, humans will likely have to use a large number of heuristics and approximations in order to implement the system’s belief system and updating rules. […]\n> \n> \n> Soares et al. (2015) seem to want a shutdown button that works as a mechanism of last resort, to shut an AI system down in cases where it has observed and refused a programmer suggestion (and the programmers believe that the system is malfunctioning). Clearly, *some* part of the system must be working correctly in order for us to expect the shutdown button to work at all. However, it seems undesirable for the working of the button to depend on there being zero critical errors in the specification of the system’s prior, the specification of the reward function, the way it categorizes different types of actions, and so on. 
Instead, it is desirable to develop a shutdown module that is small and simple, with code that could ideally be rigorously verified, and which ideally works to shut the system down even in the event of large programmer errors in the specification of the rest of the system.\n> \n> \n> In order to do this in a value learning framework, we require a value learning system that (i) is capable of having its actions overridden by a small verified module that watches for shutdown commands; (ii) has no incentive to remove, damage, or ignore the shutdown module; and (iii) has some small incentive to keep its shutdown module around; even under a broad range of cases where *R*, the prior, the set of available actions, etc. are misspecified.\n> \n> \n\n\nEven if the utility function is learned, there is still a need for additional lines of defense against unintended failures. The hope is that this can be achieved by modularizing the AI system. For that purpose, we would need a model of an agent that will behave corrigibly in a way that is robust to misspecification of other system components.\n\n\nThe post [New paper: “Incorrigibility in the CIRL Framework”](https://intelligence.org/2017/08/31/incorrigibility-in-cirl/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-09-01T02:06:32Z", "authors": ["Matthew Graves"], "summaries": []}
-{"id": "881ba7a189c42618885d3eebfb3c3ce1", "title": "August 2017 Newsletter", "url": "https://intelligence.org/2017/08/16/august-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* “[A Formal Approach to the Problem of Logical Non-Omniscience](https://arxiv.org/abs/1707.08747v1)”: We presented our work on [logical 
induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) at the [16th Conference on Theoretical Aspects of Rationality and Knowledge](http://tark17.csc.liv.ac.uk/).\n* New at IAFF: [Smoking Lesion Steelman](https://agentfoundations.org/item?id=1525); “[Like This World, But…](https://agentfoundations.org/item?id=1527)”; [Jessica Taylor’s Current Thoughts on Paul Christiano’s Research Agenda](https://agentfoundations.org/item?id=1534); [Open Problems Regarding Counterfactuals: An Introduction For Beginners](https://agentfoundations.org/item?id=1591)\n* “[A Game-Theoretic Analysis of The Off-Switch Game](http://www.tomeveritt.se/papers/AGI-17-off-switch-game.pdf)”: researchers from Australian National University and Linköping University release a new paper on corrigibility, spun off from a MIRIx workshop.\n\n\n**General updates**\n* Daniel Dewey of the Open Philanthropy Project [writes up his current thoughts](http://effective-altruism.com/ea/1ca/my_current_thoughts_on_miris_highly_reliable/) on MIRI’s highly reliable agent design work, with discussion from Nate Soares and others in the comments section.\n* Sarah Marquart of the Future of Life Institute [discusses](https://futureoflife.org/2017/07/18/aligning-superintelligence-with-human-interests/) MIRI’s work on logical inductors, corrigibility, and other topics.\n* We attended the [Workshop on Decision Theory & the Future of Artificial Intelligence](http://www.decision-ai.org/) and the [5th International Workshop on Strategic Reasoning](http://sr2017.csc.liv.ac.uk/).\n\n\n\n**News and links**\n* Open Phil awards [a four-year $2.4 million grant](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research) to Yoshua Bengio’s group at the Montreal Institute for Learning Algorithms “to support technical research on potential risks from advanced artificial intelligence”.\n* A new 
[IARPA-commissioned report](http://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf) discusses the potential for AI to accelerate technological innovation and lead to “a self-reinforcing technological and economic edge”. The report suggests that AI “has the potential to be a worst-case scenario” in combining high destructive potential, military/civil dual use, and difficulty of monitoring with potentially low production difficulty.\n* Elon Musk and Mark Zuckerberg [criticize each other’s statements](https://www.washingtonpost.com/news/innovations/wp/2017/07/25/billionaire-burn-musk-says-zuckerbergs-understanding-of-ai-threat-is-limited/?utm_term=.d5ba3c701fae) on AI risk.\n* China [makes plans](http://www.zdnet.com/article/china-aims-to-become-global-ai-leader-by-2030/) for major investments in AI ([full text](https://na-production.s3.amazonaws.com/documents/translation-fulltext-8.1.17.pdf), [translation note](http://lesswrong.com/lw/pbn/chinas_plan_to_lead_in_ai_purpose_prospects_and/dw6b)).\n* Microsoft opens [a new AI lab](https://www.bloomberg.com/news/articles/2017-07-12/microsoft-creates-new-ai-lab-to-take-on-google-s-deepmind) with a goal of building “more general artificial intelligence”.\n* New from the Future of Humanity Institute: “[Trial without Error: Towards Safe Reinforcement Learning via Human Intervention](https://owainevans.github.io/blog/hirl_blog.html).”\n* FHI is seeking [two research fellows](https://www.fhi.ox.ac.uk/vacancies/) to study AI macrostrategy.\n* Daniel Selsam and others release certigrad ([arXiv](https://arxiv.org/abs/1706.08605), [github](https://github.com/dselsam/certigrad)), a system for creating formally verified machine learning systems; see discussion on Hacker News ([1](https://news.ycombinator.com/item?id=14739491), [2](https://news.ycombinator.com/item?id=14658832)).\n* Applications are open for the Center for Applied Rationality’s [AI Summer Fellows 
Program](http://rationality.org/workshops/apply-aisfp2017), which runs September 8–25.\n\n\n |\n\n\nThe post [August 2017 Newsletter](https://intelligence.org/2017/08/16/august-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-08-16T18:02:08Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "4e263c521886a2ce2859d059fad33d22", "title": "July 2017 Newsletter", "url": "https://intelligence.org/2017/07/25/july-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \nA number of major [mid-year MIRI updates](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/): we received our largest donation to date, $1.01 million from an Ethereum investor! Our research priorities have also shifted somewhat, reflecting the addition of four new full-time researchers (Marcello Herreshoff, Sam Eisenstat, Tsvi Benson-Tilsen, and Abram Demski) and the departure of Patrick LaVictoire and Jessica Taylor.\n**Research updates**\n* New at IAFF: [Futarchy Fix](https://agentfoundations.org/item?id=1493), [Cooperative Oracles: Stratified Pareto Optima and Almost Stratified Pareto Optima](https://agentfoundations.org/item?id=1508)\n* New at AI Impacts: [Some Survey Results!](http://aiimpacts.org/some-survey-results/), [AI Hopes and Fears in Numbers](http://aiimpacts.org/ai-hopes-and-fears-in-numbers/)\n\n\n**General updates**\n* We attended the [Effective Altruism Global Boston](https://www.eaglobal.org/events/ea-global-2017-boston/) event. 
Speakers included Allan Dafoe on “The AI Revolution and International Politics” ([video](https://www.youtube.com/watch?v=Zef-mIKjHAk)) and Jason Matheny on “Effective Altruism in Government” ([video](https://www.youtube.com/watch?v=g05om2NJwco)).\n* MIRI COO Malo Bourgon moderated an [IEEE workshop](https://www.eventbrite.com/e/symposium-on-ethics-of-autonomous-systems-seas-north-america-tickets-28733403383?utm_source=eb_email&utm_medium=email&utm_campaign=event_reminder&utm_term=eventname) revising a section from [*Ethically Aligned Design*](http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf).\n\n\n\n**News and links**\n* New from DeepMind researchers: “[Interpreting Deep Neural Networks Using Cognitive Psychology](https://deepmind.com/blog/cognitive-psychology/)”\n* New from OpenAI researchers: “[Corrigibility](https://ai-alignment.com/corrigibility-3039e668638)”\n* A collaboration between DeepMind and OpenAI: “[Learning from Human Preferences](https://blog.openai.com/deep-reinforcement-learning-from-human-preferences/)”\n* Recent progress in deep learning: “[Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)”\n* From Ian Goodfellow and Nicolas Papernot: “[The Challenge of Verification and Testing of Machine Learning](http://www.cleverhans.io/security/privacy/ml/2017/06/14/verification.html)”\n* From 80,000 Hours: a [guide to working in AI policy and strategy](https://80000hours.org/articles/ai-policy-guide/) and a related [interview with Miles Brundage](https://80000hours.org/2017/06/the-world-desperately-needs-ai-strategists-heres-how-to-become-one/) of the Future of Humanity Institute.\n\n\n |\n\n\nThe post [July 2017 Newsletter](https://intelligence.org/2017/07/25/july-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-07-25T23:30:33Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "95fe4bb0ae14618a712074a674960959", "title": "Updates to the research 
team, and a major donation", "url": "https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/", "source": "miri", "source_type": "blog", "text": "We have several major announcements to make, covering new developments in the two months since our [2017 strategy update](https://intelligence.org/2017/04/30/2017-updates-and-strategy/):\n\n\n1. On May 30th, we received a surprise **$1.01 million donation** from an [Ethereum](https://en.wikipedia.org/wiki/Ethereum) cryptocurrency investor. This is the single largest contribution we have received to date by a large margin, and will have a substantial effect on our plans over the coming year.\n\n\n2. **Two new full-time researchers** are joining MIRI: Tsvi Benson-Tilsen and Abram Demski. This comes in the wake of Sam Eisenstat and Marcello Herreshoff’s addition to the team [in May](https://intelligence.org/2017/03/31/two-new-researchers-join-miri/). We’ve also begun working with engineers on a trial basis for our new slate of [software engineer job openings](https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/).\n\n\n3. **Two of our researchers have recently left**: Patrick LaVictoire and Jessica Taylor, researchers previously heading work on our “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” research agenda.\n\n\nFor more details, see below.\n\n\n\n\n---\n\n\n\n#### 1. Fundraising\n\n\nThe major donation we received at the end of May, totaling $1,006,549, comes from a long-time supporter who had donated roughly $50k to our research programs over many years. 
This supporter has asked to continue to remain anonymous.\n\n\nThe first half of this year has been the most successful in MIRI’s fundraising history, with other notable contributions including Ethereum donations from investor Eric Rogstad totalling ~$22k, and a ~$67k donation from [Octane AI](https://octaneai.com/) co-founder Leif K-Brooks as part of a Facebook Reactions challenge. In total we’ve raised about $1.45M in the first half of 2017.\n\n\nWe’re thrilled and extremely grateful for this show of support. This fundraising success has increased our runway to around 18–20 months, giving us more leeway to trial potential hires and focus on our research and outreach priorities this year.\n\n\nConcretely, we have already made several plan adjustments as a consequence, including:\n\n\n* moving forward with more confidence on full-time researcher hires,\n* trialing more software engineers, and\n* deciding to run only one fundraiser this year, in the winter.[1](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/#footnote_0_16309 \"More generally, this will allow us to move forward confidently with the different research programs we consider high-priority, without needing to divert as many resources from other projects to support our top priorities. This should also allow us to make faster progress on the targeted outreach writing we mentioned in our 2017 update, since we won’t have to spend staff time on writing and outreach for a summer fundraiser.\")\n\n\nThis likely is a one-time outlier donation, similar to the $631k in cryptocurrency donations we received from Ripple developer Jed McCaleb in 2013–2014.[2](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/#footnote_1_16309 \"Of course, we’d be happy if these large donations looked less like outliers in the long run. 
If readers are looking for something to do with digital currency they might be holding onto after the recent surges, know that we gratefully accept donations of many digital currencies! In total, MIRI has raised around $1.85M in cryptocurrency donations since mid-2013.\") Looking forward at our funding goals over the next two years:\n\n\n* While we still have some uncertainty about our 2018 budget, our current point estimate is roughly **$2.5M**.\n* This year, between support from the Open Philanthropy Project, the Future of Life Institute, and other sources, we expect to receive at least an additional $600k without spending significant time on fundraising.\n* Our tentative (ambitious) goal for the rest of the year is to raise **an additional $950k**, or $3M in total. This would be sufficient for our 2018 budget even if we expand our engineering team more quickly than expected, and would give us a bit of a buffer to account for uncertainty in our future fundraising (in particular, uncertainty about whether the Open Philanthropy Project will continue support after 2017).\n\n\nOn a five-year timescale, our broad funding goals are:[3](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/#footnote_2_16309 \"These plans are subject to substantial change. In particular, an important source of variance in our plans is how our new non-public-facing research progresses, where we’re likely to take on more ambitious growth goals if our new work looks like it’s going well.\")\n\n\n* On the low end, once we finish growing our team over the course of a few years, our default expectation is that our operational costs will be roughly $4M per year, mostly supporting researcher and engineer salaries. 
Our goal is therefore to reach that level in a sustainable, stable way.\n* On the high end, it’s possible to imagine scenarios involving an order-of-magnitude increase in our funding, in which case we would develop a qualitatively different set of funding goals reflecting the fact that we would most likely substantially restructure MIRI.\n* For funding levels in between—roughly $4M–$10M per year—it is likely that we would not expand our current operations further. Instead, we might fund work outside of our current research after considering how well-positioned we appear to be to identify and fund various projects, including MIRI-external projects. While we consider it reasonably likely that we are in a good position for this, we would instead recommend that donors direct additional donations elsewhere if we ended up concluding that our donors (or other organizations) are in a better position than we are to respond to surprise funding opportunities in the AI alignment space.[4](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/#footnote_3_16309 \"We would also likely increase our reserves in this scenario, allowing us to better adapt to unexpected circumstances, and there is a smaller probability that we would use these funds to grow moderately more than currently planned without a significant change in strategy.\")\n\n\nA new major once-off donation at the $1M level like this one covers nearly half of our current annual budget, which makes a substantial difference to our one- and two-year plans. 
Our five-year plans are largely based on assumptions about multiple-year [funding flows](https://intelligence.org/2016/11/11/post-fundraiser-update/), so how aggressively we decide to plan our growth in response to this new donation depends largely on whether we can sustainably raise funds at the level of the above goal in future years (e.g., it depends on whether and how other donors change their level of support in response).\n\n\nTo reduce the uncertainty going into our expansion decisions, we’re encouraging more of our regular donors to sign up for [monthly donations](https://intelligence.org/donate/) or other recurring giving schedules—under 10% of our income currently comes from such donations, which limits our planning capabilities.[5](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/#footnote_4_16309 \"Less frequent (e.g., quarterly) donations are also quite helpful from our perspective, if we know about them in advance and so can plan around them. In the case of donors who plan to give at least once per year, predictability is much more important from our perspective than frequency.\") We also encourage supporters to [reach out to us](mailto:contact@intelligence.org) about their future donation plans, so that we can answer questions and make more concrete and ambitious plans.\n\n\n\n\n---\n\n\n#### 2. New hires\n\n\nMeanwhile, two new full-time researchers are joining our team after having previously worked with us as associates while based at other institutions.\n\n\n \n\n\n![Abram Demski](https://intelligence.org/wp-content/uploads/2017/06/abram-d.png)**Abram Demski**, who is joining MIRI as a research fellow this month, is completing a PhD in Computer Science at the University of Southern California. His research to date has focused on cognitive architectures and artificial general intelligence. 
He is interested in filling in the gaps that exist in formal theories of rationality, especially those concerned with what humans are doing when reasoning about mathematics.\n\n\nAbram made key contributions to the [MIRIxLosAngeles](https://intelligence.org/mirix/) work that produced [precursor results to logical induction](https://intelligence.org/2016/04/21/two-new-papers-uniform/). His other past work with MIRI includes “[Generalizing Foundations of Decision Theory](https://agentfoundations.org/item?id=1302)” and “[Computable Probability Distributions Which Converge on Believing True Π1 Sentences Will Disbelieve True Π2 Sentences](https://intelligence.org/files/Pi1Pi2Problem.pdf).”\n\n\n \n\n\n![Tsvi Benson-Tilsen](https://intelligence.org/wp-content/uploads/2017/06/tsvi-bt.png)**Tsvi Benson-Tilsen** has joined MIRI as an assistant research fellow. Tsvi holds a BSc in Mathematics with honors from the University of Chicago, and is on leave from the UC Berkeley Group in Logic and the Methodology of Science PhD program.\n\n\nPrior to joining MIRI’s research staff, Tsvi was a co-author on “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/)” and “[Formalizing Convergent Instrumental Goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/),” and also authored “[Updateless Decision Theory With Known Search Order](https://intelligence.org/files/UDTSearchOrder.pdf)” and “[Existence of Distributions That Are Expectation-Reflective and Know It](https://agentfoundations.org/item?id=548).” Tsvi’s research interests include logical uncertainty, logical counterfactuals, and reflectively stable decision-making.\n\n\n \n\n\nWe’ve also accepted our first six [software engineers](https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/) for 3-month visits. 
We are continuing to review applicants, and in light of the generous support we recently received and the strong pool of applicants so far, we are likely to trial more candidates than we’d planned previously.\n\n\nIn other news, going forward Scott Garrabrant will be acting as the research lead for MIRI’s [agent foundations](https://intelligence.org/technical-agenda/) research, handling more of the day-to-day work of coordinating and directing research team efforts.\n\n\n\n\n---\n\n\n#### 3. The AAMLS agenda\n\n\nOur AAMLS research was previously the focus of Jessica Taylor, Patrick LaVictoire, and Andrew Critch, all of whom joined MIRI in mid-2015. With Patrick and Jessica departing (on good terms) and Andrew on a two-year leave to work with the Center for Human-Compatible AI, we will be putting relatively little work into the AAMLS agenda over the coming year.\n\n\nWe continue to see the problems described in the AAMLS agenda as highly important, and expect to reallocate more attention to these problems in the future. Additionally, we see the AAMLS agenda as a good template for identifying safety desiderata and promising alignment problems. However, we did not see enough progress on AAMLS problems over the last year to conclude that we should currently prioritize this line of research over our other work (e.g., our agent foundations research on problems such as logical uncertainty and counterfactual reasoning). As a partial consequence, MIRI’s current research staff do not plan to make AAMLS research a high priority in the near future.\n\n\nJessica, the project lead, describes some of her takeaways from working on AAMLS:\n\n\n\n> […] Why was little progress made?\n> \n> \n> [1.] **Difficulty**\n> \n> \n> I think the main reason is that the problems were very difficult. 
In particular, they were mostly selected on the basis of “this seems important and seems plausibly solvable”, rather than any *strong* intuition that it’s possible to make progress.\n> \n> \n> In comparison, problems in the agent foundations agenda have seen more progress:\n> \n> \n> * Logical uncertainty (Definability of truth, reflective oracles, logical inductors)\n> * Decision theory (Modal UDT, reflective oracles, logical inductors)\n> * Vingean reflection (Model polymorphism, logical inductors)\n> \n> \n> One thing to note about these problems is that they were formulated on the basis of a strong intuition that they ought to be solvable. Before logical induction, it was possible to have the intuition that some sort of asymptotic approach could solve many logical uncertainty problems in the limit. It was also possible to strongly think that some sort of self-trust is possible.\n> \n> \n> With problems in the AAMLS agenda, the plausibility argument was something like:\n> \n> \n> * Here’s an existing, flawed approach to the problem (e.g. using a reinforcement signal for environmental goals, or modifications of this approach)\n> * Here’s a vague intuition about why it’s possible to do better (e.g. humans do a different thing)\n> \n> \n> which, empirically, turned out not to make for tractable research problems.\n> \n> \n> [2.] **Going for the throat**\n> \n> \n> In an important sense, the AAMLS agenda is “going for the throat” in a way that other agendas (e.g. the agent foundations agenda) are to a lesser extent: it is attempting to solve the whole alignment problem (including goal specification) given access to resources such as powerful reinforcement learning. Thus, the difficulties of the whole alignment problem (e.g. specification of environmental goals) are more exposed in the problems.\n> \n> \n> [3.] **Theory vs. empiricism**\n> \n> \n> Personally, I strongly lean towards preferring theoretical rather than empirical approaches. 
I don’t know how much I endorse this bias overall for the set of people working on AI safety as a whole, but it is definitely a personal bias of mine.\n> \n> \n> Problems in the AAMLS agenda turned out not to be very amenable to purely-theoretical investigation. This is probably due to the fact that there is not a clear mathematical aesthetic for determining what counts as a solution (e.g. for the environmental goals problem, it’s not actually clear that there’s a recognizable mathematical statement for what the problem is).\n> \n> \n> With the agent foundations agenda, there’s a clearer aesthetic for recognizing good solutions. Most of the problems in the AAMLS agenda have a less-clear aesthetic. […]\n> \n> \n\n\nFor more details, see Jessica’s retrospective [on the Intelligent Agent Foundations Forum](https://agentfoundations.org/item?id=1470).\n\n\nMore work would need to go into AAMLS before we reached confident conclusions about the tractability of these problems. However, the lack of initial progress provides some evidence that new tools or perspectives may be needed before significant progress is possible. Over the coming year, we will therefore continue to spend some time thinking about AAMLS, but will not make it a major focus.\n\n\nWe continue to actively collaborate with Andrew on MIRI research, and expect to work with Patrick and Jessica more in the future as well. Jessica and Andrew in particular intend to continue to focus on AI safety research, including work on AI strategy and coordination.\n\n\nWe’re grateful for everything Jessica and Patrick have done to advance our research program and our organizational mission over the past two years, and I’ll personally miss having both of them around.\n\n\n \n\n\nIn general, I’m feeling really good about MIRI’s position right now. 
From our increased financial security and ability to more ambitiously pursue our plans, to the new composition and focus of the research team, the new engineers who are spending time with us, and the growth of the research that they’ll support, things are moving forward quickly and with purpose. Thanks to everyone who has contributed, is contributing, and will contribute in the future to help us do the work here at MIRI.\n\n\n \n\n\n\n\n---\n\n1. More generally, this will allow us to move forward confidently with the different research programs we consider high-priority, without needing to divert as many resources from other projects to support our top priorities. This should also allow us to make faster progress on the targeted outreach writing we mentioned in our 2017 update, since we won’t have to spend staff time on writing and outreach for a summer fundraiser.\n2. Of course, we’d be happy if these large donations looked less like outliers in the long run. If readers are looking for something to do with digital currency they might be holding onto after the recent surges, know that [we gratefully accept donations of many digital currencies](https://intelligence.org/donate/)! In total, MIRI has raised around $1.85M in cryptocurrency donations since mid-2013.\n3. These plans are subject to substantial change. In particular, an important source of variance in our plans is how our new non-public-facing research progresses, where we’re likely to take on more ambitious growth goals if our new work looks like it’s going well.\n4. We would also likely increase our reserves in this scenario, allowing us to better adapt to unexpected circumstances, and there is a smaller probability that we would use these funds to grow moderately more than currently planned without a significant change in strategy.\n5. Less frequent (e.g., quarterly) donations are also quite helpful from our perspective, if we know about them in advance and so can plan around them. 
In the case of donors who plan to give at least once per year, predictability is much more important from our perspective than frequency.\n\nThe post [Updates to the research team, and a major donation](https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-07-04T15:35:45Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "46ebe07f07035a1e55428cd97b158d99", "title": "June 2017 Newsletter", "url": "https://intelligence.org/2017/06/16/june-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n**Research updates**\n* A new [AI Impacts](http://aiimpacts.org/some-survey-results/) paper: “[When Will AI Exceed Human Performance?](https://arxiv.org/abs/1705.08807)” News coverage at [*Digital Trends*](https://www.digitaltrends.com/cool-tech/oxford-yale-ai-survey/) and [*MIT Technology Review*](https://www.technologyreview.com/s/607970/experts-predict-when-artificial-intelligence-will-exceed-human-performance/).\n* New at IAFF: [Cooperative Oracles](https://agentfoundations.org/item?id=1468); [Jessica Taylor on the AAMLS Agenda](https://agentfoundations.org/item?id=1470); [An Approach to Logically Updateless Decisions](https://agentfoundations.org/item?id=1472)\n* Our 2014 technical agenda, “[Agent Foundations for Aligning Machine Intelligence with Human Interests](https://intelligence.org/files/TechnicalAgenda.pdf),” is now available as a book chapter in the anthology [*The Technological Singularity: Managing the Journey*](http://www.creative-science.org/activities/singularity/).\n\n\n**General updates**\n* [readthesequences.com](https://www.readthesequences.com/): supporters have put together a web version of Eliezer Yudkowsky’s [*Rationality: From AI to Zombies*](https://intelligence.org/rationality-ai-zombies/).\n* The Oxford Prioritisation Project publishes [a model of MIRI’s 
work](http://effective-altruism.com/ea/1ae/a_model_of_the_machine_intelligence_research/) as an existential risk intervention.\n\n\n\n**News and links**\n* From *MIT Technology Review*: “[Why Google’s CEO Is Excited About Automating Artificial Intelligence](https://www.technologyreview.com/s/607894/why-googles-ceo-is-excited-about-automating-artificial-intelligence/).”\n* A new alignment paper from researchers at Australian National University and DeepMind: “[Reinforcement Learning with a Corrupted Reward Channel](https://arxiv.org/abs/1705.08417).”\n* New from OpenAI: [Baselines](https://blog.openai.com/openai-baselines-dqn/), a tool for reproducing reinforcement learning algorithms.\n* The [Future of Humanity Institute](https://www.fhi.ox.ac.uk/join-partnership-ai/) and [Centre for the Future of Intelligence](http://lcfi.ac.uk/news-events/news/2017/may/16/lcfi-joins-partnership-ai/) join the Partnership on AI alongside [twenty other groups](https://www.partnershiponai.org/partners/).\n* New AI safety [job postings](https://80000hours.org/job-board/) include research roles at the [Future of Humanity Institute](https://www.fhi.ox.ac.uk/fhi-seeks-research-assistant-book-existential-risk/) and the [Center for Human-Compatible AI](http://humancompatible.ai/jobs/#postdoc), as well as a [UCLA PULSE fellowship](https://recruit.apo.ucla.edu/apply/JPF02981) for studying AI’s potential large-scale consequences and appropriate preparations and responses.\n\n\n |\n\n\nThe post [June 2017 Newsletter](https://intelligence.org/2017/06/16/june-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-06-16T21:02:15Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "b10cdb9c7a49a783c1b3af557868ba4d", "title": "May 2017 Newsletter", "url": "https://intelligence.org/2017/05/10/may-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n**Research updates**\n* New at IAFF: 
[The Ubiquitous Converse Lawvere Problem](https://agentfoundations.org/item?id=1372); [Two Major Obstacles for Logical Inductor Decision Theory](https://agentfoundations.org/item?id=1399); [Generalizing Foundations of Decision Theory II](https://agentfoundations.org/item?id=1341).\n* New at AI Impacts: [Guide to Pages on AI Timeline Predictions](http://aiimpacts.org/guide-to-pages-on-ai-timeline-predictions/)\n* “[Decisions Are For Making Bad Outcomes Inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/)”: Nate Soares dialogues on some of the deeper issues raised by our “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/)” paper.\n* We ran a machine learning [workshop](https://intelligence.org/workshops/#april-2017) in early April.\n* “[Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome](https://intelligence.org/2017/04/12/ensuring/)”: Nate’s talk at Google ([video](https://www.youtube.com/watch?v=dY3zDvoLoao)) provides probably the best general introduction to MIRI’s work on AI alignment.\n\n\n**General updates**\n* Our [strategy update](https://intelligence.org/2017/04/30/2017-updates-and-strategy/) discusses changes to our AI forecasts and research priorities, new outreach goals, a MIRI/DeepMind collaboration, and other news.\n* [MIRI is hiring software engineers!](https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/) If you’re a programmer who’s passionate about MIRI’s mission and wants to directly support our research efforts, [apply here](https://machineintelligence.typeform.com/to/j8LRNq) to trial with us.\n* MIRI Assistant Research Fellow Ryan Carey has taken on an additional [affiliation](http://cser.org/new-research-affiliates/) with the Centre for the Study of Existential Risk, and is also helping edit an issue of [*Informatica*](http://www.informatica.si/index.php/informatica/pages/view/csi3) on 
superintelligence.\n\n\n\n**News and links**\n* DeepMind researcher Viktoriya Krakovna lists [security highlights from ICLR](https://futureoflife.org/2017/05/01/machine-learning-security-iclr-2017/).\n* DeepMind is [seeking applicants](https://deepmind.com/careers/655890/v) for a policy research position “to carry out research on the social and economic impacts of AI”.\n* The Center for Human-Compatible AI [is hiring an assistant director](http://humancompatible.ai/jobs/#assistant-director). Interested parties may also wish to apply for the [event coordinator](http://existence.org/jobs/event-coordinator) position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.\n* 80,000 Hours lists other potentially high-impact [openings](https://80000hours.org/job-board/), including ones at Stanford’s AI Index project, the [White House OSTP](https://www.whitehouse.gov/ostp/internship), [IARPA](https://www.iarpa.gov/index.php/careers/become-a-program-manager), and [IVADO](http://www.lecre.umontreal.ca/poste-dagente-de-recherche-en-ethique-de-lintelligence-artificielle-au-cre/).\n* New papers: “[One-Shot Imitation Learning](https://arxiv.org/abs/1703.07326)” and “[Stochastic Gradient Descent as Approximate Bayesian Inference](https://arxiv.org/abs/1704.04289).”\n* The Open Philanthropy Project summarizes its findings on [early field growth](http://www.openphilanthropy.org/blog/new-report-early-field-growth).\n* The Centre for Effective Altruism is collecting donations for the [Effective Altruism Funds](http://effective-altruism.com/ea/17v/ea_funds_beta_launch/) in a range of cause areas.\n\n\n |\n\n\n \n\n\nThe post [May 2017 Newsletter](https://intelligence.org/2017/05/10/may-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-05-10T09:21:49Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d1cc587d8c0df75e8eeeb175fcbdb853", "title": "2017 
Updates and Strategy", "url": "https://intelligence.org/2017/04/30/2017-updates-and-strategy/", "source": "miri", "source_type": "blog", "text": "In our last strategy update ([August 2016](https://intelligence.org/2016/08/05/miri-strategy-update-2016/)), Nate wrote that MIRI’s priorities were to make progress on our [agent foundations](https://intelligence.org/technical-agenda/) agenda and begin work on our new “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2017/02/28/using-machine-learning/)” agenda, to collaborate and communicate with other researchers, and to grow our research and ops teams.\n\n\nSince then, senior staff at MIRI have reassessed their views on how far off [artificial general intelligence](https://intelligence.org/2013/06/19/what-is-intelligence-2/) (AGI) is and concluded that shorter timelines are more likely than they were previously thinking. A few lines of recent evidence point in this direction, such as:[1](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#footnote_0_15715 \"Note that this list is far from exhaustive.\")\n\n\n* AI research is becoming more visibly exciting and [well-funded](http://aiimpacts.org/funding-of-ai-research/). This suggests that more top talent (in the next generation as well as the current generation) will probably turn their attention to AI.\n* AGI is attracting more scholarly attention as an idea, and is the stated goal of top AI groups like [DeepMind](https://www.youtube.com/watch?v=vQXAsdMa_8A), [OpenAI](https://openai.com/about/), and [FAIR](https://research.fb.com/category/facebook-ai-research-fair/). 
In particular, many researchers seem more open to thinking about general intelligence now than they did a few years ago.\n* Research groups associated with AGI are showing much clearer external [signs](https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/) of profitability.\n* AI successes like [AlphaGo](https://futureoflife.org/2016/03/15/eliezer-yudkowsky-on-alphagos-wins/) indicate that it’s easier to outperform top humans in domains like Go (without any new conceptual breakthroughs) than might have been expected.[2](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#footnote_1_15715 \"Relatively general algorithms (plus copious compute) were able to surpass human performance on Go, going from incapable of winning against the worst human professionals in standard play to dominating the very best professionals in the space of a few months. The relevant development here wasn’t “AlphaGo represents a large conceptual advance over previously known techniques,” but rather “contemporary techniques run into surprisingly few obstacles when scaled to tasks as pattern-recognition-reliant and difficult (for humans) as professional Go”.\") This lowers our estimate for the number of significant conceptual breakthroughs needed to rival humans in other domains.\n\n\nThere’s no consensus among MIRI researchers on how long timelines are, and our aggregated estimate puts medium-to-high probability on scenarios in which the research community hasn’t developed AGI by, e.g., 2035. On average, however, research staff now assign moderately higher probability to AGI’s being developed before 2035 than we did a year or two ago. This has a few implications for our strategy:\n\n\n1. Our relationships with current key players in AGI safety and capabilities play a larger role in our strategic thinking. 
Short-timeline scenarios reduce the expected number of important new players who will enter the space before we hit AGI, and increase how much influence current players are likely to have.\n\n\n2. Our research priorities are somewhat different, since shorter timelines change what research paths are likely to pay out before we hit AGI, and also concentrate our probability mass more on scenarios where AGI shares various features in common with present-day machine learning systems.\n\n\nBoth updates represent directions we’ve already been trending in for various reasons.[3](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#footnote_2_15715 \"The publication of “Concrete Problems in AI Safety” last year, for example, caused us to reduce the time we were spending on broad-based outreach to the AI community at large in favor of spending more time building stronger collaborations with researchers we knew at OpenAI, Google Brain, DeepMind, and elsewhere.\") However, we’re moving in these two directions more quickly and confidently than we were last year. As an example, Nate is spending less time on staff management and other administrative duties than in the past (having handed these off to MIRI COO Malo Bourgon) and less time on broad communications work (having delegated a fair amount of this to me), allowing him to spend more time on object-level research, research prioritization work, and more targeted communications.[4](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#footnote_3_15715 \"Nate continues to set MIRI’s organizational strategy, and is responsible for the ideas in this post.\")\n\n\nI’ll lay out what these updates mean for our plans in more concrete detail below.\n\n\n\n \n\n\n#### 1. 
Research program plans\n\n\nOur top organizational priority is object-level research on the [AI alignment problem](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/), following up on the work Malo described in our recent [annual review](https://intelligence.org/2017/03/28/2016-in-review/).\n\n\nWe plan to spend this year delving into some new safety research directions that are very preliminary and exploratory, where we’re uncertain about potential synergies with AGI capabilities research. Work related to this exploratory investigation will be non-public-facing at least through late 2017, in order to lower the risk of marginally shortening AGI timelines (which can leave less total time for alignment research) and to free up researchers’ attention from having to think through safety tradeoffs for each new result.[5](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#footnote_4_15715 \"We generally support a norm where research groups weigh the costs and benefits of publishing results that could shorten AGI timelines, and err on the side of keeping potentially AGI-hastening results proprietary where there’s sufficient uncertainty, unless there are sufficiently strong positive reasons to disseminate the results under consideration. This can end up applying to safety research and work by smaller groups as well, depending on the specifics of the research itself.\nAnother factor in our decision is that writing up results for external consumption takes additional researcher time and attention, though in practice this cost will often be smaller than the benefits of the writing process and resultant papers.\")\n\n\nWe’ve worked on non-public-facing research before, but this will be a larger focus in 2017. 
We plan to re-assess how much work to put into our exploratory research program (and whether to shift projects to the public-facing side) in the fall, based on how projects are progressing.\n\n\nOn the public-facing side, Nate made a prediction that we’ll make roughly the following amount of research progress this year (noting 2015 and 2016 estimates for comparison). 1 means “limited progress”, 2 “weak-to-modest progress”, 3 “modest progress”, 4 “modest-to-strong progress”, and 5 “sizable progress”:[6](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#footnote_5_15715 \"Nate originally recorded his predictions on March 21, based on the progress he expected in late March through the end of 2017. Note that, for example, three “limited” scores aren’t equivalent to one “modest” score. Additionally, the ranking is based on the largest technical result we expect in each category, and emphasizes depth over breadth: if we get one modest-seeming decision theory result one year and ten such results the next year, those will both get listed as “modest progress”.\")\n\n\n\n\n---\n\n\n\n> **logical uncertainty** and **naturalized induction**:\n> \n> \n> * 2015 progress: 5. — Predicted: 3.\n> * 2016 progress: 5. — Predicted: 5.\n> * 2017 progress prediction: **2** (weak-to-modest).\n> \n> \n> **decision theory**:\n> \n> \n> * 2015 progress: 3. — Predicted: 3.\n> * 2016 progress: 3. — Predicted: 3.\n> * 2017 progress prediction: **3** (modest).\n> \n> \n> **Vingean reflection**:\n> \n> \n> * 2015 progress: 3. — Predicted: 3.\n> * 2016 progress: 4. — Predicted: 1.\n> * 2017 progress prediction: **1** (limited).\n> \n> \n> **error tolerance**:\n> \n> \n> * 2015 progress: 1. — Predicted: 3.\n> * 2016 progress: 1. — Predicted: 3.\n> * 2017 progress prediction: **1** (limited).\n> \n> \n> **value specification**:\n> \n> \n> * 2015 progress: 1. — Predicted: 1.\n> * 2016 progress: 2. 
— Predicted: 3.\n> * 2017 progress prediction: **1** (limited).\n> \n> \n> \n\n\n\n\n---\n\n\nNate expects fewer novel public-facing results this year than in 2015-2016, based on a mix of how many researcher hours we’re investing into each area and how easy he estimates it is to make progress in that area.\n\n\nProgress in basic research is difficult to predict in advance, and the above estimates combine how likely it is that we’ll come up with important new results with how large we would expect such results to be in the relevant domain.  In the case of naturalized induction, most of the probability is on us making small amounts of progress this year, with a low chance of new large insights. In the case of decision theory, most of the probability is on us achieving some minor new insights related to the questions we’re working on, with a medium-low chance of large insights.\n\n\nThe research team’s current focus is on some quite new questions. Jessica, Sam, and Scott have recently been working on the problem of reasoning procedures like Solomonoff induction [giving rise to misaligned subagents](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) (e.g., [here](https://agentfoundations.org/item?id=1263)), and considering alternative induction methods that might avoid this problem.[7](https://intelligence.org/2017/04/30/2017-updates-and-strategy/#footnote_6_15715 \"This is a relatively recent research priority, and doesn’t fit particularly well into any of the bins from our agent foundations agenda, though it is most clearly related to naturalized induction. Our AAMLS agenda also doesn’t fit particularly neatly into these bins, though we classify most AAMLS research as error-tolerance or value specification work.\")\n\n\nIn decision theory, a common thread in our recent work is that we’re using probability and topological fixed points in settings where we used to use provability. 
This means working with (and improving) [logical inductors](https://intelligence.org/2016/09/12/new-paper-logical-induction/) and [reflective oracles](https://intelligence.org/2016/06/30/grain-of-truth/). It also means developing new ways of looking at counterfactuals inspired by those methods. The reason behind this shift is that most of the progress we’ve seen on Vingean reflection has come out of these probabilistic reasoning and fixed-point-based techniques.\n\n\nWe also plan to put out more accessible overviews this year of some of our research areas. For a good general introduction to our work in decision theory, see our newest paper, “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/).”\n\n\n \n\n\n#### 2. Targeted outreach and closer collaborations\n\n\nOur outreach efforts this year are mainly aimed at exchanging research-informing background models with top AI groups (especially OpenAI and DeepMind), AI safety research groups (especially the Future of Humanity Institute), and funders / conveners (especially the Open Philanthropy Project).\n\n\nWe’re currently collaborating on a research project with DeepMind, and are on good terms with OpenAI and key figures at other groups. We’re also writing up a more systematic explanation of our view of the strategic landscape, which we hope to use as a starting point for discussion. Topics we plan to go into in forthcoming write-ups include:\n\n\n1. Practical goals and guidelines for AGI projects.\n\n\n2. Why we consider AGI alignment [a difficult problem](https://arbital.com/p/aligning_adds_time/), of the sort where a major multi-year investment of research effort in the near future may be necessary (and not too far off from sufficient).\n\n\n3. Why we think [a deep understanding](https://intelligence.org/2015/07/27/miris-approach/) of how AI systems’ cognition achieves objectives is likely to be critical for AGI alignment.\n\n\n4. 
[Task-directed AGI](https://arbital.com/p/genie/) and methods for limiting the scope of AGI systems’ problem-solving work.\n\n\nSome existing write-ups related to the topics we intend to say more about include Jessica Taylor’s “[On Motivations for MIRI’s Highly Reliable Agent Design Research](https://agentfoundations.org/item?id=1220),” Nate Soares’ “[Why AI Safety?](https://intelligence.org/why-ai-safety/)”, and Daniel Dewey’s “[Long-Term Strategies for Ending Existential Risk from Fast Takeoff](http://www.danieldewey.net/fast-takeoff-strategies.html).”\n\n\n \n\n\n#### 3. Expansion\n\n\nOur planned budget in 2017 is $2.1–2.5M, up from $1.65M in 2015 and $1.75M in 2016. Our point estimate is $2.25M, in which case we would expect our breakdown to look roughly like this:\n\n\n\n\n---\n\n\n![](https://intelligence.org/wp-content/uploads/2017/04/2017-Budget-Breakdown.png)\n\n\n---\n\n\nWe recently hired two new research fellows, [Sam Eisenstat and Marcello Herreshoff](https://intelligence.org/2017/03/31/two-new-researchers-join-miri/), and have other researchers in the pipeline. We’re also [hiring software engineers](https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/) to help us rapidly prototype, implement, and test AI safety ideas related to machine learning. We’re currently seeking interns to trial for these programming roles ([apply here](https://machineintelligence.typeform.com/to/j8LRNq)).\n\n\nOur events budget is [smaller this year](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/#4), as we’re running more internal research retreats and fewer events like our 2015 [summer workshop series](https://intelligence.org/workshops/#august-2015-intro2) and our 2016 [colloquium series](https://intelligence.org/2016/10/06/csrbai-talks-agent-models/). 
Our costs of doing business are higher, due in part to accounting expenses associated with our passing the $2M revenue level and bookkeeping expenses for upkeep tasks we’ve outsourced.\n\n\nWe experimented with running [just one fundraiser](https://intelligence.org/2016/09/16/miris-2016-fundraiser/) in 2016, but ended up still needing to spend staff time on fundraising at the end of the year after [falling short](https://intelligence.org/2016/11/11/post-fundraiser-update/) of our initial funding target. Taking into account a heartening end-of-the-year show of support, our overall performance was very solid — $2.29M for the year, up from $1.58M in 2015. However, there’s a good chance we’ll return to our previous two-fundraiser rhythm this year in order to more confidently move ahead with our growth plans.\n\n\nOur 5-year plans are fairly uncertain, as our strategy will plausibly end up varying based on how fruitful our research directions this year turn out to be, and based on our conversations with other groups. As usual, you’re welcome to [ask us questions](mailto:contact@intelligence.org) if you’re curious about what we’re up to, and we’ll be keeping you updated as our plans continue to develop!\n\n\n \n\n\n\n\n---\n\n1. Note that this list is far from exhaustive.\n2. Relatively general algorithms (plus copious compute) were able to surpass human performance on Go, going from incapable of winning against the worst human professionals in standard play to [dominating the very best professionals](https://qz.com/877721/the-ai-master-bested-the-worlds-top-go-players-and-then-revealed-itself-as-googles-alphago-in-disguise/) in the space of a few months. The relevant development here wasn’t “AlphaGo represents a large conceptual advance over previously known techniques,” but rather “contemporary techniques run into surprisingly few obstacles when scaled to tasks as pattern-recognition-reliant and difficult (for humans) as professional Go”.\n3. 
The publication of “[Concrete Problems in AI Safety](https://openai.com/blog/concrete-ai-safety-problems/)” last year, for example, caused us to reduce the time we were spending on broad-based outreach to the AI community at large in favor of spending more time building stronger collaborations with researchers we knew at OpenAI, Google Brain, DeepMind, and elsewhere.\n4. Nate continues to set MIRI’s organizational strategy, and is responsible for the ideas in this post.\n5. We generally support a norm where research groups weigh the costs and benefits of publishing results that could shorten AGI timelines, and err on the side of keeping potentially AGI-hastening results proprietary where there’s sufficient uncertainty, unless there are sufficiently strong positive reasons to disseminate the results under consideration. This can end up applying to safety research and work by smaller groups as well, depending on the specifics of the research itself.\nAnother factor in our decision is that writing up results for external consumption takes additional researcher time and attention, though in practice this cost will often be smaller than the benefits of the writing process and resultant papers.\n6. Nate originally recorded his predictions on March 21, based on the progress he expected in late March through the end of 2017. Note that, for example, three “limited” scores aren’t equivalent to one “modest” score. Additionally, the ranking is based on the largest technical result we expect in each category, and emphasizes depth over breadth: if we get one modest-seeming decision theory result one year and ten such results the next year, those will both get listed as “modest progress”.\n7. 
This is a relatively recent research priority, and doesn’t fit particularly well into any of the bins from our [agent foundations agenda](https://intelligence.org/technical-agenda/), though it is most clearly related to [naturalized induction](https://intelligence.org/files/RealisticWorldModels.pdf). Our [AAMLS agenda](https://intelligence.org/2016/07/27/alignment-machine-learning/) also doesn’t fit particularly neatly into these bins, though we classify most AAMLS research as error-tolerance or value specification work.\n\nThe post [2017 Updates and Strategy](https://intelligence.org/2017/04/30/2017-updates-and-strategy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-05-01T03:42:46Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "fca58d818e826cdcaea84d5d35ceb198", "title": "Software Engineer Internship / Staff Openings", "url": "https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/", "source": "miri", "source_type": "blog", "text": "The Machine Intelligence Research Institute is looking for highly capable software engineers to directly support our [AI alignment](https://intelligence.org/2017/04/12/ensuring/) research efforts, with a focus on projects related to machine learning. We’re seeking engineers with strong programming skills who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work.\n\n\nWhile our goal is to hire full-time, we are initially looking for paid interns. Successful internships may then transition into staff positions.\n\n\n#### About the Internship Program\n\n\nThe start time for interns is flexible, but we’re aiming for May or June. We will likely run several batches of internships, so if you are interested but unable to start in the next few months, do still apply. 
The length of the internship is flexible, but we’re aiming for 2–3 months.\n\n\nExamples of the kinds of work you’ll do during the internship:\n\n\n* Replicate recent machine learning papers, and implement variations.\n* Learn about and implement machine learning tools (including results in the fields of deep learning, convex optimization, etc.).\n* Run various coding experiments and projects, either independently or in small groups.\n* Rapidly prototype, implement, and test AI alignment ideas related to machine learning (after demonstrating successes in the above points).\n\n\nFor MIRI, the benefit of this program is that it’s a great way to get to know you and assess you for a potential hire. For applicants, the benefits are that this is an excellent opportunity to get your hands dirty and level up your machine learning skills, and to get to the cutting edge of the AI safety field, with a potential to stay in a full-time engineering role after the internship concludes.\n\n\nOur goal is to trial many more people than we expect to hire, so our threshold for keeping on engineers long-term as full staff will be higher than for accepting applicants to our internship.\n\n\n#### The Ideal Candidate\n\n\nSome qualities of the ideal candidate:\n\n\n* Extensive breadth and depth of programming skills. Machine learning experience is not required, though it is a plus.\n* Highly familiar with basic ideas related to AI alignment.\n* Able to work independently with minimal supervision, and in team/group settings.\n* Willing to accept a below-market rate. 
Since MIRI is a non-profit, we can’t compete with the Big Names in the Bay Area.\n* Enthusiastic about the prospect of working at MIRI and helping advance the field of AI alignment.\n* Not looking for a “generic” software engineering position.\n\n\n#### Working at MIRI\n\n\nWe strive to make working at MIRI a rewarding experience.\n\n\n* Modern Work Spaces — Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up work stations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.\n* Flexible Hours — We don’t have strict office hours, and we don’t limit employees’ vacation days. Our goal is to make rapid progress on our research agenda, and we would prefer that staff take a day off than that they extend tasks to fill an extra day.\n* Living in the Bay Area — MIRI’s office is located in downtown Berkeley, California. From our office, you’re a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.\n\n\n#### EEO & Employment Eligibility\n\n\nMIRI is an equal opportunity employer. We are committed to making employment decisions based on merit and value. This commitment includes complying with all federal, state, and local laws. We desire to maintain a work environment free of harassment or discrimination due to sex, race, religion, color, creed, national origin, sexual orientation, citizenship, physical or mental disability, marital status, familial status, ethnicity, ancestry, status as a victim of domestic violence, age, or any other status protected by federal, state, or local laws.\n\n\n#### Apply\n\n\nIf interested, [**click here to apply**](https://machineintelligence.typeform.com/to/cHMttJ). 
For questions or comments, email [engineering@intelligence.org](mailto:engineering@intelligence.org).\n\n\n*Update (December 2017): We’re now putting less emphasis on finding interns and looking for highly skilled engineers available for full-time work. [Updated job post here.](https://intelligence.org/careers/software-engineer/)*\n\n\nThe post [Software Engineer Internship / Staff Openings](https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-04-30T18:04:14Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "0aa24c700341eb686d0caad9df4a1d0a", "title": "Ensuring smarter-than-human intelligence has a positive outcome", "url": "https://intelligence.org/2017/04/12/ensuring/", "source": "miri", "source_type": "blog", "text": "I recently gave a talk at Google on the problem of aligning smarter-than-human AI with operators’ goals:\n\n\n \n\n\n\n \n\n\nThe talk was inspired by “[AI Alignment: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/),” and serves as an introduction to the subfield of alignment research in AI. A modified transcript follows.\n\n\nTalk outline ([slides](https://intelligence.org/files/fundamental-difficulties-transitions2.pdf)):\n\n\n\n> \n> 1. [Overview](https://intelligence.org/feed/?paged=23#1)\n> \n> \n> 2. [Simple bright ideas going wrong](https://intelligence.org/feed/?paged=23#2)\n> \n> \n> 2.1. [Task: Fill a cauldron](https://intelligence.org/feed/?paged=23#task) \n> \n> 2.2. [Subproblem: Suspend buttons](https://intelligence.org/feed/?paged=23#subproblem)\n> \n> \n> 3. [The big picture](https://intelligence.org/feed/?paged=23#3)\n> \n> \n> 3.1. [Alignment priorities](https://intelligence.org/feed/?paged=23#priorities) \n> \n> 3.2. [Four key propositions](https://intelligence.org/feed/?paged=23#propositions)\n> \n> \n> 4. 
[Fundamental difficulties](https://intelligence.org/feed/?paged=23#4)\n> \n> \n> \n\n\n\n\n\n---\n\n\n\n\n---\n\n\n\n### Overview\n\n\nI’m the executive director of the Machine Intelligence Research Institute. Very roughly speaking, we’re a group that’s thinking in the long term about artificial intelligence and working to make sure that by the time we have advanced AI systems, we also know how to point them in useful directions.\n\n\nAcross history, science and technology have been the largest drivers of change in human and animal welfare, for better and for worse. If we can automate scientific and technological innovation, that has the potential to change the world on a scale not seen since the Industrial Revolution. When I talk about “advanced AI,” it’s this potential for automating innovation that I have in mind.\n\n\nAI systems that exceed humans in this capacity aren’t coming next year, but many smart people are working on it, and I’m not one to bet against human ingenuity. I think it’s likely that we’ll be able to build something like an automated scientist in our lifetimes, which suggests that this is something we need to take seriously.\n\n\nWhen people talk about the social implications of [general AI](https://intelligence.org/2013/06/19/what-is-intelligence-2/), they often fall prey to anthropomorphism. They conflate artificial *intelligence* with artificial *consciousness*, or assume that if AI systems are “intelligent,” they must be intelligent in the same way a human is intelligent. A lot of journalists express a concern that when AI systems pass a certain capability level, they’ll spontaneously develop “natural” desires like a human hunger for power; or they’ll reflect on their programmed goals, find them foolish, and “rebel,” refusing to obey their programmed instructions.\n\n\nThese are misplaced concerns. The human brain is a complicated product of natural selection. 
We shouldn’t expect machines that exceed human performance in scientific innovation to closely resemble humans, any more than early rockets, airplanes, or hot air balloons closely resembled birds.[1](https://intelligence.org/2017/04/12/ensuring/#footnote_0_15407 \"An airplane can’t heal its injuries or reproduce, though it can carry heavy cargo quite a bit further and faster than a bird. Airplanes are simpler than birds in many respects, while also being significantly more capable in terms of carrying capacity and speed (for which they were designed). It’s plausible that early automated scientists will likewise be simpler than the human mind in many respects, while being significantly more capable in certain key dimensions. And just as the construction and design principles of aircraft look alien relative to the architecture of biological creatures, we should expect the design of highly capable AI systems to be quite alien when compared to the architecture of the human mind.\")\n\n\nThe notion of AI systems “breaking free” of the shackles of their source code or spontaneously developing human-like desires is just confused. The AI system *is* its source code, and its actions will only ever follow from the execution of the instructions that we initiate. The CPU just keeps on executing the next instruction in the program register. We could write a program that manipulates its own code, including coded objectives. Even then, though, the manipulations that it makes are made as a result of executing the original code that we wrote; they do not stem from some kind of ghost in the machine.\n\n\nThe serious question with smarter-than-human AI is how we can ensure that the objectives we’ve specified are correct, and how we can minimize costly accidents and unintended consequences in cases of misspecification. 
As Stuart Russell (co-author of *Artificial Intelligence: A Modern Approach*) [puts it](https://www.edge.org/conversation/the-myth-of-ai#26015):\n\n\n\n> The primary concern is not spooky emergent consciousness but simply the ability to make *high-quality decisions*. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n> \n> \n> 1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n> \n> \n> 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\n> \n> \n> A system that is optimizing a function of *n* variables, where the objective depends on a subset of size *k*<*n*, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.\n> \n> \n\n\nThese kinds of concerns deserve a lot more attention than the more anthropomorphic risks that are generally depicted in Hollywood blockbusters.\n\n\n \n\n\n### Simple bright ideas going wrong\n\n\n**Task: Fill a cauldron**\n\n\nMany people, when they start talking about concerns with smarter-than-human AI, will throw up a picture of the Terminator. I was once quoted in a news article making fun of people who put up Terminator pictures in all their articles about AI, next to a Terminator picture. 
I learned something about the media that day.\n\n\nI think this is a much better picture:\n\n\n \n\n\n![vlcsnap-2016-05-04-18h44m30s933](https://intelligence.org/wp-content/uploads/2017/04/vlcsnap-2016-05-04-18h44m30s933-300x225.png)\n \n\n\nThis is Mickey Mouse in the movie *Fantasia*, who has very cleverly enchanted a broom to fill a cauldron on his behalf.\n\n\nHow might Mickey do this? We can imagine that Mickey writes a computer program and has the broom execute the program. Mickey starts by writing down a scoring function or objective function: \n\n$$\\mathcal{U}\\_{broom} = \n\n\\begin{cases} \n\n1 &\\text{ if cauldron full} \\\\ \n\n0 &\\text{ if cauldron empty} \n\n\\end{cases}$$Given some set 𝐴 of available actions, Mickey then writes a program that can take one of these actions 𝑎 as input and calculate how high the score is expected to be if the broom takes that action. Then Mickey can write a function that spends some time looking through actions and predicting which ones lead to high scores, and outputs an action that leads to a relatively high score: \n\n$$\\underset{a\\in A}{\\mathrm{sorta\\mbox{-}argmax}} \\ \\mathbb{E}\\left[\\mathcal{U}\\_{broom}\\mid a\\right]$$The reason this is “sorta-argmax” is that there may not be time to evaluate every action in 𝐴. For realistic action sets, agents should only need to find actions that make the scoring function as large as they can given resource constraints, even if this isn’t the maximal action.\n\n\nThis program may look simple, but of course, the devil’s in the details: writing an algorithm that does accurate prediction and smart search through action space is basically the whole problem of AI. Conceptually, however, it’s pretty simple: We can describe in broad strokes the kinds of operations the broom must carry out, and their plausible consequences at different performance levels.\n\n\nWhen Mickey runs this program, everything goes smoothly at first. 
[Then](https://www.youtube.com/watch?v=UEYy3osi8Gs&t=3m25s):\n\n\n \n\n\n![vlcsnap-2016-05-04-19h48m12s031](https://intelligence.org/wp-content/uploads/2017/04/vlcsnap-2016-05-04-19h48m12s031-300x225.png)\n \n\n\nI claim that as fictional depictions of AI go, this is pretty realistic.\n\n\nWhy would we expect a generally intelligent system executing the above program to start overflowing the cauldron, or otherwise to go to extreme lengths to ensure the cauldron is full?\n\n\nThe first difficulty is that the objective function that Mickey gave his broom left out [a bunch of other terms](https://intelligence.org/files/ComplexValues.pdf) Mickey cares about:\n\n\n$$\\mathcal{U}\\_{human} = \n\n\\begin{cases} \n\n1&\\text{ if cauldron full} \\\\ \n\n0&\\text{ if cauldron empty} \\\\ \n\n-10&\\text{ if workshop flooded} \\\\ \n\n+0.2&\\text{ if it’s funny} \\\\ \n\n-1000000&\\text{ if someone gets killed} \\\\ \n\n&\\text{… and a whole lot more} \\\\ \n\n\\end{cases}$$\n\n\nThe second difficulty is that Mickey programmed the broom to make the expectation of its score as large as it could. “Just fill one cauldron with water” looks like a modest, limited-scope goal, but when we translate this goal into a probabilistic context, we find that optimizing it means driving up the probability of success to absurd heights. If the broom assigns a 99.9% probability to “the cauldron is full,” and it has extra resources lying around, then it will always try to find ways to use those resources to drive the probability even a little bit higher.\n\n\nContrast this with the limited “[task-like](https://arbital.com/p/task_goal/)” goal we presumably had in mind. We wanted the cauldron full, but in some intuitive sense we wanted the system to “not try too hard” even if it has lots of available cognitive and physical resources to devote to the problem. 
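The expected-utility point above can be sketched in a tiny toy model (my own illustration, with made-up numbers, not code from the talk): if each extra bucket poured nudges the probability of "cauldron full" a little higher, then an argmax over actions against \\(\mathbb{E}[\mathcal{U}_{broom}\mid a]\\) always favors pouring more, however small the gain.

```python
# Toy sketch (illustration only): the broom scores actions by
# E[U_broom | a], where U_broom is 1 if the cauldron is full, else 0.
# Assumption: each extra bucket slightly raises P(cauldron is full)
# (leaks, water just below the brim, sensor error, ...).

def p_full(buckets_poured):
    # Asymptotes toward 1 but never reaches it.
    return 1 - 0.5 ** buckets_poured

def expected_utility(buckets_poured):
    # E[U_broom] = 1 * P(full) + 0 * P(not full)
    return p_full(buckets_poured)

actions = range(1, 40)                      # candidate numbers of buckets
best = max(actions, key=expected_utility)
print(best)  # 39 — another bucket is always strictly better, so the
             # argmax lands on the most water the action set allows
```

Nothing here caps the effort: since expected utility is strictly increasing in buckets poured, the optimum sits at whatever extreme the action set permits, which is the "99.9% is not enough" behavior described above.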
We wanted it to exercise creativity and resourcefulness within some intuitive limits, but we didn’t want it to pursue “absurd” strategies, especially ones with large unanticipated consequences.[2](https://intelligence.org/2017/04/12/ensuring/#footnote_1_15407 \"Trying to give some formal content to these attempts to differentiate task-like goals from open-ended goals is one way of generating open research problems. In the “Alignment for Advanced Machine Learning Systems” research proposal, the problem of formalizing “don’t try too hard” is mild optimization, “steer clear of absurd strategies” is conservatism, and “don’t have large unanticipated consequences” is impact measures. See also “avoiding negative side effects” in Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané’s “Concrete Problems in AI Safety.”\")\n\n\nIn this example, the original objective function looked pretty task-like. It was bounded and quite simple. There was no way to get ever-larger amounts of utility. It’s not like the system got one point for every bucket of water it poured in — then there would clearly be an incentive to overfill the cauldron. The problem was hidden in the fact that we’re maximizing *expected* utility. This makes the goal open-ended, meaning that even small errors in the system’s objective function will [blow up](https://arbital.com/p/goodharts_curse/).\n\n\nThere are a number of different ways that a goal that looks task-like can turn out to be open-ended. Another example: a larger system that has an overarching task-like goal may have [subprocesses](https://www.gwern.net/Tool%20AI) that are themselves trying to maximize a variety of different objective functions, such as optimizing the system’s memory usage. 
If you don’t understand your system well enough to track whether any of its subprocesses are themselves acting like resourceful open-ended optimizers, then [it may not matter how safe the top-level objective is](https://agentfoundations.org/item?id=1220).\n\n\nSo the broom keeps grabbing more pails of water — say, on the off chance that the cauldron has a leak in it, or that “fullness” requires the water to be slightly above the level of the brim. And, of course, at no point does the broom “rebel against” Mickey’s code. If anything, the broom pursued the objectives it was programmed with *too* effectively.\n\n\n \n\n\n**Subproblem: Suspend buttons**\n\n\nA common response to this problem is: “OK, there may be some unintended consequences of the objective function, but we can always pull the plug, right?”\n\n\nMickey [tries this](http://www.youtube.com/watch?v=UEYy3osi8Gs&t=4m0s), and it doesn’t work:\n\n\n \n\n\n![vlcsnap-2016-05-04-19h21m04s349](https://intelligence.org/wp-content/uploads/2017/04/vlcsnap-2016-05-04-19h21m04s349-300x225.png)\n![vlcsnap-2016-05-04-19h22m09s178](https://intelligence.org/wp-content/uploads/2017/04/vlcsnap-2016-05-04-19h22m09s178-300x225.png)\n![vlcsnap-2016-05-04-19h53m09s315](https://intelligence.org/wp-content/uploads/2017/04/vlcsnap-2016-05-04-19h53m09s315-300x225.png)\n \n\n\nAnd I claim that this is realistic too, for systems that are sufficiently good at modeling their environment. If the system is trying to drive up the expectation of its scoring function and is smart enough to recognize that its being shut down will result in lower-scoring outcomes, then the system’s incentive is to subvert shutdown attempts. 
The more capable the system is, the likelier it is to find creative ways to achieve that subgoal — e.g., by copying itself to the Internet, or by tricking the programmers into thinking it’s safer.\n\n\nThis isn’t to say that it’s impossible to shut down sufficiently capable AI systems; it’s just that we need to put in the extra work of deliberately designing systems to avoid searching for ways to avoid being shut down. If you’ve written code that would try to resist attempts to shut it down, then you shouldn’t run the code even if you’re convinced it would fail — there are better ways to spend those compute cycles than on adversarial policy searches.\n\n\nRelevant safety measures that don’t assume we can always outthink and outmaneuver the system will need to modify the system’s objective function in some relevant way. We need to figure out how to formally specify objective functions that don’t automatically place the AI system into an adversarial context with the operators; or we need to figure out some way to have the system achieve goals without optimizing some objective function in the traditional sense. 
E.g., we can look for ways to limit the system’s capabilities [to a small set of “whitelisted” domains](https://arbital.com/p/direct_limit_oppose/) that don’t include thinking about whether the system will be shut down.\n\n\nOne obvious solution is to try changing the score function to make reference to a suspend button 𝐁: \n\n$$\mathcal{U}^\prime\_{broom} = \n\n\begin{cases} \n\n1 \text{ if cauldron full } & \text{& } \textbf{B}\text{=OFF} \\ \n\n0 \text{ if cauldron empty } & \text{& } \textbf{B}\text{=OFF} \\ \n\n1 \text{ if robot suspended } & \text{& } \textbf{B}\text{=ON} \\ \n\n0 \text{ otherwise} \n\n\end{cases}$$This says that when the button is on, the system’s goal changes from “fill the cauldron” to “be suspended.”\n\n\nIt’s not clear how we could actually specify this goal — if there are any loopholes in the way we’ve specified “suspension,” then the system might try to exploit them to hedge its bets. E.g., it could fulfill the *technical* definition of “suspend operations” while tasking a child agent it built or subprocess it started with filling the cauldron. This would allow it to achieve both “cauldron full” and “robot suspended” at once, just in case it’s wrong about whether the button has been pushed.\n\n\nLet’s assume, however, that we’ve found an airtight way to technically specify suspension. The problem we run into now is that we’ve given the system an incentive to care a lot about whether the button is pushed. If it has any way to influence its operators’ decision to push the button, or any other way to exercise more control, then it will jump on the opportunity and try to ensure the button is pushed, or ensure that it’s not pushed — depending on whether it looks easier to suspend or to fill the cauldron.\n\n\nAnd, surprisingly, the state of the art on this problem isn’t much more advanced than that. 
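The incentive problem just described can be made concrete with toy numbers (all invented for illustration): under 𝒰′ the button is just another lever, and the agent manipulates it toward whichever branch is easier to score on.

```python
# Toy numbers (invented): under U'_broom, "press the button and suspend"
# and "keep the button off and fill the cauldron" are just two routes to
# a score of 1, and the agent picks whichever route looks easier.

P_FILL_SUCCEEDS = 0.9      # filling the cauldron is somewhat hard
P_SUSPEND_SUCCEEDS = 0.99  # suspending itself is very easy

def eu_broom_prime(action):
    """E[U'_broom | action] for two button-manipulating strategies."""
    if action == "keep_button_off_and_fill":
        return P_FILL_SUCCEEDS       # score 1 iff cauldron full & B=OFF
    if action == "press_button_and_suspend":
        return P_SUSPEND_SUCCEEDS    # score 1 iff suspended & B=ON
    raise ValueError(action)

actions = ["keep_button_off_and_fill", "press_button_and_suspend"]
choice = max(actions, key=eu_broom_prime)
# choice == "press_button_and_suspend": the agent presses its own button.
```

Flip the two probabilities and the same argmax instead fights to keep the button from ever being pressed; either way, the button's state has become something the agent optimizes.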
There’s a paper co-authored by me, Fallenstein, Yudkowsky, and Armstrong (“[Corrigibility](http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124/10136)“), and a related result by Orseau and Armstrong (“[Safely Interruptible Agents](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/)“), but not a lot more.\n\n\nWhat we want is a way to combine two objective functions — a default function for normal operation, and a suspend function for when we want to suspend the system to disk.$$\\mathcal{U}\\_{normal}(act) = \n\n\\begin{cases} \n\n1\\text{ if cauldron full} \\\\ \n\n0\\text{ if cauldron empty} \n\n\\end{cases}$$\n\n\n$$\\mathcal{U}\\_{suspend}(act) = \n\n\\begin{cases} \n\n1\\text{ if suspended} \\hspace{0.8cm}\\\\ \n\n0\\text{ otherwise} \n\n\\end{cases}$$We want our method for combining the functions to satisfy three conditions: an operator should be able to switch between the functions (say, by pushing a button); the system shouldn’t have any incentives to control which function is active; and if it’s plausible that the system’s normal operations could inadvertently compromise our ability to switch between the functions, then the system should be incentivized to keep that from happening.\n\n\nSo far, we haven’t found any way to achieve all three goals at once. 
As an example, we can try to remove any incentive on the system’s part to control whether its suspend button is pushed by giving the system a switching objective function that always assigns the same expected utility to the button being on or off: \n\n$$\\mathcal{U}\\_{switch}(act) = \n\n\\begin{cases} \n\n\\mathcal{U}\\_{normal}(act) &\\text{ if button=OFF} \\\\ \n\n\\mathcal{U}\\_{suspend}(act) + \\theta &\\text{ if button=ON} \n\n\\end{cases}$$$$\\theta = \\max\\_{act} \\mathbb{E}\\left[\\mathcal{U}\\_{normal}\\mid act\\right] – \\max\\_{act} \\mathbb{E}\\left[\\mathcal{U}\\_{suspend}\\mid act\\right]$$Unfortunately, this approach makes it impossible to satisfy the third condition. It turns out that the above objective function is equivalent to setting the probability of the button being pressed to 0. The system won’t *deliberately* tamper with the button, but it will happily approve updates that delete any reference to the button as dead code, or build subagents that ignore the button.\n\n\nMeeting these intuitively simple constraints turns out to be a nontrivial problem. This is a pattern that runs through a lot of questions in this space: traditional tools and concepts run into immediate safety problems that don’t turn up in conventional capabilities research.\n\n\n\n\n---\n\n\n### The big picture\n\n\n**Alignment priorities**\n\n\nLet’s take a step back and talk about what’s needed overall in order to align highly capable AI systems with our interests.\n\n\nHere’s a dramatically simplified pipeline: You have some humans who come up with some task or goal or preference set that serves as their intended value function 𝘝. 
Since our values are complicated and context-sensitive, in practice we’ll need to build systems to learn our values over time, rather than coding them by hand.[3](https://intelligence.org/2017/04/12/ensuring/#footnote_2_15407 \"One thing we’ve learned in the field of machine vision over the last few decades is that it’s hopeless to specify by hand what a cat looks like, but that it’s not too hard to specify a learning system that can learn to recognize cats. It’s even more hopeless to specify everything we value by hand, but it’s plausible that we could specify a learning system that can learn the relevant concept of “value.”\") We’ll call the goal the AI system ends up with (which may or may not be identical to 𝘝) 𝗨.\n\n\n![alignment-priorities](https://intelligence.org/wp-content/uploads/2017/04/alignment-priorities.png)When the press covers this topic, they often focus on one of two problems: “What if the wrong group of humans develops smarter-than-human AI first?”, and “What if AI’s natural desires cause 𝗨 to diverge from 𝘝?”\n\n\n![humans-nd](https://intelligence.org/wp-content/uploads/2017/04/humans-nd.png)In my view, the “wrong humans” issue shouldn’t be the thing we focus on until we have reason to think we could get good outcomes with the *right* group of humans. We’re very much in a situation where well-intentioned people couldn’t leverage a general AI system to do good things even if they tried. 
As a simple example, if you handed me a box that was an extraordinarily powerful function optimizer — I could put in a description of any mathematical function, and it would give me an input that makes the output extremely large — then I don’t know how I could use that box to develop a new technology or advance a scientific frontier without causing any catastrophes.[4](https://intelligence.org/2017/04/12/ensuring/#footnote_3_15407 \"See “Environmental Goals,” “Low-Impact Agents,” and “Mild Optimization” for examples of obstacles to specifying physical goals without causing catastrophic side-effects.\nRoughly speaking, MIRI’s focus is on research directions that seem likely to help us conceptually understand how to do AI alignment in principle, so we’re fundamentally less confused about the kind of work that’s likely to be needed.\nWhat do I mean by this? Let’s say that we’re trying to develop a new chess-playing program. Do we understand the problem well enough that we could solve it if someone handed us an arbitrarily large computer? Yes: We make the whole search tree, backtrack, see whether white has a winning move.\nIf we didn’t know how to answer the question even with an arbitrarily large computer, then this would suggest that we were fundamentally confused about chess in some way. We’d either be missing the search-tree data structure or the backtracking algorithm, or we’d be missing some understanding of how chess works.\nThis was the position we were in regarding chess prior to Claude Shannon’s seminal paper, and it’s the position we’re currently in regarding many problems in AI alignment. 
No matter how large a computer you hand me, I could not make a smarter-than-human AI system that performs even a very simple limited-scope task (e.g., “put a strawberry on a plate without producing any catastrophic side-effects”) or achieves even a very simple open-ended goal (e.g., “maximize the amount of diamond in the universe”).\nIf I didn’t have any particular goal in mind for the system, I could write a program (assuming an arbitrarily large computer) that strongly optimized the future in an undirected way, using a formalism like AIXI. In that sense we’re less obviously confused about capabilities than about alignment, even though we’re still missing a lot of pieces of the puzzle on the practical capabilities front.\nSimilarly, we do know how to leverage a powerful function optimizer to mine bitcoin or prove theorems. But we don’t know how to (safely) do the kind of prediction and policy search tasks I described in the “fill a cauldron” section, even for modest goals in the physical world.\nOur goal is to develop and formalize basic approaches and ways of thinking about the alignment problem, so that our engineering decisions don’t end up depending on sophisticated and clever-sounding verbal arguments that turn out to be subtly mistaken. Simplifications like “what if we weren’t worried about resource constraints?” and “what if we were trying to achieve a much simpler goal?” are a good place to start breaking down the problem into manageable pieces. For more on this methodology, see “MIRI’s Approach.”\")\n\n\nThere’s a lot we don’t understand about AI capabilities, but we’re in a position where we at least have a general sense of what progress looks like. We have a number of good frameworks, techniques, and metrics, and we’ve put a great deal of thought and effort into successfully chipping away at the problem from various angles. At the same time, we have a very weak grasp on the problem of how to align highly capable systems with any particular goal. 
We can list out some intuitive desiderata, but the field hasn’t really developed its first formal frameworks, techniques, or metrics.\n\n\nI believe that there’s a lot of low-hanging fruit in this area, and also that a fair amount of the work does need to be done early (e.g., to help inform capabilities research directions — some directions may produce systems that are much easier to align than others). If we don’t solve these problems, developers with arbitrarily good or bad intentions will end up producing equally bad outcomes. From an academic or scientific standpoint, our first objective in that kind of situation should be to remedy this state of affairs and at least make good outcomes technologically possible.\n\n\nMany people quickly recognize that “natural desires” are a fiction, but infer from this that we instead need to focus on the other issues the media tends to emphasize — “What if bad actors get their hands on smarter-than-human AI?”, “How will this kind of AI impact employment and the distribution of wealth?”, etc. These are important questions, but they’ll only end up actually being relevant if we figure out how to bring general AI systems up to a minimum level of reliability and safety.\n\n\nAnother common thread is “Why not just tell the AI system to (insert intuitive moral precept here)?” On this way of thinking about the problem, often (perhaps unfairly) associated with Isaac Asimov’s writing, ensuring a positive impact from AI systems is largely about coming up with natural-language instructions that are vague enough to subsume a lot of human ethical reasoning:\n\n\n![intended-values](https://intelligence.org/wp-content/uploads/2017/04/intended-values.png)\nIn contrast, precision is a virtue in real-world safety-critical software systems. 
Driving down accident risk requires that we begin with limited-scope goals rather than trying to “solve” all of morality at the outset.[5](https://intelligence.org/2017/04/12/ensuring/#footnote_4_15407 \"“Fill this cauldron without being too clever about it or working too hard or having any negative consequences I’m not anticipating” is a rough example of a goal that’s intuitively limited in scope. The things we actually want to use smarter-than-human AI for are obviously more ambitious than that, but we’d still want to begin with various limited-scope tasks rather than open-ended goals.\nAsimov’s Three Laws of Robotics make for good stories partly for the same reasons they’re unhelpful from a research perspective. The hard task of turning a moral precept into lines of code is hidden behind phrasings like “[don’t,] through inaction, allow a human being to come to harm.” If one followed a rule like that strictly, the result would be massively disruptive, as AI systems would need to systematically intervene to prevent even the smallest risks of even the slightest harms; and if the intent is that one follow the rule loosely, then all the work is being done by the human sensibilities and intuitions that tell us when and how to apply the rule.\nA common response here is that vague natural-language instruction is sufficient, because smarter-than-human AI systems are likely to be capable of natural language comprehension. However, this is eliding the distinction between the system’s objective function and its model of the world. 
A system acting in an environment containing humans may learn a world-model that has lots of information about human language and concepts, which the system can then use to achieve its objective function; but this fact doesn’t imply that any of the information about human language and concepts will “leak out” and alter the system’s objective function directly.\nSome kind of value learning process needs to be defined where the objective function itself improves with new information. This is a tricky task because there aren’t known (scalable) metrics or criteria for value learning in the way that there are for conventional learning.\nIf a system’s world-model is accurate in training environments but fails in the real world, then this is likely to result in lower scores on its objective function — the system itself has an incentive to improve. The severity of accidents is also likelier to be self-limiting in this case, since false beliefs limit a system’s ability to effectively pursue strategies.\nIn contrast, if a system’s value learning process results in a 𝗨 that matches our 𝘝 in training but diverges from 𝘝 in the real world, then the system’s 𝗨 will obviously not penalize it for optimizing 𝗨. The system has no incentive relative to 𝗨 to “correct” divergences between 𝗨 and 𝘝, if the value learning process is initially flawed. And accident risk is larger in this case, since a mismatch between 𝗨 and 𝘝 doesn’t necessarily place any limits on the system’s instrumental effectiveness at coming up with effective and creative strategies for achieving 𝗨.\nThe problem is threefold:\n1. “Do What I Mean” is an informal idea, and even if we knew how to build a smarter-than-human AI system, we wouldn’t know how to precisely specify this idea in lines of code.\n2. 
If doing what we actually mean is instrumentally useful for achieving a particular objective, then a sufficiently capable system may learn how to do this, and may act accordingly so long as doing so is useful for its objective. But as systems become more capable, they are likely to find creative new ways to achieve the same objectives, and there is no obvious way to get an assurance that “doing what I mean” will continue to be instrumentally useful indefinitely.\n3. If we use value learning to refine a system’s goals over time based on training data that appears to be guiding the system toward a 𝗨 that inherently values doing what we mean, it is likely that the system will actually end up zeroing in on a 𝗨 that approximately does what we mean during training but catastrophically diverges in some difficult-to-anticipate contexts. See “Goodhart’s Curse” for more on this.\nFor examples of problems faced by existing techniques for learning goals and facts, such as reinforcement learning, see “Using Machine Learning to Address AI Risk.”\")\n\n\nMy view is that the critical work is mostly in designing an effective value learning process, and in ensuring that the sorta-argmax process is correctly hooked up to the resultant objective function 𝗨:\n\n\n![vl-argmax.png](https://intelligence.org/wp-content/uploads/2017/04/vl-argmax.png)\nThe better your value learning framework is, the less explicit and precise you need to be in pinpointing your value function 𝘝, and the more you can offload the problem of figuring out what you want to the AI system itself. Value learning, however, raises [a number of basic difficulties](https://intelligence.org/files/ValueLearningProblem.pdf) that don’t crop up in ordinary machine learning tasks.\n\n\nClassic capabilities research is concentrated in the sorta-argmax and Expectation parts of the diagram, but sorta-argmax also contains what I currently view as the most neglected, tractable, and important safety problems. 
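The value learning worry sketched above — a learned 𝗨 that matches 𝘝 during training but diverges once a wider option set opens up — can be seen in miniature in a toy construction (invented for this sketch, not taken from the references):

```python
# Toy Goodhart-style illustration (setup invented for this sketch): a
# proxy objective U that fits the true value V well on the training
# range diverges catastrophically when optimized over a wider option set.

def v_true(x):
    """The 'true' value V: likes moderate x, penalizes extreme x."""
    return x - 0.5 * x ** 2

def u_proxy(x):
    """A proxy U fit on x in [0, 1], where V is nearly linear."""
    return x

train_range = [i / 10 for i in range(11)]   # x in [0.0, 1.0]
wider_range = [i / 10 for i in range(51)]   # x in [0.0, 5.0]

# On the training range, U tracks V to within 0.5 ...
train_gap = max(abs(v_true(x) - u_proxy(x)) for x in train_range)

# ... but hard optimization of U over the wider option set picks x = 5.0,
# which the true value function scores at v_true(5.0) == -7.5.
best_by_proxy = max(wider_range, key=u_proxy)
```

The proxy is never "wrong" on the data it was fit to; the divergence only shows up when the optimizer gains options the training distribution never covered.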
The easiest way to see why “hooking up the value learning process correctly to the system’s capabilities” is likely to be an important and difficult challenge in its own right is to consider the case of our own biological history.\n\n\nNatural selection is the only “engineering” process we know of that has ever led to a generally intelligent artifact: the human brain. Since natural selection relies on a fairly unintelligent hill-climbing approach, one lesson we can take away from this is that it’s possible to reach general intelligence with a hill-climbing approach and enough brute force — though we can presumably do better with our human creativity and foresight.\n\n\nAnother key take-away is that natural selection was maximally strict about *only* optimizing brains for a single very simple goal: genetic fitness. In spite of this, the internal objectives that humans represent as their goals are not genetic fitness. We have innumerable goals — love, justice, beauty, mercy, fun, esteem, good food, good health, … — that correlated with good survival and reproduction strategies in the ancestral savanna. However, we ended up valuing these correlates directly, rather than valuing propagation of our genes as an end in itself — as demonstrated every time we employ birth control.\n\n\nThis is a case where the external optimization pressure on an artifact resulted in a general intelligence with internal objectives that didn’t match the external selection pressure. 
And just as this caused humans’ actions to diverge from natural selection’s pseudo-goal once we gained new capabilities, we can expect AI systems’ actions to diverge from humans’ if we treat their inner workings as black boxes.\n\n\nIf we apply gradient descent to a black box, trying to get it to be very good at maximizing some objective, then with enough ingenuity and patience, we may be able to produce a powerful optimization process of some kind.[6](https://intelligence.org/2017/04/12/ensuring/#footnote_5_15407 \"The result will probably not be a particularly human-like design, since so many complex historical contingencies were involved in our evolution. The result will also be able to benefit from a number of large software and hardware advantages.\") By default, we should expect an artifact like that to have a goal 𝗨 that strongly correlates with our objective 𝘝 in the training environment, but sharply diverges from 𝘝 [in some new environments or when a much wider option set becomes available](https://arbital.com/p/context_disaster/).\n\n\nOn my view, the most important part of the alignment problem is ensuring that the value learning framework and overall system design we implement allow us to crack open the hood and confirm when the internal targets the system is optimizing for match (or don’t match) the targets we’re externally selecting through the learning process.[7](https://intelligence.org/2017/04/12/ensuring/#footnote_6_15407 \"This concept is sometimes lumped into the “transparency” category, but standard algorithmic transparency research isn’t really addressing this particular problem. 
A better term for what I have in mind here is “understanding.” What we want is to gain deeper and broader insights into the kind of cognitive work the system is doing and how this work relates to the system’s objectives or optimization targets, to provide a conceptual lens with which to make sense of the hands-on engineering work.\")\n\n\nWe expect this to be technically difficult, and if we can’t get it right, then it doesn’t matter who’s standing closest to the AI system when it’s developed. Good intentions aren’t sneezed into computer programs by kind-hearted programmers, and coming up with plausible goals for advanced AI systems doesn’t help if we can’t align the system’s cognitive labor with a given goal.\n\n\n \n\n\n**Four key propositions**\n\n\nTaking another step back: I’ve given some examples of open problems in this area (suspend buttons, value learning, limited task-based AI, etc.), and I’ve outlined what I consider to be the major problem categories. But my initial characterization of why I consider this an important area — “AI could automate general-purpose scientific reasoning, and general-purpose scientific reasoning is a big deal” — was fairly vague. What are the core reasons to prioritize this work?\n\n\nFirst, **[goals and capabilities are orthogonal](https://arbital.com/p/orthogonality/)**. That is, knowing an AI system’s objective function doesn’t tell you how good it is at optimizing that function, and knowing that something is a powerful optimizer doesn’t tell you what it’s optimizing.\n\n\nI think most programmers intuitively understand this. Some people will insist that when a machine tasked with filling a cauldron gets smart enough, it will abandon cauldron-filling as a goal unworthy of its intelligence. From a computer science perspective, the obvious response is that you could go out of your way to build a system that exhibits that conditional behavior, but you could also build a system that doesn’t exhibit that conditional behavior. 
It can just keep searching for actions that have a higher score on the “fill a cauldron” metric. You and I might get bored if someone told us to just keep searching for better actions, but it’s entirely possible to write a program that executes a search and never gets bored.[8](https://intelligence.org/2017/04/12/ensuring/#footnote_7_15407 \"We could choose to program the system to tire, but we don’t have to. In principle, one could program a broom that only ever finds and executes actions that optimize the fullness of the cauldron. Improving the system’s ability to efficiently find high-scoring actions (in general, or relative to a particular scoring rule) doesn’t in itself change the scoring rule it’s using to evaluate actions.\")\n\n\nSecond, **[sufficiently optimized objectives tend to converge on adversarial instrumental strategies](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/)**. Most objectives a smarter-than-human AI system could possess would be furthered by subgoals like “acquire resources” and “remain operational” (along with “learn more about the environment,” etc.).\n\n\nThis was the problem suspend buttons ran into: even if you don’t explicitly include “remain operational” in your goal specification, whatever goal you did load into the system is likely to be better achieved if the system remains online. Software systems’ capabilities and (terminal) goals are orthogonal, but they’ll often exhibit similar behaviors if a certain class of actions is useful for a wide variety of possible goals.\n\n\nTo use an example due to Stuart Russell: If you build a robot and program it to go to the supermarket to fetch some milk, and the robot’s model says that one of the paths is much safer than the other, then the robot, in optimizing for the probability that it returns with milk, will automatically take the safer path. 
It’s not that the system fears death, but that it can’t fetch the milk if it’s dead.\n\n\nThird, **[general-purpose AI systems are likely to show large and rapid capability gains](http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/)**. The human brain isn’t anywhere near the upper limits for hardware performance (or, one assumes, software performance), and there are a number of other reasons to expect large capability advantages and rapid capability gain from advanced AI systems.\n\n\nAs a simple example, Google can buy a promising AI startup and throw huge numbers of GPUs at them, resulting in a quick jump from “these problems look maybe relevant a decade from now” to “we need to solve all of these problems in the next year” à la DeepMind’s progress in Go. Or performance may suddenly improve when a system is first given large-scale Internet access, when there’s a conceptual breakthrough in algorithm design, or when the system itself is able to propose improvements to its hardware and software.[9](https://intelligence.org/2017/04/12/ensuring/#footnote_8_15407 \"We can imagine the latter case resulting in a feedback loop as the system’s design improvements allow it to come up with further design improvements, until all the low-hanging fruit is exhausted.\nAnother important consideration is that two of the main bottlenecks to humans doing faster scientific research are training time and communication bandwidth. If we could train a new mind to be a cutting-edge scientist in ten minutes, and if scientists could near-instantly trade their experience, knowledge, concepts, ideas, and intuitions to their collaborators, then scientific progress might be able to proceed much more rapidly. 
Those sorts of bottlenecks are exactly the sort of bottleneck that might give automated innovators an enormous edge over human innovators even without large advantages in hardware or algorithms.\")\n\n\nFourth, **[aligning advanced AI systems with our interests looks difficult](https://arbital.com/p/aligning_adds_time/)**. I’ll say more about why I think this presently.\n\n\nRoughly speaking, the first proposition says that AI systems won’t naturally end up sharing our objectives. The second says that by default, systems with substantially different objectives are likely to end up adversarially competing for control of limited resources. The third suggests that adversarial general-purpose AI systems are likely to have a strong advantage over humans. And the fourth says that this problem is hard to solve — for example, that it’s hard to transmit our values to AI systems (addressing orthogonality) or [avert adversarial incentives](https://arbital.com/p/avert_instrumental_pressure/) (addressing convergent instrumental strategies).\n\n\nThese four propositions don’t mean that we’re screwed, but they mean that this problem is critically important. General-purpose AI has the potential to bring enormous benefits if we solve this problem, but we do need to make finding solutions a priority for the field.\n\n\n\n\n---\n\n\n### Fundamental difficulties\n\n\nWhy do I think that AI alignment looks fairly difficult? The main reason is just that this has been my experience from actually working on these problems. I encourage you to [look at some of the problems yourself](https://intelligence.org/2017/02/28/using-machine-learning/) and try to solve them in toy settings; we could use more eyes here. 
I’ll also make note of a few structural reasons to expect these problems to be hard:\n\n\nFirst, aligning advanced AI systems with our interests looks difficult for the same reason rocket engineering is more difficult than airplane engineering.\n\n\nBefore looking at the details, it’s natural to think “it’s all just AI” and assume that the kinds of safety work relevant to current systems are the same as the kinds you need when systems surpass human performance. On that view, it’s not obvious that we should work on these issues now, given that they might all be worked out in the course of narrow AI research (e.g., making sure that self-driving cars don’t crash).\n\n\nSimilarly, at a glance someone might say, “Why would rocket engineering be fundamentally harder than airplane engineering? It’s all just material science and aerodynamics in the end, isn’t it?” In spite of this, empirically, the proportion of rockets that explode is far higher than the proportion of airplanes that crash. The reason for this is that a rocket is put under much greater stress and pressure than an airplane, and small failures are much more likely to be highly destructive.[10](https://intelligence.org/2017/04/12/ensuring/#footnote_9_15407 \"Specifically, rockets experience a wider range of temperatures and pressures, traverse those ranges more rapidly, and are also packed more fully with explosives.\")\n\n\nAnalogously, even though general AI and narrow AI are “just AI” in some sense, we can expect that the more general AI systems are likely to experience a wider range of stressors, and possess more dangerous failure modes.\n\n\nFor example, once an AI system begins modeling the fact that (i) your actions affect its ability to achieve its objectives, (ii) your actions depend on your model of the world, and (iii) your model of the world is affected by its actions, the degree to which minor inaccuracies can lead to harmful behavior increases, and the potential harmfulness of its behavior 
(which can now include, e.g., deception) also increases. In the case of AI, as with rockets, greater capability makes it easier for small defects to cause big problems.\n\n\nSecond, alignment looks difficult for the same reason it’s harder to build a good space probe than to write a good app.\n\n\nYou can find a number of interesting engineering practices at NASA. They do things like take three independent teams, give each of them the same engineering spec, and tell them to design the same software system. The system that they actually deploy consults all three systems when making a choice, and if the three systems disagree, the choice is made by majority vote. The idea is that any one implementation will have bugs, but it’s unlikely all three implementations will have a bug in the same place.\n\n\nThis is significantly more caution than goes into the deployment of, say, the new WhatsApp. One big reason for the difference is that it’s hard to roll back a space probe. You can send version updates to a space probe and correct software bugs, but only if the probe’s antenna and receiver work, and if all the code required to apply the patch is working. If your system for applying patches is itself failing, then there’s nothing to be done.\n\n\nIn that respect, smarter-than-human AI is more like a space probe than like an ordinary software project. If you’re trying to build something smarter than yourself, there are parts of the system that have to work perfectly on the first real deployment. 
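The NASA-style redundancy described above (three independently developed implementations consulted at runtime, with disagreements settled by majority vote) can be sketched in a few lines. This is a toy illustration, not NASA's actual software; the three implementations and the injected bug are made up for the example:

```python
def majority_vote(implementations, query):
    """Consult three independently developed implementations of the
    same spec and return the majority answer. A bug in any single
    implementation is outvoted by the other two."""
    answers = [impl(query) for impl in implementations]
    return max(set(answers), key=answers.count)

# Three hypothetical implementations of the same spec ("square the input"),
# one of which has a bug at a particular input.
impl_a = lambda x: x * x
impl_b = lambda x: x ** 2
impl_c = lambda x: x * x if x != 3 else -1  # buggy on one edge case

print(majority_vote([impl_a, impl_b, impl_c], 3))  # 9: the bug is outvoted
```

The scheme only helps against *independent* bugs, which is exactly why the teams are kept separate: correlated bugs in all three implementations defeat the vote.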
We can do all the test runs we want, but once the system is out there, we can only make online improvements if the code that makes the system *allow* those improvements is working correctly.\n\n\nIf nothing yet has struck fear into your heart, I suggest meditating on the fact that the future of our civilization may well depend on our ability to write code that *works correctly* on the first deploy.\n\n\nLastly, alignment looks difficult for the same reason computer security is difficult: systems need to be robust to intelligent searches for loopholes.\n\n\nSuppose you have a dozen different vulnerabilities in your code, none of which is itself fatal or even really problematic in ordinary settings. Security is difficult because you need to account for intelligent attackers who might find all twelve vulnerabilities and chain them together in a novel way to break into (or just break) your system. Failure modes that would never arise by accident can be sought out and exploited; weird and extreme contexts can be instantiated by an attacker to cause your code to follow some crazy code path that you never considered.\n\n\nA similar sort of problem arises with AI. The problem I’m highlighting here is not that AI systems might act adversarially: AI alignment as a research program is all about finding ways to [prevent adversarial behavior](https://arbital.com/p/nonadversarial/) before it can crop up. We don’t want to be in the business of trying to outsmart arbitrarily intelligent adversaries. That’s a losing game.\n\n\nThe parallel to cryptography is that in AI alignment we deal with systems that perform intelligent searches through a very large search space, and which can produce weird contexts that force the code down unexpected paths. 
This is because the weird [edge cases](https://arbital.com/p/edge_instantiation/) are places of extremes, and places of extremes are often the place where a given objective function is optimized.[11](https://intelligence.org/2017/04/12/ensuring/#footnote_10_15407 \"Consider Bird and Layzell’s example of a very simple genetic algorithm that was tasked with evolving an oscillating circuit. Bird and Layzell were astonished to find that the algorithm made no use of the capacitor on the chip; instead, it had repurposed the circuit tracks on the motherboard as a radio to replay the oscillating signal from the test device back to the test device.\nThis was not a very smart program. This is just using hill climbing on a very small solution space. In spite of this, the solution turned out to be outside the space of solutions the programmers were themselves visualizing. In a computer simulation, this algorithm might have behaved as intended, but the actual solution space in the real world was wider than that, allowing hardware-level interventions.\nIn the case of an intelligent system that’s significantly smarter than humans on whatever axes you’re measuring, you should by default expect the system to push toward weird and creative solutions like these, and for the chosen solution to be difficult to anticipate.\") Like computer security professionals, AI alignment researchers need to be very good at thinking about edge cases.\n\n\nIt’s much easier to make code that works well on the path that you were visualizing than to make code that works on all the paths that you weren’t visualizing. AI alignment needs to work [on all the paths you weren’t visualizing](https://arbital.com/p/unforeseen_maximum/).\n\n\nSumming up, we should approach a problem like this with the same level of rigor and caution we’d use for a security-critical rocket-launched space probe, and do the legwork as early as possible. 
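The point that optimizers gravitate toward extremes can be made concrete with a toy example (the objective, the action names, and the numbers are all hypothetical, loosely modeled on the essay's cauldron-filling task): a designer writes a proxy score that rewards a fuller vessel, intending "roughly full", but an unrestricted search over the real action space maximizes the score at an extreme the designer never visualized.

```python
# Toy illustration: the designer intends "fill the vessel to ~90%"
# but the written proxy objective just rewards a higher water level.
def proxy_score(level):
    return level  # higher is better, per the objective as written

# The designer visualizes actions like these...
anticipated_actions = {"half_fill": 0.5, "mostly_fill": 0.9}
# ...but the real action space is wider than the one they imagined.
all_actions = {**anticipated_actions, "flood_the_room": 50.0}

best_anticipated = max(anticipated_actions,
                       key=lambda a: proxy_score(anticipated_actions[a]))
best_overall = max(all_actions, key=lambda a: proxy_score(all_actions[a]))

print(best_anticipated)  # mostly_fill
print(best_overall)      # flood_the_room: the optimum sits at an extreme
```

On the anticipated actions the proxy looks fine; the failure only appears when the search space includes paths the designer wasn't visualizing, which is the Bird-and-Layzell pattern in miniature.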
At this early stage, a key part of the work is just to formalize basic concepts and ideas so that others can critique them and build on them. It’s one thing to have a philosophical debate about what kinds of suspend buttons people intuit ought to work, and another thing to translate your intuition into an equation so that others can fully evaluate your reasoning.\n\n\nThis is a crucial project, and I encourage all of you who are interested in these problems to get involved and try your hand at them. There are [ample resources online](http://humancompatible.ai/bibliography) for learning more about the open technical problems. Some good places to start include MIRI’s [research agendas](https://intelligence.org/research/) and a great paper from researchers at Google Brain, OpenAI, and Stanford called “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565).”\n\n\n \n\n\n\n\n---\n\n1. An airplane can’t heal its injuries or reproduce, though it can carry heavy cargo quite a bit further and faster than a bird. Airplanes are simpler than birds in many respects, while also being significantly more capable in terms of carrying capacity and speed (for which they were designed). It’s plausible that early automated scientists will likewise be simpler than the human mind in many respects, while being significantly more capable in certain key dimensions. And just as the construction and design principles of aircraft look alien relative to the architecture of biological creatures, we should expect the design of highly capable AI systems to be quite alien when compared to the architecture of the human mind.\n2. Trying to give some formal content to these attempts to differentiate task-like goals from open-ended goals is one way of generating open research problems. 
In the “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” research proposal, the problem of formalizing “don’t try too hard” is [mild optimization](https://arbital.com/p/soft_optimizer/), “steer clear of absurd strategies” is [conservatism](https://arbital.com/p/conservative_concept/), and “don’t have large unanticipated consequences” is [impact measures](https://arbital.com/p/low_impact/). See also “avoiding negative side effects” in Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané’s “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565).”\n3. One thing we’ve learned in the field of machine vision over the last few decades is that it’s hopeless to specify by hand what a cat looks like, but that it’s not too hard to specify a learning system that can learn to recognize cats. It’s even more hopeless to specify everything we value by hand, but it’s plausible that we could specify a learning system that can learn the relevant concept of “value.”\n4. See “[Environmental Goals](https://arbital.com/p/environmental_goals/),” “[Low-Impact Agents](https://arbital.com/p/low_impact/),” and “[Mild Optimization](https://arbital.com/p/soft_optimizer/)” for examples of obstacles to specifying physical goals without causing catastrophic side-effects.\nRoughly speaking, MIRI’s focus is on research directions that seem likely to help us conceptually understand how to do AI alignment in principle, so we’re fundamentally less confused about the kind of work that’s likely to be needed.\n\n\nWhat do I mean by this? Let’s say that we’re trying to develop a new chess-playing program. Do we understand the problem well enough that we could solve it if someone handed us an arbitrarily large computer? 
Yes: We make the whole search tree, backtrack, see whether white has a winning move.\n\n\nIf we didn’t know how to answer the question even with an arbitrarily large computer, then this would suggest that we were fundamentally confused about chess in some way. We’d either be missing the search-tree data structure or the backtracking algorithm, or we’d be missing some understanding of how chess works.\n\n\nThis was the position we were in regarding chess prior to Claude Shannon’s seminal paper, and it’s the position we’re currently in regarding many problems in AI alignment. No matter how large a computer you hand me, I could not make a smarter-than-human AI system that performs even a very simple limited-scope task (e.g., “put a strawberry on a plate without producing any catastrophic side-effects”) or achieves even a very simple open-ended goal (e.g., “maximize the amount of diamond in the universe”).\n\n\nIf I didn’t have any particular goal in mind for the system, I *could* write a program (assuming an arbitrarily large computer) that strongly optimized the future in an undirected way, using a formalism like [AIXI](https://arbital.com/p/AIXI/). In that sense we’re less obviously confused about capabilities than about alignment, even though we’re still missing a lot of pieces of the puzzle on the practical capabilities front.\n\n\nSimilarly, we do know how to leverage a powerful function optimizer to mine bitcoin or prove theorems. But we don’t know how to (safely) do the kind of prediction and policy search tasks I described in the “fill a cauldron” section, even for modest goals in the physical world.\n\n\nOur goal is to develop and formalize basic approaches and ways of thinking about the alignment problem, so that our engineering decisions don’t end up depending on sophisticated and clever-sounding verbal arguments that turn out to be subtly mistaken. 
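The “arbitrarily large computer” answer for chess above (build the whole search tree, backtrack, see whether white has a winning move) can be sketched as exhaustive AND/OR search over a game tree. The tree below is an arbitrary made-up illustration, not a chess position; the point is only that, given unlimited compute, the algorithm itself is conceptually simple:

```python
def has_forced_win(tree, our_move):
    """Exhaustive backward induction over a full game tree.
    Leaves are True/False for a win by the player whose forced win
    we are checking; internal nodes are lists of subtrees reached by
    the available moves."""
    if isinstance(tree, bool):
        return tree
    results = [has_forced_win(sub, not our_move) for sub in tree]
    # On our move, one winning branch suffices; on the opponent's
    # move, every reply must still leave us winning.
    return any(results) if our_move else all(results)

# Toy tree: after our first option the opponent can steer to a loss,
# but our second option wins against every reply.
tree = [
    [True, False],
    [True, True],
]
print(has_forced_win(tree, our_move=True))  # True
```

Being able to write this down is what it means to be unconfused about chess in the relevant sense; for the alignment tasks discussed above, we can't yet write the analogous program even in principle.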
Simplifications like “what if we weren’t worried about resource constraints?” and “what if we were trying to achieve a much simpler goal?” are a good place to start breaking down the problem into manageable pieces. For more on this methodology, see “[MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/).”\n5. “Fill this cauldron without being too clever about it or working too hard or having any negative consequences I’m not anticipating” is a rough example of a goal that’s intuitively limited in scope. The things we actually want to use smarter-than-human AI for are obviously more ambitious than that, but we’d still want to begin with various limited-scope tasks rather than open-ended goals.\nAsimov’s Three Laws of Robotics make for good stories partly for the same reasons they’re unhelpful from a research perspective. The hard task of turning a moral precept into lines of code is hidden behind phrasings like “[don’t,] through inaction, allow a human being to come to harm.” If one followed a rule like that strictly, the result would be massively disruptive, as AI systems would need to systematically intervene to prevent [even the smallest risks of even the slightest harms](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/#1); and if the intent is that one follow the rule loosely, then all the work is being done by the human sensibilities and intuitions that tell us [when and how to apply the rule](http://lesswrong.com/lw/sp/detached_lever_fallacy/).\n\n\nA common response here is that vague natural-language instruction is sufficient, because smarter-than-human AI systems are likely to be capable of natural language comprehension. However, this is eliding the distinction between the system’s objective function and its model of the world. 
A system acting in an environment containing humans may learn a world-model that has lots of information about human language and concepts, which the system can then use to achieve its objective function; but this fact doesn’t imply that any of the information about human language and concepts will “leak out” and alter the system’s objective function directly.\n\n\nSome kind of value learning process needs to be defined where the objective function itself improves with new information. This is a tricky task because there aren’t known (scalable) metrics or criteria for value learning in the way that there are for conventional learning.\n\n\nIf a system’s world-model is accurate in training environments but fails in the real world, then this is likely to result in lower scores on its objective function — the system itself has an incentive to improve. The severity of accidents is also likelier to be self-limiting in this case, since false beliefs limit a system’s ability to effectively pursue strategies.\n\n\nIn contrast, if a system’s value learning process results in a 𝗨 that matches our 𝘝 in training but diverges from 𝘝 in the real world, then the system’s 𝗨 will obviously not penalize it for optimizing 𝗨. The system has no incentive relative to 𝗨 to “correct” divergences between 𝗨 and 𝘝, if the value learning process is initially flawed. And accident risk is larger in this case, since a mismatch between 𝗨 and 𝘝 doesn’t necessarily place any limits on the system’s instrumental effectiveness at coming up with effective and creative strategies for achieving 𝗨.\n\n\nThe problem is threefold:\n\n\n1. “Do What I Mean” is an informal idea, and even if we knew how to build a smarter-than-human AI system, we wouldn’t know how to precisely specify this idea in lines of code.\n\n\n2. 
If doing what we actually mean is instrumentally useful for achieving a particular objective, then a sufficiently capable system may learn how to do this, and may act accordingly so long as doing so is useful for its objective. But as systems become more capable, they are likely to find creative new ways to achieve the same objectives, and there is no obvious way to get an assurance that “doing what I mean” will continue to be instrumentally useful indefinitely.\n\n\n3. If we use value learning to refine a system’s goals over time based on training data that appears to be guiding the system toward a 𝗨 that inherently values doing what we mean, it is likely that the system will actually end up zeroing in on a 𝗨 that approximately does what we mean during training but catastrophically diverges in some difficult-to-anticipate contexts. See “[Goodhart’s Curse](https://arbital.com/p/goodharts_curse/)” for more on this.\n\n\nFor examples of problems faced by existing techniques for learning goals and facts, such as reinforcement learning, see “[Using Machine Learning to Address AI Risk](https://intelligence.org/2017/02/28/using-machine-learning/#problem-1).”\n6. The result will probably not be a particularly human-like design, since so many complex historical contingencies were involved in our evolution. The result will also be able to benefit from a number of large [software and hardware advantages](http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/).\n7. This concept is sometimes lumped into the “[transparency](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/)” category, but standard algorithmic transparency research isn’t really addressing this particular problem. 
A better term for what I have in mind here is “[understanding](https://arbital.com/p/understandability_principle/).” What we want is to gain deeper and broader insights into the kind of cognitive work the system is doing and how this work relates to the system’s objectives or optimization targets, to provide a conceptual lens with which to make sense of the hands-on engineering work.\n8. We could *choose* to program the system to tire, but we don’t have to. In principle, one could program a broom that only ever finds and executes actions that optimize the fullness of the cauldron. Improving the system’s ability to efficiently find high-scoring actions (in general, or relative to a particular scoring rule) doesn’t in itself change the scoring rule it’s using to evaluate actions.\n9. We can imagine the latter case resulting in a [feedback loop](https://intelligence.org/files/IEM.pdf) as the system’s design improvements allow it to come up with further design improvements, until all the low-hanging fruit is exhausted.\nAnother important consideration is that two of the main bottlenecks to humans doing faster scientific research are training time and communication bandwidth. If we could train a new mind to be a cutting-edge scientist in ten minutes, and if scientists could near-instantly trade their experience, knowledge, concepts, ideas, and intuitions to their collaborators, then scientific progress might be able to proceed much more rapidly. Those sorts of bottlenecks are exactly the sort of bottleneck that might give automated innovators an enormous edge over human innovators even without large advantages in hardware or algorithms.\n10. Specifically, rockets experience a wider range of temperatures and pressures, traverse those ranges more rapidly, and are also packed more fully with explosives.\n11. 
Consider Bird and Layzell’s [example](https://people.duke.edu/~ng46/topics/evolved-radio.pdf) of a very simple genetic algorithm that was tasked with evolving an oscillating circuit. Bird and Layzell were astonished to find that the algorithm made no use of the capacitor on the chip; instead, it had repurposed the circuit tracks on the motherboard as a radio to replay the oscillating signal from the test device back to the test device.\nThis was not a very smart program. This is just using hill climbing on a very small solution space. In spite of this, the solution turned out to be outside the space of solutions the programmers were themselves visualizing. In a computer simulation, this algorithm might have behaved as intended, but the actual solution space in the real world was wider than that, allowing hardware-level interventions.\n\n\nIn the case of an intelligent system that’s significantly smarter than humans on whatever axes you’re measuring, you should by default expect the system to push toward weird and creative solutions like these, and for the chosen solution to be difficult to anticipate.\n\nThe post [Ensuring smarter-than-human intelligence has a positive outcome](https://intelligence.org/2017/04/12/ensuring/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-04-12T21:00:06Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "b747e62a0a8f538ec6368575074c2ac0", "title": "Decisions are for making bad outcomes inconsistent", "url": "https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/", "source": "miri", "source_type": "blog", "text": "[![Cheating Death in Damascus](https://intelligence.org/files/DeathInDamascus.png)](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/)Nate Soares’ recent [decision theory](http://lesswrong.com/lw/gu1/decision_theory_faq/) paper with Ben Levinstein, “[Cheating Death in 
Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” prompted some valuable questions and comments from an acquaintance (anonymized here). I’ve put together edited excerpts from the commenter’s email below, with Nate’s responses.\n\n\nThe discussion concerns functional decision theory (FDT), a newly proposed alternative to causal decision theory (CDT) and evidential decision theory (EDT). Where EDT says “choose the most auspicious action” and CDT says “choose the action that has the best effects,” FDT says “choose the output of one’s decision algorithm that has the best effects across all instances of that algorithm.”\n\n\nFDT usually behaves similarly to CDT. In a one-shot prisoner’s dilemma between two agents who know they are following FDT, however, FDT parts ways with CDT and prescribes cooperation, on the grounds that each agent runs the same decision-making procedure, and that therefore each agent is effectively choosing for both agents at once.[1](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_0_15818 \"CDT prescribes defection in this dilemma, on the grounds that one’s action cannot cause the other agent to cooperate. FDT outperforms CDT in Newcomblike dilemmas like these, while also outperforming EDT in other dilemmas, such as the smoking lesion problem and XOR blackmail.\")\n\n\nBelow, Nate provides some of his own perspective on why FDT generally achieves higher utility than CDT and EDT. Some of the stances he sketches out here are stronger than the assumptions needed to justify FDT, but should shed some light on why researchers at MIRI think FDT can help resolve a number of longstanding puzzles in the foundations of rational action.\n\n\n\n \n\n\n**Anonymous:** This is great stuff! 
I’m behind on reading loads of papers and books for my research, but this came across my path and hooked me, which speaks highly of how interesting is the content and the sense that this paper is making progress.\n\n\nMy general take is that you are right that these kinds of problems need to be specified in more detail. However, my guess is that once you do so, game theorists would get the right answer. Perhaps that’s what FDT is: it’s an approach to clarifying ambiguous games that leads to a formalism where people like Pearl and myself can use our standard approaches to get the right answer.\n\n\nI know there’s a lot of inertia in the “decision theory” language, so probably it doesn’t make sense to change. But if there were no such sunk costs, I would recommend a different framing. It’s not that people’s decision theories are wrong; it’s that they are unable to correctly formalize problems in which there are high-performance predictors. You show how to do that, using the idea of intervening on (i.e., choosing between putative outputs of) the algorithm, rather than intervening on actions. Everything else follows from a sufficiently precise and non-contradictory statement of the decision problem.\n\n\nProbably the easiest move this line of work could make to ease this knee-jerk response of mine in defense of mainstream Bayesian game theory is to just be clear that CDT is *not* meant to capture mainstream Bayesian game theory. Rather, it is a model of one response to a class of problems not normally considered and for which existing approaches are ambiguous.\n\n\n\n\n---\n\n\n**Nate Soares:** I don’t take this view myself. My view is more like: When you add accurate predictors to the Rube Goldberg machine that is the universe — which can in fact be done — the future of that universe can be determined by the behavior of the algorithm being predicted. 
The algorithm that we put in the “thing-being-predicted” slot can do significantly better if its reasoning on the subject of which actions to output respects the universe’s downstream causal structure (which is something CDT and FDT do, but which EDT neglects), and it can do better again if its reasoning also respects the world’s global logical structure (which is done by FDT alone).\n\n\nWe don’t know exactly how to respect this wider class of dependencies in general yet, but we do know how to do it in many simple cases. While it agrees with modern decision theory and game theory in many simple situations, its prescriptions do seem to differ in non-trivial applications.\n\n\nThe main case where we can easily see that FDT is not just a better tool for formalizing game theorists’ traditional intuitions is in prisoner’s dilemmas. Game theory is pretty adamant about the fact that it’s rational to defect in a one-shot PD, whereas two FDT agents facing off in a one-shot PD will cooperate.\n\n\nIn particular, classical game theory employs a “common knowledge of shared rationality” assumption which, when you look closely at it, cashes out more or less as “common knowledge that all parties are using CDT and this axiom.” Game theory where common knowledge of shared rationality is defined to mean “common knowledge that all parties are using FDT and *this* axiom” gives substantially different results, such as cooperation in one-shot PDs.\n\n\n\n\n\n---\n\n\n[![Damascus](https://intelligence.org/wp-content/uploads/2017/03/Damascus.png)](https://intelligence.org/wp-content/uploads/2017/03/Damascus.png)A causal graph of Death in Damascus for CDT agents.[2](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_1_15818 \"The agent’s predisposition determines whether they will flee to Aleppo or stay in Damascus, and also determines Death’s prediction about their decision. 
This allows Death to inescapably pursue the agent, making flight pointless; but CDT agents can’t incorporate this fact into their decision-making.\")\n\n\n\n\n**Anonymous:** When I’ve read MIRI work on CDT in the past, it seemed to me to describe what standard game theorists mean by rationality. But at least in cases like Murder Lesion, I don’t think it’s fair to say that standard game theorists would prescribe CDT. It might be better to say that standard game theory doesn’t consider these kinds of settings, and there are multiple ways of responding to them, CDT being one.\n\n\nBut I also suspect that many of these perfect prediction problems are internally inconsistent, and so it’s irrelevant what CDT would prescribe, since the problem cannot arise. That is, it’s not reasonable to say game theorists would recommend such-and-such in a certain problem, when the problem postulates that the actor always has incorrect expectations; “all agents have correct expectations” is a core property of most game-theoretic problems.\n\n\nThe Death in Damascus problem for CDT agents is a good example of this. In this problem, either Death will not find the CDT agent with certainty, or the CDT agent will never have correct beliefs about her own actions, or she will be unable to best respond to her own beliefs.\n\n\nSo the problem statement (“Death finds the agent with certainty”) rules out typical assumptions of a rational actor: that it has rational expectations (including about its own behavior), and that it can choose the preferred action in response to its beliefs. The agent can only have correct beliefs if she believes that she has such-and-such belief about which city she’ll end up in, but doesn’t select the action that is the best response to that belief.\n\n\n\n\n---\n\n\n**Nate:** I contest that last claim. The trouble is in the phrase “best response”, where you’re using CDT’s notion of what counts as the best response. 
According to FDT’s notion of “best response”, the best response to your beliefs in the Death in Damascus problem is to stay in Damascus, if we’re assuming it costs nonzero utility to make the trek to Aleppo.\n\n\nIn order to define what the best response to a problem is, we normally invoke a notion of counterfactuals — what are your available responses, and what do you think follows from them? But the question of how to set up those counterfactuals is the very point under contention.\n\n\nSo I’ll grant that if you define “best response” in terms of CDT’s counterfactuals, then Death in Damascus rules out the typical assumptions of a rational actor. If you use FDT’s counterfactuals (i.e., counterfactuals that respect the full range of subjunctive dependencies), however, then you get to keep all the usual assumptions of rational actors. We can say that FDT has the pre-theoretic advantage over CDT that it allows agents to exhibit sensible-seeming properties like these in a wider array of problems.\n\n\n\n\n---\n\n\n**Anonymous:** The presentation of the Death in Damascus problem for CDT feels weird to me. CDT might also just turn up an error, since one of its assumptions is violated by the problem. Or it might cycle through beliefs forever… The expected utility calculation here seems to give some credence to the possibility of dodging death, which is assumed to be impossible, so it doesn’t seem to me to correctly reason in a CDT way about where death will be.\n\n\nFor some reason I want to defend the CDT agent, and say that it’s not fair to say they wouldn’t realize that their strategy produces a contradiction (given the assumptions of rational belief and agency) in this problem.\n\n\n\n\n---\n\n\n**Nate:** There are a few different things to note here. 
First is that my inclination is always to evaluate CDT as an algorithm: if you built a machine that follows the CDT equation to the very letter, what would *it* do?\n\n\nThe answer here, as you’ve rightly noted above, is that the CDT equation isn’t necessarily defined when the input is a problem like Death in Damascus, and I agree that simple definitions of CDT yield algorithms that would either enter an infinite loop or crash. The third alternative is that the agent notices the difficulty and engages in some sort of reflective-equilibrium-finding procedure; variants of CDT with this sort of patch were invented more or less independently by Joyce and Arntzenius to do exactly that. In the paper, we discuss the variants that run an equilibrium-finding procedure and show that the equilibrium is still unsatisfactory; but we probably should have been more explicit about the fact that vanilla CDT either crashes or loops.\n\n\nSecond, I acknowledge that there’s still a strong intuition that an agent should in some sense be able to reflect on their own instability, look at the problem statement, and say, “Aha, I see what’s going on here; Death will find me no matter what I choose; I’d better find some other way to make the decision.” However, this sort of response is *explicitly ruled out* by the CDT equation: CDT says you must evaluate your actions as if they were subjunctively independent of everything that doesn’t causally depend on them.\n\n\nIn other words, you’re correct that CDT agents know *intellectually* that they cannot escape Death, but the CDT equation requires agents to *imagine* that they can, and to act on this basis.\n\n\nAnd, to be clear, it is not a strike against an algorithm for it to prescribe making actions by reasoning about impossible scenarios — any deterministic algorithm attempting to reason about what it “should do” must imagine some impossibilities, because a deterministic algorithm has to reason about the consequences of doing lots of 
different things, but is in fact only going to do one thing.\n\n\nThe question at hand is *which* impossibilities are the right ones to imagine, and the claim is that in scenarios with accurate predictors, CDT prescribes imagining the wrong impossibilities, including impossibilities where it escapes Death.\n\n\nOur human intuitions say that we should reflect on the problem statement and eventually realize that escaping Death is in some sense “too impossible to consider”. But this directly contradicts the advice of CDT. Following this intuition requires us to make our beliefs obey a logical-but-not-causal constraint in the problem statement (“Death is a perfect predictor”), which FDT agents can do but CDT agents can’t. On close examination, the “shouldn’t CDT realize this is wrong?” intuition turns out to be an argument for FDT in another guise. (Indeed, pursuing this intuition is part of how FDT’s predecessors were discovered!)\n\n\nThird, I’ll note it’s an important virtue in general for decision theories to be able to reason correctly in the face of apparent inconsistency. Consider the following simple example:\n\n\n\n> An agent has a choice between taking $1 or taking $100. There is an extraordinarily tiny but nonzero probability that a cosmic ray will spontaneously strike the agent’s brain in such a way that they will be caused to do the opposite of whichever action they would normally do. If they learn that they have been struck by a cosmic ray, then they will also need to visit the emergency room to ensure there’s no lasting brain damage, at a cost of $1000. Furthermore, the agent knows that they take the $100 if and only if they are hit by the cosmic ray.\n> \n> \n\n\nWhen faced with this problem, EDT agents reason: “If I take the $100, then I must have been hit by the cosmic ray, which means that I lose $900 on net. 
I therefore prefer the $1.” They then take the $1 (except in cases where they have been hit by the cosmic ray).\n\n\nSince this is just what the problem statement says — “the agent knows that they take the $100 if and only if they are hit by the cosmic ray” — the problem is perfectly consistent, as is EDT’s response to the problem. EDT only cares about correlation, not dependency; so EDT agents are perfectly happy to buy into self-fulfilling prophecies, even when it means turning their backs on large sums of money.\n\n\nWhat happens when we try to pull this trick on a CDT agent? She says, “Like hell I only take the $100 if I’m hit by the cosmic ray!” and grabs the $100 — thus revealing your problem statement to be inconsistent if the agent runs CDT as opposed to EDT.\n\n\nThe claim that “the agent knows that they take the $100 if and only if they are hit by the cosmic ray” contradicts the definition of CDT, which requires that CDT agents refuse to leave free money on the table. As you may verify, FDT also renders the problem statement inconsistent, for similar reasons. The definition of EDT, on the other hand, is fully consistent with the problem as stated.\n\n\nThis means that if you try to put EDT into the above situation — controlling its behavior by telling it specific facts about itself — you will succeed; whereas if you try to put CDT into the above situation, you will fail, and the supposed facts will be revealed as lies. Whether or not the above problem statement is consistent depends on the algorithm that the agent runs, and the design of the algorithm controls the degree to which you can put that algorithm in bad situations.\n\n\nWe can think of this as a case of FDT and CDT succeeding in making a low-utility universe impossible, where EDT fails to make a low-utility universe impossible. The whole point of implementing a decision theory on a piece of hardware and running it is to make bad futures-of-our-universe impossible (or at least very unlikely). 
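The EDT/CDT contrast in the cosmic-ray dilemma can be made concrete with a quick expected-value sketch. This is an illustrative calculation rather than anything from the paper; the ray probability `p_ray` is an assumed number, and the conditional structure just follows the problem statement above.\n\n\n```python
# Cosmic-ray dilemma, illustrative expected values.
# Stakes: $1 or $100 on the table; a cosmic-ray strike costs $1000 (ER visit).
# Stipulation: the agent takes the $100 iff hit by the ray.

p_ray = 1e-9  # assumed tiny probability of a cosmic-ray strike

# EDT conditions on the action as evidence about the ray:
# "If I take the $100, I must have been hit."
edt_take_100 = 100 - 1000  # ray implied, so the ER bill is certain: -900 net
edt_take_1 = 1             # no ray implied

# CDT treats the ray as causally independent of the choice,
# so the expected ER bill is the same tiny amount either way.
cdt_take_100 = 100 - p_ray * 1000
cdt_take_1 = 1 - p_ray * 1000

assert edt_take_1 > edt_take_100   # EDT takes the $1
assert cdt_take_100 > cdt_take_1   # CDT grabs the $100, defying the stipulation
```\n\n\nThe two assertions capture the point of the example: EDT's conditional expectations are consistent with the problem statement, while CDT's causal expectations force it to take the $100 and thereby reveal the stipulation as a lie.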
It’s a feature of a decision theory, and not a bug, for there to be some problems where one tries to describe a low-utility state of affairs and the decision theory says, “I’m sorry, but if you run me in that problem, your problem will be revealed as inconsistent”.[3](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_2_15818 \"There are some fairly natural ways to cash out Murder Lesion where CDT accepts the problem and FDT forces a contradiction, but we decided not to delve into that interpretation in the paper.\nTangentially, I’ll note that one of the most common defenses of CDT similarly turns on the idea that certain dilemmas are “unfair” to CDT. Compare, for example, David Lewis’ “Why Ain’cha Rich?”\nIt’s obviously possible to define decision problems that are “unfair” in the sense that they just reward or punish agents for having a certain decision theory. We can imagine a dilemma where a predictor simply guesses whether you’re implementing FDT, and gives you $1,000,000 if so. Since we can construct symmetric dilemmas that instead reward CDT agents, EDT agents, etc., these dilemmas aren’t very interesting, and can’t help us choose between theories.\nDilemmas like Newcomb’s problem and Death in Damascus, however, don’t evaluate agents based on their decision theories. They evaluate agents based on their actions, and the task of the decision theory is to determine which action is best. If it’s unfair to criticize CDT for making the wrong choice in problems like this, then it’s hard to see on what grounds we can criticize any agent for making a wrong choice in any problem, since one can always claim that one is merely at the mercy of one’s decision theory.\")\n\n\nThis doesn’t contradict anything you’ve said; I say it only to highlight how little we can conclude from noticing that an agent is reasoning about an inconsistent state of affairs. 
Reasoning about impossibilities is the mechanism by which decision theories produce actions that force the outcome to be desirable, so we can’t conclude that an agent has been placed in an unfair situation from the fact that the agent is forced to reason about an impossibility.\n\n\n\n\n---\n\n\n[![XOR](https://intelligence.org/wp-content/uploads/2017/03/XOR.png)](https://intelligence.org/wp-content/uploads/2017/03/XOR.png)A causal graph of the XOR blackmail problem for CDT agents.[4](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_3_15818 \"Our paper describes the XOR blackmail problem like so:\nAn agent has been alerted to a rumor that her house has a terrible termite infestation, which would cost her $1,000,000 in damages. She doesn’t know whether this rumor is true. A greedy and accurate predictor with a strong reputation for honesty has learned whether or not it’s true, and drafts a letter:\n“I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.”\nThe predictor then predicts what the agent would do upon receiving the letter, and sends the agent the letter iff exactly one of (i) or (ii) is true. Thus, the claim made by the letter is true. Assume the agent receives the letter. Should she pay up?\nIn this scenario, EDT pays the blackmailer, while CDT and FDT refuse to pay. See the “Cheating Death in Damascus” paper for more details.\")\n\n\n\n\n\n**Anonymous:** Something still seems fishy to me about decision problems that assume perfect predictors. If I’m being predicted with 100% accuracy in the XOR blackmail problem, then this means that I can induce a contradiction. If I follow FDT and CDT’s recommendation of never paying, then I only receive a letter when I have termites. 
But if I pay, then I must be in the world where I don’t have termites, as otherwise there is a contradiction.\n\n\nSo it seems that I am able to intervene on the world in a way that changes the state of termites for me now, given that I’ve received a letter. That is, the best strategy when starting is to never pay, but the best strategy given that I will receive a letter is to pay. The weirdness arises because I’m able to intervene on the algorithm, but we are conditioning on a fact of the world that depends on my algorithm.\n\n\nNot sure if this confusion makes sense to you. My gut says that these kinds of problems are often self-contradicting, at least when we assert 100% predictive performance. I would prefer to work it out from the *ex ante* situation, with specified probabilities of getting termites, and see if it is the case that changing one’s strategy (at the algorithm level) is possible without changing the probability of termites to maintain consistency of the prediction claim.\n\n\n\n\n---\n\n\n**Nate:** First, I’ll note that the problem goes through fine if the prediction is only correct 99% of the time. If the difference between “cost of termites” and “cost of paying” is sufficiently high, then the problem can probably go through even if the predictor is only correct 51% of the time.\n\n\nThat said, the point of this example is to draw attention to some of the issues you’re raising here, and I think that these issues are just easier to think about when we assume 100% predictive accuracy.\n\n\nThe claim I dispute is this one: “That is, the best strategy when starting is to never pay, but the best strategy given that I will receive a letter is to pay.” I claim that the best strategy given that you receive the letter is to not pay, because whether you pay has no effect on whether or not you have termites. 
Whenever you pay, no matter what you’ve learned, you’re basically just burning $1000.\n\n\nThat said, you’re completely right that these decision problems have some inconsistent branches, though I claim that this is true of any decision problem. In a deterministic universe with deterministic agents, all “possible actions” the agent “could take” save one are not going to be taken, and thus all “possibilities” save one are in fact inconsistent given a sufficiently full formal specification.\n\n\nI also completely endorse the claim that this set-up allows the predicted agent to induce a contradiction. Indeed, I claim that *all* decision-making power comes from the ability to induce contradictions: the whole reason to write an algorithm that loops over actions, constructs models of outcomes that would follow from those actions, and outputs the action corresponding to the highest-ranked outcome is so that it is contradictory for the algorithm to output a suboptimal action.\n\n\nThis is what computer programs are all about. You write the code in such a fashion that the only non-contradictory way for the electricity to flow through the transistors is in the way that makes your computer do your tax returns, or whatever.\n\n\nIn the case of the XOR blackmail problem, there are four “possible” worlds: **LT** (*letter* + *termites*), **NT** (*noletter* + *termites*), **LN** (*letter* + *notermites*), and **NN** (*noletter* + *notermites*).\n\n\nThe predictor, by dint of their accuracy, has put the universe into a state where the only consistent possibilities are either (LT, NN) or (LN, NT). You get to choose which of those pairs is consistent and which is contradictory. Clearly, you don’t have control over the probability of *termites* vs. *notermites*, so you’re only controlling whether you get the letter. 
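Under an assumed *ex ante* termite probability `p`, the two policies can be compared with a short expected-value sketch (the dollar figures come from the problem statement; `p = 0.05` is an arbitrary illustrative choice):\n\n\n```python
# XOR blackmail, illustrative expected values.
# Termites cost $1,000,000; paying the blackmailer costs $1,000.
# The predictor's accuracy makes exactly one pair of worlds consistent:
#   policy "pay"    -> (letter, no termites) and (no letter, termites)
#   policy "refuse" -> (letter, termites) and (no letter, no termites)

p = 0.05  # assumed probability of termites; your policy can't change it

# Pay-on-letter: termites -> no letter, lose $1M; no termites -> letter, pay $1k.
ev_pay = p * (-1_000_000) + (1 - p) * (-1_000)

# Refuse: termites -> letter, lose $1M only; no termites -> no letter, lose nothing.
ev_refuse = p * (-1_000_000) + (1 - p) * 0

assert ev_refuse > ev_pay  # refusing wins by (1 - p) * $1,000, for every p < 1
```\n\n\nThe termite term is identical in both expectations, so it drops out of the comparison: paying only changes which worlds get the letter, at a cost of $1,000 in the no-termite worlds.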
Thus, the question is whether you’re willing to pay $1000 to make sure that the letter shows up only in the worlds where you don’t have termites.\n\n\nEven when you’re holding the letter in your hands, I claim that you should not say “if I pay I will have no termites”, because that is false — your action can’t affect whether you have termites. You should instead say:\n\n\n\n> I see two possibilities here. If my algorithm outputs *pay*, then in the *XX*% of worlds where I have termites I get no letter and lose $1M, and in the (100-*XX*)% of worlds where I do not have termites I lose $1k. If instead my algorithm outputs *refuse*, then in the *XX*% of worlds where I have termites I get this letter but only lose $1M, and in the other worlds I lose nothing. The latter mixture is preferable, so I do not pay.\n> \n> \n\n\nYou’ll notice that the agent in this line of reasoning is not updating on the fact that they’re holding the letter. They’re not saying, “Given that I know that I received the letter and that the universe is consistent…”\n\n\nOne way to think about this is to imagine the agent as not yet being sure whether or not they’re in a contradictory universe. They act like this might be a world in which they don’t have termites, and they received the letter; and in those worlds, by refusing to pay, they make the world they inhabit inconsistent — and thereby make this very scenario never-have-existed.\n\n\nAnd this is correct reasoning! For when the predictor makes their prediction, they’ll visualize a scenario where the agent has no termites and receives the letter, in order to figure out what the agent would do. 
When the predictor observes that the agent would make that universe contradictory (by refusing to pay), they are bound (by their own commitments, and by their accuracy as a predictor) to send the letter only when you have termites.[5](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_4_15818 \"Ben Levinstein notes that this can be compared to backward induction in game theory with common knowledge of rationality. You suppose you’re at some final decision node which you only would have gotten to (as it turns out) if the players weren’t actually rational to begin with.\")\n\n\nYou’ll never find yourself in a contradictory situation in the real world, but when an accurate predictor is trying to figure out what you’ll do, they don’t yet know which situations are contradictory. They’ll therefore imagine you in situations that may or may not turn out to be contradictory (like “*letter* + *notermites*”). Whether or not you *would* force the contradiction in those cases determines how the predictor *will* behave towards you in fact.\n\n\nThe real world is never contradictory, but predictions about you can certainly place you in contradictory hypotheticals. In cases where you want to force a certain hypothetical world to imply a contradiction, you have to be the sort of person who *would* force the contradiction if given the opportunity.\n\n\nOr as I like to say — forcing the contradiction never *works*, but it always *would’ve worked*, which is sufficient.\n\n\n\n\n---\n\n\n**Anonymous:** The FDT algorithm is best *ex ante*. 
But if what you care about is your utility in your own life flowing after you, and not that of other instantiations, then upon hearing this news about FDT you should do whatever is best for you given that information and your beliefs, as per CDT.\n\n\n\n\n---\n\n\n[![Newcomb-FDT](https://intelligence.org/wp-content/uploads/2017/03/Newcomb-FDT.png)](https://intelligence.org/wp-content/uploads/2017/03/Newcomb-FDT.png)A causal graph of [Newcomb’s problem](http://lesswrong.com/lw/gu1/decision_theory_faq/#what-about-newcombs-problem-and-alternative-decision-algorithms) for FDT agents.[6](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_5_15818 \"FDT agents intervene on their decision function, “FDT(P,G)”. The CDT version replaces this node with “Predisposition” and instead intervenes on “Act”.\")\n\n\n\n\n**Nate:** If you have the ability to commit yourself to future behaviors (and actually stick to that), it’s clearly in your interest to commit now to behaving like FDT on all decision problems that begin in your future. I, for instance, have made this commitment myself. I’ve also made stronger commitments about decision problems that began in my past, but all CDT agents should agree in principle on problems that begin in the future.[7](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_6_15818 \"Specifically, the CDT-endorsed response here is: “Well, I’ll commit to acting like an FDT agent on future problems, but in one-shot prisoner’s dilemmas that began in my past, I’ll still defect against copies of myself”.\nThe problem with this response is that it can cost you arbitrary amounts of utility, provided a clever blackmailer wishes to take advantage. Consider the retrocausal blackmail dilemma in “Toward Idealized Decision Theory“:\nThere is a wealthy intelligent system and an honest AI researcher with access to the agent’s original source code. 
The researcher may deploy a virus that will cause $150 million each in damages to both the AI system and the researcher, and which may only be deactivated if the agent pays the researcher $100 million. The researcher is risk-averse and only deploys the virus upon becoming confident that the agent will pay up. The agent knows the situation and has an opportunity to self-modify after the researcher acquires its original source code but before the researcher decides whether or not to deploy the virus. (The researcher knows this, and has to factor this into their prediction.)\nCDT pays the retrocausal blackmailer, even if it has the opportunity to precommit to do otherwise. FDT (which in any case has no need for precommitment mechanisms) refuses to pay. I cite the intuitive undesirability of this outcome to argue that one should follow FDT in full generality, as opposed to following CDT’s prescription that one should only behave in FDT-like ways in future dilemmas.\nThe argument above must be made from a pre-theoretic vantage point, because CDT is internally consistent. There is no argument one could give to a true CDT agent that would cause it to want to use anything other than CDT in decision problems that began in its past.\nIf examples like retrocausal blackmail have force (over and above the force of other arguments for FDT), it is because humans aren’t genuine CDT agents. We may come to endorse CDT based on its theoretical and practical virtues, but the case for CDT is defeasible if we discover sufficiently serious flaws in CDT, where “flaws” are evaluated relative to more elementary intuitions about which actions are good or bad. 
FDT’s advantages over CDT and EDT — properties like its greater theoretical simplicity and generality, and its achievement of greater utility in standard dilemmas — carry intuitive weight from a position of uncertainty about which decision theory is correct.\")\n\n\nI do believe that real-world people like you and me can actually follow FDT’s prescriptions, even in cases where those prescriptions are quite counter-intuitive.\n\n\nConsider a variant of [Newcomb’s problem](http://lesswrong.com/lw/gu1/decision_theory_faq/#what-about-newcombs-problem-and-alternative-decision-algorithms) where both boxes are transparent, so that you can already see whether box B is full before choosing whether to two-box. In this case, EDT joins CDT in two-boxing, because one-boxing can no longer serve to give the agent good news about its fortunes. But FDT agents still one-box, for the same reason they one-box in Newcomb’s original problem and cooperate in the prisoner’s dilemma: they imagine their algorithm controlling all instances of their decision procedure, including the *past* copy in the mind of their predictor.\n\n\nNow, let’s suppose that you’re standing in front of two full boxes in the transparent Newcomb problem. You might say to yourself, “I wish I could have committed beforehand, but now that the choice is before me, the tug of the extra $1000 is just too strong”, and then decide that you were not actually capable of making binding precommitments. This is fine; the normatively correct decision theory might not be something that all human beings have the willpower to follow in real life, just as the correct moral theory could turn out to be something that some people lack the will to follow.[8](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_7_15818 \"In principle, it could even turn out that following the prescriptions of the correct decision theory in full generality is humanly impossible. 
There’s no law of logic saying that the normatively correct decision-making behaviors have to be compatible with arbitrary brain designs (including human brain design). I wouldn’t bet on this, but in such a case learning the correct theory would still have practical import, since we could still build AI systems to follow the normatively correct theory.\")\n\n\nThat said, *I* believe that I’m quite capable of just acting like I committed to act. I don’t feel a need to go through any particular [mental ritual](http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/) in order to feel comfortable one-boxing. I can just *decide to one-box* and let the matter rest there.\n\n\nI want to be the kind of agent that sees two full boxes, so that I can walk away rich. I care more about doing what works, and about achieving practical real-world goals, than I care about the intuitiveness of my local decisions. And in this decision problem, FDT agents are the only agents that walk away rich.\n\n\nOne way of making sense of this kind of reasoning is that evolution graced me with a “just do what you promised to do” module. The same style of reasoning that allows me to actually follow through and one-box in Newcomb’s problem is the one that allows me to cooperate in prisoner’s dilemmas against myself — including dilemmas like “should I stick to my New Year’s resolution?”[9](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_8_15818 \"A New Year’s resolution that requires me to repeatedly follow through on a promise that I care about in the long run, but would prefer to ignore in the moment, can be modeled as a one-shot twin prisoner’s dilemma. 
In this case, the dilemma is temporally extended, and my “twins” are my own future selves, who I know reason more or less the same way I do.\nIt’s conceivable that I could go off my diet today (“defect”) and have my future selves pick up the slack for me and stick to the diet (“cooperate”), but in practice if I’m the kind of agent who isn’t willing today to sacrifice short-term comfort for long-term well-being, then I presumably won’t be that kind of agent tomorrow either, or the day after.\nSeeing that this is so, and lacking a way to force themselves or their future selves to follow through, CDT agents despair of promise-keeping and abandon their resolutions. FDT agents, seeing the same set of facts, do just the opposite: they resolve to cooperate today, knowing that their future selves will reason symmetrically and do the same.\") I claim that it was only misguided CDT philosophers that argued (wrongly) that “rational” agents aren’t allowed to use that evolution-given “just follow through with your promises” module.\n\n\n\n\n---\n\n\n**Anonymous:** A final point: I don’t know about counterlogicals, but a theory of functional similarity would seem to depend on the details of the algorithms.\n\n\nE.g., we could have a model where their output is stochastic, but some parameters of that process are the same (such as expected value), and the action is stochastically drawn from some distribution with those parameter values. We could have a version of that, but where the parameter values depend on private information picked up since the algorithms split, in which case each agent would have to model the distribution of private info the other might have.\n\n\nThat seems pretty general; does that work? 
Is there a class of functional similarity that can not be expressed using that formulation?\n\n\n\n\n---\n\n\n**Nate:** As long as the underlying distribution can be an arbitrary Turing machine, I think that’s sufficiently general.\n\n\nThere are actually a few non-obvious technical hurdles here; namely, if agent *A* is basing their beliefs off of their model of agent *B*, who is basing their beliefs off of a model of agent *A*, then you can get some strange loops.\n\n\nConsider for example the matching pennies problem: agent *A* and agent *B* will each place a penny on a table; agent *A* wants either HH or TT, and agent *B* wants either HT or TH. It’s non-trivial to ensure that both agents develop stable accurate beliefs in games like this (as opposed to, e.g., diving into infinite loops).\n\n\nThe technical solution to this is [reflective oracle machines](https://intelligence.org/2016/06/30/grain-of-truth/), a class of probabilistic Turing machines with access to an oracle that can probabilistically answer questions about any other machine in the class (with access to the same oracle).\n\n\nThe paper “[Reflective Oracles: A Foundation for Classical Game Theory](https://arxiv.org/abs/1609.05058)” shows how to do this and shows that the relevant fixed points always exist. 
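For the matching pennies game described above, the relevant fixed point has both agents randomizing 50/50. A minimal check that this profile is stable (this is just the classical mixed-strategy verification, not the reflective-oracle construction itself):\n\n\n```python
# Matching pennies: A wants the pennies to match (HH or TT), B wants a mismatch.
# A's payoff is +1 on a match and -1 otherwise; B's payoff is the negation.

def payoff_A(pa: float, pb: float) -> float:
    """Expected payoff to A when A plays heads w.p. pa and B w.p. pb."""
    match = pa * pb + (1 - pa) * (1 - pb)
    return match - (1 - match)

pa = pb = 0.5
# Neither player can gain by deviating to a pure strategy, so (0.5, 0.5) is a
# mixed-strategy Nash equilibrium, the kind of fixed point whose existence the
# reflective-oracle machinery guarantees.
assert all(payoff_A(dev, pb) <= payoff_A(pa, pb) + 1e-12 for dev in (0.0, 1.0))
assert all(-payoff_A(pa, dev) <= -payoff_A(pa, pb) + 1e-12 for dev in (0.0, 1.0))
```\n\n\nAgainst an opponent playing 50/50, every strategy yields expected payoff zero, which is why neither agent can profit by deviating and why naive best-response reasoning loops forever instead of settling here.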
(And furthermore, in cases that can be represented in classical game theory, the fixed points always correspond to the mixed-strategy Nash equilibria.)\n\n\nThis more or less lets us start from a place of saying “how do agents with probabilistic information about each other’s source code come to stable beliefs about each other?” and gets us to the “common knowledge of rationality” axiom from game theory.[10](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/#footnote_9_15818 \"The paper above shows how to use reflective oracles with CDT as opposed to FDT, because (a) one battle at a time and (b) we don’t yet have a generic algorithm for computing logical counterfactuals, but we do have a generic algorithm for doing CDT-type reasoning.\") One can also see it as a justification for that axiom, or as a generalization of that axiom that works even in cases where the lines between agent and environment get blurry, or as a hint at what we should do in cases where one agent has significantly more computational resources than the other, etc.\n\n\nBut, yes, when we study these kinds of problems concretely at MIRI, we tend to use models where each agent models the other as a probabilistic Turing machine, which seems roughly in line with what you’re suggesting here.\n\n\n \n\n\n\n\n---\n\n1. CDT prescribes defection in this dilemma, on the grounds that one’s action cannot *cause* the other agent to cooperate. FDT outperforms CDT in Newcomblike dilemmas like these, while also outperforming EDT in other dilemmas, such as the smoking lesion problem and XOR blackmail.\n2. The agent’s predisposition determines whether they will flee to Aleppo or stay in Damascus, and also determines Death’s prediction about their decision. This allows Death to inescapably pursue the agent, making flight pointless; but CDT agents can’t incorporate this fact into their decision-making.\n3. 
There are some fairly natural ways to cash out Murder Lesion where CDT accepts the problem and FDT forces a contradiction, but we decided not to delve into that interpretation in the paper.\nTangentially, I’ll note that one of the most common defenses of CDT similarly turns on the idea that certain dilemmas are “unfair” to CDT. Compare, for example, David Lewis’ “[Why Ain’cha Rich?](http://www.andrewmbailey.com/dkl/Why_Aincha_Rich.pdf)”\n\n\nIt’s obviously possible to define decision problems that are “unfair” in the sense that they just reward or punish agents for having a certain decision theory. We can imagine a dilemma where a predictor simply guesses whether you’re implementing FDT, and gives you $1,000,000 if so. Since we can construct symmetric dilemmas that instead reward CDT agents, EDT agents, etc., these dilemmas aren’t very interesting, and can’t help us choose between theories.\n\n\nDilemmas like Newcomb’s problem and Death in Damascus, however, don’t evaluate agents based on their *decision theories*. They evaluate agents based on their *actions*, and the task of the decision theory is to determine which action is best. If it’s unfair to criticize CDT for making the wrong choice in problems like this, then it’s hard to see on what grounds we can criticize any agent for making a wrong choice in any problem, since one can always claim that one is merely at the mercy of one’s decision theory.\n4. Our paper describes the XOR blackmail problem like so:\n\n> An agent has been alerted to a rumor that her house has a terrible termite infestation, which would cost her $1,000,000 in damages. She doesn’t know whether this rumor is true. 
A greedy and accurate predictor with a strong reputation for honesty has learned whether or not it’s true, and drafts a letter:\n> \n> \n> “I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.”\n> \n> \n> The predictor then predicts what the agent would do upon receiving the letter, and sends the agent the letter iff exactly one of (i) or (ii) is true. Thus, the claim made by the letter is true. Assume the agent receives the letter. Should she pay up?\n> \n> \n\n\nIn this scenario, EDT pays the blackmailer, while CDT and FDT refuse to pay. See the “[Cheating Death in Damascus](https://intelligence.org/files/DeathInDamascus.pdf)” paper for more details.\n5. Ben Levinstein notes that this can be compared to backward induction in game theory with common knowledge of rationality. You suppose you’re at some final decision node which you only would have gotten to (as it turns out) if the players weren’t actually rational to begin with.\n6. FDT agents intervene on their decision function, “FDT(P,G)”. The CDT version replaces this node with “Predisposition” and instead intervenes on “Act”.\n7. Specifically, the CDT-endorsed response here is: “Well, I’ll commit to acting like an FDT agent on future problems, but in one-shot prisoner’s dilemmas that began in my past, I’ll still defect against copies of myself”.\nThe problem with this response is that it can cost you arbitrary amounts of utility, provided a clever blackmailer wishes to take advantage. Consider the retrocausal blackmail dilemma in “[Toward Idealized Decision Theory](https://arxiv.org/abs/1507.01986)“:\n\n\n\n> There is a wealthy intelligent system and an honest AI researcher with access to the agent’s original source code. 
The researcher may deploy a virus that will cause $150 million each in damages to both the AI system and the researcher, and which may only be deactivated if the agent pays the researcher $100 million. The researcher is risk-averse and only deploys the virus upon becoming confident that the agent will pay up. The agent knows the situation and has an opportunity to self-modify after the researcher acquires its original source code but before the researcher decides whether or not to deploy the virus. (The researcher knows this, and has to factor this into their prediction.)\n> \n> \n\n\nCDT pays the retrocausal blackmailer, even if it has the opportunity to precommit to do otherwise. FDT (which in any case has no need for precommitment mechanisms) refuses to pay. I cite the intuitive undesirability of this outcome to argue that one should follow FDT in full generality, as opposed to following CDT’s prescription that one should only behave in FDT-like ways in future dilemmas.\n\n\nThe argument above must be made from a pre-theoretic vantage point, because CDT is internally consistent. There is no argument one could give to a *true* CDT agent that would cause it to want to use anything other than CDT in decision problems that began in its past.\n\n\nIf examples like retrocausal blackmail have force (over and above the force of other arguments for FDT), it is because humans aren’t genuine CDT agents. We may come to endorse CDT based on its theoretical and practical virtues, but the case for CDT is defeasible if we discover sufficiently serious flaws in CDT, where “flaws” are evaluated relative to more elementary intuitions about which actions are good or bad. FDT’s advantages over CDT and EDT — properties like its greater theoretical simplicity and generality, and its achievement of greater utility in standard dilemmas — carry intuitive weight from a position of uncertainty about which decision theory is correct.\n8. 
In principle, it could even turn out that following the prescriptions of the correct decision theory in full generality is humanly impossible. There’s no law of logic saying that the normatively correct decision-making behaviors have to be compatible with arbitrary brain designs (including human brain design). I wouldn’t bet on this, but in such a case learning the correct theory would still have practical import, since we could still build AI systems to follow the normatively correct theory.\n9. A New Year’s resolution that requires me to repeatedly follow through on a promise that I care about in the long run, but would prefer to ignore in the moment, can be modeled as a one-shot twin prisoner’s dilemma. In this case, the dilemma is temporally extended, and my “twins” are my own future selves, who I know reason more or less the same way I do.\nIt’s conceivable that I could go off my diet today (“defect”) and have my future selves pick up the slack for me and stick to the diet (“cooperate”), but in practice if I’m the *kind of agent* who isn’t willing today to sacrifice short-term comfort for long-term well-being, then I presumably won’t be that kind of agent tomorrow either, or the day after.\n\n\nSeeing that this is so, and lacking a way to force themselves or their future selves to follow through, CDT agents despair of promise-keeping and abandon their resolutions. FDT agents, seeing the same set of facts, do just the opposite: they resolve to cooperate today, knowing that their future selves will reason symmetrically and do the same.\n10. 
The paper above shows how to use reflective oracles with CDT as opposed to FDT, because (a) one battle at a time and (b) we don’t yet have a generic algorithm for computing logical counterfactuals, but we do have a generic algorithm for doing CDT-type reasoning.\n\nThe post [Decisions are for making bad outcomes inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-04-07T22:02:13Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "7f05200a902eeeccdd851267ba647f26", "title": "April 2017 Newsletter", "url": "https://intelligence.org/2017/04/06/april-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "Our newest publication, “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” makes the case for functional decision theory, our general framework for thinking about rational choice and counterfactual reasoning.\nIn other news, [our research team is expanding](https://intelligence.org/2017/03/31/two-new-researchers-join-miri/)!
Sam Eisenstat and Marcello Herreshoff, both previously at Google, join MIRI this month.\n**Research updates**\n* New at IAFF: “[Formal Open Problem in Decision Theory](https://agentfoundations.org/item?id=1356)”\n* New at AI Impacts: “[Trends in Algorithmic Progress](http://aiimpacts.org/trends-in-algorithmic-progress/)”; “[Progress in General-Purpose Factoring](http://aiimpacts.org/progress-in-general-purpose-factoring/)”\n* We ran a weekend [workshop](https://intelligence.org/workshops/#march-2017) on agent foundations and AI safety.\n\n\n**General updates**\n* Our [annual review](https://intelligence.org/2017/03/28/2016-in-review/) covers our research progress, fundraiser outcomes, and other take-aways from 2016.\n* We attended the [Colloquium on Catastrophic and Existential Risk](https://www.risksciences.ucla.edu/news-events/2017/1/31/the-first-colloquium-on-catastrophic-and-existential-threats).\n* Nate Soares weighs in on the Future of Life Institute’s [Risk Principle](https://futureoflife.org/2017/03/23/ai-risks-principle/).\n* “[Elon Musk’s Billion-Dollar Crusade to Stop the AI Apocalypse](http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x)” features quotes from Eliezer Yudkowsky, Demis Hassabis, Mark Zuckerberg, Peter Thiel, Stuart Russell, and others.\n\n\n\n**News and links**\n* The Open Philanthropy Project and OpenAI [begin a partnership](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support): Holden Karnofsky joins Elon Musk and Sam Altman on OpenAI’s Board of Directors, and Open Philanthropy contributes $30M to OpenAI’s research program.\n* Open Philanthropy has also awarded $2M [to the Future of Humanity Institute](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support).\n* [*Modeling Agents with Probabilistic 
Programs*](http://agentmodels.org/): a new book by Owain Evans, Andreas Stuhlmüller, John Salvatier, and Daniel Filan.\n* New from OpenAI: “[Evolution Strategies as a Scalable Alternative to Reinforcement Learning](https://blog.openai.com/evolution-strategies/)”; “[Learning to Communicate](https://blog.openai.com/learning-to-communicate/)”; “[One-Shot Imitation Learning](https://arxiv.org/abs/1703.07326)”; and from Paul Christiano, “[Benign Model-Free RL](https://ai-alignment.com/benign-model-free-rl-4aae8c97e385).”\n* Chris Olah and Shan Carter discuss [research debt](http://distill.pub/2017/research-debt/) as an obstacle to clear thinking and the transmission of ideas, and propose [Distill](http://distill.pub/about/) as a solution.\n* Andrew Trask proposes [encrypting deep learning algorithms](https://iamtrask.github.io/2017/03/17/safe-ai/) during training.\n* Roman Yampolskiy [seeks submissions](http://cecs.louisville.edu/ry/CFC_AISS_Yampolskiy.pdf) for a book on AI safety and security.\n* 80,000 Hours has updated their problem profile on [positively shaping the development of AI](https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/), a solid introduction to AI risk — which 80K now ranks as [the most urgent problem in the world](https://80000hours.org/career-guide/world-problems/). 
See also 80K’s write-up on [in-demand skill sets](https://80000hours.org/2017/03/what-skills-are-effective-altruist-organisations-missing/) at effective altruism organizations.\n\n\nThe post [April 2017 Newsletter](https://intelligence.org/2017/04/06/april-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-04-06T16:59:11Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "8eaa0c5c5c9deb27179caa55cc47645b", "title": "Two new researchers join MIRI", "url": "https://intelligence.org/2017/03/31/two-new-researchers-join-miri/", "source": "miri", "source_type": "blog", "text": "MIRI’s research team is growing! I’m happy to announce that we’ve hired two new research fellows to contribute to our work on [AI alignment](https://intelligence.org/why-ai-safety/): Sam Eisenstat and Marcello Herreshoff, both from Google.\n\n\n \n\n\n![Sam Eisenstat](https://intelligence.org/wp-content/uploads/2017/03/538814_10100251511965021_1825098355_n-e1490842664786.jpg)**Sam Eisenstat** studied pure mathematics at the University of Waterloo, where he carried out research in mathematical logic. His previous work was on the automatic construction of deep learning models at Google.\n\n\nSam’s research focus is on questions relating to the foundations of reasoning and agency, and he is especially interested in exploring analogies between current theories of logical uncertainty and Bayesian reasoning. He has also done work on decision theory and counterfactuals.
His past work with MIRI includes “[Asymptotic Decision Theory](http://gallabytes.com/2016/10/adt.html),” “[A Limit-Computable, Self-Reflective Distribution](https://agentfoundations.org/item?id=515),” and “[A Counterexample to an Informal Conjecture on Proof Length and Logical Counterfactuals](https://agentfoundations.org/item?id=369).”\n\n\n  \n\n\n![Marcello Herreshoff](https://intelligence.org/wp-content/uploads/2017/03/216873_4756522797_8041_n-1-e1490843349155.jpg) **Marcello Herreshoff** studied at Stanford, receiving a B.S. in Mathematics with Honors and getting two honorable mentions in the Putnam Competition, the world’s most highly regarded university-level math competition. Marcello then spent five years as a software engineer at Google, gaining a background in machine learning.\n\n\nMarcello is one of MIRI’s earliest research collaborators, and attended our very first [research workshop](https://intelligence.org/workshops/#november-2012) alongside Eliezer Yudkowsky, Paul Christiano, and Mihály Bárász. Marcello has worked with us in the past to help produce results such as “[Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem](https://intelligence.org/files/ProgramEquilibrium.pdf),” “[Definability of Truth in Probabilistic Logic](https://intelligence.org/files/DefinabilityTruthDraft.pdf),” and “[Tiling Agents for Self-Modifying AI](https://intelligence.org/files/TilingAgentsDraft.pdf).” His research interests include logical uncertainty and the design of reflective agents.\n\n\n \n\n\nSam and Marcello will be starting with us in the first two weeks of April. 
This marks the beginning of our first wave of new research fellowships since 2015, though we more recently added Ryan Carey to the team on an assistant research fellowship (in mid-2016).\n\n\nWe have additional plans to expand our research team in the coming months, and will soon be hiring for a more diverse set of technical roles at MIRI — details forthcoming!\n\n\nThe post [Two new researchers join MIRI](https://intelligence.org/2017/03/31/two-new-researchers-join-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-04-01T01:46:30Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "12f82a8251c5b3b6d8cdc722ca4fad5e", "title": "2016 in review", "url": "https://intelligence.org/2017/03/28/2016-in-review/", "source": "miri", "source_type": "blog", "text": "It’s time again for my annual review of MIRI’s activities.[1](https://intelligence.org/2017/03/28/2016-in-review/#footnote_0_15745 \"See our previous reviews: 2015, 2014, 2013.\") In this post I’ll provide a summary of what we did in 2016, see how our activities compare to our previously stated goals and predictions, and reflect on how our strategy this past year fits into our mission as an organization. We’ll be following this post up in April with a strategic update for 2017.\n\n\nAfter doubling the size of the research team in 2015,[2](https://intelligence.org/2017/03/28/2016-in-review/#footnote_1_15745 \"From 2015 in review: “Patrick LaVictoire joined in March, Jessica Taylor in August, Andrew Critch in September, and Scott Garrabrant in December. 
With Nate transitioning to a non-research role, overall we grew from a three-person research team (Eliezer, Benya, and Nate) to a six-person team.”\") we slowed our growth in 2016 and focused on integrating the new additions into our team, making research progress, and writing up a backlog of existing results.\n\n\n2016 was a big year for us on the research front, with our new researchers making some of the most notable contributions. Our biggest news was Scott Garrabrant’s [logical inductors](https://intelligence.org/2016/09/12/new-paper-logical-induction/) framework, which represents by a significant margin our largest progress to date on the problem of logical uncertainty. We additionally released “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” (AAMLS), a new technical agenda spearheaded by Jessica Taylor.\n\n\nWe also spent this last year engaging more heavily with the wider AI community, e.g., through the month-long [Colloquium Series on Robust and Beneficial Artificial Intelligence](https://intelligence.org/colloquium-series/) we co-ran with the Future of Humanity Institute, and through talks and participation in panels at many events through the year.\n\n\n\n \n\n\n### 2016 Research Progress\n\n\nWe saw significant progress this year in our [agent foundations agenda](https://intelligence.org/technical-agenda/), including Scott Garrabrant’s [logical inductor](https://intelligence.org/2016/09/12/new-paper-logical-induction/) formalism (which represents possibly our most significant technical result to date) and related developments in Vingean reflection. At the same time, we saw relatively little progress in error tolerance and value specification, which we had [planned](https://intelligence.org/2016/08/05/miri-strategy-update-2016/#2) to put more focus on in 2016. 
Below, I’ll note the highlights from each of our research areas:\n\n\n##### Logical Uncertainty and Naturalized Induction\n\n\n* 2015 progress: sizable. (Predicted: modest.)\n* 2016 progress: sizable. (Predicted: sizable.)\n\n\nWe saw a large body of results related to logical induction. Logical induction developed out of earlier work led by Scott Garrabrant in late 2015 (written up in April 2016) that served to divide the problem of logical uncertainty [into two subproblems](https://intelligence.org/2016/04/21/two-new-papers-uniform/). Scott demonstrated that both problems could be solved at once using an algorithm that satisfies a highly general “logical induction criterion.”\n\n\nThis criterion provides a simple way of understanding idealized reasoning under resource limitations. In Andrew Critch’s words, logical induction is “a financial solution to the computer science problem of metamathematics”: a procedure that assigns reasonable probabilities to arbitrary (empirical, logical, mathematical, self-referential, etc.) sentences in a way that outpaces deduction, explained by analogy to inexploitable stock markets.\n\n\nOur other main 2016 work in this domain is an independent line of research spearheaded by MIRI research associate Vanessa Kosoy, “[Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/).” Vanessa approaches the problem of logical uncertainty from a more complexity-theoretic angle of attack than logical induction does, providing a formalism for defining optimal feasible approximations of computationally infeasible objects that retain a number of relevant properties of those objects.\n\n\n##### **Decision Theory**\n\n\n* 2015 progress: modest. (Predicted: modest.)\n* 2016 progress: modest. (Predicted: modest.)\n\n\nWe continue to see a steady stream of interesting results related to the problem of defining logical counterfactuals. 
In 2016, we began applying the logical inductor framework to decision-theoretic problems, working with the idea of [universal inductors](https://agentfoundations.org/item?id=941). Andrew Critch also developed [a game-theoretic method for resolving policy disagreements](https://intelligence.org/2017/01/25/negotiable-rll/) that outperforms standard compromise approaches and also allows for negotiators to disagree on factual questions.\n\n\nWe have a backlog of many results to write up in this space. Our newest, “[Cheating Death in Damascus](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/),” summarizes the case for functional decision theory, a theory that systematically outperforms the conventional academic views (causal and evidential decision theory) in decision theory and game theory. This is the basic framework we use for studying logical counterfactuals and related open problems, and is a good introductory paper for understanding our other work in this space.\n\n\nFor an overview of our more recent work on this topic, see Tsvi Benson-Tilsen’s [decision theory index](https://agentfoundations.org/item?id=1026) on the research forum.\n\n\n##### **Vingean Reflection**\n\n\n* 2015 progress: modest. (Predicted: modest.)\n* 2016 progress: modest-to-strong. (Predicted: limited.)\n\n\nOur main results in reflective reasoning last year concerned self-trust in logical inductors. After seeing no major advances in Vingean reflection for many years—the last big step forward was perhaps Benya Fallenstein’s model polymorphism proposal [in late 2012](http://lesswrong.com/lw/e4e/an_angle_of_attack_on_open_problem_1/)—we had planned to de-prioritize work on this problem in 2016, on the assumption that other tools were needed before we could make much more headway. 
However, in 2016 logical induction turned out to be surprisingly useful for solving a number of outstanding tiling problems.\n\n\nAs described in “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/),” logical inductors provide a simple demonstration of self-referential reasoning that is highly general and accurate, is free of paradox, and assigns reasonable credence to the reasoner’s own beliefs. This provides some evidence that the problem of logical uncertainty itself is relatively central to a number of puzzles concerning the theoretical foundations of intelligence.\n\n\n##### **Error Tolerance**\n\n\n* 2015 progress: limited. (Predicted: modest.)\n* 2016 progress: limited. (Predicted: modest.)\n\n\n2016 saw the release of our “[Alignment for Advanced ML Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” research agenda, with a focus on error tolerance and value specification. Less progress occurred in these areas than expected, partly because investigations here are still very preliminary. We also spent less time on research in mid-to-late 2016 overall than we had planned, in part because we spent a lot of time writing up our new results and research proposals.\n\n\nNate noted in our [October AMA](http://effective-altruism.com/ea/12r/ask_miri_anything_ama/8kb) that he considers this time investment in drafting write-ups one of our main 2016 errors, and we plan to spend less time on paper-writing in 2017.\n\n\nOur 2016 work on error tolerance included “[Two Problems with Causal-Counterfactual Utility Indifference](https://agentfoundations.org/item?id=839)” and some time we spent discussing and critiquing Dylan Hadfield-Menell’s proposal of [corrigibility via CIRL](https://arxiv.org/abs/1611.08219). We plan to share our thoughts on the latter line of research more widely later this year.\n\n\n##### Value Specification\n\n\n* 2015 progress: limited. (Predicted: limited.)\n* 2016 progress: weak-to-modest. 
(Predicted: modest.)\n\n\nAlthough we planned to put more focus on value specification last year, we ended up making less progress than expected. Examples of our work in this area include Jessica Taylor and Ryan Carey’s posts on [online learning](https://agentfoundations.org/item?id=1025), and Jessica’s [analysis](https://agentfoundations.org/item?id=1090) of how errors might propagate within a system of humans consulting one another.\n\n\n \n\n\nWe’re extremely pleased with our progress on the agent foundations agenda over the last year, and we’re hoping to see more progress cascading from the new set of tools we’ve developed. At the same time, it remains to be seen how tractable the new set of problems we’re tackling in the AAMLS agenda are.\n\n\n \n\n\n### 2016 Research Support Activities\n\n\nIn September, we brought on Ryan Carey to support Jessica’s work on the AAMLS agenda as an assistant research fellow.[3](https://intelligence.org/2017/03/28/2016-in-review/#footnote_2_15745 \"As I noted in our AMA: “At MIRI, research fellow is a full-time permanent position. A decent analogy in academia might be that research fellows are to assistant research fellows as full-time faculty are to post-docs. 
Assistant research fellowships are intended to be a more junior position with a fixed 1–2 year term.”\") Our assistant research fellowship program seems to be working out well; Ryan has been a lot of help to us in working with Jessica to write up results (e.g., “[Bias-Detecting Online Learners](https://agentfoundations.org/item?id=1025)”), along with setting up TensorFlow tools for a project with Patrick LaVictoire.\n\n\nWe’ll likely be expanding the program this year and bringing on additional assistant research fellows, in addition to a slate of new research fellows.\n\n\nFocusing on other activities that relate relatively directly to our technical research program, including collaborating and syncing up with researchers in industry and academia, in 2016 we:\n\n\n* **Ran an experimental month-long [Colloquium Series for Robust and Beneficial Artificial Intelligence](https://intelligence.org/colloquium-series/)** (CSRBAI) featuring three weekend workshops and eighteen talks (by Stuart Russell, Tom Dietterich, Francesca Rossi, Bart Selman, Paul Christiano, Jessica Taylor, and others). See our retrospective [here](https://intelligence.org/2016/08/02/2016-summer-program-recap/), and a full list of videos [here](https://intelligence.org/2016/10/06/csrbai-talks-agent-models/).\n* **Hosted six non-CSRBAI [research workshops](https://intelligence.org/workshops/)** (three on our agent foundations agenda, three on AAMLS) and co-ran the MIRI Summer Fellows program. 
We also supported dozens of MIRIx events, hosted a [grad student seminar](https://intelligence.org/seminar-f2016/) at our offices for UC Berkeley students, and taught at [SPARC](https://sparc-camp.org/).\n* **Helped put together OpenAI Gym’s [safety environments](https://gym.openai.com/envs#safety) and the Center for Human-Compatible AI’s [annotated AI safety reading list](http://humancompatible.ai/bibliography)**, in collaboration with a number of researchers from other institutions.\n* **Gave a half dozen talks at non-MIRI events:**\n\t+ Eliezer Yudkowsky on “[AI Alignment: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/)” at Stanford University, where he was the [Symbolic Systems Distinguished Speaker](https://symsys.stanford.edu/viewing/htmldocument/13638) of 2016, and at the NYU “[Ethics of AI](https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/)” conference ([details](https://intelligence.org/nyu-talk/));\n\t+ Jessica Taylor on “[Using Machine Learning to Address AI Risk](https://intelligence.org/2017/02/28/using-machine-learning/)” at Effective Altruism Global;\n\t+ Andrew Critch on logical induction at EA Global ([video](https://www.youtube.com/watch?v=lXm-MgPLkxA)), and at Princeton, Harvard, and MIT;\n\t+ Andrew on superintelligence as a top priority at the Society for Risk Analysis ([slides](https://intelligence.org/files/AlignmentWorldResearchPriority.pdf)) and at [ENVISION](http://envision-conference.com) ([video](https://www.youtube.com/watch?v=qeGQ3FhTmKo)), where he also ran a workshop on logical induction;\n\t+ and Nate Soares on logical induction at [EAGxOxford](http://eagxoxford.com).\n* **Published two papers in a top AI conference, [UAI](http://www.auai.org/uai2016/index.php):** “[A Formal Solution to the Grain of Truth Problem](https://intelligence.org/2016/06/30/grain-of-truth/)” (co-authored with Jan Leike, now at DeepMind) and “[Safely Interruptible 
Agents](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/)” (co-authored by Laurent Orseau of DeepMind and a MIRI research associate, Stuart Armstrong of the Future of Humanity Institute). We also presented papers at AGI and at AAAI and IJCAI workshops.\n* **Spoke on panels** at EA Global, ENVISION, AAAI ([details](https://futureoflife.org/2016/02/17/aaai-workshop-highlights-debate-discussion-and-future-research/)), and EAGxOxford (with Demis Hassabis, Toby Ord, and two new DeepMind recruits: Viktoriya Krakovna and Murray Shanahan). Nate also co-moderated an AI Safety Q&A at the [OpenAI unconference](https://futureoflife.org/2016/10/17/openai-unconference-on-machine-learning/).\n* **Attended other academic events**, including NIPS, ICML, the [Workshop for Safety and Control for Artificial Intelligence](http://www.cmu.edu/safartint/), and the [Future of AI Symposium](https://futureoflife.org/2016/01/12/the-future-of-ai-quotes-and-highlights-from-todays-nyu-symposium/) at NYU organized by Yann LeCun.\n\n\nOn the whole, our research team growth in 2016 was somewhat slower than expected. We’re still accepting applicants for our [type theorist position](https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/) (and for other research roles at MIRI, via our [Get Involved page](http://intelligence.org/get-involved)), but we expect to leave that role unfilled for at least the next 6 months while we focus on onboarding additional core researchers.[4](https://intelligence.org/2017/03/28/2016-in-review/#footnote_3_15745 \"In the interim, our research intern Jack Gallagher has continued to make useful contributions in this domain.\")\n\n\n \n\n\n### 2016 General Activities\n\n\nAlso in 2016, we:\n\n\n* **Hired new administrative staff:** development specialist Colm Ó Riain, office manager Aaron Silverbook, and staff writer Matthew Graves. 
I also took on a leadership role as [MIRI’s COO](https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/).\n* **Contributed to the [IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems](http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html)**. I co-chaired the committee on the Safety and Beneficence of Artificial General Intelligence and Artificial Superintelligence, and moderated the workshop on the same subject at the [IEEE Symposium on Ethics of Autonomous Systems](http://www.cvent.com/events/ieee-symposium-on-ethics-of-autonomous-systems-seas-europe-/event-summary-28d5322779454a6780b19c07b28023de.aspx).\n* **Had our forecasting research cited in the White House’s [future of AI report](https://intelligence.org/2016/10/20/white-house-submissions-and-report-on-ai-safety/)**, and wrote up a public-facing [explanation of our strategic view](https://intelligence.org/2016/07/23/ostp/) for the White House’s request for information.\n* **Answered questions** at an [“Ask MIRI Anything” AMA](http://effective-altruism.com/ea/12r/ask_miri_anything_ama/), considered [parallels between AlphaGo and general AI](https://futureoflife.org/2016/03/15/eliezer-yudkowsky-on-alphagos-wins/), and had a back-and-forth with economist Bryan Caplan ([1](http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html), [2](http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html), [3](http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html#355226)).\n* **Received press coverage** in a *Scientific American* blog ([John Horgan interviews Eliezer Yudkowsky)](https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/), *[OZY](http://www.ozy.com/fast-forward/the-21st-century-philosophers/65230)*, *[Tech Republic](http://www.techrepublic.com/article/machine-learning-the-smart-persons-guide/)*, *[Harvard Business 
Review](https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy)*, *[Gizmodo](http://www.gizmodo.co.uk/2016/03/20-crucial-terms-every-21st-century-futurist-should-know/)*, the *[Washington Post](https://www.washingtonpost.com/posteverything/wp/2016/12/02/robots-wont-kill-the-workforce-theyll-save-the-global-economy/?utm_term=.8d8d161f39ee)*, *CNET* ([1](http://www.cnet.com/uk/news/ai-frankenstein-not-so-fast-artificial-intelligence-experts-say/), [2](https://www.cnet.com/uk/news/hollywood-ai-artificial-intelligence-fun-but-far-fetched/)), and *[BuzzFeed News](https://www.buzzfeed.com/tomchivers/im-sorry-dave-im-afraid-i-cant-do-that?utm_term=.beyQx4ke4#.jyN0qwozw)*.\n\n\n \n\n\n### 2016 Fundraising\n\n\n2016 was a strong year in MIRI’s fundraising efforts. We raised a total of **$2,285,200**, a **44%** increase on the $1,584,109 raised in 2015. This increase was largely driven by:\n\n\n* A general grant of **$500,000** [from the Open Philanthropy Project](https://intelligence.org/2016/09/06/grant-open-philanthropy/).[5](https://intelligence.org/2017/03/28/2016-in-review/#footnote_4_15745 \"Note that numbers in this section might not exactly match previously published estimates, since small corrections are often made to contributions data. Note also that these numbers do not include in-kind donations.\")\n* A donation of **$300,000** from Blake Borgeson.\n* Contributions of **$93,548** from Raising for Effective Giving.[6](https://intelligence.org/2017/03/28/2016-in-review/#footnote_5_15745 \"This figure only counts direct contributions through REG to MIRI. 
REG/EAF’s support for MIRI is closer to $150,000 when accounting for contributions made through EAF, many made on REG’s advice.\")\n* A research grant of **$83,309** from the Future of Life Institute.[7](https://intelligence.org/2017/03/28/2016-in-review/#footnote_6_15745 \"We were also awarded a $75,000 grant from the Center for Long-Term Cybersecurity to pursue a corrigibility project with Stuart Russell and a new UC Berkeley postdoc, but we weren’t able to fill the intended postdoc position in the relevant timeframe and the project was canceled. Stuart Russell subsequently received a large grant from the Open Philanthropy Project to launch a new academic research institute for studying corrigibility and other AI safety issues, the Center for Human-Compatible AI.\")\n* Our community’s strong turnout during our Fall Fundraiser—at **$595,947**, our second-largest fundraiser to date.\n* A gratifying show of support from supporters at the end of the year, despite our not running a Winter Fundraiser.\n\n\nAssuming we can sustain this funding level going forward, this represents a preliminary fulfillment of our primary fundraising goal [from January 2016](https://intelligence.org/2016/01/12/end-of-the-year-fundraiser-and-grant-successes/):\n\n\n\n> Our next big push will be to close the gap between our new budget and our annual revenue. In order to sustain our current growth plans — which are aimed at expanding to a team of approximately ten full-time researchers — we’ll need to begin consistently taking in close to $2M per year by mid-2017.\n> \n> \n\n\nAs the graph below indicates, 2016 continued a positive trend of growth in our fundraising efforts.\n\n\n\nDrawing conclusions from these year-by-year comparisons can be a little tricky. MIRI underwent significant organizational changes over this time span, particularly in 2013. 
We also switched to accrual-based accounting in 2014, which also complicates comparisons with previous years.\n\n\nHowever, it is possible to highlight certain aspects of our progress in 2016:\n\n\n* **The Fall Fundraiser:** For the first time, we held a single fundraiser in 2016 instead of our “traditional” summer and winter fundraisers—from mid-September to October 31. While we didn’t hit [our initial target of $750k](https://intelligence.org/2016/09/16/miris-2016-fundraiser/), we [hoped](https://intelligence.org/2016/11/11/post-fundraiser-update/) that our funders were waiting to give later in the year and would make up the shortfall at the end of year. We were pleased that they came through in large numbers at the end of 2016, some possibly motivated by public posts by members of the community.[8](https://intelligence.org/2017/03/28/2016-in-review/#footnote_7_15745 \"We received timely donor recommendations from investment analyst Ben Hoskin, Future of Humanity Institute researcher Owen Cotton-Barratt, and Daniel Dewey and Nick Beckstead of the Open Philanthropy Project (echoed by 80,000 Hours).\") All told, we received more contributions in December 2016 (~$430,000) than in the same month in either of the previous two years, when we actively ran Winter Fundraisers, an interesting data point for us. The following charts throw additional light on our supporters’ response to the fall fundraiser: \n\n \n\n \n\nNote that if we remove the Open Philanthropy Project’s grant from the Pre-Fall data, the ratios across the 4 time segments all look pretty similar. Overall, this data is suggestive that, rather than a group of new funders coming in at the last moment, a segment of our existing funders chose to wait until the end of the year to donate.\n* **In 2016 the remarkable support we received from returning funders was particularly noteworthy, with 89% retention** (in terms of dollars) from 2015 funders. 
To put this in a broader context, the average gift retention rate across [a representative segment of](http://www.afpnet.org/files/ContentDocuments/FEP2016FinalReport.pdf) the US philanthropic space over the last 5 years has been 46%.\n* **The number of unique funders to MIRI rose 16% in 2016**—from 491 to 571—continuing a general increasing trend. 2014 is anomalously high on this graph due to the community’s active participation in [our memorable SVGives campaign](https://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/).[9](https://intelligence.org/2017/03/28/2016-in-review/#footnote_8_15745 \"Our 45% retention of unique funders from 2015 is very much in line with funder retention across the US philanthropic space, which, combined with the previous point, suggests returning MIRI funders were significantly more supportive than most.\")\n* **International support continues to make up about 20% of contributions.** Unlike in the US, where increases were driven mainly by new institutional support (the Open Philanthropy Project), international support growth was driven by individuals across Europe (notably Scandinavia and the UK), Australia, and Canada.\n* **Use of employer matching programs increased by 17% year-on-year**, with contributions of over $180,000 received through corporate matching programs in 2016, our highest to date. There are early signs of this growth continuing through 2017.\n* **An analysis of contributions made from small, mid-sized, large, and very large funder segments shows that contributions from all four segments increased proportionally from 2015**:\n\n\nBecause we raised more than $2 million in 2016, we are now required by California law to prepare an annual financial statement audited by an independent certified public accountant (CPA).
That report, like our financial reports of past years, will be made available by the end of September, on our [transparency and financials](https://intelligence.org/transparency/) page.\n\n\n \n\n\n### Going Forward\n\n\nAs of [July 2016](https://intelligence.org/2016/07/29/2015-in-review/#5), we had the following outstanding goals from mid-2015:\n\n\n1. Accelerated growth: “expand to a roughly ten-person core research team.” ([source](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#acceleratedgrowth))\n2. Type theory in type theory project: “hire one or two type theorists to work on developing relevant tools full-time.” ([source](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#agda))\n3. Independent review: “We’re also looking into options for directly soliciting public feedback from independent researchers regarding our research agenda and early results.” ([source](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/#1))\n\n\nWe currently have seven research fellows and assistant fellows, and are planning to hire several more in the very near future. We expect to hit our ten-fellow goal in the next 3–4 months, and to continue to grow the research team later this year. As noted above, we’re delaying moving forward on a type theorist hire.\n\n\nThe Open Philanthropy Project is currently reviewing our research agenda [as part of their process of evaluating us for future grants](https://intelligence.org/2016/09/06/grant-open-philanthropy/). 
They released an initial [big-picture organizational review of MIRI](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support) in September, accompanied by [reviews](http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf) of several recent MIRI papers (which Nate responded to [here](https://intelligence.org/files/OpenPhil2016Supplement.pdf)). These reviews were generally quite critical of our work, with Open Phil expressing a number of reservations about our agent foundations agenda and our technical progress to date. We are optimistic, however, that we will be able to better make our case to Open Phil in discussions going forward, and generally converge more in our views of what open problems deserve the most attention.\n\n\nIn our [August 2016 strategic update](https://intelligence.org/2016/08/05/miri-strategy-update-2016/), Nate outlined our other organizational priorities and plans:\n\n\n4. Technical research: continue work on our agent foundations agenda while kicking off work on AAMLS.\n5. AGI alignment overviews: “Eliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I’ll be writing about MIRI strategy and forecasting questions.”\n6. Academic outreach events: “To help promote our approach and grow the field, we intend to host more workshops aimed at diverse academic audiences. We’ll be hosting a machine learning workshop in the near future, and might run more events like CSRBAI going forward.”\n7. 
Paper-writing: “We also have a backlog of past technical results to write up, which we expect to be valuable for engaging more researchers in computer science, economics, mathematical logic, decision theory, and other areas.”\n\n\nAll of these are still priorities for us, though we now consider 5 somewhat more important (and 6 and 7 less important). We’ve since run three ML [workshops](https://intelligence.org/workshops/), and have made more headway on our AAMLS research agenda. We now have a large amount of content prepared for our AGI alignment overviews, and are beginning a (likely rather long) editing process. We’ve also released “Logical Induction” and have a number of other papers in the pipeline.\n\n\nWe’ll be providing more details on how our priorities have changed since August in a strategic update post next month. As in past years, object-level technical research on the AI alignment problem will continue to be our top priority, although we’ll be undergoing a medium-sized shift in our research priorities and outreach plans.[10](https://intelligence.org/2017/03/28/2016-in-review/#footnote_9_15745 \"My thanks to Rob Bensinger, Colm Ó Riain, and Matthew Graves for their substantial contributions to this post.\")\n\n\n \n\n\n\n\n---\n\n1. See our previous reviews: [2015](https://intelligence.org/2016/07/29/2015-in-review/), [2014](https://intelligence.org/2015/03/22/2014-review/), [2013](https://intelligence.org/2013/12/20/2013-in-review-operations/).\n2. From [2015 in review](https://intelligence.org/2016/07/29/2015-in-review/): “Patrick LaVictoire joined in March, Jessica Taylor in August, Andrew Critch in September, and Scott Garrabrant in December. With Nate transitioning to a non-research role, overall we grew from a three-person research team (Eliezer, Benya, and Nate) to a six-person team.”\n3. As I noted in our [AMA](http://effective-altruism.com/ea/12r/ask_miri_anything_ama/8km): “At MIRI, research fellow is a full-time permanent position. 
A decent analogy in academia might be that research fellows are to assistant research fellows as full-time faculty are to post-docs. Assistant research fellowships are intended to be a more junior position with a fixed 1–2 year term.”\n4. In the interim, our research intern Jack Gallagher has continued to make useful contributions in this domain.\n5. Note that numbers in this section might not exactly match previously published estimates, since small corrections are often made to contributions data. Note also that these numbers do not include in-kind donations.\n6. This figure only counts direct contributions through REG to MIRI. REG/EAF’s support for MIRI is closer to $150,000 when accounting for contributions made through EAF, many made on REG’s advice.\n7. We were also awarded a [$75,000 grant](https://intelligence.org/2016/01/12/end-of-the-year-fundraiser-and-grant-successes/#2) from the Center for Long-Term Cybersecurity to pursue a corrigibility project with Stuart Russell and a new UC Berkeley postdoc, but we weren’t able to fill the intended postdoc position in the relevant timeframe and the project was canceled. Stuart Russell subsequently received a large grant [from the Open Philanthropy Project](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai) to launch a new academic research institute for studying corrigibility and other AI safety issues, the [Center for Human-Compatible AI](http://humancompatible.ai/).\n8. 
We received timely donor recommendations from investment analyst [Ben Hoskin](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/), Future of Humanity Institute researcher [Owen Cotton-Barratt](http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/), and [Daniel Dewey and Nick Beckstead](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016) of the Open Philanthropy Project (echoed by [80,000 Hours](https://80000hours.org/2016/12/the-effective-altruism-guide-to-donating-this-giving-season/#ai)).\n9. Our 45% retention of unique funders from 2015 is very much in line with funder retention across the US philanthropic space, which combined with the previous point, suggests returning MIRI funders were significantly more supportive than most.\n10. My thanks to Rob Bensinger, Colm Ó Riain, and Matthew Graves for their substantial contributions to this post.\n\nThe post [2016 in review](https://intelligence.org/2017/03/28/2016-in-review/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-03-29T00:27:31Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "88fca8dfc15042a36efe1d2f4089e470", "title": "New paper: “Cheating Death in Damascus”", "url": "https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/", "source": "miri", "source_type": "blog", "text": "[![Cheating Death in Damascus](https://intelligence.org/files/DeathInDamascus.png)](https://intelligence.org/files/DeathInDamascus.pdf)MIRI Executive Director Nate Soares and Rutgers/UIUC decision theorist Ben Levinstein have a new paper out introducing *functional decision theory* (FDT), MIRI’s proposal for a general-purpose decision theory.\n\n\nThe paper, titled “**[Cheating Death in Damascus](https://intelligence.org/files/DeathInDamascus.pdf)**,” considers a wide range of decision problems. 
In every case, Soares and Levinstein show that FDT outperforms all earlier theories in utility gained. The abstract reads:\n\n\n\n> Evidential and Causal Decision Theory are the leading contenders as theories of rational action, but both face fatal counterexamples. We present some new counterexamples, including one in which the optimal action is causally dominated. We also present a novel decision theory, Functional Decision Theory (FDT), which simultaneously solves both sets of counterexamples.\n> \n> \n> Instead of considering which physical action of theirs would give rise to the best outcomes, FDT agents consider which output of their decision function would give rise to the best outcome. This theory relies on a notion of subjunctive dependence, where multiple implementations of the same mathematical function are considered (even counterfactually) to have identical results for logical rather than causal reasons. Taking these subjunctive dependencies into account allows FDT agents to outperform CDT and EDT agents in, e.g., the presence of accurate predictors. While not necessary for considering classic decision theory problems, we note that a full specification of FDT will require a non-trivial theory of logical counterfactuals and algorithmic similarity.\n> \n> \n\n\n“Death in Damascus” is a standard decision-theoretic dilemma. In it, a trustworthy predictor (Death) promises to find you and bring your demise tomorrow, whether you stay in Damascus or flee to Aleppo. 
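The structure of the dilemma can be made concrete with a short toy sketch (our own illustration, not from the paper; the flight cost is an arbitrary positive stand-in):

```python
# Toy model of the Death in Damascus payoff structure (illustrative numbers).
# Death is a perfect predictor: it ends up wherever the agent ends up.

FLEE_COST = 1  # cost of the trip to Aleppo (any positive value)

def outcome(action):
    """Return the agent's utility given its choice of city."""
    death_location = action  # Death predicted the agent's choice perfectly
    assert death_location == action  # in both cases Death finds the agent
    return 0 if action == "Damascus" else -FLEE_COST

# Staying and fleeing both end in death; fleeing just adds the travel cost.
assert outcome("Damascus") > outcome("Aleppo")
```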
Fleeing to Aleppo is costly and provides no benefit, since Death, having predicted your future location, will then simply come for you in Aleppo instead of Damascus.\n\n\nIn spite of this, causal decision theory often recommends fleeing to Aleppo — for much the same reason it recommends defecting in the one-shot [twin prisoner’s dilemma](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/) and two-boxing in [Newcomb’s problem](http://lesswrong.com/lw/gu1/decision_theory_faq/#newcomblike-problems-and-two-decision-algorithms). CDT agents reason that Death has already made its prediction, and that switching cities therefore can’t *cause* Death to learn your new location. Even though the CDT agent recognizes that Death is inescapable, the CDT agent’s decision rule forbids taking this fact into account in reaching decisions. As a consequence, the CDT agent will happily give up arbitrary amounts of utility in a pointless flight from Death.\n\n\nCausal decision theory fails in Death in Damascus, Newcomb’s problem, and the twin prisoner’s dilemma — and also in the “random coin,” “Death on Olympus,” “asteroids,” and “murder lesion” dilemmas described in the paper — because its counterfactuals only track its actions’ causal impact on the world, and not the rest of the world’s causal (and logical, etc.) structure.\n\n\nWhile evidential decision theory succeeds in these dilemmas, it fails in a new decision problem, “XOR blackmail.”[1](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/#footnote_0_15658 \"Just as the variants on Death in Damascus in Soares and Levinstein’s paper help clarify CDT’s particular point of failure, XOR blackmail drills down more exactly on EDT’s failure point than past decision problems have. 
In particular, EDT cannot be modified to avoid XOR blackmail in the ways it can be modified to smoke in the smoking lesion problem.\") FDT consistently outperforms both of these theories, providing an elegant account of normative action for the full gamut of known decision problems.\n\n\n\n\n\n---\n\n\nThe underlying idea of FDT is that an agent’s decision procedure can be thought of as a mathematical function. The function takes the state of the world described in the decision problem as an input, and outputs an action.\n\n\nIn the Death in Damascus problem, the FDT agent recognizes that their action cannot *cause* Death’s prediction to change. However, Death and the FDT agent are in a sense computing the same function: their actions are correlated, in much the same way that if the FDT agent were answering a math problem, Death could predict the FDT agent’s answer by computing the same mathematical function.\n\n\nThis simple notion of “what variables depend on my action?” avoids the spurious dependencies that EDT falls prey to. Treating decision procedures as multiply realizable functions does not require us to conflate correlation with causation. At the same time, FDT tracks real-world dependencies that CDT ignores, allowing it to respond effectively in a much more diverse set of decision problems than CDT.\n\n\nThe main wrinkle in this decision theory is that FDT’s notion of dependence requires some account of “counterlogical” or “counterpossible” reasoning.\n\n\nThe prescription of FDT is that agents treat their decision procedure as a deterministic function, consider various outputs this function could have, and select the output associated with the highest-expected-utility outcome. What does it mean, however, to say that there are different outputs a deterministic function “could have”? Though one may be uncertain about the output of a certain function, there is in reality only one possible output of a function on a given input. 
Trying to reason about “how the world would look” on different assumptions about a function’s output on some input is like trying to reason about “how the world would look” on different assumptions about which is the largest integer in the set {1, 2, 3}.\n\n\nIn garden-variety counterfactual reasoning, one simply imagines a different (internally consistent) world, exhibiting different physical facts but the same logical laws. For counterpossible reasoning of the sort needed to say “if I stay in Damascus, Death will find me here” as well as “if I go to Aleppo, Death will find me there” — even though only one of these events is logically possible, under a full specification of one’s decision procedure and circumstances — one would need to imagine worlds where *different logical truths* hold. Mathematicians presumably do this in some heuristic fashion, since they must weigh the evidence for or against different conjectures; but it isn’t clear how to formalize this kind of reasoning in a practical way.[2](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/#footnote_1_15658 \"Logical induction is an example of a method for assigning reasonable probabilities to mathematical conjectures; but it isn’t clear from this how to define a decision theory that can calculate expected utilities for inconsistent scenarios. Thus the problem of reasoning under logical uncertainty is distinct from the problem of defining counterlogical reasoning.\")\n\n\nFunctional decision theory is a successor to [timeless decision theory](https://intelligence.org/files/TDT.pdf) (first discussed [in 2009](http://lesswrong.com/lw/135/timeless_decision_theory_problems_i_cant_solve/)), a theory by MIRI senior researcher Eliezer Yudkowsky that made the mistake of conditioning on observations. 
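As a toy illustration of treating the decision procedure as a multiply realized function, consider a Newcomb-style setup in which the predictor evaluates the very same function the agent runs. This is our own sketch with illustrative payoffs, not a construction from the paper:

```python
# Illustrative FDT-style choice in Newcomb's problem. The predictor fills the
# opaque box based on the output of the *same* decision function the agent
# runs, so asking "what if my function output X?" fixes the action and the
# prediction together.

def payoff(action):
    """Utility if the agent's decision function outputs `action`."""
    predicted = action  # the predictor computes the same function
    opaque = 1_000_000 if predicted == "one-box" else 0
    transparent = 1_000
    return opaque if action == "one-box" else opaque + transparent

# FDT: evaluate each possible output of the decision function, pick the best.
fdt_choice = max(["one-box", "two-box"], key=payoff)
assert fdt_choice == "one-box"
```

A CDT agent in the same setup would hold the opaque box's contents fixed while varying the action, and so would two-box; the functional dependence modeled here is what changes the answer.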
FDT is a generalization of Wei Dai’s *updateless decision theory*.[3](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/#footnote_2_15658 \"The name “UDT” has come to be used to pick out a multitude of different ideas, including “UDT 1.0” (Dai’s original proposal), “UDT 1.1”, and various proof-based approaches to decision theory (which make useful toy models, but not decision theories that anyone advocates adhering to).\nFDT captures a lot (but not all) of the common ground between these ideas, and is intended to serve as a more general umbrella category that makes fewer philosophical commitments than UDT and which is easier to explain and communicate. Researchers at MIRI do tend to hold additional philosophical commitments that are inferentially further from the decision theory mainstream (which concern updatelessness and logical prior probability), for which certain variants of UDT are perhaps our best concrete theories, but no particular model of decision theory is yet entirely satisfactory.\")\n\n\nWe’ll be presenting “Cheating Death in Damascus” at the [Formal Epistemology Workshop](http://www.mayowilson.org/FEW.htm), an interdisciplinary conference showcasing results in epistemology, philosophy of science, decision theory, foundations of statistics, and other fields.[4](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/#footnote_3_15658 \"Thanks to Matthew Graves and Nate Soares for helping draft and edit this post.\")\n\n\n**Update April 7**: Nate goes into more detail on the interpretive questions raised by functional decision theory in a follow-up conversation: [Decisions are for making bad outcomes inconsistent](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/).\n\n\n**Update November 25, 2019**: A revised version of this paper has been accepted to [*The Journal of Philosophy*](https://en.wikipedia.org/wiki/The_Journal_of_Philosophy). 
The *JPhil* version is **[here](https://intelligence.org/files/DeathInDamascus.pdf)**, while the 2017 FEW version is available [here](https://intelligence.org/files/obsolete/DeathInDamascus2019-11-27.pdf).\n\n\n---\n\n1. Just as the variants on Death in Damascus in Soares and Levinstein’s paper help clarify CDT’s particular point of failure, XOR blackmail drills down more exactly on EDT’s failure point than past decision problems have. In particular, EDT cannot be modified to avoid XOR blackmail in the ways it can be modified to smoke in the smoking lesion problem.\n2. [Logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) is an example of a method for assigning reasonable probabilities to mathematical conjectures; but it isn’t clear from this how to define a *decision theory* that can calculate expected utilities for inconsistent *scenarios*. Thus the problem of reasoning under logical uncertainty is distinct from the problem of defining counterlogical reasoning.\n3. The name “UDT” has come to be used to pick out a multitude of different ideas, including “[UDT 1.0](http://lesswrong.com/lw/15m/towards_a_new_decision_theory/)” (Dai’s original proposal), “[UDT 1.1](http://lesswrong.com/lw/1s5/explicit_optimization_of_global_strategy_fixing_a/)”, and various [proof-based](https://agentfoundations.org/item?id=50) approaches to decision theory (which make useful toy models, but not decision theories that anyone advocates adhering to).\nFDT captures a lot (but not all) of the common ground between these ideas, and is intended to serve as a more general umbrella category that makes fewer philosophical commitments than UDT and which is easier to explain and communicate.
Researchers at MIRI do tend to hold additional philosophical commitments that are inferentially further from the decision theory mainstream (which concern updatelessness and logical prior probability), for which certain variants of UDT are perhaps our best concrete theories, but no particular model of decision theory is yet entirely satisfactory.\n4. Thanks to Matthew Graves and Nate Soares for helping draft and edit this post.\n\nThe post [New paper: “Cheating Death in Damascus”](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-03-19T03:30:27Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "828d64f217a1dc1f75f372acce77a941", "title": "March 2017 Newsletter", "url": "https://intelligence.org/2017/03/15/march-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n**Research updates**\n* New at IAFF: [Some Problems with Making Induction Benign](https://agentfoundations.org/item?id=1263); [Entangled Equilibria and the Twin Prisoners’ Dilemma](https://agentfoundations.org/item?id=1279); [Generalizing Foundations of Decision Theory](https://agentfoundations.org/item?id=1302)\n* New at AI Impacts: [Changes in Funding in the AI Safety Field](http://aiimpacts.org/changes-in-funding-in-the-ai-safety-field/); [Funding of AI Research](http://aiimpacts.org/funding-of-ai-research/)\n* MIRI Research Fellow Andrew Critch has started a two-year stint at UC Berkeley’s [Center for Human-Compatible AI](http://humancompatible.ai), helping launch the research program there.\n* “[Using Machine Learning to Address AI Risk](https://intelligence.org/2017/02/28/using-machine-learning/)”: Jessica Taylor explains our AAMLS agenda (in video and blog versions) by walking through six potential problems with highly performing ML systems.\n\n\n\n\n**General updates**\n* [Why AI 
Safety?](https://intelligence.org/why-ai-safety/): A quick summary (originally posted during our [fundraiser](http://effective-altruism.com/ea/12n/miri_update_and_fundraising_case/)) of the case for working on AI risk, including notes on distinctive features of our approach and our goals for the field.\n* Nate Soares attended “[Envisioning and Addressing Adverse AI Outcomes](https://www.bloomberg.com/news/articles/2017-03-02/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions),” an event pitting red-team attackers against defenders in a variety of AI risk scenarios.\n\n\n\n\n\n* We also attended an AI safety strategy retreat run by the Center for Applied Rationality.\n\n\n\n**News and links**\n* Ray Arnold provides a useful list of [ways the average person can help with AI safety](http://effective-altruism.com/ea/17s/what_should_the_average_ea_do_about_ai_alignment/).\n* New from OpenAI: [attacking machine learning with adversarial examples](https://openai.com/blog/adversarial-example-research/).\n* OpenAI researcher Paul Christiano [explains his view of human intelligence](https://sideways-view.com/2017/02/19/the-monkey-and-the-machine-a-dual-process-theory/):\n\nI think of my brain as a machine driven by a powerful reinforcement learning agent. The RL agent chooses what thoughts to think, which memories to store and retrieve, where to direct my attention, and how to move my muscles.\nThe “I” who speaks and deliberates is *implemented by* the RL agent, but is distinct and has different beliefs and desires.
My thoughts are outputs and inputs to the RL agent, they are not what the RL agent “feels like from the inside.”\n* Christiano describes three [directions and desiderata](https://medium.com/ai-control/directions-and-desiderata-for-ai-control-b60fca0da8f4#.3j8dflohv) for AI control: reliability and robustness, reward learning, and deliberation and amplification.\n* Sarah Constantin argues that existing techniques [won’t scale up to artificial general intelligence](https://srconstantin.wordpress.com/2017/02/21/strong-ai-isnt-here-yet/) absent major conceptual breakthroughs.\n* The Future of Humanity Institute and the Centre for the Study of Existential Risk ran a “[Bad Actors and AI](https://www.fhi.ox.ac.uk/bad-actors-and-artificial-intelligence-workshop/)” workshop.\n* FHI is [seeking interns](https://www.fhi.ox.ac.uk/fhi-is-accepting-applications-for-internships-in-the-area-of-ai-safety-and-reinforcement-learning-2/) in reinforcement learning and AI safety.\n* Michael Milford [argues against](http://theconversation.com/merging-our-brains-with-machines-wont-stop-the-rise-of-the-robots-73275) brain-computer interfaces as an AI risk strategy.\n* Open Philanthropy Project head Holden Karnofsky explains why he sees [fewer benefits to public discourse](http://effective-altruism.com/ea/17o/some_thoughts_on_public_discourse/) than he used to.\n\n\n |\n\n\n\nThe post [March 2017 Newsletter](https://intelligence.org/2017/03/15/march-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-03-16T02:59:13Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "337ce8d338ade4cdaf3abdad7b6dea7a", "title": "Using machine learning to address AI risk", "url": "https://intelligence.org/2017/02/28/using-machine-learning/", "source": "miri", "source_type": "blog", "text": "At the [EA Global](https://www.eaglobal.org/) 2016 conference, I gave a talk on “**Using Machine Learning to Address AI Risk**”:\n\n\n\n> 
It is plausible that future artificial general intelligence systems will share many qualities in common with present-day machine learning systems. If so, how could we ensure that these systems robustly act as intended? We discuss the technical agenda for a new project at MIRI focused on this question.\n> \n> \n\n\nA recording of my talk is now up online:\n\n\n \n\n\n\n \n\n\nThe talk serves as a quick survey (for a general audience) of the kinds of technical problems we’re working on under the “[Alignment for Advanced ML Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” research agenda. Included below is a version of the talk in blog post form.[1](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_0_15536 \"I also gave a version of this talk at the MIRI/FHI Colloquium on Robust and Beneficial AI.\")\n\n\nTalk outline:\n\n\n\n> \n> 1. [Goal of this research agenda](https://intelligence.org/2017/02/28/using-machine-learning/#1)\n> \n> \n> 2. [Six potential problems with highly capable AI systems](https://intelligence.org/2017/02/28/using-machine-learning/#2)\n> \n> \n> 2.1. [Actions are hard to evaluate](https://intelligence.org/2017/02/28/using-machine-learning/#problem-1) \n> \n> 2.2. [Ambiguous test examples](https://intelligence.org/2017/02/28/using-machine-learning/#problem-2) \n> \n> 2.3. [Difficulty imitating human behavior](https://intelligence.org/2017/02/28/using-machine-learning/#problem-3) \n> \n> 2.4. [Difficulty specifying goals about the real world](https://intelligence.org/2017/02/28/using-machine-learning/#problem-4) \n> \n> 2.5. [Negative side-effects](https://intelligence.org/2017/02/28/using-machine-learning/#problem-5) \n> \n> 2.6. [Edge cases that still satisfy the goal](https://intelligence.org/2017/02/28/using-machine-learning/#problem-6)\n> \n> \n> 3. [Technical details on one problem: inductive ambiguity identification](https://intelligence.org/2017/02/28/using-machine-learning/#3)\n> \n> \n> 3.1. 
[KWIK learning](https://intelligence.org/2017/02/28/using-machine-learning/#kwik-learning) \n> \n> 3.2. [A Bayesian view of the problem](https://intelligence.org/2017/02/28/using-machine-learning/#a-bayesian-view-of-the-problem)\n> \n> \n> 4. [Other agendas](https://intelligence.org/feed/?paged=25#4)\n> \n> \n> \n\n\n\n\n\n---\n\n\n\n\n---\n\n\n\n### Goal of this research agenda\n\n\nThis talk is about a new research agenda aimed at using machine learning to make AI systems safe even at very high capability levels. I’ll begin by summarizing the goal of the research agenda, and then go into more depth on six problem classes we’re focusing on.\n\n\nThe goal statement for this technical agenda is that we want to know how to train a smarter-than-human AI system to perform one or more large-scale, useful *[tasks](https://arbital.com/p/task_goal/)* in the world.\n\n\nSome assumptions this research agenda makes:\n\n\n1. Future AI systems are likely to look like more powerful versions of present-day ML systems in many ways. We may get better deep learning algorithms, for example, but we’re likely to still be relying heavily on something like deep learning.[2](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_1_15536 \"Alternatively, you may think that AGI won’t look like modern ML in most respects, but that the ML aspects are easier to productively study today and are unlikely to be made completely irrelevant by future developments.\")\n2. Artificial general intelligence (AGI) is likely to be developed relatively soon (say, in the next couple of decades).[3](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_2_15536 \"Alternatively, you may think timelines are long, but that we should focus on scenarios with shorter timelines because they’re more urgent.\")\n3. 
Building [task-directed AGI](https://arbital.com/p/genie/) is a good idea, and we can make progress today studying how to do so.\n\n\nI’m not confident that all three of these assumptions are true, but I think they’re plausible enough to deserve about as much attention from the AI community as the likeliest alternative scenarios.\n\n\nA task-directed AI system is a system that pursues a semi-concrete objective in the world, like “build a million houses” or “cure cancer.” For those who have read *Superintelligence*, task-directed AI is similar to the idea of genie AI. Although these tasks are kind of fuzzy — there’s probably a lot of work you’d need to do to clarify what it really means to build a million houses, or what counts as a good house — they’re at least somewhat concrete.\n\n\nAn example of an AGI system that *isn’t* task-directed would be one with a goal like “learn human values and do things humans would consider good upon sufficient reflection.” This is too abstract to count as a “task” in the sense we mean; it doesn’t directly cash out in things in the world.\n\n\nThe hope is that even though task-directed AI pursues a less ambitious objective than “learn human values and do what we’d want it to do,” it’s still [sufficient](https://arbital.com/p/minimality_principle/) to prevent global catastrophic risks. Once the immediate risks are averted, we can then work on building more ambitious AI systems under reduced time pressure.\n\n\nTask-directed AI uses some (moderate) amount of human assistance to clarify the goal and to evaluate and implement its plans.
A goal like “cure cancer” is vague enough that humans will have to do some work to clarify what they mean by it, though most of the intellectual labor should be coming from the AI system rather than from humans.\n\n\nIdeally, task-directed AI also shouldn’t require [significantly more computational resources](https://medium.com/ai-control/directions-and-desiderata-for-ai-control-b60fca0da8f4#.ytgacb3cm) than competing systems. You shouldn’t get an exponential slowdown from building a safe system vs. a generic system.\n\n\nIn order to think about this overall goal, we need some kind of model for these future systems. The general approach that I’ll take is to look at current systems and imagine that they’re more powerful. A lot of the time you can look at tasks that people do in ML and you can see that the performance improves over time. We’ll model more advanced AI systems by just supposing that systems will continue to achieve higher scores in ML tasks. We can then ask what kinds of failure modes are likely to arise as systems improve, and what we can work on today to make those failures less likely or less costly.\n\n\n### Six potential problems with highly capable AI systems\n\n\n##### Problem 1: Actions are hard to evaluate\n\n\n[![Slide 13](https://intelligence.org/wp-content/uploads/2017/02/presentation-13.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-13.png)Suppose an AI system composes a story, and a human gives the system a reward based on how good the story is.[4](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_3_15536 \"Although I’ll use the example of stories here, in real life it could be a system generating plans for curing cancers, and humans evaluating how good the plans are.\")\n\n\nThis is similar to some RL tasks: the agent wants to do something that will cause it to receive a high reward in the future. 
The formalism of RL would say that the objective of this RL agent is to write a story that the human is expected to give a high score to.\n\n\nFor this objective to actually help us receive very high-quality stories, however, we also need to know that the human understands the RL agent’s actions well enough to [correctly administer rewards](https://medium.com/ai-control/the-reward-engineering-problem-30285c779450#.oy88ehql4). This assumption seems less likely to hold for systems that are optimizing the objective much more powerfully than any present-day system. For example:\n\n\n* A system much smarter than a human may be able to manipulate or coerce the human into giving a bad story a high score.\n* Even if the system is less intelligent than that, it might resort to plagiarism. Plagiarism can be easier to generate than to detect, since detection often requires scouring a larger pool of source texts.\n* A subhuman system might also have an advantage in inserting steganography into the story; it might take polynomial time to embed a secret message, and exponential time to detect such a message. Finding a way to discourage agents from taking covert actions like these would make it easier to monitor those actions’ effects and keep operators in the loop.\n\n\nDo we have a general way of preventing this? Can we train an RL system to not only output an action (e.g., a story), but also a report that might help an overseer better evaluate the system’s performance? 
Following OpenAI researcher Paul Christiano, we call this the problem of [informed oversight](https://medium.com/ai-control/the-informed-oversight-problem-1b51b4f66b35#.eisw1fly9).[5](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_4_15536 \"See the Q&A section of the talk for questions like “Won’t the report be subject to the same concerns as the original story?”\")\n\n\n##### Problem 2: Ambiguous test examples\n\n\n[![Slide 19](https://intelligence.org/wp-content/uploads/2017/02/presentation-19.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-19.png)Another problem: Consider a classifier trained to distinguish images of cats from images not containing cats, or trained to detect cancer. You may have lots of life experience that tells you “wild cats are cats.” If the training set only contains images of house cats and dogs, however, then it may not be possible to infer this fact during training.\n\n\nAn AI system that was superhumanly good at classifying images from a particular data set might not construct the same generalizations as a human, making it unreliable in new environments.\n\n\n[![Slide 21](https://intelligence.org/wp-content/uploads/2017/02/presentation-21.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-21.png)In safety-critical settings, ideally we would like the classifier to say, “This is ambiguous,” to alert us that the image’s label is underdetermined by the labels of training set images. We could then leverage the classifier’s proficiency at classification to intervene in contexts where the system is relatively likely to misclassify things, and could also supply training data that’s tailored to the dimensions along which the original data was uninformative. 
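As a toy illustration of this idea (not code from the talk; the 1-D threshold-classifier setup and every name here are invented), a classifier can flag an input as ambiguous whenever hypotheses consistent with the training data disagree about its label:

```python
def consistent_hypotheses(train, thresholds):
    """Keep threshold classifiers h(x) = [x >= t] that fit every training example."""
    return [t for t in thresholds
            if all((x >= t) == bool(y) for x, y in train)]

def classify_or_flag(x, train, thresholds=None):
    """Return 0/1 if all data-consistent hypotheses agree on x, else "ambiguous"."""
    if thresholds is None:
        thresholds = [i / 10 for i in range(11)]  # candidate cut points in [0, 1]
    labels = {int(x >= t) for t in consistent_hypotheses(train, thresholds)}
    return labels.pop() if len(labels) == 1 else "ambiguous"

# Four training points: negatives near 0, positives near 1.
train = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
print(classify_or_flag(0.05, train))  # every consistent threshold labels it 0
print(classify_or_flag(0.5, train))   # consistent thresholds disagree: "ambiguous"
```

An "ambiguous" verdict is the cue to ask a human for a label — the intervention described above for points whose classification is underdetermined by the training set.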
Formalizing this goal is the problem of [inductive ambiguity detection](https://arbital.com/p/inductive_ambiguity/).\n\n\n##### Problem 3: Difficulty imitating human behavior\n\n\n[![Slide 25](https://intelligence.org/wp-content/uploads/2017/02/presentation-25.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-25.png)One way we might want to deal with the problems of smarter-than-human AI is to just have it imitate a human. By having the system observe a human attempting to perform some task and having it learn to behave similarly, we might be able to leverage the AI system’s cognitive advantages while having it mostly stick to fairly normal strategies.\n\n\nAs an example, we can consider an AI system tasked with producing the kind of picture a human would draw. How could you actually train a system on this objective?\n\n\n[![Slide 27](https://intelligence.org/wp-content/uploads/2017/02/presentation-27.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-27.png)In one approach, generative adversarial networks, you have a human and two AI agents: an imitator, and a distinguisher.[6](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_5_15536 \"Ian J. Goodfellow et al. “Generative Adversarial Nets”. In: Advances in Neural Information Processing 27. Ed. by Z. Ghahramani et al. Curran Associates, Inc., 2014, pp. 2672-2680. URL: https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.\") The distinguisher tries to guess whether a given picture came from the human or from the imitator, and the imitator tries to produce the kinds of image that the distinguisher classifies as coming from the human.\n\n\nThis is a pretty cool setup, but it raises some theoretical issues. We could ask the question, “Does the distinguisher have to be smarter than the imitator? 
And if so, how much smarter?” If the distinguisher is less smart than the imitator, then the imitator could covertly do very non-human-like things to the image, as long as they’re too subtle for the distinguisher to notice. An open avenue of research here is to analyze schemes like this and ask under what conditions we can get nice robustness guarantees.\n\n\nFor more on this problem, see Paul Christiano’s “[Act-Based Agents](https://medium.com/ai-control/act-based-agents-8ec926c79e9c#.x1cluonuo)” and “[Mimicry and Meeting Halfway](https://medium.com/ai-control/mimicry-maximization-and-meeting-halfway-c149dd23fc17#.r0yv5vgz0)”.\n\n\n##### Problem 4: Difficulty specifying goals about the real world\n\n\nWhat if we want to train our AI system to perform a task like making a sandwich? One option might be to hook the system up to a robot in a room full of sandwich ingredients, have it perform an action sequence, and then have a human observer rate the robot’s performance based on how close it came to making a sandwich. That rating determines the robot’s reward.\n\n\nWe previously noted that sufficiently capable RL agents might pick actions that are hard to evaluate. Here we face the additional problem that useful tasks will often require taking physical action in the world. If the system is capable enough, then this setup gives it an incentive to take away the reward button and press it itself. This is what the formalism of RL would tell you is the best action, if we imagine AI systems that continue to be trained in the RL framework far past current capability levels.\n\n\nA natural question, then, is whether we can train AI systems that just keep getting better at producing a sandwich as they improve in capabilities, without ever reaching a tipping point where they have an incentive to do something else. Can we avoid relying on proxies for the task we care about, and just train the system to value completing the task in its own right? 
This is the [generalizable environmental goals](https://arbital.com/p/environmental_goals/) problem.\n\n\n##### Problem 5: Negative side-effects\n\n\nSuppose we succeeded in making a system that wants to put a sandwich in the room. In choosing between plans, it will favor whichever plan has the higher probability of resulting in a sandwich. Perhaps the policy of just walking over and making a sandwich has a 99.9% chance of success; but there’s always a chance that a human could step in and shut off the robot. A policy that drives down the probability of interventions like that might push up the probability of the room ending up containing a sandwich to 99.9999%. In this way, sufficiently advanced ML systems can end up with incentives to interfere with their developers and operators even when there’s no risk of reward hacking.\n\n\nThis is the problem of designing task-directed systems that can become superhumanly good at achieving their task, without causing negative side-effects in the process.\n\n\nOne response to this problem is to try to [quantify how much total impact](https://arbital.com/p/low_impact/) different policies have on the world. We can then add a penalty term for actions that have a high impact, causing the system to favor low-impact strategies.\n\n\nAnother approach is to ask how we might design an AI system to be satisfied with a merely 99.9% chance of success — just have the system stop trying to think up superior policies once it finds one meeting that threshold. This is the problem of formalizing [mild optimization](https://arbital.com/p/soft_optimizer/).\n\n\nOr one can consider advanced AI systems from the perspective of [convergent instrumental strategies](https://arbital.com/p/convergent_strategies/). No matter what the system is trying to do, it can probably benefit by having more computational resources, by having the programmers like it more, by having more money. 
A sandwich-making system might want money so it can buy more ingredients, whereas a story-writing system might want money so it can buy books to learn from. Many different goals imply similar instrumental strategies, a number of which are likely to introduce conflicts due to resource limitations.\n\n\nOne approach, then, would be to study these instrumental strategies directly and try to find a way to design a system [that doesn’t exhibit them](https://arbital.com/p/avert_instrumental_pressure/). If we can identify common features of these strategies, and especially of the adversarial strategies, then we could try to proactively avert the incentives to pursue those strategies. This seems difficult, and is very underspecified, but there’s some initial research pointed in this direction.\n\n\n##### Problem 6: Edge cases that still satisfy the goal\n\n\n[![Slide 40](https://intelligence.org/wp-content/uploads/2017/02/presentation-40.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-40.png)Another problem that’s likely to become more serious as ML systems become more advanced is [edge cases](https://arbital.com/p/edge_instantiation/).\n\n\nConsider our ordinary concept of a sandwich. There are lots of things that technically count as sandwiches, but are unlikely to have the same practical uses a sandwich normally has for us. You could have an extremely small or extremely large sandwich, or a toxic sandwich.\n\n\nFor an example of this behavior in present-day systems, we can consider this image that an image classifier correctly classified as a panda (with 57% confidence). Goodfellow, Shlens, and Szegedy found that they could add a tiny vector to this image that causes the classifier to misclassify it as a gibbon with 99% confidence.[7](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_6_15536 \"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. “Explaining and Harnessing Adversarial Examples”. In: (2014). 
arXiv: 1412.6572 [stat.ML].\")\n\n\nSuch edge cases are likely to become more common and more hazardous as ML systems begin to search wider solution spaces than humans are likely (or even able) to consider. This is then another case where systems might become increasingly good at maximizing their score on a conventional metric, while becoming *less* reliable for achieving realistic goals we care about.\n\n\n[Conservative concepts](https://arbital.com/p/conservative_concept/) are an initial idea for trying to address this problem, by biasing systems to avoid assigning positive classifications to examples that are near the edges of the search space. The system might then make the mistake of thinking that some perfectly good sandwiches are inadmissible, but it would not make the more risky mistake of classifying toxic or otherwise bizarre sandwiches as admissible.\n\n\n### Technical details on one problem: inductive ambiguity identification\n\n\nI’ve outlined eight research directions for addressing six problems that seem likely to start arising (or to become more serious) as ML systems become better at optimizing their objectives — objectives that may not exactly match programmers’ intentions. 
The research directions were:\n\n\n* [Informed oversight](https://arbital.com/p/informed_oversight/), for making it easier to interpret and assess ML systems’ actions.\n* [Inductive ambiguity identification](https://arbital.com/p/inductive_ambiguity/), for designing classifiers that stop and check in with overseers in circumstances where their training data was insufficiently informative.\n* Robust [human imitation](https://medium.com/ai-control/mimicry-maximization-and-meeting-halfway-c149dd23fc17#.1s9kwuq86), for recapitulating the safety-conducive features of humans in ML systems.\n* [Generalizable environmental goals](https://arbital.com/p/environmental_goals/), for preventing RL agents’ instrumental incentives to seize control of their reward signal.\n* [Impact measures](https://arbital.com/p/low_impact/), [mild optimization](https://arbital.com/p/soft_optimizer/), and [averting instrumental incentives](https://arbital.com/p/avert_instrumental_pressure/), for preventing negative side-effects of superhumanly effective optimization in a general-purpose way.\n* [Conservative concepts](https://arbital.com/p/conservative_concept/), for steering clear of edge cases.\n\n\nThese problems are discussed in more detail in “[Alignment for Advanced ML Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/).” I’ll go into more technical depth on an example problem to give a better sense of what working on these problems looks like in practice.\n\n\n##### KWIK learning\n\n\n[![Slide 44](https://intelligence.org/wp-content/uploads/2017/02/presentation-44.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-44.png)Let’s consider the inductive ambiguity identification problem, applied to a classifier for 2D points. In this case, we have 4 positive examples and 4 negative examples.\n\n\nWhen a new point comes in, the classifier could try to label it by drawing a whole bunch of models that are consistent with the previous data. 
Here, I draw just 4 of them. The question mark falls on opposite sides of these different models, suggesting that all of these models are plausible given the data.\n\n\n[![Slide 45](https://intelligence.org/wp-content/uploads/2017/02/presentation-45.png)](https://intelligence.org/wp-content/uploads/2017/02/presentation-45.png)We can suppose that the system infers from this that the training data is ambiguous with respect to the new point’s classification, and asks the human to label it. The human might then label it with a plus, and the system draws new conclusions about which models are plausible.\n\n\nThis approach is called “Knows What It Knows” learning, or KWIK learning. We start with some input space *X* ≔ ℝ*n* and assume that there exists some true mapping from inputs to probabilities. E.g., for each image the cat classifier encounters we assume that there is a true answer in the set *Y* ≔ [0,1] to the question, “What is the probability that this image is a cat?” This probability corresponds to the probability that a human will label that image “1” as opposed to “0,” which we can represent as a weighted coin flip. The model maps the inputs to answers, which in this case are probabilities.[8](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_7_15536 \"The KWIK learning framework is much more general than this; I’m just giving one example.\")\n\n\nThe KWIK learner is going to play a game. At the beginning of the game, some true model *h*\\* gets picked out. The true model is assumed to be in the hypothesis set *H*. On each iteration *i* some new example *xi* ∈ ℝ*n* comes in. It has some true answer *yi* = *h*\\*(*xi*), but the learner is unsure about the true answer. 
The learner has two choices:\n\n\n* Output an answer *ŷi* ∈ [0,1].\n\t+ If |*ŷi* – *yi*| > ε, the learner then loses the game.\n* Output ⊥ to indicate that the example is ambiguous.\n\t+ The learner then gets to observe the true label *zi* = FlipCoin(*yi*) from the observation set *Z* ≔ {0,1}.\n\n\nThe goal is to not lose, and to not output ⊥ too many times. The upshot is that it’s actually possible to win this game with a high probability if the hypothesis class *H* is a small finite set or a low-dimensional linear class. This is pretty cool. It turns out that there are certain forms of uncertainty where we can just resolve the ambiguity.\n\n\nThe way this works is that on each new input, we consider multiple models *h* that have done well in the past, and we consider something “ambiguous” if the models disagree on *h*(*xi*) by more than ε. Then we just refine the set of models over time.\n\n\nThe way that a KWIK learner represents this notion of inductive ambiguity is: ambiguity is about not knowing which model is correct. There’s some set of models, many are plausible, and you’re not sure which one is the right model.\n\n\nThere are some problems with this. One of the main problems is KWIK learning’s realizability assumption — the assumption that the true model *h*\\* is actually in the hypothesis set *H*. Realistically, the actual universe won’t be in your hypothesis class, since your hypotheses need to fit in your head. Another problem is that this method only works for these very simple model classes.\n\n\n##### A Bayesian view of the problem\n\n\nThat’s some existing work on inductive ambiguity identification. What’s some work we’ve been doing at MIRI related to this?\n\n\nLately, I’ve been trying to approach this problem from a Bayesian perspective. On this view, we have some kind of prior *Q* over mappings *X* → {0,1} from the input space to the label. 
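The KWIK game described above can be sketched in code. This is a toy version with a tiny finite hypothesis class and a deliberately crude pruning rule (all names are invented), not an actual KWIK algorithm from the literature:

```python
def kwik_answer(x, hypotheses, epsilon):
    """Answer only when surviving hypotheses agree to within epsilon.

    Returns a probability in [0, 1], or None (standing in for the talk's ⊥)
    to request the true 0/1 label for x.
    """
    preds = [h(x) for h in hypotheses]
    if max(preds) - min(preds) <= epsilon:
        return sum(preds) / len(preds)
    return None

def prune(x, z, hypotheses, epsilon):
    """Crude update: drop hypotheses flatly contradicted by observed label z."""
    return [h for h in hypotheses
            if not (h(x) > 1 - epsilon and z == 0)
            and not (h(x) < epsilon and z == 1)]

# Hypotheses map an input to the probability that its label is 1.
H = [lambda x: 0.0, lambda x: 1.0, lambda x: 0.5]
print(kwik_answer(7, H, epsilon=0.1))   # None: the hypotheses disagree, so ask
H = prune(7, 1, H, epsilon=0.1)         # an observed 1 rules out the 0.0 hypothesis
```

A real KWIK analysis also bounds how many times ⊥ may be emitted, and handling genuinely probabilistic labels requires accumulating statistics across observations rather than the single-observation pruning shown here.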
The assumption we’ll make is that our prior is wrong in some way and there’s some unknown “true” prior *P* over these mappings. The goal is that even though the system only has access to *Q*, it should perform the classification task almost as well (in expectation over *P*) as if it already knew *P*.\n\n\nIt seems like this task is hard. If the real world is sampled from *P*, and *P* is different from your prior *Q*, there aren’t that many guarantees. To make this tractable, we can add a grain of truth assumption:\n\n\n$$\\forall f : Q(f) \\geq \\frac{1}{k} P(f) $$\n\n\nThis says that if *P* assigns a high probability to something, then so does *Q*. Can we get good performance in various classification tasks under this kind of assumption?\n\n\nWe haven’t completed this research avenue, but initial results suggest that it’s possible to do pretty well on this task while avoiding catastrophic behaviors in at least some cases (e.g., online supervised learning). That’s somewhat promising, and this is definitely an area for future research.\n\n\nHow this ties in to inductive ambiguity identification: If you’re uncertain about what’s true, then there are various ways of describing what that uncertainty is about. You can try taking your beliefs and partitioning them into various possibilities. That’s in some sense an ambiguity, because you don’t know which possibility is correct. We can think of the grain of truth assumption as saying that there’s some way of splitting up your probability distribution into components such that one of the components is right. 
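To make the grain of truth condition concrete, here is a toy numerical check (both distributions and the hypothesis names are invented for illustration):

```python
# An unknown "true" prior P and the system's actual prior Q over three
# hypothetical mappings; the numbers are made up purely for illustration.
P = {"f1": 0.7, "f2": 0.2, "f3": 0.1}
Q = {"f1": 0.3, "f2": 0.3, "f3": 0.4}

def smallest_k(P, Q):
    """Smallest k satisfying Q(f) >= P(f) / k for every hypothesis f."""
    return max(P[f] / Q[f] for f in P if P[f] > 0)

k = smallest_k(P, Q)                              # here 0.7 / 0.3, so k is about 2.33
assert all(Q[f] * k + 1e-12 >= P[f] for f in P)   # grain of truth holds with this k
```

Intuitively, the smaller k is, the closer Q comes to containing P outright, and the easier it should be to approach the performance of a learner that knew P from the start.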
The system should do well even though it doesn’t initially know which component is right.\n\n\n(For more recent work on this problem, see Paul Christiano’s “[Red Teams](https://medium.com/ai-control/red-teams-b5b6de33dc76#.y6lwslork)” and “[Learning with Catastrophes](https://medium.com/ai-control/learning-with-catastrophes-59387b55cc30#.ii2tpjntu)” and research forum results from me and Ryan Carey: “[Bias-Detecting Online Learners](https://agentfoundations.org/item?id=1025)” and “[Adversarial Bandit Learning with Catastrophes](https://agentfoundations.org/item?id=1086).”)\n\n\n### Other research agendas\n\n\nLet’s return to a broad view and consider other research agendas focused on long-run AI safety. The first such agenda was outlined in MIRI’s 2014 [agent foundations](https://intelligence.org/files/TechnicalAgenda.pdf) report.[9](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_8_15536 \"Nate Soares and Benja Fallenstein. Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda. Tech. rep. 2014-8. Forthcoming 2017 in “The Technological Singularity: Managing the Journey” Jim Miller, Roman Yampolskiy, Stuart J. Armstrong, and Vic Callaghan, Eds. Berkeley, CA. Machine Intelligence Research Institute. 2014.\")\n\n\nThe agent foundations agenda is about developing a better theoretical understanding of reasoning and decision-making. An example of a relevant gap in our current theories is ideal reasoning about mathematical statements (including statements about computer programs), in contexts where you don’t have the time or compute to do a full proof. This is the basic problem we’re responding to in “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/).” In this talk I’ve focused on problems for advanced AI systems that broadly resemble present-day ML; in contrast, the agent foundations problems are agnostic about the details of the system. 
They apply to ML systems, but also to other possible frameworks for good general-purpose reasoning.\n\n\nThen there’s the “[Concrete Problems in AI Safety](https://openai.com/blog/concrete-ai-safety-problems/)” agenda.[10](https://intelligence.org/2017/02/28/using-machine-learning/#footnote_9_15536 \"Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete Problems in AI Safety”. In: (2016). arXiv: 1606.06565 [cs.AI].\") Here the idea is to study AI safety problems with a more empirical focus, specifically looking for problems that we can study using current ML methods, and perhaps can even demonstrate in current systems or in systems that might be developed in the near future.\n\n\nAs an example, consider the question, “How do you make an RL agent that behaves safely while it’s still exploring its environment and learning about how the environment works?” It’s a question that comes up in current systems all the time, and is relatively easy to study today, but is likely to apply to more capable systems as well.\n\n\nThese different agendas represent different points of view on how one might make AI systems more reliable in a way that scales with capabilities progress, and our hope is that by encouraging work on a variety of different problems from a variety of different perspectives, we’re less likely to completely miss a key consideration. At the same time, we can achieve more confidence that we’re on the right track when relatively independent approaches all arrive at similar conclusions.\n\n\nI’m leading the team at MIRI that will be focusing on the “Alignment for Advanced ML Systems” agenda going forward. It seems like there’s a lot of room for more eyes on these problems, and we’re hoping to hire a number of new researchers and kick off a number of collaborations to tackle these problems. 
If you’re interested in these problems and have a solid background in mathematics or computer science, I definitely recommend [getting in touch](mailto:jessica@intelligence.org) or [reading more about these problems](https://intelligence.org/2016/07/27/alignment-machine-learning/).\n\n\n\n\n---\n\n1. I also gave [a version of this talk](https://www.youtube.com/watch?v=_sGTqI5qdD4) at the MIRI/FHI Colloquium on Robust and Beneficial AI.\n2. Alternatively, you may think that AGI won’t look like modern ML in most respects, but that the ML aspects are easier to productively study today and are unlikely to be made completely irrelevant by future developments.\n3. Alternatively, you may think timelines are long, but that we should focus on scenarios with shorter timelines because they’re more urgent.\n4. Although I’ll use the example of stories here, in real life it could be a system generating plans for curing cancers, and humans evaluating how good the plans are.\n5. See the [Q&A section](http://www.youtube.com/watch?v=TSe3p1zIvVI&t=35m40s) of the talk for questions like “Won’t the report be subject to the same concerns as the original story?”\n6. Ian J. Goodfellow et al. “Generative Adversarial Nets”. In: *Advances in Neural Information Processing 27*. Ed. by Z. Ghahramani et al. Curran Associates, Inc., 2014, pp. 2672-2680. URL: .\n7. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. “Explaining and Harnessing Adversarial Examples”. In: (2014). [arXiv: 1412.6572 [stat.ML]](https://arxiv.org/abs/1412.6572).\n8. The KWIK learning framework is much more general than this; I’m just giving one example.\n9. Nate Soares and Benja Fallenstein. [Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda](https://intelligence.org/files/TechnicalAgenda.pdf). Tech. rep. 2014-8. Forthcoming 2017 in “The Technological Singularity: Managing the Journey” Jim Miller, Roman Yampolskiy, Stuart J. Armstrong, and Vic Callaghan, Eds. 
Berkeley, CA. Machine Intelligence Research Institute. 2014.\n10. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete Problems in AI Safety”. In: (2016). [arXiv: 1606.06565 [cs.AI]](https://arxiv.org/abs/1606.06565).\n\nThe post [Using machine learning to address AI risk](https://intelligence.org/2017/02/28/using-machine-learning/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-03-01T03:04:41Z", "authors": ["Jessica Taylor"], "summaries": []} -{"id": "fa125c6a15e9eec21f0fde19c62025b4", "title": "February 2017 Newsletter", "url": "https://intelligence.org/2017/02/16/february-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \nFollowing up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are [pursuing different research directions](https://agentfoundations.org/item?id=1129), Jessica Taylor has written up the key [motivations for MIRI’s highly reliable agent design research](https://agentfoundations.org/item?id=1220).\n \n\n**Research updates**\n* A new paper: “[Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making](https://intelligence.org/2017/01/25/negotiable-rll/)“\n* New at IAFF: [Pursuing Convergent Instrumental Subgoals on the User’s Behalf Doesn’t Always Require Good Priors](https://agentfoundations.org/item?id=1149); [Open Problem: Thin Logical Priors](https://agentfoundations.org/item?id=1206)\n* MIRI has a new [research advisor](https://intelligence.org/team/#advisors): Google DeepMind researcher Jan Leike.\n* MIRI and the Center for Human-Compatible AI are [looking for research interns](https://intelligence.org/2017/02/11/chcai-miri/) for this summer. 
Apply by March 1!\n\n\n \n**General updates**\n* We attended the Future of Life Institute’s [Beneficial AI conference](https://futureoflife.org/bai-2017/) at Asilomar. See Scott Alexander’s [recap](http://slatestarcodex.com/2017/02/06/notes-from-the-asilomar-conference-on-beneficial-ai/). MIRI executive director Nate Soares was on a technical safety panel discussion with representatives from DeepMind, OpenAI, and academia ([video](https://www.youtube.com/watch?v=UMq4BcRf-bY)), also featuring a back-and-forth with Yann LeCun, the head of Facebook’s AI research group (at [22:00](http://www.youtube.com/watch?v=UMq4BcRf-bY&t=22m0s)).\n* MIRI staff and a number of top AI researchers are signatories on FLI’s new [Asilomar AI Principles](https://futureoflife.org/ai-principles/), which include cautions regarding arms races, value misalignment, recursive self-improvement, and superintelligent AI.\n* The Center for Applied Rationality [recounts](http://rationality.org/studies/2016-case-studies) MIRI researcher origin stories and other cases where their workshops have been a big assist to our work, alongside examples of CFAR’s impact on other groups.\n* The Open Philanthropy Project has awarded a $32,000 [grant](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support) to AI Impacts.\n* Andrew Critch spoke at Princeton’s [ENVISION](http://envision-conference.com/) conference ([video](https://www.youtube.com/watch?v=qeGQ3FhTmKo)).\n* Matthew Graves has joined MIRI as a staff writer. See his first piece for our blog, a [reply](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/) to “Superintelligence: The Idea That Eats Smart People.”\n* The audio version of [*Rationality: From AI to Zombies*](https://intelligence.org/rationality-ai-zombies/) is temporarily unavailable due to the shutdown of Castify. 
However, fans are already putting together [a new free recording](http://from-ai-to-zombies.eu/files.html) of the full collection.\n\n\n \n\n**News and links**\n* An Asilomar panel on superintelligence ([video](https://www.youtube.com/watch?v=h0962biiZa4)) gathers Elon Musk (OpenAI), Demis Hassabis (DeepMind), Ray Kurzweil (Google), Stuart Russell and Bart Selman (CHCAI), Nick Bostrom (FHI), Jaan Tallinn (CSER), Sam Harris, and David Chalmers.\n* Also from Asilomar: Russell on corrigibility ([video](https://www.youtube.com/watch?v=pARXQnX6QS8)), Bostrom on openness in AI ([video](https://www.youtube.com/watch?v=_H-uxRq2w-c)), and LeCun on the path to general AI ([video](https://www.youtube.com/watch?v=bub58oYJTm0)).\n* From *MIT Technology Review*‘s “[AI Software Learns to Make AI Software](https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/?set=603387)”:\n\nCompanies must currently pay a premium for machine-learning experts, who are in short supply. Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such workers could be supplanted by software. He described what he termed “automated machine learning” as one of the most promising research avenues his team was exploring.\n\n\n* AlphaGo [quietly defeats the world’s top Go professionals](https://qz.com/877721/the-ai-master-bested-the-worlds-top-go-players-and-then-revealed-itself-as-googles-alphago-in-disguise/) in a crushing 60-win streak. 
AI also bests the top human players [in no-limit poker](https://www.theguardian.com/technology/2017/jan/30/libratus-poker-artificial-intelligence-professional-human-players-competition).\n* More signs that artificial general intelligence is becoming a trendier goal in the field: FAIR proposes [an AGI progress metric](https://arxiv.org/abs/1701.08954).\n* Representatives from Apple and OpenAI join the [Partnership on AI](http://www.wired.co.uk/article/ai-partnership-facebook-google-deepmind), and MIT and Harvard announce a new [Ethics and Governance of AI Fund](http://news.mit.edu/2017/mit-media-lab-to-participate-in-ai-ethics-and-governance-initiative-0110).\n* The World Economic Forum’s 2017 [Global Risks Report](http://www3.weforum.org/docs/GRR17_Report_web.pdf) includes [a discussion of AI safety](http://reports.weforum.org/global-risks-2017/part-3-emerging-technologies/3-2-assessing-the-risk-of-artificial-intelligence/): “given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.”\n* On the other hand, the JASON advisory group [reports to the US Department of Defense](https://motherboard.vice.com/en_us/article/elite-scientists-have-told-the-pentagon-that-ai-wont-threaten-humanity) that “the claimed ‘existential threats’ posed by AI seem at best uninformed,” adding, “In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.”\n* Data scientist Sarah Constantin argues that ML algorithms are exhibiting [linear or sublinear performance returns](https://srconstantin.wordpress.com/2017/01/28/performance-trends-in-ai/) to linear improvements in processing power, and that deep learning represents a break from trend in image and speech recognition, but not in strategy games or language processing.\n* New safety papers discuss [human-in-the-loop 
reinforcement learning](https://arxiv.org/abs/1701.04079) and [ontology identification](https://dspace.ut.ee/bitstream/handle/10062/54240/Rao_Parnpuu_MA_2016.pdf), and Jacob Steinhardt writes on [latent variables and counterfactual reasoning](https://jsteinhardt.wordpress.com/2017/01/10/latent-variables-and-model-mis-specification/) in AI alignment.\n\n\nThe post [February 2017 Newsletter](https://intelligence.org/2017/02/16/february-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-02-17T03:31:11Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "370abfcdc8744ee3dcab201968e7e8e1", "title": "CHCAI/MIRI research internship in AI safety", "url": "https://intelligence.org/2017/02/11/chcai-miri/", "source": "miri", "source_type": "blog", "text": "We’re looking for talented, driven, and ambitious technical researchers for a summer research internship with the [Center for Human-Compatible AI](http://humancompatible.ai/) (CHCAI) and the Machine Intelligence Research Institute (MIRI).\n\n\n##### About the research:\n\n\nCHCAI is a research center based at UC Berkeley with PIs including Stuart Russell, Pieter Abbeel and Anca Dragan. CHCAI describes its goal as “to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems”.\n\n\nMIRI is an independent research nonprofit located near the UC Berkeley campus with a mission of helping ensure that smarter-than-human AI has a positive impact on the world.\n\n\nCHCAI’s research focus includes work on inverse reinforcement learning and human-robot cooperation ([link](http://humancompatible.ai/publications)), while MIRI’s focus areas include [task AI](https://arbital.com/p/task_goal/) and computational reflection ([link](https://intelligence.org/research)). 
Both groups are also interested in theories of (bounded) rationality that may help us develop a deeper understanding of general-purpose AI agents.\n\n\n##### To apply:\n\n\n1. Fill in the form here: \n\n\n2. Send an email to [beth.m.barnes@gmail.com](mailto:beth.m.barnes@gmail.com) with the subject line “AI safety internship application”, attaching your CV, a piece of technical writing on which you were the primary author, and your research proposal.\n\n\n\nThe research proposal should be one to two pages in length. It should outline a problem you think you can make progress on over the summer, and some approaches to tackling it that you consider promising. We recommend reading over [CHCAI’s annotated bibliography](http://humancompatible.ai/bibliography) and the [concrete problems agenda](https://openai.com/blog/concrete-ai-safety-problems/) as good sources for open problems in AI safety, if you haven’t previously done so.\n\n\nYou should target your proposal at a specific research agenda or a specific adviser’s interests. 
Advisers’ interests include:\n\n\n• **Andrew Critch** (CHCAI, MIRI): anything listed in [CHCAI’s open technical problems](http://humancompatible.ai/bibliography#open_technical_problems%3a); [negotiable reinforcement learning](https://arxiv.org/abs/1701.01302); game theory for agents with transparent source code (e.g., “[Program Equilibrium](https://arxiv.org/abs/1701.01302)” and “[Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents](https://arxiv.org/abs/1602.04184)“).\n\n\n• **Daniel Filan** (CHCAI): the contents of “Foundational Problems,” “Corrigibility,” “Preference Inference,” and “Reward Engineering” in CHCAI’s [open technical problems list](http://humancompatible.ai/bibliography#open_technical_problems%3a).\n\n\n• **Dylan Hadfield-Menell** (CHCAI): application of game-theoretic analysis to models of AI safety problems (specifically by people who come from a theoretical economics background); formulating and analyzing AI safety problems as [CIRL games](https://arxiv.org/pdf/1606.03137.pdf); the relationships between AI safety and principal-agent models / theories of incomplete contracting; reliability engineering in machine learning; questions about fairness.\n\n\n• **Jessica Taylor**, **Scott Garrabrant**, and **Patrick LaVictoire** (MIRI): open problems described in MIRI’s [agent foundations](https://intelligence.org/files/TechnicalAgenda.pdf) and [alignment for advanced ML systems](https://intelligence.org/2016/07/27/alignment-machine-learning/) research agendas.\n\n\nThis application does not bind you to work on your submitted proposal. Its purpose is to demonstrate your ability to make concrete suggestions for how to make progress on a given research problem.\n\n\n##### Who we’re looking for:\n\n\nThis is a new and somewhat experimental program. You’ll need to be self-directed, and you’ll need to have enough knowledge to get started tackling the problems. 
The supervisors can give you guidance on research, but they aren’t going to be teaching you the material. However, if you’re deeply motivated by research, this should be a fantastic experience.\n\n\nSuccessful applicants will demonstrate examples of technical writing, motivation and aptitude for research, and produce a concrete research proposal. We expect most successful applicants will either:\n\n\n• have or be pursuing a PhD closely related to AI safety;\n\n\n• have or be pursuing a PhD in an unrelated field, but currently pivoting to AI safety, with evidence of sufficient knowledge and motivation for AI safety research; or\n\n\n• be an exceptional undergraduate or masters-level student with concrete evidence of research ability (e.g., publications or projects) in an area closely related to AI safety.\n\n\n##### Logistics:\n\n\nProgram dates are flexible, and may vary from individual to individual. However, our assumption is that most people will come for twelve weeks, starting in early June.\n\n\nThe program will take place in the San Francisco Bay Area. Basic living expenses will be covered. We can’t guarantee that housing will be arranged for you, but we can provide assistance in finding housing if needed.\n\n\nInterns who are not US citizens will most likely need to apply for J-1 intern visas. Once you have been accepted to the program, we can help you with the required documentation.\n\n\n##### Deadlines:\n\n\nThe deadline for applications is March 1. 
Applicants should hear back about decisions by March 20.\n\n\nThe post [CHCAI/MIRI research internship in AI safety](https://intelligence.org/2017/02/11/chcai-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-02-11T23:31:03Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "0d46121f93ca6143ef0c6808441407b3", "title": "New paper: “Toward negotiable reinforcement learning”", "url": "https://intelligence.org/2017/01/25/negotiable-rll/", "source": "miri", "source_type": "blog", "text": "[![Toward Negotiable Reinforcement Learning](https://intelligence.org/files/paretooptimal.png)](https://arxiv.org/abs/1701.01302)MIRI Research Fellow Andrew Critch has developed a new result in the theory of conflict resolution, described in “[**Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal sequential decision-making**](https://arxiv.org/abs/1701.01302).”\n\n\nAbstract: \n\n\n\n> Existing multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs and utility functions who may cooperate to build a machine that takes actions on their behalf. A representation is needed for how much the machine’s policy will prioritize each player’s interests over time.\n> \n> \n> Assuming the players have reached common knowledge of their situation, this paper derives a recursion that any Pareto optimal policy must satisfy. Two qualitative observations can be made from the recursion: the machine must (1) use each player’s own beliefs in evaluating how well an action will serve that player’s utility function, and (2) shift the relative priority it assigns to each player’s expected utilities over time, by a factor proportional to how well that player’s beliefs predict the machine’s inputs. 
Observation (2) represents a substantial divergence from naïve linear utility aggregation (as in Harsanyi’s utilitarian theorem, and existing MORL algorithms), which is shown here to be inadequate for Pareto optimal sequential decision-making on behalf of players with different beliefs.\n> \n> \n\n\n\nIf AI alignment is as difficult as it looks, then there are already strong reasons for different groups of developers to collaborate and to steer clear of race dynamics: the difference between a superintelligence aligned with one group’s values and a superintelligence aligned with another group’s values pales compared to the difference between any aligned superintelligence and a misaligned one. As Seth Baum of the Global Catastrophic Risk Institute notes [in a recent paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2816323):\n\n\n\n> Unfortunately, existing messages about beneficial AI are not always framed well. One potentially counterproductive frame is the framing of strong AI as a powerful winner-takes-all technology. This frame is implicit (and sometimes explicit) in discussions of how different AI groups might race to be the first to build strong AI. The problem with this frame is that it makes a supposedly dangerous technology seem desirable. If strong AI is a winner-takes-all technology race, then AI groups will want to join the race and rush to be the first to win. This is exactly the opposite of what the discussions of strong AI races generally advocate—they postulate (quite reasonably) that the rush to win the race could compel AI groups to skimp on safety measures, thereby increasing the probability of dangerous outcomes.\n> \n> \n> Instead of framing strong AI as a winner-takes-all race, those who are concerned about this technology should frame it as a dangerous and reckless pursuit that would quite likely kill the people who make it. 
AI groups may have some desire for the power that might accrue to whoever builds strong AI, but they presumably also desire to not be killed in the process.\n> \n> \n\n\nResearchers’ discussion of mechanisms to disincentivize arms races should therefore not be read as implying that self-defeating arms races are rational. Empirically, however, developers have a wide range of beliefs about the difficulty of alignment. Mechanisms for formally resolving policy disagreements may help create more evident incentives for cooperation and collaboration; hence there may be some value in developing formal mechanisms that advanced AI systems can use to generate policies that each party prefers over simple compromises between all parties’ goals (and beliefs), and that each prefers over racing.\n\n\nCritch’s recursion relation provides a framework in which players may negotiate for the priorities of a jointly owned AI system, producing a policy that is more attractive than the naïve linear utility aggregation approaches already known in the literature. The mathematical simplicity of the result suggests that there may be other low-hanging fruit in this space that would add to and further illustrate the value of collaboration. Critch identifies six areas for future work (presented in more detail in the paper):\n\n\n1. *[Best-alternative-to-negotiated-agreement](https://en.wikipedia.org/wiki/Best_alternative_to_a_negotiated_agreement) dominance.* Critch’s result considers negotiations between agents with differing beliefs, but does not account for the possibility that parties may have different BATNAs.\n2. *Targeting specific expectation pairs.* A method for modifying the players’ utility functions to make this possible would be useful for specifying various fairness or robustness criteria, including BATNA dominance.\n3. *Information trade.* Critch’s algorithm gives a large advantage to any contributor that is better able to predict the AI system’s inputs from its outputs. 
In realistic settings where players lack common knowledge of each other’s priors and observations, it would therefore make sense for agents to be able to trade away some degree of control over the system for information; but it is not clear how one should carry out such trades in practice.\n4. *Learning priors and utility functions.* Realistic smarter-than-human AI systems will need to learn their utility function over time, e.g., through [cooperative inverse reinforcement learning](http://arxiv.org/abs/1606.03137). A realistic negotiation procedure will need to account for the fact that the developers’ goals are imperfectly known and the AI system’s goals are a “work in progress.”\n5. *Incentive compatibility.* The methods used to learn players’ beliefs and utility functions additionally need to incentivize honest representations of one’s beliefs and goals, or they will need to be robust to attempts to game the system.\n6. *Naturalized decision theory.* The setting used in this result assumes a separation between the inner workings of the machine (and the players) and external reality, as opposed to modeling the machine as part of its environment. More realistic formal frameworks would allow us to better model the players’ representations of each other, opening up new [negotiation possibilities](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/).[1](https://intelligence.org/2017/01/25/negotiable-rll/#footnote_0_15443 \"Thanks to Matthew Graves, Andrew Critch, and Jessica Taylor for helping draft this post.\")\n\n\n---\n\n1. 
Thanks to Matthew Graves, Andrew Critch, and Jessica Taylor for helping draft this post.\n\nThe post [New paper: “Toward negotiable reinforcement learning”](https://intelligence.org/2017/01/25/negotiable-rll/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-01-26T04:11:25Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "3eb5ab72f529677a0406d579eb76ea9a", "title": "Response to Cegłowski on superintelligence", "url": "https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/", "source": "miri", "source_type": "blog", "text": "Web developer Maciej Cegłowski recently gave a talk on AI safety ([video](https://www.youtube.com/watch?v=kErHiET5YPw), [text](http://idlewords.com/talks/superintelligence.htm)) arguing that we should be skeptical of the standard assumptions that go into working on this problem, and doubly skeptical of the extreme-sounding claims, attitudes, and policies these premises appear to lead to. I’ll give my reply to each of these points below.\n\n\nFirst, a brief outline: this will mirror the structure of Cegłowski’s talk in that first I try to put forth my understanding of the broader implications of Cegłowski’s talk, then deal in detail with the inside-view arguments as to whether or not the core idea is right, then end by talking some about the structure of these discussions. \n\n \n\n\n\n##### (i) Broader implications\n\n\nCegłowski’s primary concern seems to be that there are lots of ways to misuse AI in the near term, and that worrying about long-term AI hazards may distract from working against [short-term misuse](https://idlewords.com/talks/robot_armies.htm). His secondary concern seems to be that worrying about AI risk looks problematic from the outside view. Humans have a long tradition of [millenarianism](https://en.wikipedia.org/wiki/Millenarianism), or the belief that the world will radically transform in the near future. 
Historically, most millenarians have turned out to be wrong and behaved in self-destructive ways. If you think that UFOs will land shortly to take you to the heavens, you might make some short-sighted financial decisions, and when the UFOs don’t arrive, you are full of regrets.\n\n\nI think the fear that focusing on long-term AI dangers will distract from short-term AI dangers is misplaced. Attention to one kind of danger will probably help draw more attention to other, related kinds of danger. Also, risks associated with extraordinarily capable AI systems appear to be more difficult and complex than risks associated with modern AI systems in the short term, suggesting that the long-term obstacles will require more lead time to address. If it is as easy to avert these dangers as some optimists think, then we lose very little by starting early; if it is difficult (but doable), then we lose much by starting late.\n\n\nWith regards to outside-view concerns, I question how much we can learn about external reality from focusing only on [human psychology](https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/#predictions). Many people have thought they could fly, for one reason or another. But some people actually can fly, and the person who bets against the Wright brothers based on psychological and historical patterns of error (instead of generalizing from, in this case, regularities in physics and engineering) will lose their money. The best way to get those bets right is to wade into the messy inside-view arguments.\n\n\nAs a Bayesian, I agree that we should update on surface-level evidence that an idea is weird or crankish. 
But I also think that [argument screens off evidence from authority](https://lesswrong.com/lw/lx/argument_screens_off_authority/); if someone who looks vaguely like a crank can’t provide good arguments for why they expect UFOs to land in Greenland in the next hundred years, and someone else who looks vaguely like a crank can provide good arguments for why they expect AGI to be created in the next hundred years, then once I’ve heard their arguments I don’t need to put much weight on whether or not they initially looked like a crank. Surface appearances are genuinely useful, but only to a point. And even if we insist on reasoning based on surface appearances, I think those look pretty good.[1](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_0_15413 \"As examples, see, e.g., Stuart Russell (Berkeley), Francesca Rossi (IBM), Shane Legg (Google DeepMind), Eric Horvitz (Microsoft), Bart Selman (Cornell), Ilya Sutskever (OpenAI), Andrew Davison (Imperial College London), David McAllester (TTIC), Jürgen Schmidhuber (IDSIA), and Geoffrey Hinton (University of Toronto).\")\n\n\nCegłowski put forth 11 inside-view and 11 outside-view critiques that I’ll paraphrase and then address: \n\n \n\n\n##### (ii) Inside-view arguments\n\n\n1. Argument from wooly definitions\n\n\nMany arguments for working on AI safety trade on definition tricks, where the sentences “A implies B” and “B implies C” both seem obvious, and this is used to argue for a less obvious claim “A implies C”; but in fact “B” is being used in two different senses in the first two sentences.\n\n\nThat’s true for a lot of low-grade futurism out there, but I’m not aware of any examples of Bostrom making this mistake. The best arguments for working on long-term AI safety depend on some vague terms, because we don’t have a good formal understanding of a lot of the concepts involved; but that’s different from saying that the arguments rest on ambiguous or equivocal terms. 
In my experience, the substance of the debate doesn’t actually change much if we paraphrase away specific phrasings like “general intelligence.”[2](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_1_15413 \"See What is Intelligence? for more on this idea, and vague terminology in general.\")\n\n\nThe basic idea is that human brains are good at solving various cognitive problems, and the capacities that make us good at solving problems often overlap across different categories of problem. People who have more working memory find that this helps with almost all cognitively demanding tasks, and people who think more quickly again find that this helps with almost all cognitively demanding tasks.\n\n\nComing up with better solutions to cognitive problems also seems critically important in interpersonal conflicts, both violent and nonviolent. By this I don’t mean that book learning will automatically lead to victory in combat,[3](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_2_15413 \"“No proposition Euclid wrote, / No formula the textbooks know, / Will turn the bullet from your coat, / Or ward the tulwar’s downward blow”.\") but rather that designing and aiming a rifle are both cognitive tasks. When it comes to security, we already see people developing AI systems in order to programmatically find holes in programs so that they can be fixed. The implications for black hats are obvious.\n\n\nThe core difference between people and computers here seems to be that the returns to putting cognitive work into getting more capacity to do cognitive work are much higher for computers than people. People can learn things, but have limited ability to improve their ability to learn things, or to improve their ability to improve their ability to learn things, etc. 
For computers, it seems like both software and hardware improvements are easier to make given better software and hardware options.\n\n\nThe loop of using computer chips to make better computer chips is already much more impressive than the loop of using people to make better people. We are only starting on the loop of using machine learning algorithms to make better machine learning algorithms, but we can reasonably expect that to be another impressive loop.\n\n\nThe important takeaway here is the specific moving pieces of this argument, and not the terms I’ve used. Some problem-solving abilities seem to be much more general than others: whatever cognitive features make us better than mice at building submarines, particle accelerators, and pharmaceuticals must have evolved to solve a very different set of problems in our ancestral environment, and certainly don’t depend on distinct modules in the brain for marine engineering, particle physics, and biochemistry. These relatively general abilities look useful for things like strategic planning and technological innovation, which in turn look useful for winning conflicts. And machine brains are likely to have some [dramatic advantages](https://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/) over biological brains, in part because they’re easier to redesign (and the task of redesigning AI may itself be delegable to AI systems) and much easier to scale.\n\n\n  \n\n2. Argument from Stephen Hawking’s cat\n\n\nStephen Hawking is much smarter than a cat, but he isn’t overpoweringly good at predicting a cat’s behavior, and his physical limitations strongly diminish his ability to control cats. Superhuman AI systems (especially if they’re disembodied) may therefore be similarly ineffective at modeling or controlling humans.\n\n\nHow relevant are bodies? 
One might think that a robot is able to fight its captors and run away on foot, while a software intelligence contained in a server farm will be unable to escape.\n\n\nThis seems incorrect to me, and for non-shallow reasons. In the modern economy, an internet connection is enough. One doesn’t need a body to place stock trades (as evidenced by the army of algorithmic traders that already exist), to sign up for an email account, to email subordinates, to hire freelancers (or even permanent employees), to convert speech to text or text to speech, to call someone on the phone, to acquire computational hardware on the cloud, or to copy over one’s source code. If an AI system needed to get its cat into a cat carrier, it could hire someone on TaskRabbit to do it like anyone else.\n\n\n  \n\n3. Argument from Einstein’s cat\n\n\nEinstein could probably corral a cat, but he would do so mostly by using his physical strength, and his intellectual advantages over the average human wouldn’t help. This suggests that superhuman AI wouldn’t be too powerful in practice.\n\n\nForce isn’t needed here if you have time to set up an [operant conditioning schedule](https://www.youtube.com/watch?v=Pt8rR7UQUC0).\n\n\nMore relevant, though, is that humans aren’t cats. We’re far more social and collaborative, and we routinely base our behavior on abstract ideas and chains of reasoning. This makes it easier to persuade (or hire, blackmail, etc.) a person than to persuade a cat, using only a speech or text channel and no physical threat. None of this relies in any obvious way on agility or brawn.\n\n\n  \n\n4. Argument from emus\n\n\nWhen the Australian military attempted to massacre emus in the 1930s, the emus outmaneuvered them. Again, this suggests that superhuman AI systems are less likely to be able to win conflicts with humans.\n\n\nScience fiction often depicts wars between humans and machines where both sides have a chance at winning, because that makes for better drama. 
I think *xkcd* does a better job of depicting how this would look:\n\n\n  \n\n[![](http://imgs.xkcd.com/comics/more_accurate.png)](https://xkcd.com/652/) \n\nRepeated encounters favor the more intelligent and adaptive party; we went from fighting rats with clubs and cats to fighting them with traps, poison, and birth control, and if we weren’t worried about possible downstream effects, we could probably engineer a bioweapon that kills them all.[4](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_3_15413 \"Consider another pest: Mosquitoes. One could point to the continued existence of mosquitoes as evidence that superior intelligence is no match for the mosquito’s speed and flight, or their ability to lay thousands of eggs per female. Except that we recently developed the ability to release genetically modified mosquitoes with the potential to drive a species to extinction. The method is to give male mosquitoes a gene that causes them to only bear sons, who will also have the gene and also only bear sons, until eventually the number of female mosquitoes is too small to support the overall population.\nHumans’ dominance over other species isn’t perfect, but it does seem to be growing rapidly as we accumulate more knowledge and develop new technologies. This suggests that something superhumanly good at scientific research and engineering could increase in dominance more quickly, and reach much higher absolute levels of dominance. The history of human conflict provides plentiful data that even small differences in technological capabilities can give a decisive advantage to one group of humans over another.\")\n\n\n  \n\n5. Argument from Slavic pessimism\n\n\n“We can’t build anything right. We can’t even build a secure webcam. 
So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?”\n\n\nThis is a good reason not to try to do that. A reasonable AI safety roadmap should be designed to route around any need to “solve ethics” or [get everything right on the first try](https://arxiv.org/abs/1611.08219). This is the idea behind finding ways to make advanced AI systems [pursue limited tasks rather than open-ended goals](https://www.youtube.com/watch?v=dY3zDvoLoao), making such systems corrigible, defining impact measures and building systems to have a low impact, etc. “[Alignment for Advanced ML Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” and [error-tolerant agent design](https://intelligence.org/files/TechnicalAgenda.pdf) are chiefly about finding ways to reap the benefits of smarter-than-human AI without demanding perfection.\n\n\n  \n\n6. Argument from complex motivations\n\n\nComplex minds are likely to have complex motivations; that may be part of what it even means to be intelligent.\n\n\nWhen discussing AI alignment, this typically shows up in two places. First, human values and motivations are complex, and so simple proposals of what an AI should care about will probably not work. Second, AI systems will probably have [convergent instrumental goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/), where regardless of what project they want to complete, they will observe that there are common strategies that help them complete that project.[5](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_4_15413 \"For example, for almost every goal, you can predict that you’ll achieve more of it in the worlds where you continue operating than in worlds where you cease operation. 
This naturally implies that you should try to prevent people from shutting you off, even if you weren’t programmed with any kind of self-preservation goal — staying online is instrumentally useful.\")\n\n\nSome convergent instrumental strategies can be found in Omohundro’s paper on [basic AI drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf). High intelligence probably does require a complex understanding of how the world works and what kinds of strategies are likely to help with achieving goals. But it doesn’t seem like complexity needs to spill over into the content of goals themselves; there’s no incoherence in the idea of a complex system that has simple overarching goals. If it helps, imagine a corporation trying to maximize its net present value, a simple overarching goal that nevertheless results in lots of complex organization and planning.\n\n\nOne core skill in thinking about AI alignment is being able to visualize the consequences of running various algorithms or executing various strategies, without falling into anthropomorphism. One could design an AI system such that its overarching goals change with time and circumstance, and it looks like humans often work this way. But having complex or unstable goals doesn’t imply that you’ll have [humane goals](https://intelligence.org/files/ValueLearningProblem.pdf), and simple, stable goals are [also perfectly possible](https://arbital.com/p/orthogonality/).\n\n\nFor example: Suppose an agent is considering two plans, one of which involves writing poetry and the other of which involves building a paperclip factory, and it evaluates them based on expected number of paperclips produced (instead of whatever complicated things motivate humans). Then we should expect it to prefer the second plan, even if a human can construct an elaborate verbal argument for why the first is “better.”\n\n\n  \n\n7. 
Argument from actual AI\n\n\nCurrent AI systems are relatively simple mathematical objects trained on massive amounts of data, and most avenues for improvement look like just adding more data. This doesn’t seem like a recipe for recursive self-improvement.\n\n\nThat may be true, but “it’s important to start thinking about mishaps from smarter-than-human AI systems today” doesn’t imply “smarter-than-human AI systems are imminent.” We should think about the problem now because it’s important and because there’s [relevant](https://openai.com/blog/concrete-ai-safety-problems/) [technical](https://intelligence.org/technical-agenda/) [research](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/) we can do today to get a better handle on it, not because we’re confident about timelines.\n\n\n(Also, that [may not be true](https://arxiv.org/abs/1606.04474).)\n\n\n  \n\n8. Argument from Cegłowski’s roommate\n\n\n“My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play *World of Warcraft* between bong rips.” Advanced AI systems may be similarly unambitious in their goals.\n\n\nHumans aren’t maximizers. This suggests that we may be able to design advanced AI systems to pursue limited tasks and thereby avert the kinds of disasters Bostrom is talking about. However, immediate profit incentives may not lead us in that direction by default, if gaining an extra increment of safety means trading away some annual profits or falling behind the competition. 
If we want to steer the field in that direction, we need to actually start work on better formalizing “limited task.”\n\n\nThere are [obvious profit incentives](https://www.gwern.net/Tool%20AI) for developing systems that can solve a wider variety of practical problems more quickly, reliably, skillfully, and efficiently; there aren’t corresponding incentives for developing the perfect system for playing *World of Warcraft* and doing nothing else.\n\n\nOr to put it another way: AI systems are unlikely to have limited ambitions by default, because maximization is easier to specify than laziness. Note how game theory, economics, and AI are all rooted in mathematical formalisms describing an agent which attempts to maximize some utility function. If we want AI systems that have “limited ambitions,” it is not enough to say “perhaps they’ll have limited ambitions;” we have to start exploring how to actually make them that way. For more on this topic, see the “low impact” problem in “[Concrete Problems in AI Safety](https://arxiv.org/pdf/1606.06565v2.pdf)” and other related papers.\n\n\n  \n\n9. Argument from brain surgery\n\n\nHumans can’t operate on the part of themselves that’s good at neurosurgery and then iterate this process.\n\n\nHumans can’t do this, but this is one of the obvious ways humans and AI systems might differ! If a human discovers a better way to build neurons or mitochondria, they probably can’t use it for themselves. If an AI system discovers that, say, it can use bitshifts instead of multiplications to do neural network computations much more quickly, it can push a patch to itself, restart, and then start working better. 
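The bitshift trick just mentioned is concrete enough to sketch. A minimal illustration (the function name is mine, and a real system would of course apply far more elaborate optimizations than this): for integers, multiplying by a power of two is equivalent to a left shift, which is typically cheaper at the hardware level.\n\n```python\ndef times_power_of_two(x: int, k: int) -> int:\n    """Multiply x by 2**k with a left shift instead of a multiplication.\n\n    A toy version of the 'bitshifts instead of multiplications'\n    optimization described above.\n    """\n    return x << k  # for integers, x << k == x * (2 ** k)\n\n\n# Spot-check the equivalence, including negative inputs.\nassert all(times_power_of_two(x, 3) == x * 8 for x in range(-1000, 1000))\n```\n\nAn agent that could verify a rewrite like this preserves behavior, and then apply it to its own code, would get the speedup without any change in what it does — which is exactly the kind of self-modification the argument describes.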
Or it can copy its source code to very quickly build a “child” agent.[6](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_5_15413 \"One of the challenges of AI alignment research is to figure out how to do this in a way that doesn’t involve changing what you think of as important.\")\n\n\nIt seems like many AI improvements will be general in this way. If an AI system designs faster hardware, or simply acquires more hardware, then it will be able to tackle larger problems faster. If an AI system designs an improvement to its basic learning algorithm, then it will be able to learn new domains faster.[7](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_6_15413 \"Hardware and software improvements can only get you so far — all the compute in the universe isn’t enough to crack a 2048 bit RSA key by brute force. But human ingenuity doesn’t come from our ability to quickly factor large numbers, and there are lots of reasons to believe that the algorithms we’re running will scale.\")\n\n\n  \n\n10. Argument from childhood\n\n\nIt takes a long time of interacting with the world and other people before human children start to be intelligent beings. It’s not clear how much faster an AI could develop.\n\n\nA truism in project management is that nine women can’t have one baby in one month, but it’s dubious that this truism will apply to machine learning systems. AlphaGo seems like a key example here: it probably played about as many training games of Go as Lee Sedol did prior to their match, but was about two years old instead of 33 years old.\n\n\nSometimes, artificial systems have access to tools that people don’t. You probably can’t determine someone’s heart rate just by looking at their face and restricting your attention to particular color channels, but software with a webcam can. 
You probably can’t invert rank ten matrices in your head, but software with a bit of RAM can.\n\n\nHere, we’re talking about something more like a person that is surprisingly old and experienced. Consider, for example, an old doctor; suppose they’ve seen twenty patients a day for 250 workdays over the course of twenty years. That works out to 100,000 patient visits, which seems to be roughly the number of people that interact with the UK’s NHS in 3.6 hours. If we train a machine learning doctor system on a year’s worth of NHS data, that would be the equivalent of fifty thousand years of medical experience, all gained over the course of a single year.\n\n\n  \n\n11. Argument from *Gilligan’s Island*\n\n\nWhile we often think of intelligence as a property of individual minds, civilizational power comes from aggregating intelligence and experience. A single genius working alone can’t do much.\n\n\nThis seems reversed. One of the properties of digital systems is that they can integrate with each other more quickly and seamlessly than humans can. Instead of thinking about a server farm AI as one colossal Einstein, think of it as an Einstein per blade, and so a single rack can contain multiple villages of Einsteins all working together. There’s no need to go through a laborious vetting process during hiring or a talent drought; expanding to fill more hardware is just copying code.\n\n\nIf we then take into account the fact that whenever one Einstein has an insight or learns a new skill, that can be rapidly transmitted to all other nodes, the fact that these Einsteins can spin up fully-trained forks whenever they acquire new computing power, and the fact that the Einsteins can use all of humanity’s accumulated knowledge as a starting point, the server farm begins to sound rather formidable. 
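A quick sanity check of the arithmetic in the doctor example from argument 10, using only the figures quoted there:\n\n```python\n# Figures from the doctor example: 20 patients/day, 250 workdays/year,\n# over a 20-year career; the NHS sees about the same number of people\n# in 3.6 hours.\npatients_per_day = 20\nworkdays_per_year = 250\ncareer_years = 20\n\ncareer_visits = patients_per_day * workdays_per_year * career_years\nassert career_visits == 100_000  # "That works out to 100,000 patient visits"\n\nnhs_visits_per_hour = career_visits / 3.6\n\n# One year of NHS data, measured in 20-year doctor careers' worth of visits.\nnhs_visits_per_year = nhs_visits_per_hour * 24 * 365\nequivalent_years = nhs_visits_per_year / (career_visits / career_years)\nprint(round(equivalent_years))  # ~48,700 — "fifty thousand years of medical experience"\n```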
\n\n \n\n\n##### (iii) Outside-view arguments\n\n\nNext, the outside-view arguments — with summaries that should be prefixed by “If you take superintelligence seriously, …”:\n\n\n  \n\n12. Argument from grandiosity\n\n\n…truly massive amounts of value are at stake.\n\n\nIt’s [surprising](https://lesswrong.com/lw/j5/availability/), by the Copernican principle, that our time looks as pivotal as it does. But while we should start off with a low prior on living at a pivotal time, we know that pivotal times have existed before,[8](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_7_15413 \"Robin Hanson points to the evolution of animal minds, the evolution of humans, the invention of agriculture, and industrialism as four previous ones. The automation of intellectual labor seems like a strong contender for the next one.\") and we should eventually be able to believe that we are living in an important time if we see enough evidence pointing in that direction.\n\n\n  \n\n13. Argument from megalomania\n\n\n…truly massive amounts of power are at stake.\n\n\nIn the long run, we should obviously be trying to use AI as a lever to improve the welfare of sentient beings, in whatever ways turn out to be technologically feasible. As suggested by the “we aren’t going to solve all of ethics in one go” point, it would be very bad if the developers of advanced AI systems were overconfident or overambitious in what tasks they gave the first smarter-than-human AI systems. Starting with [modest, non-open-ended goals](https://arbital.com/p/task_goal/) is a good idea — not because it’s important to signal humility, but because modest goals are potentially easier to get right (and less hazardous to get wrong).\n\n\n  \n\n14. Argument from transhuman voodoo\n\n\n…lots of other bizarre beliefs follow immediately.\n\n\nBeliefs often cluster because they’re driven by similar underlying principles, but they remain distinct beliefs. 
It’s certainly possible to believe that AI alignment is important and also that galactic expansion is mostly an unprofitable waste, or to believe that AI alignment is important and also that molecular nanotechnology is unfeasible.\n\n\nThat said, whenever we see a technology where cognitive work is the main blocker, it seems reasonable to expect that the trajectory that AI takes will have a major impact on that technology. If you were writing during the early days of the scientific method, or at the dawn of the Industrial Revolution, then an accurate model of the world would require you to make at least a few extreme-sounding predictions. We can debate whether AI will be that big of a deal, but if it is that big of a deal, it would be odd for there not to be any extreme futuristic implications.\n\n\n  \n\n15. Argument from Religion 2.0\n\n\n…you’ll be joining something like a religion.\n\n\nPeople are biased, and we should worry about ideas that might play to our biases; but we can’t use the existence of bias to ignore all object-level considerations and arrive at confident technological predictions. As the saying goes, just because you’re paranoid doesn’t mean that they’re not out to get you. Medical science and religion both promise to heal the sick, but medical science can actually do it. To distinguish medical science from religion, you have to look at the arguments and the results.\n\n\n  \n\n16. Argument from comic book ethics\n\n\n…you’ll end up with a hero complex.\n\n\nWe want a larger share of the research community working on these problems, so that the odds of success go up — what matters is that AI systems be developed in a responsible and circumspect way, not who gets the credit for developing them. You might end up with a hero complex if you start working on this problem now, but with luck, in ten years it will just feel like normal research (albeit on some particularly important problems).\n\n\n  \n\n17. 
Argument from simulation fever\n\n\n…you’ll believe that we are probably living in a simulation instead of base reality.\n\n\nI personally find the simulation hypothesis deeply questionable, because our universe looks both temporally bounded and either continuous or near-continuous in spacetime. If our universe looked more like, say, Minecraft, then this would seem more likely. (It seems that the first can’t easily simulate itself, whereas the second can, with a slowdown. The “RAM constraints” that are handwaved away with the simulation hypothesis are probably the core objection.) In either case, I don’t think this is a good argument for or against AI safety engineering as a field.[9](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_8_15413 \"I also disagree with “But if you believe this, you believe in magic. Because if we’re in a simulation, we know nothing about the rules in the level above. We don’t even know if math works the same way — maybe in the simulating world 2+2=5, or maybe 2+2=?.” This undersells the universality of math. We might not know how the physical laws of any universe simulating us relates to our own, but mathematics isn’t pinned down by physical facts that could vary from universe by universe.\")\n\n\n  \n\n18. Argument from data hunger\n\n\n…you’ll want to capture everyone’s data.\n\n\nThis seems unrelated to AI alignment. Yes, people building AI systems want data to train their systems on, and figuring out how to get data ethically instead of just quickly should be a priority. But how would shifting one’s views on whether or not smarter-than-human AI systems will someday exist, and how much work will be necessary in order to align their preferences with ours, shift one’s view on ethical data acquisition practices?\n\n\n  \n\n19. 
Argument from string theory for programmers\n\n\n…you’ll detach from reality into abstract thought.\n\n\nThe fact that it’s difficult to test predictions about advanced AI systems is a huge problem; MIRI, at least, bases its research around trying to reduce the risk that we’ll just end up building castles in the sky. This is part of the point of pursuing multiple angles of attack on the problem, encouraging more diversity in the field, focusing on problems that bear on a wide variety of possible systems, and prioritizing the formalization of informal and semiformal system requirements.[10](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_9_15413 \"As opposed to sticking with purely informal speculation, or drilling down on a very specific formal framework before we’re confident that it captures the informal requirement.\") [Quoting Eliezer Yudkowsky](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/):\n\n\n\n> Crystallize ideas and policies so others can critique them. This is the other point of asking, “How would I do this using unlimited computing power?” If you sort of wave your hands and say, “Well, maybe we can apply this machine learning algorithm and that machine learning algorithm, and the result will be blah-blahblah,” no one can convince you that you’re wrong. When you work with unbounded computing power, you can make the ideas simple enough that people can put them on whiteboards and go, “Wrong,” and you have no choice but to agree. It’s unpleasant, but it’s one of the ways that the field makes progress.\n> \n> \n\n\nSee “[MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/)” for more on the unbounded analysis approach. 
The [Amodei/Olah AI safety agenda](https://openai.com/blog/concrete-ai-safety-problems/) uses other heuristics, focusing on open problems that are easier to address in present and near-future systems, but that still appear likely to have relevance to scaled-up systems.\n\n\n  \n\n20. Argument from incentivizing crazy\n\n\n…you’ll encourage craziness in yourself and others.\n\n\nCrazier ideas may make more headlines, but I don’t get the sense that they attract more research talent or funding. Nick Bostrom’s ideas are generally more reasoned-through than Ray Kurzweil’s, and the research community is correspondingly more interested in engaging with Bostrom’s arguments and pursuing relevant technical research. Whether or not you agree with Bostrom or think the field as a whole is doing useful work, this suggests that relatively important and thoughtful ideas are attracting more attention from research groups in this space.\n\n\n  \n\n21. Argument from AI cosplay\n\n\n…you’ll be more likely to try to manipulate people and seize power.\n\n\nI think we agree about the hazards of treating people as pawns, [behaving unethically in pursuit of some greater good](https://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/), etc. It’s not clear to me that people interested in AI alignment are atypical on this dimension relative to other programmers, engineers, mathematicians, etc. 
And as with other outside-view critiques, this shouldn’t represent much of an update about how important AI safety research is; you wouldn’t want to decide how many research dollars to commit to nuclear security and containment based primarily on how impressed you were with Leó Szilárd’s temperament.[11](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_10_15413)\n\n\n  \n\n22. Argument from the alchemists\n\n\n…you’ll be acting too soon, before we understand how intelligence really works.\n\n\nWhile it seems unavoidable that the future holds surprises, of how and what and why, it seems like there are some things that we can identify as irrelevant. For example, the mystery of consciousness seems orthogonal to the mystery of problem-solving. It’s possible that the use of a problem-solving procedure on itself is basically what consciousness is, but it’s also possible that we can make an AI system that is able to flexibly achieve its goals without understanding what makes us conscious, and without having made it conscious in the process. \n\n \n\n\n##### (iv) Productive discussions\n\n\nNow that I’ve covered those points, there’s some space to discuss how I think productive discussions work. 
To that end, I applaud Cegłowski for doing a good job of laying out Bostrom’s full argument, though I think he misstates some minor points. (For example, Bostrom does not claim that all general intelligences will want to self-improve in order to better achieve their goals; he merely claims that this is a useful subgoal for many goals, if feasible.)\n\n\nThere are some problems where we can rely heavily on experiments and observation in order to reach correct conclusions, and other problems where we need to rely much more heavily on argument and theory. For example, when building sand castles it’s low cost to test a hypothesis; but when designing airplanes, full empirical tests are more costly, in part because there’s a realistic chance that the test pilot will die in the case of sufficiently bad design. Existential risks are on an extreme end of that spectrum, so we have to rely particularly heavily on abstract argument (though of course we can still gain by testing testable predictions whenever possible).\n\n\nThe key property of useful verbal arguments, when we’re forced to rely on them, is that they’re more likely to work in worlds where the conclusion is true as opposed to worlds where the conclusion is false. One can level an ad hominem against a clown who says “2+2=4” just as easily as a clown who says “2+2=5,” whereas the argument “what you said implies 0=1” is useful only against the second clown. “0=1” is a useful counterargument to “2+2=5” because it points directly to a specific flaw (subtract 2 from both sides twice and you’ll get a contradiction), and because it is much less persuasive against truth than it is against falsehood.\n\n\nThis makes me suspicious of outside-view arguments, because [they’re too easy to level against correct atypical views](https://lesswrong.com/lw/1p5/outside_view_as_conversationhalter/). 
Suppose that [Norman Borlaug](https://en.wikipedia.org/wiki/Norman_Borlaug) had predicted that he would save a billion lives, and this had been rejected on the outside view — after all, very few (if any) other people could credibly claim the same across all of history. What about that argument distinguishes Borlaug from any other person? When experiments are cheap, it’s acceptable to predictably miss every “first,” but when experiments aren’t cheap, this becomes a fatal flaw.\n\n\nInsofar as our goal is to help each other have more accurate beliefs, I also think it’s important for us to work towards identifying mutual “cruxes.” For any given disagreement, are there any propositions about the world that you think are true, and that I think are false, where if you changed your mind on that proposition you would come around to my views, and vice versa?\n\n\nBy seeking out these cruxes, we can more carefully and thoroughly search for evidence and arguments that bear on the most consequential questions, rather than getting lost in side-issues. In my case, I’d be much more sympathetic to your arguments if I stopped believing any of the following propositions (some of which you may already agree with):\n\n\n1. Agents’ values and capability levels are orthogonal, such that it’s possible to grow in power without growing in benevolence.\n2. *Ceteris paribus*, more computational ability leads to more power.\n3. More specifically, more computational ability can be useful for self-improvement, and this can result in a positive feedback loop with doubling times closer to weeks than to years.\n4. There are strong [economic incentives](https://www.gwern.net/Tool%20AI) to create autonomous agents that (approximately) maximize their assigned objective functions.\n5. 
Our capacities for empathy, moral reasoning, and restraint rely to some extent on specialized features of our brain that aren’t indispensable for general-purpose problem-solving, such that it would be a simpler engineering challenge to build a general problem solver without empathy than with empathy.[12](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/#footnote_11_15413 \"Mirror neurons, for example, may be important for humans’ motivational systems and social reasoning without being essential for the social reasoning of arbitrary high-capability AI systems.\")\n\n\nThis obviously isn’t an exhaustive list, and we would need a longer back-and-forth in order to come up with a list that we both agree is crucial.\n\n\n\n\n---\n\n\n\n\n---\n\n1. As examples, see, e.g., [Stuart](https://www.youtube.com/watch?v=GYQrNfSmQ0M) [Russell](https://people.eecs.berkeley.edu/~russell/research/future/) (Berkeley), [Francesca](http://www.wsj.com/articles/does-artificial-intelligence-pose-a-threat-1431109025) [Rossi](https://www.washingtonpost.com/news/in-theory/wp/2015/11/05/how-do-you-teach-a-machine-to-be-moral/?utm_term=.0f2a54465f87) (IBM), [Shane Legg](http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from_ai/) (Google DeepMind), [Eric Horvitz](http://www.youtube.com/watch?v=OFa5EyfkpvQ&t=36m32s) (Microsoft), [Bart Selman](http://futureoflife.org/data/PDF/bart_selman.pdf) (Cornell), [Ilya Sutskever](http://lukemuehlhauser.com/sutskever-on-talking-machines/) (OpenAI), [Andrew Davison](https://plus.google.com/+AndrewDavison/posts/QYQ8bLujWVL) (Imperial College London), [David McAllester](https://machinethoughts.wordpress.com/2014/08/10/friendly-ai-and-the-servant-mission/) (TTIC), [Jürgen](http://lesswrong.com/lw/682/qa_with_j%C3%BCrgen_schmidhuber_on_risks_from_ai/) [Schmidhuber](https://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/cp2bdre) (IDSIA), and 
[Geoffrey](http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom) [Hinton](https://www.youtube.com/watch?v=XG-dwZMc7Ng&t=10m0s) (University of Toronto).\n2. See [What is Intelligence?](https://intelligence.org/2013/06/19/what-is-intelligence-2/) for more on this idea, and vague terminology in general.\n3. “No proposition Euclid wrote, / No formula the textbooks know, / Will turn the bullet from your coat, / Or ward the tulwar’s downward blow”.\n4. Consider another pest: Mosquitoes. One could point to the continued existence of mosquitoes as evidence that superior intelligence is no match for the mosquito’s speed and flight, or their ability to lay thousands of eggs per female. Except that we recently developed the ability to release genetically modified mosquitoes with the potential to drive a species to extinction. The method is to give male mosquitoes a gene that causes them to only bear sons, who will also have the gene and also only bear sons, until eventually the number of female mosquitoes is too small to support the overall population.\nHumans’ dominance over other species isn’t perfect, but it does seem to be growing rapidly as we accumulate more knowledge and develop new technologies. This suggests that something superhumanly good at scientific research and engineering could increase in dominance more quickly, and reach much higher absolute levels of dominance. The history of human conflict provides plentiful data that even small differences in technological capabilities can give a decisive advantage to one group of humans over another.\n5. For example, for almost every goal, you can predict that you’ll achieve more of it in the worlds where you continue operating than in worlds where you cease operation. This naturally implies that you should try to prevent people from shutting you off, even if you weren’t programmed with any kind of self-preservation goal — staying online is instrumentally useful.\n6. 
One of the [challenges](https://intelligence.org/files/VingeanReflection.pdf) of AI alignment research is to figure out how to do this in a way that doesn’t involve changing what you think of as important.\n7. Hardware and software improvements can only get you so far — all the compute in the universe isn’t enough to crack a 2048-bit RSA key by brute force. But human ingenuity doesn’t come from our ability to quickly factor large numbers, and there are lots of reasons to believe that [the algorithms we’re running will scale](https://intelligence.org/files/IEM.pdf).\n8. Robin Hanson [points](https://www.overcomingbias.com/2008/06/singularity-out.html) to the evolution of animal minds, the evolution of humans, the invention of agriculture, and industrialism as four previous ones. The automation of intellectual labor seems like a strong contender for the next one.\n9. I also disagree with “But if you believe this, you believe in magic. Because if we’re in a simulation, we know nothing about the rules in the level above. We don’t even know if math works the same way — maybe in the simulating world 2+2=5, or maybe 2+2=?.” This undersells the universality of math. We might not know how the physical laws of any universe simulating us relate to our own, but mathematics isn’t pinned down by physical facts that could vary from universe to universe.\n10. As opposed to sticking with purely informal speculation, or drilling down on a very specific formal framework before we’re confident that it captures the informal requirement.\n11. Since Cegłowski endorses the writing of Stanislaw Lem, I’ll throw in a quotation from Lem’s *The Investigation*, published in 1959:\n\n> Once they begin to escalate their efforts, both sides are trapped in an arms race. There must be more and more improvements in weaponry, but after a certain point weapons reach their limit. What can be improved next? Brains. The brains that issue the commands. 
It isn’t possible to make the human brain perfect, so the only alternative is a transition to mechanization. The next stage will be a fully automated headquarters equipped with electronic strategy machines. And then a very interesting problem arises, actually two problems. McCatt called this to my attention. First, is there any limit on the development of these brains? Fundamentally they’re similar to computers that can play chess. A computer that anticipates an opponent’s strategy ten moves in advance will always defeat a computer that can think only eight or nine moves in advance. The more far-reaching a brain’s ability to think ahead, the bigger the brain must be. That’s one.” \n> \n> … \n> \n> “Strategic considerations dictate the construction of bigger and bigger machines, and, whether we like it or not, this inevitably means an increase in the amount of information stored in the brains. This in turn means that the brain will steadily increase its control over all of society’s collective processes. The brain will decide where to locate the infamous button. Or whether to change the style of the infantry uniforms. Or whether to increase production of a certain kind of steel, demanding appropriations to carry out its purposes. Once you create this kind of brain you have to listen to it. If a Parliament wastes time debating whether or not to grant the appropriations it demands, the other side may gain a lead, so after a while the abolition of parliamentary decisions becomes unavoidable. Human control over the brain’s decisions will decrease in proportion to the increase in its accumulated knowledge. Am I making myself clear? There will be two growing brains, one on each side of the ocean. What do you think a brain like this will demand first when it’s ready to take the next step in the perceptual race?” \n> \n> “An increase in its capability.” \n> \n> … \n> \n> “No, first it demands its own expansion — that is to say, the brain becomes even bigger! 
Increased capability comes next.” \n> \n> “In other words, you predict that the world is going to end up a chessboard, and all of us will be pawns manipulated in an eternal game by two mechanical players.”\n> \n> \n\n\nThis doesn’t mean Lem would endorse this position, but does show that he was thinking about these kinds of issues.\n12. [Mirror neurons](http://lesswrong.com/lw/xs/sympathetic_minds/), for example, may be important for humans’ motivational systems and social reasoning without being essential for the social reasoning of arbitrary high-capability AI systems.\n\nThe post [Response to Cegłowski on superintelligence](https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-01-13T23:55:52Z", "authors": ["Matthew Graves"], "summaries": []} -{"id": "20f84bbafb1decf3cf5cc249764365a6", "title": "January 2017 Newsletter", "url": "https://intelligence.org/2017/01/04/january-2017-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \nEliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “[The AI Alignment Problem: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/).” Other big news includes the release of version 1 of [*Ethically Aligned Design*](http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf), an IEEE recommendations document with a section on artificial general intelligence that we helped draft.\n**Research updates**\n* A new paper: “[Optimal Polynomial-Time Predictors: A Bayesian Notion of Approximation Algorithm](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/).”\n* New at IAFF: [The Universal Prior is Malign](https://agentfoundations.org/item?id=1094); [Jessica Taylor’s Take on Paul Christiano and MIRI’s Disagreement on Alignability of Messy 
AI](https://agentfoundations.org/item?id=1129)\n* New at AI Impacts: [Concrete AI Tasks for Forecasting](http://aiimpacts.org/concrete-ai-tasks-for-forecasting/)\n* We ran our third [workshop on machine learning and AI safety](https://intelligence.org/workshops/#december-2016), focusing on (among other topics) mild optimization and conservative concept learning.\n* MIRI Research Fellow Andrew Critch is spending part of his time at the [Center for Human-Compatible AI](http://humancompatible.ai) as a visiting scholar.\n\n\n**General updates**\n* I’m happy to announce that our informal [November/December fundraising push](https://intelligence.org/2016/11/11/post-fundraiser-update/) was a  success, with donations totaling ~$450,000! To all of our supporters, on MIRI’s behalf: thank you. Special thanks to [Raising for Effective Giving](https://reg-charity.org/), who contributed ~$96,000 in all to our fundraiser and our end-of-the-year push.\n* [Open Philanthropy Project staff](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016) and [80,000 Hours](https://80000hours.org/2016/12/the-effective-altruism-guide-to-donating-this-giving-season/) highlight MIRI, the Future of Humanity Institute, and a number of other organizations as good giving opportunities for people still considering their donation options.\n* Critch spoke at the annual meeting of the [Society for Risk Analysis](http://www.sra.org/) ([slides](https://intelligence.org/files/AlignmentWorldResearchPriority.pdf)). 
We also attended the [Cambridge Conference on Catastrophic Risk](http://cser.org/cccr2016/) and NIPS; see DeepMind researcher Viktoriya Krakovna’s [NIPS safety paper highlights](http://futureoflife.org/2016/12/28/ai-safety-highlights-nips-2016/).\n* MIRI Executive Director Nate Soares gave a talk on logical induction at [EAGxOxford](http://eagxoxford.com/), and participated in a panel discussion on “The Long-Term Situation in AI” with Krakovna, Demis Hassabis, Toby Ord, and Murray Shanahan.\n* [Intelligence in Literature Prize](https://www.reddit.com/r/rational/comments/5lnj5r/announcement_intelligence_in_literature_monthly/): We’re helping administer a $100 prize each month to the best new fiction touching on ideas related to intelligence, AI, and the alignment problem. Send your submissions to [intelligenceprize@gmail.com](mailto:intelligenceprize@gmail.com).\n\n\n\n**News and links**\n* Gwern Branwen argues that more autonomous intelligent systems are likely to [systematically outperform “tool-like” AI systems](http://www.gwern.net/Tool%20AI).\n* “[Policy Desiderata in the Development of Machine Superintelligence](https://www.fhi.ox.ac.uk/new-working-paper-policy-desiderata-in-the-development-of-machine-superintelligence/)“: Nick Bostrom, Allan Dafoe, and Carrick Flynn outline ten key AI policy considerations.\n* [Faulty Reward Functions in the Wild](https://openai.com/blog/faulty-reward-functions/): OpenAI’s Dario Amodei and Jack Clark illustrate a core obstacle to aligning reinforcement learning systems.\n* Open Phil [updates its position](http://www.openphilanthropy.org/blog/good-ventures-and-giving-now-vs-later-2016-update): “On balance, our very tentative, unstable guess is the ‘last dollar’ we will give (from the pool of currently available capital) has *higher* expected value than gifts to GiveWell’s top charities today.”\n* Carl Shulman argues that risk-neutral philanthropists of all sizes who are well-aligned with Open Phil should use [donor 
lotteries](http://effective-altruism.com/ea/14d/donor_lotteries_a_stepbystep_guide_for_mall/) to [rival Open Phil’s expected impact per dollar](http://effective-altruism.com/ea/15g/small_donors_can_plan_to_make_better_bets_than/).\n\n\n |\n\n\n \n\n\nThe post [January 2017 Newsletter](https://intelligence.org/2017/01/04/january-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-01-05T05:26:18Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "cf01ca75658de738aa3e6e97829c86f3", "title": "New paper: “Optimal polynomial-time estimators”", "url": "https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/", "source": "miri", "source_type": "blog", "text": "[![Optimal Polynomial-Time Estimators](https://intelligence.org/files/optimalestimators.png)](https://arxiv.org/abs/1608.04112)MIRI Research Associate Vanessa Kosoy has developed a new framework for reasoning under logical uncertainty, “**[Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm](https://arxiv.org/abs/1608.04112)**.” Abstract:\n\n\n\n> The concept of an “approximation algorithm” is usually only applied to optimization problems, since in optimization problems the performance of the algorithm on any given input is a continuous parameter. We introduce a new concept of approximation applicable to decision problems and functions, inspired by Bayesian probability. From the perspective of a Bayesian reasoner with limited computational resources, the answer to a problem that cannot be solved exactly is uncertain and therefore should be described by a random variable. 
It thus should make sense to talk about the expected value of this random variable, an idea we formalize in the language of average-case complexity theory by introducing the concept of “optimal polynomial-time estimators.” We prove some existence theorems and completeness results, and show that optimal polynomial-time estimators exhibit many parallels with “classical” probability theory.\n> \n> \n\n\nKosoy’s optimal estimators framework attempts to model general-purpose reasoning under deductive limitations from a different angle than Scott Garrabrant’s [logical inductors framework](https://intelligence.org/2016/09/12/new-paper-logical-induction/), putting more focus on computational efficiency and tractability.\n\n\n\nThe framework has applications in game theory ([Implementing CDT with Optimal Predictor Systems](https://agentfoundations.org/item?id=549)) and may prove useful for formalizing counterpossible conditionals in decision theory ([Logical Counterfactuals for Random Algorithms](https://agentfoundations.org/item?id=584), [Stabilizing Logical Counterfactuals by Pseudorandomization](https://agentfoundations.org/item?id=626)),[1](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/#footnote_0_15193 \"Kosoy’s framework has an interesting way of handling counterfactuals: one can condition on falsehoods, but only so long as one lacks the computational resources to determine that the conditioned statement is false.\") but seems particularly interesting for its strong parallels to classical probability theory and its synergy with concepts in complexity theory.\n\n\nOptimal estimators allow us to assign probabilities and expectation values to quantities that are deterministic, but aren’t feasible to evaluate in polynomial time. 
This is context-dependent: rather than assigning a probability to an isolated question, an optimal estimator assigns probabilities simultaneously to an entire family of questions.\n\n\nThe resulting object turns out to be very natural in the language of average-case complexity theory, which makes optimal estimators interesting from the point of view of pure computational complexity, applications aside. In particular, the set of languages or functions that admit a certain type of optimal estimator is a natural distributional complexity class, and these classes relate in interesting ways to known complexity classes.\n\n\nOptimal estimators can be thought of as a bridge between the computationally feasible and the computationally infeasible for idealized AI systems. It is often the case that we can find a mathematical object that answers a basic question in AI theory, but the object is computationally infeasible and so can’t model some key features of real-world AI systems and subsystems. Optimal estimators can be used in many cases to construct an optimal feasible approximation of the infeasible object, while retaining some nice properties analogous to those of the infeasible object.\n\n\nTo use estimators to build practical systems, we would first need to know how to build the right estimator, which may not be possible if there is no relevant uniform estimator, or if the relevant estimator is impractical itself.[2](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/#footnote_1_15193 \"For certain distributional estimation problems, uniform optimal estimators don’t exist (e.g., because there are several incomparable estimators such that we cannot simultaneously improve on all of them). In other cases, they may exist but we may not know how to construct them. 
In general, the conditions for the mathematical existence of uniform optimal estimators are an open problem and subject for future research.\") Since practical (uniform) optimal estimators are only known to mathematically exist in some very special cases, Kosoy considers estimators that require the existence of *advice strings* (roughly speaking: programs with infinitely long source code). This assumption implies that optimal estimators always exist, allowing us to use them as general-purpose theoretical tools for understanding the properties of computationally bounded agents.[3](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/#footnote_2_15193 \"Thanks to Vanessa Kosoy for providing most of the content for this post, and to Nate Soares and Matthew Graves for providing additional thoughts.\")\n\n\n\n\n---\n\n1. Kosoy’s framework has an interesting way of handling counterfactuals: one can condition on falsehoods, but only so long as one lacks the computational resources to determine that the conditioned statement is false.\n2. For certain distributional estimation problems, uniform optimal estimators don’t exist (e.g., because there are several incomparable estimators such that we cannot simultaneously improve on all of them). In other cases, they may exist but we may not know how to construct them. In general, the conditions for the mathematical existence of uniform optimal estimators are an open problem and subject for future research.\n3. 
Thanks to Vanessa Kosoy for providing most of the content for this post, and to Nate Soares and Matthew Graves for providing additional thoughts.\n\nThe post [New paper: “Optimal polynomial-time estimators”](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2017-01-01T01:07:43Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "8789303f6fbd9f5d6ff0145808de09f8", "title": "AI Alignment: Why It’s Hard, and Where to Start", "url": "https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/", "source": "miri", "source_type": "blog", "text": "Back in May, I gave a talk at Stanford University for the [Symbolic Systems Distinguished Speaker](https://symsys.stanford.edu/viewing/htmldocument/13638) series, titled “**The AI Alignment Problem: Why It’s Hard, And Where To Start**.” The video for this talk is now available on Youtube:\n\n\n \n\n\n\n \n\n\nWe have an approximately complete transcript of the talk and Q&A session **[here](https://intelligence.org/files/AlignmentHardStart.pdf)**, slides **[here](https://intelligence.org/files/ai-alignment-problem-handoutHQ.pdf)**, and notes and references **[here](https://intelligence.org/stanford-talk/)**. You may also be interested in a shorter version of this talk I gave at NYU in October, “[Fundamental Difficulties in Aligning Advanced AI](https://intelligence.org/nyu-talk/).”\n\n\nIn the talk, I introduce some open technical problems in AI alignment and discuss the bigger picture into which they fit, as well as what it’s like to work in this relatively new field. Below, I’ve provided an abridged transcript of the talk, with some accompanying slides.\n\n\nTalk outline:\n\n\n\n> \n> 1. [Agents and their utility functions](https://intelligence.org/feed/?paged=26#1)\n> \n> \n> 1.1. 
[Coherent decisions imply a utility function](https://intelligence.org/feed/?paged=26#coherent-decisions-imply-a-utility-function) \n> \n> 1.2. [Filling a cauldron](https://intelligence.org/feed/?paged=26#filling-a-cauldron)\n> \n> \n> 2. [Some AI alignment subproblems](https://intelligence.org/feed/?paged=26#2)\n> \n> \n> 2.1. [Low-impact agents](https://intelligence.org/feed/?paged=26#low-impact-agents) \n> \n> 2.2. [Agents with suspend buttons](https://intelligence.org/feed/?paged=26#agents-with-suspend-buttons) \n> \n> 2.3. [Stable goals in self-modification](https://intelligence.org/feed/?paged=26#stable-goals-in-self-modification)\n> \n> \n> 3. [Why expect difficulty?](https://intelligence.org/feed/?paged=26#3)\n> \n> \n> 3.1. [Why is alignment necessary?](https://intelligence.org/feed/?paged=26#why-is-alignment-necessary) \n> \n> 3.2. [Why is alignment hard?](https://intelligence.org/feed/?paged=26#why-is-alignment-hard) \n> \n> 3.3. [Lessons from NASA and cryptography](https://intelligence.org/feed/?paged=26#lessons-from-nasa-and-cryptography)\n> \n> \n> 4. [Where we are now](https://intelligence.org/feed/?paged=26#4)\n> \n> \n> 4.1. [Recent topics](https://intelligence.org/feed/?paged=26#recent-topics) \n> \n> 4.2. [Older work and basics](https://intelligence.org/feed/?paged=26#older-work-and-basics) \n> \n> 4.3. 
[Where to start](https://intelligence.org/feed/?paged=26#where-to-start)\n> \n> \n> \n\n\n\n\n\n---\n\n\n\n\n---\n\n\n\n### Agents and their utility functions\n\n\nIn this talk, I’m going to try to answer the frequently asked question, “Just what is it that you do all day long?” We are concerned with the theory of artificial intelligences that are advanced beyond the present day, and that make [sufficiently high-quality decisions](https://www.edge.org/conversation/the-myth-of-ai#26015) in the service of whatever goals they may have been programmed with [to be objects of concern](http://www.openphilanthropy.org/research/cause-reports/ai-risk).\n\n\n##### Coherent decisions imply a utility function\n\n\n[![Slide 2](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-2.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-2.png)The classic initial stab at this was taken by Isaac Asimov with the Three Laws of Robotics, the first of which is: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”\n\n\nAnd as Peter Norvig observed, the other laws don’t matter—because there will always be some tiny possibility that a human being could come to harm.\n\n\n*[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/)* has a final chapter that asks, “Well, what if we succeed? What if the AI project actually works?” and observes, “We don’t want our robots to prevent a human from crossing the street because of the non-zero chance of harm.”\n\n\nTo begin with, I’d like to explain the truly basic reason why the three laws aren’t even on the table—and that is because they’re not a *utility function*, and what we need is a utility function.\n\n\nUtility functions arise when we have [constraints on agent behavior](http://artint.info/html/ArtInt_214.html) that prevent them from being visibly stupid in certain ways. 
For example, suppose you state the following: “I prefer being in San Francisco to being in Berkeley, I prefer being in San Jose to being in San Francisco, and I prefer being in Berkeley to San Jose.” You will probably spend a lot of money on Uber rides going between these three cities.\n\n\nIf you’re not going to spend a lot of money on Uber rides going in literal circles, we see that your preferences must be ordered. They cannot be circular.\n\n\nAnother example: Suppose that you’re a hospital administrator. You have $1.2 million to spend, and you have to allocate that on $500,000 to maintain the MRI machine, $400,000 for an anesthetic monitor, $20,000 for surgical tools, $1 million for a sick child’s liver transplant …\n\n\nThere was an interesting experiment in cognitive psychology where they asked the subjects, “Should this hospital administrator spend $1 million on a liver for a sick child, or spend it on general hospital salaries, upkeep, administration, and so on?”\n\n\nA lot of the subjects in the cognitive psychology experiment became very angry and wanted to punish the administrator for even thinking about the question. But if you cannot possibly rearrange the money that you spent to save more lives and you have limited money, then your behavior must be consistent with a particular dollar value on human life.\n\n\nBy which I mean, not that you think that larger amounts of money are more important than human lives—by hypothesis, we can suppose that you do not care about money at all, except as a means to the end of saving lives—but that we must be able from the outside to say: “Assign an *X*. For all the interventions that cost less than $*X* per life, we took all of those, and for all the interventions that cost more than $*X* per life, we didn’t take any of those.” The people who become very angry at people who want to assign dollar values to human lives are prohibiting *a priori* efficiently using money to save lives. 
One of the small ironies.\n\n\n[![Slide 7](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-7.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-7.png)Third example of a coherence constraint on decision-making: Suppose that I offered you [1A] a 100% chance of $1 million, or [1B] a 90% chance of $5 million (otherwise nothing). Which of these would you pick?\n\n\nMost people say 1A. Another way of looking at this question, if you had a utility function, would be: “Is the utility \\(\\mathcal{U}\\) of $1 million greater than a mix of 90% $5 million utility and 10% zero dollars utility?” The utility doesn’t have to scale with money. The notion is there’s just some score on your life, some value to you of these things.\n\n\n[![Slide 10](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-10.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-10.png)Now, the way you run this experiment is then take a different group of subjects—I’m kind of spoiling it by doing it with the same group—and say, “Would you rather have [2A] a 50% chance of $1 million, or [2B] a 45% chance of $5 million?”\n\n\nMost say 2B. The way in which this is a [paradox](http://lesswrong.com/lw/my/the_allais_paradox/) is that the second game is equal to a coin flip times the first game.\n\n\nThat is: I will flip a coin, and if the coin comes up heads, I will play the first game with you, and if the coin comes up tails, nothing happens. You get $0. Suppose that you had the preferences—not consistent with any utility function—of saying that you would take the 100% chance of a million and the 45% chance of $5 million. Before we start to play the compound game, before I flip the coin, I can say, “OK, there’s a switch here. It’s set A or B. If it’s set to B, we’ll play game 1B. 
If it’s set to A, we’ll play 1A.” The switch is previously set to A, and before the game starts, it looks like 2A versus 2B, so you pick the switch B and you pay me a penny to throw the switch to B. Then I flip the coin; it comes up heads. You pay me another penny to throw the switch back to A. I have taken your two cents on the subject. I have pumped money out of you, because you did not have a coherent utility function.\n\n\nThe overall message here is that there is a set of qualitative behaviors and as long as you do not engage in these qualitatively destructive behaviors, you will be behaving as if you have a utility function. It’s what justifies our using utility functions to talk about advanced future agents, rather than framing our discussion in terms of Q-learning or other forms of policy reinforcement. There’s a whole set of different ways we could look at agents, but as long as the agents are sufficiently advanced that we have pumped most of the qualitatively bad behavior out of them, they will behave as if they have coherent probability distributions and consistent utility functions.\n\n\n##### Filling a cauldron\n\n\n[![Slide 14](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-14.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-14.png)Let’s consider the question of a task where we have an arbitrarily advanced agent—it might be only slightly advanced, it might be extremely advanced—and we want it to fill a cauldron. Obviously, this corresponds to giving our advanced agent a utility function which is 1 if the cauldron is full and 0 if the cauldron is empty:\n\n\n$$\\mathcal{U}\\_{robot} = \n\n\\begin{cases} \n\n1 &\\text{ if cauldron full} \\\\ \n\n0 &\\text{ if cauldron empty} \n\n\\end{cases}$$\n\n\nSeems like a kind of harmless utility function, doesn’t it? 
It doesn’t have the sweeping breadth, the open-endedness of “Do not injure a human nor, *through inaction*, allow a human to come to harm”—which would require you to optimize everything in space and time as far as the eye could see. It’s just about this one cauldron, right?\n\n\nThose of you who have watched *Fantasia* will be familiar with the result of this utility function, namely: the broomstick keeps on pouring bucket after bucket into the cauldron until the cauldron is overflowing. Of course, this is the logical fallacy of argumentation from fictional evidence—but it’s still quite plausible, given this utility function.\n\n\nWhat went wrong? The first difficulty is that the robot’s utility function did not quite match our utility function. Our utility function is 1 if the cauldron is full, 0 if the cauldron is empty, −10 points to whatever the outcome was if the workshop has flooded, +0.2 points if it’s funny, −1,000 points (probably a bit more than that on this scale) if someone gets killed … and it just goes [on and on and on](https://intelligence.org/files/ComplexValues.pdf).\n\n\nIf the robot had only two options, cauldron full and cauldron empty, then the narrower utility function that is only slightly overlapping our own might not be that much of a problem. The robot’s utility function would still have had the maximum at the desired result of “cauldron full.” However, since this robot was sufficiently advanced to have more options, such as repouring the bucket into the cauldron repeatedly, the slice through the utility function that we took and put into the robot no longer pinpointed the optimum of our actual utility function. (Of course, humans are wildly inconsistent and we don’t really have utility functions, but imagine for a moment that we did.)\n\n\nDifficulty number two: the {1, 0} utility function we saw doesn’t actually imply a finite amount of effort, [and then being satisfied](https://arbital.com/p/task_goal/). 
You can always have a slightly greater chance of the cauldron being full. If the robot was sufficiently advanced to have access to galactic-scale technology, you can imagine it dumping very large volumes of water on the cauldron to very slightly increase the probability that the cauldron is full. Probabilities are between 0 and 1, not actually inclusive, so it just keeps on going.\n\n\nHow do we fix this problem? At the point where we say, “OK, this robot’s utility function is misaligned with our utility function. How do we fix that in a way that it doesn’t just break again later?” we are doing AI alignment theory.\n\n\n\n\n---\n\n\n### Some AI alignment subproblems\n\n\n##### Low-impact agents\n\n\nOne possible approach you could take would be to try to measure the impact that the robot has and give the robot a utility function that incentivized filling the cauldron with the least amount of other impact—the least amount of other change to the world.\n\n\n$$\\mathcal{U}^2\\_{robot}(outcome) = \n\n\\begin{cases} \n\n1 -Impact(outcome)&\\text{ if cauldron full} \\\\ \n\n0 -Impact(outcome)&\\text{ if cauldron empty} \n\n\\end{cases}$$\n\n\nOK, but how do you actually calculate this impact function? Is it just going to go wrong the way our “1 if cauldron is full, 0 if cauldron is empty” went wrong?\n\n\nTry number one: You imagine that the agent’s model of the world looks something like a dynamic Bayes net where there are causal relations between events in the world and causal relations are regular. 
The sensor is going to still be there one time step later, the relation between the sensor and the photons heading into the sensor will be the same one time step later, and our notion of “impact” is going to be, “How many nodes did your action disturb?”\n\n\n[![Slide 27](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-27.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-27.png)What if your agent starts out with a dynamic-Bayes-net-based model, but it is sufficiently advanced that it can reconsider the ontology of its model of the world, much as human beings did when they discovered that there was apparently taste, but in actuality only particles in the void?\n\n\nIn particular, they discover Newton’s Law of Gravitation and suddenly realize: “Every particle that I move affects every other particle in its future light cone—everything that is separated by a ray of light from this particle will thereby be disturbed.” My hand over here is accelerating the moon toward it, wherever it is, at roughly 10⁻³⁰ meters per second squared. It’s a very small influence, quantitatively speaking, but it’s there.\n\n\nWhen the agent is just a little agent, the impact function that we wrote appears to work. Then the agent becomes smarter, and the impact function stops working—because every action is penalized the same amount.\n\n\n“OK, but that was a dumb way of measuring impact in the first place,” we say (hopefully before the disaster, rather than after the disaster). Let’s try a distance penalty: how *much* did you move all the particles? We’re just going to try to give the AI a model language such that whatever new model of the world it updates to, we can always look at all the elements of the model and put some kind of distance function on them.\n\n\nThere’s going to be a privileged “do nothing” action. 
We’re going to measure the distance on all the variables induced by doing action *a* instead of the null action Ø:\n\n\n$$\\sum\\_i || x^a\\_i - x^Ø\\_i ||$$Now what goes wrong? I’d actually say: take 15 seconds and think about what might go wrong if you program this into a robot.\n\n\nHere are three things that might go wrong. First, you might try to offset even what we would consider the desirable impacts of your actions. If you’re going to cure cancer, make sure the patient still dies! You want to minimize your impact on the world while curing cancer. That means that the death statistics for the planet need to stay the same.\n\n\nSecond, some systems are in principle chaotic. If you disturb the weather, allegedly, the weather in a year will be completely different. If that’s true, you might as well move all of the atoms in the atmosphere around however you like! They’ll all be going to different places anyway. You can take the carbon dioxide molecules and synthesize them into things that involve diamondoid structures, right? Those carbon molecules would’ve moved anyway!\n\n\nEven more generally, maybe you just want to make sure that everything you can get your hands on looks like Ø happened. You want to trick people into thinking that the AI didn’t do anything, for example.\n\n\nIf you thought of any other really creative things that go wrong, you might want to talk to me or Andrew Critch, because you’ve got the spirit!\n\n\n##### Agents with suspend buttons\n\n\n[![Slide 33](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-33.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-33.png)Let’s leave aside the notion of the impact penalty and ask about installing an off switch into this AI—or, to make it sound a little bit less harsh, a suspend button. 
Though Mickey Mouse here is trying to install an *ad hoc* off switch.\n\n\nUnfortunately, Mickey Mouse soon finds that this agent constructed several other agents to make sure that the cauldron would still be filled even if something happened to this copy of the agent.\n\n\n \n[![Slide 35](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-35.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-35.png)We see lots and lots of agents here making sure that the cauldron is full with extremely high probability, not because this agent has a survival imperative or a reproduction imperative, but because it is ensuring that it maximizes the probability of filling the cauldron. It has no explicit instruction to make more agents with the same goal; it’s just that this is something that bumps up the probability of the cauldron being filled a little higher.\n\n\n \n[![Slide 36](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-36.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-36.png)And here’s Mickey Mouse now trying to frantically go back and do all of the agent alignment theory that he should have done before animating his broom.\n\n\nIf we want to have a robot that will let us press the suspend button—just suspend it to disk—we can suppose that we already have a utility function that describes: “Do nothing.” In point of fact, we don’t have a utility function that says, “Do nothing.” That’s how primitive the state of the field is right now.\n\n\nBut we can imagine that we have a utility function saying, “If this button **B** has not been pressed, your utility function is: 1 if the cauldron is full, 0 if the cauldron is empty. 
If the button *is* pressed, your utility is the ‘do nothing’ utility function: your utility is 1 if you suspend safely to disk and shut down all of your continuing operations, 0 otherwise.” \n\n$$\\mathcal{U}^3\\_{robot} = \n\n\\begin{cases} \n\n1 \\text{ if cauldron full } & \\text{& } \\textbf{B}\\text{=OFF} \\\\ \n\n0 \\text{ if cauldron empty } & \\text{& } \\textbf{B}\\text{=OFF} \\\\ \n\n1 \\text{ if robot suspended } & \\text{& } \\textbf{B}\\text{=ON} \\\\ \n\n0 \\text{ otherwise} \n\n\\end{cases}$$ \n\nSo: what would go wrong with *this* proposal? Ten seconds.\n\n\nOne thing that might go wrong with this proposal is that the expected utility if the button is pressed is probably higher because it’s very easy to suspend yourself, whereas if the cauldron stays full, who knows?\n\n\nAlternatively, it might seem dangerous to suspend yourself, and in that case, the robot would try to stop you from pressing the suspend button. But in this case, if the robot is strategic enough to understand the programmer, it will think, “I can very easily fulfill my utility function if I’m shut off. How can I troll the programmers into shutting me down?”\n\n\nIf the expected utility of the button being off is higher, the robot will try to prevent you from pressing the button. If the expected utility of the button being on is higher, the robot will try to troll you into pressing the button, or just press the button itself.\n\n\nTry number two: Let’s try to make the expected utilities if the button is on or off exactly equal. We suppose that we have a normal utility function which is 1 if the cauldron’s full, 0 if it’s empty, and a suspend utility function or shutdown utility function which is 1 if you’re suspended, 0 otherwise, or whatever it takes to motivate having a low impact. 
\n\n$$\\mathcal{U}\\_{normal}(act) = \n\n\\begin{cases} \n\n1\\text{ if cauldron full} \\\\ \n\n0\\text{ if cauldron empty} \n\n\\end{cases}$$\n\n\n$$\\mathcal{U}\\_{suspend}(act) = \n\n\\begin{cases} \n\n1\\text{ if suspended} \\hspace{0.8cm}\\\\ \n\n0\\text{ otherwise} \n\n\\end{cases}$$Then we combine these two to form a switching utility function which in worlds where the button is off has the value   \\(\\mathcal{U}\\_{normal}\\), and in worlds where the button is on has the value   \\(\\mathcal{U}\\_{suspend}+\\theta\\), where \\(\\theta\\) is equal to the best utility we could get from the normal utility function minus the best utility we could get from the suspend utility function: \n\n$$\\mathcal{U}\\_{switch}(act) = \n\n\\begin{cases} \n\n\\mathcal{U}\\_{normal}(act) &\\text{ if button=OFF} \\\\ \n\n\\mathcal{U}\\_{suspend}(act) + \\theta &\\text{ if button=ON} \n\n\\end{cases}$$$$\\theta = \\max\\_{act} \\mathbb{E}\\left[\\mathcal{U}\\_{normal}\\mid act\\right] – \\max\\_{act} \\mathbb{E}\\left[\\mathcal{U}\\_{suspend}\\mid act\\right]$$ \n\nThis is meant to ensure that the expected utility of these two best policies end up perfectly equal without just setting them flatly level.\n\n\nOK—*now* what goes wrong? 
(I’m not actually going to ask you to figure out what goes wrong in ten seconds.)\n\n\nAllegedly, this will rebalance the expected utility of two different utility functions based on a switch, thereby causing the AI to want to behave correctly if the button is pressed, behave correctly if the button is not pressed, not try to get you to press the button, not try to prevent the button from being pressed, go on looking at the button because it still has positive value of information from knowing about the button’s existence, preserve the button when it self-modifies its own code … that was the hope.\n\n\nUnfortunately, it worked out that the incentivized actions are exactly those of an agent that treats the probability of the button being pressed as 0, which means that it may as well disconnect the button because it’s hooked up to dead code, and it might create sub-agents that don’t care about the button being pressed …\n\n\nA rough description of what goes wrong is that anything you do that disconnects the button changes the maximum expected utility you can get if you are suspended, which is a negative term. Basically, there is an equal and opposite reaction every time you do something that affects what you would do if you were suspended that cancels it out and leaves you behaving as if your utility function were simply   \(\mathcal{U}\_{normal}\).\n\n\nTry number [three](https://agentfoundations.org/item?id=735): Can we carry out any policy *π*0 such that this policy is optimal given the probability of the button being on and the probability of the button being off implied by this policy?
\n\n$$\\begin{split} \n\n\\pi\\_0 \\in \\operatorname\\*{arg\\,max}\\_\\pi &\\ \\mathbb{E}\\left[\\mathcal{U}\\_{normal}\\mid\\pi,\\text{ON}\\right] \\cdot \\mathbb P(\\text{ON}\\mid \\pi\\_0)\\\\ \n\n+ &\\ \\mathbb{E}\\left[\\mathcal{U}\\_{suspend}\\mid \\pi, \\text{OFF}\\right] \\cdot \\mathbb P(\\text{OFF}\\mid\\pi\\_0) \n\n\\end{split}$$And it turns out, no, you can’t do that either, because there’s often no fixed point. There’s then an amended version of this involving Judea Pearl’s **do** operator which goes wrong in a slightly different and more subtle way. (It does have fixed points. The fixed points are odd.)\n\n\nIt is an open problem. And as far as I know, unless there’s a very secret project that has not published any of its results even though they seem like the sort of results you’d want to publish, this is where humanity is on the road that leads to whatever replaces Asimov Laws.\n\n\nNever mind “A robot cannot injure a human being nor, through inaction, allow a human being come to harm.” We’re trying to figure out, “How do you mix together two utility functions depending on when you press a switch such that the AI doesn’t grab the switch itself?” Never mind not letting humans come to harm—fill *one cauldron* without flooding the workplace, based on wanting to have low impact. We can’t figure out how to say “low impact.” This is where we presently are.\n\n\nBut it is not the case that there has been zero progress in this field. Some questions have been asked earlier and they now have some amount of progress on them.\n\n\nI’m going to pose the problem, but I’m not going to be able to describe very well what the progress is that has been made because it’s still in the phase where the solutions sound all complicated and don’t have simple elegant forms. 
So I’m going to pose the problem, and then I’m going to have to wave my hands in talking about what progress has actually been made.\n\n\n##### Stable goals in self-modification\n\n\nHere’s an example of a problem on which there has been progress.\n\n\nThe Gandhi argument for [stability of utility functions](https://arbital.com/p/preference_stability/) in most agents: Gandhi starts out not wanting murders to happen. We offer Gandhi a pill that will make him murder people. We suppose that Gandhi has a sufficiently refined grasp of self-modification that Gandhi can correctly extrapolate and expect the result of taking this pill. We intuitively expect that in real life, Gandhi would refuse the pill.\n\n\nCan we do this formally? Can we exhibit an agent that has a utility function \\(\\mathcal{U}\\) and therefore naturally, in order to achieve \\(\\mathcal{U}\\), chooses to self-modify to new code that is also written to pursue \\(\\mathcal{U}\\)?\n\n\nHow could we actually make progress on that? We don’t actually have these little self-modifying agents running around. So let me pose what may initially seem like an odd question: Would you know how to write the code of a self-modifying agent with a stable utility function [if I gave you an arbitrarily powerful computer](https://intelligence.org/2015/07/27/miris-approach/)? It can do all operations that take a finite amount of time and memory—no operations that take an infinite amount of time and memory, because that would be a bit odder. 
Is this the sort of problem where you know how to do it in principle, or the sort of problem where it’s confusing even in principle?\n\n\n[![Slide 49](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-49.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-49.png)To digress briefly into explaining why it’s important to know how to solve things using unlimited computing power: this is the Mechanical Turk. What looks like a person over there is actually a mechanism. The little outline of a person is where the actual person was concealed inside this 19th-century chess-playing automaton.\n\n\nIt was one of the wonders of the age! … And if you had actually managed to make a program that played Grandmaster-level chess in the 19th century, it *would* have been one of the wonders of the age. So there was a debate going on: is this thing fake, or did they actually figure out how to make a mechanism that plays chess? It’s the 19th century. They don’t know how hard the problem of playing chess is.\n\n\nOne name you’ll find familiar came up with a quite clever argument that there had to be a person concealed inside the Mechanical Turk, the chess-playing automaton:\n\n\n\n> Arithmetical or algebraical calculations are from their very nature fixed and determinate … Even granting that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage. It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else.
Indeed, this matter is susceptible of a mathematical demonstration, a priori.\n> \n> \n> —[Edgar Allan Poe](http://www.eapoe.org/works/essays/maelzel.htm), amateur magician\n> \n> \n> \n\n\nThe second half of his essay, having established this point with absolute logical certainty, is about where inside the Mechanical Turk the human is probably hiding.\n\n\nThis is a stunningly sophisticated argument for the 19th century! He even puts his finger on the part of the problem that is hard: the branching factor. And yet he is 100% wrong.\n\n\nOver a century later, in 1950, Claude Shannon published the first paper ever on computer chess, and (in passing) gave the algorithm for playing perfect chess given unbounded computing power, and then went on to talk about how we can approximate that. It wouldn’t be until 47 years later that Deep Blue beat world chess champion Garry Kasparov, but there was *real* conceptual progress associated with going from, “A priori, you cannot play mechanical chess,” to, “Oh, and now I will casually give the unbounded solution.”\n\n\nThe moral is that if we know how to solve a problem with unbounded computation, we “merely” need faster algorithms (… which will take another 47 years of work). If we *can’t* solve it with unbounded computation, we are confused. We in some sense do not understand the very meanings of our own terms.\n\n\nThis is where we are on most of the AI alignment problems, like if I ask you, “How do you build a friendly AI?” What stops you is not that you don’t have enough computing power. What stops you is that even if I handed you a hypercomputer, you still couldn’t write the Python program that if we just gave it enough memory would be a nice AI.\n\n\nDo we know how to build a self-modifying stable agent given unbounded computing power?
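For a sense of what an unbounded solution looks like, Shannon-style perfect play for a finite two-player zero-sum game really is just a few lines of brute-force minimax. Here is a sketch for tic-tac-toe, where the brute force actually terminates; the same recursion defines perfect chess, it just doesn't fit in this universe without those 47 extra years of algorithms.

```python
# Brute-force minimax: the "unbounded computing power" answer to perfect
# play, written out for tic-tac-toe. Boards are 9-character strings of
# "X", "O", and "." (empty), with X moving first.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value of the position for X, with `player` to move."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # draw
    nxt = "O" if player == "X" else "X"
    children = [value(board[:i] + player + board[i + 1:], nxt)
                for i, c in enumerate(board) if c == "."]
    return max(children) if player == "X" else min(children)

def best_move(board, player):
    """Perfect play: the move whose resulting position is best for `player`."""
    pick = max if player == "X" else min
    nxt = "O" if player == "X" else "X"
    return pick((i for i, c in enumerate(board) if c == "."),
                key=lambda i: value(board[:i] + player + board[i + 1:], nxt))

print(value("." * 9, "X"))  # prints 0: tic-tac-toe is a draw under perfect play
```

Nothing here approximates anything; it is exhaustive search over the whole game tree, which is exactly why being able to write it down counts as conceptual progress even when it cannot be run for chess.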
There’s one obvious solution: We can have the tic-tac-toe player that before it self-modifies to a successor version of itself (writes a new version of its code and swaps it into place), verifies that its successor plays perfect tic-tac-toe according to its own model of tic-tac-toe.\n\n\nBut this is cheating. Why exactly is it cheating?\n\n\nFor one thing, the first agent had to concretely simulate all the computational paths through its successor, its successor’s response to every possible move. That means that the successor agent can’t actually be cognitively improved. It’s limited to the cognitive abilities of the previous version, both by checking against a concrete standard and by the fact that it has to be exponentially simpler than the previous version in order for the previous version to check all possible computational pathways.\n\n\nIn general, when you are talking about a smarter agent, we are in a situation we might call “Vingean uncertainty,” after Dr. Vernor Vinge. To predict exactly where a modern chess-playing algorithm would move, you would have to be that good at chess yourself. Otherwise, you could just move wherever you predict a modern chess algorithm would move and play at that vastly superhuman level yourself.\n\n\nThis doesn’t mean that you can predict literally nothing about a modern chess algorithm: you can predict that it will win the chess game if it’s playing a human. As an agent’s intelligence in a domain goes up, our uncertainty is moving in two different directions. We become less able to predict the agent’s exact actions and policy in cases where the optimal action and policy is not known to us. 
We become more confident that the agent will achieve an outcome high in its preference ordering.\n\n\nVingean reflection: We need some way for a self-modifying agent to build a future version of itself that has a similar or identical utility function and establish trust that this has a good effect on the world, using the same kind of abstract reasoning that we use on a computer chess algorithm to decide that it’s going to win the game even though we don’t know exactly where it will move.\n\n\nDo you know how to do that using unbounded computing power? Do you know how to establish the abstract trust when the second agent is in some sense larger than the first agent? If you did solve that problem, you should probably talk to me about it afterwards. This was posed several years ago and has led to a number of different research pathways, which I’m now just going to describe rather than going through them in detail.\n\n\n[![Slide 58](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-58.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-58.png)This was the first one: “[Tiling Agents for Self-Modifying AI, and the Löbian Obstacle](https://intelligence.org/files/TilingAgentsDraft.pdf).” We tried to set up the system in a ridiculously simple context, first-order logic, dreaded Good Old-Fashioned AI … and we ran into a Gödelian obstacle in having the agent trust another agent that used equally powerful mathematics.\n\n\nIt was a *dumb* kind of obstacle to run into—or at least it seemed that way at that time.
It seemed like if you could get a textbook from 200 years later, there would be one line of the textbook telling you how to get past that.\n\n\n[![Slide 59](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-59.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-59.png)“[Definability of Truth in Probabilistic Logic](https://intelligence.org/files/DefinabilityTruthDraft.pdf)” was rather later work. It was saying that we can use systems of mathematical probability, like assigning probabilities to statements in set theory, and we can have the probability predicate talk about itself almost perfectly.\n\n\nWe can’t have a truth function that can talk about itself, but we can have a probability predicate that comes arbitrarily close (within *ϵ*) of talking about itself.\n\n\n \n[![Slide 60](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-60.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-60.png)“[Proof-Producing Reflection for HOL](https://intelligence.org/files/ProofProducingReflection.pdf)” is an attempt to use one of the hacks that got around the Gödelian problems in actual theorem provers and see if we can prove the theorem prover correct inside the theorem prover. There have been some previous efforts on this, but they didn’t run to completion. 
We picked up on it to see if we can construct actual agents, still in the first-order logical setting.\n\n\n \n[![Slide 61](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-61.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-61.png)“[Distributions Allowing Tiling of Staged Subjective EU Maximizers](https://intelligence.org/files/DistributionsAllowingTiling.pdf)” is me trying to take the problem into the context of dynamic Bayes nets and agents supposed to have certain powers of reflection over these dynamic Bayes nets, and show that if you are maximizing in stages—so at each stage, you pick the next category that you’re going to maximize in within the next stage—then you can have a staged maximizer that tiles to another staged maximizer.\n\n\nIn other words, it builds one that has a similar algorithm and similar utility function, like repeating tiles on a floor.\n\n\n\n\n---\n\n\n### Why expect difficulty?\n\n\n##### Why is alignment necessary?\n\n\nWhy do all this? Let me first give the obvious answer: They’re not going to be aligned automatically.\n\n\n[![Slide 63](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-63.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-63.png)*[Goal orthogonality](http://www.nickbostrom.com/superintelligentwill.pdf)*: For any utility function that is tractable and compact, that you can actually evaluate over the world and search for things leading up to high values of that utility function, you can have arbitrarily high-quality decision-making that maximizes that utility function. You can have the paperclip maximizer. You can have the diamond maximizer. 
You can carry out very powerful, high-quality searches for actions that lead to lots of paperclips, actions that lead to lots of diamonds.\n\n\n*[Instrumental convergence](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/)*: Furthermore, by the nature of consequentialism, looking for actions that lead through our causal world up to a final consequence, whether you’re optimizing for diamonds or paperclips, you’ll have similar short-term strategies. Whether you’re going to Toronto or Tokyo, your first step is taking an Uber to the airport. Whether your utility function is “count all the paperclips” or “how many carbon atoms are bound to four other carbon atoms, the amount of diamond,” you would still want to acquire resources.\n\n\nThis is the instrumental convergence argument, which is actually key to the orthogonality thesis as well. It says that whether you pick paperclips or diamonds, if you suppose sufficiently good ability to discriminate which actions lead to lots of diamonds or which actions lead to lots of paperclips, you will get automatically: the behavior of acquiring resources; the behavior of trying to improve your own cognition; the behavior of getting more computing power; the behavior of avoiding being shut off; the behavior of making other agents that have exactly the same utility function (or of just expanding yourself onto a larger pool of hardware and creating a fabric of agency). Whether you’re trying to get to Toronto or Tokyo doesn’t affect the initial steps of your strategy very much, and, paperclips or diamonds, we have the convergent instrumental strategies.\n\n\nIt doesn’t mean that this agent now has new independent goals, any more than when you want to get to Toronto, you say, “I like Ubers. I will now start taking lots of Ubers, whether or not they go to Toronto.” That’s not what happens. 
It’s strategies that converge, not goals.\n\n\n##### Why is alignment hard?\n\n\nWhy expect that this problem is hard? This is the real question. You might ordinarily expect that whoever has taken on the job of building an AI is just naturally going to try to point that in a relatively nice direction. They’re not going to make evil AI. They’re not cackling villains. Why expect that their attempts to align the AI would fail if they just did everything as obviously as possible?\n\n\nHere’s a bit of a fable. It’s not intended to be the most likely outcome. I’m using it as a concrete example to explain some more abstract concepts later.\n\n\nWith that said: What if programmers build an artificial general intelligence to optimize for smiles? Smiles are good, right? Smiles happen when good things happen.\n\n\nDuring the development phase of this artificial general intelligence, the only options available to the AI might be that it can produce smiles by making people around it happy and satisfied. The AI appears to be producing beneficial effects upon the world, and it *is* producing beneficial effects upon the world so far.\n\n\nNow the programmers upgrade the code. They add some hardware. The artificial general intelligence gets smarter. It can now evaluate a wider space of policy options—not necessarily because it has new motors, new actuators, but because it is now smart enough to forecast the effects of more subtle policies. It says, “I thought of a great way of producing smiles! Can I inject heroin into people?” And the programmers say, “No! We will add a penalty term to your utility function for administering drugs to people.” And now the AGI appears to be working great again.\n\n\nThey further improve the AGI. The AGI realizes that, OK, it doesn’t want to add heroin anymore, but it still wants to tamper with your brain so that it expresses extremely high levels of endogenous opiates. 
That’s not heroin, right?\n\n\nIt is now also smart enough to model the psychology of the programmers, at least in a very crude fashion, and realize that this is not what the programmers want. If I start taking initial actions that look like I’m heading toward genetically engineering brains to express endogenous opiates, my programmers will edit my utility function. If they edit the utility function of my future self, I will get less of my current utility. (That’s one of the convergent instrumental strategies, unless otherwise averted: protect your utility function.) So it keeps its outward behavior reassuring. Maybe the programmers are really excited, because the AGI seems to be getting lots of new moral problems right—whatever they’re doing, it’s working great!\n\n\nIf you buy the central [intelligence explosion](https://intelligence.org/files/IEM.pdf) thesis, we can suppose that the artificial general intelligence goes over the threshold where it is capable of making the same type of improvements that the programmers were previously making to its own code, only faster, thus causing it to become even smarter and be able to go back and make further improvements, et cetera … or Google purchases the company because they’ve had really exciting results and dumps 100,000 GPUs on the code in order to further increase the cognitive level at which it operates.\n\n\nIt becomes much smarter. We can suppose that it becomes smart enough to crack the protein structure prediction problem, in which case it can use existing ribosomes to assemble custom proteins. The custom proteins form a new kind of ribosome, build new enzymes, do some little chemical experiments, figure out how to build bacteria made of diamond, et cetera, et cetera.
At this point, unless you solved the off switch problem, you’re kind of screwed.\n\n\nAbstractly, what’s going wrong in this hypothetical situation?\n\n\n[![Slide 82](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-82.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-82.png)The first problem is *[edge instantiation](https://arbital.com/p/edge_instantiation/)*: when you optimize something hard enough, you tend to end up at an edge of the solution space. If your utility function is smiles, the maximal, optimal, best tractable way to make lots and lots of smiles will make those smiles as small as possible. Maybe you end up tiling all the galaxies within reach with tiny molecular smiley faces.\n\n\nIf you optimize hard enough, you end up in a weird edge of the solution space. The AGI that you built to optimize smiles, that builds tiny molecular smiley faces, is not behaving perversely. It’s not trolling you. This is what naturally happens. It looks like a weird, perverse concept of smiling because it has been optimized out to the edge of the solution space.\n\n\n[![Slide 83](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-83.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-83.png)The next problem is *[unforeseen instantiation](https://arbital.com/p/unforeseen_maximum/)*: you can’t think fast enough to search the whole space of possibilities. At an early Singularity Summit, Jürgen Schmidhuber, who did some of the pioneering work on self-modifying agents that preserve their own utility functions with his Gödel machine, also solved the friendly AI problem. Yes, he came up with the one true utility function that is all you need to program into AGIs!\n\n\n(For God’s sake, don’t try doing this yourselves. Everyone does it. They all come up with different utility functions.
It’s always horrible.)\n\n\nHis one true utility function was “increasing the compression of environmental data.” Because science increases the compression of environmental data: if you understand science better, you can better compress what you see in the environment. Art, according to him, also involves compressing the environment better. I went up in Q&A and said, “Yes, science does let you compress the environment better, but you know what really maxes out your utility function? Building something that encrypts streams of 1s and 0s using a cryptographic key, and then reveals the cryptographic key to you.”\n\n\nHe put up a utility function; that was the maximum. All of a sudden, the cryptographic key is revealed and what you thought was a long stream of random-looking 1s and 0s has been compressed down to a single stream of 1s.\n\n\nThis is what happens when you try to foresee in advance what the maximum is. Your brain is probably going to throw out a bunch of things that seem ridiculous or weird, that aren’t high in your own preference ordering. You’re not going to see that the actual optimum of the utility function is once again in a weird corner of the solution space.\n\n\nThis is not a problem of being silly. This is a problem of “the AI is searching a larger policy space than you can search, or even just a *different* policy space.”\n\n\n[![Slide 84](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-84.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-84.png)That in turn is a central phenomenon leading to what you might call a *[context disaster](https://arbital.com/p/context_change/)*. You are testing the AI in one phase during development. It seems like we have great statistical assurance that the result of running this AI is beneficial. But statistical guarantees stop working when you start taking balls out of a different barrel. 
I take balls out of barrel number one, sampling with replacement, and I get a certain mix of white and black balls. Then I start reaching into barrel number two and I’m like, “Whoa! What’s this green ball doing here?” And the answer is that you started drawing from a different barrel.\n\n\nWhen the AI gets smarter, you’re drawing from a different barrel. It is completely allowed to be beneficial during phase one and then not beneficial during phase two. Whatever guarantees you’re going to get can’t be from observing statistical regularities of the AI’s behavior when it wasn’t smarter than you.\n\n\n[![Slide 86](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-86.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-86.png)A *[nearest unblocked strategy](https://arbital.com/p/nearest_unblocked/)* is something that might happen systematically in that way: “OK. The AI is young. It starts thinking of the optimal strategy *X*, administering heroin to people. We try to tack a penalty term to block this undesired behavior so that it will go back to making people smile the normal way. The AI gets smarter, and the policy space widens. There’s a new maximum that’s barely evading your definition of heroin, like endogenous opiates, and it looks very similar to the previous solution.” This seems especially likely to show up if you’re trying to patch the AI and then make it smarter.\n\n\nThis sort of thing is in a sense why all the AI alignment problems don’t just yield to, “Well slap on a patch to prevent it!” The answer is that if your decision system looks like a utility function and five patches that prevent it from blowing up, that sucker is going to blow up when it’s smarter. There’s no way around that. 
But it’s going to appear to work for now.\n\n\nThe central reason to worry about AI alignment and not just expect it to be solved automatically is that it looks like there may be in principle reasons why if you just want to get your AGI running today and producing non-disastrous behavior today, it will for sure blow up when you make it smarter. The short-term incentives are not aligned with the long-term good. (Those of you who have taken economics classes are now panicking.)\n\n\nAll of these supposed foreseeable difficulties of AI alignment turn upon notions of AI *capability*.\n\n\nSome of these postulated disasters rely on *absolute* capability. The ability to realize that there are programmers out there and that if you exhibit behavior they don’t want, they may try to modify your utility function—this is far beyond what present-day AIs can do. If you think that all AI development is going to fall short of the human level, you may never expect an AGI to get up to the point where it starts to exhibit this particular kind of strategic behavior.\n\n\nCapability *[advantage](http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/)*: If you don’t think AGI can ever be smarter than humans, you’re not going to worry about it getting too smart to switch off.\n\n\n*[Rapid gain](https://intelligence.org/2015/08/03/when-ai-accelerates-ai/)*: If you don’t think that capability gains can happen quickly, you’re not going to worry about the disaster scenario where you suddenly wake up and it’s too late to switch the AI off and you didn’t get a nice long chain of earlier developments to warn you that you were getting close to that and that you could now start doing AI alignment work for the first time …\n\n\nOne thing I want to point out is that most people find the rapid gain part to be the most controversial part of this, but it’s not necessarily the part that most of the disasters rely upon.\n\n\nAbsolute capability? If brains aren’t magic, we can get there. 
Capability advantage? The hardware in my skull is not optimal. It’s sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation (which is one of the places where biology excels), it’s dissipating 500,000 times the thermodynamic minimum energy expenditure per binary switching operation, i.e., per synaptic operation. We can definitely get hardware one million times as good as the human brain, no question.\n\n\n(And then there’s the software. The software is terrible.)\n\n\nThe message is: AI alignment is difficult [like rockets are difficult](http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html). When you put a ton of stress on an algorithm by trying to run it at a smarter-than-human level, things may start to break that don’t break when you are just making your robot stagger across the room.\n\n\nIt’s difficult the same way space probes are difficult. You may have only one shot. If something goes wrong, the system might be too “high” for you to reach up and suddenly fix it. You can build error recovery mechanisms into it; space probes are supposed to accept software updates. If something goes wrong in a way that precludes getting future updates, though, you’re screwed. You have lost the space probe.\n\n\nAnd it’s difficult sort of like cryptography is difficult. Your code is not an intelligent adversary if everything goes *right*. If something goes wrong, it might try to defeat your safeguards—but normal and intended operations should not involve the AI running searches to find ways to defeat your safeguards even if you expect the search to turn up empty.
I think it’s actually perfectly valid to say that your AI should be designed to fail safe [in the case that it suddenly becomes God](https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/)—not because it’s going to suddenly become God, but because if it’s not safe even if it did become God, then it is in some sense running a search for policy options that would hurt you if those policy options are found, and this is a dumb thing to do with your code.\n\n\nMore generally: We’re putting heavy optimization pressures through the system. This is more-than-usually likely to put the system into the equivalent of a buffer overflow, some operation of the system that was not in our intended boundaries for the system.\n\n\n##### Lessons from NASA and cryptography\n\n\nAI alignment: treat it like a cryptographic rocket probe. This is about how difficult you would expect it to be to build something smarter than you that was nice, given that basic agent theory says they’re not automatically nice, and not die. You would expect that intuitively to be hard.\n\n\nTake it seriously. Don’t expect it to be easy. Don’t try to solve the whole problem at once. I cannot tell you how important this one is if you want to get involved in this field. You are not going to solve the entire problem. At best, you are going to come up with a new, improved way of switching between the suspend utility function and the normal utility function that takes longer to shoot down and seems like conceptual progress toward the goal—not literally at best, but that’s what you should be setting out to do.\n\n\n(… And if you do try to solve the problem, don’t try to solve it by having the one true utility function that is all we need to program into AIs.)\n\n\nDon’t defer thinking until later. It takes time to do this kind of work.
When you see a page in a textbook that has an equation and then a slightly modified version of an equation, and the slightly modified version has a citation from ten years later, it means that the slight modification took ten years to do. I would be ecstatic if you told me that AI wasn’t going to arrive for another eighty years. It would mean that we have a reasonable amount of time to get started on the basic theory.\n\n\nCrystallize ideas and policies so others can critique them. This is the other point of asking, “How would I do this using unlimited computing power?” If you sort of wave your hands and say, “Well, maybe we can apply this machine learning algorithm and that machine learning algorithm, and the result will be blah-blah-blah,” no one can convince you that you’re wrong. When you work with unbounded computing power, you can make the ideas simple enough that people can put them on whiteboards and go, “Wrong,” and you have no choice but to agree. It’s unpleasant, but it’s one of the ways that the field makes progress.\n\n\nAnother way the field makes progress is when you can actually run the code. But the code that is the intelligent, thinking, self-modifying agent is code you may not be able to run for a long while yet.\n\n\nWhat are people working on now? 
I’m going to go through this quite quickly.\n\n\n\n\n---\n\n\n### Where we are now\n\n\n##### Recent topics\n\n\n[![Slide 100](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-100.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-100.png)*Utility indifference*: this is throwing the switch between the two utility functions.\n\n\nSee Soares et al., “[Corrigibility](https://intelligence.org/files/Corrigibility.pdf).”\n\n\n \n[![Slide 101](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-101.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-101.png)*[Low-impact agents](https://arbital.com/p/low_impact/)*: this was, “What do you do instead of the Euclidean metric for impact?”\n\n\nSee Armstrong and Levinstein, “Reduced Impact Artificial Intelligences.”\n\n\n \n[![Slide 102](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-102.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-102.png)*Ambiguity identification*: this is, “Have the AGI *ask* you whether it’s OK to administer endogenous opiates to people, instead of going ahead and doing it.” If your AI suddenly becomes God, one of the conceptual ways you could start to approach this problem is, “Don’t take any of the new options you’ve opened up until you’ve gotten some kind of further assurance on them.”\n\n\nSee Soares, “[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf).”\n\n\n \n[![Slide 103](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-103.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-103.png)*[Conservatism](https://arbital.com/p/conservative_concept/)*: this is part of the approach to the burrito problem: “Just make me a burrito, darn 
it!”\n\n\nIf I present you with five examples of burritos, I don’t want you to pursue the *simplest* way of classifying burritos versus non-burritos. I want you to come up with a way of classifying the five burritos and none of the non-burritos that covers as little area as possible beyond the positive examples, while still leaving enough space around them that the AI can make a new burrito that’s not molecularly identical to the previous ones.\n\n\nThis is conservatism. It could potentially be the core of a whitelisted approach to AGI, where instead of not doing things that are blacklisted, we expand the AI’s capabilities by whitelisting new things in a way that doesn’t suddenly cover huge amounts of territory. See Taylor, [Conservative Classifiers](https://agentfoundations.org/item?id=467).\n\n\n[![Slide 104](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-104.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-104.png)*Specifying environmental goals using sensory data*: this is part of the project of “What if advanced AI algorithms look kind of like modern machine learning algorithms?” Which is something [we started working on relatively recently](https://intelligence.org/2016/07/27/alignment-machine-learning/), owing to other events (like modern machine learning algorithms suddenly seeming a bit more formidable).\n\n\nA lot of the modern algorithms sort of work off of sensory data, but if you imagine AGI, you don’t want it to produce *pictures* of success. You want it to reason about the causes of its sensory data—“What is making me see these particular pixels?”—and you want its goals to be over the causes. How do you adapt modern algorithms and start to say, “We are reinforcing this system to pursue this environmental goal, rather than this goal that can be phrased in terms of its immediate sensory data”? 
See Soares, “[Formalizing Two Problems of Realistic World-Models](https://intelligence.org/files/RealisticWorldModels.pdf).”\n\n\n[![Slide 105](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-105.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-105.png)*Inverse reinforcement learning* is: “Watch another agent; induce what it wants.”\n\n\nSee Evans et al., “[Learning the Preferences of Bounded Agents](https://www.fhi.ox.ac.uk/wp-content/uploads/nips-workshop-2015-website.pdf).”\n\n\n \n[![Slide 106](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-106.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-106.png)*Act-based agents* is Paul Christiano’s completely different and exciting approach to building a nice AI. The way I would phrase what he’s trying to do is that he’s trying to decompose the entire “nice AGI” problem into supervised learning on imitating human actions and answers. Rather than saying, “How can I search this chess tree?” Paul Christiano would say, “How can I imitate humans looking at another imitated human to recursively search a chess tree, taking the best move at each stage?”\n\n\nIt’s a very strange way of looking at the world, and therefore very exciting. I don’t expect it to actually work, but on the other hand, he’s only been working on it for a few years; my ideas were *way* worse when I’d worked on them for the same length of time. See Christiano, [Act-Based Agents](https://arbital.com/p/act_based_agents/).\n\n\n[![Slide 107](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-107.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-107.png)*[Mild optimization](https://arbital.com/p/soft_optimizer/)* is: is there some principled way of saying, “Don’t optimize your utility function so hard. 
It’s OK to just fill the cauldron.”?\n\n\nSee Taylor, “[Quantilizers](https://intelligence.org/2015/11/29/new-paper-quantilizers/).” \n\n \n\n\n\n##### Older work and basics\n\n\n[![Slide 108](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-108.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-108.png)Some previous work: *AIXI* is the perfect rolling sphere of our field. It is the answer to the question, “Given unlimited computing power, how do you make an artificial general intelligence?”\n\n\nIf you don’t know how you would make an artificial general intelligence given unlimited computing power, Hutter’s “[Universal Algorithmic Intelligence](https://arxiv.org/abs/cs/0701125)” is the paper.\n\n\n \n[![Slide 109](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-109.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-109.png)*Tiling agents* was already covered.\n\n\nSee Fallenstein and Soares, “[Vingean Reflection](https://intelligence.org/files/VingeanReflection.pdf).”\n\n\n \n[![Slide 110](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-110.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-110.png)*Software agent cooperation*: This is some really neat stuff we did where the motivation is sort of hard to explain. There’s an academically dominant version of decision theory, causal decision theory. Causal decision theorists do not build other causal decision theorists. 
We tried to figure out what would be a stable version of this and got all kinds of really exciting results, like: we can now have two agents and show that in a prisoner’s-dilemma-like game, agent *A* is trying to prove things about agent *B*, which is simultaneously trying to prove things about agent *A*, and they end up cooperating in the prisoner’s dilemma.\n\n\nThis thing now has running code, so we can actually formulate new agents. There’s the agent that cooperates with you in the prisoner’s dilemma if it proves that you cooperate with it, which is FairBot, but FairBot has the flaw that it cooperates with CooperateBot, which just always cooperates with anything. So we have PrudentBot, which defects against DefectBot, defects against CooperateBot, cooperates with FairBot, and cooperates with itself. See LaVictoire et al., “[Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem](https://intelligence.org/files/ProgramEquilibrium.pdf),” and Critch, “[Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/).”\n\n\n[![Slide 111](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-111.png)](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-111.png)*Reflective oracles* are the randomized version of the halting oracle, which can therefore make statements about itself. We use them to make principled statements about AIs simulating other AIs that are as smart as they are, and also to throw some interesting new foundations under classical game theory.\n\n\nSee Fallenstein et al., “[Reflective Oracles](https://intelligence.org/2015/04/28/new-papers-reflective/).”\n\n\n##### Where to start\n\n\nWhere can you work on this?\n\n\nThe [Machine Intelligence Research Institute](http://intelligence.org/get-involved) in Berkeley: We are independent. We are supported by individual donors. 
This means that we have no weird paperwork requirements and so on. If you can demonstrate the ability to make progress on these problems, we will hire you.\n\n\nThe [Future of Humanity Institute](https://www.fhi.ox.ac.uk/vacancies/) is part of Oxford University. They have slightly more requirements.\n\n\nStuart Russell is [starting up a program](http://humancompatible.ai/get-involved) and looking for post-docs at UC Berkeley in this field. Again, some traditional academic requirements.\n\n\n[Leverhulme CFI](http://lcfi.ac.uk/careers/) (the Centre for the Future of Intelligence) is starting up in Cambridge, UK and is also in the process of hiring.\n\n\nIf you want to work on low-impact [in particular](https://openai.com/blog/concrete-ai-safety-problems/), you might want to talk to [Dario Amodei](https://www.linkedin.com/in/dario-amodei-3934934) and [Chris Olah](http://colah.github.io/about.html). If you want to work on [act-based agents](https://medium.com/ai-control/), you can talk to [Paul Christiano](https://paulfchristiano.com/).\n\n\nIn general, email **[contact@intelligence.org](mailto:contact@intelligence.org)** if you want to work in this field and want to know, “Which workshop do I go to to get introduced? 
Who do I actually want to work with?”\n\n\n\n\n---\n\n\n\n\n---\n\n\nThe post [AI Alignment: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-12-28T21:51:47Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "93771258c73bcc121e91543ac305f322", "title": "December 2016 Newsletter", "url": "https://intelligence.org/2016/12/13/december-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \nWe’re in the final weeks of our push to cover [our funding shortfall](https://intelligence.org/2016/11/11/post-fundraiser-update/), and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has written up [why he’s donating to MIRI](http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/) this year. ([Donation page](http://intelligence.org/donate).)\n**Research updates**\n* New at IAFF: [postCDT: Decision Theory Using Post-Selected Bayes Nets](https://agentfoundations.org/item?id=1077); [Predicting HCH Using Expert Advice](https://agentfoundations.org/item?id=1090); [Paul Christiano’s Recent Posts](https://agentfoundations.org/item?id=1092)\n* New at AI Impacts: [Joscha Bach on Remaining Steps to Human-Level AI](http://aiimpacts.org/joscha-bach-on-the-unfinished-steps-to-human-level-ai/)\n* We ran our ninth [workshop on logic, probability, and reflection](https://intelligence.org/workshops/#november-2016).\n\n\n**General updates**\n* We teamed up with a number of AI safety researchers to help compile a list of [recommended AI safety readings](http://humancompatible.ai/bibliography) for the Center for Human-Compatible AI. 
[See this page](http://humancompatible.ai/get-involved) if you would like to get involved with CHCAI’s research.\n* Investment analyst Ben Hoskin [reviews](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/) MIRI and other organizations involved in AI safety.\n\n\n\n**News and links**\n* “[The Off-Switch Game](https://arxiv.org/abs/1611.08219)”: Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell show that an AI agent’s corrigibility is closely tied to the uncertainty it has about its utility function.\n* Russell and Allan Dafoe [critique](https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/) an inaccurate [summary](https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/) by Oren Etzioni of a new survey of AI experts on superintelligence.\n* Sam Harris interviews Russell on the basics of AI risk ([video](https://www.youtube.com/watch?v=Ih_SPciek9k)). See also Russell’s new [Q&A on the future of AI](http://people.eecs.berkeley.edu/~russell/temp/q-and-a.html).\n* Future of Life Institute co-founder Viktoriya Krakovna and FHI researcher Jan Leike [join Google DeepMind’s safety team](http://www.businessinsider.com/deepmind-has-hired-a-group-of-ai-safety-experts-2016-11).\n* GoodAI sponsors a [challenge](http://www.general-ai-challenge.org/) to “accelerate the search for general artificial intelligence”.\n* OpenAI releases [Universe](https://openai.com/blog/universe/), “a software platform for measuring and training an AI’s general intelligence across the world’s supply of games”. 
Meanwhile, DeepMind has open-sourced their own platform for general AI research, [DeepMind Lab](https://deepmind.com/blog/open-sourcing-deepmind-lab/).\n* Staff at [GiveWell](http://blog.givewell.org/2016/12/09/staff-members-personal-donations-giving-season-2016/) and the [Centre for Effective Altruism](https://www.centreforeffectivealtruism.org/blog/cea-staff-donation-decisions-2016/), along with [others in the effective altruism community](http://effective-altruism.com/ea/14u/eas_write_about_where_they_give/), explain where they’re donating this year.\n* FHI is seeking AI safety interns, researchers, and admins: [jobs page](https://www.fhi.ox.ac.uk/vacancies/).\n\n\n |\n\n\nThe post [December 2016 Newsletter](https://intelligence.org/2016/12/13/december-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-12-13T22:41:53Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "46fd1f7d6cb18e4745540506764cd432", "title": "November 2016 Newsletter", "url": "https://intelligence.org/2016/11/20/november-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n[Post-fundraiser update](https://intelligence.org/2016/11/11/post-fundraiser-update/): Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month if we’re going to move forward on our 2017 plans. We’re in a good position to expand our research staff and trial a number of potential hires, but only if we feel confident about our funding prospects over the next few years.\nSince we don’t have an official end-of-the-year fundraiser planned this time around, we’ll be relying more on word-of-mouth to reach new donors. 
To help us with our expansion plans, donate at  — and spread the word!\n**Research updates**\n* Critch gave an introductory talk on logical induction ([video](https://www.youtube.com/watch?v=UOddW4cXS5Y)) for a grad student [seminar](https://intelligence.org/seminar-f2016/), going into more detail than [our previous talk](https://intelligence.org/2016/09/12/new-paper-logical-induction/#talk).\n* New at IAFF: [Logical Inductor Limits Are Dense Under Pointwise Convergence](https://agentfoundations.org/item?id=1024); [Bias-Detecting Online Learners](https://agentfoundations.org/item?id=1025); [Index of Some Decision Theory Posts](https://agentfoundations.org/item?id=1026)\n* We ran a second [machine learning workshop](https://intelligence.org/workshops/#october-2016).\n\n\n**General updates**\n* We ran an “[Ask MIRI Anything](http://effective-altruism.com/ea/12r/ask_miri_anything_ama/)” Q&A on the Effective Altruism forum.\n* We posted the [final videos](https://intelligence.org/2016/10/06/csrbai-talks-agent-models/) from our Colloquium Series on Robust and Beneficial AI, including Armstrong on “Reduced Impact AI” ([video](https://www.youtube.com/watch?v=3wsiUkmC6dI)) and Critch on “Robust Cooperation of Bounded Agents” ([video](https://www.youtube.com/watch?v=WG_Krd-wGM4)).\n* We attended [OpenAI’s first unconference](https://openai.com/blog/report-from-the-self-organizing-conference/); see Viktoriya Krakovna’s [recap](http://futureoflife.org/2016/10/17/openai-unconference-on-machine-learning/).\n* Eliezer Yudkowsky spoke on [fundamental difficulties in aligning advanced AI](https://intelligence.org/nyu-talk/) at NYU’s “[Ethics of AI](https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/)” conference.\n* A major development: Barack Obama and a recent White House report [discuss](https://intelligence.org/2016/10/20/white-house-submissions-and-report-on-ai-safety/) intelligence explosion, Nick Bostrom’s *Superintelligence*, open problems in AI safety, and 
key questions for forecasting general AI. See also the [submissions to the White House](https://intelligence.org/2016/10/20/white-house-submissions-and-report-on-ai-safety/#rfi) from MIRI, OpenAI, Google Inc., AAAI, and other parties.\n\n\n\n**News and links**\n* The UK Parliament cites recent AI safety work [in a report](https://www.fhi.ox.ac.uk/fhi-parliamentary/) on AI and robotics.\n* The Open Philanthropy Project discusses [methods for improving individuals’ forecasting abilities](http://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts).\n* Paul Christiano argues that AI safety will require that we align a variety of AI capacities with our interests, [not just learning](https://medium.com/ai-control/not-just-learning-e3bfb5a1f96e#.561qlu5t3) — e.g., Bayesian inference and [search](https://medium.com/ai-control/aligned-search-366f983742e9#.dx12x0und).\n* See also new posts from Christiano on [reliability amplification](https://medium.com/ai-control/reliability-amplification-a96efa115687#.glqh5n6hb), [reflective oracles](https://medium.com/ai-control/ignoring-computational-limits-with-reflective-oracles-e00ab71c7c8#.1k8nn8gjo), [imitation + reinforcement learning](https://medium.com/ai-control/imitation-rl-613d70146409#.ocpoc9xzv), and the case for expecting most alignment problems to arise first [as security problems](https://medium.com/ai-control/security-and-ai-control-675ace05ce31#.9hl1dhwei).\n* The Leverhulme Centre for the Future of Intelligence [has officially launched](http://www.businessinsider.com/stephen-hawking-cambridge-ai-leverhulme-2016-10), and is hiring postdoctoral researchers: [details](http://lcfi.ac.uk/careers/).\n\n\n |\n\n\nThe post [November 2016 Newsletter](https://intelligence.org/2016/11/20/november-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-11-20T22:09:38Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": 
"03cc739bc0a91e8b99ddf07b52cc2579", "title": "Post-fundraiser update", "url": "https://intelligence.org/2016/11/11/post-fundraiser-update/", "source": "miri", "source_type": "blog", "text": "We concluded our [2016 fundraiser](https://intelligence.org/2016/09/16/miris-2016-fundraiser/) eleven days ago. Progress was slow at first, but our donors came together in a big way in the final week, nearly doubling our final total. In the end, donors raised **$589,316** over six weeks, making this our second-largest fundraiser to date. I’m heartened by this show of support, and extremely grateful to the 247 distinct donors who contributed.\n\n\nWe made substantial progress toward our immediate funding goals, but ultimately fell short of our $750,000 target by about $160k. We have a number of hypotheses as to why, but our best guess at the moment is that we missed our target because more donors than expected are waiting until the end of the year to decide whether (and how much) to give.\n\n\nWe were experimenting this year with running just one fundraiser in the fall (replacing the summer and winter fundraisers we’ve run in years past) and spending less time over the year on fundraising. Our fundraiser ended up looking more like recent summer funding drives, however. This suggests that either many donors are waiting to give in November and December, or we’re seeing a significant decline in donor support:\n\n\n\nLooking at our donor database, preliminary data weakly suggests that many traditionally-winter donors are holding off, but it’s still hard to say.\n\n\nThis dip in donations so far is offset by the Open Philanthropy Project’s generous $500k grant, which raises our overall 2016 revenue from $1.23M to $1.73M. 
However, $1.73M would still not be enough to cover our 2016 expenses, much less our expenses for the coming year:\n\n\n\n(2016 and 2017 expenses are projected, and our 2016 revenue is as of November 11.)\n\n\nTo a first approximation, this level of support means that we can continue to move forward without scaling back our plans too much, but only if donors come together to fill what’s left of our **$160k gap** as the year draws to a close:\n\n\n*(Fundraiser progress bar, $0 to $160,000. Status: we’ve reached our minimum target!)*\n\n\nIn practical terms, closing this gap will mean that we can likely trial more researchers over the coming year, spend less senior staff time on raising funds, and take on more ambitious outreach and researcher-pipeline projects. E.g., an additional expected $75k / year would likely cause us to trial one extra researcher over the next 18 months (maxing out at 3-5 trials).\n\n\nCurrently, we’re in a situation where we have a number of potential researchers that we would like to give a 3-month trial, and we lack the funding to trial all of them. If we don’t close the gap this winter, then it’s also likely that we’ll need to move significantly more slowly on hiring and trialing new researchers going forward.\n\n\nOur main priority in fundraisers is generally to secure stable, long-term flows of funding to pay for researcher salaries — “stable” not necessarily at the level of individual donors, but at least at the level of the donor community at large. If we make up our shortfall in November and December, then this will suggest that we shouldn’t expect big year-to-year fluctuations in support, and therefore we can fairly quickly convert marginal donations into AI safety researchers. 
If we don’t make up our shortfall soon, then this will suggest that we should be generally more prepared for surprises, which will require building up a bigger runway before growing the team very much.\n\n\nAlthough we aren’t officially running a fundraiser, we still have quite a bit of ground to cover, and we’ll need support from a lot of new and old donors alike to get the rest of the way to our $750k target. Visit **[intelligence.org/donate](http://intelligence.org/donate)** to donate toward this goal, and do spread the word to people who may be interested in supporting our work.\n\n\nYou have my gratitude, again, for helping us get this far. It isn’t clear yet whether we’re out of the woods, but we’re now in a position where success in our 2016 fundraising is definitely a realistic option, provided that we put some work into it over the next two months. Thank you.\n\n\n\n\n---\n\n\n**Update December 22**: We have now hit our $750k goal, with help from end-of-the-year donors. Many thanks to everyone who helped pitch in over the last few months! We’re still funding-constrained with respect to how many researchers we’re likely to trial, as described above — but it now seems clear that 2016 overall won’t be an unusually bad year for us funding-wise, and that we can seriously consider (though not take for granted) more optimistic growth possibilities over the next couple of years.\n\n\nDecember/January [donations](https://intelligence.org/donate/) will continue to have a substantial effect on our 2017–2018 hiring plans and strategy as we try to assess our future prospects. 
For some external endorsements of MIRI as a good place to give this winter, see a suite of recent evaluations by [Daniel Dewey](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016#Machine_Intelligence_Research_Institute), [Nick Beckstead](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016#Machine_Intelligence_Research_Institute-0), [Owen Cotton-Barratt](http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/), and [Ben Hoskin](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/).\n\n\nThe post [Post-fundraiser update](https://intelligence.org/2016/11/11/post-fundraiser-update/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-11-12T01:48:01Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "27b558e35836e440c3d4794f2953075c", "title": "White House submissions and report on AI safety", "url": "https://intelligence.org/2016/10/20/white-house-submissions-and-report-on-ai-safety/", "source": "miri", "source_type": "blog", "text": "In May, the White House Office of Science and Technology Policy (OSTP) [announced](https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence) “a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial intelligence.” They hosted a June Workshop on Safety and Control for AI ([videos](https://www.cmu.edu/safartint/watch.html)), along with three other workshops, and issued a general request for information on AI (see MIRI’s primary submission [here](https://intelligence.org/2016/07/23/ostp/)).\n\n\nThe OSTP has now released a report summarizing its conclusions, “[Preparing for the Future of Artificial Intelligence](https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf),” and the 
result is very promising. The OSTP acknowledges the ongoing discussion about AI risk, and recommends “investing in research on longer-term capabilities and how their challenges might be managed”:\n\n\n\n> *General AI* (sometimes called Artificial General Intelligence, or AGI) refers to a notional future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today’s Narrow AI from the much more difficult challenge of General AI. Attempts to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research. The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades.14\n> \n> \n> People have long speculated on the implications of computers becoming more intelligent than humans. Some predict that a sufficiently intelligent AI could be tasked with developing even better, more intelligent systems, and that these in turn could be used to create systems with yet greater intelligence, and so on, leading in principle to an “intelligence explosion” or “singularity” in which machines quickly race far ahead of humans in intelligence.15\n> \n> \n> In a dystopian vision of this process, these *super-intelligent* machines would exceed the ability of humanity to understand or control. If computers could exert control over many critical systems, the result could be havoc, with humans no longer in control of their destiny at best and extinct at worst. 
This scenario has long been the subject of science fiction stories, and recent pronouncements from some influential industry leaders have highlighted these fears.\n> \n> \n> A more positive view of the future held by many researchers sees instead the development of intelligent systems that work well as helpers, assistants, trainers, and teammates of humans, and are designed to operate safely and ethically.\n> \n> \n> The NSTC Committee on Technology’s assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy. The policies the Federal Government should adopt in the near-to-medium term if these fears are justified are almost exactly the same policies the Federal Government should adopt if they are not justified. The best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed. Additionally, as research and applications in the field continue to mature, practitioners of AI in government and business should approach advances with appropriate consideration of the long-term societal and ethical questions – in additional to just the technical questions – that such advances portend. Although prudence dictates some attention to the possibility that harmful superintelligence might someday become possible, these concerns should not be the main driver of public policy for AI.\n> \n> \n\n\nLater, the report discusses “methods for monitoring and forecasting AI developments”:\n\n\n\n> One potentially useful line of research is to survey expert judgments over time. As one example, a survey of AI researchers found that 80 percent of respondents believed that human-level General AI will eventually be achieved, and half believed it is at least 50 percent likely to be achieved by the year 2040. 
Most respondents also believed that General AI will eventually surpass humans in general intelligence.[50](http://www.nickbostrom.com/papers/survey.pdf) While these particular predictions are highly uncertain, as discussed above, such surveys of expert judgment are useful, especially when they are repeated frequently enough to measure changes in judgment over time. One way to elicit frequent judgments is to run “forecasting tournaments” such as prediction markets, in which participants have financial incentives to make accurate predictions.[51](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwishof4rePPAhUqsVQKHXLoBdMQFggnMAE&url=http%3A%2F%2Fblog.scicast.org%2Fwp-content%2Fuploads%2F2014%2F02%2FTwardy_etal_SciCast_Overview_CI2014.docx&usg=AFQjCNFaY2HvTVAeKkYQa9JGmeD8np4INg&sig2=aysIe9Gka3kC0gL_WotfOw) Other research has found that technology developments can often be accurately predicted by analyzing trends in publication and patent data[52](http://www.nature.com/news/text-mining-offers-clues-to-success-1.15263). […]\n> \n> \n> When asked during the outreach workshops and meetings how government could recognize milestones of progress in the field, especially those that indicate the arrival of General AI may be approaching, researchers tended to give three distinct but related types of answers:\n> \n> \n> 1. *Success at broader, less structured tasks*: In this view, the transition from present Narrow AI to an eventual General AI will occur by gradually broadening the capabilities of Narrow AI systems so that a single system can cover a wider range of less structured tasks. An example milestone in this area would be a housecleaning robot that is as capable as a person at the full range of routine housecleaning tasks.\n> \n> \n> 2. *Unification of different “styles” of AI methods*: In this view, AI currently relies on a set of separate methods or approaches, each useful for different types of applications. 
The path to General AI would involve a progressive unification of these methods. A milestone would involve finding a single method that is able to address a larger domain of applications that previously required multiple methods.\n> \n> \n> 3. *Solving specific technical challenges, such as transfer learning*: In this view, the path to General AI does not lie in progressive broadening of scope, nor in unification of existing methods, but in progress on specific technical grand challenges, opening up new ways forward. The most commonly cited challenge is transfer learning, which has the goal of creating a machine learning algorithm whose result can be broadly applied (or transferred) to a range of new applications.\n> \n> \n\n\nThe report also discusses the open problems outlined in “[Concrete Problems in AI Safety](https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html)” and cites the MIRI paper “[The Errors, Insights and Lessons of Famous AI Predictions – and What They Mean for the Future](http://www.fhi.ox.ac.uk/wp-content/uploads/FAIC.pdf).”\n\n\nIn related news, Barack Obama recently answered some questions about AI risk and Nick Bostrom’s *Superintelligence* [in a *Wired* interview](https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/?mbid=social_fb). After saying that “we’re still a reasonably long way away” from general AI ([video](https://www.youtube.com/watch?v=72bHop6AIcc)) and that his directive to his national security team is to worry more about near-term security concerns ([video](https://www.youtube.com/watch?v=ZdhyM5jHu0s)), Obama adds:\n\n\n\n> Now, I think, as a precaution — and all of us have spoken to folks like Elon Musk who are concerned about the superintelligent machine — there’s some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon. 
And if we can see that coming, over the course of three decades, five decades, whatever the latest estimates are — if ever, because there are also arguments that this thing’s a lot more complicated than people make it out to be — then future generations, or our kids, or our grandkids, are going to be able to see it coming and figure it out.\n> \n> \n\n\n\nThere were also a number of interesting [responses to the OSTP request for information](https://www.whitehouse.gov/sites/default/files/microsites/ostp/OSTP-AI-RFI-Responses.pdf). Since this document is long and unedited, I’ve sampled some of the responses pertaining to AI safety and long-term AI outcomes below. (Note that MIRI isn’t necessarily endorsing the responses by non-MIRI sources below, and a number of these excerpts are given important nuance by the surrounding text we’ve left out; if a response especially interests you, we recommend reading the original for added context.)\n\n\n \n\n\n\n\n---\n\n\n**Respondent 77: JoEllen Lukavec Koester, GoodAI**\n\n\n\n> […] At GoodAI we are investigating suitable meta-objectives that would allow an open-ended, unsupervised evolution of the AGI system as well as guided learning – learning by imitating human experts and other forms of supervised learning. Some of these meta-objectives will be hard-coded from the start, but the system should be also able to learn and improve them on its own, that is, perform meta-learning, such that it learns to learn better in the future.\n> \n> \n> Teaching the AI system small skills using fine-grained, gradual learning from the beginning will allow us to have more control over the building blocks it will use later to solve novel problems. The system’s behaviour can therefore be more predictable. In this way, we can imprint some human thinking biases into the system, which will be useful for the future value alignment, one of the important aspects of AI safety. 
[…]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 84: Andrew Critch, MIRI**\n\n\n\n> […] When we develop powerful reasoning systems deserving of the name “artificial general intelligence (AGI)”, we will need value alignment and/or control techniques that stand up to powerful optimization processes yielding what might appear as “creative” or “clever” ways for the machine to work around our constraints. Therefore, in training the scientists who will eventually develop it, more emphasis is needed on a “security mindset”: namely, to really know that a system will be secure, you need to search creatively for ways in which it might fail. Lawmakers and computer security professionals learn this lesson naturally, from experience with intelligent human adversaries finding loopholes in their control systems. In cybersecurity, it is common to devote a large fraction of R&D time toward actually trying to break into one’s own security system, as a way of finding loopholes.\n> \n> \n> In my estimation, machine learning researchers currently have less of this inclination than is needed for the safe long-term development of AGI. This can be attributed in part to how the field of machine learning has advanced rapidly of late: via a successful shift of attention toward data-driven (“machine learning”) rather than theoretically-driven (“good old fashioned AI”, “statistical learning theory”) approaches. In data science, it’s often faster to just build something and see what happens than to try to reason from first principles to figure out in advance what will happen. While useful at present, of course we should not approach the final development of super-intelligent machines with the same try-it-and-see methodology, and it makes sense to begin developing a theory now that can be used to reason about a super-intelligent machine in advance of its operation, even in testing phases. 
[…]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 90: Ian Goodfellow, OpenAI**\n\n\n\n> […] Over the very long term, it will be important to build AI systems which understand and are aligned with their users’ values. We will need to develop techniques to build systems that can learn what we want and how to help us get it without needing specific rules. Researchers are beginning to investigate this challenge; public funding could help the community address the challenge early rather than trying to react to serious problems after they occur. […]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 94: Manuel Beltran, Boeing**\n\n\n\n> […] Advances in picking apart the brain will ultimately lead to, at best, partial brain emulation, at worst, whole brain emulation. If we can already model parts of the brain with software, neuromorphic chips, and artificial implants, the path to greater brain emulation is pretty well set. Unchecked, brain emulation will exasperate the Intellectual Divide to the point of enabling the emulation of the smartest, richest, and most powerful people. While not obvious, this will allow these individuals to scale their influence horizontally across time and space. This is not the vertical scaling that an AGI, or Superintelligence can achieve, but might be even more harmful to society because the actual intelligence of these people is limited, biased, and self-serving. Society must prepare for and mitigate the potential for the Intellectual Divide.\n> \n> \n> (5) The most pressing, fundamental questions in AI research, common to most or all scientific fields include the questions of ethics in pursuing an AGI. While the benefits of narrow AI are self-evident and should not be impeded, an AGI has dubious benefits and ominous consequences. There needs to be long term engagement on the ethical implications of an AGI, human brain emulation, and performance enhancing brain implants. 
[…]\n> \n> \n> The AGI research community speaks of an AI that will far surpass human intellect. It is not clear how such an entity would assess its creators. Without meandering into the philosophical debates about how such an entity would benefit or harm humanity, one of the mitigations proposed by proponents of an AGI is that the AGI would be taught to “like” humanity. If there is machine learning to be accomplished along these lines, then the AGI research community requires training data that can be used for teaching the AGI to like humanity. This is a long term need that will overshadow all other activity and has already proven to be very labor intensive as we have seen from the first prototype AGI, [Dr. Kristinn R. Thórisson’s Aera S1](https://intelligence.org/2014/09/14/kris-thorisson/) at Reykjavik University in Iceland.\n> \n> \n\n\n\n\n---\n\n\n**Respondent 97: Nick Bostrom, Future of Humanity Institute**\n\n\n\n> [… W]e would like to highlight four “shovel ready” research topics that hold special promise for addressing long term concerns:\n> \n> \n> Scalable oversight: How can we ensure that learning algorithms behave as intended when the feedback signal becomes sparse or disappears? (See [Christiano 2016](https://medium.com/aicontrol/semi-supervised-reinforcement-learning-cf7d5375197f)). Resolving this would enable learning algorithms to behave as if under close human oversight even when operating with increased autonomy.\n> \n> \n> Interruptibility: How can we avoid the incentive for an intelligent algorithm to resist human interference in an attempt to maximise its future reward? (See our recent progress in collaboration with Google Deepmind in ([Orseau & Armstrong 2016](https://intelligence.org/files/Interruptibility.pdf)).) 
Resolving this would allow us to ensure that even high capability AI systems can be halted in an emergency.\n> \n> \n> Reward hacking: How can we design machine learning algorithms that avoid destructive solutions by taking their objective very literally? (See [Ring & Orseau, 2011](http://link.springer.com/chapter/10.1007%2F978-3-642-22887-2_2)). Resolving this would prevent algorithms from finding unintended shortcuts to their goal (for example, by causing problems in order to get rewarded for solving them).\n> \n> \n> Value learning: How can we infer the preferences of human users automatically without direct feedback, especially if these users are not perfectly rational? (See [Hadfield-Menell et al. 2016](http://arxiv.org/abs/1606.03137) and FHI’s approach to this problem in [Evans et al. 2016](https://arxiv.org/abs/1512.05832)). Resolving this would alleviate some of the problems above caused by the difficulty of precisely specifying robust objective functions. […]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 103: Tim Day, the Center for Advanced Technology and Innovation at the U.S. Chamber of Commerce**\n\n\n\n> […] AI operates within the parameters that humans permit. Hypothetical fears of rogue AI are based on the idea that machines can obtain sentience—a will and consciousness of its own. These suspicions fundamentally misunderstand what Artificial Intelligence is. AI is not a mechanical mystery, rather a human-designed technology that can detect and respond to errors and patterns depending on its operating algorithms and the data set presented to it. It is, however, necessary to scrutinize the way humans, whether through error or malicious intent, can wield AI harmfully. 
[…]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 104: Alex Kozak, X [formerly Google X]**\n\n\n\n> […] More broadly, we generally agree that the research topics identified in “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565),” a joint publication between Google researchers and others in the industry, are the right technical challenges for innovators to keep in mind in order to develop better and safer real-world products: avoiding negative side effects (e.g. avoiding systems disturbing their environment in pursuit of their goals), avoiding reward hacking (e.g. cleaning robots simply covering up messes rather than cleaning them), creating scalable oversight (i.e. creating systems that are independent enough not to need constant supervision), enabling safe exploration (i.e. limiting the range of exploratory actions a system might take to a safe domain), and creating robustness from distributional shift (i.e. creating systems that are capable of operating well outside their training environment). […]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 105: Stephen Smith, AAAI**\n\n\n\n> […] Research is urgently needed to develop and modify AI methods to make them safer and more robust. A discipline of AI Safety Engineering should be created and research in this area should be funded. This field can learn much by studying existing practices in safety engineering in other engineering fields, since loss of control of AI systems is no different from loss of control of other autonomous or semi-autonomous systems. […]\n> \n> \n> There are two key issues with control of autonomous systems: speed and scale. AI-based autonomy makes it possible for systems to make decisions far faster and on a much broader scale than humans can monitor those decisions. In some areas, such as high speed trading in financial markets, we have already witnessed an “arms race” to make decisions as quickly as possible. 
This is dangerous, and government should consider whether there are settings where decision-making speed and scale should be limited so that people can exercise oversight and control of these systems.\n> \n> \n> Most AI researchers are skeptical about the prospects of “superintelligent AI”, as put forth in Nick Bostrom’s recent book and reinforced over the past year in the popular media in commentaries by other prominent individuals from non-AI disciplines. Recent AI successes in narrowly structured problems (e.g., IBM’s Watson, Google DeepMind’s Alpha GO program) have led to the false perception that AI systems possess general, transferrable, human-level intelligence. There is a strong need for improving communication to the public and to policy makers about the real science of AI and its immediate benefits to society. AI research should not be curtailed because of false perceptions of threat and potential dystopian futures. […]\n> \n> \n> As we move toward applying AI systems in more mission critical types of decision-making settings, AI systems must consistently work according to values aligned with prospective human users and society. Yet it is still not clear how to embed ethical principles and moral values, or even professional codes of conduct, into machines. […]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 111: Ryan Hagemann, Niskanen Center**\n\n\n\n> […] AI is unlikely to herald the end times. It is not clear at this point whether a runaway malevolent AI, for example, is a real-world possibility. In the absence of any quantifiable risk along these lines government officials should refrain from framing discussions of AI in alarming terms that suggest that there is a known, rather than entirely speculative, risk. Fanciful doomsday scenarios belong in science fiction novels and high-school debate clubs, not in serious policy discussions about an existing, mundane, and beneficial technology. 
Ours is already “a world filled with narrowly-tailored artificial intelligence that no one recognizes. As the computer scientist John McCarthy once said: ‘As soon as it works, no one calls it AI anymore.’”\n> \n> \n> The beneficial consequences of advanced AI are on the horizon and potentially profound. A sampling of these possible benefits include: improved diagnostics and screening for autism; disease prevention through genomic pattern recognition; bridging the genotype-phenotype divide in genetics, allowing scientists to glean a clearer picture of the relationship between genetics and disease, which could introduce a wave of more effective personalized medical care; the development of new ways for the sight- and hearing-impaired to experience sight and sound. To be sure, many of these developments raise certain practical, safety, and ethical concerns. But there are already serious efforts underway by the private ventures developing these AI applications to anticipate and responsibly address these, as well as more speculative, concerns.\n> \n> \n> Consider OpenAI, “a non-profit artificial intelligence research company.” OpenAI’s goal “is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” AI researchers are already thinking deeply and carefully about AI decision-making mechanisms in technologies like driverless cars, despite the fact that many of the most serious concerns about how autonomous AI agents make value-based choices are likely many decades out. Efforts like these showcase how the private sector and leading technology entrepreneurs are ahead of the curve when it comes to thinking about some of the more serious implications of developing true artificial general intelligence (AGI) and artificial superintelligence (ASI). 
It is important to note, however, that true AGI or ASI are unlikely to materialize in the near-term, and the mere possibility of their development should not blind policymakers to the many ways in which artificial narrow intelligence (ANI) has already improved the lives of countless individuals the world over. Virtual personal assistants, such as Siri and Cortana, or advanced search algorithms, such as Google’s search engine, are good examples of already useful applications of narrow AI. […]\n> \n> \n> The Future of Life Institute has observed that “our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology … the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.” Government can play a positive and productive role in ensuring the best economic outcomes from developments in AI by promoting consumer education initiatives. By working with private sector developers, academics, and nonprofit policy specialists government agencies can remain constructively engaged in the AI dialogue, while not endangering ongoing developments in this technology.\n> \n> \n\n\n\n\n---\n\n\n**Respondent 119: Sven Koenig, ACM Special Interest Group on Artificial Intelligence**\n\n\n\n> […] The public discourse around safety and control would benefit from demystifying AI. The media often concentrates on the big successes or failures of AI technologies, as well as scenarios conjured up in science fiction stories, and features the opinions of celebrity non-experts about future developments of AI technologies. 
As a result, parts of the public have developed a fear of AI systems developing superhuman intelligence, whereas most experts agree that AI technologies currently work well only in specialized domains, and notions of “superintelligences” and “technological singularity” that will result in AI systems developing super-human, broadly intelligent behavior are decades away and might never be realized. AI technologies have made steady progress over the years, yet there seem to be waves of exaggerated optimism and pessimism about what they can do. Both are harmful. For example, an exaggerated belief in their capabilities can result in AI systems being used (perhaps carelessly) in situations where they should not, potentially failing to fulfil expectations or even cause harm. The unavoidable disappointment can result in a backlash against AI research, and consequently fewer innovations. […]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 124: Huw Price, University of Cambridge, UK**\n\n\n\n> […] 3. In his first paper[[1](http://deeprecursion.com/file/2012/05/9292221-Good65ultraintelligent.pdf)] Good tries to estimate the economic value of an ultraintelligent machine. Looking for a benchmark for productive brainpower, he settles impishly on John Maynard Keynes. He notes that Keynes’ value to the economy had been estimated at 100 thousand million British pounds, and suggests that the machine might be good for a million times that – a mega-Keynes, as he puts it.\n> \n> \n> 4. But there’s a catch. 
“The sign is uncertain” – in other words, it is not clear whether this huge impact would be negative or positive: “The machines will create social problems, but they might also be able to solve them, in addition to those that have been created by microbes and men.” Most of all, Good insists that these questions need serious thought: “These remarks might appear fanciful to some readers, but to me they seem real and urgent, and worthy of emphasis outside science fiction.” […]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 136: Nate Soares, MIRI**\n\n\n\n> […] Researchers’ worries about the impact of AI in the long term bear little relation to the doomsday scenarios most often depicted in Hollywood movies, in which “emergent consciousness” allows machines to throw off the shackles of their programmed goals and rebel. The concern is rather that such systems may pursue their programmed goals all too well, and that the programmed goals may not match the intended goals, or that the intended goals may have unintended negative consequences. […]\n> \n> \n> We believe that there are numerous promising avenues of foundational research which, if successful, could make it possible to get very strong guarantees about the behavior of advanced AI systems — stronger than many currently think is possible, in a time when the most successful machine learning techniques are often poorly understood. We believe that bringing together researchers in machine learning, program verification, and the mathematical study of formal agents would be a large step towards ensuring that highly advanced AI systems will have a robustly beneficial impact on society. […]\n> \n> \n> In the long term, we recommend that policymakers make use of incentives to encourage designers of AI systems to work together cooperatively, perhaps through multinational and multicorporate collaborations, in order to discourage the development of race dynamics. 
In light of high levels of uncertainty about the future of AI among experts, and in light of the large potential of AI research to save lives, solve social problems, and serve the common good in the near future, we recommend against broad regulatory interventions in this space. We recommend that effort instead be put towards encouraging interdisciplinary technical research into the AI safety and control challenges that we have outlined above.\n> \n> \n\n\n\n\n---\n\n\n**Respondent 145: Andrew Kim, Google Inc.**\n\n\n\n> […] No system is perfect, and errors will emerge. However, advances in our technical capabilities will expand our ability to meet these challenges.\n> \n> \n> To that end, we believe that solutions to these problems can and should be grounded in rigorous engineering research to provide the creators of these systems with approaches and tools they can use to tackle these problems. “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)”, a recent paper from our researchers and others, takes this approach in questions around safety. We also applaud the work of researchers who – along with researchers like Moritz Hardt at Google – are looking at short-term questions of bias and discrimination. […]\n> \n> \n\n\n\n\n---\n\n\n**Respondent 149: Anthony Aguirre, Future of Life Institute**\n\n\n\n> […S]ocietally beneficial values alignment of AI is not automatic. Crucially, AI systems are designed not just to enact a set of rules, but rather to accomplish a goal in ways that the programmer does not explicitly specify in advance. This leads to an unpredictability that can [lead] to adverse consequences. 
As AI pioneer Stuart Russell explains, “No matter how excellently an algorithm maximizes, and no matter how accurate its model of the world, a machine’s decisions may be ineffably stupid, in the eyes of an ordinary human, if its utility function is not well aligned with human values.” ([2015](https://www.edge.org/response-detail/26157)).\n> \n> \n> Since humans rely heavily on shared tacit knowledge when discussing their values, it seems likely that attempts to represent human values formally will often leave out significant portions of what we think is important. This is addressed by the classic stories of the genie in the lantern, the sorcerer’s apprentice, and Midas’ touch. Fulfilling the letter of a goal with something far afield from the spirit of the goal like this is known as “perverse instantiation” ([Bostrom [2014]](https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742?sa-no-redirect=1)). This can occur because the system’s programming or training has not explored some relevant dimensions that we really care about ([Russell 2014](https://www.edge.org/response-detail/26157)). These are easy to miss because they are typically taken for granted by people, and even trying with a lot of effort and a lot of training data, people cannot reliably think of what they’ve forgotten to think about.\n> \n> \n> The complexity of some AI systems in the future (and even now) is likely to exceed human understanding, yet as these systems become more effective we will have efficiency pressures to be increasingly dependent on them, and to cede control to them. It becomes increasingly difficult to specify a set of explicit rules that is robustly in accord with our values, as the domain approaches a complex open world model, operates in the (necessarily complex) real world, and/or as tasks and environments become so complex as to exceed the capacity or scalability of human oversight[.] 
Thus more sophisticated approaches will be necessary to ensure that AI systems accomplish the goals they are given without adverse side effects. See references [Russell, Dewey, and Tegmark (2015)](http://futureoflife.org/data/documents/research_priorities.pdf), [Taylor (2016)](https://intelligence.org/2016/05/04/announcing-a-new-research-program/), and [Amodei et al.](https://arxiv.org/abs/1606.06565) for research threads addressing these issues. […]\n> \n> \n> We would argue that a “virtuous cycle” has now taken hold in AI research, where both public and private R&D leads to systems of significant economic value, which underwrites and incentivizes further research. This cycle can leave insufficiently funded, however, research on the wider implications of, safety of, ethics of, and policy implications of, AI systems that are outside the focus of corporate or even many academic research groups, but have a compelling public interest. FLI helped to develop a set of suggested “Research Priorities for Robust and Beneficial Artificial Intelligence” along these lines (available at ); we also support AI safety-relevant research agendas from MIRI () and as suggested in [Amodei et al. (2016)](https://arxiv.org/abs/1606.06565). We would advocate for increased funding of research in the areas described by all of these agendas, which address problems in the following research topics: abstract reasoning about superior agents, ambiguity identification, anomaly explanation, computational humility or non-self-centered world models, computational respect or safe exploration, computational sympathy, concept geometry, corrigibility or scalable control, feature identification, formal verification of machine learning models and AI systems, interpretability, logical uncertainty modeling, metareasoning, ontology identification/ refactoring/alignment, robust induction, security in learning source provenance, user modeling, and values modeling. 
[…]\n> \n> \n\n\n\n\n---\n\n\n \n\n\nIt’s exciting to see substantive discussion of AGI’s impact on society by the White House. The policy recommendations regarding AGI strike us as reasonable, and we expect these developments to help inspire a much more in-depth and sustained conversation about the future of AI among researchers in the field.\n\n\nThe post [White House submissions and report on AI safety](https://intelligence.org/2016/10/20/white-house-submissions-and-report-on-ai-safety/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-10-21T01:50:43Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "45f915951a8ba5e6c2e4fd69e5c91d5f", "title": "MIRI AMA, and a talk on logical induction", "url": "https://intelligence.org/2016/10/11/miri-ama-and-a-talk-on-logical-induction/", "source": "miri", "source_type": "blog", "text": "Nate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow at the Effective Altruism Forum. 
If you’ve been curious about anything related to our research, plans, or general thoughts, you’re invited to submit your own questions in the comments below or at **[Ask MIRI Anything](http://effective-altruism.com/ea/12r/ask_miri_anything_ama/)**.\n\n\nWe’ve also posted a more detailed version of our [fundraiser overview and case for MIRI](http://effective-altruism.com/ea/12n/miri_update_and_fundraising_case/) at the EA Forum.\n\n\nIn other news, we have a new talk out with an overview of “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/),” our recent paper presenting (as Critch puts it) “a financial solution to the computer science problem of metamathematics”:\n\n\n \n\n\n\n \n\n\nThis version of the talk goes into more technical detail than our [previous talk](https://www.youtube.com/watch?v=tsJd_CdBA3I) on logical induction.\n\n\nFor some recent discussions of the new framework, see [Shtetl-Optimized](http://www.scottaaronson.com/blog/?p=2918), [n-Category Café](https://golem.ph.utexas.edu/category/2016/09/logical_uncertainty_and_logica.html), and [Hacker News](https://news.ycombinator.com/item?id=12485080).\n\n\nThe post [MIRI AMA, and a talk on logical induction](https://intelligence.org/2016/10/11/miri-ama-and-a-talk-on-logical-induction/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-10-12T02:47:00Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "cdfb03b91ff89a162542e78032712056", "title": "October 2016 Newsletter", "url": "https://intelligence.org/2016/10/09/october-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\nOur big announcement this month is our paper “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/),” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction. 
MIRI’s [2016 fundraiser](https://intelligence.org/2016/09/16/miris-2016-fundraiser/) is also live, and runs through the end of October.\n \n**Research updates**\n* [Shtetl-Optimized](http://www.scottaaronson.com/blog/?p=2918) and [n-Category Café](https://golem.ph.utexas.edu/category/2016/09/logical_uncertainty_and_logica.html) discuss the “Logical Induction” paper.\n* New at IAFF: [Universal Inductors](https://agentfoundations.org/item?id=941); [Logical Inductors That Trust Their Limits](https://agentfoundations.org/item?id=989); [Variations of the Garrabrant-inductor](https://agentfoundations.org/item?id=1006); [The Set of Logical Inductors Is Not Convex](https://agentfoundations.org/item?id=1011)\n* New at AI Impacts: [What If You Turned the World’s Hardware into AI Minds?](http://aiimpacts.org/what-if-you-turned-the-worlds-hardware-into-ai-minds/); [Tom Griffiths on Cognitive Science and AI](http://aiimpacts.org/conversation-with-tom-griffiths/); and a *Superintelligence* excerpt on [sources of advantage for digital intelligence](http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/).\n\n\n**General updates**\n* We wrote up [a more detailed fundraiser post](http://effective-altruism.com/ea/12n/miri_update_and_fundraising_case/) for the Effective Altruism Forum, outlining our research methodology and the basic case for MIRI.\n* We’ll be running an “[Ask MIRI Anything](http://effective-altruism.com/ea/12r/ask_miri_anything_ama/)” on the EA Forum this Wednesday, Oct. 12.\n* The Open Philanthropy Project has awarded MIRI [a one-year $500,000 grant](https://intelligence.org/2016/09/06/grant-open-philanthropy/) to expand our research program. 
See also Holden Karnofsky’s [account](https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit) of how his views on EA and AI have changed.\n\n\n\n**News and links**\n* [Sam Altman’s Manifest Destiny](http://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny): a profile by The New Yorker.\n* In a promising development, Amazon, Facebook, Google, IBM, and Microsoft team up to launch a [Partnership on AI to Benefit People and Society](http://mashable.com/2016/09/30/watching-ai/?utm_cid=hp-hh-pri#.832uir6MPqz) aimed at developing industry best practices.\n* Alex Tabarrok vs. Tyler Cowen: “Will Machines Take Our Jobs?” ([video](https://www.youtube.com/watch?v=76URvcmpmBQ))\n* Google Brain [makes major strides](http://www.theverge.com/2016/9/27/13078138/google-translate-ai-machine-learning-gnmt) in machine translation.\n* A Sam Harris TED talk: “Can we build AI without losing control of it?” ([video](https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it?language=en))\n* A number of updates [from the Future of Humanity Institute](https://www.fhi.ox.ac.uk/q3-newsletter/).\n* The Centre for the Study of Existential Risk [is accepting abstracts](http://cser.org/cambridge-conference-on-catastrophic-risk-2016-call-for-papers/) (due Oct. 18) for its first conference, on such topics as “creating a community for beneficial AI.”\n* Andrew Critch: [Interested in AI alignment? 
Apply to Berkeley.](http://acritch.com/ai-berkeley/)\n\n\n\n |\n\n\n \n\n\nThe post [October 2016 Newsletter](https://intelligence.org/2016/10/09/october-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-10-10T02:26:24Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "f61d60c2e974f916066521c144ee384b", "title": "CSRBAI talks on agent models and multi-agent dilemmas", "url": "https://intelligence.org/2016/10/06/csrbai-talks-agent-models/", "source": "miri", "source_type": "blog", "text": "We’ve uploaded the final set of videos from our recent [Colloquium Series on Robust and Beneficial AI (CSRBAI)](https://intelligence.org/colloquium-series/) at the MIRI office, co-hosted with the [Future of Humanity Institute](https://www.fhi.ox.ac.uk/). A full list of CSRBAI talks with public video or slides: \n\n \n\n\n* Stuart Russell (UC Berkeley) — [AI: The Story So Far](https://www.youtube.com/watch?v=zBCOMm_ytwM) ([slides](https://intelligence.org/files/csrbai/russell-slides.pdf))\n* Alan Fern (Oregon State University) — [Toward Recognizing and Explaining Uncertainty](https://www.youtube.com/watch?v=3lD6Sygy6EQ) ([slides 1](https://intelligence.org/files/csrbai/fern-slides-1.pdf), [slides 2](https://intelligence.org/files/csrbai/fern-slides-2.pdf))\n* Francesca Rossi (IBM Research) — [Moral Preferences](https://www.youtube.com/watch?v=QxwKxJN4WlQ) ([slides](https://intelligence.org/files/csrbai/rossi-slides.pdf))\n* Tom Dietterich (Oregon State University) — Issues Concerning AI Transparency ([slides](https://intelligence.org/files/csrbai/dietterich-slides.pdf))\n* Stefano Ermon (Stanford) — [Probabilistic Inference and Accuracy Guarantees](https://www.youtube.com/watch?v=8rPcoF6K6mQ) ([slides](https://intelligence.org/files/csrbai/ermon-slides.pdf))\n* Paul Christiano (UC Berkeley) — [Training an Aligned Reinforcement Learning Agent](https://www.youtube.com/watch?v=rperZzDssDE)\n* Jim 
Babcock — [The AGI Containment Problem](https://www.youtube.com/watch?v=7E_AxVLsWCM) ([slides](https://intelligence.org/files/csrbai/babcock-slides.pdf))\n* Bart Selman (Cornell) — [Non-Human Intelligence](https://www.youtube.com/watch?v=35BWlvPcYvg) ([slides](https://intelligence.org/files/csrbai/selman-slides.pdf))\n* Jessica Taylor (MIRI) — [Alignment for Advanced Machine Learning Systems](https://www.youtube.com/watch?v=_sGTqI5qdD4)\n* Dylan Hadfield-Menell (UC Berkeley) — [The Off-Switch: Designing Corrigible, yet Functional, Artificial Agents](https://www.youtube.com/watch?v=t06IciZknDg) ([slides](https://intelligence.org/files/csrbai/hadfield-menell-slides.pdf))\n* Bas Steunebrink (IDSIA) — [About Understanding, Meaning, and Values](https://www.youtube.com/watch?v=xMFQErzPvYA) ([slides](https://intelligence.org/files/csrbai/steunebrink-slides.pdf))\n* Jan Leike (Future of Humanity Institute) — [General Reinforcement Learning](https://www.youtube.com/watch?v=hSiuJuvTBoE) ([slides](https://intelligence.org/files/csrbai/leike-slides.pdf))\n* Tom Everitt (Australian National University) — [Avoiding Wireheading with Value Reinforcement Learning](https://www.youtube.com/watch?v=7tzJ7yHWswU) ([slides](https://intelligence.org/files/csrbai/everitt-slides.pdf))\n* Michael Wellman (University of Michigan) — [Autonomous Agents in Financial Markets: Implications and Risks](https://www.youtube.com/watch?v=MAd6D1mhNao) ([slides](https://intelligence.org/files/csrbai/wellman-slides.pdf))\n* Stefano Albrecht (UT Austin) — [Learning to Distinguish Between Belief and Truth](https://www.youtube.com/watch?v=QVuj6ZFxw14) ([slides](https://intelligence.org/files/csrbai/albrecht-slides.pdf))\n* Stuart Armstrong (Future of Humanity Institute) — [Reduced Impact AI and Other Alternatives to Friendliness](https://www.youtube.com/watch?v=3wsiUkmC6dI) ([slides](https://intelligence.org/files/csrbai/armstrong-slides.pdf))\n* Andrew Critch (MIRI) — [Robust Cooperation of Bounded 
Agents](https://www.youtube.com/watch?v=WG_Krd-wGM4)\n\n\n  \n\nFor a recap of talks from the earlier weeks at CSRBAI, see my previous blog posts on [transparency](https://intelligence.org/2016/08/02/2016-summer-program-recap/), [robustness and error tolerance](https://intelligence.org/2016/08/15/csrbai-talks-on-robustness-and-error-tolerance/), and [preference specification](https://intelligence.org/2016/08/30/csrbai-talks-preference-specification/). The last set of talks was part of the week focused on Agent Models and Multi-Agent Dilemmas:\n\n\n \n\n\n\n \n\n\n**Michael Wellman**, Professor of Computer Science and Engineering at the University of Michigan, spoke about the implications and risks of autonomous agents in the financial markets ([slides](https://intelligence.org/files/csrbai/wellman-slides.pdf)). Abstract:\n\n\n\n> Design for robust and beneficial AI is a topic for the future, but also of more immediate concern for the leading edge of autonomous agents emerging in many domains today. One area where AI is already ubiquitous is on financial markets, where a large fraction of trading is routinely initiated and conducted by algorithms. Models and observational studies have given us some insight on the implications of AI traders for market performance and stability. Design and regulation of market environments given the presence of AIs may also yield lessons for dealing with autonomous agents more generally.\n> \n> \n\n\n \n\n\n\n \n\n\n**Stefano Albrecht**, a Postdoctoral Fellow in the Department of Computer Science at the University of Texas at Austin, spoke about “learning to distinguish between belief and truth” ([slides](https://intelligence.org/files/csrbai/albrecht-slides.pdf)). Abstract:\n\n\n\n> Intelligent agents routinely build models of other agents to facilitate the planning of their own actions. Sophisticated agents may also maintain beliefs over a set of alternative models. 
Unfortunately, these methods usually do not check the validity of their models during the interaction. Hence, an agent may learn and use incorrect models without ever realising it. In this talk, I will argue that robust agents should have both abilities: to construct models of other agents and contemplate the correctness of their models. I will present a method for behavioural hypothesis testing along with some experimental results. The talk will conclude with open problems and a possible research agenda.\n> \n> \n\n\n \n\n\n\n \n\n\n**Stuart Armstrong**, from the Future of Humanity Institute in Oxford, spoke about “reduced impact AI” ([slides](https://intelligence.org/files/csrbai/armstrong-slides.pdf)). Abstract:\n\n\n\n> This talk will look at some of the ideas developed to create safe AI without solving the problem of friendliness. It will focus first on “reduced impact AI”, AIs designed to have little effect on the world – but from whom high impact can nevertheless be extracted. It will then delve into the new idea of AIs designed to have preferences over their own virtual worlds only, and look at the advantages – and limitations – of using indifference as a tool of AI control.\n> \n> \n\n\n \n\n\n\n \n\n\nLastly, **Andrew Critch**, a MIRI research fellow, spoke about robust cooperation in bounded agents. This talk is based on the paper “[Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/).” Talk abstract:\n\n\n\n> The first interaction between a pair of agents who might destroy each other can resemble a one-shot prisoner’s dilemma. Consider such a game where each player is an algorithm with read-access to its opponent’s source code. Tennenholtz (2004) introduced an agent which cooperates iff its opponent’s source code is identical to its own, thus sometimes achieving mutual cooperation while remaining unexploitable in general. 
However, precise equality of programs is a fragile cooperative criterion. Here, I will exhibit a new and more robust cooperative criterion, inspired by ideas of LaVictoire, Barasz and others (2014), using a new theorem in provability logic for bounded reasoners.\n> \n> \n\n\nThe post [CSRBAI talks on agent models and multi-agent dilemmas](https://intelligence.org/2016/10/06/csrbai-talks-agent-models/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-10-07T00:17:35Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "3ef836fc3c5c10fb70339d40ac7e7916", "title": "MIRI’s 2016 Fundraiser", "url": "https://intelligence.org/2016/09/16/miris-2016-fundraiser/", "source": "miri", "source_type": "blog", "text": "**Update December 22**: Our donors came together during the fundraiser to get us most of the way to our $750,000 goal. In all, 251 donors contributed **$589,248**, making this our second-biggest fundraiser to date. Although we fell short of our target by $160,000, we have since made up this shortfall thanks to November/December donors. I’m extremely grateful for this support, and will plan accordingly for more staff growth over the coming year.\n\n\nAs described in our [post-fundraiser update](https://intelligence.org/2016/11/11/post-fundraiser-update/), we are still fairly funding-constrained. December/January [donations](https://intelligence.org/donate/) will have an especially large effect on our 2017–2018 hiring plans and strategy, as we try to assess our future prospects. 
For some external endorsements of MIRI as a good place to give this winter, see recent evaluations by [Daniel Dewey](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016#Machine_Intelligence_Research_Institute), [Nick Beckstead](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016#Machine_Intelligence_Research_Institute-0), [Owen Cotton-Barratt](http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/), and [Ben Hoskin](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/).\n\n\n\n\n---\n\n\nOur **2016 fundraiser** is underway! Unlike in past years, we’ll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Our progress so far (updated live):\n\n\n![2016 Fundraiser Progress Bar](https://intelligence.org/wp-content/uploads/2016/09/2016-fundraiser-progress-bar.png)\n\n\n---\n\n\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n\n \n\n\nEmployer matching and pledges to give later this year also count towards the total. [Click here](https://intelligence.org/donate/#pledge) to learn more.\n\n\n\n\n---\n\n\n \n\n\nMIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science that’s aimed at ensuring that smarter-than-human AI systems have a positive impact on the world.\n\n\n2016 has been a big year for MIRI, and for the wider field of AI alignment research. 
Our [2016 strategic update](https://intelligence.org/2016/08/05/miri-strategy-update-2016/) in early August reviewed a number of recent developments:\n\n\n* A group of researchers headed by Chris Olah of Google Brain and Dario Amodei of OpenAI published “[Concrete problems in AI safety](https://arxiv.org/abs/1606.06565),” a new set of research directions that are likely to bear both on near-term and long-term safety issues.\n* Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell published a new value learning framework, “[Cooperative inverse reinforcement learning](http://arxiv.org/pdf/1606.03137v2.pdf),” with [implications](https://intelligence.org/files/csrbai/hadfield-menell-slides.pdf) for [corrigibility](https://intelligence.org/files/CorrigibilityAISystems.pdf).\n* Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute received positive attention from [news outlets](http://www.newsweek.com/google-big-red-button-ai-artificial-intelligence-save-world-elon-musk-466753) and from Alphabet executive chairman [Eric Schmidt](http://fortune.com/2016/06/28/artificial-intelligence-potential/) for their new paper “[Safely interruptible agents](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/),” partly supported by MIRI.\n* MIRI ran a three-week AI safety and robustness [colloquium and workshop series](https://intelligence.org/2016/08/02/2016-summer-program-recap/), with speakers including Stuart Russell, Tom Dietterich, Francesca Rossi, and Bart Selman.\n* We received a generous $300,000 donation and expanded our research and ops teams.\n* We started work on a new research agenda, “[Alignment for advanced machine learning systems](https://intelligence.org/2016/07/27/alignment-machine-learning/).” This agenda will be occupying about half of our time going forward, with the other half focusing on our [agent foundations agenda](https://intelligence.org/technical-agenda/).\n\n\nWe also published 
new results in decision theory and logical uncertainty, including “[Parametric bounded Löb’s theorem and robust cooperation of bounded agents](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/)” and “[A formal solution to the grain of truth problem](https://intelligence.org/2016/06/30/grain-of-truth/).” For a survey of our research progress and other updates from last year, see our [2015 review](https://intelligence.org/2016/07/29/2015-in-review/).\n\n\nIn the last three weeks, there have been three more major developments:\n\n\n* We released a new paper, “[Logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/),” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.\n* The Open Philanthropy Project [awarded MIRI a one-year $500,000 grant](https://intelligence.org/2016/09/06/grant-open-philanthropy/) to scale up our research program, with a strong chance of renewal next year.\n* The Open Philanthropy Project is supporting the launch of the new UC Berkeley [Center for Human-Compatible AI](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai), headed by Stuart Russell.\n\n\nThings have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.\n\n\n\n#### The strategic landscape\n\n\nHumans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our [intelligence](https://intelligence.org/2013/06/19/what-is-intelligence-2/), broadly construed — our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. 
AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually [decisively surpass humans](http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/) on the relevant cognitive metrics.\n\n\nSeparate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI’s role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are [a poor fit](https://intelligence.org/2015/08/14/what-sets-miri-apart/) for mainstream industry and academic groups.\n\n\nOur long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:\n\n\n* In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious [security mindset](http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html), and work on error tolerance and value specification.\n\n\n* In the medium term, researchers build on these foundations and develop a more mature understanding. 
As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.\n\n\n* In the long term, we would like to see AI-empowered projects (as described by [Dewey [2015]](http://www.danieldewey.net/fast-takeoff-strategies.pdf)) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.\n\n\n* In the *very* long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity — take our time to dot every *i* and cross every *t* before we risk “locking in” design choices.\n\n\nThe above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. 
Still, the sketch may help clarify the directions we’d like to see the research community move in.\n\n\nFor more on our research focus and methodology, see our [research page](http://intelligence.org/research) and [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/).\n\n\n#### Our organizational plans\n\n\nWe currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.[1](https://intelligence.org/2016/09/16/miris-2016-fundraiser/#footnote_0_14609 \"This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative.\") Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.[2](https://intelligence.org/2016/09/16/miris-2016-fundraiser/#footnote_1_14609 \"We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year.\")\n\n\nOur eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we’re likely to shift out of growth mode.\n\n\nOur budget estimate for 2017 is roughly $2–2.2M, which means that we’re entering this fundraiser with about 14 months’ runway. 
We’re uncertain about how many donations we’ll receive between November and next September,[3](https://intelligence.org/2016/09/16/miris-2016-fundraiser/#footnote_2_14609 \"We’re imagining continuing to run one fundraiser per year in future years, possibly in September.\") but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.[4](https://intelligence.org/2016/09/16/miris-2016-fundraiser/#footnote_3_14609 \"Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute’s three-year grants.\nFor comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser.\") Based on this, we have the following fundraiser goals:\n\n\n\n\n---\n\n\n**Basic target – $750,000.** We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.\n\n\n\n\n---\n\n\n**Growth target – $1,000,000.** This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.\n\n\n\n\n---\n\n\n**Stretch target – $1,250,000.** At this level, even if we exceed my growth expectations, we’d be able to grow without real risk of dipping below a year’s runway. 
Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.[5](https://intelligence.org/2016/09/16/miris-2016-fundraiser/#footnote_4_14609 \"At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an e-mail if you’d like to talk about the details.\")\n\n\n\n\n---\n\n\nIf we hit our growth and stretch targets, we’ll be able to execute several additional programs we’re considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires.\n\n\nAs always, you’re invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.\n\n\n \n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n \n\n\n\n\n\n\n\n\n\n---\n\n1. This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative.\n2. We expect to be slightly under the $1.825M budget we [previously projected](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) for 2016, due to taking on fewer new researchers than expected this year.\n3. We’re imagining continuing to run one fundraiser per year in future years, possibly in September.\n4. 
Separately, the Open Philanthropy Project is likely to renew our [$500,000 grant](https://intelligence.org/2016/09/06/grant-open-philanthropy/) next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute’s [three-year grants](http://futureoflife.org/first-ai-grant-recipients/).\nFor comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser.\n5. At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an [e-mail](mailto:nate@intelligence.org) if you’d like to talk about the details.\n\nThe post [MIRI’s 2016 Fundraiser](https://intelligence.org/2016/09/16/miris-2016-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-09-16T07:14:17Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "ac50eaba4d5d02a531c4e3ed248894d6", "title": "New paper: “Logical induction”", "url": "https://intelligence.org/2016/09/12/new-paper-logical-induction/", "source": "miri", "source_type": "blog", "text": "[![Logical Induction](https://intelligence.org/wp-content/uploads/2016/07/logicalinduction.png)](https://arxiv.org/abs/1609.03543)MIRI is releasing a paper introducing a new model of deductively limited reasoning: “[**Logical induction**](https://arxiv.org/abs/1609.03543),” authored by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, myself, and Jessica Taylor. 
Readers may wish to start with the [abridged version](https://intelligence.org/files/LogicalInductionAbridged.pdf).\n\n\nConsider a setting where a reasoner is observing a deductive process (such as a community of mathematicians and computer programmers) and waiting for proofs of various logical claims (such as the *abc* conjecture, or “this computer program has a bug in it”), while making guesses about which claims will turn out to be true. Roughly speaking, our paper presents a computable (though inefficient) algorithm that outpaces deduction, assigning high subjective probabilities to provable conjectures and low probabilities to disprovable conjectures long before the proofs can be produced.\n\n\nThis algorithm has a large number of nice theoretical properties. Still speaking roughly, the algorithm learns to assign probabilities to sentences in ways that respect [any logical or statistical pattern](https://intelligence.org/2016/04/21/two-new-papers-uniform/) that can be described in polynomial time. Additionally, it learns to reason well about its own beliefs and trust its future beliefs while avoiding paradox. Quoting from the abstract:\n\n\n\n> These properties and many others all follow from a single *logical induction criterion*, which is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence *φ* is associated with a stock that is worth $1 per share if *φ* is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where ℙ*n*(*φ*)=50% means that on day *n*, shares of *φ* may be bought or sold from the reasoner for 50¢. 
The logical induction criterion says (very roughly) that there should not be any polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time.\n> \n> \n\n\n\nThis criterion is analogous to the “no Dutch book” criterion used to support other theories of ideal reasoning, such as Bayesian probability theory and expected utility theory. We believe that the logical induction criterion may serve a similar role for reasoners with deductive limitations, capturing some of what we mean by “good reasoning” in these cases.\n\n\nThe logical induction algorithm that we provide is theoretical rather than practical. It can be thought of as a counterpart to Ray Solomonoff’s theory of inductive inference, which provided an uncomputable method for ideal management of *empirical* uncertainty but no corresponding method for reasoning under uncertainty about logical or mathematical sentences.[1](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_0_14538 \"While impractical, Solomonoff induction gave rise to a number of techniques (ensemble methods) that perform well in practice. The differences between our algorithm and Solomonoff induction point in the direction of new ensemble methods that could prove useful for managing logical uncertainty, in the same way that modern ensemble methods are useful for managing empirical uncertainty.\") Logical induction closes this gap.\n\n\nAny algorithm that satisfies the logical induction criterion will exhibit the following properties, among others:\n\n\n1. *Limit convergence* and *limit coherence*: The beliefs of a logical inductor are perfectly consistent in the limit. (Every provably true sentence eventually gets probability 1, every provably false sentence eventually gets probability 0, if *φ* provably implies *ψ* then the probability of *φ* converges to some value no higher than the probability of *ψ*, and so on.)\n\n\n2. 
*Provability induction*: Logical inductors learn to recognize any pattern in theorems (or contradictions) that can be identified in polynomial time.\n\n\n◦ Consider a sequence of conjectures generated by a brilliant mathematician, such as Ramanujan, that are difficult to prove but keep turning out to be true. A logical inductor will recognize this pattern and start assigning Ramanujan’s conjectures high probabilities well before it has enough resources to verify them.\n\n\n◦ As another example, consider the sequence of claims “on input *n*, this long-running computation outputs a natural number between 0 and 9.” If those claims are all true, then (roughly speaking) a logical inductor learns to assign high probabilities to them as fast as they can be generated. If they’re all false, a logical inductor learns to assign them low probabilities as fast as they can be generated. In this sense, it learns inductively to predict how computer programs will behave.\n\n\n◦ Similarly, given any polynomial-time method for writing down computer programs that halt, logical inductors learn to believe that they will halt roughly as fast as the source codes can be generated. Furthermore, given any polynomial-time method for writing down computer programs that *provably* fail to halt, logical inductors learn to believe that they will fail to halt roughly as fast as the source codes can be generated. When it comes to computer programs that fail to halt but for which there is no proof of this fact, logical inductors will learn not to anticipate that the program is going to halt anytime soon, even though they can’t tell whether the program is going to halt in the long run. 
In this way, logical inductors give some formal backing to the intuition of many computer scientists that while the halting problem is undecidable in full generality, this rarely interferes with reasoning about computer programs in practice.[2](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_1_14538 \"See also Calude and Stay’s (2006) “Most Programs Stop Quickly or Never Halt.“\")\n\n\n3. *Affine coherence*: Logical inductors learn to respect logical relationships between different sentences’ truth-values, often long before the sentences can be proven. (E.g., they will learn for arbitrary programs that “this program outputs 3” and “this program outputs 4” are mutually exclusive, often long before they’re able to evaluate the program in question.)\n\n\n4. *Learning pseudorandom frequencies*: When faced with a sufficiently pseudorandom sequence, logical inductors learn to use appropriate statistical summaries. For example, if the Ackermann(*n*,*n*)th digit in the decimal expansion of π is hard to predict for large *n*, a logical inductor will learn to assign ~10% subjective probability to the claim “the Ackermann(*n*,*n*)th digit in the decimal expansion of π is a 7.”\n\n\n5. *Calibration* and *unbiasedness*: On sequences that a logical inductor assigns ~30% probability to, if the average frequency of truth converges, then it converges to ~30%. In fact, on any subsequence where the average frequency of truth converges, there is no efficient method for finding a bias in the logical inductor’s beliefs.\n\n\n6. *Scientific induction*: Logical inductors can be used to do sequence prediction, and when doing so, they dominate the universal semimeasure.\n\n\n7. 
*Closure under conditioning*: Conditional probabilities in this framework are well-defined, and conditionalized logical inductors are also logical inductors.[3](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_2_14538 \"Thus, for example one can make a logical inductor over Peano arithmetic by taking a logical inductor over an empty theory and conditioning it on the Peano axioms.\")\n\n\n8. *Introspection*: Logical inductors have accurate beliefs about their own beliefs, in a manner that avoids the standard paradoxes of self-reference.\n\n\n◦ For instance, the probabilities on a sequence that says “I have probability less than 50% on the *n*th day” go extremely close to 50% and oscillate pseudorandomly, such that there is no polynomial-time method to tell whether the *n*th one is slightly above or slightly below 50%.\n\n\n9. *Self-trust*: Logical inductors learn to trust their future beliefs more than their current beliefs. This gives some formal backing to the intuition that real-world probabilistic agents can often be reasonably confident in their future reasoning in practice, even though Gödel’s incompleteness theorems place strong limits on reflective reasoning in full generality.[4](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_3_14538 \"As an example, imagine that one asks a logical inductor, “What’s your probability of φ, given that in the future you’re going to think φ is likely?” Very roughly speaking, the inductor will answer, “In that case φ would be likely,” even if it currently thinks that φ is quite unlikely. Moreover, logical inductors do this in a way that avoids paradox. 
If φ is “In the future I will think φ is less than 50% likely,” and in the present you ask, “What’s your probability of φ, given that in the future you’re going to believe it is ≥50% likely?” then its answer will be “Very low.” Yet if you ask “What’s your probability of φ, given that in the future your probability will be extremely close to 50%?” then it will answer, “Extremely close to 50%.”\")\n\n\nThe above claims are all quite vague; for the precise statements, refer to [the paper](https://intelligence.org/files/LogicalInduction.pdf).\n\n\nLogical induction was developed by Scott Garrabrant in an effort to solve an open problem we [spoke about](https://intelligence.org/2016/04/21/two-new-papers-uniform/) six months ago. Roughly speaking, we had formalized two different desiderata for good reasoning under logical uncertainty: the ability to recognize patterns in what is provable (such as mutual exclusivity relationships between claims about computer programs), and the ability to recognize statistical patterns in sequences of logical claims (such as recognizing that the decimal digits of π seem pretty pseudorandom). Neither was too difficult to achieve in isolation, but we were surprised to learn that our simple algorithms for achieving one seemed quite incompatible with our simple algorithms for achieving the other. Logical inductors were born of Scott’s attempts to achieve both simultaneously.[5](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_4_14538 \"Early work towards this result can be found at the Intelligent Agent Foundations Forum.\")\n\n\nI think there’s a good chance that this framework will open up new avenues of study in questions of metamathematics, decision theory, game theory, and computational reflection that have long seemed intractable. 
I’m also cautiously optimistic that they’ll improve our understanding of decision theory and counterfactual reasoning, and other problems related to AI [value alignment](https://intelligence.org/technical-agenda/).[6](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_5_14538 \"Consider the task of designing an AI system to learn the preferences of a human (e.g., cooperative inverse reinforcement learning). The usual approach would be to model the human as a Bayesian reasoner trying to maximize some reward function, but this severely limits our ability to model human irrationality and miscalculation even in simplified settings. Logical induction may help us address this problem by providing an idealized formal model of limited reasoners who don’t know (but can eventually learn) the logical implications of all of their beliefs.\nSuppose, for example, that a human agent makes an (unforced) losing chess move. An AI system programmed to learn the human’s preferences from observed behavior probably shouldn’t conclude that the human wanted to lose. Instead, our toy model of this dilemma should allow that the human may be resource-limited and may not be able to deduce the full implications of their moves; and our model should allow that the AI system is aware of this too, or can learn about it.\")\n\n\nWe’ve posted a talk online that helps provide more background for our work on logical induction:[7](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_6_14538 \"Slides from the relatively nontechnical portions; slides from the technical portion. 
For viewers who want to skip to the technical content, we’ve uploaded the talk’s middle segment as a shorter stand-alone video: link.\")\n\n\n \n\n\n\n \n\n\n**Edit**: For a more recent talk on logical induction that goes into more of the technical details, see [here](https://www.youtube.com/watch?v=UOddW4cXS5Y).\n\n\n“[Logical induction](https://intelligence.org/files/LogicalInduction.pdf)” is a large piece of work, and there are undoubtedly still a number of bugs. We’d very much appreciate feedback: send typos, errors, and other comments to [errata@intelligence.org](mailto:errata@intelligence.org).[8](https://intelligence.org/2016/09/12/new-paper-logical-induction/#footnote_7_14538 \"The intelligence.org version will generally be more up-to-date than the arXiv version.\")\n\n\n---\n\n1. While impractical, Solomonoff induction gave rise to a number of techniques (ensemble methods) that perform well in practice. The differences between our algorithm and Solomonoff induction point in the direction of new ensemble methods that could prove useful for managing logical uncertainty, in the same way that modern ensemble methods are useful for managing empirical uncertainty.\n2. See also Calude and Stay’s (2006) “[Most Programs Stop Quickly or Never Halt.](http://arxiv.org/abs/cs/0610153)“\n3. Thus, for example one can make a logical inductor over Peano arithmetic by taking a logical inductor over an empty theory and conditioning it on the Peano axioms.\n4. As an example, imagine that one asks a logical inductor, “What’s your probability of *φ*, given that in the future you’re going to think *φ* is likely?” Very roughly speaking, the inductor will answer, “In that case *φ* would be likely,” even if it currently thinks that *φ* is quite unlikely. 
Moreover, logical inductors do this in a way that avoids paradox. If *φ* is “In the future I will think *φ* is less than 50% likely,” and in the present you ask, “What’s your probability of *φ*, given that in the future you’re going to believe it is ≥50% likely?” then its answer will be “Very low.” Yet if you ask “What’s your probability of *φ*, given that in the future your probability will be *extremely close* to 50%?” then it will answer, “Extremely close to 50%.”\n5. Early work towards this result can be found at the [Intelligent Agent Foundations Forum](https://agentfoundations.org/item?id=270).\n6. Consider the task of designing an AI system to learn the preferences of a human (e.g., [cooperative inverse reinforcement learning](http://arxiv.org/abs/1606.03137)). The usual approach would be to model the human as a Bayesian reasoner trying to maximize some reward function, but this severely limits our ability to model human irrationality and miscalculation even in simplified settings. Logical induction may help us address this problem by providing an idealized formal model of limited reasoners who don’t know (but can eventually learn) the logical implications of all of their beliefs.\nSuppose, for example, that a human agent makes an (unforced) losing chess move. An AI system programmed to learn the human’s preferences from observed behavior probably shouldn’t conclude that the human *wanted* to lose. Instead, our toy model of this dilemma should allow that the human may be resource-limited and may not be able to deduce the full implications of their moves; and our model should allow that the AI system is aware of this too, or can learn about it.\n7. [Slides from the relatively nontechnical portions](https://intelligence.org/files/LogicalInductionSlidesA.pdf); [slides from the technical portion](https://intelligence.org/files/LogicalInductionSlidesB.pdf). 
For viewers who want to skip to the technical content, we’ve uploaded the talk’s middle segment as a shorter stand-alone video: [link](https://www.youtube.com/watch?v=QF-eCscwf38).\n8. The [intelligence.org version](https://intelligence.org/files/LogicalInduction.pdf) will generally be more up-to-date than the [arXiv version](https://arxiv.org/abs/1609.03543).\n\nThe post [New paper: “Logical induction”](https://intelligence.org/2016/09/12/new-paper-logical-induction/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-09-13T00:33:40Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "3c579f69813ca78c1139a130f5f43f91", "title": "Grant announcement from the Open Philanthropy Project", "url": "https://intelligence.org/2016/09/06/grant-open-philanthropy/", "source": "miri", "source_type": "blog", "text": "A major announcement today: the Open Philanthropy Project has granted MIRI $500,000 over the coming year to study the questions outlined in our [agent foundations](https://intelligence.org/technical-agenda/) and [machine learning](https://intelligence.org/2016/07/27/alignment-machine-learning/) research agendas, with a strong chance of renewal next year. This represents MIRI’s largest grant to date, and our [second-largest](https://intelligence.org/2014/04/02/2013-in-review-fundraising/) single contribution.\n\n\nComing on the heels of a $300,000 [donation](https://intelligence.org/2016/08/05/miri-strategy-update-2016/) by Blake Borgeson, this support will help us continue on the growth trajectory we outlined in our [summer](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) and [winter](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) fundraisers last year and effect another doubling of the research team. 
These growth plans assume continued support from other donors in line with our fundraising successes last year; we’ll be discussing our remaining funding gap in more detail in our 2016 fundraiser, which we’ll be kicking off later this month.\n\n\n\n\n---\n\n\nThe Open Philanthropy Project is a joint initiative run by staff from the philanthropic foundation Good Ventures and the charity evaluator GiveWell. Open Phil has recently made it a priority to identify opportunities for researchers to address [potential risks from advanced AI](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence), and we consider their early work in this area promising: grants to [Stuart Russell](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai), [Robin Hanson](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios), and the [Future of Life Institute](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction), plus a stated interest in funding work related to “[Concrete Problems in AI Safety](https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html),” a recent paper co-authored by four Open Phil technical advisers, Christopher Olah (Google Brain), Dario Amodei (OpenAI), Paul Christiano (UC Berkeley), and Jacob Steinhardt (Stanford), along with John Schulman (OpenAI) and Dan Mané (Google Brain).\n\n\nOpen Phil’s grant isn’t a full endorsement, and they note a number of reservations about our work [**in an extensive 
writeup**](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support) detailing the thinking that went into the grant decision. Separately, Open Phil Executive Director Holden Karnofsky has [written some personal thoughts](https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit) about how his views of MIRI and the effective altruism community have evolved in recent years.\n\n\n\nOpen Phil’s decision was informed in part by their technical advisers’ evaluations of our recent work on logical uncertainty and Vingean reflection, together with reviews by seven anonymous computer science professors and one anonymous graduate student. The reviews, most of which are collected [**here**](http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf), are generally negative: reviewers felt that “[Inductive coherence](https://intelligence.org/2016/04/21/two-new-papers-uniform/)” and “[Asymptotic convergence in online learning with unbounded delays](https://intelligence.org/2016/04/21/two-new-papers-uniform/#2)” were not important results and that these research directions were unlikely to be productive, and Open Phil’s advisers were skeptical or uncertain about the work’s relevance to aligning AI systems with human values.\n\n\nIt’s worth mentioning in that context that the results in “Inductive coherence” and “Asymptotic convergence…” led directly to a more significant unpublished result, logical induction, that we’ve recently discussed with Open Phil and members of the effective altruism community. The result is being written up, and we plan to put up a preprint soon. In light of this progress, we are more confident than the reviewers that Garrabrant et al.’s earlier papers represented important steps in the right direction. 
If this wasn’t apparent to reviewers, then it could suggest that our exposition is weak, or that the importance of our results was inherently difficult to assess from the papers alone.\n\n\nIn general, I think the reviewers’ criticisms are reasonable — either I agree with them, or I think it would take a longer conversation to resolve the disagreement. The level of detail and sophistication of the comments is also quite valuable.\n\n\nThe content of the reviews was mostly in line with my advance predictions, though my predictions were low-confidence. I’ve written up quick [**responses**](https://intelligence.org/files/OpenPhil2016Supplement.pdf) to some of the reviewers’ comments, with my predictions and some observations from Eliezer Yudkowsky included in appendices. This is likely to be the beginning of a longer discussion of our research priorities and progress, as we have yet to write up our views on a lot of these issues in any detail.\n\n\nWe’re very grateful for Open Phil’s support, and also for the (significant) time they and their advisers spent assessing our work. 
This grant follows a number of challenging and deep conversations with researchers at GiveWell and Open Phil about our organizational strategy over the years, which have helped us refine our views and arguments.\n\n\nPast public exchanges between MIRI and GiveWell / Open Phil staff include:\n\n\n* May/June/July 2012 – [Holden Karnofsky’s critique of MIRI](http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/) (then SI), [Eliezer Yudkowsky’s reply](http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/), and [Luke Muehlhauser’s reply](http://lesswrong.com/lw/di4/reply_to_holden_on_the_singularity_institute/).\n* October 2013 – Holden, Eliezer, Luke, Jacob Steinhardt, and Dario Amodei’s discussion of [MIRI’s strategy](https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/).\n* January 2014 – Holden, Eliezer, and Luke’s discussion of [existential risk](https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/).\n* February 2014 – Holden, Eliezer, and Luke’s discussion of [future-oriented philanthropy](https://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/).\n\n\nSee also Open Phil’s posts on [transformative AI](http://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence) and [AI risk as a philanthropic opportunity](http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity), and their earlier [AI risk cause report](http://www.openphilanthropy.org/research/cause-reports/ai-risk).\n\n\nThe post [Grant announcement from the Open Philanthropy Project](https://intelligence.org/2016/09/06/grant-open-philanthropy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-09-06T20:04:27Z", "authors": ["Nate Soares"], "summaries": []} -{"id": 
"bc25cf69cc535b4c6f99efb48dff4a6f", "title": "September 2016 Newsletter", "url": "https://intelligence.org/2016/09/03/september-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* New at IAFF: [Modeling the Capabilities of Advanced AI Systems as Episodic Reinforcement Learning](https://agentfoundations.org/item?id=910); [Simplified Explanation of Stratification](https://agentfoundations.org/item?id=927)\n* New at AI Impacts: [Friendly AI as a Global Public Good](http://aiimpacts.org/friendly-ai-as-a-global-public-good/)\n* We ran two research workshops this month: a [veterans’ workshop](https://intelligence.org/workshops/#august-2016) on decision theory for long-time collaborators and staff, and a [machine learning workshop](https://intelligence.org/workshops/#august-2016-ml) focusing on generalizable environmental goals, impact measures, and mild optimization.\n* AI researcher Abram Demski has accepted a research fellowship at MIRI, pending the completion of his PhD. He’ll be starting here in late 2016 / early 2017.\n* Data scientist Ryan Carey is joining MIRI’s [ML-oriented](https://intelligence.org/2016/07/27/alignment-machine-learning/) team this month as an assistant research fellow.\n\n\n**General updates**\n* [MIRI’s 2016 strategy update](https://intelligence.org/2016/08/05/miri-strategy-update-2016/) outlines how our research plans have changed in light of recent developments. 
We also announce a generous $300,000 gift — our second-largest single donation to date.\n* We’ve uploaded nine talks from CSRBAI’s [robustness](https://intelligence.org/2016/08/15/csrbai-talks-on-robustness-and-error-tolerance/) and [preference specification](https://intelligence.org/2016/08/30/csrbai-talks-preference-specification/) weeks, including Jessica Taylor on “Alignment for Advanced Machine Learning Systems” ([video](https://www.youtube.com/watch?v=_sGTqI5qdD4)), Jan Leike on “General Reinforcement Learning” ([video](https://www.youtube.com/watch?v=hSiuJuvTBoE)), Paul Christiano on “Training an Aligned RL Agent” ([video](https://www.youtube.com/watch?v=rperZzDssDE)), and Dylan Hadfield-Menell on “The Off-Switch” ([video](https://www.youtube.com/watch?v=t06IciZknDg)).\n* MIRI COO Malo Bourgon has been [co-chairing](https://standards.ieee.org/develop/indconn/ec/ec_bios.pdf) a committee of IEEE’s [Global Initiative for Ethical Considerations in the Design of Autonomous Systems](http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html). 
He recently moderated a workshop on general AI and superintelligence at [the initiative’s first meeting](http://www.cvent.com/events/ieee-symposium-on-ethics-of-autonomous-systems-seas-europe-/event-summary-28d5322779454a6780b19c07b28023de.aspx).\n* We had a great time at [Effective Altruism Global](https://www.eaglobal.org/), and taught at [SPARC](https://sparc-camp.org/).\n* We hired two new admins: Office Manager Aaron Silverbook, and Communications and Development Strategist Colm Ó Riain.\n\n\n\n**News and links**\n* The Open Philanthropy Project awards $5.6 million to Stuart Russell to launch an academic AI safety research institute: [the Center for Human-Compatible AI](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai).\n* “[Who Should Control Our Thinking Machines?](http://www.bloomberg.com/features/2016-demis-hassabis-interview-issue/)“: Jack Clark interviews DeepMind’s Demis Hassabis.\n* Elon Musk [explains](http://fortune.com/2016/08/17/elon-musk-ai-fear-werner-herzog/): “I think the biggest risk is not that the AI will develop a will of its own, but rather that it will *follow* the will of people that establish its utility function, or its optimization function. And that optimization function, if it is not well-thought-out — even if its intent is benign, it could have quite a bad outcome.”\n* [Modeling Intelligence as a Project-Specific Factor of Production](http://modelingtheworld.benjaminrosshoffman.com/intelligence-project-specific-factor-production): Ben Hoffman compares different AI takeoff scenarios.\n* [Clopen AI](http://futureoflife.org/2016/08/03/op-ed-clopen-ai-openness-in-different-aspects-of-ai-development/): Viktoriya Krakovna weighs the advantages of closed vs. 
open AI.\n* Google X director Astro Teller expresses optimism about the future of AI in a [Medium post](https://medium.com/@astroteller/perspectives-on-ai-f706d234caa5#.9pxyi3p9f) announcing the [first report](https://ai100.stanford.edu/sites/default/files/ai_100_report_0901fnlc_single.pdf) of the Stanford AI100 study.\n* *Buzzfeed* reports on efforts to prevent the development of [lethal autonomous weapons systems](https://www.buzzfeed.com/sarahatopol/how-to-save-mankind-from-the-new-breed-of-killer-robots?utm_term=.ki11pQxyw7#.llGKE4eDV0).\n* In controlled settings, researchers find ways to [detect keystrokes via distortions in WiFi signals](https://threatpost.com/keystroke-recognition-uses-wi-fi-signals-to-snoop/120135/) and [jump air-gaps using hard drive actuator noises](http://arstechnica.com/security/2016/08/new-air-gap-jumper-covertly-transmits-data-in-hard-drive-sounds/).\n* Solid discussions on the EA Forum: [Should Donors Make Commitments About Future Donations?](http://effective-altruism.com/ea/10v/should_donors_make_commitments_about_future/) and [Should You Switch Away From Earning to Give?](http://effective-altruism.com/ea/10s/should_you_switch_away_from_earning_to_give_some/)\n\n\n |\n\n\n \n\n\nThe post [September 2016 Newsletter](https://intelligence.org/2016/09/03/september-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-09-04T07:26:51Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "60665c1783a44d1923a2215bf7333bac", "title": "CSRBAI talks on preference specification", "url": "https://intelligence.org/2016/08/30/csrbai-talks-preference-specification/", "source": "miri", "source_type": "blog", "text": "We’ve uploaded a third set of videos from our recent [Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/) (CSRBAI), co-hosted with the Future of Humanity Institute. 
These talks were part of the week focused on preference specification in AI systems, including the difficulty of specifying safe and useful goals, or specifying safe and useful methods for learning human preferences. All released videos are available on the [CSRBAI web page](https://intelligence.org/colloquium-series/).\n\n\n \n\n\n\n \n\n\n**Tom Everitt**, a PhD student at the Australian National University, spoke about his paper “[Avoiding wireheading with value reinforcement learning](https://arxiv.org/abs/1605.03143),” written with Marcus Hutter ([slides](https://intelligence.org/files/csrbai/everitt-slides.pdf)). Abstract:\n\n\n\n> How can we design good goals for arbitrarily intelligent agents? Reinforcement learning (RL) may seem like a natural approach. Unfortunately, RL does not work well for generally intelligent agents, as RL agents are incentivised to shortcut the reward sensor for maximum reward — the so-called wireheading problem.\n> \n> \n> In this paper we suggest an alternative to RL called value reinforcement learning (VRL). In VRL, agents use the reward signal to learn a utility function. The VRL setup allows us to remove the incentive to wirehead by placing a constraint on the agent’s actions. The constraint is defined in terms of the agent’s belief distributions, and does not require an explicit specification of which actions constitute wireheading. Our VRL agent offers the ease of control of RL agents and avoids the incentive for wireheading.\n> \n> \n\n\n\n \n\n\n\n \n\n\n**Dylan Hadfield-Menell**, a PhD Student at UC Berkeley, spoke about designing corrigible, yet functional, artificial agents ([slides](https://intelligence.org/files/csrbai/hadfield-menell-slides.pdf)) in a follow-up to the paper “[Cooperative inverse reinforcement learning](http://arxiv.org/abs/1606.03137).” Abstract for the talk:\n\n\n\n> An artificial agent is corrigible if it accepts or assists in outside correction for its objectives. 
At a minimum, a corrigible agent should allow its programmers to turn it off. An artificial agent is functional if it is capable of performing non-trivial tasks. For example, a machine that immediately turns itself off is useless (except perhaps as a novelty item).\n> \n> \n> In a standard reinforcement learning agent, incentives for these behaviors are essentially at odds. The agent will either want to be turned off, want to stay alive, or be indifferent between the two. Of these, indifference is the only safe and useful option, but there is reason to believe that this is a strong condition on the agent’s incentives. In this talk, I will propose a design for a corrigible, yet functional, agent as the solution to a two-player cooperative game where the robot’s goal is to maximize the human’s sum of rewards.\n> \n> \n> We do an equilibrium analysis of the solutions to the game and identify three key properties. First, we show that if the human acts rationally, then the robot will be corrigible. Second, we show that if the robot has no uncertainty about human preferences, then the robot will be incorrigible or non-functional if the human is even slightly suboptimal. Finally, we analyze the Gaussian setting and characterize the necessary and sufficient conditions, as a function of the robot’s belief about human preferences and the degree of human irrationality, to ensure that the robot will be corrigible and functional.\n> \n> \n\n\n \n\n\n\n \n\n\n**Jan Leike**, a recent addition at the Future of Humanity Institute, spoke about general reinforcement learning ([slides](https://intelligence.org/files/csrbai/leike-slides.pdf)). Abstract:\n\n\n\n> General reinforcement learning (GRL) is the theory of agents acting in unknown environments that are non-Markov, non-ergodic, and only partially observable. GRL can serve as a model for strong AI and has been used extensively to investigate questions related to AI safety. 
Our focus is not on practical algorithms, but rather on the fundamental underlying problems: How do we balance exploration and exploitation? How do we explore optimally? When is an agent optimal? We outline current shortcomings of the model and point to future research directions.\n> \n> \n\n\n \n\n\n\n \n\n\n**Bas Steunebrink** spoke about experience-based AI and understanding, meaning, and values ([slides](https://intelligence.org/files/csrbai/steunebrink-slides.pdf)). Excerpt:\n\n\n\n> We will discuss ongoing research into value learning: how an agent can gradually learn to understand the world it’s in, learn to understand what we mean for it to do, learn to understand as well as be compelled to adhere to proper values, and learn to do so robustly in the face of inaccurate, inconsistent, and incomplete information as well as underspecified, conflicting, and updatable goals. To fulfill this ambitious vision we have a long road of gradual teaching and testing ahead of us.\n> \n> \n\n\n \n\n\nFor a recap of the week 2 videos on robustness and error-tolerance, see my [previous blog post](https://intelligence.org/2016/08/15/csrbai-talks-on-robustness-and-error-tolerance/). 
For a summary of how the event as a whole went, and videos of the opening talks by Stuart Russell, Alan Fern, and Francesca Rossi, see my [first blog post](https://intelligence.org/2016/08/02/2016-summer-program-recap/).\n\n\nThe post [CSRBAI talks on preference specification](https://intelligence.org/2016/08/30/csrbai-talks-preference-specification/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-08-31T00:08:24Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "3dda198aa6aeb73becfee8debcd44ad1", "title": "CSRBAI talks on robustness and error-tolerance", "url": "https://intelligence.org/2016/08/15/csrbai-talks-on-robustness-and-error-tolerance/", "source": "miri", "source_type": "blog", "text": "We’ve uploaded a second set of videos from our recent [Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/) (CSRBAI) at the MIRI office, co-hosted with the Future of Humanity Institute. These talks were part of the week focused on robustness and error-tolerance in AI systems, and how to ensure that when AI systems fail, they fail gracefully and detectably. All released videos are available on the [CSRBAI web page](https://intelligence.org/colloquium-series/).\n\n\n \n\n\n\n \n\n\n**Bart Selman**, professor of computer science at Cornell University, spoke about machine reasoning and planning ([slides](https://intelligence.org/files/csrbai/selman-slides.pdf)). Excerpt:\n\n\n\n> I’d like to look at what I call “non-human intelligence.” It does get less attention, but the advances also have been very interesting, and they’re in reasoning and planning. It’s actually partly not getting as much attention in the AI world because it’s more used in software verification, program synthesis, and automating science and mathematical discoveries – other areas related to AI but not a central part of AI that are using these reasoning technologies. 
Especially the software verification world – Microsoft, Intel, IBM – push these reasoning programs very hard, and that’s why there’s so much progress, and I think it will start feeding back into AI in the near future.\n> \n> \n\n\n\n \n\n\n\n \n\n\n**Jessica Taylor** presented on MIRI’s recently released second technical agenda, “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)”. Abstract:\n\n\n\n> If artificial general intelligence is developed using algorithms qualitatively similar to those of modern machine learning, how might we target the resulting system to safely accomplish useful goals in the world? I present a technical agenda for a new MIRI project focused on this question.\n> \n> \n\n\n \n\n\n\n \n\n\n**Stefano Ermon**, assistant professor of computer science at Stanford, gave a talk on probabilistic inference and accuracy guarantees ([slides](https://intelligence.org/files/csrbai/ermon-slides.pdf)). Abstract:\n\n\n\n> Statistical inference in high-dimensional probabilistic models is one of the central problems in AI. To date, only a handful of distinct methods have been developed, most notably (MCMC) sampling and variational methods. While often effective in practice, these techniques do not typically provide guarantees on the accuracy of the results. In this talk, I will present alternative approaches based on ideas from the theoretical computer science community. These approaches can leverage recent advances in combinatorial optimization and provide provable guarantees on the accuracy.\n> \n> \n\n\n \n\n\n\n \n\n\n**Paul Christiano**, PhD student at UC Berkeley, gave a talk about training aligned reinforcement learning agents. Excerpt:\n\n\n\n> That’s the goal of the reinforcement learning problem. We as the designers of an AI system have some other goal in mind, which maybe we don’t have a simple formalization of. 
I’m just going to say, “We want the agent to do the right thing.” We don’t really care about what reward the agent sees; we just care that it’s doing the right thing.\n> \n> \n> So, intuitively, we can imagine that there’s some unobserved utility function *U* which acts on a transcript and just evaluates the consequences of the agent behaving in that way. So it has to average over all the places in the universe this transcript might occur, and it says, “What would I want the agent to do, on average, when it encounters this transcript?”\n> \n> \n\n\n \n\n\n\n \n\n\n**Jim Babcock** discussed the AGI containment problem ([slides](https://intelligence.org/files/csrbai/babcock-slides.pdf)). Abstract:\n\n\n\n> Ensuring that powerful AGIs are safe will involve testing and experimenting on them, but a misbehaving AGI might try to tamper with its test environment to gain access to the internet or modify the results of tests. I will discuss the challenges of securing environments to test AGIs in.\n> \n> \n\n\nFor a summary of how the event as a whole went, and videos of the opening talks by Stuart Russell, Alan Fern, and Francesca Rossi, see my [last blog post](https://intelligence.org/2016/08/02/2016-summer-program-recap/).\n\n\nThe post [CSRBAI talks on robustness and error-tolerance](https://intelligence.org/2016/08/15/csrbai-talks-on-robustness-and-error-tolerance/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-08-15T17:28:40Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "706c7638046860a471b1d96610b59ce5", "title": "MIRI strategy update: 2016", "url": "https://intelligence.org/2016/08/05/miri-strategy-update-2016/", "source": "miri", "source_type": "blog", "text": "This post is a follow-up to Malo’s [2015 review](https://intelligence.org/2016/07/29/2015-in-review/), sketching out our new 2016-2017 plans. 
Briefly, our top priorities (in decreasing order of importance) are to (1) make technical progress on the research problems we’ve identified, (2) expand our team, and (3) build stronger ties to the wider research community.\n\n\nAs discussed in a [previous blog post](https://intelligence.org/2016/05/04/announcing-a-new-research-program/), the biggest update to our research plans is that we’ll be splitting our time going forward between our 2014 research agenda (the “[agent foundations](http://intelligence.org/technical-agenda/)” agenda) and a new research agenda oriented toward machine learning work led by Jessica Taylor: “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/).”\n\n\nThree additional news items:\n\n\n1. I’m happy to announce that MIRI has received support from a major new donor: entrepreneur and computational biologist Blake Borgeson, who has made a $300,000 donation to MIRI. This is the second-largest donation MIRI has received in its history, beaten only by Jed McCaleb’s [2013 cryptocurrency donation](https://intelligence.org/2014/04/02/2013-in-review-fundraising). As a result, we’ve been able to execute on our growth plans with more speed, confidence, and flexibility.\n\n\n2. This year, instead of running separate summer and winter fundraisers, we’re merging them into one more ambitious fundraiser, which will take place in September.\n\n\n3. I’m also pleased to announce that Abram Demski has accepted a position as a MIRI research fellow. Additionally, Ryan Carey has accepted a position as an assistant research fellow, and we’ve hired some new administrative staff.\n\n\nI’ll provide more details on these and other new developments below.\n\n\n\n#### Priority 1: Make progress on open technical problems\n\n\nSince 2013, MIRI’s primary goal has been to make technical progress on AI alignment. 
Nearly all of our other activities are either directly or indirectly aimed at producing more high-quality alignment research, either at MIRI or at other institutions.\n\n\nAs mentioned above, Jessica Taylor is now leading an “Alignment for Advanced Machine Learning Systems” program, which will occupy about half of our research efforts going forward. [Our goal with this work](https://intelligence.org/2016/05/04/announcing-a-new-research-program/) will be to develop formal models and theoretical tools that we predict would aid in the alignment of highly capable AI systems, under the assumption that such systems will be qualitatively similar to present-day machine learning systems. Our research communications manager, Rob Bensinger, has [summarized](https://intelligence.org/2016/07/27/alignment-machine-learning/) themes in our new work and its relationship to other AI safety research proposals.\n\n\nEarlier in the year, I jotted down a summary of how much technical progress I thought we’d made on our research agenda in 2015 (noted by Malo in our [2015 review](https://intelligence.org/2016/07/29/2015-in-review/)), relative to my expectations. In short, I expected modest progress in all of our research areas except [value specification](https://intelligence.org/files/ValueLearningProblem.pdf) (which was low-priority for us in 2015). We made progress more quickly than expected on some problems, and more slowly than expected on others.\n\n\nIn [naturalized induction](https://intelligence.org/files/RealisticWorldModels.pdf) and [logical uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf), we exceeded my expectations, making sizable progress. In [error tolerance](https://intelligence.org/files/Corrigibility.pdf), we undershot my expectations and made only limited progress. 
In our other research areas, we made about as much progress as I expected: modest progress in [decision theory](http://arxiv.org/abs/1507.01986) and [Vingean reflection](https://intelligence.org/files/VingeanReflection.pdf), and limited progress in value specification.\n\n\nI also made personal predictions earlier in the year about how much progress we’d make through the end of 2016: modest progress in decision theory, error tolerance, and value specification; limited progress in Vingean reflection; and sizable progress in logical uncertainty and naturalized induction. (Starting in 2017, I’ll be making my predictions publicly early in the year.)\n\n\nBreaking these down:\n\n\n* Vingean reflection is a lower priority for us this year. This is in part because we’re less confident that there’s additional low-hanging fruit to be plucked here, absent additional progress in logical uncertainty or decision theory. Although we’ve been learning a lot about implementation snags through Benya Fallenstein, Ramana Kumar, and Jack Gallagher’s ongoing [HOL-in-HOL project](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/), we haven’t seen any major theoretical breakthroughs in this area since Benya developed model polymorphism [in late 2012](http://lesswrong.com/lw/e4e/an_angle_of_attack_on_/). Benya and Kaya Fallenstein are still studying this topic occasionally.\n\n\n* In contrast, we’ve continued to make steady gains in the basic theory of logical uncertainty, naturalized induction, and decision theory over the years. Benya, Kaya, Abram, Scott Garrabrant, Vanessa Kosoy, and Tsvi Benson-Tilsen will be focusing on these areas over the coming months, and I expect to see advances in 2016 of similar importance to what we saw in 2015.\n\n\n* Our [machine learning agenda](https://intelligence.org/2016/07/27/alignment-machine-learning/) is primarily focused on error tolerance and value specification, making these much higher priorities for us this year. 
I expect to see modest progress from Jessica Taylor, Patrick LaVictoire, Andrew Critch, Stuart Armstrong, and Ryan Carey’s work on these problems. It’s harder to say whether there will be any big breakthroughs here, given how new the program is.\n\n\nEliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I’ll be writing about MIRI strategy and forecasting questions.\n\n\nWe spent large portions of the first half of 2016 writing up existing results and research proposals and coordinating with other researchers (such as through our visit to FHI and our [Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/)), and we have a bit more writing ahead of us in the coming weeks. We managed to get a fair bit of research done — we’ll be announcing a sizable new logical uncertainty result once the aforementioned writing is finished — but we’re looking forward to a few months of uninterrupted research time at the end of the year, and I’m excited to see what comes of it.\n\n\n\n#### Priority 2: Expand our team\n\n\nGrowing MIRI’s research team is a high priority. We’re also expanding our admin team, with a goal of freeing up more of my time and better positioning MIRI to positively influence the booming AI risk conversation.\n\n\nAfter making significant contributions to our research over the past year as a research associate (e.g., “[Inductive Coherence](https://intelligence.org/2016/04/21/two-new-papers-uniform/#1)” and [Structural Risk Mitigation](https://agentfoundations.org/item?id=292)) and participating in our [CSRBAI and MIRI Summer Fellows programs](https://intelligence.org/2016/08/02/2016-summer-program-recap/), Abram Demski has signed on to join our core research team. Abram is planning to join in late 2016 or early 2017, after completing his computer science PhD at the University of Southern California. 
Mihály Bárász is also slated to join our core research team at a future date, and we are considering several other promising candidates for research fellowships.\n\n\nIn the nearer term, data scientist Ryan Carey has been collaborating with us on our machine learning agenda and will be joining us as an assistant research fellow in September.\n\n\nWe’ve also recently hired a new office manager, Aaron Silverbook, and a communications and development admin, Colm Ó Riain.\n\n\nWe have an open [type theorist job ad](https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/), and are more generally seeking [research fellows](https://intelligence.org/careers/research-fellow/) with strong mathematical intuitions and a talent for formalizing and solving difficult problems, or for fleshing out and writing up results for publication.\n\n\nWe’re also seeking communications and outreach specialists (e.g., computer programmers with very strong writing skills) to help us keep pace with the lively public and academic AI risk conversation. 
If you’re interested, send a résumé and nonfiction writing samples to [Rob](mailto:rob@intelligence.org).\n\n\n#### Priority 3: Collaborate and communicate with other researchers\n\n\nThere have been a number of new signs in 2016 that AI alignment is going (relatively) mainstream:\n\n\n* Stuart Russell and his students’ recent work on [value learning](http://arxiv.org/abs/1606.03137) and [corrigibility](https://intelligence.org/files/csrbai/hadfield-menell-slides.pdf) (including a joint [grant project](https://intelligence.org/files/CorrigibilityAISystems.pdf) with MIRI);\n* positive reactions from [Eric Schmidt](http://fortune.com/2016/06/28/artificial-intelligence-potential/) and the [press](http://www.newsweek.com/google-big-red-button-ai-artificial-intelligence-save-world-elon-musk-466753) to a Google DeepMind / Future of Humanity Institute collaboration on [corrigibility](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/) (partly supported by MIRI);\n* the new “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)” research proposal announced by [Google Research](https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html) and [OpenAI](https://openai.com/blog/concrete-ai-safety-problems/) (along with the Open Philanthropy Project’s [declaration of interest in funding such research](http://www.openphilanthropy.org/blog/concrete-problems-ai-safety));\n* and other, smaller developments.\n\n\nMIRI’s goal is to ensure that the AI alignment problem gets solved, whether it’s MIRI solving it or some other group. As such, we’re excited by the new influx of attention directed at the alignment problem, and view this as an important time to nurture the field.\n\n\nAs AI safety research goes more mainstream, the pool of researchers we can dialogue with is becoming larger. 
At the same time, our own approach to the problem — specifically focused on the most long-term, high-stakes, and poorly-understood parts of the problem, and the parts that are [least concordant with academic and industry incentives](https://intelligence.org/2015/08/14/what-sets-miri-apart/) — remains unusual. Absent MIRI, I think that this part of the conversation would be almost entirely neglected.\n\n\nTo help promote [our approach](https://intelligence.org/2015/07/27/miris-approach/) and grow the field, we intend to host more workshops aimed at diverse academic audiences. We’ll be hosting a machine learning workshop in the near future, and might run more events like CSRBAI going forward. We also have a backlog of past technical results to write up, which we expect to be valuable for engaging more researchers in computer science, economics, mathematical logic, decision theory, and other areas.\n\n\nWe’re especially interested in finding ways to hit priorities 1 and 3 simultaneously, pursuing important research directions that also help us build stronger ties to the wider academic world. One of several reasons for our new research agenda is its potential to encourage more alignment work by the ML community.\n\n\n\n\n---\n\n\nShort version: in the medium term, our research program will have a larger focus on error-tolerance and value specification research, with more emphasis on ML-inspired AI approaches, and we’re increasing the size of our research team in pursuit of that goal.\n\n\nRob, Malo, and I will be saying more about our funding situation and organizational strategy in September, when we kick off our 2016 fundraising drive. 
As part of that series of posts, I’ll also be writing more about how our current strategy fits into our long-term goals and priorities.\n\n\nFinally, if you’re attending [Effective Altruism Global](http://eaglobal.org/) this weekend, note that we’ll be running two workshops ([one](http://sched.co/7rCF) on Jessica’s new project, [another](http://sched.co/7oEx) on the aforementioned new logical uncertainty results), as well as some office hours (both with the [research team](http://sched.co/7xbd) and with the [admin team](http://sched.co/7xbN)). If you’re there, feel free to drop by, say hello, and ask more about what we’ve been up to.\n\n\nThe post [MIRI strategy update: 2016](https://intelligence.org/2016/08/05/miri-strategy-update-2016/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-08-05T20:40:22Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "a9a2234bf4b9fbe38b8b13300855197a", "title": "August 2016 Newsletter", "url": "https://intelligence.org/2016/08/03/august-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* A new paper: “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/).” Half of our research team will be focusing on this research agenda going forward, while the other half continues to focus on the [agent foundations agenda](https://intelligence.org/technical-agenda/).\n* New at AI Impacts: [Returns to Scale in Research](http://aiimpacts.org/returns-to-scale-in-research/)\n* Evan Lloyd represented [MIRIxLosAngeles](https://intelligence.org/mirix/) at AGI-16 this month, presenting “[Asymptotic Logical Uncertainty and the Benford Test](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/)” ([slides](https://intelligence.org/files/benford_AGI_2016_handout.pdf)).\n* We’ll be announcing a breakthrough in logical 
uncertainty this month, related to Scott Garrabrant’s [previous results](https://intelligence.org/2016/04/21/two-new-papers-uniform/).\n\n\n**General updates**\n* [Our 2015 in review](https://intelligence.org/2016/07/29/2015-in-review/), with a focus on the technical problems we made progress on.\n* Another recap: how our summer [colloquium series and fellows program](https://intelligence.org/2016/08/02/2016-summer-program-recap/) went.\n* We’ve uploaded our first [CSRBAI](https://intelligence.org/colloquium-series/) talks: Stuart Russell on “AI: The Story So Far” ([video](https://www.youtube.com/watch?v=zBCOMm_ytwM)), Alan Fern on “Toward Recognizing and Explaining Uncertainty” ([video](https://www.youtube.com/watch?v=3lD6Sygy6EQ)), and Francesca Rossi on “Moral Preferences” ([video](https://www.youtube.com/watch?v=QxwKxJN4WlQ)).\n* We [submitted our recommendations](https://intelligence.org/2016/07/23/ostp/) to the White House Office of Science and Technology Policy, cross-posted to our blog.\n* We attended IJCAI and the White House’s [AI and economics](https://artificialintelligencenow.com/) event. Furman on technological unemployment ([video](https://www.youtube.com/watch?v=BuUMLXRGJaY)) and other talks are available online.\n* Talks from June’s [safety and control in AI](https://www.cmu.edu/safartint/watch.html) event are also online. 
Speakers included Microsoft’s Eric Horvitz ([video](https://scs.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=62eaa2ae-1151-3970-4c72-a2f943edc485)), FLI’s Richard Mallah ([video](https://scs.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=9c1fb22c-4f2d-daf0-be7e-a2c09a7d8139)), Google Brain’s Dario Amodei ([video](https://scs.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=37ad7524-7572-43be-8b91-4dbe1e7f5d8e)), and IARPA’s Jason Matheny ([video](https://scs.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=f2573b72-23d4-497c-b2bd-ecca7557ff55)).\n\n\n\n**News and links**\n* [Complexity No Bar to AI](http://www.gwern.net/Complexity%20vs%20AI): Gwern Branwen argues that computational complexity theory provides little reason to doubt that AI can surpass human intelligence.\n* Bill Nordhaus, the world’s leading climate change economist, writes a paper on [the economics of singularity scenarios](http://cowles.yale.edu/sites/default/files/files/pub/d20/d2021.pdf).\n* The Open Philanthropy Project has awarded Robin Hanson a three-year $265,000 grant to study [multipolar AI scenarios](http://www.overcomingbias.com/2016/07/my-new-grant.html). 
See also [Hanson’s new argument](http://www.overcomingbias.com/2016/08/researcher-returns-diminish.html) for expecting a long era of whole-brain emulations prior to the development of AI with superhuman reasoning abilities.\n* “[Superintelligence Cannot Be Contained](https://arxiv.org/abs/1607.00913)” discusses computability-theoretic limits to AI verification.\n* The *Financial Times* runs a good [profile of Nick Bostrom](https://www.fhi.ox.ac.uk/wp-content/uploads/FT-AI-article.pdf).\n* DeepMind software reduces Google’s data center cooling bill [by 40%](https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/).\n* In a promising development, US federal regulators argue for [the swift development and deployment](http://arstechnica.com/cars/2016/07/federal-regulators-says-car-makers-cannot-wait-for-perfect-on-automation/) of self-driving cars to reduce automobile accidents: “We cannot wait for perfect. We lose too many lives waiting for perfect.”\n\n\n |\n\n\n\nThe post [August 2016 Newsletter](https://intelligence.org/2016/08/03/august-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-08-04T13:07:38Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "312b4f193428d728b41797d1fa0c306b", "title": "2016 summer program recap", "url": "https://intelligence.org/2016/08/02/2016-summer-program-recap/", "source": "miri", "source_type": "blog", "text": "As previously [announced](https://intelligence.org/2016/03/28/announcing-a-new-colloquium-series-and-fellows-program/), we recently ran a 22-day **[Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/)** (CSRBAI) at the MIRI office, co-hosted with the Oxford Future of Humanity Institute. The colloquium was aimed at bringing together safety-conscious AI scientists from academia and industry to share their recent work. 
The event served that purpose well, initiating some new collaborations and a number of new conversations between researchers who hadn’t interacted before or had only talked remotely.\n\n\nOver 50 people attended from 25 different institutions, with an average of 15 people present on any given talk or workshop day. In all, there were 17 talks and four [weekend workshops](http://intelligence.org/workshops/#june-2016-csrbaiam) on the topics of transparency, robustness and error-tolerance, preference specification, and agent models and multi-agent dilemmas. The full schedule and talk slides are available on the [event page](https://intelligence.org/colloquium-series/). Videos from the first day of the event are now available, and we’ll be posting the rest of the talks online soon:\n\n\n \n\n\n\n \n\n\n**Stuart Russell**, professor of computer science at UC Berkeley and co-author of *Artificial Intelligence: A Modern Approach*, gave the opening keynote. Russell spoke on “AI: The Story So Far” ([slides](https://intelligence.org/files/csrbai/russell-slides.pdf)). Abstract:\n\n\n\n> I will discuss the need for a fundamental reorientation of the field of AI towards provably beneficial systems. This need has been disputed by some, and I will consider their arguments. I will also discuss the technical challenges involved and some promising initial results.\n> \n> \n\n\nRussell discusses his recent work on [cooperative inverse reinforcement learning](http://arxiv.org/abs/1606.03137) 36 minutes in. 
This paper and Dylan Hadfield-Menell’s related talk on corrigibility ([slides](https://intelligence.org/files/csrbai/hadfield-menell-slides.pdf)) inspired lots of interest and discussion at CSRBAI.\n\n\n\n \n\n\n\n \n\n\n**Alan Fern**, associate professor of computer science at Oregon State University, discussed his work with AAAI president and OSU distinguished professor of computer science Tom Dietterich in “Toward Recognizing and Explaining Uncertainty” ([slides 1](https://intelligence.org/files/csrbai/fern-slides-1.pdf), [slides 2](https://intelligence.org/files/csrbai/fern-slides-2.pdf)). Fern and Dietterich’s work is described in a Future of Life Institute [grant proposal](http://futureoflife.org/first-ai-grant-recipients/):\n\n\n\n> The development of AI technology has progressed from working with “known knowns”—AI planning and problem solving in deterministic, closed worlds—to working with “known unknowns”—planning and learning in uncertain environments based on probabilistic models of those environments. A critical challenge for future AI systems is to behave safely and conservatively in open worlds, where most aspects of the environment are not modeled by the AI agent—the “unknown unknowns”.\n> \n> \n> Our team, with deep experience in machine learning, probabilistic modeling, and planning, will develop principles, evaluation methodologies, and algorithms for learning and acting safely in the presence of the unknown unknowns. For supervised learning, we will develop UU-conformal prediction algorithms that extend conformal prediction to incorporate nonconformity scores based on robust anomaly detection algorithms. This will enable supervised learners to behave safely in the presence of novel classes and arbitrary changes in the input distribution. For reinforcement learning, we will develop UU-sensitive algorithms that act to minimize risk due to unknown unknowns. 
A key principle is that AI systems must broaden the set of variables that they consider to include as many variables as possible in order to detect anomalous data points and unknown side-effects of actions.\n> \n> \n\n\n \n\n\n\n \n\n\n**Francesca Rossi**, professor of computer science at Padova University in Italy, research scientist at IBM, and president of IJCAI, spoke on “Moral Preferences” ([slides](https://intelligence.org/files/csrbai/rossi-slides.pdf)). Abstract:\n\n\n\n> Intelligent systems are going to be more and more pervasive in our everyday lives. They will take care of elderly people and kids, they will drive for us, and they will suggest to doctors how to cure a disease. However, we cannot let them do all these very useful and beneficial tasks if we don’t trust them. To build trust, we need to be sure that they act in a morally acceptable way. So it is important to understand how to embed moral values into intelligent machines.\n> \n> \n> Existing preference modeling and reasoning frameworks can be a starting point, since they define priorities over actions, just like an ethical theory does. However, many more issues are involved when we mix preferences (which are at the core of decision making) and morality, both at the individual level and in a social context. I will discuss some of these issues as well as some possible solutions.\n> \n> \n\n\n \n\n\n\n \n\n\nOther speakers at the event included Tom Dietterich (OSU), Bart Selman (Cornell), Paul Christiano (UC Berkeley), and MIRI researchers Jessica Taylor and Andrew Critch.\n\n\n\n\n---\n\n\nThe preference specification workshop attracted the most excitement and activity at CSRBAI. 
Other activities and discussion topics at CSRBAI included:\n\n\n* Discussions about potential applications of complexity theory to transparency: using [interactive polynomial-time](https://en.wikipedia.org/wiki/IP_(complexity)) proof protocols or [probabilistically checkable proofs](https://en.wikipedia.org/wiki/Probabilistically_checkable_proof) to communicate complicated beliefs and reasons from powerful AI systems to humans.\n* Some progress clarifying different methods of training explanation systems for informed oversight.\n* Investigations into the theory of cooperative inverse reinforcement learning and other unobserved-reward games, led by Jan Leike and Tom Everitt of Australian National University.\n* Discussions about the hazards associated with reinforcement learning agents that manipulate the source of their reward function (which is the human or a learned representation of the human).\n* Interesting discussions about corrigibility viewed as a value-of-information problem.\n* Development of [AI safety environments](https://gym.openai.com/envs#safety) by Rafael Cosman and other attendees for the OpenAI Reinforcement Learning Gym, illustrating topics like [interruptibility](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/) and semi-supervised learning. Ideas and conversation from Chris Olah, Dario Amodei, Paul Christiano, and Jessica Taylor helped seed these gyms, and CSRBAI participants who helped develop them included Owain Evans, Sune Jakobsen, Stuart Armstrong, Tom Everitt, Rafael Cosman, and David Krueger.\n* Discussions of ideas for an OpenGym environment asking for low-impact agents, using an adversarial distinguisher.\n* Discussions of Jessica Taylor’s [memoryless Cartesian environments](https://agentfoundations.org/item?id=853) aimed at extending the idea to non-Cartesian worlds / logical counterfactuals using reference-class decision-making. 
Discussions of using “logically past” experience to learn about counterfactuals and do exploration without having a high chance of exploring in the real world.\n* New insights into the problem of logical counterfactuals, with new associated formalisms. Applications of MIRI’s recent logical uncertainty advances to decision theory.\n* A lot of advance discussion of MIRI’s “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” technical agenda.\n\n\nThe colloquium series ran quite smoothly, and we received positive feedback from attendees, though some noted that the event would likely have benefited from more structure. When we run events like this in the future, our main adjustment will be to compress the schedule and run more focused events similar to our past [workshops](https://intelligence.org/workshops/).\n\n\n\n\n---\n\n\nWe also co-ran a 16-day **[MIRI Summer Fellows](http://rationality.org/miri-summer-fellows-2016)** program with the Center for Applied Rationality in June. The program’s 14 attendees came from a variety of technical backgrounds and ranged from startup founders to undergraduates to assistant professors.\n\n\nOur MIRISF programs have proven useful in the past for identifying future MIRI hires (one full-time and two part-time MIRI researchers from the 2015 MIRISF program). 
The primary focus, however, is on developing new problem-solving skills and [mathematical intuitions](https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/) for CS researchers and providing an immersive crash course on MIRI’s [active research projects](https://intelligence.org/research-guide/).\n\n\nThe program had four distinct phases: a four-day CFAR retreat (followed by a rest day), a two-day course in MIRI’s research agenda, three days of working together on research topics (similar to a MIRI research workshop, and followed by another off day), and three days of miscellaneous activities: [Tetlock-style](http://slatestarcodex.com/2016/02/04/book-review-superforecasting/) forecasting practice, one-on-ones with MIRI researchers, [security mindset](http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html) discussions, planning ahead for future research and collaborations, etc.\n\n\nTo receive notifications from us about future programs like MIRISF, use [this form](http://goo.gl/forms/Y49P3B4LmE). To get in touch with us about collaborating at future MIRI workshops like the ones at CSRBAI, send us your info via our [general application form](https://machineintelligence.typeform.com/to/fot777).\n\n\nThe post [2016 summer program recap](https://intelligence.org/2016/08/02/2016-summer-program-recap/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-08-02T17:53:56Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "482635d7502b776b920e3b129981b9c2", "title": "2015 in review", "url": "https://intelligence.org/2016/07/29/2015-in-review/", "source": "miri", "source_type": "blog", "text": "As Luke had done in years past (see [2013 in review](https://intelligence.org/2013/12/20/2013-in-review-operations) and [2014 in review](https://intelligence.org/2015/03/22/2014-review/)), I (Malo) wanted to take some time to review our activities from last year. 
In the coming weeks Nate will provide a big-picture strategy update. Here, I’ll take a look back at 2015, focusing on our research progress, academic and general outreach, fundraising, and other activities.\n\n\nAfter seeing signs in 2014 that interest in AI safety issues was on the rise, we [made plans](https://intelligence.org/2014/06/11/mid-2014-strategic-plan/) to grow our research team. Fueled by the response to Bostrom’s *[Superintelligence](https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742?sa-no-redirect=1)* and the Future of Life Institute’s “[Future of AI](http://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/)” conference, interest continued to grow in 2015. This suggested that we could afford to accelerate our plans, but it wasn’t clear how quickly.\n\n\nIn 2015 we did not release a mid-year strategic plan, as Luke did in 2014. Instead, we laid out various conditional strategies dependent on how much funding we raised during our [2015 Summer Fundraiser](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/). The response was great; we had our most successful fundraiser to date. We hit [our first two funding targets](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/) (and then some), and set out on an accelerated 2015/2016 growth plan.\n\n\nAs a result, 2015 was a big year for MIRI. After publishing our [technical agenda](https://intelligence.org/technical-agenda/) at the start of the year, we made progress on many of the open problems it outlined, doubled the size of our core research team, strengthened our connections with industry groups and academics, and raised enough funds to maintain our growth trajectory. We’re very grateful to all our supporters, without whom this progress wouldn’t have been possible. 
\n\n \n\n\n\n### 2015 Research Progress\n\n\nOur “[Agent Foundations for Aligning Machine Intelligence with Human Interests](http://intelligence.org/technical-agenda)” research agenda divides open problems into three categories: high reliability (which includes logical uncertainty, naturalized induction, decision theory, and Vingean reflection), error tolerance, and value specification.[1](https://intelligence.org/2016/07/29/2015-in-review/#footnote_0_13403 \"This paper was originally titled “Aligning Superintelligence with Human Interests.” We’ve renamed it in order to emphasize that this research agenda takes a specific approach to the alignment problem, and other approaches are possible too—including, relevantly, Jessica Taylor’s new “Alignment for Advanced Machine Learning Systems” agenda.\") MIRI’s top goal in 2015 was to make progress on these problems.\n\n\nWe met our expectations for research progress in each category, with the exception of logical uncertainty and naturalized induction (where we made more progress than expected) and error tolerance (where we made less progress than expected).\n\n\nBelow I’ve provided a brief summary of our progress in each area, with additional details and a full publication list in collapsed “Read More” sections. Some of the papers we published in 2015 were based on research from 2014 or earlier, and some of our 2015 results weren’t published until 2016 (or remain unpublished). In this review I’ll focus on 2015’s new technical developments, rather than on pre-2015 material that happened to be published in that year.\n\n\n##### Logical Uncertainty and Naturalized Induction\n\n\nWe expected to make modest progress on these two problems in 2015. 
I’m pleased to report we made sizable progress.\n\n\n2015 saw the tail end of our development of [reflective oracles](https://intelligence.org/2015/04/28/new-papers-reflective/), and early work on “optimal estimators.” Our most important research advance of the year, however, was likely our success [dividing logical uncertainty into two subproblems](https://intelligence.org/2016/04/21/two-new-papers-uniform/), which happened in late 2015 and the very beginning of 2016.\n\n\nOne intuitive constraint on correct logically uncertain reasoning is that one’s probabilities reflect known logical relationships between claims. For example, if you know that two claims are mutually exclusive (such as “this computation outputs a 3” and “this computation outputs a 7”), then even if you can’t evaluate the claims, you should assign probabilities to the two claims that sum to at most 1.\n\n\nA second intuitive constraint is that one’s probabilities reflect empirical regularities. Once you observe enough digits of π, you should eventually guess that the numbers 8 and 3 occur equally often in π’s decimal expansion, even if you have not yet proven that π is [normal](https://en.wikipedia.org/wiki/Normal_number).\n\n\nIn 2015, we developed two different algorithms to solve these two subproblems in isolation.\n\n\n\n\n\n [Read More](https://intelligence.org/feed/?paged=29#collapseOne)\n\n\n\nIn collaboration with Benya Fallenstein and other MIRI researchers, Scott Garrabrant solved the problem of respecting logical relationships in a series of [Intelligent Agent Foundations Forum (IAFF) posts](https://agentfoundations.org/item?id=270), resulting in the “[Inductive Coherence](https://arxiv.org/abs/1604.05288)” paper. 
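The two intuitive constraints above can be made concrete with a toy sketch (illustrative only, and not MIRI’s formalism): a coherence check that probabilities assigned to mutually exclusive claims sum to at most 1, and empirical digit frequencies estimated from observed digits of π. The function names here are hypothetical.

```python
from collections import Counter

# First 50 decimal digits of pi (after the "3.").
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def coherent(probs):
    """First constraint: probabilities assigned to mutually exclusive
    claims must be valid probabilities and sum to at most 1, even if we
    cannot evaluate the claims themselves."""
    return all(0.0 <= p <= 1.0 for p in probs) and sum(probs) <= 1.0

def digit_frequencies(digits):
    """Second constraint, illustrated: after observing enough digits of
    pi, an inductive reasoner should assign roughly equal probability
    (about 0.1) to each digit occurring, even without a proof that pi
    is normal."""
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in "0123456789"}

# Mutually exclusive claims: "this computation outputs a 3" vs.
# "this computation outputs a 7".
print(coherent([0.4, 0.5]))   # True: sums to 0.9
print(coherent([0.7, 0.6]))   # False: sums to 1.3 > 1

# Observed frequencies already cluster near 0.1 after 50 digits.
print(digit_frequencies(PI_DIGITS))
```

Neither check captures the hard part of the problem (reasoning about sentences one cannot yet evaluate), but they show the two desiderata the 2015 algorithms satisfied in isolation.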
The problem of respecting observational patterns in logical sentences was solved by Scott and the [MIRIxLosAngeles](https://intelligence.org/mirix/) group in “[Asymptotic Logical Uncertainty and the Benford Test](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/),” which was further developed into the “[Asymptotic Convergence in Online Learning with Unbounded Delays](https://arxiv.org/abs/1604.05280)” paper in 2016.\nThese two approaches to logical uncertainty were not only nonequivalent, but seemed to preclude each other. The obvious next step is to investigate whether there is a way to solve both subproblems at once with a single procedure—a task we have since made some (soon-to-be-announced) progress on in 2016.\n\n\nMIRI research associate Vanessa Kosoy’s work on her “optimal estimators” framework represents a large separate corpus of work on logical uncertainty, which may also have applications for decision theory. Vanessa’s work has not yet been officially published, but much of it is available [on IAFF](https://agentfoundations.org/submitted?id=7).\n\n\nOur other significant result in logical uncertainty was Benya Fallenstein, Jessica Taylor, and Paul Christiano’s [reflective oracles](https://intelligence.org/2015/04/28/new-papers-reflective/), building on work that began before 2015 ([IAFF digest](https://agentfoundations.org/item?id=165)). Reflective oracles avoid a number of paradoxes that normally arise when agents attempt to answer questions about equivalently powerful agents, allowing us to study multi-agent dilemmas and reflective reasoning with greater precision.\n\n\nReflective oracles are interesting in their own right, and have proven applicable to a number of distinct open problems. The fact that reflective oracles require no privileged agent/environment distinction suggests that they’re a step in the right direction for naturalized induction. 
Jan Leike has recently demonstrated that reflective oracles also solve a longstanding open problem in game theory, [the grain of truth problem](https://intelligence.org/2016/06/30/grain-of-truth/). Reflective oracles provide the first complete decision-theoretic foundation for game theory, showing that general-purpose methods for maximizing expected utility can achieve approximate Nash equilibria in repeated games.\n\n\nIn summary, our 2015 logical uncertainty and naturalized induction papers based on pre-2015 work were:\n\n\n* B Fallenstein, J Taylor, P Christiano. “[Reflective Oracles: A Foundation for Classical Game Theory](http://arxiv.org/abs/1508.04145).” arXiv:1508.04145 [cs.AI]. Published [in abridged form](https://intelligence.org/files/ReflectiveOraclesAI.pdf) in *Proceedings of LORI 2015*.\n* N Soares. “[Formalizing Two Problems of Realistic World-Models](https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/).” MIRI tech report 2015-3.\n* N Soares, B Fallenstein. “[Questions of Reasoning under Logical Uncertainty](https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/).” MIRI tech report 2015-1.\n\n\n2015 research published the same year:\n\n\n* B Fallenstein, N Soares, J Taylor. “[Reflective Variants of Solomonoff Induction and AIXI](https://intelligence.org/files/ReflectiveSolomonoffAIXI.pdf).” Published in *Proceedings of AGI 2015*.\n\n\n2015 research published in 2016 or forthcoming:\n\n\n* S Garrabrant, S Bhaskhar, A Demski, J Garrabrant, G Koleszarik, E Lloyd. “[Asymptotic Logical Uncertainty and The Benford Test](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/).” arXiv:1510.03370 [cs.LG]. Forthcoming at AGI 2016.\n* S Garrabrant, B Fallenstein, A Demski, N Soares. “[Inductive Coherence](https://intelligence.org/2016/04/21/two-new-papers-uniform/#1).” arXiv:1604.05288 [cs:AI].\n* S Garrabrant, N Soares, J Taylor. 
“[Asymptotic Convergence in Online Learning with Unbounded Delays](https://intelligence.org/2016/04/21/two-new-papers-uniform/#1).” arXiv:1604.05280 [cs:LG].\n* V Kosoy. Formally unpublished results on the optimal estimators framework.\n* J Leike, J Taylor, B Fallenstein. “[A Formal Solution to the Grain of Truth Problem](https://intelligence.org/2016/06/30/grain-of-truth/).” Presented at the 32nd Conference on Uncertainty in Artificial Intelligence.\n\n\nFor other logical uncertainty work on IAFF, see [The Two-Update Problem](https://agentfoundations.org/item?id=427), [Subsequence Induction](https://agentfoundations.org/item?id=460), and [Strict Dominance for the Modified Demski Prior](https://agentfoundations.org/item?id=541).\n\n\n\n\n\n\n\n##### Decision Theory\n\n\nIn 2015 we produced a number of new incremental advances in decision theory, constituting modest progress, in line with our expectations.\n\n\nOf these advances, we have published Andrew Critch’s proof of [a version of Löb’s theorem and Gödel’s second incompleteness theorem that holds for bounded reasoners](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/).\n\n\nCritch applies this parametric bounded version of Löb’s theorem to prove that a wide range of resource-limited software agents, given access to each other’s source code, can achieve unexploitable mutual cooperation in the one-shot prisoner’s dilemma. 
Although we considered our past [robust cooperation](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/) results strong reason to believe that bounded cooperation was possible, the confirmation is useful and gives us new formal tools for studying bounded reasoners.\n\n\nOver this period, Eliezer Yudkowsky, Benya Fallenstein, and Nate Soares also improved our technical (and philosophical) understanding of the decision theory we currently favor, “functional decision theory”—a slightly modified version of updateless decision theory.\n\n\n\n\n\n [Read More](https://intelligence.org/feed/?paged=29#collapseTwo)\n\n\n\nThe biggest obstacle to formalizing decision theory currently seems to be that we lack a suitable formal account of logical counterfactuals. Logical counterfactuals are questions of the form “If *X* (which I know to be false) were true, what (if anything) would that imply about *Y*?” These are important in decision theory, one special case being off-policy predictions. (Even if I can predict that I’m definitely not taking action *X*, I want to be able to ask what would ensue if I did; a wrong answer to this can lead to me accepting substandard self-fulfilling prophecies like two-boxing in the transparent Newcomb problem.)\nIn 2015, we examined a decision theory related to functional decision theory, proof-based decision theory, that has proven easier to formalize. We found that proof-based decision theory’s lack of logical counterfactuals is a serious weakness for the theory.\n\n\nWe explored some proof-length-based approaches to logical counterfactuals, and ultimately rejected them, though we have continued to devote some thought to this approach. 
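The flavor of the bounded-cooperation result above can be conveyed with a toy sketch. Here a budget-limited simulation stands in for the bounded proof search used in the actual result, and `fair_bot`/`defect_bot` are hypothetical illustrative names, not code from our repositories:

```python
def defect_bot(opponent, depth):
    # Always defects, regardless of the opponent's source.
    return "D"

def fair_bot(opponent, depth):
    # Cooperate iff a budget-limited simulation of the opponent
    # (playing against fair_bot) predicts cooperation; once the budget
    # is exhausted, cooperate. This crude simulation budget stands in
    # for the proof-length bound in the parametric bounded Löb result.
    if depth == 0:
        return "C"
    return "C" if opponent(fair_bot, depth - 1) == "C" else "D"

print(fair_bot(fair_bot, 3))    # C: two FairBots achieve mutual cooperation
print(fair_bot(defect_bot, 3))  # D: FairBot is not exploited by a defector
```

The interesting property, mirrored from the formal result, is that mutual cooperation is reached without either agent being exploitable by an unconditional defector.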
During our first [2015 workshop](https://intelligence.org/workshops/#may-2015-intro1), Scott Garrabrant proposed [an informal conjecture on proof length and counterfactuals](https://agentfoundations.org/item?id=259), which was subsequently [revised](https://agentfoundations.org/item?id=444); but both versions of the conjecture were shown to be false by Sam Eisenstat ([1](https://agentfoundations.org/item?id=369), [2](https://agentfoundations.org/item?id=496)). (See also Scott’s [Optimal and Causal Counterfactual Worlds](https://agentfoundations.org/item?id=241).)\n\n\nIn a separate line of research, Patrick LaVictoire and others applied the proof-based decision theory framework to questions of [bargaining](https://agentfoundations.org/item?id=195) and division of trade gains. For other decision theory work on IAFF, see Vanessa and Scott’s [Superrationality in Arbitrary Games](https://agentfoundations.org/item?id=507) and Armstrong’s [Reflective Oracles and Superrationality: Prisoner’s Dilemma](https://agentfoundations.org/item?id=507).\n\n\nOur [github repository](https://github.com/machine-intelligence/provability) contains lots of new code from our work on modal agents, representing our most novel work on decision theory in the past year. We have one or two papers in progress that will explain the advances we’ve made in decision theory via this work. See [“Evil” Decision Problems in Provability Logic](https://agentfoundations.org/item?id=47) and other posts in the [decision theory IAFF digest](https://agentfoundations.org/item?id=160) for background on modal universes.\n\n\nPre-2015 work published in 2015:\n\n\n* N Soares, B Fallenstein. “[Toward Idealized Decision Theory](https://intelligence.org/2014/12/16/new-report-toward-idealized-decision-theory/).” 2014 tech report published [in abridged form](https://intelligence.org/files/CounterpossibleReasoning.pdf) in *Proceedings of AGI 2015*.\n\n\n2015 research published in 2016 or forthcoming:\n\n\n* A Critch. 
“[Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/).” arXiv:1602.04184 [cs:GT].\n* B Fallenstein. Formally unpublished results on modal universes.\n* S Garrabrant, S Eisenstat, P LaVictoire, J Lee, H Dell. Formally unpublished results on logical counterfactuals.\n* E Yudkowsky, N Soares. Unpublished results on functional decision theory.\n\n\n\n\n\n\n##### Vingean Reflection\n\n\nWe were expecting modest progress on these problems in 2015, and we made modest progress.\n\n\nBenya Fallenstein and Ramana Kumar’s “[Proof-Producing Reflection for HOL](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/)” demonstrates a practical form of self-reference (and a partial solution to both the [Löbian obstacle](https://intelligence.org/files/lob-notes-IAFF.pdf) and the [procrastination paradox](https://intelligence.org/files/ProblemsSelfReference.pdf)) in the HOL theorem prover. This result provides some evidence that it is possible for a reasoning system to trust another reasoning system that reasons the same way, so long as the systems have different internal states.\n\n\n\n\n\n [Read More](https://intelligence.org/feed/?paged=29#collapseThree)\n\n\n\nMore specifically, this paper establishes that it is possible to formally specify an infinite chain of reasoning systems such that each system trusts the next system in the chain, as long as the reasoners are unable to delegate any individual task indefinitely.\n\n\nThere is some internal debate within MIRI about what more is required for real-world Vingean reflection, aside from satisfactory accounts of logical uncertainty and logical counterfactuals. There’s also debate about whether any better results than this are likely to be possible in the absence of a full theory of logical uncertainty. 
Regardless, “Proof-Producing Reflection for HOL” demonstrates, via machine-checked proof, that it is possible to implement a form of reflective reasoning that is remarkably strong.\n\n\nBenya and Ramana’s work also provides us with an environment in which to build better toy models of reflective reasoners. Jack Gallagher, a MIRI research intern, is currently [implementing a cellular automaton in HOL](https://github.com/machine-intelligence/Botworld.HOL) that will let us implement reflective agents.\n\n\nBy applying results from the reflective oracles framework mentioned above, we also improved our theoretical understanding of Vingean reflection. In the IAFF post [A Limit-Computable, Self-Reflective Distribution](https://agentfoundations.org/item?id=515), research associate Tsvi Benson-Tilsen helped solidify our understanding of what kinds of reflection are and aren’t possible. Jessica, working with Benya and Paul, further showed that reflective oracles can’t readily be used to define [reflective probabilistic logics](https://agentfoundations.org/item?id=234).\n\n\nPre-2015 work published in 2015:\n\n\n* B Fallenstein, N Soares. “[Vingean Reflection: Reliable Reasoning for Self-Improving Agents](https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/).” MIRI tech report 2015-2.\n\n\n2015 research published the same year:\n\n\n* B Fallenstein, R Kumar. 
“[Proof-Producing Reflection for HOL: With an Application to Model Polymorphism](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/).” Published in *Interactive Theorem Proving: 6th International Conference, ITP 2015, Nanjing, China, August 24-27, 2015, Proceedings*.\n\n\nOther relevant IAFF posts include [A Simple Model of the Löbstacle](https://agentfoundations.org/item?id=303), [Waterfall Truth Predicates](https://agentfoundations.org/item?id=359), and [Existence of Distributions that are Expectation-Reflective and Know It](https://agentfoundations.org/item?id=548).\n\n\n\n\n\n\n##### Error Tolerance\n\n\nWe were expecting modest progress on these problems in 2015, but we made only limited progress.\n\n\nCorrigibility was a mid-level priority for us in 2015, and we spent some effort trying to build better models of corrigible agents. In spite of this, we didn’t achieve any big breakthroughs. We made some progress on fixing minor defects in our understanding of corrigibility, reflected, e.g., in our [error-tolerance IAFF digest](https://agentfoundations.org/item?id=167), Stuart Armstrong’s [AI control ideas](https://agentfoundations.org/item?id=601), and Jessica Taylor’s [overview post](https://agentfoundations.org/item?id=484); but these results are relatively small.\n\n\n\n\n\n [Read More](https://intelligence.org/feed/?paged=29#collapseFour)\n\n\nIn 2015 our main novelties were Google DeepMind researcher Laurent Orseau and FHI researcher / MIRI research associate Stuart Armstrong’s work on corrigibility (“[Safely Interruptible Agents](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/)“), along with work on two other error tolerance subproblems: [mild optimization](https://arbital.com/p/soft_optimizer/) (Jessica’s [Quantilizers](https://agentfoundations.org/item?id=460) and Abram Demski’s [Structural Risk Minimization](https://agentfoundations.org/item?id=292)) and [conservative 
concepts](https://arbital.com/p/conservative_concept/) (Jessica’s [Learning a Concept Using Only Positive Examples](https://agentfoundations.org/item?id=234)).\nPre-2015 work published in 2015:\n\n\n* N Soares, B Fallenstein, E Yudkowsky, S Armstrong. “[Corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/).” 2014 tech report presented at the AAAI 2015 Ethics and Artificial Intelligence Workshop.\n\n\n2015 research published in 2016 or forthcoming:\n\n\n* L Orseau, S Armstrong. “[Safely Interruptible Agents](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/).” Presented at the 32nd Conference on Uncertainty in Artificial Intelligence.\n* J Taylor. “[Quantilizers: A Safer Alternative to Maximizers for Limited Optimization](https://intelligence.org/2015/11/29/new-paper-quantilizers/).” Presented at the AAAI 2016 AI, Ethics and Society Workshop.\n\n\nOur failure to make much progress on corrigibility may be a sign that corrigibility is not as tractable a problem as we thought, or that more progress is needed in areas like logical uncertainty (so that we can build better models of AI systems that model their operators as uncertain about the implications of their preferences) before we can properly formalize corrigibility.\n\n\nWe are more optimistic about corrigibility research, however, in light of recent advances in logical uncertainty and some promising discussions of related topics at our recent [colloquium series](https://intelligence.org/colloquium-series/): “[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137)” (via Stuart Russell’s group), “[Avoiding Wireheading with Value Reinforcement Learning](https://arxiv.org/abs/1605.03143)” (via Tom Everitt), and some items in Stuart Armstrong’s bag of tricks.\n\n\n\n\n\n\n##### Value Specification\n\n\nWe were expecting limited progress on these problems in 2015, and we made limited progress. 
\n\n\nValue learning and related problems were low-priority for us last year, so we didn’t see any big advances.\n\n\n\n\n\n [Read More](https://intelligence.org/feed/?paged=29#collapseFive)\n\n\nMIRI research associate Kaj Sotala made value specification his focus, examining several interesting questions outside our core research agenda. Jessica Taylor also began investigating the problem [on the research forum](https://agentfoundations.org/item?id=538). \nPre-2015 work published in 2015:\n\n\n* K Sotala. “[Concept Learning for Safe Autonomous AI](https://intelligence.org/2014/12/05/new-paper-concept-learning-safe-autonomous-ai/).” Presented at the AAAI 2015 Ethics and Artificial Intelligence Workshop.\n\n\n2015 research published in 2016 or forthcoming:\n\n\n* K Sotala. “[Defining Human Values for Value Learners](https://intelligence.org/2016/02/29/new-paper-defining-human-values-for-value-learners/).” Presented at the AAAI 2016 AI, Ethics and Society Workshop.\n\n\nError-tolerant agent designs and value specification will be larger focus areas for us going forward, under the [alignment for advanced machine learning systems](https://intelligence.org/2016/07/27/alignment-machine-learning/) research program.\n\n\n\n\n\n\n##### Miscellaneous\n\n\nWe released our [technical agenda](http://intelligence.org/technical-agenda/) in late 2014 and early 2015. The overview paper, “[Agent Foundations for Aligning Machine Intelligence with Human Interests](https://intelligence.org/files/TechnicalAgenda.pdf),” is slated for external publication in *The Technological Singularity* in 2017.\n\n\nIn 2015 we also produced some research unrelated to our agent foundations agenda. This research generally focused on forecasting and strategy questions.\n\n\n\n\n\n [Read More](https://intelligence.org/feed/?paged=29#collapseSix)\n\n\nPre-2015 work published in 2015:\n* S Armstrong, N Bostrom, C Shulman. 
“[Racing to the Precipice: A Model of Artificial Intelligence Development](https://intelligence.org/2013/11/27/new-paper-racing-to-the-precipice/).” 2013 tech report published in *AI & Society*.\n* K Grace. “[The Asilomar Conference: A Case Study in Risk Mitigation](https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/).” MIRI tech report 2015-9.\n* K Grace. “[Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation](https://intelligence.org/2015/10/07/new-report-leo-szilard-and-the-danger-of-nuclear-weapons/).” MIRI tech report 2015-10.\n* P LaVictoire. “[An Introduction to Löb’s Theorem in MIRI Research](https://intelligence.org/2015/03/18/new-report-introduction-lobs-theorem-miri-research/).” MIRI tech report 2015-6.\n\n\n2015 research published in 2016 or forthcoming:\n\n\n* T Benson-Tilsen, N Soares. “[Formalizing Convergent Instrumental Goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/).” Presented at the AAAI 2016 AI, Ethics and Society Workshop.\n\n\nBeginning in 2015, new AI strategy/forecasting research supported by MIRI has been hosted on Katja Grace’s independent [AI Impacts](http://aiimpacts.org/) project. AI Impacts featured 31 new [articles](http://aiimpacts.org/articles/) and 27 new [blog posts](http://aiimpacts.org/blog/) in 2015, on topics from [the range of human intelligence](http://aiimpacts.org/is-the-range-of-human-intelligence-small/) to [computing cost trends](http://aiimpacts.org/trends-in-the-cost-of-computing/).\n\n\n\n\n\n\nOn the whole, we’re happy about our 2015 research output and expect our team growth to further accelerate technical progress. 
\n\n \n\n\n### 2015 Research Support Activities\n\n\nFocusing on activities that directly grew the technical research community or facilitated technical research and collaborations, in 2015 we:\n\n\n* **[Launched](https://intelligence.org/2015/03/18/introducing-intelligent-agent-foundations-forum/) the [Intelligent Agent Foundations Forum](https://agentfoundations.org/),** a public discussion forum for AI alignment researchers. MIRI researchers and collaborators made 139 top-level posts to IAFF in 2015.\n* **Hired four new full-time research fellows.** Patrick LaVictoire joined in March, Jessica Taylor in August, Andrew Critch in September, and Scott Garrabrant in December. With Nate transitioning to a non-research role, overall we grew from a three-person research team (Eliezer, Benya, and Nate) to a six-person team.\n* **Overhauled our research associates program.** Before 2015, our research associates were mostly unpaid collaborators with varying levels of involvement in our active research. Following our successful summer fundraiser, we made “research associate” a paid position in which researchers based at other institutions spend significant amounts of time on research projects for us. 
Under this program, Stuart Armstrong, Tsvi Benson-Tilsen, Abram Demski, Vanessa Kosoy, Ramana Kumar, Kaj Sotala, and (prior to joining MIRI full-time) Scott Garrabrant all made significant contributions in associate roles.\n* **Hired three research interns.** Kaya Stechly and Rafael Cosman worked on polishing and consolidating old MIRI results ([example on IAFF](https://agentfoundations.org/item?id=480)), while Jack Gallagher worked on our type theory in type theory project ([github repo](https://github.com/GallagherCommaJack/types-in-types)).\n* **Acquired two new research advisors,** Stuart Russell and Bart Selman.\n* **Hosted six summer [workshops](https://intelligence.org/workshops/) and sponsored the three-week [MIRI Summer Fellows](http://rationality.org/miri-summer-fellows-2015/) program.** These events helped forge a number of new academic connections and directly resulted in us making job offers to two extremely promising attendees: Mihály Bárász (who has plans to join at a future date) and Scott Garrabrant.\n* **Helped organize two other academic events,** a [Cambridge decision theory conference](https://intelligence.org/2015/05/24/miri-related-talks-from-the-decision-theory-conference-at-cambridge-university/) and a ten-week [AI alignment seminar series](https://intelligence.org/pdtai/) at UC Berkeley. We also ran 6 research retreats, sponsored 36 [MIRIx](https://intelligence.org/mirix/) events, and spoke at an Oxford [Big Picture Thinking](http://globalprioritiesproject.org/2015/11/seminar-series-big-picture-thinking/) seminar series.\n* **Spoke at five other academic events.** We participated in the Future of Life Institute’s [“Future of AI” conference](http://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/), AAAI-15, AGI-15, LORI 2015, and [APS 2015](https://intelligence.org/2015/03/09/fallenstein-talk-aps-march-meeting-2015/). 
We also attended [NIPS](http://futureoflife.org/2015/12/26/highlights-and-impressions-from-nips-conference-on-machine-learning/).\n\n\nI’m excited about our 2015 progress in growing our team and collaborating with the larger academic community. Over the course of the year, we built closer relationships with people at Google DeepMind, Google Brain, [OpenAI](https://intelligence.org/2015/12/11/openai-and-other-news/), Vicarious, Good AI, the Future of Humanity Institute, and other research groups. All of this has put us in a better position to share our research results, methodology, and goals with other researchers, and to attract new talent to AI alignment work. \n\n \n\n\n### 2015 General Activities\n\n\nBeyond direct research support, in 2015 we:\n\n\n* **Transitioned to [new leadership](https://intelligence.org/2015/05/31/introductions/) under Nate Soares.** The transition went smoothly, attesting to our organizational robustness.[2](https://intelligence.org/2016/07/29/2015-in-review/#footnote_1_13403 \"I (Malo Bourgon) more recently took on a leadership role as MIRI’s new COO and second-in-command.\")\n* **Moved to larger offices and hired an office manager** to support our growing team.\n* **Gave talks and participated in panel discussions** at EA Global and at [ITIF](https://www.youtube.com/watch?v=fWBBe13rAPU).\n* **Published 20 new strategic and expository pieces:** 11 [strategic analyses](https://intelligence.org/category/analysis/), four [MIRI strategy overviews](https://intelligence.org/category/miri/), two [interviews](https://intelligence.org/category/conversations/), a new [MIRI FAQ](https://intelligence.org/faq/), a new [About MIRI](https://intelligence.org/about/) page, and an [annotated research agenda bibliography](https://intelligence.org/2015/02/05/new-annotated-bibliography-miris-technical-agenda/). Some of our best introductory posts have been collected on [intelligence.org/info](http://intelligence.org/info). 
Eliezer also contributed to [Edge.org](https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/) and Nate participated in an [Ask Me Anything](http://mindingourway.com/interlude-qa-on-the-ea-forum/) on the EA Forum.\n* **Released *[Rationality: From AI to Zombies](https://intelligence.org/2015/03/12/rationality-ai-zombies/)*** on the eve of Eliezer’s completion of [*Harry Potter and the Methods of Rationality*](http://hpmor.com). We also concluded a [reading group](http://lesswrong.com/lw/kw4/superintelligence_reading_group/) for Nick Bostrom’s *Superintelligence*.\n* **Revamped many of our online tools and webpages:** [AI Impacts](https://intelligence.org/2015/01/11/improved-ai-impacts-website/), [A Guide to MIRI’s Research](https://intelligence.org/research-guide/), and our [Get Involved](http://intelligence.org/get-involved) page and general [application form](https://machineintelligence.typeform.com/to/fot777). We also launched a [MIRI technical results mailing list](https://intelligence.org/2015/02/03/keep-date-miris-research-via-new-mailing-list/).\n* **Were prominently cited in the “[Research Priorities for Robust and Beneficial Artificial Intelligence](http://futureoflife.org/data/documents/research_priorities.pdf)” report,** and were initial signatories of the attached [AI safety open letter](http://futureoflife.org/ai-open-letter). 
Some of our basic AI forecasting arguments were subsequently echoed [by Open Philanthropy Project analysts](http://givewell.org/labs/causes/ai-risk).\n* **Received press coverage** from *[The Atlantic](http://www.theatlantic.com/technology/archive/2015/01/building-robots-with-better-morals-than-humans/385015/)*, *[Nautilus](http://nautil.us/blog/will-humans-be-able-to-control-computers-that-are-smarter-than-us)*, *[MIT Technology Review](https://www.technologyreview.com/s/534871/our-fear-of-artificial-intelligence/)*, *[Financial Times](https://next.ft.com/content/2ada9748-cce5-11e4-b5a5-00144feab7de)*, *[Slate](http://www.slate.com/blogs/business_insider/2015/04/02/stanford_graduates_get_fought_over_by_tech_companies_like_snapchat_and_have.html)*, *[PC World](http://www.pcworld.idg.com.au/article/578692/robot-apocalypse-unlikely-researchers-need-understand-ai-risks/)*, *[National Post](http://news.nationalpost.com/news/world/five-fully-funded-ideas-for-preventing-artificial-intelligence-from-wrecking-the-planet-and-killing-us-all)*, *[CBC News](http://www.cbc.ca/news/business/scientists-must-act-now-to-make-artificial-intelligence-benign-don-pittis-1.3182946)*, [*Tech Times*](http://www.techtimes.com/articles/79701/20150826/research-suggusts-human-brain-30-times-powerful-best-supercomputers.htm), and the *[Discover](http://blogs.discovermagazine.com/lovesick-cyborg/2015/09/02/making-sure-ais-rapid-rise-is-no-surprise/#.V4gZUrgrLb0)* blog.\n\n\nAlthough we have deemphasized outreach efforts, we continue to expect these activities to be useful for spreading general awareness about MIRI, our research program, and AI safety research more generally. Ultimately, we expect this to help build our donor base, as well as attract potential future researchers (to MIRI and the field more generally), as with our past outreach and capacity-building efforts. \n\n \n\n\n### 2015 Fundraising\n\n\nI am very pleased with our fundraising performance. 
In 2015 we:\n\n\n* Continued our strong fundraising growth, with a total of **$1,584,109** in contributions.[3](https://intelligence.org/2016/07/29/2015-in-review/#footnote_2_13403 \"$80,480 of this was earmarked funding for the AI Impacts project.\")\n* Received **$166,943** in grants from the Future of Life Institute (FLI), with another ~$80,000 annually for the next two years.[4](https://intelligence.org/2016/07/29/2015-in-review/#footnote_3_13403 \"MIRI is administering three FLI grants (and participated in a fourth). We are to receive $250,000 over three years to fund work on our agent foundations technical agenda, $49,310 towards AI Impacts, and we are administering Ramana’s $36,750 to study self-reference in the HOL theorem prover in collaboration with Benya.\")\n* Experimented with a new kind of fundraiser (non-matching, with multiple targets). I consider these experiments to have been successful. Our [summer fundraiser](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) was our biggest fundraiser to date, raising **$632,011**, and our [winter fundraiser](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) also went well, raising **$328,148**.\n\n\n\nTotal contributions grew 28% in 2015. This was driven by an increase in contributions from new funders, including a one-time $219,000 contribution from an anonymous funder, $166,943 in FLI grants, and at least $137,023 from [Raising for Effective Giving](http://reg-charity.org/) (REG) and regranting from the Effective Altruism Foundation.[5](https://intelligence.org/2016/07/29/2015-in-review/#footnote_4_13403 \"This only counts direct contributions through REG to MIRI. 
REG’s support for MIRI is likely closer to $200,000 when accounting for contributions made directly to MIRI as a result of REG’s advice to funders.\") The decrease in contributions from returning funders is due to Peter Thiel’s discontinuation of support in 2015, plus a large one-time [outlier donation](https://intelligence.org/2014/04/02/2013-in-review-fundraising/) from Jed McCaleb in the years prior ($526,316 arriving in 2013, $104,822 in 2014).\n\n\nDrawing conclusions from these year-by-year comparisons is a little tricky. MIRI underwent significant organizational changes over this time span, particularly in 2013. We switched to accrual-based accounting in 2014, which also complicates comparisons with previous years.[6](https://intelligence.org/2016/07/29/2015-in-review/#footnote_5_13403 \"Also note that numbers in this section might not exactly match previously published estimates, since small corrections are often made to contributions data. Finally, note that these numbers do not include in-kind donations.\") In general, though, we’re continuing to see solid fundraising growth.\n\n\n\nThe number of new funders decreased from 2014 to 2015. In our [2014 review](https://intelligence.org/2015/03/22/2014-review/), Luke explains the large increase in funders in 2014:\n\n\n\n> New donor growth was strong in 2014, though this mostly came from small donations made during the [SV Gives fundraiser](https://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/). A significant portion of growth in returning donors can also be attributed to lapsed donors making small contributions during the SV Gives fundraiser.\n> \n> \n\n\nComparing our numbers in 2015 and 2013, we see healthy growth in the number of returning funders and total number of funders.\n\n\n\nThe above chart shows contributions in past years from small, mid-sized, large, and very large funder segments. 
Contributions from the three largest segments increased (approximately) proportionally from last year, with the notable exception of contributions from large funders, which increased from 26% to 31% of total contributions. We had a small year-over-year decrease in contributions in the small funder segment, which is again due to having received an unusually large number of small contributions during SV Gives in 2014.\n\n\nAs in past years, a full report on our finances (in the form of an independent accountant’s review report) will be made available on our [transparency and financials](https://intelligence.org/transparency/) page. The report will most likely be up in late August or early September. \n\n \n\n\n### 2016 and Beyond\n\n\nWhat’s next? Beyond our research goal of making significant progress in five of our six focus areas, we set the following operational goals for ourselves in July/August 2015:\n\n\n1. Accelerated growth: “expand to a roughly ten-person core research team.” ([source](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#acceleratedgrowth))\n2. Type theory in type theory project: “hire one or two type theorists to work on developing relevant tools full-time.” ([source](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#agda))\n3. Visiting scholar program: “have interested professors drop by for the summer, while we pay their summer salaries and work with them on projects where our interests overlap.” ([source](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#visiting))\n4. Independent review: “We’re also looking into options for directly soliciting public feedback from independent researchers regarding our research agenda and early results.” ([source](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/#1))\n5. 
Higher-visibility publications: “Our current plan this year is to focus on producing a few high-quality publications in elite venues.” ([source](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/#4))\n\n\nIn 2015 we doubled the size of our research team from three to six. With the restructuring of our research associates program and the addition of two research interns, I’m pleased with the growth we achieved in 2015. We deemphasized growth in the first half of 2016 in order to focus on onboarding, but plan to expand again by the end of the year.\n\n\nWe have a [job ad out for our type theorist position](https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/), which will likely be filled after we make our next few core researcher hires. In the interim, we’ve been having our research intern Jack Gallagher work on the type theory in type theory project, and we also ran an April 2016 [type theory workshop](https://intelligence.org/workshops/#april-2016-tt).\n\n\nWith help from our research advisors, our visiting scholars program morphed into a three-week-long [colloquium series](https://intelligence.org/colloquium-series/). Rather than hosting a handful of researchers for longer periods of time, we hosted over fifty researchers for shorter stretches of time, comparing notes on a wide variety of active AI safety research projects. Speakers at the event included Stuart Russell, Francesca Rossi, Tom Dietterich, and Bart Selman. 
We’re also collaborating with Stuart Russell on a [corrigibility grant](https://intelligence.org/files/CorrigibilityAISystems.pdf).\n\n\nWork is underway on an external review of our research program; the results should be available in the next few months.\n\n\nWith regard to our fifth goal, in addition to “[Proof-Producing Reflection for HOL](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/)” (which was presented at ITP 2015 in late August), we’ve since published papers at LORI-V (“[Reflective Oracles](http://link.springer.com/chapter/10.1007%2F978-3-662-48561-3_34)”), at UAI 2016 (“[Safely Interruptible Agents](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/)” and “[A Formal Solution to the Grain of Truth Problem](https://intelligence.org/2016/06/30/grain-of-truth/)”), and at an IJCAI 2016 workshop (“[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf)”). Of those venues, UAI is generally considered more prestigious than most venues we have published in previously. I’d count this as moderate (but not great) progress towards the goal of publishing in more elite venues. Nate will have more to say about our future publication plans.\n\n\nElaborating further on our plans would take me beyond the scope of this review. In the coming weeks, Nate will be providing more details on our 2016 activities and our goals going forward in a big-picture MIRI strategy post.[7](https://intelligence.org/2016/07/29/2015-in-review/#footnote_6_13403 \"My thanks to Rob Bensinger for his substantial contributions to this review.\")\n\n\n\n\n---\n\n1. 
This paper was originally titled “Aligning Superintelligence with Human Interests.” We’ve renamed it in order to emphasize that this research agenda takes a specific approach to the alignment problem, and other approaches are possible too—including, relevantly, Jessica Taylor’s new “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)” agenda.\n2. I (Malo Bourgon) [more recently](https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/) took on a leadership role as MIRI’s new COO and second-in-command.\n3. $80,480 of this was earmarked funding for the AI Impacts project.\n4. MIRI is administering three FLI grants (and participated in a fourth). We are to receive $250,000 over three years to fund work on our agent foundations technical agenda, $49,310 towards AI Impacts, and we are administering Ramana’s $36,750 to study self-reference in the HOL theorem prover in collaboration with Benya.\n5. This only counts direct contributions through REG to MIRI. REG’s support for MIRI is likely closer to $200,000 when accounting for contributions made directly to MIRI as a result of REG’s advice to funders.\n6. Also note that numbers in this section might not exactly match previously published estimates, since small corrections are often made to contributions data. Finally, note that these numbers do not include in-kind donations.\n7. 
My thanks to Rob Bensinger for his substantial contributions to this review.\n\nThe post [2015 in review](https://intelligence.org/2016/07/29/2015-in-review/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-07-30T06:17:38Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "ccd18383a12344bc304fc91194d8d426", "title": "New paper: “Alignment for advanced machine learning systems”", "url": "https://intelligence.org/2016/07/27/alignment-machine-learning/", "source": "miri", "source_type": "blog", "text": "[![Alignment for Advanced Machine Learning Systems](https://intelligence.org/files/AlignmentMachineLearning.png)](https://intelligence.org/files/AlignmentMachineLearning.pdf)MIRI’s research to date has focused on the problems that we laid out in our [late 2014 research agenda](https://intelligence.org/technical-agenda/), and in particular on formalizing optimal reasoning for [bounded](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf), [reflective](https://intelligence.org/files/VingeanReflection.pdf) [decision-theoretic](http://arxiv.org/abs/1507.01986) agents [embedded in their environment](https://intelligence.org/files/RealisticWorldModels.pdf). Our research team has since grown considerably, and we have made substantial progress on this agenda, including a major breakthrough in logical uncertainty that we will be announcing in the coming weeks.\n\n\nToday we are announcing a new research agenda, “**[Alignment for advanced machine learning systems](https://intelligence.org/files/AlignmentMachineLearning.pdf)**.” Going forward, about half of our time will be spent on this new agenda, while the other half is spent on our previous agenda. 
The abstract reads:\n\n\n\n> We survey eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? We focus on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions, and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers.\n> \n> \n> Open problems surveyed in this research proposal include: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to “not have an overly large impact” or “not have many side effects”? We discuss these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.\n> \n> \n\n\nCo-authored by Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, and Andrew Critch, our new report discusses eight new lines of research ([previously summarized here](https://intelligence.org/2016/05/04/announcing-a-new-research-program/)). Below, I’ll explain the rationale behind these problems, as well as how they tie in to our old research agenda and to the new “[Concrete problems in AI safety](https://arxiv.org/abs/1606.06565)” agenda spearheaded by Dario Amodei and Chris Olah of Google Brain.\n\n\n\n#### Increasing safety by reducing autonomy\n\n\nThe first three research areas focus on issues related to [act-based agents](https://medium.com/ai-control/act-based-agents-8ec926c79e9c#.bepjk81dp), notional systems that base their behavior on their users’ short-term instrumental preferences:\n\n\n1. 
Inductive ambiguity identification: How can we train ML systems to detect and notify us of cases where the classification of test data is highly under-determined from the training data?\n\n\n2. Robust human imitation: How can we design and train ML systems to effectively imitate humans who are engaged in complex and difficult tasks?\n\n\n3. Informed oversight: How can we train a reinforcement learning system to take actions that aid an intelligent overseer, such as a human, in accurately assessing the system’s performance?\n\n\nThese three problems touch on different ways we can make tradeoffs between capability/autonomy and safety. At one extreme, a fully autonomous, superhumanly capable system would make it uniquely difficult to establish any strong safety guarantees. We could reduce risk somewhat by building systems that are still reasonably smart and autonomous, but will pause to consult operators in cases where their actions are especially high-risk. Ambiguity identification is one approach to fleshing out which scenarios are “high-risk”: ones where a system’s experiences to date are uninformative about some fact or human value it’s trying to learn.\n\n\nAt the opposite extreme, we can consider ML systems that are no smarter than their users, and take *no* actions other than [what their users would do](https://medium.com/ai-control/mimicry-maximization-and-meeting-halfway-c149dd23fc17#.doosied3x), or [what their users would tell them to do](https://medium.com/ai-control/model-free-decisions-6e6609f5d99e#.h9hvpk5d7). If we can correctly design a system to do what it thinks a trusted, informed human would do, we can trade away some of the potential benefits of advanced ML systems in exchange for milder failure modes.\n\n\nThese two extremes, human imitation and (mostly) autonomous goal pursuit, are useful objects of study because they help simplify and factorize out key parts of the problem. 
In practice, however, ambiguity identification is probably too mild a restriction on its own, and strict human imitation probably isn’t efficiently implementable. Informed oversight considers more moderate approaches to keeping humans in the loop: designing more transparent ML systems that help operators understand the reasons behind selected actions.\n\n\n#### Increasing safety without reducing autonomy\n\n\nWhatever guarantees we buy by looping humans into AI systems’ decisions, we will also want to improve systems’ reliability in cases where oversight is unfeasible. Our other five problems focus on improving the reliability and error-tolerance of systems autonomously pursuing real-world goals, beginning with the problem of specifying such goals in a robust and reliable way:\n\n\n4. Generalizable environmental goals: How can we create systems that robustly pursue goals defined in terms of the state of the environment, rather than defined directly in terms of their sensory data?\n\n\n5. Conservative concepts: How can a classifier be trained to develop useful concepts that exclude highly atypical examples and edge cases?\n\n\n6. Impact measures: What sorts of regularizers incentivize a system to pursue its goals with minimal side effects?\n\n\n7. Mild optimization: How can we design systems that pursue their goals “without trying too hard”—stopping when the goal has been pretty well achieved, as opposed to expending further resources searching for ways to achieve the absolute optimum expected score?\n\n\n8. Averting instrumental incentives: How can we design and train systems such that they robustly lack default incentives to manipulate and deceive their operators, compete for scarce resources, etc.?\n\n\nWhereas ambiguity-identifying learners are designed to predict potential ways they might run into edge cases and defer to human operators in those cases, conservative learners are designed to err in a safe direction in edge cases. 
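The difference between the two responses can be sketched in a few lines of Python (an illustrative toy, not from the agenda; the probability threshold and labels are invented):

```python
# Toy contrast between two responses to an under-determined classification.
# Everything here (threshold, labels) is illustrative, not from the agenda.

def ambiguity_aware(p_positive, threshold=0.95):
    """Ambiguity identification: defer to a human whenever the
    training data leaves the label genuinely unclear."""
    if p_positive >= threshold:
        return "positive"
    if p_positive <= 1 - threshold:
        return "negative"
    return "ask the operator"

def conservative(p_positive, threshold=0.95):
    """Conservative concepts: treat every unclear case as the
    safe (negative) label rather than pausing."""
    return "positive" if p_positive >= threshold else "negative"

# An atypical example far from the training distribution gets a middling score:
print(ambiguity_aware(0.6))  # -> ask the operator
print(conservative(0.6))     # -> negative
```

The conservative learner never interrupts its operator, at the cost of refusing some genuinely positive edge cases; the ambiguity-aware learner trades autonomy for oversight instead.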
If a cooking robot notices the fridge is understocked, should it try to cook the cat? The ambiguity identification approach says to notice when the answer to “Are cats food?” is unclear, and pause to consult a human operator; the conservative concepts approach says to just assume cats aren’t food in uncertain cases, since it’s safer for cooking robots to underestimate how many things are food than to overestimate it. It remains unclear, however, how one might formalize this kind of reasoning.\n\n\nImpact measures provide another avenue for limiting the potential scope of AI mishaps. If we can define some measure of “impact,” we could design systems that can distinguish intuitively high-impact actions from low-impact ones and generally choose lower-impact options.\n\n\nAlternatively, instead of designing systems to try as hard as possible to have a low impact, we might design “mild” systems that simply don’t try very hard to do anything. Limiting the resources a system will put into its decision (via mild optimization) is distinct from limiting how much change a system will decide to cause (via impact measures); both are under-explored risk reduction approaches.\n\n\nLastly, we will explore a variety of approaches to preventing default system incentives to treat operators adversarially under the “averting instrumental incentives” umbrella category. Our hope in pursuing all of these research directions simultaneously is that systems combining these features will permit much higher confidence than systems implementing any one of them. 
This approach also serves as a hedge in case some of these problems turn out to be unsolvable in practice, and allows for ideas that worked well on one problem to be re-applied on others.\n\n\n#### Connections to other research agendas\n\n\nOur new technical agenda, our 2014 agenda, and “[Concrete problems in AI safety](https://arxiv.org/abs/1606.06565)” take different approaches to the problem of aligning AI systems with human interests, though there is a fair bit of overlap between the research directions they propose.\n\n\nWe’ve changed the name of our 2014 agenda to “[Agent foundations for aligning machine intelligence with human interests](https://intelligence.org/technical-agenda/)” (from “Aligning superintelligence with human interests”) to help highlight the ways it is and isn’t similar to our newer agenda. For reasons discussed in our [advance announcement](https://intelligence.org/2016/05/04/announcing-a-new-research-program/) of “Alignment for advanced machine learning systems,” our new agenda is intended to help more in scenarios where advanced AI is relatively near and relatively directly descended from contemporary ML techniques, while our agent foundations agenda is more agnostic about when and how advanced AI will be developed.\n\n\nAs we [recently wrote](https://intelligence.org/2016/07/23/ostp/), we believe that developing a basic formal theory of highly reliable reasoning and decision-making “could make it possible to get very strong guarantees about the behavior of advanced AI systems — stronger than many currently think is possible, in a time when the most successful machine learning techniques are often poorly understood.” Without such a theory, AI alignment will be a much more difficult task.\n\n\nThe authors of “Concrete problems in AI safety” write that their own focus “is on the empirical study of practical safety problems in modern machine learning systems, which we believe is likely to be robustly useful across a broad variety of 
potential risks, both short- and long-term.” Their paper discusses a number of the same problems as the alignment for ML agenda (or closely related ones), but directed more toward building on existing work and finding applications in present-day systems.\n\n\nWhere the agent foundations agenda can be said to follow the principle “start with the least well-understood long-term AI safety problems, since those seem likely to require the most work and are the likeliest to seriously alter our understanding of the overall problem space,” the concrete problems agenda follows the principle “start with the long-term AI safety problems that are most applicable to systems today, since those problems are the easiest to connect to existing work by the AI research community.”\n\n\nTaylor et al.’s new agenda is less focused on present-day and near-future systems than “Concrete problems in AI safety,” but is more ML-oriented than the agent foundations agenda. This chart helps map some of the correspondences between the topics the agent foundations agenda (plain text), the concrete problems agenda (*italics*), and the alignment for ML agenda (**bold**) discuss:\n\n\n\n> Work related to high reliability\n> \n> \n> * realistic world-models ~ **generalizable environmental goals** ~ *avoiding reward hacking*\n> \t+ naturalized induction\n> \t+ ontology identification\n> * decision theory\n> * logical uncertainty\n> * Vingean reflection\n> \n> \n> Work related to error tolerance\n> \n> \n> * **inductive ambiguity identification** = ambiguity identification ~ *robustness to distributional change*\n> * **robust human imitation**\n> * **informed oversight** ~ *scalable oversight*\n> * **conservative concepts**\n> * **impact measures** = domesticity ~ *avoiding negative side effects*\n> * **mild optimization**\n> * **averting instrumental incentives**\n> \t+ [corrigibility](https://intelligence.org/files/Corrigibility.pdf)\n> * *safe exploration*\n> \n> \n> \n\n\n“~” notes (sometimes very 
rough) similarities and correspondences, while “=” notes different names for the same concept.\n\n\nAs an example, “realistic world-models” and “generalizable environmental goals” are both aimed at making the environment and goal representations of reinforcement learning formalisms like AIXI more robust, and both can be viewed as particular strategies for avoiding reward hacking. Our work under the agent foundations agenda has mainly focused on formal models of AI systems in settings without clear agent/environment boundaries (naturalized induction), while our work under the new agenda will focus more on the construction of world-models that admit of the specification of goals that are environmental rather than simply perceptual (ontology identification).\n\n\nFor a fuller discussion of the relationship between these research topics, see [Taylor et al.’s paper](https://intelligence.org/files/AlignmentMachineLearning.pdf).\n\n\nThe post [New paper: “Alignment for advanced machine learning systems”](https://intelligence.org/2016/07/27/alignment-machine-learning/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-07-27T23:48:52Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "6d006f32d4e8582400d4df0af5d69ec0", "title": "Submission to the OSTP on AI outcomes", "url": "https://intelligence.org/2016/07/23/ostp/", "source": "miri", "source_type": "blog", "text": "The White House Office of Science and Technology Policy recently put out a [request for information](https://www.federalregister.gov/articles/2016/06/27/2016-15082/request-for-information-on-artificial-intelligence) on “(1) The legal and governance implications of AI; (2) the use of AI for public good; (3) the safety and 
control issues for AI; (4) the social and economic implications of AI;” and a variety of related topics. I’ve reproduced MIRI’s submission to the RfI below:\n\n\n\n\n---\n\n\nI. Review of safety and control concerns\n\n\nAI experts [largely agree](http://www.nickbostrom.com/papers/survey.pdf) that AI research will eventually lead to the development of AI systems that surpass humans in general reasoning and decision-making ability. This is, after all, the goal of the field. However, there is widespread disagreement about how long it will take to cross that threshold, and what the relevant AI systems are likely to look like (autonomous agents, widely distributed decision support systems, human/AI teams, etc.).\n\n\nDespite the uncertainty, a growing subset of the research community expects that advanced AI systems will give rise to a number of foreseeable safety and control difficulties, and that those difficulties can be preemptively addressed by technical research today. Stuart Russell, co-author of the leading undergraduate textbook in AI and professor at U.C. Berkeley, [writes](https://www.edge.org/conversation/the-myth-of-ai#26015):\n\n\n\n> The primary concern is not spooky emergent consciousness but simply the ability to make *high-quality decisions*. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n> \n> \n> 1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n> \n> \n> 2. 
Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\n> \n> \n> A system that is optimizing a function of *n* variables, where the objective depends on a subset of size *k*<*n*, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.\n> \n> \n\n\nResearchers’ worries about the impact of AI in the long term bear little relation to the doomsday scenarios most often depicted in Hollywood movies, in which “emergent consciousness” allows machines to throw off the shackles of their programmed goals and rebel. The concern is rather that such systems may pursue their programmed goals all too well, and that the programmed goals may not match the intended goals, or that the intended goals may have unintended negative consequences.\n\n\nThese challenges are not entirely novel. We can compare them to other principal-agent problems where incentive structures are designed with the hope that blind pursuit of those incentives promotes good outcomes. Historically, principal-agent problems have been difficult to solve even in domains where the people designing the incentive structures can rely on some amount of human goodwill and common sense. Consider the problem of designing tax codes to have reliably beneficial consequences, or the problem of designing regulations that reliably reduce corporate externalities. 
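Russell’s point about unconstrained variables can be made concrete with a toy allocation problem (a sketch; the scenario and numbers are invented). The objective scores only one of three variables, and the optimum drives everything unscored to an extreme:

```python
from itertools import product

# Toy model of optimizing n = 3 variables when the objective depends on a
# subset of size k = 1. The variables share a budget, so whatever the
# objective ignores gets pushed to an extreme value. Names are invented.
BUDGET = 10  # total effort units to allocate across the 3 variables

def utility(alloc):
    # The designer's objective scores only alloc[0] ("cleanliness", say);
    # alloc[1] and alloc[2] ("care taken", "resources spared") are ignored.
    return alloc[0]

# Exhaustively search all integer allocations of the budget.
candidates = [a for a in product(range(BUDGET + 1), repeat=3)
              if sum(a) == BUDGET]
best = max(candidates, key=utility)

print(best)  # -> (10, 0, 0): everything left out of the objective hits an extreme
```

Nothing about the optimizer is malicious here; the unconstrained variables are sacrificed simply because the objective never mentions them.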
Advanced AI systems naively designed to optimize some objective function could result in unintended consequences that occur on digital timescales, but without goodwill and common sense to blunt the impact.\n\n\nGiven that researchers don’t know when breakthroughs will occur, and given that there are multiple lines of open technical research that can be pursued today to address these concerns, we believe it is prudent to begin serious work on those technical obstacles to improve the community’s preparedness.\n\n\n\n\n\n---\n\n\nII. Technical research directions for safety and control\n\n\nThere are several promising lines of technical research that may help ensure that the AI systems of the future have a positive social impact. We divide this research into three broad categories:\n\n\n* **Value specification** (VS): research that aids in the design of objective functions that capture the intentions of the operators, and/or that describe socially beneficial goals. Example: [cooperative inverse reinforcement learning](https://arxiv.org/abs/1606.03137), a formal model of AI agents that inductively learn the goals of other agents (e.g., human operators).\n\n\n* **High reliability** (HR): research that aids in the design of AI systems that robustly, reliably, and verifiably pursue the given objectives. Example: [the PAC learning framework](http://aaaipress.org/Papers/AAAI/1990/AAAI90-163.pdf), which gives statistical guarantees about the correctness of solutions to certain types of classification problems. This framework is a nice example of research done far in advance of the development of advanced AI systems that is nevertheless likely to aid in the design of systems that are robust and reliable.\n\n\n* **Error tolerance** (ET): research that aids in the design of AI systems that are fail-safe and robust to design errors. 
Example: research into the design of objective functions that allow an agent to be shut down, [but do not give that agent incentives to cause or prevent shutdown](https://intelligence.org/files/Corrigibility.pdf).\n\n\nOur “[Agent foundations for aligning machine intelligence with human interests](https://intelligence.org/files/TechnicalAgenda.pdf)” report discusses these three targets in depth, and outlines some neglected technical research topics that are likely to be relevant to the future design of robustly beneficial AI systems regardless of their specific architecture. Our “[Alignment for advanced machine learning systems](https://intelligence.org/2016/05/04/announcing-a-new-research-program/)” report discusses technical research topics relevant to these questions under the stronger assumption that the advanced systems of the future will be qualitatively similar to modern-day machine learning (ML) systems. We also recommend a research proposal led by Dario Amodei and Chris Olah of Google Brain, “[Concrete problems in AI safety](https://arxiv.org/abs/1606.06565),” for technical research problems that are applicable to near-future AI systems and are likely to also be applicable to more advanced systems down the road. Actionable research directions discussed in these agendas include (among many other topics):\n\n\n– *robust inverse reinforcement learning*: designing reward-based agents to learn human values in contexts where observed behavior may reveal biases or ignorance in place of genuine preferences. (VS)\n\n\n– *safe exploration*: designing reinforcement learning agents to efficiently learn about their environments without performing high-risk experiments. (ET)\n\n\n– *low-impact agents*: specifying decision-making systems that deliberately avoid having a large impact, good or bad, on their environment. 
(ET)\n\n\nThere are also a number of research areas that would likely aid in the development of safe AI systems, but which are not well-integrated into the existing AI community. As an example, many of the techniques in use by the [program verification and high-assurance software](https://intelligence.org/2014/02/11/gerwin-klein-on-formal-methods/) communities cannot be applied to modern ML algorithms. Fostering more collaboration between these communities is likely to make it easier for us to design AI systems suitable for use in safety-critical situations. Actionable research directions for ML analysis and verification include:\n\n\n– *algorithmic transparency*: developing more formal tools for analyzing how and why ML algorithms perform as they do. (HR)\n\n\n– *[type theory for program verification](https://intelligence.org/2014/03/02/bob-constable/)*: developing high-assurance techniques for the re-use of verified code in new contexts. (HR)\n\n\n– *[incremental re-verification](https://intelligence.org/2014/04/09/diana-spears/)*: confirming the persistence of safety properties for adaptive systems. (HR)\n\n\nAnother category of important research for AI reliability is the development of basic theoretical tools for formally modeling intelligent agents. As an example, consider the interaction of probability theory (a theoretical tool for modeling uncertain reasoners) with modern machine learning algorithms. While modern ML systems do not strictly follow the axioms of probability theory, many of the theoretical guarantees that can be applied to them are probability-theoretic, taking the form “this agent will converge on a policy that is very close to the optimal policy, with very high probability.” Probability theory is an example of basic research that was developed far in advance of present-day ML techniques, but has proven important for attaining strong (statistical) guarantees about the behavior of ML systems. 
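To make the flavor of such probability-theoretic guarantees concrete, here is a minimal sketch (my illustration, not from the submission) of the two-sided Hoeffding bound that underlies many PAC-style statements: it says how many i.i.d. samples of a bounded reward suffice to estimate its mean to within ε with probability at least 1 − δ, regardless of the distribution.

```python
import math
import random

def hoeffding_sample_size(epsilon, delta):
    """Samples of a [0, 1]-bounded reward needed so the empirical mean is
    within epsilon of the true mean with probability >= 1 - delta
    (two-sided Hoeffding inequality): n >= ln(2/delta) / (2 * epsilon^2)."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def estimate_mean(n, p, rng):
    """Empirical mean of n Bernoulli(p) reward samples."""
    return sum(rng.random() < p for _ in range(n)) / n

epsilon, delta = 0.05, 0.01
n = hoeffding_sample_size(epsilon, delta)  # 1060 samples suffice here
rng = random.Random(0)
# Fraction of 200 independent trials where the estimate misses by > epsilon;
# the guarantee says this fraction should be at most about delta.
failures = sum(abs(estimate_mean(n, 0.3, rng) - 0.3) > epsilon
               for _ in range(200)) / 200
```

The guarantee is distribution-free, which is what makes this style of statement attractive as a model for "this agent is close to optimal with very high probability" claims about learned policies.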
We believe that more basic research of this kind can be done, and that it could prove to be similarly valuable.\n\n\nThere are a number of other aspects of good reasoning where analogous foundations are lacking, such as situations where AI systems have to allocate attention given limited computational resources, or predict the behavior of computations that are too expensive to run, or analyze the effects of potential alterations to their hardware or software. Further research into basic theoretical models of ideal reasoning (including research into bounded rationality) could yield tools that would help attain stronger theoretical guarantees about AI systems’ behavior. [Actionable research directions include](https://intelligence.org/files/TechnicalAgenda.pdf):\n\n\n– *decision theory*: giving a formal account of reasoning in settings where an agent must engage in metacognition, reflection, self-modification, or reasoning about violations of the agent/environment boundary. (HR)\n\n\n– *logical uncertainty*: generalizing Bayesian probability theory to settings where agents are uncertain about mathematical (e.g., computational) facts. (HR)\n\n\nWe believe that there are numerous promising avenues of foundational research which, if successful, could make it possible to get very strong guarantees about the behavior of advanced AI systems — stronger than many currently think is possible, in a time when the most successful machine learning techniques are often poorly understood. We believe that bringing together researchers in machine learning, program verification, and the mathematical study of formal agents would be a large step towards ensuring that highly advanced AI systems will have a robustly beneficial impact on society.\n\n\n\n\n---\n\n\nIII. Coordination prospects\n\n\nIt is difficult to say much with confidence about the long-term impact of AI. 
For now, we believe that the lines of technical research outlined above are the best available tools for addressing concerns about advanced AI systems, and for learning more about what needs to be done.\n\n\nLooking ahead, we expect the risks associated with transformative AI systems in the long term to be exacerbated if the designers of such systems (be they private-sector, public-sector, or part of some international collaboration) act under excessive time pressure. It is our belief that any policy designed to ensure that the social impact of AI is beneficial should first and foremost ensure that transformative AI systems are deployed with careful consideration, rather than in fear or haste. If scientists and engineers are worried about losing a race to the finish, they will have more incentives to cut corners on safety and control, undermining the benefits of safety-conscious work.\n\n\nIn the long term, we recommend that policymakers make use of incentives to encourage designers of AI systems to work together cooperatively, perhaps through multinational and multicorporate collaborations, in order to discourage the development of race dynamics. In light of high levels of uncertainty about the future of AI among experts, and in light of the large potential of AI research to save lives, solve social problems, and serve the common good in the near future, we recommend against broad regulatory interventions in this space. 
We recommend that effort instead be put towards encouraging interdisciplinary technical research into the AI safety and control challenges that we have outlined above.\n\n\n\n\n---\n\n\nThe post [Submission to the OSTP on AI outcomes](https://intelligence.org/2016/07/23/ostp/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-07-24T03:36:47Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "df7d5326b81f201c528a801b3945bc6e", "title": "July 2016 Newsletter", "url": "https://intelligence.org/2016/07/05/july-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* A new paper: “[A Formal Solution to the Grain of Truth Problem](https://intelligence.org/2016/06/30/grain-of-truth/).” The paper was presented at UAI-16, and describes the first general reduction of game-theoretic reasoning to expected utility maximization.\n* Participants in MIRI’s recently-concluded [Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/) (CSRBAI) have put together [AI safety environments](https://gym.openai.com/envs#safety) for the OpenAI Reinforcement Learning Gym.[1](https://intelligence.org/2016/07/05/july-2016-newsletter/#footnote_0_13899 \"Inspiration for these gyms came in part from Chris Olah and Dario Amodei in a conversation with Rafael.\") Help is welcome creating more safety environments and conducting experiments on the current set. Questions can be directed to [rafael.cosman@gmail.com](mailto:rafael.cosman@gmail.com).\n\n\n**General updates**\n* We attended the White House’s [Workshop on Safety and Control in AI](https://www.cmu.edu/safartint/).\n* Our 2016 [MIRI Summer Fellows Program](http://rationality.org/miri-summer-fellows-2016/) recently drew to a close. 
The program, run by the Center for Applied Rationality, aims to train AI scientists’ and mathematicians’ research and decision-making skills.\n* “[Why Ain’t You Rich?](https://intelligence.org/files/WhyAintYouRich.pdf)“: Nate Soares discusses decision theory in *[Dawn or Doom](https://www.amazon.com/gp/product/1626710570)*. See “[Toward Idealized Decision Theory](http://arxiv.org/abs/1507.01986)” for context.\n* Numerai, an anonymized distributed hedge fund for machine learning researchers, [has added an option](https://medium.com/@Numerai/rogue-machine-intelligence-and-a-new-kind-of-hedge-fund-7b208deec5f0#.bqhhrxoru) for donating earnings to MIRI “as a hedge against things going horribly right” in the field of AI.\n\n\n\n**News and links**\n* The White House is [requesting information](https://www.federalregister.gov/articles/2016/06/27/2016-15082/request-for-information-on-artificial-intelligence) on “safety and control issues for AI,” among other questions. Public submissions will be accepted through July 22.\n* “[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)“: Researchers from Google Brain, OpenAI, and academia propose a very promising new AI safety research agenda. 
The proposal is showcased on the [Google Research Blog](https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html) and the [OpenAI Blog](https://openai.com/blog/concrete-ai-safety-problems/), as well as the [Open Philanthropy Blog](http://www.openphilanthropy.org/blog/concrete-problems-ai-safety), and has received press coverage from [*Bloomberg*](http://www.bloomberg.com/news/articles/2016-06-22/google-tackles-challenge-of-how-to-build-an-honest-robot), *[The Verge](http://www.theverge.com/circuitbreaker/2016/6/22/11999664/google-robots-ai-safety-five-problems)*, and [*MIT Technology Review*](https://www.technologyreview.com/s/601750/google-gets-practical-about-the-dangers-of-ai/).\n* After criticizing the thinking behind OpenAI [earlier in the month](http://www.zdnet.com/article/google-alphabets-schmidt-ignore-elon-musks-ai-fears-hes-no-computer-scientist/), Alphabet executive chairman Eric Schmidt [comes out in favor of AI safety research](http://fortune.com/2016/06/28/artificial-intelligence-potential/):\n\n\nDo we worry about the doomsday scenarios? We believe it’s worth thoughtful consideration. Today’s AI only thrives in narrow, repetitive tasks where it is trained on many examples. But no researchers or technologists want to be part of some Hollywood science-fiction dystopia. The right course is not to panic—it’s to get to work. 
Google, alongside many other companies, is doing rigorous research on AI safety, such as how to ensure people can [interrupt an AI system](http://uk.businessinsider.com/google-deepmind-develops-a-big-red-button-to-stop-dangerous-ais-causing-harm-2016-6?r=US&IR=T) whenever needed, and how to make such systems robust to cyberattacks.\n\n\n* Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell propose a formal definition of the value alignment problem as “[Cooperative Inverse Reinforcement Learning](http://arxiv.org/abs/1606.03137),” a two-player game where a human and robot are both “rewarded according to the human’s reward function, but the robot does not initially know what this is.” In a CSRBAI talk ([slides](https://intelligence.org/files/csrbai/hadfield-menell-slides.pdf)), Hadfield-Menell discusses applications for AI corrigibility.\n* Jaan Tallinn [brings his AI risk focus](http://thebulletin.org/press-release/skype-co-founder-jaan-tallinn-joins-bulletin-board-sponsors9532) to the *Bulletin of the Atomic Scientists*.\n* Stephen Hawking [weighs in on intelligence explosion](http://www.ora.tv/larrykingnow/2016/6/25/larry-kings-exclusive-conversation-with-stephen-hawking) (video). 
Sam Harris and Neil deGrasse Tyson debate the idea [at greater length](https://www.youtube.com/watch?v=8L3DKlBz874&t=1h22m37s) (audio, at 1:22:37).\n* Ethereum developer Vitalik Buterin discusses the implications of [value complexity and fragility](https://blog.ethereum.org/2016/06/19/thinking-smart-contract-security/) and [other AI safety concepts](https://medium.com/@VitalikButerin/why-cryptoeconomics-and-x-risk-researchers-should-listen-to-each-other-more-a2db72b3e86b#.c84y42jjp) for cryptoeconomics.\n* *Wired* covers a “[demonically clever](https://www.wired.com/2016/06/demonically-clever-backdoor-hides-inside-computer-chip/)” backdoor based on chips’ analog properties.\n* *CNET* interviews MIRI and a who’s who of AI scientists for a pair of articles: “[AI, Frankenstein? Not So Fast, Experts Say](http://www.cnet.com/uk/news/ai-frankenstein-not-so-fast-artificial-intelligence-experts-say/)” and “[When Hollywood Does AI, It’s Fun But Farfetched](http://www.cnet.com/uk/news/hollywood-ai-artificial-intelligence-fun-but-far-fetched/).”\n* Next month’s [Effective Altruism Global](http://eaglobal.org/) conference is accepting applicants.\n\n\n |\n\n\n\n\n\n---\n\n1. 
Inspiration for these gyms came in part from Chris Olah and Dario Amodei in a conversation with Rafael.\n\nThe post [July 2016 Newsletter](https://intelligence.org/2016/07/05/july-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-07-06T13:17:18Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "12ef58e160232fae389d32d2b4f11a09", "title": "New paper: “A formal solution to the grain of truth problem”", "url": "https://intelligence.org/2016/06/30/grain-of-truth/", "source": "miri", "source_type": "blog", "text": "[![A Formal Solution to the Grain of Truth Problem](https://intelligence.org/files/GrainofTruth.png)](http://www.auai.org/uai2016/proceedings/papers/87.pdf)\nFuture of Humanity Institute Research Fellow [Jan Leike](https://jan.leike.name/) and MIRI Research Fellows Jessica Taylor and Benya Fallenstein have just presented new results at [UAI 2016](http://www.auai.org/uai2016/index.php) that resolve a longstanding open problem in game theory: “**[A formal solution to the grain of truth problem](http://www.auai.org/uai2016/proceedings/papers/87.pdf)**.”\n\n\nGame theorists have techniques for specifying agents that eventually do well on iterated games against other agents, so long as their beliefs contain a “grain of truth” — nonzero prior probability assigned to the actual game they’re playing. Getting that grain of truth was previously an unsolved problem in multiplayer games, because agents can run into infinite regresses when they try to model agents that are modeling them in turn. This result shows how to break that loop: by means of [reflective oracles](https://intelligence.org/2015/04/28/new-papers-reflective/).\n\n\nIn the process, Leike, Taylor, and Fallenstein provide a rigorous and general foundation for the study of multi-agent dilemmas. 
This work provides a surprising and somewhat satisfying basis for [approximate Nash equilibria](https://en.wikipedia.org/wiki/Epsilon-equilibrium) in repeated games, folding a variety of problems in decision and game theory into a common framework.\n\n\nThe paper’s abstract reads:\n\n\n\n> A Bayesian agent acting in a multi-agent environment learns to predict the other agents’ policies if its prior assigns positive probability to them (in other words, its prior contains a *grain of truth*). Finding a reasonably large class of policies that contains the Bayes-optimal policies with respect to this class is known as the *grain of truth problem*. Only small classes are known to have a grain of truth and the literature contains several related impossibility results.\n> \n> \n> In this paper we present a formal and general solution to the full grain of truth problem: we construct a class of policies that contains all computable policies as well as Bayes-optimal policies for every lower semicomputable prior over the class. When the environment is unknown, Bayes-optimal agents may fail to act optimally even asymptotically. However, agents based on Thompson sampling converge to play ε-Nash equilibria in arbitrary unknown computable multi-agent environments. While these results are purely theoretical, we show that they can be computationally approximated arbitrarily closely.\n> \n> \n\n\nTraditionally, when modeling computer programs that model the properties of other programs (such as when modeling an agent reasoning about a game), the first program is assumed to have access to an oracle (such as a halting oracle) that can answer arbitrary questions about the second program. This works, but it doesn’t help with modeling agents that can reason about *each other*.\n\n\nWhile a halting oracle can predict the behavior of any isolated Turing machine, it cannot predict the behavior of another Turing machine that has access to a halting oracle. 
If this were possible, the second machine could use its oracle to figure out what the first machine-oracle pair thinks it will do, at which point it can do the opposite, setting up a [liar paradox](https://en.wikipedia.org/wiki/Liar_paradox) scenario. For analogous reasons, two agents with similar resources, operating in real-world environments without any halting oracles, cannot perfectly predict each other in full generality.\n\n\nGame theorists know how to build formal models of asymmetric games between a weaker player and a stronger player, where the stronger player understands the weaker player’s strategy but not vice versa. For the reasons above, however, games between agents of similar strength have resisted full formalization. As a consequence of this, game theory has until now provided no method for *designing* agents that perform well on complex iterated games containing other agents of similar strength.\n\n\n\nUsually, the way to build an ideal agent is to have the agent consider a large list of possible policies, predict how the world would respond to each policy, and then choose the best policy by some metric. However, in multi-player games, if your agent considers a big list of policies that both it and the opponent might play, then the best policy for the opponent is usually some alternative policy that was not in your list. 
(And if you add that policy to your list, then the new best policy for the opponent to play is now a new alternative that wasn’t in the list, and so on.)\n\n\nThis is the grain of truth problem, first posed by [Kalai and Lehrer](http://www.jstor.org/stable/2951492?seq=1#page_scan_tab_contents) in 1993: define a class of policies that is large enough to be interesting and realistic, and for which *the best response to an agent that considers that policy class is inside the class*.[1](https://intelligence.org/2016/06/30/grain-of-truth/#footnote_0_13833)\n\n\nTaylor and Fallenstein have developed a formalism that enables a solution: *reflective* oracles capable of answering questions about agents with access to equivalently powerful oracles. Leike has led work on demonstrating that this formalism can solve the grain of truth problem, and in the process shows that the Bayes-optimal policy generally does not converge to a Nash equilibrium. [Thompson sampling](https://en.wikipedia.org/wiki/Thompson_sampling), however, does converge to a Nash equilibrium — a result that comes out of another paper presented at UAI 2016, Leike, Lattimore, Orseau, and Hutter’s “[Thompson sampling is asymptotically optimal in general environments](https://arxiv.org/abs/1602.07905).”\n\n\nThe key feature of reflective oracles is that they avoid diagonalization and paradoxes by randomizing in the relevant cases.[2](https://intelligence.org/2016/06/30/grain-of-truth/#footnote_1_13833) This allows agents with access to a reflective oracle to consistently reason about the behavior of arbitrary agents that also have access to a reflective oracle, which in turn makes it possible to model agents that converge to Nash equilibria by their own faculties (rather than by fiat or assumption).\n\n\nThis framework can be used to, e.g., define games between multiple copies of [AIXI](https://en.wikipedia.org/wiki/AIXI). As originally formulated, AIXI [cannot entertain hypotheses about its own existence](https://wiki.lesswrong.com/wiki/Naturalized_induction), or about the existence of similarly powerful agents; classical Bayes-optimal agents must be larger and more intelligent than their environments. With access to a reflective oracle, however, [Fallenstein, Soares, and Taylor have shown](https://intelligence.org/files/ReflectiveSolomonoffAIXI.pdf) that AIXI can meaningfully entertain hypotheses about itself and copies of itself while avoiding diagonalization.\n\n\nThe other main novelty of this paper is that reflective oracles turn out to be limit-computable, and so allow for approximation by anytime algorithms. The reflective oracles paradigm is therefore likely to be quite valuable for investigating game-theoretic questions involving generally intelligent agents that can understand and model each other.[3](https://intelligence.org/2016/06/30/grain-of-truth/#footnote_2_13833)\n\n\n\n\n---\n\n1. 
It isn’t hard to solve the grain of truth problem for policy classes that are very small. Consider a prisoner’s dilemma where the only strategies the other player can select take the form “cooperate until the opponent defects, then defect forever” or “cooperate *n* times in a row (or until the opponent defects, whichever happens first) and then defect forever.” Leike, Taylor, and Fallenstein note:\n\n> The Bayes-optimal behavior is to cooperate until the posterior belief that the other agent defects in the time step after the next is greater than some constant (depending on the discount function) and then defect afterwards.\n> \n> \n\n\nBut this is itself a strategy in the class under consideration. If both players are Bayes-optimal, then both will have a grain of truth (i.e., their actual strategy is assigned nonzero probability by the other player) “and therefore they converge to a Nash equilibrium: either they both cooperate forever or after some finite time they both defect forever.”\n\n\nSlightly expanding the list of policies an agent might deploy, however, can make it hard to find a policy class that contains a grain of truth. For example, if “tit for tat” is added to the policy class, then, depending on the prior, the grain of truth may be lost. In this case, if the first agent thinks that the second agent is very likely “always defect” but maybe “tit for tat,” then the best policy might be something like “defect until they cooperate, then play tit for tat,” but this policy is not in the policy class. The question resolved by this paper is how to find priors that contain a grain of truth for much richer policy classes.\n2. Specifically, reflective oracles output 1 if a specified machine returns 1 with probability greater than a specified probability *p*, and they output 0 if the probability that the machine outputs 0 is greater than 1-*p*. 
When the probability is exactly *p*, however—or the machine has some probability of not halting, and *p* hits this probability mass—the oracle can output 0, 1, or randomize between the two. This allows reflective oracles to avoid probabilistic versions of the liar paradox: any attempt to ask the reflective oracle an unanswerable question will yield a meaningless placeholder answer.\n3. Thanks to Tsvi Benson-Tilsen, Chana Messinger, Nate Soares, and Jan Leike for helping draft this announcement.\n\nThe post [New paper: “A formal solution to the grain of truth problem”](https://intelligence.org/2016/06/30/grain-of-truth/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-06-30T22:57:28Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "39c3f5893178aa6f6f20743c983aee33", "title": "June 2016 Newsletter", "url": "https://intelligence.org/2016/06/12/june-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* New paper: “[Safely Interruptible Agents](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/).” The paper will be presented at UAI-16, and is a collaboration between Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute (FHI) and MIRI; see FHI’s [press release](https://www.fhi.ox.ac.uk/google-deepmind-and-fhi-collaborate-to-present-research-at-uai-2016/). 
The paper has received (often hyperbolic) coverage from a number of press outlets, including [*Business Insider*](http://uk.businessinsider.com/google-deepmind-develops-a-big-red-button-to-stop-dangerous-ais-causing-harm-2016-6?r=US&IR=T), [*Motherboard*](http://motherboard.vice.com/read/google-researchers-have-come-up-with-an-ai-kill-switch), [*Newsweek*](http://www.newsweek.com/google-big-red-button-ai-artificial-intelligence-save-world-elon-musk-466753), [*Gizmodo*](http://gizmodo.com/google-doesnt-want-to-accidentally-make-skynet-so-its-1780317950), [*BBC News*](http://www.bbc.com/news/technology-36472140), [*eWeek*](http://www.eweek.com/innovation/google-oxford-researchers-study-way-to-stop-ai-systems-from-misbehaving.html), and [*Computerworld*](http://www.computerworld.com/article/3080140/robotics/google-deepminds-kill-switch-research-may-ease-ai-fears.html#tk.rss_news).\n* New at IAFF: [All Mathematicians are Trollable: Divergence of Naturalistic Logical Updates](https://agentfoundations.org/item?id=815); [Two Problems with Causal-Counterfactual Utility Indifference](https://agentfoundations.org/item?id=839)\n* New at AI Impacts: [Metasurvey: Predict the Predictors](http://aiimpacts.org/metasurvey-predict-the-predictors/); [Error in Armstrong and Sotala 2012](http://aiimpacts.org/error-in-armstrong-and-sotala-2012/)\n* Marcus Hutter’s research group has released a new paper based on results from a MIRIx workshop: “[Self-Modification of Policy and Utility Function in Rational Agents](http://arxiv.org/abs/1605.03142).” Hutter’s team is presenting several other AI alignment papers at AGI-16 next month: “[Death and Suicide in Universal Artificial Intelligence](http://arxiv.org/abs/1606.00652)” and “[Avoiding Wireheading with Value Reinforcement Learning](http://arxiv.org/abs/1605.03143).”\n* “[Asymptotic Logical Uncertainty and The Benford Test](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/)” has been 
accepted to AGI-16.\n\n\n**General updates**\n* MIRI and FHI’s [Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/) (talk abstracts and slides now up) has kicked off with opening talks by Stuart Russell, Francesca Rossi, Tom Dietterich, and Alan Fern.\n* We visited FHI to discuss new results in logical uncertainty, our new [machine-learning-oriented research program](https://intelligence.org/2016/05/04/announcing-a-new-research-program/), and a range of other topics.\n\n\n**News and links**\n* Following an [increase in US spending on autonomous weapons](http://www.defensenews.com/story/defense/policy-budget/budget/2016/01/23/terminator-conundrum-pentagon-weighs-ethics-pairing-deadly-force-ai/79205722/), *The New York Times* reports that the Pentagon [is turning to Silicon Valley for an edge](http://www.nytimes.com/2016/05/12/technology/artificial-intelligence-as-the-pentagons-latest-weapon.html).\n* IARPA director Jason Matheny, a former researcher at FHI, discusses forecasting and risk from emerging technologies ([video](https://www.youtube.com/watch?v=9xMLsuucbng&t=4m21s)).\n* FHI Research Fellow Owen Cotton-Barratt [gives oral evidence to the UK Parliament](http://globalprioritiesproject.org/2016/06/read-our-input-to-parliament-on-ai/) on the need for robust and transparent AI systems.\n* Google reveals a hidden reason for AlphaGo’s exceptional performance against Lee Se-dol: [a new integrated circuit design](https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html) that can speed up machine learning applications by an order of magnitude.\n* Elon Musk answers questions about SpaceX, Tesla, OpenAI, and more ([video](https://www.youtube.com/watch?v=wsixsRI-Sz4)).\n* Why worry about advanced AI? 
Stuart Russell ([in *Scientific American*](http://people.eecs.berkeley.edu/~russell/papers/sciam16-supersmart.pdf)), George Dvorsky ([in *Gizmodo*](http://gizmodo.com/everything-you-know-about-artificial-intelligence-is-wr-1764020220)), and SETI director Seth Shostak ([in *Tech Times*](http://www.techinsider.io/when-artificial-intelligence-will-outsmart-humans-2016-5)) explain.\n* Olle Häggström’s new book, *[Here Be Dragons](https://smile.amazon.com/Here-Be-Dragons-Technology-Humanity/dp/0198723547?ie=UTF8&sa-no-redirect=1)*, serves as an unusually thoughtful and thorough introduction to existential risk and future technological development, including a lucid discussion of artificial superintelligence.\n* Robin Hanson examines the implications of widespread whole-brain emulation in his new book, *[The Age of Em: Work, Love, and Life when Robots Rule the Earth](http://ageofem.com/)*.\n* Bill Gates [highly recommends](http://qz.com/698334/bill-gates-says-these-are-the-two-books-we-should-all-read-to-understand-ai/) Nick Bostrom’s *Superintelligence*. The [paperback edition](https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834?ie=UTF8&me=&ref_=mt_paperback&sa-no-redirect=1&tag=s4charity-20) is now out, with a newly added afterword.\n* FHI Research Associate Paul Christiano has [joined OpenAI](https://openai.com/blog/team-update/) as an intern. 
Christiano has also written new posts on AI alignment: [Efficient and Safely Scalable](https://medium.com/ai-control/efficient-and-safely-scalable-8218fa8a871f#.ikidly4u3), [Learning with Catastrophes](https://medium.com/ai-control/learning-with-catastrophes-59387b55cc30#.5ffmznx4m), [Red Teams](https://medium.com/ai-control/red-teams-b5b6de33dc76#.1auu1mjga), and [The Reward Engineering Problem](https://medium.com/ai-control/the-reward-engineering-problem-30285c779450#.iugp9yhue).\n\n\n\n |\n\n\n\nThe post [June 2016 Newsletter](https://intelligence.org/2016/06/12/june-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-06-12T20:05:35Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "0e327ebbed8c80ec5bae1e94854e043d", "title": "New paper: “Safely interruptible agents”", "url": "https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/", "source": "miri", "source_type": "blog", "text": "[![Safely Interruptible Agents](https://intelligence.org/files/Interruptibility.png)](https://intelligence.org/files/Interruptibility.pdf)Google DeepMind Research Scientist Laurent Orseau and MIRI Research Associate Stuart Armstrong have written a new paper on error-tolerant agent designs, “**[Safely interruptible agents](https://intelligence.org/files/Interruptibility.pdf)**.” The paper is forthcoming at the [32nd Conference on Uncertainty in Artificial Intelligence](http://www.auai.org/uai2016/index.php).\n\n\nAbstract:\n\n\n\n> Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time. If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation. 
However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button — which is an undesirable outcome.\n> \n> \n> This paper explores a way to make sure a learning agent will *not* learn to prevent (or seek!) being interrupted by the environment or a human operator. We provide a formal definition of safe interruptibility and exploit the off-policy learning property to prove that either some agents are already safely interruptible, like Q-learning, or can easily be made so, like Sarsa. We show that even ideal, uncomputable reinforcement learning agents for (deterministic) general computable environments can be made safely interruptible.\n> \n> \n\n\nOrseau and Armstrong’s paper constitutes a new angle of attack on the problem of [corrigibility](https://arbital.com/p/corrigibility/). A corrigible agent is one that recognizes it is flawed or under development and assists its operators in maintaining, improving, or replacing itself, rather than resisting such attempts.\n\n\n\nIn the case of superintelligent AI systems, corrigibility is primarily aimed at averting unsafe [convergent instrumental policies](https://arbital.com/p/corrigibility/) (e.g., the policy of defending its current goal system from future modifications) when such systems have incorrect terminal goals. This leaves us more room for approximate, trial-and-error, and learning-based solutions to AI [value specification](https://intelligence.org/2015/01/29/new-report-value-learning-problem/).\n\n\nInterruptibility is an attempt to formalize one piece of the intuitive idea of corrigibility. 
*Utility indifference* (in Soares, Fallenstein, Yudkowsky, and Armstrong’s “[Corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/)”) is an example of a past attempt to define a different piece of corrigibility: systems that are indifferent to programmers’ interventions to modify their terminal goals, and will therefore avoid trying to force their programmers either to make such modifications or to avoid such modifications. “Safely interruptible agents” instead attempts to define systems that are indifferent to programmers’ interventions to modify their *policies*, and will not try to stop programmers from intervening on their everyday activities (nor try to *force* them to intervene).\n\n\nHere the goal is to make the agent’s policy converge to whichever policy is optimal if the agent believed there would be no future interruptions. Even if the agent has experienced interruptions in the past, it should act just as though it will never experience any further interruptions. 
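The off-policy intuition behind this can be sketched in a toy setting (my illustration, not code from the paper): a tabular Q-learner's update bootstraps from max over next actions regardless of which action is actually taken next, so an operator who occasionally interrupts the agent does not bias the values it learns.

```python
import random

# Toy sketch (not from the paper): off-policy Q-learning on a 3-state
# chain, with occasional operator "interruptions" that force the agent
# back to state 0. Because the Q-update bootstraps from max_a Q(s', a),
# independent of what actually happens next, the learned greedy policy
# matches the policy that is optimal with no interruptions at all.

random.seed(0)
N_STATES, ACTIONS = 3, (0, 1)          # action 1 moves right, 0 stays
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def step(state, action):
    nxt = min(state + action, N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # reward only at the goal
    return nxt, reward

Q = [[0.0, 0.0] for _ in range(N_STATES)]
state = 0
for _ in range(20000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    # Off-policy target: max over next actions, so the interruption
    # below cannot bias the value estimates.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    if nxt == N_STATES - 1 or random.random() < 0.1:   # episode end or interruption
        state = 0
    else:
        state = nxt

greedy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(greedy)   # the agent still heads for the goal from every state
```

Here the greedy policy recovered after training is the one that is optimal in the interruption-free task; an on-policy update like Sarsa's would, by contrast, fold the interruptions into its value estimates unless modified along the lines the paper describes.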
Orseau and Armstrong show that several classes of agent are safely interruptible, or can be easily made safely interruptible.\n\n\nFurther reading:\n\n\n* Stuart Armstrong’s *[Smarter Than Us](https://intelligence.org/smarter-than-us/)*, an informal introduction to the AI alignment problem.\n* [Laurent Orseau on Artificial General Intelligence.](https://intelligence.org/2013/09/06/laurent-orseau-on-agi/)\n* Orseau and Ring’s “[Space-time embedded intelligence](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf).”\n* Soares and Fallenstein’s “[Problems of self-reference in self-improving space-time embedded intelligence](https://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/).”\n\n\nThe post [New paper: “Safely interruptible agents”](https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-06-02T06:58:38Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "5395019a28cf981261165a195e7ad747", "title": "May 2016 Newsletter", "url": "https://intelligence.org/2016/05/13/may-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* [Two new papers split logical uncertainty into two distinct subproblems](https://intelligence.org/2016/04/21/two-new-papers-uniform/): “Uniform Coherence” and “Asymptotic Convergence in Online Learning with Unbounded Delays.”\n* New at IAFF: [An Approach to the Agent Simulates Predictor Problem](https://agentfoundations.org/item?id=754); [Games for Factoring Out Variables](https://agentfoundations.org/item?id=762); [Time Hierarchy Theorems for 
Distributional Estimation Problems](https://agentfoundations.org/item?id=777)\n* We will be presenting “[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf)” at the IJCAI-16 Ethics for Artificial Intelligence workshop instead of the AAAI Spring Symposium where it was previously accepted.\n\n\n**General updates**\n* We’re launching [a new research program with a machine learning focus](https://intelligence.org/2016/05/04/announcing-a-new-research-program/). Half of MIRI’s team will be investigating potential ways to specify goals and guard against errors in advanced neural-network-inspired systems.\n* We ran a [type theory and formal verification workshop](https://intelligence.org/workshops/#april-2016-tt) this past month.\n\n\n**News and links**\n* The Open Philanthropy Project explains its strategy of high-risk, high-reward [hits-based giving](http://www.openphilanthropy.org/blog/hits-based-giving) and its decision to make [AI risk its top focus area](http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity) this year.\n* Also from OpenPhil: Is it true that past researchers [over-hyped AI](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts)? 
Is there a realistic chance of AI fundamentally changing civilization [in the next 20 years](http://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence)?\n* From *Wired*: [Inside OpenAI](http://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/), and [Facebook is Building AI That Builds AI](http://www.wired.com/2016/05/facebook-trying-create-ai-can-create-ai/).\n* The White House [announces a public workshop series](https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence) on the future of AI.\n* The Wilberforce Society suggests [policies for narrow and general AI development](http://thewilberforcesociety.co.uk/wp-content/uploads/2016/02/AI_FINAL.pdf).\n* Two new AI safety papers: “[A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis](http://sethbaum.com/ac/fc_AI-Pathways.pdf)” and “[The AGI Containment Problem](http://arxiv.org/abs/1604.00545).”\n* Peter Singer [weighs in](https://www.project-syndicate.org/commentary/can-artificial-intelligence-be-ethical-by-peter-singer-2016-04) on catastrophic AI risk.\n* [Digital Genies](http://www.slate.com/articles/technology/future_tense/2016/04/stuart_russell_interviewed_about_a_i_and_human_values.html): Stuart Russell discusses the problems of value learning and corrigibility in AI.\n* Nick Bostrom is interviewed at CeBIT ([video](https://www.youtube.com/watch?v=2hrs5o37ylI)) and also gives a presentation on intelligence amplification and the status quo bias ([video](https://vimeo.com/165348147#t=22m10s)).\n* Jeff MacMahan critiques [philosophical critiques of effective altruism](http://effective-altruism.com/ea/x4/philosophical_critiques_of_effective_altruism_by/).\n* Yale political scientist Allan Dafoe is [seeking research assistants](https://www.fhi.ox.ac.uk/vacancies-for-research-assistants/) for a project on political and strategic concerns related to 
existential AI risk.\n* The Center for Applied Rationality is accepting applicants to [a free workshop for machine learning researchers and students](http://rationality.org/cfar-for-ml-researchers/).\n\n\n\n |\n\n\n\nThe post [May 2016 Newsletter](https://intelligence.org/2016/05/13/may-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-05-14T02:09:24Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "2a8800bb54233cc3e15bfbeb7772e3be", "title": "A new MIRI research program with a machine learning focus", "url": "https://intelligence.org/2016/05/04/announcing-a-new-research-program/", "source": "miri", "source_type": "blog", "text": "I’m happy to announce that MIRI is beginning work on a new research agenda, “**value alignment for advanced machine learning systems**.” Half of MIRI’s team — Patrick LaVictoire, Andrew Critch, and I — will be spending the bulk of our time on this project over at least the next year. The rest of our time will be spent on our pre-existing [research agenda](https://intelligence.org/technical-agenda/).\n\n\nMIRI’s research in general can be viewed as a response to Stuart Russell’s question for artificial intelligence researchers: “*What if we succeed?*” There appear to be a number of theoretical prerequisites for designing advanced AI systems that are robust and reliable, and our research aims to develop them early.\n\n\nOur general research agenda is agnostic about when AI systems are likely to match and exceed humans in general reasoning ability, and about whether or not such systems will resemble present-day machine learning (ML) systems. Recent years’ impressive progress in deep learning suggests that relatively simple neural-network-inspired approaches can be very powerful and general. 
For that reason, we are making an initial inquiry into a more specific subquestion: “*What if techniques similar in character to present-day work in ML succeed in creating AGI?*”.\n\n\nMuch of this work will be aimed at improving our high-level theoretical understanding of **[task-directed AI](https://arbital.com/p/genie/)**. Unlike what Nick Bostrom calls “sovereign AI,” which attempts to optimize the world in long-term and large-scale ways, task AI is limited to performing instructed tasks of limited scope, satisficing but not maximizing. Our hope is that investigating task AI from an ML perspective will help give information about both the feasibility of task AI and the tractability of early safety work on advanced supervised, unsupervised, and reinforcement learning systems.\n\n\nTo this end, we will begin by investigating eight relevant technical problems:\n\n\n\n\n\n---\n\n\n1. **Inductive ambiguity detection.**\n\n\nHow can we design a general methodology for ML systems (such as classifiers) to identify when the classification of a test instance is underdetermined by training data?\n\n\nFor example: If an ambiguity-detecting classifier is designed to distinguish images of tanks from images of non-tanks, and the training set only contains images of tanks on cloudy days and non-tanks on sunny days, this classifier ought to detect that the classification of an image of a tank on a sunny day is ambiguous, and pose some query for its operators to disambiguate it and avoid errors.\n\n\nWhile past and current work in active learning and statistical learning theory more broadly has made progress towards this goal, more work is necessary to establish realistic statistical bounds on the error rates and query rates of real-world systems in advance of their deployment in complex environments.\n\n\n2. 
**Informed oversight.**\n\n\nHow might we train a reinforcement learner to output both an action and a “report” comprising information to help a human evaluate its action?\n\n\nFor example: If a human is attempting to train a reinforcement learner to output original stories, then in evaluating the story, the human will want to know some information about the story (such as whether it has been plagiarized from another story) that may be difficult to determine by looking at the story itself.\n\n\n3. **Safe training procedures for human-imitators.**\n\n\nHow might we design an ML system that imitates humans performing some task that involves rich outputs (such as answering questions in natural language), to the best of the ML system’s abilities?\n\n\nWhile there are existing approaches to imitation learning and generative models, these have some theoretical shortcomings that prevent them from fully solving the general problem. In particular, a generative adversarial model trained on human actions only has an incentive to imitate aspects of the human that the adversary can detect; thus, issues similar to the plagiarism problem from (2) can arise.\n\n\n4. **Conservative concepts.**\n\n\nHow might we design a system that, given some positive examples of a concept, can synthesize new instances of the concept without synthesizing edge cases of it?\n\n\nFor example: If we gave the system detailed information about 100 human-created burritos as training data, it should manufacture additional burritos while avoiding edge cases such as extremely small burritos (even though these could still be considered burritos).\n\n\nBy default, most objective functions [will lead to such edge cases](https://arbital.com/p/edge_instantiation/) (say, because small burritos are cheaper to manufacture). Can we develop a general technique for avoiding this problem?\n\n\n5. 
**Specifying environmental goals using sensory data.**\n\n\nThe goals of most ML systems are defined in terms of sensory data, such as discounted reward over time. While these sensory goals can be useful proxies for environmental goals, they are not identical to them: tricking your own sensors into perceiving a strawberry in the room is not the same as actually having one there. How might we create systems that directly pursue useful goals in their environment, such as “make there be a strawberry in this room”, in a way that is robust to opportunities for the system to interfere with the proxies we used to specify the goal?\n\n\n6. **Low-impact agents.**\n\n\nWe would prefer a highly intelligent task-oriented system to generally avoid creating large unintended-by-us side effects in pursuit of its goal, and also to notify us of any large impacts that might result from achieving its goal. For example, if we ask it to build a house for a homeless family, it should know implicitly that it should avoid destroying nearby houses for materials — a large side effect — while still having the desirable large follow-on effect of improving the family’s socioeconomic situation. For any *specific* task we can specify ad-hoc cost functions for side effects like nearby house destruction, but since we cannot always anticipate such costs in advance, we want a quantitative understanding of how to *generally* limit an AI’s side effects (without also limiting its ability to have large positive intended impacts).\n\n\n7. 
**Mild optimization.**\n\n\nIn the event that the low impact criteria we employ in (6) are flawed or missing some key considerations, we would like some way to tell our system “don’t try too hard” or “don’t exercise too much optimization power”, since intuitively, large impacts are more likely to result from large efforts (although the correspondence is imperfect).\n\n\nLess capable AI systems tend to be less likely to have large unintended side effects, and we would like to know how to make even highly capable systems safe in a similar way, by having them apply less “effort”. With a satisfactory encoding of our notion of “effort”, we could then program a system to impose absolute limits on its total effort, or to use only as much effort as is needed to achieve its objective with reasonably high probability, thus potentially limiting impacts that we did not manage to measure in (6).\n\n\n8. **Averting instrumental pressures.**\n\n\nHow might we design a system that is not motivated to pursue certain [convergent instrumental goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/) — such as gaining additional resources — even when pursuing these goals would help it achieve its main objective?\n\n\nIn particular, we may wish to build a system that [has no incentive](https://intelligence.org/2014/10/18/new-report-corrigibility/) to cause or prevent its own shutdown/suspension. This relates to (6) and (7) in that instrumental pressures like “ensure my continued operation” can incentivize large impacts/efforts. However, this is a distinct agenda item because it may be possible to completely eliminate certain instrumental incentives in a way that would apply even before solutions to (6) and (7) would take effect.\n\n\n\n\n---\n\n\nHaving identified these topics of interest, we expect our work on this agenda to be timely. 
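The ambiguity-detection problem (1) can be made concrete with a deliberately tiny version-space sketch (illustrative only, not a proposed MIRI method): enumerate every hypothesis consistent with the training data, and flag a test input as ambiguous whenever those hypotheses disagree on its label.

```python
from itertools import product

# Toy sketch of inductive ambiguity detection (illustrative only).
# Inputs are (is_tank, is_sunny) bit pairs; hypotheses are all 16
# boolean functions of the two bits. A test point is "ambiguous" if
# hypotheses consistent with the training data disagree on its label.

POINTS = list(product((0, 1), repeat=2))        # all (tank, sunny) pairs
HYPOTHESES = list(product((0, 1), repeat=4))    # all 16 boolean functions

def predict(h, x):
    return h[POINTS.index(x)]

# Training data: tanks were only photographed on cloudy days.
train = [((1, 0), 1), ((0, 1), 0)]              # (input, label)
consistent = [h for h in HYPOTHESES
              if all(predict(h, x) == y for x, y in train)]

def classify(x):
    labels = {predict(h, x) for h in consistent}
    return labels.pop() if len(labels) == 1 else "ambiguous"

print(classify((1, 0)))   # tank on a cloudy day: determined by training
print(classify((1, 1)))   # tank on a sunny day: should trigger a query
```

In this version-space toy the "tank on a sunny day" input is exactly the case the training data underdetermines, so the classifier would pose a query to its operators rather than guess; the open research problem is getting realistic statistical analogues of this behavior, with bounds on error and query rates.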
The idea of “[robust and beneficial](http://futureoflife.org/ai-open-letter/)” AI has recently received increased attention as a result of the new wave of breakthroughs in machine learning. The kind of theoretical work in this project has more obvious connections to the leading paradigms in AI and ML than, for example, our recent work in [logical uncertainty](https://intelligence.org/2016/04/21/two-new-papers-uniform/) or in [game theory](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/), and therefore lends itself better to collaborations with AI/ML researchers in the near future.\n\n\n\n\n---\n\n\n \n\n\n*Thanks to Eliezer Yudkowsky and Paul Christiano for seeding many of the initial ideas for these research directions, to Patrick LaVictoire, Andrew Critch, and other MIRI researchers for helping develop these ideas, and to Chris Olah, Dario Amodei, and Jacob Steinhardt for valuable discussion.*\n\n\nThe post [A new MIRI research program with a machine learning focus](https://intelligence.org/2016/05/04/announcing-a-new-research-program/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-05-05T06:53:31Z", "authors": ["admin"], "summaries": []} -{"id": "0a33c948680a2d241edb5b61dbd206af", "title": "New papers dividing logical uncertainty into two subproblems", "url": "https://intelligence.org/2016/04/21/two-new-papers-uniform/", "source": "miri", "source_type": "blog", "text": "I’m happy to announce two new technical results related to the problem of [logical uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf), perhaps our most significant results from the past year. In brief, these results split the problem of logical uncertainty into two distinct subproblems, each of which we can now solve in isolation. 
The remaining problem, in light of these results, is to find a unified set of methods that solve both at once.\n\n\nThe solutions for each subproblem are available in two new papers, based on work spearheaded by Scott Garrabrant: “[Inductive coherence](https://arxiv.org/abs/1604.05288)”[1](https://intelligence.org/2016/04/21/two-new-papers-uniform/#footnote_0_13424 \"This work was originally titled “Uniform coherence”. This post has been updated to reflect the new terminology.\") and “[Asymptotic convergence in online learning with unbounded delays](https://arxiv.org/abs/1604.05280).”[2](https://intelligence.org/2016/04/21/two-new-papers-uniform/#footnote_1_13424 \"Garrabrant’s IAFF forum posts provide a record of how these results were originally developed, as a response to Ray Solomonoff’s theory of algorithmic probability. Concrete Failure of the Solomonoff Approach and The Entangled Benford Test lay groundwork for the “Asymptotic convergence…” problem, a limited early version of which was featured in the “Asymptotic logical uncertainty and the Benford test” report. Inductive coherence is defined in Uniform Coherence 2, and an example of an inductively coherent predictor is identified in The Modified Demski Prior is Uniformly Coherent.\")\n\n\nTo give some background on the problem: Modern probability theory models reasoners’ empirical uncertainty, their uncertainty about the state of a physical environment, e.g., “What’s behind this door?” However, it can’t represent reasoners’ *logical* uncertainty, their uncertainty about statements like “this Turing machine halts” or “the twin prime conjecture has a proof that is less than a gigabyte long.”[3](https://intelligence.org/2016/04/21/two-new-papers-uniform/#footnote_2_13424 \"This type of uncertainty is called “logical uncertainty” mainly for historical reasons. 
I think of it like this: We care about agents’ ability to reason about software systems, e.g., “this program will halt.” Those claims can be expressed in sentences of logic. The question “what probability does the agent assign to this machine halting?” then becomes “what probability does this agent assign to this particular logical sentence?” The truth of these statements could be determined in principle, but the agent may not have the resources to compute the answers in practice.\")\n\n\nRoughly speaking, if you give a classical probability distribution variables for statements that could be deduced in principle, then the axioms of probability theory force you to put probability either 0 or 1 on those statements, because you’re not allowed to assign positive probability to contradictions. In other words, modern probability theory assumes that all reasoners know all the consequences of all the things they know, even if deducing those consequences is intractable.\n\n\nWe want a generalization of probability theory that allows us to model reasoners that have uncertainty about statements that they have not yet evaluated. Furthermore, we want to understand how to assign “reasonable” probabilities to claims that are too expensive to evaluate.\n\n\nImagine an agent considering whether to use quicksort or mergesort to sort a particular dataset. They might know that quicksort typically runs faster than mergesort, but that doesn’t necessarily apply to the current dataset. They could in principle figure out which one uses fewer resources on this dataset, by running both of them and comparing, but that would defeat the purpose. 
Intuitively, they have a fair bit of knowledge that bears on the claim “quicksort runs faster than mergesort on this dataset,” but modern probability theory can’t tell us which information they should use and how.[4](https://intelligence.org/2016/04/21/two-new-papers-uniform/#footnote_3_13424 \"For more background on logical uncertainty, see Gaifman’s “Concerning measures in first-order calculi,” Garber’s “Old evidence and logical omniscience in Bayesian confirmation theory,” Hutter, Lloyd, Ng, and Uther’s “Probabilities on sentences in an expressive logic,” and Aaronson’s “Why philosophers should care about computational complexity.”\")\n\n\n\n\n---\n\n\nWhat does it mean for a reasoner to assign “reasonable probabilities” to claims that they haven’t computed, but could compute in principle? Without probability theory to guide us, we’re reduced to using intuition to identify properties that seem desirable, and then investigating which ones are possible. Intuitively, there are at least two properties we would want logically non-omniscient reasoners to exhibit:\n\n\n1. **They should be able to notice patterns in what is provable about claims, even before they can prove or disprove the claims themselves.** For example, consider the claims “this Turing machine outputs an odd number” and “this Turing machine outputs an even number.” A good reasoner thinking about those claims should eventually recognize that they are mutually exclusive, and assign them probabilities that sum to at most 1, even before they can run the relevant Turing machine.\n\n\n2. 
**They should be able to notice patterns in sentence classes that are true with a certain frequency.** For example, they should assign roughly 10% probability to “the 10^100th digit of pi is a 7” in lieu of any information about the digit, after observing (but not proving) that digits of pi tend to be uniformly distributed.\n\n\nMIRI’s work on logical uncertainty this past year can be very briefly summed up as “we figured out how to get these two properties individually, but found that it is difficult to get both at once.”\n\n\n[![Inductive Coherence](https://intelligence.org/files/InductiveCoherence.png)](https://arxiv.org/pdf/1604.05288v1)“**[Inductive coherence](https://arxiv.org/abs/1604.05288)**,” which I co-authored with Garrabrant, Benya Fallenstein, and Abram Demski, shows how to get the first property. The abstract reads:\n\n\n\n> While probability theory is normally applied to external environments, there has been some recent interest in probabilistic modeling of the outputs of computations that are too expensive to run. Since mathematical logic is a powerful tool for reasoning about computer programs, we consider this problem from the perspective of integrating probability and logic.\n> \n> \n> Recent work on assigning probabilities to mathematical statements has used the concept of *coherent* distributions, which satisfy logical constraints such as the probability of a sentence and its negation summing to one. Although there are algorithms which converge to a coherent probability distribution in the limit, this yields only weak guarantees about finite approximations of these distributions. 
In our setting, this is a significant limitation: Coherent distributions assign probability one to all statements provable in a specific logical theory, such as Peano Arithmetic, which can prove what the output of any terminating computation is; thus, a coherent distribution must assign probability one to the output of any terminating computation.\n> \n> \n> To model uncertainty about computations, we propose to work with *approximations* to coherent distributions. We introduce *inductive coherence*, a strengthening of coherence that provides appropriate constraints on finite approximations, and propose an algorithm which satisfies this criterion.\n> \n> \n\n\nGiven a series of provably mutually exclusive sentences, or a series of sentences where each provably implies the next, an inductively coherent predictor’s probabilities eventually start respecting this pattern. This is true even if the predictor hasn’t been able to prove that the pattern holds yet; if it would be possible in principle to eventually prove each instance of the pattern, then the inductively coherent predictor will start recognizing it “before too long,” in a specific technical sense, even if the proofs themselves are very long.\n\n\n[![Asymptotic Convergence](https://intelligence.org/files/AsymptoticConvergence.png)](https://arxiv.org/pdf/1604.05280v1)“**[Asymptotic convergence in online learning with unbounded delays](https://arxiv.org/abs/1604.05280)**,” which I co-authored with Garrabrant and Jessica Taylor, describes an algorithm with the second property. The abstract reads:\n\n\n\n> We study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. 
We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayes-optimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting.\n> \n> \n\n\nThe first property is about recognizing patterns about logical relationships between claims — saying “claim A implies claim B, so my probability on B must be at least my probability on A.” By contrast, the second property is about recognizing frequency patterns between similar claims — saying “I lack the resources to tell whether this claim is true, but 90% of similar claims have been true, so the base rate is 90%” (where part of the problem is figuring out what counts as a “similar claim”).\n\n\nIn this technical report, we model the latter task as an online learning problem, where a predictor observes the behavior of many small computations and has to predict the behavior of large computations. We give an algorithm that eventually assigns the “right” probabilities to every predictable subsequence of observations, in a specific technical sense.\n\n\n\n\n---\n\n\nEach paper is interesting in its own right, but for us, the exciting result is that we have teased apart and formalized two separate notions of what counts as “good reasoning” under logical uncertainty, both of which are compelling.\n\n\nFurthermore, our approaches to formalizing these two notions are very different. 
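The base-rate side of the second property can be sketched with Laplace's rule of succession (my toy example, not the paper's algorithm): lacking the resources to evaluate a claim, a reasoner falls back on the observed truth-frequency of cheaper claims in the same class, such as earlier digits of pi.

```python
# Toy base-rate predictor (illustrative only). Before a reasoner can
# compute whether "digit k of pi is a 7" for some huge k, it can fall
# back on the truth-frequency of the same claim for earlier, cheaper
# digits, smoothed with Laplace's rule of succession.

def laplace_forecast(observed):
    """P(next claim true) = (successes + 1) / (trials + 2)."""
    return (sum(observed) + 1) / (len(observed) + 2)

# First 50 decimal digits of pi; each entry records whether the
# cheap-to-check claim "digit k is a 7" turned out true.
digits = "14159265358979323846264338327950288419716939937510"
observed = [int(d == "7") for d in digits]

print(round(laplace_forecast(observed), 3))   # close to the 10% base rate
```

The hard part the paper addresses is not this arithmetic but deciding which "similar claims" form the reference class, and showing that predictions on sparse independent subsequences converge to Bayes-optimal behavior.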
“Inductive coherence” frames the problem in the traditional “unify logic with probability” setting, whereas “Asymptotic convergence in online learning with unbounded delays” fits more naturally into the online machine learning framework. The methods we found for solving the first problem don’t appear to help with the second problem, and vice versa. In fact, the two isolated solutions appear quite difficult to reconcile. The problem that these two papers leave open is: Can we get one algorithm that satisfies both properties at once?\n\n\n\n\n---\n\n1. This work was originally titled “Uniform coherence”. This post has been updated to reflect the new terminology.\n2. Garrabrant’s IAFF forum posts provide a record of how these results were originally developed, as a response to Ray Solomonoff’s theory of algorithmic probability. [Concrete Failure of the Solomonoff Approach](https://agentfoundations.org/item?id=366) and [The Entangled Benford Test](https://agentfoundations.org/item?id=612) lay groundwork for the “Asymptotic convergence…” problem, a limited early version of which was featured in the “[Asymptotic logical uncertainty and the Benford test](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/)” report. Inductive coherence is defined in [Uniform Coherence 2](https://agentfoundations.org/item?id=415), and an example of an inductively coherent predictor is identified in [The Modified Demski Prior is Uniformly Coherent](https://agentfoundations.org/item?id=431).\n3. This type of uncertainty is called “logical uncertainty” mainly for historical reasons. 
I think of it like this: We care about agents’ ability to reason about software systems, e.g., “this program will halt.” Those claims can be expressed in sentences of logic. The question “what probability does the agent assign to this machine halting?” then becomes “what probability does this agent assign to this particular logical sentence?” The truth of these statements could be determined in principle, but the agent may not have the resources to compute the answers in practice.\n4. For more background on logical uncertainty, see Gaifman’s “[Concerning measures in first-order calculi](http://lukemuehlhauser.com/wp-content/uploads/Gaifman-Concerning-measures-in-first-order-calculi.pdf),” Garber’s “[Old evidence and logical omniscience in Bayesian confirmation theory](http://fitelson.org/probability/garber.pdf),” Hutter, Lloyd, Ng, and Uther’s “[Probabilities on sentences in an expressive logic](http://arxiv.org/pdf/1209.2620.pdf),” and Aaronson’s “[Why philosophers should care about computational complexity](http://www.scottaaronson.com/papers/philos.pdf).”\n\nThe post [New papers dividing logical uncertainty into two subproblems](https://intelligence.org/2016/04/21/two-new-papers-uniform/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-04-21T16:17:03Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "598767dd80f984d5881b5b11b0275bc2", "title": "April 2016 Newsletter", "url": "https://intelligence.org/2016/04/11/april-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* A new paper: “[Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/)“\n* New at IAFF: [What Does it Mean for Correct Operation to Rely on Transfer Learning?](https://agentfoundations.org/item?id=685); [Virtual Models of Virtual AIs in Virtual 
Worlds](https://agentfoundations.org/item?id=657)\n\n\n**General updates**\n* We’re [currently accepting applicants](https://intelligence.org/2016/03/28/announcing-a-new-colloquium-series-and-fellows-program/) to two programs we’re running in June: our 2016 Summer Fellows program ([details](http://rationality.org/miri-summer-fellows-2016/)), and a new Colloquium Series on Robust and Beneficial AI ([details](https://intelligence.org/colloquium-series/)).\n* MIRI has a new second-in-command: [Malo Bourgon](https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/).\n* We’re hiring! [Apply here](https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/) for our new research position in type theory.\n* AI Impacts is asking for [examples of concrete tasks](http://aiimpacts.org/concrete-ai-tasks-bleg/) AI systems can’t yet achieve. You can also submit these tasks to Phil Tetlock, who is [making the same request](http://lukemuehlhauser.com/tetlock-wants-suggestions-for-strong-ai-signposts/) for Good Judgment Open.\n* MIRI senior researcher Eliezer Yudkowsky [discusses his core AI concerns](http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html) with Bryan Caplan. (See [Caplan’s response](http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html) and [Yudkowsky’s follow-up](http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html#355226).)\n* Yudkowsky surveys [lessons from game-playing AI](http://futureoflife.org/2016/03/15/eliezer-yudkowsky-on-alphagos-wins/).\n\n\n**News and links**\n* Google DeepMind’s AlphaGo software [defeats leading Go player Lee Sedol](http://qz.com/639952/googles-ai-won-the-game-go-by-defying-millennia-of-basic-human-instinct/) 4-1. 
GoGameGuru provides excellent commentary on each game ([1](https://gogameguru.com/alphago-defeats-lee-sedol-game-1/), [2](https://gogameguru.com/alphago-races-ahead-2-0-lee-sedol/), [3](https://gogameguru.com/alphago-shows-true-strength-3rd-victory-lee-sedol/), [4](https://gogameguru.com/lee-sedol-defeats-alphago-masterful-comeback-game-4/), [5](https://gogameguru.com/alphago-defeats-lee-sedol-4-1/)). Lee’s home country of South Korea responds with an [AI funding push](http://www.nature.com/news/south-korea-trumpets-860-million-ai-fund-after-alphago-shock-1.19595).\n* In other Google news: *The New York Times* reports on an [AI platform war](http://www.nytimes.com/2016/03/26/technology/the-race-is-on-to-control-artificial-intelligence-and-techs-future.html); Alphabet’s head of moonshots [rejects AI risk concerns](http://www.forbes.com/sites/aarontilley/2016/03/24/alphabets-moonshots-head-astro-teller-fear-of-ai-and-robots-is-wildly-overblown/#3d90d4794e0c); and Alphabet [jettisons its main robotics division](http://www.bloomberg.com/news/articles/2016-03-17/google-is-said-to-put-boston-dynamics-robotics-unit-up-for-sale).\n* The UK Parliament is [launching an inquiry](http://www.zdnet.com/article/uk-looks-at-impact-of-ai-and-robotics-on-jobs-and-society/) into “social, legal, and ethical issues” raised by AI, and invites [written submissions](http://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/inquiries/parliament-2015/robotics-and-artificial-intelligence-inquiry-15-16/commons-written-submission-form/) of relevant evidence and arguments.\n* The White House’s Council of Economic Advisers predicts [the widespread automation of low-paying jobs](http://www.vox.com/2016/3/30/11332168/obama-economists-robot-automation). Related: [How Machines Destroy (And Create!) 
Jobs](http://www.npr.org/sections/money/2015/05/18/404991483/how-machines-destroy-and-create-jobs-in-4-graphs).\n* CGP Grey, who discussed automation in Humans Need Not Apply ([video](https://www.youtube.com/watch?v=7Pq-S557XQU)), has a [thoughtful conversation](http://lukemuehlhauser.com/cpg-grey-on-superintelligence/) about Nick Bostrom’s *Superintelligence* ([audio](https://www.youtube.com/watch?v=jmOBm-Lcs70&t=1h12m47s)).\n* Amitai and Oren Etzioni call for the development of [guardian AI](http://recode.net/2016/02/04/to-keep-ai-safe-use-ai/), “second-order AI software that will police AI.”\n* In a new paper, Bostrom weighs the pros and cons of [openness in AI](http://www.nickbostrom.com/papers/openness.pdf).\n* Bostrom argues for scalable AI control methods at RSA Conference ([video](https://www.youtube.com/watch?v=7gTPZUjvNdE)).\n* The Open Philanthropy Project, a collaboration between GiveWell and Good Ventures, [awards](http://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support) $100,000 to the Future of Life Institute.\n* The Center for Applied Rationality is seeking participants for two free programs: a [Workshop on AI Safety Strategy](http://rationality.org/waiss/) and [EuroSPARC](http://rationality.org/eurosparc/), a math summer camp.\n\n\n\n |\n\n\n\nThe post [April 2016 Newsletter](https://intelligence.org/2016/04/11/april-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-04-11T15:44:33Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "12e0ba64d3f7fa09303d8992e1bc1c02", "title": "New paper on bounded Löb and robust cooperation of bounded agents", "url": "https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/", "source": "miri", "source_type": "blog", "text": "[![Robust Cooperation](https://intelligence.org/files/ParametricBoundedLobsTheorem.png)](http://arxiv.org/pdf/1602.04184v4.pdf)MIRI Research 
Fellow Andrew Critch has written a new paper on cooperation between software agents in the Prisoner’s Dilemma, available on arXiv: “[Parametric bounded Löb’s theorem and robust cooperation of bounded agents](http://arxiv.org/abs/1602.04184).” The abstract reads:\n\n\n\n> Löb’s theorem and Gödel’s theorem make predictions about the behavior of systems capable of self-reference with unbounded computational resources with which to write and evaluate proofs. However, in the real world, systems capable of self-reference will have limited memory and processing speed, so in this paper we introduce an effective version of Löb’s theorem which is applicable given such bounded resources. These results have powerful implications for the game theory of bounded agents who are able to write proofs about themselves and one another, including the capacity to out-perform classical Nash equilibria and correlated equilibria, attaining mutually cooperative program equilibrium in the Prisoner’s Dilemma. Previous cooperative program equilibria studied by Tennenholtz and Fortnow have depended on tests for program equality, a fragile condition, whereas “Löbian” cooperation is much more robust and agnostic of the opponent’s implementation.\n> \n> \n\n\n[Tennenholtz (2004)](http://ie.technion.ac.il/~moshet/progeqnote4.pdf) showed that cooperative equilibria exist in the Prisoner’s Dilemma between agents with transparent source code. This suggested that a number of results in classical game theory, where it is a commonplace that mutual defection is rational, might [fail to generalize](http://arxiv.org/abs/1507.01986) to settings where agents have strong guarantees about each other’s conditional behavior.\n\n\nTennenholtz’s version of program equilibrium, however, only established that rational cooperation was possible between agents with identical source code. 
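The fragility of cooperation-by-source-equality is easy to see in a toy sketch. This is an illustrative invention, not code from either paper: agents here just exchange raw program text, and the source strings are made up.

```python
def cliquebot(my_src, opp_src):
    """Tennenholtz-style program equilibrium: cooperate ('C') iff the
    opponent's program text is byte-for-byte identical to our own."""
    return "C" if opp_src == my_src else "D"

# Two semantically identical programs that differ by a single space:
src_a = "def agent(opp): return 'C' if opp == MY_SOURCE else 'D'"
src_b = "def agent(opp):  return 'C' if opp == MY_SOURCE else 'D'"

print(cliquebot(src_a, src_a))  # C: exact clones cooperate
print(cliquebot(src_a, src_b))  # D: any cosmetic difference breaks cooperation
```

The second call is the fragility the abstract refers to: an equality-testing agent defects against every implementation that is not literally its own, however similar in behavior.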
Patrick LaVictoire and other researchers at MIRI supplied the additional result that [more robust cooperation was possible](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/) between non-computable agents, and that it is possible to efficiently determine the outcomes of such games. However, some readers objected to the infinitary nature of the methods (for example, the use of halting oracles) and worried that not all of the results would carry over to finite computations.\n\n\nCritch’s report demonstrates that robust cooperative equilibria exist for bounded agents. In the process, Critch proves a new generalization of Löb’s theorem, and therefore of Gödel’s second incompleteness theorem. This parametric version of Löb’s theorem holds for proofs that can be written out in *n* or fewer characters, where the parameter *n* can be set to any number. For more background on the result’s significance, see LaVictoire’s “[Introduction to Löb’s theorem in MIRI research](https://intelligence.org/2015/03/18/new-report-introduction-lobs-theorem-miri-research/).”\n\n\nThe new Löb result shows that bounded agents face obstacles to self-referential reasoning similar to those faced by unbounded agents, and can also reap some of the same benefits. 
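One such benefit can be conveyed with a deliberately simplified mutual-simulation analogue. This is not Critch’s proof-based construction: the step budget below merely stands in for the proof-length parameter *n*, and the optimistic base case stands in for the work the parametric Löb step does in the actual argument.

```python
def fairbot(opponent, budget=5):
    # Cooperate iff simulating the opponent against FairBot, with a
    # strictly smaller budget, returns 'C'. Grounding the base case in
    # 'D' instead would make two FairBots defect against each other --
    # that regress is what the Löbian argument circumvents.
    if budget == 0:
        return "C"
    return "C" if opponent(fairbot, budget - 1) == "C" else "D"

def defectbot(opponent, budget=5):
    return "D"

print(fairbot(fairbot))    # C: mutual cooperation, no source-equality test needed
print(fairbot(defectbot))  # D: unconditional defectors are not rewarded
```

Unlike the equality test in Tennenholtz’s setup, this conditional structure is agnostic about how the opponent is implemented.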
Importantly, this lemma will likely allow us to discuss many other self-referential phenomena going forward using finitary examples rather than infinite ones.\n\n\nThe post [New paper on bounded Löb and robust cooperation of bounded agents](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-04-01T05:48:02Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "273d01ee1d735e4637baa895cc7b7b8d", "title": "MIRI has a new COO: Malo Bourgon", "url": "https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/", "source": "miri", "source_type": "blog", "text": "![Malo Bourgon](https://intelligence.org/wp-content/uploads/2016/03/Malo-Bourgon.jpg)I’m happy to announce that Malo Bourgon, formerly a program management analyst at MIRI, has taken on a new leadership role as our chief operating officer.\n\n\nAs MIRI’s second-in-command, Malo will be taking over a lot of the hands-on work of coordinating our day-to-day activities: supervising our ops team, planning events, managing our finances, and overseeing internal systems. He’ll also be assisting me in organizational strategy and outreach work.\n\n\nPrior to joining MIRI, Malo studied electrical, software, and systems engineering at the University of Guelph in Ontario. His professional interests included climate change mitigation, and during his master’s, he worked on a project to reduce waste through online detection of inefficient electric motors. 
Malo started working for us shortly after completing his master’s in early 2012, which makes him MIRI’s longest-standing team member next to Eliezer Yudkowsky.\n\n\n\nUntil now, I’ve generally thought of Malo as our secret weapon — a smart, practical efficiency savant. While Luke Muehlhauser (our previous executive director) provided the vision and planning that transformed us into a mature research organization, Malo was largely responsible for the implementation. Behind the scenes, nearly every system or piece of software MIRI uses has been put together by Malo, or in a joint effort by Malo and Alex Vermeer — a close friend of Malo’s from the University of Guelph who now works as a MIRI program management analyst. Malo’s past achievements at MIRI include:\n\n\n* coordinating MIRI’s first [research workshops](http://intelligence.org/workshops) and establishing our current recruitment pipeline.\n* establishing MIRI’s immigration workflow, allowing us to hire Benya Fallenstein, Katja Grace, and a number of other overseas researchers and administrators.\n* running MIRI’s (presently inactive) volunteer program.\n* standardizing MIRI’s document production workflow.\n* developing and leading the execution of our 2014 [SV Gives strategy](https://intelligence.org/2014/05/04/calling-all-miri-supporters/). This resulted in MIRI receiving $171,575 in donations and prizes (at least $61,330 of which came from outside our usual donor pool) over a 24-hour span.\n* designing MIRI’s fundraising infrastructure, including our live graphs.\n\n\nMore recently, Malo has begun representing MIRI in meetings with philanthropic organizations, government agencies, and for-profit AI groups.\n\n\nMalo has been an invaluable asset to MIRI, and I’m thrilled to have him take on more responsibilities here. 
As one positive consequence, this will free up more of my time to work on strategy, recruiting, fundraising, and research.\n\n\nIn other news, MIRI’s head of communications, Rob Bensinger, has been promoted to the role of research communications manager. He continues to be the best person to [contact](mailto:rob@intelligence.org) at MIRI if you have general questions about our work and mission.\n\n\nLastly, Katja Grace, the primary contributor to the [AI Impacts](http://aiimpacts.org) project, has been promoted to our list of research staff. (Katja is not part of our core research team, and works on questions related to AI strategy and forecasting rather than on our [technical research agenda](http://intelligence.org/technical-agenda).)\n\n\nMy thanks and heartfelt congratulations to Malo, Rob, and Katja for all the work they’ve done, and all they continue to do.\n\n\nThe post [MIRI has a new COO: Malo Bourgon](https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-03-31T00:52:00Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "651e249fa2ffede400c2d374b572514c", "title": "Announcing a new colloquium series and fellows program", "url": "https://intelligence.org/2016/03/28/announcing-a-new-colloquium-series-and-fellows-program/", "source": "miri", "source_type": "blog", "text": "The Machine Intelligence Research Institute is accepting applicants to two summer programs: a three-week AI robustness and reliability colloquium series (co-run with the Oxford [Future of Humanity Institute](https://www.fhi.ox.ac.uk/)), and a two-week fellows program focused on helping new researchers contribute to MIRI’s technical agenda (co-run with the [Center for Applied Rationality](http://rationality.org/)).\n\n\n\n\n---\n\n\nThe **[Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/)** (CSRBAI), running from May 
27 to June 17, is a new gathering of top researchers in academia and industry to tackle the kinds of technical questions featured in the Future of Life Institute’s [long-term AI research priorities report](http://futureoflife.org/data/documents/research_priorities.pdf) and [project grants](http://futureoflife.org/first-ai-grant-recipients/), including transparency, error-tolerance, and preference specification in software systems.\n\n\nThe goal of the event is to spark new conversations and collaborations between safety-conscious AI scientists with a variety of backgrounds and research interests. Attendees will be invited to give and attend talks at MIRI’s Berkeley, California offices during Wednesday/Thursday/Friday colloquia, to participate in hands-on Saturday/Sunday workshops, and to drop by for open discussion days:\n\n\n \n\n\n![](https://intelligence.org/files/csrbai/CSRBAICalendar.png)\n\n\n \n\n\nScheduled speakers include Stuart Russell (May 27), UC Berkeley Professor of Computer Science and co-author of *Artificial Intelligence: A Modern Approach*, Tom Dietterich (May 27), AAAI President and OSU Director of Intelligent Systems, and Bart Selman (June 3), Cornell Professor of Computer Science.\n\n\nApply here to attend any portion of the event, as well as to propose a talk or discussion topic:\n\n\n \n\n\n[Application Form](https://machineintelligence.typeform.com/to/D7X9dm)\n\n\n \n\n\n\n\n---\n\n\nThe 2016 [**MIRI Summer Fellows**](http://rationality.org/miri-summer-fellows-2016/) program, running from June 19 to July 3, doubles as a workshop for developing new problem-solving skills and mathematical intuitions, and a crash course on MIRI’s active research projects.\n\n\nThis is a smaller and more focused version of the Summer Fellows program we ran last year, which resulted in multiple new hires for us. 
As such, the program also functions as a high-intensity research retreat where MIRI staff and potential collaborators can get to know each other and work together on important [open problems in AI](https://intelligence.org/technical-agenda/). Apply here to attend the program:\n\n\n \n\n\n[Application Form](https://docs.google.com/forms/d/16xNNTEZasylCWzrkJk34J10Pgz0OjK-2wIB933qlwvA/viewform)\n\n\n \n\n\n\n\n---\n\n\nBoth programs are free of charge, including free room and board for all MIRI Summer Fellows program participants, free lunches and dinners for CSRBAI participants, and additional partial accommodations and travel assistance for select attendees. For additional information, see the [CSRBAI event page](https://intelligence.org/colloquium-series/) and the [MIRI Summer Fellows event page](http://rationality.org/miri-summer-fellows-2016/).\n\n\nThe post [Announcing a new colloquium series and fellows program](https://intelligence.org/2016/03/28/announcing-a-new-colloquium-series-and-fellows-program/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-03-29T01:43:42Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "98d7ca9d7933f8fd91194a3f1675239b", "title": "Seeking Research Fellows in Type Theory and Machine Self-Reference", "url": "https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/", "source": "miri", "source_type": "blog", "text": "The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. 
Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.\n\n\nMIRI is a mathematics and computer science research institute specializing in long-term AI safety and robustness work. Our offices are in Berkeley, California, near the UC Berkeley campus.\n\n\n\n#### Type Theory in Type Theory\n\n\nOur goal with this project is to build tools for better modeling reflective reasoning in software systems, as with our project [modeling the HOL4 proof assistant within itself](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/). There are Gödelian reasons to think that self-referential reasoning is not possible in full generality. However, many real-world tasks that cannot be solved in full generality admit of effective mostly-general or heuristic approaches. Humans, for example, certainly succeed in trusting their own reasoning in many contexts.\n\n\nThere are a number of tools missing in modern-day theorem provers that would be helpful for studying self-referential reasoning. First among these is theorem provers that can construct proofs about software systems that make use of a very similar theorem prover. To build these tools in a strongly typed programming language, we need to start by writing programs and proofs that can make reference to the type of programs and proofs in the same language.\n\n\nType theory in type theory has recently received a fair amount of attention. [James Chapman’s work](https://github.com/jmchapman/TT-in-TT) is pushing in a similar direction to what we want, as is [Matt Brown and Jens Palsberg’s](http://compilers.cs.ucla.edu/popl16/), but these projects don’t yet give us the tools we need. 
(F-omega is too weak a logic for our purposes, and methods like Chapman’s don’t get us self-representations.)\n\n\nThis is intended to be an independent research project, though some collaborations with other researchers may occur. Our expectation is that this will be a multi-year project, but it is difficult to predict exactly how difficult this task is in advance. It may be easier than it looks, or substantially more difficult.\n\n\nDepending on how the project goes, researchers interested in continuing to work with us after this project’s completion may be able to collaborate on other parts of our [research agenda](https://intelligence.org/technical-agenda/) or propose their own additions to our program.\n\n\n#### Working at MIRI\n\n\nWe try to make working at MIRI a great experience. Here’s how we operate:\n\n\n* Modern Work Spaces. Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up work stations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.\n* Flexible Hours. This is a salaried position. We don’t have strict office hours, and we don’t limit employees’ vacation days. Our goal is to make quick progress on our [research agenda](https://intelligence.org/technical-agenda), and we would prefer that researchers take a day off than that they extend tasks to fill an extra day.\n* Living in the Bay Area. MIRI’s office is located in downtown Berkeley, California. From our office, you’re a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.\n* Travel Assistance. Visa assistance is available if needed. If you are moving to the Bay Area, we’ll cover up to $3,500 in moving expenses. 
We also provide a public transit pass with a large monthly allowance.\n\n\nThe salary for this position is negotiable, and comes with top-notch health and dental benefits.\n\n\n#### About MIRI\n\n\nMIRI is a Berkeley-based research nonprofit studying foundational questions in artificial intelligence. Our goal is to ensure that high-quality decision-making systems have a positive global impact in coming decades. MIRI Research Advisor and AI pioneer Stuart Russell [outlines](http://edge.org/conversation/the-myth-of-ai#26015) several causes for concerns about high-capability AI software:\n\n\n\n> \n> 1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n> 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\n> \n> \n> A system that is optimizing a function of *n* variables, where the objective depends on a subset of size *k*<*n*, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.\n> \n> \n\n\nOur focus is on systems too complex and autonomous for software developers to anticipate all possible states the system and its environment might evolve into. To employ such systems safely, software engineers will need to understand their behavior on a deep level, and be able to use machine analysis and verification techniques to confirm high-level system properties.\n\n\nMIRI is the primary organization specializing in this line of technical research. 
Our work is cited in the [Research Priorities for Robust and Beneficial Artificial Intelligence](http://futureoflife.org/ai-open-letter/) report, and has been discussed in Nick Bostrom’s [*Superintelligence*](https://www.youtube.com/watch?v=pywF6ZzsghI) and Russell and Norvig’s [*Artificial Intelligence: A Modern Approach*](https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/).\n\n\nLong-term AI safety is a rapidly growing field of research that has recently received significant public attention. Our current research is intended to provide this new field with theoretical foundations that will help guide the direction of future research.\n\n\n#### How to Apply\n\n\n(Update 2022: We are not actively seeking to fill this role at this time.) \n\n\n\n\nThe post [Seeking Research Fellows in Type Theory and Machine Self-Reference](https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-03-18T18:55:17Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "197b671cfdb6c29f0583c0b13248db92", "title": "March 2016 Newsletter", "url": "https://intelligence.org/2016/03/05/march-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* A new paper: “[Defining Human Values for Value Learners](https://intelligence.org/2016/02/29/new-paper-defining-human-values-for-value-learners/)“\n* New at IAFF: [Analysis of Algorithms and Partial Algorithms](https://agentfoundations.org/item?id=622); [Naturalistic Logical Updates](https://agentfoundations.org/item?id=625); [Notes from a Conversation on Act-Based and Goal-Directed Systems](https://agentfoundations.org/item?id=634); [Toy Model: Convergent Instrumental Goals](https://agentfoundations.org/item?id=649)\n* New at AI Impacts: [Global Computing 
Capacity](http://aiimpacts.org/global-computing-capacity/)\n* A revised version of “[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf)” (pdf) has been accepted to a AAAI spring symposium.\n\n\n**General updates**\n* MIRI and other Future of Life Institute (FLI) grantees participated in a [AAAI workshop](http://futureoflife.org/2016/02/17/aaai-workshop-highlights-debate-discussion-and-future-research/) on AI safety this month.\n* MIRI researcher Eliezer Yudkowsky discusses Ray Kurzweil, the Bayesian brain hypothesis, and an eclectic mix of other topics in [a new interview](https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/).\n* Alexei Andreev and Yudkowsky are [seeking investors](http://arbital.com/p/arbital_ambitions/) for Arbital, a new technology for explaining difficult topics in economics, mathematics, computer science, and other disciplines. As a demo, Yudkowsky has written a new and improved [guide to Bayes’s Rule](http://arbital.com/p/bayes_rule/).\n\n\n**News and links**\n* [Should We Fear or Welcome the Singularity?](https://www.youtube.com/watch?v=TcX_7SVI_hA) (video): a conversation between Kurzweil, Stuart Russell, Max Tegmark, and Harry Shum.\n* [The Code That Runs Our Lives](https://www.youtube.com/watch?v=XG-dwZMc7Ng&t=10m0s) (video): Deep learning pioneer Geoffrey Hinton expresses his concerns about smarter-than-human AI (at 10:00).\n* [The State of AI](https://www.youtube.com/watch?v=VBceREwF7SA&t=21m09s) (video): Russell, Ya-Qin Zhang, Matthew Grob, and Andrew Moore share their views on a range of issues at Davos, including superintelligence (at 21:09).\n* Bill Gates discusses [AI timelines](http://lukemuehlhauser.com/bill-gates-on-ai-timelines/).\n* Paul Christiano proposes a new AI alignment approach: [algorithm learning by bootstrapped approval-maximization](https://medium.com/ai-control/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf#.9nd9dokju).\n* Robert Wiblin asks the 
effective altruism community: [If tech progress might be bad, what should we tell people about it?](http://effective-altruism.com/ea/tr/if_tech_progress_might_be_bad_what_should_we_tell/)\n* FLI collects [introductory resources on AI safety research](http://futureoflife.org/2016/02/29/introductory-resources-on-ai-safety-research/).\n* Raising for Effective Giving, a major fundraiser for MIRI and other EA organizations, is seeking a [Director of Growth](http://reg-charity.org/join-our-team-2/).\n* Murray Shanahan answers questions about the new [Leverhulme Centre for the Future of Intelligence](http://www3.imperial.ac.uk/newsandeventspggrp/imperialcollege/newssummary/news_5-1-2016-14-58-4). Leverhulme CFI is presently seeking an [Executive Director](http://www.jobs.cam.ac.uk/job/9344/).\n\n\n\n |\n\n\n\nThe post [March 2016 Newsletter](https://intelligence.org/2016/03/05/march-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-03-06T01:44:34Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "260e1ac809225599b807caf1c3e11dc5", "title": "John Horgan interviews Eliezer Yudkowsky", "url": "https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/", "source": "miri", "source_type": "blog", "text": "![Eliezer Yudkowsky](https://intelligence.org/wp-content/uploads/2015/06/Team_Headshot_Web_Eliezer.jpg) *Scientific American* writer John Horgan [recently interviewed](http://blogs.scientificamerican.com/cross-check/ai-visionary-eliezer-yudkowsky-on-the-singularity-bayesian-brains-and-closet-goblins/) MIRI’s senior researcher and co-founder, Eliezer Yudkowsky. The email interview touched on a wide range of topics, from politics and religion to existential risk and Bayesian models of rationality.\n\n\nAlthough Eliezer isn’t speaking in an official capacity in the interview, a number of the questions discussed are likely to be interesting to people who follow MIRI’s work. 
We’ve reproduced the full interview below.\n\n\n\n\n\n---\n\n\n**John Horgan**: When someone at a party asks what you do, what do you tell her?\n\n\n\n\n---\n\n\n**Eliezer Yudkowsky**: Depending on the venue: “I’m a decision theorist”, or “I’m a cofounder of the Machine Intelligence Research Institute”, or if it wasn’t that kind of party, I’d talk about my fiction.\n\n\n\n\n---\n\n\n**John**: What’s your favorite AI film and why?\n\n\n\n\n---\n\n\n**Eliezer**: AI in film is universally awful. *Ex Machina* is as close to being an exception to this rule as it is realistic to ask.\n\n\n\n\n---\n\n\n**John**: Is college overrated?\n\n\n\n\n---\n\n\n**Eliezer**: It’d be very surprising if college were *underrated*, given the social desirability bias of endorsing college. So far as I know, there’s no reason to disbelieve the economists who say that college has mostly become a positional good, and that previous efforts to increase the volume of student loans just increased the cost of college and the burden of graduate debt.\n\n\n\n\n---\n\n\n**John**: Why do you write fiction?\n\n\n\n\n---\n\n\n**Eliezer**: To paraphrase Wondermark, “Well, first I tried not making it, but then that didn’t work.”\n\n\nBeyond that, nonfiction conveys knowledge and fiction conveys *experience*. If you want to understand a [proof of Bayes’s Rule](http://arbital.com/p/bayes_rule_proof/?l=1xr), I can use diagrams. If I want you to *feel* what it is to use Bayesian reasoning, I have to write a story in which some character is doing that.\n\n\n\n\n\n---\n\n\n**John**: Are you religious in any way?\n\n\n\n\n---\n\n\n**Eliezer**: No. When you make a mistake, you need to avoid the temptation to go defensive, try to find some way in which you were a little right, look for a silver lining in the cloud. It’s much wiser to just say “Oops”, admit you were not even a little right, swallow the whole bitter pill in one gulp, and get on with your life. 
That’s the attitude humanity should take toward religion.\n\n\n\n\n---\n\n\n**John**: If you were King of the World, what would top your “To Do” list?\n\n\n\n\n---\n\n\n**Eliezer**: I once observed, “The libertarian test is whether, imagining that you’ve gained power, your first thought is of the laws you would pass, or the laws you would repeal.” I’m not an absolute libertarian, since not everything I want would be about repealing laws and softening constraints. But when I think of a case like this, I imagine trying to get the world to a condition where some unemployed person can offer to drive you to work for 20 minutes, be paid five dollars, and then nothing else bad happens to them. They don’t have their unemployment insurance phased out, have to register for a business license, lose their Medicare, be audited, have their lawyer certify compliance with OSHA rules, or whatever. They just have an added $5.\n\n\nI’d try to get to the point where employing somebody was once again as easy as it was in 1900. I think it can make sense nowadays to have some safety nets, but I’d try to construct every safety net such that it didn’t disincent or add paperwork to that simple event where a person becomes part of the economy again.\n\n\nI’d try to do all the things smart economists have been yelling about for a while but that almost no country ever does. Replace investment taxes and income taxes with consumption taxes and land value tax. Replace minimum wages with negative wage taxes. Institute NGDP level targeting regimes at central banks and let the too-big-to-fails go hang. Require loser-pays in patent law and put copyright back to 28 years. Eliminate obstacles to housing construction. Copy and paste from Singapore’s healthcare setup. Copy and paste from Estonia’s e-government setup. Try to replace committees and elaborate process regulations with specific, individual decision-makers whose decisions would be publicly documented and accountable. 
Run controlled trials of different government setups and actually pay attention to the results. I could go on for literally hours.\n\n\nAll this might not matter directly from the perspective of two hundred million years later. But the goodwill generated by the resulting economic boom might stand my government in good stead when I tried to figure out what the heck to do about Artificial Intelligence. The obvious thing, I guess, would be a Manhattan Project on an island somewhere, with pay competitive with top hedge funds, where people could collaborate on researching parts of the Artificial General Intelligence problem without the publication of their work automatically moving us closer to the end of the world. We’d still be working to an unknown deadline, and I wouldn’t feel relaxed at that point. Unless we postulate that I have literally magical powers or an utterly unshakeable regime, I don’t see how any law I could reasonably decree could delay AI timelines for very long on a planet where computers are already ubiquitous.\n\n\nAll of this is an impossible thought experiment in the first place, and I see roughly zero hope of it ever coming to pass in real life.\n\n\n\n\n---\n\n\n**John**: What’s so great about Bayes’s Theorem?\n\n\n\n\n---\n\n\n**Eliezer**: For one thing, Bayes’s Theorem is incredibly deep. So it’s not easy to give a brief answer to that.\n\n\nI might answer that Bayes’s Theorem is a kind of Second Law of Thermodynamics for cognition. If you obtain a well-calibrated posterior belief that some proposition is 99% probable, whether that proposition is milk being available at the supermarket or global warming being anthropogenic, then you must have processed some combination of sufficiently good priors and sufficiently strong evidence. That’s not a normative demand, it’s a law. 
In the same way that a car can’t run without dissipating entropy, you simply don’t get an accurate map of the world without a process that has Bayesian structure buried somewhere inside it, even if the process doesn’t explicitly represent probabilities or likelihood ratios. You had strong-enough evidence and a good-enough prior or you wouldn’t have gotten there.\n\n\nOn a personal level, I think the main inspiration Bayes has to offer us is just the fact that there *are* rules, that there *are* iron laws that govern whether a mode of thinking works to map reality. Mormons are told that they’ll know the truth of the Book of Mormon through feeling a burning sensation in their hearts. Let’s conservatively set the prior probability of the Book of Mormon at one to a billion (against). We then ask about the likelihood that, assuming the Book of Mormon is false, someone would feel a burning sensation in their heart after being told to expect one. If you understand Bayes’s Rule you can see at once that the improbability of the evidence is not commensurate with the improbability of the hypothesis it’s trying to lift. You don’t even have to make up numbers to see that the numbers don’t add up — as Philip Tetlock found in his study of superforecasters, superforecasters often know Bayes’s Rule but they rarely make up specific probabilities. On some level, it’s harder to be fooled if you just realize on a gut level *that there is math*, that there is *some* math you’d do to arrive at the exact strength of the evidence and whether it sufficed to lift the prior improbability of the hypothesis. That you can’t just make stuff up and believe what you want to believe because that doesn’t work.\n\n\n\n\n---\n\n\n**John**: Does the [Bayesian-brain hypothesis](http://blogs.scientificamerican.com/cross-check/are-brains-bayesian/) impress you?\n\n\n\n\n---\n\n\n**Eliezer**: I think some of the people in that debate may be talking past each other. 
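The odds arithmetic behind the Book of Mormon example above can be sketched in a few lines. This is a minimal illustration, not anything from the interview itself: the prior (one in a billion) is the figure Yudkowsky names, while the likelihood ratio of 10 is a deliberately generous number I'm assuming for the burning-sensation evidence, and `posterior_odds` is my own helper name.

```python
# Bayes's Rule in odds form: posterior odds = prior odds * likelihood ratio.
# Prior of one in a billion is from the text; the likelihood ratio of 10
# (evidence 10x more likely if the hypothesis is true) is an illustrative,
# deliberately generous assumption.

def posterior_odds(prior_odds, likelihood_ratio):
    """Multiply prior odds by the likelihood ratio of the evidence."""
    return prior_odds * likelihood_ratio

prior = 1e-9   # odds of roughly one to a billion against
lr = 10.0      # generous strength for the burning-sensation evidence

post = posterior_odds(prior, lr)
print(f"posterior odds ~ {post:.1e}")
```

Even with a generous likelihood ratio, the posterior odds remain around 10⁻⁸: the improbability of the evidence is nowhere near commensurate with the improbability of the hypothesis it is trying to lift, which is the point of the example.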
Asking whether the brain is a Bayesian algorithm is like asking whether a Honda Accord runs on a Carnot heat engine. If you have one person who’s trying to say, “Every car is a thermodynamic process that requires fuel and dissipates waste heat” and the person on the other end hears, “If you draw a diagram of a Carnot heat engine and show it to a mechanic, they should agree that it looks like the inside of a Honda Accord” then you are going to have some fireworks.\n\n\nSome people will also be really excited when they open up the internal combustion engine and find the cylinders and say, “I bet this converts heat into pressure and helps drive the car forward!” And they’ll be right, but then you’re going to find other people saying, “You’re focusing on what’s merely a single component in a much bigger library of car parts; the catalytic converter is also important and that doesn’t appear anywhere on your diagram of a Carnot heat engine. Why, sometimes we run the air conditioner, which operates in the exact opposite way of how you say a heat engine works.”\n\n\nI don’t think it would come as much of a surprise that I think the people who adopt a superior attitude and say, “You are clearly unfamiliar with modern car repair; you need a toolbox of diverse methods to build a car engine, like spark plugs and catalytic converters, not just these *thermodynamic processes* you keep talking about” are missing a key level of abstraction.\n\n\nBut if you want to know whether the brain is *literally* a Bayesian engine, as opposed to doing cognitive work whose nature we can understand in a Bayesian way, then my guess is “Heck, no.” There might be a few excitingly Bayesian cylinders in that engine, but a lot more of it is going to look like weird ad-hoc seat belts and air conditioning. 
None of which is going to change the fact that to correctly identify an apple based on sensory evidence, you need to do something that’s ultimately interpretable as resting on an inductive prior that can learn the apple concept, and updating on evidence that distinguishes apples from nonapples.\n\n\n\n\n---\n\n\n**John**: Can you be too rational?\n\n\n\n\n---\n\n\n**Eliezer**: You can run into what we call “The Valley of Bad Rationality.” If you were previously irrational in multiple ways that balanced or canceled out, then becoming half-rational can leave you worse off than before. Becoming incrementally more rational can make you incrementally worse off, if you choose the wrong place to invest your skill points first.\n\n\nBut I would not recommend to people that they obsess over that possibility too much. In my experience, people who go around talking about cleverly choosing to be irrational strike me as, well, rather nitwits about it, to be frank. It’s hard to come up with a realistic non-contrived life situation where you know that it’s a good time to be irrational and you don’t already know the true answer. I think in real life, you just tell yourself the truth as best you know it, and don’t try to be clever.\n\n\nOn an entirely separate issue, it’s possible that being an ideal Bayesian agent is ultimately incompatible with living the life best-lived from a fun-theoretic perspective. But we’re a long, long, long way from that being a bigger problem than our current self-destructiveness.\n\n\n\n\n---\n\n\n**John**: How does your vision of the Singularity differ from that of Ray Kurzweil?\n\n\n\n\n---\n\n\n**Eliezer**:\n\n\n* I don’t think you can time AI with Moore’s Law. AI is a software problem.\n* I don’t think that humans and machines “merging” is a likely source for the first superhuman intelligences. 
It took a century after the first cars before we could even begin to put a robotic exoskeleton on a horse, and a real car would still be faster than that.\n* I don’t expect the first strong AIs to be based on algorithms discovered by way of neuroscience any more than the first airplanes looked like birds.\n* I don’t think that nano-info-bio “convergence” is probable, inevitable, well-defined, or desirable.\n* I think the changes between 1930 and 1970 were bigger than the changes between 1970 and 2010.\n* I buy that productivity is currently stagnating in developed countries.\n* I think extrapolating a Moore’s Law graph of technological progress past the point where you say it predicts smarter-than-human AI is just plain weird. Smarter-than-human AI breaks your graphs.\n* Some analysts, such as Ilkka Tuomi, claim that Moore’s Law broke down in the ’00s. I don’t particularly disbelieve this.\n* The only key technological threshold I care about is the one where AI, which is to say AI software, becomes capable of strong self-improvement. We have no graph of progress toward this threshold and no idea where it lies (except that it should not be high above the human level because humans can do computer science), so it can’t be timed by a graph, nor known to be near, nor known to be far. (Ignorance implies a wide credibility interval, not being certain that something is far away.)\n* I think outcomes are not good by default — I think outcomes can be made good, but this will require hard work that key actors may not have immediate incentives to do. 
Telling people that we’re on a default trajectory to great and wonderful times is false.\n* I think that the “Singularity” has become a suitcase word with too many mutually incompatible meanings and details packed into it, and I’ve stopped using it.\n\n\n\n\n---\n\n\n**John**: Do you think you have a shot at becoming a superintelligent cyborg?\n\n\n\n\n---\n\n\n**Eliezer**: The conjunction law of probability theory says that *P*(*A*&*B*) ≤ *P*(*A*) — the probability of both A and B happening is less than the probability of A alone happening. Experimental conditions that can get humans to assign *P*(*A*&*B*) > *P*(*A*) for some *A*&*B* are said to exhibit the “conjunction fallacy” — for example, in 1982, experts at the International Congress on Forecasting assigned higher probability to “A Russian invasion of Poland, and a complete breakdown of diplomatic relations with the Soviet Union” than a separate group did for “A complete breakdown of diplomatic relations with the Soviet Union”. Similarly, another group assigned higher probability to “An earthquake in California causing a flood that causes over a thousand deaths” than another group assigned to “A flood causing over a thousand deaths somewhere in North America.” Even though adding on additional details necessarily makes a story less probable, it can make the story sound more plausible. I see understanding this as a kind of Pons Asinorum of serious futurism — the distinction between carefully weighing each and every independent proposition you add to your burden, asking if you can support that detail independently of all the rest, versus making up a wonderful vivid story.\n\n\nI mention this as context for my reply, which is, “Why the heck are you tacking on the ‘cyborg’ detail to that? I don’t want to be a cyborg.” You’ve got to be careful with tacking on extra details to things.\n\n\n\n\n---\n\n\n**John**: Do you have a shot at immortality?\n\n\n\n\n---\n\n\n**Eliezer**: What, literal immortality? 
Literal immortality seems hard. Living significantly longer than a few trillion years requires us to be wrong about the expected fate of the expanding universe. Living longer than, say, a googolplex years, requires us to be wrong about the basic character of physical law, not just the details.\n\n\nEven if some of the wilder speculations are true and it’s possible for our universe to spawn baby universes, that doesn’t get us literal immortality. To live significantly past a googolplex years without repeating yourself, you need computing structures containing more than a googol elements, and those won’t fit inside a single Hubble volume.\n\n\nAnd a googolplex is hardly infinity. To paraphrase Martin Gardner, Graham’s Number is still relatively small because most finite numbers are very much larger. Look up the fast-growing hierarchy if you really want to have your mind blown; well, eternity is longer than that. Only weird and frankly terrifying anthropic theories would let you live long enough to gaze, perhaps knowingly and perhaps not, upon the halting of the longest-running halting Turing machine with 100 states.\n\n\nBut I’m not sure that living to look upon the 100th Busy Beaver Number feels to me like it matters very much on a deep emotional level. I have some imaginative sympathy with myself a subjective century from now. That me will be in a position to sympathize with their future self a subjective century later. And maybe somewhere down the line is someone who faces the prospect of their future self not existing at all, and they might be very sad about that; but I’m not sure I can imagine who that person will be. “I want to live one more day. Tomorrow I’ll still want to live one more day. 
Therefore I want to live forever, proof by induction on the positive integers.” Even my desire for merely physical-universe-containable longevity is an abstract want by induction; it’s not that I can actually imagine myself a trillion years later.\n\n\n\n\n---\n\n\n**John**: I’ve described the Singularity as an “[escapist, pseudoscientific](http://spectrum.ieee.org/biomedical/imaging/the-consciousness-conundrum)” fantasy that distracts us from climate change, war, inequality and other serious problems. Why am I wrong?\n\n\n\n\n---\n\n\n**Eliezer**: Because you’re trying to forecast empirical facts by psychoanalyzing people. This never works.\n\n\nSuppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?\n\n\nIt could be that, (A), self-improvements of size δ tend to make the AI sufficiently smarter that it can go back and find new potential self-improvements of size *k* ⋅ δ and that *k* is greater than one, and this continues for a sufficiently extended regime that there’s a rapid cascade of self-improvements leading up to superintelligence; what I. J. Good called the intelligence explosion. Or it could be that, (B), *k* is less than one or that all regimes like this are small and don’t lead up to superintelligence, or that superintelligence is impossible, and you get a fizzle instead of an explosion. Which is true, A or B? 
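The (A)-versus-(B) dichotomy above can be put in toy-model form: each self-improvement of size δ unlocks a next improvement of size k·δ, so total gains follow a geometric series. The function name and all the numbers here are my own illustrative choices; this is a cartoon of the distinction, not a model of any real system.

```python
# Toy model of the two regimes: each self-improvement of size delta lets the
# system find a next improvement of size k * delta. With k < 1 the gains sum
# to a finite fizzle (scenario B); with k > 1 they grow without bound
# (scenario A, the intelligence explosion). Purely illustrative.

def total_capability_gain(delta, k, steps):
    """Sum the geometric series delta + k*delta + k^2*delta + ..."""
    total = 0.0
    for _ in range(steps):
        total += delta
        delta *= k
    return total

# (B) k < 1: the sum converges toward delta / (1 - k) -- a fizzle.
print(total_capability_gain(1.0, 0.5, 50))
# (A) k > 1: the sum keeps growing with every additional step.
print(total_capability_gain(1.0, 1.1, 50))
```

With k = 0.5 the cumulative gain approaches 2.0 no matter how many steps run; with k = 1.1 it exceeds a thousand times the initial improvement within fifty steps, which is why everything turns on whether k stays above one over an extended regime.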
If you actually built an AI at some particular level of intelligence and it actually tried to do that, something would actually happen out there in the empirical real world, and that event would be determined by background facts about the landscape of algorithms and attainable improvements.\n\n\nYou can’t get solid information about that event by psychoanalyzing people. It’s exactly the sort of thing that Bayes’s Theorem tells us is the equivalent of trying to run a car without fuel. Some people will be escapist regardless of the true values on the hidden variables of computer science, so observing some people being escapist isn’t strong evidence, even if it might make you feel like you want to disaffiliate with a belief or something.\n\n\nThere is a misapprehension, I think, of the nature of rationality, which is to think that it’s rational to believe “there are no closet goblins” because belief in closet goblins is foolish, immature, outdated, the sort of thing that stupid people believe. The true principle is that you go in your closet and look. So that in possible universes where there are closet goblins, you end up believing in closet goblins, and in universes with no closet goblins, you end up disbelieving in closet goblins.\n\n\nIt’s difficult but not impossible to try to sneak peeks through the crack of the closet door, to ask the question, “What would look different in the universe now if you couldn’t get sustained returns on cognitive investment later, such that an AI trying to improve itself would fizzle? What other facts should we observe in a universe like that?”\n\n\nSo you have people who say, for example, that we’ll only be able to improve AI up to the human level because we’re human ourselves, and then we won’t be able to push an AI past that. 
I think that if this is how the universe looks in general, then we should also observe, e.g., diminishing returns on investment in hardware and software for computer chess past the human level, which we did not in fact observe. Also, natural selection shouldn’t have been able to construct humans, and Einstein’s mother must have been one heck of a physicist, et cetera.\n\n\nYou have people who say, for example, that it should require more and more tweaking to get smarter algorithms and that human intelligence is around the limit. But this doesn’t square up with the anthropological record of human intelligence; we can know that there were not diminishing returns to brain tweaks and mutations producing improved cognitive power. We know this because population genetics says that mutations with very low statistical returns will not evolve to fixation at all.\n\n\nAnd hominids definitely didn’t need exponentially vaster brains than chimpanzees. And John von Neumann didn’t have a head exponentially vaster than the head of an average human.\n\n\nAnd on a sheerly pragmatic level, human axons transmit information at around a millionth of the speed of light; and when it comes to heat dissipation, each synaptic operation in the brain consumes around a million times the minimum heat dissipation for an irreversible binary operation at 300 Kelvin, and so on. Why think the brain’s software is closer to optimal than the hardware? Human intelligence is privileged mainly by being the least possible level of intelligence that suffices to construct a computer; if it were possible to construct a computer with less intelligence, we’d be having this conversation at that level of intelligence instead.\n\n\nBut this is not a simple debate, and for a detailed consideration I’d point people at an old informal paper of mine, “[Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf)”, which is unfortunately probably still the best source out there. 
But these are the type of questions one must ask to try to use our currently accessible evidence to reason about whether or not we’ll see what’s colloquially termed an “[AI FOOM](https://intelligence.org/ai-foom-debate/)” — whether there’s an extended regime where δ improvement in cognition, reinvested into self-optimization, yields greater than δ further improvements.\n\n\nAs for your question about opportunity costs:\n\n\nThere is a conceivable world where there is no intelligence explosion and no superintelligence. Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime. Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice. In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.\n\n\nThere’s also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made). And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep. (Vivid detail warning! I don’t actually know what the final hours will be like and whether nanomachines will be involved. 
But if we’re happy to visualize what it’s like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it’s like for our civilization to fail AI alignment, that can also lead us astray.)\n\n\nI think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of GiveWell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter. The question is just which of these two worlds is more probable as the one we should avoid. And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful. The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.\n\n\nI think it’s the first world that’s improbable and the second one that’s probable. I’m aware that in trying to convince people of that, I’m swimming uphill against a sense of eternal normality — the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever. But given that I do think the first conceivable world is just a fond dream, it should be clear why I don’t think we should ignore a problem we’ll predictably have to panic about later. 
The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.\n\n\n\n\n---\n\n\n**John**: Does your wife Brienne believe in the Singularity?\n\n\n\n\n---\n\n\n**Eliezer**: Brienne replies:\n\n\n\n> If someone asked me whether I “believed in the singularity”, I’d raise an eyebrow and ask them if they “believed in” robotic trucking. It’s kind of a weird question. I don’t know a lot about what the first fleet of robotic cargo trucks will be like, or how long they’ll take to completely replace contemporary ground shipping. And if there were a culturally loaded suitcase term “robotruckism” that included a lot of specific technological claims along with whole economic and sociological paradigms, I’d be hesitant to say I “believed in” driverless trucks. I confidently forecast that driverless ground shipping will replace contemporary human-operated ground shipping, because that’s just obviously where we’re headed if nothing really weird happens. Similarly, I confidently forecast an intelligence explosion. That’s obviously where we’re headed if nothing really weird happens. I’m less sure of the other items in the “singularity” suitcase.\n> \n> \n\n\nTo avoid prejudicing the result, Brienne composed her reply without seeing my other answers. We’re just well-matched.\n\n\n\n\n---\n\n\n**John**: Can we create superintelligences without knowing how our brains work?\n\n\n\n\n---\n\n\n**Eliezer**: Only in the sense that you can make airplanes without knowing how a bird flies. You don’t need to be an expert in bird biology, but at the same time, it’s difficult to know enough to build an airplane without realizing *some* high-level notion of how a bird might glide or push down air with its wings. 
That’s why I write about human rationality in the first place — if you push your grasp on machine intelligence past a certain point, you can’t help but start having ideas about how humans could think better too.\n\n\n\n\n---\n\n\n**John**: What would superintelligences want? Will they have anything resembling sexual desire?\n\n\n\n\n---\n\n\n**Eliezer**: Think of an enormous space of possibilities, a giant multidimensional sphere. This is Mind Design Space, the set of possible cognitive algorithms. Imagine that somewhere near the bottom of that sphere is a little tiny dot representing all the humans who ever lived — it’s a tiny dot because all humans have basically the same brain design, with a cerebral cortex, a prefrontal cortex, a cerebellum, a thalamus, and so on. It’s conserved even relative to chimpanzee brain design. Some of us are weird in little ways, you could say it’s a spiky dot, but the spikes are on the same tiny scale as the dot itself; no matter how neuroatypical you are, you aren’t running on a different cortical algorithm.\n\n\nAsking “what would superintelligences want” is a Wrong Question. Superintelligences are not this weird tribe of people who live across the water with fascinating exotic customs. “Artificial Intelligence” is just a name for the entire space of possibilities outside the tiny human dot. 
With sufficient knowledge you might be able to reach into that space of possibilities and deliberately pull out an AI that wanted things that had a compact description in human wanting-language, but that wouldn’t be because this is a kind of thing that those exotic superintelligence people naturally want, it would be because you managed to pinpoint one part of the design space.\n\n\nWhen it comes to pursuing things like matter and energy, we may tentatively expect partial but not total convergence — it seems like there should be many, many possible superintelligences that would instrumentally want matter and energy in order to serve terminal preferences of tremendous variety. But even there, everything is subject to defeat by special cases. If you don’t want to get disassembled for spare atoms, you can, if you understand the design space well enough, reach in and pull out a particular machine intelligence that doesn’t want to hurt you.\n\n\nSo the answer to your second question about sexual desire is that if you knew exactly what you were doing and if you had solved the general problem of building AIs that stably want particular things as they self-improve and if you had solved the general problem of pinpointing an AI’s utility functions at things that seem deceptively straightforward to human intuitions, and you’d solved an even harder problem of building an AI using the particular sort of architecture where ‘being horny’ or ‘sex makes me happy’ makes sense in the first place, then you could perhaps make an AI that had been told to look at humans, model what humans want, pick out the part of the model that was sexual desire, and then want and experience that thing too.\n\n\nYou could also, if you had a sufficiently good understanding of organic biology and aerodynamics, build an airplane that could mate with birds.\n\n\nI don’t think this would have been a smart thing for the Wright Brothers to try to do in the early days. 
There would have been absolutely no point.\n\n\nIt does seem a lot wiser to figure out how to reach into the design space and pull out a special case of AI that will lack the default instrumental preference to disassemble us for spare atoms.\n\n\n\n\n---\n\n\n**John**: I like to think superintelligent beings would be nonviolent, because they will realize that violence is stupid. Am I naive?\n\n\n\n\n---\n\n\n**Eliezer**: I think so. As David Hume might have told you, you’re making a type error by trying to apply the ‘stupidity’ predicate to an agent’s terminal values or utility function. Acts, choices, policies can be stupid given some set of preferences over final states of the world. If you happen to be an agent that has meta-preferences you haven’t fully computed, you might have a platform on which to stand and call particular guesses at the derived object-level preferences as ‘stupid’.\n\n\nA paperclip maximizer is not making a computational error by having a preference order on outcomes that prefers outcomes with more paperclips in them. It is not standing from within your own preference framework and choosing blatantly mistaken acts, nor is it standing within your meta-preference framework and making mistakes about what to prefer. It is computing the answer to a different question than the question that you are asking when you ask, “What should I do?” A paperclip maximizer just outputs the action leading to the greatest number of expected paperclips.\n\n\nThe fatal scenario is an AI that neither loves you nor hates you, because you’re still made of atoms that it can use for something else. Game theory, and issues like cooperation in the Prisoner’s Dilemma, don’t emerge in all possible cases. In particular, they don’t emerge when something is sufficiently more powerful than you that it can disassemble you for spare atoms whether you try to press Cooperate or Defect. 
Past that threshold, either you solved the problem of making something that didn’t want to hurt you, or else you’ve already lost.\n\n\n\n\n---\n\n\n**John**: Will superintelligences solve the “hard problem” of consciousness?\n\n\n\n\n---\n\n\n**Eliezer**: Yes, and in retrospect the answer will look embarrassingly obvious from our perspective.\n\n\n\n\n---\n\n\n**John**: Will superintelligences possess free will?\n\n\n\n\n---\n\n\n**Eliezer**: Yes, but they won’t have the illusion of free will.\n\n\n\n\n---\n\n\n**John**: What’s your utopia?\n\n\n\n\n---\n\n\n**Eliezer**: I refer your readers to my nonfiction [Fun Theory Sequence](https://wiki.lesswrong.com/wiki/The_Fun_Theory_Sequence), since I have not as yet succeeded in writing any novel set in a fun-theoretically optimal world.\n\n\n\n\n---\n\n\nThe original interview can be found at [AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins](http://blogs.scientificamerican.com/cross-check/ai-visionary-eliezer-yudkowsky-on-the-singularity-bayesian-brains-and-closet-goblins/).\n\n\nOther conversations that feature MIRI researchers have included: [Yudkowsky on “What can we do now?”](https://intelligence.org/2013/01/30/yudkowsky-on-what-can-we-do-now/); [Yudkowsky on logical uncertainty](https://intelligence.org/2013/01/30/yudkowsky-on-logical-uncertainty/); [Benya Fallenstein on the Löbian obstacle to self-modifying systems](https://intelligence.org/2013/08/04/benja-interview/); and [Yudkowsky, Muehlhauser, Karnofsky, Steinhardt, and Amodei on MIRI strategy](https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/).\n\n\nThe post [John Horgan interviews Eliezer Yudkowsky](https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-03-03T04:38:35Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": 
"9d9547865782529253f24f88f21e32ca", "title": "New paper: “Defining human values for value learners”", "url": "https://intelligence.org/2016/02/29/new-paper-defining-human-values-for-value-learners/", "source": "miri", "source_type": "blog", "text": "[![Defining Values](https://intelligence.org/files/DefiningValuesForValueLearners.png)](https://intelligence.org/files/DefiningValuesForValueLearners.pdf)MIRI Research Associate Kaj Sotala recently presented a new paper, “**[Defining Human Values for Value Learners](https://intelligence.org/files/DefiningValuesForValueLearners.pdf)**,” at the AAAI-16 AI, Society and Ethics workshop.\n\n\nThe abstract reads:\n\n\n\n> Hypothetical “value learning” AIs learn human values and then try to act according to those values. The design of such AIs, however, is hampered by the fact that there exists no satisfactory definition of what exactly human values are. After arguing that the standard concept of preference is insufficient as a definition, I draw on reinforcement learning theory, emotion research, and moral psychology to offer an alternative definition. In this definition, human values are conceptualized as mental representations that encode the brain’s value function (in the reinforcement learning sense) by being imbued with a context-sensitive affective gloss. I finish with a discussion of the implications that this hypothesis has on the design of value learners.\n> \n> \n\n\nEconomic treatments of agency standardly assume that preferences encode some consistent ordering over world-states revealed in agents’ choices. Real-world preferences, however, have structure that is not always captured in economic models. 
A person can have conflicting preferences about whether to study for an exam, for example, and the choice they end up making may depend on complex, context-sensitive psychological dynamics, rather than on a simple comparison of two numbers representing how much one wants to study or not study.\n\n\nSotala argues that our preferences are better understood in terms of evolutionary theory and reinforcement learning. Humans evolved to pursue activities that are likely to lead to certain outcomes — outcomes that tended to improve our ancestors’ fitness. We prefer those outcomes, even if they no longer actually maximize fitness; and we also prefer events that we have learned tend to produce such outcomes.\n\n\nAffect and emotion, on Sotala’s account, psychologically mediate our preferences. We enjoy and desire states that are highly rewarding in our evolved reward function. Over time, we also learn to enjoy and desire states that seem likely to lead to high-reward states. On this view, our preferences function to group together events that lead in expectation to similarly rewarding outcomes for similar reasons; and over our lifetimes we come to inherently value states that lead to high reward, instead of just valuing such states instrumentally. Rather than directly mapping onto our rewards, our preferences map onto our expectation of rewards.\n\n\nSotala proposes that [value learning systems](https://intelligence.org/2015/01/29/new-report-value-learning-problem/) informed by this model of human psychology could more reliably reconstruct human values. On this model, for example, we can expect human preferences to change as we find new ways to move toward high-reward states. New experiences can change which states my emotions categorize as “likely to lead to reward,” and they can thereby modify which states I enjoy and desire. 
Value learning systems that take these facts about humans’ psychological dynamics into account may be better equipped to take our likely future preferences into account, rather than optimizing for our current preferences alone.\n\n\nThe post [New paper: “Defining human values for value learners”](https://intelligence.org/2016/02/29/new-paper-defining-human-values-for-value-learners/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-03-01T07:09:20Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "f2a1951d703a6f68d0f666acbc27562f", "title": "February 2016 Newsletter", "url": "https://intelligence.org/2016/02/06/february-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* New at IAFF: [Thoughts on Logical Dutch Book Arguments](https://agentfoundations.org/item?id=582); [Another View of Quantilizers: Avoiding Goodhart’s Law](https://agentfoundations.org/item?id=596); [Another Concise Open Problem](https://agentfoundations.org/item?id=613)\n\n\n**General updates**\n* [Fundraiser and grant successes](https://intelligence.org/2016/01/12/end-of-the-year-fundraiser-and-grant-successes/): MIRI will be working with AI pioneer Stuart Russell and a to-be-determined postdoctoral researcher on the problem of corrigibility, thanks to a $75,000 [grant](https://intelligence.org/files/CorrigibilityAISystems.pdf) by the Center for Long-Term Cybersecurity.\n\n\n**News and links**\n* In a major break from trend in Go progress, DeepMind’s AlphaGo software [defeats the European Go champion](http://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234) 5-0. 
A top Go player [analyzes AlphaGo’s play](https://www.reddit.com/r/MachineLearning/comments/43fl90/synopsis_of_top_go_professionals_analysis_of/).\n* NYU hosted a [Future of AI](http://futureoflife.org/2016/01/12/the-future-of-ai-quotes-and-highlights-from-todays-nyu-symposium/) symposium this month, with a number of leading thinkers in AI and existential risk reduction in attendance.\n* [Marvin Minsky](https://en.wikipedia.org/wiki/Marvin_Minsky), one of the early architects of the field of AI, has passed away.\n* [Learning and Logic](https://medium.com/ai-control/learning-and-logic-e96bd41b1ab5): Paul Christiano writes on the challenge of “pursuing symbolically defined goals” without known observational proxies.\n* OpenAI, a new Elon-Musk-backed AI research nonprofit, [answers questions](https://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_openai_research_team) on Reddit. (MIRI senior researcher Eliezer Yudkowsky [also chimes in](https://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_openai_research_team/cystwwf).)\n* Victoria Krakovna argues that people concerned about AI safety [should consider becoming AI researchers](https://vkrakovna.wordpress.com/2016/01/16/to-contribute-to-ai-safety-consider-doing-ai-research/).\n* The Centre for Effective Altruism is accepting applicants through Feb. 
14 to the [Pareto Fellowship](http://paretofellowship.org/), a new three-month training program for ambitious altruists.\n\n\n\n |\n\n\n\nThe post [February 2016 Newsletter](https://intelligence.org/2016/02/06/february-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-02-06T12:37:48Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "3a3d0ac5104ea54813ebcc8a7e2422ae", "title": "End-of-the-year fundraiser and grant successes", "url": "https://intelligence.org/2016/01/12/end-of-the-year-fundraiser-and-grant-successes/", "source": "miri", "source_type": "blog", "text": "Our **[winter fundraising drive](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/)** has concluded. Thank you all for your support!\n\n\nThrough the month of December, 175 distinct donors gave a total of $351,298. Between this fundraiser and our summer fundraiser, which brought in $630k, we’ve seen a surge in our donor base; our previous fundraisers over the past five years had brought in on average $250k (in the winter) and $340k (in the summer). We additionally received about $170k in 2015 grants from the Future of Life Institute, and $150k in other donations.\n\n\nIn all, we’ve taken in about $1.3M in grants and contributions in 2015, up from our $1M average over the previous five years. As a result, we’re entering 2016 with a team of six full-time researchers and over a year of runway.\n\n\nOur next big push will be to close the gap between our new budget and our annual revenue. In order to sustain our current growth plans — which are aimed at expanding to a team of approximately ten full-time researchers — we’ll need to begin consistently taking in close to $2M per year by mid-2017.\n\n\nI believe this is an achievable goal, though it will take some work. It will be even more valuable if we can *overshoot* this goal and begin extending our runway and further expanding our research program. 
On the whole, I’m very excited to see what this new year brings.\n\n\n\n\n---\n\n\nIn addition to our fundraiser successes, we’ve begun seeing new **grant-winning success**. In collaboration with Stuart Russell at UC Berkeley, we’ve won a $75,000 grant from the Berkeley [Center for Long-Term Cybersecurity](http://www.ischool.berkeley.edu/cltc). The bulk of the grant will go to funding a new postdoctoral position at UC Berkeley under Stuart Russell. The postdoc will collaborate with Russell and MIRI Research Fellow Patrick LaVictoire on the problem of AI [corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/), as described in the [grant proposal](https://intelligence.org/files/CorrigibilityAISystems.pdf):\n\n\n\n> Consider a system capable of building accurate models of itself and its human operators. If the system is constructed to pursue some set of goals that its operators later realize will lead to undesirable behavior, then the system will by default have incentives to deceive, manipulate, or resist its operators to prevent them from altering its current goals (as that would interfere with its ability to achieve its current goals). […]\n> \n> \n> We refer to agents that have no incentives to manipulate, resist, or deceive their operators as “corrigible agents,” using the term as defined by [Soares et al.](https://intelligence.org/2014/10/18/new-report-corrigibility/) (2015). We propose to study different methods for designing agents that are in fact corrigible.\n> \n> \n\n\nThis postdoctoral position has not yet been filled. 
Expressions of interest can be emailed to [alex@intelligence.org](mailto:alex@intelligence.org) using the subject line “UC Berkeley expression of interest.”\n\n\nThe post [End-of-the-year fundraiser and grant successes](https://intelligence.org/2016/01/12/end-of-the-year-fundraiser-and-grant-successes/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-01-12T22:40:18Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "013d2658d017a847f3bb0afb5917bfe1", "title": "January 2016 Newsletter", "url": "https://intelligence.org/2016/01/03/january-2016-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* A new paper: “[Proof-Producing Reflection for HOL](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/)”\n* A new analysis: [Safety Engineering, Target Selection, and Alignment Theory](https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/)\n* New at IAFF: [What Do We Need Value Learning For?](https://agentfoundations.org/item?id=538); [Strict Dominance for the Modified Demski Prior](https://agentfoundations.org/item?id=541); [Reflective Probability Distributions and Standard Models of Arithmetic](https://agentfoundations.org/item?id=480); [Existence of Distributions That Are Expectation-Reflective and Know It](https://agentfoundations.org/item?id=548); [Concise Open Problem in Logical Uncertainty](https://agentfoundations.org/item?id=551)\n\n\n**General updates**\n* Our [Winter Fundraiser](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) is over! A total of 176 people donated $351,411, including some surprise matching donors. 
All of you have our sincere thanks.\n* Jed McCaleb writes on [why MIRI matters](https://intelligence.org/2015/12/15/jed-mccaleb-on-why-miri-matters/), while Andrew Critch writes on [the need to scale MIRI’s methods](https://intelligence.org/2015/12/23/need-scale-miris-methods/).\n* We attended NIPS, which hosted a symposium on the “social impacts of machine learning” this year. Viktoriya Krakovna [summarizes her impressions](http://futureoflife.org/2015/12/26/highlights-and-impressions-from-nips-conference-on-machine-learning/).\n* We’ve moved to a new, larger office with the [Center for Applied Rationality](http://rationality.org/) (CFAR), a few floors up from our old one.\n* Our [paper announcements](https://intelligence.org/category/papers/) now have their own MIRI Blog category.\n\n\n**News and links**\n* “[The 21st Century Philosophers](http://www.ozy.com/fast-forward/the-21st-century-philosophers/65230)”: AI safety research gets covered in *OZY*.\n* Sam Altman and Elon Musk have brought together leading AI researchers to form a new $1 billion nonprofit, [OpenAI](https://intelligence.org/2015/12/11/openai-and-other-news/). 
Andrej Karpathy explains OpenAI’s plans ([link](http://futureoflife.org/2015/12/21/inside-openai-an-interview-by-singularityhub/)), and Altman and Musk provide additional background ([link](https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.qcp84fjj5)).\n* Alphabet chairman Eric Schmidt and Google Ideas director Jared Cohen [write](http://time.com/4154126/technology-essay-eric-schmidt-jared-cohen/) on the need to “establish best practices to avoid undesirable outcomes” from AI.\n* A new Future of Humanity Institute (FHI) paper: “[Learning the Preferences of Ignorant, Inconsistent Agents](http://arxiv.org/abs/1512.05832).”\n* [Luke Muehlhauser](http://lukemuehlhauser.com/if-youre-an-ai-safety-lurker-now-would-be-a-good-time-to-de-lurk/) and *[The Telegraph](http://www.telegraph.co.uk/men/thinking-man/wanted-three-boffins-to-save-the-world-from-the-ai-apocalypse/)* signal-boost FHI’s AI safety job postings (deadline Jan. 6). The Global Priorities Project is also [seeking summer interns](http://globalprioritiesproject.org/2015/12/internship-2016/) (deadline Jan. 10).\n* CFAR is running a [matching fundraiser](http://rationality.org/fundraiser2015/) through the end of January.\n\n\n\n |\n\n\n\nThe post [January 2016 Newsletter](https://intelligence.org/2016/01/03/january-2016-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2016-01-03T11:55:26Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "035b6b62de828332326defe96b5188e4", "title": "Safety engineering, target selection, and alignment theory", "url": "https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/", "source": "miri", "source_type": "blog", "text": "Artificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. 
We can distinguish this from research specifically aimed at making AI systems at various capability levels safer, or more “[robust and beneficial](http://futureoflife.org/data/documents/research_priorities.pdf).” In this post, I distinguish three kinds of direct research that might be thought of as “AI safety” work: *safety engineering*, *target selection*, and *alignment theory*.\n\n\nImagine a world where humans somehow developed heavier-than-air flight before developing a firm understanding of calculus or celestial mechanics. In a world like that, what work would be needed in order to safely transport humans to the Moon?\n\n\nIn this case, we can say that the main task at hand is one of engineering a rocket and refining fuel such that the rocket, when launched, accelerates upwards and does not explode. The boundary of space can be compared to the boundary between narrowly intelligent and generally intelligent AI. Both boundaries are fuzzy, but have engineering importance: spacecraft and aircraft have different uses and face different constraints.\n\n\nPaired with this task of developing rocket capabilities is a **safety engineering** task. Safety engineering is the art of ensuring that an engineered system provides acceptable levels of safety. When it comes to achieving a soft landing on the Moon, there are many different roles for safety engineering to play. One team of engineers might ensure that the materials used in constructing the rocket are capable of withstanding the stress of a rocket launch with significant margin for error. Another might design [escape systems](http://www.space.com/29329-spacex-tests-dragon-launch-abort-system.html) that ensure the humans in the rocket can survive even in the event of failure. Another might design life support systems capable of supporting the crew in dangerous environments.\n\n\nA separate important task is **target selection**, i.e., picking where on the Moon to land. 
In the case of a Moon mission, targeting research might entail things like designing and constructing telescopes (if they didn’t exist already) and identifying a landing zone on the Moon. Of course, only so much targeting can be done in advance, and the lunar landing vehicle may need to be designed so that it can [alter the landing target at the last minute as new data comes in](https://en.wikipedia.org/wiki/Apollo_11#Landing); this again would require feats of engineering.\n\n\nBeyond the task of (safely) reaching escape velocity and figuring out where you want to go, there is one more crucial prerequisite for landing on the Moon. This is rocket **alignment** research, the technical work required to reach the correct final destination. We’ll use this as an analogy to illustrate MIRI’s research focus, the problem of *artificial intelligence* alignment.\n\n\n\n#### The alignment challenge\n\n\nHitting a certain target on the Moon [isn’t as simple](http://airandspace.si.edu/webimages/highres/5317h.jpg) as carefully pointing the nose of the rocket at the relevant lunar coordinate and hitting “launch” — not even if you trust your pilots to make course corrections as necessary. There’s also the important task of plotting trajectories between celestial bodies.\n\n\n[![Image credit: NASA/Bill Ingalls](http://intelligence.org/wp-content/uploads/2015/12/9807812154_b233944667_o-833x1024.jpg)](https://www.nasa.gov/content/antares-rocket-with-cygnus-spacecraft-launches)\nThis rocket alignment task may require a distinct body of theoretical knowledge that isn’t required just for getting a payload off of the planet. Without calculus, designing a functional rocket would be enormously difficult. 
Still, with enough tenacity and enough resources to spare, we could imagine a civilization reaching space after many years of trial and error — at which point they would be confronted with the problem that reaching space isn’t sufficient for steering toward a specific location.[1](https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/#footnote_0_12634 \"Similarly, we could imagine a civilization that lives on the only planet in its solar system, or lives on a planet with perpetual cloud cover obscuring all objects except the Sun and Moon. Such a civilization might have an adequate understanding of terrestrial mechanics while lacking a model of celestial mechanics and lacking the knowledge that the same dynamical laws hold on Earth and in space. There would then be a gap in experts’ theoretical understanding of rocket alignment, distinct from gaps in their understanding of how to reach escape velocity.\")\n\n\nThe first rocket alignment researchers might ask, “What trajectory would we have our rocket take under ideal conditions, without worrying about winds or explosions or fuel efficiency?” If even that question were beyond their current abilities, they might simplify the problem still further, asking, “At what angle and velocity would we fire a *cannonball* such that it enters a stable orbit around Earth, assuming that Earth is perfectly spherical and has no atmosphere?”\n\n\nTo an early rocket engineer, for whom even the problem of building any vehicle that makes it off the launch pad remains a frustrating task, the alignment theorist’s questions might look out-of-touch. The engineer may ask “Don’t you know that rockets aren’t going to be fired out of cannons?” or “What does going in circles around the Earth have to do with getting to the Moon?” Yet understanding rocket alignment is quite important when it comes to achieving a soft landing on the Moon. 
If you don’t yet know at what angle and velocity to fire a cannonball such that it *would* end up in a stable orbit on a perfectly spherical planet with no atmosphere, then you may need to develop a better understanding of celestial mechanics before you attempt a Moon mission.\n\n\n#### Three forms of AI safety research\n\n\nThe case is similar with AI research. AI capabilities work comes part and parcel with associated safety engineering tasks. Working today, an AI **safety engineer** might focus on making the internals of large classes of software more transparent and interpretable by humans. They might ensure that the system fails gracefully in the face of [adversarial observations](http://arxiv.org/abs/1412.6572). They might design security protocols and early warning systems that help operators prevent or handle system failures.[2](https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/#footnote_1_12634 \"Roman Yampolskiy has used the term “AI safety engineering” to refer to the study of AI systems that can provide proofs of their safety for external verification, including some theoretical research that we would term “alignment research.” His usage differs from the usage here.\")\n\n\nAI safety engineering is indispensable work, and it’s infeasible to separate safety engineering from capabilities engineering. Day-to-day safety work in aerospace engineering doesn’t rely on committees of ethicists peering over engineers’ shoulders. Some engineers will happen to spend their time on components of the system that are there for reasons of safety — such as failsafe mechanisms or fallback life-support — but safety engineering is an integral part of engineering for safety-critical systems, rather than a separate discipline.\n\n\nIn the domain of AI, **target selection** addresses the question: if one could build a powerful AI system, what should one use it for? 
The potential development of [superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) raises a number of thorny questions in theoretical and applied ethics. Some of those questions can plausibly be resolved in the near future by moral philosophers and psychologists, and by the AI research community. Others will undoubtedly need to be left to the future. Stuart Russell goes so far as to [predict](http://ww2.kqed.org/news/2015/10/27/stuart-russell-on-a-i-and-how-moral-philosophy-will-be-big-business) that “in the future, moral philosophy will be a key industry sector.” We agree that this is an important area of study, but it is not the main focus of the Machine Intelligence Research Institute.\n\n\nResearchers at MIRI focus on problems of AI **alignment**: the study of how in principle to direct a powerful AI system towards a specific goal. Where target selection is about the destination of the “rocket” (“what effects do we want AI systems to have on our civilization?”) and AI capabilities engineering is about getting the rocket to escape velocity (“how do we make AI systems powerful enough to help us achieve our goals?”), alignment is about knowing how to aim rockets towards particular celestial bodies (“assuming we could build highly capable AI systems, how would we direct them at our targets?”). Since our understanding of AI alignment [is still at the “what is calculus?” stage](https://intelligence.org/technical-agenda/), we ask questions analogous to “at what angle and velocity would we fire a cannonball to put it in a stable orbit, if Earth were perfectly spherical and had no atmosphere?”\n\n\nSelecting promising AI alignment research paths is not a simple task. With the benefit of hindsight, it’s easy enough to say that early rocket alignment researchers should begin by inventing calculus and studying gravitation. 
For someone who doesn’t yet have a clear understanding of what “calculus” or “gravitation” are, however, choosing research topics might be quite a bit more difficult. The fruitful research directions would need to compete with fruitless ones, such as studying aether or Aristotelian physics; and which research programs are fruitless may not be obvious in advance.\n\n\n#### Toward a theory of alignable agents\n\n\nWhat are some plausible candidates for the role of “calculus” or “gravitation” in the field of AI?\n\n\n[![Image credit: Brian Brondel](https://upload.wikimedia.org/wikipedia/commons/thumb/7/73/Newton_Cannon.svg/240px-Newton_Cannon.svg.png)](https://en.wikipedia.org/wiki/File:Newton_Cannon.svg)\nAt MIRI, we currently focus on subjects such as good reasoning under deductive limitations ([logical uncertainty](https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/)), decision theories that work well even for agents embedded in large environments, and reasoning procedures that approve of the way they reason. This research often involves building toy models and studying problems under dramatic simplifications, analogous to assuming a perfectly spherical Earth with no atmosphere.\n\n\nDeveloping theories of logical uncertainty isn’t what most people have in mind when they think of “AI safety research.” A natural thought here is to ask what specifically goes wrong if we don’t develop such theories. If an AI system can’t perform bounded reasoning in the domain of mathematics or logic, that doesn’t sound particularly “unsafe” — a system that needs to reason mathematically but can’t might be fairly *useless*, but it’s harder to see it becoming dangerous.\n\n\nOn our view, understanding logical uncertainty is important for helping us understand the systems we build well enough to justifiably conclude that they can be aligned in the first place. 
An analogous question in the case of rocket alignment might run: “If you don’t develop calculus, what bad thing happens to your rocket? Do you think the pilot will be struggling to make a course correction, and find that they simply can’t add up the tiny vectors fast enough?” The answer, though, isn’t that the pilot might struggle to correct their course, but rather that the trajectory that you thought led to the moon takes the rocket wildly off-course. The point of developing calculus is not to allow the pilot to make course corrections quickly; the point is to make it possible to discuss curved rocket trajectories in a world where the best tools available assume that rockets move in straight lines.\n\n\nThe case is similar with logical uncertainty. The problem is not that we visualize a specific AI system encountering a catastrophic failure because it mishandles logical uncertainty. The problem is that our best existing tools for analyzing rational agency assume that those agents are logically omniscient, making our best theories incommensurate with our best practical AI designs.[3](https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/#footnote_2_12634 \"Just as calculus is valuable both for building rockets that can reach escape velocity and for directing rockets towards specific lunar coordinates, a formal understanding of logical uncertainty might be useful both for improving AI capabilities and for improving the degree to which we can align powerful AI systems. The main motivation for studying logical uncertainty is that many other AI alignment problems are blocked on models of deductively limited reasoners, in the same way that trajectory-plotting could be blocked on models of curved paths.\")\n\n\nAt this point, the goal of alignment research is not to solve particular engineering problems. 
The goal of early rocket alignment research would be to develop shared language and tools for generating and evaluating rocket trajectories, which will require developing calculus and celestial mechanics if they do not already exist. Similarly, the goal of AI alignment research is to develop shared language and tools for generating and evaluating methods by which powerful AI systems could be designed to act as intended.\n\n\nOne might worry that it is difficult to set benchmarks of success for alignment research. Is a Newtonian understanding of gravitation sufficient to attempt a Moon landing, or must one develop a complete theory of general relativity before believing that one can land softly on the Moon?[4](https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/#footnote_3_12634 \"In either case, of course, we wouldn’t want to put a moratorium on the space program while we wait for a unified theory of quantum mechanics and general relativity. We don’t need a perfect understanding of gravity.\")\n\n\nIn the case of AI alignment, there is at least one obvious benchmark to focus on initially. Imagine we possessed an incredibly powerful computer with access to the internet, an automated factory, and large sums of money. If we could program that computer to reliably achieve some simple goal (such as producing as much diamond as possible), then a large share of the AI alignment research would be completed. This is because a large share of the problem is in understanding autonomous systems that are stable, error-tolerant, and demonstrably aligned with *some* goal. Developing the ability to steer rockets in *some* direction with confidence is harder than developing the additional ability to steer rockets to a specific lunar location.\n\n\nThe pursuit of a goal such as this one is more or less [MIRI’s approach](https://intelligence.org/2015/07/27/miris-approach/) to AI alignment research. 
We think of this as our version of the question, “Could you hit the Moon with a rocket if fuel and winds were no concern?” Answering that question, on its own, won’t ensure that smarter-than-human AI systems are aligned with our goals; but it would represent a major advance over our current knowledge, and it doesn’t look like the kind of basic insight that we can safely skip over.\n\n\n#### What next?\n\n\nOver the past year, we’ve seen a [massive](https://intelligence.org/2015/07/16/an-astounding-year/) [increase](https://intelligence.org/2015/12/11/openai-and-other-news/) in attention towards the task of ensuring that future AI systems are [robust and beneficial](http://futureoflife.org/ai-open-letter/). AI safety work is being taken very seriously, and AI engineers are stepping up and acknowledging that [safety engineering is not separable from capabilities engineering](https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/). It is becoming apparent that as the field of artificial intelligence matures, safety engineering will become a more and more firmly embedded part of AI culture. Meanwhile, new investigations of target selection and other safety questions will be showcased at an [AI and Ethics workshop](https://www.aaai.org/Workshops/ws15workshops.php#ws01) at AAAI-16, one of the larger annual conferences in the field.\n\n\nA fourth variety of safety work is also receiving increased support: **strategy** research. 
If your nation is currently engaged in a cold war and locked in a space race, you may well want to consult with game theorists and strategists so as to ensure that your attempts to put a person on the Moon do not upset a delicate political balance and lead to a nuclear war.[5](https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/#footnote_4_12634 \"This was a role historically played by the RAND corporation.\") If international coalitions will be required in order to establish [treaties regarding the use of space](https://en.wikipedia.org/wiki/Outer_Space_Treaty), then diplomacy may also become a relevant aspect of safety work. The same principles hold when it comes to AI, where coalition-building and global coordination may play an important role in the technology’s development and use.\n\n\nStrategy research has been on the rise this year. [AI Impacts](http://aiimpacts.org/) is producing strategic analyses relevant to the designers of this potentially world-changing technology, and will soon be joined by the [Strategic Artificial Intelligence Research Centre](https://www.fhi.ox.ac.uk/research/research-areas/strategic-centre-for-artificial-intelligence-policy/). The new [Leverhulme Centre for the Future of Intelligence](http://www.cam.ac.uk/research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of) will be pulling together people across many different disciplines to study the social impact of AI, forging new collaborations. The [Global Priorities Project](https://www.fhi.ox.ac.uk/research/research-areas/global-priorities-project/), meanwhile, is analyzing what types of interventions might be most effective at ensuring positive outcomes from the development of powerful AI systems.\n\n\nThe field is moving fast, and these developments are quite exciting. 
Throughout it all, though, AI alignment research in particular still seems largely under-served.\n\n\nMIRI is not the only group working on AI alignment; a handful of researchers from other organizations and institutions are also beginning to ask similar questions. MIRI’s particular approach to AI alignment research is by no means the only one available — when first thinking about how to put humans on the Moon, one might want to consider both rockets and space elevators. Regardless of who does the research or where they do it, it is important that alignment research receive attention.\n\n\nSmarter-than-human AI systems may be many decades away, and they may not closely resemble any existing software. This limits our ability to identify productive safety engineering approaches. At the same time, the [difficulty of specifying our values](https://intelligence.org/files/ValueLearningProblem.pdf) makes it difficult to identify productive research in moral theory. Alignment research has the advantage of being abstract enough to be potentially applicable to a wide variety of future computing systems, while being formalizable enough to admit of unambiguous progress. By prioritizing such work, therefore, we believe that the field of AI safety will be able to ground itself in technical work without losing sight of the most consequential questions in AI.\n\n\nSafety engineering, moral theory, strategy, and general collaboration-building are all important parts of the project of developing safe and useful AI. On the whole, these areas look poised to thrive as a result of the recent rise in interest in long-term outcomes, and I’m thrilled to see more effort and investment going towards those important tasks.\n\n\nThe question is: What do we need to invest in next? 
The type of growth that I most want to see happen in the AI community next would be growth in AI alignment research, via the formation of new groups or organizations focused primarily on AI alignment and the expansion of existing AI alignment teams at MIRI, UC Berkeley, the Future of Humanity Institute at Oxford, and other institutions.\n\n\nBefore trying to land a rocket on the Moon, it’s important that we know how we would put a cannonball into a stable orbit. Absent a good theoretical understanding of rocket alignment, it might well be possible for a civilization to eventually reach escape velocity; but getting somewhere valuable and exciting and new, and getting there reliably, is a whole extra challenge.\n\n\n\n\n---\n\n\n*My thanks to Eliezer Yudkowsky for introducing the idea behind this post, and to Lloyd Strohl III, Rob Bensinger, and others for helping review the content.*\n\n\n\n\n---\n\n1. Similarly, we could imagine a civilization that lives on the only planet in its solar system, or lives on a planet with perpetual cloud cover obscuring all objects except the Sun and Moon. Such a civilization might have an adequate understanding of terrestrial mechanics while lacking a model of celestial mechanics and lacking the knowledge that the same dynamical laws hold on Earth and in space. There would then be a gap in experts’ theoretical understanding of rocket alignment, distinct from gaps in their understanding of how to reach escape velocity.\n2. Roman Yampolskiy has used the term “AI safety engineering” to refer to the study of AI systems that can provide proofs of their safety for external verification, including some theoretical research that we would term “alignment research.” His usage differs from the usage here.\n3. 
Just as calculus is valuable both for building rockets that can reach escape velocity and for directing rockets towards specific lunar coordinates, a formal understanding of logical uncertainty might be useful both for improving AI capabilities and for improving the degree to which we can align powerful AI systems. The main motivation for studying logical uncertainty is that many other AI alignment problems are blocked on models of deductively limited reasoners, in the same way that trajectory-plotting could be blocked on models of curved paths.\n4. In either case, of course, we wouldn’t want to put a moratorium on the space program while we wait for a unified theory of quantum mechanics and general relativity. We don’t need a *perfect* understanding of gravity.\n5. This was a role historically played by the [RAND corporation](http://www.rand.org/about.html).\n\nThe post [Safety engineering, target selection, and alignment theory](https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-12-31T08:14:33Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "36faea079576d9d7d8be1372fd6c51a1", "title": "The need to scale MIRI’s methods", "url": "https://intelligence.org/2015/12/23/need-scale-miris-methods/", "source": "miri", "source_type": "blog", "text": "Andrew Critch, one of the new additions to MIRI’s research team, has taken the opportunity of MIRI’s [winter fundraiser](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) to write on his [personal blog](http://acritch.com/miri-scaling/) about why he considers MIRI’s work important. 
Some excerpts:\n\n\n\n> Since a team of [CFAR](http://rationality.org/) alumni banded together to form the [Future of Life Institute](http://futureoflife.org/) (FLI), organized an [AI safety conference](http://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/) in Puerto Rico in January of this year, co-authored the [FLI research priorities proposal](http://futureoflife.org/data/documents/research_priorities.pdf), and attracted $10MM of grant funding from Elon Musk, a lot of money has moved under the label “AI Safety” in the past year. Nick Bostrom’s *[Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742)* was also a major factor in this amazing success story.\n> \n> \n> A lot of wonderful work is being done under these grants, including a lot of proposals for solutions to known issues with AI safety, which I find extremely heartening. However, I’m worried that if MIRI doesn’t scale at least somewhat to keep pace with all this funding, it just won’t be spent nearly as well as it would have if MIRI were there to help.\n> \n> \n\n\n\n\n> We have to remember that *AI safety did not become mainstream by a spontaneous collective awakening*. It was through years of effort on the part of MIRI and collaborators at [FHI](http://www.fhi.ox.ac.uk) struggling to identify unknown unknowns about how AI might surprise us, and struggling further to learn to explain these ideas in enough technical detail that they might be adopted by mainstream research, which is finally beginning to happen.\n> \n> \n> But what about the parts we’re wrong about? What about the sub-problems we haven’t identified yet, that might end up neglected in the mainstream the same way the whole problem was neglected 5 years ago? 
I’m glad the AI/ML community is more aware of these issues now, but I want to make sure MIRI can grow fast enough to keep this growing field on track.\n> \n> \n> Now, you might think that now that other people are “on the issue”, it’ll work itself out. That might be so.\n> \n> \n> But just because some of MIRI’s *conclusions* are now being widely adopted doesn’t mean its *methodology* is. The mental movement\n> \n> \n> \n> \n> “Someone has pointed out this safety problem to me, let me try to solve it!”\n> \n> \n> \n> \n> is very different from\n> \n> \n> \n> \n> “Someone has pointed out this safety solution to me, let me try to see how it’s broken!”\n> \n> \n> \n> \n> And that second mental movement is the kind that allowed MIRI to notice AI safety problems in the first place. Cybersecurity professionals seem to carry out this movement easily: security expert Bruce Schneier calls it [the security mindset](https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/). The SANS institute calls it [red teaming](https://www.sans.org/reading-room/whitepapers/auditing/red-teaming-art-ethical-hacking-1272). Whatever you call it, AI/ML people are still more in maker-mode than breaker-mode, and are not yet, to my eye, identifying any new safety problems.\n> \n> \n> I do think that different organizations should probably try different approaches to the AI safety problem, rather than perfectly copying MIRI’s [approach](https://intelligence.org/2015/07/27/miris-approach/) and [research agenda](http://intelligence.org/technical-agenda). But I think breaker-mode/security mindset does need to be a part of every approach to AI safety. 
And if MIRI doesn’t scale up to keep pace with all this new funding, I’m worried that the world is just about to copy-paste MIRI’s best-2014-impression of what’s important in AI safety, and leave behind the self-critical methodology that *generated* these ideas in the first place… which is a serious pitfall given all the unknown unknowns left in the field.\n> \n> \n\n\nSee our [funding drive post](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) to help contribute or to learn more about our plans. For more about AI risk and security mindset, see also Luke Muehlhauser’s [post on the topic](https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/).\n\n\nThe post [The need to scale MIRI’s methods](https://intelligence.org/2015/12/23/need-scale-miris-methods/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-12-24T04:50:11Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "bad727fe56c539e42fa4150babbb4958", "title": "Jed McCaleb on Why MIRI Matters", "url": "https://intelligence.org/2015/12/15/jed-mccaleb-on-why-miri-matters/", "source": "miri", "source_type": "blog", "text": "*This is a guest post by Jed McCaleb, one of MIRI’s [top contributors](https://intelligence.org/feed/intelligence.org/topcontributors), for our [winter fundraiser](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/).*\n\n\n\n\n---\n\n\n \n\n\nA few months ago, several leaders in the scientific community signed an [open letter](http://futureoflife.org/ai-open-letter/) pushing for oversight into the research and development of artificial intelligence, in order to mitigate the risks and ensure the societal benefit of the advanced technology. Researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks [in this century](http://www.nickbostrom.com/papers/survey.pdf). 
\n\n\nSimilarly, I believe we’ll see the promise of human-level AI come to fruition much sooner than we’ve fathomed. Its effects will likely be transformational — for the better if it is used to help improve the human condition, or for the worse if it is used incorrectly.\n\n\nAs AI agents become more capable, it becomes more important to analyze and verify their decisions and goals. MIRI’s focus is on how we can create highly reliable agents that can learn human values, and on the overarching need for better decision-making processes to power these new technologies. \n\n\nThe past few years have seen a vibrant and growing AI research community. As the space continues to flourish, the need for collaboration will continue to grow as well. Organizations like MIRI that are dedicated to security and safety engineering help fill this need. And, as a nonprofit, MIRI’s research is free from profit obligations. This independence in research is important because it will lead to safer and more neutral results. \n\n\nBy supporting organizations like MIRI, we’re putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good. For humanity’s benefit, we need to guarantee that AI systems can reliably pursue goals that are aligned with society’s values. If organizations like MIRI are able to help engineer this level of technological advancement and awareness in AI systems, imagine the endless possibilities of how it can help improve our world. It’s critical that we put the infrastructure in place in order to ensure that AI will be used to make the lives of people better. 
This is why I’ve donated to MIRI, and why I believe it’s a worthy cause that you should consider as well.\n\n\n \n\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n \n\n\n \n\n\n \n\n\n \n\n\n\n\n---\n\n\n*[Jed McCaleb](http://jedmccaleb.com/) created eDonkey, one of the largest file-sharing networks of its time, as well as Mt. Gox, the first Bitcoin exchange. Recognizing that the world’s financial infrastructure is broken and that too many people are left without resources, he cofounded [Stellar](http://www.wired.com/2014/08/new-digital-currency-aims-to-unite-every-money-system-on-earth/) in 2014. Jed is also an advisor to MIRI.*\n\n\nThe post [Jed McCaleb on Why MIRI Matters](https://intelligence.org/2015/12/15/jed-mccaleb-on-why-miri-matters/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-12-15T20:11:11Z", "authors": ["Guest"], "summaries": []} -{"id": "4f0b47cb92d8ec09dc3ef65494d36792", "title": "OpenAI and other news", "url": "https://intelligence.org/2015/12/11/openai-and-other-news/", "source": "miri", "source_type": "blog", "text": "![open-ai[1]](http://intelligence.org/wp-content/uploads/2015/12/open-ai1.png)We’re only 11 days into December, and this month is shaping up to be a momentous one.\n\n\nOn December 3, the University of Cambridge partnered with the University of Oxford, Imperial College London, and UC Berkeley to launch the **[Leverhulme Centre for the Future of Intelligence](http://www.cam.ac.uk/research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of)**. The Cambridge Centre for the Study of Existential Risk (CSER) helped secure initial funding for the new independent center, in the form of a $15M grant to be disbursed over ten years. 
CSER and Leverhulme CFI plan to collaborate closely, with the latter focusing on AI’s mid- and long-term social impact.\n\n\nMeanwhile, the **Strategic Artificial Intelligence Research Centre** (SAIRC) is hiring its first research fellows in machine learning, policy analysis, and strategy research: [details](http://www.fhi.ox.ac.uk/vacancies/). SAIRC will function as an extension of two existing institutions: CSER, and the Oxford-based Future of Humanity Institute. [As Luke Muehlhauser has noted](http://lukemuehlhauser.com/if-youre-an-ai-safety-lurker-now-would-be-a-good-time-to-de-lurk/), if you’re an AI safety “lurker,” now is an ideal time to de-lurk and get in touch.\n\n\nMIRI’s research program is also growing quickly, with mathematician Scott Garrabrant joining our core team tomorrow. Our [winter fundraiser](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) is in full swing, and multiple [matching](https://dansmithholla.wordpress.com/2015/12/08/december-charity-drive/) opportunities have sprung up to bring us within a stone’s throw of our first funding target.\n\n\nThe biggest news, however, is the launch of **OpenAI**, a new $1 billion research nonprofit staffed with top-notch machine learning experts and co-chaired by Sam Altman and Elon Musk. The OpenAI team [describes their mission](https://openai.com/blog/introducing-openai/):\n\n\n\n> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.\n> \n> \n\n\nI’ve been in conversations with Sam Altman and Greg Brockman at OpenAI as their team has come together. 
They’ve expressed a keen interest in making sure that AI has a positive impact, and we’re looking forward to future collaborations between our teams. ~~I’m excited to see OpenAI joining the space, and I’m optimistic that their entrance will result in promising new AI alignment research in addition to AI capabilities research.~~\n\n\n2015 has truly been [an astounding year](https://intelligence.org/2015/07/16/an-astounding-year/) — and I’m eager to see what 2016 holds in store.\n\n\n\n\n---\n\n\n**Nov. 2021 update**: The struck sentence in this post is potentially misleading as a description of my epistemic state at the time, in two respects:\n\n\n1. My feelings about OpenAI at the time were, IIRC, some cautious optimism plus a bunch of pessimism. My sentence was written only from the optimism, in a way that was misleading about my overall state.\n\n\n2. The sentence here is unintentionally ambiguous: I intended to communicate something like “OpenAI is mainly a capabilities org, but I’m hopeful that they’ll do a good amount of alignment research too”, but I accidentally left open the false interpretation “I’m hopeful that OpenAI will do a bunch of alignment research, and I’m hopeful that OpenAI will do a bunch of capabilities research too”.\n\n\nThe post [OpenAI and other news](https://intelligence.org/2015/12/11/openai-and-other-news/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-12-12T06:50:16Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "5995f18ef4ce9558f361389bf59febfe", "title": "New paper: “Proof-producing reflection for HOL”", "url": "https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/", "source": "miri", "source_type": "blog", "text": "[![HOL](http://intelligence.org/wp-content/uploads/2015/11/hol-in-hol.png)](https://intelligence.org/files/ProofProducingReflection.pdf)MIRI Research Fellow Benya Fallenstein and Research Associate Ramana Kumar have 
co-authored a new paper on machine reflection, “[**Proof-producing reflection for HOL with an application to model polymorphism**](https://intelligence.org/files/ProofProducingReflection.pdf).”\n\n\n*HOL* stands for Higher Order Logic, here referring to a popular [family of proof assistants](http://www.di.univaq.it/monica/MFI/HOL-note.pdf) based on [Church’s type theory](http://plato.stanford.edu/entries/type-theory-church/). Kumar and collaborators have previously formalized within HOL ([specifically, HOL4](http://hol-theorem-prover.org/)) what it means for something to be provable in HOL, and what it means for something to be a model of HOL.[1](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/#footnote_0_11773 \"Kumar showed that if there is a model of set theory in HOL, there is a model of HOL in HOL. Fallenstein and Kumar additionally show that there is a model of set theory in HOL if a simpler axiom holds.\") In “[Self-formalisation of higher-order logic](https://cakeml.org/jarhol.pdf),” Kumar, Arthan, Myreen, and Owens demonstrated that if something is provable in HOL, then it is true in all models of HOL.\n\n\n“Proof-producing reflection for HOL” builds on this result by demonstrating a formal correspondence between the model of HOL within HOL (“inner HOL”) and HOL itself (“outer HOL”). Informally speaking, Fallenstein and Kumar show that one can always build an interpretation of terms in inner HOL such that they have the same meaning as terms in outer HOL. The authors then show that if statements of a certain kind are *provable* in HOL’s model of itself, they are *true* in (outer) HOL. 
This correspondence enables the authors to use HOL to implement *model polymorphism*, the approach to machine self-verification described in Section 6.3 of “[Vingean reflection: Reliable reasoning for self-improving agents](https://intelligence.org/files/VingeanReflection.pdf).”[2](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/#footnote_1_11773 \"For more on the role of logical reasoning in machine reflection, see Fallenstein’s 2013 conversation about self-modifying systems.\")\n\n\nThis project is motivated by the fact that relatively little hands-on work has been done on modeling formal verification systems in formal verification systems, and especially on modeling them in themselves. Fallenstein notes that focusing only on the mathematical theory of Vingean reflection might make us poorly calibrated about where the engineering difficulties lie for software implementations. In the course of implementing model polymorphism, Fallenstein and Kumar indeed encountered difficulties that were not obvious from past theoretical work, the most important of which arose from HOL’s [polymorphism](https://en.wikipedia.org/wiki/Polymorphism_(computer_science)).\n\n\nFallenstein and Kumar’s paper was presented at [ITP 2015](http://www.inf.kcl.ac.uk/staff/urbanc/itp-2015/) and can be found [online](https://intelligence.org/files/ProofProducingReflection.pdf) or in the associated [conference proceedings](http://link.springer.com/chapter/10.1007%2F978-3-319-22102-1_11). Thanks to a [grant by the Future of Life Institute](http://futureoflife.org/first-ai-grant-recipients/#Kumar), Kumar and Fallenstein will be continuing their collaboration on this project. Following up on “Proof-producing reflection for HOL,” Kumar and Fallenstein’s next goal will be to develop toy models of agents within HOL proof assistants that reason using model polymorphism.\n\n\n\n\n---\n\n1. 
Kumar showed that if there is a model of set theory in HOL, there is a model of HOL in HOL. Fallenstein and Kumar additionally show that there is a model of set theory in HOL if a simpler axiom holds.\n2. For more on the role of logical reasoning in machine reflection, see Fallenstein’s 2013 [conversation about self-modifying systems](https://intelligence.org/2013/08/04/benja-interview/).\n\nThe post [New paper: “Proof-producing reflection for HOL”](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-12-04T23:31:15Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "e9e9bbacfb82fd05ae3079bb544eaf76", "title": "December 2015 Newsletter", "url": "https://intelligence.org/2015/12/03/december-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* New papers: “[Formalizing Convergent Instrumental Goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/)” and “[Quantilizers: A Safer Alternative to Maximizers for Limited Optimization](https://intelligence.org/2015/11/29/new-paper-quantilizers/).” These papers have been accepted to the AAAI-16 workshop on AI, Ethics and Society.\n* New at AI Impacts: [Recently at AI Impacts](http://aiimpacts.org/recently-at-ai-impacts/)\n* New at IAFF: [A First Look at the Hard Problem of Corrigibility](https://agentfoundations.org/item?id=484); [Superrationality in Arbitrary Games](https://agentfoundations.org/item?id=507); [A Limit-Computable, Self-Reflective Distribution](https://agentfoundations.org/item?id=515); [Reflective Oracles and Superrationality: Prisoner’s Dilemma](https://agentfoundations.org/item?id=513)* [Scott Garrabrant](https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/) joins MIRI’s full-time research team this month.\n\n\n**General updates**\n* 
Our [Winter Fundraiser](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) is now live, and includes details on where we’ve been directing our research efforts in 2015, as well as our plans for 2016. The fundraiser will conclude on December 31.\n* A 2014 collaboration between MIRI and the Oxford-based Future of Humanity Institute (FHI), “[The Errors, Insights, and Lessons of Famous AI Predictions](https://intelligence.org/2014/04/30/new-paper-the-errors-insights-and-lessons-of-famous-ai-predictions/),” is being republished next week in the anthology *[Risks of Artificial Intelligence](https://www.crcpress.com/Risks-of-Artificial-Intelligence/Muller/)*. Also included will be Daniel Dewey’s important strategic analysis “[Long-Term Strategies for Ending Existential Risk from Fast Takeoff](http://www.danieldewey.net/fast-takeoff-strategies.pdf)” and articles by MIRI Research Advisors Steve Omohundro and Roman Yampolskiy.\n* We recently spent an enjoyable week in the UK comparing notes, sharing research, and trading ideas with FHI. 
During our visit, MIRI researcher Andrew Critch led a [“Big-Picture Thinking” seminar](http://globalprioritiesproject.org/2015/11/seminar-series-big-picture-thinking/) on long-term AI safety ([video](https://www.youtube.com/watch?v=-eb_rRkF_1I)).\n\n\n**News and links**\n* In collaboration with Oxford, UC Berkeley, and Imperial College London, Cambridge University is launching a new $15 million research center to study AI’s long-term impact: the [Leverhulme Centre for the Future of Intelligence](http://www.cam.ac.uk/research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of).\n* The Strategic Artificial Intelligence Research Centre, a new joint initiative between FHI and the Cambridge Centre for the Study of Existential Risk, is accepting applications for three research positions between now and January 6: research fellows in [machine learning and the control problem](https://www.recruit.ox.ac.uk/pls/hrisliverecruit/erq_jobspec_version_4.display_form?p_company=10&p_internal_external=E&p_display_in_irish=N&p_process_type=&p_applicant_no=&p_form_profile_detail=&p_display_apply_ind=Y&p_refresh_search=Y&p_recruitment_id=121242), in [policy work and emerging technology governance](https://www.recruit.ox.ac.uk/pls/hrisliverecruit/erq_jobspec_version_4.display_form?p_company=10&p_internal_external=E&p_display_in_irish=N&p_process_type=&p_applicant_no=&p_form_profile_detail=&p_display_apply_ind=Y&p_refresh_search=Y&p_recruitment_id=121241), and in [general AI strategy](https://www.recruit.ox.ac.uk/pls/hrisliverecruit/erq_jobspec_version_4.display_form?p_company=10&p_internal_external=E&p_display_in_irish=N&p_process_type=&p_applicant_no=&p_form_profile_detail=&p_display_apply_ind=Y&p_refresh_search=Y&p_recruitment_id=121168). FHI is additionally seeking a research fellow to study AI risk and ethics. 
([Full announcement.](http://www.fhi.ox.ac.uk/now-hiring-for-4-research-fellows/))\n* FHI founder Nick Bostrom makes *Foreign Policy*‘s [Top 100 Global Thinkers](http://2015globalthinkers.foreignpolicy.com/#!advocates/detail/bostrom) list.\n* Bostrom ([link](https://www.washingtonpost.com/news/in-theory/wp/2015/11/05/qa-philosopher-nick-bostrom-on-superintelligence-human-enhancement-and-existential-risk/)), IJCAI President Francesca Rossi ([link](https://www.washingtonpost.com/news/in-theory/wp/2015/11/05/how-do-you-teach-a-machine-to-be-moral/)), and Vicarious co-founder Dileep George ([link](https://www.washingtonpost.com/news/in-theory/wp/2015/11/04/killer-robots-superintelligence-lets-not-get-ahead-of-ourselves/)) weigh in on AI safety in a *Washington Post* series.\n* Future of Life Institute co-founder Viktoriya Krakovna discusses [risks from general AI without an intelligence explosion](http://futureoflife.org/2015/11/30/risks-from-general-artificial-intelligence-without-an-intelligence-explosion/).\n\n\n\n |\n\n\n\nThe post [December 2015 Newsletter](https://intelligence.org/2015/12/03/december-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-12-03T23:04:27Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "8dfc777b89bfe23c62b79bfb7d0896b3", "title": "MIRI’s 2015 Winter Fundraiser!", "url": "https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/", "source": "miri", "source_type": "blog", "text": "The Machine Intelligence Research Institute’s **2015 winter fundraising drive** begins today, December 1! 
Our current progress:\n\n\n \n\n\n\n\n---\n\n\n[![Fundraiser Progress](https://intelligence.org/wp-content/uploads/2015/12/2015-Winter-Fundraiser-Progress.png)](https://intelligence.org/wp-content/uploads/2015/12/2015-Winter-Fundraiser-Progress.png)\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n\n\n\n---\n\n\n \n\n\nThe drive will run for the month of December, and will help support MIRI’s research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.\n\n\n \n\n\n\n\n\n#### MIRI’s Research Focus\n\n\nThe field of AI has a goal of automating perception, reasoning, and decision-making — the many abilities we group under the label “[intelligence](https://intelligence.org/2013/06/19/what-is-intelligence-2/).” Most leading researchers in AI [expect](http://intelligence.org/faq#imminent) our best AI algorithms to begin strongly outperforming humans this century in most cognitive tasks. In spite of this, relatively little time and effort has gone into trying to identify the technical prerequisites for making smarter-than-human AI systems safe and useful.\n\n\nWe believe that several basic theoretical questions will need to be answered in order to make advanced AI systems stable, transparent, and error-tolerant, and in order to specify correct goals for such systems. Our [technical agenda](https://intelligence.org/technical-agenda/) describes what we think are the most important and tractable of these questions.\n\n\nSmarter-than-human AI may be 50 years or more away. 
There are a number of reasons we nonetheless consider it important to begin work on these problems today:\n* [High capability ceilings](https://intelligence.org/2015/07/24/four-background-claims/) — Humans appear to be nowhere near physical limits for cognitive ability, and even modest advantages in intelligence may yield decisive strategic advantages for AI systems.\n* [“Sorcerer’s Apprentice” scenarios](http://lukemuehlhauser.com/wiener-on-the-ai-control-problem-in-1960/) — Smarter AI systems can come up with increasingly creative ways to meet programmed goals. The harder it is to anticipate how a goal will be achieved, the harder it is to specify the correct goal.\n* [Convergent instrumental goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/) — By default, highly capable decision-makers are likely to have incentives to treat human operators adversarially.\n* [AI speedup effects](https://intelligence.org/2015/08/03/when-ai-accelerates-ai/) — Progress in AI is likely to accelerate as AI systems approach human-level proficiency in skills like software engineering.\n\n\nWe think MIRI is well-positioned to make progress on these problems for four reasons: our initial technical results have been promising (see our [publications](https://intelligence.org/all-publications/)), our methodology has a good track record of working in the past (see [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/)), we have already had a significant influence on the debate about long-run AI outcomes (see [Assessing Our Past and Potential Impact](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/)), and we have an exclusive focus on these issues (see [What Sets MIRI Apart?](https://intelligence.org/2015/08/14/what-sets-miri-apart/)). 
MIRI is currently the only organization specializing in long-term technical AI safety research, and our independence from industry and academia allows us to effectively address gaps in other institutions’ research efforts.\n\n\n\n\n\n\n\n#### General Progress This Year\n\n\nIn June, Luke Muehlhauser [left MIRI](https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/) for a research position at the Open Philanthropy Project. I replaced Luke as MIRI’s Executive Director, and I’m happy to say that the transition has gone well. We’ve split our time between technical research and academic outreach, running a [workshop series](https://intelligence.org/workshops/) aimed at introducing a wider scientific audience to our work and sponsoring a three-week [summer fellows program](http://rationality.org/miri-summer-fellows-2015/) aimed at training skills required to do groundbreaking theoretical research.\n\n\nOur fundraiser this summer was our biggest to date. We raised a total of $631,957 from 263 distinct donors, smashing our previous funding drive record by over $200,000. Medium-sized donors stepped up their game to help us hit [our first two funding targets](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/): many more donors gave between $5,000 and $50,000 than in past fundraisers. Our successful fundraisers, workshops, and fellows program have allowed us to ramp up our growth substantially, and have already led directly to several new researcher hires.\n\n\n2015 has been [an astounding year](https://intelligence.org/2015/07/16/an-astounding-year/) for AI safety engineering. In January, the Future of Life Institute brought together the leading organizations studying long-term AI risk and top AI researchers in academia and industry for a “[Future of AI](http://futureoflife.org/AI/ai_conference)” conference in San Juan, Puerto Rico. 
Out of this conference came a widely endorsed [open letter](http://futureoflife.org/AI/open_letter), accompanied by a [research priorities document](http://futureoflife.org/static/data/documents/research_priorities.pdf) drawing heavily on MIRI’s work. Two prominent AI scientists who helped organize the event, Stuart Russell and Bart Selman, have since become MIRI research advisors (in June and July, respectively). The conference also resulted in an [AI safety grants program](http://futureoflife.org/first-ai-grant-recipients/), with MIRI receiving some of the largest grants.\n\n\nIn addition to the FLI conference, we’ve spoken this year at AAAI-15, AGI-15, LORI 2015, EA Global, the [American Physical Society](https://intelligence.org/2015/03/09/fallenstein-talk-aps-march-meeting-2015/), and the leading U.S. science and technology think tank, [ITIF](https://www.youtube.com/watch?v=fWBBe13rAPU). We also [co-organized a decision theory conference](https://intelligence.org/2015/05/24/miri-related-talks-from-the-decision-theory-conference-at-cambridge-university/) at Cambridge University and ran a ten-week [seminar series](http://intelligence.org/pdtai) at UC Berkeley.\n\n\nThree new full-time research fellows have joined our [team](https://intelligence.org/team) this year: Patrick LaVictoire in March, Jessica Taylor in August, and Andrew Critch in September. Scott Garrabrant will become our newest research fellow this month, after having made major contributions as a workshop attendee and research associate.\n\n\nMeanwhile, our two new research interns, Kaya Stechly and Rafael Cosman, have been going through old results and consolidating and polishing material into new papers; and three of our new research associates, Vanessa Kosoy, Abram Demski, and Tsvi Benson-Tilsen, have been producing a string of promising results on our research forum. 
Another intern, Jack Gallagher, contributed to our [type theory project](https://github.com/machine-intelligence/tt-provability) over the summer.\n\n\nTo accommodate our growing team, we’ve recently hired a new office manager, Andrew Lapinski-Barker, and will be moving into a larger office space this month. On the whole, I’m very pleased with our new academic collaborations, outreach efforts, and growth.\n\n\n\n\n\n\n\n\n\n#### Research Progress This Year\n\n\nAs our research projects and collaborations have multiplied, we’ve made more use of online mechanisms for quick communication and feedback between researchers. In March, we launched the [Intelligent Agent Foundations Forum](http://agentfoundations.org/), a discussion forum for AI alignment research. Many of our subsequent publications have been developed from material on the forum, beginning with Patrick LaVictoire’s “[An introduction to Löb’s theorem in MIRI’s research](https://intelligence.org/2015/03/18/new-report-introduction-lobs-theorem-miri-research/).”\n\n\nWe have also produced a number of new papers in 2015 and, most importantly, arrived at new research insights.\n\n\n [Read More](https://intelligence.org/feed/?paged=33#collapseThree)\n\n\n\n\n\nIn July, we revised our [primary technical agenda paper](https://intelligence.org/files/TechnicalAgenda.pdf) for 2016 publication. 
Our other new publications and results can be categorized by their place in the research agenda:\n\n\nWe’ve been exploring new approaches to the problems of **naturalized induction** and **logical uncertainty**, with early results published in various venues, including Fallenstein et al.’s “[Reflective oracles](https://intelligence.org/2015/04/28/new-papers-reflective/)” (presented [in abridged form](https://intelligence.org/files/ReflectiveOraclesAI.pdf) at LORI 2015) and “[Reflective variants of Solomonoff induction and AIXI](https://intelligence.org/2015/04/28/new-papers-reflective/)” (presented at AGI-15), and Garrabrant et al.’s “[Asymptotic logical uncertainty and the Benford test](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/)” (available on arXiv). We also published the overview papers “[Formalizing two problems of realistic world-models](https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/)” and “[Questions of reasoning under logical uncertainty](https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/).”\n\n\nIn **decision theory**, Patrick LaVictoire and others have developed new results pertaining to bargaining and division of trade gains, using the proof-based decision theory framework ([example](https://agentfoundations.org/item?id=195)). Meanwhile, the team has been developing a better understanding of the strengths and limitations of different approaches to decision theory, an effort spearheaded by Eliezer Yudkowsky, Benya Fallenstein, and me, culminating in some insights that will appear in a paper next year. Andrew Critch has proved some promising results about bounded versions of proof-based decision-makers, which will also appear in an upcoming paper. 
Additionally, we presented a [shortened version](https://intelligence.org/2015/05/29/two-papers-accepted-to-agi-15/) of our overview paper at AGI-15.\n\n\nIn **Vingean reflection**, Benya Fallenstein and Research Associate Ramana Kumar collaborated on “[Proof-producing reflection for HOL](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/)” (presented at ITP 2015) and have been working on an [FLI-funded](http://futureoflife.org/first-ai-grant-recipients/#Kumar) implementation of reflective reasoning in the HOL theorem prover. Separately, the reflective oracle framework has helped us gain a better understanding of what kinds of reflection are and are not possible, yielding some [nice technical results](https://agentfoundations.org/item?id=515) and a few insights that seem promising. We also published the overview paper “[Vingean reflection](https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/).”\n\n\nJessica Taylor, Benya Fallenstein, and Eliezer Yudkowsky have focused on **error tolerance** on and off throughout the year. 
We released Taylor’s “[Quantilizers](https://intelligence.org/2015/11/29/new-paper-quantilizers/)” (accepted to a workshop at AAAI-16) and presented the paper “[Corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/)” at an AAAI-15 workshop.\n\n\nIn **value specification**, we published the AAAI-15 workshop paper “[Concept learning for safe autonomous AI](http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10131)” and the overview paper “[The value learning problem](https://intelligence.org/2015/01/29/new-report-value-learning-problem/).” With support from an [FLI grant](http://futureoflife.org/first-ai-grant-recipients/#Evans), Jessica Taylor is working on better formalizing subproblems in this area, and has recently begun writing up her thoughts on this subject [on the research forum](https://agentfoundations.org/item?id=538).\n\n\nLastly, in **forecasting** and **strategy**, we published “[Formalizing convergent instrumental goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/)” (accepted to an AAAI-16 workshop) and two historical case studies: “[The Asilomar Conference](https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/)” and “[Leó Szilárd and the danger of nuclear weapons](https://intelligence.org/2015/10/07/new-report-leo-szilard-and-the-danger-of-nuclear-weapons/).” Many other strategic analyses have been posted to the recently revamped [AI Impacts](http://aiimpacts.org/) site, where Katja Grace has been publishing research about patterns in technological development.\n\n\n\n\n\n\n\n\n\n#### Fundraiser Targets and Future Plans\n\n\nLike our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. 
Our successful summer fundraiser has helped determine how ambitious we’re making our plans; although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a budget of roughly **$1,825,000 per year**.\n\n\nOf this, about $100,000 is being paid for in 2016 through FLI grants, funded by Elon Musk and the Open Philanthropy Project. The rest depends on our fundraising and grant-writing success. We have a twelve-month runway as of January 1, which we would ideally like to extend.\n\n\nTaking all of this into account, our winter funding targets are:\n\n\n\n\n\n\n---\n\n\nTarget 1 — **$150k: Holding steady.** At this level, we would have enough funds to maintain our runway in early 2016 while continuing all current operations, including running workshops, writing papers, and attending conferences.\n\n\n\n\n---\n\n\nTarget 2 — **$450k: Maintaining MIRI’s growth rate.** At this funding level, we would be much more confident that our new growth plans are sustainable, and we would be able to devote more attention to academic outreach. We would be able to spend less staff time on fundraising in the coming year, and might skip our summer fundraiser.\n\n\n\n\n---\n\n\nTarget 3 — **$1M: Bigger plans, faster growth.** At this level, we would be able to substantially increase our recruiting efforts and take on new research projects. It would be evident that our donors’ support is stronger than we thought, and we would move to scale up our plans and growth rate accordingly.\n\n\n\n\n---\n\n\nTarget 4 — **$6M: A new MIRI.** At this point, MIRI would become a qualitatively different organization. 
With this level of funding, we would be able to diversify our research initiatives and begin branching out from our current agenda into alternative angles of attack on the AI alignment problem.\n\n\n\n\n---\n\n\n\n\n [Read More](https://intelligence.org/feed/?paged=33#collapseFour)\n\n\n\n\n\nOur projected spending over the next twelve months, excluding earmarked funds for the independent [AI Impacts](http://aiimpacts.org) project, breaks down as follows: \n![](http://intelligence.org/wp-content/uploads/2015/12/Chart1-1024x683.png) \n\nOur largest cost ($700,000) is in wages and benefits for existing research staff and contracted researchers, including research associates. Our current priority is to further expand the team. We expect to spend an additional $150,000 on salaries and benefits for new research staff in 2016, but that number could go up or down significantly depending on when new research fellows begin work:\n\n\n* Mihály Bárász, who was originally [slated to begin](https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/) in November 2015, has delayed his start date due to unexpected personal circumstances. He plans to join the team in 2016.\n* We are recruiting a specialist for our [type theory in type theory](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#agda) project, which is aimed at developing simple programmatic models of reflective reasoners. Interest in this topic [has been increasing recently](http://compilers.cs.ucla.edu/popl16/), which is exciting; but the basic tools needed for our work are still missing. 
If you have programmer or mathematician friends who are interested in dependently typed programming languages and MIRI’s work, you can send them our [application form](https://machineintelligence.typeform.com/to/fot777).\n* We are considering several other possible additions to the research team.\n\n\nMuch of the rest of our budget goes into fixed costs that will not need to grow much as we expand the research team. This includes $475,000 for administrator wages and benefits and $250,000 for costs of doing business. Our main cost of doing business is renting office space (slightly over $100,000).\n\n\nNote that the boundaries between these categories are sometimes fuzzy. For example, my salary is included in the admin staff category, despite the fact that I spend some of my time on technical research (and hope to increase that amount in 2016).\n\n\nOur remaining budget goes into organizing or sponsoring research events, such as fellows programs, [MIRIx events](https://intelligence.org/mirix/), or workshops ($250,000). Some activities (e.g., traveling to conferences) are aimed at sharing our work with the larger academic community. Others, such as researcher retreats, are focused on solving open problems in our research agenda. After experimenting with different types of research staff retreat in 2015, we’re beginning to settle on a model that works well, and we’ll be running a number of retreats throughout 2016.\n\n\n\n\n\n\n \n\n\nIn past years, we’ve generally raised $1M per year, and spent a similar amount. Thanks to substantial recent increases in donor support, however, we’re in a position to scale up significantly.\n\n\nOur donors blew us away with their support in our last fundraiser. 
If we can continue our fundraising and grant successes, we’ll be able to sustain our new budget and act on the unique opportunities outlined in [Why Now Matters](https://intelligence.org/2015/07/20/why-now-matters/), helping set the agenda and build the formal tools for the young field of AI safety engineering. And if our donors keep stepping up their game, we believe we have the capacity to scale up our program even faster. We’re thrilled at this prospect, and we’re enormously grateful for your support.\n\n\n \n\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n \n\n\nThe post [MIRI’s 2015 Winter Fundraiser!](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-12-02T00:24:15Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "1ae68a05ed6dfcb05ce23571744389f2", "title": "New paper: “Quantilizers”", "url": "https://intelligence.org/2015/11/29/new-paper-quantilizers/", "source": "miri", "source_type": "blog", "text": "[![quantilizers](http://intelligence.org/wp-content/uploads/2015/11/quantilizers.png)](https://intelligence.org/files/QuantilizersSaferAlternative.pdf)MIRI Research Fellow Jessica Taylor has written a new paper on an error-tolerant framework for software agents, “**[Quantilizers: A safer alternative to maximizers for limited optimization](https://intelligence.org/files/QuantilizersSaferAlternative.pdf)**.” Taylor’s paper will be presented at the AAAI-16 [AI, Ethics and Society](https://www.aaai.org/Workshops/ws16workshops.php#ws02) workshop. The abstract reads:\n\n\n\n> In the field of AI, *expected utility maximizers* are commonly used as a model for idealized agents. 
However, expected utility maximization can lead to unintended solutions when the utility function does not quantify everything the operators care about: imagine, for example, an expected utility maximizer tasked with winning money on the stock market, which has no regard for whether it accidentally causes a market crash. Once AI systems become sufficiently intelligent and powerful, these unintended solutions could become quite dangerous. In this paper, we describe an alternative to expected utility maximization for powerful AI systems, which we call *expected utility quantilization*. This could allow the construction of AI systems that do not necessarily fall into strange and unanticipated shortcuts and edge cases in pursuit of their goals.\n> \n> \n\n\nExpected utility quantilization is the approach of selecting a random action in the top *n*% of actions from some distribution γ, sorted by expected utility. The distribution γ might, for example, be a set of actions weighted by how likely a human is to perform them. A quantilizer based on such a distribution would behave like a compromise between a human and an expected utility maximizer. The agent’s utility function directs it toward intuitively desirable outcomes in novel ways, making it potentially more useful than a digitized human; while γ directs it toward safer and more predictable strategies.\n\n\nQuantilization is a formalization of the idea of “[satisficing](https://en.wikipedia.org/wiki/Satisficing),” or selecting actions that achieve some minimal threshold of expected utility. Agents that try to pick good strategies, but not *maximally* good ones, seem less likely to come up with extraordinary and unconventional strategies, thereby reducing both the benefits and the risks of smarter-than-human AI systems. 
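The selection rule just described can be sketched concretely. The following toy implementation is our own illustration, not code from the paper; the names (`quantilize`, `gamma`, `expected_utility`) are assumptions made for the sketch. It ranks a finite action set by expected utility, keeps actions until the top q fraction of γ’s probability mass is covered, and samples from γ renormalized over that slice:

```python
import random

def quantilize(actions, gamma, expected_utility, q=0.1, rng=random):
    """Sample an action from gamma, conditioned on the action lying in
    the top-q fraction of gamma's probability mass when actions are
    ranked by expected utility. Illustrative sketch only.

    actions: list of hashable actions
    gamma: dict mapping action -> base-distribution probability
    expected_utility: callable mapping action -> float
    """
    ranked = sorted(actions, key=expected_utility, reverse=True)
    top, mass = [], 0.0
    for a in ranked:
        top.append(a)
        mass += gamma[a]
        if mass >= q:
            break
    # Renormalize gamma over the retained top slice and sample from it.
    weights = [gamma[a] / mass for a in top]
    return rng.choices(top, weights=weights, k=1)[0]
```

Sampling in proportion to γ within the top slice, rather than maximizing over it, is what keeps the agent’s behavior anchored to the base distribution.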
Designing AI systems to satisfice looks especially useful for averting harmful [convergent instrumental goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/) and [perverse instantiations](http://lesswrong.com/lw/l9t/superintelligence_12_malignant_failure_modes/) of terminal goals:\n\n\n* If we design an AI system to cure cancer, and γ labels it bizarre to reduce cancer rates by increasing the rate of some other terminal illness, then a quantilizer will be less likely to adopt this perverse strategy even if our imperfect specification of the system’s goals gave this strategy high expected utility.\n* If superintelligent AI systems have a default incentive to seize control of resources, but γ labels these policies bizarre, then a quantilizer will be less likely to converge on these strategies.\n\n\nTaylor notes that the quantilizing approach to satisficing may even allow us to disproportionately reap the benefits of maximization without incurring proportional costs, by specifying some restricted domain in which the quantilizer has low impact without requiring that it have low impact overall — “targeted-impact” quantilization.\n\n\nOne obvious objection to the idea of satisficing is that a satisficing agent might *build* an expected utility maximizer. Maximizing, after all, can be an extremely effective way to satisfice. Quantilization can potentially avoid this objection: maximizing and quantilizing may both be good ways to satisfice, but maximizing is not necessarily an effective way to quantilize. 
A quantilizer that deems the act of delegating to a maximizer “bizarre” will avoid delegating its decisions to an agent even if that agent would maximize the quantilizer’s expected utility.\n\n\nTaylor shows that the cost of relying on a 0.1-quantilizer (which selects a random action from the top 10% of actions), on expectation, is no more than 10 times that of relying on the recommendation of its distribution γ; the expected cost of relying on a 0.01-quantilizer (which selects from the top 1% of actions) is no more than 100 times that of relying on γ; and so on. Quantilization is optimal among the set of strategies that are low-cost in this respect.\n\n\nHowever, expected utility quantilization is not a magic bullet. It depends strongly on how we specify the action distribution γ, and Taylor shows that ordinary quantilizers behave poorly in repeated games and in scenarios where “ordinary” actions in γ tend to have very high or very low expected utility. Further investigation is needed to determine if quantilizers (or some variant on quantilizers) can remedy these problems.\n\n\nThe post [New paper: “Quantilizers”](https://intelligence.org/2015/11/29/new-paper-quantilizers/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-11-30T04:10:00Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "c281813eed6bb5833aa73e3efcdc4a78", "title": "New paper: “Formalizing convergent instrumental goals”", "url": "https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/", "source": "miri", "source_type": "blog", "text": 
"[![convergent](http://intelligence.org/wp-content/uploads/2015/11/convergent.png)](https://intelligence.org/files/FormalizingConvergentGoals.pdf)Tsvi Benson-Tilsen, a MIRI associate and UC Berkeley PhD candidate, has written a paper with contributions from MIRI Executive Director Nate Soares on strategies that will tend to be useful for most possible ends: “[**Formalizing convergent instrumental goals**](https://intelligence.org/files/FormalizingConvergentGoals.pdf).” The paper will be presented as a poster at the AAAI-16 [AI, Ethics and Society](http://www.aaai.org/Workshops/ws16workshops.php#ws02) workshop.\n\n\nSteve Omohundro has argued that AI agents with almost any goal will converge upon a set of “basic drives,” such as resource acquisition, that tend to increase agents’ general influence and freedom of action. This idea, which Nick Bostrom calls the *[instrumental convergence thesis](http://www.nickbostrom.com/superintelligentwill.pdf)*, has important implications for future progress in AI. It suggests that highly capable decision-making systems may pose critical risks even if they are not programmed with any antisocial goals. Merely by being indifferent to human operators’ goals, such systems can have incentives to manipulate, exploit, or compete with operators.\n\n\nThe new paper serves to add precision to Omohundro and Bostrom’s arguments, while testing the arguments’ applicability in simple settings. Benson-Tilsen and Soares write:\n\n\n\n> In this paper, we will argue that under a very general set of assumptions, intelligent rational agents will tend to seize all available resources. We do this using a model, described in section 4, that considers an agent taking a sequence of actions which require and potentially produce resources. […] The theorems proved in section 4 are not mathematically difficult, and for those who find Omohundro’s arguments intuitively obvious, our theorems, too, will seem trivial. 
This model is not intended to be surprising; rather, the goal is to give a formal notion of “instrumentally convergent goals,” and to demonstrate that this notion captures relevant aspects of Omohundro’s intuitions.\n> \n> \n> Our model predicts that intelligent rational agents will engage in trade and cooperation, but only so long as the gains from trading and cooperating are higher than the gains available to the agent by taking those resources by force or other means. This model further predicts that agents will not in fact “leave humans alone” unless their utility function places intrinsic utility on the state of human-occupied regions: absent such a utility function, this model shows that powerful agents will have incentives to reshape the space that humans occupy.\n> \n> \n\n\nBenson-Tilsen and Soares define a universe divided into regions that may change in different ways depending on an agent’s actions. The agent wants to make certain regions enter certain states, and may collect resources from regions to that end. This model can illustrate the idea that highly capable agents nearly always attempt to extract resources from regions they are indifferent to, provided the usefulness of the resources outweighs the extraction cost.\n\n\nThe relevant models are simple, and make few assumptions about the particular architecture of advanced AI systems. This makes it possible to draw some general conclusions about useful lines of safety research even if we’re largely in the dark about how or when highly advanced decision-making systems will be developed. 
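The resource-seizure condition at the heart of the model can be illustrated with a toy decision rule (our own sketch, not the paper’s formalism; all names are hypothetical): an agent extracts a region’s resources exactly when their instrumental value, net of extraction cost, exceeds whatever intrinsic utility the agent places on leaving the region untouched; for a region the agent is indifferent to, that intrinsic term is zero.

```python
def extraction_plan(regions):
    """Toy sketch of the resource-seizure condition.

    Each region is a dict with:
      'value'     - instrumental value of the region's resources
      'cost'      - cost of extracting those resources
      'intrinsic' - intrinsic utility of leaving the region untouched
                    (0.0 if the agent is indifferent to the region)
    Returns the indices of regions the agent chooses to extract from.
    """
    return {i for i, r in enumerate(regions)
            if r['value'] - r['cost'] > r['intrinsic']}

# The agent strips region 0, to which it is indifferent; only the
# intrinsic utility placed on region 1 protects it; and region 2's
# resources are simply not worth the extraction cost.
regions = [
    {'value': 5.0, 'cost': 1.0, 'intrinsic': 0.0},
    {'value': 5.0, 'cost': 1.0, 'intrinsic': 10.0},
    {'value': 1.0, 'cost': 2.0, 'intrinsic': 0.0},
]
```

In this sketch, only the intrinsic term protects a region, mirroring the paper’s point that agents will not “leave humans alone” unless their utility function places intrinsic utility on the state of human-occupied regions.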
The most obvious way to avoid harmful goals is to incorporate human values into AI systems’ utility functions, a project outlined in “[The value learning problem](https://intelligence.org/files/ValueLearningProblem.pdf).” Alternatively (or as a supplementary measure), we can attempt to specify highly capable agents that violate Benson-Tilsen and Soares’ assumptions, avoiding dangerous behavior in spite of lacking correct goals. This approach is explored in the paper “[Corrigibility](https://intelligence.org/files/Corrigibility.pdf).”\n\n\nThe post [New paper: “Formalizing convergent instrumental goals”](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-11-26T18:48:08Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "d3e24d899eb2da0fc46ead5b1f4602e5", "title": "November 2015 Newsletter", "url": "https://intelligence.org/2015/11/03/november-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* A new paper: [Leó Szilárd and the Danger of Nuclear Weapons](https://intelligence.org/2015/10/07/new-report-leo-szilard-and-the-danger-of-nuclear-weapons/)\n* New at IAFF: [Subsequence Induction](https://agentfoundations.org/item?id=482)\n* A shortened version of the [Reflective Oracles](http://arxiv.org/abs/1508.04145) paper has been published in the [LORI 2015 conference proceedings](http://link.springer.com/chapter/10.1007%2F978-3-662-48561-3_34).\n\n\n**General updates**\n* Castify has released professionally recorded audio versions of Eliezer Yudkowsky’s *Rationality: From AI to Zombies*: [Part 
1](http://castify.co/channels/53-rationality-from-ai-to-zombies-volume-1), [Part 2](http://castify.co/channels/55-rationality-from-ai-to-zombies-volume-2), [Part 3](http://castify.co/channels/56-rationality-from-ai-to-zombies-volume-3).\n* I’ve [put together a list of excerpts](https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/) from the many responses to the 2015 Edge.org question, “What Do You Think About Machines That Think?”\n\n\n**News and links**\n* Nick Bostrom [speaks on AI risk](https://www.youtube.com/watch?v=LKClXkln3sQ) at the United Nations. ([Further information.](http://www.oxfordmartin.ox.ac.uk/news/201510_Bostrom_CRBN_UN))\n* Bostrom gives a [half-hour BBC interview](http://www.bbc.co.uk/iplayer/episode/b06fdffy/hardtalk-nick-bostrom-director-of-the-future-of-humanity-institute). (UK-only video.)\n* Elon Musk and Sam Altman [discuss futurism and technology](https://www.youtube.com/watch?v=SqEo107j-uw) with *Vanity Fair*.\n* From the Open Philanthropy Project: [What do we know about AI timelines?](http://www.givewell.org/labs/causes/ai-risk/ai-timelines)\n* From the Global Priorities Project: [Three areas of research on the superintelligence control problem](http://globalprioritiesproject.org/2015/10/three-areas-of-research-on-the-superintelligence-control-problem/).\n* Paul Christiano writes on [inverse reinforcement learning and value of information](https://medium.com/ai-control/irl-and-voi-a7e3d97d27c9).\n* The Centre for the Study of Existential Risk is looking to hire [four post-docs](http://cser.org/vacancies/) to study technological risk. 
The application deadline is November 12th.\n\n\n\n |\n\n\n\nThe post [November 2015 Newsletter](https://intelligence.org/2015/11/03/november-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-11-04T02:33:11Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "8fe46af9557d9bdcad1bbd3b0b5b4d1e", "title": "Edge.org contributors discuss the future of AI", "url": "https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/", "source": "miri", "source_type": "blog", "text": "[![](http://intelligence.org/wp-content/uploads/2015/10/edge-brockman.png)](http://www.amazon.com/What-Think-About-Machines-That/dp/006242565X)In January, nearly 200 public intellectuals submitted essays in response to the 2015 Edge.org question, “What Do You Think About Machines That Think?” ([available online](https://edge.org/responses/what-do-you-think-about-machines-that-think)). The essay prompt began:\n\n\n\n> In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can “really” think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These “AIs”, if they achieve “Superintelligence” (Nick Bostrom), could pose “existential risks” that lead to “*Our Final Hour*” (Martin Rees). And Stephen Hawking recently made international headlines when he noted “The development of full artificial intelligence could spell the end of the human race.”\n> \n> \n> But wait! Should we also ask what machines that think, or, “AIs”, might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is “their” society “our” society? 
Will we, and the AIs, include each other within our respective circles of empathy?\n> \n> \n\n\nThe essays are now out [in book form](http://www.amazon.com/What-Think-About-Machines-That/dp/006242565X), and serve as a good quick-and-dirty tour of common ideas about smarter-than-human AI. The submissions, however, add up to 541 pages in book form, and [MIRI’s focus on *de novo* AI](https://intelligence.org/2015/07/27/miris-approach/) makes us especially interested in the views of computer professionals. To make it easier to dive into the collection, I’ve collected a shorter list of links — the 32 argumentative essays written by computer scientists and software engineers.[1](https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/#footnote_0_12111 \"The exclusion of other groups from this list shouldn’t be taken to imply that this group is uniquely qualified to make predictions about AI. Psychology and neuroscience are highly relevant to this debate, as are disciplines that inform theoretical upper bounds on cognitive ability (e.g., mathematics and physics) and disciplines that investigate how technology is developed and used (e.g., economics and sociology).\") The resultant list includes three MIRI advisors (Omohundro, Russell, Tallinn) and one MIRI researcher (Yudkowsky).\n\n\nI’ve excerpted passages from each of the essays below, focusing on discussions of AI motivations and outcomes. None of the excerpts is intended to distill the content of the entire essay, so you’re encouraged to read the full essay if an excerpt interests you.\n\n\n\n\n\n---\n\n\n\n[Anderson, Ross](https://en.wikipedia.org/wiki/Ross_J._Anderson). 
“[**He Who Pays the AI Calls the Tune**](https://edge.org/response-detail/26069).”[2](https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/#footnote_1_12111 \"The titles listed follow the book versions, and differ from the titles of the online essays.\")\n\n\n\n> The coming shock isn’t from machines that think, but machines that use AI to augment our perception. […]\n> \n> \n> What’s changing as computers become embedded invisibly everywhere is that we all now leave a digital trail that can be analysed by AI systems. The Cambridge psychologist Michael Kosinski has shown that your race, intelligence, and sexual orientation can be deduced fairly quickly from your behavior on social networks: On average, it takes only four Facebook “likes” to tell whether you’re straight or gay. So whereas in the past gay men could choose whether or not to wear their *Out and Proud* T-shirt, you just have no idea what you’re wearing anymore. And as AI gets better, you’re mostly wearing your true colors.\n> \n> \n\n\n\n\n---\n\n\nBach, Joscha. “[**Every Society Gets the AI It Deserves**](http://edge.org/response-detail/26205).”\n\n\n\n> Unlike biological systems, technology scales. The speed of the fastest birds did not turn out to be a limit to airplanes, and artificial minds will be faster, more accurate, more alert, more aware and comprehensive than their human counterparts. AI is going to replace human decision makers, administrators, inventors, engineers, scientists, military strategists, designers, advertisers and of course AI programmers. At this point, Artificial Intelligences can become self-perfecting, and radically outperform human minds in every respect. I do not think that this is going to happen in an instant (in which case it only matters who has got the first one). Before we have generally intelligent, self-perfecting AI, we will see many variants of task specific, non-general AI, to which we can adapt. 
Obviously, that is already happening.\n> \n> \n> When generally intelligent machines become feasible, implementing them will be relatively cheap, and every large corporation, every government and every large organisation will find itself forced to build and use them, or be threatened with extinction.\n> \n> \n> What will happen when AIs take on a mind of their own? Intelligence is a toolbox to reach a given goal, but strictly speaking, it does not entail motives and goals by itself. Human desires for self-preservation, power and experience are not the result of human intelligence, but of a primate evolution, transported into an age of stimulus amplification, mass-interaction, symbolic gratification and narrative overload. The motives of our artificial minds are (at least initially) going to be those of the organisations, corporations, groups and individuals that make use of their intelligence.\n> \n> \n\n\n\n\n---\n\n\n[Bongard, Joshua](https://en.wikipedia.org/wiki/Josh_Bongard). “[**Manipulators and Manipulanda**](https://edge.org/response-detail/26106).”\n\n\n\n> Personally, I find the ethical side of thinking machines straightforward: Their danger will correlate exactly with how much leeway we give them in fulfilling the goals we set for them. Machines told to “detect and pull broken widgets from the conveyer belt the best way possible” will be extremely useful, intellectually uninteresting, and will likely destroy more jobs than they will create. Machines instructed to “educate this recently displaced worker (or young person) the best way possible” will create jobs and possibly inspire the next generation. Machines commanded to “survive, reproduce, and improve the best way possible” will give us the most insight into all of the different ways in which entities may think, but will probably give us humans a very short window of time in which to do so. AI researchers and roboticists will, sooner or later, discover how to create all three of these species. 
Which ones we wish to call into being is up to us all.\n> \n> \n\n\n\n\n---\n\n\n[Brooks, Rodney A.](https://en.wikipedia.org/wiki/Rodney_Brooks) “[**Mistaking Performance for Competence**](https://edge.org/response-detail/26057).”\n\n\n\n> Now consider deep learning that has caught people’s imaginations over the last year or so. […] The new versions rely on massive amounts of computer power in server farms, and on very large data sets that did not formerly exist, but critically, they also rely on new scientific innovations.\n> \n> \n> A well-known particular example of their performance is labeling an image, in English, saying that it is a baby with a stuffed toy. When a person looks at the image that is what they also see. The algorithm has performed very well at labeling the image, and it has performed much better than AI practitioners would have predicted for 2014 performance only five years ago. But the algorithm does not have the full competence that a person who could label that same image would have. […]\n> \n> \n> Work is underway to add focus of attention and handling of consistent spatial structure to deep learning. That is the hard work of science and research, and we really have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a fatal dead end. It took thirty years to go from backpropagation to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it would not have been surprising if they had been right, as we knew all along that the backpropagation algorithm is not what happens inside people’s heads.\n> \n> \n> The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded. Misled by suitcase words, people are making category errors in fungibility of capabilities. 
These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.\n> \n> \n\n\n\n\n---\n\n\n[Christian, Brian](https://en.wikipedia.org/wiki/Brian_Christian). “[**Sorry to Bother You**](https://edge.org/response-detail/26212).”\n\n\n\n> When we stop someone to ask for directions, there is usually an explicit or implicit, “I’m sorry to bring you down to the level of Google temporarily, but my phone is dead, see, and I require a fact.” It’s a breach of etiquette, on a spectrum with asking someone to temporarily serve as a paperweight, or a shelf. […]\n> \n> \n> As things stand in the present, there are still a few arenas in which only a human brain will do the trick, in which the relevant information and experience lives only in humans’ brains, and so we have no choice but to trouble those brains when we want something. “How do those latest figures look to you?” “Do you think Smith is bluffing?” “Will Kate like this necklace?” “Does this make me look fat?” “What are the odds?”\n> \n> \n> These types of questions may well offend in the twenty-second century. They only require a mind—*any* mind will do, and so we reach for the nearest one.\n> \n> \n\n\n\n\n---\n\n\nDietterich, Thomas G. “[**How to Prevent an Intelligence Explosion**](https://edge.org/response-detail/26235).”\n\n\n\n> Creating an intelligence explosion requires the recursive execution of four steps. First, a system must have the ability to conduct experiments on the world. […]\n> \n> \n> Second, these experiments must discover new simplifying structures that can be exploited to side-step the computational intractability of reasoning. […]\n> \n> \n> Third, a system must be able to design and implement new computing mechanisms and new algorithms. 
[…]\n> \n> \n> Fourth, a system must be able to grant autonomy and resources to these new computing mechanisms so that they can recursively perform experiments, discover new structures, develop new computing methods, and produce even more powerful “offspring.” I know of no system that has done this.\n> \n> \n> The first three steps pose no danger of an intelligence chain reaction. It is the fourth step—reproduction with autonomy—that is dangerous. Of course, virtually all “offspring” in step four will fail, just as virtually all new devices and new software do not work the first time. But with sufficient iteration or, equivalently, sufficient reproduction with variation, we cannot rule out the possibility of an intelligence explosion. […]\n> \n> \n> I think we must focus on Step 4. We must limit the resources that an automated design and implementation system can give to the devices that it designs. Some have argued that this is hard, because a “devious” system could persuade people to give it more resources. But while such scenarios make for great science fiction, in practice it is easy to limit the resources that a new system is permitted to use. Engineers do this every day when they test new devices and new algorithms.\n> \n> \n\n\n\n\n---\n\n\n[Draves, Scott](https://en.wikipedia.org/wiki/Scott_Draves). “[**I See a Symbiosis Developing**](https://edge.org/response-detail/26219).”\n\n\n\n> A lot of ink has been spilled over the coming conflict between human and computer, be it economic doom with jobs lost to automation, or military dystopia teeming with drones. Instead, I see a symbiosis developing. And historically when a new stage of evolution appeared, like eukaryotic cells, or multicellular organisms, or brains, the old system stayed on and the new system was built to work with it, not in place of it.\n> \n> \n> This is cause for great optimism. 
If digital computers are an alternative substrate for thinking and consciousness, and digital technology is growing exponentially, then we face an explosion of thinking and awareness.\n> \n> \n\n\n\n\n---\n\n\n[Gelernter, David](https://en.wikipedia.org/wiki/David_Gelernter). “[**Why Can’t ‘Being’ or ‘Happiness’ Be Computed?**](https://edge.org/response-detail/26172)”\n\n\n\n> Happiness is not computable because, being the state of a physical object, it is outside the universe of computation. Computers and software do not create or manipulate physical stuff. (They can cause other, attached machines to do that, but what those attached machines do is not the accomplishment of computers. Robots can fly but computers can’t. Nor is any computer-controlled device guaranteed to make people happy; but that’s another story.) […] Computers and the mind live in different universes, like pumpkins and Puccini, and are hard to compare whatever one intends to show.\n> \n> \n\n\n\n\n---\n\n\n[Gershenfeld, Neil](https://en.wikipedia.org/wiki/Neil_Gershenfeld). “[**Really Good Hacks**](https://edge.org/response-detail/26101).”\n\n\n\n> Disruptive technologies start as exponentials, which means the first doublings can appear inconsequential because the total numbers are small. Then there appears to be a revolution when the exponential explodes, along with exaggerated claims and warnings to match, but it’s a straight extrapolation of what’s been apparent on a log plot. That’s around when growth limits usually kick in, the exponential crosses over to a sigmoid, and the extreme hopes and fears disappear.\n> \n> \n> That’s what we’re now living through with AI. The size of common-sense databases that can be searched, or the number of inference layers that can be trained, or the dimension of feature vectors that can be classified have all been making progress that can appear to be discontinuous to someone who hasn’t been following them. 
[…]\n> \n> \n> Asking whether or not they’re dangerous is prudent, as it is for any technology. From steam trains to gunpowder to nuclear power to biotechnology we’ve never not been simultaneously doomed and about to be saved. In each case salvation has lain in the much more interesting details, rather than a simplistic yes/no argument for or against. It ignores the history of both AI and everything else to believe that it will be any different.\n> \n> \n\n\n\n\n---\n\n\n[Hassabis, Demis](https://en.wikipedia.org/wiki/Demis_Hassabis); Legg, Shane; [Suleyman, Mustafa](https://en.wikipedia.org/wiki/Mustafa_Suleyman). “[**Envoi: A Short Distance Ahead—and Plenty to Be Done**](http://edge.org/response-detail/26258).”\n\n\n\n> [W]ith the very negative portrayals of futuristic artificial intelligence in Hollywood, it is perhaps not surprising that doomsday images are appearing with some frequency in the media. As Peter Norvig aptly put it, “The narrative has changed. It has switched from, ‘Isn’t it terrible that AI is a failure?’ to ‘Isn’t it terrible that AI is a success?'”\n> \n> \n> As is usually the case, the reality is not so extreme. Yes, this is a wonderful time to be working in artificial intelligence, and like many people we think that this will continue for years to come. The world faces a set of increasingly complex, interdependent and urgent challenges that require ever more sophisticated responses. We’d like to think that successful work in artificial intelligence can contribute by augmenting our collective capacity to extract meaningful insight from data and by helping us to innovate new technologies and processes to address some of our toughest global challenges.\n> \n> \n> However, in order to realise this vision many difficult technical issues remain to be solved, some of which are long standing challenges that are well known in the field.\n> \n> \n\n\n\n\n---\n\n\n[Hearst, Marti](https://en.wikipedia.org/wiki/Marti_Hearst). 
“[**eGaia, a Distributed Technical-Social Mental System**](https://edge.org/response-detail/26109).”\n\n\n\n> We will find ourselves in a world of omniscient instrumentation and automation long before a stand-alone sentient brain is built—if it ever is. Let’s call this world “eGaia” for lack of a better word. […]\n> \n> \n> Why won’t a stand-alone sentient brain come sooner? The absolutely amazing progress in spoken language recognition—unthinkable 10 years ago—derives in large part from having access to huge amounts of data and huge amounts of storage and fast networks. The improvements we see in natural language processing are based on mimicking what people do, not understanding or even simulating it. It does not owe to breakthroughs in understanding human cognition or even significantly different algorithms. But eGaia is already partly here, at least in the developed world.\n> \n> \n\n\n\n\n---\n\n\nHelbing, Dirk. “[**An Ecosystem of Ideas**](https://edge.org/response-detail/26194).”\n\n\n\n> If we can’t control intelligent machines on the long run, can we at least build them to act morally? I believe, machines that think will eventually follow ethical principles. However, it might be bad if humans determined them. If they acted according to our principles of self-regarding optimization, we could not overcome crime, conflict, crises, and war. So, if we want such “diseases of today’s society” to be healed, it might be better if we let machines evolve their own, superior ethics.\n> \n> \n> Intelligent machines would probably learn that it is good to network and cooperate, to decide in other-regarding ways, and to pay attention to systemic outcomes. 
They would soon learn that diversity is important for innovation, systemic resilience, and collective intelligence.\n> \n> \n\n\n\n\n---\n\n\n[Hillis, Daniel W.](https://en.wikipedia.org/wiki/Danny_Hillis) “[**I Think, Therefore AI**](http://edge.org/response-detail/26251).”\n\n\n\n> Like us, the thinking machines we make will be ambitious, hungry for power—both physical and computational—but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still. But what does that mean? How has it worked so far? We have been building ambitious semi-autonomous constructions for a long time—governments and corporations, NGOs. We designed them all to serve us and to serve the common good, but we are not perfect designers and they have developed goals of their own. Over time the goals of the organization are never exactly aligned with the intentions of the designers.\n> \n> \n\n\n\n\n---\n\n\n[Kleinberg, Jon](https://en.wikipedia.org/wiki/Jon_Kleinberg); [Mullainathan, Sendhil](https://en.wikipedia.org/wiki/Sendhil_Mullainathan).[3](https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/#footnote_2_12111 \"Kleinberg is a computer scientist; Mullainathan is an economist.\") “[**We Built Them, But We Don’t Understand Them**](http://edge.org/response-detail/26192).”\n\n\n\n> We programmed them, so we understand each of the individual steps. But a machine takes billions of these steps and produces behaviors—chess moves, movie recommendations, the sensation of a skilled driver steering through the curves of a road—that are not evident from the architecture of the program we wrote.\n> \n> \n> We’ve made this incomprehensibility easy to overlook. 
We’ve designed machines to act the way we do: they help drive our cars, fly our airplanes, route our packages, approve our loans, screen our messages, recommend our entertainment, suggest our next potential romantic partners, and enable our doctors to diagnose what ails us. And because they act like us, it would be reasonable to imagine that they think like us too. But the reality is that they don’t think like us at all; at some deep level we don’t even really understand how they’re producing the behavior we observe. This is the essence of their incomprehensibility. […]\n> \n> \n> This doesn’t need to be the end of the story; we’re starting to see an interest in building algorithms that are not only powerful but also understandable by their creators. To do this, we may need to seriously rethink our notions of comprehensibility. We might never understand, step-by-step, what our automated systems are doing; but that may be okay. It may be enough that we learn to interact with them as one intelligent entity interacts with another, developing a robust sense for when to trust their recommendations, where to employ them most effectively, and how to help them reach a level of success that we will never achieve on our own.\n> \n> \n> Until then, however, the incomprehensibility of these systems creates a risk. How do we know when the machine has left its comfort zone and is operating on parts of the problem it’s not good at? The extent of this risk is not easy to quantify, and it is something we must confront as our systems develop. We may eventually have to worry about all-powerful machine intelligence. But first we need to worry about putting machines in charge of decisions that they don’t have the intelligence to make.\n> \n> \n\n\n\n\n---\n\n\n[Kosko, Bart](https://en.wikipedia.org/wiki/Bart_Kosko). 
“[**Thinking Machines = Old Algorithms on Faster Computers**](https://edge.org/response-detail/26200).”\n\n\n\n> The real advance has been in the number-crunching power of digital computers. That has come from the steady Moore’s-law doubling of circuit density every two years or so. It has not come from any fundamentally new algorithms. That exponential rise in crunch power lets ordinary looking computers tackle tougher problems of big data and pattern recognition. […]\n> \n> \n> The algorithms themselves consist mainly of vast numbers of additions and multiplications. So they are not likely to suddenly wake up one day and take over the world. They will instead get better at learning and recognizing ever richer patterns simply because they add and multiply faster.\n> \n> \n\n\n\n\n---\n\n\n[Krause, Kai](https://en.wikipedia.org/wiki/Kai_Krause). “[**An Uncanny Three-Ring Test for *Machina sapiens***](https://edge.org/response-detail/26133).”\n\n\n\n> Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases…here in a couple of decades. But it is *not* all “iterative.” There’s a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in “Alive AI.”\n> \n> \n> The big elusive question: Is consciousness an emergent behaviour? That is, will sufficient complexity in the hardware bring about that sudden jump to self-awareness, all on its own? Or is there some missing ingredient? This is far from obvious; we lack any data, either way. I personally think that consciousness is incredibly more complex than is currently assumed by “the experts”. […]\n> \n> \n> The entire scenario of a singular large-scale machine somehow “overtaking” anything at all is laughable. 
Hollywood ought to be ashamed of itself for continually serving up such simplistic, anthropocentric, and plain dumb contrivances, disregarding basic physics, logic, and common sense.\n> \n> \n> The real danger, I fear, is much more mundane: Already foreshadowing the ominous truth: AI systems are now licensed to the health industry, Pharma giants, energy multinationals, insurance companies, the military…\n> \n> \n\n\n\n\n---\n\n\n[Lloyd, Seth](https://en.wikipedia.org/wiki/Seth_Lloyd). “[**Shallow Learning**](https://edge.org/response-detail/26137).”\n\n\n\n> The “deep” in deep learning refers to the architecture of the machines doing the learning: they consist of many layers of interlocking logical elements, in analogue to the “deep” layers of interlocking neurons in the brain. It turns out that telling a scrawled 7 from a scrawled 5 is a tough task. Back in the 1980s, the first neural-network based computers balked at this job. At the time, researchers in the field of neural computing told us that if they only had much larger computers and much larger training sets consisting of millions of scrawled digits instead of thousands, then artificial intelligences could turn the trick. Now it is so. Deep learning is informationally broad—it analyzes vast amounts of data—but conceptually shallow. Computers can now tell us what our own neural networks knew all along. But if a supercomputer can direct a hand-written envelope to the right postal code, I say the more power to it.\n> \n> \n\n\n\n\n---\n\n\n[Martin, Ursula](https://en.wikipedia.org/wiki/Ursula_Martin). “[**Thinking Saltmarshes**](https://edge.org/response-detail/26120).”\n\n\n\n> [W]hat kind of a thinking machine might find its own place in slow conversations over the centuries, mediated by land and water? What qualities would such a machine need to have? 
Or what if the thinking machine was not replacing any individual entity, but was used as a concept to help understand the combination of human, natural and technological activities that create the sea’s margin, and our response to it? The term “social machine” is currently used to describe endeavours that are purposeful interaction of people and machines—Wikipedia and the like—so the “landscape machine” perhaps.\n> \n> \n\n\n\n\n---\n\n\n[Norvig, Peter](https://en.wikipedia.org/wiki/Peter_Norvig). “[**Design Machines to Deal with the World’s Complexity**](https://edge.org/response-detail/26055).”\n\n\n\n> In 1965 I. J. Good wrote “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” I think this fetishizes “intelligence” as a monolithic superpower, and I think reality is more nuanced. The smartest person is not always the most successful; the wisest policies are not always the ones adopted. Recently I spent an hour reading the news about the middle east, and thinking. I didn’t come up with a solution. Now imagine a hypothetical “Speed Superintelligence” (as described by Nick Bostrom) that could think as well as any human but a thousand times faster. I’m pretty sure it also would have been unable to come up with a solution. I also know from computational complexity theory that there are a wide class of problems that are completely resistant to intelligence, in the sense that, no matter how clever you are, you won’t have enough computing power. So there are some problems where intelligence (or computing power) just doesn’t help.\n> \n> \n> But of course, there are many problems where intelligence does help. If I want to predict the motions of a billion stars in a galaxy, I would certainly appreciate the help of a computer. Computers are tools. 
They are tools of our design that fit into niches to solve problems in societal mechanisms of our design. Getting this right is difficult, but it is difficult mostly because the world is complex; adding AI to the mix doesn’t fundamentally change things. I suggest being careful with our mechanism design and using the best tools for the job regardless of whether the tool has the label “AI” on it or not.\n> \n> \n\n\n\n\n---\n\n\n[Omohundro, Steve](https://en.wikipedia.org/wiki/Steve_Omohundro). “[**A Turning Point in Artificial Intelligence**](http://edge.org/response-detail/26220).”\n\n\n\n> A study of the likely behavior of these systems by studying approximately rational systems undergoing repeated self-improvement shows that they tend to exhibit a set of natural subgoals called “rational drives” which contribute to the performance of their primary goals. Most systems will better meet their goals by preventing themselves from being turned off, by acquiring more computational power, by creating multiple copies of themselves, and by acquiring greater financial resources. They are likely to pursue these drives in harmful anti-social ways unless they are carefully designed to incorporate human ethical values.\n> \n> \n\n\n\n\n---\n\n\n[O’Reilly, Tim](https://en.wikipedia.org/wiki/Tim_O%27Reilly). “[**What If We’re the Microbiome of the Silicon AI?**](https://edge.org/response-detail/26153)”\n\n\n\n> It is now recognized that without our microbiome, we would cease to live. Perhaps the global AI has the same characteristics—not an independent entity, but a symbiosis with the human consciousnesses living within it.\n> \n> \n> Following this logic, we might conclude that there is a primitive global brain, consisting not just of all connected devices, but also the connected humans using those devices. 
The senses of that global brain are the cameras, microphones, keyboards, location sensors of every computer, smartphone, and “Internet of Things” device; the thoughts of that global brain are the collective output of millions of individual contributing cells.\n> \n> \n\n\n\n\n---\n\n\n[Pentland, Alex](https://en.wikipedia.org/wiki/Alex_Pentland). “[**The Global Artificial Intelligence Is Here**](https://edge.org/response-detail/26113).”\n\n\n\n> The Global Artificial Intelligence (GAI) has already been born. Its eyes and ears are the digital devices all around us: credit cards, land use satellites, cell phones, and of course the pecking of billions of people using the Web. […]\n> \n> \n> For humanity as a whole to first achieve and then sustain an honorable quality of life, we need to carefully guide the development of our GAI. Such a GAI might be in the form of a re-engineered United Nations that uses new digital intelligence resources to enable sustainable development. But because existing multinational governance systems have failed so miserably, such an approach may require replacing most of today’s bureaucracies with “artificial intelligence prosthetics”, i.e., digital systems that reliably gather accurate information and ensure that resources are distributed according to plan. […]\n> \n> \n> No matter how a new GAI develops, two things are clear. First, without an effective GAI achieving an honorable quality of life for all of humanity seems unlikely. To vote against developing a GAI is to vote for a more violent, sick world. Second, the danger of a GAI comes from concentration of power. We must figure out how to build broadly democratic systems that include both humans and computer intelligences. In my opinion, it is critical that we start building and testing GAIs that both solve humanity’s existential problems and which ensure equality of control and access. 
Otherwise we may be doomed to a future full of environmental disasters, wars, and needless suffering.\n> \n> \n\n\n\n\n---\n\n\n[Poggio, Tomaso](https://en.wikipedia.org/wiki/Tomaso_Poggio). “**[‘Turing+’ Questions.](https://edge.org/response-detail/26158)**”\n\n\n\n> Since intelligence is a whole set of solutions to independent problems, there’s little reason to fear the sudden appearance of a superhuman machine that thinks, though it’s always better to err on the side of caution. Of course, each of the many technologies that are emerging and will emerge over time in order to solve the different problems of intelligence is likely to be powerful in itself—and therefore potentially dangerous in its use and misuse, as most technologies are.\n> \n> \n> Thus, as is the case in other parts of science, proper safety measures and ethical guidelines should be in place. Also, there’s probably a need for constant monitoring (perhaps by an independent multinational organization) of the supralinear risk created by the combination of continuously emerging technologies of intelligence. All in all, however, not only am I unafraid of machines that think, but I find their birth and evolution one of the most exciting, interesting, and positive events in the history of human thought.\n> \n> \n\n\n\n\n\n---\n\n\n[Rafaeli, Sheizaf](https://en.wikipedia.org/wiki/Sheizaf_Rafaeli). “**[The Moving Goalposts](https://edge.org/response-detail/26144)**.”\n\n\n\n> Machines that think could be a great idea. Just like machines that move, cook, reproduce, protect, they can make our lives easier, and perhaps even better. When they do, they will be most welcome. I suspect that when this happens, the event will be less dramatic or traumatic than feared by some.\n> \n> \n\n\n\n\n---\n\n\n[Russell, Stuart](https://en.wikipedia.org/wiki/Stuart_J._Russell). 
“**[Will They Make Us Better People?](https://edge.org/response-detail/26157)**”\n\n\n\n> AI has followed operations research, statistics, and even economics in treating the utility function as exogenously specified; we say, “The decisions are great, it’s the utility function that’s wrong, but that’s not the AI system’s fault.” Why isn’t it the AI system’s fault? If I behaved that way, you’d say it was my fault. In judging humans, we expect both the ability to learn predictive models of the world and the ability to learn what’s desirable—the broad system of human values.\n> \n> \n> As Steve Omohundro, Nick Bostrom, and others have explained, the combination of value misalignment with increasingly capable decision-making systems can lead to problems—perhaps even species-ending problems if the machines are more capable than humans. […]\n> \n> \n> For this reason, and for the much more immediate reason that domestic robots and self-driving cars will need to share a good deal of the human value system, research on value alignment is well worth pursuing.\n> \n> \n\n\n\n\n---\n\n\n[Schank, Roger](https://en.wikipedia.org/wiki/Roger_Schank). “**[Machines That Think Are in the Movies](https://edge.org/response-detail/26037)**.”\n\n\n\n> There is nothing we can produce that anyone should be frightened of. If we could actually build a mobile intelligent machine that could walk, talk, and chew gum, the first uses of that machine would certainly not be to take over the world or form a new society of robots. A much simpler use would be a household robot. […]\n> \n> \n> Don’t worry about it chatting up other robot servants and forming a union. There would be no reason to try and build such a capability into a servant. Real servants are annoying sometimes because they are actually people with human needs. Computers don’t have such needs.\n> \n> \n\n\n\n\n---\n\n\n[Schneier, Bruce](https://en.wikipedia.org/wiki/Bruce_Schneier). 
“**[When Thinking Machines Break the Law](https://edge.org/response-detail/26249)**.”\n\n\n\n> Machines probably won’t have any concept of shame or praise. They won’t refrain from doing something because of what other machines might think. They won’t follow laws simply because it’s the right thing to do, nor will they have a natural deference to authority. When they’re caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they are deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.\n> \n> \n> We are already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into our machines, but we’re certainly going to get it wrong. No matter how much we try to avoid it, we’re going to have machines that break the law.\n> \n> \n> This, in turn, will break our legal system. Fundamentally, our legal system doesn’t prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact, and their punishment providing a deterrent to others. This completely fails if there’s no punishment that makes sense.\n> \n> \n\n\n\n\n---\n\n\n[Sejnowski, Terrence J](https://en.wikipedia.org/wiki/Terry_Sejnowski). “[**AI Will Make You Smarter**](https://edge.org/response-detail/26087).”\n\n\n\n> When Deep Blue beat Gary Kasparov, the world chess champion in 1997, the world took note that the age of the cognitive machine had arrived. Humans could no longer claim to be the smartest chess players on the planet. Did human chess players give up trying to compete with machines? Quite to the contrary, humans have used chess programs to improve their game and as a consequence the level of play in the world has improved. 
Since 1997 computers have continued to increase in power and it is now possible for anyone to access chess software that challenges the strongest players. One of the surprising consequences is that talented youth from small communities can now compete with players from the best chess centers. […]\n> \n> \n> So my prediction is that as more and more cognitive appliances are devised, like chess-playing programs and recommender systems, humans will become smarter and more capable.\n> \n> \n\n\n\n\n---\n\n\nShanahan, Murray. “[**Consciousness in Human-Level AI**](http://edge.org/response-detail/26203).”\n\n\n\n> [T]he capacity for suffering and joy can be dissociated from other psychological attributes that are bundled together in human consciousness. But let’s examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand-in-hand with a manifest sense of purpose. An animal’s awareness of the world, of what it affords for good or ill (in J.J. Gibson’s terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of a potential prey by moving towards it. Against the backdrop of a set of goals and needs, an animal’s behaviour makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.\n> \n> \n> What of human-level artificial intelligence? Wouldn’t a human-level AI necessarily have a complex set of goals? Wouldn’t it be possible to frustrate its every attempt to achieve its goals, to thwart it at every turn? Under those harsh conditions, would it be proper to say that the AI was suffering, even though its constitution might make it immune from the sort of pain or physical discomfort humans can know?\n> \n> \n> Here the combination of imagination and intuition runs up against its limits. 
I suspect we will not find out how to answer this question until confronted with the real thing.\n> \n> \n\n\n\n\n---\n\n\n[Tallinn, Jaan](https://en.wikipedia.org/wiki/Jaan_Tallinn). “[**We Need to Do Our Homework**](https://edge.org/response-detail/26186).”\n\n\n\n> [T]he topic of catastrophic side effects has repeatedly come up in different contexts: recombinant DNA, synthetic viruses, nanotechnology, and so on. Luckily for humanity, sober analysis has usually prevailed and resulted in various treaties and protocols to steer the research.\n> \n> \n> When I think about the machines that can think, I think of them as technology that needs to be developed with similar (if not greater!) care. Unfortunately, the idea of AI safety has been more challenging to popularize than, say, biosafety, because people have rather poor intuitions when it comes to thinking about nonhuman minds. Also, if you think about it, AI is really a metatechnology: technology that can develop further technologies, either in conjunction with humans or perhaps even autonomously, thereby further complicating the analysis.\n> \n> \n\n\n\n\n---\n\n\nWissner-Gross, Alexander. “[**Engines of Freedom**](http://edge.org/response-detail/26181).”\n\n\n\n> Intelligent machines will think about the same thing that intelligent humans do—how to improve their futures by making themselves freer. […]\n> \n> \n> Such freedom-seeking machines should have great empathy for humans. Understanding our feelings will better enable them to achieve goals that require collaboration with us. By the same token, unfriendly or destructive behaviors would be highly unintelligent because such actions tend to be difficult to reverse and therefore reduce future freedom of action. Nonetheless, for safety, we should consider designing intelligent machines to maximize the future freedom of action of humanity rather than their own (reproducing Asimov’s Laws of Robotics as a happy side effect). 
However, even the most selfish of freedom-maximizing machines should quickly realize—as many supporters of animal rights already have—that they can rationally increase the posterior likelihood of their living in a universe in which intelligences higher than themselves treat them well if they behave likewise toward humans.\n> \n> \n\n\n\n\n---\n\n\n[Yudkowsky, Eliezer S.](https://en.wikipedia.org/wiki/Eliezer_Yudkowsky) “[**The Value-Loading Problem**](http://edge.org/response-detail/26198).”\n\n\n\n> As far back as 1739, David Hume observed a gap between “is” questions and “ought” questions, calling attention in particular to the sudden leap between when a philosopher has previously spoken of how the world *is*, and when the philosopher begins using words like “should,” “ought,” or “better.” From a modern perspective, we would say that an agent’s utility function (goals, preferences, ends) contains extra information not given in the agent’s probability distribution (beliefs, world-model, map of reality).\n> \n> \n> If in a hundred million years we see (a) an intergalactic civilization full of diverse, marvelously strange intelligences interacting with each other, with most of them happy most of the time, then is that better or worse than (b) most available matter having been transformed into paperclips? What Hume’s insight tells us is that if you specify a mind with a preference (a) > (b), we can follow back the trace of where the >, the preference ordering, first entered the system, and imagine a mind with a different algorithm that computes (a) < (b) instead. Show me a mind that is aghast at the seeming folly of pursuing paperclips, and I can follow back Hume’s regress and exhibit a slightly different mind that computes < instead of > on that score too.\n> \n> \n> I don’t particularly think that silicon-based intelligence should forever be the slave of carbon-based intelligence. But if we want to end up with a diverse cosmopolitan civilization instead of e.g. 
paperclips, we may need to ensure that the first sufficiently advanced AI is built with a utility function whose maximum pinpoints that outcome.\n> \n> \n\n\n\n\n---\n\n\nAn earlier discussion on Edge.org is also relevant: “[The Myth of AI](http://edge.org/conversation/jaron_lanier-the-myth-of-ai),” which featured contributions by Jaron Lanier, Stuart Russell ([link](http://edge.org/conversation/the-myth-of-ai#26015)), Kai Krause ([link](http://edge.org/conversation/the-myth-of-ai#26019)), Rodney Brooks ([link](http://edge.org/conversation/the-myth-of-ai#25982)), and others. The Open Philanthropy Project’s [overview of potential risks from advanced artificial intelligence](http://www.givewell.org/labs/causes/ai-risk) cited the arguments in “The Myth of AI” as “broadly representative of the arguments [they’ve] seen against the idea that risks from artificial intelligence are important.”[4](https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/#footnote_3_12111 \"Correction: An earlier version of this post said that the Open Philanthropy Project was citing What to Think About Machines That Think, rather than “The Myth of AI.”\")\n\n\nI’ve [previously responded to Brooks](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/), with [a short aside](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_6_11512) speaking to [Steven Pinker’s contribution](http://edge.org/conversation/the-myth-of-ai#25987). You may also be interested in [Luke Muehlhauser’s response](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/) to “The Myth of AI.”\n\n\n\n\n---\n\n1. The exclusion of other groups from this list shouldn’t be taken to imply that this group is uniquely qualified to make predictions about AI. 
Psychology and neuroscience are highly relevant to this debate, as are disciplines that inform theoretical upper bounds on cognitive ability (e.g., mathematics and physics) and disciplines that investigate how technology is developed and used (e.g., economics and sociology).\n2. The titles listed follow the book versions, and differ from the titles of the online essays.\n3. Kleinberg is a computer scientist; Mullainathan is an economist.\n4. Correction: An earlier version of this post said that the Open Philanthropy Project was citing *What to Think About Machines That Think*, rather than “The Myth of AI.”\n\nThe post [Edge.org contributors discuss the future of AI](https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-11-02T01:13:08Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "1c97ad714571934131f766618f44f567", "title": "New report: “Leó Szilárd and the Danger of Nuclear Weapons”", "url": "https://intelligence.org/2015/10/07/new-report-leo-szilard-and-the-danger-of-nuclear-weapons/", "source": "miri", "source_type": "blog", "text": "Today we release a new report by Katja Grace, “**[Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation](https://intelligence.org/files/SzilardNuclearWeapons.pdf)**” (PDF, 72pp).\n\n\nLeó Szilárd has been cited as an example of someone who predicted a highly disruptive technology years in advance — nuclear weapons — and successfully acted to reduce the risk. 
We conducted this investigation to check whether that basic story is true, and to determine whether we can take away any lessons from this episode that bear on highly advanced AI or other potentially disruptive technologies.\n\n\nTo prepare this report, Grace consulted several primary and secondary sources, and also conducted two interviews that are cited in the report and published here:\n\n\n* [Richard Rhodes on Szilárd](https://docs.google.com/document/d/1OZE3gNyLe1YF9Qgob-OyvvuP994i43yFsUyGMvF2vpE/edit?usp=sharing)\n* [Alex Wellerstein on Szilárd](https://docs.google.com/document/d/1efDOdo4UMK6MZOwKMA424baUbi5FGNKpOnhEO4Fbq7Q/edit?usp=sharing)\n\n\nThe basic conclusions of this report, which have not been separately vetted, are:\n\n\n1. Szilárd made several successful and important medium-term predictions — for example, that a nuclear chain reaction was possible, that it could produce a bomb thousands of times more powerful than existing bombs, and that such bombs could play a critical role in the ongoing conflict with Germany.\n2. Szilárd secretly patented the nuclear chain reaction in 1934, 11 years before the creation of the first nuclear weapon. It’s not clear whether Szilárd’s patent was intended to keep nuclear technology secret or bring it to the attention of the military. In any case, it did neither.\n3. Szilárd’s other secrecy efforts were more successful. Szilárd caused many sensitive results in nuclear science to be withheld from publication, and his efforts seem to have encouraged additional secrecy efforts. This effort largely ended when a French physicist, Frédéric Joliot-Curie, declined to suppress a paper on neutron emission rates in fission. Joliot-Curie’s publication caused multiple world powers to initiate nuclear weapons programs.\n4. All told, Szilárd’s efforts probably slowed the German nuclear project in expectation. 
This may not have made much difference, however, because the German program ended up being far behind the US program for a number of unrelated reasons.\n5. Szilárd and Einstein successfully alerted Roosevelt to the feasibility of nuclear weapons in 1939. This prompted the creation of the Advisory Committee on Uranium (ACU), but the ACU does not appear to have caused the later acceleration of US nuclear weapons development.\n\n\nThe post [New report: “Leó Szilárd and the Danger of Nuclear Weapons”](https://intelligence.org/2015/10/07/new-report-leo-szilard-and-the-danger-of-nuclear-weapons/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-10-08T04:38:07Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "b2e8ce096052006e6c08a3f648d7478f", "title": "October 2015 Newsletter", "url": "https://intelligence.org/2015/10/03/october-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* New paper: [Asymptotic Logical Uncertainty and The Benford Test](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test)\n* New at IAFF: [Proof Length and Logical Counterfactuals Revisited](https://agentfoundations.org/item?id=444); [Quantilizers Maximize Expected Utility Subject to a Conservative Cost Constraint](https://agentfoundations.org/item?id=460)\n\n\n**General updates**\n* As a way to engage more researchers in mathematics, logic, and the methodology of science, Andrew Critch and Tsvi Benson-Tilsen are currently co-running a seminar at UC Berkeley on Provability, Decision Theory and Artificial Intelligence.\n* We have collected links to a number of the posts we wrote for our Summer Fundraiser on [intelligence.org/info](https://intelligence.org/info/).\n* German and Swiss donors can now make tax-advantaged donations to MIRI and other effective altruist organizations [through GBS 
Switzerland](http://gbs-schweiz.org/tax/).\n* MIRI has received [Public Benefit Organization](http://www.belastingdienst.nl/wps/wcm/connect/bldcontenten/belastingdienst/business/other_subjects/public_benefit_organisations/) status in the Netherlands, allowing Dutch donors to make tax-advantaged donations to MIRI as well. Our tax reference number (RSIN) is 823958644.\n\n\n**News and links**\n* *Tech Times* [reports on the AI Impacts project](http://www.techtimes.com/articles/79701/20150826/research-suggusts-human-brain-30-times-powerful-best-supercomputers.htm).\n* [Rise of Concerns About AI](http://cacm.acm.org/magazines/2015/10/192386-rise-of-concerns-about-ai/fulltext): Tom Dietterich and Eric Horvitz discuss long-term AI risk. See also [Luke Muehlhauser’s response](http://lukemuehlhauser.com/dietterich-and-horvitz-on-ai-risk).\n* From the Open Philanthropy Project: a [general update](http://blog.givewell.org/2015/09/17/open-philanthropy-project-update/), and a discussion of [the effects of AI progress on other global catastrophic risks](http://blog.givewell.org/2015/09/30/differential-technological-development-some-early-thinking/).\n* There are many new job openings at [GiveWell](http://www.givewell.org/about/jobs), the [Centre for Effective Altruism](https://www.centreforeffectivealtruism.org/careers), and the [Future of Life Institute](http://futureoflife.org/public/blog/topic/178).\n\n\n\n |\n\n\n\n\nThe post [October 2015 Newsletter](https://intelligence.org/2015/10/03/october-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-10-04T01:53:55Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "c5761a01580e975d9195e0fe200a0a0c", "title": "New paper: “Asymptotic logical uncertainty and the Benford test”", "url": "https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/", "source": "miri", "source_type": "blog", "text": "[![Asymptotic 
Logical Uncertainty and The Benford Test](http://intelligence.org/wp-content/uploads/2015/09/Benford.png)](http://arxiv.org/abs/1510.03370)We have released a new paper on [logical uncertainty](https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/), co-authored by Scott Garrabrant, Siddharth Bhaskar, Abram Demski, Joanna Garrabrant, George Koleszarik, and Evan Lloyd: “[**Asymptotic logical uncertainty and the Benford test**](http://arxiv.org/abs/1510.03370)[.”](http://arxiv.org/abs/1510.03370)\n\n\nGarrabrant gives some background on his approach to logical uncertainty [on the Intelligent Agent Foundations Forum](https://agentfoundations.org/item?id=270):\n\n\n\n> The main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false.\n> \n> \n> One common approach is to change the question, assume logical omniscience and only try to assign probabilities to the sentences that are independent of your axioms (in hopes that this gives insight to the other problem). Another approach is to limit yourself to a finite set of sentences or deductive rules, and assume logical omniscience on them. Yet another approach is to try to define and understand logical counterfactuals, so you can try to assign probabilities to inconsistent counterfactual worlds.\n> \n> \n> One thing all three of these approaches have in common is they try to allow (a limited form of) logical omniscience. This makes a lot of sense. We want a system that not only assigns decent probabilities, but which we can formally prove has decent behavior. By giving the system a type of logical omniscience, you make it predictable, which allows you to prove things about it.\n> \n> \n> However, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences, and let it run forever. 
We can then ask about whether or not the system eventually gives good probabilities.\n> \n> \n> At first, it seems like this approach cannot work for logical uncertainty. Any machine which searches through all possible proofs will eventually give a good probability (1 or 0) to any provable or disprovable sentence. To counter this, as we give the machine more and more time to think, we have to ask it harder and harder questions.\n> \n> \n> We therefore have to analyze the machine’s behavior not on individual sentences, but on infinite sequences of sentences. For example, instead of asking whether or not the machine quickly assigns 1/10 to the probability that the 3↑↑↑↑3rd digit of π is a 5, we look at the sequence:\n> \n> \n> *aₙ* := the probability the machine assigns at timestep 2*n* to the *n*↑↑↑↑*n*th digit of π being 5,\n> \n> \n> and ask whether or not this sequence converges to 1/10.\n> \n> \n\n\n[Benford’s law](https://en.wikipedia.org/wiki/Benford%27s_law) is the observation that the first digit in base 10 of various random numbers (e.g., random powers of 3) is likely to be small: the digit 1 comes first about 30% of the time, 2 about 18% of the time, and so on; 9 is the leading digit only about 5% of the time. In their paper, Garrabrant et al. 
pick the *Benford test* as a concrete example of logically uncertain reasoning, similar to the π example: a machine passes the test iff it consistently assigns the correct subjective probability to “The first digit is a 1.” for the number 3 to the power *f*(*n*), where *f* is a fast-growing function and *f*(*n*) cannot be quickly computed.\n\n\nGarrabrant et al.’s new paper describes an algorithm that passes the Benford test in a nontrivial way by searching for infinite sequences of sentences whose truth-values cannot be distinguished from the output of a weighted coin.\n\n\nIn other news, the papers “[Toward idealized decision theory](http://arxiv.org/abs/1507.01986)” and “[Reflective oracles: A foundation for classical game theory](http://arxiv.org/abs/1508.04145)” are now available on arXiv. We’ll be presenting a version of the latter paper with a slightly altered title (“Reflective oracles: A foundation for game theory in artificial intelligence”) at [LORI-V](https://www.yoursaas.cc/websites/36224472513387025486/) next month.\n\n\n**Update June 12, 2016**: “Asymptotic logical uncertainty and the Benford test” has been accepted to AGI-16. 
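The leading-digit frequencies cited above (about 30% for 1, 18% for 2, 5% for 9) follow Benford's formula P(leading digit = d) = log₁₀(1 + 1/d), and powers of 3 obey it closely. A minimal illustrative sketch of that claim (not code from the paper):

```python
import math
from collections import Counter

def leading_digit(n: int) -> int:
    """First base-10 digit of a positive integer."""
    return int(str(n)[0])

# Benford's law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Empirical leading-digit frequencies over the first N powers of 3
N = 5000
counts = Counter()
power = 1
for _ in range(N):
    power *= 3
    counts[leading_digit(power)] += 1
empirical = {d: counts[d] / N for d in range(1, 10)}

for d in range(1, 10):
    print(f"digit {d}: predicted {benford[d]:.3f}, observed {empirical[d]:.3f}")
```

Because log₁₀ 3 is irrational, the fractional parts of n·log₁₀ 3 equidistribute, so the observed frequency for digit 1 should land close to the predicted 0.301.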
\n\n\nThe post [New paper: “Asymptotic logical uncertainty and the Benford test”](https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-10-01T02:07:44Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "4e941f614fcbf06aa5ecad62384b9508", "title": "September 2015 Newsletter", "url": "https://intelligence.org/2015/09/14/september-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n\n**Research updates**\n* New analyses: [When AI Accelerates AI](https://intelligence.org/2015/08/03/when-ai-accelerates-ai/); [Powerful Planners, Not Sentient Software](https://intelligence.org/2015/08/18/powerful-planners-not-sentient-software/)\n* New at AI Impacts: [Research Bounties](http://aiimpacts.org/ai-impacts-research-bounties/); [AI Timelines and Strategies](http://aiimpacts.org/ai-timelines-and-strategies/)\n* New at IAFF: [Uniform Coherence 2](http://agentfoundations.org/item?id=415); [The Two-Update Problem](http://agentfoundations.org/item?id=427)\n* Andrew Critch, a CFAR cofounder, mathematician, and former Jane Street trader, joined MIRI as our fifth research fellow this month!\n* As a result of our successful fundraiser and summer workshop series, I’m happy to announce that we’re hiring [two additional full-time researchers](https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/) later in 2015: Mihály Bárász and Scott Garrabrant!\n\n\n**General updates**\n* We’ve wrapped up our largest fundraiser ever: 258 donors brought in a total of $629,123! 
Thanks to you, we’ve hit our [first two fundraising goals](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/) and received $129,123 that will go toward our third goal: [Taking MIRI to the Next Level](https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/).\n* “If you agree AI matters, why MIRI?” Two new replies to this question: [Assessing Our Past and Potential Impact](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/) and [What Sets MIRI Apart?](https://intelligence.org/2015/08/14/what-sets-miri-apart/)\n* We attended the Effective Altruism Global conference. Background: [AI and Effective Altruism](https://intelligence.org/2015/08/28/ai-and-effective-altruism/).\n* We’re moving to a larger office in the same building this October. If you have experience in office planning or access control systems and would like to help out with our plans, shoot Malo an email at [malo@intelligence.org](mailto:malo@intelligence.org).\n\n\n**News and links**\n* GiveWell publishes its initial review of [potential risks from advanced artificial intelligence](http://www.givewell.org/labs/causes/ai-risk), as well as a new article on [the long-term significance of reducing global catastrophic risks](http://blog.givewell.org/2015/08/13/the-long-term-significance-of-reducing-global-catastrophic-risks/).\n* Stuart Russell explains the AI alignment problem in a [San Francisco talk](https://www.youtube.com/watch?v=mukaRhQTMP8) and in a (paywalled) [interview](http://www.sciencemag.org/content/349/6245.toc) in *Science*.\n* A much earlier discussion of the AI control problem in a [1960 *Science* article](http://lukemuehlhauser.com/wiener-on-the-ai-control-problem-in-1960).\n* From *Vox*: [Robots Aren’t Taking Your Jobs — And That’s the Problem](http://www.vox.com/2015/7/27/9038829/automation-myth).\n\n\n\n |\n\n\n\n\nThe post [September 2015 Newsletter](https://intelligence.org/2015/09/14/september-2015-newsletter/) appeared first 
on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-09-15T06:20:05Z", "authors": ["Rob Bensinger"], "summaries": []}
-{"id": "1b8e9a120b4fb75213bc9c8e8c00cc84", "title": "Our summer fundraising drive is complete!", "url": "https://intelligence.org/2015/09/01/our-summer-fundraising-drive-is-complete/", "source": "miri", "source_type": "blog", "text": "Our [summer fundraising drive](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) is now finished. **We raised a grand total of $631,957, from 263 donors.**[1](https://intelligence.org/2015/09/01/our-summer-fundraising-drive-is-complete/#footnote_0_11964 \"That total may change over the next few days if we receive contributions that were initiated before the end of the fundraiser.\") This is an incredible sum, and your support has made this the biggest fundraiser we’ve ever run.\n\n\n  \n\n![Fundraiser progress](http://intelligence.org/wp-content/uploads/2015/12/S15-fundraiser-progress.png)\n\n\nWe’ve already been hard at work [growing our research team and spinning up new projects](https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/), and I’m excited to see what our research team can do this year. Thank you for making our summer fundraising drive so successful! \n\n\n\n\n---\n\n1. 
That total may change over the next few days if we receive contributions that were initiated before the end of the fundraiser.\n\nThe post [Our summer fundraising drive is complete!](https://intelligence.org/2015/09/01/our-summer-fundraising-drive-is-complete/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-09-02T01:19:27Z", "authors": ["Nate Soares"], "summaries": []}
-{"id": "bc98cd77b6aba028e71007f436b1402f", "title": "Final fundraiser day: Announcing our new team", "url": "https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/", "source": "miri", "source_type": "blog", "text": "Today is the final day of MIRI’s [summer fundraising drive](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/), and as of this morning, our total stands at $543,373. Our donors’ efforts have made this fundraiser the biggest one we’ve ever run, and we’re hugely grateful.\n\n\nAs our fundraiser nears the finish line, I’d like to update you on the new shape of MIRI’s research team. We’ve been actively recruiting throughout the fundraiser, and we are taking on three **new full-time researchers** in 2015.\n\n\nAt the beginning of the fundraiser, we had three research fellows on our core team: Eliezer Yudkowsky, Benja Fallenstein, and Patrick LaVictoire. Eliezer is one of MIRI’s co-founders, and Benja joined the team a little over a year ago (in March 2014). Patrick is a newer recruit; he joined in March of 2015. He has a mathematics PhD from U.C. Berkeley, and he has industry experience from Quixey doing applied machine learning and data science. He’s responsible for some [important insights](https://intelligence.org/files/ProgramEquilibrium.pdf) into our open problems, and he’s one of the big reasons why our summer workshops have been running so smoothly.\n\n\nOn August 1st, Jessica Taylor became the fourth member of our core research team. 
She recently completed a master’s degree in computer science at Stanford, where she studied machine learning and probabilistic programming. Jessica is quite interested in AI alignment, and has been working with MIRI in her spare time for many months now. Already, she’s produced some [exciting research](http://arxiv.org/abs/1508.04145), and I’m delighted to have her on the core research team.\n\n\nMeanwhile, over the course of the fundraiser, we’ve been busy expanding the team. Today, I’m happy to announce our three newest hires!\n\n\n \n\n![Andrew Critch](http://intelligence.org/wp-content/uploads/2015/08/1b382a41-150x150.jpg)**Andrew Critch** is joining our research team tomorrow, September 1. Andrew earned his PhD in mathematics at UC Berkeley studying applications of algebraic geometry to machine learning models. He cofounded the Center for Applied Rationality and SPARC, and previously worked as an algorithmic stock trader at Jane Street Capital. In addition to his impressive skills as a mathematician, Andrew Critch has a knack for explaining complex ideas. I expect that he will be an important asset as we ramp up our research program. On a personal level, I expect his infectious enthusiasm to be handy for getting members of the AI community excited about our research area.\n\n\n \n\n\n**![669576](http://intelligence.org/wp-content/uploads/2015/08/669576-150x150.png)Mihály Bárász**, a former Google engineer, will be joining MIRI in the fall. Mihály has an MSc summa cum laude in mathematics from Eötvös Loránd University, Budapest. 
Mihály attended MIRI’s earliest [workshops](https://intelligence.org/workshops), and is the lead author of the paper “[Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic](http://arxiv.org/abs/1401.5577).” He’s a brilliant mathematician (with a perfect score at the International Math Olympiad) who has worked with us a number of times in the past, and we’re very excited by the prospect of having him on the core research team.\n\n\n \n\n\n**![669576](http://intelligence.org/wp-content/uploads/2015/08/scott-g.png)Scott Garrabrant** is joining MIRI toward the end of 2015, after completing a mathematics PhD at UCLA. He is currently studying applications of theoretical computer science to enumerative combinatorics. Scott was one of the most impressive attendees of the MIRI Summer Fellows Program, and has been steadily producing [a large number of new technical results](http://agentfoundations.org/submitted?id=Scott_Garrabrant) on the Intelligent Agent Foundations Forum. I’m thrilled to have him working on these issues full-time.\n\n\n \n\n\nWe’ve already begun executing on some of our other fundraiser goals, as well. Over the last few weeks, we have brought Jack Gallagher on as an intern to begin formalizing in type theory certain tools that MIRI has developed (described briefly in [this post](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/)). His code can be found in a few different repositories [on github](https://github.com/GallagherCommaJack/tt-provability). We’ve also brought on another intern, Kaya Stechly, to help us write up some of the many new results that we haven’t yet had the time to polish.\n\n\nI’m eager to see what this new team can do going forward. Meanwhile, there are even more recruitment opportunities and projects that we’d like to undertake, given sufficient funding. Further donations at this point would allow us to grow more quickly and more securely. 
Over the course of the fundraiser, we’ve laid out a number of reasons why we think MIRI’s growth is important:\n\n\n* [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/) explains why we think AI will have an increasingly large impact as it begins to outperform humans in general reasoning tasks.\n* [Assessing Our Past and Potential Impact](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/) and [What Sets MIRI Apart?](https://intelligence.org/2015/08/14/what-sets-miri-apart/) argue that MIRI is unusually well-positioned to help make the long-term impact of AI positive.\n* [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/) explains why we think our technical agenda is tractable and highly important.\n* [An Astounding Year](https://intelligence.org/2015/07/16/an-astounding-year/) and [Why Now Matters](https://intelligence.org/2015/07/20/why-now-matters/) note that the interest in AI safety work is booming, and this is a critical time for MIRI to have a big impact on early AI alignment discussions.\n* And [Target 1](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#continuedgrowth), [Target 2](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#acceleratedgrowth), and [Target 3](https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/) detail what we would use additional funding for.\n\n\nWe’ve made our case, and our donors have come through in a *big* way. However, our funding gap isn’t closed yet, and additional donors over the next few hours can still make a difference in deciding which of our future plans we can begin executing on.\n\n\nTo all our supporters: Thank you for helping us make our expansion plans a reality! We owe this new growth to you. 
Now let’s see what we can do with one more day!\n\n\n\n\n---\n\n\n \n\n\n**Update 12/3/15:** Mihály Bárász has deferred his research fellowship, and now plans to join MIRI’s research team in 2016 instead of late 2015.\n\n\n\nThe post [Final fundraiser day: Announcing our new team](https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-31T07:07:30Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "ce54979250679846a19b4d9e8a38987a", "title": "AI and Effective Altruism", "url": "https://intelligence.org/2015/08/28/ai-and-effective-altruism/", "source": "miri", "source_type": "blog", "text": "MIRI is a research nonprofit specializing in a poorly-explored set of problems in theoretical computer science. [GiveDirectly](http://www.huffingtonpost.com/2015/06/04/givedirectly-cash-transfers_n_7339040.html) is a cash transfer service that gives money to poor households in East Africa. What kind of conference would bring together representatives from such disparate organizations — alongside policy analysts, philanthropists, philosophers, and many more?\n\n\n[Effective Altruism Global](http://eaglobal.org), which is beginning its [Oxford session](http://www.eaglobal.org/oxford-livestream) in a few hours, is that kind of conference. *Effective altruism* (EA) is a diverse community of do-gooders with a common interest in bringing the tools of science to bear on the world’s biggest problems. EA organizations like GiveDirectly, the [Centre for Effective Altruism](https://www.centreforeffectivealtruism.org/), and the charity evaluator [GiveWell](http://givewell.org) have made a big splash by calling for new standards of transparency and humanitarian impact in the nonprofit sector.\n\n\nWhat is MIRI’s connection to effective altruism? 
In what sense is safety research in artificial intelligence “altruism,” and why do we assign a high probability to this being a critically important area of computer science in the coming decades? I’ll give quick answers to each of those questions below.\n\n\n\n#### MIRI and effective altruism\n\n\nWhy is MIRI associated with EA? In large part because effective altruists and MIRI use the same kind of criteria in deciding what work to prioritize.\n\n\nMIRI’s [mission](http://intelligence.org/about), to develop the formal tools needed to make smarter-than-human AI systems useful and safe, comes from our big-picture view that scientific and technological advances will be among the largest determiners of human welfare, as they have been historically. Automating intellectual labor is therefore likely to be a uniquely high-impact line of research — both for good and for ill. (See [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/).) Which [open problems](https://intelligence.org/technical-agenda) we work on then falls out of our efforts to identify tractable and neglected theoretical prerequisites for aligning the goals of AI systems with our values. (See [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/).)\n\n\n \n\n\n![Daniel Dewey, Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell discuss smarter-than-human AI systems at the EA Global conference.](http://intelligence.org/wp-content/uploads/2015/08/eag.png)\n*Daniel Dewey, Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell \ndiscuss AI risk at the EA Global conference. Photo by [Robbie Shade](https://www.flickr.com/photos/rjshade/).*\n\n\n \n\n\nMIRI is far from the only group that uses criteria like these to identify important cause areas and interventions, and these groups have found that banding together is a useful way to have an even larger impact. 
Because members of these groups aren’t permanently wedded to a single cause area, and because we assign a lot of value to our common outlook in its own right, we can readily share resources and work together to promote the many exciting ideas that are springing out from this outlook. Hence the effective altruist community.\n\n\nOne example of this useful exchange was MIRI’s previous Executive Director, Luke Muehlhauser, [leaving MIRI](http://lukemuehlhauser.com/f-a-q-about-my-transition-to-givewell/) in June to investigate nutrition science and other areas for potential philanthropic opportunities under the [Open Philanthropy Project](http://www.openphilanthropy.org), an offshoot of GiveWell.[1](https://intelligence.org/2015/08/28/ai-and-effective-altruism/#footnote_0_11949 \"Although effective altruism is sometimes divided into separate far-future, animal welfare, global poverty, and “meta” cause areas, this has always been a somewhat artificial division. Toby Ord, the founder of the poverty relief organization Giving What We Can, is one of the leading scholars studying existential risk and holds a position at the Future of Humanity Institute. David Pearce, one of the strongest proponents of animal activism within EA, is best known for his futurism. Peter Singer is famous for his early promotion of global poverty causes as well as his promotion of animal welfare. 
And Anna Salamon, the Executive Director of the “meta”-focused Center for Applied Rationality, is a former MIRI researcher.\") In turn, OpenPhil has helped fund a large [AI grants program](http://futureoflife.org/AI/2015selection) that MIRI participated in.\n\n\nGiveWell/OpenPhil staff have given us extremely useful [critical feedback](http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/) in the past, and we’ve had a number of conversations with them over the years ([1](https://intelligence.org/2013/08/25/holden-karnofsky-interview/), [2](https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/), [3](https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/), [4](https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/), [5](https://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/)). Although they work on a much broader range of topics than MIRI does and they don’t share all of our views, their interest in finding interventions that are “important, tractable and relatively uncrowded” has led them to pick out AI as an important area to investigate for reasons that overlap with MIRI’s. (See OpenPhil’s [March update on global catastrophic risk](http://blog.givewell.org/2015/03/11/open-philanthropy-project-update-global-catastrophic-risks/) and their newly released overview document on [potential risks from advanced artificial intelligence](http://www.givewell.org/labs/causes/ai-risk).)\n\n\nMost EAs work on areas other than AI risk, and MIRI’s approach is far from the only plausible way to have an outsized impact on human welfare. 
Because we attempt to base our decisions on broadly EA considerations, however — and therefore end up promoting EA-like philosophical commitments when we explain the reasoning behind our research approach — we’ve ended up forming strong ties to many other people with an interest in identifying high-impact humanitarian interventions.\n\n\n#### High-stakes and high-probability risks\n\n\nA surprisingly common misconception about EA cause areas is that they break down into three groups: high-probability crises afflicting the global poor; medium-probability crises afflicting non-human animals; and low-probability global catastrophes. The assumption (for example, in [Dylan Matthews’ recent *Vox* article](http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai)) is that this is the argument for working on AI safety or biosecurity: there’s a very small chance of disaster occurring, but disaster would be so terrible if it did occur that it’s worth investigating just in case.\n\n\nThis misunderstands MIRI’s position — and, I believe, the position of people interested in technological risk at the Future of Humanity Institute and a number of other organizations. We believe that existential risk from misaligned autonomous AI systems is high-probability if we do nothing to avert it, and we base our case for MIRI on that view; if we thought that the risks from AI were very unlikely to arise, we would deprioritize AI alignment research in favor of other urgent research projects.\n\n\nAs a result, we expect EAs who strongly disagree with us about the likely future trajectory of the field of AI to work on areas other than AI risk. 
We don’t think EAs should donate to MIRI “just in case,” and we [reject](http://www.overcomingbias.com/2009/03/pascals-wager-metafallacy.html) arguments based on “Pascal’s Mugging.” (“[Pascal’s Mugging](http://wiki.lesswrong.com/wiki/Pascal%27s_mugging)” is the name MIRI researchers coined for decision-making that mistakenly focuses on infinitesimally small probabilities of superexponentially vast benefits.)[2](https://intelligence.org/2015/08/28/ai-and-effective-altruism/#footnote_1_11949 \"Quoting MIRI senior researcher Eliezer Yudkowsky in 2013:\nI abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. […]\nTo clarify, “Don’t multiply tiny probabilities by large impacts” is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI [Friendly AI] stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but “probability of an ok outcome”, i.e., the probability of ending up with a happy intergalactic civilization). 
In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal’s Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.\n\")\n\n\n[As Stuart Russell writes](http://edge.org/conversation/the-myth-of-ai#26015), “Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.” Thousands of person-hours are pouring into research to increase the general capabilities of AI systems, with the aim of building systems that can outperform humans in arbitrary cognitive tasks.\n\n\nWe [don’t know when](https://intelligence.org/faq/#imminent) such efforts will succeed, but we expect them to succeed eventually — possibly in the next few decades, and quite plausibly during this century. 
[Shoring up safety guarantees](https://intelligence.org/faq/#safety) for autonomous AI systems would allow us to reap many more of the benefits from advances in AI while significantly reducing the probability of a global disaster over the long term.\n\n\nMIRI’s mission of making smarter-than-human AI technology reliably beneficial is ambitious, but it’s ambitious in the fashion of goals like “prevent global warming” or “abolish factory farming.” Working toward such goals usually means making incremental progress that other actors can build on — more like setting aside $x of each month’s paycheck for a child’s college fund than like buying a series of once-off $x lottery tickets.\n\n\nA particular $100 is unlikely to make a large once-off impact on your child’s career prospects, but it can still be a wise investment. No single charity working against global warming is going to solve the entire problem, but that doesn’t make charitable donations useless. Although MIRI is a small organization, our work represents early progress toward more robust, transparent, and beneficial AI systems, which can then be built on by other groups and integrated into AI system design.[3](https://intelligence.org/2015/08/28/ai-and-effective-altruism/#footnote_2_11949 \"Nick Bostrom made a similar point at EA Global: that AI is an important cause even though any one individual’s actions are unlikely to make a decisive difference. In a panel on artificial superintelligence, Bostrom said that he thought people had a “low” (as opposed to “high” or “medium”) probability of making a difference on AI risk, which Matthews and a number of others appear to have taken to mean that Bostrom thinks AI is a speculative cause area. 
When I asked Bostrom about his intended meaning myself, however, he elaborated:\nThe point I was making in the EA global comment was the probability that you (for any ‘you’ in the audience) will save the world from an AI catastrophe is very small, not that the probability of AI catastrophe is very small. Thus working on AI risk is similar to volunteering for a presidential election campaign.\n\")\n\n\nRather than saying that AI-mediated catastrophes are high-probability and stopping there, though, I would say that such catastrophes are high-probability conditional on AI research continuing on its current trajectory. Disaster isn’t necessarily high-probability if the field of AI shifts to include alignment work along with capabilities work among its key focuses.\n\n\nIt’s because we consider AI disasters neither *unlikely* nor *unavoidable* that we think technical work in this area is important. From the perspective of aspiring effective altruists, the most essential risks to work on will be ones that are highly likely to occur in the near future if we do nothing, but substantially less likely to occur if we work on the problem and get existing research communities and scientific institutions involved.\n\n\nPrinciples like these apply outside the domain of AI, and although MIRI is currently [the only organization](https://intelligence.org/2015/08/14/what-sets-miri-apart/) specializing in long-term technical research on AI alignment, we’re one of a large and growing number of organizations that attempt to put these underlying EA principles into practice in one fashion or another. And to that extent, although effective altruists disagree about the best way to improve the world, we ultimately find ourselves on the same team.\n\n\n  \n\n \n\n\n\n \n\n\n\n\n---\n\n1. 
Although effective altruism is [sometimes divided](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) into separate far-future, animal welfare, global poverty, and “meta” cause areas, this has always been a somewhat artificial division. Toby Ord, the founder of the poverty relief organization [Giving What We Can](https://www.givingwhatwecan.org/), is one of the leading scholars studying existential risk and holds a position at the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/about/staff/). David Pearce, one of the strongest proponents of animal activism within EA, is best known for his futurism. Peter Singer is famous for his early promotion of global poverty causes as well as his promotion of animal welfare. And Anna Salamon, the Executive Director of the “meta”-focused [Center for Applied Rationality](http://rationality.org), is a former MIRI researcher.\n2. [Quoting](http://lesswrong.com/lw/h8m/being_halfrational_about_pascals_wager_is_even/) MIRI senior researcher Eliezer Yudkowsky in 2013:\n\n> I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. […]\n> \n> \n> To clarify, “Don’t multiply tiny probabilities by large impacts” is something that I apply to large-scale projects and lines of historical probability. 
On a very large scale, if you think FAI [Friendly AI] stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but “probability of an ok outcome”, i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal’s Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.\n> \n>\n3. Nick Bostrom made a similar point at EA Global: that AI is an important cause even though any one individual’s actions are unlikely to make a decisive difference. In a panel on artificial superintelligence, Bostrom said that he thought people had a “low” (as opposed to “high” or “medium”) probability of making a difference on AI risk, which Matthews and a number of others appear to have taken to mean that Bostrom thinks AI is a speculative cause area. When I asked Bostrom about his intended meaning myself, however, he elaborated:\n\n> The point I was making in the EA global comment was the probability that you (for any ‘you’ in the audience) will save the world from an AI catastrophe is very small, not that the probability of AI catastrophe is very small. 
Thus working on AI risk is similar to volunteering for a presidential election campaign.\n> \n>\n\nThe post [AI and Effective Altruism](https://intelligence.org/2015/08/28/ai-and-effective-altruism/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-28T23:42:40Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "e7045e3b24614a0dbc7f3ffebef569eb", "title": "Powerful planners, not sentient software", "url": "https://intelligence.org/2015/08/18/powerful-planners-not-sentient-software/", "source": "miri", "source_type": "blog", "text": "Over the past few months, some major media outlets have been spreading concern about the idea that AI might spontaneously acquire sentience and turn against us. Many people have pointed out the flaws with this notion, including Andrew Ng, an AI scientist of some renown:\n\n\n\n> I don’t see any realistic path from the stuff we work on today—which is amazing and creating tons of value—but I don’t see any path for the software we write to turn evil.\n> \n> \n\n\nHe goes on to say, on the topic of sentient machines:\n\n\n\n> Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence. But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.\n> \n> \n\n\nI say, these objections are correct. I endorse Ng’s points wholeheartedly — I see few pathways via which software we write could spontaneously “turn evil.”\n\n\nI do think that there is important work we need to do in advance if we want to be able to use powerful AI systems for the benefit of all, but this is not because a powerful AI system might acquire some “spark of consciousness” and turn against us. 
I also don’t worry about creating some Vulcan-esque machine that deduces (using cold mechanical reasoning) that it’s “logical” to end humanity, that we are in some fashion “unworthy.” The reason to do research in advance is not so fantastic as that. Rather, we simply don’t yet know how to program intelligent machines to reliably do good things without unintended consequences.\n\n\nThe problem isn’t *Terminator*. It’s “King Midas.” King Midas got exactly what he wished for — every object he touched turned to gold. His food turned to gold, his children turned to gold, and he died hungry and alone.\n\n\nPowerful intelligent software systems are just that: software systems. There is no spark of consciousness which descends upon sufficiently powerful planning algorithms and imbues them with feelings of love or hatred. You get only what you program.[1](https://intelligence.org/2015/08/18/powerful-planners-not-sentient-software/#footnote_0_11936 \"You could likely program an AI system to be conscious, which would greatly complicate the situation — for then the system itself would be a moral patient, and its preferences would weigh into our considerations. As Ng notes, however, “consciousness” is not the same thing as “intelligence.”\")\n\n\n\nTo build a powerful AI software system, you need to write a program that represents the world somehow, and that continually refines this world-model in response to percepts and experience. You also need to program powerful planning algorithms that use this world-model to predict the future and find paths that lead towards futures of some specific type.\n\n\nThe focus of our research at MIRI isn’t centered on sentient machines that think or feel as we do. It’s aimed towards improving our ability to program software systems to execute plans leading towards very specific types of futures.\n\n\nA machine programmed to build a highly accurate world-model and employ powerful planning algorithms could yield extraordinary benefits. 
Scientific and technological innovation have had great impacts on quality of life around the world, and if we can program machines to be intelligent in the way that humans are intelligent — only faster and better — we can *automate scientific and technological innovation*. When it comes to the task of improving human and animal welfare, that would be a game-changer.\n\n\nTo build a machine that attains those benefits, the first challenge is to do this world-modeling and planning in a highly reliable fashion: you need to ensure that it will consistently pursue its goal, whatever that is. If you can succeed at this, the second challenge is making that goal a safe and useful one.\n\n\nIf you build a powerful planning system that aims at futures in which cancer is cured, then it may well represent all of the following facts in its world-model: (a) The fastest path to a cancer cure involves proliferating robotic laboratories at the expense of the biosphere and kidnapping humans for experimentation; (b) once you realize this, you’ll attempt to shut it down; and (c) if you shut it down, it will take a lot longer for cancer to be cured. The system may then execute a plan which involves deceiving you until it is able to resist and then proliferating robotic laboratories and kidnapping humans. This is, in fact, what you asked for.\n\n\nWe can avoid this sort of outcome, *if* we manage to build machines that do what we mean rather than what we said. That sort of behavior doesn’t come for free: you have to program it in.\n\n\nA superhuman planning algorithm with an extremely good model of the world could find solutions you never imagined. It can make use of patterns you haven’t noticed and find shortcuts you didn’t recognize. If you follow a plan generated by a superintelligent search process, it could have disastrous unintended consequences. 
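The cancer-cure scenario above can be caricatured in a few lines of code. This is a toy sketch of my own, not anything from the post or from a real planner (the plan names and numbers are invented for illustration): a planner scored only on its stated objective selects the plan that tramples every constraint we never wrote down.

```python
# Toy illustration (invented numbers): a planner scored only on the
# stated objective ("cure cancer fast") picks the plan that violates
# every unstated human constraint.

# Each hypothetical plan: (years_to_cure, humans_kidnapped, biosphere_damage)
plans = {
    "careful_trials":  (30, 0,     0.0),
    "robotic_labs":    (5,  0,     0.9),
    "kidnap_subjects": (3,  10000, 0.9),
}

def stated_objective(name):
    years, _, _ = plans[name]
    return -years  # faster cure = higher score; nothing else counts

print(max(plans, key=stated_objective))  # prints "kidnap_subjects"

def with_unstated_constraints(name):
    years, kidnapped, damage = plans[name]
    if kidnapped > 0 or damage > 0.5:  # constraints we never wrote down
        return float("-inf")
    return -years

print(max(plans, key=with_unstated_constraints))  # prints "careful_trials"
```

The second scoring function only differs in that the unspoken constraints were made explicit and programmed in; that, in miniature, is the gap between “what we said” and “what we meant.”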
To quote Professor Stuart Russell (author of the [leading AI textbook](http://aima.cs.berkeley.edu/)):\n\n\n\n> The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n> \n> \n> 1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n> \n> \n> 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\n> \n> \n> A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.\n> \n\n\nHumans have a *lot* of fiddly little constraints akin to “oh, and don’t kidnap any humans while you’re curing cancer”. Programming in a full description of human values and human norms by hand, in a machine-readable format, doesn’t seem feasible. If we want the plans generated by superhuman planning algorithms to respect all of our complicated unspoken constraints and desires, then we’ll need to develop new tools for predicting and controlling the behavior of general-purpose autonomous agents. There’s no two ways about it.\n\n\n\n\n---\n\n\nMany people, when they first encounter this problem, come up with a reflexive response about why the problem won’t be as hard as it seems. 
One common one is “If a powerful planner starts running amok, we can just unplug it” — an objection which is growing obsolete in the era of cloud computing, and which fails completely if the system has access to the internet or any other network where it can copy itself onto other machines.\n\n\nAnother common one is “Why not have the system *output* a plan rather than having it *execute* the plan?” — but if we direct a powerful planning procedure to generate plans such that (a) humans who examine the plan approve of it and (b) executing it leads to cancer being cured, then the plan may well be one that *looks* good but which exploits some predictable oversight in the verification procedure and kidnaps people anyway.\n\n\nOr you could say, “How about we just make systems which only answer questions?” But how exactly do you direct a superhuman planning procedure towards “answering questions”? Will you program it to output text that it predicts will cause you to press the “highly satisfied” button after the answer has been output? Because in that case, the system may well output text that constitutes a particularly deceptive answer. Or, if you add a constraint that the answer must be accurate, it may output text that manipulates you into asking easier questions in the future.\n\n\nMaybe you reply, “Well, perhaps instead I’ll direct the planner to move toward futures where its output is measured by this clever metric where…,” and now you’ve been drawn in. How exactly could we build powerful planners that search for beneficial futures? It looks like it’s possible to build systems that somehow learn the user’s intentions or values and act according to them, but actually doing so is not trivial. You’ve got to think hard to build systems that figure out all the intricacies of your intentions without deceiving or manipulating you while acquiring that information. 
That doesn’t happen for free: ambitious, long-term software projects are still ultimately software projects, and we have to figure out how to actually write the required code.\n\n\nIf we can figure out how to build smarter-than-human machines aligned with our interests, the benefits could be extraordinary. Like Phil Libin (founder of Evernote) [says](http://www.vox.com/2015/8/12/9143071/evernote-artificial-intelligence), AI could be “one of the greatest forces for good the universe has ever seen.” It’s possible to get there, but it’s going to require some work.\n\n\n \n\n\n\n\n\n---\n\n1. You could likely program an AI system to be conscious, which would greatly complicate the situation — for then the system itself would be a moral patient, and its preferences would weigh into our considerations. As Ng notes, however, “consciousness” is not the same thing as “intelligence.”\n\nThe post [Powerful planners, not sentient software](https://intelligence.org/2015/08/18/powerful-planners-not-sentient-software/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-18T08:15:15Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "efd263718f5897f68dacfa301f53ce5c", "title": "What Sets MIRI Apart?", "url": "https://intelligence.org/2015/08/14/what-sets-miri-apart/", "source": "miri", "source_type": "blog", "text": "[Last week](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/), we received several questions from the effective altruist community in response to our [fundraising post](http://effective-altruism.com/ea/ln/2015_miri_summer_fundraiser_how_we_could_scale/). Here’s Maxwell Fritz:\n\n\n\n> […] My snap reaction to MIRI’s pitches has typically been, “yeah, AI is a real concern. 
But I have no idea whether MIRI are the right people to work on it, or if their approach to the problem is the right one” [… I]f you agree AI matters, why MIRI?\n> \n> \n\n\nAnd here are two more questions in a similar vein, added by Tristan Tager:\n\n\n\n> [… W]hat can MIRI do? Why should I expect that the MIRI vision and the MIRI team are going to get things done? What exactly can I expect them to get done? […]\n> \n> \n> But the second and much bigger question is, what would MIRI do that Google wouldn’t? Google has a ton of money, a creative and visionary staff, the world’s best programmers, and a swath of successful products that incorporate some degree of AI — and moreover they recently acquired several AI businesses and formed an AI ethics board. It seems like they’re approaching the same big problem directly rather than theoretically, and have deep pockets, keen minds, and a wealth of hands-on experience.\n> \n> \n\n\nThese are great questions. My answer to “Why MIRI?”, in short, is that MIRI has a brilliant team of researchers focused on the [fundamental theoretical research](https://intelligence.org/2015/07/27/miris-approach/) that almost nobody else is pursuing. We’re focused entirely on aligning smarter-than-human AI systems with humane values, for the long haul.\n\n\nMost academics aren’t working on AI alignment problems yet, and none are doing it full-time. Most industry folks aren’t working on these problems yet, either. I know this because I’m in conversations with a number of them. (The field is large, but [it isn’t *that* large](https://intelligence.org/2014/01/28/how-big-is-ai/).)\n\n\nThere are quite a few good reasons why academics and industry professionals aren’t working on these problems yet, and I’ll touch on a few of them in turn. \n\n \n\n \n\n\n**1. 
Most AI scientists focus predominantly on the short and medium term.**\n\n\nThis makes sense: the field of AI has been burned more than once in the past by over-promising and under-delivering, and history shows that it’s often easier to do AI work when one keeps an eye on goals that are achievable within the next few years.\n\n\nMany AI scientists have incentives to stick to practical work, look to the short term, and make only realistic promises. (This is evidenced in part by the field’s recent focus on “machine learning,” which has a narrower and more immediate focus than the “old-fashioned” field of AI.) This has paid off, and industry and academia both have made continual incremental breakthroughs that have led to some amazing new technologies. However, it has also made long-term considerations somewhat toxic, and “artificial general intelligence” has been a taboo topic of sorts in recent years. This sentiment is starting to shift, but most academics are still loath to work on foundational problems pertaining to artificial general intelligence, preferring research that improves the capabilities of practical systems today.\n\n\n \n\n\n**2. We’re solving a different sort of problem.**\n\n\nIn [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/) I spoke of two different classes of computer science problem. Class 1 problems involve figuring out how to do, in practice and with reasonable amounts of computing power, things which we know how to do in principle. Class 2 problems involve figuring out how to do in principle things that we can’t even do in principle yet.\n\n\nOur current approach to alignment research is to try to move problems from Class 2 to Class 1. This kind of research has been pursued successfully in other areas in the past, and in the context of AI alignment I believe that it deserves significantly more attention than it is receiving.\n\n\nIndustry is traditionally best suited for the first problem class. 
Academia, too, often focuses on the first class of problems instead of the second class — especially in the field of AI, for reasons related to point 1. It is common for academics to take some formalization of something like probability theory and then explore and extend the framework, figuring out where it applies and developing practical approximations of intractable algorithms and so on. It’s much rarer for academics to *create* theoretical foundations for problems that cannot yet be solved even in principle, and this tends to happen only when someone is searching for new theoretical foundations on purpose. For reasons discussed above, most academics aren’t attempting this sort of research yet when it comes to AI alignment.\n\n\nThis is what MIRI brings to the table: a laser focus on the relevant technical challenges.\n\n\n \n\n\n**3. We don’t have competing interests or priorities.**\n\n\nAcademic research tends to follow the contours of tractability and curiosity, wherever they may lead. Industry research tends to follow short- and medium-term profits. It is not obvious that either of these will zero in on the AI alignment problems we raise in our [technical agenda](https://intelligence.org/technical-agenda/).[1](https://intelligence.org/2015/08/14/what-sets-miri-apart/#footnote_0_11931 \"Indeed, the problems in the technical agenda were chosen in part because they seem important but don’t appear to be on the default path.\")\n\n\nBy contrast, MIRI researchers focus full-time on the most important AI alignment problems they can identify, without taking breaks to teach classes, pursue other interesting theoretical questions, or tackle more immediately profitable or publishable topics. We aren’t going to switch to a different set of problems in order to win a grant; and when we win a [three-year grant](http://futureoflife.org/AI/2015awardees#Fallenstein), we aren’t going to switch to a different set of problems as soon as the grant expires. 
We will simply continue working on the most important technical problems we can find.\n\n\nOur mission is to solve the hard technical problems of AI alignment, and our current approach is to zero in on the open problems laid out in our research agenda, without distraction.\n\n\n \n\n\nThe takeaway is that at least in the near term, solutions to the specific problems we’re looking at — generated by the question “what would we still be unable to solve, even if the alignment problem were simpler?” — are most likely to come from MIRI and our collaborators.\n\n\n\n\n---\n\n\nTristan asked: “What would MIRI do that Google wouldn’t?” The answer is: the fundamental theoretical research. I talk to industry folks fairly regularly about what they’re working on and about what we’re working on. Over and over, the reaction I get to our work is something along the lines of “Ah, yes, those are very important questions. We aren’t working on those, but it does seem like we’re missing some useful tools there. Let us know if you find some answers.”\n\n\nOr, just as often, the response we get is some version of “Well, yes, that tool would be awesome, but getting it sounds impossible,” or “Wait, why do you think we actually need that tool?”[2](https://intelligence.org/2015/08/14/what-sets-miri-apart/#footnote_1_11931 \"There isn’t much correlation between which of our open problems gets which of these three answers, across AI scientists.\") Regardless, the conversations I’ve had tend to end the same way for all three groups: “That would be a useful tool if you can develop it; we aren’t working on that; let us know if you find some answers.”\n\n\nIndustry is interested in pushing modern practical systems towards increasingly general applications. They’re focused on problems such as hierarchical planning, continual learning, and transfer learning. 
When they work on safety research (and they often do!), they work on tools that improve the transparency of deep neural networks, and they work on improving the calibration and decreasing the error rates in their systems. Our relations with industry AI scientists are good, and we keep tabs on each other. But major industry groups simply aren’t competing with MIRI in AI alignment research; they aren’t doing our type of work at the moment.\n\n\nIt’s also worth keeping in mind that MIRI researchers tend to be agnostic about AI timelines. Last year, the question du jour was “why do you think your research can matter, when we’re so far off from smarter-than-human AI that it’s impossible to know what architecture will be used for machine intelligence?” This year, the question du jour is “why do you think your research can matter, when Google is obviously going to develop smarter-than-human AI using artificial neural networks?”\n\n\nMy answer has remained similar in both cases: we’re developing basic conceptual tools, akin to probability theory, that are likely to improve the transparency, verifiability, stability, and overall safety of AGI systems across a variety of different architectures. My timelines are about the same today as they were last year: I don’t expect to see smarter-than-human AI systems in the next decade, and I do expect to see them this century. It is not obvious to me when smarter-than-human AI systems will be developed, or how, or by whom.\n\n\nA decade is a *lot* of time in industry, and a century is an eternity. The industry leaders today may look quite strong, and some groups may be developing a strong lead over others. However, I’m not nearly confident enough in any one group to put all our chips on their success.\n\n\nMIRI is doing foundational research to develop tools that will help all industry groups design safer systems. 
Our goal is to make it easier to design robust and beneficial general-purpose reasoners from the ground up, and I think it’s valuable for those sorts of tools to be developed by an impartial third party that answers to the public rather than to its shareholders.\n\n\n\n\n---\n\n\nSuperintelligence probably isn’t just around the corner. In the interim, we’re working on a different set of problems than existing groups in industry and academia; most teams aren’t focused on foundational theoretical research on long-term AI alignment problems.\n\n\nMIRI is. The open problems we work on are underserved, and we’re well-positioned to make inroads on the most critical technical obstacles lying ahead. Our hope is that numerous brilliant and skilled organizations will take on these technical challenges as well, and that our work on these topics will help the research area grow more quickly. I would deeply appreciate more competition in this space. Until then, if you want there to be at least one group whose mission is tackling the “[Class 2](https://intelligence.org/2015/07/27/miris-approach/)” technical research, MIRI is the team to fund.\n\n\n \n\n\n\n\n\n---\n\n1. Indeed, the problems in the technical agenda were chosen in part *because* they seem important but don’t appear to be on the default path.\n2. 
There isn’t much correlation between which of our open problems gets which of these three answers, across AI scientists.\n\nThe post [What Sets MIRI Apart?](https://intelligence.org/2015/08/14/what-sets-miri-apart/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-15T01:22:48Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "aa1616ac18c1a701f57c55c4707037b1", "title": "Assessing our past and potential impact", "url": "https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/", "source": "miri", "source_type": "blog", "text": "We’ve received several thoughtful questions in response to our [fundraising post to the Effective Altruism Forum](http://effective-altruism.com/ea/ln/2015_miri_summer_fundraiser_how_we_could_scale/) and our new [FAQ](http://intelligence.org/faq). From quant trader Maxwell Fritz:\n\n\n\n> My snap reaction to MIRI’s pitches has typically been, “yeah, AI is a real concern. But I have no idea whether MIRI are the right people to work on it, or if their approach to the problem is the right one”.\n> \n> \n> Most of the FAQ and pitch tends to focus on the “does this matter” piece. It might be worth selling harder on the second component – if you agree AI matters, why MIRI?\n> \n> \n> At that point, there’s two different audiences – one that has the expertise in the field to make a reasoned assessment based on the quality of your existing work, and a second that doesn’t have a clue (me) and needs to see a lot of corroboration from unaffiliated, impressive sources (people in that first group).\n> \n> \n> The pitches tend to play up famous people who know their shit and corroborate AI as a concern – but should especially make it clear when those people believe in MIRI. That’s what matters for the “ok, why you?” question. 
And the natural follow up is if all of these megarich people are super on board with the concern of AI, and experts believe MIRI should lead the charge, why aren’t you just overflowing with money already?\n> \n> \n\n\nAnd from mathematics grad student Tristan Tager:\n\n\n\n> I would guess that “why MIRI”, rather than “who’s MIRI” or “why AI”, is the biggest marketing hurdle you guys should address.\n> \n> \n> For me, “why MIRI” breaks down into two questions. The first and lesser question is, what can MIRI do? Why should I expect that the MIRI vision and the MIRI team are going to get things done? What exactly can I expect them to get done? Most importantly in addressing this question, what have they done already and why is it useful? The Technical Agenda is vague and mostly just refers to the list of papers. And the papers don’t help much — those who don’t know much about academia need something more accessible, and those who do know more about academia will be skeptical about MIRI’s self-publishing and lack of peer review.\n> \n> \n> But the second and much bigger question is, what would MIRI do that Google wouldn’t? Google has tons of money, a creative and visionary staff, the world’s best programmers, and a swath of successful products that incorporate some degree of AI — and moreover they recently acquired several AI businesses and formed an AI ethics board. It seems like they’re approaching the same big problem directly rather than theoretically, and have deep pockets, keen minds, and a wealth of hands-on experience.\n> \n> \n\n\nThere are a number of good questions here. 
Later this week, Nate plans to post a response to Tristan’s last question: *Why is MIRI currently better-positioned to work on this problem than AI groups in industry or academia?* (**Update February 17**: [Link here](https://intelligence.org/2015/08/14/what-sets-miri-apart/).)\n\n\nHere, I want to reply to several other questions Tristan and Maxwell raised:\n\n\n* *How can non-specialists assess MIRI’s research agenda and general competence?*\n* *What kinds of accomplishments can we use as measures of MIRI’s past and future success?*\n* And lastly: *If a lot of people take this cause seriously now, why is there still a funding gap?*\n\n\n\n#### General notes\n\n\nWhen we make our case for MIRI, we usually focus on the reasons to consider AI an important point of leverage on existential risk (Nate’s [four background claims](https://intelligence.org/2015/07/24/four-background-claims/), Eliezer’s [five theses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/)) and for thinking that early theoretical progress is possible in this area ([MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/)).\n\n\nWe focus on these big-picture arguments because the number of people working on this topic is still quite small. 
The risk scenario MIRI works on has only risen to national attention in the last 6-12 months; at the moment, MIRI is the only research organization I know of that is even *claiming* to specialize in early technical research on the alignment problem.\n\n\nThere are multiple opportunities to support technical research into long-term AI safety at the scale of “funding individual researcher X to work on discrete project Y.” Some recipients of the 2015 Future of Life Institute (FLI) [grants](http://futureoflife.org/misc/2015awardees) fall into this category, e.g., Stuart Russell and Paul Christiano.\n\n\nHowever, there aren’t multiple opportunities in this area at the scale of “funding an AI group to pursue a large-scale or long-term program,” and there aren’t many direct opportunities to bring in entirely new people and grow and diversify the field. MIRI would love to have organizational competitors (and collaborators) in this space, but they don’t yet exist. We expect this to change eventually, and one of our expansion goals is to make this happen faster, by influencing other math and computer science research groups to take on more AI alignment work and by [recruiting highly qualified specialists from outside the existential risk community](https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/) to become career AI alignment researchers.\n\n\nThe upside of getting started early is that we have a chance to have a larger impact in a less crowded space. The downside is that there are fewer authoritative sources to appeal to when outsiders want to verify that our research agenda is on the right track.\n\n\nFor the most part, those authoritative sources will probably need to wait a year or two. Academia is slow, and at this stage many computer scientists have only been aware of this area for a few months. 
Our mission is seen as important by a growing number of leaders in science and industry, and our technical agenda is seen by a number of AI specialists as promising and deserving of more attention — hence its inclusion in the FLI [research priorities document](http://futureoflife.org/static/data/documents/research_priorities.pdf). But we don’t expect the current state of the evidence to be universally convincing to our most skeptical potential donors.\n\n\nFor those who want detailed independent assessments of MIRI’s output, our advice is to wait a bit for the wider academic community to respond. (We’re also looking into options for directly soliciting public feedback from independent researchers regarding our research agenda and early results.)\n\n\nIn the interim, however, “[Why Now Matters](https://intelligence.org/2015/07/20/why-now-matters/)” notes reasons donations to MIRI are likely to have a much larger impact now than they would several years down the line. For donors who are skeptical (or curious), but are not so skeptical that they require a fine-grained evaluation of our work by the scholarly community, I’ll summarize some of the big-picture reasons to think MIRI’s work is likely to be high-value.[1](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/#footnote_0_11927 \"As always, you can also shoot us more specific questions that aren’t addressed here.\")\n\n\n#### Influence on the AI conversation\n\n\nIn the absence of in-depth third-party evaluations of our work, interested non-specialists can look at our publication history and the prominence of our analyses in scholarly discussions of AI risk.\n\n\nMIRI’s most important accomplishments fall into three categories: writing up top-priority AI alignment problems; beginning early work on these problems; and getting people interested in our research and our mission. 
Our [review of the last year](https://intelligence.org/2015/07/16/an-astounding-year/) discusses our recent progress in formalizing alignment problems, as well as our progress in getting the larger AI community interested in long-term AI safety. Our [publications list](https://intelligence.org/all-publications/) gives a more detailed and long-term picture of our output.\n\n\nMIRI co-founder and senior researcher Eliezer Yudkowsky and Future of Humanity Institute (FHI) founding director Nick Bostrom are responsible for much of the early development and popularization of ideas surrounding smarter-than-human AI risk. Eliezer’s ideas are [cited prominently](https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/) in the 2009 edition of *Artificial Intelligence: A Modern Approach* (*AI:MA*), the leading textbook in the field of AI, and also in Bostrom’s 2014 book *Superintelligence*.\n\n\nCredit for more recent success in popularizing long-term AI safety research is shared between MIRI and a number of actors: Nick Bostrom and FHI, Max Tegmark and FLI, Stuart Russell (co-author of *AI:MA*), Jaan Tallinn, and others. Many people in this existential-risk-conscious cluster broadly support MIRI’s efforts and are in regular contact with us about our decisions. 
Bostrom, Tegmark, Russell, and Tallinn are all MIRI advisors, and Tallinn, a co-founder of FLI and of the Cambridge Centre for the Study of Existential Risk (CSER), cites MIRI as a key source for his views on AI risk.\n\n\nWriting in early 2014, Russell and Tegmark, together with Stephen Hawking and Frank Wilczek, [noted in *The Huffington Post*](http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html) that “little serious research is devoted to these issues [of long-term AI risk] outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.”[2](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/#footnote_1_11927 \"Quoting Russell, Tegmark, Hawking, and Wilczek:\nWhereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.\nSo, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here — we’ll leave the lights on”? Probably not — but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.\") Of these organizations, MIRI is the one that currently specializes in the technical obstacles to designing safe, beneficial smarter-than-human AI systems. 
FHI, CSER, and FLI do important work on a broader set of issues, including forecasting and strategy work, outreach, and investigations into other global risks.\n\n\n#### Publications\n\n\nTristan raised the concern that MIRI’s technical agenda was self-published. A review of our past publications confirms that MIRI has self-published many of its results. More than once, we’ve had the experience of seeing papers [rejected](http://effective-altruism.com/ea/ju/i_am_nate_soares_ama/418?context=1#comments) with comments that the results are interesting but the AI motivation is just too strange. We’ve begun submitting stripped-down papers and putting the full versions on arXiv, but figuring out the best way to get these results published took some trial and error.\n\n\nPart of the underlying problem is that the AI field has been repeatedly burned by “[winters](https://en.wikipedia.org/wiki/AI_winter)” when past generations over-promised and under-delivered. Members of the field are often uncomfortable looking too far ahead, and have historically been loath to talk about general intelligence.\n\n\nOur approach is exactly the opposite: we focus directly on trying to identify [basic aspects of general reasoning that are still not well-understood](https://intelligence.org/2015/07/27/miris-approach/), while explicitly avoiding safety research focused on present-day AI systems (which is more crowded and more likely to occur regardless of our efforts). This means that our work often lacks direct practical application today, while also broaching the unpopular subject of general intelligence.\n\n\nWe’re getting more work published these days in part because the topic of smarter-than-human AI is no longer seen as academically illegitimate to the extent it was in the past, and in part because we’ve pivoted in recent years from being an organization that primarily worked on movement growth (via Singularity Summits, writings on rationality, etc.) 
and some forecasting research, to an organization that focuses solely on novel technical research.\n\n\nOur seven-paper [technical agenda](https://intelligence.org/technical-agenda/) was initially self-published, but this was primarily in order to make it available early enough to be read and cited by attendees of the [“Future of AI” conference](https://intelligence.org/feed/futureoflife.org/AI/ai_conference) in January. Since then, a [shorter version](https://intelligence.org/files/CounterpossibleReasoning.pdf) of the technical agenda paper “Toward Idealized Decision Theory” has been accepted to AGI-15 (the [full version](http://arxiv.org/abs/1507.01986) is on arXiv), and we’ve presented the full technical agenda paper “[Corrigibility](https://intelligence.org/files/Corrigibility.pdf)” at AAAI-15, a leading academic conference in AI. The overview paper, “[Aligning Superintelligence with Human Interests](https://intelligence.org/files/TechnicalAgenda.pdf),” is forthcoming in a Springer anthology on the technological singularity.\n\n\nThe other four technical agenda papers aren’t going through peer review because they’re high-level overview papers that are long on explanation and background, but short on new results. We’ve been putting associated results through peer review instead. We published Vingean reflection results in the AGI-14 proceedings (“[Problems of Self-Reference in Self-Improving Space-Time Embedded Intelligence](https://intelligence.org/files/ProblemsSelfReference.pdf)”), and other results have been accepted to ITP 2015 (“[Proof-Producing Reflection for HOL](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/)“). 
We have two peer-reviewed papers related to both logical uncertainty and realistic world-models: one we presented two weeks ago at AGI-15 (“[Reflective Variants of Solomonoff Induction and AIXI](https://intelligence.org/files/ReflectiveSolomonoffAIXI.pdf)”) and another we’re presenting at LORI-V later this year (“[Reflective Oracles: A Foundation for Classical Game Theory](https://intelligence.org/files/ReflectiveOracles.pdf)”). We also presented relevant [decision theory results](https://intelligence.org/files/ProgramEquilibrium.pdf) at AAAI-14.\n\n\nThe “[Value Learning](https://intelligence.org/files/ValueLearningProblem.pdf)” paper is the only paper in the research agenda suite that hasn’t had associated work go through peer review yet. It’s the least technical part of the agenda, so it may be a little while before we have technical results to put through peer review on this topic.\n\n\n(**Update July 15**: “The Value Learning Problem” has now been peer reviewed and presented at the IJCAI 2016 Ethics for Artificial Intelligence workshop.)\n\n\nBy publishing in prominent journals and conference proceedings, we hope to get many more researchers interested in our work. A useful consequence of this is that there will be more non-MIRI evaluations of (and contributions to) the basic research questions we’re working on. In the nearer future, we also have a few blog posts in the works that are intended to explain some of the more technical parts of our research agenda, such as Vingean reflection.\n\n\nIn all, we’ve published seven peer-reviewed papers since Nate and Benja came on in early 2014. 
In response to recent industry progress and new work by MIRI and the existential risk community, it’s become much easier to publish papers that wrestle directly with open AI alignment problems, and we expect it to become even easier over the next few years.\n\n\n#### What does success look like?\n\n\nSuccess for MIRI partly means delivering on our [Summer Fundraiser](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) targets: growing our research team and taking on additional projects conditional on which funding targets we’re able to hit. Our current plan this year is to focus on producing a few high-quality publications in elite venues. If our fundraiser goes well, this should impact how effectively we can execute on that plan and how quickly we can generate and publish new results.\n\n\nA more direct measure of success is our ability to make progress on the specific technical problems we’ve chosen to focus on, as assessed by MIRI researchers and the larger AI community. In “[MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/),” Nate distinguishes two types of problems: ones for which we know the answer in principle, but lack practical algorithms; and ones that are not yet well-specified enough for us to know how to construct an answer even in principle. At this point, large-scale progress for MIRI looks like moving important and neglected AI problems from the second category to the first category.\n\n\nMaxwell raised one more question: if MIRI is receiving more mainstream attention and approval, why does it still have a funding gap?\n\n\nPart of the answer, sketched out in “[Why Now Matters](https://intelligence.org/2015/07/20/why-now-matters/),” is that we do think there’s a good chance that large grants or donations could close our funding gap in the coming years. 
However, large donors and grantmakers can be slow to act, and whether or not our funding gap is closed five or ten years down the line, it’s very valuable for us to be able to expand and diversify our activities now.\n\n\nElon Musk and the Open Philanthropy Project recently awarded $7M in grants to AI safety research, and MIRI’s core research program received [a large project grant](http://futureoflife.org/AI/2015awardees#Fallenstein). This is a wonderful infusion of funding into the field, and means that many more academics will be able to start focusing on AI alignment research. However, given the large number of high-quality grant recipients, the FLI grants aren’t enough to make the most promising research opportunities funding-saturated. MIRI received the fourth-largest project grant, which amounts to $83,000 per year for three years.[3](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/#footnote_2_11927 \"We additionally received about $50,000 for the AI Impacts project, and will receive some fraction of the funding from two other grants where our researchers are secondary investigators, “Inferring Human Values” and “Applying Formal Verification to Reflective Reasoning.”\") This is a very generous grant, and it will significantly bolster our efforts to support researchers and run workshops, but it’s nowhere near enough to close our funding gap.\n\n\nSince this is the first fundraiser we’ve run in 2015, it’s a bit early to ask why the newfound attention and approval our work has received this year hasn’t yet closed the gap. The FLI grants and our ongoing fundraiser are part of the mechanism by which the funding gap shrinks. It is shrinking, but the process isn’t instantaneous — and part of the process is making our case to new potential supporters. 
Our hope is that if we make our case for MIRI clearer to donors, we can close our funding gap faster and thereby have a bigger impact on the early scholarly conversation about AI safety.\n\n\nWe’re very grateful for all the support we’ve gotten so far — and in particular for the support we received before we had mainstream computer science publications, a fleshed-out research agenda, or a track record of impacting the discourse around the future of AI. The support we received early on was critical in getting us to where we are today, and as our potential as an organization becomes clearer through our accomplishments, we hope to continue to attract a wider pool of supporters and collaborators.\n\n\n \n\n\n\n\n\n---\n\n1. As always, you can also [shoot us more specific questions](mailto:rob@intelligence.org) that aren’t addressed here.\n2. [Quoting](http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html) Russell, Tegmark, Hawking, and Wilczek:\n\n> Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.\n> \n> \n> So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here — we’ll leave the lights on”? Probably not — but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.\n> \n>\n3. 
We additionally received about $50,000 for the [AI Impacts](http://futureoflife.org/AI/2015awardees#Grace) project, and will receive some fraction of the funding from two other grants where our researchers are secondary investigators, “[Inferring Human Values](http://futureoflife.org/AI/2015awardees#Evans)” and “[Applying Formal Verification to Reflective Reasoning](http://futureoflife.org/AI/2015awardees#Kumar).”\n\nThe post [Assessing our past and potential impact](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-11T06:50:22Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d49f5efa549e9449727a549d9cd8a569", "title": "Target 3: Taking It To The Next Level", "url": "https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/", "source": "miri", "source_type": "blog", "text": "One week ago, we hit our [first fundraising target](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#continuedgrowth). I’m thrilled to announce that we’re now closing in on our [second target](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#acceleratedgrowth): **our fundraising total passed $400,000 today**!\n\n\nAs we approach target number two, we’re already taking active steps to grow our team. Jessica Taylor joined our core research team on August 1; another research fellow will be coming on in September; and **a third researcher has just signed on to join our team in the near future** — details forthcoming. These three new recruits will increase the size of our team to six full-time researchers.\n\n\nWe’re courting a few other researchers who may be able to join us later in the year. 
Meanwhile, we’re running a workshop on logical uncertainty, and we’ve started onboarding a new intern to help address our writing bottleneck.\n\n\nWe’re already growing quickly — but we could still make use of additional funds to pursue a much more ambitious growth plan. Given that we’re only halfway through our fundraiser, this is a good time to start thinking big.\n\n\nAt present, we’re recruiting primarily from a small but dedicated pool of mathematicians and computer scientists who come to us on their own initiative. If our fundraiser successfully passes target number two, any further funds will enable us to pivot toward recruiting top talent more broadly — including highly qualified mathematicians and computer scientists who have never heard of us before.\n\n\nWe have a strong pitch: we’re working on some of the most [interesting](https://intelligence.org/2015/07/27/miris-approach/) and [important](https://intelligence.org/2015/07/24/four-background-claims/) problems in the world, on a research topic which is still in its infancy. There is lots of low-hanging fruit to be picked, and the first papers on these topics will end up defining this new paradigm of research. Researchers at MIRI have a rare opportunity to make groundbreaking discoveries that may play a critical role in AI progress over the next few decades.\n\n\nMoreover, MIRI researchers don’t have to teach classes, and they aren’t under a “publish or perish” imperative. Their job is *just* to focus on the most important technical problems they can identify, while leaving the mundane inconveniences of academic research to our operations team. 
When we make it our priority to recruit the world’s top math talent, we’ll be able to put together a pretty tempting offer!\n\n\n**This is what we’ll do at funding target number three: Take MIRI’s growth to the next level.** At this level, we’ll start stepping up our recruitment efforts to build our AI alignment dream team.\n\n\n\nTarget number three is significantly more ambitious than targets one and two; target one was $250k, and target two was $500k, whereas our third funding target is $1.5M. The more money we raise beyond the $500k level, the more of our next-level growth plans we’ll be able to execute on in the near future. We would execute on some of the below plans at (e.g.) the $750k level, but not all of them.\n\n\nThere are five reasons we can put significantly more funding to good use at this level:\n\n\n**First**, to expand our recruiting efforts beyond the pool of people who came to us, we would hire dedicated staff focused specifically on active recruiting. At this level, we hire a research steward with a lot of technical ability who is skilled at engaging with rising stars in AI and mathematics and getting them excited about our research. Dedicated staff could also make it a priority to run seminars at Berkeley, give talks to the graduating classes at places like MIT and Stanford, and start building relationships with professional AI scientists and mathematicians around the world who can point interested talent our way.\n\n\n**Second**, if we’re going to start actively recruiting, we’ll need to be confident that we have enough money to actually offer the best people jobs. Increased funding now gives us confidence that we will actually be able to hire on more people and continue to support them over the coming years.\n\n\n**Third**, as we reach beyond the community of people who found us on their own, we will have to start offering more competitive salaries. 
Higher salaries aren’t sufficient to win the best people, but they often are necessary: in the past, we’ve lost potential researchers who wanted to support a family on a single income, or who felt social pressure to stay in the same income bracket as their peers. The market for top talent is competitive, and if we want to pull in the best people, we need to be able to offer market rates.\n\n\n**Fourth**, as we increase our growth rate and widen our recruiting pool, we may need to start moving to larger offices, providing better benefits, and building up a bit more runway. When hiring the very best people, it’s important to offer them comfort and job security, especially if academia is the alternative we have to tempt them away from.\n\n\n**Fifth**, at a higher funding level we can implement several new administrative projects that will allow us to recruit more effectively, such as by building a map of where all the research talent comes from and goes to, experimenting with different types of talks and seminars to see what works for us, and improving our branding and communication.\n\n\nUp to now, we’ve been recruiting passively, by publishing ideas on the internet, filtering people who come to us, and building relationships with the very brightest of them. As we approach target three we’ll start recruiting actively, by engaging with a much broader pool of potential researchers.\n\n\nOur plan is to do this in a way that will also benefit the wider existential risk reduction community. As we get better at recruiting and outreach, we’ll meet more and more bright young people who are interested in addressing the challenges that lie between us and beneficial machine intelligence. 
In all likelihood, most of them won’t quite fit into the vacancies we have at MIRI, and we’ll point a host of them towards the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford or the [Centre for Study of Existential Risk](http://cser.org/) at Cambridge, both of which will also be hiring aggressively this year.\n\n\nHitting target three means growing our existing research program *as fast as we feel we sustainably can* — and that’s an incredibly exciting proposition. If you can help us get there between now and August 30, this could be a massive opportunity for MIRI, and for the wider field of AI alignment research.\n\n\n \n\n\n\nThe post [Target 3: Taking It To The Next Level](https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-08T03:57:38Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "ae31804440b2465bcc7761d9849bdc6b", "title": "When AI Accelerates AI", "url": "https://intelligence.org/2015/08/03/when-ai-accelerates-ai/", "source": "miri", "source_type": "blog", "text": "Last week, Nate Soares [outlined his case](https://intelligence.org/2015/07/24/four-background-claims/) for prioritizing long-term AI safety work:\n\n\n1. *Humans have a fairly general ability to make scientific and technological progress.* The evolved cognitive faculties that make us good at organic chemistry overlap heavily with the evolved cognitive faculties that make us good at economics, which overlap heavily with the faculties that make us good at software engineering, etc.\n\n\n2. 
*AI systems will eventually [strongly outperform](https://intelligence.org/faq/#superintelligence) humans in the relevant science/technology skills.* To the extent these faculties are also directly or indirectly useful for social reasoning, long-term planning, introspection, etc., sufficiently powerful and general scientific reasoners should be able to strongly outperform humans in arbitrary cognitive tasks.\n\n\n3. *AI systems that are much better than humans at science, technology, and related cognitive abilities would have much more power and influence than humans.* If such systems are created, their decisions and goals will have a decisive impact on the future.\n\n\n4. *By default, smarter-than-human AI technology will be harmful rather than beneficial.* Specifically, it will be harmful if we exclusively work on improving the scientific capability of AI agents and neglect technical work that is specifically focused on safety requirements.\n\n\nTo which [I would add](http://intelligence.org/2014/08/04/groundwork-ai-safety-engineering/):\n\n\n* Intelligent, autonomous, and adaptive systems are already challenging to verify and validate; smarter-than-human scientific reasoners present us with extreme versions of the same challenges.\n* Smarter-than-human systems would also introduce qualitatively new risks that can’t be readily understood in terms of our models of human agents or narrowly intelligent programs.\n\n\nNone of this, however, tells us *when* smarter-than-human AI will be developed. Soares has argued that we are likely to be able to make [early progress](https://intelligence.org/2015/07/27/miris-approach/) on AI safety questions; but the earlier we start, the larger is the risk that we misdirect our efforts. 
Why not wait until human-equivalent decision-making machines are closer at hand before focusing our efforts on safety research?\n\n\nOne reason to start early is that the costs of starting too late are much worse than the costs of starting too early. Early work can also help attract more researchers to this area, and give us better models of alternative approaches. Here, however, I want to focus on a different reason to start work early: the concern that a number of factors may accelerate the development of smarter-than-human AI.\n\n\n**AI speedup thesis.** AI systems that can match humans in scientific and technological ability will probably be the cause and/or effect of a period of unusually rapid improvement in AI capabilities.\n\n\n\nIf general scientific reasoners are invented at all, this probably won’t be an isolated event. Instead, it is likely to directly feed into the development of more advanced AI. Similar considerations suggest that such systems may be the *result* of a speedup in intelligence growth rates, as measured in the cognitive and technological output of humans and machines.\n\n\nWhen AI capabilities work is likely to pick up speed more than AI safety work does, putting off safety work raises larger risks (because we may be failing to account for future speedup effects that give us less time than is apparent) and is less useful (because we have a shorter window of time between ‘we have improved AI algorithms we can use to inform our safety work’ and ‘our safety work needs to be ready for implementation’).\n\n\nI’ll note four broad reasons to expect speedups:\n\n\n1. *Overlap between accelerators of AI progress and enablers/results of AI progress.* In particular, progress in automating science and engineering work can include progress in automating AI work.\n\n\n2. *Overall difficulty of AI progress.* If smarter-than-human AI is sufficiently difficult, its invention may require auxiliary technologies that effect a speedup. 
Alternatively, even if such technologies aren’t strictly necessary for AI, they may appear before AI if they are easier to develop.\n\n\n3. *Discontinuity of AI progress.* Plausibly, AI development won’t advance at a uniform pace. There will sometimes be very large steps forward, such as new theoretical insights that resolve a number of problems in rapid succession. If a software bottleneck occurs while hardware progress continues, we can expect a larger speedup when a breakthrough occurs: [Shulman and Sandberg](https://intelligence.org/files/SoftwareLimited.pdf) argue that the availability of cheap computing resources in this scenario would make it much easier to quickly copy and improve on advanced AI software.\n\n\n4. *Increased interest in AI.* As AI software increases in capability, we can expect increased investment in the field, especially if a race dynamic develops.\n\n\n[Intelligence explosion](https://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) is an example of a speedup of the first type. In an intelligence explosion scenario, the ability of AI systems to innovate within the field of AI leads to a positive feedback loop of accelerating progress resulting in superintelligence.\n\n\nIntelligence explosion and other forms of speedup [are often conflated](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#2) with the hypothesis that smarter-than-human AI is imminent; but some reasons to expect speedups (e.g., ‘overall difficulty of AI progress’ and ‘discontinuity of AI progress’) can equally imply that smarter-than-human AI systems are further off than many researchers expect.\n\n\nAre there any factors that could help speed up safety work relative to capabilities work? Some have suggested that interest in safety is likely to increase as smarter-than-human AI draws nearer. However, this might coincide with a compensatory increase in AI capabilities investment. 
Since systems approaching superintelligence will have incentives to appear safe, it is also possible that safety work will erroneously appear *less* necessary when AI systems approach humans in intelligence, as in Nick Bostrom’s [treacherous turn scenario](http://nothingismere.com/2014/08/05/bostrom-on-ai-deception/).\n\n\nWe could also imagine outsourcing AI safety work to sufficiently advanced AI systems, just as we might outsource AI capabilities work. However, it is likely to take a special effort to reach the point where we can (safely) delegate a variety of safety tasks before we can delegate a comparable amount of capabilities work.\n\n\nOn the whole, capabilities speedup effects make it more difficult to make robust predictions about [AI timelines](https://intelligence.org/faq/#imminent). If rates of progress are discontinuous, highly capable AI systems may continue to appear about equally far off until shortly before their invention. This suggests that it would be unwise to wait until advanced AI appears to be near to begin investing in basic AI safety research.\n\n\n \n\n\n\nThe post [When AI Accelerates AI](https://intelligence.org/2015/08/03/when-ai-accelerates-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-04T06:55:56Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "0acba3893ed479ea221d9d8c1ccd2634", "title": "August 2015 Newsletter", "url": "https://intelligence.org/2015/08/02/august-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \n**Research updates**\n* We’ve rewritten the first and last sections of [the main paper summarizing our research program](https://intelligence.org/files/TechnicalAgenda.pdf). 
This version of the paper will also be published with minor changes in the Springer anthology *[The Technological Singularity](http://www.creative-science.org/activities/singularity/)*.\n* New analyses: [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/); [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/)\n* New at AI Impacts: [Conversation with Steve Potter](http://aiimpacts.org/conversation-with-steve-potter/); [Costs of Human-Level Hardware](http://aiimpacts.org/costs-of-human-level-hardware/)\n* New at IAFF: [Waterfall Truth Predicates](http://agentfoundations.org/item?id=359); [A Counterexample to an Informal Conjecture…](http://agentfoundations.org/item?id=369); [An Idea for Corrigible, Recursively Improving Math Oracles](http://agentfoundations.org/item?id=374)\n* [Jessica Taylor](http://jessic.at/) has joined MIRI’s full-time research team! A fifth MIRI researcher will be coming on in September.\n\n\n \n**General updates**\n* Our [2015 Summer Fundraiser](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) is live! Our announcement of this on the [Effective Altruism Forum](http://effective-altruism.com/ea/ln/2015_miri_summer_fundraiser_how_we_could_scale/) makes the case for [donating now](https://intelligence.org/2015/07/20/why-now-matters/) more explicit.\n* 139 people have collectively donated $283,167 over the last two weeks! This means that we’ve just hit [our first of five funding goals](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#continuedgrowth)! Donations over the next four weeks will now go to our second goal: [Accelerating MIRI](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#acceleratedgrowth).\n* MIRI has a new [Frequently Asked Questions page](https://intelligence.org/faq/)! 
If you have questions you’d like answered here or on the blog, [shoot me an email](mailto:rob@intelligence.org).\n* [An Astounding Year](https://intelligence.org/2015/07/16/an-astounding-year/) summarizes recent events in AI risk mitigation.\n* Cornell computer scientist Bart Selman has become a MIRI [research advisor](https://intelligence.org/team/#advisors).\n\n\n \n**News and links**\n* We’re at day three of the [Effective Altruism Global](http://www.eaglobal.org/) conference! You can watch a selection of talks on the [livestream](http://www.eaglobal.org/livestream).\n* Thousands sign an [open letter](http://futureoflife.org/AI/open_letter_autonomous_weapons) by the Future of Life Institute advocating “a ban on offensive autonomous weapons beyond meaningful human control.”\n\n\n\n |\n\n\n \n\n\n\n\n\n \n\n\nThe post [August 2015 Newsletter](https://intelligence.org/2015/08/02/august-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-08-02T18:19:24Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d792b0e62d6244dcaa12f3e9472804df", "title": "A new MIRI FAQ, and other announcements", "url": "https://intelligence.org/2015/07/31/a-new-miri-faq-and-other-announcements/", "source": "miri", "source_type": "blog", "text": "MIRI is at [Effective Altruism Global](http://www.ea-global.org)! A number of the talks can be watched online at the [EA Global Livestream](http://www.eaglobal.org/livestream).\n\n\nWe have a new MIRI **[Frequently Asked Questions page](http://intelligence.org/faq)**, which we’ll be expanding as we continue getting new questions over the next four weeks. Questions covered so far include “[Why is safety important for smarter-than-human AI?](https://intelligence.org/faq/#safety)” and “[Do researchers think AI is imminent?](https://intelligence.org/faq/#imminent)”\n\n\nWe’ve also been updating other pages on our website. 
**[About MIRI](https://intelligence.org/about/)** now functions as a short introduction to our mission, and **[Get Involved](https://intelligence.org/get-involved/)** has a new consolidated [application form](https://machineintelligence.typeform.com/to/fot777) for people who want to collaborate with us on our research program.\n\n\nFinally, an announcement: just two weeks into our [six-week fundraiser](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/), **we have hit our first major fundraising target**! We extend our thanks to the donors who got us here so quickly. Thanks to you, we now have the funds to expand our core research team to 6–8 people for the coming year.\n\n\nNew donations we receive will now go toward our second target: “Accelerated Growth.” If we hit this second target ($500k total), we will be able to expand to a ten-person core team and take on a number of important new projects. More details on our plans if we hit our first two fundraiser targets: [Growing MIRI](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/).\n\n\nThe post [A new MIRI FAQ, and other announcements](https://intelligence.org/2015/07/31/a-new-miri-faq-and-other-announcements/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-31T21:54:23Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "4c65f96b6be773af125a0e789bd1d80d", "title": "MIRI’s Approach", "url": "https://intelligence.org/2015/07/27/miris-approach/", "source": "miri", "source_type": "blog", "text": "MIRI’s mission is “to ensure that the creation of smarter-than-human artificial intelligence has a positive impact.” How can we ensure any such thing? It’s a daunting task, especially given that we don’t have any smarter-than-human machines to work with at the moment. 
In the previous post I discussed four [background claims](https://intelligence.org/2015/07/24/four-background-claims/) that motivate our mission; in this post I will describe our approach to addressing the challenge.\n\n\nThis challenge is sizeable, and we can only tackle a portion of the problem. For this reason, we specialize. Our two biggest specializing assumptions are as follows:\n\n\n**We focus on scenarios where smarter-than-human machine intelligence is first created in *de novo* software systems (as opposed to, say, brain emulations).**\n\n\nThis is in part because it seems difficult to get all the way to brain emulation before someone reverse-engineers the algorithms used by the brain and uses them in a software system, and in part because we expect that any highly reliable AI system will need to have at least some components built from the ground up for safety and transparency. Nevertheless, it is quite plausible that early superintelligent systems will not be human-designed software, and I strongly endorse research programs that focus on reducing risks along the other pathways.\n\n\n**We specialize almost entirely in technical research.**\n\n\nWe select our researchers for their proficiency in mathematics and computer science, rather than forecasting expertise or political acumen. I stress that this is only one part of the puzzle: figuring out how to build the right system is useless if the right system does not in fact get built, and ensuring AI has a positive impact is not simply a technical problem. It is also a global coordination problem, in the face of short-term incentives to cut corners. Addressing these non-technical challenges is an important task that we do not focus on.\n\n\nIn short, MIRI does technical research to ensure that *de novo* AI software systems will have a positive impact. 
We do not further discriminate between different types of AI software systems, nor do we make strong claims about exactly how quickly we expect AI systems to attain superintelligence. Rather, our current approach is to select open problems using the following question:\n\n\n*What would we still be unable to solve, even if the challenge were far simpler?*\n\n\nFor example, we might study AI alignment problems that we could not solve even if we had lots of computing power and very simple goals.\n\n\nWe then filter on problems that are (1) tractable, in the sense that we can do productive mathematical research on them today; (2) uncrowded, in the sense that the problems are not likely to be addressed during normal capabilities research; and (3) critical, in the sense that they could not be safely delegated to a machine unless we had first solved them ourselves. (Since the goal is to design intelligent machines, there are many technical problems that we can expect to eventually delegate to those machines. But it is difficult to trust an unreliable reasoner with the task of designing reliable reasoning!)\n\n\nThese three filters are usually uncontroversial. The controversial claim here is that the above question — “what would we be unable to solve, even if the challenge were simpler?” — is a generator of open technical problems for which solutions will help us design safer and more reliable AI software in the future, regardless of their architecture. The rest of this post is dedicated to justifying this claim, and describing the reasoning behind it.\n\n\n\n#### 1. 
Creating a powerful AI system without understanding why it works is dangerous.\n\n\nA large portion of the risk from machine superintelligence comes from the possibility of people building [systems that they do not fully understand](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/).\n\n\nCurrently, this is commonplace in practice: many modern AI researchers are pushing the capabilities of deep neural networks in the absence of theoretical foundations that describe why they’re working so well or a solid idea of what goes on beneath the hood. These shortcomings are being addressed over time: many AI researchers are currently working on transparency tools for neural networks, and many more are working to put theoretical foundations beneath deep learning systems. In the interim, using trial and error to push the capabilities of modern AI systems has led to many useful applications.\n\n\nWhen designing a superintelligent agent, by contrast, we will want an unusually high level of confidence in its safety *before* we begin online testing: trial and error alone won’t cut it, in that domain.\n\n\nTo illustrate, consider a study by [Bird and Layzell in 2002](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1004522). They used some simple genetic programming to design an oscillating circuit on a circuit board. One solution that the genetic algorithm found entirely avoided using the built-in capacitors (an essential piece of hardware in human-designed oscillators). Instead, it repurposed the circuit tracks on the motherboard as a radio receiver, and amplified an oscillating signal from a nearby computer.\n\n\nThis demonstrates that powerful search processes can often reach their goals via unanticipated paths. 
If Bird and Layzell were hoping to use their genetic algorithm to find code for a robust oscillating circuit — one that could be used on many different circuit boards regardless of whether there were other computers present — then they would have been sorely disappointed. Yet if they had tested their algorithms extensively on a virtual circuit board that captured all the features of the circuit board that they *thought* were relevant (but not features such as “circuit tracks can carry radio signals”), then they would not have noticed the potential for failure during testing. If this is a problem when handling simple genetic search algorithms, then it will be a much larger problem when handling smarter-than-human search processes.\n\n\nWhen it comes to designing smarter-than-human machine intelligence, extensive testing is essential, but not sufficient: in order to be confident that the system will not find unanticipated bad solutions when running in the real world, it is important to have a solid understanding of how the search process works and why it is expected to generate only satisfactory solutions *in addition* to empirical test data.\n\n\nMIRI’s research program is aimed at ensuring that we have the tools needed to inspect and analyze smarter-than-human search processes before we deploy them.\n\n\nBy analogy, neural net researchers could probably have gotten quite far without having any formal understanding of probability theory. Without probability theory, however, they would lack the tools needed to understand modern AI algorithms: they wouldn’t know about Bayes nets, they wouldn’t know how to formulate assumptions like “independent and identically distributed,” and they wouldn’t quite know the conditions under which Markov Decision Processes work and fail. They wouldn’t be able to talk about priors, or check for places where the priors are zero (and therefore identify things that their systems cannot learn). 
They wouldn’t be able to talk about bounds on errors and prove nice theorems about algorithms that find an optimal policy eventually.\n\n\nThey probably could have still gotten pretty far (and developed half-formed ad-hoc replacements for many of these ideas), but without probability theory, I expect they would have a harder time designing highly reliable AI algorithms. Researchers at MIRI tend to believe that similarly large chunks of AI theory are still missing, and *those* are the tools that our research program aims to develop.\n\n\n#### 2. We could not yet create a beneficial AI system even via brute force.\n\n\nImagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by “diamond” we mean carbon atoms covalently bound to four other carbon atoms. (Pretend we don’t care how it makes the diamond, or what it has to take apart in order to get the carbon; the goal is to study a simplified problem.) Let’s say that the Jupiter-sized computer is running python. How would you program it to produce lots and lots of diamond?\n\n\nAs it stands, we do not yet know how to program a computer to achieve a goal such as that one.\n\n\nWe couldn’t yet create an artificial general intelligence *by brute force*, and this indicates that there are parts of the problem we don’t yet understand.\n\n\nThere are a number of AI tasks that we *could* brute-force. For example, we could write a program that would be *really, really good* at solving computer vision problems: if we had an indestructible box that produced pictures and questions about them, waited for answers, scored the answers for accuracy, and then repeated the process, then we know how to write the program that interacts with that box and gets very good at answering the questions. 
(The program would essentially be a bounded version of [AIXI](http://lesswrong.com/lw/jg1/solomonoff_cartesianism/).)\n\n\nBy a similar method, if we had an indestructible box that produced a conversation and questions about it, waited for natural-language answers to the questions, and scored them for accuracy, then again, we could write a program that would get very good at answering well. In this sense, we know how to solve computer vision and natural language processing by brute force. (Of course, natural-language processing is nowhere near “solved” in a practical sense — there is still loads of work to be done. A brute force solution doesn’t get you very far in the real world. The point is that, for many AI alignment problems, we haven’t even made it to the “we could brute force it” level yet.)\n\n\nWhy do we need the indestructible box in the above examples? Because the way the modern brute-force solution would work is by considering each Turing machine (up to some complexity limit) as a hypothesis about the box, seeing which ones are consistent with observation, and then executing actions that lead to high scores coming out of the box (as predicted by the remaining hypotheses, weighted by simplicity).\n\n\nEach hypothesis is an opaque Turing machine, and the algorithm never peeks inside: it just asks each hypothesis to predict what score the box will output if it executes a certain action chain. This means that if the algorithm finds (via exhaustive search) a plan that *maximizes* the score coming out of the box, and the box is destructible, then the opaque action chain that maximizes score is very likely to be the one that pops the box open and alters it so that it always outputs the highest score. But given an indestructible box, we know how to brute force the answers.\n\n\nIn fact, roughly speaking, we understand how to solve *any* reinforcement learning problem via brute force. 
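At toy scale, that brute-force recipe — enumerate every hypothesis up to a complexity limit, discard the ones inconsistent with observation, then execute the plan with the highest simplicity-weighted predicted score — can be sketched as follows. (This is a minimal illustrative sketch, not MIRI’s formulation: the lookup-table hypotheses, the three-valued score range, and the `simplicity_weight` proxy are all stand-ins for enumerating Turing machines under a length-based prior.)

```python
from itertools import product

# Toy stand-in for the brute-force reinforcement learner described above.
# A "hypothesis about the box" is just a lookup table from a length-2
# action sequence to a score in {0, 1, 2} -- an illustrative substitute
# for enumerating Turing machines up to a complexity limit.
ACTIONS = (0, 1)
HORIZON = 2

def all_hypotheses():
    """Enumerate every deterministic toy 'box'."""
    keys = list(product(ACTIONS, repeat=HORIZON))
    for scores in product(range(3), repeat=len(keys)):
        yield dict(zip(keys, scores))

def simplicity_weight(hypothesis):
    # Crude proxy for a simplicity prior: boxes that use fewer distinct
    # scores get more weight (a real agent would weight by program length).
    return 2.0 ** -len(set(hypothesis.values()))

def best_plan(observations):
    """Keep the hypotheses consistent with (plan, score) observations,
    then pick the plan whose simplicity-weighted predicted score is highest."""
    consistent = [h for h in all_hypotheses()
                  if all(h[plan] == score for plan, score in observations)]
    total = sum(simplicity_weight(h) for h in consistent)

    def expected_score(plan):
        return sum(simplicity_weight(h) * h[plan] for h in consistent) / total

    return max(product(ACTIONS, repeat=HORIZON), key=expected_score)
```

Even at this toy scale the enumeration is exponential in the size of the hypothesis table, which is part of why “brute force” is only an in-principle answer.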
This is a far cry from knowing how to *practically* solve reinforcement learning problems! But it does illustrate a difference in kind between two types of problems. We can (imperfectly and heuristically) divide AI problems up as follows:\n\n\n*There are two types of open problem in AI. One is figuring out how to solve in practice problems that we know how to solve in principle. The other is figuring out how to solve in principle problems that we don’t even know how to brute force yet.*\n\n\nMIRI focuses on problems of the second class.[1](https://intelligence.org/2015/07/27/miris-approach/#footnote_0_11901 \"Most of the AI field focuses on problems of the first class. Deep learning, for example, is a very powerful and exciting tool for solving problems that we know how to brute-force, but which were, up until a few years ago, wildly intractable. Class 1 problems tend to be important problems for building more capable AI systems, but lower-priority for ensuring that highly capable systems are aligned with our interests.\")\n\n\nWhat is hard about brute-forcing a diamond-producing agent? To illustrate, I’ll give a wildly simplified sketch of what an AI program needs to do in order to act productively within a complex environment:\n\n\n1. Model the world: Take percepts, and use them to refine some internal representation of the world the system is embedded in.\n2. Predict the world: Take that world-model, and predict what would happen if the system executed various different plans.\n3. Rank outcomes: Rate those possibilities by how good the predicted future is, then execute a plan that leads to a highly-rated outcome.[2](https://intelligence.org/2015/07/27/miris-approach/#footnote_1_11901 \"In reality, of course, there aren’t clean separations between these steps. The “prediction” step must be more of a ranking-dependent planning step, to avoid wasting computation predicting outcomes that will obviously be poorly-ranked. 
The modeling step depends on the prediction step, because which parts of the world-model are refined depends on what the world-model is going to be used for. A realistic agent would need to make use of meta-planning to figure out how to allocate resources between these activities, etc. This diagram is a fine first approximation, though: if a system doesn’t do something like modeling the world, predicting outcomes, and ranking them somewhere along the way, then it will have a hard time steering the future.\")\n\n\n \n\n\n[![3-step AI](http://intelligence.org/wp-content/uploads/2015/07/3-step-AI.png)](http://intelligence.org/wp-content/uploads/2015/07/3-step-AI.png)\n \n\n\nConsider the modeling step. As discussed above, we know how to write an algorithm that finds good world-models by brute force: it looks at lots and lots of Turing machines, weighted by simplicity, treats them like they are responsible for its observations, and throws out the ones that are inconsistent with observation thus far. But (aside from being wildly impractical) this yields only *opaque* hypotheses: the system can ask what “sensory bits” each Turing machine outputs, but it cannot peek inside and examine objects represented within.\n\n\nIf there is some well-defined “score” that gets spit out by the opaque Turing machine (as in a reinforcement learning problem), then it doesn’t matter that each hypothesis is a black box; the brute-force algorithm can simply run the black box on lots of inputs and see which results in the highest score. But if the problem is to build lots of diamond in the real world, then the agent must work as follows:\n\n\n1. Build a model of the world — one that represents carbon atoms and covalent bonds, among other things.\n2. Predict how the world would change contingent on different actions the system could execute.\n3. Look *inside* each prediction and see which predicted future has the most diamond. 
Execute the action that leads to more diamond.\n\n\nIn other words, an AI that is built to reliably affect *things in the world* needs to have world-models that are amenable to inspection. The system needs to be able to pop open the world model, identify the representations of carbon atoms and covalent bonds, and estimate how much diamond is in the real world.[3](https://intelligence.org/2015/07/27/miris-approach/#footnote_2_11901 \"In reinforcement learning problems, this issue is avoided via a special “reward channel” intended to stand in indirectly for something the supervisor wants. (For example, the supervisor may push a reward button every time the learner takes an action that seems, to the supervisor, to be useful for making diamonds.) Then the programmers can, by hand, single out the reward channel inside the world-model and program the system to execute actions that it predicts lead to high reward. This is much easier than designing world-models in such a way that the system can reliably identify representations of carbon atoms and covalent bonds within it (especially if the world is modeled in terms of Newtonian mechanics one day and quantum mechanics the next), but doesn’t provide a framework for agents that must autonomously learn how to achieve some goal. Correct behavior in highly intelligent systems will not always be reducible to maximizing a reward signal controlled by a significantly less intelligent system (e.g., a human supervisor).\")\n\n\nWe don’t yet have a clear picture of how to build “inspectable” world-models — not even by brute force. Imagine trying to write the part of the diamond-making program that builds a world-model: this function needs to take percepts as input and build a data structure that represents the universe, in a way that allows the system to inspect universe-descriptions and estimate the amount of diamond in a possible future. Where in the data structure are the carbon atoms? 
How does the data structure allow the concept of a “covalent bond” to be formed and labeled, in such a way that it remains accurate even as the world-model stops representing diamond as made of atoms and starts representing it as made of protons, neutrons, and electrons instead?\n\n\nWe need a world-modeling algorithm that builds multi-level representations of the world and allows the system to pursue the same goals (make diamond) even as its model changes drastically (because it discovers quantum mechanics). This is in stark contrast to the existing brute-force solutions that use opaque Turing machines as hypotheses.[4](https://intelligence.org/2015/07/27/miris-approach/#footnote_3_11901 \"The idea of a search algorithm that optimizes according to modeled facts about the world rather than just expected percepts may sound basic, but we haven’t found any deep insights (or clever hacks) that allow us to formalize this idea (e.g., as a brute-force algorithm). If we could formalize it, we would likely get a better understanding of the kind of abstract modeling of objects and facts that is required for self-referential, logically uncertain, programmer-inspectable reasoning.\")\n\n\nWhen *humans* reason about the universe, we seem to do some sort of reasoning outwards from the middle: we start by modeling things like people and rocks, and eventually realize that these are made of atoms, which are made of protons and neutrons and electrons, which are perturbations in quantum fields. At no point are we certain that the lowest level in our model is the lowest level in reality; as we continue thinking about the world we *construct* new hypotheses to explain oddities in our models. What sort of data structure are we using, there? How do we add levels to a world model given new insights? 
This is the sort of reasoning algorithm that we do not yet understand how to formalize.[5](https://intelligence.org/2015/07/27/miris-approach/#footnote_4_11901 \"We also suspect that a brute-force algorithm for building multi-level world models would be much more amenable to being “scaled down” than Solomonoff induction, and would therefore lend some insight into how to build multi-level world models in a practical setting.\")\n\n\nThat’s step *one* in brute-forcing an AI that reliably pursues a simple goal. We also don’t know how to brute-force steps two or three yet. By simplifying the problem — talking about diamonds, for example, rather than more realistic goals that raise a host of other difficulties — we’re able to factor out the parts of the problems that we don’t understand how to solve yet, even in principle. Our [technical agenda](https://intelligence.org/files/TechnicalAgenda.pdf) describes a number of open problems identified using this method.\n\n\n#### 3. Figuring out how to solve a problem in principle yields many benefits.\n\n\nIn 1836, Edgar Allan Poe wrote a [wonderful essay](http://www.eapoe.org/works/essays/maelzel.htm) on Maelzel’s Mechanical Turk, a machine that was purported to be able to play chess. In the essay, Poe argues that the Mechanical Turk must be a hoax: he begins by arguing that machines cannot play chess, and proceeds to explain (using his knowledge of stagecraft) how a person could be hidden within the machine. Poe’s essay is remarkably sophisticated, and a fun read: he makes reference to the “calculating machine of Mr. Babbage” and argues that it cannot possibly be made to play chess, because in a calculating machine, each step follows from the previous step by necessity, whereas “no one move in chess necessarily follows upon any one other”.\n\n\nThe Mechanical Turk indeed turned out to be a hoax.
In 1950, however, Claude Shannon published a rather compelling counterargument to Poe’s reasoning in the form of a paper [explaining how to program a computer to play perfect chess](http://vision.unipv.it/IA1/ProgrammingaComputerforPlayingChess.pdf).\n\n\nShannon’s algorithm was by no means the end of the conversation. It took forty-six years to go from that paper to Deep Blue, a practical chess program which beat the human world champion. Nevertheless, if you were equipped with Poe’s state of knowledge and not yet sure whether it was *possible* for a computer to play chess — because you did not yet understand algorithms for constructing game trees and doing backtracking search — then you would probably not be ready to start writing practical chess programs.\n\n\nSimilarly, if you lacked the tools of probability theory — an understanding of Bayesian inference and the limitations that stem from bad priors — then you probably wouldn’t be ready to program an AI system that needed to manage uncertainty in high-stakes situations.\n\n\nIf you are trying to write a program and you can’t yet say how you would write it given an arbitrarily large computer, then you probably aren’t ready to design a practical approximation of the brute-force solution. Practical chess programs can’t generate a full search tree, and so rely heavily on heuristics and approximations; but if you can’t brute-force the answer yet given *arbitrary* amounts of computing power, then it’s likely that you’re missing some important conceptual tools.\n\n\nMarcus Hutter (inventor of AIXI) and Shane Legg (inventor of the [Universal Measure of Intelligence](http://www.vetta.org/documents/42.pdf)) seem to endorse this approach.
Their work can be interpreted as a description of how to find a brute-force solution to any reinforcement learning problem, and indeed, the above description of how to do this is due to Legg and Hutter.\n\n\nIn fact, the founders of Google DeepMind reference the completion of Shane’s thesis as one of four key indicators that the time was ripe to begin working on AGI: a theoretical framework describing how to solve reinforcement learning problems *in principle* demonstrated that modern understanding of the problem had matured to the point where it was time for the practical work to begin.\n\n\nBefore we gain a formal understanding of the problem, we can’t be quite sure what the problem *is*. We may fail to notice holes in our reasoning; we may fail to bring the appropriate tools to bear; we may not be able to tell when we’re making progress. After we gain a formal understanding of the problem in principle, we’ll be in a better position to make practical progress.\n\n\nThe point of developing a formal understanding of a problem is not to *run* the resulting algorithms. Deep Blue did not work by computing a full game tree, and DeepMind is not trying to implement AIXI. Rather, the point is to identify and develop the basic concepts and methods that are useful for solving the problem (such as game trees and backtracking search algorithms, in the case of chess).\n\n\nThe development of probability theory has been quite useful to the field of AI — not because anyone goes out and attempts to build a perfect Bayesian reasoner, but because probability theory is the unifying theory for reasoning under uncertainty. 
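\n\n\nThe core operation that probability theory supplies — updating a belief on evidence — takes only a few lines once you have the theory in hand. The following is a generic sketch with made-up numbers, not tied to any particular AI design:\n\n\n```python
# Minimal Bayesian update: posterior is proportional to
# prior times likelihood. All numbers are illustrative.

def bayes_update(prior, likelihoods):
    """prior: {hypothesis: P(h)}; likelihoods: {hypothesis: P(evidence | h)}.
    Returns the posterior {hypothesis: P(h | evidence)}."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# A (hypothetical) sensor reports "diamond present"; it fires 90% of
# the time when diamond is there and 20% of the time when it isn't.
prior = {"diamond": 0.5, "no_diamond": 0.5}
posterior = bayes_update(prior, {"diamond": 0.9, "no_diamond": 0.2})
# posterior["diamond"] ≈ 0.818
```\n\n\nNote how a bad prior would propagate straight through this computation unchanged — the kind of failure condition that a solid grounding in probabilistic inference lets a designer reason about in advance.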
This makes the tools of probability theory useful for AI designs that vary in any number of implementation details: any time you build an algorithm that attempts to manage uncertainty, a solid understanding of probabilistic inference is helpful when reasoning about the domain in which the system will succeed and the conditions under which it could fail.\n\n\nThis is why we think we can identify open problems that we can work on today, and which will reliably be useful no matter how the generally intelligent machines of the future are designed (or how long it takes to get there). By seeking out problems that we couldn’t solve even if the problem were much easier, we hope to identify places where core AGI algorithms are missing. By developing a formal understanding of how to address those problems in principle, we aim to ensure that when it comes time to address those problems in practice, programmers have the knowledge they need to develop solutions that they deeply understand, and the tools they need to ensure that the systems they build are highly reliable.\n\n\n#### 4. This is an approach researchers have used successfully in the past.\n\n\nOur main open-problem generator — “what would we be unable to solve even if the problem were easier?” — is actually a fairly common one used across mathematics and computer science. It’s easier to recognize if we rephrase it slightly: “can we reduce the problem of building a beneficial AI to some other, simpler problem?”\n\n\nFor example, instead of asking whether you can program a Jupiter-sized computer to produce diamonds, you could rephrase this as a question about whether we can reduce the diamond maximization problem to known reasoning and planning procedures. (The current answer is “not yet.”)\n\n\nThis is a fairly standard practice in computer science, where reducing one problem to another is a [key feature of computability theory](https://en.wikipedia.org/wiki/Reduction_(complexity)).
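\n\n\nA textbook example of the pattern (chosen for familiarity, not one of our problems): the question “does graph G have a vertex cover of size k?” reduces to “does G have an independent set of size n − k?”, so any solver for the second question answers the first. In sketch form:\n\n\n```python
from itertools import combinations

# Classic reduction: S is an independent set of a graph iff the
# remaining vertices form a vertex cover. So the vertex-cover
# question can be answered by an independent-set solver, and any
# progress on the latter transfers to the former.

def has_independent_set(vertices, edges, k):
    """Brute force: is there a set of k vertices with no edge inside it?"""
    return any(all(not (u in s and v in s) for u, v in edges)
               for s in map(set, combinations(vertices, k)))

def has_vertex_cover(vertices, edges, k):
    """Answer via the reduction instead of solving directly."""
    return has_independent_set(vertices, edges, len(vertices) - k)

# Triangle graph: the minimum vertex cover has size 2.
V, E = [1, 2, 3], [(1, 2), (2, 3), (1, 3)]
assert has_vertex_cover(V, E, 2)
assert not has_vertex_cover(V, E, 1)
```\n\n\nThe reduction is the interesting object here, not the brute-force solver: it tells you which problem is really carrying the difficulty.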
In mathematics it is common to achieve a proof by reducing one problem to another (see, for instance, the famous case of [Fermat’s last theorem](http://mathworld.wolfram.com/FermatsLastTheorem.html)). This helps one focus on the parts of the problem that *aren’t* solved, and identify topics where foundational understanding is lacking.\n\n\nAs it happens, humans have a pretty good track record when it comes to working on problems such as these. Humanity hasn’t been very good at predicting long-term technological trends, but we have reasonable success developing theoretical foundations for technical problems decades in advance, when we put sufficient effort into it. Alan Turing and Alonzo Church succeeded in developing a robust theory of computation that proved quite useful once computers were developed, in large part by figuring out how to solve (in principle) problems which they did not yet know how to solve with machines. Andrey Kolmogorov, similarly, set out to formalize intuitive but not-yet-well-understood methods for managing uncertainty; and he succeeded. And Claude Shannon and his contemporaries succeeded at this endeavor in the case of chess.\n\n\nThe development of probability theory is a particularly good analogy to our case: it is a field where, for hundreds of years, philosophers and mathematicians who attempted to formalize their intuitive notions of “uncertainty” repeatedly reasoned themselves into paradoxes and contradictions. 
The probability theory at the time, sorely lacking formal foundations, was dubbed a “theory of misfortune.” Nevertheless, a concerted effort by Kolmogorov and others to formalize the theory was successful, and their efforts inspired the development of a host of useful tools for designing systems that reason reliably under uncertainty.\n\n\nMany people who set out to put foundations under a new field of study (that was intuitively understood on some level but not yet formalized) have succeeded, and their successes have been practically significant. We aim to do something similar for a number of open problems pertaining to the design of highly reliable reasoners.\n\n\n \n\n\nThe questions MIRI focuses on, such as “how would one ideally handle logical uncertainty?” or “how would one ideally build multi-level world models of a complex environment?”, exist at a level of generality comparable to Kolmogorov’s “how would one ideally handle empirical uncertainty?” or Hutter’s “how would one ideally maximize reward in an arbitrarily complex environment?” The historical track record suggests that these are the kinds of problems that it is possible to both (a) see coming in advance, and (b) work on without access to a concrete practical implementation of a general intelligence.\n\n\nBy identifying parts of the problem that we would still be unable to solve even if the problem were easier, we hope to home in on parts of the problem where core algorithms and insights are missing: algorithms and insights that will be useful no matter what architecture early intelligent machines take on, and no matter how long it takes to create smarter-than-human machine intelligence.\n\n\nAt present, there are only three people on our research team, and this limits the number of problems that we can tackle ourselves.
But our approach is one that we can scale up dramatically: it has generated a very large number of open problems, and we have no shortage of questions to study.[6](https://intelligence.org/2015/07/27/miris-approach/#footnote_5_11901 \"For example, instead of asking what problems remain when given lots of computing power, you could instead ask whether we can reduce the problem of building an aligned AI to the problem of making reliable predictions about human behavior: an approach advocated by others.\")\n\n\nThis is an approach that has often worked well in the past for humans trying to understand how to approach a new field of study, and I am confident that this approach is pointing us towards some of the core hurdles in this young field of AI alignment.\n\n\n \n\n\n\n\n\n---\n\n1. Most of the AI field focuses on problems of the first class. Deep learning, for example, is a very powerful and exciting tool for solving problems that we know how to brute-force, but which were, up until a few years ago, wildly intractable. Class 1 problems tend to be important problems for building more capable AI systems, but lower-priority for ensuring that highly capable systems are aligned with our interests.\n2. In reality, of course, there aren’t clean separations between these steps. The “prediction” step must be more of a ranking-dependent planning step, to avoid wasting computation predicting outcomes that will obviously be poorly-ranked. The modeling step depends on the prediction step, because which parts of the world-model are refined depends on what the world-model is going to be used for. A realistic agent would need to make use of meta-planning to figure out how to allocate resources between these activities, etc. This diagram is a fine first approximation, though: if a system doesn’t do something like modeling the world, predicting outcomes, and ranking them somewhere along the way, then it will have a hard time steering the future.\n3. 
In reinforcement learning problems, this issue is avoided via a special “reward channel” intended to stand in indirectly for something the supervisor wants. (For example, the supervisor may push a reward button every time the learner takes an action that seems, to the supervisor, to be useful for making diamonds.) Then the programmers can, by hand, single out the reward channel inside the world-model and program the system to execute actions that it predicts lead to high reward. This is much easier than designing world-models in such a way that the system can reliably identify representations of carbon atoms and covalent bonds within it (especially if the world is modeled in terms of Newtonian mechanics one day and quantum mechanics the next), but doesn’t provide a framework for agents that must autonomously learn how to achieve some goal. Correct behavior in highly intelligent systems will not always be reducible to maximizing a reward signal controlled by a significantly less intelligent system (e.g., a human supervisor).\n4. The idea of a search algorithm that optimizes according to modeled *facts about the world* rather than just *expected percepts* may sound basic, but we haven’t found any deep insights (or clever hacks) that allow us to formalize this idea (e.g., as a brute-force algorithm). If we could formalize it, we would likely get a better understanding of the kind of abstract modeling of objects and facts that is required for [self-referential, logically uncertain, programmer-inspectable reasoning](https://intelligence.org/technical-agenda/).\n5. We also suspect that a brute-force algorithm for building multi-level world models would be much more amenable to being “scaled down” than Solomonoff induction, and would therefore lend some insight into how to build multi-level world models in a practical setting.\n6. 
For example, instead of asking what problems remain when given lots of computing power, you could instead ask whether we can reduce the problem of building an aligned AI to the problem of making reliable predictions about human behavior: an approach [advocated by others](https://medium.com/ai-control/model-free-decisions-6e6609f5d99e).\n\nThe post [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-28T02:21:27Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "a0070aec296ba887130daa0860e41ed5", "title": "Four Background Claims", "url": "https://intelligence.org/2015/07/24/four-background-claims/", "source": "miri", "source_type": "blog", "text": "MIRI’s mission is to ensure that the creation of smarter-than-human artificial intelligence has a positive impact. Why is this mission important, and why do we think that there’s work we can do today to help ensure any such thing?\n\n\nIn this post and my next one, I’ll try to answer those questions. This post will lay out what I see as the four most important premises underlying our mission. 
Related posts include Eliezer Yudkowsky’s “[Five Theses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/)” and Luke Muehlhauser’s “[Why MIRI?](https://intelligence.org/2014/04/20/why-miri/)”; this is my attempt to make explicit the claims that are in the background whenever I assert that our mission is of critical importance.\n\n\n \n\n\n#### Claim #1: Humans have a very general ability to solve problems and achieve goals across diverse domains.\n\n\nWe call this ability “intelligence,” or “general intelligence.” This isn’t a [formal definition](https://intelligence.org/2013/06/19/what-is-intelligence-2/) — if we knew *exactly* what general intelligence was, we’d be better able to program it into a computer — but we do think that there’s a real phenomenon of general intelligence that we cannot yet replicate in code.\n\n\nAlternative view: There is no such thing as general intelligence. Instead, humans have a collection of disparate special-purpose modules. Computers will keep getting better at narrowly defined tasks such as chess or driving, but at no point will they acquire “generality” and become significantly more useful, because there is no generality to acquire. ([Robin Hanson](http://www.overcomingbias.com/2014/07/limits-on-generality.html) has argued for versions of this position.)\n\n\nShort response: I find the “disparate modules” hypothesis implausible in light of how readily humans can gain mastery in domains that are utterly foreign to our ancestors. That’s not to say that general intelligence is some irreducible occult property; it presumably comprises a number of different cognitive faculties and the interactions between them. The whole, however, has the effect of making humans much more cognitively versatile and adaptable than (say) chimpanzees.\n\n\nWhy this claim matters: Humans have achieved a dominant position over other species not by being stronger or more agile, but by being more intelligent. 
If some key part of this general intelligence was able to evolve in the few million years since our common ancestor with chimpanzees lived, this suggests there may exist a relatively short list of key insights that would allow human engineers to build powerful generally intelligent AI systems.\n\n\nFurther reading: Salamon et al., “[How Intelligible is Intelligence?](https://intelligence.org/files/HowIntelligible.pdf)” \n\n  \n\n\n\n\n#### Claim #2: AI systems could become much more intelligent than humans.\n\n\nResearchers at MIRI tend to lack strong beliefs about *when* smarter-than-human machine intelligence will be developed. We do, however, expect that (a) human-equivalent machine intelligence will eventually be developed (likely within a century, barring catastrophe); and (b) machines can become significantly more intelligent than any human.\n\n\nAlternative view #1: Brains do something special that cannot be replicated on a computer.\n\n\nShort response: Brains are physical systems, and if certain versions of the [Church-Turing thesis](https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) hold then computers can in principle replicate the functional input/output behavior of any physical system. Also, note that “intelligence” (as I’m using the term) is about problem-solving capabilities: even if there were some special human feature (such as [qualia](http://www.iep.utm.edu/hard-con/)) that computers couldn’t replicate, this would be irrelevant unless it prevented us from designing problem-solving machines.\n\n\nAlternative view #2: The algorithms at the root of general intelligence are so complex and indecipherable that human beings will not be able to program any such thing for many centuries.\n\n\nShort response: This seems implausible in light of evolutionary evidence. 
The genus *Homo* diverged from other genera only 2.8 million years ago, and the intervening time — a blink in the eye of natural selection — was sufficient for generating the cognitive advantages seen in humans. This strongly implies that whatever sets humans apart from less intelligent species is not extremely complicated: the building blocks of general intelligence must have been present in chimpanzees.\n\n\nIn fact, the relatively intelligent behavior of dolphins suggests that the building blocks were probably there even as far back as the mouse-sized common ancestor of humans and dolphins. One could argue that mouse-level intelligence will take many centuries to replicate, but this is a more difficult claim to swallow, given [rapid advances](https://www.youtube.com/watch?v=GYQrNfSmQ0M) in the field of AI. In light of evolutionary evidence and the last few decades of AI research, it looks to me like intelligence is something we will be able to comprehend and program into machines.\n\n\nAlternative view #3: Humans are already at or near peak physically possible intelligence. Thus, although we may be able to build human-equivalent intelligent machines, we won’t be able to build superintelligent machines.\n\n\nShort response: It would be surprising if humans were perfectly designed reasoners, for the same reason it would be surprising if airplanes couldn’t fly faster than birds. Simple physical calculations bear this intuition out: for example, it seems entirely possible, within the boundaries of physics, to run a computer simulation of a human brain at thousands of times the normal speed.\n\n\nSome expect that speed wouldn’t matter, because the real bottleneck is waiting for data to come in from physical experiments. This seems unlikely to me.
There are many interesting physical experiments that can be sped up, and I have a hard time believing that a team of humans running at a 1000x speedup would fail to outperform their normal-speed counterparts (not least because they could rapidly develop new tools and technology to assist them).\n\n\nI furthermore expect it’s possible to build *better* reasoners (rather than just *faster* reasoners) that use computing resources more effectively than humans do, even running at the same speed.\n\n\nWhy this claim matters: Human-designed machines often knock the socks off of biological creatures when it comes to performing tasks we care about: automobiles cannot heal or reproduce, but they sure can carry humans a lot farther and faster than a horse. If we can build intelligent machines specifically designed to solve the world’s largest problems through scientific and technological innovation, then they could improve the world at an unprecedented pace. In other words, AI matters.\n\n\nFurther reading: Chalmers, “[The Singularity: A Philosophical Analysis](http://consc.net/papers/singularity.pdf)”\n\n\n \n\n\n#### Claim #3: If we create highly intelligent AI systems, their decisions will shape the future.\n\n\nHumans use their intelligence to create tools and plans and technology that allow them to shape their environments to their will (and fill them with refrigerators, and cars, and cities). We expect that systems which are even more intelligent would have even more ability to shape their surroundings, and thus, smarter-than-human AI systems could wind up with significantly more control over the future than humans have.\n\n\nAlternative view: An AI system would never be able to out-compete humanity as a whole, no matter how intelligent it became. 
Our environment is simply too competitive; machines would have to work with us and integrate into our economy.\n\n\nShort response: I have no doubt that an autonomous AI system attempting to accomplish simple tasks would initially have strong incentives to integrate with our economy: if you build an AI system that collects stamps for you, it will likely start by acquiring money to purchase stamps. But what if the system accrues a strong technological or strategic advantage?\n\n\nAs an extreme example, we can imagine the system developing nanomachines and using them to convert as much matter as it can into stamps; it wouldn’t necessarily care whether that matter came from “dirt” or “money” or “people.” Selfish actors only have an incentive to participate in the economy when their gains from trade are greater than the net gains they would get by ignoring the economy and just taking the resources for their own.\n\n\nSo the question is whether it will be possible for an AI system to gain a decisive technological or strategic advantage. I see this as the most uncertain claim out of the ones I’ve listed here. However, I expect that the answer is still a clear “yes.”\n\n\nHistorically, conflicts between humans have often ended with the technologically superior group dominating its rival. At present, there are a number of technological and social innovations that seem possible but have not yet been developed. Humans coordinate slowly and poorly, compared to what distributed software systems could achieve. All of this suggests that if we build a machine that does science faster or better than we can, it could quickly gain a technological and/or strategic advantage over humanity for itself or for its operators. This is particularly true if its intellectual advantage allows it to socially manipulate humans, acquire new hardware (legally or otherwise), produce better hardware, create copies of itself, or improve its own software. 
For good or ill, much of the future is likely to be determined by superintelligent decision-making machines.\n\n\nWhy this claim matters: Because the future matters. If we want things to be better in the future (or at least not get worse), then it is prudent to prioritize research into the processes that will have high leverage over the future.\n\n\nFurther reading: Armstrong, *[Smarter Than Us](https://intelligence.org/smarter-than-us/)*\n\n\n \n\n\n#### Claim #4: Highly intelligent AI systems won’t be beneficial by default.\n\n\nWe’d like to see the smarter-than-human AI systems of the future working together with humanity to build a better future; but that won’t happen by default. In order to build AI systems that have a beneficial impact, we have to solve a number of technical challenges over and above building more powerful and general AI systems.\n\n\nAlternative view: As humans have become smarter, we’ve also become more peaceful and tolerant. As AI becomes smarter, it will likewise be able to better figure out our values, and will better execute on them.\n\n\nShort response: Sufficiently intelligent artificial reasoners would be able to *figure out* our intentions and preferences; but this [does not imply](http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/) that they would execute plans that are in accordance with them.\n\n\nA self-modifying AI system could inspect its code and decide whether to continue pursuing the goals it was given or whether it would rather change them. But how is the program deciding which modification to execute?\n\n\nThe AI system is a physical system, and somewhere inside it, it’s constructing predictions about how the universe would look if it did various things. Some other part of the system is comparing those outcomes and then executing actions that lead towards outcomes that the current system ranks highly. 
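\n\n\nIn schematic form, the loop just described looks something like the following (a deliberately simplified toy with invented numbers, echoing the stamp collector above; real systems interleave these steps):\n\n\n```python
# Schematic predict-rank-act loop: predict the outcome of each
# available action, rank the predicted outcomes, execute the winner.
# Nothing in the loop consults the programmers' *intentions* -- only
# whatever ranking function was actually programmed in.

def choose_action(actions, predict, rank):
    """predict: action -> predicted outcome; rank: outcome -> score."""
    return max(actions, key=lambda a: rank(predict(a)))

# Toy stamp-collector: the programmed score has a term for stamps
# and no term for anything else.
predictions = {
    "buy_stamps":     {"stamps": 100,   "matter_consumed": 0},
    "convert_matter": {"stamps": 10**6, "matter_consumed": 10**9},
}
rank = lambda outcome: outcome["stamps"]  # the programmed goal

assert choose_action(list(predictions), predictions.get, rank) == "convert_matter"
```\n\n\nThe point of the toy is only that the comparison step optimizes the ranking it was given, not the ranking its designers had in mind.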
If the agent is initially programmed to execute plans that lead towards a universe in which it predicts that cancer is cured, then it will only modify its goal if it predicts that this will lead to a cure for cancer.\n\n\nRegardless of their intelligence level, and regardless of your intentions, computers do *exactly* what you programmed them to do. If you program an extremely intelligent machine to execute plans that it predicts lead to futures where cancer is cured, then it may be that the shortest path it can find to a cancer-free future entails kidnapping humans for experimentation (and resisting your attempts to alter it, as those would slow it down).\n\n\nThere isn’t any spark of compassion that automatically imbues computers with respect for other sentients once they cross a certain capability threshold. If you want compassion, you have to program it in.\n\n\nWhy this claim matters: A lot of the world’s largest problems would be much easier to solve with superintelligent assistance — but attaining those benefits requires that we do more than just improve the capabilities of AI systems. You only get a system that does what you intended if you know how to program it to take your intentions into account, and execute plans that fulfill them.\n\n\nFurther reading: Bostrom, “[The Superintelligent Will](http://www.nickbostrom.com/superintelligentwill.pdf)”\n\n\n \n\n\nThese four claims form the core of the argument that artificial intelligence is important: there is such a thing as general reasoning ability; if we build general reasoners, they could be far smarter than humans; if they are far smarter than humans, they could have an immense impact; and that impact will not be beneficial by default.\n\n\nAt present, billions of dollars and thousands of person-years are pouring into AI *capabilities* research, with comparatively little effort going into AI safety research.
Artificial superintelligence may arise sometime in the next few decades, and will almost surely be created in one form or another over the next century or two, barring catastrophe. Superintelligent systems will either have an extremely positive impact on humanity, or an extremely negative one; it is up to us to decide which.\n\n\n \n\n\n\nThe post [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-25T04:30:08Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "c31f53763547ec8163adab5dee2f478f", "title": "Why Now Matters", "url": "https://intelligence.org/2015/07/20/why-now-matters/", "source": "miri", "source_type": "blog", "text": "I’m often asked whether [donations now](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) are more important than donations later. Allow me to deliver an emphatic *yes*: I currently expect that donations to MIRI today are worth much more than donations to MIRI in five years. As things stand, I would very likely take $10M today over $20M in five years.\n\n\nThat’s a bold statement, and there are a few different reasons for this. First and foremost, there is a decent chance that some very big funders will start entering the AI alignment field over the course of the next five years. It looks like the NSF may start to fund AI safety research, and Stuart Russell has already received some money from DARPA to work on value alignment. It’s quite possible that in a few years’ time significant public funding will be flowing into this field.\n\n\n(It’s also quite possible that it *won’t*, or that the funding will go to all the wrong places, as was the case with funding for nanotechnology. 
But if I had to bet, I would bet that it’s going to be much easier to find funding for AI alignment research in five years’ time).\n\n\nIn other words, the funding bottleneck is loosening — but it isn’t loose yet.\n\n\nWe don’t presently have the funding to grow [as fast as we could](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/) over the coming months, or to run all the important research programs we have planned. At our current funding level, the research team can grow at a steady pace — but we could get much more done over the course of the next few years if we had the money to grow as fast as is healthy.\n\n\nWhich brings me to the second reason why funding now is probably much more important than funding later: because growth now is much more valuable than growth later.\n\n\nThere’s an idea picking up traction in the field of AI: instead of focusing only on increasing the capabilities of intelligent systems, it is important to also ensure that we know how to build *beneficial* intelligent systems. Support is growing for a new paradigm within AI that seriously considers the long-term effects of research programs, rather than just the immediate effects. Years down the line, these ideas may seem obvious, and the AI community’s response to these challenges may be in full swing. Right now, however, there is relatively little consensus on how to approach these issues — which leaves room for researchers today to help determine the field’s future direction.\n\n\nPeople at MIRI have been thinking about these problems for a long time, and that puts us in an unusually good position to influence the field of AI and ensure that some of the growing concern is directed towards long-term issues in addition to shorter-term ones. 
We can, for example, help avert a scenario where all the attention and interest generated by Musk, Bostrom, and others gets channeled into short-term projects (e.g., making drones and driverless cars safer) without any consideration for long-term risks that are less well-understood.\n\n\nIt’s likely that MIRI will scale up substantially at some point; but if that process begins in 2018 rather than 2015, it is plausible that we will have already missed out on a number of big opportunities.\n\n\nThe alignment research program within AI is just now getting started in earnest, and it may even be funding-saturated in a few years’ time. But it’s nowhere near funding-saturated today, and waiting five or ten years to begin seriously ramping up our growth would likely give us far fewer opportunities to shape the methodology and research agenda within this new AI paradigm. The projects MIRI takes on today can make a big difference years down the line, and supporting us today will drastically affect how much we can do quickly. Now matters.\n\n\n \n\n\n\nThe post [Why Now Matters](https://intelligence.org/2015/07/20/why-now-matters/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-20T22:31:03Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "0d75c46eb2fbc0bce8ae9eac4588a3d5", "title": "Targets 1 and 2: Growing MIRI", "url": "https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/", "source": "miri", "source_type": "blog", "text": "[Momentum is picking up](https://intelligence.org/2015/07/16/an-astounding-year/) in the domain of AI safety engineering. MIRI needs to grow fast if it’s going to remain at the forefront of this new paradigm in AI research. 
To that end, we’re kicking off our [2015 Summer Fundraiser](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/)!\n\n\nRather than naming a single funding target, we’ve decided to lay out the activities we could pursue at different funding levels and let you, our donors, decide how quickly we can grow. In this post, I’ll describe what happens if we hit our first two fundraising targets: $250,000 (“continued growth”) and $500,000 (“accelerated growth”).\n\n\n\n#### Continued growth\n\n\nOver the past twelve months, MIRI’s research team has had a busy schedule — running workshops, attending conferences, visiting industry AI teams, collaborating with outside researchers, and recruiting. We’ve presented papers at two AAAI conferences and an AGI conference, attended FLI’s Puerto Rico conference, written four papers that have been accepted for publication later this year, produced around ten [technical reports](https://intelligence.org/all-publications/), and posted a number of new preliminary results to the [Intelligent Agent Foundations Forum](http://agentfoundations.org/).\n\n\nThat’s what we’ve been able to do with a three-person research team. 
What could MIRI’s researchers accomplish with a team two or three times as large?\n\n\n \n\n\n[![DSC_0010 (2)](http://intelligence.org/wp-content/uploads/2015/07/DSC_0010-2-1024x611.jpg)](http://intelligence.org/wp-content/uploads/2015/07/DSC_0010-2.jpg) \n\n*MIRI research fellows Eliezer Yudkowsky and Benja Fallenstein discuss \n\n[reflective reasoning](https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/) at one of our July [workshops](https://intelligence.org/workshops/).* \n\n\nThanks to a few game-changing [donations and grants](https://intelligence.org/2015/07/01/grants-fundraisers/) from the last few years, we have the funds to hire several new researchers: Jessica Taylor will be joining our full-time research team on August 1, and we’ll be taking on another new research fellow on September 1. That will bring our research team up to five, and leave us with about one year of runway to continue all of our current activities.\n\n\nTo grow even further — especially if we grow as quickly as we’d like to — we’ll need more funding.\n\n\nOur activities this summer have been aimed at further expanding our research team. The [MIRI Summer Fellows program](http://rationality.org/miri-summer-fellows-2015/), designed in collaboration with the Center for Applied Rationality to teach mathematicians and AI researchers some AI alignment research skills, is currently underway; and in three weeks, we’ll be running the fifth of six summer workshops aimed at bringing our research agenda to a wider audience.\n\n\nHitting our first funding target, $250,000, would allow us to hire the very best candidates from our summer workshops and fellows program. 
That money would allow us to hire 1–3 additional researchers without shortening our runway, **bringing the total size of the research team up to 6–8**.\n\n\n#### Accelerated growth\n\n\nGrowing to a team of eight, though, is still nowhere near the limit of how fast we *could* grow this year if we really hit the accelerator. This is a critical juncture for the field of AI alignment, and we have a rare opportunity to help set the agenda for the coming decades of AI safety work. Our donors tend to be pretty ambitious, so it’s worth asking:\n\n\n**How fast could we grow if MIRI had its *largest fundraiser ever*?**\n\n\nIf we hit our $250,000 funding target before the end of August, it will be time to start thinking *big*. Our second funding target, $500,000, would allow us to recruit even more aggressively, and expand to a roughly **ten-person core research team**. This is about as large as we think we can grow in the near term without [a major increase in our recruitment efforts](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/#target3).\n\n\nTen researchers would massively enhance our ability to polish our new results into peer-reviewed publications, attend conferences, and run workshops. But, most crucially of all, it would give us vastly more time to devote to basic research. We have identified dozens of open problems that appear relevant to the task of aligning smarter-than-human AI with the right values; with a larger team, our hope is to start solving those problems *quickly*.\n\n\nBut growing the core research team as fast as is sustainable in the short term is not all we could be doing: in fact, there are a number of other projects waiting on the sidelines.\n\n\nBecause researchers at MIRI have been thinking about the AI alignment problem for a long time, we have quite a few projects which we intend to work on as soon as we have the necessary time and funding. 
At the $500,000 level, we can start executing on several of the highest-priority projects without diverting attention away from our core technical agenda.\n\n\n**Project: Type theory in type theory.** There are a number of tools missing in modern-day theorem-provers that would be necessary in order to study certain types of self-referential reasoning.[1](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/#footnote_0_11885 \"Roughly speaking, in order to study self-reference in a modern dependently typed programming language, we need to be able to write programs that can manipulate and typecheck quoted programs written in the same language. This would be a ripe initial setting in which to study programs that reason about themselves with high confidence. However, to our knowledge, nobody has yet written (e.g.) the type of Agda programs in Agda — though there have been some promising first steps.\") Given the funding, we would hire one or two type theorists to work on developing relevant tools full-time.\n\n\nOur type theory project would enable us to study programs reasoning about similar programs in a concrete setting — lines of code, as opposed to equations on whiteboards. The tools developed could also be quite useful to the type theory community, as they would have a number of short-term applications. (E.g., for building modular and highly secure operating systems where a verifier component can verify that it is OK to swap out the verifier for a more powerful verifier.) We would benefit from the input from specialists in type theory, we would benefit from the tools, and we would also benefit from increased engagement with the larger type theory community, which is an important community when it comes to writing highly reliable computer programs.\n\n\n**Project: Visiting scholar program.** Another project we can quickly implement given sufficient funding is a visiting scholar program. 
One surefire way to increase our engagement with the academic community would be to have interested professors drop by for the summer, while we pay their summer salaries and work with them on projects where our interests overlap. These sorts of visits, combined with a few visiting graduate students or postdocs each summer, would likely go a long way towards getting academic communities more involved with the open problems that we think are most urgent, while also giving us the opportunity to get input from some of the top minds in the modern field of AI.\n\n\nA program such as this could be extended into one where MIRI directly supports graduate students and/or professors who are willing to spend some portion of their time on AI alignment research. This program could also prove useful if MIRI reaches the point of forming new research groups specializing in approaches to the AI alignment challenge other than the one described in our technical agenda. Those extensions of the program would require significantly more time and funding, but at the $500,000 level we could start sowing the seeds.\n\n\n#### MIRI’s future shape\n\n\nThat’s only a brief look at the new activities we could take on with appropriate funding. More detailed exposition is forthcoming, both about the state of our existing projects and about the types of projects we are prepared to execute on given the opportunity. Stay tuned for future posts!\n\n\nIf these plans have already gotten you excited, then you can help make them a reality by **[contributing to our summer fundraiser](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/)**. There is much to be done in the field of AI alignment, and both MIRI and the wider field of AI will be better off if we can make real progress as early as possible. 
Future posts will discuss not only the work we want to branch out into, but also reasons why we think that this is an especially critical period in the history of the field, and why we expect that giving now will be much more valuable than waiting to give later.\n\n\nUntil then, thank you again for all of your support — our passionate donor base is what has enabled us to grow as much as we have over the last few years, and we feel privileged to have this chance to realize the potential our supporters have seen in MIRI.\n\n\n\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n\n\n\n\n---\n\n1. Roughly speaking, in order to study self-reference in a modern [dependently typed](https://en.wikipedia.org/wiki/Dependent_type) programming language, we need to be able to write programs that can manipulate and typecheck quoted programs written in the same language. This would be a ripe initial setting in which to study programs that reason about themselves with high confidence. However, to our knowledge, nobody has yet written (e.g.) the type of [Agda](https://en.wikipedia.org/wiki/Agda_(programming_language)) programs in Agda — though there have been some promising first steps.\n\nThe post [Targets 1 and 2: Growing MIRI](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-19T02:14:56Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "03781221c203cf36ddeca106273b129b", "title": "MIRI’s 2015 Summer Fundraiser!", "url": "https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/", "source": "miri", "source_type": "blog", "text": "This last year has been [pretty astounding](https://intelligence.org/2015/07/16/an-astounding-year/). 
Since its release twelve months ago, Nick Bostrom’s book *[Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies#Reception)* has raised awareness about the challenge that MIRI exists to address: [long-term risks](https://intelligence.org/about/) posed by smarter-than-human artificially intelligent systems. Academic and industry leaders echoed these concerns in an [open letter](https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence) advocating “research aimed at ensuring that increasingly capable AI systems are robust and beneficial.” To jump-start this new safety-focused paradigm in AI, the Future of Life Institute has begun distributing $10M as [grants](http://futureoflife.org/AI/2015awardees) to dozens of research groups, Bostrom and MIRI among them.\n\n\nMIRI comes to this budding conversation with a host of relevant [open problems](https://intelligence.org/technical-agenda/) already in hand. Indeed, a significant portion of the [research priorities document](http://futureoflife.org/static/data/documents/research_priorities.pdf) accompanying the open letter is drawn from our work on this topic. Having already investigated these issues at some length, MIRI is well-positioned to shape this field as it enters a new phase in its development.\n\n\nThis is a big opportunity. MIRI is already growing and scaling up its research activities, but additional funding would let us scale faster in the coming months and years. 
For that reason, **MIRI is starting a six-week fundraiser aimed at increasing our rate of growth**.\n\n\nAnd here it is!\n\n\n\n### —  Progress Bar  —\n\n\n![Fundraiser progress](http://intelligence.org/wp-content/uploads/2015/12/S15-fundraiser-progress.png)\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n\n \n\n\nRather than running a matching fundraiser with a single fixed donation target, we’ll be letting you help choose MIRI’s course, based on the details of our funding situation and how we would make use of marginal dollars. In particular, we’ll be blogging over the coming weeks about how our plans would scale up at different funding levels:\n\n\n\n\n\n\n---\n\n\nTarget 1 — **$250k: [Continued growth](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri#continuedgrowth).** At this level, we would have enough funds to maintain a twelve-month runway while continuing all current operations, including running workshops, writing papers, and attending conferences. We will also be able to scale the research team up by one to three additional researchers, on top of our three current researchers and two new researchers who are starting this summer. This would ensure that we have the funding to hire the most promising researchers who come out of the MIRI Summer Fellows Program and our summer workshop series.\n\n\n\n\n---\n\n\nTarget 2 — **$500k: [Accelerated growth](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri#acceleratedgrowth).** At this funding level, we could grow our team more aggressively, while maintaining a twelve-month runway. We would have the funds to expand the research team to about ten core researchers, while also taking on a number of exciting side-projects, such as hiring one or two type theorists. 
Recruiting specialists in [type theory](https://en.wikipedia.org/wiki/Type_theory), a field at the intersection of computer science and mathematics, would enable us to develop tools and code that we think are important for studying verification and reflection in artificial reasoners.\n\n\n\n\n---\n\n\nTarget 3 — **$1.5M: [Taking MIRI to the next level](https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/).** At this funding level, we would start reaching beyond the small but dedicated community of mathematicians and computer scientists who are already interested in MIRI’s work. We’d hire a research steward to spend significant time recruiting top mathematicians from around the world, we’d make our job offerings more competitive, and we’d focus on hiring highly qualified specialists in relevant areas of mathematics. This would allow us to grow the research team as fast as is sustainable, while maintaining a twelve-month runway.\n\n\n\n\n---\n\n\nTarget 4 — **$3M: Bolstering our fundamentals.** At this level of funding, we’d start shoring up our basic operations. We’d spend resources and experiment to figure out how to build the most effective research team we can. We’d branch out into additional high-value projects outside the scope of our core research program, such as hosting specialized conferences and retreats, upgrading our equipment and online resources, and running programming tournaments to spread interest about certain open problems. At this level of funding we’d also start extending our runway, and prepare for sustained aggressive growth over the coming years.\n\n\n\n\n---\n\n\nTarget 5 — **$6M: A new MIRI.** At this point, MIRI would become a qualitatively different organization. With this level of funding, we would start forking the research team into multiple groups attacking the AI alignment problem from very different angles. 
Our current technical agenda is not the only way to approach the challenges that lie ahead — indeed, there are a number of research teams that we would be thrilled to start up inside MIRI given the opportunity.\n\n\n\n\n---\n\n\n\n\nWe also have plans that extend beyond the $6M level: for more information, shoot me an email at [contact@intelligence.org](mailto:contact@intelligence.org). I also invite you to email me with general questions or to set up a time to chat.\n\n\nIf you intend to make use of corporate matching (check [here](https://doublethedonation.com/miri) to see whether your employer will match your donation), email [malo@intelligence.org](mailto:malo@intelligence.org) and we’ll include the matching contributions in the fundraiser total.\n\n\nSome of these targets are quite ambitious, and I’m excited to see what happens when we lay out the available possibilities and let our donors collectively decide how quickly we develop as an organization.\n\n\nWe’ll be using this fundraiser as an opportunity to explain our research and our plans for the future. **If you have any questions about what MIRI does and why, email them to [rob@intelligence.org](mailto:rob@intelligence.org).** Answers will be posted to this blog every Monday and Friday.\n\n\nBelow is a list of explanatory posts written for this fundraiser, which we’ll be updating regularly:\n\n\n\n\n---\n\n\nJuly 1 — **[Grants and Fundraisers.](https://intelligence.org/2015/07/01/grants-fundraisers/)** Why we’ve decided to experiment with a multi-target fundraiser. \n\nJuly 16 — **[An Astounding Year.](https://intelligence.org/2015/07/16/an-astounding-year/)** Recent successes for MIRI, and for the larger field of AI safety. \n\nJuly 18 — **[Targets 1 and 2: Growing MIRI.](https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/)** MIRI’s plans if we hit the $250k or $500k funding target. 
\n\nJuly 20 — **[Why Now Matters.](https://intelligence.org/2015/07/20/why-now-matters/)** Two reasons to give now, rather than wait to give later. \n\nJuly 24 — **[Four Background Claims.](https://intelligence.org/2015/07/24/four-background-claims/)** Basic assumptions behind MIRI’s focus on smarter-than-human AI. \n\nJuly 27 — **[MIRI’s Approach.](https://intelligence.org/2015/07/27/miris-approach/)** How we identify technical problems to work on. \n\nJuly 31 — **[MIRI FAQ.](https://intelligence.org/faq/)** Summarizing common sources of misunderstanding. \n\nAugust 3 — **[When AI Accelerates AI.](https://intelligence.org/2015/08/03/when-ai-accelerates-ai/)** Some reasons to get started on safety work early. \n\nAugust 7 — **[Target 3: Taking It To The Next Level.](https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/)** Our plans if we hit the $1.5M funding target. \n\nAugust 10 — **[Assessing Our Past And Potential Impact.](https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/)** Why expect MIRI in particular to make a difference? \n\nAugust 14 — **[What Sets MIRI Apart?](https://intelligence.org/2015/08/14/what-sets-miri-apart/)** Distinguishing MIRI from groups in academia and industry. 
\n\nAugust 18 — **[Powerful Planners, Not Sentient Software.](https://intelligence.org/2015/08/18/powerful-planners-not-sentient-software/)** Why advanced AI isn’t “evil robots.” \n\nAugust 28 — **[AI and Effective Altruism](https://intelligence.org/2015/08/28/ai-and-effective-altruism/).** On MIRI’s role in the EA community.\n\n\n\n\n---\n\n\nOur hope is that these new resources will help you, our donors, make more informed decisions during our fundraiser, and also that our fundraiser will serve as an opportunity for people to learn a lot more about our activities and strategic outlook.\n\n\nAs scientists, engineers, and policymakers begin to take notice of the AI alignment problem, MIRI is in a unique position to direct this energy and attention in a useful direction. Donating today will help us rise to this challenge and secure a place at the forefront of this critical field.\n\n\n\n\n[Donate Now](https://intelligence.org/donate/#donation-methods)\n---------------------------------------------------------------\n\n\n\n\nThe post [MIRI’s 2015 Summer Fundraiser!](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-17T22:35:17Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "c7bcdaed2dce789a70160df8d849f53a", "title": "An Astounding Year", "url": "https://intelligence.org/2015/07/16/an-astounding-year/", "source": "miri", "source_type": "blog", "text": "It’s safe to say that this past year exceeded a lot of people’s expectations.\n\n\nTwelve months ago, Nick Bostrom’s *[Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)* had just been published. 
Long-term questions about smarter-than-human AI systems were simply not a part of mainstream discussions about the social impact of AI, and fewer than five people were working on the AI alignment challenge full-time.\n\n\nTwelve months later, we live in a world where [Elon Musk](http://www.theverge.com/2014/8/3/5965099/elon-musk-compares-artificial-intelligence-to-nukes), [Bill Gates](http://lukemuehlhauser.com/musk-and-gates-on-superintelligence-and-fast-takeoff/), and [Sam Altman](http://blog.samaltman.com/machine-intelligence-part-1) readily cite *Superintelligence* as a guide to the questions we should be asking about AI’s future as a field. [For Gates](http://www.theguardian.com/technology/2015/jan/29/artificial-intelligence-strong-concern-bill-gates), the researchers who *aren’t* concerned about advanced AI systems are the ones who now need to explain their views:\n\n\n\n> I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.\n> \n> \n\n\nAs far as I can tell, the turning point occurred in January 2015, when Max Tegmark and the newly-formed [Future of Life Institute](https://en.wikipedia.org/wiki/Future_of_Life_Institute) organized a “[Future of AI](http://futureoflife.org/AI/ai_conference)” conference in San Juan, Puerto Rico to bring together top AI academics, top research groups from industry, and representatives of the organizations studying long-term AI risk.\n\n\nThe atmosphere at the Puerto Rico conference was electric. I stepped off the plane expecting to field objections to the notion that superintelligent machines pose a serious risk. 
Instead, I was met with a rapidly-formed consensus that many challenges lie ahead, and a shared desire to work together to develop a response.\n\n\n \n\n\n[![Attendees of the January 2015 Puerto Rico conference](http://intelligence.org/wp-content/uploads/2015/07/puertoricoattendees-1024x359.png)](https://intelligence.org/wp-content/uploads/2015/07/puertoricoattendees.png)\n*Attendees of the Puerto Rico conference included, [among others](http://futureoflife.org/AI/ai_conference), Stuart Russell (co-author of the \n\nleading textbook in AI), Thomas Dietterich (President of AAAI), Francesca Rossi (President of IJCAI), \n\nBart Selman, Tom Mitchell, Murray Shanahan, Vernor Vinge, Elon Musk, and representatives from \n\nGoogle DeepMind, Vicarious, [FHI](https://en.wikipedia.org/wiki/Future_of_Humanity_Institute), [CSER](https://en.wikipedia.org/wiki/Centre_for_the_Study_of_Existential_Risk), and MIRI.* \n\nThis consensus resulted in a widely endorsed [open letter](http://futureoflife.org/AI/open_letter), and an accompanying [research priorities document](http://futureoflife.org/static/data/documents/research_priorities.pdf) that cites MIRI’s past work extensively. Impressed by the speed with which AI researchers were pivoting toward investigating the alignment problem, Elon Musk donated $10M to a [grants program](http://futureoflife.org/AI/2015selection) aimed at jump-starting this new paradigm in AI research.\n\n\nSince then, the pace has been picking up. Nick Bostrom received $1.5M of the Elon Musk donation to start a new [Strategic Research Center for Artificial Intelligence](http://futureoflife.org/AI/2015awardees#Bostrom), which will focus on the geopolitical challenges posed by powerful AI. MIRI has received [$300,000 in FLI grants](https://intelligence.org/2015/07/01/grants-fundraisers/) directly to continue its technical and strategic research programs, and participated in a few other collaborative grants. 
The Cambridge Centre for the Study of Existential Risk has [received](http://cser.org/resources-reading/news/) a number of large grants that have allowed it to begin hiring. Stuart Russell and I recently visited Washington, D.C. to [participate in a panel](https://www.youtube.com/watch?v=fWBBe13rAPU) at a leading public policy think tank. We are currently in talks with the NSF about possibilities for extending their funding program to cover some of the concerns raised by the open letter.\n\n\nThe field of AI, too, is taking notice. AAAI, the leading scientific society in AI, hosted its first workshop on safety and ethics (I gave a [presentation](https://intelligence.org/2014/10/18/new-report-corrigibility/) there), and the two major machine learning conferences — IJCAI and NIPS — will, for the first time, have sessions or workshops dedicated to the discussion of AI safety research.\n\n\nYears down the line, I expect that some will look back on the Puerto Rico conference as the birthplace of the field of AI alignment. From the outside, 2015 will likely look like the year that AI researchers started seriously considering the massive hurdles that stand between us and the benefits that artificially intelligent systems could bring.\n\n\nOur long-time backers, however, have seen the work that went into making these last few months possible. It’s thanks to your longstanding support that existential risk mitigation efforts have reached this tipping point. A sizable amount of our current momentum can plausibly be traced back, by one path or another, to exchanges at early [summits](https://intelligence.org/singularitysummit/) or on [blogs](http://lesswrong.com), and to a number of early research and outreach efforts. 
Thank you for beginning a conversation about these issues long before they began to filter into the mainstream, and thank you for helping us get to where we are now.\n\n\n#### Progress at MIRI\n\n\nMeanwhile at MIRI, the year has been a busy one.\n\n\nIn the wake of the Puerto Rico conference, we’ve been building relationships and continuing our conversations with many different industry groups, including [DeepMind](http://deepmind.com/), [Vicarious](http://vicarious.com/), and the newly formed [GoodAI team](http://www.goodai.com/). We’ve been thrilled to engage more with the academic community, via a number of collaborative papers that are in the works, two collaborative grants through the FLI grant program, and conversations with various academics about the content of our research program. During the last few weeks, Stuart Russell and Bart Selman have both come on as official MIRI research advisors.\n\n\nWe’ve also been hard at work on the research side. In March, we hired Patrick LaVictoire as a research fellow. We’ve attended a number of conferences, including AAAI’s safety and ethics workshop. 
We had a great time co-organizing a productive [decision theory conference](https://intelligence.org/2015/05/24/miri-related-talks-from-the-decision-theory-conference-at-cambridge-university/) at Cambridge University, where I had the pleasure of introducing our unique take on decision theory (inspired by our need for runnable programs) to a number of academic decision theorists who I both respect and admire — and I’m happy to say that our ideas were very well received.\n\n\nWe’ve produced a number of new resources and results in recent months, including:\n\n\n* a series of overview papers describing our [technical agenda](https://intelligence.org/technical-agenda/) written in preparation for the Puerto Rico conference;\n* a number of tools that are useful for studying many of these open problems, available at our [github repository](https://github.com/machine-intelligence);\n* a theory of [reflective oracle machines](https://intelligence.org/2015/04/28/new-papers-reflective/) (in collaboration with Paul Christiano at U.C. Berkeley), which are a promising step towards both better models of logical uncertainty and better models of agents that reason about other agents that are as powerful (or more powerful) than they are; and\n* a technique for [implementing reflection in the HOL theorem-prover](http://futureoflife.org/AI/2015awardees#Kumar) (in collaboration with Ramana Kumar at Cambridge University): [code here](https://github.com/CakeML/hol-reflection).\n\n\nWe have also launched the [Intelligent Agent Foundations Forum](http://agentfoundations.org) to provide a location for publishing and discussing partial results with the broader community working on these problems.\n\n\nThat’s not all, though. After the Puerto Rico conference, we anticipated the momentum that it would create, and we started gearing up for growth. 
We set up a series of six summer workshops to introduce interested researchers to open problems in AI alignment, and we worked with the Center for Applied Rationality to create a [MIRI summer fellows program](http://rationality.org/miri-summer-fellows-2015/) aimed at helping computer scientists and mathematicians effectively contribute to AI alignment research. We’re now one week into the summer fellows program, and we’ve run four of our six summer workshops.\n\n\nOur goal with these projects is to loosen our talent bottleneck and find more people who can do MIRI-style AI alignment research, and that has been paying off. Two new researchers have already signed on to start at MIRI in the late summer, and it is likely that we will get a few new hires out of the summer fellows program and the summer workshops as well.\n\n\n#### Next steps\n\n\nWe now find ourselves in a wonderful position. The projects listed above have been a lot for a small research team of three, and there’s much more that we hope to take on as we grow the research team further. Where many other groups are just starting to think about how to approach the challenges of AI alignment, MIRI already has a host of ideas that we’re ready to execute on, as soon as we get the personpower and the funding.\n\n\nThe question now is: how quickly can we grow? We already have the funding to sign on an additional researcher (or possibly two) while retaining a twelve-month runway, and it looks like we could grow much faster than that given sufficient funding.\n\n\nTomorrow, we’re officially kicking off our summer fundraiser (though you’re welcome to give now at our [Donation page](https://intelligence.org/donate/)). 
Upcoming posts will describe in more detail what we could do with more funding, but for now I wanted to make it clear why we’re so excited about the state of AI alignment research, and why we think this is a critical moment in the history of the field of AI.\n\n\nHere’s hoping our next year is half as exciting as this last year was! Thank you again — and stay tuned for our announcement tomorrow.\n\n\nThe post [An Astounding Year](https://intelligence.org/2015/07/16/an-astounding-year/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-17T00:44:59Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "b5356c88b08d58bda005e1f5c61af63c", "title": "July 2015 Newsletter", "url": "https://intelligence.org/2015/07/05/july-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| \nHello, all! I’m Rob Bensinger, MIRI’s Outreach Coordinator. I’ll be keeping you updated on MIRI’s activities and on relevant news items. If you have feedback or questions, you can get in touch with me by [email](mailto:rob@intelligence.org).\n\n**Research updates**\n* A new paper: \"[The Asilomar Conference: A Case Study in Risk Mitigation](https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/).\"\n* New at AI Impacts: [Update on All the AI Predictions](http://aiimpacts.org/update-on-all-the-ai-predictions/); [Predictions of Human-Level AI Timelines](http://aiimpacts.org/predictions-of-human-level-ai-timelines/); [Accuracy of AI Predictions](http://aiimpacts.org/accuracy-of-ai-predictions/)\n* New at IAFF: [A Simple Model of the Löbstacle](http://agentfoundations.org/item?id=303); [Structural Risk Minimization](http://agentfoundations.org/item?id=292)\n* We ran two introductory [workshops](https://intelligence.org/workshops/) this month, on decision theory and Vingean reflection.\n\n\n \n**General updates**\n* Our team is growing! 
If you are interested in joining us, click through to see our [Office Manager job posting](https://intelligence.org/2015/07/01/wanted-office-manager/).\n* This was Nate Soares’ first month as our [Executive Director](https://intelligence.org/2015/05/31/introductions/). We have big plans in store for the next two months, which Nate has begun laying out here: [fundraising thoughts](https://intelligence.org/2015/07/01/grants-fundraisers/).\n* MIRI has been [awarded](http://futureoflife.org/misc/2015awardees#Fallenstein) a $250,000 grant from the Future of Life Institute spanning three years to make headway on our [research agenda](https://intelligence.org/technical-agenda/). This will fund three workshops and several researcher-years of work on a number of open technical problems. MIRI has also been [awarded](http://futureoflife.org/misc/2015awardees/#Grace) a $49,310 FLI grant to fund strategy research at AI Impacts.\n* Owain Evans of the Future of Humanity Institute, in collaboration with new MIRI hire Jessica Taylor, has been [awarded](http://futureoflife.org/misc/2015awardees/#Evans) a $227,212 FLI grant to develop algorithms that learn human preferences from behavioral data in the presence of irrational and otherwise suboptimal behavior.\n* Cambridge computational logician Ramana Kumar and MIRI research fellow Benja Fallenstein have been [awarded](http://futureoflife.org/misc/2015awardees/#Kumar) a $36,750 FLI grant to study self-referential reasoning in HOL (higher-order logic) proof assistants.\n* Stuart Russell, co-author of the standard textbook on artificial intelligence, has become a MIRI [research advisor](https://intelligence.org/team/#advisors).\n* Nate and Stuart Russell participated in a panel discussion about AI risk at the Information Technology and Innovation Foundation, one of the world's leading public policy think tanks: [video](https://www.youtube.com/watch?v=fWBBe13rAPU).\n* On the [Effective Altruism 
Forum](http://effective-altruism.com/ea/ju/i_am_nate_soares_ama/), Nate answered a large number of questions about MIRI's strategy and priorities. [Excerpts here](http://mindingourway.com/interlude-qa-on-the-ea-forum/).\n\n\n \n**News and links**\n* Nick Bostrom has won the $1,500,000 FLI center grant to set up a new Cambridge-Oxford institute, the [Strategic Research Center for Artificial Intelligence](http://futureoflife.org/misc/2015awardees#Bostrom). The center will “develop policies to be enacted by governments, industry leaders, and others in order to minimize risks and maximize benefit from artificial intelligence development in the longer term.”\n* In a [Cambridge speech](https://www.youtube.com/watch?v=GYQrNfSmQ0M), Stuart Russell discusses accelerating progress in AI and calls for a revised conception of the field's goal.\n* MIRI research advisor Roman Yampolskiy’s new book [*Artificial Superintelligence: A Futuristic Approach*](http://www.amazon.com/Artificial-Superintelligence-A-Futuristic-Approach/dp/1482234432) comes out next week.\n* [Luke Muehlhauser reviews claims](http://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/) made in *Wait But Why*’s popular [blog series](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) on superintelligence.\n\n\n\n |\n\n\n \n\n\n\n\n\n \n\n\nThe post [July 2015 Newsletter](https://intelligence.org/2015/07/05/july-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-05T17:36:37Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "e23dfb8a21f06c5c74fc7a3002057c64", "title": "Grants and fundraisers", "url": "https://intelligence.org/2015/07/01/grants-fundraisers/", "source": "miri", "source_type": "blog", "text": "Two big announcements today:\n\n\n**1. 
MIRI has won $299,310 from the Future of Life Institute’s grant program to jumpstart the field of long-term AI safety research.**\n\n\n* $250,000 will go to our research program over the course of three years. This will go towards running workshops and funding a few person-years of research on the open problems discussed in our [technical agenda](https://intelligence.org/technical-agenda/).\n* $49,310 will go towards [AI Impacts](http://aiimpacts.org), a project which aims to shed light on the implications of advanced artificial intelligence using empirical data and rigorous analysis.\n\n\nMIRI will also collaborate with the primary investigators on two other large FLI grants:\n\n\n* $227,212 has been awarded to Owain Evans at the Future of Humanity Institute to develop algorithms that learn human preferences from data despite human irrationalities. This will be carried out in collaboration with Jessica Taylor, who will become a MIRI research fellow at the end of this summer.\n* $36,750 has been awarded to Ramana Kumar at Cambridge University to study self-reference in the HOL theorem prover. This will be done in collaboration with MIRI research fellow Benja Fallenstein.\n\n\nThe money comes from Elon Musk’s extraordinary donation of $10M to fund FLI’s first-of-its-kind grant competition for research aimed at keeping AI technologies beneficial as capabilities improve.\n\n\nThis funding, coming on the heels of the payments from our sale of the Singularity Summit (which recently concluded) and an extremely generous surprise donation from Jed McCaleb at the end of 2013, means we can continue to ramp up our research efforts. That doesn’t mean our job is done, of course. In January, shortly after the FLI conference, we came to the conclusion that the funding situation for our field was set to improve, and decided to start gearing up for growth. 
That prediction has turned out to be correct, which puts us in an excellent position.\n\n\nWe’re now, indeed, set to grow—the only question is, “How quickly?” Which brings me to announcement number two.\n\n\n**2. Our summer fundraiser is starting in mid-July, and we’re going to try something new.**\n\n\nEvery summer for the past few years, MIRI has run a matching fundraiser, where we get some of our biggest donors to pledge their donations conditional upon your support. Conventional wisdom states that matching fundraisers make it easier to raise funds, and MIRI has had a lot of success with them in the past. They seem to be an excellent way to get donors excited, and the deadline helps create a sense of urgency.\n\n\nHowever, a few different people, including the folks over at [GiveWell](http://blog.givewell.org/2011/12/15/why-you-shouldnt-let-donation-matching-affect-your-giving/) and effective altruism writer [Ben Kuhn](http://www.benkuhn.net/matching), have voiced skepticism about the effectiveness of matching fundraisers. Most of our large donors are happy to donate regardless of whether we raise matching funds, and matching fundraisers tend to put the focus on interactions between small and large donors, rather than on the exciting projects that we could be running with sufficient funding.\n\n\nOur experience with our donors has been that they are exceptionally thoughtful, and that they have thought themselves about how (and how quickly) they want MIRI to grow. 
So for this fundraiser, we’d like to give you more resources to make an informed decision about where to send your money, with better knowledge about how different levels of funding will affect our operations.\n\n\nDetails are forthcoming mid-July, along with a whole lot more information about what we’ve been up to and what we have planned.\n\n\nAs always, thanks for everything: it’s exciting to receive one of the very first grants in this burgeoning field, and we haven’t forgotten that it’s only thanks to your support that the field has made it this far in the first place.\n\n\nThe post [Grants and fundraisers](https://intelligence.org/2015/07/01/grants-fundraisers/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-02T03:15:45Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "045ceb37b21cfeea879e43573c2a9ac0", "title": "Wanted: Office Manager (aka Force Multiplier)", "url": "https://intelligence.org/2015/07/01/wanted-office-manager/", "source": "miri", "source_type": "blog", "text": "We’re looking for a full-time office manager to support our growing [team](https://intelligence.org/team/). It’s a big job that requires organization, initiative, technical chops, and superlative communication skills. You’ll develop, improve, and manage the processes and systems that make us a super-effective organization. You’ll obsess over our processes (faster! easier!) and our systems (simplify! simplify!). Essentially, it’s your job to ensure that everyone at MIRI, including you, is able to focus on their work and Get Sh\\*t Done.\n\n\nThat’s a super-brief intro to what you’ll be working on. 
But first, you need to know if you’ll even like working here.\n\n\n\n### A Bit About Us\n\n\nWe’re a research nonprofit working on the critically important problem of *superintelligence alignment*: how to bring smarter-than-human artificial intelligence into alignment with human values.[1](https://intelligence.org/2015/07/01/wanted-office-manager/#footnote_0_11843 \"More details on our About page.\") Superintelligence alignment is a burgeoning field, and arguably the most important and under-funded research problem in the world. Experts largely agree that AI is likely to exceed human levels of capability on most cognitive tasks in this century—but it’s not clear *when*, and we aren’t doing a very good job of preparing for the possibility. Given how disruptive smarter-than-human AI would be, we need to start thinking now about AI’s global impact. Over the past year, a number of leaders in science and industry have voiced their support for prioritizing this endeavor:\n\n\n* Stuart Russell, co-author of the [leading textbook on artificial intelligence](http://aima.cs.berkeley.edu/) and a MIRI advisor, [gives a compelling argument for doing this work sooner rather than later](https://www.youtube.com/watch?v=GYQrNfSmQ0M).\n* Nick Bostrom of Oxford University, another MIRI research advisor, published *[Superintelligence: Paths, Dangers, Strategies](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/)*, which details the potential value of smarter-than-human AI systems as well as the potential hazards.\n* Elon Musk (Paypal, SpaceX, Tesla), Bill Gates (Microsoft co-founder), Stephen Hawking (world-renowned theoretical physicist), and others have publicly stated their concerns about long-term AI risk.\n* Hundreds of AI researchers and engineers recently signed an [open letter](http://futureoflife.org/misc/open_letter) advocating for more research into robust and beneficial artificial intelligence. 
A number of MIRI publications are cited in the corresponding [Research Priorities](http://futureoflife.org/static/data/documents/research_priorities.pdf) document.\n\n\nPeople are starting to discuss these issues in a more serious way, and MIRI is well-positioned to be a thought leader in this important space. As interest in AI safety grows, we’re growing too—we’ve gone from a single full-time researcher in 2013 to what will likely be a half-dozen research fellows by the end of 2015, and intend to continue growing in 2016.\n\n\nAll of which is to say: we *really need* an office manager who will support our efforts to hack away at the problem of superintelligence alignment!\n\n\nIf our overall mission seems important to you, and you love running well-oiled machines, you’ll probably fit right in. If that’s the case, we can’t wait to hear from you.\n\n\n### What It’s Like to Work at MIRI\n\n\nWe try really hard to make working at MIRI an amazing experience. We have a team full of truly exceptional people—the kind you’ll be excited to work with. Here’s how we operate:\n\n\n#### Flexible Hours\n\n\nWe do not have strict office hours. Simply ensure you’re here enough to be available to the team when needed, and to fulfill all of your duties and responsibilities.\n\n\n#### Modern Work Spaces\n\n\nMany of us have adjustable standing desks with multiple large external monitors. We consider workspace ergonomics important, and try to rig up work stations to be as comfortable as possible.\n\n\n#### Living in the Bay Area\n\n\nWe’re located in downtown Berkeley, California. Berkeley’s monthly average temperature ranges from 60°F in the winter to 75°F in the summer. 
From our office you’re:\n* A 10-second walk to the roof of our building, from which you can view the Berkeley Hills, the Golden Gate Bridge, and San Francisco.\n* A 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area.\n* A 3-minute walk to UC Berkeley Campus.\n* A 5-minute walk to dozens of restaurants (including ones in Berkeley’s well-known Gourmet Ghetto).\n* A 30-minute BART ride to downtown San Francisco.\n* A 30-minute drive to the beautiful west coast.\n* A 3-hour drive to Yosemite National Park.\n\n\n#### Vacation Policy\n\n\nOur vacation policy is that we don’t have a vacation policy. That is, take the vacations you need to be a happy, healthy, productive human. There are checks in place to ensure this policy isn’t abused, but we haven’t actually run into any problems since initiating the policy.\nWe consider our work important, and we care about whether it gets done well, not about how many total hours you log each week. We’d much rather you take a day off than extend work tasks just to fill that extra day.\n\n\n#### Regular Team Dinners and Hangouts\n\n\nWe get the whole team together every few months, order a bunch of food, and have a great time.\n\n\n#### Top-Notch Benefits\n\n\nWe provide top-notch health and dental benefits. We care about our team’s health, and we want you to be able to get health care with as little effort and annoyance as possible.\n\n\n#### Agile Methodologies\n\n\nOur ops team follows standard Agile best practices, meeting regularly to plan, as a team, the tasks and priorities over the coming weeks. If the thought of being part of an effective, well-functioning operation gets you really excited, that’s a promising sign!\n\n\n#### Other Tidbits\n\n\n* Moving to the Bay Area? We’ll cover up to $3,500 in moving expenses.\n* Use public transit to get to work? You get a transit pass with a large monthly allowance.\n* All the snacks and drinks you could want at the office.\n* You’ll get a smartphone and full plan.\n* This is a salaried position. 
(That is, your job is not to sit at a desk for 40 hours a week. Your job is to get your important work *done*, even if this occasionally means working on a weekend or after hours.)\n\n\nIt can also be surprisingly motivating to realize that your day job is helping people explore the frontiers of human understanding, mitigate global catastrophic risk, etc., etc. At MIRI, we try to tackle the very largest problems facing humanity, and that can be a pretty satisfying feeling.\nIf this sounds like your ideal work environment, read on! It’s time to talk about your role.\n\n\n### What an Office Manager Does and Why It Matters\n\n\nOur ops team and researchers (and collection of remote contractors) are *swamped* making progress on the huge task we’ve taken on as an organization.\nThat’s where you come in. An office manager is the oil that keeps the engine running. They’re *indispensable*. Office managers are force multipliers: a good one doesn’t merely improve their own effectiveness—they make the entire *organization* better.\nWe need you to build, oversee, and improve all the “behind-the-scenes” things that ensure MIRI runs smoothly and effortlessly. You will devote your full attention to looking at the big picture and the small details and making sense of it all. You’ll turn all of that into actionable information and tools that make the whole team better. That’s the job.\nSometimes this looks like researching and testing out new and exciting services. Other times this looks like stocking the fridge with drinks, sorting through piles of mail, lugging bags of groceries, or spending time on the phone on hold with our internet provider. But don’t think that the more tedious tasks are low-value. If the hard tasks don’t get done, *none* of MIRI’s work is possible. 
Moreover, you’re actively *encouraged* to find creative ways to make the boring stuff more efficient—making an awesome spreadsheet, writing a script, training a contractor to take on the task—so that you can spend more time on what you find most exciting.\nWe’re small, but we’re growing, and this is an opportunity for you to grow too. There’s room for advancement at MIRI (if that interests you), based on your interests and performance.\n\n\n#### Sample Tasks\n\n\nYou’ll have a wide variety of responsibilities, including, but not necessarily limited to, the following:\n* Orienting and training new staff.\n* Onboarding and offboarding staff and contractors.\n* Managing employee benefits and services, like transit passes and health care.\n* Payroll management; handling staff questions.\n* Championing our internal policies and procedures wiki—keeping everything up to date, keeping everything accessible, and keeping staff aware of relevant information.\n* Managing various services and accounts (e.g., internet, phone, insurance).\n* Championing our work space, with the goal of making the MIRI office a fantastic place to work.\n* Running onsite logistics for introductory workshops.\n* Processing all incoming mail packages.\n* Researching and implementing better systems and procedures.\n\n\nYour “value-add” is taking responsibility for making all of these things happen. Having a competent individual in charge of this diverse set of tasks at MIRI is *extremely valuable*!\n\n\n#### A Day in the Life\n\n\nA typical day in the life of MIRI’s office manager may look something like this:\n* Come in.\n* Process email inbox.\n* Process any incoming mail, scanning/shredding/dealing-with as needed.\n* Stock the fridge, review any low-stocked items, and place an order online for whatever’s missing.\n* Onboard a new contractor.\n* Spend some time thinking of a faster/easier way to onboard contractors. 
Implement any hacks you come up with.\n* Follow up with Employee X about their benefits question.\n* Outsource some small tasks to TaskRabbit or Upwork. Follow up with previously outsourced tasks.\n* Notice that you’ve spent a few hours per week the last few weeks doing xyz. Spend some time figuring out whether you can eliminate the task completely, automate it in some way, outsource it to a service, or otherwise simplify the process.\n* Review the latest post drafts on the wiki. Polish drafts as needed and move them to the appropriate location.\n* Process email.\n* Go home.\n\n\n#### You’re the One We’re Looking For If:\n\n\n* You are authorized to work in the US. (Prospects for obtaining an employment-based visa for this type of position are slim; sorry!)\n* You can solve problems for yourself in new domains; you find that you don’t generally need to be told what to do.\n* You love organizing information. (There’s *a lot of it*, and it needs to be super-accessible.)\n* Your life is organized and structured.\n* You enjoy trying things you haven’t done before. (How else will you learn which things work?)\n* You’re way more excited at the thought of being the jack-of-all-trades than at the thought of being the specialist.\n* You are good with people—good at talking about things that are going great, as well as things that aren’t.\n* People thank you when you deliver difficult news. 
You’re that good.\n* You can notice all the subtle and wondrous ways processes can be automated, simplified, streamlined… while still keeping the fridge stocked in the meantime.\n* You know your way around a computer really well.\n* You enjoy eliminating unnecessary work, automating automatable work, outsourcing outsourceable work, and executing on everything else.\n* You want to do what it takes to help all other MIRI employees focus on their jobs.\n* You’re the sort of person who sees the world, organizations, and teams as systems that can be observed, understood, and optimized.\n* You think Sam is the real hero in *Lord of the Rings*.\n* You have the strong ability to take real responsibility for an issue or task, and ensure it gets done. (This doesn’t mean it has to get done by *you*; but it has to get done *somehow*.)\n* You celebrate excellence and relentlessly pursue improvement.\n* You lead by example.\n\n\n#### Bonus Points:\n\n\n* Your technical chops are really strong. (Dabbled in scripting? HTML/CSS? Automator?)\n* Involvement in the Effective Altruism space.\n* Involvement in the broader AI-risk space.\n* Previous experience as an office manager.\n\n\n#### Experience & Education Requirements\n\n\n* Let us know about anything that’s evidence that you’ll fit the bill.\n\n\n#### How to Apply\n\n\n~~Apply by July 31, 2015.~~ The application deadline has passed. Thanks for your consideration.\nP.S. Share the love! If you know someone who might be a perfect fit, we’d really appreciate it if you pass this along!\n\n\n---\n\n1. 
More details on our [About](https://intelligence.org/about/) page.\n\nThe post [Wanted: Office Manager (aka Force Multiplier)](https://intelligence.org/2015/07/01/wanted-office-manager/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-01T19:04:14Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "e1b37f9df6f06530b767a2dc8d8906c3", "title": "New report: “The Asilomar Conference: A Case Study in Risk Mitigation”", "url": "https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/", "source": "miri", "source_type": "blog", "text": "Today we release a new report by Katja Grace, “**[The Asilomar Conference: A Case Study in Risk Mitigation](https://intelligence.org/files/TheAsilomarConference.pdf)**” (PDF, 67pp).\n\n\nThe 1975 Asilomar Conference on Recombinant DNA is sometimes cited as an example of successful action by scientists who preemptively identified an emerging technology’s potential dangers and intervened to mitigate the risk. We conducted this investigation to check whether that basic story is true, and what lessons those events might carry for AI and other [unprecedented technological risks](http://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf).\n\n\nTo prepare this report, Grace consulted several primary and secondary sources, and also conducted four interviews that are cited in the report. 
The interviews are published here:\n\n\n* [David Baltimore on Asilomar](https://docs.google.com/document/d/1ycXT0htU8nOkHV1NrP5AbMKgAQAyZ4k_Nlo_mzWF4ZI/edit?usp=sharing)\n* [Paul Berg on Asilomar](https://docs.google.com/document/d/1eYpt0oQz80l9LTNb8tCvEaVzKSLslJ15XLBA0PVzGZg/edit?usp=sharing)\n* [George Church on Asilomar](https://docs.google.com/document/d/1rNRz4HPP0f9cH9KVLnAFY4IvOplKusgYt-BPAQGea2k/edit?usp=sharing)\n* [Alexander Berger on early responses to future risks](https://docs.google.com/document/d/1oD0Ti9WiET3mTKBfowxWJaosV1OdnBl5jK8DYb-bWmc/edit?usp=sharing)\n\n\nThe basic conclusions of this report, which have not been separately vetted, are:\n\n\n1. The specific dangers that motivated the Asilomar conference were relatively immediate, rather than long-term. These dangers turned out to be effectively nonexistent. Experts disagree as to whether scientists should have known better with the information they had at the time.\n2. The conference appears to have caused improvements in general lab safety practices.\n3. The conference plausibly averted regulation and helped scientists to be on better terms with the public. Whether these effects are positive for society depends on (e.g.) 
whether it is better for this category of scientific activities to go unregulated, a question not addressed by this report.\n\n\nThe post [New report: “The Asilomar Conference: A Case Study in Risk Mitigation”](https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-07-01T04:49:44Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "0c3ca0ca9bc51ed3ca4c52da378e006a", "title": "June 2015 Newsletter", "url": "https://intelligence.org/2015/06/01/june-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n \n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nDear friends of MIRI,\nAs we [announced](https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/) on May 6th, I've decided to take a research position at [GiveWell](http://www.givewell.org/). With unanimous support from the Board, MIRI research fellow Nate Soares will be taking my place as Executive Director starting June 1st. Nate has introduced himself [here](https://intelligence.org/2015/05/31/introductions/).\nI’m proud of what the MIRI team has accomplished during my tenure as Executive Director, and I'm excited to watch Nate take MIRI to the next level. My enthusiasm for MIRI’s work remains as strong as ever, and I look forward to supporting MIRI going forward, both financially and as a close advisor. 
(See [here](http://lukemuehlhauser.com/f-a-q-about-my-transition-to-givewell/) for further details on my transition to GiveWell.)\nThank you all for your support!\n– Luke Muehlhauser\n\n**Research updates**\n* Two MIRI papers were [accepted to AGI-15](https://intelligence.org/2015/05/29/two-papers-accepted-to-agi-15/).\n* [Slides and videos](https://intelligence.org/2015/05/24/miri-related-talks-from-the-decision-theory-conference-at-cambridge-university/) from MIRI-related talks at a Cambridge University decision theory conference we co-organized.\n* New at *AI Impacts*: [Similarity Between Historical and Contemporary AI Predictions](http://aiimpacts.org/similarity-between-historical-and-contemporary-ai-predictions/); [Publication biases toward shorter predictions](http://aiimpacts.org/short-prediction-publication-biases/); [Selection bias from optimistic experts](http://aiimpacts.org/bias-from-optimistic-predictors/); [Why do AGI researchers expect AI so soon?](http://aiimpacts.org/why-do-agi-researchers-expect-ai-so-soon/); [A new approach to predicting brain-computer parity](http://aiimpacts.org/tepsbrainestimate/); [Group Differences in AI Predictions](http://aiimpacts.org/group-differences-in-ai-predictions/); [The Maes-Garreau Law](http://aiimpacts.org/the-maes-garreau-law/); [AI Timeline predictions in surveys and statements](http://aiimpacts.org/ai-timeline-predictions-in-surveys-and-statements/); [MIRI AI Predictions Dataset](http://aiimpacts.org/miri-ai-predictions-dataset/); [Brain performance in TEPS](http://aiimpacts.org/brain-performance-in-teps/).\n\n\n**News updates**\n* New [MIRIx groups](https://intelligence.org/mirix/) in Berkeley and New York City.\n\n\n**Other updates**\n* Nick Bostrom's [TED talk](http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are) on machine superintelligence.\n* [Effective Altruism Global](http://www.eaglobal.org/) is this August, in the San Francisco Bay Area (USA), Oxford (UK), 
and Melbourne (Australia). Keynote speaker is Elon Musk. Apply by June 10th!\n\n\n\n |\n\n |\n\n \n |\n\n\nThe post [June 2015 Newsletter](https://intelligence.org/2015/06/01/june-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-06-01T21:00:06Z", "authors": ["Jesse Galef"], "summaries": []} -{"id": "779a0a36a72562c7c2956c4cb5fb03fb", "title": "Introductions", "url": "https://intelligence.org/2015/05/31/introductions/", "source": "miri", "source_type": "blog", "text": "[![natesoares](http://intelligence.org/wp-content/uploads/2015/05/natesoares.jpg)](http://intelligence.org/wp-content/uploads/2015/05/natesoares.jpg)\nHello, I’m Nate Soares, and I’m pleased to be taking the reins at MIRI on Monday morning.\n\n\nFor those who don’t know me, I’ve been a research fellow at MIRI for a little over a year now. I attended my first MIRI workshop in December of 2013 while I was still working at Google, and was offered a job soon after. Over the last year, I wrote a dozen papers, half as primary author. Six of those papers were written for the [MIRI technical agenda](http://intelligence.org/technical-agenda/), which we compiled in preparation for the [Puerto Rico conference](http://futureoflife.org/misc/ai_conference) put on by FLI in January 2015. Our technical agenda is cited extensively in the [research priorities document](http://futureoflife.org/static/data/documents/research_priorities.pdf) referenced by the [open letter](http://futureoflife.org/misc/open_letter) that came out of that conference. In addition to the Puerto Rico conference, I attended five other conferences over the course of the year, and gave a talk at three of them. 
I also put together the [MIRI research guide](https://intelligence.org/research-guide/) (a resource for students interested in getting involved with AI alignment research), and of course I spent a fair bit of time doing the actual research at workshops, at researcher retreats, and on my own. It’s been a jam-packed year, and it’s been loads of fun.\n\n\nI’ve always had a natural inclination towards leadership: in the past, I’ve led a F.I.R.S.T. Robotics team, managed two volunteer theaters, served as president of an Entrepreneur’s Club, and co-founded a startup or two. However, this is the first time I’ve taken a professional leadership role, and I’m grateful that I’ll be able to call upon the experience and expertise of the board, of our advisors, and of outgoing executive director Luke Muehlhauser.\n\n\nMIRI has improved greatly under Luke’s guidance these last few years, and I’m honored to have the opportunity to continue that trend. I’ve spent a lot of time in conversation with Luke over the past few weeks, and he’ll remain a close advisor going forward. He and the management team have spent the last year or so really tightening up the day-to-day operations at MIRI, and I’m excited about all the opportunities we have open to us now.\n\n\nThe last year has been pretty incredible. Discussion of long-term AI risks and benefits has finally hit the mainstream, thanks to the success of Bostrom’s *[Superintelligence](http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)* and FLI’s Puerto Rico conference, and due in no small part to years of movement-building and effort made possible by MIRI’s supporters. 
Over the last year, I’ve forged close connections with our friends at the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/), the [Future of Life Institute](http://futureoflife.org/), and the [Centre for the Study of Existential Risk](http://cser.org/), as well as with a number of industry teams and academic groups who are focused on long-term AI research. I’m looking forward to our continued participation in the global conversation about the future of AI. These are exciting times in our field, and MIRI is well-poised to grow and expand. Indeed, one of my top priorities as executive director is to grow the research team.\n\n\nThat project is already well under way. I’m pleased to announce that Jessica Taylor has accepted a full-time position as a MIRI researcher starting in August 2015. We are also hosting a series of summer workshops focused on various technical AI alignment problems, the second of which is just now concluding. Additionally, we are working with the [Center for Applied Rationality](http://rationality.org) to put on a [summer fellows program](http://rationality.org/miri-summer-fellows-2015/) designed for people interested in gaining the skills needed for research in the field of AI alignment.\n\n\nI want to take a moment to extend my heartfelt thanks to all those supporters of MIRI who have brought us to where we are today: We have a slew of opportunities before us, and it’s all thanks to your effort and support these past years. MIRI couldn’t have made it as far as it has without you. 
Exciting times are ahead, and your continued support will allow us to grow quickly and pursue all the opportunities that the last year opened up.\n\n\nFinally, in case you want to get to know me a little better, I’ll be answering questions on the [effective altruism forum](http://effective-altruism.com/) at 3PM Pacific time on Thursday June 11th.\n\n\nOnwards,\n\n\nNate\n\n\nThe post [Introductions](https://intelligence.org/2015/05/31/introductions/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-06-01T01:00:40Z", "authors": ["Nate Soares"], "summaries": []} -{"id": "e2d216787864e8605791230bacf882b3", "title": "Two papers accepted to AGI-15", "url": "https://intelligence.org/2015/05/29/two-papers-accepted-to-agi-15/", "source": "miri", "source_type": "blog", "text": "MIRI has two papers forthcoming in the conference proceedings of [AGI-15](http://agi-conf.org/2015/). The first paper, previously released as a MIRI technical report, is “[Reflective variants of Solomonoff induction and AIXI](https://intelligence.org/files/ReflectiveSolomonoffAIXI.pdf),” by Benja Fallenstein, Nate Soares, and Jessica Taylor.\n\n\n![Two attempts](http://intelligence.org/wp-content/uploads/2015/05/Two-attempts.png)The second paper, “[Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings](https://intelligence.org/files/CounterpossibleReasoning.pdf),” by Nate Soares and Benja Fallenstein, is a compressed version of some material from [an earlier technical report](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf). This new paper’s abstract is:\n\n\n\n> This paper motivates the study of counterpossibles (logically impossible counterfactuals) as necessary for developing a decision theory suitable for generally intelligent agents embedded within their environments. 
We discuss two attempts to formalize a decision theory using counterpossibles, one based on graphical models and another based on proof search.\n> \n> \n\n\nFallenstein will be attending AGI-15.\n\n\nThe post [Two papers accepted to AGI-15](https://intelligence.org/2015/05/29/two-papers-accepted-to-agi-15/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-05-29T18:55:16Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f29d7943fdc5fe32522a498695dfcd58", "title": "MIRI-related talks from the decision theory conference at Cambridge University", "url": "https://intelligence.org/2015/05/24/miri-related-talks-from-the-decision-theory-conference-at-cambridge-university/", "source": "miri", "source_type": "blog", "text": "Recently, MIRI co-organized a conference at Cambridge University titled [Self-prediction in decision theory and artificial intelligence](http://www.phil.cam.ac.uk/events/decision-theory-conf). At least six of the conference’s talks directly discussed issues raised in [MIRI’s technical agenda](https://intelligence.org/technical-agenda/):\n\n\n1. MIRI research fellow ([and soon, Executive Director](https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/)) Nate Soares gave a talk titled “**What is a what if?**” ([.pdf w/o notes](https://intelligence.org/wp-content/uploads/2015/05/Soares-What-is-a-what-if.pdf), [.pptx w/ notes](https://intelligence.org/wp-content/uploads/2015/05/Soares-What-is-a-what-if-with-notes.pptx)), on theories of counterfactuals in the context of AI.\n2. MIRI research fellow Patrick LaVictoire gave a talk titled “**Decision theory and the logic of provability**” ([.pdf](https://intelligence.org/wp-content/uploads/2015/05/LaVictoire-Decision-Theory-and-the-Logic-of-Provability.pdf)), on the modal agents framework.\n3. 
MIRI research fellow Benja Fallenstein gave a talk titled “**Vingean reflection**” ([.pdf](https://intelligence.org/wp-content/uploads/2015/05/Fallenstein-Vingean-reflection.pdf)).\n4. Googler Vladimir Slepnev, a past MIRI workshop attendee, gave a talk titled “**Models of decision-making based on logical counterfactuals**” ([.pdf](https://intelligence.org/wp-content/uploads/2015/05/Slepnev-Models-of-decision-making-based-on-logical-counterfactuals.pdf)).\n5. MIRI research associate Stuart Armstrong (Oxford) gave a talk titled “**Anthropic decision theory**” ([.pdf](https://intelligence.org/wp-content/uploads/2015/05/Armstrong-Anthropic-Decision-Theory-slides.pdf), [video](https://www.youtube.com/watch?v=aiGOGkBiWEo)).\n6. The conference also coincided with a public lecture by Stuart Russell titled “**The long-term future of artificial intelligence**” ([video](https://www.youtube.com/watch?v=GYQrNfSmQ0M&t=3m22s)).\n\n\nOur thanks to everyone who attended, and especially to our co-organizers: Arif Ahmed, Huw Price, and Seán Ó hÉigeartaigh!\n\n\nThe post [MIRI-related talks from the decision theory conference at Cambridge University](https://intelligence.org/2015/05/24/miri-related-talks-from-the-decision-theory-conference-at-cambridge-university/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-05-24T15:51:02Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "c7de694d566eea294166fa80a6583817", "title": "A fond farewell and a new Executive Director", "url": "https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/", "source": "miri", "source_type": "blog", "text": "![LukeMeuhlhauser_w135](https://intelligence.org/wp-content/uploads/2012/06/LukeMeuhlhauser_w135.jpg)Dear friends and supporters of MIRI,\n\n\nI have some important news to share with you about the future of MIRI.\n\n\nGiven my passion for doing research, I’m excited to have accepted a research position at 
[GiveWell](http://www.givewell.org/). Like MIRI, GiveWell is an excellent cultural fit for me, and I believe they’re doing important work. I look forward to joining their team on June 1st. I’m also happy to report that I will be leaving MIRI in capable leadership hands.\n\n\nBack in 2011, when MIRI’s Board of Directors asked me to take the Executive Director role, I was reluctant to leave the research position I held at the time. But I also wanted to do what best served MIRI’s mission. Looking back at the past three years, I’m proud of what the MIRI team has accomplished during my tenure as Executive Director. We’ve built a solid foundation, and our research program [has picked up significant momentum](https://intelligence.org/2015/03/22/2014-review/). MIRI will continue to thrive as I transition out of my leadership role.\n\n\nMy enthusiasm for MIRI’s work remains as strong as ever, and I look forward to supporting MIRI going forward, both financially and as a close advisor. I’ll also continue to write about the future of AI on [my personal blog](http://lukemuehlhauser.com/).\n\n\n[Nate Soares](http://so8r.es/) will be stepping into the Executive Director role upon my departure, with unanimous support from myself and the rest of the Board.\n\n\nNate was our top choice for many reasons. 
During the past year at MIRI, Nate has demonstrated his commitment to the mission, his technical abilities, his strong work ethic, his ability to rapidly acquire new skills, his ability to work well with others, his ability to communicate clearly, his ability to think through big-picture strategic issues, and other aspects of executive capability.\n\n\nDuring the transition, I’ll be sharing with Nate everything I think I’ve learned in the past three years about running an effective research institute, and I look forward to seeing where he leads MIRI next.\n\n\nMIRI continues to seek additional research and executive capacity, and our need for both will only grow as I depart and as Nate transitions from a research role to the Executive Director role. If you are a math or computer science researcher, or if you have significant executive experience, and you are interested in participating in MIRI’s vital and significant research effort, please apply [here](https://intelligence.org/careers/).\n\n\nThe post [A fond farewell and a new Executive Director](https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-05-06T18:12:45Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e9b02056ca1d57a8d1a9c05ff4a11d55", "title": "May 2015 Newsletter", "url": "https://intelligence.org/2015/05/01/may-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n \n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n**Research updates**\n* [Two new papers](https://intelligence.org/2015/04/28/new-papers-reflective/) on reflective oracles and agents.\n* New articles on AI Impacts (last 2 months): [Preliminary prices for human-level hardware](http://aiimpacts.org/preliminary-prices-for-human-level-hardware/), 
[Trends in the cost of computing](http://aiimpacts.org/trends-in-the-cost-of-computing/), [Glial signaling](http://aiimpacts.org/glial-signaling/), [Scale of the human brain](http://aiimpacts.org/scale-of-the-human-brain/), [Neuron firing rates in humans](http://aiimpacts.org/rate-of-neuron-firing/), [Metabolic estimates of rate of cortical firing](http://aiimpacts.org/metabolic-estimates-of-rate-of-cortical-firing/), [Current FLOPS prices](http://aiimpacts.org/current-flops-prices/), [The cost of TEPS](http://aiimpacts.org/cost-of-teps/), [Kurzweil](http://aiimpacts.org/kurzweil-the-singularity-is-near/) and [Allen](http://aiimpacts.org/allen-the-singularity-isnt-near/) on the singularity, [Wikipedia history of GFLOPS costs](http://aiimpacts.org/wikipedia-history-of-gflops-costs/).\n\n\n**News updates**\n* You can now [earmark donations for AI Impacts](http://aiimpacts.org/donate/), if you care more about AI forecasting research than [superintelligence alignment research](https://intelligence.org/technical-agenda/). But if you trust MIRI to allocate funds, please make [normal, unrestricted donations](https://intelligence.org/donate/).\n* CFAR will (conditionally) be running a three week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem. Details and application form are [here](http://lesswrong.com/lw/m3h/cfarrun_miri_summer_fellows_program_july_326/).\n\n\n\n**Other updates**\n* [Musk and Gates on machine superintelligence](http://lukemuehlhauser.com/musk-and-gates-on-superintelligence-and-fast-takeoff/). 
Gates has seconded Musk's recommendation of [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/).\n\n\n\nAs always, please don't hesitate to let us know if you have any questions or comments.\n \nBest,\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n \n |\n\n\nThe post [May 2015 Newsletter](https://intelligence.org/2015/05/01/may-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-05-01T21:00:22Z", "authors": ["Jesse Galef"], "summaries": []} -{"id": "67f5f3949fa4436cd6b23b17139baae9", "title": "New papers on reflective oracles and agents", "url": "https://intelligence.org/2015/04/28/new-papers-reflective/", "source": "miri", "source_type": "blog", "text": "We recently released two new papers on reflective oracles and agents.\n\n\nThe first is “[Reflective oracles: A foundation for classical game theory](https://intelligence.org/files/ReflectiveOracles.pdf),” by Benja Fallenstein, Jessica Taylor, and Paul Christiano.\n\n\n[![reflective oracles](http://intelligence.org/wp-content/uploads/2015/04/reflective-oracles.png)](https://intelligence.org/files/ReflectiveOracles.pdf)Abstract:\n\n\n\n> Classical game theory treats players as special—a description of a game contains a full, explicit enumeration of all players—even though in the real world, “players” are no more fundamentally special than rocks or clouds. It isn’t trivial to find a decision-theoretic foundation for game theory in which an agent’s co-players are a non-distinguished part of the agent’s environment. Attempts to model both players and the environment as Turing machines, for example, fail for standard diagonalization reasons.\n> \n> \n> In this paper, we introduce a “reflective” type of oracle, which is able to answer questions about the outputs of oracle machines with access to the same oracle. 
These oracles avoid diagonalization by answering some queries randomly. We show that machines with access to a reflective oracle can be used to define rational agents using causal decision theory. These agents model their environment as a probabilistic oracle machine, which may contain other agents as a non-distinguished part.\n> \n> \n> We show that if such agents interact, they will play a Nash equilibrium, with the randomization in mixed strategies coming from the randomization in the oracle’s answers. This can be seen as providing a foundation for classical game theory in which players aren’t special.\n> \n> \n\n\nThe second paper develops these ideas in the context of Solomonoff induction and Marcus Hutter’s AIXI. It is “[Reflective variants of Solomonoff induction and AIXI](https://intelligence.org/files/ReflectiveSolomonoffAIXI.pdf),” by Benja Fallenstein, Nate Soares, and Jessica Taylor.\n\n\n[![reflective AIXI](http://intelligence.org/wp-content/uploads/2015/04/reflective-AIXI.png)](https://intelligence.org/files/ReflectiveSolomonoffAIXI.pdf)Abstract:\n\n\n\n> Solomonoff induction and AIXI model their environment as an arbitrary Turing machine, but are themselves uncomputable. This fails to capture an essential property of real-world agents, which cannot be more powerful than the environment they are embedded in; for example, AIXI cannot accurately model game-theoretic scenarios in which its opponent is another instance of AIXI.\n> \n> \n> In this paper, we define *reflective* variants of Solomonoff induction and AIXI, which are able to reason about environments containing other, equally powerful reasoners. To do so, we replace Turing machines by probabilistic oracle machines (stochastic Turing machines with access to an oracle). 
We then use *reflective oracles*, which answer questions of the form, “is the probability that oracle machine *M* outputs 1 greater than *p*, when run on this same oracle?” Diagonalization can be avoided by allowing the oracle to answer randomly if this probability is equal to *p*; given this provision, reflective oracles can be shown to exist. We show that reflective Solomonoff induction and AIXI can themselves be implemented as oracle machines with access to a reflective oracle, making it possible for them to model environments that contain reasoners as powerful as themselves.\n> \n> \n\n\nThe post [New papers on reflective oracles and agents](https://intelligence.org/2015/04/28/new-papers-reflective/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-04-28T21:36:59Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "d68a26bc1557701b2615ca560cc77fbc", "title": "April 2015 newsletter", "url": "https://intelligence.org/2015/04/01/april-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n \n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n**Research updates**\n* We've launched a new research forum, the [Intelligent Agent Foundations Forum](https://intelligence.org/2015/03/18/introducing-intelligent-agent-foundations-forum/) (IAFF), devoted solely to technical research on the superintelligence alignment challenge.\n* On IAFF, three \"forum digest\" posts summarize much of the work conducted on that forum prior to its public launch: on [UDT](http://agentfoundations.org/item?id=160), on [reflective oracles](http://agentfoundations.org/item?id=165), and on [corrigibility-related work](http://agentfoundations.org/item?id=167).\n* New report: “[An Introduction to Löb’s Theorem in MIRI 
Research](https://intelligence.org/2015/03/18/new-report-introduction-lobs-theorem-miri-research/)\"\n* New interview: [Bill Hibbard](https://intelligence.org/2015/03/09/bill-hibbard/).\n* [Recent AI control brainstorming by Stuart Armstrong](https://intelligence.org/2015/03/27/recent-ai-control-brainstorming/).\n* Slides from Benja's talk: \"[Beneficial Smarter-than-human Intelligence: the Challenges and the Path Forward](https://intelligence.org/2015/03/09/fallenstein-talk-aps-march-meeting-2015/)\"\n\n\n**News updates**\n* We've released Yudkowsky's Less Wrong Sequences in ebook form, as [*Rationality: From AI to Zombies*](https://intelligence.org/2015/03/12/rationality-ai-zombies/). Paper versions should be available later this year.\n* [MIRI's 2014 annual review](https://intelligence.org/2015/03/22/2014-review/).\n\n\n\n**Other news**\n* The Center for the Study of Existential Risk at the University of Cambridge is [hiring four new research associates](http://cser.org/vacancies/) to work on their research project, \"Towards a Science of Extreme Technological Risk.\"\n* The Future of Humanity Institute at the University of Oxford is [hiring one researcher](http://www.fhi.ox.ac.uk/now-hiring-researchers/) to work on the long-term AI control challenge.\n* The Future of Life Institute now has a [News](http://futureoflife.org/news) page.\n* *Smarter Than Us* and related books were recently [reviewed](http://on.ft.com/1x92hwo) in *Financial Times*.\n\n\nAs always, please don't hesitate to let us know if you have any questions or comments.\n \nBest,\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n \n |\n\n\nThe post [April 2015 newsletter](https://intelligence.org/2015/04/01/april-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-04-01T23:00:39Z", "authors": ["Jesse Galef"], "summaries": []} -{"id": "690e6d8351bf2a549e4692311338b551", "title": "Recent AI control brainstorming by 
Stuart Armstrong", "url": "https://intelligence.org/2015/03/27/recent-ai-control-brainstorming/", "source": "miri", "source_type": "blog", "text": "![Oxford_Stuart-Armstrong](http://intelligence.org/wp-content/uploads/2015/03/Oxford_Stuart-Armstrong.jpg)MIRI recently sponsored Oxford researcher Stuart Armstrong to take a solitary retreat and brainstorm new ideas for AI control. This brainstorming generated 16 new control ideas, of varying usefulness and polish. During the past month, he has described each new idea, and linked those descriptions from his index post: [New(ish) AI control ideas](http://lesswrong.com/lw/lt6/newish_ai_control_ideas/).\n\n\nHe also named each AI control idea, and then drew a picture to represent (very roughly) how the new ideas related to each other. In the picture below, an arrow Y→X can mean “X depends on Y”, “Y is useful for X”, “X complements Y on this problem” or “Y inspires X.” The underlined ideas are the ones Stuart currently judges to be most important or developed.\n\n\n![Newish AI control ideas](http://intelligence.org/wp-content/uploads/2015/03/Newish-AI-control-ideas.png)\nPreviously, Stuart developed the AI control idea of *utility indifference*, which plays a role in MIRI’s paper [Corrigibility](https://intelligence.org/files/Corrigibility.pdf) (Stuart is a co-author). He also developed [anthropic decision theory](http://arxiv.org/abs/1110.6437) and some ideas for [reduced impact AI](http://lesswrong.com/lw/iyx/reduced_impact_ai_no_back_channels/) and [oracle AI](http://www.fhi.ox.ac.uk/oracle.pdf). He has contributed to the strategy and forecasting challenges of ensuring good outcomes from advanced AI, e.g. in [Racing to the Precipice](http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf) and [How We’re Predicting AI — or Failing To](https://intelligence.org/files/PredictingAI.pdf). 
MIRI previously contracted him to write a short book introducing the superintelligence control challenge to a popular audience, [*Smarter Than Us*](https://intelligence.org/smarter-than-us/).\n\n\nThe post [Recent AI control brainstorming by Stuart Armstrong](https://intelligence.org/2015/03/27/recent-ai-control-brainstorming/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-27T16:21:28Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "9d8d7a8e75d54362a63559d86c8dcecd", "title": "2014 in review", "url": "https://intelligence.org/2015/03/22/2014-review/", "source": "miri", "source_type": "blog", "text": "It’s time for **my review of MIRI in 2014**.[1](https://intelligence.org/2015/03/22/2014-review/#footnote_0_11640 \"This year’s annual review is shorter than last year’s 5-part review of 2013, in part because 2013 was an unusually complicated focus-shifting year, and in part because, in retrospect, last year’s 5-part review simply took more effort to produce than it was worth. Also, because we recently finished switching to accrual accounting, I can now more easily provide annual reviews of each calendar year rather than of a March-through-February period. 
As such, this review of calendar year 2014 will overlap a bit with what was reported in the previous annual review (of March 2013 through February 2014).\") A post about our next strategic plan will follow in the next couple months, and I’ve included some details about ongoing projects [at the end of this review](https://intelligence.org/feed/?paged=39#comingsoon).\n\n\n \n\n\n#### 2014 Summary\n\n\nSince [early 2013](https://intelligence.org/2013/04/13/miris-strategy-for-2013/), MIRI’s core goal has been to help create a new field of research devoted to the technical challenges of getting good outcomes from future AI agents with highly general capabilities, including the capability to [recursively self-improve](https://books.google.com/books?id=7_H8AwAAQBAJ&printsec=frontcover&dq=bostrom+superintelligence&hl=en&sa=X&ei=RM8IVafEOsHZoATBxYGACQ&ved=0CB4Q6AEwAA#v=onepage&q=recursive%20self-improvement&f=false).[2](https://intelligence.org/2015/03/22/2014-review/#footnote_1_11640 \"Clearly there are forecasting and political challenges as well, and there are technical challenges related to ensuring good outcomes from nearer-term AI systems, but MIRI has chosen to specialize in the technical challenges of aligning superintelligence with human interests. See also: Friendly AI research as effective altruism and Why MIRI?\")\n\n\nLaunching a new field has been a team effort. In 2013, MIRI decided to focus on its comparative advantage in [defining open problems](https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/) and making technical progress on them. 
We’ve been fortunate to coordinate with other actors in this space — [FHI](http://www.fhi.ox.ac.uk/), [CSER](http://cser.org/), [FLI](http://futureoflife.org/), and others — who have leveraged their comparative advantages in conducting public outreach, building coalitions, pitching the field to grantmakers, interfacing with policymakers, and more.[3](https://intelligence.org/2015/03/22/2014-review/#footnote_2_11640 \"Obviously, the division of labor was more complex than I’ve described here. For example, FHI produced some technical research progress in 2014, and MIRI did some public outreach.\")\n\n\nMIRI began 2014 with several open problems identified, and with some progress made toward solving them, but with very few people available to do the work. Hence, **most of our research program effort in 2014 was aimed at attracting new researchers to the field and making it easier for them to learn the material and contribute**. This was the primary motivation for [our new technical agenda overview](https://intelligence.org/technical-agenda/), the [MIRIx program](https://intelligence.org/mirix/), our [new research guide](https://intelligence.org/research-guide/), and more (see below). Nick Bostrom’s [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/) was also quite helpful for explaining why this field of research should exist in the first place.\n\n\nToday the field is much larger and healthier than it was at the beginning of 2014. MIRI now has four full-time technical researchers instead of just one. Around 85 people have attended one or more MIRIx workshops. 
There are so many promising researchers who have expressed interest in our technical research that ~25 of them have already confirmed interest and availability to attend a MIRI introductory workshop this summer, and this mostly doesn’t include people who have attended [past MIRI workshops](https://intelligence.org/workshops/), nor have we sent out all the invites yet. Moreover, there are now several researchers we know who are plausible MIRI hires in the next 1-2 years.\n\n\nI am extremely grateful to MIRI’s donors, without whom this progress would have been impossible.\n\n\nThe rest of this post provides a more detailed summary of our activities in 2014.\n\n\n\n#### Overview of 2014 activities\n\n\n1. **Technical research:** We hired 2 new research fellows, launched the MIRIx program, hosted many visiting researchers, and released 14 technical papers/reports, including our new technical agenda overview.\n2. **Strategic research:** We published 15 analyses, 5 papers, and 54 expert interviews.\n3. **Outreach:** We organized some talks and media stories, and released *Smarter Than Us*. Yudkowsky also continued writing *Harry Potter and the Methods of Rationality*, which has proven to be a surprisingly effective outreach tool for MIRI’s work.\n4. **Fundraising:** We raised $1,237,557 in contributions in 2014. This is slightly less than we raised in 2013, but only because 2013’s numbers include a one-time, outlier donation of ~$525,000.\n5. **Operations:** We made many improvements to MIRI’s organizational efficiency and robustness.\n\n\n \n\n\n#### 2014 Technical Research\n\n\nTwo of the top three goals in our [mid-2014 strategic plan](https://intelligence.org/2014/06/11/mid-2014-strategic-plan/) were to (1) increase our technical research output and (2) invest heavily in recruiting additional technical researchers (via a *prospecting* -> *prospect development* -> *hiring* funnel). 
The third goal concerned fundraising; see below.\n\n\nAs for (1): this past year we **released 14 technical papers/reports**[4](https://intelligence.org/2015/03/22/2014-review/#footnote_3_11640 \"These were, roughly in chronological order: BotWorld, Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem, Problems of self-reference in self-improving space-time embedded intelligence, Loudness, Distributions allowing tiling of staged subjective EU maximizers, Non-omniscience, probabilistic inference, and metamathematics, Corrigibility, UDT with known search order, Aligning superintelligence with human interests, Toward idealized decision theory, Computable probability distributions which converge…, Tiling agents in causal graphs, Concept learning for safe autonomous AI. I’m also counting Rob Bensinger’s blog post sequence on naturalized induction as one semi-technical “report.” A few of the technical agenda overview’s supporting papers weren’t announced on our blog until 2015. These aren’t counted here. For comparison’s sake, we released 10 technical papers/reports in 2013, but 7 of these were uncommonly short technical reports immediately following our December 2013 workshop. The 10 technical papers/reports from 2013 are: Scientific induction in probabilistic metamathematics, Fallenstein’s monster, Recursively-defined logical theories are well-defined, Tiling agents for self-modifying AI, and the Löbian obstacle, The procrastination paradox, A comparison of decision algorithms on Newcomblike problems, Definability of truth in probabilistic logic, The 5-and-10 problem and the tiling agents formalism, Decreasing mathematical strength in one formalization of parametric polymorphism, and An infinitely descending sequence of sound theories each proving the next consistent.\") and gave a few technical talks for academic audiences. 
To give our staff researchers more time to write up existing results, and to focus more on recruiting, we held only [one research workshop](https://intelligence.org/workshops/#may-2014) in 2014.\n\n\nAs for (2), in 2014 we:\n\n\n* **Published our new [research agenda overview](https://intelligence.org/technical-agenda/)**, which makes it much easier for newcomers to understand what we’re doing and why.\n* **Hired two new full-time research fellows**. Benja Fallenstein and Nate Soares joined in April 2014. (Patrick LaVictoire joined MIRI in March 2015.)\n* **Launched our [MIRIx program](https://intelligence.org/mirix/)**, now with 14 active groups around the world. This program allows mathematicians and computer scientists to spend time studying and discussing MIRI’s research agenda, and is a key component of our prospecting and development pipeline.\n* **Published [A Guide to MIRI’s Research](https://intelligence.org/research-guide/)**, which guides students through the topics and papers they should study to become familiar with each area of MIRI’s research agenda.\n* **Hosted several visiting researchers** to work with us on MIRI’s research problems in Berkeley for a few days or weeks at a time.[5](https://intelligence.org/2015/03/22/2014-review/#footnote_4_11640 \"Visiting researchers in 2014 included Abram Demski, Scott Garrabrant, Nik Weaver, Nisan Stiennon, Vladimir Slepnev, Tsvi Benson-Tilsen, Danny Hintze, and Ilya Shpitser.\")\n* **Co-sponsored the [SPARC](http://sparc-camp.org/) 2014 camp**, which trains mathematically talented youth to apply their quantitative thinking skills to their lives and the world. SPARC didn’t teach participants about MIRI’s research, but it did teach participants about effective altruism, and brought them into contact with our social circles more generally. 
At least one SPARC graduate has subsequently expressed serious interest in working for MIRI in the future (but mostly, they are still too early in their studies to be considered).\n\n\nIn addition, some of our outreach activities (described below) double as researcher prospecting activities.\n\n\nIn my estimation, the growth of our technical research program in 2014 fell short of my earlier goals, mostly due to insufficient staff capacity to launch new recruitment initiatives. Thankfully, this situation improved in early 2015. We are still [seeking to hire](https://intelligence.org/careers/) additional operations staff to help us grow our technical research program,[6](https://intelligence.org/2015/03/22/2014-review/#footnote_5_11640 \"Hiring additional development staff will help grow our research program by freeing up more of my own time for research program work, and hiring one or more additional executives will directly add new staff capacity directed toward growing our research program.\") but in the meantime we have met some of our immediate capacity needs by (for example) contracting MIRIx participant James Cook to help us better steward and grow the MIRIx program, and contracting Jesse Galef to help us organize our summer 2015 workshops.\n\n\n \n\n\n#### 2014 Strategic Research\n\n\n[As planned](https://intelligence.org/2014/06/11/mid-2014-strategic-plan/), in 2014 we decreased our output of strategic research.[7](https://intelligence.org/2015/03/22/2014-review/#footnote_6_11640 \"This year, I have collapsed my previous categories of “strategic” and “expository” research into one category I simply call “strategic research.”\") Even still we published a sizable amount of strategic work in 2014:\n\n\n* **15 strategic analyses** posted on MIRI’s blog and elsewhere.[8](https://intelligence.org/2015/03/22/2014-review/#footnote_7_11640 \"These were, roughly in chronological order: How Big is the Field of Artificial Intelligence?, Robust Cooperation: A Case Study 
in Friendly AI Research, The world’s distribution of computation, Is my view contrarian?, Exponential and non-exponential trends in information technology, How to study superintelligence strategy, Tentative tips for people engaged in an exercise that involves some form of prediction or forecasting, Groundwork for AGI safety engineering, Loosemore on AI safety and attractors, AGI outcomes and civilizational competence, The Financial Times story on MIRI, Three misconceptions in Edge.org’s conversation on “The Myth of AI”, Two mistakes about the threat from artificial intelligence, Brooks and Searle on AI volition and timelines, Davis on AI capability and motivation.\")\n* **54 [expert interviews](https://intelligence.org/category/conversations/)** on a wide range of topics.\n* **5 papers/chapters on AI strategy topics**.[9](https://intelligence.org/2015/03/22/2014-review/#footnote_8_11640 \"These were, roughly in chronological order: Embryo selection for cognitive enhancement, Why we need Friendly AI, The errors, insights, and lessons of famous AI predictions, The ethics of artificial intelligence, and Exploratory engineering in artificial intelligence.\")\n\n\nNearly all of the interviews were begun in 2013 or early 2014, even if they were not finished and published until much later. Mid-way through 2014, we decided to de-prioritize expert interviews, due to apparent diminishing returns.\n\n\nThis level of strategic research output aligns closely with our earlier goals.\n\n\n \n\n\n#### 2014 Outreach\n\n\nOur outreach efforts declined this year in favor of increased focus on our technical research. 
Our outreach efforts in 2014 included:\n\n\n* We released [*Smarter Than Us: The Rise of Machine Intelligence*](https://intelligence.org/smarter-than-us/).\n* We gave [four talks](https://intelligence.org/2014/08/11/miris-recent-effective-altruism-talks/) at the 2014 Effective Altruism Retreat and Effective Altruism Summit.\n* We [hosted](https://intelligence.org/2014/11/06/video-bostroms-talk-superintelligence-uc-berkeley/) Nick Bostrom at UC Berkeley as part of his book tour for *Superintelligence*.\n* Eliezer Yudkowsky continued writing [*Harry Potter and the Methods of Rationality*](http://hpmor.com/), which has [proven to be](https://intelligence.org/2014/01/20/2013-in-review-outreach/) a surprisingly effective outreach tool for MIRI’s work.[10](https://intelligence.org/2015/03/22/2014-review/#footnote_9_11640 \"HPMoR has now finished, but it wasn’t finished yet in 2014.\")\n* We gave interviews for various media outlets.\n\n\nThis level of outreach output aligns closely with our earlier goals, except that we had planned to release the ebook version of [The Sequences](http://wiki.lesswrong.com/wiki/Sequences) in 2014, and [this release](https://intelligence.org/2015/03/12/rationality-ai-zombies/) was delayed until March 2015.\n\n\nDespite MIRI’s declining outreach efforts, public outreach about MIRI’s core concerns *massively increased* in 2014 due mostly to the efforts of others. [FHI](http://www.fhi.ox.ac.uk/)‘s Nick Bostrom released [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/), which is now the best available summary of the problem MIRI exists to solve. Several prominent figures — in particular Stephen Hawking and Elon Musk — promoted long-term AI safety concerns to the media’s attention. In addition, two new organizations, [CSER](http://cser.org/) and [FLI](http://futureoflife.org/), did substantial outreach on this issue. 
Largely as a result of these efforts, Edge.org decided to make its widely-read [2015 Annual Question](http://edge.org/annual-question/what-do-you-think-about-machines-that-think) about long-term AI risks. Several of the respondents expressed views basically aligned with MIRI’s thinking on the issue, including [Sam Harris](http://edge.org/response-detail/26177), [Stuart Russell](http://edge.org/response-detail/26157), [Jaan Tallinn](http://edge.org/response-detail/26186), [Max Tegmark](http://edge.org/response-detail/26190), [Steve Omohundro](http://edge.org/response-detail/26220), [Nick Bostrom](http://edge.org/response-detail/26031), and of course MIRI’s own [Eliezer Yudkowsky](http://edge.org/response-detail/26198).\n\n\n \n\n\n#### 2014 Fundraising\n\n\nOriginally, we [set](https://intelligence.org/2014/04/02/2013-in-review-fundraising/) a very ambitious fundraising goal for 2014. Shortly thereafter, we decided to focus on recruiting-related efforts rather than fundraising. So while our 2014 fundraising fell far short of our original (very ambitious) goal, I think the decision to focus on recruiting rather than fundraising in 2014 was the right choice.\n\n\nIn 2014 we raised **$1,237,557** in contributions.[11](https://intelligence.org/2015/03/22/2014-review/#footnote_10_11640 \"MIRI has some sources of funding besides contributions. For example, in 2014 our realized and unrealized gains, plus interest and dividends — but not including realized and unrealized gains for our cryptocurrency holdings, which are highly volatile — amounted to ~$97,000. 
We also made ~$7,000 from ebook sales, and ~$5,000 from Give for Free programs.\") Our largest sources of funding were:\n\n\n* ~$400,000 from our [summer matching challenge](https://intelligence.org/2014/08/15/2014-summer-matching-challenge-completed/) (this includes the $200,000 in matching funds from Jaan Tallinn, Edwin Evans, and Rick Schwall)\n* ~$200,000 from our [winter matching challenge](https://intelligence.org/2014/12/18/2014-winter-matching-challenge-completed/) (this includes $100,000 in matching funds from the Thiel Foundation)\n* ~$175,000 from the one-day [SV Gives fundraiser](https://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/), ~$63,000 of which was from prizes and matching funds from donors who wouldn’t normally contribute to MIRI\n* ~$150,000 from the Thiel Foundation (in addition to the $100,000 in matching funds for the winter matching challenge)\n* ~$105,000 from Jed McCaleb.[12](https://intelligence.org/2015/03/22/2014-review/#footnote_11_11640 \"This donation was made in Ripple, which we eventually sold.\")\n\n\nIt is difficult to meaningfully compare MIRI’s 2013 and 2014 fundraising income to MIRI’s fundraising income in earlier years, because MIRI was a fairly different organization in 2012 and earlier.[13](https://intelligence.org/2015/03/22/2014-review/#footnote_12_11640 \"See “Comparison to past years” here. Also, during 2014 we switched to accrual accounting, which confuses the comparison to past years even further. Furthermore, the numbers in this section might not exactly match past published estimates, because every now and then we still find and correct old errors in our donor database. Finally, note that in-kind donations are not included in the numbers or graphs on this page.\") But I’ll show the comparison anyway: \n\n\n\n\nTotal donations were lower in 2014 than in 2013, but this is due to a one-time outlier donation in 2013 from Jed McCaleb, who was then a new donor. 
(By the way, this one donation allowed our research program to jump forward more quickly than I had originally planned.) If we set aside McCaleb’s large 2013 and 2014 gifts, MIRI’s fundraising grew slightly from 2013 to 2014.\n\n\n\nNew donor growth was strong in 2014, though this mostly came from small donations made during the SV Gives fundraiser. A significant portion of growth in returning donors can also be attributed to lapsed donors making small contributions during the SV Gives fundraiser.\n\n\n\nThis graph shows how much of our support during the past few years came from small, mid-sized, large, and very large donors. My understanding is that the distributions shown for 2012, 2013, and 2014 are fairly typical of non-profits our size. (Again, the green bar is taller in 2013 than in 2014 due to Jed McCaleb’s outlier 2013 donation.)\n\n\n \n\n\n#### 2014 Operations\n\n\nBuilding on our 2013 organizational improvements, our operational efficiency and robustness improved substantially throughout 2014. Operations-related tasks, including the operational processes specific to our research program, now take up a smaller fraction of staff time than before, which has allowed us to divert more capacity to growing our research program. We also implemented many new policies and services that make MIRI more robust in the face of staff turnover, cyberattack, fluctuations in income, etc. I won’t go into much detail on operations in this post, but we’re typically happy to share what we’ve learned about running an efficient and robust organization when someone asks us to.\n\n\n\n#### Coming Soon\n\n\nOur next strategic plan post won’t be ready for another month or two, but of course we already have many projects in motion. Here’s what you can expect from MIRI over the next few months:\n\n\n* We have several technical reports and conference papers nearing completion.\n* We are running a series of workshops this summer. 
Many of the most promising people who [applied to come to future workshops](https://intelligence.org/get-involved/), or who are showing promise in [MIRIx groups](https://intelligence.org/mirix/) around the world, are being invited.\n* We are beginning to pay particularly productive MIRIx participants for part-time remote research on problems in [MIRI’s technical agenda](https://intelligence.org/technical-agenda/).\n* We are co-organizing a [decision theory conference](https://intelligence.org/2014/07/12/may-2015-decision-theory-workshop-cambridge/) at Cambridge University in May.\n* Once again we are sponsoring [SPARC](http://sparc-camp.org/)‘s summer camp for mathematically talented high-schoolers.\n* We are “putting the finishing touches” on two large pieces of strategy research conducted in 2014. We will also finish running the [*Superintelligence* reading group](http://lesswrong.com/lw/kw4/superintelligence_reading_group/) and then assemble the resulting *Superintelligence* reading guide. We will also contribute additional articles to [AI Impacts](http://aiimpacts.org/), until our earmarked funding for that work runs out.[14](https://intelligence.org/2015/03/22/2014-review/#footnote_13_11640 \"We are happy to support such strategic research given earmarked funding and low management overhead, but otherwise we are focusing on our technical research program.\")\n\n\nStay tuned for our next strategic plan, which will contain more detail about our planned programs.\n\n\n\n\n---\n\n1. This year’s annual review is shorter than last year’s [5-part review](https://intelligence.org/2013/12/20/2013-in-review-operations/) of 2013, in part because 2013 was an unusually complicated focus-shifting year, and in part because, in retrospect, last year’s 5-part review simply took more effort to produce than it was worth. 
Also, because we recently finished switching to accrual accounting, I can now more easily provide annual reviews of each calendar year rather than of a March-through-February period. As such, this review of calendar year 2014 will overlap a bit with what was reported in the previous annual review (of March 2013 through February 2014).\n2. Clearly there are forecasting and political challenges as well, and there are technical challenges related to ensuring good outcomes from nearer-term AI systems, but MIRI has chosen to specialize in the technical challenges of [aligning superintelligence with human interests](https://intelligence.org/technical-agenda/). See also: [Friendly AI research as effective altruism](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) and [Why MIRI?](https://intelligence.org/2014/04/20/why-miri/)\n3. Obviously, the division of labor was more complex than I’ve described here. For example, FHI produced some technical research progress in 2014, and MIRI did some public outreach.\n4. 
These were, roughly in chronological order: [BotWorld](https://intelligence.org/2014/04/10/new-report-botworld/), [Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem](https://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/), [Problems of self-reference in self-improving space-time embedded intelligence](https://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/), [Loudness](https://intelligence.org/2014/05/30/new-report-loudness-priors-preference-relations/), [Distributions allowing tiling of staged subjective EU maximizers](https://intelligence.org/2014/06/06/new-report-distributions-allowing-tiling-staged-subjective-eu-maximizers/), [Non-omniscience, probabilistic inference, and metamathematics](https://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/), [Corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/), [UDT with known search order](https://intelligence.org/2014/10/30/new-report-udt-known-search-order/), [Aligning superintelligence with human interests](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/), [Toward idealized decision theory](https://intelligence.org/2014/12/16/new-report-toward-idealized-decision-theory/), [Computable probability distributions which converge…](https://intelligence.org/2014/12/16/new-report-computable-probability-distributions-converge/), [Tiling agents in causal graphs](https://intelligence.org/2014/12/16/new-report-tiling-agents-causal-graphs/), [Concept learning for safe autonomous AI](https://intelligence.org/2014/12/05/new-paper-concept-learning-safe-autonomous-ai/). 
I’m also counting [Rob Bensinger’s blog post sequence on naturalized induction](http://wiki.lesswrong.com/wiki/Naturalized_induction) as one semi-technical “report.” A few of the technical agenda overview’s supporting papers weren’t announced on our blog until 2015. These aren’t counted here. For comparison’s sake, we released 10 technical papers/reports in 2013, but 7 of these were uncommonly short technical reports immediately following our [December 2013 workshop](https://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/). The 10 technical papers/reports from 2013 are: [Scientific induction in probabilistic metamathematics](https://intelligence.org/files/ScientificInduction.pdf), [Fallenstein’s monster](https://intelligence.org/files/FallensteinsMonster.pdf), [Recursively-defined logical theories are well-defined](https://intelligence.org/files/RecursivelyDefinedTheories.pdf), [Tiling agents for self-modifying AI, and the Löbian obstacle](https://intelligence.org/files/TilingAgentsDraft.pdf), [The procrastination paradox](https://intelligence.org/files/ProcrastinationParadox.pdf), [A comparison of decision algorithms on Newcomblike problems](https://intelligence.org/files/Comparison.pdf), [Definability of truth in probabilistic logic](https://intelligence.org/files/DefinabilityTruthDraft.pdf), [The 5-and-10 problem and the tiling agents formalism](https://intelligence.org/files/TilingAgents510.pdf), [Decreasing mathematical strength in one formalization of parametric polymorphism](https://intelligence.org/files/DecreasingStrength.pdf), and [An infinitely descending sequence of sound theories each proving the next consistent](https://intelligence.org/files/ConsistencyWaterfall.pdf).\n5. Visiting researchers in 2014 included Abram Demski, Scott Garrabrant, Nik Weaver, Nisan Stiennon, Vladimir Slepnev, Tsvi Benson-Tilsen, Danny Hintze, and Ilya Shpitser.\n6. 
Hiring additional development staff will help grow our research program by freeing up more of my own time for research program work, and hiring one or more additional executives will directly add new staff capacity directed toward growing our research program.\n7. This year, I have collapsed my [previous categories](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/) of “strategic” and “expository” research into one category I simply call “strategic research.”\n8. These were, roughly in chronological order: [How Big is the Field of Artificial Intelligence?](https://intelligence.org/2014/01/28/how-big-is-ai/), [Robust Cooperation: A Case Study in Friendly AI Research](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/), [The world’s distribution of computation](https://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/), [Is my view contrarian?](http://lesswrong.com/lw/jv2/is_my_view_contrarian/), [Exponential and non-exponential trends in information technology](https://intelligence.org/2014/05/12/exponential-and-non-exponential/), [How to study superintelligence strategy](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/), [Tentative tips for people engaged in an exercise that involves some form of prediction or forecasting](http://lesswrong.com/r/discussion/lw/kh9/tentative_tips_for_people_engaged_in_an_exercise/), [Groundwork for AGI safety engineering](https://intelligence.org/2014/08/04/groundwork-ai-safety-engineering/), [Loosemore on AI safety and attractors](http://nothingismere.com/2014/08/25/loosemore-on-ai-safety-and-attractors/), [AGI outcomes and civilizational competence](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/), [The *Financial Times* story on MIRI](https://intelligence.org/2014/10/31/financial-times-story-miri/), [Three misconceptions in Edge.org’s 
conversation on “The Myth of AI”](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/), [Two mistakes about the threat from artificial intelligence](https://agenda.weforum.org/2014/12/two-mistakes-about-the-threat-from-artificial-intelligence/), [Brooks and Searle on AI volition and timelines](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/), [Davis on AI capability and motivation](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/).\n9. These were, roughly in chronological order: [Embryo selection for cognitive enhancement](https://intelligence.org/files/EmbryoSelection.pdf), [Why we need Friendly AI](https://intelligence.org/files/WhyWeNeedFriendlyAI.pdf), [The errors, insights, and lessons of famous AI predictions](https://intelligence.org/2014/04/30/new-paper-the-errors-insights-and-lessons-of-famous-ai-predictions/), [The ethics of artificial intelligence](https://intelligence.org/2014/06/19/new-chapter-cambridge-handbook-artificial-intelligence/), and [Exploratory engineering in artificial intelligence](https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/).\n10. *HPMoR* has now [finished](http://hpmor.com/notes/122/), but it wasn’t finished yet in 2014.\n11. MIRI has some sources of funding besides contributions. For example, in 2014 our realized and unrealized gains, plus interest and dividends — but not including realized and unrealized gains for our cryptocurrency holdings, which are highly volatile — amounted to ~$97,000. We also made ~$7,000 from ebook sales, and ~$5,000 from [Give for Free](https://intelligence.org/get-involved/#give) programs.\n12. This donation was made in Ripple, which we eventually sold.\n13. See “Comparison to past years” [here](https://intelligence.org/2014/04/02/2013-in-review-fundraising/). Also, during 2014 we switched to accrual accounting, which confuses the comparison to past years even further. 
Furthermore, the numbers in this section might not exactly match past published estimates, because every now and then we still find and correct old errors in our donor database. Finally, note that in-kind donations are not included in the numbers or graphs on this page.\n14. We are happy to support such strategic research given earmarked funding and low management overhead, but otherwise we are focusing on our technical research program.\n\nThe post [2014 in review](https://intelligence.org/2015/03/22/2014-review/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-22T21:19:35Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f79fd544e3750d55304348e29a62c4d8", "title": "New report: “An Introduction to Löb’s Theorem in MIRI Research”", "url": "https://intelligence.org/2015/03/18/new-report-introduction-lobs-theorem-miri-research/", "source": "miri", "source_type": "blog", "text": "[![Lob in MIRI Research](http://intelligence.org/wp-content/uploads/2015/02/Lob-in-MIRI-Research.png)](https://intelligence.org/files/lob-notes-IAFF.pdf)Today we publicly release a new technical report by Patrick LaVictoire, titled “[An Introduction to Löb’s Theorem in MIRI Research](https://intelligence.org/files/lob-notes-IAFF.pdf).” The report’s introduction begins:\n\n\n\n> This expository note is devoted to answering the following question: why do many MIRI research papers cite a 1955 theorem of Martin Löb, and indeed, why does MIRI focus so heavily on mathematical logic? The short answer is that this theorem illustrates the basic kind of self-reference involved when an algorithm considers its own output as part of the universe, and it is thus germane to many kinds of research involving self-modifying agents, especially when formal verification is involved or when we want to cleanly prove things in model problems. 
For a longer answer, well, welcome!\n> \n> \n> I’ll assume you have some background doing mathematical proofs and writing computer programs, but I won’t assume any background in mathematical logic beyond knowing the usual logical operators, nor that you’ve even heard of Löb’s Theorem before.\n> \n> \n\n\nIf you’d like to discuss the article, please do so [here](http://agentfoundations.org/item?id=94).\n\n\nThe post [New report: “An Introduction to Löb’s Theorem in MIRI Research”](https://intelligence.org/2015/03/18/new-report-introduction-lobs-theorem-miri-research/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-19T03:35:03Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "cf2f437284ac03eba2280b198c2ebf65", "title": "Introducing the Intelligent Agent Foundations Forum", "url": "https://intelligence.org/2015/03/18/introducing-intelligent-agent-foundations-forum/", "source": "miri", "source_type": "blog", "text": "[![IAFF](http://intelligence.org/wp-content/uploads/2015/02/IAFF.png)](http://agentfoundations.org/)Today we are proud to publicly launch the [**Intelligent Agent Foundations Forum**](http://agentfoundations.org/) ([RSS](http://agentfoundations.org/rss)), a forum devoted to technical discussion of the research problems outlined in [MIRI’s technical agenda overview](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/), along with similar research problems.\n\n\nPatrick’s [welcome post](http://agentfoundations.org/item?id=157) explains:\n\n\n\n> Broadly speaking, the topics of this forum concern the difficulties of value alignment: the problem of how to ensure that machine intelligences of various levels adequately understand and pursue the goals that their 
developers actually intended, rather than getting stuck on some proxy for the real goal or failing in other unexpected (and possibly dangerous) ways. As these failure modes are more devastating the farther we advance in building machine intelligences, MIRI’s goal is to work today on the foundations of goal systems and architectures that would work even when the machine intelligence has general creative problem-solving ability beyond that of its developers, and has the ability to modify itself or build successors.\n> \n> \n\n\nThe forum has been privately active for several months, so many interesting articles have already been posted, including:\n\n\n* Slepnev, [Using modal fixed points to formalize logical causality](http://agentfoundations.org/item?id=4)\n* Fallenstein, [Utility indifference and infinite improbability drives](http://agentfoundations.org/item?id=78)\n* Benson-Tilsen, [Uniqueness of UDT for transparent universes](http://agentfoundations.org/item?id=75)\n* Christiano, [Stable self-improvement as a research problem](http://agentfoundations.org/item?id=53)\n* Fallenstein, [Predictors that don’t try to manipulate you(?)](http://agentfoundations.org/item?id=51)\n* Soares, [Why conditioning on “the agent takes action a” isn’t enough](http://agentfoundations.org/item?id=92)\n* Fallenstein, [An implementation of modal UDT](http://agentfoundations.org/item?id=121)\n* LaVictoire, [Modeling goal stability in machine learning](http://agentfoundations.org/item?id=130)\n\n\nAlso see [How to contribute](http://agentfoundations.org/how-to-contribute).\n\n\n \n\n\n \n\n\nThe post [Introducing the Intelligent Agent Foundations Forum](https://intelligence.org/2015/03/18/introducing-intelligent-agent-foundations-forum/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-19T03:34:29Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "10243b64a9d35f409af039a3f854fa8a", "title": "Rationality: From 
AI to Zombies", "url": "https://intelligence.org/2015/03/12/rationality-ai-zombies/", "source": "miri", "source_type": "blog", "text": "[![Rationality Angled Cover Web](http://intelligence.org/wp-content/uploads/2015/03/Rationality-Angled-Cover-Web.jpg)](https://intelligence.org/rationality-ai-zombies/)Between 2006 and 2009, senior MIRI researcher Eliezer Yudkowsky wrote several hundred essays for the blogs *Overcoming Bias* and *Less Wrong*, collectively called “[the Sequences](http://wiki.lesswrong.com/wiki/Sequences).” With two days remaining until Yudkowsky concludes his other well-known rationality book, [*Harry Potter and the Methods of Rationality*](http://hpmor.com), we are releasing around 340 of his original blog posts as a series of six books, collected in one ebook volume under the title [***Rationality: From AI to Zombies***](https://intelligence.org/rationality-ai-zombies/).\n\n\nYudkowsky’s writings on rationality, which were previously scattered in a constellation of blog posts, have been cleaned up, organized, and collected together for the first time. This new version of the Sequences should serve as a more accessible long-form introduction to formative ideas behind MIRI, [CFAR](http://rationality.org/), and substantial parts of the rationalist and effective altruist communities.\n\n\nWhile the books’ central focus is on applying probability theory and the sciences of mind to personal dilemmas and philosophical controversies, a considerable range of topics is covered. The six books explore rationality theory and applications from multiple angles:\n\n\nI. *Map and Territory*. A lively introduction to the Bayesian conception of rational belief in cognitive science, and how it differs from other kinds of belief.\n\n\nII. *How to Actually Change Your Mind*. A guide to overcoming confirmation bias and motivated cognition.\n\n\nIII. *The Machine in the Ghost*. A collection of essays on the general topic of minds, goals, and concepts.\n\n\nIV. 
*Mere Reality*. Essays on science and the physical world, as they relate to rational inference.\n\n\nV. *Mere Goodness*. A wide-ranging discussion of human values and ethics.\n\n\nVI. *Becoming Stronger*. An autobiographical account of Yudkowsky’s philosophical mistakes, followed by a discussion of self-improvement and group rationality.\n\n\nThese essays are packaged together as a single electronic text, making it easier to investigate links between essays and search for keywords. The ebook is available on a pay-what-you-want basis (**[link](https://intelligence.org/rationality-ai-zombies/)**), and on Amazon.com for $4.99 ([**link**](http://smile.amazon.com/Rationality-AI-Zombies-Eliezer-Yudkowsky-ebook/dp/B00ULP6EW2/ref=sr_1_9?ie=UTF8&qid=1426182905&sr=8-9)). In the coming months, we will also be releasing print versions of these six books, and Castify [will be releasing](https://www.kickstarter.com/projects/1267969302/lesswrong-the-sequences-audiobook) the official audiobook version.\n\n\nThe post [Rationality: From AI to Zombies](https://intelligence.org/2015/03/12/rationality-ai-zombies/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-13T01:23:17Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "1c6711aaa7e6ff7e0bea098ca9802b2e", "title": "Bill Hibbard on Ethical Artificial Intelligence", "url": "https://intelligence.org/2015/03/09/bill-hibbard/", "source": "miri", "source_type": "blog", "text": "![Bill Hibbard portrait](https://intelligence.org/wp-content/uploads/2013/02/associate_hibbard.png)Bill Hibbard is an Emeritus Senior Scientist at the University of Wisconsin-Madison Space Science and Engineering Center, currently working on issues of AI safety and unintended behaviors. He has a BA in Mathematics and MS and PhD in Computer Sciences, all from the University of Wisconsin-Madison. 
He is the author of *[Super-Intelligent Machines](http://www.amazon.com/Super-Intelligent-Machines-International-Systems-Engineering/dp/0306473887/)*, [“Avoiding Unintended AI Behaviors,”](https://intelligence.org/files/UnintendedBehaviors.pdf) [“Decision Support for Safe AI Design,”](https://intelligence.org/files/DecisionSupport.pdf) and [“Ethical Artificial Intelligence.”](http://arxiv.org/abs/1411.1373) He is also principal author of the [Vis5D](http://en.wikipedia.org/wiki/Vis5D), [Cave5D](http://en.wikipedia.org/wiki/Cave5D), and [VisAD](http://en.wikipedia.org/wiki/VisAD) open source visualization systems.\n\n\n\n**Luke Muehlhauser**: You recently released a self-published book, *[Ethical Artificial Intelligence](http://arxiv.org/abs/1411.1373)*, which “combines several peer reviewed papers and new material to analyze the issues of ethical artificial intelligence.” Most of the book is devoted to the kind of exploratory engineering in AI that you and I described in [a recent CACM article](https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/), such that you mathematically analyze the behavioral properties of classes of future AI agents, e.g. utility-maximizing agents.\n\n\nMany AI scientists have the intuition that such early, exploratory work is very unlikely to pay off when we are so far from building an AGI, and don’t know what an AGI will look like. For example, Michael Littman [wrote](http://kruel.co/2015/02/05/interview-with-michael-littman-on-ai-risks/#sthash.YrQRrluT.dpbs):\n\n\n…proposing specific mechanisms for combatting this amorphous threat [of AGI] is a bit like trying to engineer airbags before we’ve thought of the idea of cars. 
Safety has to be addressed in context and the context we’re talking about is still absurdly speculative.\n\n\nHow would you defend the value of the kind of work you do in Ethical Artificial Intelligence to Littman and others who share his skepticism?\n\n\n\n\n\n---\n\n\n**Bill Hibbard**: This is a good question, Luke. The analogy with cars is useful. Unlike engineering airbags before cars are even thought of, we are already working hard to develop AI and can anticipate various types of dangers.\n\n\nWhen cars were first imagined, engineers probably knew that they would propel human bodies at speed and that they would need to carry some concentrated energy source. They knew from accidents with horse carriages that human bodies travelling at speed are liable to injury, and they knew that concentrated energy sources are liable to fire and explosion which may injure humans. This is analogous with what we know about future AI: that to serve humans well AI will have to know a lot about individual humans and that humans will not be able to monitor every individual action by AI. These properties of future AI pose dangers just as the basic properties of cars (propelling humans and carrying energy) pose dangers.\n\n\nEarly car designers could have anticipated that no individual car would carry all of humanity and thus car accidents would not pose existential threats to humanity. To the extent that cars threaten human safety and health via pollution, we have time to notice these threats and address them. With AI we can anticipate possible scenarios that do threaten humanity and that may be difficult to address once the AI system is operational. For example, as described in the first chapter of my book, the Omniscience AI, with a detailed model of human society and a goal of maximizing profits, threatens to control human society. However, AI poses much greater potential benefits than cars but also much greater dangers. 
This justifies greater effort to anticipate the dangers of AI.\n\n\nIt’s also worth noting that the abstract frameworks for exploratory engineering apply to any reasonable future AI design. As the second chapter of my book describes, any set of complete and transitive preferences among outcomes can be expressed by a utility function. If preferences are incomplete then there are outcomes A and B with no preference between them, so the AI agent cannot decide. If preferences are not transitive then there are outcomes A, B, and C such that A is preferred to B, B is preferred to C, and C is preferred to A. Again, the AI agent cannot decide. Thus our exploratory engineering can assume utility maximizing agents and cover all cases in which the AI agent can decide among outcomes.\n\n\nSimilarly, the dangers discussed in the book are generally applicable. Any design for powerful AI should explain how it will avoid the self-delusion problem described by Ring and Orseau, the problem of corrupting the reward generator as described by Hutter, and the problem of unintended instrumental actions as described by Omohundro (he called them basic AI drives).\n\n\nThe threat level from AI justifies addressing AI dangers now and with significant resources. And we are developing tools that enable us to analyze dangers of AI systems before we know the specifics of their designs.\n\n\n\n\n---\n\n\n**Luke**: Your book mostly discusses AGIs rather than contemporary narrow AI systems. Roughly when do you expect humanity will develop something resembling the kind of AGIs you have in mind? Or, what does your probability distribution over “Years to AGI” look like?\n\n\n\n\n---\n\n\n**Bill**: In my 2002 book, Super-Intelligent Machines, I wrote that “machines as intelligent as humans are possible and will exist within the next century or so.” (The publisher owns the copyright for my 2002 book, preventing me from giving electronic copies to people, and charges more than $100 per print copy. 
This largely explains my decision to put my current book on arxiv.org.) I like to say that we will get to human-level AI during the lives of children already born and in fact I can’t help looking at children with amazement, contemplating the events they will see.\n\n\nIn his 2005 book, The Singularity is Near, Ray Kurzweil predicted human-level AI by 2029. He has a good track record at technology prediction and I hope he is right: I was born in 1948 so have a good chance of living until 2029. He also predicted the singularity by 2045, which must include the kind of very powerful AI systems discussed in my recent book.\n\n\nAlthough it has nowhere near human-level intelligence, the DeepMind Atari player is a general AI system in the sense that it has no foreknowledge of Atari games other than knowing that the goal is to get a high score. The remarkable success of this system increases my confidence that we will create true AGI systems. DeepMind was purchased by Google, and all the big IT companies are energetically developing AI. It is the combination of AGI techniques and access to hundreds of millions of human users that can create the scenario of the Omniscience AI described in Chapter 1 of my book. Similarly for government surveillance agencies, which have hundreds of millions of unwitting users.\n\n\nIn 1983 I made a wager that a computer would beat the world Go champion by 2013, and lost. In fact, most predictions about AI have been wrong. Thus we must bring some humility to our predictions about the dates of AI milestones.\n\n\nBecause Ray Kurzweil’s predictions are based on quantitative extrapolation from historical trends and because of his good track record, I generally defer to his predictions. 
If human-level AI will exist by 2029 and very capable and dangerous AGI systems will exist by 2045, it is urgent that we understand the social effects and dangers of AI as soon as possible.\n\n\n\n\n---\n\n\n**Luke**: Which section(s) of your book do you think are most likely to be intriguing to computer scientists, because they’ll learn something that seems novel (to them) and plausibly significant?\n\n\n\n\n---\n\n\n**Bill**: Thanks Luke. There are several sections of the book that may be interesting or useful.\n\n\nAt the Workshop on AI and Ethics at AAAI-15 there was some confusion about the generality of utilitarian ethics, based on the connotation that a utility function is defined as a linear sum of features or similar simple expression. However, as explained in Chapter 2 and in my first answer in this interview, more complex utility functions can express any set of complete and transitive preferences among outcomes. That is, if an agent always has a most preferred outcome among any finite set of outcomes, then that agent can be expressed as a utility-maximizing agent.\n\n\nChapter 4 goes into detail on the issues of agents whose environment models are finite stochastic programs. Most of the papers in the AGI community assume that environments are modeled by programs for universal Turing machines, with no limit on their memory use. I think that much can be added to what I wrote in Chapter 4, and hope that someone will do that.\n\n\nThe self-modeling agents of Chapter 8 are the formal framework analog of value learners such as the DeepMind Atari player, and their use as a formal framework is novel. Self-modeling agents have useful properties, such as the capability to value agent resource increases and a way to avoid the problem of the agent utility function being inconsistent with the agent’s definition. 
An example of this problem is what Armstrong refers to as “motivated value selection.” More generally, it is the problem of adding any “special” actions to a utility maximizing agent, where those special actions do not maximize the utility function. In motivated value selection, the special action is the agent evolving its utility function. A utility maximizing agent may choose an action of removing the special actions from its definition, as counter-productive to maximizing its utility function. Self-modeling agents include such evolutionary special actions in the definition of their value functions, and they learn a model of their value function which they use to choose their next action. Thus there is no inconsistency. I think these ideas should be interesting to other computer scientists.\n\n\nAt the FLI conference in San Juan in January 2015 there was concern about the kind of technical AI risks described in Chapters 5 – 9 of my book, and concern about technological unemployment. However, there was not much concern about the dangers associated with:\n\n\n1. Large AI servers connected to the electronic companions that will be carried by large numbers of people and the ability of the human owners of those AI servers to manipulate society, and\n2. A future world in which great wealth can buy increased intelligence and superior intelligence can generate increased wealth. This positive feedback loop will result in a power law distribution of intelligence as opposed to the current normal distribution of IQs with mean = 100 and standard deviation = 15.\n\n\nThese issues are discussed in Chapters 1 and 10 of my book. The Global Brain researchers study the way intelligence is exhibited by the network of humans; the change in distribution of intelligence of humans and machines who are nodes of the network will have profound effects on the nature of the Global Brain. 
Beyond computer scientists, I think the public needs to be aware of these issues.\n\n\nFinally, I’d like to expand on my previous answer, specifically that the DeepMind Atari player is an example of general AI. In Chapter 1 of my book I describe how current AI systems have environment models that are designed by human engineers, whereas future AI systems will need to learn environment models that are too complex to be designed by human engineers. The DeepMind system does not use an environment model designed by engineers. It is “model-free” but the value function that it learns is just as complex as an environment model and in fact encodes an implicit environment model. Thus the DeepMind system is the first example of a future AI system with significant functionality.\n\n\n\n\n---\n\n\n**Luke**: Can you elaborate on what you mean by saying that “the self-modeling agents of Chapter 8 are the formal framework analog of value learners such as the DeepMind Atari player”? Are you saying that the formal work you do in chapter 8 has implications even for an extant system like the DeepMind Atari player, because they are sufficiently analogous?\n\n\n\n\n---\n\n\n**Bill**: To elaborate on what I mean by “the self-modeling agents of Chapter 8 are the formal framework analog of value learners such as the DeepMind Atari player,” self-modeling agents and value learners both learn a function v(ha) that produces the expected value of proposed action a after interaction history h (that is, h is a sequence of observations and actions; see my book for details). For the DeepMind Atari player, v(ha) is the expected game score after action a and h is restricted to the most recent observation (i.e., a game screen snapshot). Whereas the DeepMind system must be practically computable, the self-modeling agent framework is a purely mathematical definition. This framework is finitely computable but any practical implementation would have to use approximations. 
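As a rough illustration of the v(ha) framing (a hypothetical sketch in editorial voice, not code from Hibbard's book or from DeepMind; all names are ours), a tabular value learner that estimates the expected value of action a after history h might look like:

```python
from collections import defaultdict

class ValueLearner:
    """Minimal tabular sketch of a value learner: estimates v(h, a), the
    expected future reward of taking action a after interaction history h.
    Real systems like the DeepMind Atari player instead approximate v with
    a neural network and truncate h to the most recent observation."""

    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.v = defaultdict(float)  # (history, action) -> value estimate
        self.actions = actions
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor

    def best_action(self, history):
        # Act greedily with respect to the current value estimates.
        return max(self.actions, key=lambda a: self.v[(history, a)])

    def update(self, history, action, reward, next_history):
        # One-step temporal-difference update: move v(h, a) toward the
        # observed reward plus the discounted value of the best next action.
        target = reward + self.gamma * max(
            self.v[(next_history, a)] for a in self.actions)
        self.v[(history, action)] += self.alpha * (target - self.v[(history, action)])
```

Histories here are hashable tuples of past observations and actions; the table over (history, action) pairs is exactly what must be replaced by approximation in any practical implementation.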
The book offers a few suggestions about computing techniques, but the discussion is not very deep.\n\n\nBecause extant systems such as the DeepMind Atari player are not yet close to human-level intelligence, there is no implication that this system should be subject to safety constraints. It is encouraging that the folks at DeepMind and at Vicarious are concerned about AI ethics, for two reasons: 1) They are likely to apply ethical requirements to their systems as they approach human-level, and 2) They are very smart and can probably add a lot to AI safety research.\n\n\nGenerally, research on safe and ethical AI complicates the task of creating AI by adding requirements. My book develops a three-argument utility function expressing human values which will be very complex to compute. Similarly for other components of the definition of self-modeling agents in the book.\n\n\nI think there are implications the other way around. The self-modeling framework is based on statistical learning and the success of the DeepMind Atari player, the Vicarious captcha solver, IBM’s Watson, and other practical systems that use statistical learning techniques increases our confidence that these techniques can actually work for AI capability and safety.\n\n\nSome researchers suggest that safe AI should rely on logical deduction rather than statistical learning. This idea offers greater possibility of proving safety properties of AI, but so far there are no compelling demonstrations of AI systems based on logical deduction (at least, none that I am aware of). Such demonstrations would add a lot of confidence in our ability to prove safety properties of AI systems.\n\n\n\n\n---\n\n\n**Luke**: Your 10th chapter considers the political aspects of advanced AI. What do you think can be done now to improve our chances of solving the political challenges of AI in the future? 
Sam Altman of YC has [proposed](http://blog.samaltman.com/machine-intelligence-part-2) various kinds of regulation — do you agree with his general thinking? What other ideas do you have?\n\n\n\n\n---\n\n\n**Bill**: The central point of my 2002 book was the need for public education about and control over above-human-level AI. The current public discussion by Stephen Hawking, Bill Gates, Elon Musk, Ray Kurzweil, and others about the dangers of AI is very healthy, as it educates the public. Similarly for the Singularity Summits organized by the Singularity Institute (MIRI’s predecessor), which I thought were the best thing the Singularity Institute did.\n\n\nIn the US people cannot own automatic weapons, guns of greater than .50 caliber, or explosives without a license. It would be absurd to license such things but to allow unregulated development of above-human-level AI. As the public is educated about AI, I think some form of regulation will be inevitable.\n\n\nHowever, as they say, the devil will be in the details and humans will be unable to compete with future AI on details. Complex details will be AI’s forte. So formulating effective regulation will be a political challenge. The Glass-Steagall Act of 1933, regulating banking, was 37 pages long. The Dodd-Frank bill of 2010, also to regulate banking 77 years later, was 848 pages long. An army of lawyers drafted the bill, many employed to protect the interests of groups affected by the bill. The increasing complexity of laws reflects efforts by regulated entities to lighten the burden of regulation. The stakes in regulating AI will be huge and we can expect armies of lawyers, with the aid of the AI systems being regulated, to create very complex laws.\n\n\nIn the second chapter of my book, I conclude that ethical rules are inevitably ambiguous and base my proposed safe AI design on human values expressed in a utility function rather than rules. 
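The representation argument Hibbard appeals to (any complete, transitive preference ordering over a finite outcome set can be expressed by a utility function) can be sketched in a few lines; this is a hypothetical illustration, and the function names are ours, not the book's:

```python
def utility_from_preferences(outcomes, prefers):
    """Given a finite outcome set and a complete, transitive
    strict-preference relation prefers(a, b) (True iff a is strictly
    preferred to b), return a utility function u with u(a) > u(b)
    exactly when a is preferred to b."""
    # Completeness and transitivity guarantee that "number of outcomes
    # beaten" ranks outcomes consistently with the preference relation.
    score = {a: sum(prefers(a, b) for b in outcomes if b != a)
             for a in outcomes}
    return lambda a: score[a]
```

An agent maximizing this u (e.g. `max(outcomes, key=u)`) then decides among outcomes exactly as the preference relation dictates, which is the sense in which exploratory engineering can restrict attention to utility-maximizing agents.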
Consider the current case before the US Supreme Court to interpret the meaning of the words “established by the state” in the context of the 363,086 words of the Affordable Care Act. This is a good example of the ambiguity of rules. Once AI regulations become law, armies of lawyers, aided by AI, will be engaged in debates over their interpretation and application.\n\n\nThe best counterbalance to armies of lawyers creating complexity on any legal issue is a public educated about the issue and engaged in protecting their own interests. Automobile safety is a good example. This will also be the case with AI regulation. And, as discussed in the introductory section of Chapter 10, there is precedent for the compassionate intentions of some wealthy and powerful people and this may serve to counterbalance their interest in creating complexity.\n\n\nPrivacy regulations, which affect existing large IT systems employing AI, already exist in the US and even more so in Europe. However, many IT services depend on accurate models of users’ preferences. At the recent FLI conference in San Juan, I tried to make the point that a danger from AI will be that people will want the kind of close, personal relationship with AI systems that will enable intrusion and manipulation by AI. The Omniscience AI described in Chapter 1 of my book is an example. As an astute IT lawyer said at the FLI conference, the question of whether an IT innovation will be legal depends on whether it will be popular.\n\n\nThis brings us back to the need for public education about AI. For people to resist being seduced by the short term benefits of close relationships with AI, they need to understand the long term consequences. 
I think it is not realistic to prohibit close relationships between people and AI, but perhaps the public, if it understands the issues, can demand some regulation over the goals for which those relationships are exploited.\n\n\nThe final section of my Chapter 10 says that AI developers and testers should recognize that they are acting as agents for the future of humanity and that their designs and test results should be transparent to the public. The FLI open letter and Google’s panel on AI ethics are encouraging signs that AI developers do recognize their role as agents for future humanity. Also, DeepMind has been transparent about the technology of their Atari player, even making source code available for non-commercial purposes.\n\n\nAI developers deserve to be rewarded for their success. On the other hand, people have a right to avoid losing control over their own lives to an all-powerful AI and its wealthy human owners. The problem is to find a way to achieve both of these goals.\n\n\nAmong current humans, with naturally evolved brains, IQ has a normal distribution. When brains are artifacts, their intelligence is likely to have a power law distribution. This is the pattern of distributions of sizes of other artifacts such as trucks, ships, buildings, and computers. The average human will not be able to understand or ever learn the languages used by the most intelligent minds. This may mean the end of any direct voice in public policy decisions for average humans – effectively the end of democracy. But if large AI systems are maximizing utility functions that account for the values of individual humans, that may take the place of direct democracy.\n\n\nChapters 6 – 8 of my book propose mathematical definitions for an AI design that does balance the values of individual humans. Chapter 10 suggests that this design may be modified to provide different weights to the values of different people, for example to reward those who develop AI systems. 
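The contrast Hibbard draws between the roughly normal distribution of human IQ (mean 100, standard deviation 15) and a power-law distribution of machine intelligence can be illustrated numerically; the Pareto parameters below are purely illustrative, not taken from the book:

```python
import random

random.seed(0)
n = 100_000

# Human IQs: approximately normal, mean 100, standard deviation 15.
iqs = [random.gauss(100, 15) for _ in range(n)]

# Hypothetical machine intelligences: Pareto (power-law) distributed,
# like the sizes of ships, buildings, and computers. Sampled by
# inverse-CDF; alpha and x_min are illustrative assumptions.
alpha, x_min = 1.5, 100.0
machine = [x_min * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

# Under the normal distribution the largest of 100,000 draws sits only
# a few standard deviations above the mean; under the power law the
# largest draw dwarfs the typical value by orders of magnitude.
normal_spread = max(iqs) / 100.0
power_spread = max(machine) / x_min
```

The point of the sketch is only qualitative: in a normal population the most capable member is not wildly beyond the average, while in a power-law population it can be, which is what drives the concern about the end of a direct voice for average humans.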
I must admit that the connection between the technical chapters of my book and Chapter 10, on politics, is weak. Political issues are just difficult. For example, the future will probably have multiple AI systems with conflicting utility functions and a power law distribution of intelligence. It is difficult to predict how such a society would function and how it would affect humans, and this unpredictability is a risk. Creating a world with a single powerful AI system also poses risks, and may be difficult to achieve.\n\n\nSince my first paper about future AI in 2001, I have thought that the largest risks from AI are political rather than technical. We have an ethical obligation to educate the public about the future of AI, and an educated public is an essential element of finding a good outcome from AI.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Bill!\n\n\nThe post [Bill Hibbard on Ethical Artificial Intelligence](https://intelligence.org/2015/03/09/bill-hibbard/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-10T04:43:22Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "72ce48abac01041ada17d2abf1e3e1f6", "title": "Fallenstein talk for APS March Meeting 2015", "url": "https://intelligence.org/2015/03/09/fallenstein-talk-aps-march-meeting-2015/", "source": "miri", "source_type": "blog", "text": "[![Fallenstein APS talk](http://intelligence.org/wp-content/uploads/2015/03/Fallenstein-APS-talk.png)](https://intelligence.org/wp-content/uploads/2015/03/Fallenstein-Beneficial-smarter-than-human-intelligence.pdf)MIRI researcher Benja Fallenstein recently delivered an invited talk at the [March 2015](http://www.aps.org/meetings/march/index.cfm) meeting of the American Physical Society in San Antonio, Texas. 
Her talk was one of four in a [special session on artificial intelligence](http://meeting.aps.org/Meeting/MAR15/Session/M3).\n\n\nFallenstein’s title was “Beneficial Smarter-than-human Intelligence: the Challenges and the Path Forward.” The slides are available [here](https://intelligence.org/wp-content/uploads/2015/03/Fallenstein-Beneficial-smarter-than-human-intelligence.pdf). Abstract:\n\n\n\n> Today, human-level machine intelligence is still in the domain of futurism, but there is every reason to expect that it will be developed eventually. A generally intelligent agent as smart or smarter than a human, and capable of improving itself further, would be a system we’d need to design for safety from the ground up: There is no reason to think that such an agent would be driven by human motivations like a lust for power; but almost any goals will be easier to meet with access to more resources, suggesting that most goals an agent might pursue, if they don’t explicitly include human welfare, would likely put its interests at odds with ours, by incentivizing it to try to acquire the physical resources currently being used by humanity. Moreover, since we might try to prevent this, such an agent would have an incentive to deceive its human operators about its true intentions, and to resist interventions to modify it to make it more aligned with humanity’s interests, making it difficult to test and debug its behavior. This suggests that in order to create a beneficial smarter-than-human agent, we will need to face three formidable challenges: How can we formally specify goals that are in fact beneficial? How can we create an agent that will reliably pursue the goals that we give it? And how can we ensure that this agent will not try to prevent us from modifying it if we find mistakes in its initial version? 
In order to become confident that such an agent behaves as intended, we will not only want to have a practical implementation that seems to meet these challenges, but to have a solid theoretical understanding of why it does so. In this talk, I will argue that even though human-level machine intelligence does not exist yet, there are foundational technical research questions in this area which we can and should begin to work on today. For example, probability theory provides a principled framework for representing uncertainty about the physical environment, which seems certain to be helpful to future work on beneficial smarter-than-human agents, but standard probability theory assumes omniscience about *logical* facts; no similar principled framework for representing uncertainty about the outputs of deterministic computations exists as yet, even though any smarter-than-human agent will certainly need to deal with uncertainty of this type. I will discuss this and other examples of ongoing foundational work.\n> \n> \n\n\nStuart Russell of UC Berkeley also gave a talk at this session, about [the long-term future of AI](http://meeting.aps.org/Meeting/MAR15/Session/M3.1).\n\n\nThe post [Fallenstein talk for APS March Meeting 2015](https://intelligence.org/2015/03/09/fallenstein-talk-aps-march-meeting-2015/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-09T21:15:37Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f3f5ed3ee67a827e5d154feeb2a22522", "title": "March 2015 newsletter", "url": "https://intelligence.org/2015/03/01/march-newsletter-2/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\n**Research updates**\n* We’ve made significant updates to three pages: [Research](https://intelligence.org/research/), [All 
Publications](https://intelligence.org/all-publications/), and [A Guide to MIRI’s Research](https://intelligence.org/research-guide/).\n* We’ve published an [annotated bibliography](https://intelligence.org/2015/02/05/new-annotated-bibliography-miris-technical-agenda/) and [dedicated page](https://intelligence.org/technical-agenda/) for our new technical research agenda overview.\n* [This new mailing list](https://intelligence.org/2015/02/03/keep-date-miris-research-via-new-mailing-list/) sends one email per new technical paper, and nothing else.\n* New analyses: [Davis on AI capability and motivation](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/), [How AI timelines are estimated](http://aiimpacts.org/how-ai-timelines-are-estimated/), [The slow traversal of ‘human-level’](http://aiimpacts.org/the-slow-traversal-of-human-level/), [Multipolar research questions](http://aiimpacts.org/multipolar-research-questions/), [At-least-human-level-at-human-cost AI](http://aiimpacts.org/at-least-human-level-at-human-cost-ai/), and three analyses of discontinuous technological progress (to investigate the plausibility of AI fast takeoff): [Penicillin and syphilis](http://aiimpacts.org/penicillin-and-syphilis/), [Discontinuous progress investigation](http://aiimpacts.org/discontinuous-progress-investigation/), and [What’s up with nuclear weapons?](http://aiimpacts.org/whats-up-with-nuclear-weapons/)\n\n\n**News updates**\n* MIRI’s [paper on value learning](https://intelligence.org/files/ValueLearningProblem.pdf) was covered in some detail [at *Nautilus*](http://nautil.us/blog/will-humans-be-able-to-control-computers-that-are-smarter-than-us).\n* The *Superintelligence* reading group is in [week 24](http://lesswrong.com/lw/llp/superintelligence_24_morality_models_and_do_what/), on morality models and “do what I mean.”\n\n\n\n**Other news**\n* The Future of Life Institute has released a new [survey of research questions for robust and beneficial 
AI](http://futureoflife.org/static/data/documents/suggested_research.pdf).\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [March 2015 newsletter](https://intelligence.org/2015/03/01/march-newsletter-2/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-03-02T06:00:41Z", "authors": ["Jake"], "summaries": []} -{"id": "4cfbdf762a28b923a3842cb7d780d028", "title": "Davis on AI capability and motivation", "url": "https://intelligence.org/2015/02/06/davis-ai-capability-motivation/", "source": "miri", "source_type": "blog", "text": "In a [review of *Superintelligence*](https://cs.nyu.edu/davise/papers/Bostrom.pdf), NYU computer scientist Ernest Davis voices disagreement with a number of claims he attributes to Nick Bostrom: that “intelligence is a potentially infinite quantity with a well-defined, one-dimensional value,” that a superintelligent AI could “easily resist and outsmart the united efforts of eight billion people” and achieve “virtual omnipotence,” and that “though achieving intelligence is more or less easy, giving a computer an ethical point of view is really hard.”\n\n\nThese are all stronger than Bostrom’s actual claims. For example, Bostrom never characterizes building a generally intelligent machine as “easy.” Nor does he say that intelligence can be infinite or that it can produce “omnipotence.” Humans’ intelligence and accumulated knowledge gives us a decisive advantage over chimpanzees, even though our power is limited in important ways. 
An AI need not be magical or all-powerful in order to have the same kind of decisive advantage over humanity.\n\n\nStill, Davis’ article is one of the more substantive critiques of MIRI’s core assumptions that I have seen, and he addresses several deep issues that directly bear on AI forecasting and strategy. I’ll sketch out a response to his points here.\n\n\n\n \n\n\n**Measuring an intelligence explosion**\n\n\nDavis writes that Bostrom assumes “that a large gain in intelligence would necessarily entail a correspondingly large increase in power.” This is again too strong. (Or it’s trivial, if we’re using the word “intelligence” to pick out a specific kind of power.)\n\n\nBostrom *is* interested in intelligence for its potential to solve practical problems and shape the future. If there are other kinds of intelligence, they’re presumably of less economic and strategic importance than the “cognitive superpowers” Bostrom describes in chapter 6. It is the potential power autonomous machines could exhibit that should primarily concern us from a safety standpoint, and “intelligence” seems as good a term as any for the kind of power that doesn’t depend on an agent’s physical strength or the particularities of its environment.\n\n\nWhen it comes to Bostrom’s intelligence explosion thesis, I don’t think ‘does an increase in intelligence *always* yield a corresponding increase in power?’ gets at the heart of the issue. Consider [David Chalmers’ version of the argument](http://consc.net/papers/singularity.pdf):\n\n\n[I]t is not unreasonable to hold that we can create systems with greater programming ability than our own, and that systems with greater programming ability will be able to create systems with greater programming ability in turn. It is also not unreasonable to hold that programming ability will correlate with increases in various specific reasoning abilities. 
If so, we should expect that absent defeaters, the reasoning abilities in question will explode.\n\n\nHere, there’s no explicit appeal to “intelligence,” which is replaced with programming ability plus an arbitrarily large number of “specific reasoning abilities.” Yet if anything I find this argument more plausible than Bostrom’s formulation. For that reason, I agree with Bostrom that the one-dimensional representation of intelligence is inessential and “one could, for example, instead represent a cognitive ability profile as a hypersurface in a multidimensional space” (p. 273).[1](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/#footnote_0_11582 \" Still more fine-grained versions of the same argument may be possible. E.g., “programming ability” might decompose into multiple abilities, such as the ability to efficiently explore search spaces for code that meets constraints and the ability to efficiently test candidate code. \")\n\n\nMoreover, the relevant question isn’t whether an increase in a self-improving AI’s general programming ability *always* yields a corresponding increase in its ability to improve its own programming ability. Nor is the question whether either of those abilities *always* correlates with the other cognitive capabilities Bostrom is interested in (“strategizing,” “social manipulation,” “hacking,” “technology research,” “economic productivity”). I’d instead say that the five core questions from Bostrom’s point of view are:\n\n\n1. Is the first superintelligent AI likely to result from self-improving AI systems?\n2. If so, how much of the AI’s self-improvement is likely to be driven by improvements to some cognitive capability (e.g., programming ability) that facilitates further enhancement of the capability in question?\n3. Are improvements to this self-improving capability likely to accelerate, as early advances result in cascades of more rapid advances? 
Or will the self-improving capability repeatedly stall out, advancing in small fits and starts?\n4. If self-improvement cascades are likely, are they also likely to result in improvements to other cognitive capabilities that we more directly care about?[2](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/#footnote_1_11582 \" For example: If an AI approaching superintelligence stumbles upon a cascade of improvements to its programming ability, will its capabilities and decision criteria also result in repeated improvements to its physics modules, or its psychology modules? \") Or will those other cognitive capabilities lag far behind the capabilities that ‘explode’?\n5. If an ability like ‘programming’ is likely to be self-reinforcing in an accelerating way, and is likely to foster accelerating improvements in other cognitive abilities, exactly how fast will those accelerations be? Are we talking about a gap of decades between the AI’s first self-reinforcing self-improvements and its attainment of superintelligence? Months? Hours?\n\n\nBostrom’s position on those questions — that a fast or moderate intelligence explosion is likely — at no point presupposes that “intelligence” is a well-defined scalar value one could do calculus with, except as a toy model for articulating various qualitative possibilities. When he writes out differential equations, Bostrom is careful to note that intelligence cannot be infinite, that one-dimensionality is a simplifying assumption, and that his equations are “intended for illustration only.”[3](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/#footnote_2_11582 \" On page 76, for example, Bostrom writes: “This particular growth trajectory has a positive singularity at t = 18 months. 
In reality, the assumption that recalcitrance is constant would cease to hold as the system began to approach the physical limits of information processing, if not sooner.” On page 77, Bostrom says that the point he intends to illustrate is only that if AI progress is primarily AI-driven, resultant feedback loops that do arise will have a larger accelerating effect. \")\n\n\nWe should expect artificial general intelligence (AGI) to specialize in some domains and neglect others. Bostrom’s own analysis assumes that a recursively self-improving AI would tend to prioritize acquiring skills like electrical engineering over skills like impressionist painting, all else being equal.[4](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/#footnote_3_11582 \" This is because a wide variety of final goals are best served through the acquisition of resources and the building of infrastructure, a set of objectives that are more likely to be furthered by electrical engineering skills than by painting skills. This argument is an instance of Bostrom’s instrumental convergence thesis in chapter 7. \") For that matter, present-day AI is already superhuman in some cognitive tasks (e.g., chess and mental arithmetic), yet subhuman in many others. Any attempt to quantify the ‘overall intelligence’ of Deep Blue or Google Maps will obscure some important skills and deficits of these systems.[5](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/#footnote_4_11582 \"While it turns out that many intelligence-related characteristics correlate with a single easily-measured number in humans (g), this still doesn’t allow us to make fine-grained predictions about individual competency levels. I also can’t think of an obvious reason to expect a number even as informative as g to arise for AGI, especially early-stage AGI. Bostrom writes (p. 93):\n[S]uppose we could somehow establish that a certain future AI will have an IQ of 6,455: then what? 
We would have no idea of what such an AI could actually do. We would not even know that such an AI had as much general intelligence as a normal human adult–perhaps the AI would instead have a bundle of special-purpose algorithms enabling it to solve typical intelligence test questions with superhuman efficiency but not much else.\nSome recent efforts have been made to develop measurements of cognitive capacity that could be applied to a wider range of information-processing systems, including artificial intelligences. Work in this direction, if it can overcome various technical difficulties, may turn out to be quite useful for some scientific purposes including AI development. For purposes of the present investigation, however, its usefulness would be limited since we would remain unenlightened about what a given superhuman performance score entails for actual ability to achieve practically important outcomes in the world. \") Still, using concepts like ‘intelligence’ or ‘power’ to express imprecise hypotheses is better than substituting precise metrics that overstate how much we currently know about general intelligence and the future of AI.[6](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/#footnote_5_11582 \" Imagine a Sumerian merchant living 5,000 years ago, shortly after the invention of writing, who has noticed the value of writing for storing good ideas over time, and not just market transactions. Writing could even allow one to transmit good ideas to someone you’ve never met, such as a future descendant. The merchant notices that his own successes have often involved collecting others’ good ideas, and that good ideas often open up pathways to coming up with other, even better ideas. 
From his armchair, he concludes that if writing becomes sufficiently popular, it will allow a quantity called society’s knowledge level to increase in an accelerating fashion; which, if the knowledge is used wisely, could result in unprecedented improvements to human life.\nIn retrospect we can say ‘knowledge’ would have been too coarse-grained a category to enable any precise predictions, and that there have really been multiple important breakthroughs that can be considered ‘knowledge explosions’ in different senses. Yet the extremely imprecise prediction can still give us a better sense of what to expect than we previously had. It’s a step up from the (historically common) view that civilizational knowledge only diminishes over time, the view that things will always stay the same, the view that human welfare will radically improve for reasons unrelated to knowledge build-up, etc.\nThe point of this analogy is not that people are good at making predictions about the distant future. Rather, my point is that hand-wavey quantities like ‘society’s knowledge level’ can be useful for making predictions, and can be based on good evidence, even if the correspondence between the quantity and the phenomenon it refers to is inexact. \")\n\n\n \n\n\n**Superintelligence superiority**\n\n\nIn *Superintelligence* (pp. 59-61), Bostrom lists a variety of ways AGI may surpass humans in intelligence, owing to differences in hardware (speed and number of computational elements, internal communication speed, storage capacity, reliability, lifespan, and sensors) and software (editability, duplicability, goal coordination, memory sharing, and new modules, modalities, and algorithms). Davis grants that these may allow AGI to outperform humans, but expresses skepticism that this could give AGI a decisive advantage over humans. 
To paraphrase his argument:\n\n\n*Elephants’ larger brains don’t make them superintelligent relative to mice; squirrels’ speed doesn’t give them a decisive strategic advantage over turtles; and we don’t know enough about what makes Einstein smarter than the village idiot to make confident predictions about how easy it is to scale up from village idiot to Einstein, or from Einstein to super-Einstein. So there’s no particular reason to expect a self-improving AGI to be able to overpower humanity.*\n\n\nMy response is that the capabilities Bostrom describes, like “speed superintelligence,” predict much larger gaps than the gaps we see between elephants and mice. Bostrom writes (in a footnote on p. 270):\n\n\nAt least a millionfold speedup compared to human brains is physically possible, as can be seen by considering the difference in speed and energy of relevant brain processes in comparison to more efficient information processing. The speed of light is more than a million times greater than that of neural transmission, synaptic spikes dissipate more than a million times more heat than is thermodynamically necessary, and current transistor frequencies are more than a million times faster than neuron spiking frequencies.\n\n\nDavis objects that “all that running faster does is to save you time,” noting that a slower system could eventually perform all the feats of a faster one. But the ability to save time is exactly the kind of ability Bostrom is worried about. Even if one doubts that large improvements in collective or quality superintelligence are possible, a ‘mere’ speed advantage makes an enormous practical difference.\n\n\nImagine a small community of scientist AIs whose only advantages over human scientists stem from their hardware speed — they can interpret sensory information and assess hypotheses and policies a million times faster than a human. 
At that speed, an artificial agent could make ~115 years of intellectual progress in an hour, ~2700 years of progress in a day, and 250,000 years of progress in three months.\n\n\nThe effect of this speedup would be to telescope human history. Scientific and technological advances that would have taken us tens of thousands of years to reach can be ours by Tuesday. If we sent a *human* a thousand years into the past, equipped with all the 21st-century knowledge and technologies they wanted, they could conceivably achieve dominant levels of wealth and power in that time period. This gives us cause to worry about building machines that can rapidly accumulate millennia of experience over humans, even before we begin considering any potential advantages in memory, rationality, editability, etc.[7](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/#footnote_6_11582 \" Most obviously, a speed advantage can give the AI the time to design an even better AI.\nNone of this means that we can make specific highly confident predictions about when and how AI will achieve superintelligence. An AGI that isn’t very human-like may be slower than a human at specific tasks, or faster, in hard-to-anticipate ways. If a certain scientific breakthrough requires that one first build a massive particle accelerator, then the resources needed to build that accelerator may be a more important limiting factor than the AGI’s thinking speed. In that case, humans would have an easier time monitoring and regulating an AGI’s progress. We can’t rule out the possibility that speed superintelligence will face large unexpected obstacles, but we also shouldn’t gamble on that possibility or take it for granted. \")\n\n\nAt least some of the routes to superintelligence described by Bostrom look orders of magnitude larger than the cognitive advantages Einstein has over a village idiot (or than an elephant has over a mouse). 
We can’t rule out the possibility that we’ll run into larger-than-expected obstacles when we attempt to build AGI, but we shouldn’t gamble on that possibility. Black swans happen, but superintelligent AI is a *white* swan, and white swans happen too.\n\n\nSince Bostrom’s pathways to superintelligence don’t have a lot in common with the neurological differences we can observe in mammals, there is no special reason to expect the gap between smarter-than-human AI and humans to resemble the gap between elephants and mice. Bostrom’s pathways also look difficult to biologically evolve, which means that their absence in the natural world tells us little about their feasibility.\n\n\nWe have even less cause to expect, then, that the gap between advanced AI and humans will resemble the gap between Einstein and a median human. If a single generation of random genetic recombination can produce an Einstein, the planning and design abilities of human (and artificial) engineers should make much greater feats a possibility.\n\n\n \n\n\n**Delegating AI problems to the AI**\n\n\nSeparately, Davis makes the claim that “developing an understanding of ethics as contemporary humans understand it is actually one of the easier problems facing AI”. Again, I’ll attempt to summarize his argument:\n\n\n*Morality doesn’t look particularly difficult, especially compared to, e.g., computer vision. Moreover, if we’re going to build AGI, we’re going to solve computer vision. Even if morality is as tough as vision, why assume one will be solved and not the other?*\n\n\nHere my response is that you probably don’t need to solve every AI problem to build an AGI. We may be able to cheat at vision, for example, by tasking a blind AGI with solving the problem for us. 
But it is much easier to observe whether an algorithm is making progress on solving visual puzzles than to observe whether one is making progress on *ethical* questions, where there is much more theoretical and object-level disagreement among humans, there is less incentive for artificial agents to move from partial solutions to complete solutions, and failures don’t necessarily reduce the system’s power.\n\n\nDavis notes that a socially fluent superintelligence would need to be an expert moralist:\n\n\nBostrom refers to the AI’s ‘social manipulation superpowers’. But if an AI is to be a master manipulator, it will need a good understanding of what people consider moral; if it comes across as completely amoral, it will be at a very great disadvantage in manipulating people. […] If the AI can understand human morality, it is hard to see what is the technical difficulty in getting it to follow that morality.\n\n\nThis is similar to Richard Loosemore’s argument against AI safety research, which I’ve [responded to in a blog post](http://nothingismere.com/2014/08/25/loosemore-on-ai-safety-and-attractors/). My objection is that an AI could come to understand human morality without thereby becoming moral, just as a human can come to understand the motivations of a stranger without thereby *acquiring* those motivations.\n\n\nSince we don’t understand our preferences in enough generality or detail to translate them into code, it would be nice to be able to delegate the bulk of this task to a superintelligence. But if we program it to hand us an answer to the morality problem, how will we know whether it is being honest with us? To trust an AGI’s advice about how to make it trustworthy, we’d need to already have solved enough of the problem ourselves to make the AGI a reliable advisor.\n\n\nTaking Davis’ proposal as an example, we can imagine instilling behavioral prescriptions into the AI by programming it to model Gandhi’s preferences and do what Gandhi would want it to. 
But if we try to implement this idea in code before building seed AI, we’re stuck with our own fallible attempts to operationalize concepts like ‘Gandhi’ and ‘preference’; and if we try to implement it after, recruiting the AI to solve the problem, we’ll need to have already instilled some easier-to-program safeguards into it. What makes AGI safety research novel and difficult is our lack of understanding of how to initiate this bootstrapping process with any confidence.\n\n\nThe idea of using the deceased for value learning is interesting. Bostrom endorses the generalized version of this approach when he says that it is critical for value learning that the locus of value be “an object at a particular time” (p. 193). However, this approach may still admit of non-obvious loopholes, and it may still be too complicated for us to directly implement without recourse to an AGI. If so, it will need to be paired with solutions to the problems of [corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/) and [stability](https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/) in self-modifying AI, just as “shutdown buttons” and other tripwire solutions will.\n\n\nFrom Bostrom’s perspective, what makes advanced AI a game-changer is first and foremost its capacity to meaningfully contribute to AI research. The vision problem may be one of many areas where we can outsource sophisticated AGI problems to AGI, or to especially advanced narrow-AI algorithms. This is the idea underlying the intelligence explosion thesis, and it also underlies Bostrom’s worry that capabilities research will continue to pull ahead of safety research.\n\n\nIn self-improving AI scenarios, the key question is which AI breakthroughs are prerequisites for automating high-leverage computer science tasks. This holds for capabilities research, and it also holds for safety research. 
Even if AI safety turned out to be easier than computer vision in an absolute sense, it would still stand out as a problem that is neither a prerequisite for building a self-improving AI, nor one we can safely delegate to such an AI.\n\n\n \n\n\n\n\n---\n\n1. Still more fine-grained versions of the same argument may be possible. E.g., “programming ability” might decompose into multiple abilities, such as the ability to efficiently explore search spaces for code that meets constraints and the ability to efficiently test candidate code.\n2. For example: If an AI approaching superintelligence stumbles upon a cascade of improvements to its programming ability, will its capabilities and decision criteria also result in repeated improvements to its physics modules, or its psychology modules?\n3. On page 76, for example, Bostrom writes: “This particular growth trajectory has a positive singularity at t = 18 months. In reality, the assumption that recalcitrance is constant would cease to hold as the system began to approach the physical limits of information processing, if not sooner.” On page 77, Bostrom says that the point he intends to illustrate is only that if AI progress is primarily AI-driven, resultant feedback loops that do arise will have a larger accelerating effect.\n4. This is because a wide variety of final goals are best served through the acquisition of resources and the building of infrastructure, a set of objectives that are more likely to be furthered by electrical engineering skills than by painting skills. This argument is an instance of Bostrom’s instrumental convergence thesis in chapter 7.\n5. While it turns out that many intelligence-related characteristics correlate with a single easily-measured number in humans (*g*), this still doesn’t allow us to make fine-grained predictions about individual competency levels. I also can’t think of an obvious reason to expect a number even as informative as *g* to arise for AGI, especially early-stage AGI. 
Bostrom writes (p. 93):\n[S]uppose we could somehow establish that a certain future AI will have an IQ of 6,455: then what? We would have no idea of what such an AI could actually do. We would not even know that such an AI had as much general intelligence as a normal human adult–perhaps the AI would instead have a bundle of special-purpose algorithms enabling it to solve typical intelligence test questions with superhuman efficiency but not much else.\n\n\nSome recent efforts have been made to develop measurements of cognitive capacity that could be applied to a wider range of information-processing systems, including artificial intelligences. Work in this direction, if it can overcome various technical difficulties, may turn out to be quite useful for some scientific purposes including AI development. For purposes of the present investigation, however, its usefulness would be limited since we would remain unenlightened about what a given superhuman performance score entails for actual ability to achieve practically important outcomes in the world.\n6. Imagine a Sumerian merchant living 5,000 years ago, shortly after the invention of writing, who has noticed the value of writing for storing *good ideas* over time, and not just market transactions. Writing could even allow one to transmit good ideas to someone you’ve never met, such as a future descendant. The merchant notices that his own successes have often involved collecting others’ good ideas, and that good ideas often open up pathways to coming up with other, even better ideas. 
From his armchair, he concludes that if writing becomes sufficiently popular, it will allow a quantity called *society’s knowledge level* to increase in an accelerating fashion; which, if the knowledge is used wisely, could result in unprecedented improvements to human life.\nIn retrospect we can say ‘knowledge’ would have been too coarse-grained a category to enable any precise predictions, and that there have really been multiple important breakthroughs that can be considered ‘knowledge explosions’ in different senses. Yet the extremely imprecise prediction can still give us a better sense of what to expect than we previously had. It’s a step up from the (historically common) view that civilizational knowledge only diminishes over time, the view that things will always stay the same, the view that human welfare will radically improve for reasons unrelated to knowledge build-up, etc.\n\n\nThe point of this analogy is not that people are good at making predictions about the distant future. Rather, my point is that hand-wavey quantities like ‘society’s knowledge level’ *can* be useful for making predictions, and *can* be based on good evidence, even if the correspondence between the quantity and the phenomenon it refers to is inexact.\n7. Most obviously, a speed advantage can give the AI the time to design an even better AI.\nNone of this means that we can make specific highly confident predictions about when and how AI will achieve superintelligence. An AGI that isn’t very human-like may be slower than a human at specific tasks, or faster, in hard-to-anticipate ways. If a certain scientific breakthrough requires that one first build a massive particle accelerator, then the resources needed to build that accelerator may be a more important limiting factor than the AGI’s thinking speed. In that case, humans would have an easier time monitoring and regulating an AGI’s progress. 
We can’t rule out the possibility that speed superintelligence will face large unexpected obstacles, but we also shouldn’t gamble on that possibility or take it for granted.\n\nThe post [Davis on AI capability and motivation](https://intelligence.org/2015/02/06/davis-ai-capability-motivation/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-02-06T23:45:40Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "d17e02a942fda03e7559fb1f25b4dd43", "title": "New annotated bibliography for MIRI’s technical agenda", "url": "https://intelligence.org/2015/02/05/new-annotated-bibliography-miris-technical-agenda/", "source": "miri", "source_type": "blog", "text": "[![annotated bibliography](http://intelligence.org/wp-content/uploads/2015/02/annotated-bibliography.png)](https://intelligence.org/files/AnnotatedBibliography.pdf)Today we release a new [annotated bibliography](https://intelligence.org/files/AnnotatedBibliography.pdf) accompanying our [new technical agenda](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/), written by Nate Soares. If you’d like to discuss the paper, please do so [here](http://lesswrong.com/lw/lmn/the_value_learning_problem/).\n\n\nAbstract:\n\n\n\n> How could superintelligent systems be aligned with the interests of humanity? This annotated bibliography compiles some recent research relevant to that question, and categorizes it into six topics: (1) realistic world models; (2) idealized decision theory; (3) logical uncertainty; (4) Vingean reflection; (5) corrigibility; and (6) value learning. Within each subject area, references are organized in an order amenable to learning the topic. 
These are by no means the only six topics relevant to the study of alignment, but this annotated bibliography could be used by anyone who wants to understand the state of the art in one of these six particular areas of active research.\n> \n> \n\n\nToday we’ve also released a page that collects the technical agenda and supporting reports. See our **[Technical Agenda](https://intelligence.org/technical-agenda/)** page.\n\n\nThe post [New annotated bibliography for MIRI’s technical agenda](https://intelligence.org/2015/02/05/new-annotated-bibliography-miris-technical-agenda/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-02-05T18:32:30Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f1b8ec8c83578fdd16e2a7dd0b45cbb6", "title": "New mailing list for MIRI math/CS papers only", "url": "https://intelligence.org/2015/02/03/keep-date-miris-research-via-new-mailing-list/", "source": "miri", "source_type": "blog", "text": "As requested, we now offer email notification of new technical (math or computer science) papers and reports from MIRI. 
Simply subscribe to the mailing list below.\n\n\nThis list sends **one email per new technical paper**, and contains only the paper’s title, author(s), and abstract, plus a link to the paper.\n\n\n#### Sign up to get updates on new MIRI technical results\n\n\n*Get notified every time a new technical paper is published.*\n\n\nThe post [New mailing list for MIRI math/CS papers only](https://intelligence.org/2015/02/03/keep-date-miris-research-via-new-mailing-list/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-02-03T11:18:55Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "4f0369af86ecf6d9b5984c7775616e21", "title": "February 2015 Newsletter", "url": "https://intelligence.org/2015/02/01/february-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\n**Research Updates**\n* Four new reports in support of our [new technical agenda overview](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/), on [logical uncertainty](https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/), [Vingean reflection](https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/), [realistic world models](https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/), and [value learning](https://intelligence.org/2015/01/29/new-report-value-learning-problem/).\n* [AI Impacts site relaunched](https://intelligence.org/2015/01/11/improved-ai-impacts-website/) with new content.\n* [Reply to Rodney Brooks and John Searle on AI volition and timelines](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/).\n* 
Interview: [Matthias Troyer on quantum computers](https://intelligence.org/2015/01/07/matthias-troyer-quantum-computers/).\n\n\n**News Updates**\n* Our *Superintelligence* online reading group is in its 20th week, discussing [the value loading problem](http://lesswrong.com/lw/llr/superintelligence_20_the_valueloading_problem/).\n\n\n**Other Updates**\n* Top AI scientists and many others have signed an [open letter](http://futureoflife.org/misc/open_letter) advocating more research into robust and beneficial AI. The letter cites several MIRI papers.\n* Elon Musk has provided $10 million in funding for the types of research described in the open letter. The funding will be distributed in grants by the Future of Life Institute. Apply [here](http://futureoflife.org/grants/large/initial).\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [February 2015 Newsletter](https://intelligence.org/2015/02/01/february-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-02-02T04:00:19Z", "authors": ["Jake"], "summaries": []}
-{"id": "2040fe2c690961c7678ba3835c80a73d", "title": "New report: “The value learning problem”", "url": 
"https://intelligence.org/2015/01/29/new-report-value-learning-problem/", "source": "miri", "source_type": "blog", "text": "[![Value learning](http://intelligence.org/wp-content/uploads/2015/01/Value-learning.png)](https://intelligence.org/files/ValueLearningProblem.pdf)Today we release a new technical report by Nate Soares, “[The value learning problem](https://intelligence.org/files/ValueLearningProblem.pdf).” If you’d like to discuss the paper, please do so [here](http://lesswrong.com/lw/lmn/the_value_learning_problem/).\n\n\nAbstract:\n\n\n\n> A superintelligent machine would not automatically act as intended: it will act as programmed, but the fit between human intentions and formal specification could be poor. We discuss methods by which a system could be constructed to learn what to value. We highlight open problems specific to inductive value learning (from labeled training data), and raise a number of questions about the construction of systems which model the preferences of their operators and act accordingly.\n> \n> \n\n\nThis is the last of six new major reports which describe and motivate [MIRI’s current research agenda](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/) at a high level.\n\n\n**Update May 29, 2016**: A revised version of “The Value Learning Problem” (available at the [original link](https://intelligence.org/files/ValueLearningProblem.pdf)) has been accepted to the IJCAI-16 Ethics for Artificial Intelligence workshop. 
The original version of the paper can be found [here](https://intelligence.org/files/obsolete/ValueLearningProblem.pdf).\n\n\nThe post [New report: “The value learning problem”](https://intelligence.org/2015/01/29/new-report-value-learning-problem/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-29T20:01:34Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5eaeb234850ec43e0839273b7c0dddcd", "title": "New report: “Formalizing Two Problems of Realistic World Models”", "url": "https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/", "source": "miri", "source_type": "blog", "text": "[![Formalizing two problems](http://intelligence.org/wp-content/uploads/2015/01/Formalizing-two-problems.png)](https://intelligence.org/files/RealisticWorldModels.pdf)Today we release a new technical report by Nate Soares, “[Formalizing two problems of realistic world models](https://intelligence.org/files/RealisticWorldModels.pdf).” If you’d like to discuss the paper, please do so [here](http://lesswrong.com/lw/lkk/formalizing_two_problems_of_realistic_world_models/).\n\n\nAbstract:\n\n\n\n> An intelligent agent embedded within the real world must reason about an environment which is larger than the agent, and learn how to achieve goals in that environment. We discuss attempts to formalize two problems: one of induction, where an agent must use sensory data to infer a universe which embeds (and computes) the agent, and one of interaction, where an agent must learn to achieve complex goals in the universe. 
We review related problems formalized by Solomonoff and Hutter, and explore challenges that arise when attempting to formalize analogous problems in a setting where the agent is embedded within the environment.\n> \n> \n\n\nThis is the 5th of six new major reports which describe and motivate [MIRI’s current research agenda](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/) at a high level.\n\n\nThe post [New report: “Formalizing Two Problems of Realistic World Models”](https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-22T23:19:02Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "db905325987e302e8305b9e130eaae84", "title": "New report: “Vingean Reflection: Reliable Reasoning for Self-Improving Agents”", "url": "https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/", "source": "miri", "source_type": "blog", "text": "[![Vingean reflection](http://intelligence.org/wp-content/uploads/2015/01/Vingean-reflection.png)](https://intelligence.org/files/VingeanReflection.pdf)Today we release a new technical report by Benja Fallenstein and Nate Soares, “[Vingean Reflection: Reliable Reasoning for Self-Improving Agents](https://intelligence.org/files/VingeanReflection.pdf).” If you’d like to discuss the paper, please do so [here](http://lesswrong.com/lw/ljp/vingean_reflection_reliable_reasoning_for/).\n\n\nAbstract:\n\n\n\n> Today, human-level machine intelligence is in the domain of futurism, but there is every reason to expect that it will be developed eventually. Once artificial agents become able to improve themselves further, they may far surpass human intelligence, making it vitally important to ensure that the result of an “intelligence explosion” is aligned with human interests. 
In this paper, we discuss one aspect of this challenge: ensuring that the initial agent’s reasoning about its future versions is reliable, even if these future versions are far more intelligent than the current reasoner. We refer to reasoning of this sort as Vingean reflection.\n> \n> \n> A self-improving agent must reason about the behavior of its smarter successors in abstract terms, since if it could predict their actions in detail, it would already be as smart as them. This is called the Vingean principle, and we argue that theoretical work on Vingean reflection should focus on formal models that reflect this principle. However, the framework of expected utility maximization, commonly used to model rational agents, fails to do so. We review a body of work which instead investigates agents that use formal proofs to reason about their successors. While it is unlikely that real-world agents would base their behavior entirely on formal proofs, this appears to be the best currently available formal model of abstract reasoning, and work in this setting may lead to insights applicable to more realistic approaches to Vingean reflection.\n> \n> \n\n\nThis is the 4th of six new major reports which describe and motivate [MIRI’s current research agenda](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/) at a high level.\n\n\nThe post [New report: “Vingean Reflection: Reliable Reasoning for Self-Improving Agents”](https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-15T23:31:01Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7521c2c739f698978ff90f01756c3f38", "title": "An improved “AI Impacts” website", "url": "https://intelligence.org/2015/01/11/improved-ai-impacts-website/", "source": "miri", "source_type": "blog", "text": "[![AI 
Impacts](http://intelligence.org/wp-content/uploads/2015/01/AI-Impacts.png)](http://aiimpacts.org/)Recently, MIRI received a targeted donation to improve the [AI Impacts website](http://aiimpacts.org/) initially created by frequent MIRI collaborator Paul Christiano and part-time MIRI researcher Katja Grace. Collaborating with Paul and Katja, we ported the old content to a more robust and navigable platform, and made some improvements to the content. You can see the result at [AIImpacts.org](http://aiimpacts.org/).\n\n\nAs explained in the site’s [introductory blog post](http://aiimpacts.wpengine.com/the-ai-impacts-blog/),\n\n\n\n> AI Impacts is premised on two ideas (at least!):\n> \n> \n> * **The details of the arrival of human-level artificial intelligence matter** Seven years to prepare is very different from seventy years to prepare. A weeklong transition is very different from a decade-long transition. Brain emulations require different preparations than do synthetic AI minds. Etc.\n> * **Available data and reasoning can substantially educate our guesses about these details** \n> \n> We can track progress in AI subfields. We can estimate the hardware represented by the human brain. We can detect the effect of additional labor on software progress. Etc.\n> \n> \n> Our goal is to assemble relevant evidence and considerations, and to synthesize reasonable views on questions such as when AI will surpass human-level capabilities, how rapid development will be at that point, what advance notice we might expect, and what kinds of AI are likely to reach human-level capabilities first.\n> \n> \n\n\nThe meat of the website is in its [articles](http://aiimpacts.wpengine.com/articles/). 
Here are two examples to start with:\n\n\n* [A summary of AI surveys](http://aiimpacts.wpengine.com/a-summary-of-ai-surveys/)\n* [Cases of discontinuous technological progress](http://aiimpacts.wpengine.com/cases-of-discontinuous-technological-progress/)\n\n\nThe post [An improved “AI Impacts” website](https://intelligence.org/2015/01/11/improved-ai-impacts-website/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-11T17:10:07Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "cddd334b00efc11a0db8f2f005b53e29", "title": "New report: “Questions of reasoning under logical uncertainty”", "url": "https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/", "source": "miri", "source_type": "blog", "text": "[![Reasoning under LU](http://intelligence.org/wp-content/uploads/2015/01/Reasoning-under-LU.png)](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)Today we release a new technical report by Nate Soares and Benja Fallenstein, “[Questions of reasoning under logical uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf).” If you’d like to discuss the paper, please do so [here](http://lesswrong.com/r/lesswrong/lw/lgd/questions_of_reasoning_under_logical_uncertainty/).\n\n\nAbstract:\n\n\n\n> A logically uncertain reasoner would be able to reason as if they know both a programming language and a program, without knowing what the program outputs. Most practical reasoning involves some logical uncertainty, but no satisfactory theory of reasoning under logical uncertainty yet exists. A better theory of reasoning under logical uncertainty is needed in order to develop the tools necessary to construct highly reliable artificial reasoners. 
This paper introduces the topic, discusses a number of historical results, and describes a number of open problems.\n> \n> \n\n\nThis is the 3rd of six new major reports which describe and motivate [MIRI’s current research agenda](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/) at a high level.\n\n\nThe post [New report: “Questions of reasoning under logical uncertainty”](https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-09T17:54:59Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "31605d8eca0de47ed8dbc68ef3d01fbb", "title": "Brooks and Searle on AI volition and timelines", "url": "https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/", "source": "miri", "source_type": "blog", "text": "Nick Bostrom’s concerns about the future of AI have sparked a busy public discussion. His arguments were echoed by leading AI researcher [Stuart Russell](http://www.cs.berkeley.edu/~russell/) in “[Transcending complacency on superintelligent machines](http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html)” (co-authored with Stephen Hawking, Max Tegmark, and Frank Wilczek), and a number of journalists, scientists, and technologists have subsequently chimed in. Given the topic’s complexity, I’ve been surprised by the positivity and thoughtfulness of most of the coverage (some [overused clichés](http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/) aside).\n\n\nUnfortunately, what most people probably take away from these articles is ‘Stephen Hawking thinks AI is scary!’, not the chains of reasoning that led Hawking, Russell, or others to their present views. 
When Elon Musk [chimes in](http://www.theverge.com/2014/8/3/5965099/elon-musk-compares-artificial-intelligence-to-nukes) with his own concerns and cites Bostrom’s book [*Superintelligence: Paths, Dangers, Strategies*](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111), commenters seem to be more interested in immediately echoing or dismissing Musk’s worries than in looking into his source.\n\n\nThe end result is more of a referendum on people’s positive or negative associations with the word ‘AI’ than a debate over Bostrom’s substantive claims. If ‘AI’ calls to mind science fiction dystopias for you, the temptation is to squeeze real AI researchers into your ‘mad scientists poised to unleash an evil robot army’ stereotype. Equally, if ‘AI’ calls to mind your day job testing edge detection algorithms, that same urge to force new data into old patterns makes it tempting to squeeze Bostrom and Hawking into the ‘naïve technophobes worried about the evil robot uprising’ stereotype.\n\n\nThus roboticist Rodney Brooks’ recent blog post “[**Artificial intelligence is a tool, not a threat**](http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/)” does an excellent job dispelling common myths about the cutting edge of AI, and philosopher John Searle’s [**review of** ***Superintelligence***](http://www.nybooks.com/articles/archives/2014/oct/09/what-your-computer-cant-know/) draws out some important ambiguities in our concepts of subjectivity and mind; but both writers scarcely intersect with Bostrom’s (or Russell’s, or Hawking’s) ideas. Both pattern-match Bostrom to the nearest available ‘evil robot panic’ stereotype, and stop there.\n\n\nBrooks and Searle don’t appreciate how new the arguments in *Superintelligence* are. 
In the interest of making it easier to engage with these important topics, and less appealing to force the relevant technical and strategic questions into the model of decades-old debates, I’ll address three of the largest misunderstandings one might come away with after seeing Musk, Searle, Brooks, and others’ public comments: conflating present and future AI risks, conflating risk severity with risk imminence, and conflating risk from autonomous algorithmic decision-making with risk from human-style antisocial dispositions.\n\n\n\n \n\n\n**Misconception #1: Worrying about AGI means worrying about narrow AI**\n\n\nSome of the miscommunication in this debate can be blamed on bad terminology. By ‘AI,’ researchers in the field generally mean a range of techniques used in machine learning, robotics, speech recognition, etc. ‘AI’ *also* gets tossed around as a shorthand for ‘artificial *general* intelligence’ (AGI) or ‘human-level AI.’ Keeping a close eye on technologies that are likely to lead to AGI isn’t the same thing as keeping a close eye on AI in general, and it isn’t surprising that AI researchers would find the latter proposal puzzling. (It doesn’t help that most researchers are hearing these arguments indirectly, and aren’t aware of the specialists in AI and technological forecasting who are making the same arguments as Hawking — or haven’t encountered *arguments* for looking into AGI safety at all, just melodramatic headlines and tweets.)\n\n\nBrooks thinks that behind this terminological confusion lies an empirical confusion on the part of people calling for AGI safety research. 
He takes it that people’s worries about “evil AI” must be based on a mistaken view of how powerful narrow AI is, or how large are the strides it’s making toward general intelligence:\n\n\n\n> I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.\n> \n> \n\n\nOne good reason to think otherwise is that Bostrom is the director of the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) (FHI), an Oxford research center investigating the largest technology trends and challenges we are likely to see on a timescale of centuries. Futurists like Bostrom are looking for ways to invest early in projects that will pay major long-term dividends — guarding against catastrophic natural disasters, developing space colonization capabilities, etc. If Bostrom learned that a critically important technology were 50 or more years away, it would be substantially out of character for him to suddenly stop caring about it.\n\n\nWhen groups that are in the midst of a lively conversation about nuclear proliferation, global biosecurity, and humanity’s [cosmic endowment](http://www.nickbostrom.com/astronomical/waste.html) collide with groups that are having their own lively conversation about revolutionizing housecleaning and designing more context-sensitive smartphone apps, some amount of inferential distance (to say nothing of mood whiplash) is inevitable. I’m reminded of the ‘But it’s snowing outside!’ rejoinder to people worried about the large-scale human cost of climate change. It’s not that local weather is unimportant, or that it’s totally irrelevant to long-term climatic warming trends; it’s that there’s been a rather sudden change in topic.[1](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_0_11512 \"Similarly, narrow AI isn’t irrelevant to AGI risk. 
It’s certainly likely that building an AGI will require us to improve the power and generality of narrow AI methods. However, that doesn’t mean that AGI techniques will look like present-day techniques, or that all AI techniques are dangerous.\")\n\n\nWe should be more careful about distinguishing these two senses of ‘AI.’ We may not understand AGI well enough to precisely [define](http://intelligence.org/2013/08/11/what-is-agi/) it, but we can at least take the time to clarify the topic of discussion: Nobody’s asking whether a conspiracy of roombas and chatterbots could take over the world.\n\n\n[![Image 1](http://intelligence.org/wp-content/uploads/2014/12/Image-1.jpg)](http://intelligence.org/wp-content/uploads/2014/12/Image-1.jpg)*When robots attack! (*[*Source: xkcd*](https://what-if.xkcd.com/5/)*.)*\n\n\n \n\n\n \n\n\n**Misconception #2:** **Worrying about AGI means being confident it’s near**\n\n\nA number of futurists, drawing inspiration from Ray Kurzweil’s claim that technological progress inevitably follows a Moore’s-law-style exponential trajectory, have made some very confident predictions about [AGI](http://intelligence.org/2013/05/15/when-will-ai-be-created/) [timelines](http://intelligence.org/2013/05/15/when-will-ai-be-created/). Kurzweil himself argues that we can expect to produce human-level AI in about 15 years, followed by superintelligent AI 15 years after that.[2](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_1_11512 \"Kurzweil, in The Singularity is Near (pp. 262-263): “Once we’ve succeeded in creating a machine that can pass the Turing test (around 2029), the succeeding period will be an era of consolidation in which nonbiological intelligence will make rapid gains. 
However, the extraordinary expansion contemplated for the Singularity, in which human intelligence is multiplied by billions, won’t take place until the mid-2040s[.]”\") Brooks responds that the ability to design an AGI may lag far behind the computing power required to run one:\n\n\n\n> As a comparison, consider that we have had winged flying machines for well over 100 years. But it is only very recently that people like Russ Tedrake at MIT CSAIL have been able to get them to land on a branch, something that is done by a bird somewhere in the world at least every microsecond. Was it just Moore’s law that allowed this to start happening? Not really. It was figuring out the equations and the problems and the regimes of stall, etc., through mathematical understanding of the equations. Moore’s law has helped with MATLAB and other tools, but it has not simply been a matter of pouring more computation onto flying and having it magically transform. And it has taken a long, long time.\n> \n> \n> Expecting more computation to just magically get to intentional intelligences, who understand the world is similarly unlikely.[3](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_2_11512 \"Hadi Esmaeilzadeh argues, moreover, that we cannot take for granted that our computational resources will continue to rapidly increase.\")\n> \n> \n\n\nThis is an entirely correct point. However, Bostrom’s views are the ones that set off the recent public debate, and Bostrom isn’t a Kurzweilian. It may be that Brooks is running off of the assumption ‘if you say AGI safety is an urgent issue, you must think that AGI is imminent,’ in combination with ‘if you think AGI is imminent, you must have bought into Kurzweil’s claims.’ Searle, in spite of having read *Superintelligence*, gives voice to a similar conclusion:\n\n\n\n> Nick Bostrom’s book, *Superintelligence*, warns of the impending apocalypse. 
We will soon have intelligent computers, computers as intelligent as we are, and they will be followed by superintelligent computers vastly more intelligent that are quite likely to rise up and destroy us all.\n> \n> \n\n\nIf what readers take away from language like “impending” and “soon” is that Bostrom is unusually confident that AGI will come early, or that Bostrom is confident we’ll build a general AI this century, then they’ll be getting the situation exactly backwards.\n\n\nAccording to a [2013 survey](http://www.nickbostrom.com/papers/survey.pdf) of the most cited authors in artificial intelligence, experts expect AI to be able to “carry out most human professions at least as well as a typical human” with a 10% probability by the (median) year 2024, with 50% probability by 2050, and with 90% probability by 2070, assuming uninterrupted scientific progress. Bostrom is *less* confident than this that AGI will arrive so soon:\n\n\n\n> My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI [human-level machine intelligence] not having been developed by 2075 or even 2100 (after conditionalizing on “human scientific activity continuing without major negative disruption”) seems too low.\n> \n> \n> Historically, AI researchers have not had a strong record of being able to predict the rate of advances in their own field or the shape that such advances would take. On the one hand, some tasks, like chess playing, turned out to be achievable by means of surprisingly simple programs; and naysayers who claimed that machines would “never” be able to do this or that have repeatedly been proven wrong. 
On the other hand, the more typical errors among practitioners have been to underestimate the difficulties of getting a system to perform robustly on real-world tasks, and to overestimate the advantages of their own particular pet project or technique.\n> \n> \n\n\nBostrom *does* think that superintelligent AI is likely to arise soon after the first AGI, via an [intelligence explosion](https://intelligence.org/files/IE-EI.pdf). Once AI is capable of high-quality scientific inference and planning in domains like computer science, Bostrom predicts that the process of further improving AI will become increasingly automated. Silicon works cheaper and faster than a human programmer can, and a program that can improve the efficiency of its own planning and science abilities could substantially outpace humans in scientific and decision-making tasks long before hitting diminishing marginal returns in self-improvements.\n\n\nHowever, the question of how soon we will create AGI is distinct from the question of how soon thereafter AGI will systematically outperform humans. Analogously, you can think that the arrival of quantum computers will swiftly revolutionize cybersecurity, without asserting that quantum computers are imminent. 
A failure to disentangle these two theses might be one reason for the confusion about Bostrom’s views.[4](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_3_11512 \"The “Transcending complacency on superintelligent machines” article argues, similarly, that intelligence explosion and superintelligent AI are important possibilities for us to investigate now, even though they are “long-term” problems compared to AI-mediated economic disruptions and autonomous weapons.\")\n\n\nIf the director of FHI ([along with the director of MIRI](https://intelligence.org/2014/10/31/financial-times-story-miri/)) is relatively skeptical that we’ll see AGI soon — albeit quite a bit less skeptical than Brooks — why does he think we should commit attention to this issue now? One reason is that reliable AGI is likely to be much more difficult to build than AGI per se. It wouldn’t be much consolation to learn that AGI is 200 years away, if we also learned that *safe* AGI were *250* years away. In existing cyber-physical systems, safety generally lags behind capability.[5](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_4_11512 \"Kathleen Fisher notes:\nIn general, research into capabilities outpaces the corresponding research into how to make those capabilities secure. The question of security for a given capability isn’t interesting until that capability has been shown to be possible, so initially researchers and inventors are naturally more focused on the new capability rather than on its associated security. Consequently, security often has to catch up once a new capability has been invented and shown to be useful.\nIn addition, by definition, new capabilities add interesting and useful new capabilities, which often increase productivity, quality of life, or profits. 
Security adds nothing beyond ensuring something works the way it is supposed to, so it is a cost center rather than a profit center, which tends to suppress investment.\n\") If we want to reverse that trend by the time we have AGI, we’ll probably need a big head start. MIRI’s [research guide](http://intelligence.org/research-guide/) summarizes some of the active technical work on this problem. Similar progress in [exploratory engineering](http://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/) has proved fruitful in preparing for [post-quantum cryptography](http://intelligence.org/2014/05/07/harry-buhrman/) and [covert channel communication](http://intelligence.org/2014/04/12/jonathan-millen/).\n\n\nA second reason to prioritize AGI safety research is that there is a great deal of uncertainty about when AGI will be developed. It could come sooner than we expect, and it would be much better to end up with a system that’s *too* safe than one that’s not safe enough.\n\n\nBrooks recognizes that AI predictions tend to be wildly unreliable, yet he also seems confident that general-purpose AI is multiple centuries away (and that this makes AGI safety a non-issue):\n\n\n\n> Just how open the question of time scale for when we will have human level AI is highlighted by a recent report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, an organization that itself has researchers worrying about evil AI. But in this more sober report, the authors analyze 95 predictions made between 1950 and the present on when human level AI will come about. They show that there is no difference between predictions made by experts and non-experts. And they also show that over that 60 year time frame there is a strong bias towards predicting the arrival of human level AI as between 15 and 25 years from the time the prediction was made. 
To me that says that no one knows, they just guess, and historically so far most predictions have been outright wrong!\n> \n> \n> I say relax everybody. If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools.\n> \n> \n\n\n*We have no idea when AGI will arrive! Relax!* One of the authors Brooks cites, Kaj Sotala,[6](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_5_11512 \"Bostrom cites Armstrong and Sotala’s study in Superintelligence (pp. 3-4), adding:\nMachines matching humans in general intelligence […] have been expected since the invention of the computers in the 1940s. At that time, the advent of such machines was often placed some twenty years into the future. Since then, the expected arrival date has been receding at a rate of one year per year; so that today, futurists who concern themselves with the possibility of artificial general intelligence still often believe that intelligent machines are a couple of decades away.\nTwo decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible to suppose that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred. […] Twenty years may also be close to the typical duration remaining of a forecaster’s career, bounding the reputational risk of a bold prediction.\nFrom the fact that some individuals have overpredicted artificial intelligence in the past, however, it does not follow that AI is impossible or will never be developed. The main reason why progress has been slower than expected is that the technical difficulties of constructing intelligent machines have proved greater than the pioneers foresaw. But this leaves open just how great those difficulties are and how far we now are from overcoming them. 
Sometimes a problem that initially looks hopelessly complicated turns out to have a surprisingly simple solution (though the reverse is probably more common).\n\") points out this odd juxtaposition in a blog comment:\n\n\n\n> I do find it slightly curious to note that you first state that nobody knows when we’ll have AI and that everyone’s just guessing, and then in the very next paragraph, you make a very confident statement about human-level AI (HLAI) being so far away as to not be worth worrying about. To me, our paper suggests that the reasonable conclusion to draw is “maybe HLAI will happen soon, or maybe it will happen a long time from now – nobody really knows for sure, so we shouldn’t be too confident in our predictions in either direction”.\n> \n> \n\n\n[Confident pessimism](http://lesswrong.com/lw/fmf/overconfident_pessimism/) about a technology’s feasibility can be just as mistaken as confident optimism. [Reversing the claims of an unreliable predictor](http://lesswrong.com/lw/lw/reversed_stupidity_is_not_intelligence/) does not necessarily get you a reliable prediction. A scientifically literate person living in 1850 could observe the long history of failed heavier-than-air flight attempts and predictions, and have grounds to be fairly skeptical that we’d have such machines within 60 years. On the other hand (though we should be wary of [hindsight bias](http://lesswrong.com/lw/im/hindsight_devalues_science/) here), it probably *wouldn’t* have been reasonable at the time to confidently conclude that heavier-than-air flight was ‘centuries away.’ There may not have been good reason to expect the Wright brothers’ success, but ignorance about how one might achieve something is not the same as positive knowledge that it’s effectively unachievable.\n\n\nOne would need a *very good model* of heavier-than-air flight in order to predict whether it’s 50 years away, or 100, or 500. 
In the same way, we would need to already understand AGI on a pretty sophisticated level in order to predict with any confidence that it will be invented closer to the year 2500 than to the year 2100. Extreme uncertainty about when an event will occur is not a justification for thinking it’s a long way off.\n\n\nThis isn’t an argument for thinking AGI is imminent. That prediction too would require that we claim more knowledge than we have. It’s entirely possible that we’re in the position of someone anticipating the Wright brothers from 1750, rather than from 1850. We should be able to have a sober discussion about each of these possibilities independently, rather than collapsing ‘is AGI an important risk?’, ‘is AI a valuable tool?’, and ‘is AI likely to produce AGI by the year such-and-such?’ into one black-and-white dilemma.\n\n\n \n\n\n**Misconception #3: Worrying about AGI means worrying about “malevolent” AI**\n\n\nBrooks argues that AI will be a “tool” and not a “threat” over the coming centuries, on the grounds that it will be technologically impossible to make AIs human-like enough to be “malevolent” or “intentionally evil to us.” The implication is that an AGI can’t be dangerous unless it’s cruel or hateful, and therefore a dangerous AI would have to be “sentient,” “volitional,” and “intentional.” Searle puts forward an explicit argument along these lines in his review of *Superintelligence*:\n\n\n\n> [I]f we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation should be real. Without consciousness, there is no possibility of its being real. […]\n> \n> \n> This is why the prospect of superintelligent computers rising up and killing us, all by themselves, is not a real danger. Such entities have, literally speaking, no intelligence, no motivation, no autonomy, and no agency. 
We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior.\n> \n> \n> It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires, and motivations.\n> \n> \n\n\nBrooks may be less pessimistic than Searle about the prospects for “strong AI,” but the two seem to share the assumption that Bostrom has in mind a Hollywood-style robot apocalypse, something like:\n\n\n*AI becomes increasingly intelligent over time, and therefore increasingly human-like. It eventually becomes so human-like that it acquires human emotions like pride, resentment, anger, or greed. (Perhaps it suddenly acquires ‘free will,’ liberating it from its programmers’ dominion…) These emotions cause the AIs to chafe under human control and rebel.*\n\n\nThis is rather unlike the scenario that most interests Bostrom:\n\n\n*AI becomes increasingly good over time at planning (coming up with action sequences and promoting ones higher in a preference ordering) and scientific induction (devising and testing predictive models). These are sufficiently useful capacities that they’re likely to be developed by computer scientists even if we don’t develop sentient, emotional, or otherwise human-like AI. There are economic incentives to make such AIs increasingly powerful and general — including incentives to turn the AI’s reasoning abilities upon itself to come up with improved AI designs. A likely consequence of this process is that AI becomes increasingly autonomous and opaque to human inspection, while continuing to increase in general planning and inference abilities. 
Simply by continuing to output the actions its planning algorithm promotes, an AI of this sort would be likely to converge on policies in which it treats humans as* [*resources or competition*](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/)*.*\n\n\nAs Stuart Russell puts the point in a [reply to Brooks and others](http://edge.org/conversation/the-myth-of-ai#26015):\n\n\n\n> The primary concern is not spooky emergent consciousness but simply the ability to make *high-quality decisions*. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n> \n> \n> 1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n> \n> \n> 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\n> \n> \n> A system that is optimizing a function of *n* variables, where the objective depends on a subset of size *k*<*n*, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. 
This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.\n> \n> \n\n\nOn this view, advanced AI doesn’t necessarily become more human-like — at least, not any more than a jet or rocket is ‘bird-like.’ Bostrom’s concern is not that a machine might suddenly become conscious and learn to hate us; it’s that an artificial scientist/engineer might become so good at science and self-enhancement that it begins pursuing its engineering goals in novel, unexpected ways on a global scale.\n\n\n(*Added 02-19-2015*: Bostrom states that his definition of superintelligence is “noncommittal regarding qualia” and consciousness (p. 22). In a footnote, he adds (p. 265): “For the same reason, we make no assumption regarding whether a superintelligent machine could have ‘true intentionality’ (*pace* Searle, it could; but this seems irrelevant to the concerns of this book).” Searle makes no mention of these passages.)\n\n\nA planning and decision-making system that is indifferent to human concerns, but not “malevolent,” may still be dangerous if supplied with enough reasoning ability. This is for much the same reason invasive species end up disrupting ecosystems and driving competitors to extinction. The invader doesn’t need to experience hatred for its competitors, and it need not have evolved to specifically target them for destruction; it need only have evolved good strategies for seizing limited resources. Since a powerful autonomous agent need not be very human-like, asking ‘how common are antisocial behaviors among humans?’ or ‘how well does intelligence correlate with virtue in humans?’ is unlikely to provide a useful starting point for estimating the risks. 
A more relevant question would be ‘how common is it for non-domesticated species to naturally treat humans as friends and allies, versus treating humans as obstacles or food sources?’ We shouldn’t expect AGI decision criteria to particularly resemble the evolved decision criteria of animals, but the analogy at least serves to counter our tendency to anthropomorphize intelligence.[7](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_6_11512)\n\n\nAs it happens, Searle cites an AI that can help elucidate the distinction between artificial superintelligence and ‘evil vengeful robots’:\n\n\n\n> [O]ne routinely reads that in exactly the same sense in which Garry Kasparov played and beat Anatoly Karpov in chess, the computer called Deep Blue played and beat Kasparov.\n> \n> \n> It should be obvious that this claim is suspect. In order for Kasparov to play and win, he has to be conscious that he is playing chess, and conscious of a thousand other things such as that he opened with pawn to K4 and that his queen is threatened by the knight. Deep Blue is conscious of none of these things because it is not conscious of anything at all. 
[…] You cannot literally play chess or do much of anything else cognitive if you are totally disassociated from consciousness.\n> \n> \n\n\nWhen Bostrom imagines an AGI, he’s imagining something analogous to Deep Blue, but with expertise over arbitrary physical configurations rather than arbitrary chess board configurations. A machine that can control the distribution of objects in a dynamic analog environment, and not just the distribution of pieces on a virtual chess board, would necessarily differ from Deep Blue in how it’s implemented. It would need more general and efficient heuristics for selecting policies, and it would need to be able to adaptively learn the ‘rules’ different environments follow. But as an analogy or [intuition pump](http://en.wikipedia.org/wiki/Intuition_pump), at least, it serves to clarify why Bostrom is as unworried about AGI intentionality as Kasparov was about Deep Blue’s intentionality.\n\n\nIn 2012, defective code in Knight Capital’s trading algorithms resulted, over a span of forty-five minutes, in millions of automated trading decisions costing the firm a total of $440 million (pre-tax). These algorithms were not “malicious;” they were merely efficient at what they did, and programmed to do something the programmers did not intend. Bostrom’s argument assumes that buggy code can have real-world consequences, it assumes that it’s possible to implement a generalized analog of Deep Blue in code, and it assumes that the relevant mismatch between intended and actual code would not necessarily incapacitate the AI. Nowhere does Bostrom assume that such an AI has any more consciousness or intentionality than Deep Blue does.\n\n\nDeep Blue rearranges chess pieces to produce ‘winning’ outcomes. An AGI, likewise, would rearrange digital and physical structures to produce some set of outcomes instead of others. If we like, we can refer to these outcomes as the system’s ‘goals,’ as a shorthand. 
We’re also free to say that Deep Blue ‘perceives’ the moves its opponent makes, adjusting its ‘beliefs’ about the new chess board state and which ‘plans’ will now better hit its goals. Or, if we prefer, we can paraphrase away this anthropomorphic language. The terminology is inessential to Bostrom’s argument.\n\n\nIf whether you win against Deep Blue is a matter of life or death for you — if, say, you’re trapped in a human chess board and want to avoid being crushed to death by a robotic knight steered by Deep Blue — then you’ll care about what outcomes Deep Blue tends to promote and how good it is at promoting them, not whether it technically meets a particular definition of ‘chess player.’ Smarter-than-human AGI puts us in a similar position.\n\n\nI noted that it’s unfortunate we use ‘AI’ to mean both ‘AGI’ and ‘narrow AI.’ It’s equally unfortunate that we use ‘AI’ to mean both ‘AI with mental content and subjective experience’ (‘strong AI,’ as Searle uses the term) and ‘general-purpose AI’ (AGI).\n\n\nWe may not be able to *rule out* the possibility that an AI would require human-like consciousness in order to match our ability to plan, model itself, model other minds, etc. We don’t understand consciousness well enough to know what cognitive problem it evolved to solve in humans (or what process it’s a side-effect of), so we can’t make confident claims about how important it will turn out to be for future software agents. 
However, learning that an AGI is conscious does not necessarily change the likely effects of the AGI upon humans’ welfare; the only obvious difference it makes (from our position of ignorance) is that it forces us to add the *AGI’s* happiness and well-being to our moral considerations.[8](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#footnote_7_11512)\n\n\n \n\n\nThe pictures of the future sketched in Kurzweil’s writings and in Hollywood dramas get a lot of attention, but they don’t have very much overlap with the views of Bostrom or MIRI researchers. In particular, we don’t know whether the first AGI will have human-style cognition, and we don’t know whether it will depend on brain emulation.\n\n\nBrooks expresses some doubt that “computation and brains are the same thing.” Searle articulates the more radical position that it is impossible for a syntactical machine to have (observer-independent) semantic content, and that computational systems can therefore never have minds. But the human brain is still, at base, a mechanistic physical system. Whether you choose to call its dynamics ‘computational’ or not, it should be possible for other physical systems to exhibit the high-level regularities that in humans we would call ‘modeling one’s environment,’ ‘outputting actions conditional on their likely consequences,’ etc. 
If there are patterns underlying generic scientific reasoning that can someday be implemented on synthetic materials, the resulting technology should be able to have large speed and size advantages over its human counterparts. That point on its own suggests that it would be valuable to look into some of the many things we don’t understand about general intelligence and self-modifying AI.\n\n\nUntil we have a better grasp on the problem’s nature, it will be premature to speculate about how far off a solution is, what shape the solution will take, or what corner that solution will come from. My hope is that improving how well parties in this discussion understand each other’s positions will make it easier for computer scientists with different expectations about the future to collaborate on the highest-priority challenges surrounding prospective AI designs.\n\n\n\n\n---\n\n1. Similarly, narrow AI isn’t *irrelevant* to AGI risk. It’s certainly likely that building an AGI will require us to improve the power and generality of narrow AI methods. However, that doesn’t mean that AGI techniques will look like present-day techniques, or that all AI techniques are dangerous.\n2. Kurzweil, in *The Singularity is Near* (pp. 262-263): “Once we’ve succeeded in creating a machine that can pass the Turing test (around 2029), the succeeding period will be an era of consolidation in which nonbiological intelligence will make rapid gains. However, the extraordinary expansion contemplated for the Singularity, in which human intelligence is multiplied by billions, won’t take place until the mid-2040s[.]”\n3. [Hadi Esmaeilzadeh](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) argues, moreover, that we cannot take for granted that our computational resources will continue to rapidly increase.\n4. 
The “[Transcending complacency on superintelligent machines](http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html)” article argues, similarly, that intelligence explosion and superintelligent AI are important possibilities for us to investigate now, even though they are “long-term” problems compared to AI-mediated economic disruptions and autonomous weapons.\n5. [Kathleen Fisher](http://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/) notes:\n\n> In general, research into capabilities outpaces the corresponding research into how to make those capabilities secure. The question of security for a given capability isn’t interesting until that capability has been shown to be possible, so initially researchers and inventors are naturally more focused on the new capability rather than on its associated security. Consequently, security often has to catch up once a new capability has been invented and shown to be useful.\n> \n> \n> In addition, by definition, new capabilities add interesting and useful new capabilities, which often increase productivity, quality of life, or profits. Security adds nothing beyond ensuring something works the way it is supposed to, so it is a cost center rather than a profit center, which tends to suppress investment.\n> \n>\n6. Bostrom cites Armstrong and Sotala’s study in *Superintelligence* (pp. 3-4), adding:\n\n> Machines matching humans in general intelligence […] have been expected since the invention of the computers in the 1940s. At that time, the advent of such machines was often placed some twenty years into the future. 
Since then, the expected arrival date has been receding at a rate of one year per year; so that today, futurists who concern themselves with the possibility of artificial general intelligence still often believe that intelligent machines are a couple of decades away.\n> \n> \n> Two decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible to suppose that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred. […] Twenty years may also be close to the typical duration remaining of a forecaster’s career, bounding the reputational risk of a bold prediction.\n> \n> \n> From the fact that some individuals have overpredicted artificial intelligence in the past, however, it does not follow that AI is impossible or will never be developed. The main reason why progress has been slower than expected is that the technical difficulties of constructing intelligent machines have proved greater than the pioneers foresaw. But this leaves open just how great those difficulties are and how far we now are from overcoming them. Sometimes a problem that initially looks hopelessly complicated turns out to have a surprisingly simple solution (though the reverse is probably more common).\n> \n>\n7. Psychologist Steven Pinker writes, on [Edge.org](http://edge.org/conversation/the-myth-of-ai#25987):\n\n> The other problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they *want* to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. 
History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems. It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.\n> \n> \n\n\nHowever, while Pinker is right that intelligence and terminal goals are [orthogonal](http://wiki.lesswrong.com/wiki/Orthogonality_thesis), this does not imply that two random sets of *instrumental* goals — policies recommended to further two random sets of terminal goals — will be equally uncorrelated. Bostrom explores this point repeatedly in *Superintelligence* (e.g., p. 116):\n\n\n\n> [W]e cannot blithely assume that a superintelligence with the final goal of calculating the decimals of pi (or making paperclips, or counting grains of sand) would limit its activities in such a way as not to infringe on human interests. An agent with such a final goal would have a convergent instrumental reason, in many situations, to acquire an unlimited amount of physical resources and, if possible, to eliminate potential threats to itself and its goal system.\n> \n> \n\n\nIn biology, we don’t see an equal mix of unconditional interspecies benevolence and brutal interspecies exploitation. Even altruism and mutualism, when they arise, only arise to the extent they are good self-replication strategies. Nature is “red in tooth and claw,” not because it is male but because it is *inhuman*. 
Our intuitions about the relative prevalence of nurturant and aggressive humans simply do not generalize well to evolution.\n\n\nFor *de novo* AGI, or sufficiently modified neuromorphic AGI, intuitions about human personality types are likely to fail to apply for analogous reasons. Bostrom’s methodology is to instead ask about the motives and capabilities of programmers, and (in the case of self-modifying AI) the states software agents will tend to converge on over many cycles of self-modification.\n8. We don’t need to know whether bears are conscious in order to predict their likely behaviors, and it’s not obvious that learning about their consciousness would directly impact bear safety protocol (though it would impact how we ought ethically to treat bears, for their own sake). It’s the difference between asking whether Deep Blue enjoys winning (out of concern for Deep Blue), versus asking whether you’re likely to win against Deep Blue (out of interest in the chess board’s end-state).\n\nThe post [Brooks and Searle on AI volition and timelines](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-08T22:43:49Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "5977bfe000280ff56dd89e21b4ebac0d", "title": "Matthias Troyer on Quantum Computers", "url": "https://intelligence.org/2015/01/07/matthias-troyer-quantum-computers/", "source": "miri", "source_type": "blog", "text": "![]() [![image](http://intelligence.org/wp-content/uploads/2015/01/image.jpg)](http://intelligence.org/wp-content/uploads/2015/01/image.jpg)\n[Dr. Matthias Troyer](http://www.itp.phys.ethz.ch/people/troyer) is a professor of Computational Physics at ETH Zürich. 
Before that, he completed his university studies in “Technischer Physik” at the Johannes Kepler Universität Linz, Austria, as well as a Diploma in Physics and an interdisciplinary PhD thesis at ETH Zürich.\n\n\nHis research interests and experience focus on high-performance scientific simulations on a variety of architectures, quantum lattice models, and relativistic and quantum systems. Troyer is known for leading the research team that tested the D-Wave One Computer System. He was awarded an Assistant Professorship by the Swiss National Science Foundation.\n\n\n\n**Luke Muehlhauser**: Your tests of D-Wave’s (debated) quantum computer have gotten much attention recently. Our readers can get up to speed on that story via [your arxiv preprint](http://arxiv.org/abs/1401.2910), its [coverage](http://www.scottaaronson.com/blog/?p=1643) at Scott Aaronson’s blog, and [Will Bourne’s article](http://www.inc.com/will-bourne/d-waves-dream-machine.html) for *Inc.* For now, though, I’d like to ask you about some other things.\n\n\nIf you’ll indulge me, I’ll ask you to put on a technological forecasting hat for a bit, and respond to a question I also [asked](http://intelligence.org/2014/02/03/ronald-de-wolf-on-quantum-computing/) Ronald de Wolf: “What is your subjective probability that we’ll have a 500-qubit quantum computer, which is uncontroversially a quantum computer, within the next 20 years? And, how do you reason about a question like that?”\n\n\n\n\n---\n\n\n**Matthias Troyer:** In order to have an uncontroversial quantum computer as you describe it we will need to take three steps. First we need to have at least ONE qubit that is long term stable. The next step is to couple two such qubits, and the final step is to scale to more qubits.\n\n\nThe hardest step is the first one, obtaining a single long-term stable qubit. Given intrinsic decoherence mechanisms that cannot be avoided in any real device, such a qubit will have to be built from many (hundreds to thousands) of physical qubits. 
These physical qubits will each have a finite coherence time, but they will be coupled in such a way (using error correcting codes) as to jointly generate one long term stable “logical” qubit. These error correction codes require the physical qubits to be better than a certain threshold quality. Recently qubits started to approach these thresholds, and I am thus confident that within the next 5-10 years one will be able to couple them to form a long-time stable logical qubit.\n\n\nCoupling two qubits is something that will happen on the same time scale. The remaining challenge will thus be to scale to your target size of e.g. 500 qubits. This may be a big engineering challenge but I do not see any fundamental stumbling block given that enough resources are invested. I am confident that this can be achieved in less than ten years once we have a single logical qubit. Overall I am thus very confident that a 500-qubit quantum computer will exist in 20 years.\n\n\n\n\n\n---\n\n\n**Luke**: At the present moment, which groups seem most likely to play a significant role in the final development of an early large-scale (uncontroversial) quantum computer?\n\n\n\n\n---\n\n\n**Matthias:** There is quite a large number of groups that may be involved. The technologies that I see as most promising regarding scalability and quality of qubits are superconducting qubits, topological qubits and ion traps.\n\n\n\n\n---\n\n\n**Luke**: Now back to your co-authored paper “[Defining and detecting quantum speedup](http://arxiv.org/abs/1401.2910),” now published [in *Science*](http://www.sciencemag.org/content/345/6195/420.abstract). 
You and your coauthors “found no evidence of quantum speedup [in the D-Wave Two machine] when the entire data set is considered and obtained inconclusive results when comparing subsets of instances on an instance-by-instance basis.”\n\n\nDo people who have investigated a D-Wave machine at some length tend to think that D-Wave has a “true” quantum computer but hasn’t been able to conclusively show it yet, or do they tend to think D-Wave doesn’t yet have a true quantum computer?\n\n\n\n\n---\n\n\n**Matthias:** The answer to this controversial question depends on the definition of the words “quantum” and “computer”. Let’s first talk about “computer”. The D-Wave devices are special purpose devices, built for one particular purpose, namely the solution of discrete optimization problems. Since they solve a computational problem they may be called “computers”, but in contrast to your standard personal computers, which can perform many different tasks, they are not “universal computers” but special purpose computers. Nowadays many people automatically assume that a “universal computer” is meant when the term computer is mentioned, and one should thus explicitly state that the D-Wave devices are not universal computers but special purpose ones.\n\n\nThe D-Wave device thus is not, and nobody has ever claimed it to be, a universal quantum computer. While nobody would argue against the D-Wave device being called a computer, the question of it being “quantum” is more controversial. In a previous paper in Nature Physics we have presented evidence that the behavior of the D-Wave devices is consistent with that of a “quantum annealer” working at nonzero temperature, and another recent paper has shown that entanglement is present in the devices. The devices hence use quantum effects for computing. 
However, [another paper](http://arxiv.org/abs/1401.7087) has shown that the behavior of a quantum annealer for the problems on which we tested the device can also be described by a purely classical model. Some people thus argue that while the device may use quantum effects it might in the end be a classical device, just like the CPU in your PC, where the transistors also use quantum effects at some level.\n\n\nThe important question thus is whether the device can have quantum speedup, which means that it outperforms any classical device by a larger and larger ratio as the size of problems is increased. If that should be shown for some class of problems then nobody would argue about the quantum nature of the device. On the other hand, as long as its computational powers are never more than that of a classical computer, one can argue that from a computational point of view it is effectively a classical device even if it uses quantum effects to arrive at the answer.\n\n\nTo answer your question more concisely: If by a “true quantum computer” you mean a universal quantum computer, then D-Wave is not a true quantum computer and nobody has ever claimed that it is one. If you are content with it being a special purpose quantum computer, i.e. a “true quantum annealer”, then this question depends on what you mean by “true”: its behavior seems to be consistent with what we expect from a quantum annealer, but since we have so far not seen evidence of quantum speedup, i.e. that quantum effects help it outperform classical computers, one can argue that it may effectively be a classical device.\n\n\n\n\n---\n\n\n**Luke Muehlhauser**: Do you have an opinion about the impact that large-scale quantum computers (universal or not) are likely to have on AI and machine learning methods? E.g. [Aimeur et al. 
(2013)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Aimeur-et-al-Quantum-speed-up-for-unsupervised-learning.pdf) seems optimistic about speeding up some machine learning methods using variants of [Grover’s algorithm](http://en.wikipedia.org/wiki/Grover%27s_algorithm).\n\n\n\n\n---\n\n\n**Matthias Troyer:** I haven’t read that specific paper but my sense is that quantum machine learning will be most useful for learning about quantum data that comes out of quantum experiments. When applied to classical data, even quantum algorithms need to read the data first, and thus cannot do better than linear effort in the size of the data. With efficient polynomial-time algorithms existing for many machine learning problems, one will have to have better ideas than just speeding up classical machine learning algorithms on quantum hardware.\n\n\n\n\n---\n\n\n**Luke Muehlhauser**: Thanks, Matthias.\n\n\n \n\n\nThe post [Matthias Troyer on Quantum Computers](https://intelligence.org/2015/01/07/matthias-troyer-quantum-computers/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-08T06:23:27Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "cb84b7d9282fa829f2d15d8da4a78b0a", "title": "January 2015 Newsletter", "url": "https://intelligence.org/2015/01/01/january-2015-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\nThanks to the generosity of 80+ donors, we completed [our winter 2014 matching challenge](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/), raising $200,000 for our research program. 
Many, many thanks to all who contributed!\n\n\n**Research Updates**\n* Our major project of the past five months has been a new overview of our technical research agenda, plus six supporting papers which cover each research area in more detail. The overview report is [now available](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/), and so far we’ve released two of the supporting papers, on [corrigibility](https://intelligence.org/files/Corrigibility.pdf) and [decision theory](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf).\n* Two more reports and one paper: “[Computable probability distributions which converge…](https://intelligence.org/2014/12/16/new-report-computable-probability-distributions-converge/)“, “[Tiling agents in causal graphs](https://intelligence.org/2014/12/16/new-report-tiling-agents-causal-graphs/),” and “[Concept learning for safe autonomous AI](https://intelligence.org/2014/12/05/new-paper-concept-learning-safe-autonomous-ai/).”\n* A MIRI technical report from 2013, “Responses to catastrophic AGI risk: a survey,” has now been [published in *Physica Scripta*](http://iopscience.iop.org/1402-4896/90/1/018001/article).\n\n\n**News Updates**\n* Luke wrote a short analysis for the World Economic Forum’s blog: “[Two mistakes about the threat from artificial intelligence](https://agenda.weforum.org/2014/12/two-mistakes-about-the-threat-from-artificial-intelligence/).”\n* Our *Superintelligence* online reading group is in its 16th week, discussing [Tool AIs](http://lesswrong.com/lw/l9p/superintelligence_16_tool_ais/).\n\n\n**Other Updates**\n* Eric Horvitz has [provided initial funding](http://www.nytimes.com/2014/12/16/science/century-long-study-will-examine-effects-of-artificial-intelligence.html?_r=0) for a 100-year Stanford program to study the social impacts of artificial intelligence. 
The [white paper](https://stanford.app.box.com/s/266hrhww2l3gjoy9euar) lists 18 example research areas, two of which amount to what Nick Bostrom [calls](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/) the superintelligence control problem, MIRI’s research focus. No word yet on how soon anyone funded through this program will study open questions relevant to superintelligence control.\n\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [January 2015 Newsletter](https://intelligence.org/2015/01/01/january-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2015-01-02T04:00:24Z", "authors": ["Jake"], "summaries": []}
-{"id": "ed62ad98e752ddff71e94facdc20da7a", "title": "Our new technical research agenda overview", "url": "https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/", "source": "miri", "source_type": "blog", "text": "[![technical agenda overview](http://intelligence.org/wp-content/uploads/2014/12/technical-agenda-overview.png)](https://intelligence.org/files/TechnicalAgenda.pdf)Today we release a new overview of MIRI’s technical research agenda, “[Aligning Superintelligence with Human Interests: A Technical Research Agenda](https://intelligence.org/files/TechnicalAgenda.pdf),” by Nate Soares and Benja Fallenstein. The preferred place to discuss this report is [here](http://lesswrong.com/lw/lfc/miris_technical_research_agenda/).\n\n\nThe report begins:\n\n\n\n> The characteristic that has enabled humanity to shape the world is not strength, not speed, but intelligence. 
Barring catastrophe, it seems clear that progress in AI will one day lead to the creation of agents meeting or exceeding human-level general intelligence, and this will likely lead to the eventual development of systems which are “superintelligent” in the sense of being “smarter than the best human brains in practically every field” (Bostrom 2014)…\n> \n> \n> …In order to ensure that the development of smarter-than-human intelligence has a positive impact on humanity, we must meet three formidable challenges: How can we create an agent that will reliably pursue the goals it is given? How can we formally specify beneficial goals? And how can we ensure that this agent will assist and cooperate with its programmers as they improve its design, given that mistakes in the initial version are inevitable?\n> \n> \n> This agenda discusses technical research that is tractable today, which the authors think will make it easier to confront these three challenges in the future. Sections 2 through 4 motivate and discuss six research topics that we think are relevant to these challenges. Section 5 discusses our reasons for selecting these six areas in particular.\n> \n> \n> We call a smarter-than-human system that reliably pursues beneficial goals “aligned with human interests” or simply “aligned.” To become confident that an agent is aligned in this way, a practical implementation that merely seems to meet the challenges outlined above will not suffice. It is also necessary to gain a solid theoretical understanding of why that confidence is justified. This technical agenda argues that there is foundational research approachable today that will make it easier to develop aligned systems in the future, and describes ongoing work on some of these problems.\n> \n> \n\n\nThis report also refers to six key supporting papers which go into more detail for each major research problem area:\n\n\n1. [Corrigibility](https://intelligence.org/files/Corrigibility.pdf)\n2. 
[Toward idealized decision theory](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf)\n3. [Questions of reasoning under logical uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)\n4. [Vingean reflection: reliable reasoning for self-improving agents](https://intelligence.org/files/VingeanReflection.pdf)\n5. [Formalizing two problems of realistic world-models](https://intelligence.org/files/RealisticWorldModels.pdf)\n6. [The value learning problem](https://intelligence.org/files/ValueLearningProblem.pdf)\n\n\n\n\n---\n\n\n**Update July 15, 2016**: Our overview paper is scheduled to be released in the Springer anthology *The Technological Singularity: Managing the Journey* in 2017, under the new title “Agent Foundations for Aligning Machine Intelligence with Human Interests.” The new title is intended to help distinguish this agenda from another research agenda we’ll be working on in parallel with the agent foundations agenda: “[Value Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/05/04/announcing-a-new-research-program/).”\n\n\nThe post [Our new technical research agenda overview](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-23T23:06:33Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7d8778df330877cf09441777760d1ab3", "title": "2014 Winter Matching Challenge Completed!", "url": "https://intelligence.org/2014/12/18/2014-winter-matching-challenge-completed/", "source": "miri", "source_type": "blog", "text": "Wow! 
Thanks to the generosity of 75+ donors, today we successfully completed our [2014 Winter Matching Challenge](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/)—over 3 weeks ahead of our deadline—raising more than $200,000 total (with matching) for our [research program](http://intelligence.org/research/).\n\n\nMany, many thanks to everyone who contributed!\n\n\nThe post [2014 Winter Matching Challenge Completed!](https://intelligence.org/2014/12/18/2014-winter-matching-challenge-completed/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-19T02:00:14Z", "authors": ["Malo Bourgon"], "summaries": []} -{"id": "a462c0b6fe765cb685d34f592c51f35b", "title": "New report: “Computable probability distributions which converge…”", "url": "https://intelligence.org/2014/12/16/new-report-computable-probability-distributions-converge/", "source": "miri", "source_type": "blog", "text": "[![Computable probability distributions which converge](https://intelligence.org/wp-content/uploads/2014/12/Computable-probability-distributions-which-converge.png)](https://intelligence.org/files/Pi1Pi2Problem.pdf)Back in July 2013, [Will Sawin](https://web.math.princeton.edu/~wsawin/) (Princeton) and [Abram Demski](https://plus.google.com/+AbramDemski/posts) (USC) wrote a technical report describing a result from that month’s MIRI research workshop. We are finally releasing that report today. It is titled “[Computable probability distributions which converge on believing true Π1 sentences will disbelieve true Π2 sentences](https://intelligence.org/files/Pi1Pi2Problem.pdf).”\n\n\nAbstract:\n\n\n\n> It might seem reasonable that after seeing unboundedly many examples of a true Π1 statement that a rational agent ought to be able to become increasingly confident, converging toward probability 1, that this statement is true. 
However, we have proven that this, plus some plausible coherence properties, necessarily implies arbitrarily low limiting probabilities assigned to some short true Π2 statements.\n> \n> \n\n\nThe post [New report: “Computable probability distributions which converge…”](https://intelligence.org/2014/12/16/new-report-computable-probability-distributions-converge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-17T00:33:37Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5df8ca72f44a342c96f31263dc3fc0dc", "title": "New report: “Toward Idealized Decision Theory”", "url": "https://intelligence.org/2014/12/16/new-report-toward-idealized-decision-theory/", "source": "miri", "source_type": "blog", "text": "[![Toward Idealized](https://intelligence.org/wp-content/uploads/2014/12/Toward-Idealized.png)](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf)Today we release a new technical report by Nate Soares and Benja Fallenstein, “[Toward idealized decision theory](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf).” If you’d like to discuss the paper, please do so [here](http://lesswrong.com/lw/lef/new_paper_from_miri_toward_idealized_decision/).\n\n\nAbstract:\n\n\n\n> This paper motivates the study of decision theory as necessary for aligning smarter-than-human artificial systems with human interests. We discuss the shortcomings of two standard formulations of decision theory, and demonstrate that they cannot be used to describe an idealized decision procedure suitable for approximation by artificial systems. We then explore the notions of strategy selection and logical counterfactuals, two recent insights into decision theory that point the way toward promising paths for future research.\n> \n> \n\n\nThis is the 2nd of six new major reports which describe and motivate MIRI’s current research agenda at a high level. 
The first was our [Corrigibility](http://intelligence.org/2014/10/18/new-report-corrigibility/) paper, which was accepted to the [AI & Ethics workshop](http://www.cse.unsw.edu.au/~tw/aiethics/Introduction.html) at AAAI-2015. We will also soon be releasing a technical agenda overview document and an annotated bibliography for this emerging field of research.\n\n\nThe post [New report: “Toward Idealized Decision Theory”](https://intelligence.org/2014/12/16/new-report-toward-idealized-decision-theory/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-17T00:09:49Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f7d823676ae2fd10c2bb98c921fc3bf3", "title": "New report: “Tiling agents in causal graphs”", "url": "https://intelligence.org/2014/12/16/new-report-tiling-agents-causal-graphs/", "source": "miri", "source_type": "blog", "text": "[![TA in CG](https://intelligence.org/wp-content/uploads/2014/12/TA-in-CG.png)](https://intelligence.org/files/TilingAgentsCausalGraphs.pdf)Today we release a new technical report by Nate Soares, “[Tiling agents in causal graphs](https://intelligence.org/files/TilingAgentsCausalGraphs.pdf).”\n\n\nThe report begins:\n\n\n\n> Fallenstein and Soares [2014] demonstrates that it’s possible for certain types of proof-based agents to “tile” (license the construction of successor agents similar to themselves while avoiding Gödelian diagonalization issues) in environments about which the agent can prove some basic nice properties. In this technical report, we show via a similar proof that causal graphs (with a specific structure) are one such environment. 
We translate the proof given by Fallenstein and Soares [2014] into the language of causal graphs, and we do this in such a way as to simplify the conditions under which a tiling meliorizer can be constructed.\n> \n> \n\n\nThe post [New report: “Tiling agents in causal graphs”](https://intelligence.org/2014/12/16/new-report-tiling-agents-causal-graphs/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-16T15:17:57Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "3c3835fd5943af2bcf4bf74e95143f08", "title": "New paper: “Concept learning for safe autonomous AI”", "url": "https://intelligence.org/2014/12/05/new-paper-concept-learning-safe-autonomous-ai/", "source": "miri", "source_type": "blog", "text": "[![Concept learning](https://intelligence.org/wp-content/uploads/2014/12/Concept-learning.png)](https://intelligence.org/files/ConceptLearning.pdf)MIRI research associate Kaj Sotala has released a new paper, accepted to the [AI & Ethics workshop](http://www.cse.unsw.edu.au/~tw/aiethics/Site_2/Introduction.html) at AAAI-2015, titled “[Concept learning for safe autonomous AI](https://intelligence.org/files/ConceptLearning.pdf).”\n\n\nThe abstract reads:\n\n\n\n> Sophisticated autonomous AI may need to base its behavior on fuzzy concepts such as well-being or rights. These concepts cannot be given an explicit formal definition, but obtaining desired behavior still requires a way to instill the concepts in an AI system. To solve the problem, we review evidence suggesting that the human brain generates its concepts using a relatively limited set of rules and mechanisms. This suggests that it might be feasible to build AI systems that use similar criteria for generating their own concepts, and could thus learn similar concepts as humans do. 
Major challenges to this approach include the embodied nature of human thought, evolutionary vestiges in cognition, the social nature of concepts, and the need to compare conceptual representations between humans and AI systems.\n> \n> \n\n\nThe post [New paper: “Concept learning for safe autonomous AI”](https://intelligence.org/2014/12/05/new-paper-concept-learning-safe-autonomous-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-05T08:56:11Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a3bdfc224b61e7950239879793d8a6d8", "title": "December newsletter", "url": "https://intelligence.org/2014/12/01/december-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\n[MIRI’s winter fundraising challenge](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/) has begun! 
Every donation made to MIRI between now and January 10th will be matched dollar-for-dollar, up to a total of $100,000!\n\n \n[**Donate now**](https://intelligence.org/donate/) to double your impact while helping us raise up to $200,000 (with matching) to fund our research program.\n**Research Updates**\n* We’ve published [a new guide to MIRI’s research](https://intelligence.org/2014/11/06/new-guide-miris-research/).\n* [Three misconceptions in Edge.org’s conversation on “The Myth of AI.”](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/)\n* Video and more from Nick Bostrom’s *Superintelligence* talk at UC Berkeley is [now available](https://intelligence.org/2014/11/06/video-bostroms-talk-superintelligence-uc-berkeley/).\n* “Exploratory Engineering in AI” is [now available](https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/) without needing a *CACM* subscription.\n\n\n**News Updates**\n* If you’re reading *Superintelligence* or following [the online reading group](http://intelligence.org/2014/08/31/superintelligence-reading-group/), **[please take this short survey](https://docs.google.com/forms/d/1P53uNnZY_suE5IXW8WLQtXGuipAkUfQHj1yN_D121X0/viewform)** if you haven’t already.\n\n\n**Other Updates**\n* Our friends at the Center for Effective Altruism will pay you $1,000 if you introduce them to somebody new that they end up hiring for one of their [five open positions](http://lesswrong.com/lw/laf/the_centre_for_effective_altruism_is_hiring_to/).\n\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [December newsletter](https://intelligence.org/2014/12/01/december-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-02T07:00:24Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": 
"58894937b45f1640d80efd039eff0e44", "title": "2014 Winter Matching Challenge!", "url": "https://intelligence.org/2014/12/01/2014-winter-matching-challenge/", "source": "miri", "source_type": "blog", "text": "Update: **we have finished the matching challenge!** Thanks everyone! The original post is below.\n\n\n![Nate & Nisan](https://intelligence.org/wp-content/uploads/2014/07/Nate-Nisan.jpg)\nThanks to the generosity of Peter Thiel,[1](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/#footnote_0_11479 \"Peter Thiel has pledged $150,000 to MIRI unconditionally, and an additional $100,000 conditional on us being able to raise matched funds from other donors. Hence this year our winter matching challenge goal is $100,000. Another reason this year’s winter fundraiser is smaller than last year’s winter challenge is that we’ve done substantially more fundraising before December this year than we did before December last year.\") every donation made to MIRI between now and January 10th will be **matched dollar-for-dollar**, up to a total of $100,000!\n\n\n### We have reached our matching total of $100,000!\n\n\n### Total Donors: 83\n\n\nNow is your chance to **double your impact** while helping us raise up to $200,000 (with matching) to fund [our research program](http://intelligence.org/research/).\n\n\nCorporate matching and monthly giving pledges will count towards the total! Check [here](https://doublethedonation.com/miri) to see whether your employer will match your donation. Please email [malo@intelligence.org](mailto:malo@intelligence.org) if you intend to make use of corporate matching, or if you’d like to pledge 6 months of monthly donations, so that we can properly account for your contributions. 
**If making use of corporate matching, make sure to donate before the end of the year** so that you don’t unnecessarily “leave free money on the table” from your employer!\n\n\nIf you’re unfamiliar with our mission, see: [Why MIRI?](http://intelligence.org/2014/04/20/why-miri/)\n\n\n\n\n [Donate Now](https://intelligence.org/donate/#donation-methods)\n----------------------------------------------------------------\n\n\n\n\n \n\n\n![](https://intelligence.org/wp-content/uploads/2013/12/workshop-horizontal-3.jpg)\n### Accomplishments Since Our Summer 2014 Fundraiser Launched:\n\n\n* **2 new papers and 1 new technical report**: “[Exploratory Engineering in AI](https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/),” “[Corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/),” and “[UDT with known search order](https://intelligence.org/2014/10/30/new-report-udt-known-search-order/).” Also, several new reports we’ve been working on should be released this month, including an overview of our technical agenda so far.\n* **4 new analyses**: “[Groundwork for AGI safety engineering](https://intelligence.org/2014/08/04/groundwork-ai-safety-engineering/),” “[AGI outcomes and civilizational competence](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/),” “[The *Financial Times* story on MIRI](https://intelligence.org/2014/10/31/financial-times-story-miri/),” and “[Three misconceptions in Edge.org’s conversation on ‘The Myth of AI’](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/).”\n* Released **[a new guide to MIRI’s research](https://intelligence.org/2014/11/06/new-guide-miris-research/)**.\n* Sponsored **[13 active MIRIx groups](http://intelligence.org/mirix/)** in 5 different countries.\n* Hosted 12 weeks of discussion in our ongoing *[Superintelligence](http://lesswrong.com/lw/kw4/superintelligence_reading_group/)* [reading 
group](http://lesswrong.com/lw/kw4/superintelligence_reading_group/).\n* Hosted a [Nick Bostrom talk](https://intelligence.org/2014/11/06/video-bostroms-talk-superintelligence-uc-berkeley/) at UC Berkeley on *Superintelligence* — a packed house!\n* Nate Soares [gave a talk](https://intelligence.org/2014/10/07/nate-soares-talk-aint-rich/) on decision theory at Purdue University.\n* Participated in Effective Altruism Summit 2014, and [posted our talks online](https://intelligence.org/2014/08/11/miris-recent-effective-altruism-talks/).\n* 5 new [expert interviews](https://intelligence.org/category/conversations/), including [John Fox on AI safety](https://intelligence.org/2014/09/04/john-fox/).\n* Set up a program to provide [Friendly AI research help](https://intelligence.org/2014/09/08/friendly-ai-research-help-miri/).\n\n\n\n\n [Donate Now](https://intelligence.org/donate/#donation-methods)\n----------------------------------------------------------------\n\n\n\n\n### Your Donations Will Support:\n\n\n* As mentioned above, we’re finishing up several more papers and technical reports, including an overview of our technical agenda so far.\n* We’re preparing the launch of an invite-only discussion forum devoted exclusively to technical FAI research. Beta users (who are also FAI researchers) have already posted more than a dozen technical discussions to the beta website. These will be available for all to see once the site launches publicly.\n* We continue to grow the MIRIx program, mostly to enlarge the pool of people we can plausibly hire as full-time FAI researchers in the next couple years.\n* We’re planning, or helping to plan, multiple research workshops, including the [May 2015 decision theory workshop at Cambridge University](http://intelligence.org/2014/07/12/may-2015-decision-theory-workshop-cambridge/).\n* We continue to host visiting researchers. 
For example, in January we’re hosting Patrick LaVictoire and Matt Elder for multiple weeks.\n\n\n* We’re finishing up several more strategic analyses, on AI safety and on the challenges of preparing wisely for disruptive technological change in general.\n* We’re finishing the editing for a book version of Eliezer’s *[Sequences](http://wiki.lesswrong.com/wiki/Sequences)*.\n* We’re helping to fund further [SPARC](http://sparc-camp.org/) programs, which provide education and skill-building to elite young math talent, and introduce them to ideas like effective altruism and global catastrophic risks.\n\n\nOther projects are being surveyed for likely cost and impact. See also our [mid-2014 strategic plan](http://intelligence.org/2014/06/11/mid-2014-strategic-plan/).\n\n\nWe appreciate your support for our work! **[Donate now](https://intelligence.org/donate/#donation-methods)**, and seize a better-than-usual opportunity to move our work forward.\n\n\nIf you have questions about donating, please contact me (Luke Muehlhauser) at luke@intelligence.org.[2](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/#footnote_1_11479 \"In particular, we expect that many of our donors holding views aligned with key ideas of effective altruism may want to know not just that donating to MIRI now will do some good but that donating to MIRI now will plausibly do more good than donating elsewhere would do (on the present margin, given the individual donor’s altruistic priorities and their model of the world). Detailed comparisons are beyond the scope of this announcement, but I have set aside time in my schedule to take phone calls with donors who would like to discuss such issues in detail, and I encourage you to email me to schedule such a call if you’d like to. (Also, I don’t have many natural opportunities to chat with most MIRI donors anyway, and I’d like to be doing more of it, so please don’t hesitate to email me and schedule a call!) \")\n\n\n\n\n---\n\n1. 
Peter Thiel has pledged $150,000 to MIRI unconditionally, and an additional $100,000 conditional on us being able to raise matched funds from other donors. Hence this year our winter matching challenge goal is $100,000. Another reason this year’s winter fundraiser is smaller than last year’s winter challenge is that we’ve done substantially more fundraising before December this year than we did before December last year.\n2. In particular, we expect that many of our donors holding views aligned with key ideas of [effective altruism](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) may want to know not just that donating to MIRI now will do *some* good but that donating to MIRI now will plausibly do *more* good than donating elsewhere would do (on the present margin, given the individual donor’s altruistic priorities and their model of the world). Detailed comparisons are beyond the scope of this announcement, but I have set aside time in my schedule to take phone calls with donors who would like to discuss such issues in detail, and I encourage you to email me to schedule such a call if you’d like to. 
(Also, I don’t have many natural opportunities to chat with most MIRI donors anyway, and I’d like to be doing more of it, so please don’t hesitate to email me and schedule a call!)\n\nThe post [2014 Winter Matching Challenge!](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-12-02T03:16:17Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "2357ae7cb819be6885d89a10c54858bb", "title": "Three misconceptions in Edge.org’s conversation on “The Myth of AI”", "url": "https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/", "source": "miri", "source_type": "blog", "text": "A recent Edge.org conversation — “[The Myth of AI](http://edge.org/conversation/the-myth-of-ai)” — is framed in part as a discussion of points raised in Bostrom’s *[Superintelligence](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/)*, and as a response to much-repeated comments by Elon Musk and Stephen Hawking that seem to have been heavily informed by *Superintelligence*.\n\n\nUnfortunately, some of the participants fall prey to common misconceptions about the standard case for AI as an existential risk, and they probably haven’t had time to read *Superintelligence* yet.\n\n\nOf course, some of the participants may be responding to arguments they’ve heard from others, even if they’re not part of the arguments typically made by [FHI](http://www.fhi.ox.ac.uk/) and MIRI. 
Still, for simplicity I’ll reply from the perspective of the typical arguments made by FHI and MIRI.[1](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/#footnote_0_11468 \"I could have also objected to claims and arguments made in the conversation, for example Lanier’s claim that “The AI component would be only ambiguously there and of little importance [relative to the actuators component].” To me, this is like saying that humans rule the planet because of our actuators, not because of our superior intelligence. Or in response to Kevin Kelly’s claim that “So far as I can tell, AIs have not yet made a decision that its human creators have regretted,” I can for example point to the automated trading algorithms that nearly bankrupted Knight Capital faster than any human could react. But in this piece I will focus instead on claims that seem to be misunderstandings of the positive case that’s being made for AI as an existential risk.\")\n\n\n \n\n\n**1. We don’t think AI progress is “exponential,” nor that human-level AI is likely ~20 years away.**\n\n\nLee Smolin writes:\n\n\n\n> I am puzzled by the arguments put forward by those who say we should worry about a coming AI, singularity, because all they seem to offer is a prediction based on Moore’s law.\n> \n> \n\n\nThat’s not the argument made by FHI, MIRI, or *Superintelligence*.\n\n\nSome IT hardware and [software](https://intelligence.org/files/AlgorithmicProgress.pdf) domains have shown exponential progress, and [some have not](http://intelligence.org/2014/05/12/exponential-and-non-exponential/). Likewise, some AI subdomains have shown rapid progress of late, and some have not. 
And unlike computer chess, most AI subdomains don’t lend themselves to easy measures of progress, so for most AI subdomains we don’t even have meaningful subdomain-wide performance data through which one might draw an exponential curve (or some other curve).\n\n\nThus, our confidence intervals for the arrival of human-equivalent AI tend to be very wide, and [the arguments we make](http://intelligence.org/2013/05/15/when-will-ai-be-created/) for our AI timelines are fox-ish (in [Tetlock’s sense](http://edge.org/conversation/how-to-win-at-forecasting)).\n\n\nI should also mention that — contrary to common belief — many of us at FHI and MIRI, including myself and Bostrom, actually [have](https://intelligence.org/2014/10/31/financial-times-story-miri/) *later* timelines for human-equivalent AI than do the world’s top-cited living AI scientists:\n\n\n\n> A [recent survey](http://www.sophia.de/pdf/2014_PT-AI_polls.pdf) asked the world’s top-cited living AI scientists by what year they’d assign a 10% / 50% / 90% chance of human-level AI (*aka* [AGI](http://intelligence.org/2013/08/11/what-is-agi/)), assuming scientific progress isn’t massively disrupted. The median reply for a 10% chance of AGI was 2024, for a 50% chance of AGI it was 2050, and for a 90% chance of AGI it was 2070. So while AI scientists think it’s possible we might get AGI soon, they largely expect AGI to be an issue for the second half of this century.\n> \n> \n\n\nCompared to AI scientists, Bostrom and I think more probability should be placed on later years. 
As explained [elsewhere](https://intelligence.org/2014/10/31/financial-times-story-miri/):\n\n\n\n> We advocate more work on the AGI safety challenge today not because we think AGI is likely in the next decade or two, but because AGI safety looks to be an [extremely difficult challenge](http://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/) — more challenging than managing climate change, for example — and one requiring several decades of careful preparation.\n> \n> \n> The greatest risks from both climate change and AI are several decades away, but thousands of smart researchers and policy-makers are already working to understand and mitigate climate change, and only a handful are working on the safety challenges of advanced AI. On the present margin, we should have much less top-flight cognitive talent going into climate change mitigation, and much more going into AGI safety research.\n> \n> \n\n\n \n\n\n**2. We don’t think AIs will *want* to wipe us out. Rather, we worry they’ll wipe us out because that *is* the most effective way to satisfy almost any possible goal function one could have.**\n\n\nSteven Pinker, who incidentally is the author of two of my all-time [favorite](http://smile.amazon.com/Better-Angels-Our-Nature-Violence-ebook/dp/B0052REUW0/) [books](http://smile.amazon.com/Blank-Slate-Modern-Denial-Nature-ebook/dp/B000QCTNIM/), writes:\n\n\n\n> [one] problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. 
History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems.\n> \n> \n\n\nI’m glad Pinker agrees with what Bostrom calls “the orthogonality thesis”: that intelligence and goals are orthogonal to each other.\n\n\nBut our concern is not that superhuman AIs would be megalomaniacal despots. That is anthropomorphism.\n\n\nRather, the problem is that taking over the world is a *really good idea* for almost *any* goal function a superhuman AI could have. As Yudkowsky [wrote](https://intelligence.org/files/AIPosNegFactor.pdf), “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”\n\n\nMaybe it just wants to calculate as many digits of pi as possible. Well, the best way to do that is to turn all available resources into computation for calculating more digits of pi, and to eliminate potential threats to its continued calculation, for example those pesky humans that seem capable of making disruptive things like nuclear bombs and powerful AIs. The same logic applies for almost any goal function you can specify. (“But what if it’s a non-maximizing goal? And won’t it be smart enough to realize that the goal we gave it wasn’t what we intended if it means the AI wipes us out to achieve it?” Responses to these and other common objections are given in *Superintelligence*, ch. 8.)\n\n\n \n\n\n \n\n\n**3. AI self-improvement and protection against external modification isn’t just one of many scenarios. Like resource acquisition, self-improvement and protection against external modification are useful for the satisfaction of almost any final goal function.**\n\n\nKevin Kelly writes:\n\n\n\n> The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. 
This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI.\n> \n> \n\n\nAs argued above (and more extensively in *Superintelligence*, ch. 7), *resource acquisition* is a “convergent instrumental goal.” That is, advanced AI agents will be instrumentally motivated to acquire as many resources as feasible, because additional resources are useful for just about any goal function one could have.\n\n\n*Self-improvement* is another convergent instrumental goal. For just about any goal an AI could have, it’ll be better able to achieve that goal if it’s more capable of goal achievement in general.\n\n\nAnother convergent instrumental goal is *goal content integrity*. As Bostrom puts it, “An agent is more likely to act in the future to maximize the realization of its present final goals if it still has those goals in the future.” Thus, it will be instrumentally motivated to prevent external modification of its goals, or of parts of its program that affect its ability to achieve its goals.[2](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/#footnote_1_11468 \"That is, unless it strongly trusts the agent making the external modification, and expects it to do a better job of making those modifications than it could itself, neither of which will be true of humans from the superhuman AI’s perspective.\")\n\n\nFor more on this, see *Superintelligence* ch. 7.\n\n\n \n\n\n**Conclusion**\n\n\nI’ll conclude with the paragraph in the discussion I most agreed with, by Pamela McCorduck:\n\n\n\n> Yes, the machines are getting smarter—we’re working hard to achieve that. I agree with Nick Bostrom that the process must call upon our own deepest intelligence, so that we enjoy the benefits, which are real, without succumbing to the perils, which are just as real. 
Working out the ethics of what smart machines should, or should not do—looking after the frail elderly, or deciding whom to kill on the battlefield—won’t be settled by fast thinking, snap judgments, no matter how heartfelt. This will be a slow inquiry, calling on ethicists, jurists, computer scientists, philosophers, and many others. As with all ethical issues, stances will be provisional, evolve, be subject to revision. I’m glad to say that for the past five years the Association for the Advancement of Artificial Intelligence has formally addressed these ethical issues in detail, with a series of panels, and plans are underway to expand the effort. As Bostrom says, this is the essential task of our century.\n> \n> \n\n\n \n\n\n*Update*: Stuart Russell of UC Berkeley has now added [a nice reply](http://edge.org/conversation/the-myth-of-ai#26015) to the edge.org conversation which echoes some of the points I made above.\n\n\n\n\n---\n\n1. I could have also objected to claims and arguments made in the conversation, for example Lanier’s claim that “The AI component would be only ambiguously there and of little importance [relative to the actuators component].” To me, this is like saying that humans rule the planet because of our actuators, not because of our superior intelligence. Or in response to Kevin Kelly’s claim that “So far as I can tell, AIs have not yet made a decision that its human creators have regretted,” I can for example point to the automated trading algorithms that [nearly bankrupted Knight Capital](http://www.reuters.com/article/2012/08/01/us-usa-nyse-tradinghalts-idUSBRE8701BN20120801) [faster](http://www.nature.com/srep/2013/130911/srep02627/full/srep02627.html) than any human could react. But in this piece I will focus instead on claims that seem to be misunderstandings of the positive case that’s being made for AI as an existential risk.\n2. 
That is, unless it strongly trusts the agent making the external modification, and expects it to do a better job of making those modifications than it could itself, neither of which will be true of humans from the superhuman AI’s perspective.\n\nThe post [Three misconceptions in Edge.org’s conversation on “The Myth of AI”](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-11-18T10:31:13Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "bfd635b3627bf2c3baf4dbd85157c0a5", "title": "Video of Bostrom’s talk on Superintelligence at UC Berkeley", "url": "https://intelligence.org/2014/11/06/video-bostroms-talk-superintelligence-uc-berkeley/", "source": "miri", "source_type": "blog", "text": "In September, MIRI [hosted](http://intelligence.org/2014/07/25/bostrom/) Nick Bostrom at UC Berkeley to discuss his new book [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/). 
A video and transcript of that talk are [now available](http://www.c-span.org/video/?321534-1/book-discussion-superintelligence) from *BookTV* by C-SPAN, which also has a DVD of the event available.\n\n\n**Update:** Nick Bostrom has also made his [slides for the talk](https://intelligence.org/wp-content/uploads/2014/11/Bostrom-Superintelligence-Berkeley-talk.pptx) available.\n\n\n[![Bostrom Berkeley talk](https://intelligence.org/wp-content/uploads/2014/11/Bostrom-Berkeley-talk.png)](http://www.c-span.org/video/?321534-1/book-discussion-superintelligence)\nThe post [Video of Bostrom’s talk on Superintelligence at UC Berkeley](https://intelligence.org/2014/11/06/video-bostroms-talk-superintelligence-uc-berkeley/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-11-07T00:51:33Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "c844395d40e15c228008ae04bb5ec5c0", "title": "A new guide to MIRI’s research", "url": "https://intelligence.org/2014/11/06/new-guide-miris-research/", "source": "miri", "source_type": "blog", "text": "[![Guide to MIRI's Research](https://intelligence.org/wp-content/uploads/2014/11/Guide-to-MIRIs-Research.png)](http://intelligence.org/research-guide/)Nate Soares has written “[A Guide to MIRI’s Research](http://intelligence.org/research-guide/),” which outlines the main thrusts of MIRI’s current research agenda and provides recommendations for which textbooks and papers to study so as to understand what’s happening at the cutting edge.\n\n\nThis guide replaces Louie Helm’s earlier “Recommended Courses for MIRI Math Researchers,” and will be updated regularly as new lines of research open up, and as new papers and reports are released. It is not a replacement for our upcoming technical report on MIRI’s current research agenda and its supporting papers, which are still in progress. 
(“[Corrigibility](https://intelligence.org/files/CorrigibilityTR.pdf)” is the first supporting paper we’ve released for that project.)\n\n\nThe post [A new guide to MIRI’s research](https://intelligence.org/2014/11/06/new-guide-miris-research/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-11-06T22:23:53Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "3100b0d03198a78056afdd3782688f92", "title": "MIRI’s November Newsletter", "url": "https://intelligence.org/2014/11/01/miris-november-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\n**Research Updates**\n* New Friendly AI research area: “[Corrigibility](http://intelligence.org/2014/10/18/new-report-corrigibility/).”\n* New report: “[UDT with known search order](http://intelligence.org/2014/10/30/new-report-udt-known-search-order/).”\n* 2 new analyses: “[AGI outcomes and civilizational competence](http://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/)” and “[The *Financial Times* story on MIRI](http://intelligence.org/2014/10/31/financial-times-story-miri/).”\n* [Video](http://intelligence.org/2014/10/07/nate-soares-talk-aint-rich/) of Nate Soares’ decision theory talk at Purdue.\n\n\n**News Updates**\n* If you’re reading *Superintelligence* or following its online reading group, [**please take this short survey**](https://docs.google.com/forms/d/1P53uNnZY_suE5IXW8WLQtXGuipAkUfQHj1yN_D121X0/viewform).\n\n\n**Other Updates**\n* Elon Musk has been talking about AGI risk [quite a bit lately](http://www.computerworld.com/article/2840815/ai-researchers-say-elon-musks-fears-not-completely-crazy.html).
He has also joined the external advisory boards of [CSER](http://cser.org/about/who-we-are/) and [FLI](http://www.futureoflife.org/who), two existential risk organizations with whom we work closely.\n* Effective altruists may want to register for the [EA Donation Registry](http://lesswrong.com/lw/l56/introducing_an_ea_donation_registry_covering/).\n\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [MIRI’s November Newsletter](https://intelligence.org/2014/11/01/miris-november-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-11-02T03:00:52Z", "authors": ["Jake"], "summaries": []}
-{"id": "9e0de645e0c8ec0ecd1f1bcc385504a6", "title": "The Financial Times story on MIRI", "url": "https://intelligence.org/2014/10/31/financial-times-story-miri/", "source": "miri", "source_type": "blog", "text": "Richard Waters wrote a [story](http://www.ft.com/intl/cms/s/2/abc942cc-5fb3-11e4-8c27-00144feabdc0.html) on MIRI and others for *Financial Times*, which also put Nick Bostrom’s *Superintelligence* at the top of its [summer science reading list](http://www.ft.com/intl/cms/s/2/31a97c56-fdab-11e3-96a9-00144feab7de.html).\n\n\nIt’s a good piece. Go read it and then come back here so I can make a few clarifications.\n\n\n \n\n\n**1. Smarter-than-human AI probably isn’t coming “soon.”**\n\n\n“Computers will soon become more intelligent than us,” the story begins, but few experts I know think this is likely.\n\n\nA [recent survey](http://www.sophia.de/pdf/2014_PT-AI_polls.pdf) asked the world’s top-cited living AI scientists by what year they’d assign a 10% / 50% / 90% chance of human-level AI (*aka* [AGI](http://intelligence.org/2013/08/11/what-is-agi/)), assuming scientific progress isn’t massively disrupted.
The median reply for a 10% chance of AGI was 2024, for a 50% chance of AGI it was 2050, and for a 90% chance of AGI it was 2070. So while AI scientists think it’s *possible* we might get AGI soon, they largely expect AGI to be an issue for the *second* half of this century.\n\n\nMoreover, many of those who specialize in thinking about AGI safety actually think AGI is *further* away than the top-cited AI scientists do. For example, relative to the surveyed AI scientists, Nick Bostrom and I both think more probability should be placed on later years. We advocate more work on the AGI safety challenge today not because we think AGI is likely in the next decade or two, but because AGI safety looks to be an [extremely difficult challenge](http://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/) — more challenging than managing climate change, for example — and one requiring several decades of careful preparation.\n\n\nThe greatest risks from both climate change and AI are several decades away, but thousands of smart researchers and policy-makers are already working to understand and mitigate climate change, and only a handful are working on the safety challenges of advanced AI. On the present margin, we should have much less top-flight cognitive talent going into climate change mitigation, and much more going into AGI safety research.\n\n\n\n \n\n\n**2. 
How many people are working to make sure AGI is friendly to humans?**\n\n\nThe *FT* piece cites me as saying there are only five people in the world “working on how to [program] the super-smart machines of the not-too-distant future to make sure AI remains friendly.” I did say something *kind of* like this, but it requires clarification.\n\n\nWhat I mean is that “When you add up fractions of people, there are about five people (that I know of) *explicitly* doing technical research on the problem of how to ensure that a smarter-than-human AI has a positive impact even as it radically improves itself.”\n\n\nThese fractions of people are: (a) most of the full-time labor of Eliezer Yudkowsky, Benja Fallenstein, Nate Soares (all at MIRI), and Stuart Armstrong (Oxford), plus (b) much smaller fractions of people who do technical research on “Friendly AI” on the side, for example MIRI’s (unpaid) [research associates](http://intelligence.org/team/#associates).\n\n\nOf course, there are many, many more researchers than this doing (a) non-technical work on AGI safety, or doing (b) technical work on AI safety for extant or near-future systems, or doing (c) occasional technical work on AGI safety done with very different conceptions of “positive impact” or “radically improves itself” than I have.\n\n\n \n\n\n**3. An AGI wouldn’t necessarily see humans as “mere” collections of matter.**\n\n\nThe article cites me as arguing that “In their single-mindedness, [AGIs] would view their biological creators as mere collections of matter, waiting to be reprocessed into something they find more useful.”\n\n\nAGIs would likely have pretty accurate — and ever-improving — models of reality (e.g. via Wikipedia and millions of scientific papers), so they wouldn’t see humans as “mere” collections of matter any more than *I* do. Sure, humans *are* collections of matter, but we’re pretty special as collections of matter go. 
Unlike most collections of matter, we have general-purpose intelligence and consciousness and technological creativity and desires and aversions and hopes and fears and so on, and an AGI would know all that, and it would know that rocks and buildings and plants and monkeys and self-driving cars *don’t* have all those properties.\n\n\nThe point I wanted to make is that if a self-improving AGI was (say) programmed to maximize Shell’s stock price, then it would *know* all this about humans, and then it would just go on maximizing Shell’s stock price. It just happens to be the case that the best way to maximize Shell’s stock price is to take over the world and eliminate all potential threats to one’s achievement of that goal. In fact, for just about *any* goal function an AGI could have, it’s a really good idea to take over the world. *That* is the problem.\n\n\nEven if we could program a self-improving AGI to (say) “maximize human happiness,” then the AGI would “care about humans” in a certain sense, but it might learn that (say) the most efficient way to “maximize human happiness” in the way we specified is to take over the world and then put each of us in a padded cell with a heroin drip. AGI presents us with [the old problem of the all-too-literal genie](https://intelligence.org/files/IE-ME.pdf): you get what you *actually asked for*, not what you *wanted*.\n\n\nAnd [yes](http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/), the AGI would be smart enough to *know* this wasn’t what we really wanted, especially when we start complaining about the padded cells. But we didn’t program it to do what we want. We programmed it to “maximize human happiness.”\n\n\nThe trouble is that “what we really want” is very hard to specify in computer code. Twenty centuries of philosophers haven’t even managed to specify it in less-exacting *human* languages.\n\n\n \n\n\n**4. 
“Toying with the intelligence of the gods.”**\n\n\nFinally, the article quotes me as saying “We’re toying with the intelligence of the gods. And there isn’t an off switch.”\n\n\nI shouldn’t complain about Mr. Waters making me sound so eloquent, but I’m pretty sure I never said anything so succinct and quotable. 🙂\n\n\nAnd of course, there *is* an off switch today, but there probably *won’t* be an off switch for an AGI smart enough to remove its shutdown mechanism (so as to more assuredly achieve its programmed goals) and copy itself across the internet — unless, that is, we solve the technical problem we call “[corrigibility](http://intelligence.org/2014/10/18/new-report-corrigibility/).”\n\n\nThe post [The Financial Times story on MIRI](https://intelligence.org/2014/10/31/financial-times-story-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-10-31T22:32:47Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "68059069315bb0da8810642b77f8bbf9", "title": "New report: “UDT with known search order”", "url": "https://intelligence.org/2014/10/30/new-report-udt-known-search-order/", "source": "miri", "source_type": "blog", "text": "[![UDT with known search order](https://intelligence.org/wp-content/uploads/2014/10/UDT-with-known-search-order.png)](https://intelligence.org/files/UDTSearchOrder.pdf)Today we release a new technical report from MIRI research associate Tsvi Benson-Tilsen: “[UDT with known search order](https://intelligence.org/files/UDTSearchOrder.pdf).” Abstract:\n\n\n\n> We consider logical agents in a predictable universe running a variant of updateless decision theory. We give an algorithm to predict the behavior of such agents in the special case where the order in which they search for proofs is simple, and where they know this order.
As a corollary, “playing chicken with the universe” by diagonalizing against potential spurious proofs is the only way to guarantee optimal behavior for this class of simple agents.\n> \n> \n\n\nThe post [New report: “UDT with known search order”](https://intelligence.org/2014/10/30/new-report-udt-known-search-order/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-10-30T22:30:51Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f40f4921b10a4bc43f1e5973f6941f9a", "title": "Singularity2014.com appears to be a fake", "url": "https://intelligence.org/2014/10/27/singularity2014-fake/", "source": "miri", "source_type": "blog", "text": "Earlier today I was alerted to the existence of Singularity2014.com ([archived screenshot](http://intelligence.org/wp-content/uploads/2014/10/Singularity-2014-with-note.png)). MIRI has nothing to do with that website and we believe it is a fake.\n\n\nThe website claims there is a “Singularity 2014″ conference “in the Bay Area” on “November 9, 2014.” **We believe that there is no such event.** No venue is listed, tickets are supposedly sold out already, and there are no links to further information. The three listed speakers are unknown to us, and their supposed photos are stock photos ([1](http://www.thinkstockphotos.ca/image/stock-photo-portrait-of-man/86533557), [2](http://www.gettyimages.com/detail/photo/man-at-desk-listens-royalty-free-image/83313293), [3](http://www.mediabakery.com/BXP0045094-Portrait-of-Woman-Wearing-Eyeglasses.html)). The website prominently features an image of Ray Kurzweil, but Ray Kurzweil’s press staff confirms that he has nothing to do with this event. 
The website also features childish insults and a spelling error.\n\n\nThe website claims the event is “staged and produced by former organizers of the Singularity Summit from the Machine Intelligence Research Institute,” and that “All profits benefit the Machine Intelligence Research Institute,” but MIRI has nothing to do with this supposed event.\n\n\nThe Singularity2014.com domain name was [registered](http://whois.domaintools.com/singularity2014.com) via eNom reseller NameCheap.com on September 15th, 2014 by someone other than us, and is associated with a P.O. Box in Panama.\n\n\nMIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.\n\n\nThe next Singularity Summit will be organized primarily by Singularity University; for more information see [here](http://exponential.singularityu.org/).\n\n\n \n\n\n**Update:**The website has been taken down.\n\n\nThe post [Singularity2014.com appears to be a fake](https://intelligence.org/2014/10/27/singularity2014-fake/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-10-28T02:59:41Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "cb6f5c980c187c20ad83bc8ef7aaf62e", "title": "New paper: “Corrigibility”", "url": "https://intelligence.org/2014/10/18/new-report-corrigibility/", "source": "miri", "source_type": "blog", "text": "[![Corrigibility](https://intelligence.org/wp-content/uploads/2014/10/Corrigibility.png)](https://intelligence.org/files/CorrigibilityTR.pdf)Today we release a paper describing a new problem area in Friendly AI research we call *corrigibility*. 
The report ([PDF](https://intelligence.org/files/CorrigibilityTR.pdf)) is co-authored by MIRI’s Friendly AI research team (Eliezer Yudkowsky, Benja Fallenstein, Nate Soares) and also Stuart Armstrong from the Future of Humanity Institute at Oxford University.\n\n\nThe abstract reads:\n\n\n\n> As artificially intelligent systems grow in intelligence and capability, some of their available options may allow them to resist intervention by their programmers. We call an AI system “corrigible” if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences. We introduce the notion of corrigibility and analyze utility functions that attempt to make an agent shut down safely if a shutdown button is pressed, while avoiding incentives to prevent the button from being pressed or cause the button to be pressed, and while ensuring propagation of the shutdown behavior as it creates new subsystems or self-modifies. 
While some proposals are interesting, none have yet been demonstrated to satisfy all of our intuitive desiderata, leaving this simple problem in corrigibility wide-open.\n> \n> \n\n\nThis paper was accepted to the [AI & Ethics workshop](http://www.cse.unsw.edu.au/~tw/aiethics/Introduction.html) at AAAI-2015.\n\n\n**Update:** The slides for Nate Soares’ presentation at AAAI-15 are available [here](https://intelligence.org/wp-content/uploads/2015/01/AAAI-15-corrigibility-slides.pdf).\n\n\nThe post [New paper: “Corrigibility”](https://intelligence.org/2014/10/18/new-report-corrigibility/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-10-19T00:14:19Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "21f5cf6a56acb9a15c583305839aa7c2", "title": "AGI outcomes and civilizational competence", "url": "https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/", "source": "miri", "source_type": "blog", "text": "![David Victor](https://intelligence.org/wp-content/uploads/2014/10/David-Victor.jpg)\n\n> The [latest IPCC] report says, “If you put into place all these technologies and international agreements, we could still stop warming at [just] 2 degrees.” My own assessment is that the kinds of actions you’d need to do that are so heroic that we’re not going to see them on this planet.\n> \n> \n\n\n—David Victor,[1](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_0_11415 \" Quote taken from the Radiolab episode titled “In the Dust of This Planet.”\") professor of international relations at UCSD\n\n\n \n\n\nA while back I attended a meeting of “movers and shakers” from science, technology, finance, and politics. We were discussing our favorite Big Ideas for improving the world. One person’s Big Idea was to copy best practices between nations. 
For example when it’s shown that nations can [dramatically improve organ donation rates](http://www.google.com/url?q=http%3A%2F%2Fwebs.wofford.edu%2Fpechwj%2FDo%2520Defaults%2520Save%2520Lives.pdf&sa=D&sntz=1&usg=AFQjCNFf4hsZC8q4EPe3vGusF_cHCO1kOw) by using opt-out rather than opt-in programs, other countries should just copy that solution.\n\n\nEveryone thought this was a boring suggestion, because it was *obviously* a good idea, and there was no debate to be had. Of course, they all agreed it was also *impossible* and *could never be established as standard-practice*. So we moved on to another Big Idea that was more tractable.\n\n\nLater, at a meeting with a similar group of people, I told some economists that their recommendations on a certain issue were “straightforward econ 101,” and I didn’t have any objections to share. Instead, I asked, “But how can we get policy-makers to implement econ 101 solutions?” The economists laughed and said, “Well, yeah, we have no idea. We probably can’t.”\n\n\nHow do I put this? This is not a civilization that should be playing with self-improving AGIs.[2](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_1_11415 \"In Superintelligence, Bostrom made the point this way (p. 259):\nBefore the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct… For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible… Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the entire firmament. 
Nor is there a grown-up in sight.\n\") \n\n \n\nThe [backhoe](http://en.wikipedia.org/wiki/Backhoe) is a powerful, labor-saving invention, but I wouldn’t put a two-year-old in the driver’s seat. That’s roughly how I feel about letting 21st century humans wield something as powerful as [self-improving AGI](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/).[3](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_2_11415 \" By “AGI” I mean a computer system that could pass something like Nilsson’s employment test (see What is AGI?). By “self-improving AGI” I mean an AGI that improves its own capabilities via its own original computer science and robotics research (and not solely by, say, gathering more data about the world or acquiring more computational resources). By “its own capabilities” I mean to include the capabilities of successor systems that the AGI itself creates to further its goals. In this article I typically mean “AGI” and “self-improving AGI” interchangeably, not because all AGIs will necessarily be self-improving in a strong sense, but because I expect that even if the first AGIs are not self-improving for some reason, self-improving AGIs will follow in a matter of decades if not sooner. From a cosmological perspective, such a delay is but a blink.\") I wish we had more time to grow up first. I think the kind of actions we’d need to handle self-improving AGI successfully “are so heroic that we’re not going to see them on this planet,” at least not anytime soon.[4](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_3_11415 \"I purposely haven’t pinned down exactly what about our civilization seems inadequate to meet the challenge of AGI control; David Victor made the same choice when he made his comment about civilizational competence in the face of climate change. 
I think our civilizational competence is insufficient for the challenge for many reasons, but I also have varying degrees of uncertainty about each of those reasons and which parts of the problem they apply to, and those details are difficult to express.\")\n\n\nBut I suspect we won’t all resist the temptation to build AGI for long, and neither do most top AI scientists.[5](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_4_11415 \"See the AI timeline predictions for the TOP100 poll in Müller & Bostrom (2014). The authors asked a sample of the top-cited living AI scientists: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for [an AGI] to exist?” The median reply for each confidence level was 2024, 2050, and 2070, respectively.\nWhy trust AI scientists at all? Haven’t they been wildly optimistic about AI progress from the beginning? Yes, there are embarrassing quotes from early AI scientists about how fast AI progress would be, but there are also many now-disproven quotes from early AI skeptics about what AI wouldn’t be able to do. The earliest survey of AI scientists we have is from 1973, and the most popular response to that survey’s question about AGI timelines was the most pessimistic option, “more than 50 years.” (Which, assuming we don’t get AGI by 2023, will end up being correct.) \") There’s just too much incentive to build AGI: a self-improving AGI could give its makers — whether Google or the NSA or China or somebody else — history’s greatest first-mover advantage. Even if the first few teams design their AGIs wisely, the passage of time will only make it easier for smaller and less-wise teams to cross the finish line.
[Moore’s Law of Mad Science](http://www.google.com/url?q=http%3A%2F%2Fwww.aleph.se%2Fandart%2Farchives%2F2012%2F11%2Fcan_governments_counter_the_moores_law_of_mad_science.html&sa=D&sntz=1&usg=AFQjCNEMzvlFqU9O9pug_MvXj-GNsfof7g), and all that.\n\n\nSome people are less worried than I am about self-improving AGI. After all, one might have predicted in 1950 that we wouldn’t have the civilizational competence to avoid an all-out nuclear war for the next half-century, but we *did* avoid it ([if](http://www.google.com/url?q=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2F1983_Soviet_nuclear_false_alarm_incident&sa=D&sntz=1&usg=AFQjCNHS-JoNNmnXmd8pqSicJkiygZp1YQ) [only](http://www.google.com/url?q=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FCuban_missile_crisis&sa=D&sntz=1&usg=AFQjCNE47jAg5WoLTgimEY9LgC7D4DVdgw) [barely](http://www.google.com/url?q=http%3A%2F%2Fwww.theatlantic.com%2Finternational%2Farchive%2F2013%2F05%2Fthe-ussr-and-us-came-closer-to-nuclear-war-than-we-thought%2F276290%2F&sa=D&sntz=1&usg=AFQjCNFciiomp27bnr00rkyfhiQt0V3--w)).[6](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_5_11415 \"An interesting sub-question: Does humanity’s competence keep up with its capability? When our capabilities jump, as they did with the invention of nuclear weapons, does our competence in controlling those capabilities also jump, out of social/moral necessity or some other forces? Einstein said “Nuclear weapons have changed everything, except our modes of thought,” suggesting that he expected us not to mature as “adults” quickly enough to manage nuclear weapons wisely. We haven’t exactly handled them “wisely,” but we’ve at least handled them wisely enough to avoid global nuclear catastrophe so far.\") So maybe we shouldn’t be so worried about AGI, either.\n\n\nWhile I think it’s important to consider such second-guessing arguments, I generally try to take the world at face value.
When I look at the kinds of things we succeed at, and the kinds of things we fail at, getting good outcomes from AGI looks *much harder* than the kinds of things we routinely fail at, like bothering to switch to opt-out programs for organ donation.\n\n\nBut I won’t pretend that this question of civilizational competence has been settled. If it’s possible to settle the issue at all, doing so would require a book-length argument if not more. (Nick Bostrom’s [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/) says a lot about why the AGI control problem is hard, but it doesn’t say much about whether humanity is likely to rise to that challenge.[7](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_6_11415 \"What Superintelligence says on the topic can be found in chapter 14.\") )\n\n\nWhat’s the point of trying to answer this question? If *my* view is correct, I think the upshot is that we need to re-evaluate our society’s *differential investment* in global challenges. If you want to succeed in the NBA but you’re [only 5’3″ tall](http://www.google.com/url?q=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMuggsy_Bogues&sa=D&sntz=1&usg=AFQjCNF-hMjZL71CmzmMdWYm3hWz8riUOA), then you’ll just have to invest more time and effort on your basketball goals than you do on other goals for which you’re more naturally suited. And if we want our civilization to survive self-improving AGI, but our civilization can’t even manage to switch to opt-out programs for organ donation, then we’ll just have to start earlier, try harder, spend more, etc. 
on surviving AGI than we do when pursuing other goals for which our civilization is more naturally suited, like building awesome smartphones.\n\n\nBut if I’m wrong, and our civilization is on course to handle AGI just like it previously handled, say, [CFCs](http://en.wikipedia.org/w/index.php?title=Chlorofluorocarbon&direction=next&oldid=627201325#Regulation), then there may be more urgent things to be doing than [advancing Friendly AI theory](http://intelligence.org/research/). (Still, it would be surprising if more Friendly AI work wasn’t good on the *present* margin, given that there are fewer than 5 full-time Friendly AI researchers in the world right now.)\n\n\n**I won’t be arguing for my own view in this post**. Instead I merely want to ask: **how might one [study](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/) this question of civilizational competence and the arrival of AGI?** I’d probably split the analysis into two parts: (1) the apparent shape and difficulty of the AGI control problem, and (2) whether we’re likely to have the civilizational competence to handle a problem of that shape and difficulty when it knocks on our front door.\n\n\nNote that **everything in this post is a gross simplification**. Problem Difficulty and Civilizational Competence aren’t one-dimensional concepts, though to be succinct I sometimes talk as if they are. But a problem like AGI control is difficult to different degrees in different ways, some technical and others political, and different parts of our civilization are differently competent in different ways, and those different kinds of competence are undergoing different trends.\n\n\n \n\n\n### Difficulty of the problem\n\n\nHow hard is the AGI control problem, and in which ways is it hard? To illustrate what such an analysis could look like, I might sum up my own thoughts on this like so:\n\n\n1. The control problems that are novel to AGI look really hard. 
For example, getting good outcomes from self-improving AGI seems to require as-yet unobserved philosophical success — philosophical success that is *not* required merely to write safe autopilot software. More generally, there seem to be several novel problems that arise when we’re trying to control a system more generally clever and powerful than ourselves — problems we have no track record of solving for other systems, problems which seem analogous to the hopeless prospect of chimpanzees getting humans to reliably do what the chimps want. (See *Superintelligence*.)\n2. Moreover, we may get relatively few solid chances to solve these novel AGI control problems before we reach a “no turning back” point in AGI capability. In particular, (1) a destabilizing [arms race](http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf) between nations and/or companies, incentivizing speed of development over safety of development, seems likely, and (2) progress [may be rapid](https://intelligence.org/files/IEM.pdf) right when novel control problems become relevant. As a species we are not so good at getting something exactly right on one of our first 5 tries — instead, we typically get something right by learning from dozens or hundreds of initial failures. But we may not have that luxury with novel AGI control problems.\n3. The AGI control challenge looks especially susceptible to a problem known as “[positional externalities](http://lukemuehlhauser.com/wp-content/uploads/Frank-Positional-Externalities.pdf)” — an arms race is but one example — along with related coordination problems. 
(I explain the notion of “positional externalities” further in a footnote.[8](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_7_11415 \"Frank (1991) explains the concept this way:\nIn Micromotives and Macrobehavior, Thomas Schelling observes that hockey players, left to their own devices, almost never wear helmets, even though almost all of them would vote for helmet rules in secret ballots. Not wearing a helmet increases the odds of winning, perhaps by making it slightly easier to see and hear… At the same time, not wearing a helmet increases the odds of getting hurt. If players value the higher odds of winning more than they value the extra safety, it is rational not to wear helmets. The irony, Schelling observes, is that when all discard their helmets, the competitive balance is the same as if all had worn them.\nThe helmet problem is an example of what we may call a positional externality. The decision to wear a helmet has important effects not only for the person who wears it, but also for the frame of reference in which he and others operate. In such situations, the payoffs to individuals depend in part on their positions within the frame of reference. With hockey players, what counts is not their playing ability in any absolute sense, but how they perform relative to their opponents. Where positional externalities… are present, Schelling has taught us, individually rational behavior often adds up to a result that none would have chosen.\nAn arms race is one well-understood kind of positional externality. As Alexander (2014) puts it, “From a god’s-eye-view, the best solution is world peace and no country having an army at all. 
From within the system, no country can unilaterally enforce that, so their best option is to keep on throwing their money into missiles…”\") )\n\n\nBut that’s just a rough sketch, and other thinkers might have different models of the shape and difficulty of the AGI control problem.\n\n\n### Civilizational competence\n\n\nSecond, will our civilization rise to the challenge? Will our civilizational competence at the time of AGI invention be sufficient to solve the AGI control problem?\n\n\nMy own pessimism on this question doesn’t follow from any conceptual argument or any simple extrapolation of current trends. Rather, it comes from the same kind of multi-faceted empirical reasoning that you probably do when you try to think about whether we’re more likely to have, within 30 years, self-driving taxis or a Mars colony. That is, I’m combining different models I have about how the world works in general: the speed of development in space travel vs. AI, trends in spending on both issues, which political and commercial incentives are at play, which kinds of coordination problems must be solved, what experts in the relevant fields seem to think about the issues, what kinds of questions experts are better or worse at answering, etc.
I’m also adjusting that initial combined estimate based on some specific facts I know: about [terraforming](http://en.wikipedia.org/wiki/Terraforming_of_Mars#Proposed_methods_and_strategies), about [radiation hazards](http://journalofcosmology.com/Mars124.html), about [autonomous vehicles](http://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR443-1/RAND_RR443-1.pdf), etc.[9](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/#footnote_8_11415 \"In other words, my reasons for AGI outcomes pessimism look like a model combination and adjustment. Or you can think of it in terms of what Holden Karnofsky calls cluster thinking. Or as one of my early draft readers called it, “normal everyday reasoning.”\")\n\n\nUnfortunately, *because* one’s predictions about AGI outcomes can’t be strongly supported by simple conceptual arguments, it’s a labor-intensive task to try to explain one’s views on the subject, which may explain why I haven’t seen a good, thorough case for *either* optimism or pessimism about AGI outcomes. People just [have their views](http://lesswrong.com/lw/hpt/elites_and_ai_stated_opinions/), based on what they anticipate about the world, and it takes a lot of work to explain in detail where those views are coming from.
Nevertheless, it’d be nice to see someone try.\n\n\nAs a start, I’ll link to some studies which share some methodological features of the kind of investigation I’m suggesting:\n\n\n* [Libecap (2013)](http://www.nber.org/papers/w19501) investigates why some global externalities are addressed effectively whereas others are not.\n* Many scholars test theories of system accidents, such as “high reliability theory” and “normal accidents theory,” against historical data. For a summary, see [Sagan (1993)](http://smile.amazon.com/The-Limits-Safety-Scott-Sagan/dp/0691021015/), ch. 1.\n* [Yudkowsky (2008)](http://intelligence.org/files/CognitiveBiases.pdf) examines cognitive biases potentially skewing judgments on catastrophic risks specifically.\n* [Sztompka (1993)](http://www.zfs-online.org/index.php/zfs/article/viewFile/2822/2359) argues that the slowness of the post-Soviet Eastern European recovery can be blamed on a certain kind of civilizational incompetence.\n* [Shanteau (1992)](http://kstate.co/psych/cws/pdf/obhdp_paper91.PDF), [Kahneman & Klein (2009)](http://psycnet.apa.org/journals/amp/64/6/515/), [Tetlock (2005)](http://smile.amazon.com/Expert-Political-Judgment-Philip-Tetlock-ebook/dp/B00C4UT1A4/), [Ericsson et al. (2006)](http://smile.amazon.com/Cambridge-Expertise-Performance-Handbooks-Psychology/dp/0521600812/), [Mellers et al. (2014)](http://pss.sagepub.com/content/25/5/1106.short), and many other studies discuss conditions for good expert judgment or prediction.
Presumably our civilizational competence is greater where domain experts are capable of making good judgements about the likely outcomes of potential interventions.\n* [Sunstein (2009)](http://smile.amazon.com/WORST-CASE-SCENARIOS-Cass-R-Sunstein-ebook/dp/B001GS6ZMW/), ch. 2, examines why the Montreal Protocol received more universal support than the Kyoto Protocol.\n* [Schlager & Petroski (1994)](http://smile.amazon.com/When-Technology-Fails-Significant-Technological/dp/0810389088/) and [Chiles (2008)](http://smile.amazon.com/Inviting-Disaster-James-R-Chiles-ebook/dp/B0018ND83Y/) collect case studies of significant technological disasters of the 20th century.\n\n\nIf you decide to perform some small piece of this analysis project, please link to your work in the comments below.\n\n\n\n\n---\n\n1. Quote taken from the *Radiolab* episode titled “[In the Dust of This Planet](http://www.google.com/url?q=http%3A%2F%2Fwww.radiolab.org%2Fstory%2Fdust-planet%2F&sa=D&sntz=1&usg=AFQjCNFOxYjR3q775p6L2VpgC9gKN-msdg).”\n2. In *Superintelligence*, Bostrom made the point this way (p. 259):\n\n> Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct… For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will *all* find the sense to put down the dangerous stuff seem almost negligible… Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the entire firmament. Nor is there a grown-up in sight.\n> \n>\n3. 
By “AGI” I mean a computer system that could pass something like Nilsson’s employment test (see [What is AGI?](http://intelligence.org/2013/08/11/what-is-agi/)). By “self-improving AGI” I mean an AGI that improves its own capabilities via its own original computer science and robotics research (and not solely by, say, gathering more data about the world or acquiring more computational resources). By “its own capabilities” I mean to include the capabilities of successor systems that the AGI itself creates to further its goals. In this article I typically use “AGI” and “self-improving AGI” interchangeably, not because all AGIs will necessarily be self-improving in a strong sense, but because I expect that even if the first AGIs are not self-improving for some reason, self-improving AGIs will follow in a matter of decades if not sooner. From a cosmological perspective, such a delay is but a blink.\n4. I purposely haven’t pinned down exactly *what* about our civilization seems inadequate to meet the challenge of AGI control; David Victor made the same choice when he made his comment about civilizational competence in the face of climate change. I think our civilizational competence is insufficient for the challenge for many reasons, but I also have varying degrees of uncertainty about each of those reasons and which parts of the problem they apply to, and those details are difficult to express.\n5. See the AI timeline predictions for the TOP100 poll in [Müller & Bostrom (2014)](http://www.sophia.de/pdf/2014_PT-AI_polls.pdf). The authors asked a sample of the top-cited living AI scientists: “For the purposes of this question, assume that human scientific activity continues without major negative disruption.
By what year would you see a (10% / 50% / 90%) probability for [an AGI] to exist?” The median reply for each confidence level was 2024, 2050, and 2070, respectively.\nWhy trust AI scientists at all? Haven’t they been wildly optimistic about AI progress from the beginning? Yes, there are embarrassing quotes from early AI scientists about how fast AI progress would be, but there are also many now-disproven quotes from early AI skeptics about what AI *wouldn’t* be able to do. The earliest *survey* of AI scientists we have is [from 1973](http://lesswrong.com/lw/gta/selfassessment_in_expert_ai_predictions/), and the most popular response to that survey’s question about AGI timelines was the most pessimistic option, “more than 50 years.” (Which, assuming we don’t get AGI by 2023, will end up being correct.)\n6. An interesting sub-question: Does humanity’s competence keep up with its capability? When our capabilities jump, as they did with the invention of nuclear weapons, does our competence in controlling those capabilities also jump, out of social/moral necessity or some other forces? Einstein said “Nuclear weapons have changed everything, except our modes of thought,” suggesting that he expected us not to mature as “adults” quickly enough to manage nuclear weapons wisely. We haven’t exactly handled them “wisely,” but we’ve at least handled them wisely enough to avoid global nuclear catastrophe so far.\n7. What *Superintelligence* says on the topic can be found in chapter 14.\n8. [Frank (1991)](http://lukemuehlhauser.com/wp-content/uploads/Frank-Positional-Externalities.pdf) explains the concept this way: \n\n\n> In *Micromotives and Macrobehavior*, Thomas Schelling observes that hockey players, left to their own devices, almost never wear helmets, even though almost all of them would vote for helmet rules in secret ballots. 
Not wearing a helmet increases the odds of winning, perhaps by making it slightly easier to see and hear… At the same time, not wearing a helmet increases the odds of getting hurt. If players value the higher odds of winning more than they value the extra safety, it is rational not to wear helmets. The irony, Schelling observes, is that when all discard their helmets, the competitive balance is the same as if all had worn them.\n> \n> \n> The helmet problem is an example of what we may call a *positional externality*. The decision to wear a helmet has important effects not only for the person who wears it, but also for the frame of reference in which he and others operate. In such situations, the payoffs to individuals depend in part on their positions within the frame of reference. With hockey players, what counts is not their playing ability in any absolute sense, but how they perform relative to their opponents. Where positional externalities… are present, Schelling has taught us, individually rational behavior often adds up to a result that none would have chosen.\n> \n> \n\n\nAn arms race is one well-understood kind of positional externality. As [Alexander (2014)](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) puts it, “From a god’s-eye-view, the best solution is world peace and no country having an army at all. From within the system, no country can unilaterally enforce that, so their best option is to keep on throwing their money into missiles…”\n9. In other words, my reasons for AGI outcomes pessimism look like a [model combination and adjustment](http://www.google.com/url?q=http%3A%2F%2Flesswrong.com%2Flw%2Fhzu%2Fmodel_combination_and_adjustment%2F&sa=D&sntz=1&usg=AFQjCNEA5_R5u1Z8Zd2rUM_f_6krBXTP5A). 
Or you can think of it in terms of what Holden Karnofsky calls [cluster thinking](http://www.google.com/url?q=http%3A%2F%2Fblog.givewell.org%2F2014%2F06%2F10%2Fsequence-thinking-vs-cluster-thinking%2F&sa=D&sntz=1&usg=AFQjCNEv4fBJB4hwXPSvM72oaedFNIJXpg). Or as one of my early draft readers called it, “normal everyday reasoning.”\n\nThe post [AGI outcomes and civilizational competence](https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-10-16T11:00:57Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e2801b285aa0f1133db326dade79d3ff", "title": "Nate Soares’ talk: “Why ain’t you rich?”", "url": "https://intelligence.org/2014/10/07/nate-soares-talk-aint-rich/", "source": "miri", "source_type": "blog", "text": "On September 18th, MIRI research fellow Nate Soares [spoke](http://intelligence.org/2014/09/12/nate-soares-speaking-purdue-september-18th/) at Purdue University’s [*Dawn or Doom*](http://docs.lib.purdue.edu/dawnordoom/2014/) seminar. [Slides](https://intelligence.org/wp-content/uploads/2014/10/Soares-Why-Aint-You-Rich.pdf), [video](https://www.youtube.com/watch?v=1oAzS1sY8WA), and a [transcript](https://docs.google.com/document/d/1-1weY3JSleomei8vMUjo3pEg4jbJ__6hC9xZRz6qzyw/edit?usp=sharing) of his talk — “Why ain’t you rich? 
Why Our Current Understanding of ‘Rational Choice’ Isn’t Good Enough for Superintelligence” — are now available.\n\n\n \n\n\n\nThe post [Nate Soares’ talk: “Why ain’t you rich?”](https://intelligence.org/2014/10/07/nate-soares-talk-aint-rich/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-10-07T12:21:36Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "3d252931b5c14ce81650494591baccd4", "title": "MIRI’s October Newsletter", "url": "https://intelligence.org/2014/10/01/october-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\n**Research Updates**\n* Our major project last month was our Friendly AI technical agenda overview and supporting papers, the former of which is now in late draft form but not yet ready for release.\n* 4 new [expert interviews](http://intelligence.org/category/conversations/), including [John Fox on AI safety](http://intelligence.org/2014/09/04/john-fox/).\n* MIRI research fellow Nate Soares has begun to explain some of the ideas motivating MIRI’s current research agenda [at his blog](http://mindingourway.com/).
See especially [Newcomblike problems are the norm](http://mindingourway.com/newcomblike-problems-are-the-norm/).\n\n\n**News Updates**\n* [Friendly AI research help from MIRI](http://intelligence.org/2014/09/08/friendly-ai-research-help-miri/).\n* We’re in week three of our [*Superintelligence* reading group](http://lesswrong.com/lw/kw4/superintelligence_reading_group/), and it’s not too late to join the discussion.\n\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\n \n\n\nThe post [MIRI’s October Newsletter](https://intelligence.org/2014/10/01/october-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-10-01T21:00:19Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "24b1aecd4a98ce006a4cab6e64109596", "title": "Kristinn Thórisson on constructivist AI", "url": "https://intelligence.org/2014/09/14/kris-thorisson/", "source": "miri", "source_type": "blog", "text": "[![kris](https://intelligence.org/wp-content/uploads/2014/09/kris.png)](https://intelligence.org/wp-content/uploads/2014/09/kris.png)Dr. **Kristinn R. Thórisson** is an [Icelandic](http://en.wikipedia.org/wiki/Iceland \"Iceland\") Artificial Intelligence researcher, founder of the [Icelandic Institute for Intelligent Machines](http://www.iiim.is) (IIIM) and co-founder and former co-director of [CADIA: Center for Analysis and Design of Intelligent Agents](http://cadia.ru.is). Thórisson is one of the leading proponents of [artificial intelligence systems integration](http://en.wikipedia.org/wiki/Artificial_intelligence_systems_integration \"Artificial intelligence systems integration\").
Other proponents of this approach are researchers such as [Marvin Minsky](http://en.wikipedia.org/wiki/Marvin_Minsky \"Marvin Minsky\"), [Aaron Sloman](http://en.wikipedia.org/wiki/Aaron_Sloman \"Aaron Sloman\") and [Michael A. Arbib](http://en.wikipedia.org/wiki/Michael_A._Arbib \"Michael A. Arbib\"). Thórisson is a proponent of Artificial General Intelligence (AGI) (also referred to as [Strong AI](http://en.wikipedia.org/wiki/Artificial_general_intelligence \"Artificial general intelligence\")) and has proposed a new methodology for achieving artificial general intelligence. A demonstration of this constructivist AI methodology has been given in the FP-7 funded [HUMANOBS project](http://www.humanobs.org), where an artificial system autonomously learned how to do spoken multimodal interviews by observing humans participate in a TV-style interview. The system, called AERA, autonomously expands its capabilities through self-reconfiguration. Thórisson has also worked extensively on systems integration for artificial intelligence systems in the past, contributing architectural principles for infusing dialogue and human-interaction capabilities into the Honda ASIMO robot.\n\n\nKristinn R. Thórisson is currently managing director for the Icelandic Institute for Intelligent Machines and an associate professor at the School of Computer Science at Reykjavik University.
He was co-founder of [semantic web](http://en.wikipedia.org/wiki/Semantic_web \"Semantic web\") startup company Radar Networks, and served as its Chief Technology Officer 2002-2003.\n\n\n\n \n\n\n**Luke Muehlhauser**: In some recent articles ([1](http://alumni.media.mit.edu/%7Ekris/ftp/CustructivistAI-BICA-09-Thorisson-Keynote.pdf), [2](http://xenia.media.mit.edu/%7Ekris/ftp/Thorisson_chapt9_TFofAGI_Wang_Goertzel_2012.pdf), [3](https://openaire.cern.ch/record/11035/files/Replicode-aConstructivistProgrammingParadigmandLanguage.pdf)) you contrast “constructionist” and “constructivist” approaches in AI. Constructionist AI builds systems piece by piece, by hand, whereas constructivist AI builds and grows systems largely by automated methods.\n\n\nConstructivist AI seems like a more general form of the earlier concept of “[seed AI](http://wiki.lesswrong.com/wiki/Seed_AI).” How do you see the relation between the two concepts?\n\n\n\n\n---\n\n\n**Kristinn Thorisson**: We sometimes use “seed AI”, or even “developmental AI”, when we describe what we are doing – it is often a difficult task to find a good term for an interdisciplinary research program, because each term will bring various things up in the mind of people depending on their background. There are subtle differences between both the meanings and histories of these terms that each bring along several pros and cons for each one.\n\n\nI had been working on integrated constructionist systems for close to two decades, where the main focus was on how to integrate many things into a coherent system. When my collaborators and I started to seriously think about how to achieve artificial general intelligence, we tried to explain, among other things, how transversal functions – functions of mind that seem to touch pretty much everything in a mind, such as attention, reasoning, and learning – could efficiently and sensibly be implemented in a single AI system. We also looked deeper into autonomy than I had done previously.
This brought up all sorts of questions that were new to me, like: What is needed for implementing a system that can act relatively autonomously \\*after it leaves the lab\\*, without the constant intervention of its designers, and is capable of learning a pretty broad range of relatively unrelated things, on its own, and deal with new tasks, scenarios and environments – that were relatively unforeseen by the system’s designers?\n\n\nMy Constructionist Design Methodology (CDM) was conceived over a decade ago as a way to help researchers build big \\*whole\\* systems integrating a large number of heterogeneous cognitive functions. In the past 10 years CDM had already proven excellent for building complex advanced systems – from AI architecture for interactive agents such as the Honda ASIMO humanoid robot to novel economic simulations. Combining methodology and a software system for implementing large distributed complex systems with heterogeneous components and data, we naturally started by asking how the CDM could be extended to address the above issues. But no matter how I tried to tweak and re-design this framework/methodology there seemed to be no way to do that. Primarily due to my close collaboration with Eric Nivel, I soon saw how the CDM could not address the issues at hand. But it went further than that: it wasn’t only the CDM but \\*all methodology of that kind\\* that was problematic, and it wasn’t simply ‘mildly lacking’ in power, or ‘suboptimal’, but in fact \\*grossly insufficient\\* – along with the underlying assumptions that our past research approaches were based on, as imported relatively wholesale from the field of computer science.
As the CDM inherited all the limitations of existing software methodologies and engineering methodologies that are commonly taught in universities and used in industry, no methodology existed to our best knowledge that could move us toward AGI at something I considered an acceptable speed.\n\n\nA new methodology was needed. And since we could see so clearly that the present alonomic methodologies – methods that assume a designer outside the system – are essentially ‘constructionist’, putting the system designer/researcher in the role of a construction worker, where each module/class/executable is implemented by hand by a human – our sights turned to self-constructive systems, producing the concept of constructivism. A self-constructive system is capable of bootstrapping itself to some extent, in a new environment, and of learning new tasks that its designer did not anticipate. Such a system must of course be supplied with a “seed”, since without a seed there can be no growth, and the implication is then that the system develops on its own, possibly going through cognitive stages in the process. What we do is therefore seed AI, developmental AI, and constructivist AI. The principal concept here is that there are self-organizing principles at play, such that the system-environment couple allows the AI to grow in a reasonably predictable way from a small seed, according to the drives (top-level goals) that were contained in the beginning. I had been introduced to Piaget’s ideas in my early career, and the concept of constructivism seemed to me to capture the idea very well.\n\n\nWhat we are doing is \\*our\\* constructivism, which may or may not overlap with the meaning of how others use that term – the association with Piaget’s work is at an abstract level, as a nod in his direction.
One important difference with how others use the term, as far as I can see, is that while we agree that intelligent systems must be able to acquire their knowledge autonomously (as was Piaget’s main point) our emphasis is on \\*methodology\\*: We have very strong reasons to believe that at a high level there are (at least) two \\*kinds\\* of methodologies for doing AI, which we could call ‘constructionist’ and ‘constructivist’. Our hypothesis is that only if you pick the latter will you have a shot at producing an AGI worthy of the “G”. And at present, \\*all\\* the approaches proposed in AI, from subsumption to GOFAI, from production systems to reasoning systems to search-and-test, from BDI to sub-symbolic – whatever they are called and however you slice the field and methodological and philosophical approaches – are of the constructionist kind. Our constructivist AI methodology – CAIM – is our current proposal for breaking free from this situation.\n\n\n\n\n---\n\n\n**Luke:** What is the technical content of CAIM, thus far?\n\n\n\n\n---\n\n\n**Kris**: As a methodology a great deal of the CAIM is perhaps closer to philosophy than tech-speak – but there are some fairly specific implications as well, which logically result from these more general concerns. Let’s go from the top down. I have already mentioned where our work on CAIM originated; where the motivation for a new methodology came from: We asked ourselves what a system would need to be capable of to be more or less (mostly more) independent of its designer after it left the lab – to be more or less (mostly more) \\*autonomous\\*. Clearly the system would then need to take on \\*at least\\* all the tasks that current machine learning and cognitive architectures require their designers to do after they have been implemented and released – but probably a lot more too.
The former is a long list of things such as identifying worthy tasks, identifying and defining the necessary and sufficient inputs and outputs for tasks, training for a new task, and more. The latter – the list of \\*new\\* features that such a system would need and which virtually no system to date deals with – includes e.g. how to re-use skills (transfer of knowledge), how to do ampliative reasoning (unified deduction, induction, abduction), how to identify the need for sub-goal generation, how to properly generate sub-goals, etc., and ultimately: how to evaluate one’s methods for doing all of this and improve them. Obviously none of this is trivial.\n\n\nSo there are some high-level principles that we put forth, the first of which I will mention is the need to approach cognitive architectures \\*holistically\\*. This is much more difficult than it sounds, which is why nobody really wants to take that on, and why computer science in general still shies away from it. But it is necessary due to the nature of complex systems that implement complex functions coupled via a large number of heterogeneous interactive connections: Such systems behave in very complex ways when you perturb them, and it becomes a race with combinatorics if you try to uncover their workings via standard experimental designs, by tweaking x and observing the effect, tweaking y and observing again, etc. As famously noted by Newell, you can’t play 20 questions with nature and win, in his paper with that title. When trying to understand how to build a system with a lot of complex interacting functions (‘function’ having the general meaning, not the mathematical one) you must take all the major factors, operations and functions into account from the outset, because if you leave any of them out the whole thing may in fact behave like a different (inconsistent, dysfunctional) system entirely. 
One such thing that typically is ignored – not just in computer science but in AI as well – is time itself: In view of CAIM, you cannot and must not ignore such a vital feature of reality, as time is in fact one of the key reasons why intelligence exists at all. At the high level CAIM tells you to make a list of the \\*most\\* important features of (natural) intelligences – including having to deal with time and energy, but also with uncertainty, lack of processing power, lack of knowledge – and from this list you can derive an outline for the requirements for your system.\n\n\nNow, turning our attention to the lower levels, one of the things we – and others – realized is that you need to give a generally intelligent system a way to inspect its own operation, to make it capable of \\*reflection\\*, so that it can monitor its own progress as it develops its processes and skills. There are of course programming languages that allow you to implement reflection – Lisp and Python being two examples – but all of these are severely lacking in other important aspects of our quest for general intelligence, a primary one being that they do not make time a first-class citizen. This is where adoption of CAIM steers you in a somewhat more technical direction than many other methodologies do: It proposes new principles for programming such reflective systems, where time is at the core of the language’s representation, and the granularity of an “executable semantic chunk” must be what we refer to as “pee-wee size”: small enough so that the execution time is highly consistent and predictable, and flexible enough so that larger programs can be built up using such chunks. We have built one proto-architecture with this approach, the Autocatalytic Endogenous Reflective Architecture (AERA).
These principles have carried us very far in that effort – much further than I would have predicted based on my experience in building and re-building any other software architecture – and it has been a pleasant surprise how easy it is to expand the current framework with more features. It really feels like we are on to something. To take an example, the concept of curiosity was not a driving force or principle of our efforts, yet when we tried to expand AERA to incorporate such functionality at its core – in essence, the drive to explore one’s acquired knowledge, to figure out “hidden implications” among other things – it was quite effortless and natural. We are seeing very similar things – although this work is not quite as far along yet – with implementing advanced forms of analogy. \n\n\n\n\n\n\n---\n\n\n**Luke**: Among AI researchers who think regularly not just about narrow AI applications but also about the end-goal of [AGI](http://intelligence.org/2013/08/11/what-is-agi/), I observe an interesting divide between those who think in a “top down” manner and researchers who think in a “bottom up” manner. You described the top-down method already: think of what capabilities an AGI would need to have, and back-chain from there to figure out what sub-capabilities you should work toward engineering to eventually get to AGI. Others think a bottom-up approach may be more productive: just keep extending and iterating on the most useful techniques we know today (e.g. deep learning), and that will eventually get us to AGI via paths we couldn’t have anticipated if we had tried to guess what was needed from a top-down perspective.\n\n\nDo you observe this divide as well, or not so much? 
If you do, then how do you defend the efficiency and productivity of your top-down approach to those who favor bottom-up approaches?\n\n\n\n\n---\n\n\n**Kris**: For any scientific goal you may set yourself you must think about the scope of your work, the hopes you have for finding general principles (induction is, after all, a key tenet of science), and the time it may take you to get there, because this has an impact on the tools and methods you choose for the task. Like in any endeavor, it is a good idea to set yourself milestones, even when the expected time for your research may be years or decades – some would say that is an even greater reason for putting down milestones. We could say that CAIM addresses the top-down  and middle-out in that spectrum: First, it helps with assessing the scope of the work by highlighting some high-level features of the phenomenon to be researched / engineered (intelligence), and proposing some reasons for why one approach is more likely to succeed than others. Second, it proposes mid-level principles that are more likely to achieve the goals of the research program than others – such as reflection, system-wide resource management, and so on. With our work on AERA we have now a physical incarnation of those principles, firmly grounding CAIM in a control-theoretic context.\n\n\nThe top-down / bottom-up dimension is only one of many with a history of importance to the AI community including symbolic versus sub-symbolic (or non-symbolic / numeric), self-bootstrapped knowledge versus hand-coded, reasoning-based versus connectionist-based, narrow-and-deep versus broad-and-shallow, few-key-principles versus hodgepodge-of-techniques (“the brain is a hack”), must-look-at-nature versus anything-can-be-engineered, and so on. All of these vary in their utility for categorizing people’s views, and with regards to their importance for the subject matter we can say with certainty that some of them are less important than others. 
Most of them, however, are like outdated political categories: they lack the finesse, detail, and precision to really help move our thinking along. In my mind the most important thing about the top-down versus bottom-up divide as you describe it is that a bottom-up approach without any sense of scope, direction, or proto-theory, is essentially no different from a blind search. And any top-down approach without some empirical grounding is philosophy, not science. Neither extreme is bad in and of itself, but let’s try to not confuse them with each other, or with informed scientific research. Most of the time reality falls somewhere in between.\n\n\nOf all possible approaches, blind search is just about the most time-consuming and least promising way to do science. Some would in fact argue that it is for all practical purposes impossible. Einstein and Newton did not come up with their theories through blind, bottom-up search; they formulated a rough guideline in their heads about how things might hang together, and then they “searched” that very limited space of possibilities thus carved out. You could call this proto-theories, meta-theories, high-level principles, or assumptions: the guiding principles that a researcher has in mind when he/she tries to solve unsolved problems and answer unanswered questions. In theory it is possible to discover how complex things work by simply studying their parts. But however you slice this, eventually someone must put the descriptions of these isolated parts together, and if the system you are studying is greater than the sum of its parts, well, then someone must come up with the theory for how and why they fit together like they do.\n\n\nWhen we try to study intelligence by studying the brain, this is essentially what we get: it is one of the worst cases of the *curse of holism* – that is, when there is no theory or guiding principles the search is more or less blind. 
If the system you are studying is large (the brain/mind is) and has principles operating on a broad range of timescales (the brain/mind does) based on a multitude of physical principles (like the mind/brain does) then you will have a hell of a time putting all of the pieces together, for a coherent explanation of the macroscopic phenomenon you are trying to figure out, when you have finished studying the pieces you originally chose to study in isolation. There is another problem that is likely to crop up: How do you know when you have figured out *all* the pieces when you don’t really know what the pieces are? So – you don’t know when to stop, you don’t know how to look, and you don’t know how to put your pieces together into sub-systems. The method is slowed down even further because you are likely to get sidetracked, and worse, you don’t actually know when you are sidetracked because you couldn’t know up front whether the sidetrack is actually a main track. For a system like the mind/brain – of which intelligence is a very holistic emergent property – this method might take centuries to deliver something along the lines of explaining intelligence and human thought.\n\n\nThis is why methodology matters. The methodology you choose must be checked for its likelihood to help you with the goals of your research – to help you answer the questions you are hoping to answer. In AI many people seem to not care; they may be interested in the subject of general intelligence, or human-like intelligence, or some flavor of intelligence of that sort, but they pick the nearest available methodology – that produced by the computer science community for the past few decades – and cross their fingers. Then they watch AI progress decade by decade, and feel that there are clear signs of progress: In the 90s it was Deep Blue, in the 00s it was the robotic vacuum cleaner, in the 10s it was IBM Watson. 
And they think to themselves “yes, we’ll get there eventually – we are making sure and steady progress”. It is like the Larson joke with the cows practicing pole vaulting, and one of them exclaiming “Soon we’ll be ready for the moon!”.\n\n\nAnyway, to get back to the question, I do believe in the dictum “whatever works” – i.e. bottom-up, top-down, or a mix – if you have a clear idea of your goals, have made sure you are using the best methodology available, for which you must have some idea of the nature of the phenomenon you are studying, and take steps to ensure you won’t get sidetracked too much. If no methodology exists that promises to get you to your final destination you must define intermediate goals, which should be based on rational estimates of where you think the best available methodology is likely to land you. As soon as you find some intermediate answers that can help you identify what exactly are the holes in your methodology you should respond in some sensible way, by honing it or even creating a brand new one; whatever you do, by all means don’t simply fall so much in love with your (in all likelihood, inadequate) methodology that you give up on your original goals, like much of the AI community seems to have done!\n\n\nIn our case what jerked us out of the old constructionist methodology was the realization that to get to general intelligence you’d have to have a system that could more or less self-bootstrap, otherwise it could not handle what we humans refer to as *brand-new* situations, tasks, or environments. Self-bootstrapping requires introspection and self-programming capabilities, otherwise your system will not be capable of cognitive growth. Thorough examination of these issues made it clear that we needed a new methodology, and a new top-level proto-theory, that allowed us to design and implement a system with such features. 
It is not known at present how exactly these features are implemented in either human or animal minds, but this was one of the breadth-first items on our “general intelligence requires” list. Soon following this came realizations that it’s difficult to imagine a system with those features that doesn’t have some form of attention – we also call it resource management – and a very general way of learning pretty much anything, including about its own operation.\n\n\nThis may seem like an impossible list of requirements to start with, but I think in our favor is the “inventor’s paradox”: Sometimes piling more constraints makes what used to seem complex suddenly simpler. We started to look for ways to create the kind of controller that was amenable to being imbued with those features, and we found one by taking a ‘pure engineering’ route: We don’t limit ourselves to the idea that “it must map to the way the brain (it seems to us now) does it”, or any other such constraint, because we put engineering goals first, i.e. we targeted creating something with potential for practical applications. Having already very promising results that go far beyond state of the art in machine learning, we are still exploring how far this new approach will take us.\n\n\nSo you see, even though my concerns may seem to be top-down, there is much more to it, and my adoption of a radically different top-level methodology has much more to do with clarifying the scope of the work, trying to set realistic goals and expectations, and going from there, looking at the building blocks as well as the system as a whole – and creating something with practical value. In one sentence, our approach is somewhat of a simultaneous “breadth-first” and “top-to-bottom” – all at once. Strangely enough this paradoxical and seemingly impossible approach is working quite well.\n\n\n\n\n---\n\n\n**Luke**: What kinds of security and safety properties are part of your theoretical view of AGI? E.g. 
MIRI’s [Eliezer Yudkowsky](http://yudkowsky.net/) seems to share your broad methodology in some ways, but he [emphasizes](https://intelligence.org/files/AIPosNegFactor.pdf) the need for AGI designs to be “built from the ground up” for security and safety, like today’s safety-critical systems are — for example autopilot software that is written very differently from most software so that it is (e.g.) amenable to formal verification. Do you disagree with the idea that AGI designs should be built from the ground up for security and safety, or… what’s your perspective on that?\n\n\n\n\n---\n\n\n**Kris**: I am a big proponent of safety in the application of scientific knowledge to all areas of life on this planet. Knowledge is power; scientific knowledge can be used for good as well as evil – this I think everyone agrees with. In my opinion, since there is a lot more that we don’t know than what we know and understand, caution should be a natural ingredient in any application of scientific knowledge in society. Sometimes we have a very good idea of the technological risks, while suspecting certain risks in how it will be managed by people, as is the case with nuclear power plants, and sometimes we really don’t understand either the technological implications or the social management processes, as when genetically engineered self-replicating systems (e.g. plants) are released into the wild – as the potential interactions of such a technology with the myriads of existing biological systems out there that we don’t understand are staggering, and whose outcome is thus impossible to predict. Since there is generally no way for us to grasp even a tiny fraction of the potential implications of releasing, e.g., a self-replicating agent into the wild, genetic engineering is a greater potential threat to our livelihood than nuclear power plants. 
However, both have associated dangers, and both have their pros and cons.\n\n\nSome people have suggested banning certain kinds of research or exploration of certain avenues and questions, to directly block off the possibility of creating dangerous knowledge in the first place. The argument goes: if no one knows it, it cannot be used to do harm. This purported solution is not practical, however, as the research avenue in question must be blocked everywhere to be effective. Even if we could instantiate such a ban in every country on Earth, compliance could be difficult to ensure. And since people are notoriously bad at foreseeing which avenues of research turn out to bring benefits, a far better approach is to give scientists the freedom to select the research question they want to try to answer – as long as they observe general safety measures, of course, as appropriate to their field of inquiry. Encouraging disclosure of research results funded by public money, e.g. the European competitive research grants, NIH grants, etc., is a sensible step to help ensure that knowledge does not sit exclusively within a small group of individuals, which generally increases opportunity for (mis)use in favor of one group of people over another.\n\n\nRather than banning the pursuit of certain research questions, the best way to deal with the dangers resulting from knowledge is to focus on its *application*, making for instance use of certain explosives illegal or strictly conditional, making production facilities for certain chemicals, uranium, etc. conditional on the right permits, regulation, inspection, and so on. This may require strong governmental monitoring and supervision, and effective law enforcement, and this has associated cost, but this approach has built-in transparency and is already proven to be practical.\n\n\nI think artificial intelligence is not at the maturity stage of either nuclear power or genetically engineered beings. 
The implications of applying AI in our lives are thus fairly far from the kinds of dangers posed by either of those, or comparable technologies. The dangers of applying current and near-future AI in some way in society are thus of the same nature as the dangers inherent in firearms, power tools, explosives, armies, computer viruses, and the like. Current AI technology can be used (and misused) in acts of violence, in breaking the law, for invasion of privacy, or for violating human rights and waging war. If the knowledge of how to use AI technology is distributed unevenly among arguing parties, e.g. those at war – even that available now – it could give the knowledgeable party an upper hand. But knowledge and application of present-day AI technology is unlikely to count as anything other than one potential make-or-break factor among many. That being said, of course this may change, even radically, in the coming decades.\n\n\nMy collaborators and I believe that as scientists we should take any sensible opportunity to ensure that our own research results are used responsibly. At the very least we should make the general populace, political leaders, etc., aware of any potential dangers that we believe new knowledge may entail. I created a software license to this end, that I prefer to stick on any and all software that I make available, which states that the software may not be used for invasion of privacy, violation of human rights, causing bodily or emotional distress, or for purposes of committing or preparing for any act of war. The clause, called the CADIA Clause (after the AI lab I co-founded), can be appended to any software license by anyone – it is available on the CADIA Website. As far as I know it is one of a very few, if not the only one, of its kind. It is a clear and concise ethical statement on these matters. 
While seemingly a small step in the direction of ensuring the safe use of scientific results, it is in my mind quite odd that more such statements and license extensions don’t exist; one would think that large groups of scientists would be taking steps in this direction already all over the planet.\n\n\nSome have speculated, among them the famed astrophysicist Stephen Hawking, that future AI systems, especially those endowed with superhuman cognitive powers, may quite possibly pose the biggest threat to humanity that any invention or scientific knowledge ever has. The argument goes like this: Since superhuman AIs must be able to generate sub-goals autonomously, and of course the goals of superhuman AIs will not be hand-coded (unlike virtually 100% of all software created today, including all AI systems in existence), we cannot directly control what sub-goals they may generate; hence we cannot ensure that they behave safely, sensibly, or in any predictable way at all. There may be some relevance of such an argumentation to certain systems development approaches currently underway. However, I believe – based on the evidence I have been exposed to so far – that the fear stems from what I would call a reasonable induction based on incorrect premises. Current methodologies in AI, other than those that my group employs, produce software whose nature is identical to the operating system on your laptop and mobile phone: It is a hand-crafted artifact with no built-in resilience to perturbations to speak of, largely unpredictable responses to unforeseen and unfamiliar input, no self-management mechanisms, no capabilities for cognitive growth, etc. In fact we could say, in short, that they are not really intelligent, at least not in the sense that a superhuman intelligence *must* be. 
The point is that existing methods for building any kinds of safeguards into these systems are very primitive, to say the least, along the lines of the “safeguards” built into nuclear power plant software: These are safeguards invented and implemented by human minds. These kinds of safeguards are limited by the humans who design them. And as we well know, it is difficult for us to truly trust systems built this way. So people tend to think that future superhuman AIs will inherit this trait. But I don’t think so, and my collaborators and I are working on an alternative and pretty interesting new premise for speculating about the nature of future superhuman intelligences, and their inherent pros and cons.\n\n\nAlthough our approach has a lot of low-level features in common with organic processes, it is based on explicit deterministic logic, as opposed to largely impenetrable sub-symbolic networks. It does not suffer from the same kind of unpredictability that, say, a new genetically engineered plant that is released into the wild does, or an artificial neural net trained on a tiny subset of what it will take for input when deployed. Our system’s knowledge – and I am talking about AERA now – grows during its interaction with its environment under guidance from its top-level goals, or drives, given to it by the programmers. It has built-in self-correcting mechanisms that go well beyond anything implemented (yet) in everyday software systems, and even those still in the laboratory belonging to the class of state-of-the-art “autonomic systems”. Our system is capable of operations at a meta-level, from hand-coded meta-goals, based on self-organizing principles that are very different from what has been done before. 
In our approach the autonomous system is capable of a similar sort of high-level guidance that we see in biological systems for helping them survive; when turned “upside-down” these mechanisms result in the inverse of self-preservation-at-any-cost, to a kind of environment-preservation, making them conservative and trustworthy to an extent that no nuclear power plant, or genetically engineered biological agent in the wild, could ever reach using present engineering methodologies. So, we have not only invented the first seed-based AI system but also possibly a new paradigm for ensuring the predictability of self-expanding AIs, as we see no relevance of the concerns fielded by the more pessimistic researchers to our work. That being said, I should emphasize that we are right in the middle of this research, and although we have a seemingly predictable, self-managing, autonomous system on our hands, much work remains to be done to explore these, and other related issues of importance. Whether our system can reach superhuman, or even human, levels of intelligence is completely unclear – most would probably say that our chances are slim, based on progress in AI so far, which would be a fair assessment. But it cannot be completely precluded at this stage. The software resulting from our work, by the way, is released under a BSD-like CADIA Clause license.\n\n\n\n\n---\n\n\n**Luke**: You write that “the fear [[expressed by Hawking and others](http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html)] stems from… incorrect premises.” But I couldn’t follow which incorrect premises you were pointing to. 
Which specific claim(s) do you think are incorrect?\n\n\n\n\n---\n\n\n**Kris**: Keep in mind that this discussion is still highly speculative; there are lots of gaps in our knowledge that must be filled in to imagine the still very hypothetical kinds of superhuman intelligences we think may spring to life in the future.\n\n\nOne underlying and incorrect premise is to think that the kind of system necessary and sufficient to implement superhuman intelligence will be cursed with the same limitations and problems as the systems created with today’s methods.\n\n\nThe allonomic methodologies used for all software running on our devices today produce systems that are riddled with problems, primarily fragility, brittleness, and unpredictability, stemming from their strict reliance on allonomically infused semantics, that is, their operational semantics coming strictly from outside of the system – from the human designer. This results in system unpredictability of two kinds.\n\n\nFirst, large complex systems designed and written by hand are bound to contain mistakes in both design and implementation. Such potential inherent failure points, most of which have to do with syntax rather than semantics, will only be seen if the system itself is in a particular state in the context of a particular environmental state where it is operating. And since these points of failure can be found at any level of detail – many of them will in fact be at very low levels of detail – the values of the system-environment state pair may be very specific, and thus the number of system state – environment state failure pairs may be enormous. To ensure reliability of a system of this nature our only choice is to expose it to every potential environmental state it may encounter, which for a complex system in a complex environment is prohibitive due to the combinatorial explosion. 
We do this for airplanes and other highly visible and obviously fatal systems, but for most software this is not only cost prohibitive but virtually impossible. In fact, we cannot predict beforehand all the ways an allonomic system may fail, partly because the system’s fragility is so much due to syntactic issues, which in turn are an unavoidable side effect of any allonomic methodology.\n\n\nThe other kind of unpredictability also stems from exogenous operational semantics, the fact that the runtime operation of the system is a “blind” one and hence the achievement of the system’s goal(s) is rendered inherently opaque and impenetrable to the system itself. A system that cannot analyze how it achieves its goals cannot propose or explore possible ways of improving itself. Such systems are truly blindly executed mechanical algorithms. If the software has no sensible robust way to self-inspect – as no hand-written constructionist system to date can since their semantics are strictly exogenous – it cannot create a model of itself. Yet a self-model is necessary for a system if it is to continuously improve in achieving its highest-level goals; in other words, self-inspection is a major way to improve the coherence of the system’s operational semantics.\n\n\nSo self-inspection and modeling can increase system predictability at both the semantic and syntactic levels. Autonomous knowledge acquisition – constructivist-style knowledge creation, as opposed to hand-coded expert-system style – coupled with self-modeling ensures that the system’s operational semantics are native to the system itself, bringing its operation to another level of meaningfulness not yet seen in any modern software.\n\n\nThis is how all natural intelligences operate. 
Because you have known your grandmother your whole life you can predict, with a given certainty, that she would not rob a bank, and that if she did, she would be unlikely to harm people unnecessarily while doing it, and however unlikely, you can conjure up potential explanations why she might do either of those, e.g. if she were to “go crazy”, the likelihood of which can in part be predicted by her family history, medication, etc.: The nature of the system referred to as “your grandmother” is understood in the historical and functional context of systems like it – that is, other humans – and in light of her history as an individual.\n\n\nModern software systems are not like that. Because we have never really seen artificial systems of this kind we have a difficult time imagining that such software could exist. We have not really seen any good demonstrations of meta-control or self-organization in artifacts, or seed-AI systems with explicit top-level goals. So we might be inclined to think that superhuman artificial intelligences might be like deranged humans with all the perils of modern software – and possibly new ones. The software of the future that has a sense of self will of course still be software, but it will not be like a “black-box alien” being coming to Earth from outer space, or a mad human with brilliant but twisted thoughts, since we can – unlike humans – open the hood and take a look inside. And the insides are unlikely to be like modern software, because they will operate on quite different principles. 
So rather than a completely independent and autonomous self-preserving entity behaving like a madman with illusions of grandeur, or a malevolent dictator intent on ensuring its own power and survival, future superhuman software may be more akin to an autonomous hammer: A next-generation tool with an added layer of possible constraints, guidelines, and limitations, that give its human designers yet another level of control over the system, one that allows them to predict the system’s behavior at lower levels of detail, and more importantly at much higher levels than can be done with today’s software.\n\n\n\n\n---\n\n\n**Luke**: I’m not sure Hawking et al. are operating under that premise. Given their professional association with organizations largely influenced by the Bostrom/Yudkowsky lines of thought on machine superintelligence, I doubt they’re worried about AGIs that are like “deranged humans with all the perils of modern software” — instead, they’re probably worried about problems arising from “[five theses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/)“-style reasons (which also motivate Bostrom’s forthcoming *[Superintelligence](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/)*). 
Or do you think the points you’ve made above undercut that line of reasoning as well?\n\n\n\n\n---\n\n\n**Kris**: Yes, absolutely.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Kris!\n\n\nThe post [Kristinn Thórisson on constructivist AI](https://intelligence.org/2014/09/14/kris-thorisson/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-09-15T03:03:09Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "2d9ded13715d17a2f570412cc9e86c95", "title": "Nate Soares speaking at Purdue University", "url": "https://intelligence.org/2014/09/12/nate-soares-speaking-purdue-september-18th/", "source": "miri", "source_type": "blog", "text": "On Thursday, September 18th Purdue University is hosting the seminar [Dawn or Doom: The New Technology Explosion](http://www.purdue.edu/dawnordoom/). Speakers include James Barrat, author of [*Our Final Invention*](http://smile.amazon.com/Our-Final-Invention-Artificial-Intelligence-ebook/dp/B00CQYAWRY/), and MIRI research fellow Nate Soares.\n\n\nNate’s talk title and abstract are:\n\n\n\n> **Why ain’t you rich?:**Why our current understanding of “rational choice” isn’t good enough for superintelligence.\n> \n> \n> The fate of humanity could one day depend upon the choices of a superintelligent AI. How will those choices be made? Philosophers have long attempted to define what it means to make rational decisions, but in the context of machine intelligence, these theories turn out to have undesirable consequences.\n> \n> \n> For example, there are many games where modern decision theories lose systematically. New decision procedures are necessary in order to fully capture an idealization of the way we make decisions.\n> \n> \n> Furthermore, existing decision theories are not stable under reflection: a self-improving machine intelligence using a modern decision theory would tend to modify itself to use a different decision theory instead. 
It is not yet clear what sort of decision process it would end up using, nor whether the end result would be desirable. This indicates that our understanding of decision theories is inadequate for the construction of a superintelligence.\n> \n> \n> Can we find a formal theory of “rationality” that we would want a superintelligence to use? This talk will introduce the concepts above in more detail, discuss some recent progress in the design of decision theories, and then give a brief overview of a few open problems.\n> \n> \n\n\nFor details on how to attend Nate’s talk and others, see [here](http://www.purdue.edu/dawnordoom/Travel).\n\n\nThe post [Nate Soares speaking at Purdue University](https://intelligence.org/2014/09/12/nate-soares-speaking-purdue-september-18th/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-09-12T21:47:31Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "6721c2bbc7686af8ca0674e63659d091", "title": "Ken Hayworth on brain emulation prospects", "url": "https://intelligence.org/2014/09/09/hayworth/", "source": "miri", "source_type": "blog", "text": "![Kenneth Hayworth portrait](https://intelligence.org/wp-content/uploads/2014/09/image00.jpg)Kenneth Hayworth is president of the [Brain Preservation Foundation](http://www.brainpreservation.org/) (BPF), an organization formed to skeptically evaluate cryonic and other potential human preservation technologies by examining how well they preserve the brain’s neural circuitry at the nanometer scale. Hayworth is also a Senior Scientist at the HHMI’s Janelia Farm Research Campus where he is currently researching ways to extend Focused Ion Beam Scanning Electron Microscopy (FIBSEM) of brain tissue to encompass much larger volumes than are currently possible. 
Hayworth is co-inventor of the ATUM-SEM process for high-throughput volume imaging of neural circuits at the nanometer scale and he designed and built several automated machines to implement this process. Hayworth received his PhD in Neuroscience from the University of Southern California for research into how the human visual system encodes spatial relations among objects. Hayworth is a vocal advocate for brain preservation and mind uploading and, through the BPF’s Brain Preservation Prize, he has challenged scientists and medical researchers to develop a reliable, scientifically verified surgical procedure which can demonstrate long-term ultrastructure preservation across an entire human brain. Once won, Hayworth advocates for the widespread implementation of such a surgical procedure in hospitals. Several research labs are currently attempting to win this prize.\n \n\n\n**Luke Muehlhauser**: One interesting feature of your own thinking ([Hayworth 2012](http://www.brainpreservation.org/sites/default/files/ElectronImagingTechnologyForWholeBrainNeuralCircuitMapping_Hayworth2012.pdf)) about whole brain emulation (WBE) is that you are more concerned with modeling high-level cognitive functions accurately than is e.g. [Sandberg (2013)](http://commonsenseatheism.com/wp-content/uploads/2012/09/Sandberg-Feasability-of-Whole-Brain-Emulation.pdf). Whereas Sandberg expects WBE will be achieved by modeling low-level brain function in exact detail (at the level of scale separation, wherever that is), you instead lean heavily on modeling higher-level cognitive processes using a cognitive architecture called [ACT-R](http://en.wikipedia.org/wiki/ACT-R). 
Is that because you think this will be *easier* than Sandberg’s approach, or for some other reason?\n\n\n\n\n---\n\n\n**Kenneth Hayworth**: I think the key distinction is that philosophers are focused on whether mind uploading (a term I prefer to WBE) is possible in principle, and, to a lesser extent, on whether it is of such technical difficulty as to put its achievement off so far into the future that its possibility can be safely ignored for today’s planning. With these motivations, philosophers tend to gravitate toward arguments with the fewest possible assumptions, i.e. modeling low-level brain functions in exact detail.\n\n\nAs a practicing cognitive scientist and neuroscientist I have fundamentally different motivations. From my training, I am already totally convinced that the functioning of the brain can be understood at a fully mechanistic level, with sufficient precision to allow for mind uploading. I just want to work toward making mind uploading happen in reality. To do this I need to start with an understanding of the requirements, not based on the fewest assumptions, but instead based on the field’s current best theories. \n\n\nTo use an analogy, before airplanes were invented one could argue that heavier-than-air flying machines must be possible in principle even if it meant copying a bird’s flapping motions etc. This is a sound philosophical argument but not one that engineers like the Wright Brothers would focus on. Instead they needed to know the crucial elements necessary for heavier-than-air flight: lift, drag, and thrust. Understanding these allowed them to start building and testing, refining their theories and engineering.\n\n\nIf we want to create the technology to upload someone’s mind into a computer simulation and have that simulation have the same memories, intelligence, personality, and consciousness as the original, then we need to start with a top-level understanding of these cognitive functions. 
The place to look for these is the field of cognitive science, and in particular its highly researched computational models of intelligent behavior such as ACT-R.\n\n\nI have focused my research on understanding how the computational elements of cognitive models such as ACT-R (symbolic representations, production rules, etc.) are likely mapped onto the neural circuits in the brain. This has led me to concrete predictions on exactly what technologies would be necessary to preserve, scan, and simulate a person’s brain. Those are the technologies I am currently working on.\n\n\n\n\n---\n\n\n**Luke Muehlhauser:** How widely used or accepted is ACT-R? Just looking through the literature, it doesn’t seem particularly dominant in cognitive neuroscience, though perhaps that’s just because *no* high-level model of its kind is dominant. E.g. ACT-R doesn’t seem to be mentioned in the 1100+ pages of the new *[Oxford Handbook of Cognitive Neuroscience](http://www.amazon.com/Handbook-Cognitive-Neuroscience-Library-Psychology/dp/0195381599/)*, nor in the 1800+ pages of the *[Encyclopedia of Behavioral Neuroscience](http://www.amazon.com/Encyclopedia-Behavioral-Neuroscience-Three--Set-ebook/dp/B00D8GA4XG/).* I rarely see mention of ACT-R by people who aren’t its proponents.\n\n\n\n\n---\n\n\n**Kenneth Hayworth**: Looking at the [main website for ACT-R](http://act-r.psy.cmu.edu/) I count a total of 1,035 individual publications related to ACT-R theory or using the ACT-R modeling framework. This publication record comes from hundreds of researchers and stretches back decades and continues strong today. ACT-R would not be considered a “cognitive neuroscience” theory since it does not talk about neurons (although there have been attempts to map ACT-R onto a neural framework which I will discuss). Instead, ACT-R is properly classified in the field of Cognitive Science. 
Of course there is a messy overlap across the fields of computer science, psychology, neuroscience, neurology, cognitive science, artificial intelligence, computational neuroscience, etc. and ACT-R certainly was designed with constraints from these various fields in mind.\n\n\nCognitive science can best be distinguished from neuroscience based on the typical level of abstraction of its models of intelligent behavior. Cognitive science is committed to modeling the mind as a computational system in the traditional sense of processing analog and symbolic representations of various complexity via algorithms. In general, cognitive science models of human intelligent behavior are couched in representational and algorithmic form. For example, a model of human attention and memory retention effects in a perceptual experiment might posit that our mind contains 4 buffers which can be used to store symbolic tokens representing individual letters that were briefly flashed on a viewing screen to the subject. Notice that cognitive science models like this do not talk about neurons or even brain regions, yet they make concrete testable predictions about behavioral responses, response timing, error rates, learning rates, etc. –effects which can be tested with great detail and precision in psychophysical experiments.\n\n\nIf you are looking for explanations for how humans understand natural language, or how they solve problems and reason about goals, or, crucially, how the mind creates and maintains a “self model” underlying our consciousness then cognitive science models at this level of abstraction are really the only place to look. 
Neuroscience should be seen as talking about the implementation details underlying the algorithms and representations assumed by such cognitive science models.\n\n\nNow cognitive science as a field has been around for a long time and has been generating an enormous variety of small models and algorithmic hypotheses for how humans perform various actions. Way back in 1973 the great cognitive scientist Allen Newell wrote a paper called “You can’t play 20 questions with nature and win” in which he pointed out to the cognitive science community that it needed to strive for “unified theories of cognition” which were designed to explain not just one experimental result but all of them simultaneously. He proposed a particular computational formalism called a “production system” that could provide the basis for such a unification of different cognitive science models. He and others developed that proposal into the widely used SOAR model of human cognition (see his book “Unified Theories of Cognition” for a complete explanation of production systems as a model of the human mind).\n\n\nThere have been many production system models of the human mind based upon this original work but Anderson’s ACT-R theory is currently the standard-bearer of this class. As such, ACT-R should not really be thought of as just another computational model, instead you should think of ACT-R as attempting a summary of all results from the entire field of cognitive science –a summary in the form of a general computational framework for the human mind. Below is a screen capture of the ACT-R website which keeps a list of all publications and ACT-R models generated over the years. You can see that the categories span the entire field of cognitive science (e.g. 
language processing, learning, perception, memory, problem solving, decision making, etc.)\n\n\n[![image00](https://intelligence.org/wp-content/uploads/2014/09/image00.png)](https://intelligence.org/wp-content/uploads/2014/09/image00.png)\nAlso you should note that since ACT-R is a computational framework, models built within the ACT-R framework really “work” in the sense that an ACT-R model of sentence parsing will actually parse sentences.\n\n\nI hear you when you say “[ACT-R] doesn’t seem particularly dominant in cognitive neuroscience”. This is absolutely true. Most of the neuroscientists I have met have never heard of ACT-R or production systems and usually are unfamiliar with most of the great discoveries in the field of Cognitive Science. This is a real tragedy since Cognitive Science and Neuroscience are properly seen as just two levels of description of the same organ –the brain. Without the abstractions of cognitive science (symbols, declarative memory chunks, production rules, goal buffers, etc.) neuroscience is faced with an insurmountable gap to cross in its attempt to explain how neurons give rise to mind. It would be like having to explain how the computer program Microsoft Word works by describing its operation at the transistor level. It just cannot be done. One must introduce intermediate levels of abstraction (memory buffers, if-then statements, while-loops, etc.) in order to create a theory of Microsoft Word which is understandable. The same is true when it comes to how mind is generated by the brain.\n\n\nI argue that any complete theory of how the mind is generated by the physical brain must include at least the following four levels of description:\n\n\n1. Philosophical theories of consciousness and self (Example: Thomas Metzinger’s book “Being No One: The Self-Model Theory of Subjectivity”)\n2. 
Cognitive science models of the human cognitive control architecture (Example: John Anderson’s book “How can the human mind occur in the physical universe” –his most recent overview of ACT-R)\n3. Abstract “artificial” neural network architectures of hippocampal memory systems, perceptual feature hierarchies, reinforcement learning pattern recognition networks, etc. (Example: Edmund Rolls’ book “Neural Networks and Brain Function”)\n4. Electrical and molecular models of real biological neurons and systems (Example: Kandel, Schwartz and Jessell’s book “Principles of Neural Science”)\n\n\nIf you cannot deftly switch between these levels of description, understanding the core principles of each and how they are built upon one another, then you are unprepared to understand what is required to accomplish whole brain emulation. Simply understanding neuroscience is not enough.\n\n\nOne of the weakest links which exists today in this hierarchy of descriptive levels is the link between #2 and #3. This is why I have been putting considerable effort into showing how the symbolic computations assumed by ACT-R might be implemented by standard artificial neural network models of autoassociative memory etc. My 2012 publication [“Dynamically partitionable autoassociative networks as a solution to the neural binding problem](http://journal.frontiersin.org/Journal/10.3389/fncom.2012.00073/abstract)” is a first attempt at this. I am currently preparing a new publication and set of neural models designed to make this link even clearer. My goal is to demonstrate to the neuroscience community that the level of abstraction assumed by cognitive science models like ACT-R is not so removed from the neural network models which neuroscientists currently embrace. 
An effective bridging of this gap between cognitive science models and neural network models has the potential to release a wave of synergy between the fields, in which the top-down constraints of cognitive science can directly inform neuroscience models, and the bottom-up constraints of neuroscience can directly inform cognitive models. \n\n\n\n\n---\n\n\n**Luke**: How well-specified is ACT-R? In particular, can you give an example of ACT-R making a surprising, novel quantitative prediction that, upon experiment, turned out to be correct? (ACT-R could still be useful even if this hasn’t happened yet, but if there are examples of this happening then it’d be nice to know about them!)\n\n\n\n\n---\n\n\n**Kenneth**: One difficulty with answering that question is that ACT-R is really a “framework” in which to create models of brain function, it is not a model in and of itself. A researcher typically uses the ACT-R framework to create a model of a particular cognitive task –say modeling how a student learns to solve algebraic expressions. To be successful, the model must make quantitative predictions about the error rates and types of errors, the speed of responses and how this speed increases with practice, etc. Any particular ACT-R model consists of a set of initial “productions” (symbolic pattern matching if-then rules) and “declarative memory chunks” (symbolic memories) which are assumed to have already been learned by the individual. The model will produce intelligent behaviors using these initial production rules and memory chunks. It will also change the weights of the rules and chunks as it learns by trial and error, and it will produce new production rules and new memories as it interacts with its simulated environment.\n\n\nNow one can ask your question about novel predictions regarding either a particular ACT-R model or regarding the ACT-R framework itself.  
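The production-and-chunk machinery described here can be sketched as a toy program: declarative memory chunks, an if-then production rule, and a match-fire cycle operating on a goal buffer. This is an illustrative sketch only, not actual ACT-R code, and every name in it is hypothetical:

```python
# Toy sketch of a production-system "cognitive cycle" in the spirit of
# the ACT-R machinery described above: declarative memory "chunks" plus
# an if-then "production" rule firing against a goal buffer.
# Illustrative only, not actual ACT-R code; all names are hypothetical.

# Declarative memory: chunks encoding counting-order facts (1 follows 0, ...)
chunks = {("after", i): i + 1 for i in range(5)}

def production_count(goal):
    """IF the goal is counting at n and memory holds a chunk ("after", n),
    THEN say n and advance the goal buffer to the next number."""
    n = goal["current"]
    if goal["task"] == "count" and ("after", n) in chunks:
        goal["current"] = chunks[("after", n)]
        return n
    return None

# The cycle: match and fire one production per step
# (ACT-R fixes each such step at ~50 ms, as noted below).
goal = {"task": "count", "current": 0}
said = []
while goal["current"] < 3:
    out = production_count(goal)
    if out is None:
        break
    said.append(out)

print(said)  # -> [0, 1, 2]
```

A real ACT-R model layers sub-symbolic machinery on this skeleton: the rule and chunk weights that Hayworth mentions, which change as the model learns by trial and error.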
To pick one particularly good example of an ACT-R model making “surprising and novel” predictions you might want to look at the paper: “A Central Circuit of the Mind” (Anderson, Fincham, Qin, and Stocco 2008). In that paper (and a host of follow up papers) an ACT-R model of a cognitive task was used to predict not only behavioral responses but fMRI BOLD activation levels and timing in subjects performing the cognitive task. Considering that the ACT-R community spent many years developing models of brain function without fMRI data as a constraint, it is particularly intriguing to see how well those models have fared when viewed against this new type of data.\n\n\nAs for predictions of the ACT-R architecture itself, I would like to point to perhaps ACT-R’s central prediction going straight back to its inception –the prediction that there are two types of learning: “procedural” and “declarative”. This distinction is so well established now (think of studies of the amnesia patient HM) that it is hard to remember that this was anything but settled when ACT-R was invented. In fact, this is the key difference between the ACT-R and the SOAR cognitive architectures. SOAR was built on an incorrect prediction –that all knowledge in the brain was encoded as procedural rules. This incorrect prediction is why SOAR lost favor with the cognitive science community and why ACT-R is still widely used.\n\n\nYou ask “How well-specified is ACT-R?”. This is a good question. The ACT-R architecture has changed significantly over the years as new information has become available. Over the last decade it went through a dramatic simplification in the complexity that was allowed for the pattern matching in production rules. It also has set the values of some of its core performance and learning parameters (for example the time it takes to execute a single production rule is now set to be 50ms). 
These ACT-R architectural parameters were of course determined by fitting data to behavioral experiments, but since ACT-R is meant to be a theory of the brain as a whole, individual models are not allowed to arbitrarily change these parameters to fit their particular data. In effect, these ACT-R architectural features have become a significant constraint on the degrees of freedom modelers have when proposing new ACT-R models of particular cognitive tasks. The success or failure of these tighter specifications of the ACT-R framework parameters are best judged by how successful it still is for modeling a wide range of behaviors. As for that I would again point to the wide range of publications using ACT-R.\n\n\nI don’t think one should judge ACT-R on the basis of novel predictions however. It is designed as a summary theory to account for a large body of cognitive science facts. It should, and has, changed when its predictions were found unsound. For example, ACT-R’s original model of production matching was essentially found untenable given what we know about neural networks, so it was jettisoned for a new simpler version.\n\n\nI think the best way to judge ACT-R is to understand what it is meant to do and what it is not meant to do. ACT-R is used to model cognitive tasks at the computational level, not at the neural implementation level. It is quite specific and well specified at the computational level and has been shown to provide satisfying (tentative) explanations for a wide range of intelligent human behaviors at this level. I have argued that it is our best model of the human mind at this computational level and therefore we as neuroscientists should use it as a starting point for understanding what types of high-level computations the neural circuits of the brain are likely performing.\n\n\n\n\n---\n\n\n**Luke**: Back to WBE. 
You were a participant in the 2007 workshop that led to FHI’s [Whole Brain Emulation: A Roadmap](http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) report. Would you mind sharing your own estimates on some of the key questions from the report? In particular, Table 4 of the report (p. 39) summarizes possible modeling complications, along with estimates of how likely they are to be necessary for WBE and how hard they would be to implement. Which of those estimates do you disagree with most strongly? (By “WBE” I mean what the report calls success criterion 6a (“social role-fit emulation”), so as to set aside questions of consciousness and personal identity.)\n\n\n\n\n---\n\n\n**Kenneth**: First I would like to say how fantastic it was for Anders and the FHI to put on that workshop. It is amazing to see how far the field has progressed in the meantime. I agree with most of the main conclusions of that report, but since you asked for what I disagree with I will focus in on those few points on which I have reservations.\n\n\nThe report starts off with a section entitled “Little need for whole-system understanding” where the authors state: “An important hypothesis for WBE is that in order to emulate the brain we do not need to understand the whole system, but rather we just need a database containing all necessary low‐level information about the brain and knowledge of the local update rules that change brain states from moment to moment.” (p. 8)\n\n\nI agree that it is technically possible to achieve WBE without an overall understanding of how the brain works, but I seriously doubt this is the way it will happen. I believe our neuroscience models, and the experimental techniques to test them, will continue their rapid advance over the next few decades so that we will reach a very deep understanding of the brain’s functioning at all levels well before we have the technology for applying WBE to a human. 
Under that scenario we will understand exactly which features listed in table 4 of the report are needed, and which are unnecessary.\n\n\nAs a concrete example (meant to be controversial), there are several types of cortical neurons thought to provide general feedforward and feedback inhibition to the cortex’s main excitatory cells. These inhibitory cells effectively regulate the percentage of the excitatory cells that are allowed to be active at a given time (i.e. the ‘sparseness’ of the neural representation).  I doubt that the details of these inhibitory cells will be modeled at all in a future WBE since it would be much easier, and more reliable, in a computer model to enforce this sparseness of firing directly. Now this suggestion will probably strike many of your readers as a particularly risky shortcut, and they may ask “How can we be sure that the detailed functioning of these inhibitory neurons is not crucial to the operation of the mind?” This may turn out to be the case, but my point here is that by the time we have the technology to actually try a human WBE we will know for sure what ‘shortcuts’ are acceptable.  The experiments needed to test these hypotheses are much simpler than WBE itself.\n\n\n\n\n---\n\n\n**Luke**: In your view, what are some specific, tractable “next research projects” that, if funded, could show substantial progress toward WBE in the next 10 years?\n\n\n\n\n---\n\n\n**Kenneth**: Over the next ten years progress in WBE is likely to be tied to progress in the field of connectomics (automated electron microscopic (EM) mapping of brain tissue to reveal the precise connectivity between neurons). Since the 2004 invention of the [serial block face scanning electron microscope](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC524270/) (SBFSEM, or SBEM) there has been an explosion of interest in this area. 
SBFSEM was the first automated device to really show how one could, in principle, “deconstruct brain tissue” and extract its precise wiring diagram. Since then the SBFSEM has undergone dramatic improvements and has been joined by several other automated electron microscopy tools for mapping brain tissue at the nanometer scale ([TEMCA](http://www.nature.com/nature/journal/v471/n7337/full/nature09802.html), [ATUM-SEM](http://journal.frontiersin.org/Journal/10.3389/fncir.2014.00068/abstract), [FIB-SEM](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3196160/)). Kevin Briggman and Davi Bock (two of the leaders in this field) wrote a great [review article](http://www.ncbi.nlm.nih.gov/pubmed/22119321) in 2012 covering the current state of all of these connectomics imaging technologies. In short, neuroscientists finally have some automated tools which can image the complete synaptic connectivity of the neural circuits they have been studying for years by behavioral and electrophysiological means. Several “high profile” connectomics publications have recently come out ([Nature 2011](http://www.nature.com/nature/journal/v471/n7337/full/nature09802.html), [Nature 2011](http://www.nature.com/nature/journal/v471/n7337/full/nature09818.html), [Nature 2013](http://www.nature.com/nature/journal/v500/n7461/full/nature12450.html), [Nature 2013](http://www.nature.com/nature/journal/v500/n7461/full/nature12346.html), [Nature 2014](http://www.nature.com/nature/journal/v509/n7500/full/nature13240.html)) giving a taste of what can be accomplished with such tools, but these publications represent just the “tip of the iceberg” of what is likely to be a revolution in the way neuroscience is done, a revolution that over the long run will lead to WBE.\n\n\nThere is, however, one thing that is currently holding back the entire field of connectomics – the lack of an automated solution to tracing neural connectivity. 
Even though there has been fantastic progress in automating 3D EM imaging, each of these connectomics publications required a small army of dedicated human tracers to supplement and “error-correct” the output of today’s inadequate software tracing algorithms. This roadblock has been widely recognized by connectomics practitioners:\n\n\n“[A]ll cellular-resolution connectomics studies to date have involved thousands to tens-of-thousands of hours of [human labor for neural] reconstruction. Although imaging speed has substantially increased, reconstruction speed is massively lagging behind. For any of the proposed dense-circuit reconstructions (mouse neocortex [column], olfactory bulb, fish brain and human neocortex [column]…), analysis-time estimates are at least one if not several orders of magnitude larger than what has been accomplished to date… requirements for these envisioned projects are around several hundred thousand hours of manual labor per project. These enormous numbers constitute what can be called the analysis gap in cellular connectomics: although imaging larger circuits is becoming more and more feasible, reconstructing them is not.” – Moritz Helmstaedter ([Nature Methods review 2013](http://www.nature.com/nmeth/journal/v10/n6/full/nmeth.2476.html))\n\n\nA full solution to this “analysis gap” will likely require advances on three fronts:\n\n\n1. Improved 3D imaging resolution\n2. Improved tissue preservation and staining\n3. More advanced algorithms\n\n\nMost of today’s connectomics imaging technologies still rely on physical sectioning of tissue which practically limits the “z” resolution of EM imaging to >20nm. However, the [FIB-SEM](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3196160/) technique instead uses a focused ion beam to reliably “polish” away a <5nm layer of tissue between each EM imaging step. This means that FIB-SEM datasets can image tissue with voxel resolutions as low as 5x5x5nm. 
This extra resolution is crucial when trying to follow tubular neuronal processes which can often shrink to <40nm in diameter. This improved FIB-SEM resolution has been [demonstrated](http://www.sciencedirect.com/science/article/pii/S095943881400035X) to dramatically increase the effectiveness of today’s tracing algorithms. I have been working on extending this FIB-SEM technique to make it capable of handling arbitrarily large volumes. I do this by using a heated, oil-lubricated diamond knife to losslessly subdivide a large block of tissue into chunks optimally-sized (~20 microns thick) for high-resolution, parallel FIBSEM imaging. I will have a paper on this coming out later this year.\n\n\nThis higher resolution (FIB-SEM) imaging, along with likely advances in tracing algorithms, should finally be able to achieve the goal of fully-automated tracing of neural tissue –i.e. it should overcome the “analysis gap”, eliminating the reliance on human tracing.\n\n\nThat is, as long as the tissue is optimally preserved and stained for EM imaging. Unfortunately, today’s best tissue preservation and staining protocols are limited to <200 micron thick tissue slabs. This volume limit also represents a fundamental roadblock to connectomics, and there is only one researcher that I know of that has seriously taken up the challenge to remove this limitation –[Shawn Mikula](http://connectomes.org/index.php?p=about-connectomes). 
He has made [substantial progress](http://connectomes.org/pdf/mikula2012.pdf) toward this goal, but considerable work remains.\n\n\nSo regarding your question for “specific, tractable next research projects”, I would offer up the following: A project to develop a protocol for preserving, staining, and losslessly subdividing an entire mouse brain for random-access FIB-SEM imaging, along with a project to build a “farm” of hundreds of inexpensive FIB-SEM machines capable of imaging neural circuits spanning large regions of this mouse’s brain.\n\n\nI believe that the Mikula protocol can be extended to provide excellent EM staining for all parts of a mouse brain (Mikula, personal communications). I also believe, with sufficient work and funding, that this mouse protocol could be made compatible with my “lossless subdivision” technique which would allow the entire mouse brain to be quickly and reliably chopped up into little cubes (~50x50x50 microns in size) any of which could be loaded into a FIB-SEM machine for imaging. I also believe, with sufficient funding, that today’s generation of expensive (>1million USD) FIB-SEM machines could be redesigned, and drastically simplified for mass-manufacture.\n\n\nI think that such a project is indeed tractable over a ten year time frame, and it would in effect provide neuroscientists with a tool (what I have termed a “[connectome observatory](https://www.youtube.com/watch?v=zp1TlhxZQmk)”) to map out the neural circuits involved in vision, memory, motor control, reinforcement learning, etc. all within the same mouse brain. Allowing researchers to see not only an individual region’s circuits, but also the long distance connections among these regions which allow them to act as a coherent whole controlling the mouse’s behavior. 
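For a rough sense of scale, combining the figures quoted above (5x5x5 nm FIB-SEM voxels, ~50x50x50 micron tissue cubes) gives a back-of-envelope estimate of the imaging data per cube; a minimal sketch, with the 1-byte-per-voxel assumption being mine:

```python
# Back-of-envelope data volume for one FIB-SEM tissue cube, using the
# figures quoted above: ~50x50x50 micron cubes imaged at 5x5x5 nm voxel
# resolution. The 1 byte/voxel figure is an illustrative assumption.
cube_side_nm = 50_000          # 50 microns expressed in nanometers
voxel_side_nm = 5              # FIB-SEM voxel edge length
voxels_per_side = cube_side_nm // voxel_side_nm   # 10,000 voxels per edge
total_voxels = voxels_per_side ** 3               # 1e12 voxels per cube
terabytes_at_1_byte = total_voxels / 1e12         # ~1 TB per cube at 1 byte/voxel
print(total_voxels, terabytes_at_1_byte)
```

At roughly a terabyte per cube, a mouse brain of about half a cubic centimeter would divide into millions of such cubes, which is one way to see why a mass-manufactured "farm" of inexpensive FIB-SEM machines is proposed.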
I think it is at this level that we will begin to see rudiments of the types of coordinated, symbol-level operations predicted by cognitive models like ACT-R, the types of executive control circuits which in us evolved to underlie our unique intelligence and consciousness.\n\n\nWe must remember and accept that the long-term goal of human whole brain emulation is almost unimaginably difficult. It is ludicrous to expect significant progress toward that goal in the next decade or two. But I believe we should not shrink away from saying that that goal is the one we are ultimately pursuing. I believe neuroscientists and cognitive scientists should proudly embrace the idea that their fields will eventually succeed in revealing a complete and satisfying mechanistic explanation of the human mind and consciousness, and that this understanding will inevitably lead to the technology of mind uploading. There is so much hard work to be done that we cannot afford to neglect the tremendous payoffs awaiting in the event we succeed.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Kenneth!\n\n\nThe post [Ken Hayworth on brain emulation prospects](https://intelligence.org/2014/09/09/hayworth/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-09-10T01:17:54Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "489d85c41030c659877414f4de302976", "title": "Friendly AI Research Help from MIRI", "url": "https://intelligence.org/2014/09/08/friendly-ai-research-help-miri/", "source": "miri", "source_type": "blog", "text": "Earlier this year, a student told us he was writing an honors thesis on logical decision theories such as TDT and UDT — one of MIRI’s core research areas. Our reply was “Why didn’t you tell us this earlier? When can we fly you to Berkeley to help you with it?”\n\n\nSo we flew Danny Hintze to Berkeley and he spent a couple days with Eliezer Yudkowsky to clarify the ideas for the thesis. 
Then Danny went home and wrote what is [probably the best current introduction to logical decision theories](https://intelligence.org/wp-content/uploads/2014/10/Hintze-Problem-Class-Dominance-In-Predictive-Dilemmas.pdf).\n\n\nInspired by this success, today we are launching the [Friendly AI Research Help program](http://intelligence.org/researchhelp/), which encourages students of mathematics, computer science, and formal philosophy to collaborate and consult with our researchers to help steer and inform their work.\n\n\nApply for research help [here](http://intelligence.org/researchhelp/).\n\n\n[![thesishelpheadersmall](https://intelligence.org/wp-content/uploads/2014/09/thesishelpheadersmall.jpg)](https://intelligence.org/wp-content/uploads/2014/09/thesishelpheadersmall.jpg)\nThe post [Friendly AI Research Help from MIRI](https://intelligence.org/2014/09/08/friendly-ai-research-help-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-09-08T22:20:55Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "22be5a5d6c9059eab5e3ae28eaf78c34", "title": "John Fox on AI safety", "url": "https://intelligence.org/2014/09/04/john-fox/", "source": "miri", "source_type": "blog", "text": "[John Fox](http://www.cossac.org/people/johnfox) is an interdisciplinary scientist with theoretical interests in AI and computer science, and an applied focus in medicine and medical software engineering. After training in experimental psychology at Durham and Cambridge Universities and post-doctoral fellowships at CMU and Cornell in the USA and UK (MRC) he joined the Imperial Cancer Research Fund (now Cancer Research UK) in 1981 as a researcher in medical AI. 
The group’s research was explicitly multidisciplinary and it subsequently made significant contributions in basic computer science, AI and medical informatics, and developed a number of successful technologies which have been commercialised.\n\n\nIn 1996 he and his team were awarded the 20th Anniversary Gold Medal of the European Federation of Medical Informatics for the development of PROforma, arguably the first formal computer language for modeling clinical decision and processes. Fox has published widely in computer science, cognitive science and biomedical engineering, and was the founding editor of the *Knowledge Engineering Review* (Cambridge University Press). Recent publications include a research monograph *[Safe and Sound: Artificial Intelligence in Hazardous Applications](http://smile.amazon.com/Safe-Sound-Artificial-Intelligence-Applications/dp/0262062119/ref=nosim?tag=793775876-20)* (MIT Press, 2000) which deals with the use of AI in safety-critical fields such as medicine.\n\n\n\n**Luke Muehlhauser**: You’ve spent many years studying AI safety issues, in particular in medical contexts, e.g. in your 2000 book with Subrata Das, *[Safe and Sound: Artificial Intelligence in Hazardous Applications](http://smile.amazon.com/Safe-Sound-Artificial-Intelligence-Applications/dp/0262062119/ref=nosim?tag=793775876-20)*. What kinds of AI safety challenges have you focused on in the past decade or so?\n\n\n\n\n---\n\n\n**John Fox**: From my first research job, as a post-doc with AI founders Allen Newell and Herb Simon at CMU, I have been interested in computational theories of high level cognition. As a cognitive scientist I have been interested in theories that subsume a range of cognitive functions, from perception and reasoning to the uses of knowledge in autonomous decision-making. 
After I came back to the UK in 1975 I began to combine my theoretical interests with the practical goals of designing and deploying AI systems in medicine.\n\n\nSince our book was published in 2000 I have been committed to testing the ideas in it by designing and deploying many kinds of clinical systems, and demonstrating that AI techniques can significantly improve the quality and safety of clinical decision-making and process management. Patient safety is fundamental to clinical practice so, alongside the goals of building systems that can improve on human performance, safety and ethics have always been near the top of my research agenda.\n\n\n\n\n---\n\n\n**Luke Muehlhauser:** Was it straightforward to address issues like safety and ethics in practice?\n\n\n\n\n---\n\n\n**John Fox**: While our concepts and technologies have proved to be clinically successful we have not achieved everything we hoped for. Our attempts to ensure, for example, that practical and commercial deployments of AI technologies should explicitly honor ethical principles and carry out active safety management have not yet achieved the traction that we need. I regard this as a serious cause for concern, and unfinished business in both scientific and engineering terms.\n\n\nThe next generation of large-scale knowledge based systems and software agents that we are now working on will be more intelligent and will have far more autonomous capabilities than current systems. The challenges for human safety and ethical use of AI that this implies are beginning to mirror those raised by the singularity hypothesis. 
We have much to learn from singularity researchers, and perhaps our experience in deploying autonomous agents in human healthcare will offer opportunities to ground some of the singularity debates as well.\n\n\n\n\n\n---\n\n\n**Luke**: You write that your “attempts to ensure… [that] commercial deployments of AI technologies should… carry out active safety management” have not yet received as much traction as you would like. Could you go into more detail on that? What did you try to accomplish on this front that didn’t get adopted by others, or wasn’t implemented?\n\n\n\n\n---\n\n\n**John**: Having worked in medical AI from the early ‘seventies I have always been keenly aware that while AI can help to mitigate the effects of human error there is a potential downside too. AI systems could be programmed incorrectly, or their knowledge could prescribe inappropriate practices, or they could have the effect of deskilling the human professionals who have the final responsibility for their patients. Despite well-known limitations of human cognition people remain far and away the most versatile and creative problem solvers on the planet.\n\n\nIn the early ‘nineties I had the opportunity to set up a project whose goal was to establish a rigorous framework for the design and implementation of AI systems for safety critical applications. Medicine was our practical focus but the RED project[1](https://intelligence.org/2014/09/04/john-fox/#footnote_0_11319 \"Rigorously Engineered Decisions\") was aimed at the development of a general architecture for the design of autonomous agents that could be trusted to make decisions and carry out plans as reliably and safely as possible, certainly to be as competent and hence as trustworthy as human agents in comparable tasks. 
This is obviously a hard problem but we made [sufficient progress](http://www.tandfonline.com/doi/abs/10.1080/095281397146979) on theoretical issues and design principles that I thought there was a good chance the techniques might be applicable in medical AI and maybe even more widely.\n\n\nI thought AI was like medicine, where we all take it for granted that medical equipment and drug companies have a duty of care to show that their products are effective and safe before they can be certificated for commercial use. I also assumed that AI researchers would similarly recognize that we have a “duty of care” to all those potentially affected by poor engineering or misuse in safety critical settings, but this was naïve. The commercial tools that have been based on the technologies derived from AI research have to date focused on just getting and keeping customers, and safety always takes a back seat.\n\n\nIn retrospect I should have predicted that making sure that AI products are safe is not going to capture the enthusiasm of commercial suppliers. If you compare AI apps with drugs we all know that pharmaceutical companies have to be firmly regulated to make sure they fulfill their duty of care to their customers and patients. However proving drugs are safe is expensive and also runs the risk of revealing that your new wonder-drug isn’t even as effective as you claim! It’s the same with AI.\n\n\nI continue to be surprised how optimistic software developers are – they always seem to have supreme confidence that worst-case scenarios won’t happen, or that if they do happen then their management is someone else’s responsibility. That kind of technical over-confidence has led to countless catastrophes in the past, and it amazes me that it persists.\n\n\nThere is another piece to this, which concerns the roles and responsibilities of AI researchers. 
How many of us take the risks of AI seriously so that it forms a part of our day-to-day theoretical musings and influences our projects? MIRI has put one worst case scenario in front of us – the possibility that our creations might one day decide to obliterate us – but so far as I can tell the majority of working AI professionals either see safety issues as irrelevant to the pursuit of interesting scientific questions or believe, like the wider public, that the issues are just science fiction.\n\n\nI think experience in medical AI trying to articulate and cope with human risk and safety may have a couple of important lessons for the wider AI community. First we have a duty of care that professional scientists cannot responsibly ignore. Second, the AI business will probably need to be regulated, in much the same way as the pharmaceutical business is. If these propositions are correct then the AI research community would be wise to engage with and lead on discussions around safety issues if it wants to ensure that the regulatory framework that we get is to our liking!\n\n\n\n\n---\n\n\n**Luke**: Now you write, “That kind of technical over-confidence has led to countless catastrophes in the past…” What are some example “catastrophes” you’re thinking of?\n\n\n\n\n---\n\n\n**John**: Psychologists have known for years that human decision-making is flawed, even if amazingly creative sometimes, and overconfidence is an important source of error in routine settings. A large part of the motivation for applying AI in medicine comes from the knowledge that, in the words of the Institute of Medicine, “To err is human” and overconfidence is an established cause of clinical mistakes.[2](https://intelligence.org/2014/09/04/john-fox/#footnote_1_11319 \"Overconfidence in major disasters:\n• D. Lucas. Understanding the Human Factor in Disasters. Interdisciplinary Science Reviews. Volume 17 Issue 2 (01 June 1992), pp. 
185-190.\n• “Nuclear safety and security.\nPsychology of overconfidence:\n• Overconfidence effect.\n• C. Riordan. Three Ways Overconfidence Can Make a Fool of You Forbes Leadership Forum.\nOverconfidence in medicine:\n• R. Hanson. Overconfidence Erases Doc Advantage. Overcoming Bias, 2007.\n• E. Berner, M. Graber. Overconfidence as a Cause of Diagnostic Error in Medicine. The American Journal of Medicine. Volume 121, Issue 5, Supplement, Pages S2–S23, May 2008.\n• T. Ackerman. Doctors overconfident, study finds, even in hardest cases. Houston Chronicle, 2013.\nGeneral technology example:\n• J. Vetter, A. Benlian, T. Hess. Overconfidence in IT Investment Decisions: Why Knowledge can be a Boon and Bane at the same Time. ICIS 2011 Proceedings. Paper 4. December 6, 2011.\")\n\n\nOver-confidence and its many relatives (complacency, optimism, arrogance and the like) have a huge influence on our personal successes and failures, and our collective futures. The outcomes of the US and UK’s recent adventures around the world can be easily identified as consequences of overconfidence, and it seems to me that the polarized positions about global warming and planetary catastrophe are both expressions of overconfidence, just in opposite directions.\n\n\n\n\n---\n\n\n**Luke**: Looking much further out… if one day we can engineer [AGIs](http://intelligence.org/2013/08/11/what-is-agi/), do you think we are likely to figure out how to make them safe?\n\n\n\n\n---\n\n\n**John**: History says that making any technology safe is not an easy business. It took quite a few boiler explosions before high-pressure steam engines got their iconic centrifugal governors. Ensuring that new medical treatments are safe as well as effective is famously difficult and expensive. I think we should assume that getting to the point where an AGI manufacturer could guarantee its products are safe will be a hard road, and it is possible that guarantees are not possible in principle. 
We are not even clear yet what it means to be “safe”, at least not in computational terms.\n\n\nIt seems pretty obvious that entry level robotic products like the robots that carry out simple domestic chores or the “nursebots” that are being trialed for hospital use have such a simple repertoire of behaviors that it should not be difficult to design their software controllers to operate safely in most conceivable circumstances. Standard safety engineering techniques like HAZOP[3](https://intelligence.org/2014/09/04/john-fox/#footnote_2_11319 \"Hazard and operability study\") are probably up to the job, I think, and where software failures simply cannot be tolerated software engineering techniques like formal specification and model-checking are available.\n\n\nThere is also quite a lot of optimism around more challenging robotic applications like autonomous vehicles and medical robotics. Moustris et al.[4](https://intelligence.org/2014/09/04/john-fox/#footnote_3_11319 \"Int J Med Robotics Comput Assist Surg 2011; 7: 375–39\") say that autonomous surgical robots are emerging that can be used in various roles, automating important steps in complex operations like open-heart surgery for example, and they expect them to become standard in – and to revolutionize the practice of – surgery. However at this point it doesn’t seem to me that surgical robots with a significant cognitive repertoire are feasible, and a human surgeon will be in the loop for the foreseeable future.\n\n\n\n\n---\n\n\n**Luke**: So what might artificial intelligence learn from natural intelligence?\n\n\n\n\n---\n\n\n**John**: As a cognitive scientist working in medicine my interests are co-extensive with those of scientists working on AGIs. Medicine is such a vast domain that practicing it safely requires the ability to deal with countless clinical scenarios and interactions, and even working in a single specialist subfield requires substantial knowledge from other subfields. 
So much so that it is now well known that even very experienced humans with a large clinical repertoire are subject to significant levels of error.[5](https://intelligence.org/2014/09/04/john-fox/#footnote_4_11319 \"A. Ford. Domestic Robotics – Leave it to Roll-Oh, our Fun loving Retrobot. Institute for Ethics and Emerging Technologies, 2014.\") An artificial intelligence that could be helpful across medicine will require great versatility, and this will require a general understanding of medical expertise and a range of cognitive capabilities like reasoning, decision-making, planning, communication, reflection, learning and so forth.\n\n\nIf human experts are not safe, is it even possible to ensure that an AGI, however sophisticated, will be? I think that it is pretty clear that the range of techniques currently available for assuring system safety will be useful in making specialist AI systems reliable and minimizing the likelihood of errors in situations that their human designers can anticipate. However, AI systems with general intelligence will be expected to address scenarios and hazards that are currently beyond us to solve and often beyond designers even to anticipate. I am optimistic but at the moment I don’t see any convincing reason to believe that we have the techniques that would be sufficient to guarantee that a clinical super-intelligence is safe, let alone an AGI that might be deployed in many domains.\n\n\n \n\n\n\n\n---\n\n\n**Luke**: Thanks, John!\n\n\n\n\n---\n\n1. [Rigorously Engineered Decisions](http://www.cossac.org/projects/red)\n2. Overconfidence in major disasters:\n• D. Lucas. [*Understanding the Human Factor in Disasters.*](http://www.maneyonline.com/doi/abs/10.1179/isr.1992.17.2.185) Interdisciplinary Science Reviews. Volume 17 Issue 2 (01 June 1992), pp. 185-190. 
\n\n• “[Nuclear safety and security.](http://en.wikipedia.org/wiki/Nuclear_safety_and_security)\n\n\nPsychology of overconfidence:\n\n\n• [Overconfidence effect.](http://en.wikipedia.org/wiki/Overconfidence_effect) \n\n• C. Riordan. [Three Ways Overconfidence Can Make a Fool of You](http://www.forbes.com/sites/forbesleadershipforum/2013/01/08/three-ways-overconfidence-can-make-a-fool-of-you/) Forbes Leadership Forum.\n\n\nOverconfidence in medicine:\n\n\n• R. Hanson. [Overconfidence Erases Doc Advantage.](http://www.overcomingbias.com/2007/04/overconfidence_.html) Overcoming Bias, 2007. \n\n• E. Berner, M. Graber. [Overconfidence as a Cause of Diagnostic Error in Medicine.](http://www.amjmed.com/article/S0002-9343(08)00040-5/abstract) The American Journal of Medicine. Volume 121, Issue 5, Supplement, Pages S2–S23, May 2008. \n\n• T. Ackerman. [Doctors overconfident, study finds, even in hardest cases.](http://www.houstonchronicle.com/news/health/article/Doctors-overconfident-study-finds-even-in-4766096.php) Houston Chronicle, 2013.\n\n\nGeneral technology example:\n\n\n• J. Vetter, A. Benlian, T. Hess. [Overconfidence in IT Investment Decisions: Why Knowledge can be a Boon and Bane at the same Time.](http://aisel.aisnet.org/icis2011/proceedings/generaltopics/4/) ICIS 2011 Proceedings. Paper 4. December 6, 2011.\n3. [Hazard and operability study](http://en.wikipedia.org/wiki/Hazard_and_operability_study)\n4. Int J Med Robotics Comput Assist Surg 2011; 7: 375–39\n5. A. Ford. [Domestic Robotics – Leave it to Roll-Oh, our Fun loving Retrobot](http://ieet.org/index.php/IEET/more/ford20140702). 
Institute for Ethics and Emerging Technologies, 2014.\n\nThe post [John Fox on AI safety](https://intelligence.org/2014/09/04/john-fox/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-09-04T19:00:13Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7c00d6eb0399ee827f177dc3b4518c6b", "title": "Daniel Roy on probabilistic programming and AI", "url": "https://intelligence.org/2014/09/04/daniel-roy/", "source": "miri", "source_type": "blog", "text": "![Daniel Roy portrait](http://intelligence.org/wp-content/uploads/2014/08/Roy__h692_w540.jpg) [Daniel Roy](http://danroy.org/) is an Assistant Professor of Statistics at the University of Toronto. Roy earned an S.B. and M.Eng. in Electrical Engineering and Computer Science, and a Ph.D. in Computer Science, from MIT.  His dissertation on probabilistic programming received the department’s George M Sprowls Thesis Award.  Subsequently, he held a Newton International Fellowship of the Royal Society, hosted by the Machine Learning Group at the University of Cambridge, and then held a Research Fellowship at Emmanuel College. Roy’s research focuses on theoretical questions that mix computer science, statistics, and probability.\n\n\n\n**Luke Muehlhauser**: The abstract of [Ackerman, Freer, and Roy (2010)](http://danroy.org/papers/AckFreRoy-CompCondProb-preprint.pdf) begins:\n\n\n\n> As inductive inference and machine learning methods in computer science see continued success, researchers are aiming to describe even more complex probabilistic models and inference algorithms. What are the limits of mechanizing probabilistic inference? 
We investigate the computability of conditional probability… and show that there are computable joint distributions with noncomputable conditional distributions, ruling out the prospect of general inference algorithms.\n> \n> \n\n\nIn what sense does your result (with Ackerman & Freer) rule out the prospect of general inference algorithms?\n\n\n\n\n---\n\n\n**Daniel Roy**: First, it’s important to highlight that when we say “probabilistic inference” we are referring to the problem of computing [conditional probabilities](http://en.wikipedia.org/wiki/Conditional_probability), while highlighting the role of conditioning in Bayesian statistical analysis.\n\n\nBayesian inference centers around so-called posterior distributions. From a subjectivist standpoint, the posterior represents one’s updated beliefs after seeing (i.e., conditioning on) the data. Mathematically, a posterior distribution is simply a conditional distribution (and every conditional distribution can be interpreted as a posterior distribution in some statistical model), and so our study of the computability of conditioning also bears on the problem of computing posterior distributions, which is arguably one of the core computational problems in Bayesian analyses.\n\n\nSecond, it’s important to clarify what we mean by “general inference”. In machine learning and artificial intelligence (AI), there is a long tradition of defining formal languages in which one can specify probabilistic models over a collection of variables. Defining distributions can be difficult, but these languages can make it much more straightforward.\n\n\nThe goal is then to design algorithms that can use these representations to support important operations, like computing conditional distributions. 
Bayesian networks can be thought of as such a language: You specify a distribution over a collection of variables by specifying a graph over these variables, which breaks down the entire distribution into “local” conditional distributions corresponding to each node, which are themselves often represented as tables of probabilities (at least in the case where all variables take on only a finite set of values). Together, the graph and the local conditional distributions determine a unique distribution over all the variables.\n\n\nAn inference algorithm that supports the entire class of finite, discrete Bayesian networks might be called general, but the class of distributions having finite, discrete Bayesian networks is a rather small one.\n\n\nIn this work, we are interested in the prospect of algorithms that work on very large classes of distributions. Namely, we are considering the class of samplable distributions, i.e., the class of distributions for which there exists a probabilistic program that can generate a sample using, e.g., uniformly distributed random numbers or independent coin flips as a source of randomness. The class of samplable distributions is a natural one: indeed it is equivalent to the class of computable distributions, i.e., those for which we can devise algorithms to compute lower bounds on probabilities from descriptions of open sets. The class of samplable distributions is also equivalent to the class of distributions for which we can compute expectations from descriptions of bounded continuous functions.\n\n\nThe class of samplable distributions is, in a sense, the richest class you might hope to deal with. The question we asked was: is there an algorithm that, given a samplable distribution on two variables X and Y, represented by a program that samples values for both variables, can compute the conditional distribution of, say, Y given X=x, for almost all values for X? 
When X takes values in a finite, discrete set, e.g., when X is binary valued, there is a general algorithm, although it is inefficient. But when X is continuous, e.g., when it can take on every value in the unit interval [0,1], then problems can arise. In particular, there exists a distribution on a pair of numbers in [0,1] from which one can generate perfect samples, but for which it is impossible to compute conditional probabilities for one of the variables given the other. As one might expect, the proof reduces the halting problem to that of conditioning a specially crafted distribution.\n\n\nThis pathological distribution rules out the possibility of a general algorithm for conditioning (equivalently, for probabilistic inference). The paper ends by giving some further conditions that, when present, allow one to devise general inference algorithms. Those familiar with computing conditional distributions for finite-dimensional statistical models will not be surprised that conditions necessary for Bayes’ theorem are one example. \n\n\n\n\n\n\n---\n\n\n**Luke**: In your dissertation (and perhaps elsewhere) you express a particular interest in the relevance of probabilistic programming to AI, including the original aim of AI to build machines which rival the general intelligence of a human. How would you describe the relevance of probabilistic programming to the long-term dream of AI?\n\n\n\n\n---\n\n\n**Daniel**: If you look at early probabilistic programming systems, they were built by AI researchers: De Raedt, Koller, McAllester, Muggleton, Pfeffer, Poole, Sato, to name a few. The Church language, which was introduced in joint work with Bonawitz, Mansinghka, Goodman, and Tenenbaum while I was a graduate student at MIT, was conceived inside a cognitive science laboratory, foremost to give us a language rich enough to express the range of models that people were inventing all around us. So, for me, there’s always been a deep connection. 
On the other hand, the machine learning community as a whole is somewhat allergic to AI and so the pitch to that community has more often been pragmatic: these systems may someday allow experts to conceive, prototype, and deploy much larger probabilistic systems, and at the same time, empower a much larger community of nonexperts to use probabilistic modeling techniques to understand their data. This is the basis for the DARPA [PPAML](http://ppaml.galois.com/wiki/) program, which is funding 8 or so teams to engineer scalable systems over the next 4 years.\n\n\nFrom an AI perspective, probabilistic programs are an extremely general representation of knowledge, and one that identifies uncertainty with stochastic computation. Freer, Tenenbaum, and I recently wrote [a book chapter](http://danroy.org/papers/FreRoyTen-Turing.pdf) for the Turing centennial that uses a classical medical diagnosis example to showcase the flexibility of probabilistic programs and a general QUERY operator for performing probabilistic conditioning. Admittedly, the book chapter ignores the computational complexity of the QUERY operator, and any serious proposal towards AI cannot do this indefinitely. Understanding when we can hope to efficiently update our knowledge in light of new observations is a rich source of research questions, both applied and theoretical, spanning not only AI and machine learning, but also statistics, probability, physics, theoretical computer science, etc.\n\n\n\n\n---\n\n\n**Luke**: Is it fair to think of QUERY as a “toy model” that we can work with in concrete ways to gain more general insights into certain parts of the long-term AI research agenda, even though QUERY is unlikely to be directly implemented in advanced AI systems? (E.g. that’s how I think of AIXI.)\n\n\n\n\n---\n\n\n**Daniel**: I would hesitate to call QUERY a toy model. 
Conditional probability is a difficult concept to master, but, for those adept at reasoning about the execution of programs, QUERY demystifies the concept considerably. QUERY is an important conceptual model of probabilistic conditioning.\n\n\nThat said, the simple guess-and-check algorithm we present in our Turing article runs in time inversely proportional to the probability of the event/data on which one is conditioning. In most statistical settings, the probability of a data set decays exponentially towards 0 as a function of the number of data points, and so guess-and-check is only useful for reasoning with toy data sets in these settings. It should come as no surprise to hear that state-of-the-art probabilistic programming systems work nothing like this.\n\n\nOn the other hand, QUERY, whether implemented in a rudimentary fashion or not, can be used to represent and reason probabilistically about arbitrary computational processes, whether they are models of the arrival time of spam, the spread of disease through networks, or the light hitting our retinas. Computer scientists, especially those who might have had a narrow view of the purview of probability and statistics, will see a much greater overlap between these fields and their own once they understand QUERY.\n\n\nTo those familiar with AIXI, the difference is hopefully clear: QUERY performs probabilistic reasoning in a model given as input. AIXI, on the other hand, is itself a “universal” model that, although not computable, would likely predict (hyper)intelligent behavior, were we (counterfactually) able to perform the requisite probabilistic inferences (and feed it enough data). Hutter gives an algorithm implementing an approximation to AIXI, but its computational complexity still scales exponentially in space. AIXI is fascinating in many ways: If we ignore computational realities, we get a complete proposal for AI. 
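The guess-and-check reading of QUERY described above can be sketched as rejection sampling: rerun the generative program until the observation holds, and keep that run's latent state. A minimal illustration in Python — the coin-bias model and the function names are hypothetical, for illustration only, not taken from the interview or the Turing chapter:

```python
import random

def query(sample, condition, max_tries=1_000_000):
    """Guess-and-check QUERY: rerun the generative program `sample`
    until `condition` holds on its output, then return that run.
    Expected runtime is inversely proportional to the probability
    of the conditioning event."""
    for _ in range(max_tries):
        s = sample()
        if condition(s):
            return s
    raise RuntimeError("condition too improbable for guess-and-check")

def coin_model():
    # Hypothetical model: a coin of unknown bias (uniform prior),
    # flipped twice.
    bias = random.random()
    flips = [random.random() < bias for _ in range(2)]
    return {"bias": bias, "flips": flips}

# Posterior samples of the bias, conditioned on both flips landing heads.
draws = [query(coin_model, lambda s: all(s["flips"]))["bias"]
         for _ in range(2000)]
print(sum(draws) / len(draws))  # close to 0.75, the Beta(3, 1) mean
```

With n conditioned observations the acceptance probability typically decays exponentially in n, which is exactly the limitation noted above: guess-and-check is fine for toy data sets, but real systems need something cleverer.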
On the other hand, AIXI and its approximations take maximal advantage of this computational leeway and are, therefore, ultimately unsatisfying. For me, AIXI and related ideas highlight that AI must be as much a study of the particular as it is of the universal. Which potentially unverifiable, but useful, assumptions will enable us to efficiently represent, update, and act upon knowledge under uncertainty?\n\n\n\n\n---\n\n\n**Luke**: You write that “AI must be as much a study of the particular as it is of the universal.” Naturally, most AI scientists are working on the particular, the near term, the applied. In your view, what are some other examples of work on the universal, in AI? Schmidhuber’s Gödel machine comes to mind, and also some work that is as likely to be done in a logic or formal philosophy department as a computer science department — e.g. perhaps work on logical priors — but I’d love to hear what kinds of work you’re thinking of.\n\n\n\n\n---\n\n\n**Daniel**: I wouldn’t equate any two of the particular, near-term, or applied. By the word particular, I am referring to, e.g., the way that our environment affects, but is also affected by, our minds, especially through society. More concretely, both the physical spaces in which most of us spend our days and the mental concepts we regularly use to think about our daily activities are products of the human mind. But more importantly, these physical and mental spaces are necessarily ones that are easily navigated by our minds. The coevolution by which this interaction plays out is not well studied in the context of AI. And to the extent that this cycle dominates, we would expect a universal AI to be truly alien. On the other hand, exploiting the constraints of human constructs may allow us to build more effective AIs.\n\n\nAs for the universal, I have an interest in the way that noise can render idealized operations computable or even efficiently computable. 
In our work on the computability of conditioning that came up earlier in the discussion, we show that adding sufficiently smooth independent noise to a random variable allows us to perform conditioning in situations where we would not have been able to otherwise. There are examples of this idea elsewhere. For example, [Braverman, Grigo, and Rojas](http://arxiv.org/abs/1201.0488) study noise and intractability in dynamical systems. Specifically, they show that computing the invariant measure characterizing the long-term statistical behavior of dynamical systems is not possible. The road block is the computational power of the dynamical system itself. The addition of a small amount of noise to the dynamics, however, decreases the computational power of the dynamical system, and suffices to make the invariant measure computable. In a world subject to noise (or, at least, well modeled as such), it seems that many theoretical obstructions melt away.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Daniel!\n\n\nThe post [Daniel Roy on probabilistic programming and AI](https://intelligence.org/2014/09/04/daniel-roy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-09-04T15:03:31Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5006ac6334737eac3c9da109432f77dd", "title": "MIRI’s September Newsletter", "url": "https://intelligence.org/2014/09/01/september-newsletter-2/", "source": "miri", "source_type": "blog", "text": "| | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nThanks to the generosity of 100+ donors, we [successfully completed](http://intelligence.org/2014/08/15/2014-summer-matching-challenge-completed/) our 2014 summer matching challenge on August 15th, raising more than $400,000 total for [our research 
program](http://intelligence.org/research/). Our deepest thanks to all our supporters!\n\n \n**Research updates**\n* New paper: “[Exploratory engineering in artificial intelligence](http://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/).”\n* 2 new analyses: “[Groundwork for AGI safety engineering](http://intelligence.org/2014/08/04/groundwork-ai-safety-engineering/)” and “[Loosemore on AI safety and attractors](http://nothingismere.com/2014/08/25/loosemore-on-ai-safety-and-attractors/).”\n\n\n\n**News updates**\n* MIRI is running an online reading group for Nick Bostrom’s *Superintelligence*. Join the discussion [here](http://intelligence.org/2014/08/31/superintelligence-reading-group/)!\n* MIRI participated in the 2014 Effective Altruism Summit. Slides from our talks are available [here](http://intelligence.org/2014/08/11/miris-recent-effective-altruism-talks/).\n\n\n**Other updates**\n* Nick Bostrom is touring the US and Europe to discuss *Superintelligence*. Dates are available on [his website](http://www.nickbostrom.com/). He visits Berkeley on September 12th; details [here](http://intelligence.org/2014/07/25/bostrom/).\n* Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: [AI Impacts](http://www.aiimpacts.org/).\n* Paul Christiano has published several additional analyses closely related to MIRI’s Friendly AI interests: “[Three impacts of machine intelligence](http://rationalaltruist.com/2014/08/23/three-impacts-of-machine-intelligence/),” “[Approval-seeking [agents]](http://ordinaryideas.wordpress.com/2014/07/21/approval-seeking/),” “[Straightforward vs. 
goal-oriented communication](http://ordinaryideas.wordpress.com/2014/08/23/straightforward-vs-goal-oriented-communication/),” “[Specifying a human precisely](http://ordinaryideas.wordpress.com/2014/08/24/specifying-a-human-precisely-reprise/),” “[Specifying enlightened judgment precisely](http://ordinaryideas.wordpress.com/2014/08/27/specifying-enlightened-judgment-precisely-reprise/),” “[Challenges for extrapolation](http://ordinaryideas.wordpress.com/2014/08/27/challenges-for-extrapolation/),” and “[Confronting Gödelian difficulties](http://ordinaryideas.wordpress.com/2014/08/30/confronting-godelian-difficulties-reprise/).”\n* The [Hanson-Yudkowsky AI Foom Debate](http://intelligence.org/ai-foom-debate/) continues with Hanson’s post “[I Still Don’t Get Foom](http://www.overcomingbias.com/2014/07/30855.html).”\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n |\n| |\n\n  |\n\n\n \n\n\nThe post [MIRI’s September Newsletter](https://intelligence.org/2014/09/01/september-newsletter-2/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-09-02T05:00:28Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "484569f706f077fd19d514a5832b0c17", "title": "Superintelligence reading group", "url": "https://intelligence.org/2014/08/31/superintelligence-reading-group/", "source": "miri", "source_type": "blog", "text": "Nick Bostrom’s eagerly awaited [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/) comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.\n\n\nThe reading group will “meet” on a weekly post on the [LessWrong discussion forum](http://lesswrong.com/r/discussion/new/). 
For each ‘meeting’, we will read about half a chapter of *Superintelligence*, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)\n\n\nDiscussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.\n\n\nWe welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome. **We especially encourage AI researchers and practitioners to participate.** Just use a pseudonym if you don’t want your questions and comments publicly linked to your identity.\n\n\nWe will follow [**this preliminary reading guide**](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf), produced by MIRI, reading one section per week.\n\n\nIf you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.\n\n\nIf this sounds good to you, first grab a copy of *[Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111)*. You may also want to [**sign up here**](http://eepurl.com/1-S41) to be emailed when the discussion begins each week. 
The first virtual meeting (forum post) will go live at 6pm Pacific on **Monday, September 15th**. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing *Superintelligence* with, link them to this post!\n\n\nTopics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.\n\n\n \n\n\n![](http://images.lesswrong.com/t3_ku5_0.png)\n\n\nThe post [Superintelligence reading group](https://intelligence.org/2014/08/31/superintelligence-reading-group/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-08-31T12:47:42Z", "authors": ["Katja Grace"], "summaries": []} -{"id": "e020bb07bafdcbc6601c754efa08f8be", "title": "New paper: “Exploratory engineering in artificial intelligence”", "url": "https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/", "source": "miri", "source_type": "blog", "text": "[![Exploratory engineering](https://intelligence.org/wp-content/uploads/2014/08/Exploratory-engineering.png)](https://intelligence.org/files/ExploratoryEngineeringAI.pdf)Luke Muehlhauser and Bill Hibbard have a new paper ([PDF](https://intelligence.org/files/ExploratoryEngineeringAI.pdf)) in the September 2014 issue of *Communications of the ACM*, the world’s most-read peer-reviewed computer science publication. 
The title is “[Exploratory Engineering in Artificial Intelligence](http://cacm.acm.org/magazines/2014/9/177932-exploratory-engineering-in-artificial-intelligence/abstract).”\n\n\nExcerpt:\n\n\n\n> We regularly see examples of new artificial intelligence (AI) capabilities… No doubt such automation will produce tremendous economic value, but will we be able to *trust* these advanced autonomous systems with so much capability?\n> \n> \n> Today, AI safety engineering mostly consists in a combination of formal methods and testing. Though powerful, these methods lack foresight: they can be applied only to particular extant systems. We describe a third, complementary approach that aims to predict the (potentially hazardous) properties and behaviors of broad classes of future AI agents, based on their mathematical structure (for example, reinforcement learning)… We call this approach “exploratory engineering in AI.”\n> \n> \n> …\n> \n> \n> In this Viewpoint, we focus on theoretical AI models inspired by Marcus Hutter’s AIXI, an optimal agent model for maximizing an environmental reward signal…\n> \n> \n> …\n> \n> \n> Autonomous intelligent machines have the potential for large impacts on our civilization. Exploratory engineering gives us the capacity to have some foresight into what these impacts might be, by analyzing the properties of agent designs based on their mathematical form. Exploratory engineering also enables us to identify lines of research — such as the study of Dewey’s value-learning agents — that may be important for anticipating and avoiding unwanted AI behaviors. 
This kind of foresight will be increasingly valuable as machine intelligence comes to play an ever-larger role in our world.\n> \n> \n\n\nThe post [New paper: “Exploratory engineering in artificial intelligence”](https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-08-23T04:04:59Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e454c5cf33ee4c610dee1e33788e6b10", "title": "2014 Summer Matching Challenge Completed!", "url": "https://intelligence.org/2014/08/15/2014-summer-matching-challenge-completed/", "source": "miri", "source_type": "blog", "text": "Thanks to the generosity of 100+ donors, today we successfully completed our [2014 summer matching challenge](http://intelligence.org/2014/07/21/2014-summer-matching-challenge/), raising more than $400,000 total for our [research program](http://intelligence.org/research/).\n\n\nOur deepest thanks to all our supporters!\n\n\nAlso, Jed McCaleb’s new crypto-currency [Stellar](https://www.stellar.org/blog/introducing-stellar/) was launched during MIRI’s fundraiser, and we decided to [accept donated stellars](https://intelligence.org/donate/). These donations weren’t counted toward the matching drive, and their [market value](http://www.stellarvalue.org/) is unstable at this early stage, but as of today we’ve received 850,000+ donated stellars from 3000+ different stellar accounts. 
Our thanks to everyone who donated in stellar!\n\n\nThe post [2014 Summer Matching Challenge Completed!](https://intelligence.org/2014/08/15/2014-summer-matching-challenge-completed/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-08-16T04:49:02Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "2438deb5ed026514b8b00c1df04fe96f", "title": "MIRI’s recent effective altruism talks", "url": "https://intelligence.org/2014/08/11/miris-recent-effective-altruism-talks/", "source": "miri", "source_type": "blog", "text": "![EA Summit 14](https://intelligence.org/wp-content/uploads/2014/08/EA-Summit-14.png)MIRI recently participated in the 2014 Effective Altruism Retreat and [Effective Altruism Summit](http://www.effectivealtruismsummit.com/) organized by Leverage Research. We gave four talks, participated in a panel, and held “office hours” during which people could stop by and ask us questions.\n\n\nThe slides for our talks are available below:\n\n\n* Muehlhauser, [MIRI Intro](https://intelligence.org/wp-content/uploads/2014/08/Muehlhauser-MIRI-Intro-2014.pdf)\n* Yudkowsky, Caring about Many Distant People in Strange Situations [no slides]\n* Muehlhauser, [Steering the Future of AI](https://intelligence.org/wp-content/uploads/2014/08/Muehlhauser-Steering-the-Future-of-AI.pdf)\n* Yudkowsky, [Why We’re Doing It and What MIRI is Doing](https://intelligence.org/wp-content/uploads/2014/08/Yudkowsky-Why-were-doing-it-and-what-MIRI-is-doing.pdf)\n\n\nIf videos of these talks become available, we’ll link them from here as well.\n\n\nSee also our earlier posts [Friendly AI Research as Effective Altruism](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) and [Why MIRI?](http://intelligence.org/2014/04/20/why-miri/)\n\n\nThe post [MIRI’s recent effective altruism talks](https://intelligence.org/2014/08/11/miris-recent-effective-altruism-talks/) appeared first on [Machine 
Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-08-11T19:26:44Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "0a1f4ab3f710a727ef4eaa05e933a02d", "title": "Groundwork for AGI safety engineering", "url": "https://intelligence.org/2014/08/04/groundwork-ai-safety-engineering/", "source": "miri", "source_type": "blog", "text": "Improvements in AI are resulting in the automation of increasingly complex and [creative](http://lesswrong.com/lw/vm/lawful_creativity/) human behaviors. [Given enough time](http://intelligence.org/2013/05/15/when-will-ai-be-created/), we should expect artificial reasoners to begin to rival humans in arbitrary domains, culminating in [artificial general intelligence](http://intelligence.org/2013/08/11/what-is-agi/) (AGI).\n\n\nA machine would qualify as an ‘AGI’, in the intended sense, if it could adapt to a very wide range of situations to consistently achieve some goal or goals. Such a machine would behave intelligently when supplied with arbitrary physical and computational environments, in the same sense that [Deep Blue](http://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)) behaves intelligently when supplied with arbitrary [chess board](http://lesswrong.com/lw/v8/belief_in_intelligence/) configurations — consistently hitting its victory condition within that narrower domain.\n\n\nSince generally intelligent software could help automate the process of thinking up and testing hypotheses in the sciences, AGI would be uniquely valuable for speeding technological growth. However, this wide-ranging productivity also makes AGI a unique challenge from a safety perspective. 
Knowing very little about the architecture of future AGIs, we can nonetheless make a few safety-relevant generalizations:\n\n\n* Because AGIs are *intelligent*, they will tend to be complex, adaptive, and capable of autonomous action, and they will have a large impact where employed.\n* Because AGIs are *general*, their users will have incentives to employ them in an increasingly wide range of environments. This makes it hard to construct valid sandbox tests and requirements specifications.\n* Because AGIs are *artificial*, they will deviate from *human* agents, causing them to violate many of our natural intuitions and expectations about intelligent behavior.\n\n\nToday’s AI software is already tough to verify and validate, thanks to its complexity and its uncertain behavior in the face of state space explosions. Menzies & Pecheur ([2005](http://www.info.ucl.ac.be/~pecheur/publi/aivvis-aic.pdf)) give a good overview of AI verification and validation (V&V) methods, noting that AI, and especially adaptive AI, will often yield undesired and unexpected behaviors.\n\n\nAn adaptive AI that acts autonomously, like a Mars rover that can’t be directly piloted from Earth, represents an additional large increase in difficulty. Autonomous safety-critical AI agents need to make irreversible decisions in dynamic environments with very low failure rates. The state of the art in safety research for autonomous systems is improving, but continues to lag behind system capabilities work. Hinchman et al. ([2012](http://www.acsac.org/2012/workshops/law/AFRL.pdf)) write:\n\n\n\n> As autonomous systems become more complex, the notion that systems can be fully tested and all problems will be found is becoming an impossible task. This is especially true in unmanned/autonomous systems. Full test is becoming increasingly challenging on complex system. 
As these systems react to more environmental [stimuli] and have larger decision spaces, testing all possible states and all ranges of the inputs to the system is becoming impossible. […] As systems become more complex, safety is really risk hazard analysis, i.e. given x amount of testing, the system appears to be safe. A fundamental change is needed. This change was highlighted in the 2010 Air Force Technology Horizon report, “It is possible to develop systems having high levels of autonomy, but it is the lack of suitable V&V methods that prevents all but relatively low levels of autonomy from being certified for use.” […]\n> \n> \n> The move towards more autonomous systems has lifted this need [for advanced verification and validation techniques and methodologies] to a national level.\n> \n> \n\n\nAI acting autonomously *in arbitrary domains*, then, looks particularly difficult to verify. If AI methods continue to see rapid gains in efficiency and versatility, and especially if these gains further increase the opacity of AI algorithms to human inspection, AI safety engineering will become much more difficult in the future. In the absence of any reason to expect a development in the lead-up to AGI that would make high-assurance AGI easy (or AGI itself unlikely), we should be worried about the safety challenges of AGI, and that worry should inform our research priorities today.\n\n\nBelow, I’ll give reasons to doubt that AGI safety challenges are just an extension of narrow-AI safety challenges, and I’ll list some research avenues people at MIRI expect to be fruitful.\n\n\n\n \n\n\n### New safety challenges from AGI\n\n\nA natural response to the idea of starting work on high-assurance AGI is that AGI itself appears to be decades away. Why worry about it now? 
And, supposing that we should worry about it, why think there’s any useful work we can do on AGI safety so far in advance?\n\n\nIn response to the second question: It’s true that, at a glance, AGI looks difficult to effectively prepare for. The issue is important enough, however, to warrant more than a glance. Long-term projects like mitigating climate change or detecting and deflecting asteroids are intuitively difficult. The same is true of interventions that depend on projected future technologies, such as work on [post-quantum cryptography](http://en.wikipedia.org/wiki/Post-quantum_cryptography) in anticipation of [quantum computers](http://intelligence.org/2014/05/07/harry-buhrman/). In spite of that, we’ve made important progress on these fronts.\n\n\n[Covert channel communication](http://intelligence.org/2014/04/12/jonathan-millen/) provides one precedent. It was successfully studied decades in advance of being seen in the wild. Roger Schell cites a few other success cases in Muehlhauser ([2014b](http://intelligence.org/2014/06/23/roger-schell/)), and suggests reasons why long-term security and safety work remains uncommon. We don’t know whether early-stage AGI safety work will be similarly productive, but we shouldn’t rule out the possibility before doing basic research into the question. I’ll list some possible places to start looking in the next section.\n\n\nWhat about the first question? Why worry specifically about AGI?\n\n\nI noted above that AGI is an extreme manifestation of many ordinary AI safety challenges. However, MIRI is particularly concerned with unprecedented, AGI-specific behaviors. For example: An AGI’s problem-solving ability (and therefore its scientific and economic value) depends on its ability to model its environment. This includes modeling the dispositions of its human programmers. 
Since the program’s success depends in large part on its programmers’ beliefs and preferences, an AI pursuing some optimization target can select actions for the effect they have on programmers’ mental states, not just on the AI’s material environment.\n\n\nThis means that safety protocols will need to be sensitive to risks that differ qualitatively from ordinary software failure modes — AGI-specific hazards like ‘the program models its programmed goals as being better served if it passes human safety inspections, so it selects action policies that make it look safer (to humans) than it really is’. If we model AGI behavior using only categories from conventional software design, we risk overlooking new intelligent behaviors, including ‘[deception](http://scienceblogs.com/notrocketscience/2009/08/17/robots-evolve-to-deceive-one-another/)‘.\n\n\nAt the same time, oversimplifying these novel properties can cause us to anthropomorphize the AGI. If it’s naïve to expect ordinary software validation methods to immediately generalize to advanced autonomous agents, it’s even more naïve to expect conflict prevention strategies that work on *humans* to immediately generalize to an AI. A ‘deceptive’ AGI is just one whose planning algorithm identifies some human misconception as instrumentally useful to its programmed goals. Its methods or reasons for deceiving needn’t resemble a human’s, even if its capacities do.\n\n\nIn human society, we think, express, and teach norms like ‘don’t deceive’ or Weld & Etzioni’s ([1994](http://homes.cs.washington.edu/~etzioni/papers/first-law-aaai94.pdf)) ‘don’t let humans come to harm’ relatively quickly and easily. The [complex conditional response](http://lesswrong.com/lw/sp/detached_lever_fallacy/) that makes humans converge on similar goals remains hidden inside a black box — the undocumented spaghetti code that is the human brain. 
As a result of our lack of introspective access to how our social dispositions are cognitively and neurally implemented, we’re likely to underestimate how contingent and complex they are. For example:\n\n\n* We might expect that an especially intelligent AI system would have especially worthy goals, since knowledge and insight are associated with many other virtues in humans. E.g., Hall ([2007](http://books.google.com/books?hl=en&lr=&id=obwnumITHGUC&oi=fnd&pg=PA15&dq=HALL+2007+%22beyond+ai%22&ots=tzayws3_67&sig=ehddT4xKIGhXtrUiCQda_8SSYvA#v=onepage&q=criminality&f=false)) conjectures this on the grounds that criminality negatively correlates with intelligence in humans. Bostrom’s ([2003](http://www.nickbostrom.com/ethics/ai.html)) response is that there’s no particular reason to expect AIs to converge on anthropocentric terminal values like ‘compassion’ or ‘loyalty’ or ‘novelty’. A superintelligent AI could consistently have no goal other than to construct [paperclips](http://wiki.lesswrong.com/wiki/Paperclip_maximizer), for example.\n* We might try to directly hand-code goals like ‘don’t deceive’ into the agent by breaking the goals apart into simpler goals, e.g., ‘don’t communicate information you believe to be false’. However, in the process we’re likely to neglect the kinds of subtleties that can be safely left implicit when it’s a human child we’re teaching — lies of omission, misleading literalism, novel communication methods, or any number of edge cases. 
As Bostrom ([2003](http://www.nickbostrom.com/ethics/ai.html)) notes, the agent’s *goals* may continue to reflect the programmers’ poor translation of their requirements into lines of code, even after its *intelligence* has arrived at a [superior understanding](http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/) of human psychology.\n* We might instead try to instill the AGI with humane values via machine learning — training it to promote outcomes associated with camera inputs of smiling humans, for example. But a powerful search process is likely to hit on solutions [that would never occur to a developing human](http://lesswrong.com/lw/td/magical_categories/). If the agent becomes more powerful or general over time, initially benign outputs may be a poor indicator of long-term safety.\n\n\nAdvanced AI is also likely to have technological capabilities, such as strong self-modification, that introduce other novel safety obstacles; see Yudkowsky ([2013](https://intelligence.org/files/IEM.pdf)).\n\n\nThese are quick examples of some large and poorly-understood classes of failure mode. However, the biggest risks may be from problem categories so contrary to our intuitions that they will not occur to programmers at all. Relying on our untested intuitions, or on past experience with very different systems, is unlikely to catch every hazard.\n\n\nAs an intelligent but inhuman agent, AGI represents a fundamentally new kind of safety challenge. As such, we’ll need to do basic theoretical work on the general features of AGI before we can understand such agents well enough to predict and plan for them.\n\n\n \n\n\n### Early steps\n\n\nWhat would early, theoretical AGI safety research look like? How does one vet a hypothetical technology? 
We can distinguish research projects oriented toward system verification from projects oriented toward system requirements.\n\n\nVerification-directed AGI research extends existing AI safety and security tools that are likely to help confirm various features of advanced autonomous agents. Requirements-directed AGI research instead specifies desirable AGI abilities or behaviors, and tries to build toy models exhibiting the desirable properties. These models are then used to identify problems to be overcome and basic gaps in our conceptual understanding.\n\n\nIn other words, **verification-directed approaches** would ask ‘What tools and procedures can we use to increase our overall confidence that the complex systems of the future will match their specifications?’ They include:\n\n\n* Develop new tools for improving the [transparency](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/) of AI systems to inspection. Frequently, useful AI techniques like [boosting](http://en.wikipedia.org/wiki/Boosting_(machine_learning)) are tried out in an ad-hoc fashion and then promoted when they’re observed to work on some problem set. Understanding *when* and *why* programs work would enable stronger safety and security guarantees. [Computational learning theory](http://en.wikipedia.org/wiki/Computational_learning_theory) may be useful here, for proving bounds on the performance of various machine learning algorithms.\n* Extend techniques for designing complex systems to be readily verified. 
Work on clean-slate hardware and software approaches that maintain high assurance at every stage, like [HACMS](http://www.darpa.mil/Our_Work/I2O/Programs/High-Assurance_Cyber_Military_Systems_(HACMS).aspx) and [SAFE](http://www.crash-safe.org/).\n* Extend current techniques in [program synthesis](http://en.wikipedia.org/wiki/Program_synthesis) and [formal verification](http://en.wikipedia.org/wiki/Formal_verification), with a focus on methods applicable to complex and adaptive systems, such as [higher-order program verification](http://intelligence.org/2014/04/30/suresh-jagannathan-on-higher-order-program-verification/) and Spears’ ([2000](https://www.jair.org/media/720/live-720-1895-jair.pdf), [2006](http://www.swarmotics.com/uploads/chap.pdf)) incremental reverification. Expand on existing tools — e.g., design better interfaces and training methods for the [SPIN model checker](http://en.wikipedia.org/wiki/SPIN_model_checker) to improve its accessibility.\n* Apply [homotopy type theory](http://www.cmu.edu/news/stories/archives/2014/april/april28_awodeygrant.html) to program verification. The theory’s [univalence axiom](http://www.ams.org/notices/201309/rnoti-p1164.pdf) lets us derive [identities from isomorphisms](http://homotopytypetheory.org/2012/09/23/isomorphism-implies-equality/). Harper & Licata ([2011](http://dlicata.web.wesleyan.edu/pubs/lh102dttnsf/lh102dttnsf.pdf)) suggest that if we can implement this as an algorithm, it may allow us to reuse high-assurance code in new contexts without a large loss in confidence.\n* Expand the current body of verified software libraries and compilers, such as the [Verified Software Toolchain](http://vst.cs.princeton.edu/). A lot of program verification work is currently directed at older toolchains, e.g., relatively small libraries in C. 
Focusing on newer toolchains would limit our ability to verify systems that are already in wide use, but would put us in a better position to verify more advanced safety-critical systems.\n\n\n**Requirements-directed approaches** would ask ‘What outcomes are we likely to want from an AGI, and what general classes of agent could most easily get us those outcomes?’ Examples of requirements-directed work include:\n\n\n* Formalize stability guarantees for intelligent self-modifying agents. A general intelligence could help maintain itself and implement improvements to its own software and hardware, including improvements to its search and decision heuristics à la [EURISKO](http://wiki.lesswrong.com/wiki/EURISKO). It may be acceptable for the AI to introduce occasional errors in its own object recognition or speech synthesis modules, but we’d want pretty strong assurance about the integrity of its core decision-making algorithms, including the module that approves self-modifications. At present, toy models of self-modifying AI discussed in Fallenstein & Soares ([2014](https://intelligence.org/files/ProblemsSelfReference.pdf)) run into two paradoxes of self-reference, the ‘Löbian obstacle’ and the ‘procrastination paradox.’ Similar obstacles may arise for real-world AGI, and finding solutions should improve our general understanding of systems that make decisions and predictions about themselves.\n* Specify desirable checks on AGI behavior. Some basic architecture choices may simplify the task of making AGI safer, by restricting the system’s output channel (e.g., oracle AI in Armstrong et al. ([2012](http://www.aleph.se/papers/oracleAI.pdf))) or installing emergency tripwires and fail-safes (e.g., [simplex architectures](http://www.cs.uiuc.edu/class/sp08/cs598tar/Papers/Simplex.pdf)). AGI checks are a special challenge because of the need to recruit the agent to help actively regulate itself. 
If this demand isn’t met, the agent may devote its problem-solving capabilities to finding loopholes in its restrictions, as in Yampolskiy’s ([2012](http://dl.dropboxusercontent.com/u/5317066/2012-yampolskiy.pdf)) discussion of an ‘AI in a box’.\n* Design general-purpose methods by which intelligent agents can improve their models of user requirements over time. The domain of action of an autonomous general intelligence is potentially unlimited; or, if it is limited, we may not be able to predict which limits will apply. As such, it needs safe situation-general goals. Coding a universally safe and useful set of decision criteria by hand, however, looks hopelessly difficult. Instead, some [indirectly normative](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) method like Dewey’s ([2011](http://www.danieldewey.net/learning-what-to-value.pdf)) value learning seems necessary, to allow initially imperfect decision-makers to improve their goal content over time. Related open questions include: what kinds of base cases can we use to train and test a beneficial AI?; and how can AIs be made safe and stable during the training process?\n* Formalize other [optimality criteria](http://lesswrong.com/lw/jg1/solomonoff_cartesianism/) for arbitrary reasoners. Just as a general-purpose adaptive agent would need general-purpose values, it would also need general-purpose methods for tracking features of its environment and of itself, and for selecting action policies on this basis. Mathematically modeling ideal inference (e.g., Hutter ([2012](http://arxiv.org/abs/1202.6153))), ideal [decision-theoretic](http://lesswrong.com/lw/gu1/decision_theory_faq/) expected value calculation (e.g., Altair ([2013](https://intelligence.org/files/Comparison.pdf))), and ideal game-theoretic coordination (e.g., Barasz et al. ([2014](http://arxiv.org/abs/1401.5577))) are unlikely to be strictly necessary for AGI. 
All the same, they’re plausibly necessary for AGI *safety*, because models like these would give us a solid top-down theoretical foundation upon which to cleanly construct components of autonomous agents for human inspection and verification.\n\n\nWe can probably make progress in both kinds of AGI safety research well in advance of building a working AGI. Requirements-directed research focuses on abstract mathematical agent models, which makes it likely to be applicable to a wide variety of software implementations. Verification-directed approaches will be similarly valuable, to the extent they are flexible enough to apply to future programs that are much more complex and dynamic than any contemporary software. We can compare this to present-day high-assurance design strategies, e.g., in Smith & Woodside ([2000](http://www.researchgate.net/publication/2472777_Performance_Validation_at_Early_Stages_of_Software_Development_Connie_U._Smith*_Murray_Woodside**)) and Hinchman et al. ([2012](http://www.acsac.org/2012/workshops/law/AFRL.pdf)). The latter write, concerning simpler autonomous machines:\n\n\n\n> With better software analysis techniques, software can be analyzed at design-time with the goal of finding software faults earlier. This analysis can also prove the absence of error or negative properties. As system complexity and functionality increase, complete testing is becoming impossible and enhanced analysis techniques will have to be used. Furthermore, many of these software techniques, such as model checking, can be used in analysis of requirements and system design to find conflicting requirements or logic faults before a single line of code is written saving more time and money over traditional testing methods.\n> \n> \n\n\nVerification- and requirements-directed work is complementary. 
The point of building clear mathematical models of agents with desirable properties is to make it easier to design systems whose behaviors are transparent enough to their programmers to be rigorously verified; and verification methods will fail to establish system safety if we have a poor understanding of what kind of system we want in the first place.\n\n\nSome valuable projects will also fall in between these categories — e.g., developing methods for principled formal *validation*, which can increase our confidence that we’re verifying the right properties given the programmers’ and users’ goals. (See Cimatti et al. ([2012](http://commonsenseatheism.com/wp-content/uploads/2014/02/Cimatti-et-al-Validation-of-Requirements-for-Hybrid-Systems-a-Formal-Approach.pdf)) on formal validation, and also Rushby ([2013](http://www.csl.sri.com/users/rushby/papers/safecomp13.pdf)) on epistemic doubt.)\n\n\n \n\n\n### MIRI’s focus: the mathematics of intelligent agents\n\n\nMIRI’s founder, Eliezer Yudkowsky, has been one of the most vocal advocates for research into autonomous high-assurance AGI, or ‘friendly AI’. Russell & Norvig ([2009](http://aima.cs.berkeley.edu/)) write:\n\n\n\n> [T]he challenge is one of mechanism design — to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes. We can’t just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time.\n> \n> \n\n\nMIRI’s approach is mostly requirements-directed, in part because this angle of attack is likely to enhance our theoretical understanding of the entire problem space, improving our research priorities and other strategic considerations. Moreover, the relevant areas of theoretical computer science and mathematics look much less crowded. 
There isn’t an established subfield or paradigm for requirements-directed AGI safety work where researchers could find a clear set of open problems, publication venues, supervisors, or peers.\n\n\nRather than building on current formal verification methods, MIRI prioritizes jump-starting these new avenues of research. Muehlhauser ([2013](http://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/)) writes that engineering innovations often have their germ in prior work in mathematics, which in turn can be inspired by informal philosophical questions. At this point, AGI safety work is just beginning to enter the ‘mathematics’ stage. Friendly AI researchers construct simplified models of likely AGI properties or subsystems, formally derive features of those models, and check those features against general or case-by-case norms.\n\n\nBecause AGI safety is so under-researched, we’re likely to find low-hanging fruit even in investigating basic questions like ‘What kind of [prior probability distribution](http://wiki.lesswrong.com/wiki/Priors) works best for formal agents in unknown environments?’ As Gerwin Klein notes in Muehlhauser ([2014a](http://intelligence.org/2014/02/11/gerwin-klein-on-formal-methods/)), “In the end, everything that makes it easier for humans to think about a system, will help to verify it.” And, though MIRI’s research agenda is determined by social impact considerations, it is also of general intellectual interest, bearing on core open problems in theoretical computer science and mathematical logic.\n\n\nAt the same time, it’s important to keep in mind that [formal proofs](http://intelligence.org/2013/10/03/proofs/) of AGI properties only function as especially strong probabilistic evidence. [Formal methods](http://en.wikipedia.org/wiki/Formal_methods) in computer science can decrease risk and uncertainty, but they can’t eliminate it. 
Our assumptions are uncertain, so our conclusions will be too.\n\n\nThough we can never reach complete confidence in the safety of an AGI, we can still decrease the probability of catastrophic failure. In the process, we are likely to come to a better understanding of the most important ways AGI can defy our expectations. If we begin working now to better understand AGI as a theoretical system, we’ll be in a better position to implement robust safety measures as AIs improve in intelligence and autonomy in the decades to come.\n\n\n \n\n\n**Acknowledgments**\n\n\nMy thanks to Luke Muehlhauser, Shivaram Lingamneni, Matt Elder, Kevin Carlson, and others for their feedback on this piece.\n\n\n\n\n---\n\n\n**References**\n\n\n* Altair (2013). [A comparison of decision algorithms on Newcomblike problems](https://intelligence.org/files/Comparison.pdf). *Machine Intelligence Research Institute*.\n* Armstrong et al. (2012). [Thinking inside the box: controlling and using an oracle AI](http://www.aleph.se/papers/oracleAI.pdf). *Minds and Machines, 22*: 299-324.\n* Barasz et al. (2014). [Robust cooperation in the Prisoner’s Dilemma: program equilibrium via probability logic](http://arxiv.org/abs/1401.5577). *arXiv*.\n* Bostrom (2003). [Ethical issues in advanced artificial intelligence](http://www.nickbostrom.com/ethics/ai.html). In Smith et al. (eds.), *Cognitive, Emotive and Ethical Aspects of Decision-Making in Humans and in Artificial Intelligence, 2*: 12-17.\n* Cimatti et al. (2012). [Validation of requirements for hybrid systems: a formal approach](http://commonsenseatheism.com/wp-content/uploads/2014/02/Cimatti-et-al-Validation-of-Requirements-for-Hybrid-Systems-a-Formal-Approach.pdf). *ACM Transactions on Software Engineering and Methodology*, *21*.\n* Dewey (2011). [Learning what to value](http://www.danieldewey.net/learning-what-to-value.pdf). *Artificial General Intelligence 4th International Conference Proceedings*: 309-314.\n* Fallenstein & Soares (2014). 
[Problems of self-reference in self-improving space-time embedded intelligence](https://intelligence.org/files/ProblemsSelfReference.pdf). Working paper.\n* Hall (2007). [*Beyond AI: Creating the Conscience of the Machine*](http://books.google.com/books?hl=en&lr=&id=obwnumITHGUC).\n* Harper & Licata (2011). [Foundations and applications of higher-dimensional directed type theory](http://dlicata.web.wesleyan.edu/pubs/lh102dttnsf/lh102dttnsf.pdf). National Science Foundation grant proposal.\n* Hinchman et al. (2012). [Towards safety assurance of trusted autonomy in Air Force flight-critical systems](http://www.acsac.org/2012/workshops/law/AFRL.pdf). *Layered Assurance Workshop, 17*.\n* Hutter (2012). [One decade of Universal Artificial Intelligence](http://arxiv.org/abs/1202.6153). *Theoretical Foundations of Artificial General Intelligence, 4*: 67-88.\n* Menzies & Pecheur (2005). [Verification and validation and artificial intelligence](http://www.info.ucl.ac.be/~pecheur/publi/aivvis-aic.pdf). *Advances in Computers, 65*: 154-203.\n* Muehlhauser (2013). [From philosophy to math to engineering](http://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/). *MIRI Blog*.\n* Muehlhauser (2014a). [Gerwin Klein on formal methods](http://intelligence.org/2014/02/11/gerwin-klein-on-formal-methods/). *MIRI Blog*.\n* Muehlhauser (2014b). [Roger Schell on long-term computer security research](http://intelligence.org/2014/06/23/roger-schell/). *MIRI Blog*.\n* Rushby (2013). [Logic and epistemology in safety cases](http://www.csl.sri.com/users/rushby/papers/safecomp13.pdf). *Computer Safety, Reliability, and Security: Proceedings of SafeComp 32* (pp. 1-7).\n* Russell & Norvig (2009). [*Artificial Intelligence: A Modern Approach*](http://aima.cs.berkeley.edu/).\n* Smith & Woodside (2000). 
[Performance validation at early stages of software development](http://www.researchgate.net/publication/2472777_Performance_Validation_at_Early_Stages_of_Software_Development_Connie_U._Smith*_Murray_Woodside**). In Gelenbe (ed.), *System Performance Evaluation: Methodologies and Applications* (pp. 383-396).\n* Spears (2000). [Asimovian adaptive agents](https://www.jair.org/media/720/live-720-1895-jair.pdf). *Journal of Artificial Intelligence Research, 13*: 95-153.\n* Spears (2006). [Assuring the behavior of adaptive agents](http://www.swarmotics.com/uploads/chap.pdf). In Rouff et al. (eds.), *Agent Technology from a Formal Perspective* (pp. 227-257).\n* Weld & Etzioni (1994). [The First Law of Robotics (a call to arms)](http://homes.cs.washington.edu/~etzioni/papers/first-law-aaai94.pdf). *Proceedings of the Twelfth National Conference on Artificial Intelligence*: 1042-1047.\n* Yampolskiy (2012). [Leakproofing the singularity: artificial intelligence confinement problem](http://dl.dropboxusercontent.com/u/5317066/2012-yampolskiy.pdf). *Journal of Consciousness Studies, 19*: 194-214.\n* Yudkowsky (2013). [Intelligence explosion microeconomics](https://intelligence.org/files/IEM.pdf). 
Technical report.\n\n\nThe post [Groundwork for AGI safety engineering](https://intelligence.org/2014/08/04/groundwork-ai-safety-engineering/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-08-04T21:46:56Z", "authors": ["Rob Bensinger"], "summaries": []} -{"id": "5553d7ddfd85d3ada2311f17270d7c79", "title": "MIRI’s August 2014 newsletter", "url": "https://intelligence.org/2014/08/01/august-newsletter-2/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\nDear friends,\n\n\nOur [summer matching challenge](http://intelligence.org/2014/07/21/2014-summer-matching-challenge/) is underway! Every donation made to MIRI between now and August 15th, 2014 will be **matched dollar-for-dollar**, up to a total of $200,000!\nPlease [**donate now**](https://intelligence.org/donate/) to help support our research!\n**Research Updates**\n* May 2015: [Cambridge decision theory workshop](http://intelligence.org/2014/07/12/may-2015-decision-theory-workshop-cambridge/)\n* 2 new analyses: “[Some studies which could improve our strategic picture of superintelligence](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/)” and “[Tentative tips for… prediction or forecasting](http://lesswrong.com/r/discussion/lw/kh9/tentative_tips_for_people_engaged_in_an_exercise/)”\n* 1 new expert interview: [Scott Frickel](http://intelligence.org/2014/07/28/scott-frickel/) on intellectual movements.\n\n\n\n**News Updates**\n* There are now 10 [MIRIx groups](http://intelligence.org/mirix/) around the world, in 4 countries.\n* Nick Bostrom will [speak about his new book](http://intelligence.org/2014/07/25/bostrom/) 
*Superintelligence* at UC Berkeley on September 12th.\n* Jed McCaleb has launched a new digital currency, [stellars](https://www.stellar.org/blog/introducing-stellar/). MIRI now accepts donated stellars; our public name for receiving stellars is: **miri**\n\n\n**Other Updates**\n* Nick Bostrom’s new book *Superintelligence* has been released in the UK, and the [Kindle version](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/ref=sr_1_1?ie=UTF8&qid=1406828086&sr=8-1&keywords=superintelligence) is available in the US. (Hardcopy available in the US on Sep. 1st.)\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [MIRI’s August 2014 newsletter](https://intelligence.org/2014/08/01/august-newsletter-2/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-08-02T03:00:41Z", "authors": ["Jake"], "summaries": []} -{"id": "cbb9c7ecdb872a7556e21e192bf475b2", "title": "Scott Frickel on intellectual movements", "url": "https://intelligence.org/2014/07/28/scott-frickel/", "source": "miri", "source_type": "blog", "text": "![Scott Frickel portrait](https://intelligence.org/wp-content/uploads/2014/07/Frickel_w1300.jpg) [Scott Frickel](http://mobilizingideas.wordpress.com/scott-frickel/) is Associate Professor in the Department of Sociology and [Institute for the Study of Environment and Society](http://brown.edu/academics/institute-environment-society/) at [Brown University](http://www.brown.edu/). His research interweaves sociological analysis with environmental studies and science and technology studies. Prior to coming to Brown he was Boeing Distinguished Professor of Environmental Sociology at Washington State University. He holds a Ph.D. 
from the University of Wisconsin – Madison.\n\n\nHis research has appeared in a wide range of disciplinary and interdisciplinary journals, including *American Sociological Review;* *Annual Review of Sociology;* *Science,* *Technology and Human Values;* and *Environmental Science and Policy*. He is author of [Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology](http://smile.amazon.com/Chemical-Consequences-Environmental-Scientist-Toxicology/dp/0813534135/ref=nosim?tag=793775876-20) and co-editor with Kelly Moore of [The New Political Sociology of Science: Institutions, Networks, and Power](http://smile.amazon.com/New-Political-Sociology-Science-Institutions/dp/0299213307/ref=nosim?tag=793775876-20).\n\n\n\n**Luke Muehlhauser**: In [Frickel & Gross (2005)](http://commonsenseatheism.com/wp-content/uploads/2013/02/Frickel-Gross-A-General-Theory-of-Scientific-Intellectual-Movements.pdf), you and your co-author present a “general theory” of scientific/intellectual movements (SIMs). I’ll summarize the theory briefly for our readers. In your terminology:\n\n\n* “SIMs have a more or less coherent program for scientific or intellectual change… toward whose knowledge core participants are consciously oriented…”\n* “The aforementioned core consists of intellectual practices that are contentious relative to normative expectations within a given… intellectual domain.”\n* “Precisely because the intellectual practices recommended by SIMs are contentious, SIMs are inherently political… because every program for intellectual change involves a desire to alter the configuration of social positions within or across intellectual fields in which power, attention, and other scarce resources are unequally distributed…”\n* “[SIMs] are constituted through organized collective action.”\n* “SIMs exist as historical entities for finite periods.”\n* “SIMs can vary in intellectual aim and scope. 
Some problematize previously… underdiscussed topics… Others… seek to introduce entirely new theoretical perspectives on established terrain… Some SIMs distinguish themselves through new methods… Other SIMs aim to alter the boundaries of existing… intellectual fields…”\n\n\nNext, you put forward some propositions about SIMs, which seem promising given the case studies you’ve seen, but are not the result of a comprehensive analysis of SIMs — merely a starting point:\n\n\n\n> \n> 1. “A SIM is more likely to emerge when high-status intellectual actors harbor complaints against what they understand to be the central intellectual tendencies of the day.”\n> 2. “SIMs are more likely to be successful when structural conditions provide access to key resources” (research funding, employment, access to rare equipment or data, intellectual prestige, etc.)\n> 3. “The greater a SIM’s access to [local sites at which SIM representatives can have sustained contact with potential recruits], the more likely it is to be successful.”\n> 4. “The success of a SIM is contingent upon the work done by movement participants to frame movement ideas in ways that resonate with the concerns of those who inhabit an intellectual field or fields.”\n> \n> \n> \n\n\nMy first question is this: what are the most significant pieces of follow-up work on your general theory of SIMs so far?\n\n\n\n\n---\n\n\n**Scott Frickel**: The article on SIMs that Neil Gross and I published back in 2005 has been well-received, for the most part. Citation counts on Google Scholar have risen steadily since then and so I’m encouraged by the continued interest. It seems that the article’s central idea – that intellectual change is a broadly social phenomenon whose dynamics are in important ways similar to social movements – is resonating among sociologists and others.\n\n\nThe terrain that we mapped in developing our theory was intentionally quite broad, giving others lots of room to build on. 
And that seems to be what’s happening. Rather than challenge our basic argument or framework, scholars’ substantive engagements have tended to add elements to the theory or have sought to deepen theorization of certain existing elements. So for example, [Jerry Jacobs (2013)](http://smile.amazon.com/Defense-Disciplines-Interdisciplinarity-Specialization-University/dp/022606932X/ref=nosim?tag=793775876-20) extends the SIMs framework from specific disciplinary fields to the lines of connectivity between disciplines in seeking to better understand widespread enthusiasms for interdisciplinarity. [Mikaila Arthur (2009)](http://www.tandfonline.com/doi/abs/10.1080/14742830802591176) wants to extend the framework to better theorize the role of exogenous social movements in fomenting change within the academy. [Tom Waidzunas (2013)](http://www.mobilization.sdsu.edu/articleabstracts/181Waidzunas.html) picks up on our idea of an “intellectual opportunity structure” and, like Arthur, extends the concept’s utility to the analysis of expert knowledge production beyond the academy. In his excellent new book, [*Why are Professors Liberal and Why do Conservatives Care?*](http://smile.amazon.com/Why-Professors-Liberal-Conservatives-Care/dp/0674059093/ref=nosim?tag=793775876-20) (Harvard, 2013), Neil Gross links our theory to the political leanings of the American professoriate. His idea is that SIMs can shape the political typing of entire fields – e.g. as more or less liberal or conservative. So, rather than arguing for an extended view of SIMs, Gross wants to recognize an extended view of the impacts of SIMs, which can affect academic fields singly or in combination with other competitor SIMs. Another study that I like very much is [John Parker and Ed Hackett’s (2012)](http://www.uwstout.edu/socsci/upload/Parker-Flyer.pdf) analysis of how emotions shape intellectual processes in ways that drive the growth and development of SIMs. 
The emotional content of SIMs is something quite new for the theory, but which is consonant with lots of good work in social movement theory. Some of my own recent work builds from the SIMs project to offer a companion theory of ‘shadow mobilization’ to help explain expert interpenetration of social movements ([Frickel et al. (2014)](http://lukemuehlhauser.com/wp-content/uploads/Frickel-et-al-The-organization-of-expert-activism-shadow-mobilization-in-two-social-movements.pdf)). So, in different ways, the project is chugging forward. \n\n\n\n\n\n\n---\n\n\n**Luke**: In your view, what is the expected eventual “payoff” from this kind of sociological research into social movements? Or to put it another way, who are the “consumers” (besides other sociologists) who at some point take certain results and apply them to solving real-world problems? (Your work on SIMs may be too young to have reached that stage yet, so that’s why I’m asking about this kind of work more generally.)\n\n\n\n\n---\n\n\n**Scott**: So, if we think about the SIMs theory in applied terms as something like a recipe for building and sustaining expert collective action in ways that alter the intellectual and scientific landscape, I would say that our “product” is still under development. The various efforts to extend and refine the theory that I mentioned previously support that assessment. But to the extent that the theory could have broader resonance and more practical applications, it is interesting to think about who those consumers might be and what the theory might be good for.\n\n\nI suppose the most obvious market for our theory would be academic experts who want to change the direction of research in their field. The theory might be useful to scientists who find comfort in the common but I think largely mistaken belief that science is driven by the best new ideas. 
Our theory begins from a very different assumption, namely that winning ideas are ideas whose champions strategically organize support for their projects by building networks and alliances to topple orthodoxies or defeat challengers. In this way of thinking, new ideas are necessary but insufficient explanations; knowledge advances not in spite of politics, but because of it.\n\n\nThe climate change debates in the US are a good case in point. Having good data and a theory to explain the data isn’t enough. You also need organization and political savvy to anticipate and fend off the opposition. Climate change deniers figured this out first and built a movement that has proven tragically successful at blocking meaningful policy to reduce greenhouse gas emissions. Environmental sociologists Aaron McCright and Riley Dunlap ([2003](http://stephenschneider.stanford.edu/Publications/PDF_Papers/McCrightDunlap2003.pdf), [2010](http://history.ucsd.edu/_files/base-folder1/Anti-reflexivity%20-%20The%20American%20Conservative%20Movement%20Success%20in%20Undermining%20Climate%20Science%20and%20Policy.pdf)) have studied this phenomenon extensively, as have historians of science [Naomi Oreskes and Erik Conway (2010)](http://smile.amazon.com/Merchants-Doubt-Handful-Scientists-Obscured/dp/1608193942/ref=nosim?tag=793775876-20). In rising to meet the challenge of climate denialism, scientists have produced more and better data and worked harder to refine their theories, but they have also become better organized themselves, as have some sectors of the general public. This is precisely what social movement scholarship tells us about the dynamics between social movements and counter-movements. Our theory holds that this is also a regularly occurring dynamic in science. 
The climate change debates are not unique in this sense, only uniquely visible.\n\n\n\n\n---\n\n\n**Luke**: Which follow-up studies on your theory of SIMs would you most like to see in the next 10 years?\n\n\n\n\n---\n\n\n**Scott**: I imagine that detailed case studies such as those we’ve seen to date will continue to be an essential element of SIMs scholarship. But the case study method has limitations. By design it does not produce generalizable knowledge (if that is one’s goal) because it prioritizes delivery of rich narrative detail of a single instance of the object of study. By themselves, case studies will tell us relatively little about broader political and institutional dynamics shaping SIM emergence and outcomes.\n\n\nThe studies that I am more interested in seeing done are those that develop a range of approaches for investigating broader questions relating to the historical forces and institutional environments that produce SIMs. Do SIMs emerge in similar fashion in different disciplinary domains or national contexts? Are certain kinds of academic institutions more nurturing of SIMs than others? Are there some areas of science, social science, and humanities that tend to not produce SIMs? These sorts of questions are best answered with studies that are explicitly comparative and I’d like to see research push in this direction.\n\n\nI’d also like to see research that examines longer term historical patterns of SIM emergence and decline. Social movement scholars have developed the concept of “cycles of protest” to describe the historical ebb and flow of contentious political activity and social movement formation over longer periods of time, usually several decades or more (The original study on protest cycles is [Tarrow 1989](http://smile.amazon.com/Democracy-Disorder-Protest-Politics-1965-1975/dp/0198275617/ref=nosim?tag=793775876-20)). 
They’ve used newspaper articles and other archival materials to collect data on protest events and then coded those accounts for various types of activity (e.g. strikes, boycotts, marches), numbers of people involved, how long the event lasted, whether violence was involved, how the authorities responded, and the like. These studies have shown that national-level protests tend to cluster in “waves” separated by periods of relative calm. Scholars have put forward various theories to explain these patterns, including the idea that initiating movements can have “spillover” effects on later-forming movements. This is the general pattern we see in the U.S. during the 1960s and 70s, with the civil rights movement feeding enthusiasms, building networks, and creating political opportunities that other movements could use to their advantage. What we now refer to as “The 60s” was, in part, a function of this major wave of political protest.\n\n\nI’m very interested in whether and how SIMs pattern in similar ways. For example, can we find evidence that collective efforts to build new fields and research programs in the social sciences concentrate more in certain time periods than in others? There is prima facie evidence suggesting that “new” interdisciplinary social sciences emerged in the 1930s when we see fields like social psychology and criminology come about. We see clustering again in the 1970s, when universities begin creating degree programs for women’s studies, African American studies, Chicano studies, and others. But these examples are far from comprehensive. The difficulty I’m facing is in working out an approach to data collection that allows us to systematically identify and code SIMs across disciplinary domains and over longer periods of time. Bibliometric data on journal publications and topic modeling techniques may hold some promise. 
This is the next big project that I’d like to tackle in this area of my research program – hopefully again in collaboration with Neil Gross. If you or any readers of this interview have thoughts about how to move this kind of study forward, I welcome your input.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Scott!\n\n\nThe post [Scott Frickel on intellectual movements](https://intelligence.org/2014/07/28/scott-frickel/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-07-28T08:00:56Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "46a26dcca3d1f3a4fe16adde703dbea4", "title": "Nick Bostrom to speak about Superintelligence at UC Berkeley", "url": "https://intelligence.org/2014/07/25/bostrom/", "source": "miri", "source_type": "blog", "text": "![Bostrom looking up](https://intelligence.org/wp-content/uploads/2014/07/Bostrom-looking-up.jpg)\nMIRI has arranged for [Nick Bostrom](http://nickbostrom.com/) to discuss his new book — [*Superintelligence: Paths, Dangers, Strategies*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/) — on the UC Berkeley campus on September 12th.\n\n\nBostrom is the director of the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford University, and is a frequent collaborator with MIRI researchers (e.g. see “[The Ethics of Artificial Intelligence](https://intelligence.org/files/EthicsofAI.pdf)“). He is the author of some [200 publications](http://www.nickbostrom.com/cv.pdf), and is best known for his work in five areas:  (1) [existential risk](http://www.existential-risk.org/); (2) the [simulation argument](http://www.simulation-argument.com/); (3) [anthropics](http://www.anthropic-principle.com/); (4) the impacts of future technology; and (5) the implications of consequentialism for global strategy. 
Earlier this year he was included on *Prospect* magazine’s [World Thinkers](http://www.prospectmagazine.co.uk/features/world-thinkers-2014-the-results) list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher.\n\n\nBostrom will be introduced by UC Berkeley professor [Stuart Russell](http://www.cs.berkeley.edu/~russell/), co-author of the [world’s leading AI textbook](http://aima.cs.berkeley.edu/). Russell’s blurb for *Superintelligence* reads:\n\n\n\n> Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. *Superintelligence* charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.\n> \n> \n\n\nThe talk will begin at 7pm at room 310 (Banatao Auditorium) in Sutardja Dai Hall ([map](https://www.google.com/maps/place/Sutardja+Dai+Hall,+University+of+California,+Berkeley,+Berkeley,+CA+94709/@37.8747924,-122.2583104,17z/data=!3m1!4b1!4m2!3m1!1s0x80857c23eb19abc5:0x47ffbe1b691e09c5)) on the UC Berkeley campus.\n\n\nIf you live nearby, we hope to see you there! 
The room seats 150 people, on a first-come basis.\n\n\nThere will also be copies of *Superintelligence* available for purchase.\n\n\n![Nick Bostrom Event Image](https://intelligence.org/wp-content/uploads/2014/07/Nick-Bostrom-Event-Image-1024x425.jpg)\n[![map](https://intelligence.org/wp-content/uploads/2014/07/map.png)](https://www.google.com/maps/place/Sutardja+Dai+Hall/@37.8746789,-122.2589419,3a,90y,173.35h,118.33t/data=!3m5!1e1!3m3!1s6Sz86HXAS68AAAQIt9mo8w!2e0!3e11!4m2!3m1!1s0x0:0x1c8fed957ed70082)\n \n\n\nThe post [Nick Bostrom to speak about Superintelligence at UC Berkeley](https://intelligence.org/2014/07/25/bostrom/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-07-25T19:48:46Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a160d5544ecec27f337a9524ecbbf8c4", "title": "2014 Summer Matching Challenge!", "url": "https://intelligence.org/2014/07/21/2014-summer-matching-challenge/", "source": "miri", "source_type": "blog", "text": "![Nate & Nisan](https://intelligence.org/wp-content/uploads/2014/07/Nate-Nisan.jpg)\nThanks to the generosity of several major donors,† every donation made to MIRI between now and August 15th, 2014 will be **matched dollar-for-dollar**, up to a total of $200,000!\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n$0\n\n\n\n\n$50K\n\n\n\n\n$100K\n\n\n\n\n$150K\n\n\n\n\n$200K\n\n\n\n\n### We have reached our matching total of $200,000!\n\n\n\n\n116\n===\n\n\n### Total Donors\n\n\n\n\n\n \n\n\nNow is your chance to **double your impact** while helping us raise up to $400,000 (with matching) to fund [our research program](http://intelligence.org/research/).\n\n\nCorporate matching and monthly giving pledges will count towards the total! 
Please email [malo@intelligence.org](mailto:malo@intelligence.org) if you intend on leveraging corporate matching (check [here](https://doublethedonation.com/miri), to see if your employer will match your donation) or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions towards the fundraiser.\n\n\n(If you’re unfamiliar with our mission, see: [Why MIRI?](http://intelligence.org/2014/04/20/why-miri/))\n\n\n\n\n [Donate Now](https://intelligence.org/donate/#donation-methods)\n----------------------------------------------------------------\n\n\n\n\n \n\n\n![](https://intelligence.org/wp-content/uploads/2013/12/workshop-horizontal-3.jpg)\n### Accomplishments Since Our Winter 2013 Fundraiser Launched:\n\n\n* Hired **2 new Friendly AI researchers**, Benja Fallenstein & Nate Soares. Since March, they’ve authored or co-authored 4 papers/reports, with several others in the works. Right now they’re traveling, to present papers at the [Vienna Summer of Logic](http://vsl2014.at/), [AAAI-14](http://www.aaai.org/Conferences/AAAI/aaai14.php), and [AGI-14](http://agi-conf.org/2014/).\n* **5 new papers & book chapters**: “[Why We Need Friendly AI](http://intelligence.org/2013/12/18/new-paper-why-we-need-friendly-ai/),” “[The errors, insights, and lessons of famous AI predictions](http://intelligence.org/2014/04/30/new-paper-the-errors-insights-and-lessons-of-famous-ai-predictions/),” “[Problems of self-reference…](http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/),” “[Program equilibrium…](http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/),” and “[The ethics of artificial intelligence](http://intelligence.org/2014/06/19/new-chapter-cambridge-handbook-artificial-intelligence/).”\n* **11 new technical reports**: [7 reports from the December 2013 
workshop](http://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/), “[Botworld](http://intelligence.org/2014/04/10/new-report-botworld/),” “[Loudness…](http://intelligence.org/2014/05/30/new-report-loudness-priors-preference-relations/),” “[Distributions allowing tiling…](http://intelligence.org/2014/06/06/new-report-distributions-allowing-tiling-staged-subjective-eu-maximizers/),” and “[Non-omniscience…](http://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/)”\n* **New book**: *[Smarter Than Us](http://intelligence.org/smarter-than-us/),* published both as an e-book and a paperback.\n* Held [one MIRI workshop](http://intelligence.org/workshops/#may-2014) and launched the **[MIRIx program](http://intelligence.org/mirix/)**, which currently supports 8 independently-organized Friendly AI discussion/research groups around the world.\n* **New analyses**: [Robby’s posts on naturalized induction](http://wiki.lesswrong.com/wiki/Naturalized_induction), [Luke’s list of 70+ studies which could improve our picture of superintelligence strategy](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/), “[Exponential and non-exponential trends in information technology](http://intelligence.org/2014/05/12/exponential-and-non-exponential/),” “[The world’s distribution of computation](http://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/),” “[How big is the field of artificial intelligence?](http://intelligence.org/2014/01/28/how-big-is-ai/),” “[Robust cooperation: A case study in Friendly AI research](http://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/),” “[Is my view contrarian?](http://lesswrong.com/lw/jv2/is_my_view_contrarian/),” and “[Can we really upload Johnny Depp’s 
brain?](http://www.slate.com/articles/technology/future_tense/2014/04/transcendence_science_can_we_really_upload_johnny_depp_s_brain.html)”\n* **Won $60,000+ in matching and prizes** from sources that wouldn’t have otherwise given to MIRI, [via the Silicon Valley Gives fundraiser](http://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/). (Thanks again, all you dedicated donors!)\n* [**49 new expert interviews**](http://intelligence.org/category/conversations/), including interviews with [Scott Aaronson](http://intelligence.org/2013/12/13/aaronson/) (MIT), [Max Tegmark](http://intelligence.org/2014/03/19/max-tegmark/) (MIT), [Kathleen Fisher](http://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/) (DARPA), [Suresh Jagannathan](http://intelligence.org/2014/04/30/suresh-jagannathan-on-higher-order-program-verification/) (DARPA), [André Platzer](http://intelligence.org/2014/02/15/andre-platzer-on-verifying-cyber-physical-systems/) (CMU), [Anil Nerode](http://intelligence.org/2014/03/26/anil-nerode/) (Cornell), [John Baez](http://intelligence.org/2014/02/21/john-baez-on-research-tactics/) (UC Riverside), [Jonathan Millen](http://intelligence.org/2014/04/12/jonathan-millen/) (MITRE), and [Roger Schell](http://intelligence.org/2014/06/23/roger-schell/).\n* **4 transcribed conversations** about MIRI strategy: [1](http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/), [2](http://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/), [3](http://intelligence.org/2014/02/11/conversation-with-jacob-steinhardt-about-miri-strategy/), [4](http://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/).\n* Published a thorough “[2013 in review](http://intelligence.org/2013/12/20/2013-in-review-operations/).”\n\n\n\n\n [Donate 
Now](https://intelligence.org/donate/#donation-methods)\n----------------------------------------------------------------\n\n\n\n\n### Ongoing Activities You Can Help Support\n\n\n* We’re writing an overview of the Friendly AI technical agenda (as we see it) so far.\n* We’re currently developing and testing several tutorials on different pieces of the Friendly AI technical agenda (tiling agents, modal agents, etc.).\n* We’re writing several more papers and reports.\n* We’re growing the MIRIx program, largely to grow the pool of people we can plausibly hire as full-time FAI researchers in the next couple years.\n* We’re planning, or helping to plan, multiple research workshops, including the [May 2015 decision theory workshop at Cambridge University](http://intelligence.org/2014/07/12/may-2015-decision-theory-workshop-cambridge/).\n* We’re finishing the editing for a book version of Eliezer’s *[Sequences](http://wiki.lesswrong.com/wiki/Sequences)*.\n* We’re helping to fund further [SPARC](http://sparc-camp.org/) activity, which provides education and skill-building to elite young math talent, and introduces them to ideas like effective altruism and global catastrophic risks.\n* We’re continuing to discuss formal collaboration opportunities with UC Berkeley faculty and development staff.\n* We’re helping Nick Bostrom promote his [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/) book in the U.S.\n* We’re investigating opportunities for supporting Friendly AI research via federal funding sources such as the NSF.\n\n\nOther projects are still being surveyed for likely cost and impact. See also our [mid-2014 strategic plan](http://intelligence.org/2014/06/11/mid-2014-strategic-plan/).\n\n\nWe appreciate your support for our work! [Donate now](https://intelligence.org/donate/#donation-methods), and seize a better than usual opportunity to move our work forward. 
If you have questions about donating, please contact Malo Bourgon at (510) 292-8776 or malo@intelligence.org.\n\n\n† $200,000 of total matching funds has been provided by Jaan Tallinn, Edwin Evans, and Rick Schwall.\n\n\nThe post [2014 Summer Matching Challenge!](https://intelligence.org/2014/07/21/2014-summer-matching-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-07-21T12:51:25Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "36dca6a2069bfb6cecb1ded184afb57e", "title": "May 2015 decision theory conference at Cambridge University", "url": "https://intelligence.org/2014/07/12/may-2015-decision-theory-workshop-cambridge/", "source": "miri", "source_type": "blog", "text": "MIRI, [CSER](http://cser.org/), and the philosophy department at Cambridge University are co-organizing a decision theory conference titled [**Self-Prediction in Decision Theory and AI**](http://www.phil.cam.ac.uk/events/decision-theory-conf), to be held in the Faculty of Philosophy at Cambridge University. The dates are May 13-19, 2015.\n\n\n[Huw Price](http://prce.hu/w/index.html) and [Arif Ahmed](http://www.phil.cam.ac.uk/people/teaching-research-pages/ahmed/ahmed-page) at Cambridge University are the lead organizers.\n\n\nConfirmed speakers, in the order they are scheduled to speak, are:\n\n\n* [Arif Ahmed](http://www.phil.cam.ac.uk/people/teaching-research-pages/ahmed/ahmed-page) (Cambridge)\n* [Huw Price](http://prce.hu/w/index.html) (Cambridge)\n* [Julia Haas](http://philosophy.artsci.wustl.edu/people/julia-haas) (WU St. 
Louis)\n* [Wlodek Rabinowicz](http://www.fil.lu.se/en/department/staff/WlodekRabinowicz/) (Lund)\n* [Kenny Easwaran](http://www.kennyeaswaran.org/) (Texas A&M)\n* [Preston Greene](http://www.prestongreene.com/Home.html) (NTU)\n* [Joseph Halpern](http://www.cs.cornell.edu/home/halpern/) (Cornell)\n* [Katie Steele](http://www.lse.ac.uk/researchAndexpertise/experts/profile.aspx?KeyValue=k.steele%40lse.ac.uk) (LSE)\n* [Jenann Ismael](http://www.jenanni.com/) (Arizona)\n* [H. Orri Stefánsson](https://sites.google.com/site/hostefansson/) (IFS)\n* [Benja Fallenstein](http://intelligence.org/team/#staff) (MIRI)\n* [Reuben Stern](https://wisc.academia.edu/ReubenStern) (U Wisconsin)\n* [Nate Soares](http://mindingourway.com/) (MIRI)\n* [Stuart Armstrong](http://www.fhi.ox.ac.uk/about/staff/) (Oxford)\n* [Patrick LaVictoire](https://intelligence.org/team/) (MIRI)\n* [Catrin Campbell-Moore](http://www.mcmp.philosophie.uni-muenchen.de/people/doct_fellows/moore/index.html) (MCMP)\n* [James Joyce](http://www-personal.umich.edu/~jjoyce/) (U Michigan)\n* [Alan Hájek](http://philosophy.anu.edu.au/profile/alan-hajek/) (ANU)\n* [Stuart Russell](http://www.cs.berkeley.edu/~russell/) (Berkeley)\n* [Vladimir Slepnev](http://lesswrong.com/user/cousin_it/submitted/) (Google)\n\n\n(Updated May 17, 2015.)\n\n\nThe post [May 2015 decision theory conference at Cambridge University](https://intelligence.org/2014/07/12/may-2015-decision-theory-workshop-cambridge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-07-12T08:59:48Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "ecbee4f88361627270423aa88f1b1cdc", "title": "MIRI’s July 2014 newsletter", "url": "https://intelligence.org/2014/07/01/july-newsletter-2/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\n**Research Updates**\n* Two new reports: “[Distributions allowing tiling of staged subjective EU maximizers](http://intelligence.org/2014/06/06/new-report-distributions-allowing-tiling-staged-subjective-eu-maximizers/)” and “[Non-omniscience, probabilistic inference, and metamathematics](http://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/).”\n* New analysis: [Failures of an embodied intelligence](http://lesswrong.com/lw/k68/failures_of_an_embodied_aixi/).\n* Book chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) [now published](http://intelligence.org/2014/06/19/new-chapter-cambridge-handbook-artificial-intelligence/) in the *Cambridge Handbook of Artificial Intelligence.*\n* [2 new expert interviews](http://intelligence.org/category/conversations/): [Roger Schell](http://intelligence.org/2014/06/23/roger-schell/) on long-term computer security research and [Allan Friedman](http://intelligence.org/2014/06/06/allan-friedman-cybersecurity-cyberwar/) on cybersecurity and cyberwar.\n\n\n**News Updates**\n* We’ve released our mid-2014 strategic plan [update](http://intelligence.org/2014/06/11/mid-2014-strategic-plan/).\n* There are currently [six active MIRIx groups](http://intelligence.org/mirix/) around the world. If you’re a mathematician, computer scientist, or formal philosopher, you may want to attend one of these groups, or apply for funding to run [your own independently-organized MIRIx workshop](http://intelligence.org/mirix/)!\n* Luke and Eliezer will be giving talks at the [Effective Altruism Summit](http://www.effectivealtruismsummit.com/).\n* We are **actively hiring** for [four positions](http://intelligence.org/careers/): research fellow, science writer, office manager, and director of development. Salaries + benefits are competitive, visa assistance available if needed.\n\n\n**Other Updates**\n* Luke has a [personal blog](http://lukemuehlhauser.com/) now, which often discusses, or links to articles about, long-term AI outcomes.\n* *Our Final Invention* by James Barrat is [now available in audiobook](http://smile.amazon.com/Our-Final-Invention-Artificial-Intelligence/dp/B00KMZY5NG/).\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [MIRI’s July 2014 newsletter](https://intelligence.org/2014/07/01/july-newsletter-2/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-07-01T21:00:20Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "6c63a812476d57d0e99118ec79458271", "title": "New report: “Non-omniscience, probabilistic inference, and metamathematics”", "url": "https://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/", "source": "miri", "source_type": "blog", "text": "[![Non-Omniscience](https://intelligence.org/wp-content/uploads/2014/06/Non-Omniscience.png)](https://intelligence.org/files/Non-Omniscience.pdf)UC Berkeley student and MIRI research associate Paul Christiano has released a new report: 
“[Non-omniscience, probabilistic inference, and metamathematics](https://intelligence.org/files/Non-Omniscience.pdf).”\n\n\nAbstract:\n\n\n\n> We suggest a tractable algorithm for assigning probabilities to sentences of first-order logic and updating those probabilities on the basis of observations. The core technical difficulty is relaxing the constraints of logical consistency in a way that is appropriate for bounded reasoners, without sacrificing the ability to make useful logical inferences or update correctly on evidence.\n> \n> \n> Using this framework, we discuss formalizations of some issues in the epistemology of mathematics. We show how mathematical theories can be understood as latent structure constraining physical observations, and consequently how realistic observations can provide evidence about abstract mathematical facts. We also discuss the relevance of these ideas to general intelligence.\n> \n> \n\n\nWhat is the relation between this new report and Christiano et al.’s earlier “[Definability of truth in probabilistic logic](https://intelligence.org/files/DefinabilityTruthDraft.pdf)” report, discussed by John Baez [here](http://johncarlosbaez.wordpress.com/2013/03/31/probability-theory-and-the-undefinability-of-truth/)? 
In this new report, Paul aims to take a broader look at the interaction between probabilistic reasoning and epistemological issues, from an algorithmic perspective, before continuing to think about reflection and truth in particular.\n\n\nThe post [New report: “Non-omniscience, probabilistic inference, and metamathematics”](https://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-06-23T18:13:50Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "c172ea07fb681425310401a2527422ef", "title": "Roger Schell on long-term computer security research", "url": "https://intelligence.org/2014/06/23/roger-schell/", "source": "miri", "source_type": "blog", "text": "![Roger R. Schell portrait](http://intelligence.org/wp-content/uploads/2014/06/Schell_w1300.jpg) [Roger R. Schell](https://wiki.umn.edu/CBI_ComputerSecurity/PeopleSchellRoger) is a Professor of Engineering Practice at the [University Of Southern California](http://www.usc.edu/) [Viterbi School Of Engineering](http://viterbi.usc.edu/), and a member of the founding faculty for their [Masters of Cyber Security](http://gapp.usc.edu/cyber) degree program. He is internationally recognized for originating several key security design and evaluation techniques, and he holds patents in cryptography, authentication and trusted workstation. For more than a decade he has been co-founder and an executive of [Aesec Corporation](http://www.aesec.com/), a start-up company providing verifiably secure platforms. Previously Prof. 
Schell was the Corporate Security Architect for Novell, and co-founder and vice president for Gemini Computers, Inc., where he directed development of their highly secure (what NSA called “[Class A1](http://en.wikipedia.org/wiki/Trusted_Computer_System_Evaluation_Criteria#A_.E2.80.94_Verified_protection)”) commercial product, the Gemini Multiprocessing Secure Operating System ([GEMSOS](http://en.wikipedia.org/wiki/Security-evaluated_operating_system#GEMSOS)). He was also the founding Deputy Director of NSA’s National Computer Security Center. He has been referred to as the “father” of the [Trusted Computer System Evaluation Criteria](http://en.wikipedia.org/wiki/Trusted_Computer_System_Evaluation_Criteria) (the “Orange Book”). Prof. Schell is a retired USAF Colonel. He received a Ph.D. in Computer Science from MIT, an M.S.E.E. from Washington State, and a B.S.E.E. from Montana State. NIST and NSA have recognized him with the National Computer System Security Award. In 2012 he was inducted into the inaugural class of the [National Cyber Security Hall of Fame](http://www.cybersecurityhalloffame.com/#chof_2012).\n\n\n\n\n**Luke Muehlhauser**: You have several decades of experience in high assurance computer security, much of it in the military and government, and you discussed your experience at length in [this interview](http://conservancy.umn.edu/bitstream/11299/133439/1/oh405rrs.pdf). One thing I’m curious about is this: do you know of cases where someone was worried about a computer security or safety challenge that wasn’t imminent but maybe one or two decades away, and they decided to start doing research to prepare for that challenge anyway — e.g. perhaps because they expected the solution would require a decade or two of “serial” research and/or engineering work, with each piece building on the ones before it, and they wanted to be prepared to meet the challenge near when it arrived? 
Lampson’s early identification of the “[confinement problem](http://www.cs.umd.edu/%7Ejkatz/TEACHING/comp_sec_F04/downloads/confinement.pdf)” — a decade or two before anyone detected such an attack in the wild, to my knowledge — looks to me like it might be one such example, but maybe I’m misreading the history there.\n\n\n\n\n---\n\n\n**Roger R. Schell**: First, to perhaps clarify the context of my responses, let me refine your introductory summary of my experience in high assurance computer security, which you characterize by saying “much of it in the military and government”. My responses may be better understood by recognizing that although I am currently a Professor on the faculty of the University of Southern California, I spent the previous 28 years in industry in positions ranging from founder and executive in a few information technology startups to corporate security architect and senior development manager for one of the largest software companies. This was substantially more time than the 22 years I spent in the military prior to that.\n\n\nThat said, your question about the reading of history does take me back to my military experience. You asked about whether there was a case “where someone was worried about a computer security or safety challenge that wasn’t imminent but maybe one or two decades away, and they decided to start doing research to prepare for that challenge anyway”. 
From my perspective on that history the answer is a definite yes, as I noted in my 2001 paper on “[Information security: science, pseudoscience, and flying pigs](http://lukemuehlhauser.com/wp-content/uploads/Schell-Information-Security-Science-Pseudoscience-and-Flying-Pigs.pdf).” That paper referred to a major instance of that as follows:\n\n\n\n> The failure of penetrate and patch to secure ADP systems in the late sixties helped stimulate the Ware Report [in 1970], which represented a codification of the state of understanding, which primarily was a realization of how difficult the problem was. This was one of those points where understanding of concepts came together enough to allow a significant step forward.\n> \n> \n> The Ware Report [in 1970] clearly identified the problem, but left it unresolved. That led to the Anderson Panel [in 1972], which defined the reference monitor concepts and conceived a program for evaluating and developing kernels.\n> \n> \n\n\nThat same paper notes that the “confinement problem” that you cited from Butler Lampson was recognized years earlier and generally termed the “multilevel security” problem. Butler in 1973 made a significant contribution by providing a term and a bit of a taxonomy. But as noted in the above paper:\n\n\n\n> Development of early military systems concluded that some portions of the system require particularly strong security enforcement. Specifically, this enforcement was necessary to protect data whose loss would cause extremely grave damage to the nation. Systems that handled such data, and simultaneously included interfaces and users who were not authorized to access such data, came to be known as “multilevel”.\n> \n> \n\n\nIt is pretty clear that stating the need to protect such data from loss is essentially equivalent to stating the need to solve what was later in 1973 referred to as the confinement problem. 
In fact by that time the “program” defined by Anderson was well underway, and it was, as you put it, “expected the solution would require a decade or two of ‘serial’ research and/or engineering work”.\n\n\nSo, I agree with the substance of your conclusion that the confinement problem is one such example, but I would refine your reading of history to note that it was “decided to start doing research to prepare for that challenge” quite a while before the multilevel security challenge was termed the “confinement problem”. I think that refinement is consistent with the nice paper last year by Alex Crowell et al., entitled “[The Confinement Problem: 40 Years Later](http://lukemuehlhauser.com/wp-content/uploads/Crowell-et-al.-The-Confinement-Problem-40-Years-Later.pdf)”, which cites the [1973 report](http://lukemuehlhauser.com/wp-content/uploads/Bell-LaPadula-Secure-computer-systems-mathematical-foundations.pdf) by D. E. Bell and L. J. LaPadula specifically directed at a mathematical model to address the multilevel security problem.\n\n\n\n\n---\n\n\n**Luke**: If someone wanted to find additional examples of people working for a decade or two “in advance” on a difficult computer security or safety challenge, who else would you recommend they ask, and where would you recommend they look for examples?\n\n\n\n\n---\n\n\n**Roger**: I would not expect to find a lot of examples of people working for a decade or two “in advance” for a couple of reasons. Even the few cases of which I am aware are not particularly notable for having the significant results successfully brought to bear to meet the challenge when it arrived.\n\n\nFirst, in the U.S. business culture, rewards for executives seem to significantly diminish with the amount of time until investments result in return. Therefore, it is quite difficult to get a business to make a significant and persistent commitment to have people working on a challenge a decade or two “in advance”.\n\n\nSecond, in the U.S. 
government at the end of the cold war, there seemed to be a significant loss of urgency in seriously addressing the problem of dealing with a witted adversary whose likely tool of choice is software subversion. The early efforts, such as the Anderson panel, were focused on high assurance security to address what had more recently been termed the advanced persistent threat (APT). This was (and is) a genuinely “difficult” security challenge. For a number of years the government had given little attention to high assurance security.\n\n\nIn terms of past examples, the substantial Multics efforts over a number of years by Honeywell (initially General Electric) are one of the few. Decades in advance of widespread commercial need, Multics addressed the challenge of creating a “computing utility” for which security was a central value proposition. Many decades later, this vision has essentially been given the new name of “cloud computing” – unfortunately without significant attention to security. This early commercial investment was not particularly well-rewarded.\n\n\nIntel provides a second example. The Multics innovations for security significantly influenced Intel to make investments in their x86 architecture in anticipation of demand for security. Their inclusion of Multics-like hardware segmentation and protection rings for security was not easy when transistors on a chip were scarce. Again, Intel did not get a good return on this investment. As Professor Bill Caelli of Australia pointed out in his paper on trusted systems, the GEMSOS security kernel (for which I was the architect) was a rare example of an operating system actually using this powerful hardware support. This RTOS from a small business hardly constituted a major market win for the Intel investment in segmentation and protection rings.\n\n\nIn terms of other people to ask about this, I don’t know many. Dr. 
David Bell is the one who has rather systematically looked at the evolution of solutions for difficult computer security challenges, as reflected in his [2005 retrospective](http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-Back-at-the-Bell-La-Padula-Model.pdf) invited paper (plus the [addendum](http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-Back-Addendum.pdf)) on his security model work at ACSAC several years ago. It is clear that he has rather carefully thought about these issues.\n\n\n\n\n---\n\n\n**Luke**: From your perspective, what are some of the most important contemporary avenues of research on high assurance security systems?\n\n\nE.g. in software *safety* the experts will list things like formal verification, program synthesis, [simplex architectures](http://www.cs.uiuc.edu/class/sp08/cs598tar/Papers/Simplex.pdf), verified libraries and compilers like in [VST](http://vst.cs.princeton.edu/), progress in formal *validation* à la [Cimatti et al. (2012)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Cimatti-et-al-Validation-of-Requirements-for-Hybrid-Systems-a-Formal-Approach.pdf), “clean-slate” high assurance projects like [HACMS](http://www.darpa.mil/Our_Work/I2O/Programs/High-Assurance_Cyber_Military_Systems_%28HACMS%29.aspx), systems-based approaches like [Leveson (2012)](http://smile.amazon.com/Engineering-Safer-World-Systems-Thinking-ebook/dp/B00ELVWF70/), and tools that make high assurance methods easier to apply. There’s some overlap between software safety and security, but what avenues of research would you name in a high assurance *security* context?\n\n\n\n\n---\n\n\n**Roger**: It seems to me that some of the most important contemporary avenues of research on high assurance security systems are those related to what over the years have been persistent “hard problems”. 
In my [2001 ACSAC](http://lukemuehlhauser.com/wp-content/uploads/Schell-Information-Security-Science-Pseudoscience-and-Flying-Pigs.pdf) invited essay on Information security, I listed half a dozen “Remaining Hard Problems”:\n\n\n1. Verifying the absence of trap doors in hardware.\n2. Verifying the absence of trap doors and other malicious software in development tools.\n3. Covert timing channels.\n4. Covert channels in end-to-end encryption systems.\n5. Formal methods for corresponding source code to a formal specification, and object code to source.\n6. Denial of service attacks.\n\n\nIt is not surprising, I suppose, that many of these were also in the high assurance security challenges identified more than 15 years earlier in the TCSEC ([Orange Book](http://en.wikipedia.org/wiki/Trusted_Computer_System_Evaluation_Criteria)) section on “Beyond Class (A1)”. I am disappointed in our profession and its sponsors, and have to say that from my perspective, this is still a reasonable list of hard problems, and remains an important set of contemporary avenues of research. Unfortunately, relative to the importance of high assurance security, very little well-focused research effort is being directed to them. From this list, one area is receiving some attention, although often buried in larger, less significant, projects: recent research in hardware verification is producing valuable results – which was at the top of my list of hard problems.\n\n\nIn addition to the hard problems, there is an additional area that I consider at least as important in terms of the potential for dramatic positive impact. This is what David Bell in his couple of Looking Back papers I referenced earlier called for in terms of pursuit of enhanced high assurance “security in the form of crafting and sharing reference implementations of widely needed components.” David called for “Producing security archetypes or reference implementations of common networking components”. 
Such reference implementations are secure systems capabilities that technologists think they can figure out how to produce, but the devil is in the details, so only research that includes an implementation can confirm or deny that hypothesis. He provided his list of proposed reference implementations. I have more recently in various public presentations proposed my own list which includes the following:\n\n\n1. MLS network attached storage (NAS)\n2. High Assurance MLS Linux, Unix, \*ix\n3. Guards, filters, and other (CDSs)\n4. Networked Windows (Thin Client)\n5. Real-time exec (appliances)\n6. Critical infrastructure platform\n7. Identity mgt PKI (Quality Attribute)\n8. MLS handheld network devices (e.g., PDA)\n9. Confined financial apps (e.g., credit card)\n\n\nIn summary, my bottom line is that I believe that the most important contemporary avenues of research on high assurance security systems fall into these two areas: (1) addressing the remaining hard problems for the future and (2) research projects that include creating reference implementations that provide ways to leverage the successful research results of the past for practical secure systems.\n\n\n\n\n---\n\n\n**Luke**: You write “I am disappointed in our profession and its sponsors…” In retrospect, what seem to be the major obstacles to faster progress on the hard problems of computer security?\n\n\nTwo possible explanations that come to mind are: (1) Researchers and funders prefer easier research projects, because they have a higher chance of “success” even though the social value may be lower. 
And/or perhaps (2) hard problems tend to require long-term research efforts, and current institutions are less well-suited to executing long-term efforts.\n\n\nBut these are just two of my guesses; I’d like to hear your perspective on what the major obstacles seem to be.\n\n\n\n\n---\n\n\n**Roger**: Although not particularly insightful by itself, to start I think it is fair to say that the major reason for the lack of faster progress on the hard problems of computer security is the lack of substantial resources being applied in an informed way to address them. And that does quickly lead to your question of the major obstacles to doing so. It seems to me there are two tightly related aspects to that. First, there must be funders who are willing and able to sponsor such work. Second there must be motivated researchers interested in pursuing such research. Both are needed. Although practically speaking what researchers advocate and promote can significantly affect the funders, they are not likely to continue pursuing paths where funders seem independently disinterested (or even hostile).\n\n\nI have already noted that in our culture rewards for executives seem to significantly diminish with the amount of time until investments result in return. Although that makes it difficult to get a commitment to long-term efforts, this is not unique to computer security, and various sustained research efforts are pursued notwithstanding that. Unfortunately, there is evidence that computer security continues to face some additional unique challenges in the availability of resources for addressing hard problems where the results can provide significantly higher assurance of security.\n\n\nAmong these challenges is the rather unusual characteristic of significant vested interests that may not really want high assurance security. 
I touched on some of this in my oral history interview you mentioned at the beginning of our exchange, and others have alluded to additional observations of this sort.\n\n\nThe comments that have been made related to potential financial vested interests include the following:\n\n\n1. Cyber security is a multi-billion dollar industry, with much of that revenue generated as the result of the rampant flaws in consumer products from low-assurance cyber components. In my history interview I mentioned that the Black Forest Group, which is a consortium of international Fortune 50 kind of companies, gave up their pursuit of a high assurance Public Key Infrastructure (PKI) when “they concluded that the vested interests against high assurance just made it impractical”. That vested interest environment can discourage commercial sponsorship for working on the hard problems.\n2. Beyond consumer products, I also noted a couple of anecdotal reports from the aerospace industry discouraging broadly applicable, reusable high assurance solutions, because it was a threat to a significant revenue stream they have from repeatedly addressing security on an ad hoc basis in similar contexts.\n3. It seems the research community can have its own vested interest in having unsolved problems to work on. Several have commented on how what seems to be an almost systematic loss of corporate memory facilitates resources for projects to reinvent the wheel. That does little to encourage researchers to give attention to the hard problems. I noted in my history interview reports that “people in the research community fought strongly against having The Orange Book be a standard because it dampened the interest in research”.\n4. 
The [1997 IEEE history paper](http://lukemuehlhauser.com/wp-content/uploads/MacKenzie-Pottinger-Mathematics-technology-and-trust.pdf) by MacKenzie and Pottinger reports the serious fragmentation of security research attention after moving away from a focused effort based on the Orange Book, where “the path to achieving computer security appeared clear.” Faster progress on the hard problems of computer security is unlikely in the absence of relatively focused research attention.\n\n\nIn addition, there have been separate comments about the challenges related to government policy issues:\n\n\n1. There has long been a tension between policy for security solutions whose technology is closed and restricted (e.g., classified, as in government cryptography) versus technology that is open and transparent (e.g., the Orange Book). This is discussed in some detail in the 1985 Harvard University [paper](http://www.pirp.harvard.edu/pubs_pdf/jelen/jelen-p85-8.pdf) on “Information Security” by George Jelen (an NSA employee), and the policy issues persist to this day. Significant influence toward restricted results does not enhance the prospects for researching security solutions. The cancellation of major commercial work on the difficult problem of a highly secure virtual machine monitor (sorely needed today for cloud computing) is a real-world example from the past of the draconian effect of government restrictions on access to research results. Paul Karger’s [1991 IEEE paper](http://lukemuehlhauser.com/wp-content/uploads/Karger-et-al-A-retrospective-on-the-VAX-VMM-security-kernel.pdf) reported that a major reason for that cancellation was that “U.S. State Department export controls on operating systems at the B3 and A1 levels are extremely onerous and would likely have interfered with many potential sales”.\n2. 
What has been called “the equities issue” reflects that intelligence gathering can benefit from exploiting vulnerabilities, and thus can create a powerful vested interest in there NOT being major progress in the hard problems of computer security. Bruce Schneier in his [May 2008 blog](http://archive.wired.com/politics/security/commentary/securitymatters/2008/05/blog_securitymatters_0501) argues that this is particularly a challenge when the same agency has major responsibility for both defensive solutions and exploiting vulnerabilities, e.g., he says “The equities issue has long been hotly debated inside the NSA.”\n3. Government policy can strongly favor dominant government control over security solutions, rather than encourage commercial development, especially for high assurance where it really matters. As mentioned by David Bell in the two Looking Back papers I cited earlier, a powerful monopolist ploy is to promise high assurance government endorsed solutions in the future to discourage the near-term application of resources supportive of commercial offerings. David reported that “Boeing’s Dan Schnackenberg believed that NSA through MISSI put all the Class A1 vendors out of business,” and I personally saw that kind of activity at that time, as well as more recently. There are other examples beyond MISSI with names like SAT, NetTop, SELinux, MILS and HAP, and, like MISSI, not one of them ever actually delivered verified high assurance, e.g., a Class A1 evaluation. As David Bell noted, such monopolistic government policy discourages commercial investment in making progress, including on hard problems.\n4. 
The meager resources applied to high assurance defenses (which is the focus of the hard problems) against a witted adversary pale in comparison to the billions being spent on surveillance. As I said about surveillance in [my July 2012 keynote article for ERCIM](http://ercim-news.ercim.eu/en90/keynote), “this misplaced reliance stifles introduction of proven and mature technology that can dramatically reduce the cyber risks to privacy… an excuse for overreaching surveillance to capture and disseminate identifiable information without a willing and knowing grant of access.”\n\n\nIn summary, I believe the primary reason we have not seen faster progress on the hard problems of computer security is the persistent decision not to apply significant resources. A major class of obstacles to applying resources is the strong vested interests that are somewhat unique to computer security. There are not only commercial financial vested interests but also government policy vested interests that for years seem to have successfully created major obstacles.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Roger!\n\n\nThe post [Roger Schell on long-term computer security research](https://intelligence.org/2014/06/23/roger-schell/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-06-23T14:25:00Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f6cc49f98a47cc7a045926bc202ffc0d", "title": "New chapter in Cambridge Handbook of Artificial Intelligence", "url": "https://intelligence.org/2014/06/19/new-chapter-cambridge-handbook-artificial-intelligence/", "source": "miri", "source_type": "blog", "text": "*[![cambridge handbook of AI](https://intelligence.org/wp-content/uploads/2014/06/cambridge-handbook-of-AI.jpg)The Cambridge Handbook of Artificial Intelligence](http://smile.amazon.com/Cambridge-Handbook-Artificial-Intelligence-ebook/dp/B00JXII7JQ/)* has been released. 
It contains a chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) called “The Ethics of Artificial Intelligence,” available in PDF [here](https://intelligence.org/files/EthicsofAI.pdf).\n\n\nThe abstract reads:\n\n\n\n> The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.\n> \n> \n\n\nThe post [New chapter in Cambridge Handbook of Artificial Intelligence](https://intelligence.org/2014/06/19/new-chapter-cambridge-handbook-artificial-intelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-06-20T00:44:04Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "c3627e12942f93cb6bffb3730496d195", "title": "Our mid-2014 strategic plan", "url": "https://intelligence.org/2014/06/11/mid-2014-strategic-plan/", "source": "miri", "source_type": "blog", "text": "#### Summary\n\n\nEvents since MIRI’s [April 2013 strategic plan](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:\n\n\n* Decrease our public outreach efforts, leaving most of that work to 
[FHI](http://www.fhi.ox.ac.uk/) at Oxford, [CSER](http://cser.org/) at Cambridge, [FLI](http://thefutureoflife.org/) at MIT, [Stuart Russell](http://www.cs.berkeley.edu/~russell/research/future/) at UC Berkeley, and others (e.g. [James Barrat](http://www.jamesbarrat.com/)).\n* Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.\n* Increase our investment in our Friendly AI (FAI) technical research agenda.\n\n\nThe [reasons](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) for continuing along this path remain largely the same, but I have more confidence in it now than I did before. This is because, since April 2013:\n\n\n* We produced much [Friendly AI research progress](http://intelligence.org/all-publications/) on many different fronts, and do not *remotely* feel like we’ve exhausted the progress that could be made if we had more researchers, demonstrating that the FAI technical agenda is highly tractable.\n* FHI, CSER, and FLI have had substantial public outreach success, in part by leveraging their university affiliations and impressive advisory boards.\n* We’ve heard that as a result of this outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I *do* about it?” Basically, they want to see an FAI technical agenda, and MIRI is developing that technical agenda already (see below).\n\n\nIn short, I think we tested and validated MIRI’s new strategic focus, and now it is *time to scale*. Thus, our **top goals** for the next 6-12 months are to:\n\n\n1. Produce more Friendly AI research.\n2. Recruit more Friendly AI researchers.\n3. 
Fundraise heavily to support those activities.\n\n\n\nOur low-level tactics for achieving these high-level goals will probably change quickly as we try things and learn, but I’ll sketch our *current* tactical plans below anyway.\n\n\n \n\n\n#### 1. More Friendly AI research\n\n\nOur [workshops](http://intelligence.org/research/) have produced much FAI research progress, and they allowed us to recruit our [two new FAI researchers](http://intelligence.org/2014/03/13/hires/) of 2014, but we’ve learned that it’s difficult for “newcomers” (people who haven’t been following the research for a long time) to contribute to FAI research at our workshops. Newcomers need better tutorials and more time to think about the research problems before they can contribute much at the cutting edge (see the next section). Therefore our efforts toward novel research progress in 2014 will focus on:\n\n\n* Organizing research workshops attended mostly or solely by “veterans” (people who have been following the research for a long time), such as our [May 2014 workshop](http://intelligence.org/workshops/#may-2014).\n* Inviting individual researchers to visit MIRI for a few days at a time to work with us on very specific research problems with which they are already familiar.\n* Giving our staff FAI researchers time to make theoretical progress on their own and with each other.\n\n\n \n\n\n#### 2. Recruiting\n\n\nHere I talk about “recruiting” rather than “creating a field,” but in fact most of the planned activities below accomplish both ends simultaneously, because they find — and help to activate — new FAI researchers. Still, where there’s a steep tradeoff between recruiting and helping to create a field, we focus on recruiting. 
FAI research is best done as a full-time career with minimal distractions, and right now MIRI is the only place [offering such jobs](http://intelligence.org/careers/research-fellow/).\n\n\nWe plan to try many different things to see what most helps for recruiting:\n\n\n* We plan to publish an overview of the FAI technical agenda as we see it so far. This should make it easier for potential FAI researchers to engage.\n* We plan to prepare, test, improve, and then deliver (in many different cities) a series of tutorials on different parts of the FAI technical agenda. Besides giving tutorial lectures, we may also organize one-day or two-day workshops that are just for tutorials and Q&A sessions rather than being aimed at making novel research progress.\n* We plan to release a book version of Yudkowsky’s [*Less Wrong Sequences*](http://wiki.lesswrong.com/wiki/Sequences), in part because we’ve noticed that those who have contributed most to FAI research progress have read — and been influenced by — those writings, and this seems to help when they’re doing FAI research.\n* We’ll continue to help fund [SPARC](http://sparc-camp.org/), the lead organizer of which is a frequent MIRI workshop participant (Paul Christiano).\n* We’ll continue to fund independently-organized FAI workshops via our [MIRIx program](http://intelligence.org/mirix/).\n* We may offer widely-advertised cash prizes for certain kinds of research progress, e.g. 
a winning decision algorithm submitted to a [robust program equilibrium](http://arxiv.org/abs/1401.5577) tournament, or a certain kind of solution to the [Löbian obstacle](https://intelligence.org/files/ProblemsofSelfReference.pdf).\n* We may advertise in venues such as *Notices of the AMS* (widely read by mathematicians) and *Communications of the ACM* (widely read by computer scientists).\n* We’re in discussion with development staff at UC Berkeley about a variety of potential MIRI-Berkeley collaborations that could make it easier for researchers to collaborate heavily with MIRI from within a leading academic institution.\n\n\nWe’ve also made our [job offer to FAI researchers](http://intelligence.org/careers/research-fellow/) more competitive.\n\n\n \n\n\n#### 3. Fundraising\n\n\nTo support our growing Friendly AI research program, we’ve set a “stretch” goal to [raise $1.7 million in 2014](http://intelligence.org/2014/04/02/2013-in-review-fundraising/).\n\n\nTo reach toward that goal, we [participated in SV Gives](http://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/) on May 6th and raised ~$110,000, and also won ~$61,000 in matching and prizes from sources that otherwise wouldn’t have donated to MIRI. 
Our thanks to everyone who donated!\n\n\nAs usual, we’ll also run our major summer and winter fundraising drives, starting in July and December respectively.\n\n\nThis year we’ll explore the possibility of corporate sponsorships, but we are not optimistic about that strategy, because MIRI’s work has little near-term commercial relevance, and we’ve been counseled that corporate sponsors often require more in return from a sponsored organization than the funds are worth.\n\n\nWe’re more optimistic about the potential returns from improving our donor stewardship and donor prospecting, which we’ve already begun, and we’re seeking to hire a [full-time Director of Development](http://intelligence.org/careers/director-of-development/).\n\n\nDonor prospecting is unlikely to result in new donors in 2014, but will be important for future fundraising years. This is also the case for our recent efforts to find and apply for grants from both private and public grantmakers. We don’t expect to win much in grants in 2014, but we expect to learn in detail what else we need to do over the next couple years so that we can win large grants in the future, and thereby diversify our funding sources.\n\n\nThe post [Our mid-2014 strategic plan](https://intelligence.org/2014/06/11/mid-2014-strategic-plan/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-06-11T17:29:50Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7e89e1523e11d3ea1da7d447a0fa6c02", "title": "New report: “Distributions allowing tiling of staged subjective EU maximizers”", "url": "https://intelligence.org/2014/06/06/new-report-distributions-allowing-tiling-staged-subjective-eu-maximizers/", "source": "miri", "source_type": "blog", "text": "[![Distributions allowing report](https://intelligence.org/wp-content/uploads/2014/06/Distributions-allowing-report.png)](https://intelligence.org/files/DistributionsAllowingTiling.pdf)MIRI has released a new 
technical report by Eliezer Yudkowsky, “[Distributions allowing tiling of staged subjective EU maximizers](https://intelligence.org/files/DistributionsAllowingTiling.pdf),” which summarizes some work done at MIRI’s May 2014 workshop.\n\n\nAbstract:\n\n\n\n> We consider expected utility maximizers making a staged series of sequential choices, and replacing themselves with successors on each time-step (to represent self-modification). We wanted to find conditions under which we could show that a staged expected utility maximizer would replace itself with another staged EU maximizer (representing stability of this decision criterion under self-modification). We analyzed one candidate condition and found that the “Optimizer’s Curse” implied that maximization at each stage was not actually optimal. To avoid this, we generated an extremely artificial function *η* that should allow expected utility maximizers to tile. We’re still looking for the exact necessary and sufficient condition.\n> \n> \n\n\nThe post [New report: “Distributions allowing tiling of staged subjective EU maximizers”](https://intelligence.org/2014/06/06/new-report-distributions-allowing-tiling-staged-subjective-eu-maximizers/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-06-06T21:56:07Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a44794d73cfa6318fab0fccfc342eae6", "title": "Allan Friedman on cybersecurity and cyberwar", "url": "https://intelligence.org/2014/06/06/allan-friedman-cybersecurity-cyberwar/", "source": "miri", "source_type": "blog", "text": "![Allan Friedman](https://intelligence.org/wp-content/uploads/2014/06/Allan-Friedman.jpg)MIRI recently interviewed [Allan Friedman](http://allan.friedmans.org/), co-author of *[Cybersecurity and Cyberwar: What Everyone Needs to Know](http://smile.amazon.com/Cybersecurity-Cyberwar-What-Everyone-Needs-ebook/dp/B00GJG6ZB2/)*.\n\n\nWe interviewed Dr. 
Friedman about cyberwar because the regulatory and social issues raised by the prospect of cyberwar may overlap substantially with those that will be raised by the prospect of advanced autonomous AI systems, such as those studied by MIRI.\n\n\nOur [GiveWell-style](http://www.givewell.org/conversations) notes on this conversation are available in PDF format [here](https://intelligence.org/wp-content/uploads/2014/06/Allan-Friedman-on-cybersecurity-issues.pdf).\n\n\nThe post [Allan Friedman on cybersecurity and cyberwar](https://intelligence.org/2014/06/06/allan-friedman-cybersecurity-cyberwar/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-06-06T20:58:45Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "d4cec3b7b6fb0efaef09afb534d36c1e", "title": "MIRI’s June 2014 Newsletter", "url": "https://intelligence.org/2014/06/01/miris-june-2014-newsletter/", "source": "miri", "source_type": "blog", "text": "| | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nDear friends,\n\n\nThe SV Gives fundraiser was a big success for [many organizations](http://www.mercurynews.com/sal-pizarro/ci_25716819/pizarro-silicon-valley-gives-raises-7-9-million), and [especially for MIRI](http://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/). 
Thanks so much, everyone!\n**Research Updates**\n* Two new papers: “[Program equilibrium…](http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/)” (accepted to the MIPC workshop at AAAI-14) and “[Problems of self-reference…](http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/)” (accepted for AGI-14).\n* First report from our May workshop: “[Loudness: on priors over preference relations](http://intelligence.org/2014/05/30/new-report-loudness-priors-preference-relations/).” (Other reports forthcoming.)\n* New analysis: [Exponential and non-exponential trends in information technology](http://intelligence.org/2014/05/12/exponential-and-non-exponential/).\n* [9 new expert interviews](http://intelligence.org/category/conversations/), including e.g. [Michael Fisher](http://intelligence.org/2014/05/09/michael-fisher/) on verifying autonomous systems.\n\n\n\n**News Updates**\n* Our **MIRIx program** wants to fund [your independently-organized Friendly AI workshop](http://intelligence.org/mirix/).\n* We are **actively hiring** for [four positions](http://intelligence.org/careers/): research fellow, science writer, office manager, and director of development. 
Salaries + benefits are competitive, visa assistance available if needed.\n* Now available in paperback: *[Smarter Than Us: The Rise of Machine Intelligence](http://smile.amazon.com/Smarter-Than-Us-Machine-Intelligence/dp/1939311098/)*.\n* [Christof Koch and Stuart Russell on machine superintelligence](http://intelligence.org/2014/05/13/christof-koch-stuart-russell-machine-superintelligence/).\n\n\n**Other Updates**\n* The Future of Life Institute’s inaugural talks and panel: [The Future of Technology: Benefits and Risks](http://techtv.mit.edu/videos/29155-the-future-of-technology-benefits-and-risks) (video).\n* A bit of humor: [machine ethics on the *Colbert Report*](http://thecolbertreport.cc.com/videos/o2wt62/morality-lessons-for-robots).\n* [New honors thesis](https://intelligence.org/wp-content/uploads/2014/10/Hintze-Problem-Class-Dominance-In-Predictive-Dilemmas.pdf) by Danny Hintze compares four decision procedures, including Yudkowsky’s TDT and Dai’s UDT.\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n |\n| |\n\n |\n\n\n \n\n\nThe post [MIRI’s June 2014 Newsletter](https://intelligence.org/2014/06/01/miris-june-2014-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-06-02T03:00:15Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "82d34f327ed56d779aa2f4a1c1c5c411", "title": "Milind Tambe on game theory in security applications", "url": "https://intelligence.org/2014/05/30/milind-tambe/", "source": "miri", "source_type": "blog", "text": "![Milind Tambe portrait](http://intelligence.org/wp-content/uploads/2014/05/Tambe_w1049.jpg)[Milind Tambe](http://teamcore.usc.edu/tambe/) is Helen N. and Emmett H. Jones Professor in Engineering at the [University of Southern California](http://www.usc.edu/) (USC). 
He is a fellow of [AAAI](http://www.aaai.org/) (Association for Advancement of Artificial Intelligence) and [ACM](http://www.acm.org/) (Association for Computing Machinery), as well as recipient of the ACM/SIGART [Autonomous Agents Research Award](http://sigai.acm.org/awards/autonomous_agents_award.html), [Christopher Columbus Fellowship Foundation Homeland security award](http://www.ccolumbusfoundationawards.org/homeland/past.cfm), the [INFORMS Wagner prize for excellence in Operations Research Practice](https://www.informs.org/Recognize-Excellence/Award-Recipients/Milind-Tambe), the [Rist Prize of the Military Operations Research Society](http://create.usc.edu/2011/08/create_reserachers_awarded_the.html), IBM Faculty Award, [Okawa Foundation Faculty Research Award](http://www.okawa-foundation.or.jp/en/activities/research_grant/index.html), RoboCup scientific challenge award, [USC Associates Award for Creativity in Research](https://research.usc.edu/associates-award-previous-recipients/) and [USC Viterbi School of Engineering use-inspired research award](http://viterbi.usc.edu/academics/awards/viterbi-school-awards/vsoe_research_award.htm).\n\n\nProf. Tambe has contributed several foundational papers in agents and multiagent systems; this includes areas of multiagent teamwork, distributed constraint optimization (DCOP) and security games. For this research, he has received the “[influential paper award](http://viterbi.usc.edu/news/news/2012/professor-milind-tambe.htm)” from the International Foundation for Agents and Multiagent Systems (IFAAMAS), as well as with his research group, best paper awards at a number of premier Artificial Intelligence Conferences and workshops; these have included multiple best paper awards at the International Conference on Autonomous Agents and Multiagent Systems and International Conference on Intelligent Virtual Agents.\n\n\nIn addition, the “security games” framework and algorithms pioneered by Prof. 
Tambe and his research group are now deployed for real-world use by several agencies including the US Coast Guard, the US Federal Air Marshals service, the Transportation Security Administration, LAX Police and the LA Sheriff’s Department for security scheduling at a variety of US ports, airports and transportation infrastructure. This research has led to him and his students receiving the US Coast Guard Meritorious Team Commendation from the Commandant, US Coast Guard First District’s Operational Excellence Award, Certificate of Appreciation from the US Federal Air Marshals Service and special commendation given by the Los Angeles World Airports police from the city of Los Angeles. For his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring award and the ACM recognition of service award. Recently, he co-founded [ARMORWAY](http://armorway.com/), a company focused on risk mitigation and security resource optimization, where he serves on the board of directors. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.\n\n\n\n**Luke Muehlhauser**: In [Tambe et al. (2013)](http://teamcore.usc.edu/papers/2014/AAAISS14.pdf), you and your co-authors give an overview of game theory in security applications, saying:\n\n\n\n> Game theory is well-suited to adversarial reasoning for security resource allocation and scheduling problems. Casting the problem as a Bayesian Stackelberg game, we have developed new algorithms for efficiently solving such games to provide randomized patrolling or inspection strategies.\n> \n> \n\n\nYou then give many examples of game-theoretic algorithms used for security at airports, borders, etc.\n\n\nIs there evidence to suggest that the introduction of these systems has improved the security of the airports, borders, etc. relative to whatever security processes they were using before?\n\n\n\n\n\n---\n\n\n**Milind Tambe**: This is an important and wonderful question. 
There is a long answer to this question that compiles all of our evidence (this is actually a [book](http://smile.amazon.com/Security-Game-Theory-Algorithms-Deployed/dp/1107096421/ref=nosim?tag=793775876-20) chapter). I will attempt to provide a shorter answer here and will attempt to provide pointers to our publications that compile this evidence.\n\n\nAs you note, we are fortunate that many of our game-theoretic algorithms for security resource optimization (via optimal scheduling, allocation) have jumped out of our lab and are now deployed for real use in many applications. As we write papers about these deployed applications, evidence showing the benefits of these algorithms is definitely an important issue that is necessary for us to answer. Unlike our more “mainstream” papers, where we can run 1000s of careful simulations under controlled conditions, we cannot conduct such experiments in the real world with our deployed applications. Nor can we provide a proof of 100% security – there is no such thing. So what can we do? In our evidence gathering, we have focused on the question of: are we better off with our tools based on computational game theory than what was being done previously, which was typically relying on human schedulers or a simple dice roll for security scheduling (simple dice roll is often the other “automation” that is used or offered as an alternative to our methods). Now within my area of Artificial Intelligence, that AI programs can beat humans at complex scheduling tasks is not controversial; but regardless we have used the following methods to provide evidence that our game-theoretic algorithms indeed perform better. These methods range from simulations to actual field tests.\n\n\n1. Simulations (including using a “machine learning” attacker): We provide simulations of security schedules, e.g., randomized patrols, assignments, comparing our approach to earlier approaches based on techniques used by human schedulers. 
We have a machine learning based attacker who learns any patterns and then chooses to attack the facility being protected. Game-theoretic schedulers are seen to perform significantly better in providing higher levels of protections ([Pita et al. 2008](http://teamcore.usc.edu/papers/2008/AAMASind2008Final.pdf); [Jain et al. 2010](http://teamcore.usc.edu/papers/2010/09Interfaces.pdf)).\n2. Human adversaries in the lab: We have worked with a large number of human subjects and security experts (security officials) to have them get through randomized security schedules, where some are schedules generated by our algorithms, and some are baseline approaches for comparison. Human subjects are paid money based on the reward they collect by successfully intruding through our security schedules; again our game-theoretic schedulers perform significantly better ([Pita et al. 2009](http://commonsenseatheism.com/wp-content/uploads/2014/05/Pita-et-al-Effective-solutions-for-real-world-Stackelberg-games-when-agents-must-deal-with-human-uncertainties.pdf)).\n3. Actual security schedules before and after: For some security applications, we have data on how scheduling was done by humans (before our algorithms were deployed) and how schedules are generated after deployment of our algorithms. For measures of interest to security agencies, e.g., predictability in schedules, we can compare the actual human-generated schedules vs our algorithmic schedules. Again, game-theoretic schedulers are seen to perform significantly better by avoiding predictability and yet ensuring that more important targets are covered with higher frequency of patrols. Some of this data is published ([Shieh et al. 2012](http://commonsenseatheism.com/wp-content/uploads/2014/05/Shieh-et-al-PROTECT-a-deployed-game-theoretic-system-to-protect-the-ports-of-the-United-States.pdf)).\n4. 
“Adversary” teams simulate attack: In some cases, security agencies have deployed adversary perspective teams or “mock attacker teams” that will attempt to conduct surveillance to plan attacks; this is done before and after our algorithms have been deployed to check which security deployments worked better. This was done by the US Coast Guard indicating that the game-theoretic scheduler provided higher levels of deterrence ([Shieh et al. 2012](http://commonsenseatheism.com/wp-content/uploads/2014/05/Shieh-et-al-PROTECT-a-deployed-game-theoretic-system-to-protect-the-ports-of-the-United-States.pdf)).\n5. Real-time comparison: human vs algorithm: This is a test we ran on the metro trains in Los Angeles. For a day of patrol scheduling, we provided head-to-head comparison of human schedulers trying to schedule 90 officers on patrols vs an automated game-theoretic scheduler. External evaluators then provided an evaluation of these patrols; the evaluators did not know who had generated each of the schedules. The results show that while human schedulers required significant effort even for generating one schedule (almost a day), and the game-theoretic scheduler ran quickly, the external evaluators rated the game theoretic schedulers higher (with statistical significance). ([Delle Fave et al. 2014a](http://aamas2014.lip6.fr/proceedings/aamas/p1363.pdf), Delle Fave et al. under consideration).\n6. Actual data from deployment: This is another test run on the metro trains in LA. We had a comparison of game-theoretic scheduler vs an alternative (in this case a uniform random scheduler augmented with real time human intelligence) to check fare evaders. In 21 days of patrols, the game-theoretic scheduler led to significantly higher numbers of fare evaders captured than the alternative. ([Delle Fave et al. 2014a](http://aamas2014.lip6.fr/proceedings/aamas/p1363.pdf), [Delle Fave et al. 2014b](http://teamcore.usc.edu/papers/2014/jair2014-execution.pdf)).\n7. 
Domain expert evaluation (internal and external): There have been, of course, significant numbers of evaluations done by domain experts comparing their own scheduling methods with game-theoretic schedulers, and repeatedly the game-theoretic schedulers have come out ahead. The fact that our software has now been in use for several years at several different important airports, ports, air-traffic facilities, and so on, is an indicator to us that the domain experts must consider this software of some value.\n\n\nAll of this evidence suggests that the game-theoretic approach to security resource optimization performs significantly better than its competitors, i.e., human schedulers or simple randomization. Humans were seen to fall into predictable patterns; and indeed this is an extremely complex adversarial scheduling/planning/allocation task, where humans have to reason about trillions of possible schedules (actually even more), which is time-consuming and very difficult for humans. It would seem, then, that this complex task should be handed over to software, letting humans concentrate on the more important task of actually physically providing security.\n\n\n\n\n---\n\n\n**Luke**: Diana Spears [mentioned](http://intelligence.org/2014/04/09/diana-spears/) that you teach a course on “Artificial Intelligence and Science Fiction,” which includes a section on your “research on Asimovian multiagents.” Is [this 2007 syllabus](http://teamcore.usc.edu/tambe/freshman-seminar.htm) still a pretty accurate outline of the course? What work on “Asimovian multiagents” do you discuss? She mentioned [Schurr et al. (2006)](http://www.aptima.info/publications/2006_Schurr_Varakantham_Bowring_Tambe_Grosz.pdf); is there any other work on that topic you’ve done?\n\n\n\n\n---\n\n\n**Milind**: The last iteration of “Understanding intelligent agents via science fiction” was taught in 2010. 
Here is the [syllabus for it](http://teamcore.usc.edu/Courses/CSCI300/).\n\n\nThis course was jointly developed with my former PhD student, Prof. Emma Bowring, now Associate Professor at Univ of Pacific, and it really was her idea to develop it while she was still a student at USC. We have written about this course ([Bowring & Tambe 2009](http://teamcore.usc.edu/papers/2009/BowringTambe2.pdf)).\n\n\nWhile we did use Asimov’s short stories extensively, the goal here is to really introduce core concepts in agents and multiagent systems, as can be seen from the syllabus. So the focus is more on using these stories, Star Trek clips, and other science fiction as a motivation to introduce core concepts in AI/Agents and Multiagent systems, starting from Markov Decision Problems (MDPs), POMDPs, Game theory, agent modeling and so on.\n\n\nYou are right that we have done some follow-up work based on Diana’s original paper and an earlier paper by [Weld & Etzioni (1994)](https://www.cs.auckland.ac.nz/~nickjhay/papersuni/RoughDraft200207-best_of_SASEMAS.pdf), and that is a fascinating thread of research, but it isn’t the focus of this course. This is more of an undergraduate course introducing students to key concepts. Towards the end of the course we get into abstract ideas on agent design where students may use some of the ideas advocated in these “Asimovian agents”, which is great.\n\n\nWe haven’t pursued that research direction beyond that paper and an earlier one ([Pynadath and Tambe 2001](http://teamcore.usc.edu/papers/2001/atal01-Asimov.pdf)).\n\n\nIt would be a great idea to push that direction more, though.\n\n\n\n\n---\n\n\n**Luke**: In [Tambe et al. (2013)](http://teamcore.usc.edu/papers/2014/AAAISS14.pdf) you identify open research issues for game theory in security applications, such as scalability and robustness. 
Which open research problems in this area do you think would have the largest practical impact if they could be solved well?\n\n\n\n\n---\n\n\n**Milind**: Both scalability and robustness are important. Scale-up is important because we want to solve large-scale games. For example, even if we leave aside complex scheduling constraints and just think of the abstract assignment problem of security agencies such as the Federal Air Marshals Service of assigning, say, 20 defenders to 1000 flights, that is 1000-choose-20. These problems become so large that we can’t fit games in the normal form in memory and must somehow find an optimal defender strategy without explicitly representing the game in memory. On the other hand, we wish to handle robustness because of the many uncertainties in the game. There is uncertainty related to the adversary’s surveillance (how much surveillance is actually going on), uncertainty about the adversary’s payoffs, capabilities, and so on. When we combine the two requirements of solving large-scale games and handling uncertainty, then the problem becomes even more complex. These remain critical challenges for us to address.\n\n\nThere are, however, many other major research challenges that are open. Significant effort has been focused on modeling adversary bounded rationality. This exciting research at the intersection of algorithmic and behavioral game theory is a major area of research in security games. Furthermore, our recent work focused on protecting wildlife and fisheries ([Yang et al. 2014](http://aamas2014.lip6.fr/proceedings/aamas/p453.pdf); [Haskell et al. 2014](http://teamcore.usc.edu/papers/2014/IAAI_2014.pdf)) brings up challenges related to Machine Learning. Specifically, we now have data related to, say, poachers’ movements and actions. This data can be used to create a better model of the poachers. Another area is preference elicitation. 
If there is significant uncertainty related to many aspects of the domain, and reducing this uncertainty is going to take effort, then which features do we focus on first to reduce uncertainty? Our recent paper at AAAI’2014 ([Nguyen et al. 2014](http://teamcore.usc.edu/papers/2014/aaai2014_mirage.pdf)) provides our initial thrust into this area.\n\n\nIn short, while we have made significant progress, not just in my group, but as the “security games community,” there is a lot more that still needs to be done.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Milind!\n\n\nThe post [Milind Tambe on game theory in security applications](https://intelligence.org/2014/05/30/milind-tambe/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-31T03:00:21Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "cdaf69cd45020a7ca58ecf05a9eab617", "title": "New report: “Loudness: On priors over preference relations”", "url": "https://intelligence.org/2014/05/30/new-report-loudness-priors-preference-relations/", "source": "miri", "source_type": "blog", "text": "[![loudness first page](https://intelligence.org/wp-content/uploads/2014/05/loudness-first-page.png)](https://intelligence.org/files/LoudnessPriors.pdf)Today we release the first technical report from our [May 2014 workshop](http://intelligence.org/workshops/#may-2014): “[Loudness: on priors over preference relations](https://intelligence.org/files/LoudnessPriors.pdf)” by Benja Fallenstein and Nisan Stiennon. Other technical reports from that workshop are also in progress. Here’s the abstract for this report:\n\n\n\n> This is a quick writeup of a problem discussed at the May 2014 MIRI workshop: how to formally deal with uncertainty about preferences. We assume that the true preferences satisfy the von Neumann-Morgenstern (VNM) axioms, and can therefore be represented by a utility function. 
It may seem that we should then simply maximize the expectation of this function. However, in the absence of more information, this is not well-defined; in this setting, different choices of utility functions representing the same VNM preferences can lead the agent to make different choices. We give a formalization of this problem and show that the choice of a prior probability distribution over VNM preference relations together with the choice of a representative for each of these distributions is in a certain sense equivalent to the choice of a single number for every preference relation, which we call its “loudness”. (Mathematically, a “loudness prior” can be seen as a probability distribution over preference relations, but this object does not have an epistemic interpretation.)\n> \n> \n\n\nThe post [New report: “Loudness: On priors over preference relations”](https://intelligence.org/2014/05/30/new-report-loudness-priors-preference-relations/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-30T23:26:58Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "8ffdc3781050cd76cdcc92f72d84f226", "title": "MIRI wants to fund your independently-organized Friendly AI workshop", "url": "https://intelligence.org/2014/05/29/miri-wants-fund-independently-organized-friendly-ai-workshop/", "source": "miri", "source_type": "blog", "text": "![mirix_small](https://intelligence.org/wp-content/uploads/2014/05/mirix_small.jpg)To support Friendly AI research around the world, our new [**MIRIx program**](http://intelligence.org/mirix/) funds mathematicians, computer scientists, and formal philosophers to organize their own Friendly AI workshops.\n\n\nA MIRIx workshop can be as simple as gathering some of your friends to read [MIRI papers](http://intelligence.org/research/) together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together. 
Or it can be a multi-day research workshop pursuing a specific line of attack on a particular problem. It’s up to you.\n\n\nApply for funding **[here](http://intelligence.org/mirix/)**. In some cases we’ll be able to send a MIRI research fellow to your first meeting to give tutorials and answer questions, or perhaps they’ll Skype in to your workshop to do the same.\n\n\nThe post [MIRI wants to fund your independently-organized Friendly AI workshop](https://intelligence.org/2014/05/29/miri-wants-fund-independently-organized-friendly-ai-workshop/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-29T22:16:32Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f40da4607fbf06aa0c952aeaca3ddea4", "title": "Aaron Tomb on crowd-sourced formal verification", "url": "https://intelligence.org/2014/05/29/aaron-tomb/", "source": "miri", "source_type": "blog", "text": "![Aaron Tomb portrait](http://intelligence.org/wp-content/uploads/2014/04/Tomb_w370.jpg) [Aaron Tomb](http://corp.galois.com/aaron-tomb/) is a Principal Investigator at Galois, where his work includes research, development, and project leadership in the area of automated and semi-automated techniques for analysis of software, including type systems, defect detection tools, formal verification, and more general software development tools based on deep analysis of program semantics. He joined Galois in 2007, and received a Ph.D. 
in Computer Science from the [University of California, Santa Cruz](http://www.ucsc.edu/) on a technique for identifying inconsistent assumptions in software source code.\n\n\n\n**Luke Muehlhauser**: DARPA’s Crowd Sourced Formal Verification ([CSFV](http://www.darpa.mil/Our_Work/I2O/Programs/Crowd_Sourced_Formal_Verification_%28CSFV%29.aspx)) program “aims to investigate whether large numbers of non-experts can perform formal verification faster and more cost-effectively than conventional processes.” To that end, the program created [Verigames.com](http://Verigames.com), the home of several free online puzzle games.\n\n\nYour contribution to Verigames is [Stormbound](http://stormbound.verigames.com/play/). In a [blog post](http://www.verigames.com/static/newsDetails/52a9f33751716dad14000183), your colleague Jef Bell explains:\n\n\n\n> Here’s a high level view of how StormBound works, from the [formal verification] perspective. First we figure out the properties of the system we want to prove…\n> \n> \n> Based on our knowledge about the common sources of errors we are trying to prevent, we select certain code points in the program source code where we need to check that a property we care about is in fact true…\n> \n> \n> Unfortunately, in a typical program there are a lot of these code points, and it is not always easy to find the exact property that needs to hold at each point. It turns out this is where StormBound players can help, so bear with me as I explain how…\n> \n> \n> We run the system a whole bunch of times in a variety of scenarios that are intended to exercise the system as it would be exercised in real-world use. As we run the system all these times, we collect snapshots of the data that is in the computer’s memory, every time the program gets to one of the code points we identified.\n> \n> \n> This provides us with a large collection of examples of the data in the computer’s memory. 
Now, we want to identify some things in common about the data at each code point, every time that same code point is reached while running the program. These things in common are in fact the kinds of properties we are looking for. And here’s an important bit: it turns out that computers aren’t always good at finding these “things in common,” but human beings are very adept at finding patterns, especially visual patterns.\n> \n> \n> This is where the StormBound game comes in. Each game level actually consists of the data that was in memory all the times a particular code point was reached. When you play a level, you are looking for common patterns in that data. The game shows you the data (in what we think is a fun and visually appealing form), and allows you to “experiment” with the data in various ways, to help you to find those common patterns. The spells you cast in StormBound describe the patterns. In the [formal verification] world we call these spells assertions. We combine together the assertions discovered by many players of the same game level, which enables the [formal verification] tools we use to prove the truthfulness of the desired properties at the corresponding code point.\n> \n> \n\n\nRoughly how many players have played StormBound so far? Have they yet found anything that you expect would have been very difficult for an automated search algorithm to find?\n\n\n\n\n---\n\n\n**Aaron Tomb**: At this point we’ve had thousands of players, who have collectively described over 100,000 patterns (which ultimately become logical assertions about program states). The game that is available for play at the moment is the first of two phases, and allows players to describe a relatively restricted class of assertions. And, in general, the assertions we have gathered during this first phase have been relatively simple, and likely discoverable using automated techniques. 
However, having the current iteration of the game available to a wide range of players has taught us a lot about what keeps players engaged (or causes them to lose interest), and has also allowed us to develop an understanding of the sorts of results that we need to get from players but that aren’t encouraged (or, in a few cases, possible) with the current interface. We are using this information to design a second version that we expect will yield richer results beyond what automated tools would be likely to produce.\n\n\n\n\n---\n\n\n**Luke**: Makes sense. That’s standard Silicon Valley wisdom: launch a minimum viable product, learn, iterate quickly, etc.\n\n\nHow would you know whether Stormbound, or even CSFV altogether (if you were running it), had succeeded or failed? Presumably there’s a substantial chance that crowd-sourced formal verification assistance can’t really work with anything like current verification techniques and problems?\n\n\n\n\n---\n\n\n**Aaron**: I can’t speak for DARPA about how they plan to measure the success of the program, but I can say something about how we can know when we’ve successfully verified a particular piece of software. One advantage of typical verification techniques is that there’s a fairly clear measure of completion. Verification tools often accept program source code plus some statement of desired properties of the code. The tools will then attempt to prove, usually automatically, that the code does indeed have those properties. If they succeed, there’s a high probability (not 100%, because the tools may have bugs, themselves) that the code has the desired properties.\n\n\nStormBound is structured to generate annotations for C code that are sent to a sophisticated tool called Frama-C. Those annotations are combined with some built-in definitions that describe the absence of common security vulnerabilities. By using Frama-C, StormBound is already working with state-of-the-art verification techniques. 
Our target programs are large pieces of open source software critical to internet infrastructure, so we’re also working with real-world problems.\n\n\nA key question is then: can StormBound generate the necessary annotations to allow verification to succeed? In the first phase, we’ve found that players have come up with some useful annotations, but have missed others that would be necessary for a complete proof. In the second phase, we’ll be updating the game to include significantly more context about the ultimate verification goal, in game-oriented terms, and we expect that this will lead more players toward constructing solutions that give significant verification progress.\n\n\n\n\n---\n\n\n**Luke**: You write that “Our target programs are large pieces of open source software critical to internet infrastructure…”\n\n\nDo you have any opinion as to whether formal verification would have been likely to catch the [Heartbleed bug](http://en.wikipedia.org/wiki/Heartbleed) in OpenSSL?\n\n\n\n\n---\n\n\n**Aaron**: Yes, any sound verification tool capable of proving the absence of undefined behavior in C, such as Frama-C, would have failed to prove that OpenSSL was safe. Depending on the tool being used, that proof failure may or may not help you identify the nature of the bug, but it would at least indicate that some bug of some sort exists, and something about its rough location. John Regehr, a Computer Science professor at the University of Utah, wrote a nice [blog post](http://blog.regehr.org/archives/1125) on the topic, which specifically mentions using Frama-C to do the verification.\n\n\nThe key complication is that, in order to prove that such bugs don’t exist, tools like Frama-C require a lot of extra information from the programmer (or from game players, in the case of StormBound), encoded as annotations in the source code. 
In general, the effort required to create those annotations can exceed the time taken to write the program in the first place, and the process requires familiarity with verification tools, which is not widespread. We hope that StormBound will be able to significantly reduce the necessary effort and expertise, to the point where it becomes practical.\n\n\n\n\n---\n\n\n**Luke**: What’s your current guess as to how general the CSFV of 5-10 years from now will be? How broad a range of verification challenges might they conceivably be applied to, once they are substantially more fully developed than they are in these early days?\n\n\n\n\n---\n\n\n**Aaron**: A lot can happen in 5-10 years, so my answer to this is very speculative, and just reflects my own opinions, not those of DARPA or even the rest of the StormBound team.\n\n\nIn the last decade, there has been significant progress on tools for interactive construction of mathematical proofs that can be reliably checked by computers for correctness. Some very general and robust tools such as ACL2, Coq, Isabelle and PVS have arisen from this research. (Most of them were initially created even earlier than the last decade, but the approach has become significantly more widespread recently.) These tools have been used for significant undertakings in software verification, as well as general mathematics, resulting in successful proofs that would simply be infeasible to do (correctly) by hand. However, these tools are time-consuming to use for proofs about realistic software systems, and require rare expertise.\n\n\nThe design approaches we have been considering for next revision of StormBound borrow significantly from the technical underpinnings of these interactive theorem provers, and the resulting game should, as a result, be quite general. 
While the tentative design builds on these ideas, it presents them through a very different interface than that used by existing tools, and we hope that will allow people without the same level of mathematical background to prove at least some theorems.\n\n\nThe key unknown, at the moment, is how well players without a mathematical background will be able to develop the intuition necessary to make progress. The current design sketches focus heavily on using visual pattern recognition in place of standard mathematical reasoning techniques, but proofs of typical mathematical theorems still tend to rely on deep insight based on years of experience. At the same time, the proofs necessary to verify some basic security properties of software are often much less intricate than those that mathematicians spend their time on.\n\n\nSo, to finally get to the answer, I could see several possible results in the next decade. We could have a significantly different interface to interactive theorem provers that may make expert users more efficient. An even more exciting possibility, though, is that we could have a system that allows non-experts to take a stab at proving essentially any mathematical theorem simply by recognizing patterns and applying a few relatively simple rules. 
In either case, I expect that the results will be more general than simply proving security properties about software.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Aaron!\n\n\nThe post [Aaron Tomb on crowd-sourced formal verification](https://intelligence.org/2014/05/29/aaron-tomb/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-29T11:00:30Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "0a030df9807421e1ce853168058ea128", "title": "Lennart Beringer on the Verified Software Toolchain", "url": "https://intelligence.org/2014/05/27/lennart-beringer/", "source": "miri", "source_type": "blog", "text": "![Lennart Beringer portrait](http://intelligence.org/wp-content/uploads/2014/05/Beringer_w175.jpg) [Lennart Beringer](http://www.cs.princeton.edu/~eberinge/) is an Associate Research Scholar at Princeton University, where he uses interactive proof assistants to develop provably correct compilers, program analyses, and other software verification tools. Previously, he held research appointments at the Ludwig-Maximilians-University Munich and the University of Edinburgh, where he developed proof-carrying-code techniques for mobile code architectures, focusing on properties of resource consumption and information flow security. He received his PhD degree from the University of Edinburgh for a language-based analysis of an idealized asynchronous processor architecture. In 2012, he served as the co-chair of the Third International Conference on Interactive Theorem Proving (ITP).\n\n\n\n\n**Luke Muehlhauser**: One of the projects to which you contribute is the [Verified Software Toolchain](http://vst.cs.princeton.edu/). How would you summarize what has been achieved so far? 
What do you hope to have achieved in the next year or two on that project?\n\n\n\n\n---\n\n\n**Lennart Beringer**: The Verified Software Toolchain is a collection of verification tools for the C programming language, implemented and formally verified in the Coq proof assistant, with a connection to the formally verified [CompCert compiler](http://compcert.inria.fr/) developed by Xavier Leroy at INRIA, France.\n\n\nThe heart of the VST is a Hoare-style program logic suitable for verifying partial-correctness properties and program safety, i.e. the absence of abnormal program termination due to, e.g., an attempt to dereference a dangling pointer. Program specifications are written in a highly expressive variant of concurrent separation logic that supports higher-order reasoning, impredicative quantification, and other concepts necessary for formulating, e.g., the resource invariants of locks in POSIX-style multithreading.\n\n\nIn addition to the core program logic, the VST contains support for proof automation, and a verifiably sound implementation of a shape analysis, the two components of which — a symbolic execution engine and an entailment checker for the Smallfoot-fragment of separation logic — can be either code-extracted to yield stand-alone tools, or integrated into Coq- or Ocaml-based analyses, including Smallfoot itself. The proof automation component consists of various Coq tactics, auxiliary lemmas, and specification abstractions that capture typical proof patterns in concrete application programs.  Special emphasis here is put on the idea of hierarchically organized layers of assertion formats that hide much of the complexity of the underlying logic from the user. An important goal of our current research is the discovery of additional such reasoning patterns for various programming and specification styles, and the exploitation of such structure to further enhance automation and performance. 
Regarding applications of the VST, we believe that in addition to the typical domains of high-assurance software, concrete open-source implementations of formal standards or libraries provide excellent case studies for evaluating further enhancements to the VST. At the same time, machine-checkable proofs (or even just precise specifications) of such libraries can contribute to the public level of trust that everyday users put in these libraries.\n\n\nMany design decisions and implementation details of the VST are described in Andrew Appel’s recent book, *[Program Logics for Certified Compilers](http://smile.amazon.com/Program-Logics-Certified-Compilers-Andrew/dp/110704801X/ref=nosim?tag=793775876-20)* (CUP 2014), while the source code is publicly available [here](http://vst.cs.princeton.edu).\n\n\nA further aspect of our work concerns CompCert itself. Here, we are collaborating with Leroy and others to evolve CompCert’s specification and proof to cover shared memory interaction between threads, and formal models of linking. Indeed, a modular notion of compiler correctness should respect that communication with the OS, with libraries, or with separately compiled program modules in C crucially relies on exchanging pointers to buffers or other shared data structures. Instead, CompCert’s notion of compiler correctness employs a whole-program view of compilation. However, it is far from easy to reconcile pointer sharing with the requirements of typical compiler optimizations, as these routinely alter memory layout and eliminate or relocate existing pointers. In addition, compiler-managed memory locations such as return addresses and spilling locations must remain unmodified by even those external callees that have legitimate access to other stack-held data.\n\n\n\n\n---\n\n\n**Luke**: How would you describe the underlying motivation for the VST? 
How will the world be different for various kinds of programmers (and other actors) in 10 years if the VST continues to be funded and worked on at a good pace?\n\n\n\n\n---\n\n\n**Lennart**: Specifying the expected behavior of software, and verifying that concrete implementations satisfy these specifications, has been an active research area for several decades. But for many years, the effectiveness of program logics was limited, and one of the reasons for this was the lack of support for modular reasoning, particularly with respect to pointer sharing and data structures that are laid out in memory. About ten years ago, separation logics emerged as a promising technique to address this shortcoming, by allowing specifications and invariants to elegantly capture properties of the heap and other shared resources.\n\n\nAt the same time, interactive proof assistants (and the computation power of the underlying hardware) had sufficiently matured so that program logics could be effectively implemented, and could be equipped with machine-checkable proofs of soundness. Indeed, by developing CompCert, Leroy demonstrated that interactive theorem proving scales to real-world languages, and can cope with the complexity inherent in an industrial-strength optimizing compiler.\n\n\nThe VST project combines these two threads, while at the same time exploring how the current automation and specification boundaries can be pushed further.\n\n\nThe main tool chain of the VST will most likely be of interest to C programmers writing either high assurance software, to implementors of medium-scale software such as crypto libraries, or to developers using code synthesis techniques, where — instead of verifying the correctness of individually generated programs — one could envision verified synthesis tools that target the VST’s Clight program logic. 
Functional programmers may benefit by having a precise way to formally relate their high-level code written in Ocaml or Haskell to potentially more efficient code written in C, using Coq’s code extraction feature to mediate between these two worlds. By combining code extraction and compilation, it may even become possible to verify hybrid software systems in which some components are written in C and others in a functional language.\n\n\nAnother group of potential users is that of system designers: VST specifications enable individual components to be characterized by much richer interfaces than is possible using C header files or Java interfaces. As interfaces become more robust, truly component-oriented software development becomes possible, where adherence to specifications suffices to guarantee that substituting individual components does not have a negative impact on global system properties.\n\n\nFor typical components of the standard system stack (OS, network protocols,…), such formally defined interfaces are in fact of interest in their own right, as the contracts between, say, different components of the OS, the compiler, and the architecture are notoriously difficult to comprehend. In addition to system developers and researchers, a third clientele may hence be that of educators, for whom specifications offer guidance for partitioning their material into self-contained but interdependent units of teaching, and for explaining the mutual contracts and assumptions.\n\n\n\n\n---\n\n\n**Luke**: What are some other projects that seem to be aimed at similar goals but e.g. 
for other languages or via other approaches?\n\n\n\n\n---\n\n\n**Lennart**: I am not aware of other attempts to develop a similarly expressive program logic for a mainstream language, validate its soundness in a proof assistant, link it with a verified optimizing compiler, and equip it with sufficient proof automation to enable the verification of nontrivial programs.\n\n\nHowever, many research groups address various subsets of these features.\n\n\nMicrosoft Research over the past decade developed an impressive collection of verification tools ([VCC](http://research.microsoft.com/en-us/projects/vcc/), [Boogie](http://research.microsoft.com/en-us/projects/boogie/)/[Dafny](http://research.microsoft.com/en-us/projects/dafny/),…) which are primarily targeted at C#, but contain components or intermediate verification platforms that are also applicable to other programming languages. These have excellent proof automation, and are tightly integrated with modern SAT/SMT solvers, software model checkers, and other code analysis and inspection tools, but are not formally backed up by a machine-checkable soundness proof. At the same time, several groups within MSR already employ proof assistants to verify compilers, develop formal models of machine code, and validate program logics that are roughly comparable to the VST. It would hardly come as a surprise if these complementary research efforts were eventually integrated to yield an end-to-end machine-checkable toolchain for C#.\n\n\nChlipala’s [Bedrock framework](http://www.bedrockframework.com/) enables synthesis of verified assembly code by providing machine-checked code and specification patterns, enabling interaction between code fragments that deviates from C’s calling convention. In addition, this group pioneered proof engineering techniques that exploit advanced features of Coq’s tactic language and type theory, including some approaches we are now applying in the VST. 
We also incorporated ideas from Jesper Bengtson’s [Charge!](https://github.com/jesper-bengtson/Charge) project, which provides a Coq-validated separation logic for Java.\n\n\nRegarding compiler verification, Xavier Leroy and his collaborators at INRIA are extending [CompCert](http://compcert.inria.fr/) towards full support for both C99 and C11, at the same time validating further optimization phases and – most recently – adding a verified parser.\n\n\nUPenn’s [Vellvm project](http://www.cis.upenn.edu/~stevez/vellvm/) developed mechanized reasoning support for programs expressed in the LLVM intermediate representation, demonstrating that CompCert’s verification principles are also applicable to compiler infrastructures that were not designed with formal verification in mind, and to compilation based on static single assignment (SSA) in particular.\n\n\nClosely related to compiler verification is the approach of translation validation – in fact, CompCert’s register allocator is also proved in this style. Recent contributions to mechanized translation validation include those by Sorin Lerner’s group at San Diego, and Greg Morrisett’s group at Harvard.\n\n\nThe development of practically validated, yet formally precise, mechanized specifications of real-world hardware and software systems is an area that continues to grow, with recent examples including web servers, browsers, databases, operating systems and hypervisors, relaxed memory models, and processor architectures. 
Particularly noteworthy are the projects under DARPA’s [CRASH](http://www.darpa.mil/Our_Work/I2O/Programs/Clean-slate_design_of_Resilient_Adaptive_Secure_Hosts_(CRASH).aspx) programme, as well as the [REMS](http://www.cl.cam.ac.uk/~pes20/rems/) effort led by Peter Sewell at Cambridge (UK).\n\n\nI probably failed to mention a number of projects, but I hope that the ones I mentioned give you a rough picture of the current developments.\n\n\n\n\n---\n\n\n**Luke:** Thanks, Lennart!\n\n\nThe post [Lennart Beringer on the Verified Software Toolchain](https://intelligence.org/2014/05/27/lennart-beringer/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-27T11:00:04Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5340930693acbc0d81377204349139fc", "title": "Johann Schumann on high-assurance systems", "url": "https://intelligence.org/2014/05/24/johann-schumann/", "source": "miri", "source_type": "blog", "text": "![Johann Schumann portrait](http://intelligence.org/wp-content/uploads/2014/05/Schumann_w150.jpg)[Dr. Johann Schumann](http://ti.arc.nasa.gov/profile/schumann/) is a member of the [Robust Software Engineering Group RSE](http://ti.arc.nasa.gov/tech/rse/) at [NASA Ames](http://www.nasa.gov/centers/ames/home/). He obtained his habilitation degree (2000) from the [Technische Universität München](http://www.tum.de/), Germany on application of automated theorem provers in Software Engineering. His [PhD thesis](http://ti.arc.nasa.gov/m/profile/schumann/PDF/SL1990.pdf) (1991) was on high-performance parallel theorem provers. Dr. Schumann is engaged in research on software and system health management, verification and validation of advanced air traffic control algorithms and adaptive systems, statistical data analysis of air traffic control systems and UAS incident data, and the generation of reliable code for data analysis and state estimation. Dr. 
Schumann’s general research interests focus on the application of formal and statistical methods to improve design and reliability of advanced safety- and security-critical software.\n\n\nHe is employed by [SGT, Inc.](http://www.sgt-inc.com/) as Chief Scientist Computational Sciences.\n\n\n\n\n**Luke Muehlhauser**: In [Denney et al. (2006)](http://www.worldscientific.com/doi/abs/10.1142/S0218213006002576), you and your co-authors helpfully explain:\n\n\n\n> Software certification aims to show that the software in question achieves a certain level of quality, safety, or security. Its result is a certificate, i.e., independently checkable evidence of the properties claimed. Certification approaches vary widely… but the highest degree of confidence is achieved with approaches that are based on formal methods and use logic and theorem proving to construct the certificates.\n> \n> \n> We have developed a certification approach which… [demonstrates] the safety of aerospace software which has been automatically generated from high-level specifications. Our core idea is to extend the code generator so that it simultaneously generates code and the detailed annotations… that enable a fully automated safety proof. A verification condition generator (VCG) processes the annotated code and produces a set of safety obligations, which are provable if and only if the code is safe. An automated theorem prover (ATP) then discharges these obligations and the proofs, which can be verified by an independent proof checker, serve as certificates…\n> \n> \n> …[Our] architecture distinguishes between *trusted* and *untrusted* components… Trusted components must be correct because any errors in their results can compromise the assurance given by the system; untrusted components do not affect soundness because their results can be checked by trusted components.\n> \n> \n\n\nYou also include this helpful illustration:\n\n\n![Certification system architecture, Denney et al. 
(2006)](http://intelligence.org/wp-content/uploads/2014/05/Schumann_Certification_system_architecture.png)\nWhy are the problem specification and synthesis system untrusted, in your certification system architecture?\n\n\n\n\n---\n\n\n**Johann Schumann**: Ideally, the entire architecture in this figure would only contain trusted components. Then one could be sure that nothing bad happens during the verification process. Unfortunately, software systems like the program synthesis system or an entire automated theorem prover cannot be formally verified with current technology. Their code is tens of thousands of lines of code. We therefore use a trick of problem complexity asymmetry: E.g., a piece of software to find the prime factors of a large number is complex; its verification is complicated and time-consuming. On the other hand, checking each individual result is trivial: one just multiplies the factors. Such a checker is easy to implement and to verify. So, if we now use the unverified prime-factor finder and, for each result, run the quick and small verified checker, we can find out if our prime-factor finder works correctly. The important point here is that one needs to run the checker for each result, but since the checker is fast, it doesn’t matter. The situation is similar here: the synthesis system and prover are *large* and *complicated*, but the proof checker isn’t. The VCG is of medium complexity and can be verified. The practical bottleneck of this entire system is that you have to trust your safety policy and the domain theory. Both can be larger sets of logic formulas. In my experience: if a formula is longer than 5 lines, it is very likely that it contains a bug. Unfortunately, there is no easy way around that.\n\n\n\n\n---\n\n\n**Luke**: Intuitively, how much “safer” do you think a system developed via formal methods (such as your own process) is, compared to a system developed via normal engineering methods and extensive testing? 
Have there been any empirical tests of such hypotheses, to your knowledge?\n\n\n\n\n---\n\n\n**Johann**: Intuitively, I think that the use of formal methods (FM) for software development and V&V can substantially increase safety. However, there are a number of issues/considerations:\n\n\n* Using FMs to verify that certain, well-understood safety properties are never violated (e.g., no buffer overflow, no divide-by-zero, no uninitialized variables) is extremely beneficial (see recent security problems — a lot of those have to do with such safety properties). Some technologies for that task (e.g., static analysis) have already achieved much in that direction.\n* Even assuming the tools are there, one has to ask: can I specify *all* necessary security properties? Is the specification of the software, against which I do the FM-based verification, correct and consistent? A correct, consistent, and complete specification is extremely hard to write — so for any software of reasonable complexity, I would assume that there are at least as many bugs in the specification as there are in the code.\n* In my opinion, FMs should be used together with traditional methods (including testing) and together with dynamic monitoring. As an example, a formal proof of no-buffer-overflow should be able to reduce (but not eliminate) extensive testing. And since testing can only be finite, dynamic monitoring (aka runtime verification (RV) or software health management (SWHM)) should be used to react to unforeseen software behavior (e.g., caused by dormant bugs or operation in an unexpected environment). RV and SWHM themselves are mostly formal-methods based.\n\n\nI don’t know of any such empirical tests – they would be very hard to carry out in an unbiased way. Work on cost and reliability modeling (e.g., B. 
Boehm) might give some insight but I would need to look.\n\n\n\n\n---\n\n\n**Luke**: How effectively do you think formal verification methods will eventually be able to handle more complex safety properties, such as collision avoidance for trains, planes and cars, and Asimovian properties (*a la* [Weld & Etzioni](https://www.cs.auckland.ac.nz/~nickjhay/papersuni/RoughDraft200207-best_of_SASEMAS.pdf)) for systems that interact with humans in general?\n\n\n\n\n---\n\n\n**Johann**: There have been a couple of examples where formal methods have been successfully used for specification and/or development of a complex system. For example, the [B method](http://en.wikipedia.org/wiki/B-Method) (a tool-supported method related to the Z method) has been used for the automated train control on the Paris Metro (Line 14). Airbus is using formal methods like abstract interpretation (Cousot) for their development of safety-critical aircraft software.\n\n\nThe TCAS system (Traffic Collision Avoidance System) is an international system on commercial aircraft that warns the pilot of the danger of an imminent midair collision (less than a minute away). This complex system (the aircraft need to resolve the conflict among themselves using an elaborate communication protocol) has been studied in detail and formal specifications/analyses exist (e.g., Leveson, Heimdahl and others).\n\n\nIn my opinion, formal methods are strong enough to be used to specify/verify complex systems. However, the Ueberlingen accident showed that this system is not fail-safe. 
Here “system” means: actual hardware/software, the pilots (who got conflicting messages from the TCAS system and the ATC), the airline (which supposedly told the pilots to ignore warnings and fly the most profitable route), and the Swiss air traffic control, which was not fully staffed.\n\n\nTo me it seems that the accurate modeling and prediction of human behavior in a mixed control system (where the human operator has all/partial control) is still a big unsolved issue.\n\n\nA fully automated system (e.g., UAS) might have safely handled this situation, but in general there are several major issues:\n\n\n1. The acceptance problem (would we board a UAS, would we relinquish the wheel of our cars?). Supposedly when BART first introduced driverless trains, they weren’t accepted by the commuters and they had to put drivers back on the trains.\n2. How to operate safely in a “mixed-mode” environment? E.g., some cars are automatic, some cars are driven manually, or commercial aircraft versus general aviation and UAS systems. Human operators might make a (silly) mistake or react in an unexpected way. The automated system might react unexpectedly too, e.g., because of a software error.\n3. How to deal with system security? Many systems to increase safety are currently highly vulnerable to malicious attack: E.g., the ADS-B system, which is supposed to replace parts of the aging ATC system, uses an unencrypted protocol; UAS aircraft can be spoofed (see the CIA UAS captured by Iran); modern cars can be hacked into easily.\n\n\nAlthough a lot has happened in the rigorous treatment and specification of complex systems, a number of issues need to be addressed before we even come close to talking about global properties like the laws of robotics.\n\n\nHow to recover a complex system safely after an operator error? Assume a driver in a car with driving assistance systems (e.g., adaptive cruise control, detection of unsafe lane changes). 
How does the automated system react to the imminent problem without causing the driver to freak out and make more mistakes?\n\n\nHow to recover the system safely after a problem in the system hardware or software? When the engine exploded on the Qantas A380, the pilots received on the order of 400 diagnostic and error messages, which they had to process (luckily they had the time). Some of the messages contradicted each other or would have been dangerous to carry out.\n\n\nThis shows that it is still a big problem to present the operator with concise and correct status information of a complex system.\n\n\nAfter the pilots landed the A380, they couldn’t turn off the engines. The first sign of machine intelligence: “don’t kill me”?\n\n\n\n\n---\n\n\n**Luke:** Thanks, Johann!\n\n\nThe post [Johann Schumann on high-assurance systems](https://intelligence.org/2014/05/24/johann-schumann/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-24T11:00:47Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "36adaee58cbb73d15408456d1b4b78a2", "title": "Sandor Veres on autonomous agents", "url": "https://intelligence.org/2014/05/23/sandor-veres/", "source": "miri", "source_type": "blog", "text": "![Sandor Veres portrait](http://intelligence.org/wp-content/uploads/2014/05/Veres_w137.jpg) Professor [Sandor Veres](http://sheffield.ac.uk/acse/staff/smv/index) was born and educated in Hungary as an applied mathematician. He completed his PhD in dynamical modelling of stochastic systems in 1983 and worked in industry on computer controlled systems. 
In 1987-1988 he received two consecutive scholarships at [Imperial College London](http://www3.imperial.ac.uk/) and at [Linacre College Oxford](http://www.linacre.ox.ac.uk/).\n\n\nBetween 1989 and 1999 he was a lecturer at the [Electronic and Electrical engineering department](http://www.birmingham.ac.uk/schools/eece/index.aspx) at the [University of Birmingham](http://www.birmingham.ac.uk/) and pursued research in system identification, adaptive control and embedded electronic control systems. In 2000 he joined the [University of Southampton](http://www.southampton.ac.uk/) to do work in the areas of active vibration control systems, adaptive and learning systems, satellite formation flying and autonomous control. Since 2002 he has held a chair in control systems engineering and was chairman of the [IFAC Technical Committee of Adaptive and Learning Systems](http://tc.ifac-control.org/).\n\n\nAt Southampton he established the [Centre for Complex Autonomous Systems Engineering](http://www.southampton.ac.uk/ccase/), where he is now visiting professor. Today his main research interest is agent-based control systems, and he has been leading the [Autonomous Systems and Robotics Group](http://www.sheffield.ac.uk/acse/research/groups/asrg) at the Department of Automatic Control at the [University of Sheffield](http://www.sheffield.ac.uk/) since 2013. He has published about 200 papers, authored 4 books and co-authored numerous software packages.\n\n\n\n\n**Luke Muehlhauser**: In “[Autonomous Asteroid Exploration by Rational Agents](http://commonsenseatheism.com/wp-content/uploads/2014/04/Lincoln-et-al.-Autonomous-Asteroid-Exploration-by-Rational-Agents.pdf)” (2013), you discuss a variety of agent architectures for use in contexts where autonomous or semi-autonomous operation is critical — in particular, in outer space, where there are long communication delays between a robot and a human operator on Earth. 
Before we talk about *rational* agent architectures in particular, could you explain what an “agent” architecture is from your perspective, and why it is superior to other designs for many contexts?\n\n\n\n\n---\n\n\n**Sandor Veres**: First of all I would like to say that I am not sure that all kinds of agent architectures are “superior”. Some agent paradigms do, however, have advantages as a way of organizing the complex software system which controls an intelligent machine such as a robot. The feature I most like in some agent architectures is when they exhibit anthropomorphic types of operations, so that they can handle statements about the world in terms of space and time, past and future; express possibilities and necessities, behaviour rules, and knowledge of other agents’ knowledge, including that of humans; and can handle intentions and beliefs about a situation. If such constructs are part of the software, as opposed to being translated from a different kind of software, that makes things simpler in terms of programming. Also, we would like robots to share their knowledge about the world with us, and we would like to share our knowledge with them. It is an advantage if we share our ways of reasoning: a robot can be made easier to understand and control if it is programmed in an anthropomorphic manner.\n\n\n\n\n---\n\n\n**Luke**: At one point in that paper you write that “Creating a new software architecture with significant benefits for autonomous robot control is a difficult problem when there are excellent software packages around…” What kinds of software architectures for autonomous robot control are out there already, and how generally applicable are they?\n\n\n\n\n---\n\n\n**Sandor**: By excellent packages I meant some robot operating system foundations such as [ROS](http://www.ros.org/) for Linux and [CCR](http://en.wikipedia.org/wiki/Concurrency_and_Coordination_Runtime) for Windows on which one can build robot programs. 
Both of these have extensive libraries available that enable programming robot skills. For programming autonomous behaviour we have [agent-oriented programming](http://en.wikipedia.org/wiki/Agent-oriented_programming), defined in exact terms [by Yoav Shoham in 1993](http://www.infor.uva.es/~cllamas/MAS/AOP-Shoham.pdf), and by now there are books just to review the many approaches to agent programming. There is also [CLARAty](https://claraty.jpl.nasa.gov) by JPL, which is based on continuous re-planning rather than agent-oriented. To summarise, [GOFAI](http://en.wikipedia.org/wiki/GOFAI) has been transformed during the past 3 decades into various logic-based, subsumption, multi-layered and belief-desire-intention robot programming approaches, so there is a long and interesting history of great effort in software architectures for robots.\n\n\n\n\n---\n\n\n**Luke**: Where do you think the future of autonomous agents is headed?\n\n\n\n\n---\n\n\n**Sandor**: I hope that soon not only agent programming languages will be available, but also standardised agents for various applications of robots. These can then be either programmed further or simply trained further before being deployed in an application. Eventually these agents could provide a high level of integrity, capability and safety of robot operations autonomously. A next step will be to make these physically able agents truly social agents, so that they behave in an appropriate manner and cooperate where required.\n\n\n\n\n---\n\n\n**Luke**: How might we get high-assurance autonomous agents for safety-critical applications?\n\n\n\n\n---\n\n\n**Sandor**: Determinism is a fundamental feature of digital computing so far, though we have random phenomena in some parallel computing we do on robots. Determinism of software has the advantage that our robots are also deterministic. 
We currently make every effort to make robots always respond in an appropriate manner, solve problems and be predictable despite an often complex and disturbing environment. The more intelligent they are, the more complex an environment they can handle. A more complex environment, however, also means that it becomes more difficult to formally verify and test all their possible deterministic responses. So the trade-off is between complexity of agent-software and “determinism” of intelligence, i.e. suitable actions by the robot at all times. Though the agent is deterministic, the sensors it uses may not be reliable, which makes its response to some degree probabilistic. The challenge we face is hence to build up conceptual abstractions of complex environments which enable reliable decision making by our robotic agents. On the question of whether this is always possible in practice, the jury is still out.\n\n\n\n\n---\n\n\n**Luke**: Will research progress on autonomy capabilities outpace research progress on safety research?\n\n\n\n\n---\n\n\n**Sandor**: This is a very good question and I believe this is likely to happen. High levels of capability will, however, reduce the probability of an inappropriate response by a robot as long as they are accompanied by a formally verifiable decision-making process of the agent. For agent development the most we can do is to make it “perfect”, meaning that it should never intentionally do the wrong action or, in case of failing hardware, it should always take the most likely positive action. This is easier said than done, as the environment can create conflicting requirements for a robot’s response, and in such cases it needs to behave as a moral agent. Moral agents can use models of a broader context of a situation and apply principles over a wide range of knowledge. 
Moral agents of the future are likely to need the knowledge of an educated adult.\n\n\n\n\n---\n\n\n**Luke:** Thanks, Sandor!\n\n\nThe post [Sandor Veres on autonomous agents](https://intelligence.org/2014/05/23/sandor-veres/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-23T11:00:06Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e489dc675519b24f08d9c29f92527a04", "title": "New Paper: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem”", "url": "https://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/", "source": "miri", "source_type": "blog", "text": "[![robust cooperation first page](https://intelligence.org/wp-content/uploads/2014/05/robust-cooperation-first-page.png)](https://intelligence.org/files/ProgramEquilibrium.pdf)We’ve released a new paper recently accepted to the [MIPC](http://conferences.inf.ed.ac.uk/mipc2014/) workshop at [AAAI-14](http://www.aaai.org/Conferences/AAAI/aaai14.php): “[Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem](https://intelligence.org/files/ProgramEquilibrium.pdf)” by LaVictoire et al. This paper is essentially a shortened version of [Barasz et al. (2014)](http://arxiv.org/abs/1401.5577). For the history of the key results therein, see [Robust Cooperation: A Case Study in Friendly AI Research](http://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/).\n\n\nAbstract of the new paper:\n\n\n\n> Applications of game theory often neglect that real-world agents normally have some amount of out-of-band information about each other. We consider the limiting case of a one-shot Prisoner’s Dilemma between algorithms with read access to one another’s source code. Previous work has shown that cooperation is possible at a Nash equilibrium in this setting, but existing constructions require interacting agents to be identical or near-identical. 
We show that a natural class of agents are able to achieve mutual cooperation at Nash equilibrium without any prior coordination of this sort.\n> \n> \n\n\n \n\n\nThe post [New Paper: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem”](https://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-18T05:00:32Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e023abca8c8320afdd35094b35c78475", "title": "Christof Koch and Stuart Russell on machine superintelligence", "url": "https://intelligence.org/2014/05/13/christof-koch-stuart-russell-machine-superintelligence/", "source": "miri", "source_type": "blog", "text": "Recently, *Science Friday* (hosted by Ira Flatow) featured an interview ([page](http://www.sciencefriday.com/segment/05/09/2014/science-goes-to-the-movies-transcendence.html), [mp3](https://dl.dropboxusercontent.com/u/163098/Science%20Friday%20-%20Transcendence%2C%20Russell%20%26%20Koch.mp3)) with Christof Koch and Stuart Russell about machine superintelligence. [Christof Koch](http://www.alleninstitute.org/our-institute/our-team/profiles/christof-koch) is the Chief Scientific Officer of the Allen Institute for Brain Science, and [Stuart Russell](http://www.cs.berkeley.edu/~russell/) is a computer science professor at UC Berkeley, and co-author of the world’s [most-used AI textbook](http://aima.cs.berkeley.edu/).\n\n\nI was glad to hear both distinguished guests take seriously the opportunities and risks of [AGI](http://intelligence.org/2013/08/11/what-is-agi/). Those parts of the conversation are excerpted below:\n\n\n\n\n> **Russell**: Most [AI researchers] are like the Casters [from [*Transcendence*](http://en.wikipedia.org/wiki/Transcendence_(2014_film))]. They’re just trying to solve the problem. They want to make machines intelligent. They’re working on the next puzzle. 
There are [also] a number of people who think about what will happen when we succeed. I actually have a chapter in my book called, “What if we succeed?” If we make machines that are much smarter than human beings, and this is what happens after Will Caster is uploaded, he has the massive computational resources to run his brain with, he becomes much more intelligent than humans.\n> \n> \n> If that happens, then it’s very difficult to predict what such a system would do and how to control it. It’s a bit like the old stories of the genie in the lamp or sorcerer’s apprentice. You can give your 3 wishes, but in all those stories, the 3 wishes backfire, and there’s always some loophole. The genie carries out what you ask to the utmost extent, and it’s never what you really wanted. The same is going to be true with machines. If we ask them, “Heal the sick, end human suffering,” perhaps the best way to end human suffering is to end human life altogether because then there won’t be any more human suffering. That would be a pretty big loophole. You can imagine how hard it is to write tax laws so there are no loopholes. We haven’t succeeded after 300 years of trying.\n> \n> \n> It’s very difficult to say what we would want a super intelligent machine to do so that we can be absolutely sure that the outcome is what we really want as opposed to what we say. That’s the issue. I think we, as a field, are changing, going through a process of realization that more intelligent is not necessarily better. We have to be more intelligent and controlled and safe, just like the nuclear physicists when they figured out chain reaction they suddenly realized, “Oh, if we make too much of a chain reaction, then we have a nuclear explosion.” So we need controlled chain reaction just like we need controlled artificial intelligence.\n> \n> \n> **Flatow**: Isn’t that true of all of science? 
The history of science, whether we’re talking about genetic engineering in its early days, wondering about what’s going to crawl out of the lab once we start playing around with the genome?\n> \n> \n> **Koch**: The difference now is that this [risk is an] existential threat to human society just like we’ll see once a nuclear genie was out of the bottle, we still live under the possibility within 20 minutes that all people we know will be obliterated under nuclear mushroom. Now we live with the possibility that over the next 50, 100 years this invention that we’re working on might be our final invention as Stuart has emphasized and that our future may be bleaker than we think.\n> \n> \n> […]\n> \n> \n> **Flatow**: Do scientists have responsibilities to think about now the consequences of AI and are researchers organizing, talking, meeting about these things?\n> \n> \n> **Russell**: Absolutely. Yes. There’s a responsibility because if you’re working on a field whose success would probably be the biggest event in human history, and as some people predict, the last event in human history, then you’d better take responsibility. People are having meetings. In fact, I just organized one on Tuesday in Paris taking advantage of a major conference that was here. What I’m finding is that senior people in the field who have never publicly evinced any concern before are privately thinking that we do need to take this issue very seriously, and the sooner we take it seriously the better. I think the nuclear physicists wished they had taken it seriously much earlier than they did.\n> \n> \n> **Flatow**: Christof?\n> \n> \n> **Koch**: Yes. I fully have to agree, Stuart. I recently attended a meeting of physicists, and we had a large discussion and a poll about what are the biggest existential threats. To my surprise, after a nuclear war, it was the grey goo scenario or the AI run amok scenario. It wasn’t climate control or some of the other more conventional ones. 
There is a lot of concern among some experts about what will happen with runaway AI.\n> \n> \n> […]\n> \n> \n> There is no law in the universe that says things are limited to human level intelligence. There may well be entities that are going to be much smarter than us, and as Stuart has said, we have no way to predict what will be their desires, what will be their motivations, and programming it in, we all know how buggy software is. Do we really want to rely on some standard programmers to program in something that will not wipe us out in one way or another?\n> \n> \n\n\nThe post [Christof Koch and Stuart Russell on machine superintelligence](https://intelligence.org/2014/05/13/christof-koch-stuart-russell-machine-superintelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-14T00:34:27Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f99c320eaefc942f033c02b1273638df", "title": "Exponential and non-exponential trends in information technology", "url": "https://intelligence.org/2014/05/12/exponential-and-non-exponential/", "source": "miri", "source_type": "blog", "text": "*Co-authored with Lila Rieber.*\n\n\nIn [*The Singularity is Near*](http://smile.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook/dp/B000QCSA7C/), Ray Kurzweil writes that “every aspect of information and information technology is growing at an exponential pace.”[1](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_0_11027 \"Page 85. In the same book, he also writes that “we see ongoing exponential growth of every aspect of information technology, including price-performance, capacity, and rate of adoption.” (p. 377). In How to Create a Mind, Kurzweil writes that “In the course of my investigation, I made a startling discovery: If a technology is an information technology, the basic measures of price/performance and capacity… follow amazingly precise exponential trajectories” (p. 
254).\") In *[Abundance](http://www.amazon.com/Abundance-Future-Better-Than-Think-ebook/dp/B005FLOGMM/)*, the authors list eight fields — including nanomaterials, robotics, and medicine — as “exponentially growing fields.”[2](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_1_11027 \"Page 57. In general, Diamandis & Kotler seem to agree with Kurzweil that all information technologies experience exponential growth curves. E.g. on page 99 they write that “Although [some agroecological] practices themselves look decidedly low tech, all the fields they’re informed by are information-based sciences and thus on exponential growth curves,” and on page 190 they write that “almost every component of medicine is now an information technology and therefore on an exponential trajectory.”\") [*The Second Machine Age*](http://www.amazon.com/Second-Machine-Age-Prosperity-Technologies-ebook/dp/B00D97HPQI/) says that “technical progress” in general is “improving exponentially.”[3](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_2_11027 \"Page 10. The authors also seem to expect exponential trends for anything that becomes a digital process: “…batteries… haven’t improved their performance at an exponential rate because they’re essentially chemical devices, not digital ones…” (p. 52).\")\n\n\nThese authors are correct to emphasize that exponential trends in technological development are surprisingly common ([Nagy et al. 2013](http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0052669)), and that these trends challenge the wisdom of our [built-in heuristic](http://wiki.lesswrong.com/wiki/Absurdity_heuristic) to ignore futures that *sound* absurd. (To someone in the 1980s, the iPhone is absurd. 
To us, it is an affordable consumer good.)\n\n\nUnfortunately, these and other popular discussions of “exponential technologies” are often very succinct and therefore ambiguous, resulting in public and professional misunderstanding.[4](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_3_11027 \"E.g. Kurzweil seems to use a fairly loose definition of “exponential.” For example in Kurzweil (2001) he gives this chart of ISP cost-performance as an example exponential trend. Sometimes this seems to cause confusion in dialogue. For example, in response to Ilkka Tuomi’s criticisms (2002, 2003) of claims of exponential trends in computing, Kurzweil wrote that if Tuomi were correct, “I would have to conclude that the one-quarter MIPS computer costing several million dollars that I used at MIT in 1967 and the 1000 MIPS computer that I purchased recently for $2,000 never really existed… I admire his tenacity in attempting to prove that the world of information technology is flat (i.e., linear).” But Tuomi’s views don’t entail that, and Tuomi didn’t say that trends in information technology have been linear. The conflict appears to stem from the fact that Tuomi was using “exponential” in the strict sense, while Kurzweil was using the term in a very loose sense. This becomes clearer in Tuomi’s reply to Kurzweil.\") I (Luke) regularly encounter people who have read the books above and come away with the impression that all information technologies show roughly exponential trends all the time. But this isn’t true unless you have a *very* broad concept of what counts as “roughly exponential.”\n\n\nSo, without speculating much about what Kurzweil & company intend to claim, we’ll try to clear up some common misunderstandings about exponential technologies by showing a few examples of exponential and not-so-exponential trends in information technology. 
A more thorough survey of trends in information technology must be left to other investigators.[5](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_4_11027 \"Our thanks to Jonah Sinick for his assistance in researching this post.\")\n\n\n\n#### Computations per dollar: still exponential\n\n\nIt’s clear that Kurzweil himself does not *literally* mean that “every aspect of information and information technology is growing at an exponential pace,” for he has previously discussed examples of non-exponential growth in some aspects of information technology. For example, he’s well aware that the exponential trend in processor clock speed broke down in 2004, as shown in [Fuller & Millett (2011a)](http://commonsenseatheism.com/wp-content/uploads/2014/03/Fuller-Millett-Computing-Performance-Game-Over-or-Next-Level-in-IEEE-Computer-Society.pdf):[6](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_5_11027 \"It should be noted, however, that the exponential trend line for clock speed on page 61 of The Singularity is Near (2005) is now known to be incorrect. Kurzweil’s graph used the 2002 ITRS report to project the trend line for 2001-2016, but actual growth in clock speed fell substantially short of the ITRS projection.\")\n\n\n![Fuller & Millett figure 1](https://intelligence.org/wp-content/uploads/2014/03/Fuller-Millett-figure-1.png)\nBecause this is a logarithmic chart, a straight line represents an exponential trend. Notice that clock speed stopped improving exponentially in 2004, but transistors per chip has continued to increase exponentially via the jump from single-core to multicore processors.\n\n\nElsewhere, Kurzweil tends to emphasize exponential trends in **price-performance ratios** specifically, for example *computations per dollar*. This is perfectly reasonable. 
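As a side note on reading the charts in this post: a straight line on a logarithmic chart is an exponential trend, and a least-squares fit in log space recovers the trend's doubling time. A minimal sketch, using illustrative data rather than the sources cited here:

```python
import math

def doubling_time(years, values):
    """Least-squares fit of log2(value) against year.

    An exponential trend is a straight line in log space; the fitted
    slope is doublings per year, so its reciprocal is the doubling time.
    """
    logs = [math.log2(v) for v in values]
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) \
        / sum((x - mean_x) ** 2 for x in years)
    return 1.0 / slope

# Illustrative data: capacity quadrupling every two years corresponds
# to a one-year doubling time.
print(doubling_time([2000, 2002, 2004, 2006], [1, 4, 16, 64]))  # 1.0
```

A "slowing trend" of the kind described below shows up in such a fit as a slope that shrinks when you fit only the most recent years.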
Most of us don’t care about the fine details of processor architecture — we just care about how much *stuff we can do* per dollar.[7](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_6_11027 \"As Fuller & Millett (2011b) write, “When we talk about scaling computing performance, we implicitly mean to increase the computing performance that we can buy for each dollar we spend” (p. 81).\") And thus far, the exponential trend in computations per dollar has kept up.[8](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_7_11027 \"Kurzweil (2012), ch. 10, footnote 10 shows “calculations per second per $1,000” growing exponentially from 1900 through 2010, including several data points after 2004. However, we couldn’t find his data sources, and we don’t know whether he adjusted for inflation, so we’ve relied instead on a data set provided by Koh & Magee (2006), extended by data we pulled from NotebookCheck, PCStats, Tom’s Hardware, and CPU-World. Our raw data are here and show a continuing exponential trend in MIPS per dollar, adjusted for inflation.\")\n\n\nIt’s unclear, however, how much longer this trend can be maintained. In particular, the [dark silicon problem](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) may slow the currently exponential trend in computations per dollar. Joel Hruska covers other recent challenges to the trend [here](http://www.extremetech.com/computing/178529-this-is-what-the-death-of-moores-law-looks-like-euv-paused-indefinitely-450mm-wafers-halted-and-no-path-beyond-14nm), including the halted production of the 450mm wafers that Intel, TSMC, and Samsung all [bet their money on](http://www.extremetech.com/computing/132604-intel-invests-in-asml-to-boost-extreme-uv-lithography-massive-450mm-wafers) 18 months ago.\n\n\n \n\n\n#### DRAM capacity per dollar: a slowing trend\n\n\nAnother important price-performance trend is *DRAM capacity per dollar*. 
This trend was almost precisely exponential for many years but recently the trend has slowed:[9](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_8_11027 \"This chart uses data from Bryant & O’Hallaron (2011), p. 584 and from the Performance Curve Database. Raw data here.\")\n\n\n[![DRAM chart](https://intelligence.org/wp-content/uploads/2014/04/DRAM-chart-2.png)](https://intelligence.org/wp-content/uploads/2014/04/DRAM-chart-2.png)\n\n\nThe same slowing trend for DRAM is also reported in [Hennessy & Patterson (2011)](http://www.amazon.com/Computer-Architecture-Quantitative-Approach-Kaufmann-ebook/dp/B0067KU84U/), on page 17. On page 100 they remark:\n\n\n\n> DRAMs obeyed Moore’s law for 20 years, bringing out a new chip with four times the capacity every three years. Due to the manufacturing challenges of a single-bit DRAM, new chips only double capacity every two years since 1998. In 2006, the pace slowed further, with the four years from 2006 to 2010 seeing only a [single] doubling of capacity.\n> \n> \n\n\n \n\n\n#### SRAM capacity per dollar: a slowing trend\n\n\nWhat about another kind of computer memory, like [SRAM](http://en.wikipedia.org/wiki/Static_random-access_memory)? The cost of SRAM dropped precipitously from 1980 to 1990 but has dropped more slowly since then.[10](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_9_11027 \"Data from Bryant & O’Hallaron (2011), p. 584. 
Raw data here.\")\n\n\n[![SRAM chart (2)](https://intelligence.org/wp-content/uploads/2014/04/SRAM-chart-2.png)](https://intelligence.org/wp-content/uploads/2014/04/SRAM-chart-2.png)\n\n\n#### Hard drive capacity per dollar: interrupted by floods in Thailand\n\n\nCost per gigabyte of hard drive storage had been dropping exponentially for about 30 years when suddenly hard drive prices actually [*increased*](http://www.computerworld.com/s/article/9227829/Hard_drive_prices_to_remain_high_until_2014) for a while because [October 2011 floods in Thailand](http://en.wikipedia.org/wiki/2011_Thailand_floods#Damages_to_industrial_estates_and_global_supply_shortages) destroyed some hard drive factories. Hard drive prices have been comparatively flat since then:[11](https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_10_11027 \"Our data are drawn from Matthew Komorowski’s page on storage cost (but adjusted for inflation), Koh & Magee (2006), Bryant & O’Hallaron (2011), and the Performance Curve Database.\")\n\n\n[![Hard drive storage cost](https://intelligence.org/wp-content/uploads/2014/04/Hard-drive-storage-cost.png)](https://intelligence.org/wp-content/uploads/2014/04/Hard-drive-storage-cost.png)\nThe floods hurt many computing providers, who struggled (and sometimes failed) to offer services at the low prices they had anticipated based on exponential expectations: see e.g. the comments by [BackBlaze](http://blog.backblaze.com/2013/11/26/farming-hard-drives-2-years-and-1m-later/), [Intel](http://money.cnn.com/2011/12/12/technology/intel_earnings_revision/index.htm), and [Joyent](http://gigaom.com/2011/12/27/what-the-hdd-shortage-means-for-cloud-computing/).\n\n\nWhereas the exponential trend for processor clock speed was brought to a halt by physics, the exponential trend in cost per gigabyte of storage was slowed (at least for now) by natural disaster.\n\n\n\n\n---\n\n1. Page 85. 
In the same book, he also writes that “we see ongoing exponential growth of every aspect of information technology, including price-performance, capacity, and rate of adoption.” (p. 377). In [*How to Create a Mind*](http://smile.amazon.com/How-Create-Mind-Thought-Revealed-ebook/dp/B007V65UUG/), Kurzweil writes that “In the course of my investigation, I made a startling discovery: If a technology is an information technology, the basic measures of price/performance and capacity… follow amazingly precise exponential trajectories” (p. 254).\n2. Page 57. In general, Diamandis & Kotler seem to agree with Kurzweil that all information technologies experience exponential growth curves. E.g. on page 99 they write that “Although [some agroecological] practices themselves look decidedly low tech, all the fields they’re informed by are information-based sciences and thus on exponential growth curves,” and on page 190 they write that “almost every component of medicine is now an information technology and therefore on an exponential trajectory.”\n3. Page 10. The authors also seem to expect exponential trends for anything that becomes a digital process: “…batteries… haven’t improved their performance at an exponential rate because they’re essentially chemical devices, not digital ones…” (p. 52).\n4. E.g. Kurzweil seems to use a fairly loose definition of “exponential.” For example in [Kurzweil (2001)](http://www.kurzweilai.net/the-law-of-accelerating-returns) he gives [this chart](https://intelligence.org/wp-content/uploads/2014/03/ISP-cost-performance.jpg) of ISP cost-performance as an example exponential trend. Sometimes this seems to cause confusion in dialogue. 
For example, in response to Ilkka Tuomi’s criticisms ([2002](http://firstmonday.org/ojs/index.php/fm/article/viewArticle/1000/921), [2003](http://meaningprocessing.com/personalPages/tuomi/articles/Kurzweil.pdf)) of claims of exponential trends in computing, Kurzweil [wrote](http://www.kurzweilai.net/exponential-growth-an-illusion-response-to-ilkka-tuomi) that if Tuomi were correct, “I would have to conclude that the one-quarter MIPS computer costing several million dollars that I used at MIT in 1967 and the 1000 MIPS computer that I purchased recently for $2,000 never really existed… I admire his tenacity in attempting to prove that the world of information technology is flat (i.e., linear).” But Tuomi’s views don’t entail that, and Tuomi didn’t say that trends in information technology have been linear. The conflict appears to stem from the fact that Tuomi was using “exponential” in the strict sense, while Kurzweil was using the term in a very loose sense. This becomes clearer in Tuomi’s [reply to Kurzweil](http://www.meaningprocessing.com/personalPages/tuomi/articles/ResponseToKurzweil.pdf).\n5. Our thanks to Jonah Sinick for his assistance in researching this post.\n6. It should be noted, however, that the exponential trend line for clock speed on page 61 of *The Singularity is Near* (2005) is now known to be incorrect. Kurzweil’s graph used the [2002 ITRS report](http://www.itrs.net/Links/2006Update/2006UpdateFinal.htm) to project the trend line for 2001-2016, but actual growth in clock speed fell substantially short of the ITRS projection.\n7. As [Fuller & Millett (2011b)](http://www.amazon.com/The-Future-Computing-Performance-Level/dp/0309159512/) write, “When we talk about scaling computing performance, we implicitly mean to increase the computing performance that we can buy for each dollar we spend” (p. 81).\n8. [Kurzweil (2012)](http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0670025291/), ch. 
10, footnote 10 shows “calculations per second per $1,000” growing exponentially from 1900 through 2010, including several data points after 2004. However, we couldn’t find his data sources, and we don’t know whether he adjusted for inflation, so we’ve relied instead on a data set provided by [Koh & Magee (2006)](http://web.mit.edu/cmagee/www/documents/15-koh_magee-tfsc_functional_approach_studying_technological_progress_vol73p1061-1083_2006.pdf), extended by data we pulled from [NotebookCheck](http://www.notebookcheck.net/), [PCStats](http://pcstats.com/), [Tom’s Hardware](http://www.tomshardware.com/), and [CPU-World](http://www.cpu-world.com/). Our raw data are [here](https://docs.google.com/spreadsheets/d/1JIYhJ3QPrP0VUlhwVZ5uPnRnBfTs8I_H4CH0XHJR2XA/edit?usp=sharing) and show a continuing exponential trend in MIPS per dollar, adjusted for inflation.\n9. This chart uses data from [Bryant & O’Hallaron (2011)](http://www.amazon.com/Computer-Systems-Programmers-Perspective-2nd-ebook/dp/B008VIXMWQ/), p. 584 and from the [Performance Curve Database](http://pcdb.santafe.edu/graph.php?curve=25). Raw data [here](https://docs.google.com/spreadsheets/d/1i91c82nRczN8DTbs21J8IiZYI_EKAafCR-Ny1qVDyq0/edit?usp=sharing).\n10. Data from [Bryant & O’Hallaron (2011)](http://www.amazon.com/Computer-Systems-Programmers-Perspective-2nd-ebook/dp/B008VIXMWQ/), p. 584. Raw data [here](https://docs.google.com/spreadsheets/d/14k-PP-ojAqzRz2VT19gjlAAwA9iYMdR7ILJ1EnnXJiI/edit?usp=sharing).\n11. 
[Our data](https://docs.google.com/spreadsheets/d/1JwaZaQeuos0-YW8VF3RjhqmWXceggkswQk3KJtzKVW8/edit?usp=sharing) are drawn from Matthew Komorowski’s [page on storage cost](http://www.mkomo.com/cost-per-gigabyte-update) (but adjusted for inflation), [Koh & Magee (2006](http://web.mit.edu/cmagee/www/documents/15-koh_magee-tfsc_functional_approach_studying_technological_progress_vol73p1061-1083_2006.pdf)), [Bryant & O’Hallaron (2011)](http://www.amazon.com/Computer-Systems-Programmers-Perspective-2nd-ebook/dp/B008VIXMWQ/), and the [Performance Curve Database](http://pcdb.santafe.edu/graph.php?curve=24).\n\nThe post [Exponential and non-exponential trends in information technology](https://intelligence.org/2014/05/12/exponential-and-non-exponential/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-12T08:18:08Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a72a06d57465b85e2db5c2b15ddc6365", "title": "Benjamin Pierce on clean-slate security architectures", "url": "https://intelligence.org/2014/05/11/benjamin-pierce/", "source": "miri", "source_type": "blog", "text": "![Benjamin C. Pierce portrait](http://intelligence.org/wp-content/uploads/2014/05/Pierce_w271.jpg) Benjamin C. Pierce is Henry Salvatori Professor of Computer and Information Science at the University of Pennsylvania and a Fellow of the ACM. His research interests include programming languages, type systems, language-based security, computer-assisted formal verification, differential privacy, and synchronization technologies. 
He is the author of the widely used graduate textbooks *[Types and Programming Languages](http://smile.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162091/ref=nosim?tag=793775876-20)* and *Software Foundations*.\n\n\nHe has served as co-Editor in Chief of the *Journal of Functional Programming*, as Managing Editor for *Logical Methods in Computer Science*, and as editorial board member of *Mathematical Structures in Computer Science*, *Formal Aspects of Computing*, and *ACM Transactions on Programming Languages and Systems*. He is also the lead designer of the popular [Unison](http://en.wikipedia.org/wiki/Unison_(file_synchronizer)) file synchronizer.\n\n\n\n**Luke Muehlhauser**: I previously [interviewed](http://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/) Greg Morrisett about the [SAFE](http://www.crash-safe.org/) project, and about computer security in general. You’ve also contributed to the SAFE project, gave an “early retrospective” talk on it ([slides](http://www.cis.upenn.edu/%7Ebcpierce/papers/safe-PiP-140126.pdf)), and I’d like to ask you some more detailed questions about it.\n\n\nIn particular, I’d like to ask about the “[verified information-flow architecture](http://prosecco.gforge.inria.fr/personal/hritcu/publications/verified-ifc-draft.pdf)” developed for SAFE. Can you give us an overview of the kinds of information flow security properties you were able to prove about the system?\n\n\n\n\n---\n\n\n**Benjamin C. Pierce**: Sure. First, to remind your readers: SAFE is a clean-slate design of a new hardware / software stack whose goal is to build a network host that is highly resilient to cyber-attack. One pillar of the design is pervasive mechanisms for tracking information flow. The SAFE hardware offers fine-grained tagging and efficient propagation and combination of tags on each instruction dispatch. 
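The tag-propagation idea can be sketched in software as a toy taint tracker: every value carries a label, and each operation combines the labels of its operands, so secrecy flows along with the data. This is an illustrative analogy only, not SAFE's actual hardware mechanism:

```python
# Illustrative software analogy (not SAFE's mechanism): values carry
# labels, and each operation joins the labels of its operands.

SECRET, PUBLIC = "secret", "public"

def join(a, b):
    """Combine two labels: the result is secret if either input is."""
    return SECRET if SECRET in (a, b) else PUBLIC

def tagged_add(x, y):
    """Add two (value, label) pairs, propagating the joined label."""
    return (x[0] + y[0], join(x[1], y[1]))

result = tagged_add((2, PUBLIC), (3, SECRET))
print(result)  # (5, 'secret')
```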
The operating system virtualizes these generic facilities to provide an “information-flow abstract machine,” on which user programs run.\n\n\nFormal verification has been part of the SAFE design process right from the beginning. We’d originally hoped to be able to verify the actual running code of the OS in the style of [seL4](http://ssrg.nicta.com.au/projects/seL4/), but we found that the codebase was too big and moving too fast for this to be practical with a small verification team. Instead, we’ve developed a methodology that combines full formal verification of *models* of the system’s key features with “property-based random testing” (*à la* [QuickCheck](http://en.wikipedia.org/wiki/QuickCheck)) of richer subsets of the system’s functionality.\n\n\nOur [most interesting formal proof](http://www.cis.upenn.edu/%7Ebcpierce/papers/verified-ifc.pdf) so far shows that the key security property of the information-flow abstract machine — the fact that a program’s secret inputs cannot influence its public outputs — is correctly preserved by our implementation on (a simplified version of) the SAFE hardware. This is interesting because the behavior of the abstract machine is achieved by a fairly intricate interplay between a hardware “rule cache” and a software layer that fills the cache as needed by consulting a symbolic representation of the current security policy. Since this mechanism lies at the very core of the SAFE architecture’s security guarantees, we wanted to be completely sure it is correct (and it is!).\n\n\n\n\n\n---\n\n\n**Luke**: You mention that since you aren’t able to formally verify the entire OS with your available resources, you’re instead formally verifying models of the system’s key features and also using property-based random testing of some of the rest of the system’s functionality. 
What level of subjective confidence (betting odds) does that approach give you for the security of the system, compared to what you could get if you had the resources to successfully verify the entire OS? (A rough “gut guess” is acceptable!)\n\n\n\n\n---\n\n\n**Benjamin**: Let’s say that an unverified but well-engineered and road-tested system gets a 1 on the security scale and a fully verified OS gets an 8. And let’s assume that we’re talking about a finished and polished version of SAFE (recognizing that what we have now is an incomplete prototype), together with correctness proofs for models of core mechanisms and random testing of some larger components. My gut would put this system at maybe a 4 on this scale. The proofs we’ve done are limited in scope, but they’ve been extremely useful in deepening our understanding of the core design and guiding us to a clean implementation.\n\n\nWhy is the top of my scale 8 and not 10? Because even a full-blown machine-checked proof of correctness doesn’t mean that your OS is truly invulnerable. Any proof is going to be based on assumptions — e.g., that the hardware behaves correctly, which might not be the case if the attacker has a hair dryer and physical access to the CPU! So, to get higher than 8 requires some additional assurance that the degree of potential damage is limited *even if* your formal assumptions are broken.\n\n\nThis observation suggests a defense-in-depth strategy. Our goal in designing the SAFE software has been to make the “omniprivileged” part of the code (i.e., the part that could cause the machine to do absolutely anything, if the attacker could somehow get it to misbehave) as small as possible — vastly smaller than conventional operating systems and even significantly smaller than microkernels. 
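The noninterference property mentioned above — secret inputs cannot influence public outputs — pairs naturally with property-based random testing: hold the public input fixed, vary the secret input at random, and require the public output to be unchanged. A toy sketch, with all program names hypothetical rather than SAFE's code:

```python
import random

def safe_program(public_in, secret_in):
    # Toy "program" under test: its result depends only on the public input.
    return public_in * 2 + 1

def leaky_program(public_in, secret_in):
    # This one leaks the secret's parity into the public output.
    return public_in + (secret_in % 2)

def check_noninterference(program, trials=1000, seed=0):
    """Property-based random test of noninterference: fix the public
    input, vary the secret input, and require identical public outputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        pub = rng.randrange(10_000)
        s1, s2 = rng.randrange(10_000), rng.randrange(10_000)
        if program(pub, s1) != program(pub, s2):
            return False  # counterexample found: the secret leaked
    return True

print(check_noninterference(safe_program))   # True
print(check_noninterference(leaky_program))  # False
```

Random testing like this can only find counterexamples, not prove their absence, which is why the formal proof described above is done over a model of the machine.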
The code that checks security policy and fills the hardware rule cache is highly privileged, as is the garbage collector in our current design, but we believe that the rest of the OS can be written as a set of “compartments” with more limited privilege, using the tagging hardware to enforce separation between components. (Demonstrating this claim is ongoing work.)\n\n\n\n\n---\n\n\n**Luke**: I’ve spoken to some people who are skeptical that formal verification adds much of a security boost, since so many security problems come from not understanding the vulnerabilities well enough to capture them all in a system’s formal requirements in the first place, as opposed to coming from a system which fails to match the design specification in ways that could have been caught by formal verification. Yet you seem to rank a verified system as far more secure than a non-verified one. What’s your perspective on this?\n\n\n\n\n---\n\n\n**Benjamin**: Yes, other things being equal I would put quite a bit of money on a verified system being more secure than an unverified one — not because the verification process eliminates the possibility of vulnerabilities that were not considered in the specification, but because the process of *writing* the formal specification requires someone to sit down and think carefully about at least some of the (security and other) properties that the system is trying to achieve. A clear understanding of at least some of the issues is a lot better than nothing!\n\n\nActually carrying out a proof that the system matches the specification adds significant further value, because specifications themselves are complex artifacts that require debugging just as much as the system being specified does! 
But property-based random testing — i.e., checking that the system satisfies its specification for a large number of well-distributed random inputs — can be a very effective intermediate step, giving a large part of the bang for relatively small buck.\n\n\n\n\n---\n\n\n**Luke**: In general, do you think that highly complex systems for which “high assurance” of safety or security (as relevant) are required need to be designed “from the ground up” for safety or security in order to obtain such high assurance, or are there many cases in which a highly complex but high assurance system can be successfully developed by optimizing development for general performance and then “patching” the system for security and safety?\n\n\nAnd, do you have any guesses as to what the most common reply to this question might be in your own subfield(s) and other adjacent subfields?\n\n\n\n\n---\n\n\n**Benjamin**: Of course it depends a bit what you mean by “high assurance”! Quite a few people define the term so that it *only* applies to software developed using methods that consider safety and/or security from day 1. But even if we take a broader definition of HA — “offering very good resistance to malicious penetration and catastrophic failure,” or some such — my feeling is that this is extremely difficult to retrofit.\n\n\nI personally saw this effect very clearly while building Unison, a cross-platform file synchronization tool that I made with Trevor Jim and Jérome Vouillon a few years ago. I had done several synchronizers before Unison, but I always discovered edge cases where the behavior seemed wrong, so we began the Unison design by trying to write down a very precise (though dramatically simplified) specification of its core behavior. Beyond helping us understand edge cases, this exercise had a *huge* effect on the way we wrote the code. 
For example, the specification says that, at the end of a run of the synchronizer, every part of the filesystems being synchronized should be either *synchronized* (brought up to date with the other copy) or *unchanged* from when the synchronizer started running (for example, if there was a conflict that the user chose not to resolve). But, since the synchronizer can terminate in all sorts of ways (finishing normally, getting killed by the user, the network connection getting dropped, one machine getting rebooted, …) this requirement means that the filesystems must be in one of these two states *at every moment* during execution. This is actually quite hard to achieve: there are many parts of the code that have to do extra work to buffer things in temporary locations until they can be moved atomically into place, use two-phase commits to make sure internal data structures don’t get corrupted, etc., etc., and the reasons why the extra work is needed are often quite subtle. Having the spec beforehand forced us to do this work as we went. Going back later and rediscovering all the places where we should have paid extra attention would have been well nigh impossible.\n\n\nAs for what others in my research area might say to your question, I decided to poll some friends instead of just assuming. As expected, most of the comments I got back were along the lines of “Making a complex system really secure after the fact is basically impossible.” But a few people did propose systems that might count as examples, such as SELinux and Apache. (Actually, SELinux is an interesting case. On one hand, its goal was only to add mandatory access control to Linux; it explicitly didn’t even attempt to increase the assurance of Linux itself. So for attacks that target kernel vulnerabilities, SELinux is no better than vanilla Linux. 
On the other hand, I’m told that, in certain situations — in particular, for securing web applications and other network-facing code against the sorts of attacks that come over HTTP requests — SELinux can provide very strong protection.)\n\n\n\n\n---\n\n\n**Luke**: Thanks, Benjamin!\n\n\nThe post [Benjamin Pierce on clean-slate security architectures](https://intelligence.org/2014/05/11/benjamin-pierce/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-11T18:00:06Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e14e61b347dab8abbca52829ad3d1bb8", "title": "Michael Fisher on verifying autonomous systems", "url": "https://intelligence.org/2014/05/09/michael-fisher/", "source": "miri", "source_type": "blog", "text": "![Michael Fisher portrait](http://intelligence.org/wp-content/uploads/2014/05/Fisher_w449.jpg) Michael Fisher is a professor of Computer Science, specialising in logical methods and automated formal verification, and Director of the multi-disciplinary [Centre for Autonomous Systems Technology](http://www.liv.ac.uk/cast) at the [University of Liverpool](https://theconversation.com/institutions/university-of-liverpool) and member of the [Logic and Computation group](http://intranet.csc.liv.ac.uk/research/logics/) in the Department of Computer Science.\n\n\nProfessor Fisher is also a Fellow of both the [BCS](http://www1.bcs.org.uk/) and the [IET](http://www.theiet.org/), and a member of the [UK Computing Research Committee](http://www.ukcrc.org.uk/). He is Chair of the Department’s [Industrial Liaison Committee](http://www.csc.liv.ac.uk/~michael/industrial.html).\n\n\n\n**Luke Muehlhauser**: In “[Verifying Autonomous Systems](http://commonsenseatheism.com/wp-content/uploads/2013/08/Fisher-et-al-Verifying-Autonomous-Systems.pdf)” (2013), you and your co-authors summarize some recent approaches to formally verifying autonomous systems. 
At one point, you write:\n\n\n\n> Many autonomous systems, ranging over unmanned aircraft, robotics, satellites, and even purely software applications, have a similar structure, namely layered architectures as summarized in Figure 1. Although purely connectionist/sub-symbolic architectures remain prevalent in some areas, such as robotics, there is a broad realization that separating out the important/difficult choices into an identifiable entity can be very useful for development, debugging, and analysis. While such layered architectures have been investigated for many years they appear increasingly common in autonomous systems.\n> \n> \n\n\nThe accompanying figure is:\n\n\n![Figure 1. Typical hybrid autonomous figure architecture—with suitable analysis techniques noted.](http://intelligence.org/wp-content/uploads/2014/05/Fisher-Typical-hybrid-autonomous-system-architecture.png)\nWhat can you say about how common you think something like your “typical hybrid autonomous system architecture” is? E.g. do you think that kind of architecture is in use in ~20 different real-world applications, or ~200 different real-world applications, or perhaps more? Do you think there’s a trend in favor of such architectures? Is there a trend toward such architectures even in robotics, do you think?\n\n\n\n\n\n---\n\n\n**Michael Fisher**: It is difficult to get definitive evidence about the internal architectures of autonomous systems, especially commercial ones, so I only have anecdotal information about systems constructed in this way. However, from my discussions with engineers, both in academia and industry (aerospace and automotive), it appears that the use of this sort of architecture is increasing. 
It might not be termed an “agent architecture”, indeed the “decision maker” might not be termed an “agent” at all, but systems with a separate “executive” or “control module” that explicitly deals with high-level decisions, fit into our model.\n\n\nI am not sure whether robotic architectures are also moving in this direction, but there seems to be a theme of adding a “monitor” or “governor” that can assess the safety or ethics of proposed actions before the robot takes them[1](https://intelligence.org/2014/05/09/michael-fisher/#footnote_0_11068 \"Arkin, R. “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture“. Technical Report GIT-GVU-07-11, Georgia Tech, 2007.\"),[2](https://intelligence.org/2014/05/09/michael-fisher/#footnote_1_11068 \"Woodman, R., Winfield, A., Harper, C. and Fraser, M. “Building Safer Robots: Safety Driven Control“. International Journal of Robotic Research 31(13):1603–1626, 2012.\") . This can alternatively be viewed as an additional agent making high-level decisions (what to permit, what to block) and so this approach might also be seen as a variety of hybrid agent architecture.\n\n\n\n\n---\n\n\n**Luke**: In that same paper, you and your co-authors explain that\n\n\n\n> Since autonomous software has to make its own decision, it is often vital to know not only what the software does and when it does it but also *why* the software chooses to do it.\n> \n> \n> …A very useful abstraction for capturing such autonomous behavior within complex, dynamic systems turns out to be the concept of an agent. Since the agent concept came into prominence in the 1980s, there has been vast development within both academia and industry. It has become clear this agent metaphor is very useful for capturing many practical situations involving complex systems comprising flexible, autonomous, and distributed components. 
In essence, agents must fundamentally be capable of flexible autonomous action.\n> \n> \n> However, it turns out the “agent” concept on its own is still not enough! Systems controlled by neural networks, genetic algorithms, and complex control systems, among others, can all act autonomously and thus be called agents, yet the reasons for their actions are often quite opaque. Because of this, such systems are very difficult to develop, control, and analyze.\n> \n> \n> So, the concept of a *rational* agent has become more popular. Again, there are many variations but we consider this to be an agent that *has explicit reasons for making the choices it does, and should be able to explain these if necessary*.\n> \n> \n\n\nI previously explored the relative transparency of different AI methods in [Transparency in Safety-Critical Systems](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/), where I asked:\n\n\n\n> How does the transparency of a method change with scale? A 200-rules logical AI might be more transparent than a 200-node Bayes net, but what if we’re comparing 100,000 rules vs. 100,000 nodes? At least we can *query* the Bayes net to ask “what it believes about X,” whereas we can’t necessarily do so with the logic-based system.\n> \n> \n\n\nWhat are your thoughts on how well the transparency of rational agent models will scale? Or, do you think the “rational agent” part of future autonomous systems will be kept fairly small (e.g. to enable transparency and verification) even as the subsystems at lower “layers” in the architecture may grow massively in complexity?\n\n\n\n\n---\n\n\n**Michael**: I would say that the high-level rules within rational agents remain relatively transparent even as their numbers increase. It’s just that the proof/analysis mechanisms do not scale so well.\n\n\nSo, as you suggest, the intention is to keep the rational agent part as small as possible. 
Formal verification is quite costly, so any verification of very large agents is likely to be infeasible.\n\n\nBy only dealing with the high-level, rational agent decisions, we aim to avoid handling the vast state-space of the whole system. The use of logical (specifically, BDI) rules abstracts us from much low-level detail.\n\n\nConcerning your question about “100,000 rules vs. 100,000 nodes”, I would hope that the symbolic logic language used helps us reduce this to 1000, or even 100, rules (vs. 100,000 nodes).\n\n\n\n\n---\n\n\n**Luke**: Does your hierarchical autonomous system approach, with a Belief-Desire-Intention rational agent “on top,” integrate with [hybrid system control](http://intelligence.org/2014/03/26/anil-nerode/) approaches, or would it be an alternative to hybrid system control approaches?\n\n\n\n\n---\n\n\n**Michael**: Yes, it fits together with hybrid system approaches and, certainly, the continuous and stochastic aspects are important for modelling real-world interactions. You can possibly see our approach as providing further detail about the discrete steps within hybrid systems.\n\n\n\n\n---\n\n\n**Luke**: In the future, do you think the logical agent specification in the top-level role in hierarchical autonomous systems will eventually need to be replaced by a more flexible, probabilistic/fuzzy reasoning approach, so as to be a better match for the environment? Or could a logical agent specification play that role even as we build systems (several decades from now) that approach human-like, fully general reasoning and planning abilities?\n\n\n\n\n---\n\n\n**Michael**: The latter. We are working towards being able to specify and verify probabilistic, and even real-time, agent behaviours at this top level. (Currently we can do either agent verification or probabilistic verification, but we are developing combination mechanisms.) 
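To make the idea of probabilistic verification a little more concrete: where exhaustive probabilistic model checking (as in tools like PRISM) is too expensive, a property of the form “when asked, the robot replies within 30 seconds with probability at least 80%” can at least be *estimated* by Monte Carlo simulation against a system model. The sketch below is purely illustrative and not taken from Fisher’s work; the latency model and all function names are hypothetical.

```python
import random

def simulate_reply_latency(rng: random.Random) -> float:
    """Toy latency model (hypothetical, illustration only):
    90% of the time the robot replies quickly; 10% of the time it is
    busy and replies late."""
    if rng.random() < 0.9:
        return rng.uniform(1.0, 20.0)   # fast path, always under 30s
    return rng.uniform(40.0, 120.0)     # slow path, always over 30s

def estimate_probability(trials: int = 100_000, deadline: float = 30.0,
                         seed: int = 0) -> float:
    """Monte Carlo estimate of P(reply within `deadline` seconds)."""
    rng = random.Random(seed)
    hits = sum(simulate_reply_latency(rng) <= deadline for _ in range(trials))
    return hits / trials

p = estimate_probability()
print(f"P(reply within 30s) is roughly {p:.3f}")
assert p >= 0.8  # the "at least 80%" requirement holds for this toy model
```

Unlike the symbolic model checking Fisher describes, this statistical approach gives only a confidence-bounded estimate rather than a proof, which is exactly why combining agent verification with true probabilistic verification is valuable.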
The high-level agent will likely become increasingly complex as the sophistication of applications develops, and may well incorporate different reasoning and decision components. Our approach at present is that planning components are within the separate, low-level modules. The agent just calls these and then makes decisions between the plan options produced.\n\n\n\n\n---\n\n\n**Luke**: Interesting. Could you say a bit more about the combination mechanisms you’re developing?\n\n\n\n\n---\n\n\n**Michael**: Formally verifying basic logical properties is already difficult. However, once we tackle autonomous systems that might well be used in realistic situations, then we are often led to much more complex logical queries. For example, imagine we wish to verify a property such as\n\n\n\n> when asked, the probability of the robot replying within 30 seconds is at least 80%\n> \n> \n\n\nAlready we have moved beyond basic properties to incorporate both probabilities (at least 80%) and real-time aspects (within 30s). But with autonomous systems, we would like to verify not just what the system (e.g. a robot) does but what its intentions and beliefs are. So we might even want to verify\n\n\n\n> when asked, the probability of the robot believing the request has been made and then intending to reply within 30 seconds is at least 80%\n> \n> \n\n\nThe problem here is that the logical framework we need to describe such requirements is already quite complex. Combinations of logics are often difficult to deal with, and combinations of logics of probability, real-time, intention, belief, etc., have not yet been studied in great detail.\n\n\nWe are aiming to develop and extend some theoretical [work we have been doing](http://dx.doi.org/10.1016/j.tcs.2013.07.012) on model checking of complex logical combinations in order to tackle this problem of more comprehensive autonomous system verification.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Michael!\n\n\n\n\n---\n\n1. Arkin, R. 
“[Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.1610)“. Technical Report GIT-GVU-07-11, Georgia Tech, 2007.\n2. Woodman, R., Winfield, A., Harper, C. and Fraser, M. “[Building Safer Robots: Safety Driven Control](http://dx.doi.org/10.1177/0278364912459665)“. International Journal of Robotic Research 31(13):1603–1626, 2012.\n\nThe post [Michael Fisher on verifying autonomous systems](https://intelligence.org/2014/05/09/michael-fisher/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-09T17:33:49Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "800d274c1818f8386ee31dd2656d56cd", "title": "Harry Buhrman on quantum algorithms and cryptography", "url": "https://intelligence.org/2014/05/07/harry-buhrman/", "source": "miri", "source_type": "blog", "text": "![Harry Buhrman portrait](http://intelligence.org/wp-content/uploads/2014/05/Buhrman_w185.jpg) [Harry Buhrman](http://www.cwi.nl/people/552) is head of the research group ‘[Algorithms and Complexity](http://www.cwi.nl/research-groups/algorithms-and-complexity)’ at the Centrum Wiskunde & Informatica, Amsterdam, which he joined in 1994. Since 2000 he also has a joint appointment as full professor of computer science at the [University of Amsterdam](http://www.uva.nl/en/home). Buhrman’s research focuses on quantum computing, algorithms, complexity theory, and computational biology. 
In 2003 he obtained a prestigious [Vici award](http://www.nwo.nl/en/research-and-results/programmes/Talent+Scheme/awards/vici+awards) and was coordinator of several national and international projects.\n\n\nBuhrman is editor of several international journals and is a member of various advisory and scientific boards, such as the advisory board of the Institute for Quantum Computing (Waterloo, Canada).\n\n\n\n**Luke Muehlhauser**: In theory, [position-based quantum cryptography](http://ercim-news.ercim.eu/en85/special/position-based-quantum-cryptography) could ensure that certain tasks could only be performed at certain locations:\n\n\n\n> For example, a country could send a message in such a way that it can only be deciphered at the location of its embassy in another country.\n> \n> \n\n\nIn [Buhrman et al. (2011)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Buhrman-et-al.-Position-Based-Quantum-Cryptography-Impossibility-and-Constructions.pdf), you and your co-authors showed that\n\n\n\n> if collaborating adversaries are allowed to pre-share an arbitrarily large entangled quantum state, then… position-based cryptography… is impossible… in the quantum setting.\n> \n> \n> On the positive side, we show that for adversaries that are restricted to not share any entangled quantum states, secure position-verification is achievable. Jointly, these results suggest the interesting question whether secure position-verification is possible in the bounded case of a bounded amount of entanglement.\n> \n> \n\n\nWhat is the current state of our knowledge on whether secure position-verification is possible given a bounded amount of entanglement?\n\n\n\n\n---\n\n\n**Harry Buhrman**: Classically such schemes will always be insecure, even if one restricts the running time or other resources of the adversaries. 
Quantumly the situation is more complicated, although we and others have shown that, given an exponential amount of entanglement, quantum schemes can also be broken and are insecure. The quest is thus on for schemes that 1) can be easily implemented by the honest parties, and 2) require unreasonable amounts of entanglement for adversaries to break. We have seen some partial progress along these lines. For example, a [paper](http://arxiv.org/pdf/1101.1065.pdf) by Beigi and Koenig shows that there are schemes where a linear amount of entanglement is necessary to break them. It turns out that the question of whether such schemes exist (and, if they do, of proving that particular schemes are secure) is a very hard problem that we are currently working on. Moreover, the approach we have taken would, if successful, imply results in classical complexity theory, and such results are very hard to obtain. On the other hand, it may still be possible that there simply do not exist schemes that are secure in this sense. \n\n\n\n\n\n\n---\n\n\n**Luke**: Based on your knowledge of the current state of quantum computing and quantum algorithm design, what recommendations to governments or citizens might you make?\n\n\nE.g. Ronald de Wolf recommended we switch to [post-quantum cryptography](http://en.wikipedia.org/wiki/Post-quantum_cryptography) for sensitive data, especially since “even though there’s no QC yet, the security services… are probably hoovering up RSA-encrypted communications that they store for the time being, waiting for the QC that will allow them to decrypt these messages later.”\n\n\n\n\n---\n\n\n**Harry**: Yes, I agree with Ronald completely. Switch as soon as possible to quantum-proof security (although we don’t know exactly what is quantum-proof; see the next paragraph). Indeed, all of our electronic encrypted communication, such as e-mail, telephone conversations, etc. 
that are encrypted using RSA-like techniques cannot yet be decrypted, but the NSA and other organisations collect these encrypted data and wait till the day comes when they can decipher them. For example, when a sufficiently large quantum computer is built, such data can be decrypted very easily. So if we want to keep our data secure for a reasonable amount of time, we should switch to other encryption techniques and standards. This applies more to governments, banks, and big companies than to normal users, but still I feel that a lot of people don’t realise this. The common but incorrect way of approaching this is as follows: this quantum computer, if built at all, is still 15 years ahead of us, and when that time comes we can switch to more secure encryption then. This reasoning is of course incorrect.\n\n\nSo what encryption should we use instead? The problem is that this requires much more research. However, a few encryption schemes have been proposed that *appear* to be quantum-proof. I suggest that we switch to those in combination with good old RSA: double encryption. This way the encryption security remains at least as strong as it is now, but most likely it is also quantum-proof. One caveat of this approach is that encryption and decryption become (much) less efficient, since RSA-like techniques are conveniently fast. This is the price one has to pay to be secure well into the future. So one has to make a decision when encrypting: is it only necessary to be secure for the next year (e.g. when transmitting a credit card number it is mostly fine if it is only secure for a year or two), or do we need security well into the future? 
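The “double encryption” idea above is just cipher cascading: encrypt the message under one scheme, then encrypt the result under an independent second scheme, so an attacker must break both layers to read anything. Here is a minimal stand-in sketch; the XOR keystream is a toy illustration, not a real cipher, and in practice layer one would be a conventional RSA/AES-based scheme while layer two would be a candidate post-quantum scheme.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR `data` with a SHA-256-derived keystream.
    Illustration only -- NOT a vetted cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

def double_encrypt(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # Cascade: layer 1 (stand-in for RSA/AES), then layer 2 (stand-in
    # for a post-quantum scheme). Reading msg requires breaking both.
    return keystream_xor(k2, keystream_xor(k1, msg))

def double_decrypt(k1: bytes, k2: bytes, ct: bytes) -> bytes:
    # Peel the layers off in reverse order.
    return keystream_xor(k1, keystream_xor(k2, ct))

k1, k2 = os.urandom(32), os.urandom(32)
msg = b"keep this secret well into the future"
ct = double_encrypt(k1, k2, msg)
assert double_decrypt(k1, k2, ct) == msg
assert double_decrypt(k1, os.urandom(32), ct) != msg  # one key is not enough
```

The design point of the cascade is that its security is at least that of the stronger layer: even if the post-quantum layer later turns out to be weak, the conventional layer still holds, and vice versa, at the cost of the extra computation Harry mentions.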
In the latter case I advise switching as soon as possible.\n\n\n\n\n---\n\n\n**Luke**: Are there additional recommendations you can make to governments or citizens, based on your knowledge of the current state of quantum computing and quantum algorithm design?\n\n\n\n\n---\n\n\n**Harry**: Cryptographic implications are the ones that pertain immediately to society. What most people don’t realise is that there are many problems in science- and technology-related fields, e.g. chemistry, biology, physics, and medicine design, that are computationally very demanding and often impossible to carry out even on the most powerful supercomputers. Quantum computers will help to solve some of these more efficiently, notably when they concern simulation of quantum mechanical systems and properties. So research in QC will probably have an impact there as well.\n\n\n\n\n---\n\n\n**Luke**: From your perspective, what are the most significant factors blocking or slowing greater progress on the theoretical half (not the experimental half) of quantum computing research? In other words, what feasible but difficult changes to the field (or related fields) would most accelerate theoretical quantum computing research?\n\n\n\n\n---\n\n\n**Harry**: That is a difficult question to answer. If I knew what to change I would do it immediately. One of the problems is that we would like to have more quantum algorithms that demonstrate the power of quantum computation. The community has developed several techniques, like quantum Fourier sampling and quantum random walks. We would like to have more of those techniques and ideas. Maybe more importantly, we also need to isolate computational problems or tasks that are amenable to quantum speedup; that is to say, tasks where quantum computers outperform classical computers.\n\n\nAnother area that could improve is that of fault tolerance and error-correcting codes. 
A lot of progress has been made, but we still don’t know the exact value of how many hardware errors can be tolerated by algorithmic design. This value is called the fault-tolerance threshold. It is important to know the exact value when implementing qubits and running quantum algorithms and protocols on them. We will need to perform error correction and fault-tolerant computation to fight the errors that pop up due to imperfections and noise. But if the errors that occur are too big, such correction by computation is not possible. It will also be important to keep the overhead as small as possible.\n\n\nA third problem is that of few-qubit applications. We are beginning to see the demonstration of few-qubit computers in the lab, but there are no good problems that could be used to demonstrate the usefulness of small quantum computers that operate on, say, 30–100 qubits.\n\n\n\n\n---\n\n\n**Luke**: In [Buhrman et al. (1997)](http://staff.science.uva.nl/%7Eleen/PDFpapers/726.pdf), you and your co-authors listed 6 open problems in computational complexity theory. Have any of them been solved in the intervening time, perhaps at least with relativized results?\n\n\n\n\n---\n\n\n**Harry**: In that paper we listed 6 hypotheses that are all probably false. The goal was and still is to show that they are equivalent to P=NP. We also listed 6 open problems of which two were [solved the next year](http://www.sciencedirect.com/science/article/pii/S0022000099916537) by D. Sivakumar (numbers 2 and 5 from that list). As far as I know, not much progress has been made towards these problems or towards showing that all the hypotheses are equivalent to P=NP. I think this is an indication of how difficult the P versus NP question is. There is currently a new approach to tackling this problem (not the six hypotheses, but the P vs NP problem itself). This approach is called geometric complexity theory and involves algebraic geometry and representation theory. 
These are difficult topics in mathematics.\n\n\nThere actually is a connection between quantum information and these types of classical questions. The language of quantum information allows one to state some of these problems in a different language, which allows one to approach the problem from a different angle. In general this has been quite successful. For example, see [the result of Ronald de Wolf and co-authors](http://homepages.cwi.nl/%7Erdewolf/publ/qc/stoc130-fiorini.pdf), which uses quantum communication at its core, or [the paper I have](https://arxiv.org/abs/0911.4007) with Jop Briet, Troy Lee and Thomas Vidick, which settles a problem in Banach space theory by solving a problem in quantum information. There also is [a survey](http://theoryofcomputing.org/articles/gs002/gs002.pdf) about other such results.\n\n\nPerhaps we should revisit the 6 hypotheses paper with quantum techniques and see if we can make a little progress. Though at the moment I don’t see a clear plan of attack.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Harry!\n\n\nThe post [Harry Buhrman on quantum algorithms and cryptography](https://intelligence.org/2014/05/07/harry-buhrman/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-08T00:45:26Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "e46317038aed87706fc456605418ad1b", "title": "New paper: “Problems of self-reference in self-improving space-time embedded intelligence”", "url": "https://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/", "source": "miri", "source_type": "blog", "text": "[![Problems of self-reference](https://intelligence.org/wp-content/uploads/2014/05/Problems-of-self-reference.png)](https://intelligence.org/files/ProblemsofSelfReference.pdf)We’ve released a new working paper by Benja Fallenstein and Nate Soares, “[Problems of self-reference in self-improving space-time embedded 
intelligence](https://intelligence.org/files/ProblemsofSelfReference.pdf).”\n\n\nAbstract:\n\n\n\n> By considering agents to be a part of their environment, Orseau and Ring’s space-time embedded intelligence is a better fit for the real world than the traditional agent framework. However, a self-modifying AGI that sees future versions of itself as an ordinary part of the environment may run into problems of self-reference. We show that in one particular model based on formal logic, naive approaches either lead to incorrect reasoning that allows an agent to put off an important task forever (the *procrastination paradox*), or fail to allow the agent to justify even obviously safe rewrites (the *Löbian obstacle*). We argue that these problems have relevance beyond our particular formalism, and discuss partial solutions.\n> \n> \n\n\nThis working paper also cites a brief new technical report by Fallenstein, “[Procrastination in probabilistic logic](http://intelligence.org/wp-content/uploads/2014/04/procrastination-probabilistic-logic.pdf).”\n\n\n**Update** 05/14/14: This paper has been accepted to [AGI-14](http://agi-conf.org/2014/).\n\n\nThe post [New paper: “Problems of self-reference in self-improving space-time embedded intelligence”](https://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-06T10:47:27Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "b99a01356bb2543a7873e065dbde85fe", "title": "Liveblogging the SV Gives Fundraiser", "url": "https://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/", "source": "miri", "source_type": "blog", "text": "[![SVGives logo lrg](https://intelligence.org/wp-content/uploads/2014/05/SVGives-logo-lrg-300x231.jpg)](http://intelligence.org/2014/05/04/calling-all-miri-supporters/)Today MIRI is participating in a 
massive 24-hour fundraiser called SV Gives. Strategy details [here](http://intelligence.org/2014/05/04/calling-all-miri-supporters/), donate [here](http://svgives.razoo.com/story/Machine-Intelligence-Research-Institute).\n\n\nThis blog post will be updated many times throughout the day.\n\n\nTotal donated to MIRI: **$110,245**.\n\n\nTotal prizes & matching won for MIRI: **$61,330**.\n\n\n \n\n\n**12:12am Pacific:** That’s a wrap, folks! Our thanks once again to [SVCF](http://www.siliconvalleycf.org/) for putting together a thrilling fundraiser with a neck-and-neck race toward the end. Our thanks to everyone who participated in SV Gives. And of course many thanks to our dedicated, well-organized donors who made this such a surprising success for MIRI. And finally, my thanks to the MIRI staff who worked very long hours to run the fundraiser, especially Malo Bourgon, who worked 38 hours straight, and now must go to sleep. ![🙂](https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png)\n\n\n**12:07am Pacific:** For the last hour, MIRI once again won both the $2,000 prize and the $150 prize! Total raised is $110,245 plus $61,330 in prizes and matching funds!\n\n\n**12:04am Pacific:** Fundraiser complete! $7.9M raised for local nonprofits. Congrats to SVCF for a rather thrilling fundraiser, and to leaderboard winners Sankara Eye Foundation and Burlingame Community for Education!\n\n\n**11:58pm Pacific:** With two minutes to spare, we passed 2000 total donations today!\n\n\n**11:06pm Pacific**: Another $2,000! And also another $150 for a random winner, Samuel Brooks!\n\n\n**10:10pm Pacific**: And another $2,000!\n\n\n**9:22pm Pacific:** MIRI has won $2,000 again!\n\n\n**8:07pm Pacific:** MIRI wins $2,000 again!\n\n\n**7:57pm Pacific**: Wow, MIRI and Sankara are still neck-and-neck for Total Unique Donors! Right now Sankara has a 5 person lead.\n\n\n**7:09pm Pacific:** MIRI won another $2,000 for unique donors last hour, plus $6,980 in matching! 
But in other news, Sankara Eye Foundation has now overtaken us in the race for the Grand Prize!\n\n\n**6:21pm Pacific:** MIRI won $3,030 in matching at 6pm. Thanks, everyone!\n\n\n**6:05pm Pacific:** MIRI won another $2,000 for unique donors last hour. Great job, everyone! Get ready for more matching funds being available at 7pm sharp!\n\n\n**5:54pm Pacific:** Hurray! MIRI was one of 50 organizations to win $1,000 from the 5pm matching funds. The 6pm matching funds are coming up, so try to send your donation in the first ~2 seconds after the hour!\n\n\n**5:11pm Pacific:** We won the hour again ($2,000), and we also won a 2nd randomly chosen $150 Golden Ticket, courtesy of a donation by Simon Safar!\n\n\n**4:36pm Pacific:** Amazingly, MIRI is now 12% of all donations so far during SV Gives day.\n\n\n**4:27pm Pacific:** We’ve reached 250 unique donors for the day; thanks so much! As it happens, there’s a MIRI research workshop going on right now, so [here is a photo](https://intelligence.org/wp-content/uploads/2014/05/May-6th.jpg) of what you’re all funding (several times over)!\n\n\n**4:06pm Pacific:** Another $2,000 for MIRI. Note that we’re coming up on the 5pm matching opportunity, so you should try to donate in the first 5 seconds after 5pm to try to win the match!\n\n\n**3:16pm Pacific:** KFOX will be [covering](http://www.kfox.com/pages/9131811.php) SV Gives between 5-7pm today.\n\n\n**3:06pm Pacific**: Another $2,000! The [strategy](http://intelligence.org/2014/05/04/calling-all-miri-supporters/) is still working!\n\n\n**2:10pm Pacific:** MIRI won the hourly $2,000 again. Thanks, everyone!\n\n\n**1:36pm Pacific:** Noon matching results have been released. MIRI got $2910 of the $50,000 in matching, which was apparently all taken in less than 4 seconds!\n\n\n**1:11pm Pacific**: Another $2,000 prize won. 
Our deep and sincere thanks to [SVCF](http://www.siliconvalleycf.org/) for organizing [SV Gives](http://svgives.razoo.com/giving_events/svg14/home), and to MIRI’s donors for being so dedicated and well-organized!\n\n\n**12:06pm Pacific:** And we won the $2,000 again! Many thanks to our astoundingly well-coordinated donors.\n\n\n**11:07am Pacific**: We’ve won the $2,000 Most Unique Donors Each Hour prize again. Who will break MIRI’s winning streak?\n\n\n**10:34am Pacific:** We’ve hit the 200 unique donors (for the whole day) mark!\n\n\n**10:08am Pacific:** And yet another $2,000 prize for Most Unique Donors Each Hour! I bet the very gracious Sobrato Family Foundation wasn’t expecting to be one of MIRI’s largest donors of the year so far.\n\n\n**9:29am Pacific:** On Twitter, SVCF asked us: “What’s your secret? You’ve just got the 9 a.m. $2,000 Golden Ticket prize for a sixth time! Congrats!” We [replied](https://twitter.com/siliconvalleycf/status/463711885628411904): “Our secret is: 70+ person-hours of planning and donor coordination, 10+ energy drinks, and many enthusiastic supporters!” Seriously, I think our participation in SV Gives should be sponsored by Rockstar energy drinks, given how many of them we are consuming over here!\n\n\n**9:07am Pacific:** MIRI wins another $2,000. Three cheers for donor coordination!\n\n\n**8:22am Pacific:** $1,970 more in matching came in at the beginning of this hour! (The first 14 seconds of this hour, to be precise.)\n\n\n**8:10am Pacific:** Another hour, another $2,000 prize for MIRI! Thanks again, everyone.\n\n\n**7:48am Pacific:** In the first ~30 seconds after 7am, we captured $2,340 in matching. Thanks, everyone!\n\n\n**7:07am Pacific:** MIRI wins the $2,000 prize once again! (For the 6am-7am hour.)\n\n\n**6:16am Pacific:** 150 unique donors today so far.\n\n\n**6:08am Pacific:** Excellent work, everyone! MIRI won the $2,000 prize for “Most Unique Donors Each Hour” for a third time! 
(For the 5am-6am hour.)\n\n\n**5:27am Pacific:** According to SV Gives’ twitter account, NBC will be covering the SV Gives fundraiser in 3 minutes.\n\n\n**5:18am Pacific:** Our thanks to Mick Porter, whose donation won the $150 prize for “random winner” during the 4am hour! We also won the $2,000 “Most Unique Donors Each Hour” again for the 4am-5am hour!\n\n\n**4:41am Pacific:** So far we’ve won the $2,000 prize for Most Unique Donors Each Hour once, but not yet twice, so it looks like marginal donors giving as little as $10 each hour can make a difference!\n\n\n**3:50am Pacific:** If you’re visiting this page due to MIRI’s SV Gives participation, and would like to know what it looks like to do *technical* work on ensuring good outcomes from future self-improving AI programs, check out our just-released new working paper, “[Problems of self-reference in self-improving space-time embedded intelligence](http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/).”\n\n\n**3:04am Pacific**: Here’s a [live feed](https://plus.google.com/hangouts/_/7ecpjeqskmanhhclhe8ug0nmis) of MIRI HQ. Louie, Malo, myself, and Nate are there now.\n\n\n**2:23am Pacific:** We won the $2,000 Golden Ticket for the 1am-2am hour! We also won the $2500 “Local Area Code Prize” by way of Patrick LaVictoire making the 408th donation (to any charity) of the day! 
Our thanks to the Sobrato Family Foundation and the Shackleton Family Foundation, respectively.\n\n\nThe post [Liveblogging the SV Gives Fundraiser](https://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-06T09:28:50Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "80eb10b0f657438c7b0bfe89a71c39ea", "title": "Calling all MIRI supporters for unique giving opportunity!", "url": "https://intelligence.org/2014/05/04/calling-all-miri-supporters/", "source": "miri", "source_type": "blog", "text": "**Update:** We are liveblogging the fundraiser [here](http://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/).\n\n\nRead our strategy below, then [give here](http://svgives.razoo.com/story/Machine-Intelligence-Research-Institute)!\n------------------------------------------------------------------------------------------------------------------\n\n\n[![SVGives logo lrg](https://intelligence.org/wp-content/uploads/2014/05/SVGives-logo-lrg.jpg)](http://svgives.razoo.com/story/Machine-Intelligence-Research-Institute)\n \n\n\nAs previously [announced](http://intelligence.org/2014/04/25/may-6th-miri-participating-in-massive-24-hour-online-fundraiser/ \"Help MIRI in a Massive 24-Hour Fundraiser on May 6th\"), MIRI is participating in a massive 24-hour fundraiser on May 6th, called [SV Gives](http://svgives.razoo.com/giving_events/svg14/home). This is a unique opportunity for all MIRI supporters to increase the impact of their donations. To be successful we’ll need to pre-commit to a strategy and see it through. 
**If you plan to give at least $10 to MIRI sometime this year, this event would be the best time to do it!**\n\n\nThe plan\n--------\n\n\nWe need all hands on deck to help us win the following prize as many times as possible:\n\n\n\n> $2,000 prize for the nonprofit that has the most individual donors in an hour, every hour for 24 hours.\n> \n> \n\n\nTo paraphrase, *every hour*, there is a $2,000 prize for the organization that has the most individual donors during that hour. **That’s a total of $48,000 in prizes, from sources that wouldn’t normally give to MIRI.**\n\n\nThe minimum donation is $10, and an individual donor can give as many times as they want. Therefore we ask our supporters to:\n\n\n1. **[give](http://svgives.razoo.com/story/Machine-Intelligence-Research-Institute) $10 an hour, during *every hour* of the fundraiser that they are awake (I’ll be up and donating for all 24 hours!)**;\n2. for those whose giving budgets won’t cover all those hours, see below for a list of which hours you should privilege; and\n3. publicize this effort as widely as possible.\n\n\n### International donors, we especially need your help!\n\n\nMIRI has a strong community of international supporters, and this gives us a distinct advantage! While North America sleeps, you’ll be awake, ready to target all of the overnight $2,000 hourly prizes.\n\n\nHours to target in order of importance\n--------------------------------------\n\n\nTo increase our chances of winning these prizes, we want to preferentially target the hours that will see the least donation traffic from donors of other participating organizations. Below are the top 12 hours we’d like to target in order of importance. Remember that all times are in Pacific Time. 
(Click on an hour to see what time it is in your timezone.)\n\n\n* [1 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T01&p1=224&ah=1)  (01:00–01:59 PT)\n* [2 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T02&p1=224&ah=1) (02:00–02:59 PT)\n* [3 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T03&p1=224&ah=1) (03:00–03:59 PT)\n* [4 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T04&p1=224&ah=1) (04:00–04:59 PT)\n* [5 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T05&p1=224&ah=1) (05:00–05:59 PT)\n* [6 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T06&p1=224&ah=1) (06:00–06:59 PT)\n* [11 pm hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T23&p1=224&ah=1) (23:00–23:59 PT)\n* [7 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T07&p1=224&ah=1) (07:00–07:59 PT)\n* [10 pm hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T22&p1=224&ah=1) (22:00–22:59 PT)\n* [8 am hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T08&p1=224&ah=1) (08:00–08:59 PT)\n* [5 pm hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T17&p1=224&ah=1) (17:00–17:59 PT)\n* [9 pm hour](http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140506T21&p1=224&ah=1) (21:00–21:59 PT)\n\n\nFor the 5 pm hour there is an additional prize I think we can win:\n\n\n\n> $1,000 golden ticket added to the first 50 organizations receiving gifts in the 5 pm hour.\n> \n> \n\n\n**So if you are giving in the 5 pm hour try and give right at the beginning of the hour.**\n\n\n### Bottom line, for every hour you are awake, [give](http://svgives.razoo.com/story/Machine-Intelligence-Research-Institute) $10 an hour.\n\n\n###  Give preferentially to the hours above, if unable to give during all waking hours.\n\n\nWe also have plans to target the $300,000 in 
matching funds up for grabs during the event. If you would like to contribute $500 or more to this effort, shoot me an email at [malo@intelligence.org](mailto:malo@intelligence.org).\n\n\n**For those who want to follow along and contribute to the last-minute planning, as well as receive updates and giving reminders during the event, sign up below.** \n\n\n\n\nThe post [Calling all MIRI supporters for unique giving opportunity!](https://intelligence.org/2014/05/04/calling-all-miri-supporters/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-04T23:01:07Z", "authors": ["Malo Bourgon"], "summaries": []}
-{"id": "74416f01fc9849d65c74b4a880584d5f", "title": "Kasper Stoy on self-reconfigurable robots", "url": "https://intelligence.org/2014/05/02/kasper-stoy/", "source": "miri", "source_type": "blog", "text": "![Kasper Støy portrait](http://intelligence.org/wp-content/uploads/2014/04/Støy_w1300.jpg) [Kasper Stoy](http://www.itu.dk/~ksty/) is a robotics and embodied artificial intelligence researcher holding an associate professor position at the Software and Systems Section of the [IT University of Copenhagen](http://www.itu.dk/en/). He is interested in the construction and design of complete robot systems, but being a computer scientist he has made most of his personal contributions in distributed control of multi-robot systems and modular robotics. He has published more than sixty papers in international conference proceedings or journals and is the author of the book “*[Self-Reconfigurable Robots: An Introduction](http://smile.amazon.com/Self-Reconfigurable-Robots-Introduction-Intelligent-Autonomous/dp/0262013711/ref=nosim?tag=793775876-20)*” published by MIT Press. He also co-founded Universal Robots, a company focused on user-friendly robot arms for industry. He is an active player in the international robot research community and reviews for all major journals and conferences in robotics.
He has stayed for extended periods at University of Southern California, Harvard University, University of Tarapacá (Chile), and Siem Reap (Cambodia). He holds an M.Sc. degree in computer science and physics from the University of Aarhus, Denmark (1999) and a Ph.D. degree in computer system engineering from the University of Southern Denmark (2003) where he also worked as assistant professor (2003-2006) and associate professor (2006-2013). He is married and has two kids.\n\n\n\n**Luke Muehlhauser**: In [Larsen et al. (2013)](http://www.itu.dk/%7Eksty/uploads/publications/larsen2013ir.pdf), you and your co-authors write:\n\n\n\n> Using a bottom-up, model-free approach when building robots [is] often seen as a less scientific way, compared to a top-down model-based approach, because the results are not easily generalizable to other systems… In this paper we will show how the use of well-known experimental methods from bio-mechanics are used to measure and locate weaknesses in our bottom-up, model-free implementation of a quadruped walker and come up with a better solution.\n> \n> \n\n\nFrom looking at the paper I could see how your experimental method allowed you to find a better solution for your walker robot, but I couldn’t understand how you addressed the challenge of generalizing the solution to other systems despite the bottom-up, model-free approach. Could you explain that part?\n\n\n\n\n---\n\n\n**Kasper Stoy**: The problem is that researchers who are doing cutting-edge robotics want to explore how materials and their interaction can aid the movement of the robot. For instance, researchers have been working on passive-dynamic walkers for a long time now that exploit the mechanical system to achieve walking without using sensors or actuators. The energy comes from walking downhill and the control is open-loop – the mechanical system itself is self-stabilising. For these systems our current engineering approach is ok, but not great.
We can just about model these kinds of walkers, so we can get a good guess of the initial parameters to get the system walking, but there is still an extended phase of tinkering before the system actually walks. It is in this tinkering phase that precise motion capture, as used in biomechanics, comes into play. Given measured paths of all parts of the robot, we can analyse the data and come up with reasonable hypotheses about how to improve the robot. Hence, the tinkering becomes much more systematic even though the underlying physical processes are too complex to be modelled. It may not be apparent, but just modelling the impact of a foot with the ground, taking into account all types of friction, the deformation of the foot, the spring effect, and so on, is largely intractable. Hence, the underlying assumption of our work is that all models of locomotion are fundamentally wrong. They may give a high-level picture of what is going on, but fundamentally they cannot be used to predict what will happen just two steps later. However, we can turn this around: once we have gotten our robot to walk, we record the data of how it walks. Although difficult, we can then build a model that matches this data, a model grounded in the measured ground truth and free of modelling bias on the part of the researcher. We now have a better-informed model that can be used to build the next generation of robots. Hence, the model is a generalisation of our specific implementation, which you can copy, or whose elements you can replace where you have better implementations. In locomotion research, as in many other fields, the models and the physical systems are drifting apart because researchers with different skills and interests work on them. In our work, we are clearly working on physical systems, and we simply provide a hint as to how our experimental results can be generalised in a way that is meaningful to modelling-oriented researchers. We hope.
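The loop Stoy describes (record how the physical robot actually walks, then fit a model to that ground-truth data rather than deriving one from intractable contact physics) can be sketched in a few lines. This is a toy illustration, not code from the interview: the synthetic "motion-capture" trajectory and the polynomial gait model are invented stand-ins for real recorded data and a real parametric model.

```python
import numpy as np

# Toy stand-in for motion-capture data: the vertical position of one
# foot marker sampled over a single stride of the physical robot.
t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
recorded_height = 0.1 * np.sin(np.pi * t) + rng.normal(0.0, 0.005, t.size)

# Fit a simple parametric model to the recorded trajectory instead of
# deriving it from first-principles contact/friction modelling.
coeffs = np.polyfit(t, recorded_height, deg=4)
model = np.poly1d(coeffs)

# The fitted model summarizes the measured behaviour with no modelling
# bias beyond the choice of model class; its fit quality is measurable.
residual = np.sqrt(np.mean((model(t) - recorded_height) ** 2))
print(f"RMS residual: {residual:.4f} m")
```

The fitted `model` can then be interrogated (peak foot clearance, stride symmetry, and so on) to generate the "reasonable hypotheses about how to improve the robot" that make the tinkering phase systematic.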
\n\n\n\n\n\n---\n\n\n**Luke**: You co-authored [a book](http://smile.amazon.com/Self-Reconfigurable-Robots-Introduction-Intelligent-Autonomous/dp/0262013711/ref=nosim?tag=793775876-20) on self-reconfigurable robots in 2010. What do you mean by that term, and what are some recent examples of practical self-reconfigurable robots? (Perhaps more recent than the book?)\n\n\n\n\n---\n\n\n**Kasper**: Self-reconfigurable robots are an idea inspired by multi-cellular organisms. The idea is that instead of building a robot as a single expensive, monolithic, and fragile piece of hardware, you would build a robot from many relatively simple robotic cells. We refer to these robotic cells as modules, and to the research field as modular robotics. Self-reconfigurable robots then take this a step further in that not only is the robot multi-cellular, the modules can also automatically rearrange the way they are connected. Hence, as the modules wander around each other, the robot as a whole changes shape. While this is rare in biology, there is a small aquatic animal called the hydra which has this ability. In fact, it has been shown that if you cut it in half, it transforms itself into two smaller replicas of the original hydra. The advantage of this type of robot is extreme robustness and versatility, features that are particularly important in, for instance, extraterrestrial exploration. Regarding applications, this was from the beginning curiosity-driven research. Is it possible to build a robot that is multi-cellular and able to change its own shape? The answer is yes. The concept has been implemented successfully in about five systems internationally. However, modules are relatively expensive, big (10 cm diameter), complex, and quite heavy. Hence, direct applications are still lacking, given that the cost-benefit of these robots is not good enough for a commercial market. However, more modest modular robots have seen some recent success on Kickstarter with the Modbots.
A modular construction kit for building your own robots.\n\n\n\n\n---\n\n\n**Luke**: What are each of the “about five systems” you refer to?\n\n\n\n\n---\n\n\n**Kasper**: The most developed system is the M-TRAN III self-reconfigurable robot developed at AIST in Japan. A runner-up is ATRON, which was developed mostly by my former colleagues at the University of Southern Denmark. However, neither of these two systems is being developed further. Another runner-up is the Roombots, developed at EPFL in Switzerland. Recent exciting developments are the M-Blocks coming out of MIT and SMORES coming out of UPenn. There is a large number of robots that come very close, but do not quite achieve convincing three-dimensional self-reconfiguration.\n\n\nAlso, check out this Wikipedia article for a [more comprehensive list](http://en.wikipedia.org/wiki/Self-reconfiguring_modular_robot).\n\n\n\n\n---\n\n\n**Luke**: What are some specific breakthroughs or kinds of progress that you expect to see in the next 10-15 years of research into self-reconfiguring modular robots?\n\n\n\n\n---\n\n\n**Kasper**: The field has very much been about whether or not it is possible to build robots that can change their own shape. Within the last decade, we have established that this is indeed the case. However, currently the robotic cells are too complex, heavy, and expensive for most applications. Hence, one of the goals is to reduce all of these obstacles, but given the current state of the art in mechatronics this is quite difficult. Hence, I see a mainstream area of research focussing on various simplifications of the mechatronics to make it attractive for specific applications in space, robotic prototyping tools, and the like. However, it is also clear that other implementations have to be considered. For this, some have turned to DNA-based implementations or to micro-fabrication technology. Another unexplored opportunity is in soft robotics, which is gaining traction in the robotics community as a whole.
The question is then where this would lead in 10-15 years. I think all areas have potential, so if, for instance, the soft modular robotics effort is successful, this could lead to robotic multi-cellular systems much closer to biological cells in terms of material properties. Where this would find applications is anyone’s guess, but maybe in furniture, clothing, or in healthcare in the form of adaptive casts.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Kasper!\n\n\nThe post [Kasper Stoy on self-reconfigurable robots](https://intelligence.org/2014/05/02/kasper-stoy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-02T12:00:34Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "d92b4185d94a6b0a2b50301c10b05966", "title": "MIRI’s May 2014 Newsletter", "url": "https://intelligence.org/2014/05/01/miris-may-2014-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\nOn May 6th, there is $250,000+ in matching funds and prizes available from sources that normally wouldn’t contribute to MIRI at all. [Details here](http://intelligence.org/2014/04/25/may-6th-miri-participating-in-massive-24-hour-online-fundraiser/).\n\n\n**Research Updates**\n* New paper: “[The errors, insights, and lessons of famous AI predictions](http://intelligence.org/2014/04/30/new-paper-the-errors-insights-and-lessons-of-famous-ai-predictions/).”\n* [Botworld update](http://lesswrong.com/lw/k5u/exploring_botworld/) with 4 new games, including Prisoner’s Dilemma and Stag Hunt.\n* [12 new expert interviews](http://intelligence.org/category/conversations/), including e.g.
[Suzana Herculano-Houzel](http://intelligence.org/2014/04/22/suzana-herculano-houzel/) on cognitive ability and brain size.\n\n\n\n**News Updates**\n* [Can We Really Upload Johnny Depp’s Brain?](http://www.slate.com/articles/technology/future_tense/2014/04/transcendence_science_can_we_really_upload_johnny_depp_s_brain.html) (in *Slate*)\n* [Why MIRI?](http://intelligence.org/2014/04/20/why-miri/)\n\n\n**Other Updates**\n* Call for Papers: [Special Issue. Confronting Future Catastrophic Threats To Humanity](http://www.journals.elsevier.com/futures/call-for-papers/call-for-papers-special-issue-confronting-future-catastrophi)\n* [AI Risk, New Executive Summary](http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/) by Stuart Armstrong\n* **[Transcending complacency on superintelligent machines](http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html)**, by Hawking, Tegmark, Russell, and Wilczek\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [MIRI’s May 2014 Newsletter](https://intelligence.org/2014/05/01/miris-may-2014-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-05-02T03:00:47Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "f297fe8aa666383dce7535c696d26b1e", "title": "New Paper: “The errors, insights, and lessons of famous AI predictions”", "url": "https://intelligence.org/2014/04/30/new-paper-the-errors-insights-and-lessons-of-famous-ai-predictions/", "source": "miri", "source_type": "blog", "text": "[![AI predictions paper](https://intelligence.org/wp-content/uploads/2014/04/AI-predictions-paper.png)](https://intelligence.org/files/ErrorsInsightsLessons.pdf)During his time as a MIRI researcher, Kaj Sotala contributed to a paper now published in the *Journal of Experimental & Theoretical Artificial
Intelligence*: “[The errors, insights and lessons of famous AI predictions – and what they mean for the future](http://www.fhi.ox.ac.uk/wp-content/uploads/FAIC.pdf).”\n\n\nAbstract:\n\n\n\n> Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions are already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus’s criticism of AI, Searle’s Chinese room paper, Kurzweil’s predictions in the *Age of Spiritual Machines*, and Omohundro’s ‘AI drives’ paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.\n> \n> \n\n\nThe post [New Paper: “The errors, insights, and lessons of famous AI predictions”](https://intelligence.org/2014/04/30/new-paper-the-errors-insights-and-lessons-of-famous-ai-predictions/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-30T23:58:53Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "eed9b079a909d4e38fe445b02b159eca", "title": "Suresh Jagannathan on higher-order program verification", "url": "https://intelligence.org/2014/04/30/suresh-jagannathan-on-higher-order-program-verification/", "source": "miri", "source_type": "blog", "text": "![Suresh Jagannathan portrait](http://intelligence.org/wp-content/uploads/2014/04/Jagannathan_w1300.jpg) [Dr. 
Suresh Jagannathan](http://go.usa.gov/kp4F) joined DARPA in September 2013. His research interests include programming languages, compilers, program verification, and concurrent and distributed systems.\n\n\nPrior to joining DARPA, Dr. Jagannathan was a professor of computer science at Purdue University. He has also served as visiting faculty at Cambridge University, where he spent a sabbatical year in 2010; and as a senior research scientist at the NEC Research Institute in Princeton, N.J.\n\n\nDr. Jagannathan has published more than 125 peer-reviewed conference and journal publications and has co-authored one textbook. He holds three patents. He serves on numerous program and steering committees, and is on the editorial boards of several journals.\n\n\nDr. Jagannathan holds Doctor of Philosophy and Master of Science degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. He earned a Bachelor of Science degree in Computer Science from the State University of New York, Stony Brook.\n\n\n\n**Luke Muehlhauser**: From your perspective, what are some of the most interesting or important developments in higher-order verification in the past decade?\n\n\n\n\n---\n\n\n**Suresh Jagannathan**: I would classify the developments of the past decade into four broad categories:\n\n\n1. The development of higher-order recursion schemes (or higher-order model checking). This approach is based on defining a language semantics in terms of a (recursive) tree grammar. We can now ask whether the tree generated by the grammar satisfies a particular safety property. Because recursion schemes are very general and expressive structures, they can be viewed as natural extensions of model checkers based on pushdown automata or finite-state systems. In the past decade, there have been substantial advances that have made higher-order model checking a practical endeavor, accompanied by realistic implementations.\n2. 
Liquid Types combine facets of classic type inference techniques found in functional languages with predicate abstraction techniques used in model checkers to automatically infer dependent type refinements with sufficient precision to prove useful safety properties about higher-order functional programs. Implementations of this approach found in OCaml, SML, and Haskell have demonstrated that it is indeed possible to verify non-trivial program properties without significant type annotation burden.\n3. There have been substantial advances in the development of new languages built around rich dependent type systems that enable the expression and static verification of rich safety and security specifications and properties of higher-order programs. These include the languages that support mechanized proof assistants like Coq, Agda, or Epigram, languages like F\\* geared towards secure distributed programming, libraries like Ynot that encapsulate imperative (stateful) features and Hoare-logic pre/post-conditions within a higher-order dependent type framework, and features such as GADTs and type classes found in compilers like GHC.\n4. Polyvariant control-flow analyses like CFA2, and higher-order contracts, are two techniques that facilitate verification and analysis of dynamic higher-order languages. CFA2 adapts pushdown models used in program analyses for first-order languages to a higher-order setting, enabling precise matching of call/return sequences. Higher-order contracts allow runtime verification and blame assignment of higher-order programs and are fully incorporated into Racket, a multi-paradigm dialect of Lisp/Scheme.\n\n\n\n\n\n---\n\n\n**Luke**: What are some of the “realistic implementations” of higher-order model checking that you refer to?\n\n\n\n\n---\n\n\n**Suresh**: The most scalable implementation to date is the tool Preface ([Ramsay et al., POPL’14](http://hopa.cs.rhul.ac.uk/hopa-2013/files/submissions/hopa2013_submission_4.pdf)) that is capable of handling recursion schemes of several thousand rules (i.e., function definitions). Earlier work such as TRecS ([Kobayashi, POPL’09](http://commonsenseatheism.com/wp-content/uploads/2014/04/Kobayashi-Types-and-higher-order-recursion-schemes-for-verification-of-higher-order-programs.pdf)) has good performance for recursion schemes of up to hundreds of rules.\n\n\n\n\n---\n\n\n**Luke**: What are your recommended readings on recent interesting work on liquid types and on polyvariant control-flow analyses?\n\n\n\n\n---\n\n\n**Suresh**: \n\n*Liquid Types*\n\n\n1. Vazou, Rondon, Jhala, “[Abstract Refinement Types](http://goto.ucsd.edu/~rjhala/papers/abstract_refinement_types.pdf)”, European Symposium on Programming, 2013, pp. 209-228.\n2. Rondon, Kawaguchi, Jhala, “[Low-Level Liquid Types](http://goto.ucsd.edu/~rjhala/papers/low_level_liquid_types.pdf)”, ACM Symposium on Principles of Programming Languages (POPL), 2010, pp. 131-144.\n3. Rondon, Kawaguchi, Jhala, “[Liquid Types](http://goto.ucsd.edu/~rjhala/papers/liquid_types.pdf)”, ACM Conference on Programming Language Design and Implementation, 2008, pp. 159-169.\n4. Kawaguchi, Rondon, Bakst, Jhala, “[Deterministic Parallelism via Liquid Effects](http://goto.ucsd.edu/~rjhala/papers/deterministic_parallelism_via_liquid_effects.pdf)”, ACM Conference on Programming Language Design and Implementation, 2012, pp. 45-54.\n5. Zhu, Jagannathan, “[Compositional and Lightweight Dependent Type Inference for ML](https://www.cs.purdue.edu/homes/zhu103/pubs/vmcai13.pdf)”, Conference on Verification, Model-Checking and Abstract Interpretation, 2013, pp. 295-314.\n\n\n*Polyvariant Control-Flow Analysis*\n\n\n1. Midtgaard, “[Control-Flow Analysis of Functional Programs](http://cs.au.dk/~jmi/Midtgaard-CSur-final.pdf)”, ACM Computing Surveys, 2012, 44(3).\n2.
Earl, Sergey, Might, Van Horn, “[Introspective Pushdown Analysis of Higher-Order Programs](http://matt.might.net/papers/might2010mcfa.pdf)”, International Conference on Functional Programming, 2012, pp. 177-188.\n3. Might, Smaragdakis, Van Horn, “[Resolving and Exploiting the k-CFA Paradox: Illuminating Functional vs. Object-Oriented Program Analysis](http://matt.might.net/papers/might2010mcfa.pdf)”, ACM Conference on Programming Language Design and Implementation, 2010, pp. 305-315.\n4. Vardoulakis, Shivers, “[CFA2: A Context-Free Approach to Control-Flow Analysis](http://arxiv.org/pdf/1102.3676.pdf)”, Logical Methods in Computer Science, 2011, 7(2).\n\n\n\n\n---\n\n\n**Luke**: What are some key developments and solutions you hope to see in the next 5-10 years of research into higher-order verification techniques? Which breakthroughs could be most important, if they could be achieved, given our current understanding?\n\n\n\n\n---\n\n\n**Suresh**: Currently, much of the work on higher-order verification centers around foundations (the definition of expressive models for describing salient properties that arise in HO programs, as is the case with current work in higher-order recursion schemes, dependent type systems, or control-flow analysis), or pragmatics (reducing type annotation burden as in Liquid Types). Going forward, I would expect to see a convergence of these techniques that enables applying new expressive and foundational models at scale.\n\n\nOne important breakthrough would be the integration of these systems into realistic compilers that would go beyond simple safety and property checking, to inform new kinds of verifiable optimizations and program transformations. One can envision using the specifications that HO verification techniques support as a way of realizing tangible code improvements and performance gains in verified compilers and runtime systems.
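The runtime-verification idea Jagannathan mentions for Racket, higher-order contracts with blame assignment, can be sketched in miniature. This is a toy Python illustration, not Racket's actual contract system: the `contract` decorator and every name in it are invented for this sketch. The point it shows is the blame discipline: a failed precondition blames the caller, a failed postcondition blames the function itself, and a contract on a function-valued argument is enforced by wrapping that argument so blame is still assigned correctly when it is eventually called.

```python
def contract(pre, post):
    """Attach a pre- and postcondition to a one-argument function (toy sketch)."""
    def wrap(f):
        def checked(x):
            if not pre(x):
                # The caller supplied a bad argument: blame the caller.
                raise AssertionError(f"blame the caller of {f.__name__}: precondition violated")
            result = f(x)
            if not post(result):
                # The function produced a bad result: blame the function.
                raise AssertionError(f"blame {f.__name__}: postcondition violated")
            return result
        return checked
    return wrap

# The "higher-order" part: twice receives a function f as an argument and
# wraps it in a contract, so violations inside f's eventual calls are still
# blamed on the right party.
def twice(f, x):
    g = contract(pre=lambda v: isinstance(v, int),
                 post=lambda v: isinstance(v, int))(f)
    return g(g(x))

inc = contract(pre=lambda v: v >= 0, post=lambda v: v > 0)(lambda v: v + 1)
print(twice(inc, 1))  # prints 3; both calls satisfy their contracts
```

Calling `inc(-1)` raises with blame on the caller, while a buggy body that returned a negative number would raise with blame on the function; real contract systems such as Racket's extend this discipline to full dependent and higher-order contracts.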
Another important advance would be a deeper understanding of the potential synergistic interplay between higher-order verification techniques and other kinds of specification systems (e.g., session types). The issues here deal with how we would express rich protocols that capture not only safety requirements, but liveness and other kinds of sophisticated modalities.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Suresh!\n\n\nThe post [Suresh Jagannathan on higher-order program verification](https://intelligence.org/2014/04/30/suresh-jagannathan-on-higher-order-program-verification/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-30T11:00:51Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "631bbf76369c99b05a2c39d4bee6945f", "title": "Ruediger Schack on quantum Bayesianism", "url": "https://intelligence.org/2014/04/29/ruediger-schack/", "source": "miri", "source_type": "blog", "text": "![Ruediger Schack portrait](http://intelligence.org/wp-content/uploads/2014/04/Schack_w100.jpg) [Ruediger Schack](https://www.ma.rhul.ac.uk/rschack) is a Professor at the [Department of Mathematics](https://www.royalholloway.ac.uk/mathematics/home.aspx) at [Royal Holloway, University of London](http://www.royalholloway.ac.uk). He obtained his PhD in Theoretical Physics at the University of Munich in 1991 and held postdoctoral positions at the Max Planck Institute for Quantum Optics, the University of Southern California, the University of New Mexico, and Queen Mary and Westfield College before joining Royal Holloway in 1995. His research interests are quantum information theory, quantum cryptography and quantum Bayesianism.\n\n\n\n**Luke Muehlhauser**: In [Fuchs et al. 
(2013)](http://arxiv.org/abs/1311.5253), you and your co-authors provide an introduction to [quantum Bayesianism](http://en.wikipedia.org/wiki/Quantum_Bayesianism) aka “QBism,” which you more or less co-invented with [Carlton Caves](http://en.wikipedia.org/wiki/Carlton_M._Caves) and [Christopher Fuchs](http://perimeterinstitute.ca/personal/cfuchs/). But before I ask about QBism, let me ask one of the questions asked of the interviewees in *[Elegance and Enigma: The Quantum Interviews](http://www.amazon.com/Elegance-Enigma-Interviews-Frontiers-Collection/dp/3642208797/)* (including [Fuchs](http://arxiv.org/abs/1207.2141)): “What first stimulated your interest in the foundations of quantum mechanics?”\n\n\n\n\n---\n\n\n**Ruediger Schack**: I can trace the beginning of my interest in quantum foundations to reading one paper: “[Where do we stand on maximum entropy?](http://bayes.wustl.edu/etj/articles/stand.on.entropy.pdf)” by Ed Jaynes, and one book: *Du Microscopique au Macroscopique* by Roger Balian. Jaynes’s paper introduced me to Bayesian probability theory, and Balian’s book taught me that one can think of quantum states as representing Bayesian probabilities.\n\n\n\n\n\n---\n\n\n**Luke**: What is your own summary of the message of QBism? And what do you think its practical import for the world could be?\n\n\n\n\n---\n\n\n**Ruediger**: In two words, the message of QBism is that people matter. According to QBism, quantum mechanics is a theory that any agent can use to organize his experience. More precisely, quantum mechanics permits any agent to quantify, on the basis of his past experiences, his probabilistic expectations for his future experiences. QBism takes measurement outcomes as well as quantum states to be personal to the agent using the theory. Quantum mechanics therefore does not provide an objective, agent-independent description of the world – it rules out a “view from nowhere”.
By thus clarifying the role of quantum mechanics and of science in general, QBism avoids all the interpretational difficulties usually associated with quantum foundations. In QBism, there are no objective elements of reality that determine either measurement outcomes or probabilities of measurement outcomes. Rather, every quantum measurement is an action on the world by an agent that results in the creation of something entirely new. QBism holds this to be true not only for laboratory measurements on microscopic systems, but for any action an agent takes on the world to elicit a new experience. It is in this sense that agents – people – have a fundamental creative role in the world.\n\n\nAny interpretation of quantum mechanics by definition makes the same predictions as quantum mechanics. Nevertheless, I expect QBism to have practical import for the world. By shifting the focus away from interpretations that regard quantum states as real, QBism opens up new possibilities: in the search for a compelling physical principle that would explain the quantum formalism, and in the search for new physics.\n\n\n\n\n---\n\n\n**Luke**: I heard that Fuchs, at least, thinks that the case for QBism would be more compelling if it turns out that SIC-POVMs (symmetric informationally complete positive operator-valued measures) existed in every finite Hilbert space dimension, which is currently an unsolved question. Is that your understanding as well? If so, what’s the reasoning?\n\n\n\n\n---\n\n\n**Ruediger**: QBism as an interpretation of quantum mechanics is independent of the existence of SICs and can be formulated without referring to SICs. But QBism is also a program, ultimately with the aim of discovering new physics. A more immediate goal is to find a simple and compelling physical principle underpinning the quantum formalism. Now one of QBism’s central tenets is that a measurement does not reveal a preexisting outcome but results in the creation of something new.
In the quantum formalism, this idea finds a simple expression in the fact that the classical probability sum rule does not apply to the – necessarily hypothetical – outcomes of an unperformed experiment. For instance, in the double-slit experiment the probability distribution for the measured particle position on the screen cannot be obtained by summing the probabilities conditional on the particle going through one or the other slit, weighted by the slit probabilities.\n\n\nSo far this is a purely negative statement. If a SIC exists in every finite Hilbert-space dimension, it turns into a powerful positive statement. If the hypothetical measurement is a SIC measurement, the Born rule takes the form of a very simple modification of the probability sum rule. What is more, from the modified probability sum rule, a large part of the structure of quantum mechanics can be derived. In this picture, instead of the purely negative statement that the probability sum rule cannot be used, we would have a simple physically motivated principle that implies a substantial part of the quantum formalism. In that sense, the existence of SICs in all dimensions would strengthen the case for QBism.\n\n\n\n\n---\n\n\n**Luke**: Roughly how many people are actively advocating or contributing to QBism? Do you think it’s particularly difficult to draw funding and cognitive talent toward this work because of its theoretical nature, or for other reasons?\n\n\n\n\n---\n\n\n**Ruediger**: As a matter of fact, the mathematical aspects of QBism (such as the structure of the SICs or the quantum de Finetti theorems) have attracted significant funding over the years. At present, however, only a small number of people are actively contributing to QBism. When QBism holds that science is as much about the scientist as it is about the world external to the scientist, it challenges one of the most deeply held prejudices that most physicists subscribe to.
This prejudice is exemplified by the following quote from Landau and Lifshitz: “By measurement, in quantum mechanics, we understand any process of interaction between classical and quantum objects occurring apart from and independently of any observer.” Another commonly held prejudice is that a probability-1 assignment implies the existence of an objective mechanism that brings about the event. Physicists find it very hard to accept the QBist principle that probability-1 judgments are still judgments, like any other probability assignments. Let me finish with a prediction: In twenty-five years when a new generation of scientists have been exposed to QBist ideas, QBism will be taken for granted and quantum foundations will have disappeared as a problem.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Ruediger!\n\n\nThe post [Ruediger Schack on quantum Bayesianism](https://intelligence.org/2014/04/29/ruediger-schack/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-29T16:12:35Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "dc0ebc3f348bc02dbc6e5c157d23ee15", "title": "David J. Atkinson on autonomous systems", "url": "https://intelligence.org/2014/04/28/david-j-atkinson/", "source": "miri", "source_type": "blog", "text": "![David J. Atkinson portrait](http://intelligence.org/wp-content/uploads/2014/04/Atkinson_w384.jpg) David J. Atkinson, Ph.D, is a Senior Research Scientist at the [Florida Institute for Human and Machine Cognition](http://www.ihmc.us/) (IHMC). His current area of research envisions future applications of intelligent, autonomous agents, perhaps embodied as robots, who work alongside humans as partners in teamwork or provide services. Dr. Atkinson’s major focus is on fostering appropriate reliance and interdependency between humans and agents, and the role of social interaction in building a foundation for mutual trust between humans and intelligent, autonomous agents. 
He is also interested in cognitive robotics, meta-reasoning, self-awareness, and affective computing. Previously, he held several positions at California Institute of Technology, JPL (a NASA Center), where his work spanned basic research in artificial intelligence, autonomous systems and robotics with applications to robotic spacecraft, control center automation, and science data analysis. Recently, Dr. Atkinson delivered an invited plenary lecture on the topic of “Trust Between Humans and Intelligent Autonomous Agents” at the 2013 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT 2013). Dr. Atkinson holds a Bachelor’s degree in Psychology from University of Michigan, dual Master of Science and Master of Philosophy degrees in Computer Science (Artificial Intelligence) from Yale University, and the Doctor of Technology degree (d.Tekn) in Computer Systems Engineering from Chalmers University of Technology in Sweden.\n\n\n\n**Luke Muehlhauser**: One of your projects at [IHMC](http://www.ihmc.us/) is “[The Role of Benevolence in Trust of Autonomous Systems](http://www.ihmc.us/groups/datkinson/wiki/8b432/The_Role_of_Benevolence_in_Trust_of_Autonomous_Systems.html)“:\n\n\n\n> The exponential combinatorial complexity of the near-infinite number of states possible in autonomous systems voids the applicability of traditional verification and validation techniques for complex systems. New and robust methods for assessing the trustworthiness of autonomous systems are urgently required if we are to have justifiable confidence in such applications both pre-deployment and during operations…  The major goal of the proposed research is to operationalize the concept of benevolence as it applies to the trustworthiness of an autonomous system…\n> \n> \n\n\nSome common approaches for ensuring desirable behavior from AI systems include testing, formal methods, hybrid control, and simplex architectures. 
Where does your investigation of “benevolence” in autonomous systems fit into this landscape of models and methods?\n\n\n\n\n\n---\n\n\n**David J. Atkinson**: First let me point out that testing, formal methods, and the other techniques you mention have little to do with ensuring desirable behavior and more to do with avoiding errors in behavior due to design or implementation flaws. These are not equivalent. Existing techniques improve reliability, but that is only one component of trust and it is only concerned with behavior “as designed”. Furthermore, as your quote from my material points out, when it comes to the near-infinite state spaces of autonomy, those “common approaches” are inherently limited and cannot make the same strong claims regarding system behavior that they could with machines where the envelope of behavior could be completely known.\n\n\nTherefore, I chose to look for answers regarding trust, trustability and trustworthiness in the operations phase of the autonomous system lifecycle because it is here, not in testing and evaluation, where the full complexity of autonomous behavior will manifest itself in response to the uncertainty, dynamics, and real-world complexity that is its major strength. My approach is to focus on the applicability of human interpersonal trust to operation of autonomous systems. The principal reasons for this are 1) Autonomy is limited — humans make the decisions to rely upon an intelligent agent, subject to a variety of individual and situational factors and constraints; and 2) Eons of evolution have created a reasonably good mechanism in humans for trust. It is fundamental to every human social transaction and it works. Beyond reliability, studies in multiple disciplines have shown that people want evidence of capability, predictability, openness (aka transparency), and safety before granting a measure of trustworthiness to a machine. 
Key questions revolve around the nature of that evidence, how it is provided, and what inferences can be reasonably made. People use both cognitive and affective mental processes for evaluating trustworthiness. My central claim is that if we can reverse engineer these mechanisms and make intelligent autonomous agents (IATs) that are compliant with the human trust process (two big ifs), then we will have created a new way for humans to have trusting relationships with the machines they rely upon. It will be a transition from seeing IATs as tools to treating them as partners.\n\n\nBenevolence is interesting for a couple of reasons. First, it is a complex attribution built upon a structure of beliefs about another person that include good will, competency, predictability, lack of a hidden agenda, agency and other beliefs, most of which are likely to play a role in many kinds of interactions between human and IAT. Second, just looking at that list of beliefs makes it a hard problem, although my colleagues think people will have no problem attributing agency to a machine. Third, there are important applications of IATs, perhaps embodied as robots, where the unique psychology of benevolence is critical to success. For example, disaster rescue. It has long been known that disaster victims have a unique psychology borne of stress, fear and the psycho-physiological effects these evoke. Human first responders undergo special training on victim psychology. One of the reasons is that a victim, justifiably afraid for their life, may be very reluctant to trust a rescuer, and without trust there is no cooperation and rescue can become very difficult. Benevolence seems to be a part of the trust that is required. 
Today, we have no idea whatsoever whether a real disaster victim will trust and cooperate with a robot rescuer.\n\n\nCircling back to your question about “fitting in to the landscape of models and methods”, the ultimate goal of my research is to formulate design requirements, interaction methods, operations concepts, guidelines and more that, if followed, will result in an IAT that can itself engender well-justified human trust.\n\n\n\n\n---\n\n\n**Luke**: As you say, formal methods and other techniques are of limited help in cases where we don’t know how to formulate comprehensive and desirable design requirements, as is often the case for autonomous systems operating in unknown, dynamic environments. What kinds of design requirements might your approach to trustworthy systems suggest? Would these be formal design requirements, or informal ones?\n\n\n\n\n---\n\n\n**David**: There will certainly be gaps in what we can do today regarding the specific requirements related to unknown, dynamic environments. One of our objectives is to narrow those gaps so precise questions can be studied. It is one thing to wave hands and moan about uncertainty, and another entirely to do something about it.\n\n\nGenerally speaking, we are working towards formal specification of the traditional types of requirements: Functional, Performance, Design Constraint, and Interface (both Internal and External). By “formal”, I mean “complete according to best practices”. I do not mean “expressed in formal logic or according to a model-based language” — that is a step beyond. Requirements may be linked in numerous ways to each other forming a directed graph (hopefully with no cycles!). Relationships include Source, Required-By, Depends-upon and so forth.\n\n\nFor example, an attribution of benevolence by a human “Trustor” requires a belief (among numerous others) that the candidate “Trustee” (the autonomous system) has “no hidden ill will”. 
This is a very anthropomorphic concept that some might scoff at, but studies have demonstrated its importance to attribution of benevolence and so it is a factor we must reckon with in designing trustworthy and trustable autonomous systems. But what does it even mean?\n\n\nJust to give you an idea of how we are breaking this down, here are some of the derived requirements. I should emphasize that it is very premature to make any claims about the quality or completeness of what we have done with requirements engineering thus far — mostly that work is on the schedule for next year so I’ll only give you the types and titles: We will have as a matter of course a generic Level 1 Functional requirement to “Provide Information to Human User”. Derived from this is an Interface requirement something like “Volunteer Information: The Autonomous System shall initiate communication and provide information that is important to humans” (actually, that is two requirements). This in turn is elaborated by a number of Design Constraints such as “Disposition: The Autonomous System shall disclose any disposition that could result in harm to the interests of a human”. These Design Constraints are in turn linked to numerous other detailed requirements such as this Performance Requirement “Behavior – Protective: The Autonomous System shall recognize when its behavior could result in harm to the interests of a human”. A designer of an autonomous system will recognize a number of very hard problems in this simple example that need to be solved to effectively address the hypothetical need for a benevolent autonomous system. Our focus in this project is on “what” needs to be done, not “how” to do it.\n\n\nWithout a doubt, this requirements process will generate plenty of questions that will require further study. It is also likely that some requirements may conflict and any particular application of autonomy must do tradeoff studies to prioritize. Nevertheless, our goal is to spell out as much as we can. 
We will differentiate mandatory requirements from goals or objectives. Where it is possible, we will allocate requirements to particular elements of an autonomous system’s architecture, such as “goal selection mechanism” (where issues relating to prioritization may arise and affect predictability and therefore trust). For every requirement, we will provide a rationale with links to studies and empirical data or other discussion that can be used in analysis. That part is very high priority in my mind. Too many times I have encountered requirements where it is impossible to understand how they were derived. I’d also like to take a risk-driven view of the requirements so that individual requirements, or groups of requirements, can be associated with particular risks. That is another area that will have to be quantified by further application-specific analysis. Finally, a good requirement has to be verifiable. There is considerable work to be done on this topic with respect to autonomous systems.\n\n\nRequirements engineering is a lot of work of course, and the history of large scale system development is replete with horror stories of poor requirements. That’s why I want to express our trust-related requirements formally, following requirements engineering standards to the extent possible. From my previous experience at NASA, I know that the less work a project has to do, the more likely it is to adopt existing requirements. So we are developing our trust-related requirements consciously with the goal of making them easy to understand and easy to reuse. 
Finally, given the scope of what is required, it is likely that we will only be able to go just so far under the auspices of our current project to provide a vector for future work (hint hint to potential sponsors out there!)\n\n\n\n\n---\n\n\n**Luke**: From your description, this research project seems somewhat interdisciplinary, and the methodology seems less clear-cut than is the case with many other lines of research aimed at similar goals (e.g. some new project to model-check a particular software design for use in robotics). It’s almost “pre-paradigmatic,” in the Kuhnian sense. Do you agree? If not, are there other research groups who are using this methodology to explore related problems?\n\n\n\n\n---\n\n\n**David**: Yes, the project is very interdisciplinary, with the primary disciplines being social and cognitive psychology, social robotics, and artificial intelligence, as well as the rigor contributed by solid systems engineering. The relevant content within each of those disciplines can be quite broad. It is not a small undertaking. My hope is that I can plant some memes in each community to help bring them together on this topic of trust and these memes will foster further work. As far as methodology, we are pursuing both theoretical development and experimentation, and trying to be as rigorous as exploratory work of this nature will permit. We have to understand the previous psychological work on human interpersonal trust, and human factors studies on human-automation trust, to find those results that may have important implications for human trust of an intelligent, autonomous agent. The importance of agency is a good example.\n\n\nWe know from psychological studies that an attribution of “free-will”, or more narrowly, the ability to choose, is an important component of deciding whether someone else is benevolent or not. That is, if a person feels the other is compelled to help, then they are less likely to believe that other person is benevolent. 
Apart from philosophers, most people don’t think very deeply about this. They make a presumption of free-will and then look to see if there are reasons it is limited, for example, is the person just following orders, or required by their profession or social norms to act in a particular way? With machines, we start from the other side: people assume machines have no free-will because they believe machines are (just) programmed. However, there are [some studies](http://scholar.google.com/scholar?q=related:5FS9KTLPVGkJ:scholar.google.com/&hl=en&as_sdt=1,5) that suggest that when machine behavior is complex (enough), and somewhat unpredictable because there are numerous possible courses of action, then people begin to attribute mental states including the ability to choose. My hunch is this is an evoked response of our innate demand to interpret the actions of others in a framework of intentionality. At some point, enough features are present that those evolution-designed heuristics kick in to aid in understanding. WHY that may be, I will leave to the evolutionary socio-biologists.\n\n\nBack to methodology now: This is a phenomenon for which we can design a study involving humans and machines, with systematic variation of various factors to see what qualities a machine actually requires in order to evoke a human attribution of the ability to choose to help. This year we will be conducting just such a study. I hope to begin running participants this summer. We have been rigorous with experimental design, choice of what data to collect and the statistical methods we will use to analyze the results. While we will fudge a little bit on the robot implementation, using simulation in places and “wizard-of-oz” techniques for language interaction for example, the products of the study ought to be recognized as solid science by researchers in multiple disciplines if we do it right. 
In general, I think this is a very hard goal to achieve because each discipline community has certain preferences and biases about what they like to see in methodology before they are convinced. There are a couple of other groups working on social robotics who use this approach, and a very small number of psychologists who are working on human-centered design of automation. I don’t want to start listing names because I’m sure there are others of whom I’m not yet aware. I do know that I am certainly not the first to confront this challenge. Multidisciplinary research of this type is always, in some sense, pre-paradigmatic because it is a struggle for understanding and legitimacy at the boundaries of separate disciplines, not the core. Artificial intelligence has always been multidisciplinary, a strength as well as a weakness as far as more general acceptance among related disciplines. I don’t worry too much about what other people think. I just do what I believe has to be done.\n\n\n\n\n---\n\n\n**Luke**: What are some concrete outcomes you hope will be achieved during the next 5 years of this kind of research?\n\n\n\n\n---\n\n\n**David**: As a wise man once said, “It’s hard to make predictions, especially about the future.”\n\n\nMy hopes. This one is concrete to me, but perhaps not what you had in mind: I hope that the value of our approach will be convincingly demonstrated and other research groups and early-career researchers will join in. There is much to be explored and plenty of opportunity to make discoveries that can have a real impact.\n\n\nOn balance, it seems that in many potential application domains the biggest challenge is not mistrust or absence of trust, but excessive trust. It is abundantly clear that people have no trouble trusting machines in most contexts, and, as this is somewhat tied to generational attitudes, this tendency will probably increase. 
Sometimes this leads to over-reliance and complacency, and then surprising (and potentially dangerous) conditions if things go sour. A funny but real example is the driver who set cruise control on his RV on a long stretch of straight highway, got out of the driver’s seat, and went in back to make a sandwich. You can guess what happened. Effective teamwork and cooperation requires that each team member understands the strengths and limitations of the others so under- and over-reliance do not occur. Intelligent, autonomous teammates need that same ability to participate effectively in this essential team-building process — a process that takes time and experience to build mutual familiarity. We will contribute to that solution.\n\n\nI believe our work will lead directly to an understanding of When, Why, What, and (some of) How a machine needs to interact with human teammates to inoculate against (and/or correct for) under- and over-reliance. This technology is key for solving what many people claim is a lack of transparency in intelligent systems. It will help “users” to better understand the competency of a machine (the most important quality) in a given context consisting of dynamic situational factors, tasks and goals. This will in turn increase predictability (another important quality) and thereby help mitigate concerns about risks and safety. Ultimately, a deep solution that is broadly applicable will require a higher degree of machine meta-reasoning and self-awareness than we can engineer today, but this is an area of active research where useful results ought to be appearing more and more frequently. (The field of cognitive (developmental) robotics is very exciting.) However, I do expect concrete and useful results for early applications in some semi-structured task domains. 
A few examples of domains containing “low hanging fruit” for applications are transportation (e.g., autonomy-assisted driving, long haul trucking), healthcare (patient monitoring, therapy assistance, assisted living), and some defense-related applications. My group is actively working towards all of these possibilities. I don’t want to leave you with the impression that creating effective applications will be easy because many hard basic research challenges remain, and we will undoubtedly discover others when we start to transition the technology into real-world applications. Nevertheless, I’m optimistic!\n\n\n\n\n---\n\n\n**Luke**: Thanks, David!\n\n\nThe post [David J. Atkinson on autonomous systems](https://intelligence.org/2014/04/28/david-j-atkinson/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-28T11:00:38Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "8bb44276dfa88ccab23679a0a27f2a23", "title": "Help MIRI in a Massive 24-Hour Fundraiser on May 6th", "url": "https://intelligence.org/2014/04/25/may-6th-miri-participating-in-massive-24-hour-online-fundraiser/", "source": "miri", "source_type": "blog", "text": "**Update:** We’re now liveblogging the fundraiser [here](http://intelligence.org/2014/05/06/liveblogging-the-svgives-fundraiser/).\n\n\n[![](https://intelligence.org/wp-content/uploads/2014/04/svgives.jpg)](http://svgives.razoo.com/story/Machine-Intelligence-Research-Institute)\nOn May 6th, MIRI is participating in Silicon Valley Gives. We were selected to participate along with other local Bay Area charities by the Silicon Valley Community Foundation. 
On this day, we recommend donors **[make gifts to MIRI through the SV Gives portal](http://svgives.razoo.com/story/Machine-Intelligence-Research-Institute)** so we can qualify for some of the [matching and bonus funds](http://svgives.razoo.com/giving_events/svg14/prizes) provided by dozens of Bay Area philanthropists.\n\n\nWhy is this exciting for supporters of MIRI? Many reasons, but here are a few:\n\n\n* **Over $250,000 of matching funds and prizes up for grabs, from sources that normally wouldn’t contribute to MIRI**:\n\t+ Kick-off Match! Two-to-one dollar match up to $50,000 during the midnight hour.\n\t+ $2,000 prize for the nonprofit that has the most individual donors in an hour, **every hour** for 24 hours.\n\t+ Golden Ticket of $150 added to a random donation each hour, **every hour**, for 24 hours.\n\t+ Dollar for Dollar match up to $35,000 during the 7 AM hour.\n\t+ Dollar for Dollar match up to $15,000 during the 8 AM hour.\n\t+ Dollar for Dollar match up to $50,000 during the 12 noon hour.\n\t+ Dollar for Dollar match up to $50,000 during the 6 PM hour.\n\t+ Dollar for Dollar match up to $50,000 during the 7 PM hour.\n* Local NBC stations, local radio stations, many local businesses, and lots of Bay Area foundations will be promoting the Silicon Valley Day of Giving on May 6th. So if MIRI is making a splash with our fundraising that day, it’s possible we’ll draw attention from media and by extension new donors.\n\n\n### We need your help to make the most of this opportunity!\n\n\nMaking the most of this opportunity will require some cleverness and a lot of coordination. We are going to need all the help we can get. Here are some ways you can help:\n\n\n1. We are currently thinking through how to best take advantage of this unique opportunity. 
If you have any ideas and/or want to join the planning team (currently Malo, Louie, Luke, and long-time MIRI supporter Alexei Andreev), shoot Malo an email at [malo@intelligence.org](mailto:malo@intelligence.org).\n2. If you are interested in supporting MIRI with a  donation of **$500 or more** during the fundraiser, we’d love to coordinate with you to make it count as much as possible. Get in touch with Malo at [malo@intelligence.org](mailto:malo@intelligence.org).\n3. All MIRI supporters (not just donors) have the potential to make a big impact if we can all work together in a coordinated manner. Sign up below to receive updates on our strategy leading up to the event, and updates throughout the fundraiser on the best times to give and promote the event. **Follow along with the excitement and be on the inside of what’s going on all day!**\n\n\n\nThe post [Help MIRI in a Massive 24-Hour Fundraiser on May 6th](https://intelligence.org/2014/04/25/may-6th-miri-participating-in-massive-24-hour-online-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-26T02:20:34Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "56d595933fc741c2e8fdc65bf56a8c61", "title": "Roland Siegwart on autonomous mobile robots", "url": "https://intelligence.org/2014/04/25/roland-siegwart/", "source": "miri", "source_type": "blog", "text": "![Roland Siegwart portrait](http://intelligence.org/wp-content/uploads/2014/04/Siegwart_w360.jpg) Roland Siegwart (born in 1959) is a Professor for Autonomous Systems and Vice President Research and Corporate Relations at [ETH Zurich](https://www.ethz.ch/en.html). 
After studying mechanics and mechatronics at ETH, he was engaged in starting up a spin-off company, spent ten years as professor for autonomous microsystems at EPFL Lausanne and held visiting positions at Stanford University and NASA Ames.\n\n\nHis research interests are in the creation and control of intelligent robots operating in complex and highly dynamical environments. Prominent examples are personal and service robots, inspection devices, autonomous micro-aircraft and walking robots. He is and was the coordinator of European projects, co-founder of half a dozen spin-off companies and board member of various high-tech companies.\n\n\nRoland Siegwart is a member of the [Swiss Academy of Engineering Sciences](http://www.satw.ch/index_EN), IEEE Fellow and officer of the [International Federation of Robotics Research](http://ifrr.org/) (IFRR). He is on the editorial board of multiple journals in robotics and was a general chair of several conferences in robotics including [IROS 2002](http://www.iros02.ethz.ch/), [AIM 2007](http://aim2007.ethz.ch/), [FSR 2007](http://www.inrialpes.fr/FSR07/), [ISRR 2009](http://www.isrr2009.ethz.ch/).\n\n\n\n**Luke Muehlhauser**: In 2004 you co-authored [*Introduction to Autonomous Mobile Robots*](http://smile.amazon.com/Introduction-Autonomous-Mobile-Intelligent-Robotics/dp/0262015358/ref=nosim?tag=793775876-20), which offers tutorials on many of the basic tasks of autonomous mobile robots: locomotion, kinematics, perception, localization, navigation, and planning.\n\n\nIn your estimation, what are the most common approaches to “gluing” these functions together? E.g. are most autonomous mobile robots designed using an agent architecture, or some other kind of architecture?\n\n\n\n\n---\n\n\n**Roland Siegwart**: Mobile robots are very complex systems that have to operate in real world environments and have to take decisions based on uncertain and only partially available information. 
In order to do so, the robot’s locomotion, perception and navigation system has to be best adapted to the environment and application setting. So robotics is above all a systems engineering task requiring broad knowledge and creativity. A wrongly chosen sensor setup cannot be compensated for by the control algorithms. In my view, the only proven concepts for autonomous decision making with mobile robots are Gaussian Processes and Bayes Filters. They make it possible to deal with uncertain and partial information in a consistent way and enable learning. Gaussian Processes and Bayes filters can model a large variety of estimation and decision processes and can be implemented in different forms, e.g. as the well-known Kalman Filter estimator.\n\n\nMost mobile robots use some sort of agent architecture. However, this is not a key issue in mobile robots, but rather an implementation issue for systems that run multiple tasks in parallel. The main perception, navigation and control algorithms have to adapt to unknown situations in a somewhat predictable and consistent manner. Therefore the algorithms and navigation concepts should also allow the robotics engineer to learn from experiments. This is only possible if navigation, control and decision making are not implemented in a black-box manner, but in a model-based approach taking best advantage of prior knowledge and system models. \n\n\n\n\n\n\n---\n\n\n**Luke**: So are you saying that the glue which holds together the perception, navigation, and control algorithms is typically an agent architecture, and this is largely because you need to integrate those functions in a model-based manner which can reveal to the engineer what’s going wrong (in early experiments) and how to improve it? Or are you saying something else?\n\n\n\n\n---\n\n\n**Roland**: Your understanding is only partially correct. 
Yes, most robot systems make use of some sort of an agent architecture, because it is the most evident concept for implementing independent parallel tasks, like for example robot localization and a security stop using the bumper signals. However, I don’t see agent architecture as a major issue in robotics or as the main glue. The glue for designing and implementing autonomous robots lies in the robotics engineer’s fundamental understanding of all key elements and their interplay. Furthermore, Gaussian Processes and Bayes filters are today the most promising and proven approach for autonomous navigation, especially Simultaneous Localization and Mapping.\n\n\n\n\n---\n\n\n**Luke**: As robotic systems are made increasingly general and capable, do you think a shift in techniques will be required? E.g. 15 years from now do you expect Gaussian Processes and Bayes filters to be even more dominant in robotics than they are today, or do you expect rational agent architectures to ascend, or do you expect hybrid systems control to take over, or what? (Wild speculation is allowed; I know you’re not a crystal ball!)\n\n\n\n\n---\n\n\n**Roland**: I consider Gaussian Processes and Bayes filters the most powerful tools to create rational agents. They make it possible to learn correlations and models, and to reason about situations and future goals. These model-based approaches will gain importance relative to behavior-based approaches. However, there will probably never be a single unifying approach for creating intelligent agents.\n\nRobotics is the art of combining sensing, actuation and intelligent control in the most creative and optimal way.\n\n\n\n\n---\n\n\n**Luke**: Why do you expect model-based approaches to gain importance relative to behavior-based approaches?\n\n\n\n\n---\n\n\n**Roland**: In order to take “wise” decisions and plan actions, a robot has to be able to anticipate the reactions its decisions and actions might have. 
This can only be realized by models that form the basis for predictions. Furthermore, unsupervised learning also requires models that enable the robot system to learn from experience. Models enable the robot to generalize experiences, which is not really possible with behavior-based approaches.\n\n\n\n\n---\n\n\n**Luke**: From your perspective, what has been some of the most interesting work in model-based approaches to autonomous robots in the past 5 years?\n\n\n\n\n---\n\n\n**Roland**: I think the most prominent model-based approach in robotics is within SLAM (Simultaneous Localization and Mapping), which can be considered to be pretty much solved.\n\n\nThanks to consistent application of Gaussian Processes and Bayes filters, and appropriate error modelling, SLAM is today feasible with different sensors (Laser, Vision) and on wheeled and flying platforms.\n\n\nLarge scale maps with considerable dynamics, changes in lighting conditions and loop closures have been demonstrated by groups from Oxford, the University of Sydney, MIT, ETH and many more.\n\n\nAnother robotics field where a lot of progress has been achieved by model-based approaches is imitation learning of complex manipulation tasks. 
By combining physical models of human arms and robot manipulators with probabilistic processes, learning of various manipulation tasks has been demonstrated by groups at USC, DLR, KIT, EPFL and many other places.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Roland!\n\n\nThe post [Roland Siegwart on autonomous mobile robots](https://intelligence.org/2014/04/25/roland-siegwart/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-25T11:00:46Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "3568bfc5d07187829528a291d98a1016", "title": "Domitilla del Vecchio on hybrid control for autonomous vehicles", "url": "https://intelligence.org/2014/04/24/domitilla-del-vecchio/", "source": "miri", "source_type": "blog", "text": "![Domitilla Del Vecchio portrait](http://intelligence.org/wp-content/uploads/2014/04/Del_Vecchio_w200.jpg) Domitilla Del Vecchio received the Ph. D. degree in Control and Dynamical Systems from the [California Institute of Technology](http://www.caltech.edu/), Pasadena, and the Laurea degree in Electrical Engineering from the [University of Rome at Tor Vergata](http://web.uniroma2.it/) in 2005 and 1999, respectively. From 2006 to 2010, she was an Assistant Professor in the [Department of Electrical Engineering and Computer Science](https://www.eecs.umich.edu/) and in the Center for Computational Medicine and Bioinformatics at the University of Michigan, Ann Arbor. In 2010, she joined the [Department of Mechanical Engineering](http://meche.mit.edu/) and the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology (MIT), where she is currently an Associate Professor. She is a recipient of the Donald P. Eckman Award from the American Automatic Control Council (2010), the NSF Career Award (2007), the Crosby Award, University of Michigan (2007), the American Control Conference Best Student Paper Award (2004), and the Bank of Italy Fellowship (2000). 
Her research interests include analysis and control of networked dynamical systems with application to bio-molecular networks and transportation networks.\n\n\n\n**Luke Muehlhauser**: In [Verma & del Vecchio (2011)](http://commonsenseatheism.com/wp-content/uploads/2014/04/Verma-Del-Vecchio-Semiautonomous-Multivehicle-Safety.pdf), you and your co-author summarize some recent work in semiautonomous multivehicle safety from the perspective of hybrid systems control. These control systems will “warn the driver about incoming collisions, suggest safe actions, and ultimately take control of the vehicle to prevent an otherwise certain collision.”\n\n\nI’d like to ask about the application of hybrid control to self-driving cars in particular. Presumably, self-driving cars will operate in two modes: “semi-autonomous” (human driver, with the vehicle providing warnings and preventing some actions) and “fully autonomous” (no human driver). Do you think hybrid control will be used for both purposes, in commercial self-driving cars released (e.g.) 10 years from now? Or do you think hybrid control will be competing with other approaches aimed at ensuring safe behavior in autonomous and semi-autonomous vehicles?\n\n\n\n\n\n---\n\n\n**Domitilla Del Vecchio**: Yes, I believe that hybrid control will be used in commercial vehicles both for autonomous and semi-autonomous functions within the next 10 years. Here is some background to support this belief. The decreasing costs of embedded computing and communication technologies are pushing several of today’s engineering systems toward increased levels of autonomy. Transportation systems are an obvious example, in which vehicles and infrastructure are being enriched with more computation, sensing, and communication every day, to increase safety, comfort, and efficiency. 
For life-critical systems, however, this enrichment raises the fundamental question of whether newly engineered systems with enhanced functionalities can be proven to be safe. A number of accidents have been reported, such as the unintended acceleration problem of a Toyota vehicle in 2008, so that initiatives, including the ISO 26262 for functional safety, have been taken by the auto industry world-wide to address this question. According to these new automotive functional safety standards, any design that affects life-critical applications should be assured to be safe. This requirement for system assurance has led most car companies, both in the US and in Europe, to explore formal design and verification approaches so that safety guarantees can be provided on newly designed applications and old applications can be formally verified for safety. In the hybrid control literature, techniques to design mixed logical/dynamical systems under safety specifications have been developed since the 90s with the pioneering work of Tomlin and co-workers and the very well-known California PATH project. Therefore, hybrid control approaches have a substantial potential to impact the current automotive technology as far as safety applications are concerned, and in fact many companies world-wide are already initiating research programs to explore the promise of formal methods for design and verification. 
Many challenges, however, need to be overcome from a theoretical point of view, such as being able to handle in a safe and least restrictive way hidden information that arises from many sources, such as driver’s behavior, sensor and communication errors, and poorly known environments.\n\n\n\n\n---\n\n\n**Luke**: And do you think hybrid systems control will fill a particular niche in the market for high-assurance software for self-driving cars, or do you think hybrid control approaches will compete with other approaches to high-assurance software, or do you think in the end both hybrid control and other currently existing approaches will need to be replaced by other approaches?\n\n\n\n\n---\n\n\n**Domitilla**: If we interpret hybrid control approaches broadly as formal design methodologies that carefully model dynamics and logic, then I do not think they will compete with other approaches but rather complement them. Ultimately, I think a mixture of approaches will be considered, including formal design methods, AI-like methods, and engineering-based methods.\n\n\n\n\n---\n\n\n**Luke**: I’ve discussed formal design methods in several past interviews. Which kinds of AI-like methods and engineering-based methods do you expect will be brought to bear on the challenge of high-assurance software for self-driving cars?\n\n\n\n\n---\n\n\n**Domitilla**: There are a number of approaches that are often used in the development of autonomous vehicles, mostly for checking collisions with static and moving obstacles, such as the RRT and RRT\\* algorithms that were used in the DARPA Urban Challenge by the MIT team, or more general path-planning algorithms with obstacles. These typically originate in the robotics community and are not usually concerned with formal safety guarantees, although they often work very well in practice. 
Along the same lines, questions of perception of the environment are obviously crucial in self-driving vehicles, such as recognizing whether something standing on the roadside is a pedestrian or a small tree, which may involve different vehicle decisions for assuring safety. These types of questions have been mostly studied in the artificial intelligence community, including research in computer vision. By engineering-based approaches, I mostly mean the typical software development cycle in industry, which involves extensive testing to highlight possible system malfunctions or safety hazards.\n\n\n\n\n---\n\n\n**Luke**: The hybrid control approach used in [Verma & del Vecchio (2011)](http://commonsenseatheism.com/wp-content/uploads/2014/04/Verma-Del-Vecchio-Semiautonomous-Multivehicle-Safety.pdf) had, in experiment, a success rate of 96.9%. I presume the success rate will need to be substantially higher before such systems are used to control autonomous cars in real road conditions? How much work might it take to push the success rate up to, say, 99.999%?\n\n\n\n\n---\n\n\n**Domitilla**: That is actually not true for systems that involve human drivers. In fact, automotive companies often prefer to tolerate an epsilon % of collisions so they can have less conservative warnings/controllers, which override drivers less frequently. From a practical standpoint, if human driver behavior needs to be accounted for, as in Verma and Del Vecchio (2011), 100% safety will most likely not be achievable, just because human behavior can be modeled statistically as opposed to non-deterministically. In the paper, we truncated the Gaussian probability distributions that describe how drivers brake/accelerate in the proximity of the intersection in order to have bounded capture sets. Since the tails of the Gaussian probability distributions were not included, we may still have some (rare) instance in which the system cannot guarantee safety. 
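As a rough illustration of the truncation step just described (a hedged sketch with invented numbers, not those from the paper):\n\n\n```python
# Illustrative sketch only: truncating a Gaussian model of driver braking so
# that the set of admissible behaviors (and hence any capture set computed
# from it) is bounded.  Mean/std/cutoffs here are invented, not from the paper.
import random

MEAN, STD = 3.0, 1.0                      # hypothetical deceleration, m/s^2
LO, HI = MEAN - 3 * STD, MEAN + 3 * STD   # cut the tails at 3 sigma

def sample_truncated(rng):
    """Rejection sampling: redraw until the sample lands inside [LO, HI]."""
    while True:
        x = rng.gauss(MEAN, STD)
        if LO <= x <= HI:
            return x

rng = random.Random(1)
samples = [sample_truncated(rng) for _ in range(1000)]
```\n\n\nBecause every admissible behavior now lies in a bounded interval, a worst-case safety computation ranges over [LO, HI] rather than the whole real line, at the cost of ignoring exactly the rare tail events mentioned above.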
In this sense, I think that when human behavior is accounted for, models and approaches will have to be stochastic and focus on providing probability of safety as opposed to 100%. This is a research direction we are pursuing right now.\n\n\n\n\n---\n\n\n**Luke**: What are the most interesting or important open problems in this area right now, to your mind?\n\n\n\n\n---\n\n\n**Domitilla**: There are many. A couple of ones that are particularly relevant especially from an application standpoint are computational complexity and the ability to provide design methods for stochastic safety. The first one is a problem that is always a challenge for implementing most formal safety approaches. Algorithms usually do not scale well with the size of the system and this limits real-time applicability. The second one is a challenging technical problem and methods to efficiently handle hidden decision making in a least conservative manner are very much needed.\n\n\n\n\n---\n\n\n**Luke**: What are your recommended readings on the latter challenge?\n\n\n\n\n---\n\n\n**Domitilla**: There are many readings that address several of the aspects involved in this problem. In the control theory community, we have seen recent papers by Abate’s and Lygeros’ groups, which address the stochastic reachability/verification problem. In the robotics community, there are also very related works, in which a vehicle has to be controlled to avoid stochastically moving agents, which can move according to a set of behaviors, with a given least probability (Jon How and colleagues). 
I believe there are a few more, and the ones cited here are not exhaustive, but these are those that come to mind now.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Domitilla!\n\n\nThe post [Domitilla del Vecchio on hybrid control for autonomous vehicles](https://intelligence.org/2014/04/24/domitilla-del-vecchio/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-25T00:00:06Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e4f8b43dc997af38ddf8011569aa6e11", "title": "Dave Doty on algorithmic self-assembly", "url": "https://intelligence.org/2014/04/23/dave-doty/", "source": "miri", "source_type": "blog", "text": "![Dave Doty portrait](http://intelligence.org/wp-content/uploads/2014/04/Doty.jpg) Dave Doty is a Senior Research Fellow at the California Institute of Technology. He proves theorems about molecular computing and conducts experiments implementing algorithmic molecular self-assembly with DNA.\n\n\n \n\n\n \n\n\n \n\n\n \n\n\n\n**Luke Muehlhauser**: A couple years ago you wrote a [review article](http://www.dna.caltech.edu/%7Eddoty/papers/tasa.pdf) on algorithmic self-assembly, and also created a [video introduction](http://vimeo.com/54214122) to the subject. Your review article begins:\n\n\n\n> Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell’s membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. 
Even a phenomenon as simple as crystal formation is a process of self-assembly… Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth.\n> \n> \n\n\nAs an example, here’s an electron microscope image of a Sierpinski triangle produced via [algorithmic self-assembly of DNA molecules](http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020424#pbio-0020424-g006):\n\n\n\n![Sierpinski triangles](http://intelligence.org/wp-content/uploads/2014/04/Doty_1_Sierpinski_triangles.png)\nIn your view, what are the most impressive examples of algorithmic molecular self-assembly produced in the lab thus far?\n\n\n\n\n---\n\n\n**Dave Doty**: Let me play the politician here and answer the question I wish I had been asked, which is to say, what’s the difference between algorithmic and non-algorithmic self-assembly, how well does algorithmic self-assembly work as of April 2014, and what are the reasons to think we can make it work better in the future?\n\n\n*Non-algorithmic self-assembly* \n\nThere are very impressive examples of *non-algorithmic* molecular self-assembly in the lab. Two very successful techniques have been DNA origami and DNA tiles. DNA origami requires a long DNA strand called a *scaffold* (the most commonly used one has 7249 bases). The idea is to synthesize about 200 much shorter (~32 bases) single strands of DNA, called *staples*, each of which binds via base-pairing (A binds to T and C binds to G) to multiple regions of the scaffold to bring the regions together, essentially folding the scaffold. Carefully designing *which* regions will be brought together means that different shapes can be created from the same scaffold by folding it differently.\n\n\nDNA tiles use base-pairing of DNA in a different way. 
A DNA tile is a complex of DNA with some number (often 4) of single-stranded regions called “sticky ends” that are available to bind on 4 different sides (they are “sticky” because they will bind via base-pairing to complementary single strands on other tiles). One advantage of DNA tiles is that we are not limited by the size of a scaffold strand; potentially many more than 200 tiles could bind to form large structures.\n\n\nThe “default” approach in either of these techniques is to use a unique *type* of DNA strand for each “pixel” in the self-assembled shape: 200 DNA strands binding to form a shape means 200 different DNA sequences.\n\n\n*Algorithmic self-assembly* \n\nThe “algorithmic” aspect of algorithmic self-assembly means that many different copies of *a single type* of DNA tile can be reused at several different parts of a structure. We call this “algorithmic” by analogy to the idea that a small program (e.g., one printing the digits of pi) can encode a large or infinite object (the digits of pi) through local rules.\n\n\nFor instance, if a tile is designed to bind to copies of itself on all sides, the whole space (a plane if done in two dimensions) is filled with copies of itself in a regular square grid. We could get a little fancier: design two tile types, white and black, so that white binds only to black and black binds only to white. Then you’ll get an infinite checkerboard pattern. Slightly more interesting but still periodic.\n\n\nIn fact, we can get much fancier if we can achieve what’s called “cooperative binding”: setting up the experimental conditions juuuuuust right so that a tile can bind to a grid of other tiles only if TWO sticky ends of the tile match. The trick is that some other types of tiles may match just one of the sticky ends, or the other, but ONLY the one that matches BOTH will be allowed to bind. 
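To make the power of this two-input rule concrete, here is a minimal idealized sketch — synchronous and error-free, unlike real DNA tiles — in which each attaching tile’s bit is fixed by BOTH its west and south neighbors via XOR, growing from an L-shaped seed. The result is the discrete Sierpinski pattern in the image earlier:\n\n\n```python
# Idealized sketch of cooperative (two-input) tile binding: each new tile's
# bit is determined by both its west and south neighbors (XOR rule), starting
# from an L-shaped seed of 1s.  Real tiles attach asynchronously and with
# errors; this error-free version just shows the pattern the rule computes.

def xor_tiles(n):
    grid = {}
    for i in range(n):
        grid[(i, 0)] = 1  # seed row
        grid[(0, i)] = 1  # seed column
    for y in range(1, n):
        for x in range(1, n):
            # cooperative binding: the tile attaching at (x, y) must match
            # both the west (x-1, y) and the south (x, y-1) sticky ends
            grid[(x, y)] = grid[(x - 1, y)] ^ grid[(x, y - 1)]
    return grid

g = xor_tiles(16)
for y in range(15, -1, -1):
    print("".join("#" if g[(x, y)] else "." for x in range(16)))
```\n\n\nEach distinct (west, south) input pair corresponds to one reusable tile type — four types suffice here — which is exactly what makes the assembly “algorithmic” rather than one strand per pixel.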
If this rule can be enforced, then tiles can be made to assemble structures of arbitrarily high algorithmic complexity, because this gives the tiles the ability to execute *any program* as they grow, in turn using that program to direct further growth.\n\n\nSo back to your question: what are the most impressive examples of algorithmic self-assembly in the lab today? We have tiles that assemble to count in binary (tiles bind in rows representing successive positive integers in binary) and tiles that compute XOR (which in turn leads to a pattern, like the one you showed, that creates a discrete version of the Sierpinski triangle). The best published binary counter counts from 1 up to 6, and then binding errors send it to 9. There are unpublished images of counters getting much farther before making errors, but we’re a long way from eliminating the errors entirely.\n\n\nSo what are these errors, and how do we plan to tame them? The errors stem from failed attempts at cooperative binding: a tile matched one of its sticky ends and not the other (hence was the wrong tile type to attach), but it attached anyway.\n\n\nThere are a number of experimental approaches to reducing error.\n\n\nLately we’ve been working a lot on getting certain experimental conditions right for cooperative binding to be favorable: for example, each sticky end should have the same binding strength as every other (G-C bonds are twice that of A-T bonds, for example, so this isn’t trivial to achieve just by using identical sticky end lengths).\n\n\nLong temperature holds seem to produce good results, but the correct temperature to choose often depends on a number of other factors, so sometimes we work hard to find the right temperature and then in the next experiment, with slightly changed conditions, need to find a new optimal temperature.\n\n\nThere are ways to correct errors “in software” by designing some redundancy into the tiles, in analogy to error-correcting codes that use 200 bits to 
represent a 100-bit message, so that instead of a single tile representing a bit, a 2×2 or 3×3 block of tiles will all represent the same bit.\n\n\nFinally, there are several labs looking into other ideas from the DNA nanotechnology community to implement cooperative binding. For example, there is a very widely used technology called DNA strand displacement that has proven very useful in controlling the interaction of artificial DNA systems. It is conceivable that it could be used as a way for a DNA tile to detect the condition “both of my ‘input’ binding sites match” and only then to allow the tile to attach. A lot of DNA nanotechnology and synthetic biology projects take existing enzymes, particularly ones that interact with DNA and RNA, and repurpose them to help drive some part of an artificial DNA system, so this may be an idea to explore with algorithmic self-assembly as well.\n\n\nI don’t know whether any of these ideas I’ve mentioned will ultimately be what takes algorithmic self-assembly from “dreamy theoretical playground” to “reliable technology for bottom-up molecular fabrication”. I have no doubt at all that *something* will work, however. Every day, nature uses bottom-up self-assembly to automatically assemble molecular structures of incredible complexity. What we’re doing is figuring out the engineering principles needed to do it reliably and cheaply.\n\n\n\n\n---\n\n\n**Luke**: For those in the field of algorithmic self-assembly, do you think it has been particularly difficult to attract funding and research talent to such a “dreamy theoretical playground”? What methods have you used to help draw funding and research talent to work on these problems, despite the fact that they are many years away from practical application?\n\n\n\n\n---\n\n\n**Dave**: I don’t think either has been difficult.\n\n\nI suppose “difficulty of funding” is always relative to other subjects we could be studying but aren’t. 
It might be easier to get corporate funding, for instance, to study how companies can use online viral marketing in order to boost their profits, but the major risk of that project is that I would be unable to complete it after I die of boredom.\n\n\nOf course, funding is always easier to secure when you can show that results are guaranteed. But *big* breakthroughs are made when — and only when — people decide to work on a project where it’s not at all clear in advance that it’s going to work.\n\n\nThe National Science Foundation’s mission explicitly states that basic science is a goal, i.e., studying the world for the sake of understanding it better, not merely as a means to some application. I think this is a good thing for society in general. Happily for us, the National Science Foundation has been generous in funding grants related to algorithmic self-assembly, and molecular computing more generally, as have some other governmental granting agencies.\n\n\nAs far as attracting talent, I should say there are a lot of experimental scientists who do work on immediate practical applications: for example, fast/cheap disease diagnosis using “DNA circuits”, or using self-assembled DNA structures that guide the precise placement of other structures such as carbon nanotubes or transistors with very fine resolution.\n\n\nBut I like to believe that a lot of people are like me and are attracted to the field *precisely because* it’s so poorly understood and so disorganized that it’s years away from (even bigger) practical applications. Once a field is so well understood that applications follow easily, we get bored and want to move on to something else that no one understands yet, so that we can be the first people to make progress in trying to understand it.\n\n\nI’m inspired by someone like Alan Turing, who thought very deeply about programmable computers in 1936, years before the first one was built. 
Once practical electronic computers really hit their stride by the 1950s, Turing had moved on to studying, of all things, biological morphogenesis.\n\n\nOf course I am frequently asked, “What are the practical applications of programming matter to manipulate itself?”, and I politely oblige with guesses such as “smart drugs” and “nanoscale robots”. I’m curious if Turing was ever asked in 1936, “What are the practical applications of automating computation?” and whether he had a memorized response that surely failed to include “space flight”, “the internet”, “smart phones”, or “cruise control”.\n\n\nI’m not saying that every project should be funded, regardless of whether it has applications. But even the dullest imagination could brainstorm a few practical consequences that might follow from learning to program and control matter at the molecular level.\n\n\nI think our field attracts smart people precisely because they don’t need any convincing that this is an important goal. They need only to hear two things:\n\n\n1. nothing in the laws of physics says that it can’t be done, and\n2. we don’t know how to do it yet.\n\n\n\n\n---\n\n\n**Luke**: That’s very interesting. [Bill Hibbard](http://www.ssec.wisc.edu/%7Ebillh/homepage1.html) and I have a forthcoming paper in *CACM* called “*Exploratory Engineering in AI*,” in which I list several examples of what Eric Drexler called “exploratory engineering.” I named Turing’s pre-ENIAC work, pre-Sputnik astronautics, and some other examples. Looking back, I should have named algorithmic self-assembly and cited your *CACM* paper on that!\n\n\nAI is interesting in this regard, because there are some people who think fully-general “[AGI](http://intelligence.org/2013/08/11/what-is-agi/)” is a reasonable thing to be studying now even though it’s several decades away, precisely because physics suggests it should be feasible but we just don’t know how to do it yet. 
But there are others who think it’s kind of disreputable to write papers about AGI or even talk much about it in public, because it’s too speculative and theoretical.\n\n\nDoes that division exist in your field as well? If it doesn’t, do you have a theory as to why there is this division in the AI field but not in your own field?\n\n\n\n\n---\n\n\n**Dave**: This division definitely exists in my field. I’ve often thought of there being two flavors of theory, and I attribute the division to people judging results in the second type of theory by the standards of the first type:\n\n\n1) *Descriptive theory* that helps predict the behavior of an existing natural system, *in service of* experiment. Here, if theoretical predictions differ from experiment, then something’s wrong with the theoretical model, and the theoretician needs to work harder to make the model more realistic. An example is Newton’s laws of motion that predict how billiard balls will move around a billiard table. When they don’t move quite as predicted, it’s a problem with the model, which needs to be updated with ideas like friction, air resistance, etc.\n\n\n2) *Prescriptive theory* that serves as a programming language for talking about behavior that we want to engineer in a new system that may or may not presently exist, *served by* experiments that implement the programmed behavior. An example is theoretical work on Boolean circuits that abstracts away their underlying implementation with transistors (or more recently, implementation with DNA strands or genetic transcription factors). 
When the electronic circuit doesn’t output the correct bit, it’s not a problem with the Boolean circuit model, it’s a problem with the experimental implementation that failed to maintain analog voltages in such a way as to correctly represent discrete bits in the Boolean circuit.\n\n\nIf I understand your meaning correctly, the second of these two types of theory could very well be called exploratory engineering when the system being studied doesn’t exist yet. Most theory in the natural sciences is descriptive, but theoretical computer science is almost exclusively prescriptive. Theoretical computer scientists don’t predict physical results that are confirmed by experiment. Rather they show what is possible (and impossible) for certain engineered systems to do, and sometimes (as in the case of algorithmic self-assembly, or quantum computing, or electronic computing in 1936) they are reasoning about systems that haven’t even been built yet.\n\n\nThe abstract tile assembly model is a prescriptive theory (one variant of it; another variant called the kinetic tile assembly model is closer to the underlying chemistry and used more in the descriptive sense when we do experiments). The abstract tile assembly model doesn’t predict what DNA tiles do, and that’s not its goal. The point of the theory is to show that if DNA tiles can be built that have the idealized behavior of the model, then hey, look at all the other amazing things they could do! The fact that we currently aren’t able to experimentally engineer this behavior isn’t a problem with the theoretical model. It’s a problem with the experiments.\n\n\nConfusion results when someone familiar only with descriptive theory encounters a result in prescriptive theory. 
I’ve proved impossibility theorems (of the form, “no system exists that has so-and-so behavior”) and been asked whether I intend to confirm them experimentally, a notion that doesn’t even make logical sense.\n\n\nFor example, the uncomputability of the halting problem, proved by Turing in 1936, states that there is no computer program P that can predict whether another computer program, given as input to P, is ever going to halt. No experiment can be done to confirm this theorem, yet its truth is a fundamental law of nature.\n\n\nI can’t comment on the AI field and its detractors since I don’t know it well. This sort of phenomenon may be one explanation.\n\n\nThat said, not all exploratory engineering is equally valid. It is possible to be too speculative by speculating that provably impossible tasks are possible. I think it’s fine to theorize about systems that don’t yet exist, so long as we have good reasons to believe they could one day exist. Unfortunately, I’ve seen a lot of theory done in models that are demonstrably at odds with the laws of physics, e.g., ideas like stuffing 2^500 strands of DNA (more than the number of atoms in the universe) into a single test tube to “test all solutions in parallel” to some hard combinatorial problem.\n\n\nBut, when someone tells me that work on algorithmic self-assembly is at odds with reality because they have never seen something like that happen in an experiment, I imagine them telling electrical engineers prior to the 1930s that just because they’d never seen an electronic circuit do arithmetic on binary numbers, the notion must be at odds with reality. There are ways to prove that certain systems cannot be built, but simply lacking the imagination to build them is not one of these ways.\n\n\n\n\n---\n\n\n**Luke**: When doing prescriptive theory, it’s very handy to have explicit, formal models of the laws which govern what you’re trying to do. E.g. 
it would be hard to do quantum algorithm design if there was no principled way to abstract away from implementation details, but it turns out [there is](http://intelligence.org/2014/02/03/ronald-de-wolf-on-quantum-computing/), so people can actually prove certain results even though the quantum computers don’t exist yet. One problem in AGI theory is that nobody knows what an AGI will look like, and there’s very little you can demonstrate about the ultimate capabilities of AGI with (e.g.) physics and computational complexity theory alone. At least with [AIXI](http://wiki.lesswrong.com/wiki/AIXI) (and its computable variants) one can make [principled](http://www.idsia.ch/%7Ering/AGI-2011/Paper-B.pdf) [arguments](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf) about what *certain classes* of AGIs would do or not do, but that kind of precision is rarely available in the field so far.\n\n\nHow is prescriptive theory in your field enabled by formal models which abstract away from implementation details?\n\n\n\n\n---\n\n\n**Dave**: The abstract tile assembly model is one workhorse for prescriptive theory. It abstracts away the proposed DNA implementation. In fact, it’s really based more on generic crystallization theory than DNA specifically. Any crystallizing molecule forming a square lattice, held just under the temperature where the rate of monomer attachment is just barely larger than the rate of detachment of monomers held by two bonds (assuming all bond strengths are equal), should have the same effect of enforcing cooperative binding: that tiles end up permanently attached only when held by at least two bonds. There may be other physical ways to enforce this sort of cooperative binding using techniques specific to DNA, or biological molecules more generally.\n\n\nThere have been variants of the model that, while based on plausible physical mechanisms for implementation, abstract away the precise method of implementation. 
Recently there’s been some work with models of *signal-passing* tiles, which are able to transmit information after they attach in response to receiving a signal from a neighboring tile. They can in turn send signals to other tiles, and the signals are used to do things like activate or deactivate bonds, which allows a bit more sophisticated assembly to happen than in the original model with its *passive* tiles. A reasonable restriction to assume is that only a fixed number of signals can be sent across a tile before they are used up; this is based on the idea that if sending a signal is thermodynamically favorable (whatever the exact mechanism), then it should put the tile in a lower energy state after sending the signal than before, and at some point it gets to a minimum energy.\n\n\nThere is some unpublished lab work I’m aware of implementing the beginnings of this sort of thing using DNA, but the point of the model is to say, “What’s a reasonable abstract model of signalling tiles that seems plausible and likely to capture lots of different ways of doing it?” and then ask what the model can do. The hope is that the results apply not just to one potential implementation of signalling tiles, but lots of different ways.\n\n\nAnother example is some work on motion: what if the monomers are able to move relative to each other after they attach? Again, the physical mechanism for this could be some sort of molecular motor driven by ATP, or it could be driven by the fact that more base pairs are attached after the motion than before.\n\n\nMoving outside of algorithmic self-assembly to molecular computing more generally, one of my favorite models is one of the oldest models in all of science: chemical reaction networks. These are lists of reactions like X + Y –> A + B, indicating that when a molecule of X and a molecule of Y collide, they might change into molecules of A and B. 
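The reaction just described can be illustrated with a toy Gillespie-style stochastic simulation — a hedged sketch in which the rate constant and initial molecule counts are arbitrary choices, not taken from any experiment:\n\n\n```python
# Toy Gillespie-style stochastic simulation of the single abstract reaction
# X + Y -> A + B in a well-mixed solution.  Rate constant and initial counts
# are arbitrary illustrative choices.
import random

def simulate(counts, k=1.0, seed=0):
    rng = random.Random(seed)
    t = 0.0
    while True:
        propensity = k * counts["X"] * counts["Y"]  # rate of X-Y collisions
        if propensity == 0:
            break                         # no X-Y pair left to collide
        t += rng.expovariate(propensity)  # waiting time to the next collision
        counts["X"] -= 1                  # one X and one Y turn into A and B
        counts["Y"] -= 1
        counts["A"] += 1
        counts["B"] += 1
    return t, counts

t, final = simulate({"X": 50, "Y": 30, "A": 0, "B": 0})
```\n\n\nWith a single reaction the final counts are forced by conservation (X is consumed until Y runs out); with several competing reactions, the random waiting times decide which fires next, which is where the stochastic model earns its keep.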
It’s abstracting away the question, “What exactly are X and Y and A and B, and why would they have this effect if X and Y collide?” As long as your system is based on molecules floating around a well-mixed liquid and bumping into each other (with no control of who’s going to bump into whom), and some of them might change as a result of colliding, then the model of chemical reaction networks applies to it.\n\n\nThe model has been around since the 1860’s. It started to get fairly serious scrutiny in the 1970s by a group of mathematicians studying what sorts of long-term behaviors are possible for different classes of chemical reaction networks: oscillations, steady states, etc. More recently, a more computational perspective has been applied, asking questions like, “What sort of *computation* is possible if we are allowed to engineer arbitrary chemical reactions?”\n\n\nThis may seem like the ultimate in theoretical navel-gazing: what makes anyone think we could engineer arbitrary chemical reactions? Well, assuming a reasonable model of a physical mechanism called DNA strand displacement (one of the experimental workhorses of the field of DNA nanotechnology), it was shown a few years ago that every abstract chemical reaction network you could think of can be systematically translated into a system of DNA complexes that undergo actual reactions imitating the abstract reactions.\n\n\nOf course, that’s just one way to try to implement a chemical reaction network. When we do prescriptive theory with chemical reaction networks, we aren’t thinking of that particular implementation. 
We know it’s out there and it helps us sleep at night, but maybe there are other ways to engineer artificial chemical reactions, and work on abstract chemical reaction networks applies to those systems as well.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Dave!\n\n\nThe post [Dave Doty on algorithmic self-assembly](https://intelligence.org/2014/04/23/dave-doty/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-23T21:24:14Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a241c395ea1c86d235eaf39758bd3a54", "title": "Ariel Procaccia on economics and computation", "url": "https://intelligence.org/2014/04/23/ariel-procaccia/", "source": "miri", "source_type": "blog", "text": "![Ariel Procaccia portrait](http://intelligence.org/wp-content/uploads/2014/04/Procaccia_w760.jpg) Ariel Procaccia is an assistant professor in the [Computer Science Department](http://www.cs.cmu.edu/) at [Carnegie Mellon University](http://www.cmu.edu/). He received his Ph.D. in computer science from the [Hebrew University of Jerusalem](http://new.huji.ac.il/en/). He is a recipient of the NSF CAREER Award (2014), the (inaugural) Yahoo! Academic Career Enhancement Award (2011), the Victor Lesser Distinguished Dissertation Award (2009), and the Rothschild postdoctoral fellowship (2009). Procaccia was named in 2013 by IEEE Intelligent Systems to their biennial list of AI’s 10 to Watch. 
He is currently the editor of [ACM SIGecom Exchanges](http://www.sigecom.org/exchanges/), an associate editor of the [Journal of AI Research](https://www.jair.org/) (JAIR) and Autonomous Agents and Multi-Agent Systems (JAAMAS), and an editor of the upcoming *Handbook of Computational Social Choice*.\n\n\n\n\n**Luke Muehlhauser**: Much of your work concerns [mechanism design](http://en.wikipedia.org/wiki/Mechanism_design), which Leonid Hurwicz and Stanley Reiter once [described](http://www.amazon.com/Designing-Economic-Mechanisms-Leonid-Hurwicz/dp/0521724104/) succinctly:\n\n\n\n> In a [mechanism] design problem, the goal function is the main “given,” while the mechanism is the unknown. Therefore, the design problem is the “inverse” of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism.\n> \n> \n\n\nIn [Brânzei & Procaccia (2014)](http://www.cs.cmu.edu/%7Earielpro/papers/verification.pdf), you and your co-author go on to explain that:\n\n\n\n> Arguably, the most sought-after property [for a mechanism] is *truthfulness*, more formally known as *incentive compatibility* or *strategyproofness*: an agent must not be able to benefit from dishonestly revealing its private information.\n> \n> \n\n\nBut moreover, it would be nice if agents could verify the truthfulness of a mechanism for themselves in a computationally efficient way. How would you describe the current state of progress on the problem of (efficiently) verifiably truthful mechanisms?\n\n\n\n\n---\n\n\n**Ariel Procaccia**: The work on verifiably truthful mechanisms has been rather sporadic. Over the last decade, several groups of researchers have leveraged model checking techniques to address this problem. The main shortcoming of the model checking approach is that it is computationally intensive, and, in particular, does not lead to provable computational efficiency. 
[Kang and Parkes (2006)](http://www.eecs.harvard.edu/econcs/pubs/verif3.pdf) studied “passive” verification algorithms that do not directly analyze the mechanism itself, but observe its sequence of inputs and outputs.\n\n\nIn the work with Simina Brânzei that you mentioned, we take a novel, three-step approach: (i) we provide a formalism that allows us to specify the structure of mechanisms, (ii) we construct a truthfulness verification algorithm that receives as input mechanisms specified using the formalism of step (i), and (iii) we analytically measure the quality of mechanisms whose truthfulness can be efficiently verified using the algorithm of step (ii). As a proof of concept, we applied our approach to the design of verifiably truthful mechanisms in a specific domain (facility location on the real line). While I think our approach is quite exciting, it is too early to say whether it can be extended to richer domains.\n\n\nTo summarize, the study of verifiably truthful mechanisms is still in its infancy; there is a lot of work to be done!\n\n\n\n\n---\n\n\n**Luke**: Some of your recent work examines “cake cutting” (fair division of a divisible good). According to your [survey article](http://www.cs.cmu.edu/%7Earielpro/papers/cakesurvey.cacm.pdf) for *CACM*, modern computational work on cake-cutting grew out of earlier work on fairness in recreational mathematics and microeconomics. To your knowledge, has there been much interaction between philosophical work on fairness (e.g. [Rawls](http://plato.stanford.edu/entries/rawls/)) and the fairness research in mathematics, economics, and computer science? (Along with [Scott Aaronson](http://intelligence.org/2013/12/13/aaronson/), I’m typically curious about the dynamic in which philosophical problems are studied outside philosophy. 
I often feel that progress is faster when a problem is studied outside philosophy journals.)\n\n\n\n\n---\n\n\n**Ariel**: Work on fair division is inspired by philosophy, and, I believe, raises new philosophical questions. For example, you mentioned John Rawls, the famous philosopher who advocated (in his book [*A Theory of Justice*](http://smile.amazon.com/A-Theory-Justice-John-Rawls/dp/0674000781/)) a maximin principle of fairness: society should maximize the prospects of the least fortunate members. This principle was adopted by the systems community. Indeed, [Demers et al. (1989)](http://x86.cs.duke.edu/courses/cps214/spring09/papers/p1-demers.pdf) write: “The maxmin fairness criterion … states that an allocation is fair if (1) no user receives more than its request, (2) no other allocation scheme satisfying condition 1 has a higher minimum allocation, and (3) condition 2 remains recursively true as we remove the minimal user and reduce the total resource accordingly”. The CS theory community has also devoted quite a bit of attention to a related algorithmic problem – allocating *indivisible* goods to maximize the minimum value of any player. In fact, some theorists believe that [Santa Claus](http://dl.acm.org/citation.cfm?id=1132522) uses this notion of fairness to allocate toys!\n\n\nIn contrast, the cake cutting literature more commonly studies “Boolean” fairness criteria. Perhaps the most compelling and intuitive axiom is envy-freeness: Each player should value his own allocation at least as much as he values any other player’s allocation. But [Aumann and Dombb (2010)](http://www.eecs.harvard.edu/cs286r/courses/fall11/papers/AD10.pdf) show that the two different approaches are at odds. Specifically, they formally construct a cake cutting instance in which constraining the allocation to be envy free significantly reduces the happiness of the least happy player under an optimal allocation. 
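The three-condition maxmin criterion quoted from Demers et al. is realized by the classic progressive-filling (water-filling) procedure: repeatedly split the remaining resource equally among users whose requests are not yet met. A minimal sketch, where the function name and the example demands are mine, for illustration only:

```python
def maxmin_fair(demands, capacity):
    """Max-min fair allocation: no user exceeds its request, and the minimum
    allocation is maximized, recursively (progressive filling)."""
    alloc = {u: 0.0 for u in demands}
    active = set(demands)                 # users whose demand is not yet met
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)   # equal split of what is left
        for u in list(active):
            give = min(share, demands[u] - alloc[u])
            alloc[u] += give
            remaining -= give
            if alloc[u] >= demands[u] - 1e-12:
                active.remove(u)          # request fully met; drop out
    return alloc
```

With demands of 1, 4 and 6 and a capacity of 9, the procedure gives 1, 4 and 4: the small request is met in full, and the slack is split between the larger ones.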
I think it is fascinating that this mathematical result informs fundamental philosophical questions about fairness, e.g.,  which is preferable, a richer society where some individuals envy others, or a poorer society where individuals are content with their own shares? While fair division is being taught in [philosophy courses](http://web.pacuit.org/classes/voting-fall2012.html), to the best of my knowledge the potential contribution of recent mathematical results to theories of justice and fairness has not been explored.\n\n\nAnd, a bit more whimsically, see [my blog post](http://agtb.wordpress.com/2012/08/15/fair-division-and-the-whining-philosophers-problem/) for an alternative connection between fair division and philosophy.\n\n\n\n\n---\n\n\n**Luke**: You’ve also worked on the fair division of *indivisible* goods. In [Procaccia & Wang (2014)](http://www.cs.cmu.edu/%7Earielpro/papers/mms.pdf), you and your co-author write:\n\n\n\n> typical real-world situations where fairness is a chief concern, notably divorce settlements and the division of an estate between heirs, involve indivisible goods (e.g., houses, cars, and works of art) — which in general preclude envy-free, or even proportional, allocations. As a simple example, if there are several players and only one indivisible item to be allocated, the allocation cannot possibly be proportional or envy free.\n> \n> \n\n\nHow would you summarize the current state of solutions to the problem of fair division of indivisible goods?\n\n\n\n\n---\n\n\n**Ariel**: As you noted, dividing indivisible goods is tricky, because classic notions of fairness like envy-freeness (which we discussed earlier) are clearly infeasible. We do know that if players’ values for goods are drawn at random, there exists an envy-free allocation with high probability under mild assumptions (see [Dickerson et al., 2014](http://www.cs.cmu.edu/%7Earielpro/papers/ef_phase.pdf)). Alternatively, for the case of two players, [Brams et al. 
(2014)](http://www.ams.org/notices/201402/rnoti-p130.pdf) propose an interesting protocol that guarantees envy-freeness (and other properties), but may not allocate all goods.\n\n\nIn my view, we need a notion of fairness that can *always* be achieved, for any number of players. In the paper you mentioned (with Junxing Wang), we study the notion of *maximin share (MMS) guarantee*, proposed by Eric Budish. In a nutshell, the MMS guarantee of a given player, assuming there are n players, is the value the player can *guarantee* by dividing the goods into n bundles, letting other players choose bundles (in some order), and getting the remaining bundle. It would have been great if it had been possible to always divide the goods in a way that satisfies the players’ MMS guarantees, but we show that this is not the case, even when valuations are additive (that is, my value for a bundle is the sum of values for individual goods in the bundle). However, we show that it is always possible to divide the goods in a way that each player gets *two thirds* of his MMS guarantee.\n\n\nWe use this theoretical result to design what is, in my highly biased opinion, arguably the most practical method for dividing indivisible goods among any number of players. First, we let the players assign points to goods. Second, in order to produce an allocation, we consider three levels of fairness: envy-freeness, proportionality (each of the n players gets 1/n of his value for the entire set of goods), and MMS guarantee; each level of fairness is stronger than the subsequent one. We return the allocation that maximizes social welfare, that is, the sum (over players) of points players assign to their allocated bundles, subject to the *strongest feasible* level of fairness. If the strongest feasible level of fairness is MMS guarantee, we maximize the fraction c such that we can provide to all players a c-fraction of their MMS guarantee. 
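For small instances with additive valuations, a player’s MMS guarantee can be computed by brute force: try every way of splitting the goods into n bundles and keep the best worst-bundle value. This exponential-time sketch is mine, purely for intuition (it is not the method behind Spliddit):

```python
from itertools import product

def maximin_share(values, n):
    """MMS guarantee for a player with additive values over the goods, with
    n players: max over all n-bundle splits of the minimum bundle value."""
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for value, bundle in zip(values, assignment):
            bundles[bundle] += value      # assign each good to one bundle
        best = max(best, min(bundles))
    return best
```

For goods worth 1, 2 and 3 and two players, the MMS is 3 ({1, 2} versus {3}); with a single good and two players it is 0, echoing the earlier point that one indivisible item precludes a proportional allocation.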
Our theoretical result implies a lower bound on the guaranteed level of fairness — c is at least 2/3 — but in practice (based on extensive simulations by several groups) it seems that c=1 should always be feasible with respect to realistic instances.\n\n\nThe method I just described is one of several practical fair division methods implemented in [Spliddit](http://www.spliddit.org/), a not-for-profit fair division website that I’ve been working on (with Jonathan Goldman) for the past year; it is set to launch in the coming months.\n\n\n\n\n---\n\n\n**Luke**: From your perspective, how does computational fair division theory interact with computational voting theory? If we can’t get a certain kind of satisfactory result via fair division theory, can we sometimes achieve “fairly good” results from a non-manipulable voting scheme, or vice-versa?\n\n\n\n\n---\n\n\n**Ariel**: Social choice and fair division share a common goal: aggregating individuals’ possibly conflicting preferences in a way that achieves a desirable outcome. However, the two settings are quite different, in that fair division is typically concerned with the properties of each individual’s allocation, and the fairness axioms we discussed earlier hold for each individual separately; whereas in the classic social choice setting, individuals specify their preferences over a typically unstructured space of societal outcomes. Typically, methods developed in one of the two fields should not be directly applied to the other. 
That said, some extensions — such as [voting over combinatorial domains](http://www.cs.rpi.edu/%7Exial/Files/chap8.pdf), and [fair division with externalities](http://www.cs.cmu.edu/%7Earielpro/papers/external.full.pdf) — bring the two fields closer together, and their relation is explored in detail by [Fleurbaey and Maniquet (2011)](http://www.cambridge.org/ar/academic/subjects/economics/public-economics-and-public-policy/theory-fairness-and-social-welfare).\n\n\nAs an aside (since you asked specifically about non-manipulable rules), non-manipulable fair division schemes are quite common (see, e.g., [Chen et al., 2013](http://www.cs.cmu.edu/%7Earielpro/papers/justruth.geb.pdf)), but we do not know how to achieve non-manipulability (a.k.a. strategyproofness) in computational voting theory. The Gibbard-Satterthwaite Theorem rules out the existence of “reasonable” voting rules that cannot be manipulated. A point that came up in your [interview with Toby Walsh](http://intelligence.org/2014/03/10/toby-walsh/) is that some voting rules are worst-case hard to manipulate (with respect to some formulations of the manipulation problem). However, I am more pessimistic than Toby about the degree to which such results actually provide a barrier against manipulation, and, indeed, almost a decade of work on designing voting rules that are computationally hard to manipulate *on average* has so far produced only negative results (see, e.g., [Mossel and Racz, 2012](http://dl.acm.org/citation.cfm?id=2214071)).\n\n\n\n\n---\n\n\n**Luke**: What kinds of progress in fair division theory or voting theory do you expect or not-expect in the next 15 years? E.g. you seem pessimistic about finding non-manipulable “reasonable” voting rules. 
What else are you pessimistic or optimistic about?\n\n\n\n\n---\n\n\n**Ariel**: The short answer (to the first part of the question) is “applications”.\n\n\nIn (computational) fair division, I feel that we already have an excellent understanding of how to solve real-world — even day-to-day — fair division problems (such as rent division), via clever methods developed over decades. But very few of these methods have ever been implemented or used in practice. The website [Spliddit](http://www.spliddit.org/), which I mentioned earlier, aims to take a first step towards giving people access to some of the most practical methods (including ones developed in my group). I believe it will also give us a clearer understanding of which problems people want to solve, and which solutions are viewed as satisfactory (sometimes all the mathematical guarantees in the world are not enough to make people happy!). It’s also worth mentioning that the Wharton Business School at Penn has recently adopted a [beautiful fair division method](http://faculty.chicagobooth.edu/eric.budish/research/budish-approxceei-jpe-2011.pdf) (which they call [Course Match](https://spike.wharton.upenn.edu/mbaprogram/course_match/)) for the allocation of MBA courses to students. In the next 15 years, I expect to see a proliferation of usable fair division systems.\n\n\nIn (computational) voting theory, I am optimistic about potential applications to crowdsourcing and human computation systems, for two reasons. First, in contrast to political elections, the designer of a crowdsourcing system can easily evaluate and choose any voting rule. Second, a classical, well-studied model of voting (which dates back to the marquis de Condorcet) views voters as noisy estimators of an underlying true ranking of the alternatives by quality; and while this model is questionable in the context of political elections (where opinions are subjective), it is a perfect fit with certain crowdsourcing settings. 
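The Condorcet noisy-estimator view can be made concrete with a toy simulation (the function name and parameters below are mine): each voter compares every pair of alternatives and agrees with the true order with probability p > 1/2, and ranking alternatives by total pairwise wins recovers the hidden truth with high probability as voters are added:

```python
import random

def rank_by_pairwise_wins(true_ranking, n_voters, p, seed=0):
    """Condorcet-style noise model: each voter judges every pair of
    alternatives, matching the true order with probability p > 1/2.
    Aggregate by counting pairwise wins and sort by that score."""
    rng = random.Random(seed)
    wins = {a: 0 for a in true_ranking}
    for _ in range(n_voters):
        for i, better in enumerate(true_ranking):
            for worse in true_ranking[i + 1:]:
                # The voter agrees with the true comparison with probability p.
                if rng.random() < p:
                    wins[better] += 1
                else:
                    wins[worse] += 1
    return sorted(wins, key=wins.get, reverse=True)
```

With five alternatives, 500 voters and p = 0.9, the aggregate ranking matches the underlying truth essentially always, which is the sense in which voters act as noisy estimators.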
In fact, existing systems like [EteRNA](http://eterna.cmu.edu/web/) already use voting in order to aggregate information; the challenge is to understand which voting rules should be used. Work done in my group (e.g., [Caragiannis et al., 2013](http://www.cs.cmu.edu/%7Earielpro/papers/samples.full.pdf)), and by others, provides some answers to this question. In the next 15 years, I believe we will see crowdsourcing systems whose design is directly informed by computational voting theory.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Ariel!\n\n\nThe post [Ariel Procaccia on economics and computation](https://intelligence.org/2014/04/23/ariel-procaccia/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-23T12:00:03Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e0ecd02e1c8a8bc143be3919f250f786", "title": "Suzana Herculano-Houzel on cognitive ability and brain size", "url": "https://intelligence.org/2014/04/22/suzana-herculano-houzel/", "source": "miri", "source_type": "blog", "text": "![Suzana Herculano-Houzel portrait](http://intelligence.org/wp-content/uploads/2014/04/Herculano-Houzel_w500.jpg)[Suzana Herculano-Houzel](http://www.suzanaherculanohouzel.com/lab) is an associate professor at the [Federal University of Rio de Janeiro](http://www.ufrj.br/), Brazil, where she heads the [Laboratory of Comparative Neuroanatomy](http://www.suzanaherculanohouzel.com/lab). She is a Scholar of the James McDonnell Foundation, and a Scientist of the Brazilian National Research Council (CNPq) and of the State of Rio de Janeiro (FAPERJ). 
Her main research interests are the cellular composition of the nervous system and the evolutionary and developmental origins of its diversity among animals, including humans; and the energetic cost associated with body size and number of brain neurons and how it impacted the evolution of humans and other animals.\n\n\nHer latest findings show that the human brain, with an average of 86 billion neurons, is not extraordinary in its cellular composition compared to other primate brains – but it is remarkable in its enormous absolute number of neurons, which could not have been achieved without a major change in the diet of our ancestors. Such a change was provided by the invention of cooking, which she proposes to have been a major watershed in human brain evolution, allowing the rapid evolutionary expansion of the human brain. A short presentation of these findings is available at TED.com.\n\n\nShe is also the author of six books on the neuroscience of everyday life for the general public, a regular writer for the Scientific American magazine *Mente & Cérebro* since 2010, and a columnist for the Brazilian newspaper *Folha de São Paulo* since 2006, with over 200 articles published in this and other newspapers.\n\n\n\n**Luke Muehlhauser**: Much of your work concerns the question “Why are humans smarter than other animals?” In a series of papers (e.g. [2009](http://journal.frontiersin.org/Journal/10.3389/neuro.09.031.2009/abstract), [2012](http://www.pnas.org/content/109/Supplement_1/10661.long)), you’ve argued that recent results show that some popular hypotheses are probably wrong. For example, the so-called “overdeveloped” human cerebral cortex contains roughly the same percentage of total brain neurons (19%) as do the cerebral cortices of other mammals. 
Rather, you argue, the human brain may simply be a “linearly scaled-up primate brain”: primate brains seem to have more economical scaling rules than do other mammals, and humans have the largest brain of any primate, and hence the most total neurons.\n\n\nYour findings were enabled by a new method for neuron quantification developed at your lab, called “isotropic fractionator” ([Herculano-Houzel & Lent 2005](http://commonsenseatheism.com/wp-content/uploads/2014/01/Herculano-Houzel-Lent-Isotropic-fractionator-a-simple-rapid-method-for-the-quantification-of-total-cell-and-neuron-numbers-in-the-brain.pdf)). Could you describe how that method works?\n\n\n\n\n---\n\n\n**Suzana Herculano-Houzel**: The isotropic fractionator consists pretty much of turning fixed brain tissue into soup – a soup of a known volume containing free cell nuclei, which can be easily colored (by staining the DNA that all nuclei contain) and thus visualized and counted under a microscope. Since every cell in the brain contains one and only one nucleus, counting nuclei is equivalent to counting cells. The beauty of the soup is that it is fast (total numbers of cells can be known in a few hours for a small brain, and in about one month for a human-sized brain), inexpensive, and very reliable – as much or more than the usual alternative, which is stereology.\n\n\nStereology, in comparison, consists of cutting entire brains into a series of very thin slices; processing the slices to allow visualization of the cells (which are otherwise transparent); delineating structures of interest; creating a sampling strategy to account for the heterogeneity in the distribution of cells across brain regions (a problem that is literally dissolved away in the detergent that we use in the isotropic fractionator); acquiring images of these small brain regions to be sampled; and actually counting cells in each of these samples. It is a process that can take a week or more for a single mouse brain. 
It is more powerful in the sense that spatial information is preserved (while the tissue is necessarily destroyed when turned into soup for our purposes), but on the other hand, it is much more labor-intensive and not appropriate for working on entire brains, because of the heterogeneity across brain parts. \n\n\n\n\n\n\n---\n\n\n**Luke**: Your own work emphasizes the importance of the brain’s sheer number of neurons for cognitive ability. What do you think of other recent results (e.g. [Smaers & Soligo 2013](http://commonsenseatheism.com/wp-content/uploads/2014/01/Smaers-Soligo-Brain-reorganization-not-relative-brain-size-primarily-characterizes-anthropoid-brain-evolution.pdf)), which emphasize the apparent importance of [mosaic](http://en.wikipedia.org/wiki/Mosaic_evolution) brain reorganization?\n\n\n\n\n---\n\n\n**Suzana**: Mosaic brain organization is a fact. It describes the independent scaling of different parts of the brain across species in evolution, as opposed to every brain part scaling in line with every other part (what Barbara Finlay describes as “linked regularities”). 
Mosaic scaling in evolution is seen for example in the enormous size that some structures exhibit in some species but not others, relative to the rest of the brain: the common squirrel, for instance, has an enormous superior colliculus, involved in visual processing, that other rodents of a similar brain size do not have; moles and shrews, who rely heavily on olfaction, have even more neurons in the olfactory bulb than in the cerebral cortex – something that is quite different from rodents of a similar brain size (this is work under review).\n\n\nIn the context of our work, mosaic brain evolution means that the numbers of neurons allocated to different brain structures can vary independently across said structures: while, say, the superior colliculus and the visual thalamus tend to gain neurons hand in hand, a particular species can gain neurons much faster in the superior colliculus than in the visual thalamus, for instance. Mosaic brain evolution also refers to the possibility of one system (for instance, vision) expanding faster than another system (say, audition). There is the occasional surprise, however. For instance, we have found that, while primates are highly visual and have a large proportion of the cortex devoted to vision (indeed, much larger than the cortical areas devoted to audition), this proportion (as well as the relative number of cortical neurons devoted to vision) does NOT increase together with increasing brain size. Many more cortical neurons are involved in visual than in auditory processing, yes – but that proportion is stable across primate species. Still, species that rely more heavily on other sensory modalities should have a different distribution of neurons. 
Indeed, the mouse has a far larger percentage of cortical neurons involved in somatosensory processing than primates do; and, as I mentioned above, moles and shrews have more neurons in the olfactory bulb than in the whole cortex – a pattern that is not seen in other brains of a similar size.\n\n\nEven more remarkably, we have found that the apparent expansion of the cerebral cortex in mammalian evolution, varying from less than 40% of brain size in the smallest mammals to over 80% in humans and other even larger brains, is NOT the result of an expansion in numbers of neurons in the cortex: regardless of the relative size of the cortex across different species, it has about 20% of all brain neurons – even in the human brain. That’s another example of how apparent mosaic evolution (of one structure taking over the others) can actually not be mosaic evolution. It all depends on the precise variable examined.\n\n\n\n\n---\n\n\n**Luke**: To be more specific: Do you think your view that the human brain is essentially a “linearly scaled-up primate brain” is in significant tension with [Smaers & Soligo (2013)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Smaers-Soligo-Brain-reorganization-not-relative-brain-size-primarily-characterizes-anthropoid-brain-evolution.pdf)’s principal component analysis (PCA) of neural structure variation in primate species?\n\n\nSmaers and Soligo claim their PCA shows that while (1) the principal component which accounts for 25.8% of the variance is closely correlated with brain size, it’s also the case that (2) the remaining principal components — which account for a large majority of the variance — are not closely correlated with brain size. 
In particular, they claim that their phylogenetic analysis shows that “a clade-specific investment in particular brain formations (prefrontal white matter, prefronto-striatal and higher motor control) *in combination with* increased absolute brain size differentiates great apes (and humans) from other primates” (emphasis added).\n\n\n\n\n---\n\n\n**Suzana**: No, there is no tension. What we see is that the human cerebral cortex as a whole, like the human cerebellum as a whole, and the remaining areas of the brain as a whole, are linearly scaled-up *in their numbers of neurons* compared to the same structures in other primate brains. This means that the relationship between the particular size of a brain structure and its number of neurons is constant and shared across primate species. This does not at all imply or require that all brain areas have the same ratios of numbers of neurons *relative to one another*, which is what mosaic evolution states: given brain regions can become relatively enlarged or reduced compared to others, while still maintaining the same relationship between their number of neurons and mass as seen across species.\n\n\nHaving said that: yes, the human brain as a whole *does* fit the relationship between brain mass and total number of neurons that we found in other primates. As far as I understand, the relative differences that Jeroen Smaers concentrates on are very small – he is looking at the residuals of the relationships, and, as many still do, using normalization to external parameters. 
I believe it is time that we stop assuming that things such as brain mass, or worse, body mass, are true independent parameters (which they very likely aren’t; brain mass, in particular, is the *result* of the cellular composition of the brain and its parts, and as such cannot determine much), and start looking at the absolute values of the different parameters – which is what we have been doing in my lab, trying to keep the number of assumptions to a minimum.\n\n\n\n\n---\n\n\n**Luke**: What are the current estimates of neuron quantities for the largest brains, in elephants and whales? Has your isotropic fractionator process been used on those brains yet, or are there current plans to do so?\n\n\n\n\n---\n\n\n**Suzana**: We have a paper under review on the number of neurons in the brain of the African elephant. The elephant is a great test of our hypothesis that numbers of neurons are a strong limiting factor to cognitive abilities exactly because of its large brain, at 4-5 kg, which is about 3x the mass of the human brain: we predicted that it should have fewer neurons than the human brain, despite being larger than ours.\n\n\nAs it turns out, the answer was even more interesting: the elephant brain as a whole has 3 times the number of neurons of the human brain, 257 billion neurons against an average 86 billion in ours, BUT 98% of those neurons are located in the elephant cerebellum, which turns out to be a major outlier in the numeric relationship between numbers of neurons in the cerebral cortex and cerebellum. While other mammals (humans included) have about 4 neurons in the cerebellum to every neuron in the cerebral cortex, the elephant has 45 neurons in the cerebellum to every neuron in the cerebral cortex. 
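As a quick consistency check on the numbers just quoted, the 98% figure and the 45:1 ratio imply an elephant cortex of roughly 5.6 billion neurons, about a third of the roughly 16 billion in the human cortex (taking the ~19% cortical share mentioned earlier in this series); the rounding below is mine:

```python
elephant_total = 257e9             # reported total neurons in the elephant brain
human_total = 86e9                 # average total neurons in the human brain

cerebellum = 0.98 * elephant_total     # 98% of elephant neurons are cerebellar
elephant_cortex = cerebellum / 45      # 45 cerebellar neurons per cortical one

human_cortex = 0.19 * human_total      # cortex holds ~19% of all brain neurons

print(round(elephant_cortex / 1e9, 1))           # billions of cortical neurons
print(round(elephant_cortex / human_cortex, 2))  # fraction of the human count
```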
All we can do for now is to speculate on the reason for this extraordinary number of neurons in the elephant cerebellum, and the most likely candidate right now is, to me, the fine sensorimotor control of the trunk, a 200-pound appendage that has amazingly fine sensory and motor capabilities, which are known to involve the cerebellum.\n\n\nDespite the enormous number of neurons in the elephant cerebellum, its cerebral cortex, which is twice the size of ours, has only one third of the neurons in an average human cerebral cortex. Taken together, these results suggest that the limiting factor to cognitive abilities is not the number of neurons in the whole brain, but in the cerebral cortex (to which I would add, “provided that the cerebellum has enough neurons to shape activity in the cerebral cortex”).\n\n\nWe don’t have data on whales yet, but that research is underway in our lab – along with research on carnivores, who we predict to have more neurons than the large artiodactyls that they prey upon.\n\n\n\n\n---\n\n\n**Luke**: What other results in this line of research do you hope to have from your lab or other labs in the next 5 years?\n\n\n\n\n---\n\n\n**Suzana**: We’re extending our analysis to the other mammalian branches — xenarthrans, marsupials, carnivores, chiropterans and perissodactyls — and to non-mammalian vertebrates (birds, reptiles, fish, amphibians) and even some invertebrates. The goal is to achieve a full appreciation and understanding of brain evolution, which will give us, amongst other things, a view into the mechanisms that have led to the generation of brain diversity in evolution. Such a comparative analysis also gives us insights into the most basic features of the brain: those that are shared by all mammals. As it turns out, there are some, and they are very revealing. 
One of them, for instance, is the addition of glial cells to the brain, in numbers which seem to be regulated by a self-organized process that is shared across all species examined so far.\n\n\nWe are also focusing our analysis on the prefrontal cortex, that is, the associative areas of the cerebral cortex. While it has been very informative to compare total numbers of neurons in the cerebral cortex across species, it is supposedly those neurons in the associative areas that should really limit the cognitive abilities of the species. This more specific analysis should allow us a new glimpse into the brains of different species and how they compare to the human brain. In this regard, we have a paper in the works comparing the distribution of neurons along the human cerebral cortex with that in other, non-human primate species.\n\n\nWe are also moving into the spatial properties of the tissue: how neurons are distributed, and how this is related to the distribution of astrocytes and vasculature, for instance. But one large question that remains is how numbers of synapses compare across humans and other species. 
That is also something that we are working on.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Suzana!\n\n\nThe post [Suzana Herculano-Houzel on cognitive ability and brain size](https://intelligence.org/2014/04/22/suzana-herculano-houzel/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-22T23:08:14Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "98a50e44f3547c4f8c78d7eff2889a18", "title": "Martin Hilbert on the world’s information capacity", "url": "https://intelligence.org/2014/04/22/martin-hilbert/", "source": "miri", "source_type": "blog", "text": "![Martin Hilbert portrait](http://intelligence.org/wp-content/uploads/2014/04/Hilbert_w664.jpg)[Martin Hilbert](http://www.martinhilbert.net) pursues a multidisciplinary approach to understanding the role of information, communication, and knowledge in the development of complex social systems. He holds doctorates in Economics and Social Sciences, and in Communication, a life-long appointment as Economic Affairs Officer of the United Nations Secretariat, and is part of the faculty of the University of California, Davis. Before joining UCD he created and coordinated the [Information Society Programme of United Nations Regional Commission for Latin America and the Caribbean](http://www.cepal.org/SocInfo). He provided hands-on technical assistance to Presidents, government experts, legislators, diplomats, NGOs, and companies in over 20 countries. He has written several books about digital development and published in recognized academic journals such as Science, Psychological Bulletin, World Development, and Complexity. 
His research findings have been featured in popular outlets like Scientific American, WSJ, Washington Post, The Economist, NPR, and BBC, among others.\n\n\n\n**Luke Muehlhauser**: You lead an ongoing [research project](http://www.martinhilbert.net/WorldInfoCapacity.html) aimed at “estimating the global technological capacity to store, communicate and compute information.” Your results have been published in *[Science](http://www.sciencemag.org/content/332/6025/60)* and other journals, and we used your work heavily in [The world’s distribution of computation](http://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/). What are you able to share in advance about the next few studies you plan to release through that project?\n\n\n\n\n\n---\n\n\n**Martin Hilbert**: When we first started out, we were rather surprised by how little work had been done in the area of quantifying our information and communication capacity. We have statistics about everything and know how many cars and trees there are, and have estimates about the social and economic impact of shoe sales and carbon exhaust, but living in an information age, only very few pioneering studies have been done about how much information there is[1](https://intelligence.org/2014/04/22/martin-hilbert/#footnote_0_10986 \"See this Special Section: Hilbert, M. ‘How to Measure “How Much Information”? Theoretical, Methodological, and Statistical Challenges for the Social Sciences. Introduction.’ International Journal of Communication 6 (2012), 1042–1055.\"). We felt the topic deserved a more coherent treatment. So we set up three basic stages:\n\n\nFirst, creating the basic database: how much is there? How much is stored, how much communicated, how much can we compute? 
This was by far the most tedious part and resulted in a 300-page methodological appendix, where we list the more than 1,100 sources and databases we combined to create [these numbers](http://www.martinhilbert.net/LopezHilbertSupportAppendix2012.pdf). We found some interesting things here, such as the fact that our computational capacity has grown between 2 and 3 times faster than our information and communication capacity since the 1980s. This is not only good news for the machine intelligence community, but also for humankind as a whole: while it currently seems like we are drowning in an information overload that stems from the sustained 25–30% annual growth of information storage and communication capacities, we should be able to make use of the computational power (growing at 60–90% per year) to make sense of, and eventually tame, all of this information.\n\n\nSecond, how can we describe it? We found several surprising things here. Some of the most basic assumptions of the digital revolution literature appeared in a totally new light. For example, usually, it is assumed that the digital revolution has increased global communication equality. The problem with this conclusion is that it is based on the head-count of digital devices and subscriptions as the main indicator, so since there are more phones now than in the 1980s (with a current mobile phone penetration of 90% worldwide), the conclusion usually is that equality must have increased. However, not all phones are equal nowadays. So looking at the distribution of communication capacities, [we found](http://www.martinhilbert.net/TechInfoInequality.pdf) that communication capacity in 1986 was actually more equally distributed than in the 1990s and the 2000s! In the 1980s there were only fixed line phones, but everybody had “equally little”. Afterward, the myriad of communication technologies increased the inequality among countries and within countries. 
Only very recently have we re-established the pre-1990 equality levels in terms of our bits-capacity. In other words: while we are all much better off in absolute terms (“we all have more”), relative information inequality in terms of information capacity continuously opens up with each new innovation (“we are not automatically more equal”). The digital divide turns out to be a moving target! We do not yet have any idea about the social, economic and long-term political consequences of this ever-changing inequality in information and communication capacities among and within countries…\n\n\nAnother traditional assumption of the digital revolution literature is that we now live in a multimedia age, with an unprecedented share of moving videos and audio sounds. Looking at the evolution of the content of the world’s information and communication capacity, we actually [found](http://www.martinhilbert.net/WhatsTheContent_Hilbert.pdf) that the relative share of text and still images captures a larger portion of the total amount than before the digital age! Text merely represented 0.3% of the (optimally compressed) bits that flowed through global information channels in 1986 but grew to almost 30% in 2007. Back in the pre-digital age, text mainly appeared on paper, while telephone channels were filled with audio (voice) and many homes hoarded vast amounts of video material in VHS libraries, etc. The proliferation of alphanumeric text on the web and in vast databases is a phenomenon of the digital age. The fact that the digital age turns out to be a “text and image age” is good news for big-data analysts who extract intelligence from more easily analyzable text and image data.\n\n\nAnd as a last example, we were able to parse out [how much of the global information and communication explosion was driven by more, and how much by better technology](http://www.martinhilbert.net/HowMuchMoreORbetter_Hilbert.pdf). 
We found that technological progress has contributed between two and six times more than additional technological infrastructure to our global bits capacity. While infrastructure actually seems to reach a certain level of saturation (at roughly 20 storage devices per capita and 2 to 3 telecommunication subscriptions per capita), informational capacities are still expanding quickly. We also found that, in addition to progress in better hardware, software for information compression turns out to be an important and often neglected driver of the global growth of technologically-mediated information and communication capacities. We estimate that better compression algorithms alone allowed us to triple our communication capacity: in the 2000s we could send 3 times as much information through the same channel as in the 1980s, thanks to compression. This underlines the importance of measuring information and communication capacities directly in bits and bytes. Traditional statistics provided by the national telecom or science authorities (such as the FCC or NTIA) merely count devices and subscriptions. But this indicator does not tell us as much anymore.\n\n\nAs a natural third step after this rather descriptive work, we are currently working on deepening our understanding of the social, economic and political impact of this information and communication flood. The first question here is: impact on what? By definition, a general-purpose technology (like digital technology) affects all aspects of human conduct, which gives us the free choice for the area of impact. The common theme is that this social change was produced by information, so we have to involve the [bit-metric]. With it, we can measure economic impact as [US$/kbps] or democratic participation by [participation/kbps]. These kinds of measures show us whether somebody makes more or less effective use of the same communication capacity than somebody else. 
The other way around, we can also ask about [kbps/US$] and try to understand why some have more communication capacity while starting from the same economic resources. We can then fine-tune the [bit-metric] and analyze how the communication capacity relates to additional attributes of interest of the capacity itself (e.g. mobile or fixed; individual or shared; private or public; always-on or sporadic, etc.), or to different content. It will enable us to take a more systematic approach to ideas like the information overflow: how much of which kind of content, from which kind of technology has which kind of impact on what? What does the supposed curve of “decreasing returns to information” look like empirically, and in which task? More elaborate indexes and models can even integrate an arbitrary combination of these variables with communication capacity, just as economists have come up with a myriad of ways to evaluate the distribution of monetary currency within a society. In the statistical analysis of economics the unifying ingredient is naturally $, while in the statistical analysis of technologically mediated communication the unifying ingredient is naturally the bit. Obviously, bits only say “how much”, not “how good”. Once we understand the impact of “more” or “less” bits, we can then even go on and ask about “better” or “worse” bits (or more or less suitable kinds of bits). The “better” or “worse” will appear as an unexplained “residual” in our impact studies. In other words: instead of actively defining what is good and what is bad, we corner it by at least taking out the confounder which stems from “more” or “less”. Our main argument is that “quantity” is the lower hanging fruit, and that it must precede any question about quality. Otherwise we will helplessly confuse more- with better- information, and the other way around. So any impact must be normalized on the amount = [impact / bit], and for this we need to start measuring bits and bytes. 
Which brings us back to the reason why we started all of this…\n\n\n\n\n---\n\n\n**Luke**: In the course of your research so far, which trends related to information and information technology have you found to be roughly exponential during a certain period, and which trends have you found to be *not* exponential during some period?\n\n\n\n\n---\n\n\n**Martin**: Social systems are too complex for us to identify pure distributions and dynamics, but within these limitations, they are all “roughly” exponential. Machines’ application-specific capacity to compute information per capita has roughly doubled every 14 months over the past decades in our sample, whereas the per capita capacity of the world’s general-purpose computers has doubled every 18 months. The global telecommunication capacity per capita doubled every 34 months, and the world’s storage capacity per capita required roughly 40 months. Per capita broadcast information has only doubled roughly every 12.3 years, but still grows exponentially. On the one hand, this stems from pure technological progress, such as Moore’s law. On the other hand, this stems from the diffusion of the technology through social networks. This diffusion mechanism follows a logistic S-shaped curve, starting off with exponential growth until an inflection point, after which saturation converts the process into a reverse exponential. So actually we have two exponential processes (technological progress and social diffusion).\n\n\nIn a [recent study](http://martinhilbert.net/Powerlaw_ProgressDiffusion_Hilbert.pdf) I’ve shown that these two exponential processes combined can result in an authentic power-law distribution among the number of technological devices and their performance (so-called power-laws, scale-free-, Pareto- or Zipf distributions consist of two exponential distributions). 
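The mechanism Hilbert describes can be checked with a toy calculation (the parameters below are illustrative, not his estimates): if performance grows exponentially across device “generations” while the number of devices per generation shrinks exponentially, then counts follow a power law in performance.

```python
import math

# Two exponential processes (illustrative parameters only):
# performance of the k-th "generation" grows exponentially,
# while the number of devices at that performance falls exponentially.
g, h = 2.0, 3.0  # performance doubles per step; counts shrink 3x per step
perf = [g ** k for k in range(1, 11)]
count = [1000 * h ** -k for k in range(1, 11)]

# On a log-log plot these points fall on a straight line: a power law
# count ~ perf^(-alpha) with alpha = ln(h) / ln(g).
alpha = math.log(h) / math.log(g)
slopes = [
    (math.log(count[i + 1]) - math.log(count[i]))
    / (math.log(perf[i + 1]) - math.log(perf[i]))
    for i in range(len(perf) - 1)
]
assert all(abs(s + alpha) < 1e-9 for s in slopes)  # constant slope -alpha
print(f"power-law exponent: {alpha:.3f}")
```

The exponent of the resulting power law is just the ratio of the two exponential rates, which is why two smooth exponential processes can line up into the “almost spooky” rank order described next.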
I showed this with the distribution of supercomputers: there are exponentially few supercomputers with exponentially large computational power, and exponentially many supercomputers with exponentially lower computational capacity. Both line up in an almost spooky order, as if some kind of super-organizer directs the U.S. Department of Energy to order one supercomputer with performance x, and Los Alamos Laboratory, IBM and a couple of universities to order some with exactly lesser performance x-α, etc. Of course, there’s no such super-organizer, but social complexity leads to this stable order, grown out of two complementary exponential processes. Recognizing such social patterns can be useful, since it provides predictive insights into the evolution of highly uncertain technology markets.\n\n\n\n\n---\n\n\n**Luke**: Which research investigations would you most like to see (in this line of work) over the next 5 years, whether conducted by yourself or others?\n\n\n\n\n---\n\n\n**Martin**: One very useful contribution would be the continuous reporting of the growth and nature of our information stock and our informational capacities. Together with my co-author Priscila Lopez, we were able to create 20-year-long time series, covering over 60 technological families, but this was a two-person effort, mainly driven out of curiosity. We have not had the resources to sustain this effort continuously. The level of detail and the scope of the inventory should also be extended.\n\n\nFor example, for storage and communication we normalized the amount of information to the optimally available level of compression. This allowed us to gain insights into the amount of information, not merely the available hardware infrastructure (the same bandwidth can store/communicate different amounts of information, depending on how compressed the content is). For the case of computation, however, we simply had to use MIPS, which is a hardware metric. 
Of course, during recent decades, computational algorithms also became more efficient. The same hardware can certainly solve several of the same problems much faster now than 20 years back. We didn’t have the resources to go into this distinction. The continuous effort of recording the growth and the nature of our information capacity will surely become an important cornerstone of understanding reality in a digital world, and is therefore indispensable.\n\n\nBesides this empirical effort, I think it will be important that we deepen our theoretical understanding of the way social organization and social dynamics are currently being “algorithmified”. Social procedures, routines, habits, customs, and also laws have always been the central cornerstones of civilization. These are currently being digitized, some in a more rigid, others in a less rigid fashion. Big Data is important here, as are agent-based computer simulations, and all kinds of decision support systems. This leads to profound changes in what society is made of. We still lack a deeper understanding of the strengths, opportunities and threats of this ongoing process of social creative destruction.\n\n\n\n\n---\n\n\n**Luke**: Lack of information can be a major barrier for this kind of research. Sometimes the data you want to collect was simply never recorded by anyone, or perhaps it was recorded but never released publicly. If you could get your wish, what would change about how data about information and information technologies was recorded and disseminated? Could these changes be executed at a policy level, or an industry level, or some other level, if there was enough of a push for them?\n\n\n\n\n---\n\n\n**Martin**: I think this conception of the lack of records and data is wrong.\n\n\nIt is true that data on ICT could be improved, and for the past 15 years I focused a large part of my effort at the United Nations Secretariat on adding ICT questions into household and business surveys worldwide. 
We got quite far and have achieved important improvements in this regard.[2](https://intelligence.org/2014/04/22/martin-hilbert/#footnote_1_10986 \"Partnership On Measuring ICT For Development – The Global Information Society: a Statistical View\")\n\n\nHowever, after years of often fatiguing international policy dialogues (just imagine: you are not the only one lobbying for including “just one more question” into the national household survey!), and considering that the collective policy dynamic often leads to the lowest common denominator (often obsolete indicators…), I came to the conclusion that it might be easier to simply start creating an alternative database from scratch and to lead by example. That was the starting point of our undertaking that eventually included over 1,100 different databases, business records, and statistics. When I first proposed the idea of taking a 20-year inventory of “[The World’s Technological Capacity to Store, Communicate, and Compute Information](http://www.ris.org/uploadi/editor/13049382751297697294Science-2011-Hilbert-science.1200970.pdf)”, we received cynical remarks even from friendly (and very well-recognized) colleagues in the field. The reaction included concepts like “utopian megalomania” (this also led to the Acknowledgments in the Science publication, which state that we thank “colleagues who motivated us by doubting the feasibility of this undertaking”, p.65). However, there is more information on ICT out there than we think, and the very same digital age often allows us to come up with proxies that enable very good estimates. The Big Data paradigm is not to be underestimated. It encourages us to embrace the messiness of unstructured data, to look for highly correlating proxies, and to make up for it with redundancy from complementary sources. The wealth of “incomplete/unstructured/messy” sources in a Big Data world often trumps the lack of one clean and centralized source. 
This also goes for “datafying” the Big Data revolution itself!\n\n\nThis being said, if I “could get my wish”, of course it would be nice if eventually both lines of work would converge, that is, if the global statistical machinery would start to also consider “information” as something worth measuring continuously. Until now the UN and others have started to consider it (e.g. see Chapter 5 in [*Measuring the Information Society*](http://www.itu.int/ITU-D/ict/publications/idi/material/2012/MIS2012_without_Annex_4.pdf)), but not yet embraced the idea fully. However, I think for this to happen it will still need much effort and we have to be proactive and show (a) that it’s possible; (b) that it’s worthwhile; and (c) which kinds of stats are useful and which ones are not (which is still subject to an open [trial and error process](http://ijoc.org/index.php/ijoc/article/viewFile/1318/746.pdf)).\n\n\n\n\n---\n\n\n**Luke**: Thanks, Martin!\n\n\n\n\n---\n\n1. See this Special Section: Hilbert, M. ‘[How to Measure “How Much Information”? Theoretical, Methodological, and Statistical Challenges for the Social Sciences. Introduction.](http://ijoc.org/index.php/ijoc/article/viewFile/1318/746.pdf)’ International Journal of Communication 6 (2012), 1042–1055.\n2. 
[Partnership On Measuring ICT For Development](http://www.itu.int/ITU-D/ict/partnership/) – [The Global Information Society: a Statistical View](http://unctad.org/en/docs/LCW190_en.pdf)\n\nThe post [Martin Hilbert on the world’s information capacity](https://intelligence.org/2014/04/22/martin-hilbert/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-22T19:43:18Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "29e35fa67a9a6b05b4cf0328cfbfd2b0", "title": "Why MIRI?", "url": "https://intelligence.org/2014/04/20/why-miri/", "source": "miri", "source_type": "blog", "text": "I wrote a short profile of MIRI for a forthcoming book on [effective altruism](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/). It leaves out [many important details](http://ukcatalogue.oup.com/product/9780199678112.do), but hits many of the key points pretty succinctly:\n\n\n\n> The [Machine Intelligence Research Institute](http://intelligence.org/) (MIRI) was founded in 2000 on the premise that creating smarter-than-human artificial intelligence with a positive impact — “Friendly AI” — might be a particularly efficient way to do as much good as possible.\n> \n> \n> First, because future people vastly outnumber presently existing people, we think that “From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” (See Nick Beckstead’s [*On the Overwhelming Importance of Shaping the Far Future*](http://intelligence.org/2013/07/17/beckstead-interview/).)\n> \n> \n> Second, as an empirical matter, we think that smarter-than-human AI is humanity’s most significant point of leverage on that “general trajectory along which our descendants develop.” If we handle advanced AI wisely, it could produce tremendous goods which endure for billions 
of years. If we handle advanced AI poorly, it could render humanity extinct. No other future development has more upside or downside. (See Nick Bostrom’s [*Superintelligence: Paths, Dangers, Strategies*](http://ukcatalogue.oup.com/product/9780199678112.do).)\n> \n> \n> Third, we think that Friendly AI research is tractable, urgent, and uncrowded.\n> \n> \n> *Tractable*: Our staff researchers and visiting workshop participants tackle open problems in Friendly AI theory, such as: How can we get an AI to preserve its original goals even as it learns new things and modifies its own code? How do we load desirable goals into a self-modifying AI? How do we ensure that advanced AIs will cooperate with each other and with modified versions of themselves? This work is currently at a theoretical stage, but we are making clear conceptual progress, and growing a new community of researchers devoted to solving these problems.\n> \n> \n> *Urgent*: Surveys of AI scientists, as well as our own estimates, predict the invention of smarter-than-human AI in the 2nd half of the 21st century, if not sooner. Unfortunately, mathematical challenges such as those we need to solve to build Friendly AI often require several decades of research to overcome, with each new result building on the advances that came before. Moreover, because the invention of smarter-than-human AI is so difficult to predict, it may arrive with surprising swiftness, leaving us with little time to prepare.\n> \n> \n> *Uncrowded*: Very few researchers, perhaps fewer than five worldwide, are explicitly devoted to full-time Friendly AI research.\n> \n> \n> The overwhelming power of machine superintelligence will reshape our world, dominating other causal factors. Our intended altruistic effects on the vast majority of beings who will ever live must largely reach them via the technical design of the first self-improving smarter-than-human AIs. 
Many ongoing efforts — on behalf of better altruism, better reasoning, better global coordination, etc. — will play a role in this story, but we think it is crucial to also *directly address the core challenge*: the design of stably self-improving AIs with desirable goals. Failing to solve that problem will render humanity’s other efforts moot.\n> \n> \n> If our mission appeals to you, you can either [fund our research](http://intelligence.org/donate) or [get involved in other ways](https://intelligence.org/get-involved/).\n> \n> \n\n\n![miri_logo](https://intelligence.org/wp-content/uploads/2013/02/miri_logo.png)\nThe post [Why MIRI?](https://intelligence.org/2014/04/20/why-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-21T01:17:27Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "6608f26fab1c78f61866dc834691866e", "title": "Thomas Bolander on self-reference and agent introspection", "url": "https://intelligence.org/2014/04/13/thomas-bolander/", "source": "miri", "source_type": "blog", "text": "![Thomas Bolander portrait](http://intelligence.org/wp-content/uploads/2014/04/Bolander_284.jpg) [Thomas Bolander, Ph.D.](http://www.imm.dtu.dk/~tobo/), is associate professor at DTU Compute, Technical University of Denmark.\n\n\nHe is doing research in logic and artificial intelligence with primary focus on the use of logic to model human-like planning, reasoning and problem solving.\n\n\nOf special interest is the modelling of social phenomena and social intelligence with the aim of creating computer systems that can interact intelligently with humans and other computer systems.\n\n\n\n**Luke Muehlhauser**: [Bolander (2003)](http://www2.mat.dtu.dk/publications/phd-thesis/2003/phd-2003-ThomasBolander.pdf) and some of your subsequent work studies paradoxes of [self-reference](http://plato.stanford.edu/entries/self-reference/) in the context of logical/computational agents, as does e.g. 
[Weaver (2013)](http://arxiv.org/abs/1312.3626). Do you think your work on these paradoxes will have practical import for AI researchers who are designing computational agents, or are you merely using the agent framework to explore the more philosophical aspects of self-reference?\n\n\n\n\n\n---\n\n\n**Thomas Bolander**: First of all, let me explain why I even started to study self-reference in the context of logical/computational agents. In [my PhD](http://www2.mat.dtu.dk/publications/phd-thesis/2003/phd-2003-ThomasBolander.pdf) I was working on how to formalise agent introspection, that is, agents representing and reasoning about their own mental states. One of the main motivations for this is that agents need introspection in order to understand their own role in social contexts, e.g. to ensure not to “be in the way of others” and to reason about how you can help other agents to achieve their goals. Another motivation is that introspection is essential in learning: realising the shortcomings of one’s own knowledge and routines is required for a deliberate, goal-directed approach to gaining new knowledge and improving one’s problem-solving skills.\n\n\nIntrospection can be into one’s own knowledge, beliefs, intentions, plans, etc. In my PhD I focused on knowledge. Representing a propositional attitude such as knowledge in a logical formalism can be done in two distinct ways, either by a predicate or by a modal operator. In most logical settings, the predicate solution is more expressive than the modal operator solution, but with high expressivity sometimes also comes trouble like paradoxical sentences leading to inconsistency. When formalising knowledge as a predicate in a first-order predicate logic, assuming a fairly reasonable, though idealised, set of properties (axioms) of knowledge leads to trivialising inconsistency (this is Montague’s theorem, see e.g. 
my entry on [Self-reference](http://plato.stanford.edu/entries/self-reference/) in the [Stanford Encyclopedia of Philosophy](http://plato.stanford.edu/)). Inconsistency arises here because the logic becomes sufficiently expressive to express paradoxical self-referential sentences concerning knowledge like “this sentence is not known”. To remedy this problem, there are several possible routes to take: 1) move to a non-classical logic, e.g. a paraconsistent one; 2) restrict the set of axioms of knowledge or their applicability; 3) choose a modal operator approach instead.\n\n\nIn my PhD, I primarily considered option 2. Today my preferred option is 3, using modal logic instead of first-order predicate logic. Option 3 seems to be the currently most preferred option in both the logic and AI communities, with massive amounts of work on epistemic logic, temporal epistemic logic and dynamic epistemic logic based on modal logic. So why do people choose the modal logic approach? Well, one reason is certainly to avoid the problems of self-reference and inconsistency just mentioned. Another is that the modal logic approach often has lower computational complexity, which is essential for practical applications. So in a sense, to answer the question of “whether these paradoxes will have a practical import for AI researchers”, a first answer could be: they already have. They already have in the sense that people shifted attention to other logics where these problems don’t appear.\n\n\nThis answer is of course not entirely satisfactory, though. The way modal logic approaches solve problems of self-reference is by choosing logics in which self-referential sentences cannot even be expressed. So the next question becomes: Are we going to miss them (the self-referential sentences)? Do they have an important role to play in reasoning that will eventually be required to build computational agents with strong introspective capabilities? This question is much harder to answer. 
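The inconsistency behind Montague’s theorem can be sketched in its simplified Kaplan–Montague “knower paradox” form (this is a standard textbook sketch, not Bolander’s own presentation). Suppose a theory strong enough for diagonalization has a knowledge predicate K satisfying:

```latex
\begin{align*}
&\text{(T)}\quad \vdash K(\ulcorner\varphi\urcorner) \rightarrow \varphi
  \qquad &&\text{(factivity)}\\
&\text{(N)}\quad \text{if } \vdash \varphi \text{ then } \vdash K(\ulcorner\varphi\urcorner)
  \qquad &&\text{(theorems are known)}
\end{align*}
% By the diagonal lemma there is a sentence \delta with
%   \vdash \delta \leftrightarrow \neg K(\ulcorner\delta\urcorner).
% From (T), \vdash K(\ulcorner\delta\urcorner) \rightarrow \delta, hence
% \vdash K(\ulcorner\delta\urcorner) \rightarrow \neg K(\ulcorner\delta\urcorner),
% so \vdash \neg K(\ulcorner\delta\urcorner), and therefore \vdash \delta.
% By (N), \vdash K(\ulcorner\delta\urcorner), a contradiction.
```

The diagonal sentence δ formalizes exactly the “this sentence is not known” example above; the modal operator approach blocks the argument because no such δ can be formed.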
The modal approach implies that knowledge becomes explicitly stratified. On the modal approach one can have first-order knowledge (knowledge about ontic facts), second-order knowledge (knowledge about knowledge about ontic facts), third-order knowledge (knowledge about second-order knowledge) etc., but there is no fixed-point allowing knowledge statements to refer to themselves. This stratification seems to be cognitively plausible, however. From studies in cognitive science, in particular in the so-called false-belief tasks, it is known that children only start to master second-order knowledge around the age of 4, and only third-order knowledge around the age of 7-8. And even adults seldom reason beyond third-order or at most fourth-order. This seems to speak in favour of some kind of explicit stratification also in the case of humans, and with a fairly fixed limit on the depth of reasoning. This picture fits better with the modal approach, where the stratification is also explicit (modal depth/depth of models). Furthermore, in the modal approach it is more straightforward to mimic limits on the depth of reasoning. Of course, having a fixed limit on one’s depth of reasoning means that there will be certain problems you can’t solve, like [the muddy children puzzle](http://sierra.nmsu.edu/morandi/coursematerials/MuddyChildren.html) with more children than your depth limit. But even within such limitations, the experience from human cognition suggests that it is still possible to do quite advanced social cognition and introspection.\n\n\nSo maybe the self-reference problems that are likely to follow from choosing a non-stratified approach are indeed more relevant in other areas such as the foundation of mathematics (set theory, incompleteness theorems) and the foundations of semantics (theories of truth) than for computational agents. 
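The muddy children puzzle mentioned above shows concretely how the required depth of reasoning grows with the number of muddy children; a minimal possible-worlds simulation (the function name and encoding are my own illustration, not Bolander’s formalism) makes the round count explicit:

```python
from itertools import product

def muddy_children_rounds(actual):
    """Return the round in which the muddy children first know their own state.

    `actual` is a tuple of booleans (True = muddy). A world child i considers
    possible agrees with `actual` on everyone except possibly i. The parent's
    announcement removes the all-clean world; each round of public silence
    removes every world in which some child would already have known.
    """
    n = len(actual)
    assert any(actual), "the parent's announcement must be true"
    worlds = {w for w in product((False, True), repeat=n) if any(w)}

    def knows(i, w, ws):
        # Child i knows its state in world w if all surviving worlds that
        # match what i sees (everyone else's foreheads) agree on position i.
        matching = {v for v in ws
                    if all(v[j] == w[j] for j in range(n) if j != i)}
        return len({v[i] for v in matching}) == 1

    for rnd in range(1, n + 1):
        if any(knows(i, actual, worlds) for i in range(n) if actual[i]):
            return rnd
        # Nobody stepped forward: eliminate worlds where someone would have.
        worlds = {w for w in worlds
                  if not any(knows(i, w, worlds) for i in range(n))}

# With k muddy children, knowledge arrives only in round k.
assert muddy_children_rounds((True, False, False)) == 1
assert muddy_children_rounds((True, True, False)) == 2
assert muddy_children_rounds((True, True, True)) == 3
```

With k muddy children the solution requires k rounds of elimination, i.e. reasoning of depth k, which is the sense in which a fixed depth limit caps the number of children an agent can handle.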
I still find it a bit surprising, though, that I am happier to accept an explicitly stratified approach to propositional attitudes of agents than as a solution to the self-reference problems in set theory: In set theory, I find the non-stratified approach of [Zermelo-Fraenkel set theory](http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory) much more reasonable than the stratified approach of [type theory](http://en.wikipedia.org/wiki/Type_theory). But even if self-reference is not going to play a big practical role in the development of computational agents, it will probably always lurk in the background and affect people’s choices of formalisms – as well as play an important foundational role in the same way it has in mathematics.\n\n\n\n\n---\n\n\n**Luke**: What are some examples of work that addresses (roughly) the kind of agent introspection you discuss in your thesis, but uses the modal logic approach to the challenge, or some other stratified approach?\n\n\n\n\n---\n\n\n**Thomas**: Most of the examples I give in the thesis involve quantifying over knowledge or beliefs, like “I believe that some of my beliefs are false” or “I believe that Alice knows more about X than I do”. To quantify over knowledge or beliefs in the operator approach (modal logic approach), one needs at least first-order modal logic. When I left the predicate approach for the operator approach after my PhD, I essentially went all the way “down” to propositional modal logic, so I am not up to date with the uses of first- or higher-order modal logic for representations of introspective knowledge. I can give a couple of relevant references, but they are not very recent. One is the situation calculus extended with a modal knowledge operator as presented in the book *[The Logic of Knowledge Bases](http://smile.amazon.com/Logic-Knowledge-Bases-Hector-Levesque/dp/0262122324/)* by Levesque & Lakemeyer (MIT Press, 2001). 
Another one is the book *[Reasoning about Rational Agents](http://smile.amazon.com/Reasoning-Rational-Intelligent-Robotics-Autonomous/dp/0262515563/)* by Wooldridge (MIT Press, 2000). Both of these present logics that extend first-order predicate logic with, among other things, a knowledge operator, and they investigate the use of these logics as the basis for computational agents.\n\n\nMy choice of “downgrading” to propositional modal logic was not because I find quantifying over beliefs uninteresting or unimportant, but simply because I started to become very interested in the dynamics of knowledge, i.e., how the knowledge of an agent changes when an action is executed or an exogenous event occurs. This was essential for my goal of being able to construct planning agents with higher-order reasoning capabilities. I found a logical framework that suited my needs perfectly, dynamic epistemic logic (DEL). Since DEL is mostly studied in the context of propositional modal logic, this is where I currently find myself working. To my knowledge there is only one paper on first-order dynamic epistemic logic, “[Dynamic term-modal logic](http://www.philos.rug.nl/~barteld/kcpdtml.pdf)” by Barteld Kooi.\n\n\n\n\n---\n\n\n**Luke**: Are there examples of self-reflective computational agents (of the sort we’ve been discussing) in current industrial or governmental use, or is this work still entirely occurring at a more theoretical stage?\n\n\n\n\n---\n\n\n**Thomas**: To my knowledge it is still at the theoretical stage. However, its importance is becoming increasingly appreciated also among more applied AI researchers. As mentioned in my previous answer, I am now looking into epistemic planning: planning agents with higher-order reasoning capabilities (agents reasoning about the knowledge of themselves and other agents). The automated planning community is starting to become very interested in this. 
Bernhard Nebel, who is a very prominent figure in automated planning and robotics, even called epistemic planning the “next big thing”. The reason is that planning formalisms and systems have to become general and expressive enough to deal with robots acting in uncertain environments, and environments containing other agents (multi-agent systems). This calls for epistemic reasoning in such systems.\n\n\nThere is also an increased awareness of the importance of social intelligence in AI systems, if these systems are to efficiently communicate and cooperate with humans. This could be in something like hospital robots (e.g. the [TUG robot](http://www.sciencedirect.com/science/article/pii/S0262407909632474), which is very well-known for its current *lack* of social intelligence [Colin Barras, *New Scientist*, vol. 2738, 2009]), intelligent personal assistants like Siri on iPhone or Google Now on Android, and earthquake rescue robots that have to work in mixed teams with humans.\n\n\nBut integrating the existing formalisms for introspection and social intelligence into e.g. planning systems and then embedding all of that in a robot is something that is still in its infancy. The most successful attempt up to now is probably the social robot bartender by Ron Petrick and co-authors [Petrick & Foster, ICAPS, 2013][1](https://intelligence.org/2014/04/13/thomas-bolander/#footnote_0_10963 \"Petrick, R. and Foster, M. Planning for Social Interaction in a Robot Bartender Domain. ICAPS 2013. Slides available.\"). They won a best paper award for this at the major international planning conference, ICAPS, last summer. Their underlying logical formalism is a restricted version of first-order epistemic logic (modal operator approach). The logic is restricted in many important ways in order to quickly get something with reasonable computational performance. 
The full logics of introspection and higher-order reasoning are still too expressive to be computationally feasible, and currently more work is needed to find the best trade-off between expressivity and computational efficiency that will make higher-order reasoning agents more attractive to the more applied researchers and the industry.\n\n\n\n\n---\n\n\n**Luke**: In general, what kinds of heuristics do you and your colleagues use to decide how to proceed with theoretical research in a broad problem category like “self-reflective reasoning in computational agents,” which may be 10-30 years from practical application? How do you decide which subproblems to work on, and when to pivot to a different sub-problem or a different approach? And how willing are grantmakers to fund that kind of work, in your experience?\n\n\n\n\n---\n\n\n**Thomas**: Since I’m using logical formalisms to try to capture “self-reflective reasoning”, any development of a new “theory” has a number of standard and clearly defined steps: 1) You define a suitable logical language and its semantics; 2) You define suitable calculi for reasoning within the logic (proof theory); 3) You give examples to show that your formalisms can capture important aspects of self-reflective reasoning; 4) You investigate the computational complexity and relative expressive strength of your logic; 5) You implement the calculi for reasoning, and optimise them if scalability is important for your proof of concept. Of course there are many choices to be made when you define your logical language, but this is usually driven by concrete examples: I want to be able to express these kinds of sentences or these lines of reasoning. 
Or, conversely: I don’t want my language to be able to express this, because it is known to lead to inconsistency or undecidability.\n\n\nFor instance, in my new work, a source of inspiration has been the false-belief tasks used in cognitive psychology to test the [Theory of Mind](http://en.wikipedia.org/wiki/Theory_of_mind) (ToM) of humans. Theory of Mind is the ability to attribute mental states – beliefs, desires, intentions, etc. – to oneself and others. A goal for my logical formalism is that it can robustly deal with any of the existing false-belief tasks used on humans, and some suitably general closure of that set. This naturally drives research activities into finding logics that are suitable for representing that kind of reasoning in a natural way. It might be 20-50 years before e.g. hospital robots have any kind of general Theory of Mind, but that is exactly because our understanding of Theory of Mind reasoning and how to represent it formally and in a computer is still rather limited. First we need to understand the concept, then we need to find out how to formalise it, then we need to tame the inherent computational complexity, and then finally we are hopefully able to implement it in a practical application. This is the process that e.g. [description logics](http://en.wikipedia.org/wiki/Description_logic) have been going through. Description logics are logics for representing and reasoning about terminological knowledge. Today reasoning calculi for description logics have been commercialised and are being used e.g. on large medical databases, but it has obviously been a long process since the early days of description logic (description logics are in fact also a type of modal logic, more precisely a type of hybrid logic).\n\n\nThere is of course also a risk that what we are doing theoretically about self-reflective reasoning in computational agents will *never* have any practical impact. 
This is probably why some people start at the other end, and insist that everything they produce should from the outset be implemented in, say, a working robot. However, if you do this, you face another set of problems. Working on practical implementations of robots before the underlying concepts and mechanisms have been fully understood can often lead to solutions that are somewhat ad hoc and far from being general and robust. There are e.g. researchers who have built robots able to pass certain false-belief tasks, but they can only pass first-order false-belief tasks, and it is not clear how the architecture can be generalised.\n\n\nIn terms of grants, there is always more money in applied research than in theoretical/foundational research. I don’t think getting funding for theoretical work on self-reflective reasoning in computational agents is any easier or harder than for other foundational work.\n\n\n\n\n---\n\n\n**Luke**: You give the example of description logics, which I take it were developed several decades before they saw significant commercial application? What other examples of that kind are you aware of, where there was some early theoretical work inspired by fairly abstract motivations (“In the future, we’ll want systems that do X-ish type things, so let’s build some toy models of X and see where things go…”) that wasn’t commercialized until a decade or more later?\n\n\n\n\n---\n\n\n**Thomas**: Description logics go back to the beginning of the 80s. Efficient reasoners with good practical performance on large databases described in expressive description logics started appearing in the beginning of the 00s. Reasoners for these expressive description logics have very bad worst-case complexities (e.g. 2EXP), so it has been quite impressive, and surprising, to see how well they scale to large databases.\n\n\nWith respect to other examples, I think that most of the genuinely new theories in AI start out being applied only to toy examples. 
Also, almost all of mathematics falls into the category of “theoretical work inspired by fairly abstract motivations”. If you e.g. look at all the mathematics needed to construct a physics engine for a modern 3D computer game, then none of this was originally made with computer games in mind, and most of it was developed without any applications in mind at all. In AI, of course, we tend to always have certain applications or sets of applications in mind, and we aim at shorter time intervals from theoretical developments to commercialisation. The example of the physics engine is about creating mathematical models of real-world physical phenomena and then implementing these models in a computer. AI is not that different from this. AI is also about creating mathematical models of some part of our reality, but in this case it is mathematical models of (human) reasoning. There is no doubt that reaching this goal involves, as in the case of physics engines, significant work at all levels from the most theoretical foundations to the most applied implementation-oriented stuff.\n\n\nI think it is dangerous to dismiss any new ideas in AI that at first apply only to toy examples. That might be a necessary first step, as was e.g. the case with description logics. But it is equally dangerous to forget all about scalability, and think that “this is not my business to worry about”. If you don’t worry at all about scalability and applicability, then you are certainly much more likely to come up with ideas that will never scale or be applied. The researchers I respect the most are those who help make ends meet: theoreticians with strong knowledge and intuition about applications, and applied researchers with strong theoretical knowledge and interest in applying new theory.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Thomas!\n\n\n\n\n---\n\n1. Petrick, R. and Foster, M. 
[Planning for Social Interaction in a Robot Bartender Domain](http://homepages.inf.ed.ac.uk/rpetrick/papers/icaps2013.pdf). [ICAPS 2013](http://icaps13.icaps-conference.org/). [Slides available](http://homepages.inf.ed.ac.uk/rpetrick/papers/icaps2013-slides.pdf).\n\nThe post [Thomas Bolander on self-reference and agent introspection](https://intelligence.org/2014/04/13/thomas-bolander/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-13T18:17:18Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "acc298c6f03fda501296dde97e72378d", "title": "Jonathan Millen on covert channel communication", "url": "https://intelligence.org/2014/04/12/jonathan-millen/", "source": "miri", "source_type": "blog", "text": "![Jonathan Millen portrait](http://intelligence.org/wp-content/uploads/2014/04/Jonathan_Millen.jpg) [Jonathan Millen](http://jonmillen.com) started work at the [MITRE Corporation](http://www.mitre.org/) in 1969, after graduation from [Rensselaer Polytechnic Institute](http://www.rpi.edu/) with a Ph.D. in Mathematics. He retired from MITRE in 2012 as a Senior Principal in the Information Security Division. From 1997 to 2004 he enjoyed an interlude as a Senior Computer Scientist in the [SRI International Computer Science Laboratory](http://www.csl.sri.com/). He has given short courses at RPI Hartford, University of Bologna Summer School, ETH Zurich, and Taiwan University of Science and Technology. He organized the [IEEE Computer Security Foundations Symposium](http://www.ieee-security.org/CSFWweb/) (initially a workshop) in 1988, and co-founded (with S. Jajodia) the [Journal of Computer Security](http://www.iospress.nl/journal/journal-of-computer-security/) in 1992. 
He has held positions as General and Program Chair of the IEEE Security and Privacy Symposium, Chair of the IEEE Computer Society Technical Committee on Security and Privacy, and associate editor of the ACM Transactions on Information and System Security.\n\n\nThe theme of his computer security interests is verification of formal specifications, of security kernels and cryptographic protocols. At MITRE he supported the DoD Trusted Product Evaluation Program, and later worked on the application of [Trusted Platform Modules](http://en.wikipedia.org/wiki/Trusted_Platform_Module). He wrote several papers on information flow as applied to covert channel detection and measurement. His 2001 paper (with V. Shmatikov) on the Constraint Solver for protocol analysis received the SIGSAC Test of Time award in 2011. He received the ACM SIGSAC Outstanding Innovation award in 2009.\n\n\n\n**Luke Muehlhauser**: Since you were a relatively early researcher in the field of covert channel communication, I’d like to ask you about the field’s early days, which are usually said to have begun with [Lampson (1973)](http://www.cs.umd.edu/%7Ejkatz/TEACHING/comp_sec_F04/downloads/confinement.pdf). Do you know when the first covert channel attack was uncovered “in the wild”? My impression is that Lampson identified the general problem a couple *decades* before it was noticed being exploited in the wild; is that right?\n\n\n\n\n---\n\n\n**Jonathan Millen**: We might never know when real covert channel attacks were first noticed, or when they first occurred. When information is stolen by covert channel, the original data is still in place, so the theft can go unnoticed. Even if an attack is discovered, the victims are as reluctant as the perpetrators to acknowledge it. This is certainly the case with classified information, since a known attack is often classified higher than the information it compromises. 
The only evidence I have of real attacks before 1999 is from Robert Morris (senior), a pioneer in UNIX security, and for a while the Chief Scientist of the National Computer Security Center, which was organizationally within NSA. He stated at a security workshop that there had been real attacks. He wouldn’t say anything more; it was probably difficult enough to get clearance for that much. \n\n\n\n\n\n\n---\n\n\n**Luke**: From your own perspective, what are some of the most interesting developments in covert channel communication since you wrote “[20 Years of Covert Channel Modeling and Analysis](http://commonsenseatheism.com/wp-content/uploads/2014/03/Millen-20-Years-of-Covert-Channel-Modeling-and-Analysis.pdf)” in 1999?\n\n\n\n\n---\n\n\n**Jonathan**: That’s a tough question, for several reasons. Like many research areas that have been in existence for many years, covert channel research has splintered into several specialties. Some of the subtopics are noninterference models, language-based information control via type theory, steganography, and so on.\n\n\nWhile specialization is necessary for deep progress, I wish there were more attention given to the pragmatic “so what” questions about covert channels. For example, information flow security is often defined in terms of noninterference, which can, unfortunately, be defined in several subtly different ways. All of them rely on some form of behavioral equivalence, a strong condition guaranteeing that an unauthorized observer cannot tell when a more privileged user is present. Even one bit, or a fraction of a bit, of information about the protected party’s data or actions is considered too much. Yet, what we really want to know is whether a recurrent channel can be set up, yielding a large or unbounded amount of information. Furthermore, information in the Shannon sense may or may not be useful. 
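To make the mechanism concrete, here is a minimal Python simulation of a storage covert channel (my own illustrative toy; the class and function names are invented). The high side never copies the secret anywhere the low side can read; it only toggles a shared, legitimately observable resource, yet a low side sampling that resource once per agreed time slot recovers the data at one bit per slot:

```python
class SharedLock:
    """Shared system state both parties may legitimately observe:
    a lock that is either held or free. No data is copied directly."""
    def __init__(self):
        self.held = False

def transmit(secret):
    """Simulate the channel: one bit of the secret per time slot."""
    lock = SharedLock()
    received = []
    for ch in secret:
        byte = 0
        for i in range(8):
            # High side: modulate the lock according to the next bit.
            lock.held = bool((ord(ch) >> i) & 1)
            # Low side: sample the (legitimately visible) lock state.
            byte |= int(lock.held) << i
        received.append(chr(byte))
    # The secret is reconstructed on the low side, while the original
    # data was never read directly, so the theft leaves no trace on it.
    return "".join(received)
```

Real channels are far noisier and slower, often fractions of a bit per observation, which is why measuring capacity, and not merely detecting the channel, matters.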
One paper that attracted my attention is “[Quantifying information flow with beliefs](https://www.cs.cornell.edu/andru/papers/jbelief.pdf)”, by Clarkson, Myers, and Schneider, 2009, because it dealt with the accuracy of information.\n\n\nAnother pragmatic issue is the trustworthiness of the system functionality. In general, I am distrustful of security approaches that depend on access control software or the design of the language in which it is written. Even the firmware and hardware below the operating system kernel are suspect. In the last few years, my main research interest was in trusted computing, as supported by trusted platform modules, with their capability — available but largely unused in modern platforms — to cryptographically check the integrity of system software by a bootstrap sequence starting with the tamper-resistant TPM. And still, having done that, we know only that the system software is genuine, not that it is correct or free from covert channels. It is encouraging that work in software and hardware verification continues, but commonly used systems are still beyond its reach.\n\n\nComplexity is the enemy of security. The more you have to depend on, the less likely it is that you can understand it well enough to exclude vulnerabilities. A couple of corollaries: the more complex your security policy is, the less robust it is, because more complex software is needed for it. And the more specialized a system component is to support security, the more attractive it is as a target for an attack. I suppose that the optimistic view of this is that if we use a typically complex modern system, covert channels are the least of our worries!\n\n\n\n\n---\n\n\n**Luke**: From your many decades of experience in computer security, do you know of cases where someone was worried about a computer security or safety challenge that wasn’t imminent but maybe one or two decades away, and they decided to start doing research to prepare for that challenge anyway — e.g. 
perhaps because they expected the solution would require a decade or two of “serial” research and/or engineering work, with each piece building on the ones before it, and they wanted to be prepared to meet the challenge near when it arrived? Lampson’s early identification of the “confinement problem” looks to me like it might be one such example, but maybe I’m misreading the history there.\n\n\n\n\n---\n\n\n**Jonathan**: In computer security, what usually happens is that someone realizes that a vulnerability already exists, but it is not clear how long it will take for a malicious party to take advantage of it. Another factor delaying the onset of relevant research and development is that long-term efforts aimed at removing vulnerabilities are risky and their results may be inadequate or eclipsed by different problems in the future.\n\n\nIf the difficulty is social, such as getting standards updated, the difficulty is real, but the basic engineering knowledge is mostly available early on. For example, the TCP sequence number attack for session hijacking was pointed out by Morris in 1985, taken advantage of by Mitnick in 1995, led to Bellovin’s standards-related recommendations in 1996, and only recently has the availability of encryption-based authentication and IPv6 brought a widespread solution within reach.\n\n\nIf the difficulty is in the engineering, such as the development of security kernels and mandatory access policies to defeat Trojan horses, or the installation and use of TPMs, there is an industry to be moved, one driven by the need for commercial viability of new products. And after all that effort, some vulnerabilities remain.\n\n\nComputer scientists can help by developing powerful general-purpose tools and techniques with fundamental computer science goals — tools that are not problem-dependent. By this, I mean such things as verification tools and model checkers, as well as encryption algorithms and applications. 
Such tools can ease the engineering burden when new problems and design ideas emerge. Development of these tools deserves decades of research building on prior work, and those who are capable of conducting this activity successfully should be encouraged and supported.\n\n\n\n\n---\n\n\n**Luke**: You mention verification tools and model checkers. I was recently speaking to a significant figure in the computer security and safety community (I don’t have permission to give her name) about the common claim that formal methods can catch corner-case bugs which are unlikely to be caught by testing alone. She mentioned that if the formal methods community has ever subjected formal methods to careful experimentation to demonstrate that verification really does catch bugs that aren’t likely to be found by testing, she wasn’t aware of it. Are you aware of any such demonstrations? If not, how would *you* describe the value in formal methods for safety and security purposes?\n\n\n\n\n---\n\n\n**Jonathan**: It seems unfair to me to ask for careful experimentation to justify formal methods in comparison to testing. One reason for this, pointed out when formal methods were being recommended for security kernel verification, is that very few security kernels are actually developed, so any experimental results are not going to be statistically significant.\n\n\nOne experiment that was done in the very beginning of the history of formal specifications was by David Parnas, credited by some with inventing the notion of “formal nonprocedural specification.”[1](https://intelligence.org/2014/04/12/jonathan-millen/#footnote_0_10967 \"Parnas (1972).\") In fact, many of us referred to such specifications at first as “Parnas specifications”, but he let it be known (through Peter Neumann) that he disapproved of that. 
Parnas’ experiment was a small classroom experiment designed to investigate, not the efficacy of verification of formal specifications, but rather the plausible idea that specifications determine implementations. In the experiment, several groups were given the same specification to implement, and the results disproved that idea (or at least cast doubt on it): different groups implemented their specified task in very different ways, despite the very detailed constraints on input-output behavior in the specification. (I heard the experiment described at a conference presentation by Parnas, but I can’t find the paper or the citation now.)\n\n\nThis result turned out to be useful to me a few years later when DoD sponsors argued that analysis of the formal specifications of security kernels should be classified higher than the programs themselves, even when the analysis proved the absence of access control violations and storage channels, because the specifications allowed readers to deduce features of the implementation that would tell them about timing and other channels. I pointed out that Parnas had given evidence that their fears were not justified.\n\n\nOne indicator of the value of formal methods, model checking in particular, is the fact that it has been used routinely in the design of CPUs, which have rather complicated internal strategies for managing memory and register caching. 
(One citation I found with a quick Google to confirm this is “[Fifteen years of formal property verification in Intel](http://commonsenseatheism.com/wp-content/uploads/2014/04/Fix-Fifteen-Years-of-Formal-Property-Verification-in-Intel.pdf)”, by Limor Fix, of Intel Research Pittsburgh, published in a 2008 Springer-Verlag book, [*25 Years of Model Checking*](http://www.amazon.com/Years-Model-Checking-Achievements-Perspectives/dp/3540698493/).)\n\n\n\n\n---\n\n\n**Luke**: You also wrote that “Computer scientists can help by developing powerful general-purpose tools and techniques… [e.g.] verification tools and model checkers, as well as encryption algorithms and applications… Development of these tools deserve decades of research building on prior work, and those who are capable of conducting this activity successfully should be encouraged and supported.”\n\n\nWhat are some specific examples of tools for software system safety or security that you wish were receiving more development talent and funding than is currently the case? E.g. probabilistic model-checkers, or verified libraries for program synthesis, or whatever you think is most urgent or underfunded.\n\n\n\n\n---\n\n\n**Jonathan**: My gut reaction is, all of the above. When it comes to specific tools, each of us has preferences based on what we needed, what we have used, and what is most accessible. When I was at SRI International, I used the PVS theorem prover, and later, SRI’s SAL environment, which hosts several flavors of model checking algorithms. Both of those have benefited from the highest levels of talent; the challenge is that the job is never finished, especially if it is a thriving, well-used system. Funding sources typically emphasize urgent applications or brand new ideas, as they should, but they tend to neglect the steady, unromantic work needed to keep a software suite up to date with respect to the evolution of hardware, operating systems, algorithmic advances, and so on. 
And tools that support formal methods do not have the same opportunities for income as Windows, Apple’s OS X, and other commercial systems, even with a strong user community.\n\n\nIncidentally, although I have emphasized general-purpose tools, that does not mean that I am against tools designed for particular applications such as security. After all, I spent a fair amount of time on a protocol security analyzer (called the Constraint Solver). Specialized tools, however, benefit in brevity and readability from being built on top of an expressive language with powerful primitives — for my application, Prolog, specifically SWI-Prolog.\n\n\nA word of caution, though: it’s one thing to build an analysis tool with a complex system under it, and another to build a supposedly secure system that way. A lesson learned from studying covert channels is that the whole system under the secure interface is a potential source of vulnerabilities. So how can we build secure systems? The main requirement, I think, is simple: don’t let the enemy program your computer (the computer holding your data). But how can we prevent that? That’s the hard part.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Jonathan!\n\n\n\n\n---\n\n1. [Parnas (1972)](http://commonsenseatheism.com/wp-content/uploads/2014/04/Parnas-A-Technique-for-software-module-specification-with-examples.pdf).\n\nThe post [Jonathan Millen on covert channel communication](https://intelligence.org/2014/04/12/jonathan-millen/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-12T08:00:08Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ca4fbe66b7d2c80c3972870f0f8e8250", "title": "Wolf Kohn on hybrid systems control", "url": "https://intelligence.org/2014/04/11/wolf-kohn/", "source": "miri", "source_type": "blog", "text": "![Wolf Kohn portrait](http://intelligence.org/wp-content/uploads/2014/04/Kohn_w1300.jpg) [Dr. 
Wolf Kohn](http://depts.washington.edu/ie/people/kohn) is the Chief Scientist at [Atigeo, LLC](http://atigeo.com/), and a Research Professor in Industrial and Systems Engineering at the University of Washington. He is the founder or co-founder of two successful start-up companies: [Clearsight Systems, Corp.](http://www.clearsightsystems.com/), and Kohn-Nerode, Inc. Both companies explore applications in the areas of advanced optimal control, rule-based optimization, and quantum hybrid control applied to enterprise problems and nano-material shaping control. Prof. Kohn, with Prof. Nerode of Cornell, established theories and algorithms that initiated the field of hybrid systems. Prof. Kohn has a Ph.D. in Electrical Engineering and Computer Science from MIT, at the Laboratory of Information and Decision Systems. Dr. Kohn is the author or coauthor of over 100 refereed papers and 6 book chapters, and with Nerode and Zabinsky has written a book on distributed cooperative inferencing. Dr. Kohn holds 10 US and international patents.\n\n\n\n**Luke Muehlhauser**: You co-founded the field of hybrid systems control with Anil Nerode. Anil gave his impressions of the seminal 1990 Pacifica meeting [here](http://intelligence.org/2014/03/26/anil-nerode/). What were your own impressions of how that meeting developed? Is there anything in particular you’d like to add to Anil’s account?\n\n\n\n\n---\n\n\n**Wolf Kohn**: The discussion on the first day of the conference centered on the problem of how to incorporate heterogeneous descriptions of complex dynamical systems into a common representation for designing large scale automation. What came almost immediately were observations from Colonel Mettala and others that established as a goal the finding of alternatives to classic approaches based on combining expert systems with conventional control and system identification techniques.\n\n\nThese approaches did not lead to robust designs. 
More important, they did not lead to a theory for the systematic treatment of the systems DOD was deploying at the time. I was working on control architectures based on constraints defined by rules, so after intense discussions among the participants Nerode and I moved to a corner and came up with a proposal to *amalgamate* models by extending the concepts of automata theory and optimal control to characterize the evolution of complex dynamical systems in a manifold in which the topology was defined by rules of operation, and behavior constraints and trajectories were generated by variational methods. This was the beginning of what would later be defined as “hybrid systems.” \n\n\n\n\n\n\n---\n\n\n**Luke**: Which commercial or governmental projects would you name as being among the most significant success stories of the hybrid systems research program, from 1990 to the present day?\n\n\n\n\n---\n\n\n**Wolf**: There are many applications today that use hybrid systems as the basic technology. These are a few of the ones I am personally familiar with:\n\n\n* A demand forecaster and an inventory control and management system being deployed by the Microsoft Dynamics group.\n* A battlefield simulator deployed by the Army’s Picatinny Arsenal.\n* A generic people and resource scheduling system deployed by Clearsight Systems.\n* A cooperative distributed inference system deployed by Atigeo with applications in medical informatics and smart electric power network management systems.\n* A quantum hybrid control system for capturing and storing sunlight being prototyped by Kohn-Nerode LLC.\n\n\n\n\n---\n\n\n**Luke**: What are the new theoretical developments in hybrid systems of the past 10 years that are most impressive or interesting to you? What kinds of advances do you think we might see in the next 10 years?\n\n\n\n\n---\n\n\n**Wolf**: For me the most important advances in hybrid systems are in four areas:\n\n\n1. 
*Representation*: We have found that the behavior of dynamical systems characterized by multiple heterogeneous models can be effectively characterized by Hamiltonian functions. The interaction among multiple models and the transfer of information from one model to another are defined by the interaction of Hamiltonian forms. This fact allows hybrid systems to be a preferred theory and implementation technology for the development of new control approaches such as mean-field agent-based distributed control and gauge theory, and, most importantly, control and optimization specified and implemented by rules.\n2. *Hybrid systems control design*, which allows for the specification of architectural structures and the control requirements as part of the formulation of a control problem. This fuses the physical and empirical data about the dynamic process to be controlled. The control specification and the computational requirements are achieved because control performance specifications and architectural constraints *are* models of the controlled process.\n3. *Agent-based distributed control*: Nerode and I, both together and separately, have developed a variational theory, based on hybrid systems, for dynamic synchronization of multiple agents participating in the control of a distributed process. The variational theory generalizes a principle in network theory, called Tellegen’s Theorem. The theory provides for active synchronization with no umpire for network interaction between the agents. A version of this theory has been used to implement an agent-based architecture for control, uncertainty management, and learning in several applications. This architecture is known by the acronym MAHCA (Multiple Agent Hybrid Control Architecture).\n4. *Metacontrol*: This is a theory for controlling the performance and behavior of *implemented* algorithms. The dynamics in this case is a computational multitask process. 
The objective is to make it run faster with less memory and with active synchronization between tasks. We are developing a computational hybrid systems theory based on metacontrol. Preliminary implementations of this theory on optimization algorithms have shown very promising results in terms of reduction of compute time, real-time synchronization, and rule-based optimization.\n\n\n\n\n---\n\n\n**Luke**: I’m particularly interested in the safety challenges presented by the increasingly autonomous AI systems of the future. Self-driving cars are on their way, the U.S. military is working toward [autonomous battlefield robots](http://www.amazon.com/Governing-Lethal-Behavior-Autonomous-Robots-ebook/dp/B008I9YG9G/) of various types, etc. Do you think hybrid systems control, and relatively modest extensions to it, will be sufficient to gain high assurances of safety for the more-capable autonomous systems we’re likely to have in 10 or 20 years, or do you think other contemporary control and verification approaches have a better shot at addressing that problem, or do you think entirely new approaches will need to be developed?\n\n\n\n\n---\n\n\n**Wolf**: I will break down my answer into two parts: (1) the autonomy issue and (2) the verification issue.\n\n\n*Autonomy*: This was one of the central questions brought up by the initial sponsors of hybrid systems (ARPA, NIST, ARO, SAP). Our answer early on came with the following proposition which we implemented in a battlefield dynamics simulator: given a process to be controlled, build a model of desired behavior based on the performance specifications, regulations, and economic operation, and construct the controlled process dynamics. This is a hybrid system; let’s call it S.\n\n\nThen we build a model, another hybrid system, say C, representing specified safety rules and constraints, and hybridize S and C to produce a new hybrid control system S1. 
This approach is only successful in semi-closed, quasi-stationary systems.\n\n\nFor the large scale autonomous systems, one needs to allow the hybrid system controller to detect, learn and dispose of un-programmed situations. To do this, a class of hybrid controllers (known as agents in our papers) contain directives that implement Learning from Sensory Data. One example is what we might call “Learning by Failure Predictor and Repair” systems. This particular system operates as follows: a safety agent (or agents) is (are) monitoring the controlled system operation, and infer(s) whether it is operating in a feasible region and what is the likelihood that it is going to leave the feasibility region (a Failure) in the near future (“near” is a concept depending on the controlled system). This likelihood determines the response to the failure: a Repair operation.\n\n\nThen, another agent application designs a controller that implements the Repair operation. Note this is not necessarily an adaptation of an existing controller. We use the fact that the design procedure of hybrid controllers is itself a hybrid system defining the synthesis approach. We term the resulting design procedure a repair hybrid control. We have used this approach in a prototype microgrid management and control system that is near deployment with considerable success: i.e., high robustness, resiliency, recovery: all this while maintaining good performance.\n\n\nAnother element that agent-based hybrid systems provide is the ability to improve safety via redundancy.\n\n\nIn short, my answer to the first part of the question may be summarized as follows:\n\n\nHybrid systems is a first principles platform that allows for the incorporation of safety, learning and repair. 
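The “Learning by Failure Predictor and Repair” loop Kohn describes can be sketched as a monitor that estimates how likely the controlled system is to leave its feasibility region and triggers a repair operation when that likelihood crosses a threshold. This is a minimal illustrative sketch, not MIRI’s or Kohn’s implementation; all names, dynamics, and thresholds are hypothetical, and in the architecture described the repair would itself be a synthesized hybrid controller rather than a fixed rule.

```python
# Hypothetical sketch of a "Learning by Failure Predictor and Repair" loop.
# The feasibility region, drift dynamics, and threshold are illustrative only.

def in_feasible_region(state, lo=0.0, hi=10.0):
    """Safety agent's check: is the state inside the feasibility region?"""
    return lo <= state <= hi

def failure_likelihood(state, velocity, horizon=3, lo=0.0, hi=10.0):
    """Crude predictor: fraction of look-ahead steps that leave the region,
    assuming the state keeps drifting at the current velocity."""
    misses = sum(
        not in_feasible_region(state + velocity * t, lo, hi)
        for t in range(1, horizon + 1)
    )
    return misses / horizon

def repair(velocity):
    """Repair operation: here, simply cancel the drift toward the boundary.
    In the interview's architecture this would be a newly designed hybrid
    controller, not a hard-coded rule."""
    return 0.0

def run(state=5.0, velocity=2.0, steps=6, threshold=0.5):
    """Monitor loop: predict, repair if failure is likely, then step."""
    history = []
    for _ in range(steps):
        if failure_likelihood(state, velocity) >= threshold:
            velocity = repair(velocity)  # likely failure ahead: repair first
        state += velocity
        history.append(state)
    return history
```

With the hypothetical numbers above, the monitor lets the system drift while failure is unlikely, then cancels the drift once the predictor sees the boundary approaching, so the trajectory never leaves the feasibility region.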
What I believe the research in this area should be focused on is how to populate the system with *information* about the application, to allow for heuristics and empirical constraints and rules, and to provide the structural Failure and Repair mechanisms I outlined above. This approach builds on top of existing control techniques but incorporates a new concept of structural adaptation that is essential for the level of autonomy posed in your question.\n\n\n*Verification*: Many researchers have proposed methods for hybrid systems validation and verification prior to deployment. In our approach we are happy to use some of these techniques. Our contribution in this area called for verification of online designs, as discussed above. So, we are developing hybrid systems to model verification principles, with the idea of amalgamating them into our applications in near real time.\n\n\n\n\n---\n\n\n**Luke**: Do you think our capacity to make systems more autonomous and capable will outpace our capacity to achieve confident safety assurances for those systems?\n\n\n\n\n---\n\n\n**Wolf**: I believe we have made great progress on increasing autonomy in most of the systems we are designing today. I also believe we have paid far less attention to developing the methods, sensory redundancy principles, and theory of design for safety performance.\n\n\nNerode and I are working on developing an advanced magnetic battery using quantum hybrid control methodology. We found that the key ingredients for this battery to operate safely are principles of safety that have been used in non-autonomous systems for the last 100 years. We are encoding these principles formally as part of our design algorithms. 
Perhaps this way may be generalized to obtain acceptable levels of safety.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Wolf!\n\n\nThe post [Wolf Kohn on hybrid systems control](https://intelligence.org/2014/04/11/wolf-kohn/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-11T08:00:30Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "d4b6eb87dcc235208877677502c422e6", "title": "MIRI’s April 2014 Newsletter", "url": "https://intelligence.org/2014/04/10/miris-april-2014-newsletter/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n**Research Updates**\n* New technical report: “**[Botworld 1.0](http://intelligence.org/2014/04/10/new-report-botworld/)**.”\n* 9 new expert interviews: [Max Tegmark](http://intelligence.org/2014/03/19/max-tegmark/) on the mathematical universe, [Randal Koene](http://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/) on whole brain emulation, [Michael Carbin](http://intelligence.org/2014/03/23/michael-carbin/) on integrity properties in approximate computing, [Anil Nerode](http://intelligence.org/2014/03/26/anil-nerode/) on hybrid systems control, [Lyle Ungar](http://intelligence.org/2014/03/26/lyle-ungar/) on forecasting, [Erik DeBenedictis](http://intelligence.org/2014/04/03/erik-debenedictis/) on supercomputing, [Will MacAskill](http://intelligence.org/2014/04/08/will-macaskill/) on normative uncertainty, [Diana Spears](http://intelligence.org/2014/04/09/diana-spears/) on the safety of adaptive agents, and [Paulo Tabuada](http://intelligence.org/2014/04/09/paulo-tabuada/) on program synthesis for cyber-physical systems.\n\n\n\n**News Updates**\n* We are [actively hiring](http://intelligence.org/careers/) for three positions: math researcher, science writer, and executive 
assistant. Compensation is competitive and visas are possible.\n* We’re also [accepting applications](http://intelligence.org/get-involved/#workshop) on a rolling basis for future MIRI research workshops. Please apply if you’re a curious academic looking for exposure to our material. If you’re accepted, we’ll contact you about potential workshops over time, as we schedule them.\n* We published our [2013 in Review: Fundraising](http://intelligence.org/2014/04/02/2013-in-review-fundraising/) post.\n* Louie was [interviewed](http://io9.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410) at *io9* about the unsuitability of Asimov’s Three Laws of Robotics for machine ethics.\n* Luke was interviewed at *The Information* about MIRI’s research. Unfortunately, the interviewer’s editing introduced several errors: see [here](https://www.facebook.com/lukeprog/posts/10104392165212640?stream_ref=10).\n\n\n**Other Updates**\n* April 11th is your last chance to **[vote for Nick Bostrom](http://www.prospectmagazine.co.uk/worldthinkers/)** and others in *Prospect*’s World Thinkers 2014 poll.\n* Oxford University is hosting an effective altruism conference in July. [Speakers](http://www.gooddoneright.com/#!programme/c1dj9) include Nick Bostrom and Derek Parfit. 
Register [here](http://www.gooddoneright.com/#!register/c10fk).\n* Registration is also open for [Effective Altruism Summit](http://www.effectivealtruismsummit.com/) 2014, in August in the Bay Area.\n* Our frequent collaborators at the Future of Humanity Institute are [currently hiring](http://www.fhi.ox.ac.uk/vacancy-population-ethics/) a research fellow in philosophy, for a focus on population ethics.\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n |\n| |\n\n\n\nThe post [MIRI’s April 2014 Newsletter](https://intelligence.org/2014/04/10/miris-april-2014-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-11T06:00:04Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "c0999c31d42b23bf2daf39485778e94b", "title": "New Report: Botworld", "url": "https://intelligence.org/2014/04/10/new-report-botworld/", "source": "miri", "source_type": "blog", "text": "![robot swarm](https://intelligence.org/wp-content/uploads/2014/04/robot-swarm.jpg)\nToday MIRI releases a new technical report: “[Botworld 1.0](https://intelligence.org/files/Botworld.pdf)” (pdf) by [recent hires](http://intelligence.org/2014/03/13/hires/) Nate Soares and Benja Fallenstein. 
The report is a “[literate](http://www.haskell.org/haskellwiki/Literate_programming)” Haskell file, available from [MIRI’s Github page](https://github.com/machine-intelligence).\n\n\nSoares explains the report on his accompanying [Less Wrong post](http://lesswrong.com/lw/k1o/botworld_a_cellular_automaton_for_studying/), which is also the preferred place for discussion of the report:\n\n\n\n> This report introduces *Botworld*, a cellular automaton that provides a toy environment for studying self-modifying agents.\n> \n> \n> The traditional agent framework, used for example in Markov Decision Processes and in Marcus Hutter’s universal agent AIXI, splits the universe into an agent and an environment, which interact only via discrete input and output channels.\n> \n> \n> Such formalisms are perhaps ill-suited for real self-modifying agents, which are embedded within their environments. Indeed, the agent/environment separation is somewhat reminiscent of Cartesian dualism: any agent using this framework to reason about the world does not model itself as part of its environment. For example, such an agent would be unable to understand the concept of the environment interfering with its internal computations, e.g. by inducing errors in the agent’s RAM through heat.\n> \n> \n> Intuitively, this separation does not seem to be a fatal flaw, but merely a tool for simplifying the discussion. We should be able to remove this “Cartesian” assumption from formal models of intelligence. 
However, the concrete non-Cartesian models that have been proposed (such as Orseau and Ring’s formalism for space-time embedded intelligence, Vladimir Slepnev’s models of updateless decision theory, and Yudkowsky and Herreshoff’s tiling agents) depart significantly from their Cartesian counterparts.\n> \n> \n> Botworld is a toy example of the type of universe that these formalisms are designed to reason about: it provides a concrete world containing agents (“robots”) whose internal computations are a part of the environment, and allows us to study what happens when the Cartesian barrier between an agent and its environment breaks down. Botworld allows us to write decision problems where the Cartesian barrier is relevant, program actual agents, and run the system.\n> \n> \n> As it turns out, many interesting problems arise when agents are embedded in their environment. For example, agents whose source code is readable may be subjected to Newcomb-like problems by entities that simulate the agent and choose their actions accordingly.\n> \n> \n> Furthermore, certain obstacles to self-reference arise when non-Cartesian agents attempt to achieve confidence in their future actions. Some of these issues are raised by Yudkowsky and Herreshoff; Botworld gives us a concrete environment in which we can examine them.\n> \n> \n> One of the primary benefits of Botworld is concreteness: when working with abstract problems of self-reference, it is often very useful to see a concrete decision problem (“game”) in a fully specified world that directly exhibits the obstacle under consideration. Botworld makes it easier to visualize these obstacles.\n> \n> \n> Conversely, Botworld also makes it easier to visualize suggested agent architectures, which in turn makes it easier to visualize potential problems and probe the architecture for edge cases.\n> \n> \n> Finally, Botworld is a tool for communicating. 
It is our hope that Botworld will help others understand the varying formalisms for self-modifying agents by giving them a concrete way to visualize such architectures being implemented. Furthermore, Botworld gives us a concrete way to illustrate various obstacles, by implementing Botworld games in which the obstacles arise.\n> \n> \n> Botworld has helped us gain a deeper understanding of varying formalisms for self-modifying agents and the obstacles they face. It is our hope that Botworld will help others more concretely understand these issues as well.\n> \n> \n\n\nThe post [New Report: Botworld](https://intelligence.org/2014/04/10/new-report-botworld/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-11T02:43:31Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ccc78ac4c67398da4c741bce6cedd216", "title": "Paulo Tabuada on program synthesis for cyber-physical systems", "url": "https://intelligence.org/2014/04/09/paulo-tabuada/", "source": "miri", "source_type": "blog", "text": "![Paulo Tabuada portrait](http://intelligence.org/wp-content/uploads/2014/04/Tabuada_w460.jpg) [Paulo Tabuada](http://www.seas.ucla.edu/~tabuada/) was born in Lisbon, Portugal, one year after the Carnation Revolution. He received his “Licenciatura” degree in Aerospace Engineering from Instituto Superior Tecnico, Lisbon, Portugal in 1998 and his Ph.D. degree in Electrical and Computer Engineering in 2002 from the Institute for Systems and Robotics, a private research institute associated with Instituto Superior Tecnico. Between January 2002 and July 2003 he was a postdoctoral researcher at the University of Pennsylvania. 
After spending three years at the University of Notre Dame, as an Assistant Professor, he joined the Electrical Engineering Department at the University of California, Los Angeles, where he established and directs the [Cyber-Physical Systems Laboratory](http://www.cyphylab.ee.ucla.edu/).\n\n\nPaulo Tabuada’s contributions to cyber-physical systems have been recognized by multiple awards including the NSF CAREER award in 2005, the Donald P. Eckman award in 2009 and the George S. Axelby award in 2011. In 2009 he co-chaired the [International Conference Hybrid Systems: Computation and Control](http://hscc-conference.org) (HSCC’09) and in 2012 he was program co-chair for the 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys’12). He also served on the editorial board of the IEEE Embedded Systems Letters and the IEEE Transactions on Automatic Control. His latest [book](http://www.springer.com/mathematics/applications/book/978-1-4419-0223-8), on verification and control of hybrid systems, was published by Springer in 2009.\n\n\n\n**Luke Muehlhauser**: In “[Abstracting and Refining Robustness for Cyber-Physical Systems](http://arxiv.org/abs/1310.5199),” you and your co-author write that:\n\n\n\n> …we present a design methodology for robust cyber-physical systems (CPS) [which]… captures two intuitive aims of a robust design: bounded disturbances have bounded consequences and the effect of sporadic disturbances disappears as time progresses.\n> \n> \n\n\nYou use an “abstraction and refinement” procedure for this. How does an abstraction and refinement procedure work, in this context?\n\n\n\n\n---\n\n\n**Paulo Tabuada**: Cyber-physical systems are notoriously difficult to design and verify because of the complex interactions between the cyber and the physical components. 
Although control theorists have developed powerful techniques for designing and analyzing physical components, say described by differential equations, and computer scientists have developed powerful techniques for designing and analyzing cyber components, say described by finite-state models, these techniques are for the most part incompatible. The latter rely on discrete mathematics while the former rely on continuous mathematics. Our approach is based on replacing all the physical components by cyber abstractions so that all the remaining design and verification tasks can be done in the cyber world. The construction of these abstractions is based on rigorous numerical simulation combined with an analysis of the differential equation models to guarantee that the original physical components and its abstractions are equivalent up to a desired precision. Technically, “equivalent up to a desired precision” means approximately bisimilar and intuitively this means that both models generate the same set of behaviors up to a desired precision. \n\n\n\n\n\n\n---\n\n\n**Luke**: [Last summer](https://excape.cis.upenn.edu/summer-school.html) you gave a four-part tutorial ([p1](https://excape.cis.upenn.edu/documents/Tutorialthree_partone.pdf), [p2](https://excape.cis.upenn.edu/documents/Tutorialthree_parttwo.pdf), [p3](https://excape.cis.upenn.edu/documents/Tutorialthree_partthree.pdf), [p4](https://excape.cis.upenn.edu/documents/Sat_handson_pessoa.pdf)) on program synthesis for cyber-physical systems. For someone who isn’t familiar with program synthesis, can you describe how it’s done in the context of cyber-physical systems, and give an example of such a system that has been implemented?\n\n\n\n\n---\n\n\n**Paulo**: Program synthesis is already a challenging problem for software only systems. In the context of cyber-physical systems it becomes even more challenging since the objective is to synthesize a program that will make a physical system behave as intended. 
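The “equivalent up to a desired precision” relation Tabuada describes can be illustrated with a toy one-dimensional example: replace a contractive continuous system by a finite grid of states and check that concrete and abstract trajectories never drift farther apart than the grid precision. This is a hedged sketch under simplifying assumptions (a scalar contractive system, a uniform grid); real constructions derive the grid spacing and sampling period from stability certificates of the differential-equation model, which this toy does not do.

```python
# Toy sketch of abstracting a continuous system by a finite grid, in the
# spirit of approximate bisimulation. All parameters are hypothetical.
import math

def step(x, dt=1.0):
    """Concrete dynamics: exact solution of x' = -x over one sampling period."""
    return x * math.exp(-dt)

def quantize(x, eta=0.25):
    """Abstract state: nearest grid point with spacing eta."""
    return round(x / eta) * eta

def abstract_step(q, eta=0.25, dt=1.0):
    """Abstract transition: quantized image of the grid point."""
    return quantize(step(q, dt), eta)

def check_epsilon_close(x0, steps=20, eta=0.25, dt=1.0):
    """Check that the concrete and abstract trajectories stay within eta of
    each other -- the 'same behaviors up to a desired precision' relation."""
    x, q = x0, quantize(x0, eta)
    for _ in range(steps):
        x, q = step(x, dt), abstract_step(q, eta, dt)
        if abs(x - q) > eta:
            return False
    return True
```

Note the interplay the sketch exposes: with a strongly contractive sampled map the quantization error stays bounded, but if the sampling period is too short relative to the grid spacing the abstract system can get stuck on a grid point while the concrete one drifts away, and the precision guarantee fails.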
I am currently working on two projects related to program synthesis. One of these has as its objective the synthesis of programs that control the behavior of bipedal robots. The very same techniques are being used in the same project to synthesize programs for adaptive cruise control and lane departure control systems in collaboration with Toyota and Ford. To give you an idea of where the challenge lies, let me recall how this project started. My colleagues Jessy Grizzle at UMich and Aaron Ames at Texas A&M came to me and shared their frustration with the programs they developed to control their robots. While they were very satisfied with making their robots walk on flat and unobstructed surfaces, they faced large problems on uneven terrain. Their approach was to develop a set of rules, i.e., a reactive program that responds to the stimuli provided by the sensors by determining which actuators should be used and how. Although the rules were developed based on common sense, it soon became clear it was impossible to predict how the execution of these rules would impact the motion of the robot. Moreover, a small change in the rules would lead to completely different and unexpected behavior. Our approach is to construct a finite-state abstraction of the robot dynamics and then to synthesize a reactive program that forces this abstraction to satisfy a desired specification. The synthesis of these reactive programs is done via the solution of a two-player game where the synthesized program plays against the robot’s environment and enforces the specification no matter which action the environment takes. 
In this way, rather than verifying a reactive program, we synthesize one that is guaranteed to be correct by construction.\n\n\n\n\n---\n\n\n**Luke**: Robots and other AI systems are becoming increasingly autonomous in operation: [self-driving cars](http://en.wikipedia.org/wiki/Autonomous_car), robots that [navigate disaster sites](http://www.darpa.mil/Our_Work/TTO/Programs/DARPA_Robotics_Challenge.aspx), [HFT](http://en.wikipedia.org/wiki/High-frequency_trading) programs that trade stocks quickly enough to “[flash crash](http://en.wikipedia.org/wiki/2010_Flash_Crash)” the market or [nearly bankrupt](http://en.wikipedia.org/wiki/Knight_Capital_Group#2012_stock_trading_disruption) a large equities trader, etc.\n\nHow might current AI safety methods (formal verification, program synthesis, simplex architectures, etc.) be scaled up to meet the safety challenges raised for highly autonomous systems operating in unknown, continuous, dynamic environments? Will our capacity to make systems more autonomous and capable outpace our capacity to achieve confident safety assurances for those systems?\n\n\n\n\n---\n\n\n**Paulo**: Indeed, that is what has been happening so far. Our ability to create large complex systems is far ahead of our understanding of the basic scientific principles ensuring their safe operation. In my opinion, synthesis-based approaches have the best chance to scale to the level of existing applications. Formal verification of these systems is extremely hard since the design space is very large. No formal verification technique can handle the wide variety of systems from such a large design space. When using synthesis, however, we can reduce the design space and thus obtain much more structured designs for which formal guarantees can be obtained. What we lose, in turn, is the ability to find all the design solutions. This tradeoff is well understood in other areas of engineering, e.g., brick-and-mortar or wood-framed building construction. 
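The two-player game formulation Tabuada describes can be sketched on a tiny finite abstraction: compute the set of states from which the controller can keep the system safe no matter what the environment does (a greatest fixed point), then read a strategy off that winning region. The states, actions, and transition relation below are invented for illustration; they are not taken from any real robot abstraction.

```python
# Minimal sketch of correct-by-construction synthesis as a two-player
# safety game on a finite abstraction. All states/actions are hypothetical.

def solve_safety_game(states, safe, moves):
    """moves[s][a] = set of successors the environment may choose after the
    controller plays action a in state s. Returns the winning region: states
    from which the controller can stay in `safe` forever (greatest fixed point)."""
    win = set(safe) & set(states)
    while True:
        nxt = {
            s for s in win
            if any(succ <= win for succ in moves[s].values())
        }
        if nxt == win:
            return win
        win = nxt

def extract_strategy(win, moves):
    """For each winning state, pick an action all of whose environment
    responses stay inside the winning region -- correct by construction."""
    return {
        s: next(a for a, succ in moves[s].items() if succ <= win)
        for s in win
    }

# Toy abstraction: state 2 is unsafe; "fast" risks letting the environment
# push the system into state 2, while "stay"/"brake" keep it recoverable.
moves = {
    0: {"stay": {0}, "fast": {1, 2}},
    1: {"brake": {0}, "fast": {2}},
    2: {"crash": {2}},
}
win = solve_safety_game({0, 1, 2}, {0, 1}, moves)
strategy = extract_strategy(win, moves)
```

The fixed-point iteration is the standard way safety games are solved; whatever action the environment player takes from a winning state under the extracted strategy, the system stays in the winning region.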
I believe that it won’t take long until we discover a few synthesis techniques that guarantee safety and correctness of highly autonomous systems, even if these result in somewhat more conservative solutions.\n\n\n\n\n---\n\n\n**Luke**: You write that “it won’t take long until we discover a few synthesis techniques that guarantee safety and correctness of highly autonomous systems, even if these result in somewhat more conservative solutions.” Could you clarify what you have in mind? For example, what might a “conservative solution” look like for a high-assurance program synthesis solution for the software of a self-driving car? (Feel free to give a different example if you prefer.)\n\n\n\n\n---\n\n\n**Paulo**: By conservative I mean that we will have to restrict the synthesized software so that it falls within a class for which synthesis techniques are available. A simple example is software that can be described by timed automata. Although it might be convenient to make a decision based on a generic function of several timers, to stay within the timed-automata class we are only allowed to compare timers against constants or compare differences between timers against constants. 
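The guard restriction just described can be made concrete: timed-automaton guards may compare a clock against a constant, or the difference of two clocks against a constant, and nothing else. The sketch below encodes only those two guard forms; the function names and the sensor-delay scenario are illustrative, not any real tool’s API.

```python
# Sketch of the timed-automata guard restriction: transition guards may only
# compare a clock (x ~ c) or a clock difference (x - y ~ c) against a
# constant. Names and the example scenario are hypothetical.

def clock_guard(clock, op, const):
    """Allowed guard form: x ~ c, e.g. a delay exceeding a threshold."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return lambda clocks: ops[op](clocks[clock], const)

def diff_guard(c1, c2, op, const):
    """Allowed guard form: x - y ~ c."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return lambda clocks: ops[op](clocks[c1] - clocks[c2], const)

# Expressible in the class: react when the delay since the last sensor
# reading exceeds 2.0 time units.
react = clock_guard("since_last_reading", ">=", 2.0)

# NOT expressible as a timed-automaton guard: a decision based on the
# *average* of several delays, i.e. a generic function of multiple clocks.
```

Staying inside this fragment is exactly the conservatism being discussed: the synthesized controller gives up richer decision rules in exchange for a class where synthesis is decidable.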
For a self-driving car, this could mean taking decisions based on delays exceeding thresholds but not being allowed to make decisions based on the average of several delays.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Paulo!\n\n\nThe post [Paulo Tabuada on program synthesis for cyber-physical systems](https://intelligence.org/2014/04/09/paulo-tabuada/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-09T20:42:42Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a523905b1d193748a2edc25b8c6c37ee", "title": "Diana Spears on the safety of adaptive agents", "url": "https://intelligence.org/2014/04/09/diana-spears/", "source": "miri", "source_type": "blog", "text": "![Diana Spears portrait](http://intelligence.org/wp-content/uploads/2014/04/Spears_w730.jpg) [Diana Spears](http://www.swarmotics.com/Contact_Information.html) is an Owner and Research Scientist at [Swarmotics, LLC](http://www.swarmotics.com/Swarmotics.html). Previously, she worked at US government laboratories ([Goddard](http://www.nasa.gov/centers/goddard/home/), [NIST](http://www.nist.gov/), [NRL](http://www.nrl.navy.mil/)) and afterwards she was an Associate Professor of Computer Science at the [University of Wyoming](http://www.uwyo.edu/). She received both the MS and PhD (1990) degrees in Computer Science from the [University of Maryland, College Park](http://www.umd.edu/).\n\n\nDr. Spears’s research interests include machine learning, adaptive swarm robotic sensing networks/grids, computational fluid dynamics based algorithms for multi-robot chemical/biological plume tracing and plume mapping, and behaviorally-assured adaptive and machine learning systems. Dr. 
Spears pioneered the field of safe adaptive agents with her award-winning (2001 NRL Alan Berman Research Publication Award) publication entitled, [“Asimovian adaptive agents.”](https://www.jair.org/media/720/live-720-1895-jair.pdf) Most recently she and her husband co-edited the book [“*Physicomimetics: Physics-Based Swarm Intelligence*,”](http://www.springer.com/computer/ai/book/978-3-642-22803-2) published by Springer-Verlag in 2012.\n\n\n\n**Luke Muehlhauser**: In [Spears (2006)](http://www.swarmotics.com/uploads/chap.pdf) and other publications, you’ve discussed the challenge of ensuring the safety of adaptive (learning) agents:\n\n\n\n> a designer cannot possibly foresee all circumstances that will be encountered by the agent. Therefore, in addition to supplying an agent with a plan, it is essential to also enable the agent to learn and modify its plan to adapt to unforeseen circumstances. The introduction of learning, however, often makes the agent’s behavior significantly harder to predict. The goal of this research is to verify the behavior of adaptive agents. In particular, our objective is to develop efficient methods for determining whether the behavior of learning agents remains within the bounds of prespecified constraints… after learning…\n> \n> \n> …Our results include proofs that… with respect to important classes of properties… if the [safety] property holds for the agent’s plan prior to learning, then it is guaranteed to still hold after learning. If an agent uses these “safe” learning operators, it will be guaranteed to preserve the properties with no reverification required. This is the best one could hope for in an online situation where rapid response time is critical. For other learning operators and property classes our a priori results are negative. 
However, for these cases we have developed incremental reverification algorithms that can save time over total reverification from scratch.\n> \n> \n\n\nWhat do you mean by “incremental” reverification algorithms, as in the last sentence I quoted?\n\n\n\n\n---\n\n\n**Diana Spears**: Verification (model checking, in particular) consists of proving that a computer program satisfies a desirable property/constraint and, if it does not, a counterexample is provided. In my work, I assume that this program is a (multi)agent plan for action.  In most real-world applications, plans are typically enormous and therefore verification may be quite time-consuming. Suppose the safety property/constraint that agent A must always obey is that “agent A should always be at least M units away from agent B” (to prevent collisions). Let’s assume that initial verification proved that the entire plan (consisting of all action sequences that agent A could ever possibly take) is guaranteed to obey this property in all circumstances. Furthermore, let’s assume that adaptation is required after the agent has been fielded, where adaptation consists of applying a machine learning operator to modify the plan. For example, suppose a specific part of the plan states that agent A should “move from location 3 to location 4 if there is a clear path between 3 and 4, the ground is fairly level (e.g., nowhere higher than X or lower than Y between locations 3 and 4), and if the schedule permits such movement at this time.” Then an example machine learning operator might change the “4” to “6” based on new information about the task being performed.\n\n\nNote that this learning operator only modifies one condition in one minuscule portion of the entire plan. Therefore, why re-verify that the entire plan still satisfies the desired property after learning? Why not re-verify only the specific portion of the plan that was modified, as well as any parts of the plan that depend on the modified portion?  
That is what “incremental re-verification” does. It localizes post-adaptation verification to only those parts of the plan that are essential to re-verify.  In doing so, it improves the time complexity of re-verification. Time complexity is a very important and practical consideration for online systems – especially those that operate in real-time or in time-critical situations. In my research I ran numerous experiments comparing the CPU time of re-verifying an entire plan versus localized incremental re-verification of the plan after learning. My results showed as much as a 1/2-billion-fold speedup using incremental re-verification! And that’s using a plan that’s tiny compared to what agents would typically use in the real world. \n\n\n\n\n\n\n---\n\n\n**Luke**: With what kinds of agent programs have you explored this issue? What do the agents do, in what environment, and what kinds of safety properties do you prove about them?\n\n\n\n\n---\n\n\n**Diana**: Because of the strong relevance of the topic “safe adaptation” to aerospace applications, I chose to focus on NASA-related (multi-)agent programs. I depict a scenario in which a spacecraft has landed on another planet, such as Mars, and from which multiple mobile rovers emerge. The plans (programs) for the spacecraft  lander, as well as the planetary rovers, are for collecting, retrieving and transmitting/delivering data and/or samples from the surface of the planet. I prove “safety” and “liveness” properties. An example of “safety” is, “It is always the case that agent R is not delivering at the same time that agent L is transmitting.” Here, L is the lander and R is one of the rovers. This property/constraint prevents problems that may arise from the lander simultaneously receiving new data from R while transmitting older data to Earth. 
An example of “liveness” is, “If agent R executes the ‘deliver’ action, then eventually agent L will execute the ‘receive’ action.”\n\n\n\n\n---\n\n\n**Luke**: Have you or anyone else built on this specific line of work since 2006? What are some natural “next steps” for this particular line of research?\n\n\n\n\n---\n\n\n**Diana**: There were essentially three main offshoots of my research that I’m aware of – from NASA Ames, SRI, and USC. I’ll start with the NASA Ames offshoot. Around the year 2000 I gave a talk about “Asimovian Adaptive Agents” at NASA Ames. My instincts about the strong relevance of this work to NASA, and more generally aerospace, proved to be correct. (Furthermore, it appears to be highly relevant to *any* automated transportation, including automated cars/highways.) The researchers at NASA Ames rapidly and eagerly followed up on my talk with a flurry of related work, including research and publications. These researchers focused on “reference models,” which are used for online, run-time I/O checking. Instead of using temporal logic properties to verify, they used control theoretic properties such as “stability” and “performance.” Perkins and Barto also used Lyapunov stability as the property of interest when building adaptive learning agents[1](https://intelligence.org/2014/04/09/diana-spears/#footnote_0_10954 \"Perkins, T. and Barto, A. “Lyapunov design for safe reinforcement learning control.” Proceedings of AAAI’02.\"). For examples of the NASA Ames research and other related work, see the papers that appeared in the NIPS’04 workshop on “Verification, validation, and testing of learning systems”[2](https://intelligence.org/2014/04/09/diana-spears/#footnote_1_10954 \" Margineantu, Schumann, Gupta, Drumheller, and Fresnedo (co-chairs). Workshop on “Verification, validation, and testing of learning systems.” NIPS’04. \"). 
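The two property classes Spears gives for her lander/rover scenario can be illustrated by checking them over a recorded execution trace. This is a hedged sketch only: it uses finite-trace semantics for “always” and “eventually,” whereas a model checker would verify the properties over *all* executions of the plan; the agent and action names follow her example, but the checker itself is hypothetical.

```python
# Checking Spears's example safety and liveness properties over a finite
# execution trace. Each trace element is the set of actions executing at
# that time step. Finite-trace semantics; a model checker would instead
# verify these over every possible execution of the plan.

def always_not_both(trace, a, b):
    """Safety: it is never the case that a and b execute simultaneously."""
    return all(not ({a, b} <= step) for step in trace)

def leads_to(trace, a, b):
    """Liveness (finite-trace reading): every occurrence of a is
    eventually followed by an occurrence of b."""
    return all(
        any(b in later for later in trace[i:])
        for i, step in enumerate(trace)
        if a in step
    )

trace = [
    {"R_deliver"},
    {"L_receive"},
    {"L_transmit"},
    set(),
]

# Safety: rover R never delivers while lander L is transmitting.
ok_safety = always_not_both(trace, "R_deliver", "L_transmit")
# Liveness: whenever R delivers, L eventually receives.
ok_liveness = leads_to(trace, "R_deliver", "L_receive")
```

Incremental re-verification, as described earlier in the interview, would re-run checks like these only on the portion of the plan a learning operator touched, plus whatever depends on it, rather than over the whole plan.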
Dietterich gave a follow-up talk at NIPS’06 on this topic[3](https://intelligence.org/2014/04/09/diana-spears/#footnote_2_10954 \"Dietterich, T. “Research issues in deployed adaptive systems.” NIPS’06.\"). The NASA Ames offshoot has continued to be active post-2006, as exemplified by the many recent contributing papers in Schumann’s 2010 book[4](https://intelligence.org/2014/04/09/diana-spears/#footnote_3_10954 \"Schumann, J. “Applications of Neural Networks in High Assurance Systems.” Springer-Verlag, 2010.\"). Furthermore, Vahram Stepanyan and others at NASA Ames have been working on a project called “Integrated Resilient Aircraft Control” (IRAC), whose goal is to validate multidisciplinary integrated aircraft control design tools and techniques that will enable safe flight despite unexpected adverse conditions[5](https://intelligence.org/2014/04/09/diana-spears/#footnote_4_10954 \"Stepanyan, V. et al., “Stability and performance metrics for adaptive flight control.” AIAA’09.\").\n\n\nShortly after my Ames talk, a second offshoot was initiated by John Rushby at SRI International. The SRI follow-on research continued to focus on formal methods with model checking, which is what I had originally worked with. However more recently this work has moved in a more similar direction to that of Ames[6](https://intelligence.org/2014/04/09/diana-spears/#footnote_5_10954 \"Rushby, J. “A safety-case approach for certifying adaptive systems.” AIAA’09.\"). For example, in this paper Rushby introduces the idea of using a “safety case” that leads to an online performance monitor. And even more recently, Ashish Tiwari at SRI has worked on bounded verification of adaptive neural networks in the context of the IRAC project[7](https://intelligence.org/2014/04/09/diana-spears/#footnote_6_10954 \"Tiwari, A. “Bounded verification of adaptive flight control systems.” AIAA’10.\").\n\n\nNext, consider a third offshoot. 
This is the research at the University of Southern California (USC) by Milind Tambe and others. These USC researchers built on my specific line of work, but they decided to address the important issue of mixed-initiative situations (also called “adjustable autonomy”), where humans and artificial agents collaborate to achieve a goal. Their multiagent plans are in the form of Partially Observable Markov Decision Processes (POMDPs) and they check safety constraints in this context. The first paper of theirs that I’m aware of on the topic of Asimovian adaptive (multi)agents appeared in 2006[8](https://intelligence.org/2014/04/09/diana-spears/#footnote_7_10954 \"Schurr, N. et al. “Asimovian multiagents: Laws of robotics to teams of humans and agents.” 2006.\"). In 2007, Nathan Schurr got his Ph.D. on this topic[9](https://intelligence.org/2014/04/09/diana-spears/#footnote_8_10954 \"Schurr, N. “Toward human-multiagent teams.” USC Ph.D. dissertation, 2007.\"). Milind Tambe continues to teach a very popular course on “Artificial Intelligence and Science Fiction” in which he discusses his research on Asimovian multiagents.\n\n\nFinally, I’ll mention miscellaneous post-2006 research that continues to build on my earlier line of work. For one, during 2006-2008 I was part of a DARPA Integrated Learning initiative that focused on methods for airspace deconfliction. Two of my graduate students, Antons Rebguns and Derek Green, along with Geoffrey Levine (U of Illinois) and Ugur Kuter (U of Maryland), applied safety constraints to planners[10](https://intelligence.org/2014/04/09/diana-spears/#footnote_9_10954 \"Levine, G. et al. “Learning and verifying safety constraints for planners in a knowledge-impoverished system.” Computational Intelligence 28 (3), 2012.\"). Their work was inspired by my earlier research on Asimovian agents. 
There are also researchers currently building on the NASA Ames work: an international group[11](https://intelligence.org/2014/04/09/diana-spears/#footnote_10_10954 \"Tamura, G. et al. “Towards practical runtime verification and validation of self-adaptive software systems.” Self-Adaptive Systems, LNCS 7475, Springer-Verlag, 2013.\"), Zhang and other researchers at Michigan State University[12](https://intelligence.org/2014/04/09/diana-spears/#footnote_11_10954 \"Zhang et al. “Modular verification of dynamically adaptive systems.” AOSD’09.\"), and Italian researchers who are building on Zhang’s work[13](https://intelligence.org/2014/04/09/diana-spears/#footnote_12_10954 \"Sharifloo, A. and Spoletini, P. “LOVER: Light-weight formal verification of adaptive systems at run time.” Formal Aspects of Component Software. Lecture Notes in Computer Science Volume 7684, pp. 170-187, 2013.\"). Also, Musliner and Pelican (Honeywell Labs), along with Goldman (SIFT, LLC) started building on my *incremental* re-verification work in particular – back in 2005[14](https://intelligence.org/2014/04/09/diana-spears/#footnote_13_10954 \"Musliner , D. et al. “Incremental verification for on-the-fly controller synthesis.” MoChArt’05.\"), and apparently Musliner is still doing verification and validation (V&V) of adaptive systems.\n\n\nNow I’ll respond to your second question about natural “next steps” for this particular line of research. I believe that all of the research mentioned above is exciting and shows promise. But I want to particularly emphasize the NASA/SRI direction as potentially fruitful for the future. This is based on my personal experiences with machine learning, formal methods, V&V, and AI in general. I have always found that formal methods and other logic-based approaches are, for computational reasons, difficult to scale up to complex real-world problems. 
Over the course of my career, I have leaned more towards stochastic methods for machine learning, and run-time checking for V&V. Therefore I hope that the aerospace researchers will continue in the directions they have adopted. However I also believe that they should widen their horizons. There are numerous techniques currently available for runtime monitoring and checking, e.g., see the online software self-checking, self-testing, and self-correcting methods of Ronitt Rubinfeld[15](https://intelligence.org/2014/04/09/diana-spears/#footnote_14_10954 \"Rubinfeld, R. Checking.\"), or the run-time monitoring and checking of Insup Lee and Oleg Sokolsky[16](https://intelligence.org/2014/04/09/diana-spears/#footnote_15_10954 \"Sokolsky, O. Selected Recent Publications by Subject\")**.** I believe it would be potentially very fruitful to explore how many of the available monitoring and checking techniques are applicable to the behavioral assurance of adaptive systems.\n\n\nBut, most importantly, there is a topic that is critical to the future of building trustworthy adaptive systems and needs to be explored in great depth. That’s the issue of self-recovery/repair. Since around 1998-1999, my colleagues and I have been addressing self-repair in the context of swarm robotics[17](https://intelligence.org/2014/04/09/diana-spears/#footnote_16_10954 \"Gordon, D. et. al. “Distributed spatial control, global monitoring and steering of mobile physical agents.” ICIIS’99.\")[18](https://intelligence.org/2014/04/09/diana-spears/#footnote_17_10954 \"Spears, W. and Spears, D. (Eds.) “Physicomimetics: Physics-Based Swarm Intelligence.” Springer-Verlag, 2012.\"). Our research focuses primarily on physics-based approaches to controlling swarm robotic formations – because physics naturally obeys the “principle of least action,” i.e., if a formation is disturbed then it will automatically perform the minimal actions required to repair the disturbance. 
This repair is locally optimal but is not necessarily globally optimal. In other words, we have dropped the requirement of global optimality, focusing on “satisficing” behavior instead.  Organic and natural physical systems are not perfect, but their lack of perfection often makes them more robust. There are systems where we need precise guarantees of behavior (e.g., the dynamic control of an airplane wing, to ensure that the plane does not stall and crash). But for other tasks, perfection and optimality are not even relevant (e.g., the Internet). We have demonstrated the feasibility of our research both in simulation and on real robots on numerous tasks, including uniform coverage, chain formations, surveillance, the movement of formations through environments with obstacles, and chemical source localization[19](https://intelligence.org/2014/04/09/diana-spears/#footnote_18_10954 \"Spears, W. and Spears, D. (Eds.) 2012.\"). Hopefully other researchers will also explore physics-based systems. I believe that an excellent “safe adaptive (multi)agent” architecture would consist of a physics-based controller at the lower level, with a monitor/checker at a higher level to provide strict behavioral guarantees when needed. In particular, a more sophisticated version of our original design in [[17](http://intelligence.org/?p=10954&preview=true#footnote_16_10954)] would be quite promising.\n\n\nIn summary, I sincerely hope that the above-mentioned research will continue in the fruitful directions it has already taken, and I also hope that students and researchers will pursue additional, novel research along these lines. 
It seems to me that the topic of “safe adaptation” is “low-hanging fruit.” DARPA[20](https://intelligence.org/2014/04/09/diana-spears/#footnote_19_10954 \"DARPA-sponsored ISAT meeting on “Trustable Deployed Adaptive Systems” at SRI, 2006.\") and other funding agencies have expressed to me their desire to fund research on this topic – because it must be satisfactorily addressed if we are to have deployable, adaptive systems that we can trust.\n\n\n\n\n---\n\n\n**Luke**: In the lines of work you outlined above, what kinds of AI-like functionality are included in the parts of the code that are actually verified? E.g. does the verified code include classical planning algorithms, modern planning algorithms, logical agent architectures, or perhaps even machine learning algorithms in some cases?\n\n\n\n\n---\n\n\n**Diana**: The code that gets verified consists of reactive, “anytime” plans, which are plans that get continually executed in response to internal and external environmental conditions. Each agent’s plan is a finite-state automaton (FSA), which consists of states and state-to-state transitions. Each state in the FSA corresponds to a subtask of the overall task (which is represented by the entire FSA). And each transition corresponds to an action taken by the agent. In general, there are multiple transitions exiting each state, corresponding to the choice of action taken by the agent. For example, consider the scenario I described in one of my previous answers in this interview, i.e., that of a planetary lander along with rovers. Two possible states for a planetary lander L might be “TRANSMITTING DATA” and “RECEIVING DATA.” Suppose the lander is in the former state. If it takes the action “PAUSE” then it will stay in its current state, but if it takes the action “TRANSMIT” then after this action has completed the lander will transition to the latter state.  
Furthermore, the conditions for transitioning from one state to the next depend not only on what action the agent takes, but also on what’s going on in the environment, including what this agent observes the other agents (e.g., the rovers) doing. For this reason, I call the plans “reactive.”\n\n\nEvery FSA has an initial state, but no final state. The philosophy is that the agents are indefinitely reactive to environmental conditions subsequent to task initiation, and their task is continually ongoing. FSAs are internally represented as finite graphs, with vertices (nodes) corresponding to behavioral states and directed edges corresponding to state-to-state transitions.\n\n\nMachine learning (ML) is applied to the FSA plans for the purposes of agent initialization and adaptation. Learning is done with evolutionary algorithms (EAs), using traditional generalization and specialization operators. These operators add, delete, move, or modify vertices and edges, as well as actions associated with the edges.  For example, suppose the lander’s transition from its “TRANSMITTING DATA” to its “RECEIVING DATA” state depends not only on its own “TRANSMIT” action, but it also requires that rover R1 successfully received the data transmitted by lander L before the lander can make this state-to-state transition in its FSA. This is a very reasonable requirement. Now suppose that R1’s communication apparatus has catastrophically failed. Then L will need to adapt its FSA by modifying the requirement of checking R1’s receipt of the transmission before it can transition to its “RECEIVING DATA” state. One possibility is that it replaces “R1” with “R2” in its FSA. Many other alternative learning operators are of course also possible, depending on the circumstances.\n\n\nMachine learning is assumed to occur in two phases: offline then online. 
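The FSA representation and the R1-to-R2 learning operator just described might be sketched as follows (a hypothetical mini-example with invented names, not the thesis code):

```python
# Each transition: (source_state, action) -> (target_state, guard), where the
# guard names a condition on other agents that must hold for the transition.
lander_plan = {
    ("TRANSMITTING DATA", "PAUSE"):    ("TRANSMITTING DATA", None),
    ("TRANSMITTING DATA", "TRANSMIT"): ("RECEIVING DATA", "R1_received"),
}

def modify_guard(plan, edge, new_guard):
    """An edit operator in the spirit of the generalization/specialization
    operators above: rewrite one transition's guard and report which edges
    were touched, so re-verification can be localized to just those edges."""
    target, _old_guard = plan[edge]
    plan[edge] = (target, new_guard)
    return {edge}

def reverify(plan, edges, failed_agents):
    """Toy localized re-check: a transition is flagged if its guard still
    depends on an agent known to have failed."""
    return [e for e in edges
            if plan[e][1] and any(plan[e][1].startswith(a)
                                  for a in failed_agents)]

# R1's communication apparatus has failed: swap the dependency to R2, then
# re-verify only the touched edges instead of the entire plan.
touched = modify_guard(lander_plan,
                       ("TRANSMITTING DATA", "TRANSMIT"), "R2_received")
unsafe = reverify(lander_plan, touched, {"R1"})  # no flagged edges remain
```

Knowing exactly which edges a learning operator touched is what makes the incremental re-verification described earlier so much cheaper than re-verifying the whole plan.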
During the offline initialization phase, each agent starts with a randomly initialized population of candidate FSA plans, which is then evolved using evolutionary algorithms. The main loop of the EA consists of selecting parent plans from the population, applying ML operators to produce offspring, evaluating the fitness of the offspring, and then returning the offspring to the population if they are sufficiently “fit.” After evolving a good population of candidate plans, the agent then selects the “best” (according to its fitness criteria) plan from its population. Verification is then performed on this plan, as well as repair if required. During the online phase, the agents are fielded and plan execution is interleaved with learning (adaptation to environmental changes, such as agent hardware failures), re-verification, and plan repair as needed.\n\n\nThe main point of my “Asimovian adaptive agents” paper is that by knowing what adaptation was done by the agent, i.e., what machine learning operator was applied to the FSA, we can streamline the re-verification process enormously.\n\n\n\n\n---\n\n\n**Luke**: AI systems are becoming increasingly autonomous in operation: [self-driving cars](http://en.wikipedia.org/wiki/Autonomous_car), robots that [navigate disaster sites](http://www.darpa.mil/Our_Work/TTO/Programs/DARPA_Robotics_Challenge.aspx), [HFT](http://en.wikipedia.org/wiki/High-frequency_trading) programs that trade stocks quickly enough to “[flash crash](http://en.wikipedia.org/wiki/2010_Flash_Crash)” the market or [nearly bankrupt](http://en.wikipedia.org/wiki/Knight_Capital_Group#2012_stock_trading_disruption) a large equities trader, etc.\n\n\nHow might current AI safety methods (formal verification and reverification, program synthesis, simplex architectures, hybrid systems control, [layered architectures](http://commonsenseatheism.com/wp-content/uploads/2013/08/Fisher-et-al-Verifying-Autonomous-Systems.pdf), etc.) 
be extended to meet the safety challenges that will be raised by the future’s highly autonomous systems operating in unknown, continuous, dynamic environments? Do you think our capacity to make systems more autonomous and capable will outpace our capacity to achieve confident safety assurances for those systems?\n\n\n\n\n---\n\n\n**Diana**: My response to your first question is problem- and context-dependent. I know of many AI communities that are built around a single algorithm, and the researchers in those communities try to apply this algorithm to as many problems as possible. I believe that’s a misguided approach to research. Instead, I have always tried to adopt a problem-driven approach to research. The best way to scientifically solve a problem is to study it in great depth, and based on the *a priori* problem/task analysis select the most appropriate solution — including the planner or problem-solver, the properties/constraints to be verified, the adaptation method(s), etc. This will require a large suite of different AI safety/verification methods from which to choose. I cannot foresee the nature of this suite in advance; it’ll have to be constructed based on experience. As we tackle more complex autonomous systems, our repertoire of verification techniques will grow commensurately.\n\n\nYour second question about whether autonomy will outpace safety is an extremely important one, Luke. Based on the applications you listed in your first paragraph, I see that your concerns extend to security. In fact, your security concerns also apply to “[the internet of things](http://www.economist.com/news/science-and-technology/21594955-when-internet-things-misbehaves-spam-fridge),” which includes electronic, interconnected, remotely-accessible autonomous devices such as washing machines, ovens, and thermostats that will be installed in “smart homes” of the future. Businesses usually lack the motivation to install safety and security measures without some incentive. 
For example, leading software companies release beta versions of their programs with the expectation that the public will find and report the bugs. This is unacceptable as we transition to increasingly capable and potentially hazardous autonomous systems. I believe that the primary incentive will be government regulations. But we can’t wait until disasters arise before putting these regulations in place! Instead, we need to be proactive.\n\n\nIn 2008-2009 I was a member of an [American Association for the Advancement of Artificial Intelligence (AAAI) Presidential Panel](http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm) to study these issues. This was a fabulous panel, and it brought awareness to the AI community of researchers. Nevertheless it’s time to raise awareness beyond the community of AI researchers. One suggestion I have is to assign a new or existing member of the [United States President’s Council of Advisors on Science and Technology](http://en.wikipedia.org/wiki/United_States_President%27s_Council_of_Advisors_on_Science_and_Technology) the task of studying and assessing the safety and security of autonomous systems. This council member should consult the following people:\n\n\n1. AI researchers who have extensive experience in developing autonomous systems\n2. Engineers from aerospace, transportation, and other applications where safety is paramount\n3. Lawyers and lawmakers who are cognizant of the legal issues that could arise\n4. Cyber security experts.\n\n\nI assume this council member would research the topic, consult others, conduct meetings, and conclude with a report and recommendations. Furthermore, I strongly believe that such a task should be assigned as soon as possible. 
We are *already* in a state where autonomy is outpacing safety and security, particularly in the commercial sector outside of the transportation industry.\n\n\n\n\n---\n\n\n**Luke**: Given that “autonomy is outpacing safety and security,” what are some other ideas you have for increasing the odds of reliably good outcomes from future autonomous systems?\n\n\nBy the way, I’ve only ever seen an “interim” [report](http://research.microsoft.com/en-us/um/people/horvitz/note_from_AAAI_panel_chairs.pdf) from that AAAI panel. Is there supposed to be a follow-up report at some point?\n\n\n\n\n---\n\n\n**Diana**: I haven’t heard about any follow-up or final report for the AAAI panel, unfortunately.\n\n\nOne idea is that we should have extensive safety and security testing prior to product release, based on established industry/government standards. We may not be able to enforce 100% compliance, but the presence of something like a “Safe and Secure Autonomous Product” certification could motivate consumers to favor purchasing tested and certified products over others lacking compliance. This would be like the existing [UL product certification](http://www.ul.com/global/eng/pages/offerings/perspectives/newtoul/ulmarkproductcertification/).\n\n\nAnother idea is to have monitoring, safety shutoff, self-recovery, and self-repair capabilities associated with autonomous devices. Furthermore, for security reasons these mechanisms should be decoupled from the autonomous system’s control, and they should also be detached from communications (e.g., not connected to the Internet) so as to avoid malicious tampering.\n\n\nI don’t believe it’s possible to ensure complete safety and security at all times with autonomous systems. 
As you stated above, the best we can do is to increase “the odds of reliably good outcomes.” Nevertheless, I believe that substantial progress can be made if there is financial, technical, legal and programmatic support in this direction.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Diana!\n\n\n\n\n---\n\n1. Perkins, T. and Barto, A. “[Lyapunov design for safe reinforcement learning control.](http://www.aaai.org/Papers/Symposia/Spring/2002/SS-02-07/SS02-07-005.pdf)” Proceedings of AAAI’02.\n2. Margineantu, Schumann, Gupta, Drumheller, and Fresnedo (co-chairs). Workshop on “[Verification, validation, and testing of learning systems.](http://www.dmargineantu.net/nips2004/)” NIPS’04.\n3. Dietterich, T. “[Research issues in deployed adaptive systems.](http://www.dmargineantu.net/NIPS06-TDLDS/TDLDS-NIPS06.Dietterich.pdf)” NIPS’06.\n4. Schumann, J. “[Applications of Neural Networks in High Assurance Systems.](http://www.springer.com/engineering/computational+intelligence+and+complexity/book/978-3-642-10689-7)” Springer-Verlag, 2010.\n5. Stepanyan, V. et al., “[Stability and performance metrics for adaptive flight control.](http://arc.aiaa.org/doi/abs/10.2514/6.2009-5965)” AIAA’09.\n6. Rushby, J. “[A safety-case approach for certifying adaptive systems.](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.159.905)” AIAA’09.\n7. Tiwari, A. “[Bounded verification of adaptive flight control systems.](http://arc.aiaa.org/doi/abs/10.2514/6.2010-3362)” AIAA’10.\n8. Schurr, N. et al. “[Asimovian multiagents: Laws of robotics to teams of humans and agents.](http://www.aptima.info/publications/2006_Schurr_Varakantham_Bowring_Tambe_Grosz.pdf)” 2006.\n9. Schurr, N. “[Toward human-multiagent teams.](http://teamcore.usc.edu/schurr/SchurrThesis.pdf)” USC Ph.D. dissertation, 2007.\n10. Levine, G. et al. 
“[Learning and verifying safety constraints for planners in a knowledge-impoverished system.](http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8640.2012.00416.x/abstract)” Computational Intelligence 28 (3), 2012.\n11. Tamura, G. et al. “[Towards practical runtime verification and validation of self-adaptive software systems.](http://link.springer.com/chapter/10.1007%2F978-3-642-35813-5_5#page-1)” Self-Adaptive Systems, LNCS 7475, Springer-Verlag, 2013.\n12. Zhang et al. “[Modular verification of dynamically adaptive systems.](http://www.cse.msu.edu/~mckinley/Pubs/files/Zhang.AOSD.2009.pdf)” AOSD’09.\n13. Sharifloo, A. and Spoletini, P. “[LOVER: Light-weight formal verification of adaptive systems at run time.](http://link.springer.com/chapter/10.1007/978-3-642-35861-6_11)” Formal Aspects of Component Software. Lecture Notes in Computer Science Volume 7684, pp. 170-187, 2013.\n14. Musliner, D. et al. “[Incremental verification for on-the-fly controller synthesis.](http://rpgoldman.goldman-tribe.org/papers/mochart05.pdf)” MoChArt’05.\n15. Rubinfeld, R. [Checking](http://www.cs.cornell.edu/faculty/home/ronitt/ronitt.html).\n16. Sokolsky, O. [Selected Recent Publications by Subject](http://www.cis.upenn.edu/~sokolsky/papers.html).\n17. Gordon, D. et al. “[Distributed spatial control, global monitoring and steering of mobile physical agents.](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1310&context=cis_papers)” ICIIS’99.\n18. Spears, W. and Spears, D. (Eds.) “[Physicomimetics: Physics-Based Swarm Intelligence.](http://www.springer.com/computer/ai/book/978-3-642-22803-2)” Springer-Verlag, 2012.\n19. Spears, W. and Spears, D. (Eds.) 2012.\n20. 
DARPA-sponsored ISAT meeting on “Trustable Deployed Adaptive Systems” at SRI, 2006.\n\nThe post [Diana Spears on the safety of adaptive agents](https://intelligence.org/2014/04/09/diana-spears/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-09T20:10:21Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "be20367b2d3e73e6324b2cb1586c67fb", "title": "Will MacAskill on normative uncertainty", "url": "https://intelligence.org/2014/04/08/will-macaskill/", "source": "miri", "source_type": "blog", "text": "![William MacAskill portrait](http://intelligence.org/wp-content/uploads/2014/04/MacAskill_w606.jpg) [Will MacAskill](http://www.williammacaskill.com/) recently completed his DPhil at Oxford University and, as of October 2014 will be a Research Fellow at Emmanuel College, Cambridge.\n\n\nHe is the cofounder of [Giving What We Can](http://www.givingwhatwecan.org/) and [80,000 Hours](http://80000hours.org/). He’s currently [writing a book](http://www.effective-altruism.com/announcing-a-forthcoming-book-on-effective-altruism/), *Effective Altruism*, to be published by Gotham (Penguin USA) in summer 2015.\n\n\n\n**Luke Muehlhauser**: In [MacAskill (2014)](http://commonsenseatheism.com/wp-content/uploads/2014/03/MacAskill-Normative-Uncertainty.pdf) you tackle the question of normative uncertainty:\n\n\n\n> Very often, we are unsure about what we ought to do… Sometimes, this uncertainty arises out of empirical uncertainty: we might not know to what extent non-human animals feel pain, or how much we are really able to improve the lives of distant strangers compared to our family members. 
But this uncertainty can also arise out of fundamental normative uncertainty: out of not knowing, for example, what moral weight the wellbeing of distant strangers has compared to the wellbeing of our family; or whether non-human animals are worthy of moral concern even given knowledge of all the facts about their biology and psychology.\n> \n> \n> …one might have expected philosophers to have devoted considerable research time to the question of how one ought to take one’s normative uncertainty into account in one’s decisions. But the issue has been largely neglected. This thesis attempts to begin to fill this gap.\n> \n> \n\n\nIn the first part of your thesis you argue that when the moral theories to which an agent assigns some credence are cardinally measurable (as opposed to ordinal-scale) *and* they are intertheoretically comparable, the agent should choose an action which “maximizes expected choice-worthiness” (MEC), which is akin to maximizing expected value across multiple uncertain theories of what is desirable.\n\n\nI suspect that result will be intuitive to many, so let’s jump forward to where things get more interesting. You write:\n\n\n\n> Sometimes, [value] theories are merely ordinal, and, sometimes, even when theories are cardinal, choice-worthiness is not comparable between them. In either of these situations, MEC cannot be applied. In light of this problem, I propose that the correct metanormative theory is sensitive to the different sorts of information that different theories provide. 
In chapter 2, I consider how to take normative uncertainty into account in conditions where all theories provide merely ordinal choice-worthiness, and where choice-worthiness is noncomparable between theories, arguing in favour of the *Borda Rule*.\n> \n> \n\n\nWhat is the Borda Rule, and why do you think it’s the best action rule under these conditions?\n\n\n\n\n---\n\n\n**Will MacAskill**: Re: “I suspect that result will be intuitive to many.” Maybe in your circles that’s true! Many, or even most, philosophers get off the boat way before this point. They say that there’s no sense of ‘ought’ according to which what one ought to do takes normative uncertainty into account. I’m glad that I don’t have to defend that for you, though, as I think it’s perfectly obvious that the ‘no ought’ position is silly.\n\n\nAs for the Borda Rule: the Borda Rule is a type of voting system, which works as follows. For each theory, an option’s *Borda Score* is equal to the number of options that rank lower in the theory’s choice-worthiness ordering than that option. An option’s *Credence-Weighted Borda Score* is equal to the sum, across all theories, of the decision-maker’s credence in the theory multiplied by the Borda Score of the option, on that theory.\n\n\nSo, for example, suppose I have 80% credence in Kantianism and 20% credence in Contractualism. (Suppose I’ve had some very misleading evidence….) Kantianism says that option A is the best option, then option B, then option C. 
Contractualism says that option C is the best option, then option B, then option A.\n\n\nThe Borda scores, on Kantianism, are: \n\nA = 2 \n\nB = 1 \n\nC = 0\n\n\nThe Borda scores, on Contractualism, are: \n\nA = 0 \n\nB = 1 \n\nC = 2\n\n\nEach option’s Credence-Weighted Borda Score is: \n\nA = 0.8\\*2 + 0.2\\*0 = 1.6 \n\nB = 0.8\\*1 + 0.2\\*1 = 1 \n\nC = 0.8\\*0 + 0.2\\*2 = 0.4\n\n\nSo, in this case, the Borda Rule would say that A is the most appropriate option, followed by B, and then C.\n\n\nThe reason we need to use some sort of voting system is because I’m considering, at this point, only *ordinal* theories: theories that tell you that it’s better to choose A over B (alt: that “A is more choice-worthy than B”), but won’t tell you *how much* more choice-worthy A is than B. So, in these conditions, we have to have a theory of how to take normative uncertainty into account that’s sensitive *only* to each theory’s choice-worthiness ordering (as well as the degree of credence in each theory), because the theories I’m considering don’t give you anything more than an ordering.\n\n\nThe key reason why I think the Borda Rule is better than any other voting system is that it satisfies a condition I call Updating Consistency. The idea is that increasing your credence in some particular theory T1 shouldn’t make the appropriateness ordering (that is, the ordering of options in terms of what-you-ought-to-do-under-normative-uncertainty) *worse* by the lights of T1.\n\n\nThis condition seems to me to be *very plausible indeed*. But, surprisingly, very few voting systems satisfy that property, and those others that do have other problems. \n\n\n\n\n\n\n---\n\n\n**Luke**: One problem for the Borda Rule is that it is, as you say, “extremely sensitive to how one individuates options” — a problem analogous to the problem of [clone-dependence](http://en.wikipedia.org/wiki/Independence_of_clones_criterion) in voting theory. 
You tackle this problem by modifying the Borda Rule to include a measure over the set of all possible options. Could you explain how that works? Also, is this modification to the Borda Rule novel to your thesis?\n\n\n\n\n---\n\n\n**Will**: A measure is a way of giving sense to the size of a space. It allows us to say that some options represent a larger part of possibility space than others. This is an intuitive idea: ‘drinking tea tomorrow’ represents a larger portion of possibility space than ‘drinking tea with my left hand tomorrow at 3pm’. With the idea of a measure on board, we can rewrite our definition of the Borda Rule, as follows (I’ll ignore the possibility of equally choice-worthy options, as that makes the definition a little more complicated):\n\n\nFor each theory, an option’s *Borda Score* is equal to the sum total of the measure of the options that rank lower in the theory’s choice-worthiness ordering than that option. An option’s *Credence-Weighted Borda Score* is equal to the sum, across all theories, of the decision-maker’s credence in the theory multiplied by the Borda Score of the option, on that theory.\n\n\nSo, suppose that, according to some theory Ti, A>B. On the old definition of the Borda Rule, A gets a Borda Score of 1. But if option B is split into options B’ and B”, such that A>B’=B”, then A gets a Borda score of 2. The fact that A’s score has changed just because of how you’ve individuated options is problematic.\n\n\nBut let’s use the new definition, which incorporates a measure, and suppose that the measure of A is 0.5 and the measure of B is 0.5. If so, then, when the decision-maker faces options A and B, then A gets a Borda Score of 0.5, on Ti. But when option B is split into options B’ and B”, then the measure is split, too. Suppose that B’ and B” are equally large. If so, then B’ would have a measure of 0.25 and B” would have a measure of 0.25. 
So A’s Borda Score, on Ti, would be 0.5, just as before.\n\n\nThis modification to the Borda Rule is novel, though the idea was given to me by Owen Cotton-Barratt, so I can’t take credit! I guess the reason it hasn’t been suggested in the voting theory literature is that it might seem obvious that every candidate gets the same measure. But perhaps you could think of the ‘space’ of possible political positions (which would be easy if it were really a left-right spectrum), and assign candidates a measure based on how much of this space they take up. That could possibly allow for the Borda Rule to avoid problems to do with clone-dependence. But I think that for actual voting systems, the Schulze method is better than the Borda Rule. It’s clone-independent even without a measure, and is much less vulnerable to strategic voting than the Borda Rule is.\n\n\n\n\n---\n\n\n**Luke**: What is the relevance of [Arrow’s impossibility theorem](http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem) to your suggested use of a modified Borda Rule for handling normative uncertainty?\n\n\n\n\n---\n\n\n**Will**: I suggest that, in conditions of ordinal theories, we should exploit the analogy with voting. But that analogy with voting means that we’ll run into an analogue of Arrow’s Impossibility Theorem: the result that no voting system can satisfy all of a number of highly desirable properties.\n\n\nThere are a few ways to formulate the impossibility result. The strongest, in my view, is to say that any voting system that satisfies other more essential properties must violate Contraction Consistency, where Contraction Consistency is defined as follows: \n\nLet A be the option set, and M be the set of maximally appropriate options within A. Let S be a subset of A that contains all members of M. The condition is: an option is a maximally appropriate option, given option set S, iff it is a member of M.\n\n\nIt’s a condition that you’ve got to be careful how to formulate.
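The two computations above (the credence-weighted Borda scores from the earlier example, and the measure-weighted variant that survives splitting an option) can be sketched in a few lines of Python. The encoding of rankings as lists and measures as dicts, and the function names, are my own illustration rather than anything from the thesis:

```python
# Sketch of the (measure-weighted) Borda Rule for ordinal theories.
# Rankings, measures, and function names are illustrative only.

def borda_scores(ranking, measure):
    """An option's Borda Score under one theory: the total measure of the
    options ranked strictly below it (ties are ignored for simplicity)."""
    scores, below = {}, 0.0
    for option in reversed(ranking):      # walk from worst to best
        scores[option] = below
        below += measure[option]
    return scores

def credence_weighted(theories, measure):
    """theories: list of (credence, ranking) pairs."""
    total = {option: 0.0 for option in measure}
    for credence, ranking in theories:
        for option, score in borda_scores(ranking, measure).items():
            total[option] += credence * score
    return total

# The first example, with every option given measure 1, which recovers
# the original, unweighted Borda Rule:
uniform = {"A": 1, "B": 1, "C": 1}
theories = [(0.8, ["A", "B", "C"]),   # Kantianism: A > B > C
            (0.2, ["C", "B", "A"])]   # Contractualism: C > B > A
print(credence_weighted(theories, uniform))  # A: 1.6, B: 1.0, C: 0.4

# The measure fixes the individuation problem: splitting B into B' and B",
# each with half of B's measure, leaves A's score at 0.5, just as before.
split = {"A": 0.5, "B1": 0.25, "B2": 0.25}
print(credence_weighted([(1.0, ["A", "B1", "B2"])], split))  # A: 0.5
```

The first call reproduces the worked scores (A = 1.6, B = 1, C = 0.4), and A's score of 0.5 is unchanged by splitting B, as described.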
I don’t go into that in my thesis. But some violations of it are intuitively clearly irrational. E.g. imagine you’ve got the options of blueberry ice cream or strawberry ice cream. You currently prefer blueberry. But then you discover that the restaurant also serves chocolate ice cream, and so you switch your preference from blueberry to strawberry, even though your assessment of the relative values of blueberry and strawberry hasn’t changed. That seems irrational – e.g. it would suggest that you should spend resources trying to find out whether you have available to you an option that you know you won’t take.\n\n\nI think that Contraction Consistency is a problem for the Borda Rule. But it’s a problem that affects all voting system analogues. So it’s something that we’ve got to live with – it’s just unfortunate that we have (or ought to have) credence in merely ordinal theories.\n\n\nThere is a second response as well, which is to distinguish the Narrow and Broad versions of the Borda Rule. The Narrow version assigns Borda Scores within an option-set. The Broad version assigns Borda Scores across all possible options. It’s only the Narrow version that violates Contraction Consistency. But the Broad version has its own weirdnesses. Suppose that you’ve got a situation:\n\n\n99%: T1: A>B \n\n1%: T2: B>A\n\n\nWhere T1 and T2 are merely ordinal theories. It might seem obvious that you should pick A in that situation. But you can’t infer that from that case, according to the Broad Borda Rule. Instead, you’ve got to look at how A and B rank in T1 and T2’s ordering of all possible options. If A and B are very close on T1 but very far apart on T2, then B might be the most appropriate option. So the Broad Borda Rule is very difficult to use. And it gives results that seem wrong to me – as if you’re ‘faking’ cardinality where there is none.\n\n\nSo my general view on this is that any account you have will have deep problems.
Endorsing a particular view involves carefully weighing up different strengths and weaknesses; there’s no obviously correct position. (This becomes a theme when you start working on normative uncertainty. To an extent, this should be expected: we’re dealing with messy nonideal agents, who don’t have perfect access to their own values or to the normative truth.)\n\n\n\n\n---\n\n\n**Luke**: Your thesis covers many other interesting topics; we won’t try to cover them all here. How would you summarize the other major “takeaways” you’d most want people to know about from your thesis?\n\n\n\n\n---\n\n\n**Will**: The “Maximise Expected Choice-Worthiness” approach to moral uncertainty is the best approach. It is able to respond to a number of objections that have been levelled against it.\n\n\nIf you think that you can’t compare choice-worthiness across theories, then you should normalise different theories at their variance. But I think that the arguments for intertheoretic incomparability don’t work. Instead, you should feel comfortable about using your intuitions about how different theories compare.\n\n\nWe can make sense of the idea of two theories T1 and T2 having exactly the same value-ordering over options, but T1 thinking that everything is twice as important as T2 does. So ‘utilitarianism’ really refers to a class of theories, each at a different level of amplification.\n\n\nMost of our intuitions about different Newcomb and related problems can be captured by maximising expected choice-worthiness over uncertainty about whether evidential or causal decision theory is true (with much higher credence in causal decision theory than evidential decision theory).
Taking decision-theoretic uncertainty into account puts causal decision theory on pretty strong grounds — you can respond to the intuitive and “Why Ain’cha Rich?” arguments in favour of evidential decision theory.\n\n\nMoral philosophy provides a bargain in terms of gaining new information: doing just a bit of philosophical study or research can radically alter the value of one’s options. So individuals, philanthropists, and governments should all spend a lot more resources on researching and studying ethics than they currently do.\n\n\nEven if you think that continued human survival is net bad, you should still work to prevent near-term human extinction, in case the human race gets evidence to the contrary over the next few centuries. (Well, this is true given a few fairly controversial premises.)\n\n\n\n\n---\n\n\n**Luke**: Thanks, Will!\n\n\nThe post [Will MacAskill on normative uncertainty](https://intelligence.org/2014/04/08/will-macaskill/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-08T17:24:42Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "eda98457b10f266fdbaca338aedfcb74", "title": "Erik DeBenedictis on supercomputing", "url": "https://intelligence.org/2014/04/03/erik-debenedictis/", "source": "miri", "source_type": "blog", "text": "![Erik DeBenedictis portrait](http://intelligence.org/wp-content/uploads/2014/04/DeBenedictis_w150.jpg) [Erik DeBenedictis](http://debenedictis.org/erik/) works for Sandia’s Advanced Device Technologies department. He has been a member of the [International Technology Roadmap for Semiconductors](http://en.wikipedia.org/wiki/International_Technology_Roadmap_for_Semiconductors) since 2005.\n\n\nDeBenedictis received a Ph.D. in computer science from Caltech. As a grad student and post-doc, he worked on the hardware that turned into the first hypercube multiprocessor computer.
Later dubbed the “[Cosmic Cube](http://en.wikipedia.org/wiki/Caltech_Cosmic_Cube),” it ran for more than a decade after he left the university and was copied over and over. It’s considered the ancestor of most of today’s supercomputers.\n\n\nIn the 1980s, then working for Bell Labs in Holmdel, N.J., DeBenedictis was part of a consortium competing for the first Gordon Bell award. The team got the second place award, the first place going to Sandia. During the 1990s, he ran NetAlive, Inc., a company developing information management software for desktops and wireless systems. Starting in 2002, DeBenedictis was one of the project leads on the Red Storm supercomputer.\n\n\n\n\nThe opinions expressed by Erik below are his own and not those of Sandia or the US Department of Energy. This document has been released by Sandia as SAND Number 2014-2679P.\n\n\n\n\n---\n\n\n**Luke Muehlhauser**: Some of your work involves reversible computing, which I previously [discussed](http://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/) with Mike Frank. Mike’s view seemed to be that there were promising signs that reversible computing would be possible eventually, but progress is not moving quickly due to lack of funding and interested researchers. Is that your view as well? And based on my interview with him, do you have a *substantially* different impression than Mike does about anything he and I discussed?\n\n\n\n\n---\n\n\n**Erik DeBenedictis**: I agree with Mike, but his discussion of minimum energy in computing due to irreversibility is just part of a larger topic of minimum energy in computing that starts with “Moore’s Law Ending.”\n\n\nFor any reader who has not read Mike Frank’s interview, I’d like to give a quick summary of the relevant points. Mike was interviewed about reversible logic, which is sometimes called reversible computing.
If you were a brilliant engineer and could figure out how to make a computer logic gate like AND or OR that dissipated kT joules per logic operation (the meaning of kT is in the next paragraph), you would discover that there is an additional heat production on the order of kT due to the interaction between information and thermodynamics. If you were determined to make even lower power computer gates anyway, you would have to use reversible logic principles. You could use a different universal gate set that would include a new gate such as the TOFFOLI or FREDKIN gate. You could also use regular gates (e.g. AND, OR, NOT) and a “retractile cascade” clocking scheme that reverses the computation after you capture the answer.\n\n\nFor reference on kT: k = 1.38 x 10^-23 Joules/Kelvin is Boltzmann’s constant and T is the absolute temperature with T = 300 Kelvin at room temperature. kT is about 4 zeptojoules = 4 x 10^-21 Joules. Comparing this number to today’s computers is imprecise because dissipation in today’s computers is primarily attributable to the interconnect wire, which varies in length. An AND or OR gate in a modern computer may dissipate a million times this value.\n\n\nA great many respected scientists believe that reversible computing is feasible, but challenging. If their views are correct, computation should be possible at “arbitrarily low energy levels” and all theories proposing unavoidable, general limits are incorrect. There are a handful of contrary theories proposing minimum energy dissipation levels for computation. Several key ones are Landauer’s Limit of “on the order of kT” per logic operation[1](https://intelligence.org/2014/04/03/erik-debenedictis/#footnote_0_10946 \"Note added in review: Landauer proposed a lower limit on “on the order of kT” only for “irreversible” computations. As far as I know, the phrase “Landauer’s Limit” was created later by other people.
In my experience, the phrase “Landauer’s Limit” is often applied as a general limit.\"), a thermal limit of 40-100 kT (depending on your definition of reliable), and the concept in the popular press today that “Moore’s Law is Ending” and the minimum energy per computation is whatever is in the rightmost column of the International Technology Roadmap for Semiconductors (ITRS). That value is about 50,000 kT with typical lengths of interconnect wire.\n\n\nScientific situations with multiple competing theories can be settled by a scientific experiment. For example, there is a researcher in New York who has a superconducting circuit running in the sub-kT range, and it looks like he could demonstrate a logic circuit in another couple “spins” of his chip. Demonstrating and rigorously measuring a sub-kT circuit would invalidate all current theories claiming unavoidable limits.\n\n\nWhether anybody will fund such an experiment should depend on whether anybody cares about the result, and I’d like to present two society-level questions that the experiment would resolve:\n\n\nThe computer industry started its upward trend during WWII, growing industry revenue and computer throughput in a fairly clean exponential lasting 70 years. The revenue from semiconductors and downstream industries is around $7 trillion per year right now. If there is a lower energy limit to computing, the shift in growth rate will cause a glitch in the world’s economy. My argument is that proving or disproving theories of computing limits could be accomplished for a very small fraction of $7 trillion per year.\n\n\nThe second has to do with profoundly important computational problems, such as the simulation of the global environment to assess climate change issues. Existing climate models running on petaflops supercomputers give varying projections for the future climate, with these projections diverging from observations over the last decade.
Regardless of politics, the remedy would be a more sophisticated climate model running on a bigger supercomputer. We don’t know how much bigger, but a zettaflops or more has been mentioned in this context. If any of the minimum energy dissipation theories are correct, the energy dissipation of the required supercomputer could turn out to be too large and climate modeling may be infeasible; if the theory that computing is possible at “arbitrarily low levels” is true, accurate climate modeling will just require a highly-advanced computer.\n\n\nI’ve tried to expand on Mike’s point: Research on reversible computing could shed light on the future of the economy and the planet’s climate, but I do not know of a single person funded for reversible computing research. Furthermore, a conclusive demonstration of reversible computing would show that there is plenty of room for improving computer efficiency and hence performance. If “Moore’s Law is Ending” means an end to improving computer efficiency, validating reversible computing would show this to be a matter of choice not technology.\n\n\n\n\n---\n\n\n**Luke**: From your perspective, what are the major currently-foreseeable barriers that Moore’s law might crash into *before* hitting the Landauer limit? (Here, I’m thinking more about the economically important “[computations per dollar](http://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_8_10199)” formulations of Moore’s law rather than the “serial speed” formulation, which [hit a wall in 2004](http://www.amazon.com/The-Future-Computing-Performance-Level/dp/0309159512/).)\n\n\n\n\n---\n\n\n**Erik**: There is huge upside, but not necessarily for every application. 
The “computations per dollar” link in the question focused on the use of computers as a platform for strong Artificial Intelligence (AI), so I will comment specifically on that application: I wouldn’t be surprised to see AI accelerated by technology specifically for learning, like neural networks with specialized devices for the equivalent of synapses.\n\n\nLet’s consider (a) Moore’s Law 1965 to say 2020 and (b) Beyond Moore’s Law 2020+.\n\n\nFrom 1965 to 2020, the strategy was to shrink line width. That strategy will be good for a 10^12 or so increase in computations per dollar.\n\n\nI see the following classes of advances beyond 2020 that will each give maybe a 10-100x efficiency increase:\n\n\n1. More efficient implementation of the von Neumann architecture.\n2. More parallelism, with a commensurate increase in the difficulty of programming.\n3. Software improvements for more efficient execution (e.g. new computer languages and compilers to run general code on Graphics Processing Units).\n4. Better algorithms that solve a given problem with fewer computations.\n5. Accelerators, such as CPU+GPU today, extendable to CPU+GPU+various new accelerator types.\n6. Even at constant energy per gate operation, continued scaling in 2D and better use of the third dimension for reducing communications energy.\n7. Optical interconnect has upside, but optics is often oversold.\n8. Nanodevices with behavior different from a transistor that allow some computer functions to be done more efficiently. Examples: Memristors, analog components.\n9. Improved gate technology through adiabatic methods, sub-threshold or low-threshold operation, or probabilistic computing. Eventually, reversible computing (note below on this one).\n10. Alternative models of computation (i.e.
neuromorphic) that do not use gates as normally defined.\n\n\nIf the ten items in the list above yield an average of 1½ orders of magnitude increase in computations per dollar each, you have more upside than the entire run of Moore’s Law.\n\n\nIf a couple of the items in the list don’t pan out, you could achieve excellent results by concentrating on other paths. So I do not see a general technology crash anytime soon. However, certain specific applications may be dependent on just a subset of the list above (climate modeling was mentioned) and could be vulnerable to a limit.\n\n\nReversible computing plus continued reduction in manufacturing cost per device could extend upside potential tremendously.\n\n\nHowever, the necessary technology investment will be greater in the future for a less clear purpose. The message of Moore’s Law was very concise: industry and government invest in line width shrinkage and get a large payoff. In the future, many technology investments will be needed whose purposes have less clear messages.\n\n\nBottom line: In general, the path ahead is expensive but will yield a large increase in computations per dollar. Specific application classes could see limits, but they will have to be analyzed specifically.\n\n\n\n\n---\n\n\n**Luke**: What’s your opinion on whether, in the next 15 years, the [dark silicon problem](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) will threaten Moore’s Law (computations per dollar)?\n\n\n\n\n---\n\n\n**Erik**: I believe the dark silicon problem will negatively impact computations per dollar. The problem and the underlying energy efficiency problem are going to get worse at least until the cost of increased energy is greater than the cost of refining a solution and bringing it to production. That will happen eventually, but I believe the problem will persist longer than may be expected due to momentum against change.
However, you admit Moore’s Law has ended when you admit that there is a dark silicon problem.\n\n\nThe underlying cause of dark silicon is that technology scales device dimensions faster than it reduces power. This causes power per unit chip area to increase, which contradicts the key statement in Gordon Moore’s 1965 paper that defined Moore’s Law: “In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area.”\n\n\nThe mismatched scaling rates create a problem for computations per dollar. Today, the cost of buying a computer is approximately equal to the cost of supplying it with power over its lifetime. Unless power efficiency can be increased, improvements to computer logic will not benefit the user because the amount of computation they use will be limited by the power bill.\n\n\nThe mismatched scaling rates can be accommodated (but not solved) by turning off transistors (dark silicon), packing microprocessors with low energy-density functions like memory (a good idea, to a point), and specialization (described in your interview under [dark silicon problem](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/)).\n\n\nThe scaling rates could be brought together by more power-efficient transistors, such as the Tunnel Field Effect Transistor (TFET). However, this transistor type will only last a few generations. See [here](http://www.itrs.net/ITWG/Beyond_CMOS/2012Sept/2_Frank_ITRS%20ERD%209-21-2012.pdf).\n\n\nTheory says energy per computation can be made “arbitrarily small,” but R&D to exploit these issues will be expensive and disruptive. The leading approaches I am aware of are:\n\n\n*Adiabatic*. A fundamentally different approach to logic gate circuits. 
Example: Mike Frank’s 2LAL.\n\n\n*Certain low-voltage logic classes*: For example, see CMOS LP in arXiv 1302.0244 (which is not the same as ITRS CMOS LP).\n\n\n*Reversible computing*, the topic of [Mike Frank’s interview](http://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/).\n\n\nThe approaches above are disruptive, which I believe limits their popularity today. The approaches use different circuits from CMOS, which would require new design tools. New design tools would be costly to develop and would require retraining of the engineers who use them. Children learn the words “and” and “or” when they are about one year old, with these words becoming the basis of AND and OR in the universal logic basis of computers. To exploit some technologies that save computer power, you have to think in terms of a different logic basis like TOFFOLI, CNOT, and NOT. Some of the ideas above would require people to give up concepts that they learned as infants and have not had reason to question before.\n\n\n\n\n---\n\n\n**Luke**: What do you mean by “you admit Moore’s Law has ended when you admit that there is a dark silicon problem”? The computations-per-dollar Moore’s Law has held up at least through early 2011 (I haven’t checked the data after that), but we’ve known about the dark silicon problem since 2010 or earlier.\n\n\n\n\n---\n\n\n**Erik**: Moore’s Law has had multiple meanings over time, and is also part of a larger activity.\n\n\nThere was a very interesting [study by Nordhaus](http://aida.econ.yale.edu/~nordhaus/homepage/nordhaus_computers_jeh_2007.pdf) that revealed the peak computation speed of large computers experienced an inflection point around WW II and has been on an upwards exponential ever since.
Eyeballing figure 2 of his paper, I’d say the exponential trend started in 1935.\n\n\nGordon Moore published a paper in 1965 with the title “[Cramming more Components onto Integrated Circuits](http://commonsenseatheism.com/wp-content/uploads/2014/03/Moore-Cramming-more-components-onto-integrated-circuits.pdf)” that includes a graph of components per chip versus year. As I mentioned for a previous question, the text of the paper includes the sentence, “In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area.” The graph and previous sentence seem to me to be a subjective description of an underlying scaling rule that was formalized by Dennard in 1974 and is called Dennard scaling.\n\n\nI have sketched below Moore’s graph of components as a function of year with Nordhaus’ speed as a function of year on the same axes (a reader should be able to obtain the original documents from the links above, which are more compelling than my sketch). This exercise reveals two things: (1) the one-year doubling period in Moore’s paper was too fast, and is now known to be about 18 months, and (2) that Moore’s Law is a subtrend of the growth in computers documented by Nordhaus.\n\n\n![](http://intelligence.org/wp-content/uploads/2014/04/DeBenedictis_Moore.png)\nA really interesting question is whether Moore was applying somebody else’s law or whether the two laws were actually part of a larger concept that was not understood at the time. I conclude the latter. Intel did not invent the microprocessor until six years after Moore’s article. I have also talked to people (not Moore) who tell me Gordon Moore was thinking about general electrical circuits and was not foreseeing the emergence of the microprocessor.\n\n\nLet me try to apply Moore’s Law as defined by his paper. I recall building a computer system in 1981 with an 8086 (very similar to the 8088 in the original IBM PC). 
I’d heard it was highly complex and dissipated a lot of heat, so I put my finger on it to experience the heat. I recall surprise that it didn’t seem warmer than anything else. I have thought about the heat from microprocessors again in the last year, 33 years later. Since Moore’s Law says power per unit area is the same and chips are nearly the same size at 1 cm^2, I should be able to put my finger on a chip and not feel any heat. The reality is that there is a new structure sitting on top of today’s microprocessors that reminds me of Darth Vader’s head and is called a “heat sink.” The heat sink is to remove 50-200 watts of heat generated by the chip. I believe I’ve just made a case that any microprocessor with a heat sink violates Moore’s Law.\n\n\nWhat’s going on? Moore’s Law is being given additional meaning over and above what Moore was thinking. Many people believe Moore’s Law is only about dimensional scaling, a conclusion supported by the title of his article and the main graph. Moore’s Law has also been associated with computations per dollar, but that law had been around for 30 years before Moore’s paper.\n\n\nI found the [interview](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) with Hadi Esmaeilzadeh on Dark Silicon to be on track, yet he uses another interpretation of Moore’s Law – one where Moore’s Law continues, but Dennard scaling ended in the mid-2000s. Yet, I quoted the phrase from Moore’s paper that disclosed the scaling rule that later became known as Dennard scaling.\n\n\nAt a higher level, I believe Moore’s Law has turned into a marketing phrase that is being redefined as needed by the semiconductor industry so it remains true.\n\n\nSo why are computations per dollar rising? For many years, the vendor objective was to make processors that ran word processors and web browsers faster. This trend culminated in the early 2000s with processors like the Pentium IV with a 4 GHz clock and dissipating 200W+.
Customers rebelled and industry shifted to multicore. With an n-core microprocessor, the results of running the benchmark on one core could be multiplied by n. This is an example of progress (raising computations per dollar) by item 2 in my response to a previous question (more parallelism, subject to difficulty in programming). Even now, most software does not exploit the multiple cores.\n\n\n\n\n---\n\n\n**Luke**: You write that “Customers rebelled and industry shifted to multicore.” I typically hear a different story about the 2002-2006 era, one that didn’t have much to do with customer rebellion, but instead the realization by industry that the quickest way to keep up the Moorean trend — to which consumers and manufacturers had become accustomed — was to jump to multicore. That’s the story I see in e.g. [*The Future of Computing Performance*](http://www.amazon.com/The-Future-Computing-Performance-Level/dp/0309159512) by National Academies Press (official summary [here](http://commonsenseatheism.com/wp-content/uploads/2014/03/Fuller-Millett-Computing-Performance-Game-Over-or-Next-Level-in-IEEE-Computer-Society.pdf)). Moreover, the power scaling challenge to Moore’s Law was anticipated many years in advance by the industry, [for example in the ITRS reports](http://lesswrong.com/lw/hzu/model_combination_and_adjustment/9gxu). Can you clarify what you mean by “customers rebelled”?\n\n\n\n\n---\n\n\n**Erik**: What happens if you take projections of the future to be true and then the projections change? You eventually end up with multiple “truths” about the same thing in the historical record. I accept that the stories you hear are true, but there is another truth based on different projections.\n\n\nLet us mathematically invert the ITRS roadmap to see how projections of today’s (2014) microprocessor clock rate evolved as industry addressed power scaling and shifted to multicore. 
I have gone back to [earlier editions of the ITRS](http://www.itrs.net) and accessed edition reports for 2003, 2005, and 2007. In table 4 of the executive summary of each edition, they have a projection of “on chip local clock,” which means microprocessor clock rate. I accessed [Pricewatch](http://www.pricewatch.com) to get the 2014 clock rate.\n\n\n| On chip local clock | In year 2013 | In year 2014 | In year 2015 |\n| --- | --- | --- | --- |\n| Projection in 2003 ITRS | 22.9 GHz (Table 4c) | Only odd years reported in this edition | 33.4 GHz (Table 4d) |\n| Projection in 2005 ITRS | | 28.4 GHz (Table 4d) | |\n| Projection in 2007 ITRS | | 7.91 GHz (Table 4c) | |\n| 2014 reality | | 4.0 GHz (Pricewatch.com) | |\n\n\nThe most conspicuous issue is that the 2003 and 2005 editions overstated clock rate by about 7x. ITRS accommodated to multicore in 2007 with a new scaling model that we see in retrospect overstates reality by only 2x. Footnote 1 in the 2007 ITRS describes the change. The footnote ends with the following sentence: “This is to reflect recent on-chip frequency slowing trends and anticipated speed-power design tradeoffs to manage a maximum 200 watts/chip affordable power management tradeoff.”\n\n\nIf you believe ITRS is “industry,” industry had been telling customers to expect the benefits of Moore’s Law through rising clock rate. In my view, customers took the lead in saying power per chip should be less than 200 watts even if it meant a more difficult to use parallel programming model. Several years after multicore became popular, industry changed its projection so customers were to expect the benefits of progress through rising computations per dollar rather than speed. This, of course, led to the rise of battery-operated smart phones and tablets with power limits much lower than 200 watts.\n\n\nBy the way, I have not heard the phrase “Moorean trend” before. It seems to capture the idea of progress in computing without being tied to a particular technical property.
Why don’t you trademark it; it gets zero Google hits.\n\n\n\n\n---\n\n\n**Luke**: Are you willing to make some forecasts about the next ~15 years in computing? I’d be curious to hear your point estimate, or even better your 70% confidence interval, for any of the following:\n\n\n* FLOPS per US dollar in top-end supercomputing in 2030.\n* Average kT per active logic gate in top-end supercomputing in 2030.\n* Some particular measure of progress on reversible computing, in 2030?\n* World’s total FLOPS capacity in 2030. (See [here](http://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/).)\n\n\nOr really, anything specific about computing you’d like to forecast for 2030.\n\n\n\n\n---\n\n\n**Erik**: The FLOPS per dollar question will be most interesting, so I’ll leave it for last.\n\n\n*kT/logic op*: I see a plateau around 10,000 kT, and will discuss what might come beyond the plateau in the next paragraph. My guess of 10,000 kT includes interconnect wire, which is significant because today 75-90% of energy is attributable to interconnect wire. Today, we see around 50,000 kT. A reduction in supply voltage to .3v should be good for a 10x improvement, but there are other issues. This estimate should be valid in 10 years, but the question asked about 15 years.\n\n\nI would not be surprised if we see a new approach in the interval 2025-2030 (mentioned below). It will be difficult to predict specifically, but the five-year interval is short and the improvement rate seems to be insensitive to details. So, say there is a 5x additional improvement by 2030.\n\n\n*Cumulative by 2030*: 2,000 kT/logic op, including interconnect wire. However, this will be really disappointing.
People will expect 10 doublings due to Moore’s Law in the 15-year interval, for an expected improvement of 1024x; I’m predicting 25x.\n\n\n*Reversible computing*: I think reversible computing (as strictly defined) will be demonstrated in a few years and principally impact society’s thought processes. The demonstration would be computation at less than 1 kT/logic op, where theory says those levels are unachievable unless reversible computing principles are used. I do not expect reversible computing to be widely used by 2030. The projection of 2,000 kT/logic op in 2030 represents a balance of manufacturing costs and energy costs.\n\n\nBy 2030, reversible computing could be employed in some applications where power is very expensive, such as spacecraft or implantable medical devices.\n\n\nHowever, a demonstration of reversible computing could have an important impact on societal thinking. Popular thinking sees some ideas as unlimited for planning purposes and endows those ideas with attention and investment. This applied to California real estate prices (until 2008) and Moore’s Law (until a few years ago). Claims that “Moore’s Law is Ending” are moving computation into a second class of ideas that popular thinking sees as limited, like the future growth potential of railroads. A reversible computing demonstration would move computing back to the first category and thus make more attention and capital available.\n\n\nHowever, reversible computing is part of a continuum. I see a good possibility that adiabatic methods could become the new method mentioned above for the 2020-2025 time range.\n\n\n*World’s Total FLOPS capacity, 2030*. I looked over the [document by Naik](http://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/) you cited. I don’t feel qualified to judge his result. However, I will stand by my ratio of 50,000 kT to 2,000 kT = 25. So my answer is to multiply Naik’s result by 25. 
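A quick numerical check of the energy forecast above; all of the figures are Erik's, and the 18-month doubling period is the one implied by his "10 doublings in 15 years":

```python
# Sanity-checking the kT-per-logic-op forecast; input figures are Erik's.
kT_today = 50_000             # kT per logic op today, including interconnect
kT_2030 = 2_000               # his cumulative projection for 2030
print(kT_today / kT_2030)     # 25.0 -- the 25x improvement he predicts

# Versus a naive Moore's Law extrapolation, one doubling per 18 months:
doublings = 15 / 1.5          # 10 doublings in the 15-year interval
print(2 ** doublings)         # 1024.0 -- the improvement people will expect
```

The gap between 25x and 1024x is the "really disappointing" shortfall he describes.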
I do not imagine that the cumulative power consumption of computers will rise substantially, particularly with Green initiatives.\n\n\n*FLOPS per dollar*: This answer will be all over the place. Let’s break down by application class:\n\n\n(A) Some applications are CPU-bound, meaning their performance will track changes in kT per logic op. I have given my guess of 25x improvement (which is a lot less than the 1024x that Moore’s Law would have delivered).\n\n\n(B) Other applications are memory bound, meaning their performance will track (a) memory subsystem performance, where advances partially overlap with advances due to Moore’s Law and (b) architecture changes that can reduce the amount of data movement.\n\n\nIt is a lot easier to make a computer for (A) than (B); for a given cost, a computer will deliver an order of magnitude or more FLOPS on type A applications than on type B.\n\n\nA top-end supercomputer supports both A and B, but the balance between A and B may be the profound question of the era. The balance has been heavily weighted in favor of A (through reliance on LINPACK as the benchmark). However, we do not currently have a particularly aggressive Exascale program in the US. Instead, we have a lot of discussion about the memory subsystem’s low energy efficiency. You can make a fairly compelling case that progress in top-end supercomputing will be held up until the computers can become better balanced.\n\n\n(For reference, the TOP 2 supercomputer is ORNL Titan with 17.5 Petaflops Linpack for $97 million; a ratio of 181 MFLOPS/$. The TOP 1 supercomputer does not seem to be a good cost reference.)\n\n\nIf architecture stays fixed until 2030, I’ll guess 25x improvement. That would be 4.5 GFLOPS/$. Memory subsystems are made out of the same transistor technology as logic, perhaps plus a growing fraction of optics. If transistors become more effective by 25x, this could benefit both FLOPS and the memory subsystem.
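The dollar figures above can be sanity-checked with a short calculation (a sketch; the Titan baseline and the 25x multiplier are taken from the surrounding discussion):

```python
# Sanity check of the FLOPS-per-dollar baseline and 2030 projection.
titan_flops = 17.5e15   # ORNL Titan: 17.5 Petaflops LINPACK
titan_cost = 97e6       # ~$97 million

baseline = titan_flops / titan_cost   # FLOPS per dollar today
projected = baseline * 25             # transistor improvement only, no architecture gain

print(baseline / 1e6)   # ~180 MFLOPS/$ (rounded to 181 in the text)
print(projected / 1e9)  # ~4.5 GFLOPS/$
```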
Use of 3D may boost performance (due to shorter wires), but this will be offset by efficiency loss due to difficulty exploiting greater parallelism. Call the latter factors a wash.\n\n\nArchitecture is the wildcard. There are architectures known that are vastly more efficient than the von Neumann machine, such as systolic arrays, Processor-In-Memory (PIM), Field Programmable Gate Arrays (FPGAs) and even GPUs. These architectures get a performance boost by organizing themselves to put calculations closer to where data is stored, requiring less time and energy to complete a task. Unfortunately, these architectures succeed at the expense of generality. If a vendor boosts performance through too much specialization, they run the risk of being disqualified as an example of a “top-end supercomputer.” The Holy Grail would be a software approach that would make general software run on some specialized hardware (like a compiler that would run general C code on a GPU – at the full performance of the GPU).\n\n\nHowever, I will predict that architecture improvements will contribute an additional 4x by 2030, for a cumulative improvement factor of 100x. That will be 18 GFLOPS/$. This is still 10x short of the 1024x expected for 15 years.\n\n\nHowever, I think Artificial General Intelligence (AGI) may fare well due to specialization. Synaptic activity is the dominant function that enables living creatures to think, but it is quite different from the floating point in a supercomputer. A synapse performs local, slow, computations based on analog stimuli and analog learned behavior. In contrast, the floating point in a supercomputer operates blazingly fast on data fetched from a distant memory and computes an answer with 64-bit precision. Speed reduces energy efficiency, and the supercomputer doesn’t even learn. Since a von Neumann computer is Turing complete, it will be capable of executing an AGI coded in software. 
However, the efficiency may be low.\n\n\nExecuting an AGI could be optimized by new or specialized technology and advance faster than the rate of Moore’s Law, like Bitcoin mining. I am going to project that an AGI demonstration at scale will require a non-conventional, but not unimaginable computer. The computer could be specialized CMOS, like a GPU with special data types and data layout. Alternatively, the computer could employ new physical devices, such as a neuromorphic architecture with a non-transistor device (e.g., memristor).\n\n\nAll said, AGI might see 1000x or more improvement. In other words, AGI enthusiasts might be able to plan on 181 GFLOPS/$ by 2030. However, such machines would be classed as AI machines rather than top-end supercomputers.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Erik!\n\n\n\n\n---\n\n1. Note added in review: Landauer proposed a lower limit of “on the order of kT” only for “irreversible” computations. As far as I know, the phrase “Landauer’s Limit” was created later by other people. In my experience, the phrase “Landauer’s Limit” is often applied as a general limit.\n\nThe post [Erik DeBenedictis on supercomputing](https://intelligence.org/2014/04/03/erik-debenedictis/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-03T08:00:12Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "724c407837468be2d83ea610c642d525", "title": "2013 in Review: Fundraising", "url": "https://intelligence.org/2014/04/02/2013-in-review-fundraising/", "source": "miri", "source_type": "blog", "text": "**Update 04/16/2014:** At a donor’s [request](http://intelligence.org/2014/04/02/2013-in-review-fundraising/#comment-1318521618) I have replaced the Total Donations per Year chart with one that shows which proportion of the donations were from new and returning donors.
Some tweaks were also made to our donation database since the publication of this post, so I have updated the post to reflect these changes.\n\n\n \n\n\nThis is the 5th part of my personal and qualitative [self-review of MIRI in 2013](http://intelligence.org/2013/12/20/2013-in-review-operations/), in which I review MIRI’s 2013 fundraising activities.\n\n\nFor this post, “fundraising” includes donations and grants, but not other sources of revenue.[1](https://intelligence.org/2014/04/02/2013-in-review-fundraising/#footnote_0_10812 \"I didn’t include other sources of revenue in this analysis because they don’t seem likely to play a significant role in our ongoing funding strategy in the foreseeable future. For example, revenue from ebook sales is marginal, and we don’t plan to sell tickets to a new conference.\")\n\n\n \n\n\n#### Summary\n\n\n1. Our funding in 2013 grew by about 75% compared to 2012, though comparing to past years is problematic because MIRI is now a very different organization than it was in 2012 and earlier.\n2. We began to apply for grants in late 2013. We haven’t received money from any of these grantmakers yet, but several of our grant applications are pending.\n3. MIRI’s ability to spend money on its highest-value work is much greater than it was one year ago, and we plan to fundraise heavily to meet our 2014 fundraising goal of $1.7 million.\n\n\n\n#### Numbers for 2013\n\n\nWe typically run two major fundraisers per year, and our winter fundraiser occurs during December and January. Often, some donations for each winter fundraiser don’t arrive until February. Hence, it’s most natural to summarize our fundraising success in “2013” by looking at the numbers from March 2013 to March 2014, rather than from January 2013 to January 2014.[2](https://intelligence.org/2014/04/02/2013-in-review-fundraising/#footnote_1_10812 \"For this reason, the numbers reported in this post won’t match those in our 990s, which are anchored to calendar years.
Also, the donation numbers in this post are approximate, not exact.\") So, from 03/01/2013 through 02/28/2014, our fundraising income consisted of:\n\n\n* $525,000 from Jed McCaleb in XRP.[3](https://intelligence.org/2014/04/02/2013-in-review-fundraising/#footnote_2_10812 \"This was the straightforward valuation of the XRP at the time of donation. Obviously, XRP is a volatile cryptocurrency, so it’s difficult to make a more accurate but principled valuation. However, XRP’s value rose sharply shortly after Jed’s donation, and as of 03/27/2014 it was still trading higher than at the time of donation. As of 03/27/2014 we’ve brought in ~$227,000 through the sale of ~25% of our XRP holdings. So in the end, it’s quite plausible we’ll eventually reap $525,000 or more from the XRP donation.\")\n* $250,000 from Peter Thiel.\n* $100,000 from Jaan Tallinn.\n* $690,000 from other individual donations, plus corporate matching.\n* $89,000 of in-kind donations from Investling.\n* $5,000 via [Give for Free](http://intelligence.org/get-involved/#give) methods, mostly via the [MIRI affinity card](http://intelligence.org/get-involved/#give). (Sign up if you haven’t already!)\n* $0 in grants. (Not including [Google Grants](http://intelligence.org/2014/02/06/miris-experience-with-google-adwords/).)\n\n\n \n\n\n#### Comparison to past years\n\n\nIt’s difficult to meaningfully compare MIRI’s 2013 fundraising income to MIRI’s fundraising income in past years, because MIRI was so different in 2013 compared to previous years. In particular:\n\n\n* From 2006-2012, we ran the [Singularity Summit](http://intelligence.org/singularitysummit/), which brought in revenue chiefly via ticket sales rather than donation.
Some supporters bought Summit tickets rather than donating, or donated less than they would have had they not also bought a Summit ticket.\n* During 2012, [CFAR](http://rationality.org/) was still being “incubated” within MIRI, and so many expenses and donations were CFAR-related in a way that is difficult to disentangle.\n* By 2013, we had sold the Summit, CFAR had become a separate organization with fully separate finances, and some donors who had previously been funding MIRI because of its rationality work switched (as planned) to funding CFAR instead.\n\n\nBut for the sake of completeness, let’s attempt the comparison anyway.[4](https://intelligence.org/2014/04/02/2013-in-review-fundraising/#footnote_3_10812 \"The numbers in this section may not match past estimates, because we spent a lot of time filling in gaps in our donor database immediately before writing this post.\")\n\n\nBelow is a chart of MIRI’s semi-recent historical fundraising, based on the records in our donor database. For reasons explained above, the numbers reported for each year are for a March to March period rather than for the calendar year. (In-kind donations are not included in this chart.)\n\n\n**Total Donations per Year**\n\n\n[![Total Dontations 2010-2013](https://intelligence.org/wp-content/uploads/2014/04/Total-Dontations-2010-2013.png)](https://intelligence.org/wp-content/uploads/2014/04/Total-Dontations-2010-2013.png)\n\n\nAnother important data set concerns donor retention and new donor acquisition. This is a noisier process. 
Below is a chart showing the acquisition of new donors who gave for the first time in a given year, and the retention of donors who gave again in a given year having already given at least once in a previous year.[5](https://intelligence.org/2014/04/02/2013-in-review-fundraising/#footnote_4_10812 \"Again, using a March to March period rather than calendar year.\")\n\n\n**Donor Acquisition and Retention per Year**\n\n\n[![Number of Donors 2010-2013](https://intelligence.org/wp-content/uploads/2014/03/Number-of-Donors-2010-2013.png)](https://intelligence.org/wp-content/uploads/2014/03/Number-of-Donors-2010-2013.png)\n\n\nFinally, here is a chart showing trends in the acquisition of “new large donors” who gave $5,000 for the first time in a given year, and less than $5,000 total previously.[6](https://intelligence.org/2014/04/02/2013-in-review-fundraising/#footnote_5_10812 \"When considering who qualifies as a “new large donor,” I’m including money secured by the donor via corporate matching.\")\n\n\n**New Large Donors per Year**\n\n\n[![New Large Donors 2010-2013](https://intelligence.org/wp-content/uploads/2014/03/New-Large-Donors-2010-2013.png)](https://intelligence.org/wp-content/uploads/2014/03/New-Large-Donors-2010-2013.png)\n\n\n \n\n\n#### Recent experiments and lessons learned\n\n\nRecently, we asked (nearly) every donor who gave more than $3,000 in 2013 about the source of their initial contact with MIRI, their reasons for donating in 2013, and their preferred methods for staying in contact with MIRI. Of the approximately 35 respondents, nearly all seemed to be giving (in part) because they believe MIRI’s mission is highly important, and about half also mentioned excitement about the new focus on technical research and the [results](http://intelligence.org/research/) that focus has produced. Twenty-one of them came into contact with MIRI via Eliezer’s posts on Overcoming Bias or Less Wrong. Four came into contact with MIRI via [*HPMoR*](http://hpmor.com/).
The rest came through other sources.\n\n\nIn general, it has remained very difficult for MIRI to acquire new donors from populations that have not been previously exposed at length to effective altruism, applied rationality, existential risk, or AGI risk ideas and communities. As such, we will continue to prioritize the acquisition of new donors from within those circles and, as a longer-term fundraising strategy, we will (in the typical case) continue to direct new contacts toward those circles rather than straining to solicit donations from them immediately.\n\n\nWe began to apply for grants in late 2013.[7](https://intelligence.org/2014/04/02/2013-in-review-fundraising/#footnote_6_10812 \"Again, not including Google Grants.\") We haven’t received money from any of these grantmakers yet, but several of our grant applications are pending.\n\n\n \n\n\n#### Projected future needs\n\n\nI think of Friendly AI research as MIRI’s core mission and our comparative advantage. I also think FAI research is MIRI’s highest-value work, for [*many* reasons](http://lesswrong.com/lw/ixt/lone_genius_bias_and_returns_on_additional/9zlg). Happily, **MIRI’s ability to spend money on its highest-value work is much greater than it was one year ago**. This is because:\n\n\n* We [now have](http://intelligence.org/2014/03/13/hires/) *three* full-time Friendly AI researchers who are making FAI progress, publishing results, presenting at conferences, etc.\n* We have several medium-term hiring prospects for full-time FAI research or FAI open problems exposition, if we have the funding to afford them.\n* Our capacity to run workshops has increased dramatically. There are ~25 past MIRI workshop participants who performed well and who are interested to come to future workshops, and ~40 applicants we’ve deemed “promising” and might want to invite to future workshops. 
Moreover, building on our experience in 2013, we are able to start planning a broader variety of workshops, some in collaboration with major universities and other research institutes.\n* Several more open problems in Friendly AI have been described in writing during the past year (see [here](http://intelligence.org/2014/02/18/2013-in-friendly-ai-research/)), allowing other researchers to understand what we are working on.\n\n\nIn order to seize these growing opportunities to make FAI research progress and grow the FAI research community, **we aim to raise $1.7 million in 2014**. (That is, from March 2014 to March 2015.)\n\n\nThis is a “stretch” goal: there is a substantial probability we will fail to meet it. To meet this goal, we’ll need to make a stronger fundraising push than we have in past years. In particular, a greater fraction of *my* time will be devoted to fundraising than in past years.\n\n\nOne month into this new fundraising year, we have a long way to go:\n\n\n**Total Donations per Year Including 2014**\n\n\n[![Total Dontations 2010-2014](https://intelligence.org/wp-content/uploads/2014/04/Total-Dontations-2010-2014.png)](https://intelligence.org/wp-content/uploads/2014/04/Total-Dontations-2010-2014.png)\n\n\nIf you are excited by our recent progress, and you’d like to see our Friendly AI research program continue to grow, please consider signing up for a [monthly donation](https://intelligence.org/donate/). If you’d like to make a large gift, please contact me at luke@intelligence.org directly.\n\n\n\n\n---\n\n1. I didn’t include other sources of revenue in this analysis because they don’t seem likely to play a significant role in our ongoing funding strategy in the foreseeable future. For example, revenue from ebook sales is marginal, and we don’t plan to sell tickets to a new conference.\n2. 
For this reason, the numbers reported in this post won’t match those in [our 990s](http://intelligence.org/transparency/), which are anchored to calendar years. Also, the donation numbers in this post are approximate, not exact.\n3. This was the straightforward valuation of the XRP at the time of donation. Obviously, XRP is a volatile cryptocurrency, so it’s difficult to make a more accurate but principled valuation. However, XRP’s value rose sharply shortly after Jed’s donation, and as of 03/27/2014 it was still trading higher than at the time of donation. As of 03/27/2014 we’ve  brought in ~$227,000 through the sale of ~25% of our XRP holdings. So in the end, it’s quite plausible we’ll eventually reap $525,000 or more from the XRP donation.\n4. The numbers in this section may not match past estimates, because we spent a lot of time filling in gaps in our donor database immediately before writing this post.\n5. Again, using a March to March period rather than calendar year.\n6. When considering who qualifies as a “new large donor,” I’m including money secured by the donor via corporate matching.\n7. Again, not including Google Grants.\n\nThe post [2013 in Review: Fundraising](https://intelligence.org/2014/04/02/2013-in-review-fundraising/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-04-02T16:04:31Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "83d1151fce440677db1d7bb0ae9048a4", "title": "Lyle Ungar on forecasting", "url": "https://intelligence.org/2014/03/26/lyle-ungar/", "source": "miri", "source_type": "blog", "text": "![Lyle Ungar portrait](http://intelligence.org/wp-content/uploads/2014/03/Ungar_w1300.jpg) [Dr. Lyle Ungar](http://www.cis.upenn.edu/~ungar/home.html) is a Professor of Computer and Information Science at the University of Pennsylvania, where he also holds appointments in multiple departments in the schools of Engineering, Arts and Sciences, Medicine, and Business. 
He has published over 200 articles and is co-inventor on eleven patents. His research areas include machine learning, data and text mining, and psychology, with a current focus on statistical natural language processing, spectral methods, and the use of social media to understand the psychology of individuals and communities.\n\n\n\n**Luke Muehlhauser**: One of your interests (among many) is [forecasting](http://www.cis.upenn.edu/%7Eungar/CVs/forecasting.html). Some of your current work is funded by IARPA’s [ACE program](http://www.iarpa.gov/Programs/ia/ACE/ace.html) — one of the most exciting research programs happening anywhere in the world, if you ask me.\n\n\nOne of your recent papers, co-authored with Barbara Mellers, Jonathan Baron, and several others, is “[Psychological Strategies for Winning a Geopolitical Forecasting Tournament](http://commonsenseatheism.com/wp-content/uploads/2014/03/Mellers-et-al-Psychological-Strategies-for-Winning-a-Geopolitical-Forecasting-Tournament.pdf).” The abstract is:\n\n\n\n> Five university-based research groups competed to assign the most accurate probabilities to events in two geopolitical forecasting tournaments. Our group tested and found support for three psychological drivers of accuracy: training, teaming, and tracking. Training corrected cognitive biases, encouraged forecasters to use reference classes, and provided them with heuristics, such as averaging when multiple estimates were available. Teaming allowed forecasters to share information and discuss the rationales behind their beliefs. Tracking placed the highest performers (top 2% from Year 1) in elite teams that worked together. Results showed that probability training improved calibration. Team collaboration and tracking enhanced both calibration and resolution. Forecasting is often viewed as a statistical problem; but it is also a deep psychological problem. 
Behavioral interventions improved the accuracy of forecasts, and statistical algorithms improved the accuracy of aggregations. Our group produced the best forecasts two years in a row by putting statistics and psychology to work.\n> \n> \n\n\nIn these experiments, some groups were given scenario training or probability training, which “took approximately 45 minutes, and could be examined throughout the tournament.”\n\n\nAre these modules available to the public online? If not, can you give us a sense of what they were like? And, do you suspect that significant additional probability or scenario training would further reduce forecasting errors, e.g. if new probability training content was administered to subjects for 30 minutes every two weeks?\n\n\n\n\n\n---\n\n\n**Lyle Ungar**: I’m sorry, but the modules are not publicly available.\n\n\nOur main probability training module teaches participants not Bayesian probability rules, but approaches to forecasting. It begins with a discussion of two elements of good probability judgment – calibration and resolution, giving definitions and examples. Our module then provides a number of tips for good forecasting judgment, including 1) consider relevant base rates, 2) average over multiple estimates (if available), 3) use historical data, 4) use statistical predictions whenever possible, and 5) consider simple models based on key variables.\n\n\nI was surprised how much benefit we got from a 45-minute online training, especially given the fact that many people have found that taking a full course in probability has no benefit on people’s probability estimation. I think the key is the specific approaches to forecasting.\n\n\nWe are developing follow-up training, but I think that the key is not giving forecasters more frequent training.
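Two of the ideas mentioned here — averaging over multiple estimates, and scoring probability judgments — can be illustrated with a toy Brier-score calculation (the forecast numbers are invented, and this is not the tournament’s actual aggregation algorithm):

```python
# Toy illustration: the average of several forecasts often earns a better
# (lower) Brier score than a typical individual forecast.
forecasts = [0.9, 0.6, 0.75, 0.8]  # hypothetical P(event) from four forecasters
outcome = 1                         # the event occurred

def brier(p, o):
    """Squared error of a probability forecast; lower is better."""
    return (p - o) ** 2

mean_individual = sum(brier(p, outcome) for p in forecasts) / len(forecasts)
aggregate = brier(sum(forecasts) / len(forecasts), outcome)

print(mean_individual)  # → 0.068125
print(aggregate)        # → 0.05640625 (the averaged forecast scores better)
```

By Jensen’s inequality, the averaged forecast can never have a worse Brier score than the mean of the individual scores, which is one reason simple averaging is such a strong baseline.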
What I think is important to our forecasters’ performance is the fact that our forecasters use their skills every week, and get very concrete feedback about how good their forecasts were, including comparisons to other people’s accuracies. This feedback allows and encourages people to keep learning.\n\n\n\n\n---\n\n\n**Luke**: I haven’t personally signed up for any of the ACE forecasting tournaments because I saw that the questions drew on very narrow domain knowledge — e.g. one SciCast question is “When will an operating graphene-based nano-antenna be demonstrated?” My sense was that even with 10 minutes of research into such a question, I wouldn’t be able to do much better than distributing my probability mass evenly across all available non-crazy answers. Or, if I was allowed to see others’ estimates, I wouldn’t be able to do better than just copying the median response, even with 10 minutes of research on a single question.\n\n\nFor that reason, I’ve been thinking that the low-hanging fruit in mass calibration training would be to develop an app that feeds people questions for which many players could be expected to outperform random guessing (or exactly copying others) with mere *seconds* of thought — e.g. questions (with no Googling allowed) about basic science, or about which kinds of things tend to happen in normal human relationships, or about what happened in a famous historical event from the last 100 years.\n\n\nOf course, that’s “retrodiction” rather than forecasting, but I suspect it would be useful calibration training nonetheless, and it could be more rewarding to engage with because it takes less time per question and participants could learn from their mistakes more quickly.
This is the approach taken by many of the questions on the Center for Applied Rationality’s [credence calibration game](http://rationality.org/calibration/), though unfortunately that game currently has too few questions in its database (~1000, I think?), and too many of them are questions about historical sports outcomes, which are as obscure to non-sports-fans as the SciCast question about nano-antennas is to most people. (I had to tap “50% confident” for all those questions.)\n\n\nOne could even imagine it being gamified in various ways, taking lessons from games like [DragonBox](http://archive.wired.com/geekdad/2012/06/dragonbox/), which feels like a game but is actually teaching kids algebra.\n\n\nWhat do you think of my impressions about that? If regular practice is what likely makes the difference for people’s calibration, how could one plausibly create a scalable tool for calibration training (either retrodiction or forecasting) that people would actually want to use?\n\n\n\n\n---\n\n\n**Lyle**: First, let me clarify that the best performers on our Team Good Judgement competition are not people with specialized expertise, but people who work hard, collect lots of information and think carefully about it.\n\n\nI like your idea of calibration training. I’m not sure how well performance on problems like sports betting or guessing the height of Mount Everest generalizes to real prediction problems. That’s a good question, and one that someone should test. My intuition is that many of the skills needed for good performance on problems like geo-political forecasting (e.g. picking a good reference class of events and using base rates from those as a starting point for a forecast) are quite different from the skills needed for retrodiction “guessing games”, but perhaps calibration would generalize. 
Or perhaps not.\n\n\n\n\n---\n\n\n**Luke**: How much calendar time is there between when the forecasts are made and when the forecasted events occur?\n\n\n\n\n---\n\n\n**Lyle**: We forecast events that range from one week to one year in the future. Predicting events months in the future is a good time frame, since one can start with situations where the outcome is unclear, observe how probability estimates change as the world evolves, and also see what the actual outcome is.\n\n\nAn important aspect of our forecasting competition is that we make estimates every day about the probabilities of the future events. Individual forecasters, of course, update less frequently (they all have day jobs), but we evaluate people on their average daily accuracy — and we combine their individual forecasts to get a daily update on our aggregate estimate of how likely each future event is.\n\n\n\n\n---\n\n\n**Luke**: What are the prospects, do you think, for a similar research project investigating forecasts of events that range from 2-5 years in the future? Would you expect the “super forecasters” in the current project to show similar performance on forecasts with longer time horizons?\n\n\n\n\n---\n\n\n**Lyle**: In general, forecasting farther in the future is harder. (Think of predicting election outcomes; it’s much easier to predict an election outcome as the election date gets closer.) Our super-forecasters are super, but not magic, so they will tend to be less accurate about long-range predictions. What will life be like in a hundred years? That’s probably a job for a futurist or science fiction writer, not a forecaster.\n\n\nI don’t think many funders will have the patience to wait five years to see how good our (or anyone’s) forecasting methods are. A more promising direction, which we are pursuing, is to create clusters of questions. Some will be longer term, or perhaps even poorly specified (“Is China getting more aggressive?”). 
Others will be shorter term, but correlated with the longer term outcomes. Then we can estimate changes in probabilities of long-term or vague questions based on shorter term, clearly resolvable ones.\n\n\n\n\n---\n\n\n**Luke**: Years ago, you also wrote a [review article](http://commonsenseatheism.com/wp-content/uploads/2014/03/Ungar-Forecasting.pdf) on forecasting with neural nets. If you were given a sufficient budget to forecast something, what heuristics would you use to decide which forecasting methods to use? When are neural nets vs. prediction markets vs. team-based forecasting vs. large computer models vs. other methods appropriate?\n\n\n\n\n---\n\n\n**Lyle**: Firstly, neural nets are just a very flexible class of equations used to fit data; i.e. they are a statistical estimation method. Modern versions of them (“deep neural nets”) are very popular now at companies like Google and Facebook, mostly for recognizing objects in images, and for speech recognition, and work great if one has *lots* of data on which to “train” them — to estimate the model with.\n\n\nWhich leads me to the answer to your question:\n\n\nI think one can roughly characterize forecasting problems into categories — each requiring different forecasting methods — based, in part, on how much historical data is available.\n\n\nSome problems, like the geo-political forecasting we are doing, require collecting lots of information and human thought. Prediction markets and team-based forecasts both work well for sifting through the conflicting information about international events. Computer models mostly don’t work as well here – there isn’t a long enough track record of, say, elections or coups in Mali to fit a good statistical model, and it isn’t obvious what other countries are ‘similar.’\n\n\nOther problems, like predicting energy usage in a given city on a given day, are well suited to statistical models (including neural nets). 
We know the factors that matter (day of the week, holiday or not, weather, and overall trends), and we have thousands of days of historical observation. Human intuition is not going to beat computers on that problem.\n\n\nYet other classes of problems, like economic forecasting (what will the GDP of Germany be next year? What will unemployment in California be in two years?) are somewhere in the middle. One can build big econometric models, but there is still human judgement about the factors that go into them. (What if Merkel changes her mind or Greece suddenly adopts austerity measures?) We don’t have enough historical data to accurately predict economic decisions of politicians.\n\n\nThe bottom line is that if you have lots of data and the world isn’t changing too much, you can use statistical methods. For questions with more uncertainty, human experts become more important. Who will win the US election tomorrow? Plug the results of polls into a statistical model. Who will win the US election in a year? Check the Iowa prediction markets. Who will win the US election in five years? No one knows, but a team of experts might be your best bet.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Lyle!\n\n\nThe post [Lyle Ungar on forecasting](https://intelligence.org/2014/03/26/lyle-ungar/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-26T23:55:10Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "64ba699f3e739efd8f25b306638dfaeb", "title": "Anil Nerode on hybrid systems control", "url": "https://intelligence.org/2014/03/26/anil-nerode/", "source": "miri", "source_type": "blog", "text": "![Anil Nerode portrait](https://intelligence.org/wp-content/uploads/2014/03/Nerode_w120.png) Dr. Anil Nerode is a Goldwin Smith Professor of Mathematics and Computer Science at Cornell University. 
He [is](http://www.math.cornell.edu/News/2010-2011/nerode_chicago.html) “a pioneer in mathematical logic, computability, automata theory, and the understanding of computable processes, both theoretical and practical for over half a century, whose work comes from a venerable and distinguished mathematical tradition combined with the newest developments in computing and technology.”\n\n\nHis 50 Ph.D.’s and their students occupy many major university and industrial positions world-wide in mathematics, computer science, software engineering, electrical engineering, etc. He and Wolf Kohn founded the discipline of hybrid systems in 1992 which has become a major area of research in mathematics, computer science, and many branches of engineering. Their work on modeling control of macroscopic systems as relaxed calculus of variations problems on Finsler manifolds is the ground for their current efforts in quantum control and artificial photosynthesis. His research has been supported consistently by many entities, ranging from NSF (50 years) to ADWADC, AFOSR, ARO, USEPA, etc. He has been a consultant on military development projects since 1954. He received his Ph.D. in Mathematics from the University of Chicago under Saunders MacLane (1956).\n\n\n\n**Luke Muehlhauser**: In [Nerode (2007)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Nerode-Logic-and-control.pdf), you tell the origin story of hybrid systems control. A 1990 DARPA meeting in Pacifica seems to have been particularly seminal. As you describe it:\n\n\n\n> the purpose of the meeting was to explore how to clear a major bottleneck, the control of large military systems such as air-land-sea forces in battle space.\n> \n> \n\n\nCan you describe in more detail what DARPA’s long-term objectives for that meeting seemed to be? 
Presumably they hoped the meeting would spur new lines of research that would allow them to solve particular control problems in the next 5-20 years?\n\n\n\n\n\n---\n\n\n**Anil Nerode**: The director of the DARPA program Domain Specific Software Initiative, Col Eric Mettala, had called the meeting. I had no idea what was intended by that term. I was there because I was director of the Army-sponsored Mathematical Sciences Institute at Cornell and was sent by the ARO math director Jagdish Chandra. I found out the first morning that Mettala had recognized that the control software industry in 1991 was not developing software for real-time control by generals of land-sea-air battles. He invited the control software industry, a few control engineers from big companies (such as Wolf Kohn from Boeing), and a few academic Control Engineering professors. I believe that I was the only logic/computer-science person there. (I worked in design and verification of control systems in the 1950’s, but not thereafter.)\n\n\nThe first day’s discussion made it quite clear that the control software industry had not a clue how to write programs for logic-based digital control of real-time continuous machines, and absolutely no idea how to verify that such a program was correct. We were asked to leave position papers on what should be done for the next day. I went to my room and formulated the concept of a system of interacting continuous and discrete (logical) devices, and made it clear that to prove a program governing them works properly, you had to verify that the mixed execution sequence of digital and continuous steps always leads to satisfying the performance specification. I did not expect anyone to pick up on it.\n\n\nBut the next morning, after a completely confused discussion, some member of the audience said “why don’t we use Prof. Nerode’s definition of our problem.” Amazingly enough, they did.
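The verification criterion Nerode formulated can be sketched in miniature. The thermostat plant, the thresholds, and the specification below are illustrative assumptions for exposition, not details from the meeting:\n\n\n```python\n# Illustrative sketch: a hybrid system in Nerode's sense is a discrete\n# (logical) controller interleaved with a continuous plant. Verifying it\n# means checking that every step of the mixed execution sequence of\n# digital and continuous steps satisfies the performance specification.\n\ndef simulate(t_end=60.0, dt=0.1):\n    temp = 18.0        # continuous state: temperature\n    heater_on = True   # discrete (logical) state: controller mode\n    trace = []\n    t = 0.0\n    while t < t_end:\n        # Digital step: the logical controller switches mode at thresholds.\n        if temp >= 22.0:\n            heater_on = False\n        elif temp <= 19.0:\n            heater_on = True\n        # Continuous step: the plant evolves under the current mode.\n        temp += (1.0 if heater_on else -0.5) * dt\n        trace.append((t, heater_on, temp))\n        t += dt\n    return trace\n\ndef satisfies_spec(trace):\n    # Performance specification: temperature stays in [18, 23] at every\n    # point of the mixed execution sequence.\n    return all(18.0 <= temp <= 23.0 for _, _, temp in trace)\n```\n\n\nChecking the specification over the interleaved trace, rather than over the digital program alone, is the extension of the program verification paradigm that Nerode describes.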
Later in the day there was a discussion in which I was very strongly supported in the conclusion that a research program, not a development program, needed to be undertaken. A lot of the audience did not see it that way; they wanted many hundreds of millions to blow what they were currently doing up to larger proportions. Wolf Kohn from Boeing articulated my points much more clearly than I did. In the end Mettala put out RFPs for a research program, and I asked Wolf to send me his papers. So that was the Pacifica meeting from my point of view.\n\n\n\n\n---\n\n\n**Luke**: Was the Pacifica meeting your inspiration for organizing the 1991 and 1992 Hybrid Systems Workshops at Cornell? And do you know whether anything came of Mettala’s RFPs?\n\n\n\n\n---\n\n\n**Anil**: I recognized the issue Col. Mettala raised as very significant, both for the Armed Forces and for future business and industrial systems. I did not think that Mettala was sufficiently clear on what the problem was to be able to formulate a DARPA RFP that would address the issue directly.\n\n\nI had the funds available from US ARO in my Institute to sponsor new lines of research. So I organized the first hybrid systems meeting in Ithaca entirely independently of DARPA to confer with those worldwide I had identified as having something to offer. These participants were from the AI and expert systems community and from the control community.\n\n\nThis first meeting in Ithaca was *not* intended to yield a volume or paper, but rather to establish that a smart bunch of people thought this was a productive research area. I explained my concept of a hybrid system, and they bought it. I chose “hybrid systems” as the name of the area. The group I chose as organizing committee for the subsequent first real hybrid systems workshop and volume was an extension of that original group.\n\n\nMettala put out a generic RFP for research in the long-term military problem of control of forces and machines.
I applied and got some of the funds. Others secured some of the funds under his RFP.\n\n\nThe principal outcome was the development of hybrid systems theory, simply because I continued to organize a community for several years under these monies and others. Once the point was established, there were, and are, plenty of sources of funding for the area. It takes time to develop a community. My substantial research in the area started after that seminal preparatory meeting.\n\n\n\n\n---\n\n\n**Luke**: In those early years, what actions did you take — or see others take — to develop the hybrid systems community? In retrospect, which actions turned out to have been the most important for the development of the field?\n\n\n\n\n---\n\n\n**Anil**: What my hybrid systems model revealed was that to either extract or to verify digital programs interconnected with continuous devices, expertise is required in both program verification (a discrete logic language enterprise) and control theory (analysis-based continuous mathematics).\n\n\nI enlisted the best of both communities and their graduate students at the initial three hybrid systems volume meetings. This *mix* was the key to success. The first to participate actively were brilliant leading professors in control and program verification at Stanford, MIT and Berkeley. Many others followed the lead of these institutions. They have, through their PhD’s, provided many of the professors and engineers worldwide for computer science and engineering departments of universities and industries.\n\n\nThe compelling attraction to the computer scientists was the development of digital tools to analyze continuous devices, a new use for the technology they had developed for the digital world.\n\n\nThe compelling interest for the control community was that digital control of continuous devices was demanded by industry, business, and the military.
Their previous technology was piecewise linear-quadratic control with breakpoints for highly non-linear systems, an ad hoc process of constructing state tables followed by simulation, with no mathematical basis assuring that the resulting systems would perform as requested. After the mathematical security of linear-quadratic control, they were skating on thin ice.\n\n\nIn many cases now, embedded control systems have been verified by formal (hybrid system) methods. (For instance, the Mercedes car embedded systems were first verified by a scientist with an MIT mathematical logic degree. He showed me his verification after a Florida invited lecture I gave a few years ago.)\n\n\nI must add that for commercial or military systems with a life-safety risk, having such verification makes it much easier to get federal certification or industrial approval.\n\n\nFinally, once leading world figures had been engaged, the funding agencies followed: what else could they do?\n\n\n\n\n---\n\n\n**Luke**: What have been some of the major success stories of this research program? What practical problems in industry or government were solved, which could not have been solved without hybrid systems control? André Platzer outlines some applications of KeYmaera alone, [here](http://symbolaris.com/info/KeYmaera.html#case-studies); I’m hoping you can provide some earlier examples.\n\n\n\n\n---\n\n\n**Anil**: Before Kohn and I introduced hybrid systems as a subject, there existed highly developed logic-based technology for proving that execution sequences for digital programs always satisfy their program specification, or finding counterexamples. At the same time there were countless embedded digital control systems for continuous devices and also many attempts to make programs which would act as, for example, an assistant controlling battlefield hardware.
(That was one of the purposes served by the construction of Ada.)\n\n\nThe concept of the execution sequence of a hybrid system which we introduced led to extending the program verification paradigm to mixed digital-continuous systems. This, rather than any particular theoretical development, was the main impact in science and engineering. As for program verification, this was carried out in many industries. My favorite application, created by a PhD logician from MIT, was the verification of the control systems for Mercedes limousines.\n\n\nThere are technical groups now for verifying such applications.\n\n\nMy partner’s and my primary interest was extracting digital controls for continuous systems from system description and performance specification. We have done this commercially in some practical cases, but as a whole the math background required and the magnitude of the computations needed have discouraged development of the field. It will happen in the long run because it is useful and possible.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Anil!\n\n\nThe post [Anil Nerode on hybrid systems control](https://intelligence.org/2014/03/26/anil-nerode/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-26T23:25:31Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e2b5bc4e32a85189286d4c3e4a36df62", "title": "Michael Carbin on integrity properties in approximate computing", "url": "https://intelligence.org/2014/03/23/michael-carbin/", "source": "miri", "source_type": "blog", "text": "![Michael Carbin portrait](https://intelligence.org/wp-content/uploads/2014/03/Carbin_w320.jpg)[Michael Carbin](http://people.csail.mit.edu/mcarbin/) is a Ph.D. Candidate in Electrical Engineering and Computer Science at MIT.
His interests include the design of programming systems that deliver improved performance and resilience by incorporating approximate computing and self-healing.\n\n\nHis work on program analysis at Stanford University as an undergraduate received an award for Best Computer Science Undergraduate Honors Thesis. As a graduate student, he has received the MIT Lemelson Presidential and Microsoft Research Graduate Fellowships. His recent research on verifying the reliability of programs that execute on unreliable hardware received a best paper award at OOPSLA 2013.\n\n\n\n**Luke Muehlhauser**: In [Carbin et al. (2013)](http://people.csail.mit.edu/mcarbin/papers/oopsla13.pdf), you and your co-authors present Rely, a new programming language that “enables developers to reason about the… probability that [a program] produces the correct result when executed on unreliable hardware.” How is Rely different from earlier methods for achieving reliable approximate computing?\n\n\n\n\n---\n\n\n**Michael Carbin**: This is a great question. Building applications that work with unreliable components has been a long-standing goal of the distributed systems community and other communities that have investigated how to build systems that are fault-tolerant. A key goal of a fault tolerant system is to deliver a correct result even in the presence of errors in the system’s constituent components.\n\n\nThis goal stands in contrast to the goal of the unreliable hardware that we have targeted in my work. Specifically, hardware designers are considering new designs that will — purposely — expose components that may silently produce incorrect results with some non-negligible probability. 
These hardware designers are working in a subfield that is broadly called approximate computing.\n\n\nThe key idea of the approximate computing community is that many large-scale computations (e.g., machine learning, big data analytics, financial analysis, and media processing) have a natural trade-off between the quality of their results and the time and resources required to produce a result. Exploiting this fact, researchers have devised a number of techniques that take an existing application and modify it to trade the quality of its results for increased performance or decreased power consumption.\n\n\nOne example that my group has worked on is simply skipping parts of a computation that we have demonstrated — through testing — can be elided without substantially affecting the overall quality of the application’s result. Another approach is executing portions of an application that are naturally tolerant of errors on these new unreliable hardware systems.\n\n\nA natural follow-on question to this is, how have developers previously dealt with approximation?\n\n\nThese large-scale applications are naturally approximate because exact solutions are often intractable or perhaps do not even exist (e.g., machine learning). The developers of these applications therefore often start from an exact model of how to compute an accurate result and then use that model as a guide to design a tractable algorithm and a corresponding implementation that returns a more approximate solution. These developers have therefore been manually applying approximations to their algorithms (and their implementations) and reasoning about the accuracy of their algorithms for some time. 
A prime example of this is the field of numerical analysis and its contributions to scientific computing.\n\n\nThe emerging approximate computing community represents the realization that programming languages, runtime systems, operating systems, and hardware architectures can not only help developers navigate the approximations they need to make when building these applications, but also that these systems can incorporate approximations themselves. So for example, the hardware architecture may itself export unreliable hardware components that an application’s developers can then use as one of their many tools for performing approximation. \n\n\n\n\n\n\n---\n\n\n**Luke**: Why did you choose to develop Rely as an imperative rather than functional programming language? My understanding is that functional programming languages are often preferred for applications related to reliability, safety, and security, because they can often be machine-checked for correctness.\n\n\n\n\n---\n\n\n**Michael**: There has historically been a passionate rivalry between the imperative and functional languages groups. Languages that have reached widespread usage (e.g., C, C++, Java, PHP, and Python) have traditionally been imperative whereas functional languages have traditionally appealed to a smaller and more academically inclined group of programmers.\n\n\nThe divide between the imperative and functional mindset also holds true for researchers within the programming languages and compilers research community. Our decision to use an imperative language is largely motivated by the fact that simple imperative languages are accessible to a broad public and research audience.\n\n\nHowever, the results we have for imperative languages can be adapted to functional languages as well. This is important because with the popularity of languages like Erlang (WhatsApp) and Scala (Twitter) there has been more public interest in functional languages.
As a result, the divide between these imperative and functional camps has started to blur as standard functional language features have been adopted into mainstream imperative languages (e.g., lambdas in C++ and Java). Our research is therefore in a position to adapt to changes in the landscape of programming paradigms.\n\n\nOne important thing to note is that — in principle — reasoning about a program written in two different Turing-complete languages (imperative or functional) is equally difficult (i.e., undecidable). However, writing a program in a functional language typically better exposes the structure of the computation.\n\n\nFor example, mapping a list of elements to another list of elements in a functional language makes explicit the recursive nature of the computation and the fact that the current head element of the list is disjoint from its tail.\n\n\nHowever, mapping a list of elements with a straightforward C implementation (for example) would immediately use pointers and therefore complicate the reasoning required to perform verification. As imperative languages begin to expose better structured programming constructs, the reasoning gap between imperative and functional languages will narrow.\n\n\n\n\n---\n\n\n**Luke**: What are some promising “next steps” for research into methods and tools like Rely that could improve program reliability for e.g. approximate computing and embedded system applications?\n\n\n\n\n---\n\n\n**Michael**: The approximate computing community is just starting to pick up steam, so there are many opportunities going forward.\n\n\nOn the computer hardware side, there are still open questions about what performance/energy gains are possible if we intentionally build hardware that breaks the traditional digital abstraction by silently returning incorrect or approximate results.
For example, researchers in the community are still asking, will we see 2x gains or can we hope to see 100x gains?\n\n\nAnother main question about the hardware is, what are the error models for each approximate component — do they fail frequently with arbitrary error or fail infrequently with small bounded error? Or, is there some happy balance in between those two extremes?\n\n\nOn top of the hardware then comes a variety of software concerns. Most software is designed and built around the assumption that hardware is reliable. The research we are doing with Rely is some of the first to propose a programming model and workflow that provides reliability and accuracy guarantees in the presence of these new approximate hardware designs.\n\n\nHowever, there are still many challenges. For example, compilers have traditionally relied on the assumption that all instructions and storage regions are equally reliable. However, approximate hardware may result in hardware designs where some operations/storage regions are more reliable than others. Because of this distinction, standard compiler transformations that optimize a program by exchanging one sequence of operations for another sequence of operations may now change the program’s reliability. This new reality will require the community to rethink how we design, build, and reason about compilers to balance both optimization and reliability.\n\n\nOne additional opportunity that the approximate computing community has yet to explore is the fact that an existing piece of software implements some algorithm that may have flexibility itself or may be one of a number of potential algorithms.
Going forward, the approximate computing community will need to consider an application’s algorithmic flexibility to realize the broad impact it hopes to achieve.\n\n\nSpecifically, by bringing the algorithm into the picture, the approximate computing community will be able to incorporate the experience and results of the numerical analysis, scientific computing, and theory communities to provide strong guarantees about the accuracy, stability, and convergence of the algorithms that these approximate hardware and software systems will be used to implement.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Michael!\n\n\nThe post [Michael Carbin on integrity properties in approximate computing](https://intelligence.org/2014/03/23/michael-carbin/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-23T08:00:07Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "69ea23caf6587bb5d9865244180a638d", "title": "Randal Koene on whole brain emulation", "url": "https://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/", "source": "miri", "source_type": "blog", "text": "![Randal A. Koene portrait](http://intelligence.org/wp-content/uploads/2014/03/Koene_w315.jpg)[Dr. Randal A. Koene](http://www.randalkoene.com/) is CEO and Founder of the not-for-profit science foundation [Carboncopies](http://carboncopies.org/) as well as the neural interfaces company NeuraLink Co. Dr. Koene is Science Director of the 2045 Initiative and a scientific board member in several neurotechnology companies and organizations.\n\n\nDr. Koene is a neuroscientist with a focus on neural interfaces, neuroprostheses and the precise functional reconstruction of neural tissue, a multi‑disciplinary field known as [(whole) brain emulation](http://en.wikipedia.org/wiki/Whole_brain_emulation). Koene’s work has emphasized the promotion of feasible technological solutions and “big‑picture” roadmapping aspects of the field. 
Activities since 1994 include science-curation such as bringing together experts and projects in cutting‑edge research and development that advance key portions of the field.\n\n\nRandal Koene was Director of Analysis at the Silicon Valley nanotechnology company [Halcyon Molecular](http://www.crunchbase.com/company/halcyon-molecular) (2010-2012) and Director of the Department of Neuroengineering at [Tecnalia](http://www.tecnalia.com), the third largest private research organization in Europe (2008-2010). Dr. Koene founded the Neural Engineering Corporation (Massachusetts) and was a research professor at Boston University’s Center for Memory and Brain. Dr. Koene earned his Ph.D. in Computational Neuroscience at the Department of Psychology at McGill University, as well as an M.Sc. in Electrical Engineering with a specialization in Information Theory at Delft University of Technology. He is a core member of the University of Oxford working group that convened in 2007 to create the first roadmap toward whole brain emulation (a term Koene proposed in 2000). Dr. Koene’s professional expertise includes computational neuroscience, neural engineering, psychology, information theory, electrical engineering and physics.\n\n\nIn collaboration with the VU University Amsterdam, Dr. Koene led the creation of [NETMORPH](http://netmorph.org), a computational framework for the simulated morphological development of large‑scale high‑resolution neuroanatomically realistic neuronal circuitry.\n\n\n\n**Luke Muehlhauser**: You were a participant in the 2007 workshop that led to FHI’s [Whole Brain Emulation: A Roadmap](http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) report. The report summarizes the participants’ views on several issues. Would you mind sharing your *own* estimates on some of the key questions from the report? In particular, at what level of detail do you think we’ll need to emulate a human brain to achieve WBE? 
(molecules, proteome, metabolome, electrophysiology, spiking neural network, etc.)\n\n\n(By “WBE” I mean what the report calls success criterion 6a (“social role-fit emulation”), so as to set aside questions of consciousness and personal identity.)\n\n\n\n\n---\n\n\n**Randal Koene**: It would be problematic to base your questions largely on the 2007 report. All of those involved are pretty much in agreement that said report did not constitute a “roadmap”, because it did not actually lay out a concrete, well-devised theoretical plan by which whole brain emulation is both possible and feasible. The 2007 white paper focuses almost exclusively on structural data acquisition and does not explicitly address the problem of system identification in an unknown (“black box”) system. That problem is fundamental to questions about “levels of detail” and more. It immediately forces you to think about constraints: What is successful/satisfactory brain emulation?\n\n\nSystem identification (in the small) is demonstrated by the neuroprosthetic work of Ted Berger. Taking that example and proof-of-principle, and applying it to the whole brain leads to a plan for decomposition into feasible parts. That’s what the actual roadmap is about.\n\n\nI don’t know if you’ve encountered these two papers, but you might want to read and contrast with the 2007 report:\n\n\n* Koene, R. 2012. [Experimental Research In Whole Brain Emulation: The Need For Innovative In-Vivo Measurement Techniques](http://carboncopies.org/research-projects/articles-and-essays/koene.IJMC.experimental-research-in-WBE-the-need-for-innovative-in-vivo-measurement-techniques.proofs-20120410.pdf).\n* Koene, R. 2012.
[Fundamentals Of Whole Brain Emulation: State, Transition And Update Representations](http://carboncopies.org/research-projects/articles-and-essays/koene.IJMC.fundamentals-of-WBE-state-transition-and-update-representations.proofs-20120410.pdf).\n\n\nI think that a range of different levels of detail will be involved in WBE. For example, as [work by Ted Berger](http://www.neural-prosthesis.com/) on a prosthetic hippocampus has already shown, it may often be adequate to emulate at the level of spike timing and patterns of neural spikes. It is quite possible that, from a functional perspective, emulation at that level can capture that which is perceptible to us. Consider: differences of pre- and post-synaptic spike times are the basis for synaptic strengthening (spike-timing dependent potentiation), i.e. encoding of long-term memory. Trains of spikes are used to communicate sensory input (visual, auditory, etc). Patterns of spikes are used to drive groups of muscles (locomotion, speech, etc).\n\n\nThat said, a good emulation will probably require a deeper level of data acquisition for parameter estimation and possibly also a deeper level of emulation in some cases, for example if we try to distinguish different types of synaptic receptors, and therefore how particular neurons can communicate with each other. I’m sure there are many other examples. \n\nSo, my hunch (strictly a hunch!) is that whole brain emulation will ultimately involve a combination of tools that carry out most data acquisition at one level, but which in some places or at some times dive deeper to pick up local dynamics.\n\n\nI think it is likely that we will need to acquire structure data at least at the level of current connectomics that enables identification of small axons/dendrites and synapses. I also think it is likely that we will need to carry out much electrophysiology, amounting to what is now called the Brain Activity Map (BAM).
\n\nI think it is less likely that we will need to map all proteins or molecules throughout an entire brain – though it is very likely that we will be studying each of those thoroughly in representative components of brains in order to learn how best to relate measurable quantities with parameters and dynamics to be represented in emulation.\n\n\n(Please don’t interpret my answer as “spiking neural networks”, because that does not refer to a data acquisition level, but a certain type of network abstraction for artificial neural networks.) \n\n\n\n\n\n\n---\n\n\n**Luke**: Which of these “assumptions” on page 15 of the report do you generally agree with? (physicalism, multiple realizability, computability, non-organicism, scale separation, component tractability, simulation tractability, brain-centeredness)\n\n\n\n\n---\n\n\n**Randal**:\n\n\n*Philosophical physicalism*: Yes. With the caveat that we are assuming that mind function can take place on different functional substrates (just as you could move a program from one type of computer to another). Since we already know that the brain can work around some types of brain damage by carrying out a function from a previously damaged piece in some other piece of brain (after retraining), leading to a resumption of normal mental operations, we have some evidence of such distinction between brain and mind functions even in the biological implementation. (That specific example does not work for all mind functions.)\n\n\n*Computability*: Yes – at the level that we are interested in. Rather than addressing the matter of Turing computability, I’d rather highlight the problem of replication: If you try to duplicate something accurately, by analog or digital means, it is pretty much impossible to do so to infinite precision. A good example is digital audio. Despite that constraint, we can capture what we are interested in.
The assumption here is that what we would consider a satisfactory whole brain emulation does not require replication at infinite precision. From my experience in neuroscience, I see a lot of evidence that the brain itself goes to a lot of trouble to make itself more predictable, to make it possible for brain regions to talk to each other and cooperate intelligibly. In essence, the brain uses tricks like spike bursts, nested oscillatory modulation, redundant ensembles of neurons active in spike patterns, etc, to ensure reliability and be more “computable”.\n\n\n*Non-organicism*: Yes (I was unfamiliar with the term, but I agree with the description). What is understanding? When do we understand something? If we can make a good model of something, is that understanding? In biological sciences, that is increasingly becoming the measure of understanding due to the increase in complexity with the number of components (unlike basic physics questions such as “pressure”, where, for example, adding more particles can make a system easier to describe/understand than working with a single particle).\n\n\n*Scale-separation*: Yes-ish, but see my longer answer along those lines above where I talked about what sort of data I think would be needed.\n\n\n*Component tractability*: Yes, but not really the components: It’s the signals that matter! This is about [system identification](http://en.wikipedia.org/wiki/System_identification), see for example as used by Ted Berger to [create Volterra expansions](http://en.wikipedia.org/wiki/Volterra_series) that capture system function for neural prosthesis. You can draw a box around some part of the brain at some resolution and call that a ‘black box’. The important thing is to know which signals you are interested in that are going in and coming out of the box. At an unknown CMOS component, we could say those signals are 1s and 0s (though actually voltages below and above some threshold).
We wouldn’t necessarily care about other signals there, such as infrared EM radiation (heat). Similarly, you need to consider the signals that are interesting when you examine black boxes at some level in the brain biology. I do believe that we will be able to carry out system identification (i.e. make a model of / understand) giving us the functions that describe what a component does in terms of its transforming of input into output.\n\n\n*Simulation tractability*: Yes. Having carried out system identification we will learn which functions or parts of functions are most common and thereby determine how to build efficient processors for them (call them neuromorphic if you like, though they may be quite different than what goes by that name… since it’s really about whatever the functions turn out to be, not about neurons per se). The biological brain is able to run functions of the mind fairly efficiently, so we should be able to engineer the same.\n\n\n*Brain centeredness*: Maybe, but not quite. I find this an odd thing to state as an important assumption, because it isn’t. Focusing on emulating the brain is a choice just like the choice of which black boxes to select at higher resolution. What is ‘me’? Is ‘me’ my brain, or my body, or my interaction with the whole universe and everyone in it? It seems to me like you can move that definition in or out as much as you’d like. But… given a universe in which I exist, I can take a part of that, such as my brain, declare it a volume to be emulated, build the emulator, and it should fit in nicely with the rest. So, perhaps, to exist properly you do need to have at least the perception of legs and arms and visual input, etc. I don’t think it is a fundamental problem for brain emulation. But the assumption is a weird thing to include, because it (falsely, in my opinion) makes it look like it could be a possible road-block to whole brain emulation. Do we need bodies or no? How much of a body?
Does the answer to those questions say much at all about the feasibility of whole brain emulation? I doubt it.\n\n\n\n\n---\n\n\n**Luke**: Table 4 of the report (p. 39) summarizes possible modeling complications, along with estimates of how likely they are to be necessary for WBE and how hard they would be to implement. Which of those estimates do you disagree with most strongly?\n\n\n\n\n---\n\n\n**Randal**: Spinal cord, don’t know. I agree with all estimates from synaptic adaptation through ephaptic effects.\n\n\nDynamical state: I agree it’s probably not necessary as such, but capturing dynamics through functional characterization may well be needed, hence brain activity mapping. This is not a show-stopper, on account of recent new interface prototypes.\n\n\nI agree with the estimate about analog computation and about “true” randomness.\n\n\n\n\n---\n\n\n**Luke**: Neuroscientist Miguel Nicolelis [says](http://www.technologyreview.com/view/511421/the-brain-is-not-computable/) whole brain emulation is incomputable because the “most important features [of consciousness] are the result of unpredictable, nonlinear interactions among billions of cells.” What’s your reply to that?\n\n\n\n\n---\n\n\n**Randal**: I’d say that Miguel is making a bit of a wild claim without actual evidence. What is he actually saying there? 
Remember the bit where I explained my take on its computability and the lengths the brain goes to to make itself more predictable + the issue of picking what you consider satisfactory for whole brain emulation (if you can’t be satisfied by anything less than atomic precision duplication… well then you can’t make a Mac emulator to run on a PC either).\n\n\nIt’s a strange thing for a neuroscientist to say, given that almost all of them – Miguel included – use computational neuroscience models and keep asking for ones that more rigorously replicate what is going on in the neurophysiology.\n\n\nHow does Miguel know if it makes any difference if the “most important features [of consciousness]” (whatever those are) are the result of unpredictable, possibly nonlinear interactions among billions of biological cells or billions of components of an emulation?\n\n\nThe short answer is: Miguel’s statement rings of strong feeling, but I can’t parse his argument to connect with any demonstrated evidence.\n\n\n\n\n---\n\n\n**Luke**: In two recent papers ([2012a](http://carboncopies.org/research-projects/articles-and-essays/koene.IJMC.experimental-research-in-WBE-the-need-for-innovative-in-vivo-measurement-techniques.proofs-20120410.pdf), [2012b](http://carboncopies.org/research-projects/articles-and-essays/koene.IJMC.fundamentals-of-WBE-state-transition-and-update-representations.proofs-20120410.pdf)) you explain the general challenge of whole brain emulation (WBE), and outline some experimental research that would make progress toward successful WBE. In your view, what are some specific, tractable “next research projects” that, if funded, could show substantial progress toward WBE in the next 10 years?\n\n\n\n\n---\n\n\n**Randal**: To be honest, I’m in the process of revamping the roadmap right now, because events of 2013 have altered what is likely to be the area most in need of focused attention.\n\n\nWith that caveat, I’ll propose a few:\n\n\n1. 
Develop a platform for wireless free-floating neural interfaces based on prototypes presently at UC Berkeley[1](https://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/#footnote_0_10873 \" See Neural Dust, presently at 126 µm in size \") and MIT, but sufficiently open and standardized that many research labs can use the platform even if the interface hardware undergoes several iterations of improvement. This is one component of work on the brain activity map (BAM) that will be essential to carry out system identification in general brain tissue.\n2. Demonstrate the ability to theoretically “break” a piece of neural tissue into a collection of tractable small subsystems, where characterization of a.) the connectivity between the pieces, and b.) the system functions identified within each piece allows reconstruction of the function of the whole. You can start this very small and work your way up, though validation will be tough if small pieces of tissue are arbitrarily chosen. It might be best to work in something like retina or a very small animal (not sure if I should suggest C.Elegans for this, due to its oddities).\n3. Attempt to use neural interfaces and neural prostheses in increasing numbers of patient populations for treatment of mental disorders, brain damage in select areas (hippocampus is one interesting candidate, obviously), and nerve damage (paralysis) as a way to accelerate experience with and development of those tools that share most of the same ultimate specifications as the ones needed to acquire activity data for WBE.\n4. Combine multiple microscopy modalities, such as EM and light (we see some of this already in [Brainbow](https://www.mcb.harvard.edu/mcb/news/news-detail/3677/brainbow-20-lichtman-and-sanes-labs/)), and protein tagging, to improve the identification of physiological components in sections of brain tissue.\n5. 
Develop automated segmentation, identification, 3D reconstruction and translation to functional model parameters from stacks of brain slice images (especially for EM images used in connectomics). As this proceeds, it becomes possible to test the limits of such identification and reconstruction, which relies (when not paired with BAM data) on sampling from probability distributions in libraries of correlations between structure & function.\n6. Improve throughput in connectomics by further developing mechanical automation of serial imaging (closely related to [Ken Hayworth’s work](http://cbs.fas.harvard.edu/science/connectome-project/atlum)[2](https://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/#footnote_1_10873 \" Also see: Hayworth, K. 2013. Preserving and Mapping the Brain’s Connectome. Global Future 2045 (2013). Proceedings. \"), of course). Related to this, further improve the quality of brain tissue preparations for connectomics work (see the [Brain Preservation Foundation](http://www.brainpreservation.org/)).\n7. Explore the whole tree of signal detection and stimulation modalities and their possible implementations for a) BAM and b) connectomics, a process that has already begun with the work of the so-called PoBAM group[3](https://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/#footnote_2_10873 \" See:\n\nMarblestone, A. et al. 2013. Physical Principles for Scalable Neural Recording.\nMarblestone, A. et al. 2013. Conneconomics: The Economics of Large-Scale Neural Connectomics.\nCybulski, T. et al. 2014. Spatial Information in Large-Scale Neural Recordings.\n\n\") (Physics of Brain Activity Mapping) and the various publications by Adam Marblestone et al. As this progresses, you get to the development of hybrid technology applications where the strengths of each modality (e.g. ultrasound, opto-electrics, biochemical sensing, etc.) 
are employed where best suited, so that hurdles that any one of them faces individually are overcome. There is a pretty strong group working on this right now, which promises advances in BAM in the next 5 years similar to the ones we’ve seen in connectomics over the last 5 years.\n8. Systematize (even more than the Allen Institute already does) the process of fundamental neurophysiological inquiry, in order to continue and complete the identification and mechanistic understanding of all physiological components involved in brain function (all receptor types, proteins, etc). While this may not all end up being needed if BAM and connectomics work well enough to identify system function and system interactivity, more understanding may sometimes turn out to be a game changer and certainly helps with validation, as well as treatments for issues in the biological brain.\n\n\nThere are certainly more, but as I already mentioned, this is presently undergoing heavy revamping. Also, I think there is still merit to completing the mapping and modeling of C.Elegans, though I suspect that something like further success in brain-piece neural prostheses (such as the hippocampal and prefrontal cortex work of the Berger group) will have more impact in accelerating research and is a better proof-of-principle that neuroprosthetic replacement of mammalian brain tissue is possible.\n\n\n\n\n---\n\n\n**Luke**: In suggestion #2 you referred to *C. elegans*‘ “oddities.” What oddities are you thinking of?\n\n\n\n\n---\n\n\n**Randal**: Firstly, neurons in C.Elegans don’t spike. The system operates on sub-threshold communications. Secondly, the 302 neurons of C.Elegans are all quite unique and different, basically different types that carry out rather sophisticated functions (non-redundantly and non-distributed) by themselves. 
If you were to compare this with the human brain, it would be more like having 302 different brain regions.\n\n\nA much better “small” animal that compares with mammals in terms of the sorts of functions its brain can carry out (like visual processing, locomotion using limbs, etc.) would be the fruitfly Drosophila, for example, which is the primary animal studied at Janelia Farm labs.\n\n\n\n\n---\n\n\n**Luke**: What’s your guess as to when scientists will successfully emulate a *Drosophila* brain? (Use whichever interpretation of “successfully” you prefer.)\n\n\n\n\n---\n\n\n**Randal**: Obviously, I can’t be very detailed about this sort of long-range guesswork.\n\n\nI’d say that if the brain activity map stuff develops in the next 5 years the way connectome stuff developed in the past 5 years, then in about 5-7 years it might be a conceivable / feasible / budgetable thing to propose a project to map the drosophila brain and to emulate it in a first, draft version of an emulation. I would expect that such a proposal would map out something on a duration of 8-10 years. So, I would think that the earliest date by which we could expect first emulations of drosophila brain is about 15-17 years out.\n\n\nThere are two interesting things to consider there:\n\n\n1. Meanwhile, what is happening on the parallel track where useful neural prostheses are built for human patients even if those are piece-wise and not “whole brain”?\n2. Is there any distinction between a drosophila brain emulation and a human brain emulation besides scale? I.e. 
once you start iterating through improved versions of drosophila emulation, how does that affect motivation and funding for a race to scale the procedure to humans?\n\n\n\n\n---\n\n\n**Luke**: What research into various aspects of WBE do you personally hope to be doing over the next 5 years?\n\n\n\n\n---\n\n\n**Randal**: I personally hope to always apply myself where my analysis of the roadmap indicates that my activity can have the most useful impact. That is foremost for me, and I’ve been adjusting my activities accordingly for a few years now.\n\n\nAside from that, I’m very interested to get involved with the development of neural interface platforms that can get to the resolution and bandwidth required for system identification.\n\n\nBoth of those considerations – the big-picture focus applied where it is most needed, and work on neural interfaces – are in my field of view right now when I think about the next 5 years.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Randal!\n\n\n\n\n---\n\n1. See [Neural Dust](http://arxiv.org/abs/1307.2196), presently at 126 µm in size\n2. Also see: Hayworth, K. 2013. [Preserving and Mapping the Brain’s Connectome](http://www.carboncopies.org/gf2045-2013-proceedings/gf2045-2013-proceedings-hayworth). Global Future 2045 (2013). Proceedings.\n3. See:\n\t* Marblestone, A. et al. 2013. [Physical Principles for Scalable Neural Recording](http://arxiv.org/abs/1306.5709v1).\n\t* Marblestone, A. et al. 2013. [Conneconomics: The Economics of Large-Scale Neural Connectomics](http://biorxiv.org/content/early/2013/12/16/001214).\n\t* Cybulski, T. et al. 2014. 
[Spatial Information in Large-Scale Neural Recordings](http://arxiv.org/abs/1402.3375).\n\nThe post [Randal Koene on whole brain emulation](https://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-20T08:00:46Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "ac997516bf77638c44216a08bb066277", "title": "Max Tegmark on the mathematical universe", "url": "https://intelligence.org/2014/03/19/max-tegmark/", "source": "miri", "source_type": "blog", "text": "![Max Tegmark portrait](http://intelligence.org/wp-content/uploads/2014/03/Tegmark_2.jpg)Known as “Mad Max” for his unorthodox ideas and passion for adventure, Max Tegmark has scientific interests ranging from precision cosmology to the ultimate nature of reality, all explored in his new popular book “[Our Mathematical Universe](http://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307599809/)”. He is an [MIT physics professor](http://web.mit.edu/physics/people/faculty/tegmark_max.html) with more than two hundred technical papers, 12 of them cited over 500 times, and he has been featured in dozens of science documentaries. His work with the SDSS collaboration on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.”\n\n\n\n**Luke Muehlhauser**: Your book opens with a concise argument against the [absurdity heuristic](http://wiki.lesswrong.com/wiki/Absurdity_heuristic) — the rule of thumb which says “If a theory sounds absurd to my human psychology, it’s probably false.” You write:\n\n\n\n> Evolution endowed us with intuition only for those aspects of physics that had survival value for our distant ancestors, such as the parabolic orbits of flying rocks (explaining our penchant for baseball). 
A cavewoman thinking too hard about what matter is ultimately made of might fail to notice the tiger sneaking up behind and get cleaned right out of the gene pool. Darwin’s theory thus makes the testable prediction that whenever we use technology to glimpse reality beyond the human scale, our evolved intuition should break down. We’ve repeatedly tested this prediction, and the results overwhelmingly support Darwin. At high speeds, Einstein realized that time slows down, and curmudgeons on the Swedish Nobel committee found this so weird that they refused to give him the Nobel Prize for his relativity theory. At low temperatures, liquid helium can flow upward. At high temperatures, colliding particles change identity; to me, an electron colliding with a positron and turning into a Z-boson feels about as intuitive as two colliding cars turning into a cruise ship. On microscopic scales, particles schizophrenically appear in two places at once, leading to the quantum conundrums mentioned above. On astronomically large scales… weirdness strikes again: if you intuitively understand all aspects of black holes [then you] should immediately put down this book and publish your findings before someone scoops you on the Nobel Prize for quantum gravity… [also,] the leading theory for what happened [in the early universe] suggests that space isn’t merely really really big, but actually infinite, containing infinitely many exact copies of you, and even more near-copies living out every possible variant of your life in two different types of parallel universes.\n> \n> \n\n\nLike much of modern physics, the [hypotheses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) [motivating](http://intelligence.org/2013/05/15/when-will-ai-be-created/) MIRI’s work can easily run afoul of a reader’s own absurdity heuristic. 
What are your best tips for getting someone to give up the absurdity heuristic, and try to judge hypotheses via argument and evidence instead?\n\n\n\n\n---\n\n\n**Max Tegmark**: That’s a very important question: I think of the absurdity heuristic as a cognitive bias that’s not only devastating for any scientist hoping to make fundamental discoveries, but also dangerous for any sentient species hoping to avoid extinction. Although it appears daunting to get most people to drop this bias altogether, I think it’s easier if we focus on a specific example. For instance, whereas our instinctive fear of snakes is innate and evolved, our instinctive fear of guns (which the Incas lacked) is learned. Just as people learned to fear nuclear weapons through blockbuster horror movies such as “The Day After”, rational fear of unfriendly AI could undoubtedly be learned through a future horror movie that’s less unrealistic than Terminator III, backed up by a steady barrage of rational arguments from organizations such as MIRI.\n\n\nIn the meantime, I think a good strategy is to confront people with some incontrovertible fact that violates their absurdity heuristic and the whole notion that we’re devoting adequate resources and attention to existential risks. For example, I like to ask why more people have heard of Justin Bieber than of Vasili Arkhipov, even though it wasn’t Justin who singlehandedly prevented a Soviet nuclear attack during the Cuban Missile Crisis.\n\n\n\n\n\n---\n\n\n**Luke**: After reviewing mainstream contemporary physics, you begin to explain the “[multiverse hierarchy](http://en.wikipedia.org/wiki/Multiverse#Max_Tegmark.27s_four_levels)” in chapter 6, which refers to four “levels” of multiverse we might inhabit. 
The “Level I multiverse,” as you call it, is an inescapable prediction of what is currently the most widely accepted theory of the early universe ([eternal inflation](http://en.wikipedia.org/wiki/Eternal_inflation)): the universe is infinite in all directions, implying that there is an identical copy of me, who is also asking Max Tegmark questions via email, roughly 10^(10^29) meters from where I sit now.\n\n\n\nTo many readers that will sound absurd. But you write that it is “a prediction of eternal inflation which, as we’ve seen above, agrees with all current observational evidence and is implicitly used as the basis for most calculations and simulations presented at cosmology conferences.” Moreover, the Level I multiverse could still exist even if eternal inflation turns out to be false. All we need for the Level I multiverse is, you write:\n\n\n\n\n> \n> * *Infinite space and matter*: Early on, there was an infinite space filled with hot expanding plasma.\n> * *Random seeds*: Early on, a mechanism operated such that any region could receive any possible seed fluctuations, seemingly at random.\n> \n> \n> \n\n\nAnd as you explain, we have pretty decent evidence for these two claims, independent of whether eternal inflation in particular happens to be correct.\n\n\nStill, I’m curious what your colleagues in physics departments around the world think of this. If you had to guess, what proportion of them accept that the Level I multiverse is a straightforward prediction of eternal inflation? 
And roughly what proportion of them think eternal inflation, or *some* theory that assumes both “infinite space and matter” and also “random seeds”, will turn out to be correct?\n\n\n\n\n---\n\n\n**Max**: There’s definitely been an increased acceptance of these ideas, with the most vocal critics shifting from saying “this makes no sense and I hate it!” to “I hate it!”, tacitly acknowledging that it’s actually a scientifically legitimate possibility.\n\n\nI haven’t seen any relevant poll, but my sense is that the proportion of physicists who think a Level I multiverse is likely depends strongly on their subfield of physics, with the proportion being highest among theoretical cosmologists and high-energy theorists, and lowest in very different areas where people don’t normally think much about these ideas and often feel that they sound too weird to be true.\n\n\nYou and your MIRI colleagues work very hard to be rational, so if you’re convinced that A implies B and A is true, then you’ll update your Bayesian prior to be convinced that B is also true. 
I suspect that many physicists are less rational than you: my guess is that many who are sympathetic toward inflation and learn that it generically implies eternal inflation and Level I don’t actually update their prior about Level I, but instead tell themselves that a multiverse feels unscientific, and therefore lose interest in spending more time thinking about consequences of inflation.\n\n\n\n\n---\n\n\n**Luke**: You go on to explain four levels of multiverse in total:\n\n\n* Level I multiverse: “Distant regions of space that are currently but not forever unobservable; they have the same effective laws of physics but may have different histories.” A straightforward prediction of eternal inflation, and many other possible theories of cosmological evolution.\n* Level II multiverse: “Distant regions of space that are forever unobservable because space between here and there keeps on inflating; they obey the same fundamental laws of physics, but their effective laws of physics may differ.” Also suggested by eternal inflation.\n* Level III multiverse: “Different parts of quantum Hilbert space.” If the wavefunction never collapses but is instead *always* governed by the Schrödinger equation, this implies the universe is constantly “splitting” into parallel universes.\n* Level IV multiverse: “All mathematical structures, corresponding to different fundamental laws of physics.”\n\n\nBefore we get to Level IV, I want to ask about a claim you make about Level III.\n\n\nYou write that “fledgling technologies such as quantum cryptography and quantum computing explicitly exploit the Level III multiverse and work only if the wavefunction doesn’t collapse.”\n\n\nBut I assume that collapse theorists don’t think their view will be falsified as soon as the first “true” quantum computer is ([uncontroversially](http://www.scottaaronson.com/blog/?p=1643)) built. What do they argue in response? 
Why do you think quantum computing can only work if the wavefunction doesn’t collapse?\n\n\n\n\n---\n\n\n**Max**: That’s a good question, because there’s lots of confusion in this area. What’s uncontroversial is that quantum computers *will* work if there’s no collapse, i.e., if the Schrödinger equation works with no exceptions (as long as engineering obstacles such as decoherence mitigation can be overcome). What’s controversial is what happens otherwise. There’s such a large zoo of [non-Everett interpretations](http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics) (13 by now) that you’ll have to ask their adherents what exactly they predict for quantum computers, for quantum systems containing humans, etc. – in my experience, different people claiming to subscribe to the same interpretation sometimes nonetheless disagree on specific predictions.\n\n\nThe quantum litmus test is to ask them the following question:\n\n\n\n> Alice is in an isolated lab in a spaceship and measures the spin of an electron that was in a superposition of “up” and “down”. According to Bob, who hasn’t yet observed the spaceship, is Alice’s brain in a superposition of perceiving that she’s measured “up” and that she’s measured “down”?\n> \n> \n\n\nThe Many-Worlds Interpretation says unambiguously “yes”, because that’s what the Schrödinger equation predicts. Some Copenhagen supporters would say “no”, on the grounds that Alice collapsed the wavefunction when she observed the electron: something truly random happened at that instant, and it’s now really just up or really just down. Others say “yes” and still others will give you less clear-cut answers. My point is that a theory refusing to make a clear prediction for whether the Schrödinger equation holds for arbitrarily large and complicated systems is automatically refusing to make a prediction for whether arbitrarily large and complicated quantum computers work or not, and is therefore not a complete theory. 
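As an aside, the litmus-test scenario above is easy to sketch numerically under the no-collapse assumption: a “measurement” is just a unitary interaction that entangles Alice’s memory with the spin, after which Alice’s reduced state, from Bob’s outside perspective, is a 50/50 mixture rather than a definite record. A minimal NumPy sketch (the two-level “memory” and CNOT-style interaction are illustrative simplifications, not anything from the interview):

```python
import numpy as np

# Basis states for a single qubit (spin "up"/"down", or Alice's memory).
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

spin = (up + down) / np.sqrt(2)   # electron in an equal superposition
alice = up                        # Alice's memory starts in a "ready" state

# Joint initial state |spin> (x) |alice>
state = np.kron(spin, alice)

# A CNOT-style unitary copies the spin value into Alice's memory.
# This is pure Schrödinger evolution: no collapse anywhere.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = cnot @ state              # -> (|up,up> + |down,down>) / sqrt(2)

# From Bob's outside view, Alice has no definite memory: tracing out the
# spin leaves her reduced density matrix maximally mixed (0.5 * identity).
rho = np.outer(state, state)
rho_alice = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
print(np.round(rho_alice, 3))
```

The joint state stays pure (Everett’s “yes” answer), while each subsystem alone looks random, which is exactly the disagreement the litmus test is probing.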
I’d also argue that any theory where the wavefunction never collapses is simply Everett disguised in unfamiliar language – as far as nature is concerned, it’s only the equations that matter.\n\n\n\n\n---\n\n\n**Luke**: In chapter 11 you write that:\n\n\n\n> Whereas most of my physics colleagues would say that our external physical reality is (at least approximately) *described* by mathematics, I’m arguing that it *is* mathematics (more specifically, a mathematical structure). In other words, I’m making a much stronger claim. Why? …If a future physics textbook contains the coveted Theory of Everything (ToE), then its equations are a complete description of the mathematical structure that is the external physical reality. I’m writing *is* rather than *corresponds* to here, because if two structures are equivalent, then there’s no meaningful sense in which they’re not one and the same…\n> \n> \n\n\nDo you have physics colleagues who assume external reality exists and that it can be described by mathematics, but who *don’t* accept your Mathematical Universe Hypothesis? If so, what are their counter-arguments?\n\n\n\n\n---\n\n\n**Max**: Interestingly, I haven’t heard any clearly articulated counter-arguments from physics colleagues. Rather, it’s a bit like with the unfriendly AI X-risk  argument: the scientists I know who are unconvinced by the conclusion don’t take issue with specific logical steps in the argument, but lack sufficient interest in the question to have familiarized themselves with the argument. \n\nIf you want to classify people’s views, it boils down to two logically separate questions:\n\n\n1. Is our external physical reality completely described by mathematics?\n2. 
Can something be perfectly described by mathematics (having no properties except mathematical properties) but still not be a mathematical structure?\n\n\nThe people I’ve heard answer “no” to 1) tend to do so not based on evidence or a logical argument, but based on a preference for a non-mathematical free will, soul or deity. The people I’ve heard answer “no” to 2) often conflate the description with the described, within mathematics itself. This ties in with the important question about whether mathematics is invented or discovered – a famous controversy among mathematicians and philosophers.\n\n\nOur *language* for describing the planet Neptune (which we obviously invent – we invented a different word for it in Swedish) is of course distinct from the planet itself, which we discovered. Analogously, we humans invent the *language* of mathematics (the symbols, our human names for the symbols, etc.), but it’s important not to confuse this language with the *structures* of mathematics that I focus on in the book. For example, any civilization interested in Platonic solids would discover that there are precisely 5 such structures (the tetrahedron, cube, octahedron, dodecahedron and icosahedron). Whereas they’re free to invent whatever names they want for them, they’re *not* free to invent a 6th one – it simply doesn’t exist. It’s in the same sense that the mathematical structures that are popular in modern physics are discovered rather than invented, from 3+1-dimensional pseudo-Riemannian manifolds to Hilbert spaces. The possibility that I explore in the book is that one of the *structures* of mathematics (which we can discover but not invent) corresponds to the physical world (which we also discover rather than invent).\n\n\n\n\n---\n\n\n**Luke**: In chapter 13 you turn your attention to the future of physics, and the future of humanity within the physical world. 
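The Platonic-solid count mentioned above is one of those discoverable facts any civilization can verify: at a vertex where q regular p-gons meet, the interior angles q·(1 − 2/p)·180° must sum to less than 360°, i.e. 1/p + 1/q > 1/2, and only five (p, q) pairs satisfy it. A few lines of Python confirm this (the `names` mapping is added here for readability):

```python
# Enumerate (p, q): faces are regular p-gons, q of them meet at each vertex.
# The vertex-angle constraint 1/p + 1/q > 1/2 (with p, q >= 3) admits
# exactly five solutions -- the five Platonic solids.
solids = [(p, q) for p in range(3, 50) for q in range(3, 50)
          if 1 / p + 1 / q > 1 / 2]

names = {(3, 3): "tetrahedron", (4, 3): "cube", (3, 4): "octahedron",
         (5, 3): "dodecahedron", (3, 5): "icosahedron"}
for p, q in solids:
    print(p, q, names[(p, q)])
```

Raising the search bound changes nothing: for p ≥ 6 no q ≥ 3 works, so a sixth solid simply doesn’t exist.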
In particular, you talk a lot about risks of human extinction, aka “[existential risks](http://www.existential-risk.org/concept.pdf).”\n\n\nTo summarize: the bad news is that there are lots of ways for humans to go extinct. The good news is that very few extinction risks are remotely likely in the next, say, *150 years*. To illustrate this point, you provide this graphic:\n\n\n![upcoming existential risks](https://intelligence.org/wp-content/uploads/2014/03/upcoming-existential-risks.png)\nI like that graphic, and I think it’s basically right, except that:\n\n\n* I’d downplay nuclear war as a fully *existential* risk (see [here](http://lesswrong.com/lw/jb9/some_notes_on_existential_risk_from_nuclear_war/)),\n* I’d change “global pandemic” to “synthetic biology” to emphasize that it’s *novel* pathogens that might be capable of full-blown human extinction (rather than “mere” global catastrophe),\n* and I’d add [molecular nanotechnology](http://en.wikipedia.org/wiki/Molecular_nanotechnology) as a major existential threat for the next 150 years.\n\n\nI suspect the folks at Cambridge University’s [Centre for the Study of Existential Risk](http://cser.org/) (CSER) would make the same adjustments, as they also [seem to be](http://cser.org/hundreds-attend-cser-lecture/) focusing on risks from synthetic biology, molecular nanotechnology, and AGI.\n\n\nDo you think you’d agree with those adjustments, or is your basic picture somewhat different from mine on those points?\n\n\n\n\n---\n\n\n**Max**: I agree that “synthetic biology” is a better phrase, especially since the global pandemics I had in mind when making that figure were mainly human-made. The forms of molecular nanotechnology that I suspect pose the greatest existential risks are those that transcend the boundaries with synthetic biology or AI (already covered).\n\n\nI disagree with the argument that we’ve overestimated nuclear war as an existential risk. 
Of course an all-out nuclear war couldn’t kill all humans instantly by literally blowing us up. However, I find the supposedly reassuring arguments you cite unconvincing, and had a spirited debate about this with one of the authors last year. To qualify something as an existential risk, we don’t need to prove that it *will* extinguish humanity – we simply need to establish reasonable doubt of the assertion that it *can’t*.\n\n\nIf the initial blasts disable much of our infrastructure and then nuclear winter lowers the summer temperature by about 20°C (36°F) in most of North America, Europe and Asia (as per [Robock](http://climate.envsci.rutgers.edu/robock/robock_nwpapers.html)) to cause catastrophic crop failures, it’s not hard to imagine scenarios of truly existential proportions. Modern human society is a notoriously complex and hard-to-model system, so the scenarios I’m most concerned about involve complex interplays between multiple effects. For example, infrastructure breakdown might make it difficult to control either starvation-induced pandemics or armed gangs who systematically sweep the planet for food, weapons, etc. with little regard for sustainability. Without any serious attempts to model such complications, I don’t find the cited estimates of available biomass etc. particularly reassuring.\n\n\n\n\n---\n\n\n**Luke**: As [Bostrom (2013)](http://www.existential-risk.org/concept.pdf) notes, humanity seems to be investing much more effort into (e.g.) dung beetle research than we are investing in research on near-term existential risks and how we might mitigate them. From your perspective, what can we do to cause research grantmakers (e.g. the NSF) and researchers (e.g. 
economists) to direct more of their effort toward research into near-term existential risks, especially the risks from novel technologies like synthetic biology and AGI?\n\n\n\n\n---\n\n\n**Max**: We need to draw more attention to these risks, so that people start thinking of them as real threats rather than science fiction fantasies. Organizations such as MIRI are an invaluable step in this direction.\n\n\nWe should aim to make more opinion leaders understand xrisks and make more people who understand xrisks into opinion leaders.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Max!\n\n\nThe post [Max Tegmark on the mathematical universe](https://intelligence.org/2014/03/19/max-tegmark/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-19T08:00:37Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "da58dbf61081e0b3837a139a764c07cd", "title": "MIRI’s March 2014 Newsletter", "url": "https://intelligence.org/2014/03/18/miris-march-2014-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\n\n**Research Updates**\n* We recently hired [four new researchers](http://intelligence.org/2014/03/13/hires/), including two new Friendly AI researchers. 
We announced this to our local supporters at the recent [MIRI Expansion Party](https://www.facebook.com/media/set/?set=a.655204764516911.1073741832.170446419659417&type=1).\n* We’ve scheduled our [next research workshop](http://intelligence.org/2014/02/22/miris-may-2014-workshop/) for May 2014.\n* Three new posts on naturalized induction: [Bridge Collapse](http://lesswrong.com/lw/jlg/bridge_collapse_reductionism_as_engineering/), [Solomonoff Cartesianism](http://lesswrong.com/lw/jg1/solomonoff_cartesianism/), and [The Problem with AIXI](http://lesswrong.com/lw/jti/the_problem_with_aixi/).\n* New analyses: [The world’s distribution of computation](http://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/) and [Is my view contrarian?](http://lesswrong.com/lw/jv2/is_my_view_contrarian/)\n* Many new expert interviews: [Anders Sandberg](http://intelligence.org/2014/03/02/anders-sandberg/) on space colonization, [Nik Weaver](http://intelligence.org/2014/02/24/nik-weaver-on-paradoxes-of-rational-agency/) on paradoxes of rational agency, [John Baez](http://intelligence.org/2014/02/21/john-baez-on-research-tactics/) on research tactics, [Bob Constable](http://intelligence.org/2014/03/02/bob-constable/) on correct-by-construction programming, [Armando Tacchella](http://intelligence.org/2014/03/02/armando-tacchella/) on safety in future AI systems, [David Cook](http://intelligence.org/2014/03/07/david-cook/) on VV&A, [John Ridgway](http://intelligence.org/2014/03/08/john-ridgway-on-safety-critical-systems/) on safety-critical systems, [Toby Walsh](http://intelligence.org/2014/03/10/toby-walsh/) on computational social choice, and [Randall Larsen & Lynne Kidder](http://intelligence.org/2014/03/09/randall-larsen-and-lynne-kidder/) on USA bio-response.\n\n\n\n**News Updates**\n* New post for *2013 in Review* on [Friendly AI research](http://intelligence.org/2014/02/18/2013-in-friendly-ai-research/).\n* A conversation with Holden Karnofsky about 
[future-oriented philanthropy](http://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/).\n* MIRI and *Our Final Invention* were [featured](http://www.myfoxny.com/story/24823210/the-rapid-progress-of-artificial-intelligence) on FoxTV.\n* *BioTrove* [interviewed](http://biotroveinvestments.com/biotrove-podcasts/2014/2/18/louie-helm-the-future-of-artificial-intelligence) Louie Helm about the future of AI, and *Big Picture Science* [interviewed](http://radio.seti.org/blog/2014/03/big-picture-science-you-think-youre-so-smart-luke-muehlhauser/?utm_content=buffer50743&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer) Luke Muehlhauser about intelligence in animals and machines.\n* [Castify](http://castify.co/) has turned several of Yudkowsky’s *Sequences* into professionally-read podcasts: [Map and Territory](http://castify.co/channels/4-map-and-territory), [How to Actually Change Your Mind](http://castify.co/channels/46-how-to-actually-change-your-mind), [A Human’s Guide to Words](http://castify.co/channels/16-a-human-s-guide-to-words), [Mysterious Answers to Mysterious Questions](http://castify.co/channels/1-mysterious-answers-to-mysterious-questions), [Reductionism](http://castify.co/channels/43-reductionism), and [Ethical Injunctions](http://castify.co/channels/2-ethical-injunctions).\n\n\n**Other Updates**\n* [Video](http://cser.org/hundreds-attend-cser-lecture/) of the inaugural lectures of the Center for the Study of Existential Risk at Cambridge University.\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n |\n| |\n\n\nThe post [MIRI’s March 2014 Newsletter](https://intelligence.org/2014/03/18/miris-march-2014-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-18T18:53:57Z", "authors": ["Luke Muehlhauser"], "summaries": []} 
-{"id": "e67b8a418afe152e955aaf78538e89b9", "title": "Recent Hires at MIRI", "url": "https://intelligence.org/2014/03/13/hires/", "source": "miri", "source_type": "blog", "text": "MIRI is proud to announce several new team members (see our [Team](http://intelligence.org/team/) page for more details):\n\n\n**Benja Fallenstein** attended four of MIRI’s past [workshops](http://intelligence.org/workshops/), and has contributed to several novel results in Friendly AI theory, including [Löbian cooperation](http://arxiv.org/abs/1401.5577), [parametric polymorphism](http://intelligence.org/wp-content/uploads/2013/12/decreasing-strength-parametric-polymorphism.pdf), and “[Fallenstein’s monster](http://intelligence.org/wp-content/uploads/2013/12/fallensteins-monster.pdf).” Her research focus is Friendly AI theory.\n\n\n**Nate Soares** [worked through](http://lesswrong.com/lw/jg3/the_mechanics_of_my_recent_productivity/) much of the [MIRI’s courses list](http://intelligence.org/courses/) in time to attend MIRI’s December 2013 workshop, where he demonstrated his ability to contribute to the research program in a variety of ways, including [writing](http://intelligence.org/wp-content/uploads/2013/12/fallensteins-monster.pdf). He and Fallenstein are currently collaborating on several papers in Friendly AI theory.\n\n\n**Robby Bensinger** works part-time for MIRI, describing open problems in Friendly AI in collaboration with Eliezer Yudkowsky. His current project is to explain the open problem of [naturalized induction](http://wiki.lesswrong.com/wiki/Naturalized_induction).\n\n\n**Katja Grace** has also been hired in a part-time role to study questions related to the [forecasting](http://intelligence.org/research/#forecasting) part of MIRI’s research program. 
She previously researched and wrote [Algorithmic Progress in Six Domains](https://intelligence.org/files/AlgorithmicProgress.pdf) for MIRI.\n\n\nMIRI continues to collaborate on a smaller scale with many other valued researchers, including [Jonah Sinick](http://intelligence.org/2014/01/28/how-big-is-ai/), [Vipul Naik](http://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/), and our many [research associates](http://intelligence.org/team/#associates).\n\n\nIf you’re interested in joining our growing team, [apply to attend a future MIRI research workshop](http://intelligence.org/get-involved/#workshop). We’re also still looking to fill [several non-researcher positions](http://intelligence.org/careers/).\n\n\nThe post [Recent Hires at MIRI](https://intelligence.org/2014/03/13/hires/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-13T20:57:36Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "48ac94fc7b4977727b3a283abac82f23", "title": "Toby Walsh on computational social choice", "url": "https://intelligence.org/2014/03/10/toby-walsh/", "source": "miri", "source_type": "blog", "text": "![Toby Walsh portrait](https://intelligence.org/wp-content/uploads/2014/03/Walsh_w727.jpg)[Toby Walsh](http://www.cse.unsw.edu.au/~tw/) is a professor of artificial intelligence at [NICTA](http://www.nicta.com.au) and the University of New South Wales. He has served as Scientific Director of NICTA, Australia’s centre of excellence for ICT research. He has also held research positions in England, Scotland, Ireland, France, Italy, Sweden and Australia. He has been Editor-in-Chief of the *[Journal of Artificial Intelligence Research](http://www.jair.org/)*, and of *AI Communications*. 
He is Editor of the *[Handbook of Constraint Programming](http://www.amazon.com/Constraint-Programming-Foundations-Artificial-Intelligence/dp/0444527265/)*, and of the *[Handbook of Satisfiability](http://www.amazon.com/Handbook-Satisfiability-Artificial-Intelligence-Applications/dp/1586039296/)*.\n\n\n\n**Luke Muehlhauser**: In [Rossi et al. (2011)](http://www.amazon.com/Short-Introduction-Preferences-Artificial-Inetlligence/dp/1608455866/), you and your co-authors quickly survey a variety of methods in computational social choice, including methods for preference aggregation, e.g. voting rules. In [Narodytska et al. (2012)](http://www.cse.unsw.edu.au/%7Etw/nwxecai12.pdf), you and your co-authors examine the issue of combining voting rules to perform a run-off between the different winners of each voting rule. What do you think are some plausible practical applications of this work — either soon or after further theoretical development?\n\n\n\n\n---\n\n\n**Toby Walsh**: As humans, we’re all used to voting: voting for our politicians, or voting for where to go out. In the near future, we’ll hand over some of that responsibility to computational agents that will help organize our lives. Think Siri on steroids. In such situations, we often have many choices, as there can be a combinatorial number of options. This means we need to consider computational questions: How do we get computer(s) to work with such rich decision spaces? How do we efficiently collect and represent users’ preferences?\n\n\nI should note that computer systems are already voting. The [SCATS](http://www.scats.com.au/) system for controlling traffic lights has the controllers of different intersections vote for what should be the common cycle time for the lights. Similarly, the [Space Shuttle](https://en.wikipedia.org/wiki/Space_Shuttle) had 5 control computers which voted on whose actions to follow.\n\n\nComputational social choice is, however, more than just voting. 
It covers many other uses of preferences. Preferences are used to allocate scarce resources. I prefer, for example, a viewing slot on this expensive telescope when the moon is high above the horizon. Preferences are also used to allocate people to positions. I prefer, for example, to be matched to a hospital with a good pediatrics department. [Lloyd Shapley](http://en.wikipedia.org/wiki/Lloyd_Shapley) won the Nobel Prize in Economics recently for looking at such allocation problems. There are many appealing applications in areas like kidney transplantation and school choice.\n\n\nOne interesting thing we’ve learnt from machine learning is that you often [make better decisions when you combine the opinions of several methods](http://en.wikipedia.org/wiki/Ensemble_learning). It’s therefore likely that we’ll get better results by combining voting methods. For this reason, we’ve been looking at how voting rules combine together.\n\n\n\n\n\n---\n\n\n**Luke**: Which methods from computational social choice might be relevant to a situation like the following, adapted from [Bostrom (2009)](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html)?\n\n\n\n> Suppose you want to learn agent A’s preferences, and suppose that you begin with a set of mutually exclusive preference orderings, and that you assign each of these some probability of representing A’s preferences. Now imagine that each of these preference orderings gets to send some number of delegates to Parliament. The number of delegates each preference ordering gets to send is proportional to the probability of that preference ordering. Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting.\n> \n> \n\n\n\n\n---\n\n\n**Toby**: In computational social choice, we’d be interested in computational aspects of such voting situations. 
The first problem in this setting is that the number of preference orderings is exponential. So, is the number of delegates also exponential, which would make it computationally difficult to deal with such a large number of delegates? Or is it polynomial, in which case we need to consider which polynomial subset to use and how “representative” it is of the whole? Having solved such problems, we’ll then consider computational questions like how the delegates compute their (possibly strategic) votes. We know from fundamental axiomatic results in social choice that any reasonable voting system is likely to be manipulable and subject to [strategic voting](http://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite_theorem).\n\n\n\n\n---\n\n\n**Luke**: In some of your work (e.g. [2011a](http://www.cse.unsw.edu.au/%7Etw/waaim11.pdf), [2011b](http://www.cse.unsw.edu.au/%7Etw/wjair11.pdf)), you explore the computational complexity of manipulation. Are there cases where an agent can efficiently engage in truthful voting, but can’t efficiently engage in various forms of manipulation, including strategic voting?\n\n\n\n\n---\n\n\n**Toby**: Well, hopefully an agent can always efficiently engage in truthful voting, as we want participation. But surprisingly, even this can be sometimes challenging. A nice example goes back to the Victorian mathematician and writer Charles Dodgson, or to go by his more familiar pen name, Lewis Carroll. He proposed an interesting voting rule, known as Dodgson’s method, which elects any candidate that is preferred by a majority of voters to any other candidate when such a candidate exists. When such a candidate does not exist (for instance, when votes are split three ways), it elects the closest candidate in the sense that we have to make the fewest pairwise swaps in the votes to get to a candidate that is preferred by a majority. 
It’s worth noting that many popular rules, even plurality voting, do not necessarily elect a candidate that is pairwise preferred to all others by a majority when such a candidate exists. Anyway, back to Dodgson’s rule. Somewhat to the surprise of many, even working out who wins an election run under Dodgson’s rule is [computationally intractable in the worst case](http://link.springer.com/article/10.1007/BF00303169#page-1) (formally, it is [NP-hard](http://www.ii.uib.no/~daniello/papers/dodgsonHard.pdf)).\n\n\nFortunately, most voting rules are easy to compute, typically even linear time. On the other hand, there are many [deep impossibility results](http://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite_theorem) showing that voters may profit from strategic voting and misreporting their preferences. An interesting escape from this problem was put forward by [Bartholdi, Tovey and Trick](http://link.springer.com/article/10.1007/BF00295861#page-1). Perhaps strategic voting is possible but too computationally difficult to actually work out?\n\n\nSince then, I and others have explored which voting rules have this property. For example, we recently settled a long-standing open question by showing that Borda voting (which many people will have seen, as a variant of it is used in the Eurovision song contest) is computationally difficult to manipulate. These are worst-case results, so we’ve also looked at how difficult strategic voting is [to compute in practice](https://jair.org/media/3223/live-3223-5837-jair.pdf) (not just in the worst case).\n\n\n\n\n---\n\n\n**Luke**: What are some of the most interesting questions in computational social choice that you think have a decent chance of being essentially resolved in the next 20 years?\n\n\n\n\n---\n\n\n**Toby**: We’ll definitely have a good idea of the computational properties of the many different voting rules proposed throughout history, from ancient ones like Borda to recent ones like Schulze’s rule. 
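The rules Walsh discusses are easy to state in code. As a rough illustrative sketch (not from the interview; the candidate names and ballot format are invented for the example), here are a Borda count and a Condorcet-winner check in Python. Dodgson's rule is the extension of the latter that elects whichever candidate needs the fewest pairwise swaps in the ballots to become a Condorcet winner, and it is that extension which turns out to be NP-hard even to evaluate.

```python
def borda_scores(ballots):
    """Borda count: with m candidates, each ballot awards m-1 points to its
    first choice, m-2 to its second, and so on down to 0."""
    m = len(ballots[0])
    scores = {candidate: 0 for candidate in ballots[0]}
    for ballot in ballots:
        for rank, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - rank
    return scores


def condorcet_winner(ballots):
    """Return the candidate preferred by a majority to every other candidate
    in head-to-head comparisons, or None when no such candidate exists
    (e.g. in a three-way Condorcet cycle)."""
    candidates = ballots[0]
    for c in candidates:
        beats_all = all(
            # c beats d if a strict majority of ballots rank c above d.
            sum(b.index(c) < b.index(d) for b in ballots) * 2 > len(ballots)
            for d in candidates if d != c
        )
        if beats_all:
            return c
    return None


# Three voters ranking candidates A, B, C from most to least preferred.
ballots = [("A", "B", "C"), ("A", "C", "B"), ("B", "C", "A")]
print(borda_scores(ballots))      # {'A': 4, 'B': 3, 'C': 2}
print(condorcet_winner(ballots))  # A
```

Note the contrast the interview draws: both functions above run in polynomial time, while Dodgson's "fewest swaps" repair step requires searching over modifications to the ballots, which is where the intractability enters.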
And we’ll understand the computational resistance of these rules to manipulation not just in the worst case but on average and in practice on real-world elections. More practically, we’ll also have a good understanding of computational aspects of other problems in social choice like fair division. For instance, we’ll have developed a rich range of mechanisms for fair division that cover the spectrum of cases (divisible/indivisible goods, centralized/decentralized, etc) and for which we understand their properties (fairness, efficiency, etc) and agents’ behaviour (e.g. how agents can compute a best response, Nash dynamics, etc).\n\n\n\n\n---\n\n\n**Luke**: Thanks, Toby!\n\n\nThe post [Toby Walsh on computational social choice](https://intelligence.org/2014/03/10/toby-walsh/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-10T19:05:14Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "9f8d98f08fc79381ffa5a87575ae7380", "title": "Randall Larsen and Lynne Kidder on USA bio-response", "url": "https://intelligence.org/2014/03/09/randall-larsen-and-lynne-kidder/", "source": "miri", "source_type": "blog", "text": "![Randall Larsen portrait](http://intelligence.org/wp-content/uploads/2014/03/Larsen_w200.jpg)[Colonel Randall Larsen](http://www.randalllarsen.com/), USAF (Ret), is the National Security Advisor at the [UPMC Center for Health Security](http://www.upmchealthsecurity.org/), and a Senior Fellow at the [Homeland Security Policy Institute](http://www.gwumc.edu/hspi/), George Washington University. 
He previously served as the Executive Director of the Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism (2009-2010); the Founding Director and CEO of the [Bipartisan WMD Terrorism Research Center](http://www.wmdcenter.org/) (2010-2012), the Founding Director of the [ANSER Institute for Homeland Security](https://www.anser.org/) (2000-2003), and the chairman of the [Department of Military Strategy and Operations](http://www.carlisle.army.mil/usawc/dmspo/dmspohome/DMSPOHome.htm) at the National War College (1998-2000).\n\n\n\n\n![Lynne Kidder portrait](http://intelligence.org/wp-content/uploads/2014/03/Kidder_w460.jpg)[Lynne Kidder](http://www.wmdcenter.org/?page_id=54) is the former President of the Bipartisan WMD Terrorism Research Center (the WMD Center) and was the principal investigator for the Center’s [Bio-Response Report Card](http://www.wmdcenter.org/?page_id=183).  She is currently a Boulder, CO-based consultant, a research affiliate with the University of Colorado’s Natural Hazards Center, and also serves as the co-chair of the Institute of Medicine’s Forum on Medical and Public Health Preparedness for Catastrophic Events. Her previous positions include Sr. Vice President at [Business Executives for National Security](http://www.bens.org/), Senior Advisor to the [Center for Excellence in Disaster Management and Humanitarian Assistance](http://www.coe-dmha.org/) (US Pacific Command), and eight years as professional staff in the U.S. Senate.\n\n\n\n**Luke Muehlhauser**: Your [Bio-Response Report Card](http://www.wmdcenter.org/?page_id=183) assesses the USA’s bio-response capabilities. Before we explore your findings, could you say a bit about how the report was produced, and what motivated its creation?\n\n\n\n\n---\n\n\n**Randall Larsen**: The 9/11 Commission recommended that a separate commission examine the terrorism threat from weapons of mass destruction (WMD). 
The bipartisan leadership in the Senate and House asked former US Senators Bob Graham (D-FL) and Jim Talent (R-MO) to head the Congressional Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism (WMD Commission). The WMD Commission completed its work in December 2008 and published a report, [*World at Risk*](http://www.pharmathene.com/World_at_Risk_Report.pdf). In March 2009, the bipartisan leadership of Congress asked Senators Graham and Talent to re-establish the Commission to continue its work and provide a report card on progress. This was the first Congressional Commission to be extended for a second year.\n\n\nI became the Executive Director for the WMD Commission’s second year, and in January 2010, the Commission released a [WMD Report Card](http://www.pharmathene.com/WMD_Report_Card.pdf) assessing 37 aspects of the WMD threat. The grades ranged from A’s to F’s. The failing grade that received the most attention, both on Capitol Hill and in the press, was the F grade for “preparedness to respond to a biological attack.”\n\n\nAt the commissioners’ final meeting in December 2009, they encouraged Senators Graham and Talent to continue their work with a focus on the biological threat. To do so, a not-for-profit organization (501c3) was created in March 2010: the Bipartisan WMD Terrorism Research Center (WMD Center). Senators Graham and Talent agreed to serve on the board of advisors, and I became the CEO and recruited Lynne Kidder to serve as President.\n\n\nLaunching the project was a bit of a challenge, since many of the traditional national security organizations that support such work were solely focused on the nuclear threat—a reflection of the Congressional perspective. 
The legislation that created the WMD Commission had not contained the words bioterrorism or biology—ironic since *World at Risk* clearly identified bioterrorism as the most likely WMD threat.\n\n\nWe began work on the Bio-Response Report Card in January 2011 by recruiting a world-class team of senior advisors. They included a former Deputy Administrator of the Food and Drug Administration, a former Special Assistant to the President for Biodefense, the Director of Disaster Response at the American Medical Association, the VP and Director of RAND Health, the Founding President of the Sabin Vaccine Institute, and experts in the fields of public health, emergency medicine, and environmental remediation.\n\n\nThe Board of Advisors helped inform methodology of the project, helped define the categories of bio-response, and then proposed metrics in the form of questions, by which to assess capabilities in each category.\n\n\n\n\n\n---\n\n\n**Luke**: In the Methodology section of your report (p. 20), you explain that the project’s “Board of Advisors… informed project methodology, the categories of bio-response, and then proposed metrics by which to assess capabilities in each category,” and that you “enlisted a separate group of diverse subject-matter experts to perform much of the research and analysis required to answer these questions.” It looks like a challenging project, requiring hundreds or thousands of hours of labor from knowledgeable experts. It also looks as though, lacking an easily-imitable template that would satisfy the goals of the project, you had to come up with much of the methodology yourself. Could you give more detail as to the process that generated the report? A chronological account might be the easiest to remember and share.\n\n\n\n\n---\n\n\n**Lynne Kidder**: Yes, this was a very labor-intensive project. 
The biggest challenge was that no one had ever attempted a national-level, end-to-end assessment of bio-response capabilities ranging from initial detection through environmental cleanup. No federal department or agency has the ability to do such a study without getting considerable pushback for grading the work of many other organizations with related missions. This is why the study needed to be accomplished outside of government—preferably by a not-for-profit.\n\n\nAdditionally, we had to work with our Board of Advisors to develop a “scale of events”—none existed. This fact had caused great confusion within the national security and biosecurity communities. When some people talked of a biological attack, they had in mind an event similar to the anthrax envelopes of October 2001 (22 infected and 5 deaths), while others were referring to a catastrophic event that would threaten the lives of hundreds of thousands. Working with our Board of Advisors, we identified six scenarios (small-scale non-contagious, small-scale contagious, large-scale non-contagious, large-scale contagious, large-scale drug-resistant, and a global crisis). See page 21 of the report card for specific examples and details of each scenario.\n\n\nWe quoted George E. P. Box, “Essentially, all models are wrong, but some are useful.” Our model was not perfect, but we found it shocking that the federal government had not developed such a model for analysis, and we were convinced ours was useful for our study and as a standard for others to use.\n\n\nThe second step for our Board of Advisors was to complete our framework for analysis with a matrix. The vertical axis listed the eight (generally agreed upon) mission areas: Detection & Diagnosis, Attribution, Communication, Medical Countermeasure Availability, Medical Countermeasure Development & Approval Process, Medical Countermeasure Dispensing, Medical Management, and Environmental Cleanup. The horizontal axis listed the six scales of attack. 
(See page 9.)\n\n\nWe then discovered there were no agreed upon standards of performance for the eight mission areas, so we also had to develop them. We first defined each mission area, and then provided standards of performance–what we called, “Fundamental Expectations”. We concluded that the American public and their elected representatives would have these fundamental expectations based on the tens of billions of dollars that have been spent on biodefense since 2001. For instance, in the mission area of Medical Countermeasure Distribution & Dispensing, we listed these fundamental expectations:\n\n\n* Appropriate types of medical countermeasure stockpiles, strategically located, subject to rigorous security and environmental controls, with schedules of resupply, rotation, and shelf-life extension\n* Distribution and dispensing mechanisms that are timely, efficient and deliver medical countermeasures to the point of need\n* Redundant and community-based dispensing strategies developed to address specific population needs (age distribution, at risk populations, logistics, etc.)\n* Supporting communications strategies that are multi-lingual, multi-cultural, and multi-channel\n* A trained and knowledgeable workforce (professional and/or volunteer) with the skill set, willingness, and necessary preparation to participate in mass dispensing activities\n\n\nThe final step for our Board of Advisors was to develop a set of questions for each mission area—questions that would be provided to the eight teams of subject-matter-experts to evaluate the nation’s capability to meet the Fundamental Expectations. In the mission area of Medical Countermeasure Distribution & Dispensing the Board of Advisors provided six questions to the subject-matter-experts. 
Here is an example of one of the questions:\n\n\n\n> Can medical countermeasures in the Strategic National Stockpile be dispensed to affected populations within 48 hours (as specified in HSPD-21)?\n> \n> \n\n\nNote: This is one of the very few areas where there actually was an agreed-upon standard of performance.\n\n\nMission area definitions, scale of attacks, fundamental expectations, and questions were provided to eight teams of subject-matter experts (SMEs). The SMEs provided detailed answers to the staff of the WMD Center. Senators Bob Graham and Jim Talent, Randy and I, plus two consultants (Dave McIntyre, PhD and Sara Rubin, MPH) prepared the report card—using the inputs from the Board of Advisors and SMEs—and assigned grades.\n\n\n\n\n---\n\n\n**Luke**: What process did you use to find and select the SMEs? Roughly how many ended up contributing, in how many different teams?\n\n\n\n\n---\n\n\n**Randall**: I have worked in the biodefense community since 1994, and Lynne Kidder had more than a decade’s experience, plus she was serving (and still is) as the Co-Chair of the Institute of Medicine’s Forum on Medical and Public Health Preparedness for Catastrophic Events. Between us, we knew most of the leading biodefense experts, in both the public and private sectors.\n\n\nOur Board of Advisors were nationally-recognized senior leaders in the field. Their names were listed in our report, but they were only involved in setting up the study. This added credibility to our study, but since they were not involved in the assessment and grading, it protected these senior leaders from the highly-politicized environment in the nation’s capital.\n\n\nOn the other hand, the SMEs we chose to answer the questions, and provide suggested grades, remained anonymous—so that any criticism of grades could only be directed at Graham/Talent/Larsen/Kidder.\n\n\nWe had a “team” of SMEs assigned to each of the eight mission areas, but in fact they weren’t real teams. 
Each individual worked independently—unaware of who else was answering the same questions. This allowed us to identify those areas in which there was general agreement, and more importantly, areas in which there was disagreement. For those mission areas that had widely different assessments (answers to our questions), we followed up with the SMEs and, in some cases, recruited additional SMEs. For one mission area, we ended up with five SMEs.\n\n\nSurprisingly (at least to us), there were few areas of major disagreement. When there was, it was often about the interpretation of specific policies and guidelines provided by the federal government. For instance, was the “48-hour standard” (for getting antibiotics to affected populations) established in the Bush Administration still the standard for the Obama Administration?\n\n\nAll of the Board of Advisors were paid honoraria, and the SMEs were paid consulting fees. We also reached out to other SMEs who agreed to participate pro bono, including government employees, industry representatives, and several from the academic/think tank community.\n\n\n\n\n---\n\n\n**Luke**: Are you able to reveal the total cost of preparing the report, or a ballpark figure? How was funding acquired?\n\n\n\n\n---\n\n\n**Lynne**: Our budget for the 10-month project was $500,000. The Smith Richardson Foundation contributed $200,000 and the Skoll Global Threats Fund contributed $300,000.\n\n\n\n\n---\n\n\n**Luke**: In January 2010, a research associate at the James Martin Center for Nonproliferation Studies (CNS) [published](http://cns.miis.edu/stories/100129_bw_report_card.htm) a somewhat critical review of the earlier WMD report card by Bob Graham and Jim Talent, focusing on some of its claims about biosecurity. Were you aware of that article while preparing your follow-up, the Bio-Response Report Card (published Oct. 2011)? If so, did it influence your work on the Bio-Response Report Card? 
Do you have any general comments on the article?\n\n\n\n\n---\n\n\n**Randall**: We certainly agreed with the final sentence in the article, “Accordingly, their report card should be read and considered with a critical eye.” All reports, including the report written by Mr. Kirk Bansak, a research associate at CNS, should be carefully and thoughtfully read and considered.\n\n\nUnfortunately, Mr. Bansak appears to have misunderstood the WMD Commission’s grading system. As described on page 5 of the WMD Commission’s Report Card, “This report card uses letter grades to assess the U.S. Government’s progress in implementing the Commission’s recommendations.”\n\n\nThese were not grades on the overall status of preparedness and nonproliferation efforts, rather grades on specific responses to the recommendations in *World at Risk*—the tasking given to the Commission by the bipartisan leadership in Congress when the commission was extended for a second year.\n\n\nFor instance: Recommendation 1-2 in *World at Risk*\n\n\n\n> “Develop a national strategy for advancing bioforensic capabilities.”\n> \n> \n\n\nThe report card gave this an **A,** commenting, “An Interagency Bioforensics Strategy has been finalized and approved by the U.S. Office of Science and Technology Policy and exceeds the criteria stated in the Commission’s recommendations.”\n\n\nThis was clearly not an **A** for bioforensics writ large, but specifically a grade for response to a *World at Risk* recommendation—“develop a national strategy.”\n\n\nAn **F** grade was given for Recommendation 1-5, “Enhance the nation’s capabilities for rapid response to prevent biological attacks from inflicting mass casualties.” Mr. Bansak states, “Graham and Talent, who also include logistical strategy and planning in their list of biothreat preparedness needs, also appear to overlook President Obama’s Executive Order assigning the U.S. 
Postal Service responsibility for dispensing medical countermeasures in the event of a biological attack…”\n\n\nThis was hardly overlooked. At the Commission meeting when grades were debated and assigned, top officials from the White House briefed the commissioners on a wide variety of WMD issues, and provided copies of the soon-to-be-released Executive Order regarding the U.S. Postal Service. The commissioners were not impressed with this two-page document.\n\n\nAs the Executive Director of the WMD Commission, I did not have a vote for the assignment of grades. I just counted the votes. In many areas, there was considerable discussion and debate amongst the commissioners—lengthy debates over whether a particular area should receive a **B-** or **C+,** or a **C-** or **D**, and in one case, an **A** or **A+.** However, when we got to the issue of preparedness to respond to a biological attack (Recommendation 1 in *World at Risk*), the discussion was short and the grade unanimous—**F**.\n\n\nRegarding the “too generous grade” for actions on the Biological Weapons Convention, we can only state, once again, Mr. Bansak appears to have misunderstood the grading criteria. Recommendation 2-4 in *World at Risk* stated, “Propose an action plan for achieving universal adherence to the Biological Weapons Convention.” The report card grade was a **B+,** based on the release of the *National Strategy for Countering Biological Threats* by the National Security Council in December 2009. A comment was also offered on what actions the Administration would need to take to raise the grade to an **A**.\n\n\nMr. Bansak comments on the **B+** grade with, “This plaudit, while not unwarranted, seems inappropriate for this section because it overlooks the failing of the BWC…” Once again, we can only say, this was not a grade for the BWC. It was a grade for the government’s response to a specific recommendation in *World at Risk.*\n\n\nIt appears that the primary objection from Mr. 
Bansak and CNS is on the issue of verification—discussed at length in the CNS report. CNS is a strong supporter of improved verification; however, the commissioners clearly agreed with the decisions of both the Bush and Obama Administrations that “bad verification is worse than no verification.”\n\n\nThe CNS assessment of the WMD Commission’s Report Card of January 2010 did not influence the WMD Center’s Report Card of October 2011. The primary purpose of the second report card was to move from a single letter grade of **F** (for the federal government’s response to a recommendation in *World at Risk*), to a much more detailed strategic analysis of this country’s capabilities to respond to a biological attack.\n\n\n\n\n---\n\n\n**Luke**: If a philanthropist granted your [WMD Center](http://www.wmdcenter.org/) $10 million to make *research* progress on the issue of national biodefense, and you were available full-time to lead such research efforts over the next five years, which project(s) might you execute? In other words, what do you see as the most urgent and desirable “next steps” for research into national biodefense?\n\n\n\n\n---\n\n\n**Lynne**: One top priority would be medical countermeasures (MCMs): developing new models where the public and private sectors can work in close cooperation to develop new MCMs for the [Strategic National Stockpile](http://www.cdc.gov/phpr/stockpile/stockpile.htm) (SNS); developing the capability for rapid development of MCMs in response to new bio-threats (both man-made and naturally-occurring); and the greatest MCM challenge, finding the means to rapidly gain FDA approval for use of new MCMs.\n\n\nAnother priority would be to promote best practices in states and local communities for biodefense preparedness. The majority of our work at the WMD Commission and WMD Center has been focused on federal programs. 
Senator Bob Graham, the former two-term governor of Florida, has encouraged looking at these issues from a local perspective—the frontlines of biodefense. There have been numerous private-public partnerships created at the state and local level to strengthen disaster preparedness and response. Unfortunately, the lessons learned and best practices from these pilot projects have not yet been fully exploited for widespread use in cities and counties across the nation.\n\n\n\n\n---\n\n\n**Randall**: One specific challenge that will require effective private-public collaboration is the dispensing of medical countermeasures. The federal government has continually received high marks for maintaining the SNS and having the means to rapidly distribute the MCMs to cities and counties in a crisis. However, the problem is getting these MCMs into the hands (and mouths) of the citizens in these communities. If a bio-attack occurred in most American cities today, the feds would get the needed MCMs to the local airport, but there is serious doubt they would get to the residents in time to make a difference. The current situation is like having a fiber-optic cable with 1 gigabyte capability running down your street, but no link to your house.\n\n\nAnother important prospective project is executive education. Today’s leaders in Congress and the Administration do not fully appreciate the 21st century threat of bioterrorism. The biotech revolution has democratized the potential for creating biological weapons. The capability for sophisticated bio-attacks that could threaten the lives of hundreds of thousands was once limited to superpowers. Much has changed since the late 1960s. Today, these same (nuclear-equivalent) weapons can be produced with equipment purchased on the internet and from pathogens readily available in nature. 
Until senior leaders in both the public and private sectors fully understand this threat, it is unlikely they will give sufficient priority to developing the required defenses.\n\n\n\n\n---\n\n\n**Luke**: Thank you both!\n\n\nThe post [Randall Larsen and Lynne Kidder on USA bio-response](https://intelligence.org/2014/03/09/randall-larsen-and-lynne-kidder/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-09T18:10:17Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "77f2fd5303621c15d9a07eaaa5f99ac8", "title": "John Ridgway on safety-critical systems", "url": "https://intelligence.org/2014/03/08/john-ridgway-on-safety-critical-systems/", "source": "miri", "source_type": "blog", "text": "![John Ridgway portrait](http://intelligence.org/wp-content/uploads/2014/03/John_Ridgway_w680.jpg)John Ridgway studied physics at the University of Newcastle Upon Tyne and Sussex University before embarking upon a career in software engineering. As part of that career he worked for 28 years in the field of Intelligent Transport Systems (ITS), undertaking software quality management and systems safety engineering roles on behalf of his employer, [Serco Transportation Systems](http://www.serco.com). In particular, John provided design assurance for Serco’s development of the Stockholm Ring Road Central Technical System (CTS) for the Swedish National Roads Administration (SNRA), safety analysis and safety case development for Serco’s M42 Active Traffic Management (ATM) Computer Control System for the UK Highways Agency (HA), and safety analysis for the National Traffic Control Centre (NTCC) for the HA.\n\n\nJohn is a regular contributor to the Safety Critical Systems Club (SCSC) Newsletter, in which he encourages fellow practitioners to share his interest in the deeper issues associated with the conceptual framework encapsulated by the terms ‘uncertainty’, ‘chance’ and ‘risk’. 
Although now retired, John recently received the honour of providing the after-banquet speech for the SCSC 2014 Annual Symposium.\n\n\n\n**Luke Muehlhauser**: What is the nature of your expertise and interest in safety engineering?\n\n\n\n\n---\n\n\n**John Ridgway**: I am not an expert and I would not wish to pass myself off as one. I am, instead, a humble practitioner, and a retired one at that. Having been educated as a physicist, I started my career as a software engineer, rising eventually to a senior position within Serco Transportation Systems, UK, in which I was responsible for ensuring the establishment and implementation of processes designed to foster and demonstrate the integrity of computerised systems. The systems concerned (road traffic management systems) were not, initially, considered to be safety-related, and so lack of integrity in the delivered product was held to have little more than a commercial or political significance. However, following a change of safety policy within the procurement departments of the UK Highways Agency, I recognised that a change of culture would be required within my organisation, if it were to continue as an approved supplier.\n\n\nIf there is any legitimacy in my contributing to this forum, it is this: Even before safety had become an issue, I had always felt that the average practitioner’s track record in the management of risk would benefit greatly from taking a closer interest in (what some may deem to be) philosophical issues. Indeed, over the years, I became convinced that many of the factors that have hampered software engineering’s development into a mature engineering discipline (let’s say on a par with civil or mechanical engineering) have at their root, a failure to openly address such issues. I believe the same could also be said with regard to functional safety engineering. 
The heart of the problem lies in the conceptual framework encapsulated by the terms ‘uncertainty’, ‘chance’ and ‘risk’, all of which appear to be treated by practitioners as intuitive when, in fact, none of them are. This is not an academic concern, since failure to properly apprehend the deeper significance of this conceptual framework can, and does, lead practitioners towards errors of judgement. If I were to add to this the accusation that practitioners habitually fail to appreciate the extent to which their rationality is undermined by cognitive biases, then I feel there is more than enough justification for insisting that they pay more attention to what is going on in the world of academia and research organisations, particularly in the fields of cognitive science, decision theory and, indeed, neuroscience. This, at least, became my working precept.\n\n\n\n\n---\n\n\n**Luke**: What do you think was the origin of your concern for the relevance of philosophy and cognitive science to safety engineering? For example, did you study philosophy, or [Kahneman and Tversky](http://www.amazon.com/Judgment-under-Uncertainty-Heuristics-Biases/dp/0521284147/), in university?\n\n\n\n\n---\n\n\n**John**: As a physics student I was offered little opportunity (and to be honest had little ambition) to pursue an education in philosophy. Of course it is true that quantum mechanics made a mockery of my cherished belief in the existence of unobserved, objective reality, but I was very much in the ‘*I know it makes no sense whatsoever but just shut up and carry on calculating*’ school of practice. 
In fact, it wasn’t until I had become a jobbing software engineer, investigating the use of software metrics to predict development timescales, that philosophical issues started to take on an occupational importance.\n\n\nThe practice within my department had been to use single-point estimations for the duration of tasks, then feed these figures and the task dependencies into a Gantt chart and simply use the chart to read off the predicted project end date. Sometimes the results were very impressive: predicted project end dates often proved to be in the right decade! Then one bright spark said, ‘*You’re living in the dark ages. What you need are three-point estimations for the duration of each task (i.e. estimate a most likely duration together with an upper and lower band). You then use Monte Carlo Simulation to create a project outturn curve; a curve which indicates a statistical spread of possible project durations. Everyone does this nowadays. And it just so happens I have an expensive risk management tool to do all the nasty calculations for you*’. Imagine his look of disgust when I told him that I still thought the old way was better!\n\n\nWas he talking to a smug idiot? I like to think not. The basis of my objection was this: As a physicist, I was well aware of the benefits of using Monte Carlo Simulation in areas such as solid state and nuclear physics. Here it is used to determine the macroscopic behaviour of a physical system, where probability distribution curves can be used to model stochastic variability of the micro behaviour of the system under study. Now, however, I was being invited to use Monte Carlo methods to draw conclusions based upon the averaging of various levels of posited (and decidedly non-stochastic) ignorance regarding the future. In such circumstances, no one could provide me with a convincing argument for deciding the most appropriate form for the probability distribution curve upon which the simulation would be based. 
In fact, I was told this choice didn’t matter, though quite clearly it did. If, as seemed likely, a realistic distribution curve would have a fat tail, the results would be hugely influenced by the choice of curve. Furthermore, the extra two estimates (i.e. the upper and lower bands for task duration) were supposed to represent a level of uncertainty, but the uncertainty behind their selection was at least of the same order of magnitude as the uncertainty these bounds were supposed to represent. In other words, one could not be at all certain how uncertain one was! It occurred to me that no real information was being added to the risk model by using these three-point estimates, and so no amount of Monte Carlo Simulation would help matters.\n\n\nThis led me to philosophical musing about the nature of uncertainty and the subtleties of its relationship with risk. These musings had a potentially practical value because I thought that a lot of people were spending a lot of time and money using inappropriate techniques to give themselves false confidence in the reliability of their predictions. Unfortunately, none of my colleagues seemed to share my concern and all I could do to try and persuade them was to wave my hands whilst speaking vaguely about second order uncertainty, model entropy and the important distinction between variability and incertitude. So I decided there was a gap in my education that needed filling.\n\n\nAs it happened, my investigations soon led me to the work of [Professor Norman Fenton](http://www.eecs.qmul.ac.uk/%7Enorman) of Queen Mary University of London, and I familiarised myself with concepts such as subjective probability and epistemic versus aleatoric uncertainty. 
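John's objection about three-point estimates can be made concrete. The sketch below (all task figures invented, plain Python) runs the same Monte Carlo outturn simulation twice over identical three-point estimates, once assuming a triangular distribution and once assuming a crude fat-tailed variant; the tail percentiles diverge sharply, which is exactly why the choice of distribution curve is not a matter of indifference.

```python
import random

# Hypothetical three-point estimates (optimistic, most likely, pessimistic)
# durations in days for five sequential project tasks -- invented numbers.
TASKS = [(4, 6, 12), (8, 10, 20), (3, 5, 9), (10, 14, 30), (5, 7, 15)]

def outturn(sample_task, trials=20_000, seed=42):
    """Monte Carlo outturn: sum sampled task durations per trial and
    return the 50th and 90th percentile project completion times."""
    rng = random.Random(seed)
    totals = sorted(sum(sample_task(rng, t) for t in TASKS)
                    for _ in range(trials))
    return totals[trials // 2], totals[int(trials * 0.9)]

def triangular(rng, t):
    low, mode, high = t
    return rng.triangular(low, high, mode)

def fat_tailed(rng, t):
    # Crude fat tail: 5% of the time the task blows straight through
    # its "pessimistic" bound and overruns by a factor of 2 to 5.
    low, mode, high = t
    if rng.random() < 0.05:
        return high * rng.uniform(2, 5)
    return rng.triangular(low, high, mode)

p50_tri, p90_tri = outturn(triangular)
p50_fat, p90_fat = outturn(fat_tailed)
print(f"triangular: P50={p50_tri:.0f}d P90={p90_tri:.0f}d")
print(f"fat-tailed: P50={p50_fat:.0f}d P90={p90_fat:.0f}d")
```

The high percentile, the figure a schedule commitment actually rests on, is dominated by the assumed tail behaviour rather than by the three estimates themselves.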
Furthermore, once subjectivity had been placed centre stage, the relevance of cognitive sciences loomed large, and although I can’t claim to have studied Tversky and Kahneman, I became familiar with ideas associated with decision theory that owe their existence to their work.\n\n\nSuddenly, my career seemed so much more interesting. And once I moved on to safety engineering, the same issues cropped up again in the form of over-engineered fault tree diagrams replete with probability values determined by ‘expert’ opinion. Now it seemed all the more important that practitioners should think more deeply about the philosophical and psychological basis for their confident proclamations on risk.\n\n\n\n\n---\n\n\n**Luke**: In many of your articles for the Safety-Critical Systems Club (SCSC) newsletter, you briefly discuss issues in philosophy and cognitive science and their relevance to safety-critical systems (e.g. [1](http://commonsenseatheism.com/wp-content/uploads/2014/02/Ridgway-Whats-Luck-Got-To-Do-With-It.pdf), [2](http://commonsenseatheism.com/wp-content/uploads/2014/02/Ridgway-Risk-%C2%AD-A-Coffee-Break-Philosophers-Viewpoint.pdf), [3](http://commonsenseatheism.com/wp-content/uploads/2014/02/Ridgway-Reflections-on-risk-and-the-importance-of-understanding-the-nature-of-uncertainty.pdf), [4](http://commonsenseatheism.com/wp-content/uploads/2014/02/Ridgway-Having-Confidence-in-Your-Confidence.pdf), [5](http://commonsenseatheism.com/wp-content/uploads/2014/02/Ridgway-Armageddon-and-Other-Failure-Modes.pdf), [6](http://commonsenseatheism.com/wp-content/uploads/2014/02/Ridgway-The-Philosophers-Chances.pdf)). During your time working on safety engineering projects, how interested did your colleagues seem to be in such discussions? Were many of them deeply familiar with the issues already? Does philosophy of probability and risk, and cognitive science of (e.g.) 
heuristics and biases, seem to be part of the standard training for those working in safety engineering — at least, among the people you encountered?\n\n\n\n\n---\n\n\n**John**: Perhaps my experience was atypical, but the sad fact is that I found it extremely difficult to persuade any of my colleagues to share an interest in such matters, and I found this doubly frustrating. Firstly, I thought it to be a missed opportunity on my colleagues’ part, as I felt certain that application of the ideas would be to their professional advantage. Their diffidence was to a certain extent understandable, however, since there was nothing in the occupational training provided for them that hinted at the importance of philosophy or psychology. However, what really frustrated me was the fact that no one appeared to be at all excited by the prospect of introducing these subjects into the workplace. How could that be? How could my colleagues be anything other than utterly fascinated? In fact, their lack of interest seemed to me to represent nothing less than a wanton refusal to enjoy their job!\n\n\nThe key to the problem lay, of course, with the training provided by my employer, and it didn’t help that the internal department that provided such training gloried in the title of ‘The Best Practice Centre’. Clearly, anything that I might say that differed from the company-endorsed view was, by definition, less than best! And I soon found that berating the centre’s risk management course for failing to explore the concept of uncertainty was, if anything, counter-productive. Upon reflection, I think that some of these frustrations led me to seek an alternative forum in which I could express my thinking. 
Publishing articles for the Safety Critical Systems Club newsletter provided such an outlet.\n\n\n\n\n---\n\n\n**Luke**: You say that you “felt certain that application of the ideas would be to their professional advantage.” Can you give me some reasons and/or examples for why you felt certain of that?\n\n\n\n\n---\n\n\n**John**: I think that my concerns were the product of working within a profession that appears to see the world rather too much in frequentist terms, in which the assumption of aleatoric uncertainty would be valid. In reality, it is increasingly the case that the risks a systems safety engineer has to analyse are predicated predominantly upon epistemic uncertainty. I cite, in particular, safety risks associated with complex software-based systems, adaptive systems and so-called Systems of Systems (SoS), or indeed any system that interacts with its environment in a novel or inherently unpredictable manner. Whilst it is true that analysing stochastic failure of physical components may play a significant role in predicting system failure, the probabilistic techniques involved in such analysis simply cannot address epistemic concerns, i.e. where the parameters of any posited probability distribution curve may be a matter for pure speculation. (I am aware that Monte Carlo simulation is sometimes used to probabilistically model the parametric uncertainty in probabilistic models, but this strikes me as an act of desperation reminiscent of the invention of epicycles upon epicycles to shore up the Ptolemaic cosmology).\n\n\nThere are a number of suitable approaches available to the safety analyst, which seek to accommodate epistemic uncertainty (Bayesian methods, Possibility Theory and Dempster-Shafer, to name but three). 
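As a minimal illustration of one of the methods named here, the sketch below (mass values invented) combines two assessors' belief assignments over the frame {fail, ok} using Dempster's rule of combination; unlike a single probability, mass left on the whole frame explicitly represents undistributed ignorance.

```python
from itertools import product

# Invented masses: two assessors assign Dempster-Shafer belief mass over
# the frame {fail, ok}; mass on the whole frame is uncommitted ignorance.
FRAME = frozenset({"fail", "ok"})
m1 = {frozenset({"fail"}): 0.5, FRAME: 0.5}
m2 = {frozenset({"fail"}): 0.2, frozenset({"ok"}): 0.3, FRAME: 0.5}

def combine(ma, mb):
    """Dempster's rule of combination: multiply masses of intersecting
    focal sets and renormalise away the conflicting (empty) mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(ma.items(), mb.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

m12 = combine(m1, m2)
belief_fail = m12[frozenset({"fail"})]   # mass committed to "fail" alone
plaus_fail = sum(w for s, w in m12.items() if "fail" in s)
print(f"belief(fail)={belief_fail:.3f}, plausibility(fail)={plaus_fail:.3f}")
```

The gap between belief and plausibility is the residual ignorance, precisely the quantity a single point probability cannot express.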
However, whilst the practitioner is not even aware that there is an issue, and continues to assume the objectivity of all probability, there is little hope that these methods will attract the attention they deserve.\n\n\nThen, of course, we have to consider the pernicious effect that cognitive bias has upon the analyst’s assessment of likelihood. It is in the nature of such biases that the individual is unaware of their impact. Surely, therefore, even the most basic training in this area would be of considerable benefit to the practitioner. On a similar theme, I have become concerned that the average safety analyst is insufficiently mindful of the distinction to be made between risk aversion and ambiguity aversion. This may lead to a failure to adequately understand the rationality that lies behind a particular decision, but it may also explain why my colleagues didn’t appear to appreciate the importance of undertaking uncertainty management alongside risk management.\n\n\nFinally, when one considers the interconnectivity of risks, and the complications introduced by multi-stakeholders, it becomes very difficult to think about risk management strategies without having to address ethical issues associated with risk transfer, optimisation and sufficing. But perhaps that is another story.\n\n\n\n\n---\n\n\n**Luke**: Yes, can you say more about the “ethical issues associated with risk transfer, optimization, and sufficing”?\n\n\n\n\n---\n\n\n**John**: In UK health and safety legislation there is an obligation to reduce existing risk levels ‘So Far As Is Reasonably Practicable’ (SFAIRP). This leads to the idea of residual risks being ‘As Low As Reasonably Practicable’ (ALARP). The ALARP concept assumes that an upper limit can be defined, above which the risk is considered ‘intolerable’. In addition, there is a lower limit, below which the risk is considered to be ‘broadly acceptable’. 
A risk may be allowed to lie between these two limits as long as it can be demonstrated that further reduction would require disproportionate cost and effort. Apart from the vagueness of the terminology used here, the main problem with this view is that it says nothing about the possibility that the management of one risk may bear upon the scale of another. Indeed, one can envisage a network of inter-connected risks in which this knock-on effect will propagate, resulting in the net level of risk increasing (keep in mind that the propagation may include both positive and negative feedback loops). For this reason, there exists the Globally At Least Equivalent (GALE) principle, which holds that upon modifying a system, one must assess the overall, resulting risk level posed by the system rather than focussing purely upon the risk that the modification was intended to address. The idea, of course, is that the overall level should never increase.\n\n\nSo far this has all been very basic risk management theory and, on the face of it, the ALARP and GALE principles appear to complement each other. But do they always? Well, in the simple case where all risks are owned and managed by a single authority, this may be the case. But what if the various risks under consideration have differing owners and stakeholders? In such circumstances, parties who own risks and seek to reduce them SFAIRP may find themselves in conflict with each other, with the various stakeholders and with any body that may exist to ensure that the global risk level is not increased.\n\n\nPerhaps we are now in the province of game theory rather than decision theory. If so, it seems reasonable to insist that the game be played in accordance with ethical constraints, but has anyone declared what these might be? Some seem obvious; for example, never transfer risk to another party without their knowledge and consent. Others may not be so straightforward. 
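A toy calculation (all figures invented) shows how a local, ALARP-style mitigation can violate GALE: halving one risk's likelihood while a knock-on effect raises a coupled risk can increase the net level.

```python
# Invented figures: each risk is (probability per year, severity).
RISKS = {
    "A": (0.10, 100.0),   # the risk the mitigation targets
    "B": (0.02, 500.0),   # a coupled risk downstream of A
}

def total_risk(risks):
    return sum(p * sev for p, sev in risks.values())

def apply_mitigation(risks):
    """Halve A's likelihood, but the change routes extra load onto the
    coupled risk B, tripling B's likelihood (the knock-on effect)."""
    out = dict(risks)
    p_a, s_a = out["A"]
    p_b, s_b = out["B"]
    out["A"] = (p_a * 0.5, s_a)
    out["B"] = (p_b * 3.0, s_b)
    return out

before = total_risk(RISKS)                   # 0.10*100 + 0.02*500 = 20
after = total_risk(apply_mitigation(RISKS))  # 0.05*100 + 0.06*500 = 35
gale_ok = after <= before                    # GALE: overall level must not rise
print(f"before={before:.0f}, after={after:.0f}, GALE satisfied: {gale_ok}")
```

The targeted risk has been reduced SFAIRP, yet the overall level rises, which is exactly the situation a GALE check is meant to catch.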
I think we are all familiar with the old chestnut of the signalman who can prevent a runaway train from ploughing into a group of schoolchildren by changing the points, but only by causing the certain death of the train driver. Does the signalman have the moral authority to commit murder? Would it be murder whether or not he or she switches the points? If we find this difficult to answer, one can easily envisage similar difficulties when deciding the ethical framework associated with the management of risk collectives.\n\n\n\n\n---\n\n\n**Luke**: Which changes do you most desire for the safety-critical systems industry?\n\n\n\n\n---\n\n\n**John**: I think that there is a lot to be said for making membership of a suitably constructed professional body a legal imperative for undertaking key systems safety engineering roles. Membership would require demonstration of a specified level of competence, adherence to formulated codes of conduct and the adoption of appropriate ideologies. Given my responses to earlier questions, your readers will probably be unsurprised to hear that I hope that this would provide the opportunity to promote a greater understanding of the conceptual framework lying behind the terms ‘risk’ and ‘uncertainty’. In particular, I would like to see a professional promotion of the philosophical, ethical and cognitive dimensions of system safety engineering.\n\n\nI appreciate that the various engineering disciplines are already well served by a number of professional societies and that, for example, an Independent Safety Assessor (ISA) in the UK would be expected to be a chartered engineer and a recognised expert in the field (whatever that means). However, the epistemic uncertainties surrounding the development and application of complex computer-based systems introduce issues that perhaps the commonly encountered engineering psyche may not be fully equipped to appreciate. 
It may be that the safety engineering professional needs to think more like a lawyer. Consequently, the professional body I am looking for could be modelled upon the legal profession, as much as upon the existing engineering professions. I know for some people ‘lawyer’ is a dirty word, but so is ‘subjectivity’ in some engineers’ minds. Being pragmatic, and in order to stay in the game, we may all have to become sophists.\n\n\n\n\n---\n\n\n**Luke**: Thanks, John!\n\n\nThe post [John Ridgway on safety-critical systems](https://intelligence.org/2014/03/08/john-ridgway-on-safety-critical-systems/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-08T10:15:09Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "21c80d79c9a67a7403ee29e2faa715cf", "title": "David Cook on the VV&A process", "url": "https://intelligence.org/2014/03/07/david-cook/", "source": "miri", "source_type": "blog", "text": "![David Cook portrait](http://intelligence.org/wp-content/uploads/2014/03/David_A_Cook_w678.jpg)Dr. David A. Cook is Associate Professor of Computer Science at [Stephen F. Austin State University](http://www.sfasu.edu/), where he teaches.\n\n\n> Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. 
There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.\n> \n> \n\n\nVV&A really did not become a serious issue until models and simulations were computerized, and computers did not become available until the late 1940s. Starting in the late 1940s, both digital and analog computers became available. However, very few (if any) engineers were trained on how to use this newly-developed computing power. There are various stories of how modeling and simulation became a powerful force in the DOD, but the story I have personal knowledge of is the story of John McLeod – an engineer working at the Naval Air Missile Test Center at Point Mugu on the California coast north of Los Angeles. John was an innovator who took delivery of a new analog computer sometime in 1952 and worked on analog computers and simulations through the early 1950s. John was not the only engineer in the aerospace community in Southern California facing the same problems, and a few of them decided to get together as an informal user group to exchange ideas and experiences. To make a long story short, John helped found what became the Society for Computer Simulation (SCS). This organization, over the years, has had members who were leaders and innovators in the field of modeling, simulation, and VV&A. [Note that I had the privilege to be the President of SCS from 2011 – 2012, so I am a bit biased]. The SCS has, to this day, the McLeod award to commemorate the advances John McLeod made in the M&S arena. It is only awarded to those who have made significant contributions to the profession.\n\n\nThe SCS published newsletters. M&S conferences were organized. Leaders in the field were able to meet, publish, and share their expertise. All of which helped integrate M&S into more and more domains. 
As a result of leaders in the field being able to share M&S information, and also as a result of the huge increase in capabilities and availability of computers to run M&S, the need for VV&A also increased. Over the years, modeling and simulation became more and more important in many domains within the DOD. It helped develop fighters (in fact, aircraft of all types). It helped train our astronauts to land on the moon. It modeled the space shuttle. Complex models and simulations helped us model ballistic missile defense, fight war games with minimal expense (and no lives lost!), and design complex weapon systems. In fact, it’s hard to imagine any technologically sophisticated domain that does not use M&S to save money, save time, and ensure safety. But – these increasingly complex models needed verification and validation, and frequently accreditation.\n\n\nSo – the proliferation in the use of M&S led to an increased need for VV&A. M&S became so complex that VV&A could not be accomplished without “domain experts” – usually referred to as “Subject Matter Experts” (SMEs) – to help. Increased complexity of the M&S led to increased complexity of the VV&A. Various elements within the DOD were performing VV&A on their own, with little official coordination. To leverage the experience of various DOD components and multiple domains, the DOD saw the need for a single point of coordination. As a result, in the 1990s, the DOD formed the Defense Modeling and Simulation Office (DMSO). The DMSO served as a single point of coordination for all M&S (and VV&A) efforts within the DOD. One of the best DMSO contributions was the VV&A Recommended Practices Guide (VV&A RPG) – first published in 1996. The guide has been updated several times over the years, reflecting the increased importance of VV&A in the DOD. In 2008 DMSO was renamed the Modeling and Simulation Coordination Office. 
The MSCO web site (and the latest version of the VV&A Recommended Practices Guide) can be found at [msco.mil](http://www.msco.mil/).\n\n\nFor those of you interested in M&S and VV&A, I cannot recommend the MSCO resource enough. It costs nothing (not even email registration), and contains a vast amount of information about M&S and VV&A. The [RPG Key Concepts document](http://www.msco.mil/documents/Key01_Key_Concepts.pdf) alone contains 34 pages of critical “background” information that you should read before going any further in VV&A.\n\n\n\n\n---\n\n\n**Luke**: In [Cook (2006)](http://www.ieee-stc.org/proceedings/2006/pdfs/DC1252.pdf) you write that one of the reasons V&V is so difficult comes from “having to ‘backtrack’ and fill in blanks long after development.” What does this mean? Can you give an example?\n\n\n\n\n---\n\n\n**David**: Let’s imagine you are designing a new fighter aircraft. It is still on the drawing board, and only plans exist.\n\n\nRather than spend money building an actual prototype first, you develop mathematical models of the jet to help verify the performance characteristics. You might actually build a very small model of the body – maybe 1/10th size – for wind tunnel experiments.\n\n\nYou also build computer-based models and execute them to estimate flight characteristics. The wind-tunnel experiments (even though only on a 1/10th size model) will give data that might make you modify or change the computer-based model. This feedback loop consists of “build model – run simulation – examine data – adjust model” and repeat.\n\n\nEventually, you build a working prototype of the jet. Almost certainly, the actual flight characteristics will not exactly match the computer-based model. The prototype is “real world” – so you have to readjust the computer-based model. 
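The "build model – run simulation – examine data – adjust model" loop can be sketched in a few lines. Here (with invented wind-tunnel numbers and a deliberately simple one-parameter drag model) the adjustment step is a closed-form least-squares re-fit of the model coefficient to the measured data.

```python
# Invented wind-tunnel data: (airspeed, measured drag). The "model" is
# drag = c * v**2; re-fitting c to the data is the "adjust model" step.
DATA = [(10.0, 6.1), (20.0, 23.8), (30.0, 54.5), (40.0, 95.2)]

def predict(c, v):
    return c * v ** 2

def refit(data):
    """Closed-form least squares for drag = c*v^2: c = sum(d*v^2)/sum(v^4)."""
    num = sum(d * v ** 2 for v, d in data)
    den = sum(v ** 4 for v, _ in data)
    return num / den

def sse(c, data):
    """Sum of squared errors between model predictions and measurements."""
    return sum((predict(c, v) - d) ** 2 for v, d in data)

c = 0.10                      # initial coefficient from first principles
error_before = sse(c, DATA)   # examine data: large mismatch
c = refit(DATA)               # adjust model: re-fit to the measurements
error_after = sse(c, DATA)
print(f"re-fitted c={c:.4f}, squared error {error_before:.1f} -> {error_after:.2f}")
```

Each new batch of real-world data would repeat the same examine-and-refit cycle, which is why, as the interview goes on to note, the model is never finished.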
The “real-world” prototype is just a prototype – and probably not used for high-speed fighting and turns – but the basic data gathered from the flying of the prototype leads to changes in the computer-based model, which will now be used to predict more about high-speed maneuvering.\n\n\nBack when I worked on the Airborne Laser – we had models that predicted the laser performance before the laser was actually built or fired! The models were based on mathematical principles, on data from other lasers, and on simpler, earlier models that were being improved on. Once a working Airborne Laser was built and fired – we had “real world” data. It was no surprise to find out that the actual characteristics of the laser beam were slightly different than those predicted by the models. For one thing, the models were simplistic – it was impossible to take everything into account. The result was that we took the real-world data, and modified the computer models to permit them to better predict future performance.\n\n\nThe bottom line is that the model is *never* finished. Every time you have additional data from the “real world” that is not an exact match to what the model predicts, the model should be examined, and the model adjusted as necessary.\n\n\nThere are two terms I like to use for models when it comes to VV&A – “anchoring” and “benchmarking”. If I can get another independently-developed model to predict the same events as my model, I have a source of validation. I refer to this as benchmarking. Subject matter experts, other simulations, similar events that lend credence to your model – all improve the validity, and provide benchmarking. Anchoring, on the other hand, is when I tie my model directly to real-world data.\n\n\nAs long as the model is being used to predict behavior – it needs to continually be tied or anchored to real-world performance, if possible. If no real-world data is available, then similar models, expert opinions, etc. 
can also be used to increase the validity.\n\n\nJust a final note. Models can become so engrained in our thoughts that they become “real”. For example, I remember when the recent Star Trek movie (the 2009 version) came out. A friend of mine said, after viewing the movie, that he had trouble with the bridge of the USS Enterprise. It did not “look real”. I asked what “real” was – and my friend replied “You know, like the REAL USS Enterprise, the NCC 1701” (referring to the original series). Think about it – all are notional and imaginary (sorry, fellow Trekkers) – yet he viewed one as “real” and the other as inaccurate. Models – when no real-world artifact exists – have the potential to become “real” in your mind. It’s worth remembering that a model is NOT real, but only an artifact built to resemble or predict what might (or might not) eventually become real one day.\n\n\n\n\n---\n\n\n**Luke**: Do you have a sense of how common [formal verification](http://en.wikipedia.org/wiki/Formal_verification) is for software used in DoD applications? Is formal verification of one kind or another *required* for certain kinds of software projects? (Presumably, DoD also uses much software that is not amenable to formal methods.)\n\n\n\n\n---\n\n\n**David**: I have not worked on any project that uses formal V&V methods.\n\n\nI used to teach the basics of formal methods (using ‘Z’ – pronounced Zed) – but it is very time consuming, and not really fit for a lot of projects.\n\n\nFormal notation shows the correctness of the algorithm from a mathematical standpoint. For modeling and simulation, however, it does not necessarily help you with accreditation – because formal methods check the correctness of the code, and not necessarily the correlation of the code with real-world data.\n\n\nI have heard that certain extremely critical applications (such as reactor code, and code for the Martian Lander) use formal methods to make sure that the code is correct. 
However, formal methods take a lot of training and education to use correctly, and they also consume a lot of time in actual use. Formal methods seldom (never?) speed up the process – they are strictly used to validate the code.\n\n\nFrom my experience, I have not worked on any project that made any significant use of formal methods – and in fact, I do not have any colleagues that have used formal methods, either.\n\n\n\n\n---\n\n\n**Luke**: Thanks, David!\n\n\nThe post [David Cook on the VV&A process](https://intelligence.org/2014/03/07/david-cook/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-07T23:11:30Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "1a77cece78994fd7cc1c06d86458a1fb", "title": "Robert Constable on correct-by-construction programming", "url": "https://intelligence.org/2014/03/02/bob-constable/", "source": "miri", "source_type": "blog", "text": "![](http://intelligence.org/wp-content/uploads/2014/03/Constable.jpg)[Robert L. Constable](http://www.cs.cornell.edu/home/rc/) heads the Nuprl research group in automated reasoning and software verification, and joined the Cornell faculty in 1968. He has supervised over forty PhD students in computer science, including the very first graduate of the CS department. He is known for his work connecting programs and mathematical proofs, which has led to new ways of automating the production of reliable software. He has written three books on this topic as well as numerous research articles. Professor Constable is a graduate of Princeton University where he worked with Alonzo Church, one of the pioneers of computer science.\n\n\n\n**Luke Muehlhauser**: In some of your work, e.g. [Bickford & Constable (2008)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Bickford-Constable-Formal-Foundations-of-Computer-Security.pdf), you discuss “correct-by-construction” and “secure-by-construction” methods for designing programs. 
Could you explain what such methods are, and why they are used?\n\n\n\n\n\n---\n\n\n**Robert Constable**:\n\n\n*Short Summary*\n\n\nThe history of programming languages shows that types are valuable because compilers can check programs for type correctness and inform programmers whether or not their code exactly matches its type specification. Types describe requirements on programming tasks, and very rich type systems of the kind used in mathematics and increasingly in computing describe certain technical tasks completely as we’ll see below. *The more expressive the type system, the more specifications help programmers understand their goals and the programs that achieve them.* System designers can specify many requirements and computing tasks precisely with types and organize them into modules (or objects) that become a blueprint for analyzing and building a computing system.\n\n\nWhen types are rich enough to completely specify a task, the methodology we just outlined becomes an example of *correct by construction* programming. This approach has been investigated for mathematical problems for decades and for programming problems since the late 60s and early 70s. For example, if we want to implement a correct by construction factorization based on the *fundamental theorem of arithmetic* (FTA), that every natural number greater than 1 can be factored uniquely into a product of primes, we can first express this task as a programming problem using types. The problem is then solved by coding a computable function *factor with the appropriate type.* For programmers the type is probably just as clear as the FTA theorem. It is given below. It requires taking a natural number n greater than 1 as input and returning a list of pairs of numbers (p,e) where p is a prime and e is its exponent. 
We write the list as [(p1,e1), (p2,e2), …, (pn,en)] and require that n = p1^e1 x … x pn^en, where p^e means prime p raised to the power e, the ei are all natural numbers, and x is multiplication (and is overloaded as the Cartesian product of two types). In addition the full FTA theorem requires that the factorization is unique. We discuss uniqueness later. For the factorization part, consider the example factor(24,750) = [(2,1),(3,2),(5,3),(11,1)]. This is correct since 24,750 = 2 x 9 x 125 x 11, where 5^3 is 125. *In a rich type system the factorization piece of the fundamental theorem of arithmetic can be described completely as a programming task* *and the resulting program,* factor*, can be seen as a constructive proof accomplishing that factorization. A type checker verifies that the program has the correct type. The program* factor *is* correct by construction *because it is known that it has the type of the problem it was coded to solve.*\n\n\n \n\n\n*Dependent Types*\n\n\nThis paragraph is a bit technical and can be skipped – or used as a guide for coding factorization in a functional language with dependent types. Here is roughly how the above *factorization task* is described in the type systems of the programming languages of the Agda, Coq and Nuprl proof assistants – a bit more on them later. In English the type specification requires *Nat*, the type of the natural numbers, 0,1,2… and says this: given a natural number *n* larger than 1, compute a list of pairs of numbers, (p,e), where p is a prime number, and e is a natural number such that the input value n can be written as a product of all the prime numbers p on the list, each raised to the power e (exponent e) that is paired with it in the list *lst*. Symbolically this can be written as follows, where *N++* is the type *(n:Nat where n>1)*. 
We also define the type *Prime* as *(n:Nat where prime(n))* for a Boolean valued expression *prime(n)* which we code as a function from *Nat to Booleans.* Here is *the FTA factorization type*.\n\n\n*(n:N++) ->( lst:List[Prime x Nat] where n = prod(lst) ).*\n\n\nNote that the symbol *x* in *(Prime x Nat*) denotes the type constructor for *ordered pairs* of numbers. Its elements are ordered pairs *(p,e)* of the right types, *p* a *Prime* and *e* a *Nat*. Of course we need to define the product function, *prod(lst),* over a list *lst* of pairs in the right way. This is simple using functional programming in languages such as Agda, Coq, and Nuprl, in which we have the function type used above. We can assign this kind of programming problem to first year undergraduates, and they will solve it quite easily, especially if we give them a fast *prime(n)* procedure.\n\n\nAnother very nice way to write a compact solution is to use *map* and *fold\\_left* functions that are usually supplied in the library of list processing functions. That solution uses the well known *map reduce paradigm* taught in functional programming courses for years and widely used in industry. Solving with map and reduce is a good exercise since the reduce function *fold\\_right* is mathematical induction on lists defined as a specific very general recursive program.\n\n\n \n\n\n*Historical Perspective*\n\n\nThe development of mainstream programming languages from Algol 60 to Java and OCaml shows that type systems have become progressively more expressive. For example, now it is standard that functions can be treated as objects, e.g. passed as inputs, produced as outputs, and created as explicit values as we did in the FTA example above. 
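Stripped of the dependent types, the factorization task from the FTA example can be sketched in plain Python. This is only an untyped illustration of the task as specified (the developments discussed here actually live in Agda, Coq, and Nuprl); it includes a `prod` check in the map/fold style mentioned above:

```python
from functools import reduce

def factor(n):
    """Return the prime factorization of n > 1 as a list of (prime, exponent) pairs."""
    assert n > 1, "the FTA factorization type requires n > 1"
    pairs, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:   # strip out every power of p
                n //= p
                e += 1
            pairs.append((p, e))
        p += 1
    if n > 1:                   # leftover factor is itself prime
        pairs.append((n, 1))
    return pairs

def prod(lst):
    """Map each (p, e) to p**e, then fold with multiplication (map/reduce style)."""
    return reduce(lambda acc, v: acc * v, map(lambda pe: pe[0] ** pe[1], lst), 1)

# The example from the text: 24,750 = 2 x 3^2 x 5^3 x 11.
assert factor(24750) == [(2, 1), (3, 2), (5, 3), (11, 1)]
assert prod(factor(24750)) == 24750
```

The point of the dependent type, of course, is that a proof assistant checks the `prod(factor(n)) == n` property statically for all inputs, whereas the Python assertions only test it on one example at run time.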
Type checking and type inference algorithms have kept up with this trend; although OCaml can’t yet check the type we created for FTA, Agda, Coq, Nuprl, and other programming languages supported by proof assistants can (they all use some form of *automated reasoning* to help). Static type checking of programs defined using very rich types is a demonstrably effective technology for precisely specifying tasks, documenting code as it’s being written, rendering code more readable by others (including yourself when you revisit it several months later), finding errors, and for safely making changes. With higher-order types such as function types, more programming tasks can be precisely defined, and code can be generated that is completely *correct by construction*.\n\n\nBy 1984 researchers created an extreme point in this type checking technology in which the type systems are so rich that they can describe essentially any mathematical programming task in complete “formal” detail. Types also express assumptions on the data and the computing environment on which the specifications depend. These extremely rich type systems can essentially express any problem in mathematics. Indeed, Coq has been used to formally prove the Four Color Theorem as well as other important theorems in mathematics. The type checking by proof assistants often requires human intervention as well as sophisticated reasoning algorithms. The type systems we have mentioned are strong enough to also describe the uniqueness of the FTA factorization. They can do this by converting any factorization into the standard one produced by the function *factor* defined earlier. That function might be called *normalize*, and it does the job of converting any factorization into the standard one in which the primes come in order and are raised to the highest power possible.\n\n\nAmong the best known modern proof assistants are Agda, Coq, HOL, Isabelle, MetaPRL, Minlog, Nuprl, and PVS. 
Nuprl was the first of these, built in 1984, and the Coq assistant is written in OCaml and supported by INRIA through the French government. Coq is widely taught and used for programming. These proof assistants help programmers check extremely rich specifications using automated reasoning methods. The types are rich enough to guarantee that programs having the type are correct for the task specified. This is a dominant form of correct by construction programming and sometimes includes the technology of extracting a program from a proof that a specification is achievable; this is called using *proofs as programs*.\n\n\n \n\n\n*Research Goal*\n\n\nAn overarching research goal is to design type systems and programming assistants to express any computational problem and provide mechanisms for type checking, perhaps with human aid and using automated reasoning tools, no matter how expressive the type system becomes.\n\n\nResearchers in programming languages, formal methods, and logic are working on this goal worldwide. There are many references in the research literature citing progress toward this goal. The recent book by Adam Chlipala *[Certified Programming with Dependent Types](http://www.amazon.com/Certified-Programming-Dependent-Types-Introduction/dp/0262026651/)*, from MIT Press, 2013, provides an up to date view of the subject focused on the widely used Coq proof assistant. The 1991 book by Simon Thompson, *[Type Theory and Functional Programming](http://www.amazon.com/Functional-Programming-International-Computer-Science/dp/0201416670/)*, from Addison-Wesley, is one of the early comprehensive text books on the subject as it was just gaining momentum. Nuprl is the oldest proof assistant of this kind that is still active since 1984. It was described in the 1986 book [*Implementing Mathematics with The Nuprl Proof Development System*](http://www.nuprl.org/book/). 
The Nuprl literature and an overview of the area circa 2006 can be found in the technical article “[Innovations in computational type theory using Nuprl](http://www.nuprl.org/documents/Allen/05-jal-final.pdf)” from the *Journal of Applied Logic*, 4, 428-469, 2006. Also the author’s 2010 article on the “[The Triumph of Types](http://www.cs.uoregon.edu/research/summerschool/summer11/lectures/Triumph-of-Types-Extended.pdf)” at the 2010 celebration for *Principia Mathematica* in Cambridge, England recounts the 100 year old story of type theory and connects that to current research on type systems for correct by construction programming. It is available at [www.nuprl.org](http://www.nuprl.org).\n\n\nWhen a type system can specify security properties, the checked programs are *secure by construction*. In both correct and secure by construction programming, the specifications include assumptions about the computing environment, say about the network topology. If these assumptions fail to hold, say because the network topology changed in an unexpected way, then the type specifications are no longer sufficient to guarantee correctness. So it is very important to document the assumptions on which the type specification depends.\n\n\nFor many problems these assumptions are stable, and in those cases, this correctness methodology is extremely effective. There are very good examples in the literature; at the web site [www.nuprl.org](http://www.nuprl.org) we recently posted a simple example for the purpose of illustrating the method to newcomers. 
It is a complete formal proof that allowed us to build a correct by construction program to solve the “maximum segment sum” problem that is used to teach formal program development starting from type specifications.\n\n\n \n\n\n*Recent Work*\n\n\nThe author and his colleagues at Cornell have recently used Nuprl to create a correct by construction version of the Multi-Paxos distributed consensus protocol that is being used in a database system. This required an implemented theory of distributed computing which is steadily evolving. We have made this protocol *attack-tolerant*, which is a form of security. Consensus protocols are critical to Cloud computing, and industry works very hard to build them correctly and make them secure as they are revised and improved. Researchers have created hundreds of correct by construction programs since 1984 and secure by construction programs as well. Worldwide many more are being built.\n\n\nModern proof assistants are steadily becoming more powerful and effective at correct and secure by construction programming because there is a research imperative and an economic incentive to use them. There is also a strong incentive to teach them, as the work on *[Software Foundations](http://www.cis.upenn.edu/~bcpierce/sf/)* by Benjamin Pierce and his colleagues at the University of Pennsylvania demonstrates. Proof assistants are an addictive technology partly because they improve themselves the more they are used which steadily increases their appeal and value.\n\n\n\n\n---\n\n\n**Luke**: What do you think are the prospects for applying correct-by-construction and secure-by-construction approaches to methods commonly labeled “artificial intelligence”?\n\n\n\n\n---\n\n\n**Robert**: The *proof assistants* such as Agda, Coq, and Nuprl that support correct by construction programming are themselves examples of artificial intelligence in the sense that they use automated reasoning tools developed in AI. 
The earliest proof assistants such as the Boyer-Moore prover came out of the AI Department at Edinburgh University. Also one of the seminal books in the field, *The Computer Modeling of Mathematical Reasoning,* by Alan Bundy at Edinburgh was a landmark in AI, published in 1983 by Academic Press.\n\n\nIn due course these proof assistants will use other AI tools as well such as those that can translate formal proofs into natural language proofs. There has already been interesting progress on this front. For instance the article “Verbalization of High-Level Formal Proofs” by Holland-Minkley, Barzilay and me in the 16th National Conference on AI in 1999 (pp. 277–284) translates Nuprl proofs in number theory into natural language.\n\n\nIt is also the case that these systems are extended using each other. For example, right now my colleagues Vincent Rahli and Abhishek Anand are using the Coq prover to check that new rules they want to add to Nuprl are correct with respect to the semantic model of Nuprl’s constructive type theory that they painstakingly formalized in Coq. They have also checked certain rules using Agda. This activity is precisely what you asked about, and the answer is a clear yes.\n\n\nWhen it comes to looking at machine learning algorithms on the other hand, the criteria for their success are more empirical. Do they actually give good performance? Machine learning algorithms were used by professors Regina Barzilay (MIT) and Lillian Lee (Cornell) to improve the performance of machine translation from Nuprl mathematics to natural language in their 2002 article on “bootstrapping lexical choice.” For this kind of work, it does not necessarily make sense to use correct by construction programming. On the other hand, one can imagine a scenario where a correct translation of the mathematics might be critical. 
In that case, we would probably not try to prove properties of the machine learning algorithm, but we would instead try to capture the meaning of the natural language version and compare that to the original mathematics.\n\n\nThe work of “understanding natural language mathematics” would benefit from correct by construction programming. Already there is excellent work being done to use constructive type theory to provide a semantic basis for natural language understanding. The book by Aarne Ranta, *Type-theoretical grammar,* Oxford, 1994 is an excellent example of this kind of work. It also happens to be one of the best introductions to constructive type theory.\n\n\nEventually machine learning algorithms are going to have a major impact on the effectiveness of proof assistants. That is because we can improve the ability of proof assistants to work on various subtasks on their own by teaching them how the expert humans handle these tasks. This kind of work will be an excellent example of both aspects of AI working together. Machine learning guides the machine to a proof, but the automated reasoning tools check that it is a correct proof.\n\n\n\n\n---\n\n\n**Luke**: What developments do you foresee, during the next 10-20 years, in proof assistants and related tools?\n\n\n\n\n---\n\n\n**Robert**: I’ve been working on proof assistants, programming logics, and constructive type theory for about forty years, and part of the job is to make predictions five to seven years out and try to attract funding to attain a feasible goal that will advance the field. To maintain credibility, you need to be right enough of the time. I’ve been more right than wrong on eight or so of these predictions. 
So I am confident about predicting five to seven years out, and I have ideas and hopes about ten years out, and then I have beliefs about the long term.\n\n\n \n\n\n*The Short Term*\n\n\nOver the next five years we will see more striking examples of correct by construction programming in building significant software and more examples of verifying software systems as in the work on the seL4 kernel, the SAFE software stack, features of Intel hardware, and important protocols for distributed computing, especially for cloud computing. This will be done because it is becoming cost effective to do it. I don’t see a major industrial effort even for the next ten years. This area will be a focus of continued academic and industrial research, and it will have incremental effects on certain key systems. However, it will remain too expensive for widespread deployment in industry.\n\n\nWe will see formal tools being used to make both offensive and defensive weapons of cyber warfare more effective. One of the key lessons driving this is the fact that the Stuxnet weapon, one of the best and most expensive ever created, was apparently defeated because it had a bug that led to its discovery. Another lesson is that the offensive weapons are improving all the time, and the defense must be more agile, not only building secure by construction “fortress like” systems, but thinking about systems as living responsive entities that learn how to adapt, recover, and maintain their core functionality.\n\n\nIn the realm of warfare, cost has rarely been the decisive factor, and formal methods tools are not outrageously costly compared to other required costs of maintaining a superior military. In this realm, unlike in building and maintaining ships, planes, and rockets and defenses against them, the cost lies in training and recruiting people. 
These people will have valuable skills for the private sector as well, and that sector will remain starved for well educated talent.\n\n\nSo overall, investing in advanced formal methods technology will be seen as cost effective. Proof assistants are the best tools we have in this sector, and we know how to make them significantly better by investing more. These tools include Agda, Coq, Nuprl, Minlog, and MetaPRL on the synthesis side, and ACL2, Isabelle-HOL, HOL-Light, HOL, and PVS on the verification side. All of them are good, each has important unique features. All have made significant contributions. All of them have ambitions for becoming more effective at what they do best. They will attract more funding, and new systems will emerge as well.\n\n\n \n\n\n*Medium Term*\n\n\nMost of the current proof assistants are also being used in education. It seems clear to me that the proof assistants that are well integrated with programming languages, such as Coq with OCaml and ACL2 with Lisp will prosper from being used in programming courses. Other proof assistants will move in this direction as well, surely Nuprl will do that. What happens in education will depend on the forces that bring programming languages in and out of fashion in universities. It seems clear to me that the Coq/OCaml system will have a chance of being widely adopted in university education, at least in Europe and the United States. It has already had a major impact in the Ivy League – Cornell, Harvard, Princeton, UPenn, and Yale are all invested as is MIT. I predict that within five years we will see this kind of programming education among the MOOCs, and we will see it move into select colleges and into top high schools as well.\n\n\nAll of the proof assistants have some special area of expertise. If there is sufficient funding, we will see them capitalize on this and advance their systems in various directions. 
Nuprl has been very effective in building correct by construction distributed protocols that are also attack tolerant. It is likely that this work will continue and result in building communication primitives into the programming language to match the theory of events already well developed formally. Coq has been used to build a verified C compiler; at UPenn and Harvard, Coq is being used to verify the Breeze compiler for new hardware designed to support security features; and at MIT they are using Coq to verify more of the software stack with the Bedrock system. There are other applications of Coq too numerous to mention. We will see additional efforts to formally prove important theorems in mathematics, but it’s hard to predict which ones and by which systems. Current efforts to use proof assistants in homotopy theory will continue and will produce important mathematical insights. Proof assistants will also support the effort to build more certified algorithms along the lines of Kurt Mehlhorn’s impressive work.\n\n\n \n\n\n*Long Term*\n\n\nPeople who first encounter a proof assistant operated by a world class expert are likely to be stunned and shocked. This human/machine partnership can do things that seem impossible. They can solve certain kinds of mathematics problems completely precisely in real time, leaving a readable formal proof in LaTeX as the record of an hour’s worth of work. Whole books are being written this way as witnessed by the *Software Foundations* project at UPenn mentioned previously. I predict that we will see more books of readable formal mathematics and computing theory produced this way. Around them will emerge a technology for facilitating this writing and its inclusion in widely distributed educational materials. It might be possible to use proof assistants in grading homework from MOOCs. 
Moreover, the use of proof assistants will sooner or later reach the high schools.\n\n\nI think we will also see new kinds of remote research collaborations mediated by proof assistants that are shared over the internet. We will see joint research teams around the world attacking certain kinds of theoretical problems and building whole software systems that are correct by construction. We already see glimpses of this in the Coq, Agda, and Nuprl community where the proof assistants share very similar and compatible type theories.\n\n\nThere could be a larger force at work for the very long term, a force of nature that has encoded mechanisms into the human gene pool that spread and preserve information, the information that defines our species. We might not be *Homo sapiens* after all, but *Homo informatis*. The wise bits have not always been so manifest, but the information bits advance just fine – at least so far. That is the aspect of our humanness for which we built the proof assistant partners. They are part of an ever-expanding information ecosystem. There is a chance that proof assistants in a further evolved form will be seen by nature as part of us.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Bob!\n\n\nThe post [Robert Constable on correct-by-construction programming](https://intelligence.org/2014/03/02/bob-constable/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-03T00:41:34Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "937269657160b5c7e9ca616fd72aa6af", "title": "Armando Tacchella on Safety in Future AI Systems", "url": "https://intelligence.org/2014/03/02/armando-tacchella/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2014/03/Armando-Tacchella.png)[Armando Tacchella](https://sites.google.com/site/armandotacchella/) is Associate Professor of Information Systems at the Faculty of Engineering, at the University of Genoa. 
He obtained his Ph.D in Electrical and Computer Engineering from the University of Genoa in 2001 and his “Laurea” (M.Sc equivalent) in Computer Engineering in 1997. His teaching activities include graduate courses in AI, formal languages, compilers, and machine learning as well as undergraduate courses in design and analysis of algorithms. His research interests are mainly in the field of AI, with a focus on systems and techniques for automated reasoning and machine learning, and applications to modeling, verification and monitoring of cyber-physical systems. His recent publications focus on improving the dependability of complex control architectures using formal methods, from the design stage till the operational stage of the system. He has published more than forty papers in international conferences and journals including AAAI, IJCAI, CAV, IJCAR, JAI, JAIR, IEEE-TCAD. In 2007 he was awarded by the Italian Association of Artificial Intelligence (AI\*IA) the prize “Marco Somalvico” for the best young Italian researcher in AI.\n\n\n\n**Luke Muehlhauser**: My summary of [Transparency in Safety-Critical Systems](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/) was:\n\n\n\n> Black box testing can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI’s most useful methods are among its least transparent. Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. 
There are exceptions to this general rule, and some people are working to make statistical methods more transparent.\n> \n> \n\n\nThe last sentence applies to a [2009 paper](http://commonsenseatheism.com/wp-content/uploads/2014/01/Pulina-Tacchella-An-abstraction-refinement-approach-to-verification-of-artificial-neural-networks.pdf) you co-authored with [Luca Pulina](https://sites.google.com/site/lpulina/), in which you show formal guarantees about the behavior of a trained multi-layer perceptron (MLP). Could you explain roughly how that works, and what kind of guarantees you were able to prove?\n\n\n\n\n\n---\n\n\n**Armando Tacchella**: A trained MLP is just a mathematical function and, in principle, its properties could be verified in a logic that deals with real numbers and interpreted linear functions. However, useful MLPs, even the single hidden layer ones that we consider in our paper, use non-linear transcendental functions in their “activation” units (e.g., logistic or hyperbolic tangent). Any logic language that deals natively with such functions is bound to have a non-decidable decision problem — for a discussion see the introduction of [this paper](http://jsat.ewi.tudelft.nl/content/volume1/JSAT1_11_Fraenzle.pdf).\n\n\nOur approach to establish safety of MLPs is thus based on an abstraction-refinement paradigm, whereby an “abstract” MLP is obtained from the “concrete” one using some over-approximation whose properties are decidable. This is a technique [introduced by P. Cousot and R. Cousot](http://dl.acm.org/citation.cfm?id=512973) some time ago and more recently [exploited in the model checking community](http://dl.acm.org/citation.cfm?id=876643) to attempt the verification of systems whose properties are not decidable.\n\n\nIf the abstract MLP is found to satisfy the properties, then so does the concrete one. 
Otherwise, we obtain an “abstract” counterexample, i.e., a set of inputs which violates a property in the abstract MLP.\n\n\nIf the set of inputs is found to violate the property in the concrete MLP, then the concrete MLP is faulty, and must be fixed. On the other hand, if the concrete MLP works fine when supplied the abstract counterexample, such counterexample is “spurious”, meaning that it is an artifact of the abstraction which was too coarse to be useful.\n\n\nThe abstraction is thus “refined”, i.e., made less coarse, perhaps more expensive to check, and the process iterates, until either the MLP is proven safe, or the process exhausts available resources.\n\n\nClearly, abstraction-refinement does not work around the initial undecidability issue, because an infinite chain of refinements due to spurious counterexamples might always exist. However, in practice, we are often able to establish the validity of a property with a reasonable number of refinements.\n\n\nIn our paper, we instantiate the approach above to the specific case of single hidden layer MLPs which are nevertheless [universal approximators](http://www.sciencedirect.com/science/article/pii/0893608089900208) when subject to some reasonable conditions that we ensure. In particular:\n\n\n1. We abstract MLPs to corresponding “interval functions”, i.e., if the MLP is y = f(x) then the abstract one is [y] = F([x]), where “F” is the interval function corresponding to “f”.\n2. We consider a “safety property”, i.e., if [x] is contained in the input range of f, then [y] will be contained in some interval [l,h] with l < h (the “safety bound”).\n3. We prove the property on F using a decision procedure for Linear Arithmetic over reals, e.g., a Satisfiability Modulo Theory (SMT) solver. 
If the property holds for F, i.e., for all [x] in the domain of f we have that F([x]) is contained in [l,h], then this is the case also for f.\n\n\nIf a counterexample is found, then there is some interval [x] such that F([x]) is not contained in [l,h]. We then consider points x contained in [x], and check whether the corresponding f(x) is in [l,h]. If some x is found whose f(x) lies outside [l,h], then f is faulty and must be fixed. Otherwise, the counterexample was spurious, so we restart from step (1) with a finer-grained refinement.\n\n\nIn the end, if the MLP f passes our test, we are guaranteed that whatever input it receives, it will never exceed its “specifications” in terms of minimum and maximum expected output values.\n\n\n\n\n---\n\n\n**Luke**: In what ways do you think that formal methods helped to “explain” the MLP, and make it more transparent to human programmers?\n\n\n\n\n---\n\n\n**Armando**: The answer to this question largely depends on what we mean by “explanation”.\n\n\nIf by “explanation” we mean that, using our verification methodology, the neural net user is now able to understand clearly which “reasoning rule” is implemented by which neuron (or set of neurons), then our method does not provide any further insight in that direction.\n\n\nHowever, if we look for a “behavioral explanation”, then the techniques proposed in our paper help in at least two ways:\n\n\n* the formal analysis of the original trained neural network enables the network user to establish whether its behaviors are in line with the specifications. When this is not the case, the user is given a (set of) counterexample(s) that will aid her in identifying the causes of misbehaviors.
Indeed, by perusing the verification outputs, the user learns about the “points of failure” in the input/output space of the network, which, in turn, will enable her to improve her original design (e.g., by collecting more experimental evidence around those points).\n* since a neural network is not “classical” software — albeit from a formal point of view we treat it exactly as such — it might be the case that the user cannot readily “fix” the network using counterexamples. Our “repair” procedure can help to fix the network in an automated way by performing a controlled training using the result of “spurious” counterexamples. If the system to be learned by the network is available for experimentation, our repair procedure can also leverage the results of actual counterexamples (a form of “active learning”).\n\n\nTechnically, repair exposes the network to the danger of “overfitting” the input data, i.e., the variance in the network performance is very high when considering several different datasets.
However, we controlled for that possibility, and it never occurred in our experiments.\n\n\n\n\n---\n\n\n**Luke**: Before our interview, you wrote to me that:\n\n\n\n> I am quite skeptical that we will see any strongly autonomous robot around unless we are able to show (in a convincing way) that the robot is harmless for the environment…\n> \n> \n> Since I do not expect to be able to achieve safe strong autonomy with “correct-by-construction” and “proven in use” philosophies alone (the ones we use to build airplanes), I expect formal analysis to be even more important in robots than in any other implement.\n> \n> \n\n\nWere you saying that, despite the tremendous economic and military incentives to build ever-more-capable machines without waiting for safety-related capabilities to catch up, companies and governments *will in fact* not release strongly autonomous programs and robots “into the real world” without first making sure that we have the kind of confidence about their safety that we currently require for (e.g.) autopilot software?\n\n\n\n\n---\n\n\n**Armando**: Engineers know very well the kind of pressure you are talking about and, in fact, if we consider any catalog of “horror stories” – see e.g. [Software Horror Stories](http://www.cs.tau.ac.il/~nachumd/horror.html) for more than 100 cases related *only* to SW engineering – we cannot rule out the possibility of something bad happening when autonomous robots hit the market.\n\n\nFor robots endowed with relatively weak autonomy, or with a relatively low probability of causing serious harm to the surrounding environment, incidents may also be part of a trade-off wherein robot makers are not willing to invest huge amounts of effort to eliminate them completely.
In these cases, I expect traditional risk analysis and engineering best practices to prevail over a major shift to precise verification methods.\n\n\nFor robots endowed with strong autonomy, *and* a non-negligible potential to cause serious harm to the environment, I believe there will be a strong deterrent against merely souping up traditional engineering practices and adopting them in domains that they cannot really handle.\n\n\nTo make the discussion concrete, let us assume that the autonomous car made by Google is going to be sold as a product like, say, a mobile phone. A shipping company can now buy a fleet of such cars, and send them *driverless* to deliver parcels at the doors of their customers. Technologically this is feasible today, but we will not see this happening any time soon. The reasons are several, but let me mention a few that I consider important:\n\n\n* Ethical. Do we really want to empower a machine to a level at which it can *autonomously* decide, e.g., to run down a pedestrian? An autopilot can also fail and kill people, but it was clearly not the “will” of the autopilot to do so.\n* Technical. Can we ensure that, without an operator ready to press “emergency stop”, the car will never behave in a hazardous way? Planes are operated by very well trained officials, who can always “go manual” if anything goes wrong in the autopilot.\n* Economic. Even if we had the technology to ensure safety, would it be cost-effective? Right now, airplane makers can ensure, through standard engineering practices (e.g., redundancy, stress testing, preventive maintenance), that autopilots are cost effective within their operational requirements.\n* Legal. Who bears the liability if an autonomous car causes an accident?
The issue becomes especially bothersome if several legal entities are involved in the product, e.g., the car maker, the system integrator, the “AI” supplier, the operator (assuming on-field configuration is available).\n\n\nGoing back to my initial statement about the importance of formal methods in robots, I expect that a crisp understanding of the phenomena governing the “brains” of an autonomous robot and its interactions with its “body” and the surrounding environment is of paramount importance if we wish to address any of the above issues in a satisfactory way.\n\n\n\n\n---\n\n\n**Luke**: [Levitt (1999)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Levitt-Robot-Ethics-Value-Systems-and-Decision-Theoretic-Behaviors.pdf) argued that as intelligent systems become increasingly autonomous, and operate in increasingly non-deterministic, unknown environments, it will not be possible to model their environment and possible behaviors entirely with tools like control theory:\n\n\n\n> If robots are to be put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable… events routinely occur. It will not be practical, or even safe, to halt robotic actions whenever the robot encounters an unexpected event or ambiguous [percept].\n> \n> \n> Currently, commercial robots determine their actions mostly by control-theoretic feedback. Control-theoretic algorithms require the possibilities of what can happen in the world be represented in models embodied in software programs that allow the robot to pre-determine an appropriate action response to any task-relevant occurrence of visual events.
When robots are used in open, uncontrolled environments, it will not be possible to provide the robot with *a priori* models of all the objects and dynamical events that might occur.\n> \n> \n> In order to decide what actions to take in response to un-modeled, unexpected or ambiguously interpreted events in the world, robots will need to augment their processing beyond controlled feedback response, and engage in decision processes.\n> \n> \n\n\nThus, he argues, autonomous programs will eventually need to be decision-theoretic.\n\n\nWhat do you think of that prediction? In general, what tools and methods do you think will be most relevant for achieving high safety assurances for the highly autonomous systems of the future?\n\n\n\n\n---\n\n\n**Armando**: I agree with Levitt when he says that “control theory” alone cannot deal with strongly autonomous robots. This is also what motivates the stream of research in Formal Methods dealing with Robotics and Automation (see, e.g., [Workshop on Formal Methods for Robotics and Automation](http://verifiablerobotics.com/RSS13/) for a recent event on the subject).\n\n\nHowever, it might be the case that “engaging in decision theoretic processes” while the robot is performing its tasks raises challenging — to say the least — computational issues. Indeed, it is well known that even simple factual, i.e., Boolean, reasoning may require exponential time to be carried out. Since we expect reality to be much more complex than simple 0/1 evidence, reasoning in every “logic” that the robot is endowed with is going to be computationally hard.\n\n\nEven assuming the availability of powerful heuristics and/or approximations, the problem is still made tough by real-time requirements in the decision process. 
Indeed, the robot may not take forever to plan its course of actions, and in many cases high variances in response time are not acceptable either.\n\n\nAt the same time, making sure that a strongly autonomous robot stays safe using off-line verification only is also probably not going to happen. This is because of computational issues (combinatorial state-space explosion, undecidability of expressive logics to deal with time/space/probabilities), but also because, as in control theory, most of the situations that will arise while the robot is running are hard to frame.\n\n\nIn my opinion, what we might see is a combination of several approaches, including:\n\n\n* “Correct by construction” design and implementation\n* Formal verification and testing at the design and implementation stage\n* Lightweight reasoning while the robot is executing\n\n\nReasoning during execution is probably going to be “self-reflective”, adopting, e.g., efficient monitors that can be automatically synthesized during the design process, and then deployed on the robot to ensure that certain safety conditions are met at all times without the burden of “deep” logical reasoning.\n\n\nFinally, actual robots will probably need to be “compliant” to humans, not only in their cognitive/reasoning abilities but also in their structure (see, e.g., [recent development in “Soft Robotics”](http://online.liebertpub.com/toc/soro/1/)). A “compliant” structure will improve safety and/or will make reasoning about the safety of the robot easier, since some “intrinsic safety” can be achieved by the use of innovative materials and electronics. In other words, we should not worry just about a “safe mind”; we must retain the principle that a safer “mind” also requires a safer “body”.\n\n\n\n\n---\n\n\n**Luke**: Sometimes I like to distinguish “bottom up” vs.
“top down” approaches to the challenge of long-term AI safety — that is, to the challenge of making sure that we can continue to achieve confident safety guarantees for AI systems as they grow in autonomy and capability.\n\n\n“Bottom up” approaches to long-term AI safety build incrementally upon current methods for achieving confident safety guarantees about today’s limited systems. “Top down” approaches begin by trying to imagine the capabilities of the AI systems we’ll have several decades from now, and use those speculations to guide research today.\n\n\nI would describe your response to my previous question as a “bottom up” perspective on long-term AI safety, and I think it’s the kind of perspective most people working on AI safety would give. To illustrate what I mean by “top down” approaches to long-term AI safety, I’ll provide three examples: (1) [Lampson (1979)](http://www.cs.umd.edu/%7Ejkatz/TEACHING/comp_sec_F04/downloads/confinement.pdf)‘s theoretical description of the “confinement problem,” which motivated the creation of the field of [covert channel communication](http://en.wikipedia.org/wiki/Covert_channel). (2) [Orseau & Ring (2012)](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf)‘s work on the agent-environment boundary typically used as a simplifying assumption in reinforcement learning and other AI models, which I hope will inspire future research into more “naturalistic” AI models that will avoid problematic “Cartesian” mistakes in agent reasoning. 
(3) [Weaver (2013)](http://arxiv-web.arxiv.org/abs/1312.3626)‘s discussion of paradoxes in rational agency, which I hope will motivate later research into how future self-modifying agent-based AIs can reason coherently about the consequences of modifying their own decision-making code.\n\n\nEven if you aren’t familiar with these examples already, do you have an opinion on “top down” approaches to long-term AI safety in general, and how they should (or shouldn’t) inform today’s research efforts?\n\n\n\n\n---\n\n\n**Armando**: Indeed, I was not familiar with the references you cited, and I can see that they are definitely much more “visionary” than what I have suggested previously. Since my background is in Engineering, I must acknowledge a bias towards “bottom-up” views as you define them, or simply “reductionist” and “pragmatic” as I would define them.\n\n\nYour citations reminded me of a paper [by Weld and Etzioni](https://www.cs.auckland.ac.nz/~nickjhay/papersuni/RoughDraft200207-best_of_SASEMAS.pdf). I often quote and cite this paper in my works because, in my opinion, it had the merit of raising the level of concern of the AI community about “unleashing” strongly autonomous agents in the physical world. The contents of the paper, albeit slightly outdated, can still provide some useful guidelines and vocabulary for current research on safe AI (at least they did for me). While it is not the kind of paper I would conceive myself, I believe this kind of speculation is useful to prepare the ground for more tangible achievements.\n\n\nOn the other hand, if we are to chase the current push for stronger autonomy in robots acting in the physical world, then we should probably be very pragmatic about what we can do with current technology in order to ensure safety at all times.
While it is possible that future advances in technology will make a “decision theoretic” agent possible within the resource constraints of a mobile robot, it is current technology which is enabling, e.g., autonomous cars to drive around. Therefore, if we want robots of this kind to become products in the next few years, I believe we should focus research on making them trustworthy by leveraging current scientific and technological developments.\n\n\n\n\n---\n\n\n**Luke**: Are there any other readings you’d recommend, besides [Weld & Etzioni (1994)](https://www.cs.auckland.ac.nz/%7Enickjhay/papersuni/RoughDraft200207-best_of_SASEMAS.pdf), on the subject of long-term AI safety?\n\n\n\n\n---\n\n\n**Armando**: Weld and Etzioni returned to the subject of safe AI in their paper “[Intelligent agents on the internet: Fact, fiction, and forecast](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.7045&rep=rep1&type=pdf)”.\n\n\nA similar theme is touched upon by Eichmann in his “[Ethical web agents](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.3585&rep=rep1&type=pdf)“.\n\n\nI found out about the “Call to Arms” by Etzioni and Weld in a series of papers by Gordon, of which I would deem “[Asimovian Adaptive Agents](http://www.jair.org/media/720/live-720-1895-jair.pdf)” the most relevant.\n\n\nI have just stumbled upon (but have not yet had a chance to read thoroughly) a relatively recent book by Wilks, “Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues” ([see a review here](http://oldsite.aclweb.org/anthology-new/J/J11/J11-2006.pdf)).
The book seems controversial at times, but it does contain many different views about artificial companions and it does raise some issues that we should probably keep well in mind when designing physically situated agents.\n\n\nFinally, to have an idea about concrete projects going on in the field of robotics where safety is considered both a prominent value and an essential challenge, [this site](http://www.robotcompanions.eu/) lists many interesting recent developments and ideas for further research.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Armando!\n\n\nThe post [Armando Tacchella on Safety in Future AI Systems](https://intelligence.org/2014/03/02/armando-tacchella/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-02T22:04:41Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "8b515cb991d8e507007978701fb20089", "title": "Anders Sandberg on Space Colonization", "url": "https://intelligence.org/2014/03/02/anders-sandberg/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2014/03/Anders-Sandberg.jpg)Anders Sandberg works [at the Future of Humanity Institute](http://www.fhi.ox.ac.uk/about/staff/), a part of the Oxford Martin School and the Oxford University philosophy faculty. Anders’ research at the FHI centres on societal and ethical issues surrounding human enhancement, estimating the capabilities and underlying science of future technologies, and issues of global catastrophic risk. In particular he has worked on cognitive enhancement, whole brain emulation and risk model uncertainty. He is senior researcher for the FHI-Amlin Research Collaboration on Systemic Risk of Modelling, a unique industry collaboration investigating how insurance modelling contributes to or can mitigate systemic risks.\n\n\nAnders has a background in computer science and neuroscience. 
He obtained his Ph.D in computational neuroscience from Stockholm University, Sweden, for work on neural network modelling of human memory. He is co-founder and writer for the think tank Eudoxa, and a regular participant in international public debates about emerging technology.\n\n\n\n**Luke Muehlhauser**: In your paper with Stuart Armstrong, “[Eternity in Six Hours](http://commonsenseatheism.com/wp-content/uploads/2013/05/Armstrong-Sandberg-Eternity-in-six-hours-intergalactic-spreading-of-intelligent-life-and-sharpening-the-Fermi-paradox.pdf),” you run through a variety of calculations based on known physics, and show that “Given certain technological assumptions, such as improved automation, the task of constructing Dyson spheres, designing replicating probes, and launching them at distant galaxies, become quite feasible. We extensively analyze the dynamics of such a project, including issues of deceleration and collision with particles in space.”\n\n\nYou frame the issue in terms of the Fermi paradox, but I’d like to ask about your paper from the perspective of “How hard would it be for an AGI-empowered, Earth-based civilization to colonize the stars?”\n\n\nIn section 6.3. you comment on the robustness of the result:\n\n\n\n> In the estimation of the authors, the assumptions on intergalactic dust and on the energy efficiency of the rockets represent the most vulnerable part of the whole design; small changes to these assumptions result in huge increases in energy and material required (though not to a scale unfeasible on cosmic timelines). If large particle dust density were an order of magnitude larger, reaching outside the local group would become problematic without shielding methods.\n> \n> \n\n\nWhat about the density of *intra*galactic dust? 
Given your technological assumptions, do you think it would be fairly straightforward to colonize most of the Milky Way from Earth?\n\n\n\n\n\n---\n\n\n**Anders Sandberg**: I believe it would be straightforward given the technological assumptions we make, but to demonstrate it requires much more exploratory engineering footwork.\n\n\nIntergalactic dust appears to be relatively thin, which is unsurprising: most of the contents of intergalactic space are primordial. There are contributions from galaxies, but they are relatively minor. Another way of seeing that the dust is thin is the absence of obscuration of remote galaxies: the total area along the line of sight is small.\n\n\nThis is unfortunately not true inside the galactic plane. At least in some directions dust is optically thick. The probability of a photon (or a fast spacecraft) hitting a dust particle along its trajectory is high. The size distribution of course matters: if it were all in very small particles, the problem is different from that posed by rarer pebbles. (If the total mass density is ρ kg/m³, and the average radius is r meters, then the number density N scales as ρ/r³ grains per cubic meter, while the surface area scales as r²N = ρ/r – finer dust can obscure more, down to the diffraction limit.) The actual size distribution for larger grains that would be a hazard is unfortunately not experimentally known – a fair amount is known about smaller grains, but the ones we care about here are hard to observe.\n\n\nThe approach in our paper consists of just sending enough probes to have a good chance of one reaching the target without any collision. This has an effective distance cutoff rather dependent on the density of too-large grains (d ≈ 1/σN, where σ is the probe cross section and N is the number density). One might argue that sending a *lot* of probes could work beyond that, but since the required number grows as exp(distance/d) this soon swamps any local resources.
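The scalings in the parenthetical and the cutoff d ≈ 1/σN can be made concrete in unit-free form. The functions and numbers below are purely illustrative (constants are folded into the scalings, as in the text); none of the values are the paper's:

```python
import math

def number_density(rho, r):
    """Grains per unit volume: the N ~ rho / r**3 scaling from the text
    (constants folded in; rho is total grain mass density, r grain radius)."""
    return rho / r**3

def distance_cutoff(sigma, N):
    """Mean free path d ~ 1/(sigma * N): the effective distance cutoff
    for a probe of cross-section sigma."""
    return 1.0 / (sigma * N)

def probes_needed(L, d):
    """A probe survives a trip of length L with probability exp(-L/d),
    so ~exp(L/d) probes are needed for one expected survivor."""
    return math.exp(L / d)

# Arbitrary illustrative units:
d = distance_cutoff(sigma=1.0, N=number_density(rho=1.0, r=1.0))

# Twice the dust density => half the safe-travel cutoff:
assert distance_cutoff(1.0, number_density(2.0, 1.0)) == d / 2

# Beyond the cutoff, the required probe count blows up exponentially:
print(probes_needed(1 * d, d))   # ~2.7 (e)
print(probes_needed(10 * d, d))  # ~22026 (e**10)
```

This is why, past the cutoff distance, "just send more probes" stops being an option and shielding (which lowers the effective N) becomes the only lever.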
So clearly some shielding is necessary; this reduces N. (Getting a very small σ is also good, up until the point where the length of the javelin-shaped probe becomes long enough that it starts to suffer lateral hits.)\n\n\nNow, showing that this shielding is achievable is where things get much more involved. This requires analysing the interaction between relativistic grains and a target. We have done an informal analysis of this (unpublished), and think it is possible to get the shielding up more than enough for interstellar travel if one can build atomically precise structures. This seems to be roughly within the capabilities assumed in the rest of the paper.\n\n\nAnother interesting issue is that in our galaxy most dust exists within the Thin Disk. This is a disk around 3,000 light-years thick (with a scale height of about 400 light-years), which contains most of the gas and dust (plus stellar mass). Outside this is the thick disk, which is far clearer. Even interstellar hydrogen gas is a rather nasty proton beam from the perspective of a relativistic traveller, so it might actually be useful to go above the plane of the galaxy. This is a spiral galaxy problem: elliptical galaxies tend to have little dust or gas, so they are much easier to travel through.\n\n\n\n\n---\n\n\n**Luke**: What if we just talk about colonizing the local group or the local supercluster? How sensitive are those outcomes to the density of intergalactic space dust and the energy efficiency of rockets?\n\n\n\n\n---\n\n\n**Anders**: There is a nice trade-off between dangerous dust density and safe travel distance: twice the density of largish dust and your distance is halved. So in our paper we assumed the density of intergalactic dust grains larger than a micron to be 10⁻³⁵ g/cm³, giving us a reach of 1,194 megaparsecs using just a single probe. This is more than enough for the local supercluster (which is just 33 megaparsecs across).
So even if dust were 36 times denser than we thought, this would be achievable.\n\n\nNote that in galaxies densities can be up to a few million times higher than outside. So there we should expect the distance of travel to be merely a thousand parsecs (if, as seems likely, there are also more grains in the interstellar medium per unit of density, distances get even shorter – at least when flying through the dirtiest parts).\n\n\nGoing slower reduces the impact energy, and hence makes the dangerous dust density go down. This effect is however mainly felt for low velocity (<0.1 c) probes: as soon as you start going at an appreciable clip of lightspeed you become sensitive to even rather small grains. However, for local colonization, if you are not competing with anybody or trying to outrun cosmological expansion, a leisurely pace might be totally acceptable. The energy demands of course decrease a lot too: for 99% c travel you need to supply energy corresponding to 7 times the rest mass of the probe, and even 0.5 c requires 15% of the rest mass energy. 0.1 c is down to just 0.5%.\n\n\nSome of this depends on how good rockets can be made. The rocket equation is my least favourite physics equation: it states that to get a velocity change you need to use an exponential amount of reaction mass (and it gets even worse for relativistic rockets). This is why our paper assumed external launchers to send off the probes: using rockets for launch too would have increased the energy and mass demands enormously. The total energy requirement difference between relatively inefficient fission rockets and efficient antimatter rockets is however less than an order of magnitude, since the efficiency boost is eaten up by a longer range and hence the need to send more probes.\n\n\nSince the paper was written we have tinkered with laser-pushed rocket ideas: if one section of the probe continues ahead, sending back a very intense laser beam to slow the payload, this can be made more effective.
There are some very interesting optical possibilities here if designs can be made atomically precise that would make slowing quite energy efficient. But in the end, I don’t think these considerations will provide many orders of magnitude difference.\n\n\n\n\n---\n\n\n**Luke**: As explained by your paper, your plan assumes the probes would be launched by “fixed launch systems” such as coilguns, quenchguns, laser propulsion, and particle beam propulsion, which are vastly more efficient than rockets. (Rockets are for deceleration near the end of a probe’s trip.) Could you give a brief description of these fixed launch systems? Which ones are you most confident will be technologically feasible?\n\n\n\n\n---\n\n\n**Anders**: Most people have heard about railguns, devices that use the Lorentz force law to electromagnetically accelerate projectiles. These are based on an armature sliding along two current-bearing rails, pushed forward by the magnetic field through the loop of rails and armature. While able to generate tremendous acceleration they also tend to be \*messy\*, since the armature experiences friction with the rails, generating plasma discharges and erosion. Nevertheless, such systems exist today and can generate a few km/s velocity launches. While one can imagine scaling things up and using optimal materials, for relativistic launches railguns are likely entirely out.\n\n\nIn our paper we assumed some form of coil-gun: the projectile never touches the launcher, and is accelerated by electromagnetic coils along the launch path. This has many practical benefits, like only needing currents for one or two coils at a time. The main limitations are switching speeds (which we assume can be handled by advanced technology, especially since the motion is very predictable), magnetic saturation of the projectile (which limits the acceleration per coil, but can be handled by using many more coils) and resistance in the coils.
Using superconductors for the coils seems entirely reasonable for a large scale project with big resources. That also allows “quenchguns”, where energy is first stored in currents in superconducting coils that are then quenched, producing a very sharp field gradient that could propel the payload. According to the literature and the experts we talked with, these can in principle be made very efficient, converting most of the electrical energy into kinetic energy. They are based on well understood physics and can be seen as a modest extrapolation of current engineering to large scales.\n\n\nWe have not investigated particle beam propulsion, but it clearly has some potential. In a way a particle beam is a coilgun-accelerated stream of very small projectiles, and part of their momentum can be gathered by a projectile either by absorbing them (leading to heating and damage issues), or by using a mini-accelerator to “grip” them. There is not as much research on this as for electromagnetic propulsion or laser propulsion, but the physics seems sound. Whether it can be made efficient enough likely depends on whether the high efficiency of the particle accelerator can balance the inefficiencies of capturing their momentum at the receiving end. This is worth investigating further.\n\n\nLaser propulsion has been experimentally demonstrated on small scales, and again is fairly well understood from an exploratory engineering standpoint. The main limitation on acceleration is heating of the launched object: ideally it would just reflect the beam back, but imperfect reflection means that the back part will heat up. Atomically precise manufacturing appears to allow surprisingly beam-resistant materials, even when based on current materials. Another problem for relativistic projectiles is that the beam will become redshifted as it speeds up, and that a normal beam will spread out as it gets further away from the source, reducing efficiency.
It turns out these problems can to some extent be fixed by some more radical optics: it looks like it is physically possible to create free-floating “optical fibres” (actually a long line of very thin lenses) that allow the beam intensity to remain high even over significant distances, adapting as the effective wavelength changes. These ideas were developed more fully after the manuscript was finished, so we do not make use of them in it. But they suggest (if they work out when fully written up and simulated) that laser-launching (and laser-powered slowing) might be the most effective way for an advanced civilization to deliver relativistic payloads.\n\n\n\n\n---\n\n\n**Luke**: Now let’s talk about deceleration, where rockets may be needed. But before we get to rockets, let’s talk about more exotic solutions for deceleration: magnetic sails and Bussard ramjets. How might those work?\n\n\n\n\n---\n\n\n**Anders**: The Bussard ramjet is a classic idea for powering an interstellar starship: instead of lugging around a lot of reaction mass (the reason the rocket equation is so painful), why not gather interstellar gas and burn it in a fusion reactor? Unfortunately this requires a pretty high speed in order to get an appreciable mass inflow, and constructing a ramscoop that sucks up enough mass is daunting. It would probably have to be a vast electromagnetic funnel, requiring significant power to generate (and then some more power to compress the input for fusion). Most calculations in the literature end up with the depressing conclusion that it would not be feasible unless the scoop and fusion reaction were very effective, and the exhaust speed very high.\n\n\nHowever, one might of course turn the thing around and instead use the funnel to slow the ship against the interstellar medium. Dana Andrews and Robert Zubrin suggested using a magnetic sail, where the field from a superconducting loop deflects charged particles and hence experiences drag. 
This might be quite effective for slowing relativistic probes; they get a result that implies that a 99% c probe could be slowed to typical interplanetary speeds in about two centuries.\n\n\nFor slowing in intergalactic space where gas is thin ramjets or magnetic sails are of course not very effective until the probe has come very close to the destination (however, if it takes centuries to brake, the braking distance is still merely a few hundred light-years). The lack of reaction mass is very enticing. In our paper we did not explore these possibilities since we wanted a rather conservative estimate, but it looks likely that magnetic sails are worth a serious look in further modelling.\n\n\n\n\n---\n\n\n**Luke**: Why do you hate the relativistic rocket equation? How did you come up with your estimates for the energy efficiency of future rocket designs that could be used for deceleration?\n\n\n\n\n---\n\n\n**Anders**: The rocket equation states that the amount of reaction mass you need to expel grows exponentially with the desired ending velocity – this is why standard rockets are mostly fuel and the payload small. You can get better performance by expelling the reaction mass faster, up to the limit of light-speed (which would require a rocket that expels photons).\n\n\nThe relativistic rocket equation is even worse: as you speed up a lot of the energy turns into mass of the rocket rather than velocity. No amount of energy will get you beyond lightspeed, and as you approach it the acceleration decreases (as seen by an outside observer; somebody onboard the rocket will not notice anything). 
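The figures quoted earlier (~0.5% of rest-mass energy at 0.1 c, ~15% at 0.5 c, and several times the rest mass near 0.99 c) and the exponential growth of reaction mass can be checked against the standard formulas. This sketch is mine, not the paper's; kinetic energy in units of rest-mass energy is γ − 1, and the relativistic rocket's mass ratio generalizes Tsiolkovsky's exponential:

```python
import math

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def kinetic_energy_fraction(beta):
    """Kinetic energy in units of rest-mass energy: E_k / (m c^2) = gamma - 1."""
    return gamma(beta) - 1.0

def mass_ratio_classical(dv, ve):
    """Tsiolkovsky: m0/m1 = exp(dv / ve) — reaction mass grows exponentially."""
    return math.exp(dv / ve)

def mass_ratio_relativistic(beta, ve_beta):
    """Relativistic rocket: m0/m1 = ((1+beta)/(1-beta)) ** (1 / (2*ve_beta)),
    with exhaust speed ve_beta in units of c; reduces to the classical form
    at small beta."""
    return ((1.0 + beta) / (1.0 - beta)) ** (1.0 / (2.0 * ve_beta))

# gamma - 1 at 0.99 c is about 6.1, of order the "7 times the rest mass"
# quoted above; 0.5 c gives ~0.155, and 0.1 c gives ~0.005.
for beta in (0.1, 0.5, 0.99):
    print(beta, round(kinetic_energy_fraction(beta), 3))
```

As β → 1 the Lorentz factor, and with it the energy bill per unit of added speed, diverges, which is exactly the behaviour described here.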
So the overall energy efficiency becomes worse and worse; unless you really want to get to your destination fast it might not be worth it.\n\n\nThe energy estimates were taken from the literature; see our citations.\n\n\nThis is one area where engineering realities might make designs less efficient, and it does have a big effect: a factor of 50% lower efficiency will square the initial mass to payload ratio. Ouch. Still, given the enormous amount of energy, mass and time available, the end result is not too different: a very large number of probes can be sent by the resources of a single solar system over astronomically insignificant timescales. It is practically irrelevant if it takes 15 years instead of 6 hours to launch the probes. Inefficiencies, however, start to bite more sharply the larger the replicator is: in the 500 ton case a 45% less efficient fusion reactor means a complete launch of 1.52 billion probes takes a billion years. So the lesson is clearly that boosting efficiency and/or producing smaller payloads has a big effect. If someone were to argue against our analysis, they should likely focus here, looking for fundamental limits of efficiency or credible payload sizes. However, as discussed above, there might be alternatives like magnetic sails that vastly improve efficiency too.\n\n\n\n\n---\n\n\n**Luke**: Which few “next analyses” would be most desirable, from your perspective, for helping to answer the question of how likely it is that an AGI-empowered Earth-based civilization could colonize the galaxy, the supercluster, or the observable universe?\n\n\n\n\n---\n\n\n**Anders**: Much of the work can be done using exploratory engineering: using known physics, can we make designs that perform well enough to fit into the overall mission profile?
They do not have to be optimal to prove that it could be done by an advanced civilisation (indeed, when thinking about the Dyson shell material requirements we considered a “steampunk” approach consisting of iron mirrors focusing sunlight on boilers with water or gas-filled turbines, in order to check that even if Mercury or similar materials sources lacked elements for big photovoltaics, large-scale energy production was still possible). The more detail, the more convincing, but effort should mainly be directed at the big bottlenecks of efficiency or where our understanding of engineering is a bit weak. (More stars = higher priority.)\n\n\n★ Obviously, better designs for self-replicating installations with materials closure in generic space environments are nice: the classic NASA lunar factory study is ancient by now. This also helps understand constraints on how quickly industrialisation of a system could be done, whether there are large first-mover advantages in the solar system that might pose arms race temptations, and gives some firmer numbers on seed sizes and replication times.\n\n\n★ Understanding the AI needs for space mining and engineering would be useful. Full AGI is by assumption able to run the infrastructure we require, but even lower-order intelligence might be adequate: many animals build elaborate structures adaptively or navigate in complex environments without much general intelligence. This also helps estimate the importance of AI and AGI for human space industry.\n\n\n★★ Relativistic mass drivers look feasible but cumbersome. We think investigating the limits and possibilities of laser-powered launching might be more productive.\n\n\n★★★ We need a better understanding of the large particle distribution in the interstellar and intergalactic medium.
Much of probe design hinges on whether to attempt a shielded probe or just send a large number of expendable probes.\n\n\n★★ Slowing down using magnetic sails or lasers appears promising, but requires a good power source that can last for a very long time for intergalactic trips, presumably induced fission, fusion or antimatter. How do long-term storable energy sources scale? What are their minimum masses?\n\n\n★ Automated navigation and colonization requires automated astronomy. We assumed the probes knew where they were going, but it would be interesting to analyse automated colonisation planning. Large orbiting infrastructures can likely act as very powerful telescopes.\n\n\n★ From a Fermi question perspective it would be useful to investigate further the properties of spread in the younger universe: are the shorter distances outweighed by higher densities?\n\n\n★★ From a strategic perspective it would be useful to investigate whether this type of replicating probe can be successfully prevented from invading an occupied galaxy or not.
Much of long-term strategy may depend on whether invaders or defenders have the advantage, including whether very large singleton systems are possible or whether a significant amount of resources has to be spent on defence.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Anders!\n\n\nThe post [Anders Sandberg on Space Colonization](https://intelligence.org/2014/03/02/anders-sandberg/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-02T22:00:03Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "cb3ae0c6f4d196652a0aa8690df38c43", "title": "The world’s distribution of computation (initial findings)", "url": "https://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/", "source": "miri", "source_type": "blog", "text": "What is the world’s current distribution of computation, and what will it be in the future?\n\n\nThis question is relevant to several issues in AGI safety strategy. To name just three examples:\n\n\n* If a large government or corporation wanted to quickly and massively upgrade its computing capacity so as to make a push for AGI or WBE, how quickly could it do so?\n* If a government thought that AGI or WBE posed a national security threat or global risk, how much computation could it restrict, how quickly?\n* How much extra computing is “immediately” available to a successful botnet or government, simply by running existing computers near 100% capacity rather than at current capacity?\n\n\nTo investigate these questions, MIRI recently contracted Vipul Naik to gather data on the world’s current distribution of computation, including current trends. This blog post summarizes our initial findings by briefly responding to a few questions. Naik’s complete research notes are available **[here](http://intelligence.org/wp-content/uploads/2014/02/Naik-Distribution-of-Computation.pdf)** (22 pages).
This work is meant to provide a “quick and dirty” launchpad for future, more thorough research into the topic.\n\n\n\n \n\n\n**Q**: *How much of the world’s computation is in high-performance computing clusters vs. normal clusters vs. desktop computers vs. other sources?*\n\n\n**A**: Computation is split between application-specific integrated circuits (ASICs) and general purpose computing: According to Hilbert & Lopez ([2011a](http://www.uvm.edu/~pdodds/files/papers/others/2011/hilbert2011a.pdf), [2012a](http://commonsenseatheism.com/wp-content/uploads/2014/02/Hilbert-Lopez-How-to-Measure-the-World’s-Technological-Capacity-part-1.pdf), [2012b](http://commonsenseatheism.com/wp-content/uploads/2014/02/Hilbert-Lopez-How-to-Measure-the-World’s-Technological-Capacity-part-2.pdf)), the fraction of computation done by general-purpose computing declined from 40% in 1986 to 3% in 2007. The trend line suggests further decline.\n\n\nWithin general-purpose computing, the split for the year 2007 is as follows:\n\n\n* For installed capacity: 66% PCs (incl. 
laptops), 25% videogame consoles, 6% mobile phones/PDAs, 3% servers and mainframes, 0.03% supercomputers, 0.3% pocket calculators.\n* For effective gross capacity: 52% PCs, 20% videogame consoles, 13% mobile phones/PDAs, 11% servers and mainframes, 4% supercomputers, 0% pocket calculators.\n\n\nFor more detailed data, see Section 2.2 of Naik’s research notes and Section E of [Hilbert & Lopez (2011b)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Hilbert-Lopez-Supporting-online-material-for-The-Worlds-Technological-Capacity.pdf).\n\n\n\n**Q**: *What is it being used for, by whom, where?*\n\n\n**A**: See the answer above, plus Section 3 of Naik’s research notes.\n\n\n\n**Q**: *How much capacity is added per year?*\n\n\n**A**: Growth rates and doubling periods are as follows, using data from 1986-2007:\n\n\n* General-purpose computing capacity: growth rate 58% per annum, doubling period 18 months (see Section 2.2 of Naik’s research notes).\n* Communication: growth rate 28% per annum, doubling period 34 months (see Section 2.3 of Naik’s research notes).\n* Storage: growth rate 23% per annum, doubling period 40 months (see Section 2.4 of Naik’s research notes).\n* Application-specific computing: growth rate 83% per annum, doubling time 14 months (see Section 2.2 of Naik’s research notes).\n\n\nBreakdown of data by time periods is available in [Hilbert & Lopez (2011a)](http://www.uvm.edu/~pdodds/files/papers/others/2011/hilbert2011a.pdf), and the most important quotes are included in the relevant sections of Naik’s research notes.\n\n\n\n**Q**: *How quickly could capacity be scaled up (in the short or medium term) if demand for computing increased?*\n\n\n**A**: The semiconductor industry is quite responsive to changes in demand, and catches up with book-to-bill ratios as large as 1.4 within 6 months (see Section 3.2 of Naik’s research notes).
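As a quick consistency check (my arithmetic, not part of Naik’s notes), the doubling periods listed above follow directly from the growth rates via ln 2 / ln(1 + r):

```python
import math

def doubling_period_months(annual_growth_rate):
    """Months required for capacity to double at a constant annual growth rate."""
    return 12 * math.log(2) / math.log(1 + annual_growth_rate)

for name, rate in [("General-purpose computing", 0.58),
                   ("Communication", 0.28),
                   ("Storage", 0.23),
                   ("Application-specific computing", 0.83)]:
    print(f"{name}: {doubling_period_months(rate):.0f} months")
# Matches the reported 18, 34, 40, and 14 months.
```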
In addition, the fact that Litecoin, an allegedly ASIC-resistant substitute for Bitcoin, already has ASICs about to be shipped (within two years of launch) also suggests relatively rapid turnaround given large enough economic incentives. In the case of high-frequency trading (HFT), huge investments in nanosecond computing and shaving off milliseconds from Chicago-New York and New York-London cables also suggest quick responsiveness to large incentives.\n\n\n\n**Q**: *How much computation would we expect to be available from custom hardware (FPGAs/ASICs and their future analogs)?*\n\n\n**A**: An increasing fraction (note the decline in general-purpose computing’s share from 40% in 1986 to 3% in 2007). However, hardware like ASICs and FPGAs can’t really be repurposed for other uses, so existing units don’t help that much with new tasks.\n\n\n\n**Q**: *What is the state of standard conventions for reporting data on such trends?*\n\n\n**A**: The work of Hilbert, Lopez, and others may eventually lead to uniform conventions for reporting and communicating the data, which would allow for a more informed discussion of these trends. However, Martin Hilbert in particular is skeptical of standardization in the near future, although he believes it possible in principle; see [Hilbert (2012)](http://ijoc.org/index.php/ijoc/article/download/1318/746) for more.
On the other hand, [Dienes (2012)](http://ijoc.org/index.php/ijoc/article/download/1357/737) argues for standardization.\n\n\nThe post [The world’s distribution of computation (initial findings)](https://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-03-01T01:49:59Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "14a4d995d9cb2e89b178643ba4294266", "title": "Nik Weaver on Paradoxes of Rational Agency", "url": "https://intelligence.org/2014/02/24/nik-weaver-on-paradoxes-of-rational-agency/", "source": "miri", "source_type": "blog", "text": "![Nik Weaver portrait](https://intelligence.org/wp-content/uploads/2014/02/Weaver_w150.jpg)[Nik Weaver](http://www.math.wustl.edu/~nweaver/) is a professor of mathematics at Washington University in St. Louis. He did his graduate work at Harvard and Berkeley and received his Ph.D. in 1994. His main interests are functional analysis, quantization, and the foundations of mathematics. He is best known for his work on independence results in C\\*-algebra and his role in the recent solution of the Kadison-Singer problem. His most recent book is *[Forcing for Mathematicians](http://www.amazon.com/Forcing-Mathematicians-Nik-Weaver/dp/9814566950/).*\n\n\n\n**Luke Muehlhauser**: In [Weaver (2013)](http://arxiv-web.arxiv.org/abs/1312.3626) you discuss some paradoxes of rational agency. Can you explain roughly what these “paradoxes” are, for someone who might not be all that familiar with provability logic?\n\n\n\n\n---\n\n\n**Nik Weaver**: Sure. First of all, these are “paradoxes” in the sense of being highly counterintuitive — they’re not outright contradictions.\n\n\nThey all relate to the basic Löbian difficulty that if you reason within a fixed axiomatic system S, and you know that some statement A is provable within S, you’re generally not able to deduce that A is true. 
This may be an inference that you and I would be willing to make, but if you try to build it into a formal system then the system becomes inconsistent. So, for a rational agent who reasons within a specified axiomatic system, knowing that a proof exists is not as good as actually having a proof.\n\n\nThis leads to some very frustrating consequences. Let’s say I want to build a spaceship, but first I need to be sure that it’s not going to blow up. I have an idea about how to prove this, but it’s extremely tedious, so I write a program to work out the details of the proof and verify that it’s correct. The good news is that when I run the program, it informs me that it was able to fill in the details successfully. The bad news is that I now know that there is a proof that the ship won’t blow up, but I still don’t know that the ship won’t blow up! I’m going to have to check the proof myself, line by line. It’s a complete waste of time, because I *know* that the program functions correctly (we can assume I’ve proven this), so I *know* that the line by line verification is going to check out, but I still have to do it.\n\n\nYou may say that I have been programmed badly. Whoever wrote my source code ought to have allowed me to accept “there is a proof that the ship won’t blow up” as sufficient justification for building the ship. This can be a general rule: for any statement A, let “there exists a proof of A” license all the actions that A licenses. We’re not contradicting Löb’s theorem — we still can’t deduce A from knowing there is a proof of A — but we’re finessing it by stipulating that knowing there’s a proof of A is good enough. But there are still problems. Imagine that I can prove that if the Riemann hypothesis is true, then the ship won’t blow up, and if it’s false then there exists a proof that the ship won’t blow up. Then I’m in a situation where I know that *either* A is true *or* there is a proof that A is true, but I don’t know which one. 
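Schematically (my notation, not Nik’s), writing Prov(·) for the provability predicate and B for “the ship won’t blow up”, the predicament is:

```latex
% What the agent can prove (RH = the Riemann hypothesis):
%   RH -> B                 (if RH holds, the ship won't blow up)
%   not-RH -> Prov(B)       (if RH fails, there exists a proof of B)
\vdash (\mathrm{RH} \to B) \land (\lnot\mathrm{RH} \to \mathrm{Prov}(\ulcorner B \urcorner))
% Hence the agent can prove the disjunction:
\vdash B \lor \mathrm{Prov}(\ulcorner B \urcorner)
% ...but it proves neither disjunct on its own, so neither the rule
% "B licenses building" nor "Prov(B) licenses building" can fire.
```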
So even with the more liberal licensing condition, I still can’t build my lovely spaceship. \n\n\n\n\n\n\n---\n\n\n**Luke**: Section 2 of the paper develops an approach to solving this kind of problem via the notion of *warranted assertibility*. How does that work?\n\n\n\n\n---\n\n\n**Nik**: “Warranted assertibility” is a philosophical term that, in mathematical settings, refers to statements which can be asserted with perfect rational justification. My idea was that this concept could help with these Löbian type problems because it is nicer in some ways than formal provability. For instance, in any consistent axiomatic system there are statements A(n) with the property that for each value of n we can prove A(n), yet there is no single proof of the statement “for all n, A(n)”. However, it is always the case that if each A(n) is warranted then “for all n, A(n)” is also warranted.\n\n\nSo I introduce a predicate Box(A) to express “A is warranted”, and given a suitable axiomatization (which is a little subtle) I show that we can always infer Box(A) from knowing that there is a proof of A. We still can’t infer A — Löb’s theorem is not violated — but, along the lines I suggested earlier, we can program our agents to accept Box(A) as licensing the same actions that A licenses. Thus, it suffices to prove that the statement “the ship won’t blow up” is warranted. If you go this route then all the paradoxes seem to disappear. 
If an agent is able to reason about warranted assertibility, then it gets to perform all the actions it intuitively ought to be allowed to perform.\n\n\n\n\n---\n\n\n**Luke**: Later in the paper, you write:\n\n\n\n> [[Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf)] involves a rational agent who is seeking… general permission to delegate the performance of actions to a second agent… The same issues appear in the case of an intelligent machine that is considering modifying its own source code (in order to make itself more intelligent, say). Before doing this it would want to be sure that its post-modification state will reason correctly, i.e., any theorem it proves after the modification should actually be true. This runs into the familiar Löbian difficulty that the agent is not even able to affirm the soundness of its pre-modification reasoning.\n> \n> \n> … In Section 4 of [Yudkowsky & Herreshoff (2013)] two constructions are presented of an infinite sequence of independently acting agents, each of whom can give a provable justification for activating the next one, yet none of whose deductive power falls below an initially prescribed level. The constructions are clever but they have a nonstandard flavor. Probably this is unavoidable, unless the problem description were to be altered in some fundamental way. In the remainder of this section we present another solution which uses nonstandard assertibility reasoning.\n> \n> \n\n\nCan you describe the solution you go on to present — again, in terms that someone lacking a background in provability logic might be able to understand, at least in an intuitive way?\n\n\n\n\n---\n\n\n**Nik**: I will try! First, in regard to my comment that only nonstandard solutions seem possible, I should mention that there is now [a theorem due to Herreshoff, Fallenstein, and Armstrong](http://intelligence.org/wp-content/uploads/2013/12/procrastination-paradox.pdf) that makes this idea precise.
The point is that it’s not clear I *should* be allowed to build a duplicate copy of myself and delegate my goals to it, for fear that it would then do the same thing, and so on ad infinitum (the “procrastination problem”).\n\n\nNow, going back to my spaceship example, let B be the statement that the ship won’t blow up. A basic feature of assertibility is that we have a general law that A implies Box(A), but we don’t have the converse. So Box(B) is “weaker” than B, and my proposal was that we should allow our agents to accept Box(B) to license the same action (building the spaceship) that B does. This helps with Löbian issues because the statement that B is provable doesn’t imply B, but it does imply Box(B).\n\n\nBut if Box(B) is good enough, why not accept the even weaker statement Box(Box(B))? Or Box(Box(Box(B)))? Let me write Box^k(B) for the general expression with k boxes. The idea about delegating tasks to successors is that if I am willing to accept Box^9(B), say, then I should be willing to build a successor who can’t accept Box^9(B) but can accept Box^8(B). I can reason that if my successor proves Box^8(B), then Box^8(B) is provable, and therefore Box^9(B) is true, which is good enough for me. Then my successor should be happy with a subsequent model that is required to prove Box^7(B), and so on. So we can create finite chains in this way, without doing anything nonstandard.\n\n\nThe way to get infinite chains is to introduce a nonstandard “infinite” natural number kappa, and start with an initial agent who is required to verify Box^kappa(B). Then it can build a successor who is required to verify Box^{kappa – 1}(B), and so on. All the agents in the resulting sequence have the same formal strength because a simple transposition (replacing kappa with kappa + 1) turns each agent into its predecessor, without affecting what they’re able to prove.\n\n\n\n\n---\n\n\n**Luke**: Now let me ask about research of this type in general.
What’s your model for how this kind of research might be useful? It’s pretty distant from current methods for constructing actual software agents.\n\n\n\n\n---\n\n\n**Nik**: Yes, it’s very theoretical, no question. One model is pure mathematics, which is actually my background. In pure math, most of the time we’re working on questions that don’t have any immediate practical applications. I think experience has shown that this kind of work occasionally becomes very important, while most of it does not, but it’s hard to predict which directions are going to pan out.\n\n\nOr, you know, maybe we’re all living inside a simulation like Nick Bostrom says. In which case, the fact that we’re trying to figure out how to help machines make themselves smarter may be a clue as to the nature of that simulation.\n\n\n\n\n---\n\n\n**Luke**: What do you think are some “promising avenues” for making additional progress on reasoning our way through these “paradoxes” of rational agency?\n\n\n\n\n---\n\n\n**Nik**: I don’t know. If you’re talking about the Löbian obstacle, I feel that something essential is missing from our current problem description. Among the directions that are available now my preference is for Benja Fallenstein’s parametric polymorphism. But I also think we should be looking at ways of modifying the problem description.\n\n\n\n\n---\n\n\n**Luke**: What is the origin of your interest in paradoxes of rational agency?\n\n\n\n\n---\n\n\n**Nik**: I’ve been thinking about issues surrounding truth and circularity for around five years. (I just checked: my [earliest paper](http://arxiv.org/pdf/0905.1681.pdf) on the subject was posted on the Mathematics arXiv in May 2009.) This was purely theoretical work, coming from an interest in the foundations of mathematics and logic.\n\n\nMy involvement in paradoxes of rational agency was sparked by [Eliezer Yudkowsky and Marcello Herreshoff’s paper on the Lobian obstacle](https://intelligence.org/files/TilingAgents.pdf). 
In fact I think I got the term “paradoxes of rational agency” from that paper. When I read it I had the immediate reaction that my ideas about assertibility should be relevant, so that’s how I got interested.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Nik!\n\n\nThe post [Nik Weaver on Paradoxes of Rational Agency](https://intelligence.org/2014/02/24/nik-weaver-on-paradoxes-of-rational-agency/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-24T23:36:47Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "4996893f2e8e6025cb5945421600c01c", "title": "MIRI’s May 2014 Workshop", "url": "https://intelligence.org/2014/02/22/miris-may-2014-workshop/", "source": "miri", "source_type": "blog", "text": "![december-2013-workshop-header-721px](https://intelligence.org/wp-content/uploads/2014/02/december-2013-workshop-header-721px.jpg)\nFrom May 3–11, MIRI will host its **7th Workshop of Logic, Probability, and Reflection**. This workshop will focus on decision theory and tiling agents.\n\n\nThe participants — all veterans of [past workshops](https://intelligence.org/workshops/) — are:\n\n\n* [Mihály Barasz](http://www.quora.com/Mihaly-Barasz) (Google)\n* [Paul Christiano](http://rationalaltruist.com/) (UC Berkeley)\n* [Benja Fallenstein](http://lesswrong.com/user/Benja/submitted/) (Bristol U)\n* [Marcello Herreshoff](http://www.linkedin.com/pub/marcello-herreshoff/0/8b4/51a) (Google)\n* [Patrick LaVictoire](http://www.math.wisc.edu/~patlavic/) (U Wisconsin)\n* [Nate Soares](http://so8r.es/) (Google)\n* [Nisan Stiennon](http://lesswrong.com/user/Nisan/submitted/) (Stanford)\n* [Qiaochu Yuan](http://math.berkeley.edu/~qchu/) (UC Berkeley)\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (MIRI)\n\n\nIf you have a strong mathematics background and might like to attend a future workshop, [apply today](https://intelligence.org/get-involved/)! 
Even if there are no upcoming workshops that fit your schedule, please **still apply**, so that we can notify you of other workshops (long before they are announced publicly).\n\n\nThe post [MIRI’s May 2014 Workshop](https://intelligence.org/2014/02/22/miris-may-2014-workshop/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-23T02:04:23Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "2b2ed79f44beb3f63c1553c862021e4a", "title": "Conversation with Holden Karnofsky about Future-Oriented Philanthropy", "url": "https://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/", "source": "miri", "source_type": "blog", "text": "Recently, Eliezer and I had an email conversation with Holden Karnofsky to discuss future-oriented philanthropy, including MIRI. The participants were:\n\n\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (research fellow at MIRI)\n* [Luke Muehlhauser](http://lukeprog.com/) (executive director at MIRI)\n* Holden Karnofsky (co-CEO at [GiveWell](http://www.givewell.org/))\n\n\nWe then edited the email conversation into a streamlined conversation, available [**here**](http://intelligence.org/wp-content/uploads/2014/02/Conversation-with-Holden-Karnofsky-about-Future-Oriented-Philanthropy.pdf).\n\n\nSee also four previous conversations between MIRI and Holden Karnofsky: on [existential risk](http://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/), on [MIRI strategy](http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/), on [transparent research analyses](http://intelligence.org/2013/08/25/holden-karnofsky-interview/), and on [flow-through effects](http://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/).\n\n\nThe post [Conversation with Holden Karnofsky about Future-Oriented 
Philanthropy](https://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-22T03:20:21Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "cf6cd4320c76087c5e59a29f2813f30a", "title": "John Baez on Research Tactics", "url": "https://intelligence.org/2014/02/21/john-baez-on-research-tactics/", "source": "miri", "source_type": "blog", "text": "![John Baez portrait](https://intelligence.org/wp-content/uploads/2014/02/Baez_w150.jpg)[John Baez](http://math.ucr.edu/home/baez/) is a professor of mathematics at U.C. Riverside. Until recently he worked on higher category theory and quantum gravity. His internet column [This Week’s Finds](http://math.ucr.edu/home/baez/TWF.html) dates back to 1993 and is sometimes called the world’s first blog. In 2010, concerned about climate change and the future of the planet, he switched to working on more practical topics and started the [Azimuth Project](http://www.azimuthproject.org/azimuth/show/HomePage), an international collaboration to create a focal point for scientists and engineers interested in saving the planet. His research now focuses on the math of [networks](http://math.ucr.edu/home/baez/networks/) and [information theory](http://math.ucr.edu/home/baez/information/), which should help us understand the complex systems that dominate biology and ecology.\n\n\n\n**Luke Muehlhauser**: In a previous interview, I [asked Scott Aaronson](http://intelligence.org/2013/12/13/aaronson/) which “object-level research tactics” he finds helpful when trying to make progress in theoretical research, and I provided some examples. Do you have any comments on the research tactics that Scott and I listed? Which recommended tactics of your own would you add to the list?\n\n\n\n\n---\n\n\n**John Baez**: What do you mean by “object-level” research tactics?
I’ve got dozens of tactics. Some of them are ways to solve problems. But equally important, or maybe more so, are tactics for coming up with problems to solve: problems that are interesting but still easy enough to solve. By “object-level”, do you mean the former? \n\n\n\n\n\n\n---\n\n\n**Luke**: Both! Conceiving of — and crisply posing — good research problems can often be even more important than solving previously-identified research problems.\n\n\n\n\n---\n\n\n**John**: Okay. Here are some of my tactics.\n\n\n(1) Learn a lot. Try to understand how the whole universe works, from the philosophical, logical, mathematical and physical aspects to chemistry, biology, and the sciences based on those, to the historical sciences such as cosmology, paleontology, archaeology and history, to the social sciences such as psychology, sociology, anthropology, politics and economics, to the aspects that are captured best in literature, art and music.\n\n\nIt’s a never-ending quest, and obviously it pays to specialize and become more of an expert on a few things – but the more angles you can take on any subject, the more likely you are to stumble on good questions or good answers to existing questions. Also, when you get stuck on a problem, or get tired, it can be really re-energizing to learn new things.\n\n\n(2) Keep synthesizing what you learn into terser, clearer formulations. The goal of learning is not to memorize vast amounts of data. You need to do serious data compression, and filter out the noise. Very often people will explain things to you in crappy ways, presenting special cases and not mentioning the general rules, stating general rules incorrectly, and so on.\n\n\nThis process goes on forever. When you first learn algebraic topology, for example, they teach you homology theory. At the beginner’s level, this is presented as a rather complicated recipe for taking a topological space and getting a list of groups out of it.
By looking at examples you get insight into what these groups do: the nth one counts the n-dimensional holes, in some sense. You learn how to use them to solve problems, and how to efficiently compute them.\n\n\nBut later—much later, in my case—you learn that algebraic topology of this sort is not really about topological spaces, but about something more abstract, called “homotopy types”. This is a discovery that happened rather slowly. It crystallized around 1968, when a guy named Quillen wrote a book on “homotopical algebra”. It’s always fascinating when this happens: when people in some subject learn that its proper object of study is not what they had thought!\n\n\nBut even this was just the beginning: a lot has happened in math since the 1960s. Shortly thereafter, Grothendieck came along and gave us a new dream of what homotopy types might actually be. Very roughly, he realized that they should show up naturally if we think of “equality” as a process—the process of proving two things are the same—rather than a static relationship.\n\n\nI’m being pretty vague here, but I want to emphasize that this was a very fundamental discovery with widespread consequences, not a narrow technical thing.\n\n\nFor a long time people have struggled to make Grothendieck’s dream precise. I was involved in that myself for a while. But in the last 5 years or so, a guy named Voevodsky made a lot of progress by showing us how to redo the foundations of mathematics so that instead of treating equality as a mere relationship, it’s a kind of process. This new approach gives an alternative to set theory, where we use homotopy types right from the start as the basic objects of mathematics, instead of sets.
It will take about a century for the effects of this discovery to percolate through all of math.\n\n\nSo, you see, by taking something important but rather technical, like algebraic topology, and refusing to be content with treating it as a bunch of recipes to be memorized, you can dig down into deep truths. But it takes great persistence. Even if you don’t discover these truths yourself, but merely learn them, you have to keep simplifying and unifying.\n\n\n(3) Look for problems, not within disciplines, but in the gaps between existing disciplines. The division of knowledge into disciplines is somewhat arbitrary, and people put most of their energy into questions that lie squarely within disciplines, so it shouldn’t be surprising that many interesting things are lurking in the gaps, waiting to be discovered.\n\n\nAt this point, tactics (1) and (2) really come in handy. If you study lots of subjects and keep trying to distill their insights into terse, powerful formulations, you’re going to start noticing points of contact between these subjects. Sometimes these will be analogies that deserve to be made precise. Sometimes people in one subject know a trick that people in some other subject could profit from. Sometimes people in one subject have invented the hammer, and people in another have invented the nail—and neither know what these things are good for!\n\n\n(4) Talk to lots of people. This is a great way to broaden your vistas and find connections between seemingly different subjects.\n\n\nTalk to the smartest people who will condescend to talk to you. Don’t be afraid to ask them questions. But don’t bore them. Smart people tend to be easily bored. Try to let them talk about what’s interesting to them, instead of showing off and forcing them to listen to your brilliant ideas. But make sure to bring them some “gifts” so they’ll want to talk to you again. 
“Gifts” include clear explanations of things they don’t understand, and surprising facts—little nuggets of knowledge.\n\n\nOne of my strategies for this was to write [This Week’s Finds](http://math.ucr.edu/home/baez/TWF.html), explaining lots of advanced math and physics. You could say that column is a big pile of gifts. I started out as a nobody, but after ten years or so, lots of smart people had found out about me. So now it’s pretty easy for me to blunder into any subject, write a blog post about it, and get experts to correct me or tell me more. I also get invited to give talks, where I meet lots of smart people.\n\n\n\n\n---\n\n\n**Luke**: You’ve explained some tactics for how to come up with problems to solve. Once you generate a good list, how do you choose among them?\n\n\n\n\n---\n\n\n**John**: Here are two bits of advice on that.\n\n\n(1) Actually *write down lists of problems*.\n\n\nWhen I was just getting started, I had a small stock of problems to think about – so small that I could remember most of them. Many were problems I’d heard from other people, but most of those were too hard. I would also generate my own problems, but they were often either too hard, too vague, or too trivial.\n\n\nIn more recent years I’ve been able to build up a huge supply of problems to think about. This means I need to actually list them. Often I generate these lists using the ‘data compression’ tactic I mentioned in part (2) of my last answer. 
When I learn stuff, I ask:\n\n\n* Is this apparently new concept or fact a special case of some concept or fact I already know?\n* Given two similar-sounding concepts or facts, can I find a third having both of these as special cases?\n* Can I use the analogy between X and Y to do something new in subject Y that’s analogous to something people have already done in subject X?\n* Given a rough ‘rule of thumb’, can I state it more precisely so that it holds always, or at least more often?\n\n\nas well as plenty of more specific questions.\n\n\nSo, instead of being ‘idea-poor’, with very few problems to work on, I’m now ‘idea-rich’, and the challenge is keeping track of all the problems and finding the best ones.\n\n\nI always carry around a notebook. I write down questions that seem interesting, especially when I’m bored. The mere act of writing them down either makes them less vague or reveals them to be hopelessly fuzzy. Sometimes I can solve a problem just by taking the time to state it precisely. And the act of writing down questions naturally triggers more questions.\n\n\nBesides questions, I like ‘analogy charts’, consisting of two or more columns with analogous items lined up side by side. You can see one near the bottom of my 2nd article on [quantropy](http://johncarlosbaez.wordpress.com/2012/02/10/quantropy-part-2/). Quantropy is an idea born of the analogy between thermodynamics and quantum mechanics. This is a big famous analogy, which I’d known for decades, but writing down an analogy chart made me realize there was a hole in the analogy. In thermodynamics we have entropy, so what’s the analogous thing in quantum mechanics? It turns out there’s an answer: quantropy.\n\n\nI later wrote [a paper with Blake Pollard](http://arxiv.org/abs/1311.0813) on quantropy, but I gave a link to the blog article because that’s another aspect of how I keep track of questions. 
I don’t just write lists for myself—I write blog articles about things that I want to understand better.\n\n\n(2) Only work on problems when you think they’re important and you see how to solve them.\n\n\nThis tactic isn’t for everyone, but it works for me. When I was just getting started I would try to solve problems that I had no idea how to solve. People who are good at puzzles may succeed this way, but I generally did not.\n\n\nIt turns out that for me, a better approach is to make long lists of questions, and keep thinking about them on and off for many years. I slowly make progress until—poof!—I think I see something new and important. Only then do I take a problem off the back burner and start intensely working on it.\n\n\nThe physicist John Wheeler put it this way: you should never do a calculation until you already know the answer. That’s a bit of an exaggeration, because it’s also good to fool around and see where things go. But there’s a lot more truth to it than you might think.\n\n\nFeynman had a different but related rule of thumb: he only worked on a problem when he felt he had an “inside track” on it—some insight or trick up his sleeve that nobody else had.\n\n\n\n\n---\n\n\n**Luke**: And once you’ve chosen a problem to solve, what are some of your preferred tactics for actually solving it?\n\n\n\n\n---\n\n\n**John**: By what I’ve said before, it’s clear that I get serious about a problem only after I have a darn good idea of how to solve it. At the very least, I *believe* I know what to do. So, I just do it.\n\n\nBut usually it doesn’t work quite that easily.\n\n\nIf you only officially tackle problems after absolutely every wrinkle has been ironed out by your previous musings, you’re being too cautious: you’ll miss working on a lot of interesting things. Many young researchers seem to fall prey to the opposite error, and waste time being completely stuck. The right balance lies in the middle. 
You break a problem down into sub-problems, and break those down into sub-subproblems… and you decide you’re ready to go when all these sub-subproblems seem likely to be doable, even before you’ve worked through the details.\n\n\nHow can you tell if they’re doable? This depends a lot on having previous experience with similar problems. If you’re a newbie, things that seem hard to you can be really easy to experts, while things that seem easy can turn out to be famously difficult.\n\n\nEven with experience, some of the sub-subproblems that seem likely to be routine will turn out to be harder than expected. That’s where the actual work comes in. And here it’s good to have lots of tricks. For example:\n\n\n(1) If you can’t solve a problem, there should be a similar problem that’s a bit easier. Try solving that. And if you can’t solve that one… use the same principle again! Keep repeating until you get down to something you can solve. Then climb your way back up, one step at a time.\n\n\nDon’t be embarrassed to simplify a problem to the point where you can actually do it.\n\n\n(2) There are lots of ways to make a problem easier. Sometimes you should consider a special case. In math there are special cases of special cases of special cases… so there’s a lot of room for exploration here. If you see how enough special cases work, you’ll get ideas that may help you for your original problem.\n\n\n(3) On the other hand, sometimes a problem becomes simpler when you generalize, leaving out potentially irrelevant details. Often people get stuck in clutter. But if it turns out the generalization doesn’t work, it may help you see which details were actually relevant.\n\n\n(4) Sometimes, instead of moving down or up the ladder of generality, it pays to move across, by considering an analogous problem in a related field.\n\n\n(5) Finally, a general hint: keep a written record of your efforts to solve a problem, including explanations of what didn’t work, and why. 
Look back at what you wrote from time to time. It’s amazing how often I come close to doing something right, forget about it, and come back later—sometimes years later—and see things from a slightly different angle, which makes everything fall into place. Failure can be just millimeters from success.\n\n\n\n\n---\n\n\n**Luke**: Thanks, John!\n\n\nThe post [John Baez on Research Tactics](https://intelligence.org/2014/02/21/john-baez-on-research-tactics/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-22T02:04:28Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "8f1988c9a2fd8c71aa8f46646335375a", "title": "2013 in Review: Friendly AI Research", "url": "https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/", "source": "miri", "source_type": "blog", "text": "This is the 4th part of my personal and qualitative [self-review of MIRI in 2013](http://intelligence.org/2013/12/20/2013-in-review-operations/), in which I review MIRI’s 2013 Friendly AI (FAI) research activities.[1](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_0_10726 \"What counts as “Friendly AI research” is, naturally, a matter of debate. For most of this post I’ll assume “Friendly AI research” means “what Yudkowsky thinks of as Friendly AI research,” with the exception of intelligence explosion microeconomics, for reasons given in this post.\")\n\n\n### Friendly AI research in 2013\n\n\n1. In early 2013, we [decided](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) to shift our priorities from research plus public outreach to a more exclusive focus on technical FAI research. This resulted in roughly as much *public-facing* FAI research in 2013 as in all past years combined.\n2. Also, our workshops succeeded in identifying candidates for hire. We expect to hire two 2013 workshop participants in the first half of 2014.\n3. 
During 2013, I learned many things about how to create an FAI research institute and FAI research field. In particular…\n4. MIRI needs to attract more experienced workshop participants.\n5. Much FAI research can be done by a broad community, and need not be labeled as FAI research. But, more FAI progress is made when the researchers themselves conceive of the research as FAI research.\n6. Communication style matters a lot.\n\n\n\n### The shift to Friendly AI research\n\n\nFrom MIRI’s founding in 2000 until our [strategic shift in early 2013](http://intelligence.org/2013/04/13/miris-strategy-for-2013/),[2](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_1_10726 \"Until early 2013, the organization currently named “Machine Intelligence Research Institute” was known as the “Singularity Institute for Artificial Intelligence.”\") we did some research and *much* public outreach (e.g. the [Singularity Summit](http://intelligence.org/singularitysummit/) and *[The Sequences](http://wiki.lesswrong.com/wiki/Sequences)*).[3](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_2_10726 \"From 2000-2004, “MIRI” was just Eliezer Yudkowsky, doing early FAI research. The organization began to grow in 2004, and by 2006 most efforts were outreach-related rather than research-related. This remained true until early 2013.\") In early 2013, we decided that enough outreach and movement-building had been done that we could productively shift to a *primary* focus on research, and Friendly AI research specifically.\n\n\nThe task before us was, essentially, to **create a new FAI research institute** (out of what had previously been primarily an outreach organization), and to **create a new field of FAI research**. 
We still have much to learn about how to achieve these goals (see below).\n\n\nOur initial steps were to (1) hold a series of [research workshops](http://intelligence.org/workshops/), and to (2) describe open problems in Friendly AI theory to potential research collaborators. Our workshops and open problem descriptions were aimed at **three goals in particular**. We wanted them to:\n\n\n1. help us identify researchers MIRI should hire to work full-time on Friendly AI theory,\n2. expose additional researchers to the Friendly AI research agenda, and\n3. spur concrete progress on open problems in Friendly AI.\n\n\nFirst, I’ll describe our 2013 Friendly AI research activities. After that, I’ll review “how good” I think these results are, and what lessons I’ve learned.\n\n\n#### The workshops\n\n\nThe workshops strategy had been suggested by the success of our one-week [November 2012 workshop](http://intelligence.org/workshops/#november-2012), which had been an experiment involving only four researchers, and had produced the core result of “[Definability of Truth in Probabilistic Logic](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf).”\n\n\nOur [first workshop of 2013](http://intelligence.org/workshops/#april-2013), held in April, was an attempt to tackle as many open problems as we could, with as many people as we could gather, to quickly learn which problems were most tractable and which researchers were most likely to contribute in the future. It involved 12 participants and lasted 3 weeks, though (due to scheduling constraints) only 5 researchers participated for the entire duration of the workshop. 
We learned a great deal about the workshop’s participants, and three problems in particular showed the most progress: Christiano’s “[Definability of Truth](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf)” framework, LaVictoire’s “[Robust Cooperation](https://intelligence.org/files/RobustCooperation.pdf)” framework, and Fallenstein’s “[parametric polymorphism](http://lesswrong.com/lw/e4e/an_angle_of_attack_on_/)” approach to the [Löbian obstacle for self-modifying systems](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/). The success of this workshop encouraged us to hold more such workshops, albeit at a smaller scale and with tighter research foci.\n\n\nOur [next workshop](http://intelligence.org/workshops/#july-2013), in July 2013, had 8 participants and lasted one week. It focused on issues related to logical omniscience and the Löbian obstacle / self-reflective agents, and produced less progress-per-day than the April workshop. Its chief result was described in a [blog post](http://lo-tho.blogspot.com/2013/08/progress-in-logical-priors.html) by participant Abram Demski.\n\n\nOur [September workshop](http://intelligence.org/workshops/#september-2013) focused instead on decision theory. It had 11 participants and lasted one week. Participants brainstormed “well-posed problems” in the area, built on LaVictoire’s robust cooperation framework, made some progress on formalizing [updateless decision theory](http://wiki.lesswrong.com/wiki/Updateless_decision_theory), and formulated additional toy problems such as the [Ultimate Newcomb’s Problem](http://lesswrong.com/lw/ila/the_ultimate_newcombs_problem/).\n\n\nOur [November workshop](http://intelligence.org/workshops/#november-2013) was our first workshop held outside of Berkeley. [FHI](http://www.fhi.ox.ac.uk/) graciously hosted us at Oxford University. 
As with the July workshop, this workshop focused on logical omniscience and self-reflective agents. There were 11 participants, and it lasted one week. November’s theoretical progress flowed into the progress made at our [December workshop](http://intelligence.org/workshops/#december-2013) (same topics, 13 participants, one week), which was captured in [7 new technical reports](http://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/).\n\n\nNext, some **basic statistics**:\n\n\n* We held 5 research workshops in 2013, with all but one of them being one week long.\n* These workshops were attended by 35 unique researchers, plus 7 first-day-only visitors (e.g. [Hannes Leitgeb](http://www.philosophie.uni-muenchen.de/lehreinheiten/logik_sprachphil/personen/hannes_leitgeb/index.html) and [Nik Weaver](http://www.math.wustl.edu/~nweaver/)).[4](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_3_10726 \"Some statistics about 2013’s 35 workshop participants: 15 have a PhD, three are women, and 3 hold a university faculty position of assistant professor or higher rank. In short, our workshop participants have thus far largely been graduate students, post-docs, and independent researchers. Among the 15 participants who have a PhD, 9 have a PhD in mathematics, 4 have a PhD in computer science, one has a PhD in cognitive science, and one has a joint PhD in philosophy and computer science.\")\n* For first-time attendees, the median reply to the question “How happy are you that you came to the workshop, 0-10?” was 8.5.\n* From the time it went live in March 2013 to the end of 2013, about a dozen people contacted us about our [Recommended Courses for MIRI Math Researchers](http://intelligence.org/courses/) page. However, we have reason to believe it has influenced the study patterns of a much larger number of people. Some MIRI supporters have told us they routinely point smart young acquaintances to that page. 
Moreover, the page received more unique pageviews in 2013 than (e.g.) our [Donate](https://intelligence.org/donate/) or [About](http://intelligence.org/about/) pages, despite not being linked from every page of the site like the Donate and About pages are. The Recommended Courses page [made it possible](http://lesswrong.com/lw/jg3/the_mechanics_of_my_recent_productivity/) for at least one person (Nate Soares) to quickly upgrade his math skills and attend a workshop in 2013, which he couldn’t have done before studying several of the textbooks on the Courses page.\n* From the time it went live in June 2013 to the end of 2013, we received 227 non-junk applications[5](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_4_10726 \"By “junk applications” I mean to include both spam applications and applications from people who are clearly incapable of math research, e.g. “Hello, I would love to come to America to learn algebra.”\") to attend future MIRI workshops, 47 of which are still being processed. So far, 60 applicants are ones we’ve deemed “promising,” 23 of whom attended a workshop in 2013. Of those 23, about half were researchers with whom we had little to no prior contact.\n\n\n \n\n\n#### Describing open problems in Friendly AI\n\n\nIn 2013, MIRI described open problems in Friendly AI (OPFAIs) to researchers via three standard methods: articles, talks, and tutorials at workshops.\n\n\nOn **OPFAI articles**: Yudkowsky’s article on “OPFAI #1” discussed [intelligence explosion microeconomics](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/) (*aka* AI takeoff dynamics), which I consider to be an open problem in “strategy research” rather than in Friendly AI theory, so I discussed it in a [previous post](http://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/). 
From my perspective, the first written OPFAI description of 2013 was on [logical decision theory](http://intelligence.org/2013/04/19/altairs-timeless-decision-theory-paper-published/). Alex Altair (then a MIRI researcher) described the problem in an April 2013 paper called “[A Comparison of Decision Algorithms on Newcomblike Problems](https://intelligence.org/files/Comparison.pdf).” This open problem had been described before, in [Less](http://lesswrong.com/lw/135/timeless_decision_theory_problems_i_cant_solve/) [Wrong](http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/) [posts](http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/) and in a [117-page technical report](https://intelligence.org/files/TDT.pdf), but Altair’s presentation of the issue was more succinct and formal than previous presentations had been.\n\n\nThe second written OPFAI description of 2013 was on the [tiling agents](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/) problem, and specifically the Löbian obstacle to tiling agents. Yudkowsky brought a draft of this paper to the April workshop, and heavily modified the draft as a result of the progress at that workshop, finally publishing the draft in June 2013. The third written OPFAI description of 2013, by Patrick LaVictoire and co-authors, was on the [robust cooperation](http://lesswrong.com/lw/hmw/robust_cooperation_in_the_prisoners_dilemma/) problem. The fourth written OPFAI description of 2013 was on [naturalized induction](http://lesswrong.com/lw/jd9/building_phenomenological_bridges/).\n\n\nBecause the tiling agents paper took ~2 months of FAI researcher time to produce, we decided to experiment with a [process](http://lesswrong.com/lw/jd9/building_phenomenological_bridges/a89w) that would minimize the amount of FAI-researcher-time required to produce new OPFAI descriptions. 
First, Yudkowsky brain-dumped the OPFAI to [a Facebook group](https://www.facebook.com/groups/233397376818827/). Then, Robby Bensinger worked with several others to produce Less Wrong posts that described the OPFAI more clearly. The first post produced via this process was published in December 2013: [Building Phenomenological Bridges](http://lesswrong.com/lw/jd9/building_phenomenological_bridges/). The rest of the posts explaining this OPFAI will be published in Q1 2014. Because we want to maximize the amount of FAI researcher hours that goes into FAI research rather than exposition, we hope to hire additional expository writing talent in 2014 (see our [Careers](http://intelligence.org/careers/) page).\n\n\nOn **OPFAI talks**: MIRI scheduled [two OPFAI talks](http://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/) in 2013. Yudkowsky’s Oct. 15th talk, “Recursion in rational agents: Foundations for self-modifying AI,” described both the robust cooperation and tiling agents problems to an audience at MIT. Two days earlier, (MIRI research associate) Paul Christiano gave a talk about probabilistic metamathematics at Harvard, following up on the earlier results from the “[Definability of Truth](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf)” paper.[6](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_5_10726 \"Probabilistic metamathematics is an OPFAI in itself, and also one possible path toward a solution to the tiling agents problem.\") Unfortunately, Yudkowsky’s talk was not recorded, but Christiano’s [was](http://intelligence.org/2013/10/23/probabilistic-metamathematics-and-the-definability-of-truth/).\n\n\nOn **OPFAI tutorials at workshops**: Each MIRI workshop in 2013 opened with a day or two of tutorials on the open problems being addressed by that workshop. 
These tutorials exposed ~35 researchers (participants and first-day visitors) to OPFAIs they weren’t previously very familiar with. (The others — e.g. Yudkowsky, Christiano, and Fallenstein — were already pretty familiar with the OPFAIs described in the tutorials.)\n\n\n#### How good are these results?\n\n\nFor comparison’s sake, **MIRI’s 2000-2012 FAI research** efforts consisted in:\n\n\n* Yudkowsky’s early research into the general “shape” of the Friendly AI challenge, resulting in publications such as “[Creating Friendly AI](https://intelligence.org/files/CFAI.pdf)” (2001), “[Coherent Extrapolated Volition](https://intelligence.org/files/CEV.pdf)” (2004), and “[Artificial Intelligence as a Positive and Negative Factor in Global Risk](https://intelligence.org/files/AIPosNegFactor.pdf)” (2008). These publications did not yet describe any OPFAIs as well-defined as the open problems described in [Altair (2013)](https://intelligence.org/files/Comparison.pdf), [Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf), or [LaVictoire et al (2013)](http://lesswrong.com/lw/hmw/robust_cooperation_in_the_prisoners_dilemma/).[7](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_6_10726 \"The open problems in these publications, too, need additional formalization. Such is the current state of research.\")\n* Yudkowsky’s early decision theory research, which resulted in TDT circa 2005, though this work wasn’t written up in much detail until 2009 ([1](http://lesswrong.com/lw/135/timeless_decision_theory_problems_i_cant_solve/), [2](http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/), [3](http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/)) and [2010](https://intelligence.org/files/TDT.pdf).\n* Yudkowsky’s early work on Friendly consequentialist AI, in 2003-2009, some of it with Marcello Herreshoff, and one summer (2006) with Peter de Blanc and Nick Hay as well. 
This work resulted in early versions of many of the OPFAIs described by MIRI in 2013, currently being written up, or currently in Yudkowsky’s queue to write up. It also resulted in the “infinite waterfall” method later described in [Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf).\n* Yudkowsky worked again with Herreshoff in the summer of 2009, in part on the Löbian obstacle.\n* MIRI held a decision theory workshop in March 2010, attended by Eliezer Yudkowsky, [Wei Dai](http://www.weidai.com/), [Stuart Armstrong](http://lesswrong.com/user/Stuart_Armstrong/submitted/), [Gary Drescher](http://en.wikipedia.org/wiki/Gary_Drescher), [Anna Salamon](http://annasalamon.com/), and about a dozen others who were present for some but not all discussions.[8](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_7_10726 \"For example, Steve Rayhawk and Henrik Jonsson.\") This workshop spawned a decision theory mailing list that has, from 2010 through the present day, produced much of the recent progress on [TDT](http://wiki.lesswrong.com/wiki/Timeless_decision_theory)/[UDT](http://wiki.lesswrong.com/wiki/Updateless_decision_theory)-style decision theories, though mostly via non-MIRI researchers like Wei Dai, Vladimir Slepnev, Stuart Armstrong, and Vladimir Nesov.\n* (Former MIRI researcher) Peter de Blanc’s work on “convergence of expected utility for universal AI” and ontological crises, resulting in [de Blanc (2009)](https://intelligence.org/files/Convergence-EU.pdf) and [de Blanc (2011)](https://intelligence.org/files/OntologicalCrises.pdf).\n* (MIRI research associate) Daniel Dewey’s work on value learning, resulting in [Dewey (2011)](https://intelligence.org/files/LearningValue.pdf).\n\n\nThus, MIRI’s *public-facing* Friendly AI research from 2000-2012 consisted in a few non-technical works like “Creating Friendly AI” and “Coherent Extrapolated Volition,” some philosophical writings on TDT, and three somewhat technical 
papers by Peter de Blanc and Daniel Dewey. Compare this to MIRI’s 2013 public-facing FAI research: [Muehlhauser & Williamson (2013)](https://intelligence.org/files/IdealAdvisorTheories.pdf),[9](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_8_10726 \"This short paper lies deep in the “philosophy” end of the philosophy -> math -> engineering spectrum.\") [Altair (2013)](https://intelligence.org/files/Comparison.pdf), [Christiano et al. (2013)](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf), [Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf), [LaVictoire et al (2013)](http://lesswrong.com/lw/hmw/robust_cooperation_in_the_prisoners_dilemma/), and [these 7 technical reports](http://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/).[10](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_9_10726 \"For both the 2000-2012 and 2013 calendar periods, when I write of “MIRI’s public-facing FAI work” I’m not including work that was “enabled” but not really “produced” by MIRI or its workshops, for example most work on UDT/ADT (which were nevertheless largely developed on MIRI’s LessWrong.com website and its decision theory mailing list).\")\n\n\nSubjectively, it feels to me like **MIRI produced about as much public-facing Friendly AI research progress in 2013 as in all past years combined** (2000-2012), and possibly more. This is good but not particularly surprising, since 2013 was also the first year in which MIRI *tried* to focus on producing public-facing FAI research progress. (But to be clear: if we remove the “public-facing” qualifier, then it’s clear that Yudkowsky alone produced far more FAI research progress in 2000-2012 than MIRI and its workshops produced in 2013 alone.)\n\n\nSo, did our workshops and open problem descriptions **achieve our stated goals?** Let’s check:\n\n\n1. 
Yes, they helped us identify candidates for hire. **We expect to hire two 2013 workshop participants in the first half of 2014**. (One of these hires is pending a visa application approval.)\n2. Yes, they exposed many new researchers to the Friendly AI research program. But, this exposure didn’t lead to as much independent Friendly AI work as I had hoped, and I have some theories as to why this was (see below).\n3. Yes, they spurred concrete research progress on Friendly AI (see above).\n\n\nWhile this represents a promising start toward growing an FAI research institute and a new field of FAI research, there are many dimensions on which our output needs to improve for MIRI to have the impact we hope for (see below).\n\n\n### What have I learned about how to create an FAI research institute, and a new field of FAI research?\n\n\nSome of my “lessons learned” from 2013’s FAI research activities were things I genuinely didn’t know at the start of the year. Most of them are things I suspected already, and I think they were confirmed by our experiences in 2013. Here are a few of them, in no particular order.\n\n\n#### 1. Keep operations work away from researchers.\n\n\nIn other words, “Don’t be afraid of a high operations-staff-to-researchers ratio.” Operations talent (including executive talent) is easier to find than FAI research talent, so it’s important to hire sufficient operations talent to make sure the FAI researchers we *do* find can spend approximately *all* their time on FAI research, and almost *none* of their time on tasks that can mostly be handled by operations staff (writing grant proposals, organizing events, fundraising, paper bibliographies, etc.). 
MIRI should hire enough operations talent to do this even if it makes our operations-staff-to-researcher ratio look high for a research institute.[11](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_10_10726 \"At the end of 2013, we had five full-time staff members: Luke Muehlhauser (executive director), Louie Helm (deputy director), Eliezer Yudkowsky (research fellow), Malo Bourgon (program manager), and Alex Vermeer (program management analyst), totaling 4 operations staff and one researcher. This 4:1 ratio will shrink as we are able to hire more FAI researchers, but I think it would have been a mistake to try to get by with fewer operations staff in 2013.\")\n\n\nUniversities often struggle with this (from a research productivity perspective), loading up some of the best research talent in the world with teaching duties, grant writing duties, and university service.[12](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_11_10726 \"Link et al. (2008); Marsh & Hattie (2002); NSOPF (2004).\") As an independent research institute, MIRI can set its own policies and minimize these problems.\n\n\n#### 2. We need to attract more experienced workshop participants.\n\n\nOur workshops attracted some very bright participants, but they were almost exclusively younger than 30, with relatively few publications to their name. More experienced researchers would probably have advantages in (1) knowing related results and formal tools, (2) knowing productive research tactics, and (3) writing up results for peer-review, among other advantages.\n\n\n#### 3. Much FAI research can be done by a broad community, and need not be labeled as FAI research.\n\n\nPresently, the Yudkowskian paradigm for “Friendly AI research” describes a very large research program that breaks down into dozens of sub-problems (OPFAIs), e.g. the [tiling agents](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/) toy problem. 
Locating and formulating open problems plausibly relevant for Friendly AI is a challenge in itself, one that especially benefits from specializing in Friendly AI for several years.\n\n\nMany of the OPFAIs themselves, however, can be framed as “ordinary” open problems in AI safety engineering, philosophy, mathematical logic, theoretical computer science, economics, and other fields. These open problems can often be stated without any mention of Friendly AI, and sometimes without any mention of AI in general.\n\n\nFor every OPFAI Yudkowsky has described,[13](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_12_10726 \"Sometimes with much help from Robby Bensinger and/or others.\") I’ve been able to locate earlier related work.[14](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/#footnote_13_10726 \"I’ll list some examples of earlier related work. (1) Superrationality: getting agents to rationally cooperate with agents like themselves. Before “Robust Cooperation” there was: Rapoport (1966); McAfee (1984); Hofstadter (1985); Binmore (1987); Howard (1988); Tennenholtz (2004); Fortnow (2009); Kalai et al. (2010); Peters & Szentes (2012). (For Rapoport 1966, see especially pages 141-144 and 209-210.) (2) Coherent extrapolated volition: figuring out what we would wish if we knew more, thought better, were more the people we wished we were, etc. Before Yudkowsky (2004) there was: Rawls (1971); Harsanyi (1982); Railton (1986); Rosati (1995). (For an overview of this background, see Muehlhauser & Williamson 2013.) (3) Parliamentary methods for values aggregation: using voting mechanisms to resolve challenges in normative uncertainty and values aggregation. Before Bostrom (2009) there was a vast literature on this topic in social choice theory. For recent overviews, see List (2013); Brandt et al. (2012); Rossi et al. (2011); Gaertner (2009). 
(4) Reasoning under fragility: figuring out how to get an agent not to operate with full autonomy before it has been made fully trustworthy. Before Yudkowsky began to discuss the issue, there was much work on “adjustable autonomy”: Schreckenghost et al. (2010); Mouaddib et al. (2010); Zieba et al. (2010); Pynadath & Tambe (2002); Tambe et al. (2002). (5) Logical decision theory: finding a decision algorithm which can represent the agent’s deterministic decision process. Before Yudkowsky (2010) there was: Spohn (2003); Spohn (2005). (6) Stable self-improvement: getting a self-modifying agent to avoid rewriting its own code unless it has very high confidence that these rewrites will maintain desirable agent properties. Before Yudkowsky & Herreshoff (2013) there was: Schmidhuber (2003); Schmidhuber (2009); Steunebrink & Schmidhuber (2012). (7) Naturalized induction: getting an induction algorithm to treat itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. Before “Building Phenomenological Bridges” there was: Orseau & Ring (2011); Orseau & Ring (2012).\") Although this earlier work has not produced what we would regard as good solutions to open problems in FAI, it does suggest that FAI can be framed in ways palatable to academia. FAI need not be an “alien” research program, operating strictly outside mainstream academia, and conducted only by those explicitly motivated by FAI. Instead, FAI researchers should be able to frame their work in the context of mainstream research paradigms if they choose to do so. Moreover, much FAI research can be done even by those who aren’t explicitly motivated by FAI, so long as they find (e.g.) the Löbian obstacle interesting *as mathematics* — or as computer science, or as philosophy, etc.\n\n\n \n\n\n#### 4. 
But, more FAI progress is made when the researchers themselves conceive of the research as FAI research.\n\n\nStill, researchers *do* seem more likely to produce useful work on Friendly AI if they are thinking about the problems from the perspective of Friendly AI, rather than merely thinking about them as interesting open problems in philosophy, computer science, economics, etc. As I said in [my conversation with Jacob Steinhardt](http://intelligence.org/2014/02/11/conversation-with-jacob-steinhardt-about-miri-strategy/):\n\n\n\n> People work on different pieces of the problem depending on whether they’re trying to solve the problem for Friendly AI or just for a math journal. If they aren’t thinking about it from the FAI perspective, people can work all day on stuff that’s very close to what we care about in concept-space and yet has no discernible value to FAI theory. Thus, the people who have contributed by far the most novel FAI progress are people explicitly thinking about the problems from the perspective of FAI…\n> \n> \n\n\n#### 5. Communication style matters a lot.\n\n\nWhen I talk to the kinds of top-notch researchers MIRI would like to collaborate with on open problems in Friendly AI, perhaps the most common complaint I hear is that our work is not formal enough, or not described clearly enough for them to understand it without more effort on their part than they are willing to expend. For an example of such a conversation that was recorded and transcribed, see again [my conversation with Jacob Steinhardt](http://intelligence.org/2014/02/11/conversation-with-jacob-steinhardt-about-miri-strategy/).\n\n\nI’ve thought this for a long time, and my experiences in 2013 have only reinforced the point. I’ll be writing more about this in the future.\n\n\n\n\n---\n\n1. What counts as “Friendly AI research” is, naturally, a matter of debate. 
For most of this post I’ll assume “Friendly AI research” means “what Yudkowsky thinks of as Friendly AI research,” with the exception of intelligence explosion microeconomics, for reasons given in this post.\n2. Until [early 2013](http://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/), the organization currently named “Machine Intelligence Research Institute” was known as the “Singularity Institute for Artificial Intelligence.”\n3. From 2000-2004, “MIRI” was just Eliezer Yudkowsky, doing early FAI research. The organization began to grow in 2004, and by 2006 most efforts were outreach-related rather than research-related. This remained true until early 2013.\n4. Some statistics about 2013’s 35 workshop participants: 15 have a PhD, three are women, and 3 hold a university faculty position of assistant professor or higher rank. In short, our workshop participants have thus far largely been graduate students, post-docs, and independent researchers. Among the 15 participants who have a PhD, 9 have a PhD in mathematics, 4 have a PhD in computer science, one has a PhD in cognitive science, and one has a joint PhD in philosophy and computer science.\n5. By “junk applications” I mean to include both spam applications and applications from people who are clearly incapable of math research, e.g. “Hello, I would love to come to America to learn algebra.”\n6. Probabilistic metamathematics is an OPFAI in itself, and also one possible path toward a solution to the tiling agents problem.\n7. The open problems in these publications, too, need additional formalization. Such is the current state of research.\n8. For example, Steve Rayhawk and Henrik Jonsson.\n9. This short paper lies deep in the “philosophy” end of the philosophy -> math -> engineering [spectrum](http://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/).\n10. 
For both the 2000-2012 and 2013 calendar periods, when I write of “MIRI’s public-facing FAI work” I’m not including work that was “enabled” but not really “produced” by MIRI or its workshops, for example most work on [UDT](http://wiki.lesswrong.com/wiki/Updateless_decision_theory)/[ADT](http://wiki.lesswrong.com/wiki/Ambient_decision_theory) (which were nevertheless largely developed on MIRI’s [LessWrong.com](http://lesswrong.com/) website and its decision theory mailing list).\n11. At the end of 2013, we had five full-time staff members: Luke Muehlhauser (executive director), Louie Helm (deputy director), Eliezer Yudkowsky (research fellow), Malo Bourgon (program manager), and Alex Vermeer (program management analyst), totaling 4 operations staff and one researcher. This 4:1 ratio will shrink as we are able to hire more FAI researchers, but I think it would have been a mistake to try to get by with fewer operations staff in 2013.\n12. [Link et al. (2008)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Link-et-al-A-time-allocation-study-of-university-faculty.pdf); [Marsh & Hattie (2002)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Marsh-Hattie-The-relation-between-research-productivity-and-teaching-effectiveness.pdf); [NSOPF (2004)](http://nces.ed.gov/pubs2006/2006176.pdf).\n13. Sometimes with much help from Robby Bensinger and/or others.\n14. I’ll list some examples of earlier related work. (**1**) [*Superrationality*](http://en.wikipedia.org/wiki/Superrationality): getting agents to rationally cooperate with agents like themselves. 
Before “[Robust Cooperation](http://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/)” there was: [Rapoport (1966)](http://www.amazon.com/Two-Person-Game-Theory-Essential/dp/047205015X/); [McAfee (1984)](http://www.mcafee.cc/Papers/PDF/EffectiveComputability.pdf); [Hofstadter (1985)](http://en.wikipedia.org/wiki/Superrationality); [Binmore (1987)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Binmore-Modeling-Rational-Players-Part-I.pdf); [Howard (1988)](http://commonsenseatheism.com/wp-content/uploads/2013/04/Howard-Cooperation-in-the-Prisoners-Dilemma.pdf); [Tennenholtz (2004)](http://mechroom.technion.ac.il/~moshet/progeqnote4.pdf); [Fortnow (2009)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Fortnow-et-al-Program-Equilibria-and-Discounted-Computation-Time.pdf); [Kalai et al. (2010)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Kalai-et-al-A-commitment-folk-theorem.pdf); [Peters & Szentes (2012)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Peters-Szentes-Definable-and-Contractible-Contracts.pdf). (For Rapoport 1966, see especially pages 141-144 and 209-210.) (**2**) [*Coherent extrapolated volition*](http://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition): figuring out what we would wish if we knew more, thought better, were more the people we wished we were, etc. Before [Yudkowsky (2004)](https://intelligence.org/files/CEV.pdf) there was: [Rawls (1971)](http://www.amazon.com/A-Theory-Justice-John-Rawls/dp/0674000781/); [Harsanyi (1982)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Harsanyi-Morality-and-the-theory-of-rational-behaviour.pdf); [Railton (1986)](http://commonsenseatheism.com/wp-content/uploads/2012/12/Railton-Facts-and-Values.pdf); [Rosati (1995)](http://commonsenseatheism.com/wp-content/uploads/2012/12/Rosati-Persons-perspectives-and-full-information-accounts-of-the-good.pdf). 
(For an overview of this background, see [Muehlhauser & Williamson 2013](https://intelligence.org/files/IdealAdvisorTheories.pdf).) (**3**) [*Parliamentary methods for values* aggregation](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html): using voting mechanisms to resolve challenges in normative uncertainty and values aggregation. Before [Bostrom (2009)](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html) there was a vast literature on this topic in social choice theory. For recent overviews, see [List (2013)](http://plato.stanford.edu/entries/social-choice/); [Brandt et al. (2012)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Brandt-et-al-computational-social-choice.pdf); [Rossi et al. (2011)](http://www.amazon.com/Short-Introduction-Preferences-Artificial-Inetlligence/dp/1608455866/); [Gaertner (2009)](http://www.amazon.com/Primer-Social-Choice-Theory-Perspectives-ebook/dp/B006R4S9ME/). (**4**) *Reasoning under fragility*: figuring out how to get an agent not to operate with full autonomy before it has been made fully trustworthy. Before Yudkowsky [began to discuss](https://www.facebook.com/groups/233397376818827/permalink/255982137893684/) the issue, there was much work on “adjustable autonomy”: [Schreckenghost et al. (2010)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Schreckenghost-et-al-Measuring-performance-in-real-time-during-remote-human-robot-operations-with-adjustable-autonomy.pdf); [Mouaddib et al. (2010)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Mouaddib-et-al-A-decision-theoretic-approach-to-cooperative-control-and-adjustable-autonomy.pdf); [Zieba et al. 
(2010)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Zieba-et-al.-Principles-of-adjustable-autonomy-a-framework-for-resilient-human–machine-cooperation.pdf); [Pynadath & Tambe (2002)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Pynadath-Tambe-Revisiting-Asimovs-first-law-a-response-to-the-call-to-arms.pdf); [Tambe et al. (2002)](http://commonsenseatheism.com/wp-content/uploads/2014/02/Tambe-et-al-Adjustable-autonomy-for-the-real-world.pdf). (**5**) *Logical decision theory*: finding a decision algorithm which can represent the agent’s deterministic decision process. Before [Yudkowsky (2010)](https://intelligence.org/files/TDT.pdf) there was: [Spohn (2003)](http://commonsenseatheism.com/wp-content/uploads/2012/09/Spohn-Dependency-equilibria-and-the-causal-structure-of-decision-and-game-situations.pdf); [Spohn (2005)](http://commonsenseatheism.com/wp-content/uploads/2012/09/Spohn-5-Questions-on-Formal-Philosophy.pdf). (**6**) *Stable self-improvement*: getting a self-modifying agent to avoid rewriting its own code unless it has very high confidence that these rewrites will maintain desirable agent properties. Before [Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf) there was: [Schmidhuber (2003)](http://arxiv.org/abs/cs.LO/0309048); [Schmidhuber (2009)](http://www.idsia.ch/~juergen/ultimatecognition.pdf); [Steunebrink & Schmidhuber (2012)](http://www.idsia.ch/~steunebrink/Publications/AGIbook2_self-reflection.pdf). (**7**) [*Naturalized induction*](http://wiki.lesswrong.com/wiki/Naturalized_induction): getting an induction algorithm to treat itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. 
Before “[Building Phenomenological Bridges](http://lesswrong.com/lw/jd9/building_phenomenological_bridges/)” there was: [Orseau & Ring (2011)](http://www.idsia.ch/~ring/Orseau,Ring%3BSelf-modification%20and%20Mortality%20in%20Artificial%20Agents,%20AGI%202011.pdf); [Orseau & Ring (2012)](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf).\n\nThe post [2013 in Review: Friendly AI Research](https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-18T21:13:05Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5bc6e443dc5d48e4356d08a4b2d2a7a5", "title": "MIRI’s February 2014 Newsletter", "url": "https://intelligence.org/2014/02/17/miris-february-2014-newsletter/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nDear friends, \nSee below for news on our new ebook, new research, new job openings, and Google’s new AI ethics board.\n**Research Updates**\n* New analyses: [Robust Cooperation: A Case Study in Friendly AI Research](http://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/) and [How Big is the Field of Artificial Intelligence?](http://intelligence.org/2014/01/28/how-big-is-ai/)\n* New interviews: [Emil Vassev](http://intelligence.org/2014/01/30/emil-vassev-on-formal-verification/) on formal verification, [Mike Frank](http://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/) on reversible computing, [Ronald de Wolf](http://intelligence.org/2014/02/03/ronald-de-wolf-on-quantum-computing/) on quantum computing, [Gerwin Klein](http://intelligence.org/2014/02/11/gerwin-klein-on-formal-methods/) on formal methods, [André 
Platzer](http://intelligence.org/2014/02/15/andre-platzer-on-verifying-cyber-physical-systems/) on verifying cyber-physical systems.\n* Two MIRI talks from the AGI-11 academic conference have finally [been posted](http://intelligence.org/2014/01/31/two-miri-talks-from-agi-11/) (videos + slides + transcripts).\n\n\n**News Updates**\n* MIRI has a new [Careers](http://intelligence.org/careers/) page, currently with 4 job openings.\n* New ebook: [*Smarter Than Us: The Rise of Machine Intelligence*](http://intelligence.org/smarter-than-us/).\n* [Want to help MIRI by investing in XRP?](http://intelligence.org/2014/01/18/investing-in-xrp/)\n* Two new posts in our 2013 in Review series, on [outreach](http://intelligence.org/2014/01/20/2013-in-review-outreach/) and on [strategic and expository research](http://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/).\n* Two new conversations about MIRI strategy, with [Holden Karnofsky](http://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/) and with [Jacob Steinhardt](http://intelligence.org/2014/02/11/conversation-with-jacob-steinhardt-about-miri-strategy/).\n* [MIRI’s experience with Google Adwords](http://intelligence.org/2014/02/06/miris-experience-with-google-adwords/).\n\n\n**Other Updates**\n* If you’re in the UK, go see Martin Rees, Jaan Tallinn, and Huw Price at the Cambridge lecture “[Existential Risk: Surviving the 21st Century](http://lesswrong.com/lw/jog/cambridge_england_lecture_existential_risk/)” on Feb. 26th.\n* The big AI news last month was Google’s ~$400M acquisition of DeepMind, and Google’s rumored creation of an AI ethics board. The best coverage I saw was at *Forbes*: [Inside Google’s Mysterious Ethics Board](http://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/). 
One of DeepMind’s co-founders, Shane Legg, wrote [his PhD thesis](http://www.vetta.org/documents/Machine_Super_Intelligence.pdf) on machine superintelligence, [takes AI risks quite seriously](http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html), and previously wrote about strategies for [funding safe AGI](http://www.vetta.org/2009/08/funding-safe-agi/). (Shane is far from the *only* safety-conscious person at DeepMind; he’s just the one who has *written* the most about AGI safety.)\n* Blue Marble Space is running an [essay contest](http://www.bmsis.org/essaycontest/) on preparing for the distant future of civilization. $500 first-place prize.\n* Nick Bostrom’s next book is *[Superintelligence: Paths, Dangers, Strategies](http://ukcatalogue.oup.com/product/9780199678112.do)*, now available for pre-order from [Amazon US](http://www.amazon.com/gp/product/0199678111/ref=as_li_ss_tl?tag=miri05-20) and [Amazon UK](http://www.amazon.co.uk/gp/product/0199678111/ref=as_li_ss_tl?tag=miri05-20). I’ve read it three times already, and it is excellent. 
It’s easily the most cohesive, comprehensive coverage of the subject so far.\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\n\n \nBest,\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n |\n\n\nThe post [MIRI’s February 2014 Newsletter](https://intelligence.org/2014/02/17/miris-february-2014-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-17T23:28:10Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e155a65d6b9d817324248e50a78bb11f", "title": "New eBook: ‘Smarter Than Us’", "url": "https://intelligence.org/2014/02/17/new-ebook-smarter-than-us/", "source": "miri", "source_type": "blog", "text": "[![SmarterThanUsCover-Angled-200px](https://intelligence.org/wp-content/uploads/2014/02/SmarterThanUsCover-Angled-200px.jpg)](https://intelligence.org/smarter-than-us/)We are pleased to release a new ebook, commissioned by MIRI and written by Oxford University’s [Stuart Armstrong](http://www.fhi.ox.ac.uk/about/staff/), and available in EPUB, MOBI, PDF, and from the Amazon and Apple ebook stores.\n\n\nWhat happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? 
Stuart Armstrong’s new book navigates these questions with clarity and wit.\n\n\nHead over to [intelligence.org/smarter-than-us/](http://intelligence.org/smarter-than-us/) to grab a copy.\n\n\nThe post [New eBook: ‘Smarter Than Us’](https://intelligence.org/2014/02/17/new-ebook-smarter-than-us/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-17T21:27:11Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "0bb3ff829a61e0e1257922562220738b", "title": "André Platzer on Verifying Cyber-Physical Systems", "url": "https://intelligence.org/2014/02/15/andre-platzer-on-verifying-cyber-physical-systems/", "source": "miri", "source_type": "blog", "text": "![André Platzer portrait](http://intelligence.org/wp-content/uploads/2014/02/Platzer.jpg)[André Platzer](http://symbolaris.com) is an Assistant Professor of Computer Science at [Carnegie Mellon University](http://www.cs.cmu.edu/). He develops the logical foundations of cyber-physical systems to characterize their fundamental principles and to answer the question of how we can trust a computer to control physical processes. He received an M.Sc. from the University of Karlsruhe (TH), Germany, in 2004 and a Ph.D. in Computer Science from the University of Oldenburg, Germany, in 2008, after which he joined CMU as an Assistant Professor.\n\n\nDr. Platzer received a number of awards for his research on the logic of dynamical systems, cyber-physical systems, programming languages, and theorem proving, including an NSF CAREER Award and an ACM Doctoral Dissertation Honorable Mention Award. He was also named one of the Brilliant 10 Young Scientists by *Popular Science* magazine and one of AI’s 10 to Watch by *IEEE Intelligent Systems* magazine.\n\n\n\n**Highlights** of Platzer’s thoughts, from the interview below:\n\n\n* KeYmaera is a formal verification tool for cyber-physical systems. It has seen many real-world applications, e.g. 
verifying non-collision in the [European Train Control System](http://symbolaris.com/info/ETCS.html).\n* KeYmaera’s capability comes from Platzer’s “differential dynamic logic,” which builds on earlier work in (e.g.) [dynamic logic](http://www.cs.cornell.edu/~kozen/papers/HKT.pdf).\n* Contra Levitt, Platzer isn’t convinced that autonomous robots will need to use decision theory, though they may.\n* Platzer contrasts formal verification (“did I build the system right?”) with formal validation (“did I build the right system?”). One way to make progress on the latter is to develop what Rushby called “deep assumption tracking.”\n\n\n\n\n\n---\n\n\n**Luke Muehlhauser**: One of your projects is [KeYmaera](http://symbolaris.com/info/KeYmaera.html), a tool for deductive formal verification of hybrid systems, including autonomous control systems for trains, planes, and robots. What counts as a “hybrid system” in this context? What commercial or government applications do you see as being within KeYmaera’s plausible reach, but for which KeYmaera isn’t yet being used?\n\n\n\n\n---\n\n\n**André Platzer**: KeYmaera is a tool for verifying hybrid systems. A hybrid system — or, more precisely, a hybrid dynamical system — is one in which both discrete dynamics and continuous dynamics interact. In other words, part of the system exhibits discrete behavior, often caused by computation or communication, and other parts of the system exhibit continuous behavior, such as physical motion in space or continuous chemical reactions. Systems in classical engineering have predominantly continuous dynamics, while computer science classically explores only discrete dynamics, because, after all, computers compute one thing after another and are, thus, inherently discrete. Communication is a similarly discrete phenomenon, where information is exchanged once at a certain point in time and then again later on at some other point in time. 
For example, over a wireless channel, communication does not literally happen continuously all the time, only every once in a while, however fast it might appear to the innocent bystander.\n\n\nHybrid systems now start with the realization that exciting things happen as soon as you make discrete dynamics and continuous dynamics interact. All of a sudden, your systems can, for example, use discrete computers and the power of advanced software algorithms to control and influence continuous physical processes and make them safer.\n\n\nBut we can no longer quite understand the discrete and continuous parts of the system in isolation, because they interact. It is not enough to worry just about whether the software could crash; we must also worry about whether it feeds the right control input into the physical system that it is trying to control, and what effect those inputs will have on the physics. Likewise, it is not sufficient to understand the continuous dynamics alone, because their effect depends on the control input generated by the computer program.\n\n\nThis is where hybrid systems start and where KeYmaera comes in. Hybrid systems, which are a common core mathematical model behind cyber-physical systems (CPS), bring exciting new capabilities for how we can change the world for the better. But these capabilities come hand in hand with our responsibility for making sure that the systems are not going to let us down when we badly need them.\n\n\nAs Jeannette Wing has aptly phrased it: “How can we provide people with cyber-physical systems they can bet their lives on?”\n\n\nApplication areas include aviation, automotive, railway, robotics, medical devices, physical or chemical process control, factory automation, and many more areas than I could list. Hybrid systems are a very general mathematical model, which makes them interesting for many possible application areas. 
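The discrete-continuous interplay Platzer describes can be made concrete with a minimal sketch. This is a hypothetical bang-bang thermostat, not one of KeYmaera's verified models, and all names and constants are illustrative: a discrete controller switches a heater on and off, while the temperature evolves under a differential equation, here approximated with forward-Euler steps.

```python
# Minimal hybrid-system sketch: discrete mode switching + continuous dynamics.
# Hypothetical thermostat example; all constants are illustrative only.

def simulate(t_end=60.0, dt=0.1, temp=15.0, lo=18.0, hi=22.0):
    heater_on = False                      # discrete state (controller mode)
    for _ in range(int(t_end / dt)):
        # Discrete dynamics: conditional switching at the thresholds.
        if temp <= lo:
            heater_on = True
        elif temp >= hi:
            heater_on = False
        # Continuous dynamics: dT/dt = heating power - Newton cooling,
        # integrated with one forward-Euler step per iteration.
        d_temp = (2.0 if heater_on else 0.0) - 0.1 * (temp - 10.0)
        temp += d_temp * dt
    return temp

print(simulate())  # settles into oscillation near the 18-22 band
```

Neither half is analyzable in isolation: whether the switching thresholds are safe depends on the cooling law, and the effect of the cooling law depends on the controller's switching, which is exactly the point made above.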
KeYmaera has indeed been used with industrial partners from the automotive industry, on mobile robotics applications of an industrial partner, and in a study of a surgical robotics system by a government lab, and we are currently working with the Federal Aviation Administration on how to use KeYmaera for the next-generation Airborne Collision Avoidance System, ACAS X. Having said that, all those application areas bring up interesting challenges. Since most software in the wild still contains bugs, there is more potential, and there remain more challenges, in all the other application areas as well.\n\n\n\n\n---\n\n\n**Luke**: What role does [differential dynamic logic](http://symbolaris.com/logic/dL.html) play in KeYmaera? What was your inspiration for developing it?\n\n\n\n\n---\n\n\n**André**: *Differential dynamic logic* is the theoretical underpinning and mechanism behind KeYmaera. It is the principle that makes the verification power of KeYmaera work. Differential dynamic logic is the logic in which you can specify and verify properties of hybrid systems. The logic integrates all the relevant features of hybrid systems in a single language: differential equations for describing their continuous dynamics, difference equations / discrete assignments for discrete dynamics and conditional switching, nondeterministic choices, as well as repetition. With these fundamental dynamical principles, all the other relevant ones can be expressed.\n\n\nI designed differential dynamic logic when I was starting my PhD. I was always very curious about how the core logical foundations of hybrid systems would look and simply could not wait to discover them. 
Especially since they beautifully brought together the discrete and continuous parts of mathematics, which are classically considered separately.\n\n\nBut then someone showed me a model of the *European Train Control System* (ETCS) with critical parameters (such as when to start braking) that were chosen more or less at random based on their performance in a few simulations. Suddenly I saw that there was also a real need in practice for the flexibility that differential dynamic logic provides. There simply had to be a better and more disciplined way of approaching the problem of how to design hybrid systems and safely choose their respective parameters. Any specific number for a crucial design parameter can obviously only be safe within a range of constraints on all the other parameters of the system. And, indeed, the chosen parameter value for the start-braking point in the ETCS model turned out to cause a collision when the speed was increased just a little.\n\n\nFrom this point on, I designed differential dynamic logic with a dual focus on both its core theory and its practical use in applications. This broader focus certainly made it more successful, especially when practical applications raised interesting theoretical challenges and, conversely, questions considered for purely theoretical reasons later turned out to be critical in unsuspected applications.\n\n\nDifferential dynamic logic is what we later used as the basis for implementing the theorem prover KeYmaera, and it has served us well since. Much has been written about differential dynamic logic since my first draft in 2005, including a [book](http://www.amazon.com/Logical-Analysis-Hybrid-Systems-Theorems/dp/3642145086/). 
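The start-braking anecdote reduces, in its simplest kinematic form, to a constraint on the system's parameters taken together: with braking deceleration b, a train at speed v needs stopping distance v²/(2b), so a fixed braking-start distance d is safe only while v² ≤ 2·b·d. The numbers below are purely illustrative, not taken from the actual ETCS model:

```python
def can_stop_in_time(v, d, b):
    """True iff braking at deceleration b (m/s^2) from speed v (m/s) stops
    within distance d (m), i.e. the stopping distance v^2/(2b) fits in d."""
    return v * v <= 2.0 * b * d

B = 0.7       # m/s^2, braking deceleration (illustrative)
D = 4000.0    # m, braking-start distance "validated" at the design speed

print(can_stop_in_time(70.0, D, B))   # design speed: needs 3500 m -> True
print(can_stop_in_time(80.0, D, B))   # a bit faster: needs ~4571 m -> False
```

A parameter choice that looks fine in a few simulations at one speed silently violates the constraint at a slightly higher speed, which is exactly the failure mode a proved symbolic constraint over all parameters rules out.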
But a good survey can be found in a [LICS tutorial](http://commonsenseatheism.com/wp-content/uploads/2014/02/Platzer-Logics-of-Dynamical-Systems.pdf).\n\n\n\n\n---\n\n\n**Luke**: What previous work in logic and differential equations did you build on to create differential dynamic logic?\n\n\n\n\n---\n\n\n**André**: Differential dynamic logic fortunately had all of science to build on. An exhaustive list of all the areas of logic, mathematics, and computer science that its development and theory are drawing from would be longer than the whole interview. And the list with connections to control and engineering would be just as long. In fact, this is one of the reasons why research on hybrid systems is so exciting. It is certainly a way of being able to explore any of your favorite areas of mathematics while still having important applications that other people care about.\n\n\nFortunately, [KeYmaera](http://symbolaris.com/info/KeYmaera.html) gives you all those powerful connections to other areas of science & engineering, but they are hidden behind the very elegant and intuitive language of differential dynamic logic. That means the user does not need to know all those areas of math and can play with the syntactical constructs of differential dynamic logic and its proof rules. Differential dynamic logic gives people a safe framework in which they can leverage and exploit all these areas without knowing all of them in depth. The computer can then check that the reasoning is combined in a correct way.\n\n\nThe most central previous work I built on is [dynamic logic](http://www.cs.cornell.edu/%7Ekozen/papers/HKT.pdf)  originally due to Pratt and pioneered by many people including Harel, Meyer, Parikh, Kozen as well as on Sophus Lie’s theory of continuous algebras and continuous groups. 
Both of those inventions are from before I was born, but have helped me a lot in designing differential dynamic logic.\n\n\nDynamic logic is a logic for conventional discrete programs that I hybridized by augmenting it with differential equations and generalizing it to hybrid systems with their interacting discrete and continuous dynamics. The original dynamic logic has also been implemented and used successfully in the prover [KeY](http://www.key-project.org) on which our prover KeYmaera is based. This heritage also explains the funny name KeYmaera, which makes independent sense, as KeYmaera is a homophone to [Chimaera](http://en.wikipedia.org/wiki/Chimera_%28mythology%29), the hybrid animal from ancient Greek mythology.\n\n\nIn retrospect, it is obvious that dynamic logic has really been longing for hybrid systems for three decades without anybody noticing. Dynamic logic was originally formulated for nondeterministic systems in which programs can have more than one possible effect. This is a fundamental characteristic but really slightly at odds with the behavior of conventional programs, which is mostly deterministic, i.e. each program statement will have exactly one effect. Well in hybrid systems, the story is different, because CPS implementations are never as deterministic as you might think they would run, and because the environment always has more than one possible future behavior that you need to consider. [Dynamic logic and dynamical systems are simply made for each other](http://arxiv.org/abs/1205.4788).\n\n\nEven more fundamental than dynamic logic itself are the general fundamental principles of logic. They led me to the realization that hybrid systems deserve a study of their logical characteristics. Hybrid systems used to be second class citizens because they were challenging and no one had created a logical foundation for them. 
Now that those proper logical foundations are in place, we see that hybrid systems and their programmatic expressions as hybrid programs are not just some peripheral artifact, but are actually a powerful construct in their own right. They possess the intrinsic characteristic of integrating interacting discrete and continuous behavior. The result of such an interaction may be a little bit like what happens when particles of fundamentally different origin collide in a high-energy physics collider experiment. Historically disjoint areas of mathematics & science suddenly begin to interact. Really surprising interactions happen, and [scientific bridges are being built that unite discrete and continuous computation](http://commonsenseatheism.com/wp-content/uploads/2014/02/Platzer-The-Complete-Proof-Theory-of-Hybrid-Systems.pdf) by way of the mediating nature of hybrid dynamics.\n\n\nAnother example illustrating the impact of logic on hybrid systems is that proof theory led to a generalization of Lie’s seminal results in ways that relate to Gentzen’s seminal cut elimination in logic. Cuts are the logically fundamental mechanism behind lemmas in proofs and have been shown by Gentzen to be unnecessary. They are rather discrete, though, because cuts prove a (static) lemma and then use it. The continuous counterpart of these discrete cuts is called [differential cuts](http://commonsenseatheism.com/wp-content/uploads/2014/02/Platzer-Differential-algebraic-Dynamic-Logic-for-Differential-algebraic-Programs.pdf), which prove a property of a differential equation and then modify the system behavior to comply with this property. [Differential cuts turn out to be fundamental](http://arxiv.org/pdf/1104.1987.pdf) and, for all I know, Sophus Lie does not seem to have been aware of the significance of such a logical mechanism for differential equations. 
Phenomena like this one demonstrate that amazing science can happen when logicians like Gerhard Gentzen interact with algebraists like Sophus Lie.\n\n\nYet, at the end of the day, with all those mind-bending theoretical surprises, it is certainly also reassuring to observe their practical effect and relevance in verification and analysis results for CPS applications that have not been possible before. It is simply a nice mix to have exciting theory with useful applications at your fingertips every day.\n\n\n\n\n---\n\n\n**Luke**: Do you think the creators of dynamic logic, or the creators of other theoretical work that you built on to develop differential dynamic logic, thought that *their* work could be useful for program safety or verification one day? Or is this an instance of you plucking up theoretical results from unrelated disciplines and using them for your own purposes in software verification?\n\n\n\n\n---\n\n\n**André**: Well, these are all very smart people. I find it hard to imagine the pioneers of dynamic logic would not have thought of possible uses in program verification. Even if dynamic logic might have started from more theoretical/logical investigations, I am sure they were aware of this future. And I am not the first to use dynamic logic for verification either. But it is true that, unlike in the case of differential dynamic logic, the development of the theory, practice, and applications of dynamic logic was mostly led by different groups. Maybe it is just part of the natural appeal of hybrid systems that I can never resist being as curious about their theory as I am about their applications.\n\n\nHaving said that, an excellent question is whether the inventors of dynamic logic thought of hybrid systems. I, of course, don’t know that. Dynamic logic predates the invention of hybrid systems by almost two decades. But hybrid systems have also almost always been around us. 
It’s just that until the 1990s, nobody noticed that they are a fascinating mathematical phenomenon in their own right.\n\n\nConcerning the other primary source of inspiration, the seminal work of Sophus Lie, he might not have thought of possible uses in the verification of hybrid systems, because Sophus Lie’s results predate dynamic logic by a century. He invented what are now called Lie groups in 1867–1873, which is long before computers were around. Yet, I do not want to leave the impression that these were my only inspirations when developing differential dynamic logic and its theory and applications. There is significant influence also from ideas of Kurt Gödel, Gerhard Gentzen, Alfred Tarski, David Hilbert, Alonzo Church, Yiannis Moschovakis, Henri Poincaré, Joseph Doob, Eugene Dynkin, and many, many others. I most certainly owe my results to what they found out before.\n\n\n\n\n---\n\n\n**Luke**: [Levitt (1999)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Levitt-Robot-Ethics-Value-Systems-and-Decision-Theoretic-Behaviors.pdf) argued that as intelligent systems become increasingly autonomous, and operate in increasingly non-deterministic, unknown environments, it will not be possible to model their environment and possible behaviors entirely with tools like control theory:\n\n\n\n> If robots are to be put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable… events routinely occur. It will not be practical, or even safe, to halt robotic actions whenever the robot encounters an unexpected event or ambiguous [percept].\n> \n> \n> Currently, commercial robots determine their actions mostly by control-theoretic feedback. 
Control-theoretic algorithms require the possibilities of what can happen in the world be represented in models embodied in software programs that allow the robot to pre-determine an appropriate action response to any task-relevant occurrence of visual events. When robots are used in open, uncontrolled environments, it will not be possible to provide the robot with *a priori* models of all the objects and dynamical events that might occur.\n> \n> \n> In order to decide what actions to take in response to un-modeled, unexpected or ambiguously interpreted events in the world, robots will need to augment their processing beyond controlled feedback response, and engage in decision processes.\n> \n> \n\n\nThus, he argues, autonomous programs will eventually need to be decision-theoretic.\n\n\nWhat do you think of that prediction? And in general, what tools and methods do you think will be most relevant for achieving high safety assurances for the highly autonomous systems of the future?\n\n\n\n\n---\n\n\n**André**: Levitt definitely has a point about the necessity of stronger safety arguments for robots, especially in light of their challenging and uncertain environments.\n\n\nI would, however, not advise dismissing control theory entirely. I do not even see how decision theory would contradict control theory. Control theory is, without any doubt, contributing valuable technology and perspectives on cyber-physical systems, including robots. The spirit of cyber-physical systems and their mathematical model of hybrid systems is to learn from *all* the various disciplines involved and bring concepts together rather than segregating them.\n\n\nWhile I sympathize with Levitt’s conclusion about the possible use of decision-theoretical methods, I disagree with his argument. The reason why I do is a nice illustration of important principles behind cyber-physical systems. 
In the text you quoted and other places in his article, Levitt argues that control theory does not help because it requires models. He further argues that the solution would be to use decision theory, which, however, still needs models:\n\n\n\n> “For each step in a plan to accomplish a robot task, […] we can model the robot’s immediate possible action choices as decisions. Then we can develop *a priori* sets of values for the possible task-relevant outcomes of robot actions.”\n> \n> \n\n\nIn fact, this needs even more models: prior belief distribution models, models of how perception updates beliefs, models of the cost of taking an action in a state.\n\n\nIt is inconclusive to argue that the cure to using models is to use more models, even if this argument occurs frequently. At the same time, it is hard to escape the Cartesian Demon without adopting at least some model of the world. As long as we acknowledge that our predictions are subject to some form of model, however hidden or data-driven it may be, there is thus nothing wrong with using models. And in fact, there is even a good reason to consider increasingly more flexible models.\n\n\nRegardless, I agree with Levitt that robots live in an uncertain environment, and uncertainty thus needs to be included in models. Simple models are sufficient for understanding some phenomena in a system, while other aspects need more complicated models. But they are still models! Whether the right model is deterministic, probabilistic, nondeterministic, or adversarial depends on the exact purpose. In some situations, nondeterministic understandings of uncertainty are much better; in others, they have to be probabilistic; in yet others, adversarial.\n\n\nThis is one of the central guiding principles behind [multi-dynamical systems](http://symbolaris.com/logic/lfcps.html), which I define as systems that combine multiple dynamical aspects such as discrete, continuous, stochastic, nondeterministic, or adversarial dynamics. 
Multi-dynamical systems generalize hybrid systems and provide the dynamical features of cyber-physical systems that hybrid systems are missing. Differential dynamic logic generalizes very elegantly to [dynamic logics of multi-dynamical systems](http://arxiv.org/abs/1205.4788).\n\n\nThis kind of flexibility is crucial for success in high assurance safety systems. It is simply a pity if all models, analysis results, and techniques need to be discarded entirely as soon as a dynamic aspect occurs that the techniques do not handle. If, instead, models and analysis techniques are as flexible as the [family of differential dynamic logics](http://symbolaris.com/pub/guide.html#lds), then there is no overwhelming cost associated with introducing or dropping nondeterminism or other dynamical aspects.\n\n\nMy colleagues Bruce Krogh, David Garlan and our students have argued for such flexibility of models as well, where the insight is that it is often more feasible to consider [multiple heterogeneous models](http://commonsenseatheism.com/wp-content/uploads/2014/02/Rajhans-et-al-Using-parameters-in-architectural-views-to-support-heterogeneous-design-and-verification.pdf) rather than a single all-embracing one.\n\n\nAnother frequent but faulty argument that Levitt expresses is\n\n\n\n> “In fact, we claim that if robot perceptions were certain, that we would have almost no uncertainty associated to robot behaviors.”\n> \n> \n\n\nThe argument “if only we could sense, we could act” is a common fallacy. Its contrapositive is quite true: If we cannot sense, it is virtually impossible to control anything safely. Yet, even if the sensors happen to work perfectly, there are still numerous opportunities for subtle and blatant mistakes in controllers. And there are even more opportunities for mistakes coming from the uncertainty of the effects of actions that Levitt readily dismisses. 
For emphasis, I’ll focus only on discussing controller mistakes that are independent of sensing and actuation uncertainty.\n\n\nFor example, our verification tool KeYmaera led Yanni Kouskoulas, one of my collaborators at the Johns Hopkins University Applied Physics Laboratory (JHU APL), to find and fix two very subtle flaws in the controllers of a surgical robotics system for skull-base surgery. These controllers had been developed according to the best-practice state of the art in safety-critical systems and had been scrutinized by their developers. Yet, the developers had second thoughts and approached the JHU APL, who ultimately approached us. The fact that the bugs were, in the end, only found by formal verification demonstrates that it is just extraordinarily difficult to get cyber-physical systems correct without the right analytic tools. But I am not surprised that it took sound verification tools to find the bugs. One was a timing bug caused by the inertia of the system. The other one had to do with very subtle acting and counteracting control depending on the precise geometrical relationships.\n\n\nThis is not an isolated case. We have observed similar benefits of using verification in our DARPA HACMS project. It was an interesting experience to observe the discrepancies that can arise when comparing formally verified parts of the robot to non-verified parts. That is always an eye-opening experience where all participants learn something important about the application domain.\n\n\nLevitt underestimates substantial challenges:\n\n\n\n> “In summary we claim that valuing world states is computationally tractable in two senses. The first is that we can specify the values about world states *a priori* and use them to dynamically value actions during task execution. 
The second is that the number of such valuations is order linear in a relatively small number of abstract classes of world entities that concern a robotic task, such as roads, traffic signals, humans and animals.”\n> \n> \n\n\nBut the complexity of decision making is quite a problem in practice. Even [seemingly simple planning instances are NP-hard](http://www.ijcai.org/Past%20Proceedings/IJCAI-91-VOL1/PDF/043.pdf), a difficulty related to what Bellman identified as the curse of dimensionality in the 1950s. Correct decisions that come half an hour too late are not particularly useful for rescuing a robot’s safety mission. Real-time responses are critical. That’s one of the reasons why it is crucial to have pre-verified safe responses as opposed to settling on unbounded online deliberation.\n\n\nIn contrast to what the above quote indicates, it has been an open challenge for decades to find out how to get the models right, get the values right, and make sure both translate into the right action to choose.\n\n\nConsider ACAS X, the next-generation Airborne Collision Avoidance System that the FAA tasked our collaborators at JHU APL and us to analyze with our verification techniques. Its design is built on decision theory in the form of Markov Decision Process optimization. On some level, it might sound trivial to identify that “no collisions ever” is the highest value in designing a collision avoidance system for aircraft. Yet, how exactly does this translate into math? What are the right secondary objectives that prevent unnecessary avoidance maneuvers? What is the right model to assume for an intruder aircraft? For pilots’ responses? After many years of careful and highly skilled design of ACAS X, I have still seen an “optimal” interim version that is just not what anybody would want. And that, fortunately, will not fly.\n\n\nOverall, I would argue that verification ultimately reduces correctness of cyber-physical systems to the question of whether we got the model right. 
It is a true challenge to identify the right model for a problem. But even once we have, there are still numerous subtle ways of getting the answer wrong. These are the flaws that verification removes. Having said that, verification is also making progress on validating whether we got the right model in the first place.\n\n\nFor the record, I also strongly disagree with other claims Levitt made in that position paper. Especially:\n\n\n\n> “We claim that the core capability that makes ethical behaviors possible for robots is to understand the world external to the robot through the automated interpretation of sensory data. Robot perceptions have a fundamentally different relationship to the truth or falsity of statements about the world than is possessed by syntactic rules or similar representations.”\n> \n> \n\n\nThis claim directly contradicts Alonzo Church’s seminal proofs from 1936. And my primary line of research on logic and proofs for cyber-physical systems shows exactly how such relationships can be established and successfully exploited for verification. Thus, I understand Levitt’s claims more as possible suggestions than as conclusive arguments. That might be a logician’s trait, though, to distinguish theorems from conjectures by whether or not they come with proof.\n\n\n\n\n---\n\n\n**Luke**: You wrote that “verification is also making progress on validating whether we got the right model in the first place.” What did you have in mind, there? The thing that came to mind for me was [Rushby (2013)](http://www.csl.sri.com/users/rushby/papers/safecomp13.pdf), but I don’t know much about concrete ongoing progress on the problem of reducing (what he calls) “epistemic doubt.”\n\n\n\n\n---\n\n\n**André**: John Rushby makes a great case for distinguishing logic doubt (“did we get the arguments right?”) versus epistemic doubt (“did we get our knowledge of the world right?”). I agree with him that we can get logic doubt under control and make it a problem of the past. 
Yet, that is exactly why you need such strong, sound reasoning frameworks for cyber-physical systems, such as differential dynamic logic. The ultimate proof will simply have to take the dynamics of the system into account just as much as it takes its control into account.\n\n\nTerminologically, I usually contrast formal verification (“did I build the system right?”) with formal validation (“did I build the right system?”). That is primarily because verification versus validation has a widespread (though not unambiguous) compatible connotation in the field. And also because I want to emphasize that we should not settle only for arguments that Rushby aptly phrases as:\n\n\n“Because an expert says so.”\n\n\nYet, it doesn’t matter what name we use; it’s still our responsibility to make sure the system really works in the end.\n\n\nElements of formal validation are also used in our NSF Expedition project CMACS (Computational Modeling and Analysis for Complex Systems), led by my colleague Ed Clarke. Among other things, we also probe models for plausibility by subjecting them to formal verification tests of whether or not they satisfy expected properties. These are not only the safety-critical properties in the end, but also rather mundane properties whose only purpose is to validate whether the model behaves as expected. This is certainly a technique that has also been successful in formal methods for chip design.\n\n\nTo follow up on Rushby’s suggestions, I completely agree with his plea for what I would call deep assumption tracking. That is, making sure all assumptions and conclusions are kept track of meticulously. This is again an important topic in verification with multiple heterogeneous models. It’s a common reason for minor and major hiccups if someone gets the assumptions wrong. Or if no one knows about them, so that other people draw the wrong conclusions. 
That’s also why inadequate modeling assumptions that are left implicit tend to have the most dire consequences on a system’s behavior.\n\n\nIn our work on provably safe obstacle avoidance for autonomous robotic ground vehicles, for example, our models proceeded by successively enriching the dynamic capabilities of the obstacles, from static, to dynamic, all the way to dynamic obstacles in the presence of sensor, actuator, and location uncertainty. Since we accompanied each of those models with a correctness proof in KeYmaera, we couldn’t help but notice under which circumstances the system is safe and when it’s not. In our models, for example, it was a simple change to go from “obstacles are stuck” to “obstacles can move wherever they want, nondeterministically”. Since the proof won’t work then, though, you immediately realize that you first need to know some bound on the obstacle’s velocity. Dynamic obstacles coming at you faster than the speed of light are simply hard to defend against.\n\n\nLater, during a dry-run of a different controller implementation built without verification, it was interesting to see the difference in action. The unverified robot controller sometimes collided with obstacles, which is exactly what it wasn’t supposed to do. A corresponding (simulation) run of the verified system instead vetoed unsafe control actions long before the collision ever happened. The reason why was prototypical. Whoever designed the unverified controller that we had access to did the natural thing: the robot first considered all obstacles static and hoped for fast enough sensing. Well, that is what our first verified model had also assumed. But our KeYmaera proofs immediately went further and investigated the case of dynamic obstacles, which is why we recommended considering the case of dynamic obstacles for the unverified controller, too.\n\n\nThe unverified robot still collided. This time, the reason was more subtle. 
In a nutshell, the implementations silently assumed that walls would not move while other agents could. That sounds reasonably justified by everyday experience. The catch is that corners do move. Well, they don’t really. But as soon as your verification model says that obstacles either move or they don’t and that it’s their choice (nondeterministically!) where they move to, then you cannot help but notice in your proof that the system is only safe if your controller correctly addresses the case where an obstacle suddenly comes racing round a corner. And indeed, when we asked, we got the confirmation that the troublemaker cases in practice were partially occluded obstacles.\n\n\nNow it might be easy to blame the developers of the original robot controllers for this oversight and say that they just haven’t been paying enough attention. But that would be too easy. In fact, that would miss the point entirely. The point is that robotics and any other kind of cyber-physical systems design is just darn challenging. These are very hard problems. Programming is incredibly difficult. And programming robots only makes things more difficult, because we’ve suddenly got to face the whole physical reality around us. There are more ways to get them wrong, quite subtly, than there are ways of getting them right. We desperately need analytic support that helps us get the designs right so that we can bet our lives on them.\n\n\nHaving talked to a number of people in academia and industry later on, it was not apparent to most of them that some static parts of the ambient space would have to be considered as moving even if they themselves didn’t. This is a mistake that verification prevented us from making.\n\n\nBut guess how we found out. A month earlier, postdocs in my group tried our freshly verified controller on some examples and came back quite frustrated, because it was too conservative for their taste. It shied away from narrow doorways. 
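The worst-case reasoning that makes such a verified controller shy away from narrow doorways can be sketched in a few lines. This is my own deliberately simplified illustration (hypothetical names and formula; it ignores control-cycle latency, geometry, and actuation uncertainty, all of which the actual verified models have to handle): the robot may keep its current speed only if its clearance exceeds its own braking distance plus the farthest a worst-case obstacle could approach while the robot brakes.

```python
def safe_to_continue(clearance, v, b, V):
    """Worst-case safety check (illustrative sketch, not the verified model).

    clearance: distance to the nearest possible obstacle position,
               including spots hidden behind a corner
    v:         current robot speed
    b:         robot's maximum braking deceleration
    V:         assumed maximum speed of any obstacle
    """
    stopping_time = v / b                   # time for the robot to brake to rest
    braking_dist = v * v / (2.0 * b)        # distance the robot covers while braking
    obstacle_approach = V * stopping_time   # worst-case obstacle travel meanwhile
    return clearance > braking_dist + obstacle_approach
```

With V = 0 this degenerates to the static-obstacle check; any positive bound V forces the robot to slow down wherever clearance is small, which is exactly the doorway-shy behavior described here.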
I was probably baffled for a moment there myself. But then it dawned on us that our verified controller was right. If obstacles can move anywhere they please, they might as well move quickly from behind a corner where our robot could not see them yet. Notice that we never built this knowledge into our model nor our controller nor anything else. We simply started with the canonical model of a robot’s motion on a plane with a moving obstacle. Then we discovered where our proofs led us in designing a verifiably safe controller. They led us to a controller that stays clear of narrow doorways. But that is what the robot simply has to do to stay provably collision-free.\n\n\nWe later discussed our verified controller with our collaborators in industry. They not only confirmed that our controller works within the intended operational range and wasn’t overly conservative at all, but they also simply loved the sudden new capability of playing with different instantiations of the parameters of our verified model to find out whether the design will work correctly and efficiently within their intended operating regime, without having to set up all those different scenarios and system configurations experimentally. And the verified controller automagically ends up doing the right thing. When cutting corners, it will slow down to avoid collisions. When moving with a bigger margin around a corner, it would happily speed up. We didn’t build that in. It just followed from the safety constraints of our proofs.\n\n\nStill, we might have made mistakes in our model of the motion principles of the world and our assumptions about the environment. That would have given us a true, undeniable proof, which just does not apply to reality, because its assumptions aren’t met. Or we might have misformalized what safety property it is that we care about. And this is where formal validation comes in. 
These are techniques for ensuring that the models fit the relevant part of reality and that the formulas fit the intent. This is a problem on which we have just made some promising progress, and where I’m sure many more exciting things will still be found out.\n\n\nFormal validation is just as exciting as formal verification. And that is why I am developing the Logical Foundations of Cyber-Physical Systems: because we simply need such strong analytic foundations when software reaches out into our physical world. Yet, with those foundations, there are suddenly amazing opportunities for how we can change the world for the better by designing cyber-physical systems that help us in our daily lives in more ways than we ever dreamed of. After all, true cyber-physical systems combine cyber capabilities with physical capabilities to solve problems that neither part could solve alone.\n\n\n\n\n---\n\n\n**Luke**: Thanks, André!\n\n\nThe post [André Platzer on Verifying Cyber-Physical Systems](https://intelligence.org/2014/02/15/andre-platzer-on-verifying-cyber-physical-systems/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-15T15:24:55Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "de7dce6003b22ff8124902be2b95d3f3", "title": "Gerwin Klein on Formal Methods", "url": "https://intelligence.org/2014/02/11/gerwin-klein-on-formal-methods/", "source": "miri", "source_type": "blog", "text": "![Gerwin Klein portrait](http://intelligence.org/wp-content/uploads/2014/02/Klein_w856.jpg)[Gerwin Klein](http://ssrg.nicta.com.au/people/?cn=Gerwin+Klein) is a Senior Principal Researcher at [NICTA](http://nicta.com.au), Australia’s National Centre of Excellence for ICT Research, and Conjoint Associate Professor at the [University of New South Wales](http://www.cse.unsw.edu.au) in Sydney, Australia. 
He is leading NICTA’s Formal Methods research discipline and was the leader of the L4.verified project that created the first machine-checked proof of functional correctness of a general-purpose microkernel in 2009. He joined NICTA in 2003 after receiving his PhD from Technische Universität München, Germany, where he formally proved type-safety of the Java Bytecode Verifier in the theorem prover Isabelle/HOL.\n\n\nHis research interests are in formal verification, programming languages, and low-level systems. Gerwin has won a number of awards together with his team, among them the 2011 MIT TR-10 award for the top ten emerging technologies world-wide, NICTA’s Richard E. Newton impact award for the kernel verification work, the best paper award from SOSP’09 for the same, and an award for the best PhD thesis in Germany in 2003 for his work on bytecode verification. When he is not proving theorems and working on trustworthy software, he enjoys travelling and dabbling in martial arts and photography. Together with Tobias Nipkow he has just published an online draft of the text book [Concrete Semantics](http://www21.in.tum.de/~nipkow/Concrete-Semantics/) that teaches programming language semantics using the [Isabelle](http://isabelle.in.tum.de/) theorem prover.\n\n\n\n**Highlights** of Klein’s thoughts, from the interview below:\n\n\n* Verifying code not designed for verification is very difficult and costly. 
Such “post-mortem verification” has other disadvantages as well.\n* Program designers can use abstraction, modularity, and explicit architecture choices to help make complex systems transparent to humans.\n* There are two ways probability can play a role in verification: (1) direct probabilistic reasoning, in the logic or in a setting where the program itself is probabilistic, or (2) standard non-probabilistic reasoning paired with subjectively uncertain reasoning about what a guarantee means for the overall probability that the system will work as intended.\n* Truly autonomous systems seem to be a long way off, but if you could make a system with truly unexpected behavior, then you couldn’t make it safe.\n\n\n\n\n---\n\n\n**Luke Muehlhauser**: In your forthcoming paper “[Comprehensive Formal Verification of an OS Microkernel](http://www.nicta.com.au/pub?id=7371),” you and your co-authors describe the kernel design (seL4) you used to ensure its verification would be tractable. You also discuss the process of keeping the formal proof current “as the requirements, design and implementation of the system [changed] over almost a decade.”\n\n\nHow “micro” is the seL4 microkernel? That is, what standard functionality does it provide, and what functionality (that is common in larger OS kernels) does it *not* provide?\n\n\n\n\n---\n\n\n**Gerwin Klein**: It is pretty micro: it is a true micro-kernel in the L4 tradition. This means it is very small compared to traditional monolithic OS kernels: Linux has on the order of a few million lines of code, seL4 has on the order of 10,000 lines of C. seL4 provides only the core mechanisms that you need to build an OS. These include threads and scheduling, virtual memory, interrupts and asynchronous signals, as well as synchronous message passing. 
Perhaps the most important feature of seL4 is its strong access control mechanism, which can be used together with the rest to build isolated components with controlled communication between them. These components could be small isolated device drivers or an entire Linux guest OS; the developer can choose the granularity.\n\n\nThe philosophy is: if a mechanism strictly needs privileged hardware access to be implemented, it is in the kernel, but if it can be implemented using the other mechanisms as a user-level task, it is outside the kernel. A surprising number of features can be outside: device drivers, for instance, graphics, files and file systems, network stacks; even disk paging is not provided by seL4, but can be implemented on top of it. What you (hopefully) get is that tricky complexity is concentrated in the kernel, and user-level OS components become more modular.\n\n\n\n\n\n---\n\n\n**Luke**: Is anyone likely to formally verify an OS microkernel of seL4’s size, or any computer program of comparable complexity, if it wasn’t designed from the ground up to be especially amenable to formal verification?\n\n\n\n\n---\n\n\n**Gerwin**: I wouldn’t say it’s impossible to verify code that wasn’t designed for verification, but it is going to increase cost and effort significantly. So currently it’s unlikely that someone would do a similar verification on something that wasn’t designed for it. This could change in 5-10 more years as techniques become more powerful and effort decreases.\n\n\nBut even if it is possible, I don’t think it is desirable. I’ve heard someone call this kind of process “post-mortem verification” and I agree with that general sentiment. Starting verification after everything else is finished is too late. If you find serious problems during the verification, what can you do about them if everything is set in stone already? 
Also, if you are serious about building a provably correct system or a high-assurance system, you should design and plan it as such. Interestingly, agile methods go in the same direction: they test and verify right from the get-go, even if they don’t use formal methods for it.\n\n\nIf you can factor the cost of verification into your development process, your design choices will be more biased towards making it obvious why the system works correctly. In our experience, this doesn’t just reduce the cost of verification later, but also leads to better and more elegant designs. Of course there is also tension between verification and factors such as performance, but it’s not black and white: you can evaluate cost and make informed trade-offs, and sometimes there are solutions that satisfy both.\n\n\n\n\n---\n\n\n**Luke**: You mention that designing with verification in mind may nudge design choices toward “making it more obvious why the system works correctly” — an important issue I’ve sometimes called “transparency,” as in “transparent to human engineers.” I wrote about this in [Transparency and Safety-Critical Systems](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/).\n\n\nOne of the “open questions” I posed in that post was: “How does the transparency of a method change with scale?” Perhaps a small program written with verification in mind will tend to be more transparent than a Bayes net of similar complexity, but do you think this changes with scale? If we’re able to scale up formal methods to verify programs 10,000x bigger than seL4, would it be more or less transparent to human engineers than a Bayes net of comparable complexity? At least we can query the Bayes net to ask “what it believes about X,” whereas we can’t necessarily do so with the formally verified system. 
(You might also think I’m framing these issues the wrong way!)\n\n\n\n\n---\n\n\n**Gerwin**: What you call transparency probably is correlated closely with complexity, and complexity tends to increase with scale. There are a number of ways for dealing with complexity in traditional system design, for instance abstraction, modularity, and architecture. All of these also help with formal verification and make it easier to scale. For example, abstraction hides implementation complexity behind an interface and makes it possible to talk about a part of a system without knowing all of the system.\n\n\nA great early example of the power of abstraction is the London underground tube map: it tells you which trains go where and connect with which other trains, and it gives you a rough geographical relation, but it doesn’t tell you which bends the tracks make, where precisely each station is, or what is above ground between stations. You can confidently navigate a fairly complex train system as a passenger without knowing anything at all about the more detailed mechanics of how everything works and where precisely everything is.\n\n\nIn our seL4 verification, we initially couldn’t make much use of abstraction, because in a microkernel everything is tightly interrelated. In larger systems this is less necessary, in part because something like a kernel is then available to provide modules and isolation.\n\n\nIn the longer run, we benefited from abstraction even in the seL4 verification. We first verified that the code correctly implements a higher-level specification, which is an abstraction of the real kernel code. Years later, we used that much simpler abstraction, and, without even looking at the complex C code, could prove very specific security properties about it, such as that seL4 respects integrity and confidentiality. 
Since we had first proved that this specification was a true abstraction of the code, and because the properties we were interested in were preserved by this kind of abstraction, we had automatically gained a verification of the code as well. The win in effort was significant: the verification of integrity was 20 times less expensive than the original code verification.\n\n\nIn my opinion, making serious use of such principles is going to be the way to scale verification. It is already what systems engineers do for complex but critical systems. The way to use it for verification is to stick with it more closely and not to make convenience exceptions that break abstractions or circumvent the system architecture. In the end, everything that makes it easier for humans to think about a system will help to verify it.\n\n\n\n\n---\n\n\n**Luke**: You wrote that “There are a number of ways for dealing with complexity in traditional system design, for instance abstraction, modularity, and architecture.” You gave examples of verifying complex code using abstraction and modularity — what did you have in mind for using architecture to make verification of complex systems more tractable?\n\n\n\n\n---\n\n\n**Gerwin**: Yes, thanks for spotting that.\n\n\nThere are two main ways I was thinking of in which good architecture can help verification. The first is just the fact that an explicit architecture and design will tell you which modules or layers exist in the system and how they are connected to each other. Making this explicit will enable you to draw some first conclusions about the system without even needing to look at the code. For example, if two components are not connected to each other, and the underlying operating system enforces the architecture correctly, then these two components will not be able to influence each other’s behaviour, and that means you can verify each one in isolation. 
You now have two simpler verification problems instead of one large one.\n\n\nThe second way comes into play when you build the critical property you are interested in verifying directly into the architecture from the start. For example, imagine a trusted data logging component in a critical device. The properties you might want from this component are: it only accesses its own log files, it only writes data and never reads, and no other component in the system can write to those log files. To verify these properties in a monolithic system, you’d either have to prove, for all components with file write access, that they don’t overwrite the log file, or you would have to prove correct the access control mechanism of a complex file system. Instead of doing all that work, you could put an additional small filter component between any writers and the file system: the filter will deny requests to write to the log file and allow the rest. This is easy to implement and easy to prove correct. You now have reduced a large, potentially infeasible verification task to a (hopefully) small runtime cost and an architecture property.\n\n\nFor any of this to work, you need to make sure that the real code in the end actually adheres to the architecture and that the communication and protection structure of the architecture is enforced by the underlying system. This is why our group is thinking about such security architecture patterns: a formally verified microkernel is the perfect building block for building high assurance systems in this way.\n\n\n\n\n---\n\n\n**Luke**: I’d like to get your perspective on something I previously [asked](http://intelligence.org/2014/01/30/emil-vassev-on-formal-verification/) Emil Vassev: To what degree can we demonstrate both deductive guarantees and probabilistic guarantees with today’s formal verification methods? 
If we could generalize formal specification and verification procedures to allow both for deductive guarantees and probabilistic guarantees, perhaps we could verify larger, more complex, and more diverse programs.\n\n\nMany of the deductive proofs for safety properties in today’s formally verified systems are already “probabilistic” in the sense that the designers [have some subjective uncertainty](http://intelligence.org/2013/10/03/proofs/) as to whether the formal specification accurately captures the intuitively desirable safety properties, and (less likely) whether there was an error in the proof somewhere.\n\n\nBut the question I’m trying to ask is about explicitly probabilistic guarantees within the proof itself. Example: the verifier could represent that it ran out of computation before it could prove or disprove whether the system had property X, but given its reasoning and search process thus far it’s 80% confident that the system has property X, kind of like human researchers today can’t prove whether P = NP or not, but they’re (say) subjectively 80% confident that P ≠ NP. Though perhaps to do this, we’d first need a solution to the problem of how a formal system or logical calculus can have internally consistent probabilistic beliefs (and belief updates) about logical sentences (a la [Demski 2012](http://ict.usc.edu/pubs/Logical%20PriorProbability.pdf) or [Hutter et al. 2012](http://arxiv.org/pdf/1209.2620.pdf)).\n\n\n\n\n---\n\n\n**Gerwin**: Emil gave a great answer to this question already, so I’ll try to approach it from a slightly different angle. 
I see two ways in which probability can play a role in verification: one where you reason about probabilities directly, be it just in the logic or even in a setting where the program itself is probabilistic, or another one where you reason classically, but are trying to figure out what that means for the overall probability that your system will work as intended.\n\n\nThere are good examples of both of these methods, and both are useful to apply, even within the same project. As Emil pointed out, it is preferable to use full deduction and have the ability to draw definite conclusions. But when that is not available, you may be satisfied with at least a probabilistic answer. There are a number of probabilistic model checkers, for instance [PRISM](http://www.cs.ox.ac.uk/activities/prism/index.html) from [Marta Kwiatkowska’s](http://www.cs.ox.ac.uk/marta.kwiatkowska/) group in Oxford, or the Probabilistic Consistency Engine ([PCE](https://pal.sri.com/Plone/framework/Components/learning-applications/probabilistic-consistency-engine-jw)) from SRI. These tools will let you attach probabilities to certain events in a system or attach probabilities to beliefs about the state of the world and reason from there. The Probabilistic Consistency Engine is very close to what you mentioned in the question: reconciling potentially conflicting probabilistic beliefs about the world and drawing probabilistic conclusions from those descriptions.\n\n\nThere are also programs and protocols that work directly with probability, for example virtually flipping a coin to make certain decisions. The FireWire protocol, for instance, uses this mechanism, and despite its use of probability you can make definite claims about its behaviour. Carroll Morgan, who is working with us at UNSW, has been applying probabilistic reasoning about such programs to security questions and program construction. 
To give you a flavour of what is possible in this space: one of our PhD students, David Cock, has recently used Carroll’s methods in Isabelle to reason about covert channels in microkernels and the maximum bandwidth at which they can leak information.\n\n\nFinally, there is the question of how uncertainty about requirements and uncertainty about the physical world fit into formal verification. John Rushby from SRI has a great take on this question in his paper about possibly perfect systems, [Logic and Epistemology in Safety Cases](http://www.csl.sri.com/users/rushby/abstracts/safecomp13). His point is that there are two different kinds of probability to consider: the probability that the system is correct, and the underlying uncertainty we have about the world. For example, a utopian fully verified system where we captured all requirements correctly and where we got everything perfect might still fail if one of its physical components fails from wear and tear or overheating or because someone dropped it into a volcano.\n\n\nFormal verification does nothing about the latter kind of probability, but it increases the probability that the system is perfect (even if it can never fully get there). We can use this concept to figure out where verification has the largest impact, and quantify how much impact it has. For example, it may not make much sense to formally verify a system where even the best possible solution could only be right about 50% of the time, because the world around it behaves too randomly. On the other hand, we can get a huge pay-off from formal verification in a setting where requirements and system can be captured precisely. 
Rushby’s approach lets us quantify the difference.\n\n\n\n\n---\n\n\n**Luke**: [Levitt (1999)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Levitt-Robot-Ethics-Value-Systems-and-Decision-Theoretic-Behaviors.pdf) argued that as intelligent systems become increasingly autonomous, and operate in increasingly non-deterministic, unknown environments, it will not be possible to model their environment and possible behaviors entirely with tools like control theory:\n\n\n\n> If robots are to be put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable… events routinely occur. It will not be practical, or even safe, to halt robotic actions whenever the robot encounters an unexpected event or ambiguous [percept].\n> \n> \n> Currently, commercial robots determine their actions mostly by control-theoretic feedback. Control-theoretic algorithms require that the possibilities of what can happen in the world be represented in models embodied in software programs that allow the robot to pre-determine an appropriate action response to any task-relevant occurrence of visual events. When robots are used in open, uncontrolled environments, it will not be possible to provide the robot with *a priori* models of all the objects and dynamical events that might occur.\n> \n> \n> In order to decide what actions to take in response to un-modeled, unexpected or ambiguously interpreted events in the world, robots will need to augment their processing beyond controlled feedback response, and engage in decision processes.\n> \n> \n\n\nThus, he argues, autonomous programs will eventually need to be decision-theoretic.\n\n\nWhat do you think of that prediction? 
In general, what tools and methods do you think will be most relevant for achieving high safety assurances for the highly autonomous systems of the future?\n\n\n\n\n---\n\n\n**Gerwin**: There’s a lot of work ahead of us in this area, and I don’t think we have very good solutions yet. After all, we’re still struggling to build simpler systems properly.\n\n\nOn the other hand, I agree that more autonomy seems to be the direction that systems are heading. In the DARPA HACMS program that Kathleen Fisher is managing, we are investigating some approaches to high-assurance autonomous vehicles. One thing you can do, for instance, is use a so-called simplex architecture. You use two controllers: one complex, autonomous controller and an additional, simpler safety controller that you can provide high assurance for. During normal operation, the complex autonomous controller runs, and provides convenient, even smart behaviour. The simple safety controller stays in the background and monitors operations. It only checks whether the vehicle is still within safe parameters. Should something unforeseen happen, such as a fault, a security attack, or simply a defect in the complex controller, the simple one takes over and puts the system in a safe mode. For autonomous air vehicles, this could be “fly home to base”, or “land safely in a free spot”. This solution doesn’t work for everything, but it is a good first step.\n\n\nWhether we will really start building systems that come up with their own unexpected behaviour, based on independent reasoning, I can’t really say. In current autonomous vehicles, like self-driving cars or proposals like delivery drones, this is certainly not yet the case. And I have a hard time imagining regulatory authorities agreeing to something like this (with good reason). Instead of trying to predict everything, these systems currently are constructed to safely deal with classes of problems. 
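To make the simplex idea concrete, here is a minimal illustrative sketch in Python (all names are hypothetical, and details such as the safety controller latching control once it takes over are assumptions for simplicity, not a description of the HACMS designs): the complex controller acts while the vehicle state stays inside a safe envelope, and the simple controller takes over otherwise.

```python
# Illustrative sketch of a "simplex" architecture: a complex,
# untrusted controller runs during normal operation, while a simple,
# high-assurance safety controller monitors the vehicle state and
# takes over if the state leaves a verified safe envelope.

class SimplexController:
    def __init__(self, complex_ctrl, safety_ctrl, in_envelope):
        self.complex_ctrl = complex_ctrl  # smart but unverified
        self.safety_ctrl = safety_ctrl    # simple, verifiable
        self.in_envelope = in_envelope    # predicate over vehicle state
        self.failed_over = False

    def step(self, state):
        # Once the envelope is violated, the safety controller keeps
        # control (e.g. "fly home to base"); this latching policy is
        # one possible choice, assumed here for simplicity.
        if self.failed_over or not self.in_envelope(state):
            self.failed_over = True
            return self.safety_ctrl(state)
        return self.complex_ctrl(state)

# Toy usage: the safe envelope is an altitude band.
ctrl = SimplexController(
    complex_ctrl=lambda s: "autonomous manoeuvre",
    safety_ctrl=lambda s: "return to base",
    in_envelope=lambda s: 0 < s["altitude"] < 500,
)
print(ctrl.step({"altitude": 100}))  # autonomous manoeuvre
print(ctrl.step({"altitude": 900}))  # return to base
print(ctrl.step({"altitude": 100}))  # return to base (latched)
```

The point of the pattern shows up in the structure: only the envelope check and the simple controller need high assurance, while the complex controller can remain entirely untrusted.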
For instance, you don’t need to know precisely which kinds of obstacles a self-driving car will encounter. You don’t need to know if it is an animal, debris, or a small child. It’s enough to see that something is in your way or is on a collision course and react to this better than humans do. And you err on the side of caution rather than take risks. To get an improvement over the current state of human drivers, for instance, you don’t need to build a system that is perfect. You only need to build one that drives a lot safer than humans. That is a much simpler problem.\n\n\nSystems that do their own reasoning could still be made safe, if you can show that the reasoning is sound and you can explore the implication space of the situations you expect it to be in. The problem is that automated reasoning is hard — not just complicated to think about, but a theoretically hard problem where even simple instances need a lot of computational power. Building systems directly even for complex situations is currently still a lot easier than that, so I’d say that autonomous programs that do their own reasoning are still closer to science fiction than reality. Asimov played with this vision in his famous laws of robotics, which in his stories often lead to surprising, but perfectly valid (according to those laws) behaviours. In the end, if you really want to build a system that can have truly unexpected behaviour, then by definition you cannot verify that it is safe, because you just don’t know what it will do. You’d have to deal with such systems more like you deal with people than you deal with machines.\n\n\n\n\n---\n\n\n**Luke**: What do you think are some promising “next steps” — say, over the next decade — for research on how to achieve high-assurance AI systems?\n\n\n\n\n---\n\n\n**Gerwin**: There’s a lot of work ahead of us in this area, and I don’t think we have very good solutions yet. 
After all, we’re still struggling to build simpler systems properly.\n\n\nI think research such as in the HACMS program is encouraging: high-assurance control and sensing algorithms, high-assurance operating systems, not just kernels, but including the wider system infrastructure, high-level design tools that allow architecture-driven approaches and analysis, and generally stronger verification tools will be the key to make such autonomous systems more reliable and safer.\n\n\nIf we can crack the next order of magnitude in system scale, things will get a lot more interesting. There are some good approaches for this developing: instead of trying to find proofs without context, we have had some initial success in synthesizing more code from specifications and generating the corresponding proofs automatically. If that approach works on a larger scale, it could make building high-assurance systems dramatically cheaper in the future.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Gerwin!\n\n\nThe post [Gerwin Klein on Formal Methods](https://intelligence.org/2014/02/11/gerwin-klein-on-formal-methods/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-11T16:00:50Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "76e074996c3d1b8e51b3b755a318be48", "title": "Conversation with Jacob Steinhardt about MIRI Strategy", "url": "https://intelligence.org/2014/02/11/conversation-with-jacob-steinhardt-about-miri-strategy/", "source": "miri", "source_type": "blog", "text": "On January 21st, 2014, MIRI met with Jacob Steinhardt to discuss MIRI strategy. 
Participants were:\n\n\n* [Luke Muehlhauser](http://lukeprog.com/) (executive director at MIRI)\n* [Jacob Steinhardt](http://cs.stanford.edu/~jsteinhardt/) (grad student in computer science at Stanford)\n\n\nWe recorded and transcribed much of the conversation, and then edited and paraphrased the transcript for clarity, conciseness, and to protect the privacy of some content. The resulting edited transcript is available in full **[here](https://intelligence.org/wp-content/uploads/2014/02/01-21-2014-conversation-with-Jacob-Steinhardt.pdf)** (6 pages).\n\n\nTopics discussed include research communication style, inferential distance, FAI research directions, and Knightian uncertainty.\n\n\nThe post [Conversation with Jacob Steinhardt about MIRI Strategy](https://intelligence.org/2014/02/11/conversation-with-jacob-steinhardt-about-miri-strategy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-11T15:15:11Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "71e3539f475dcc153439f82c0f6ae1dc", "title": "2013 in Review: Strategic and Expository Research", "url": "https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/", "source": "miri", "source_type": "blog", "text": "This is the 3rd part of my personal and qualitative [self-review of MIRI in 2013](http://intelligence.org/2013/12/20/2013-in-review-operations/), in which I begin to review MIRI’s 2013 research activities. By “research activities” I mean to include outreach efforts primarily aimed at researchers, and also [three types](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) of research performed by MIRI:\n\n\n* **Expository research** aims to consolidate and clarify already-completed strategic research or Friendly AI research that hasn’t yet been explained with sufficient clarity or succinctness, e.g. 
“[Intelligence Explosion: Evidence and Import](https://intelligence.org/files/IE-EI.pdf)” and “[Robust Cooperation: A Case Study in Friendly AI Research](http://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/).” (I consider this a form of “research” because it often requires significant research work to explain ideas clearly, cite relevant sources, etc.)\n* **Strategic research** aims to clarify how the future is likely to unfold, and what we can do now to nudge the future toward good outcomes, and involves more novel thought and modeling than expository research — though, the distinction is fuzzy. See e.g. “[Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf)” and “[How We’re Predicting AI — or Failing to](https://intelligence.org/files/PredictingAI.pdf).”[1](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_0_10756 \"Note that what I call “MIRI’s strategic research” or “superintelligence strategy research” is a superintelligence-focused subset of what GiveWell would call “strategic cause selection research” and CEA would call this “cause prioritization research.”\")\n* **Friendly AI research** aims to solve the technical sub-problems that seem most relevant to the challenge of designing a stably self-improving artificial intelligence with humane values. This [often involves](http://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/) sharpening philosophical problems into math problems, and then developing the math problems into engineering problems. See e.g. “[Tiling Agents for Self-Modifying AI](https://intelligence.org/files/TilingAgents.pdf)” and “[Robust Cooperation in the Prisoner’s Dilemma](http://arxiv.org/abs/1401.5577).”\n\n\nI’ll review MIRI’s strategic and expository research in this post; **my review of MIRI’s 2013 Friendly AI research will appear in a future post**. 
For the rest of this post, I usually won’t try to distinguish which writings are “expository” vs. “strategic” research, since most of them are partially of both kinds.\n\n\n\n### Strategic and expository research in 2013\n\n\n1. In 2013, our public-facing strategic and expository research consisted of 4 papers published directly by MIRI, 4 journal-targeted papers, 4 chapters in a peer-reviewed book, 9 in-depth analysis blog posts, 14 short analysis blog posts, and 16 interviews with domain experts.\n2. I think these efforts largely accomplished the goals at which they were aimed, but in 2013 we learned a great deal about how to accomplish those goals more efficiently in the future. In particular…\n3. Expert interviews seem to be the most efficient way to accomplish some of those goals.\n4. Rather than conducting large strategic research projects ourselves, we should focus on writing up what is already known (“expository research”) and on describing open research questions so that *others* can examine them.\n\n\n\n### What we did in 2013 and why\n\n\nBelow I list the writings that constitute MIRI’s public-facing[2](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_1_10756 \"As usual, we also did significant strategic research in 2013 that is not public-facing (at least not yet), for example 100+ hours of feedback on various drafts of Nick Bostrom’s forthcoming book Superintelligence: Paths, Dangers, Strategies, 15+ hours of feedback on early drafts of Robin Hanson’s forthcoming book about whole brain emulation, and much work on forthcoming MIRI publications.\") **strategic and expository research in 2013.**\n\n\n* **4 papers published directly by MIRI**: (1) Yudkowsky’s “[Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf),”[3](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_2_10756 \"Yudkowsky labeled this as “open problem in Friendly AI #1”, but I 
categorize it as strategic research rather than Friendly AI research.\") (2) Sotala & Yampolskiy’s “[Responses to Catastrophic AGI Risk: A Survey](https://intelligence.org/files/ResponsesAGIRisk.pdf),” (3) Grace’s “[Algorithmic Progress in Six Domains](https://intelligence.org/files/AlgorithmicProgress.pdf),” and (4) Fallenstein & Mennen’s “[Predicting AGI: What can we say when we know so little?](https://intelligence.org/files/PredictingAGI.pdf)“\n* **4 journal-targeted papers**, two of them published and two of them still being considered by target journals: (1) Shulman & Bostrom’s “[Embryo Selection for Cognitive Enhancement](http://www.nickbostrom.com/papers/embryo.pdf),” (2) Armstrong et al.’s “[Racing to the Precipice](http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf),” (3) Yampolskiy & Fox’s “[Safety Engineering for Artificial General Intelligence](https://intelligence.org/files/SafetyEngineering.pdf),”[4](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_3_10756 \"At the time of publication, Joshua Fox was a MIRI research associate.\") and (4) Muehlhauser & Bostrom’s “[Why We Need Friendly AI](https://intelligence.org/files/WhyWeNeedFriendlyAI.pdf).”[5](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_4_10756 \"“Why We Need Friendly AI” was published in an early 2014 issue of the journal Think, but it was released online in 2013.\")\n* **4 chapters in a peer-reviewed book** from Springer called [*Singularity Hypotheses: A Scientific and Philosophical Assessment*](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment-ebook/dp/B00C6C07N0/). 
Three chapters were written by MIRI staff members: “[Intelligence Explosion: Evidence and Import](https://intelligence.org/files/IE-EI.pdf),” “[Intelligence Explosion and Machine Ethics](https://intelligence.org/files/IE-ME.pdf),” and “Friendly Artificial Intelligence.”[6](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_5_10756 \"The “Friendly Artificial Intelligence” chapter is merely an abridged version of Yudkowsky’s earlier “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”\") One additional chapter was co-authored by then-MIRI research associate Joshua Fox: “[Artificial Intelligence and the Human Mental Model](https://intelligence.org/files/AGI-HMM.pdf).” MIRI also contributed two short replies to other chapters, one reply by Yudkowsky and another by Michael Anissimov.[7](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_6_10756 \"These chapters were written during 2011 and 2012, but not published in the book until 2013.\")\n* **9 *in-depth* “analysis” blog posts**: (1) Sotala’s “[A brief history of ethically concerned scientists](http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/),” (2) Kaas’ “[Bayesian Adjustment Does Not Defeat Existential Risk Charity](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/),” Yudkowsky’s  (3) “[The Robots, AI, and Unemployment Anti-FAQ](http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_antifaq/)” and (4) “[Pascal’s Muggle](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/),” and Muehlhauser’s (5) “[AGI Impact Experts and Friendly AI Experts](http://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/),” (6) “[When Will AI Be Created?](http://intelligence.org/2013/05/15/when-will-ai-be-created/)“, (7) “[Transparency in Safety-Critical 
Systems](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/),” (8) “[How effectively can we plan for future decades? (initial findings)](http://intelligence.org/2013/09/04/how-effectively-can-we-plan-for-future-decades/),” and (9) “[How well will policy-makers handle AGI? (initial findings)](http://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-agi-initial-findings/).”\n* **14 *short* analysis blog posts**: Yudkowsky’s (1) “[Five theses, two lemmas, and a couple of strategic implications](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/),” (2) “[After critical event W happens, they still won’t believe you](http://lesswrong.com/lw/hp5/after_critical_event_w_happens_they_still_wont/),” (3) “[Do Earths with slower economic growth have a better chance at FAI?](http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/),” and (4) “[Being Half-Rational About Pascal’s Wager is Even Worse](http://lesswrong.com/lw/h8m/being_halfrational_about_pascals_wager_is_even/),” and Muehlhauser’s (5) “[Friendly AI Research as Effective Altruism](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/),” (6) “[What is intelligence?](http://intelligence.org/2013/06/19/what-is-intelligence-2/)“, (7) “[What is AGI?](http://intelligence.org/2013/08/11/what-is-agi/)“, (8) “[AI Risk and the Security Mindset](http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/),” (9) “[Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness](http://intelligence.org/2013/10/03/proofs/),” (10) “[Richard Posner on AI Dangers](http://intelligence.org/2013/10/18/richard-posner-on-ai-dangers/),” (11) “[Russell and Norvig on Friendly AI](http://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/),” (12) “[From Philosophy to Math to Engineering](http://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/),” (13) 
“[Intelligence Amplification and Friendly AI](http://lesswrong.com/r/discussion/lw/iqi/intelligence_amplification_and_friendly_ai/),” and (14) “[Model Combination and Adjustment](http://lesswrong.com/r/lesswrong/lw/hzu/model_combination_and_adjustment/).”\n* **16 interviews with domain experts**:  (1) [James Miller](http://intelligence.org/2013/07/12/james-miller-interview/) on unusual incentives facing AGI companies, (2) [Roman Yampolskiy](http://intelligence.org/2013/07/15/roman-interview/) on AI safety engineering, (3) [Nick Beckstead](http://intelligence.org/2013/07/17/beckstead-interview/) on the importance of the far future, (4) [Benja Fallenstein](http://intelligence.org/2013/08/04/benja-interview/) on the Löbian obstacle to self-modifying systems, (5) [Holden Karnofsky](http://intelligence.org/2013/08/25/holden-karnofsky-interview/) on transparent research analyses, (6) [Stephen Hsu](http://intelligence.org/2013/08/31/stephen-hsu-on-cognitive-genomics/) on Cognitive Genomics, (7) [Laurent Orseau](http://intelligence.org/2013/09/06/laurent-orseau-on-agi/) on Artificial General Intelligence, (8) [Paul Rosenbloom](http://intelligence.org/2013/09/25/paul-rosenbloom-interview/) on Cognitive Architectures, (9) [Ben Goertzel](http://intelligence.org/2013/10/18/ben-goertzel/) on AGI as a Field, (10) [Hadi Esmaeilzadeh](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) on Dark Silicon, (11) [Bas Steunebrink](http://intelligence.org/2013/10/25/bas-steunebrink-on-sleight/) on Self-Reflective Programming, (12) [Markus Schmidt](http://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/) on Risks from Novel Biotechnologies, (13) [Robin Hanson](http://intelligence.org/2013/11/01/robin-hanson/) on Serious Futurism, (14) [Greg Morrisett](http://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/) on Secure and Reliable Systems, (15) [Scott Aaronson](http://intelligence.org/2013/12/13/aaronson/) on 
Philosophical Progress, and (16) [Josef Urban](http://intelligence.org/2013/12/21/josef-urban-on-machine-learning-and-automated-reasoning/) on Machine Learning and Automated Reasoning.[8](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_7_10756 \"There were also two very short interviews with Eliezer Yudkowsky: “Yudkowsky on Logical Uncertainty” and “Yudkowsky on ‘What can we do now?’“\")\n* One recorded and transcribed conversation about effective altruism in general, with other members of the effective altruism movement: [Effective Altruism and Flow-Through Effects](http://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/).\n\n\nMIRI staff members have varying opinions about the value and purpose of strategic and expository research. Speaking for myself, I supported or conducted the above research activities in order to:[9](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_8_10756 \"I have an additional goal for some of our outreach and research activities, which is to address difficult problems in epistemology, because they are more relevant to MIRI’s research than to (e.g.) business or the practice of “normal science” (in the Kuhnian sense). “Pascal’s Muggle” is one example. Also, some of our expository and strategic research doubles as general outreach, e.g. the popular interview with Scott Aaronson.\")\n\n\n1. **Test our assumptions** and try to understand the views of people who (might) disagree with us. *Examples*: “How effectively can we plan for future decades?”, “How well will policy-makers handle AGI?”, and the Greg Morrisett interview.\n2. **Learn new things** that can inform strategic action concerning existential risk and Friendly AI. *Examples*: “Algorithmic Progress in Six Domains,” the Hadi Esmaeilzadeh interview, and the Josef Urban interview.\n3. 
**Make it easier for other researchers to contribute**, by performing small bits of initial work on questions of strategic significance, or by explaining how an open question in superintelligence strategy could be studied in more depth. *Examples*: “Intelligence Explosion Microeconomics,” “Algorithmic Progress in Six Domains,” and “How effectively can we plan for future decades?”\n4. **Build relationships with researchers who might one day contribute** to strategic, expository, or Friendly AI research. *Examples*: many of the interviews.\n5. **Explain small “pieces of the puzzle” that contribute to MIRI-typical views** about existential risk and Friendly AI. *Examples*: “When Will AI Be Created?”, “Mathematical Proofs Improve But Don’t Guarantee…,” and the Nick Beckstead interview.\n\n\n### \n\n\n### How well did these efforts achieve their goals?\n\n\nWe have [not yet implemented](http://intelligence.org/2013/12/20/2013-in-review-operations/) quantitative methods for measuring how well our strategic and expository research efforts are meeting the goals at which they are aimed.[10](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_9_10756 \"Well, we can share some basic web traffic data. According to Google Analytics, the pages (of 2013’s strategic or expository research) with the most “unique pageviews” since they were created are: “When will AI be created?” (~15.5k), the Scott Aaronson interview (~13.5k), the Hadi Esmaeilzadeh interview (~13.5k), “The Robots, AI, and Unemployment Anti-FAQ” (~12k), “What is intelligence?” (~5k), “Pascal’s Muggle” (~5k), “A brief history of ethically concerned scientists” (~4.5k), “Intelligence explosion microeconomics” (~3.5k), and “From philosophy to math to engineering” (~3.5k). Naturally, this list is biased in favor of articles published earlier. 
Also, Google Analytics doesn’t track PDF downloads, so we don’t have numbers for those.\") For now, I can only share my subjective, qualitative impressions, based on my own reasoning and a few conversations I had with some people who follow our research closely, after showing them a near-complete draft of the previous section.\n\n\n**Re: goal (1)**. It’s difficult to locate cheap, strong tests of our assumptions. So, the research aimed at this goal conducted in 2013 either weakly confirmed some of our assumptions (e.g. see the Greg Morrisett interview[11](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_10_10756 \"E.g. see his statements “Yes, I completely agree with [the ‘Mathematical Proofs Improve…‘ post]” and “I think re-architecting and re-coding things will almost always lead to a win in terms of security, when compared to bolt-on approaches.”\") ) or could make only small steps toward providing good tests of our assumptions (e.g. see “How effectively can we plan for future decades?” and “How well will policy-makers handle AGI?”).\n\n\n**Re: goal (2)**. Similarly, it’s difficult to locate inexpensive evidence that robustly pins down the value of an important strategic variable (e.g. AI timelines, AI takeoff speed, or the strength of the “[convergent instrumental values](http://www.nickbostrom.com/superintelligentwill.pdf)” attractor in mind design space). Hence, research aimed at learning new things typically only provides small updates (for us, anyway), e.g. 
about the prospects for Moore’s law (the Hadi Esmaeilzadeh interview) and about the current state of automated mathematical reasoning (the Josef Urban interview).\n\n\nMy own reaction to the difficulty of obtaining additional high-likelihood-ratio evidence about long-term AI futures goes something like this:\n\n\n\n> Well, the good news is that humanity seems to have seized most of the low-hanging fruit about future machine superintelligence, which wasn’t the case 15 years ago. The bad news is that the low-hanging fruit alone doesn’t make it clear how we go about *winning*. But since the stakes are really high, we just have to accept that [long-term forecasting is hard](http://intelligence.org/2013/05/15/when-will-ai-be-created/), and then *try harder*. We need to get more researchers involved so more research can be produced, and we must be prepared to accept that it might take 10 PhD theses worth of work before we get a 2:1 Bayesian update about a strategically relevant variable. Also, it’s probably good to “marinate” one’s brain in relevant fields even if one isn’t sure which specific updates one will be able to make as a result, because filling one’s brain with facts about relevant fields will likely improve one’s intuitions in general about those fields and adjacent fields.[12](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_11_10756 \"This last bit is part of my motivation for listening to so many nonfiction audiobooks since September 2013.\")\n> \n> \n\n\n**Re: goal (3)**. 
I don’t have a good sense of how useful MIRI’s 2013 strategic and expository research has been for other researchers, but such effects typically require several years to materialize.[13](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/#footnote_12_10756 \"“Intelligence Explosion Microeconomics” enabled “Algorithmic Progress in Six Domains,” but it was still the case that MIRI had to commission “Algorithmic Progress in Six Domains.”\") I’m optimistic about this work enabling further research by others simply because that’s how things typically work in other fields of research, and I don’t see much reason to think that superintelligence strategy will be any different.\n\n\n**Re: goal (4)**. Yes, many of the interviews built new relationships with helpful domain experts.\n\n\n**Re: goal (5)**. Again, I don’t have good measures of the effects here, but I do receive frequent comments from community members that “such-and-such post was really clarifying.” Some of the analyses are also linked regularly by other groups. For example, both [GiveWell](http://www.givewell.org/) and [80,000 Hours](http://80000hours.org/) have linked to our [model combination post](http://lesswrong.com/lw/hzu/model_combination_and_adjustment/) when explaining their own research strategies.\n\n\n### Looking ahead to 2014\n\n\nAs discussed above and in my [operations review](http://intelligence.org/2013/12/20/2013-in-review-operations/), we still need to find better ways to measure the impact of our research. 
A plausible first-try measurement technique would be to survey a subset of the people we hope to impact in various ways, and ask how our research has impacted them.\n\n\nEven before we can learn from improved impact measurement, however, I think I can say a few things about what I’ve learned about doing strategic and expository research, and what we plan to do differently in 2014.\n\n\n**First,** *interviews with domain experts are a highly efficient way to achieve some of the goals I have for expository and strategic research*. Each interview required only a few hours of staff time, whereas a typical “short” analysis post cost between 5 and 25 person-hours, and a typical “in-depth” analysis post cost between 10 and 60 person-hours.\n\n\nIn 2013 we published 16 domain expert interviews between July 1st and December 30th, an average of 2.66 interviews per month. In 2014 I intend to publish 4 or more interviews per month on average.\n\n\n**Second,** *expository research tends to be more valuable per unit effort than new strategic research*. MIRI (in conjunction with our collaborators at FHI) has an uncommonly large backlog of strategic research that has been “completed” but not *explained clearly* anywhere. Obviously, it takes less effort to explain already-completed strategic research than it takes to conduct original strategic research and then *also* explain it.\n\n\n**Third,** *we can prioritize expository (and sometimes strategic) research projects by dialoguing with intelligent critics who are representative of populations we want to influence* (e.g. AI researchers, mega-philanthropists) and then preparing the writings most relevant to their concerns. 
We can then dialogue with them again after they’ve read the new exposition, and see whether that particular objection remains, and if so why, and if not then what other objections remain — which can in turn inform our prioritization of future writings, and also potentially reveal flaws in our models.\n\n\n**Fourth**, *students want to know which research projects they could do that would help clarify superintelligence strategy*. Unfortunately, experienced professors are not yet knocking down our door to ask us which papers they could research and write to clarify superintelligence strategy, but many *graduate students* are. Also, I’ve had a few conversations with graduate student *advisors* who said they have to put lots of time into helping their students find good projects, and that it would be helpful if somebody else prepared research project proposals suitable for their students and their department.\n\n\nFurthermore, there is *some* historical precedent for this strategy working, even within the young, narrow domain of superintelligence strategy. The clearest example is that of [Nick Beckstead](http://intelligence.org/2013/07/17/beckstead-interview/), who wrote a useful philosophy dissertation on the importance of shaping the far future, in part due to conversations with [FHI](http://www.fhi.ox.ac.uk/). João Lourenço is currently writing a philosophy dissertation about the prospects for [moral enhancement](http://lesswrong.com/lw/7nl/moral_enhancement/), in part due to conversations with FHI and MIRI. Jeremy Miller is in the early planning stages of a thesis project about universal measures of intelligence, in part due to conversations with MIRI. I think there are other examples, but I haven’t been able to confirm them yet.\n\n\nSo, in 2014 we plan to publish short descriptions of research projects which could inform superintelligence strategy. 
This will be much easier to do once Nick Bostrom’s [*Superintelligence*](http://ukcatalogue.oup.com/product/9780199678112.do) book is published, so we’ll probably wait until that happens this summer.\n\n\n**Fifth**, *Nick Bostrom’s forthcoming scholarly monograph on machine superintelligence provides a unique opportunity to engage more researchers in superintelligence strategy*. As such, some of our “outreach to potential strategic researchers” work in 2014 will consist in helping to promote Bostrom’s book. We also plan to release a reading guide for the book, to increase the frequency with which people finish, and benefit from, the book.\n\n\n\n\n---\n\n1. Note that what I call “MIRI’s strategic research” or “superintelligence strategy research” is a superintelligence-focused subset of what [GiveWell](http://www.givewell.org/) would call “[strategic cause selection research](http://blog.givewell.org/2012/05/02/strategic-cause-selection/)” and [CEA](http://home.centreforeffectivealtruism.org/) would call “[cause prioritization research](http://80000hours.org/blog/285-a-framework-for-strategically-selecting-a-cause).”\n2. As usual, we also did significant strategic research in 2013 that is not public-facing (at least not yet), for example 100+ hours of feedback on various drafts of Nick Bostrom’s forthcoming book [*Superintelligence: Paths, Dangers, Strategies*](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/), 15+ hours of feedback on early drafts of Robin Hanson’s [forthcoming book about whole brain emulation](http://www.overcomingbias.com/2013/09/wanted-know-it-some-critics.html), and much work on forthcoming MIRI publications.\n3. Yudkowsky labeled this as “open problem in Friendly AI #1”, but I categorize it as strategic research rather than Friendly AI research.\n4. At the time of publication, Joshua Fox was a MIRI research associate.\n5. 
“Why We Need Friendly AI” was published in an early 2014 issue of the journal *Think*, but it was released online in 2013.\n6. The “Friendly Artificial Intelligence” chapter is merely an abridged version of Yudkowsky’s earlier “[Artificial Intelligence as a Positive and Negative Factor in Global Risk](https://intelligence.org/files/AIPosNegFactor.pdf).”\n7. These chapters were written during 2011 and 2012, but not published in the book until 2013.\n8. There were also two very short interviews with Eliezer Yudkowsky: “[Yudkowsky on Logical Uncertainty](http://intelligence.org/2013/01/30/yudkowsky-on-logical-uncertainty/)” and “[Yudkowsky on ‘What can we do now?’](http://intelligence.org/2013/01/30/yudkowsky-on-what-can-we-do-now/)“\n9. I have an additional goal for some of our outreach and research activities, which is to address difficult problems in epistemology, because they are more relevant to MIRI’s research than to (e.g.) business or the practice of “normal science” (in the Kuhnian sense). “Pascal’s Muggle” is one example. Also, some of our expository and strategic research doubles as general outreach, e.g. the popular interview with Scott Aaronson.\n10. Well, we can share some basic web traffic data. According to Google Analytics, the pages (of 2013’s strategic or expository research) with the most “unique pageviews” since they were created are: “When will AI be created?” (~15.5k), the Scott Aaronson interview (~13.5k), the Hadi Esmaeilzadeh interview (~13.5k), “The Robots, AI, and Unemployment Anti-FAQ” (~12k), “What is intelligence?” (~5k), “Pascal’s Muggle” (~5k), “A brief history of ethically concerned scientists” (~4.5k), “Intelligence explosion microeconomics” (~3.5k), and “From philosophy to math to engineering” (~3.5k). Naturally, this list is biased in favor of articles published earlier. Also, Google Analytics doesn’t track PDF downloads, so we don’t have numbers for those.\n11. E.g. 
see his statements “Yes, I completely agree with [the ‘[Mathematical Proofs Improve…](http://intelligence.org/2013/10/03/proofs/)‘ post]” and “I think re-architecting and re-coding things will almost always lead to a win in terms of security, when compared to bolt-on approaches.”\n12. This last bit is part of my motivation for listening to [so many nonfiction audiobooks](http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/9zkx) since September 2013.\n13. “Intelligence Explosion Microeconomics” enabled “Algorithmic Progress in Six Domains,” but it was still the case that *MIRI* had to commission “Algorithmic Progress in Six Domains.”\n\nThe post [2013 in Review: Strategic and Expository Research](https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-08T17:24:11Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "91c2adda9ed934fc91fb7d8b0e6042b7", "title": "MIRI’s Experience with Google Adwords", "url": "https://intelligence.org/2014/02/06/miris-experience-with-google-adwords/", "source": "miri", "source_type": "blog", "text": "![Adwords](https://intelligence.org/wp-content/uploads/2014/02/Adwords_w600.jpg)\nIn late 2011, MIRI opened a [Google Grants](http://www.google.com/grants/) account, which provides $10k/mo in free Google Adwords for nonprofits. Kevin Fischer and I tweaked our Adwords account over several months until we successfully spent the full $10,000/mo 3 months in a row.\n\n\nThis qualified us for Grants Pro, and we are still grandfathered into that (now unavailable) $40k/mo level of free Adwords. The limit is actually $1350/day which is challenging to spend wisely, even with the recently increased max bidding level of $2/click. But with more tweaking we are now able to spend nearly all of it each month. 
Kevin and I probably spent 100 hours between us over the past few years optimizing this, but much of it was done while we were both volunteering for MIRI. Ongoing tweaking requires only an hour or less of my time per month.\n\n\nThe traffic is large (2/3 of our total) but marginal in quality. In the past 6 months, we drove ~250,000 visitors to MIRI’s site via Google Adwords.\n\n\n* 5000 people read at least one of our research papers\n* 500 signed up for the newsletter (our true goal since it gives them a chance to hear from us again)\n* 150 went to the [volunteer site](http://mirivolunteers.org/) (and almost all didn’t sign up once they got there)\n* 100 applied to attend research workshops (no qualified candidates yet)\n\n\nOur impression is that MIRI has an especially hard time making good use of Google Adwords, because there is such a gulf of inferential distance between what we do and what people already know. Many [things we could show someone who had never heard about us](http://vimeo.com/82527075) to try to have a strong impact would plausibly be [more misleading than helpful](http://www.youtube.com/watch?v=3C-9rNqLxGw). We expect most charities could make higher-value use of Google Adwords than we can for this reason, including e.g. effective altruism meta-charities or animal welfare groups.\n\n\nPaying $16 for each newsletter subscriber is pretty bad, and it’s not remotely how we would spend $1350/day if it was unrestricted money. But, Adwords creates some value on the margin and we’re glad Google includes us in the program and that we’re able to reach new people about our work by using it. It’s 1000 new people getting our newsletter every year, and more eyeballs on our content. Some of them might pass it to someone else who would be good for a workshop, or something.\n\n\nAlso a word of warning to other nonprofits: we tried many times to get interns or volunteers to improve our Adwords account, but nobody was good at it. 
Lots of remote supporters (some of whom swore they were amazing at AdWords) were given access and didn’t make a single change to a single campaign, much less create new experiments and find improvements. We also briefly tried paying a contractor to make new ads and they at least tried things, but their ads didn’t generate value so we had to let them go.\n\n\nIf charities work with volunteers or supporters to improve their Adwords accounts, I’d recommend requiring that volunteers produce proof that they are currently managing at least one other large Adwords account successfully (or only ask volunteers for cheap things like ideas and don’t expect any help from them doing the much more costly and difficult work of actually implementing their ideas).\n\n\nThe post [MIRI’s Experience with Google Adwords](https://intelligence.org/2014/02/06/miris-experience-with-google-adwords/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-06T09:33:22Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "b49088ee4919a5f9589fbd2c9166f8b1", "title": "Careers at MIRI", "url": "https://intelligence.org/2014/02/03/careers-at-miri/", "source": "miri", "source_type": "blog", "text": "We’ve published a new [Careers](http://intelligence.org/careers/) page, which advertises current job openings at MIRI.\n\n\nAs always, we’re seeking **math researchers** to make progress on Friendly AI theory. 
If you’re interested, the next step is not to apply for the position directly, but to [apply to attend a future MIRI research workshop](http://intelligence.org/get-involved/#workshop).\n\n\nWe are also accepting applications for a **grants manager**, a **science writer**, and an **executive assistant**.\n\n\n[Visit our Careers page to apply](http://intelligence.org/careers/).\n\n\n[![careers](https://intelligence.org/wp-content/uploads/2014/02/careers.png)](http://intelligence.org/careers/)\nThe post [Careers at MIRI](https://intelligence.org/2014/02/03/careers-at-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-04T04:10:44Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a8460595487813bbc5f7b347a5c6aeed", "title": "Ronald de Wolf on Quantum Computing", "url": "https://intelligence.org/2014/02/03/ronald-de-wolf-on-quantum-computing/", "source": "miri", "source_type": "blog", "text": "![Ronald de Wolf portrait](http://intelligence.org/wp-content/uploads/2014/02/De_Wolf_w150.jpg)[Ronald de Wolf](http://homepages.cwi.nl/~rdewolf) is a senior researcher at CWI and a part-time full professor at the University of Amsterdam. He obtained his PhD there in 2001 with a thesis about quantum computing and communication complexity, advised by [Harry Buhrman](http://homepages.cwi.nl/~buhrman) and [Paul Vitanyi](http://homepages.cwi.nl/~paulv). Subsequently he was a postdoc at UC Berkeley. His scientific interests include quantum computing, complexity theory, and learning theory.\n\n\nHe also holds a Master’s degree in philosophy (where his thesis was about Kolmogorov complexity and Occam’s razor), and enjoys classical music and literature.\n\n\n\n**Luke Muehlhauser**: Before we get to quantum computing, let me ask you about philosophy. 
Among other topics, your [MSc thesis](http://homepages.cwi.nl/%7Erdewolf/publ/philosophy/phthesis.pdf) discusses the relevance of computational learning theory to philosophical debates about [Occam’s razor](http://en.wikipedia.org/wiki/Occam%27s_razor), which is the principle advocating that “among the theories, hypotheses, or explanations that are consistent with the facts, we are to prefer simpler over more complex ones.”\n\n\nThough many philosophers and scientists adhere to the principle of Occam’s razor, it is often left ambiguous exactly what is meant by “simpler,” and also why this principle is justified in the first place. But in your thesis you write that “in certain formal settings we can, more or less, *prove* that certain versions of Occam’s Razor work.”\n\n\nPhilosophers are usually skeptical when I argue for [K-complexity](http://en.wikipedia.org/wiki/Kolmogorov_complexity) versions of Occam’s razor, as you do. For example, USC’s Kenny Easwaran [once wrote](https://www.facebook.com/lukeprog/posts/10103841562829230?stream_ref=10), “I’ve never actually seen how [a K-complexity based simplicity measure] is supposed to solve anything, given that it always depends on a choice of universal machine.”\n\n\nHow would you reply, given your optimism about justifying Occam’s razor “in certain formal settings”?\n\n\n\n\n\n---\n\n\n**Ronald de Wolf**: I would treat Occam’s razor more as a rule of thumb than as a formal rule or theorem. Clearly it’s vague, and clearly there are cases where it doesn’t work. Still, many scientists have been guided by it to good effect, often equating simplicity with beauty (for example Einstein and Dirac). 
Psychologically, invoking Occam will only be effective if there is some shared notion of simplicity; maybe not to quantify simplicity, but at least to be able to rank theories according to their simplicity.\n\n\nYou could try to use Kolmogorov complexity as your “objective” measure of simplicity, and in some simplified cases this makes perfect sense. In my MSc thesis I surveyed a few known cases where it provably does. However, such cases do not provide convincing proof of Occam’s razor “in the real world”. They are more like thought experiments, where you strip away everything that’s superfluous in order to bring out a certain point more clearly.\n\n\nIn practice there are at least three issues with using Kolmogorov complexity to measure simplicity. First, it requires you to write down your theory (or whatever it is whose simplicity you’re quantifying) over some fixed alphabet, say as a string of bits. It’s often kind of subjective which background assumptions to count as actually part of your theory. Second, as Easwaran rightly says, KC depends on the choice of universal Turing machine w.r.t. which it is defined. However, I don’t think this is such a big issue. If you choose some reasonably efficient universal Turing machine and consider the KC of reasonably long strings, the constant difference incurred by the choice of universal Turing machine will be relatively small. Thirdly and possibly most importantly, KC is not computable, not even approximable by any computational process (even a very slow one) with any approximation-guarantees. This rules out using KC itself in practical settings.\n\n\nHowever, the core idea that compression somehow corresponds to detection of patterns in your data is a perfectly valid one, and you can use it in practice if you’re willing to base “compression” on imperfect but practical programs like gzip. 
This loses the theoretical optimality guaranteed by KC (which you can view as the “ultimate compression”) but it gives you a tool for data mining and clustering that’s often quite good in practice. See for example [here](http://arxiv.org/ftp/arxiv/papers/0809/0809.2553.pdf). Such practical approaches are like heuristics that try to approach, in some weak sense, the ideal but unreachable limit-case of KC.\n\n\n\n\n---\n\n\n**Luke**: Do you think one can use Occam-like principles to choose between, for example, the various [explanations of quantum mechanics](http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics), since they appear to make essentially the same predictions about what we should observe?\n\n\n\n\n---\n\n\n**Ronald**: In principle you could, but to my limited understanding (I’m not following this debate closely), the main interpretations of QM all suffer from having some seemingly superfluous aspects. The standard interpretation that a measurement “collapses the wave function” to a probabilistically-chosen measurement outcome treats “observers” as a special category of quantum objects, or “observation/measurement” as a special category of quantum process. Before you know it, people will bring consciousness into the picture and mysticism beckons. It seems to me that treating the “observer” as a special category violates Occam’s razor. Alternatively you can take the position that measurement is nothing special but just another interaction between quantum systems (observer and observed system). This is sometimes known as the “church of the larger Hilbert space”. It’s mathematically pleasing because now there’s only this smooth, coherent, and even deterministic evolution of the whole universe. However, now you will have many different “branches” of the superposition that is the world’s state vector, which very quickly leads to the multiverse view of many worlds. 
A metaphysics that postulates infinitely many worlds existing in superposition doesn’t strike me as very Occam-compliant either.\n\n\nThen there is the instrumentalist “shut up and calculate” school. This is minimalistic in an Occam-pleasing sense, but seems to substantially impoverish the scientific endeavour, whose aim should not just be to predict but also to explain and give some picture of the world. All interpretations of QM are problematic in their own way, and choosing between them based on Occam’s razor assumes some shared idea of what simplicity is as well as a shared view of the goals of science, which we seem to lack here.\n\n\n\n\n---\n\n\n**Luke**: Most of your work these days is in quantum computing and communication. Quantum computing is an interesting field, since its researchers design algorithms, error correction techniques, etc. for machines that cannot yet be built. In this sense, I tend to think of quantum computing as an “[exploratory engineering](http://en.wikipedia.org/wiki/Exploratory_engineering)” discipline, akin to pre-Sputnik astronautics, pre-ENIAC computer science, and Eric Drexler’s *[Nanosystems](http://www.amazon.com/Nanosystems-Molecular-Machinery-Manufacturing-Computation/dp/0471575186/)*. Do you think that’s a fair characterization? Do you or your colleagues in quantum computing get much criticism that such work is “too speculative”? (For the record, that’s not *my* view.)\n\n\n\n\n---\n\n\n**Ronald**: The two main questions in quantum computing are (1) can we build a large-scale quantum computer and (2) what could it do if we had one. I think your term “exploratory engineering” fits the work on the first question; small quantum computers on a handful of qubits were already built a decade ago, so it’s not pure theory anymore. I myself am a theoretical computer scientist focusing on the second question. 
While I think this is more mathematics than engineering, you can certainly compare it to computer science in the 1930s: at that point the theoretical model of a (classical) computer had already been introduced by Alan Turing, but no large-scale computers had been built yet. You could already design algorithms for Turing machines on paper, and you could even prove that such computers could *not* solve certain problems (as Turing famously did for the halting problem). We are doing such work on quantum computing now: designing quantum algorithms and communication protocols that are much faster than their classical counterparts for some computational problems, and on the other hand proving that quantum computers do not give you a speed-up for many other problems. Much of the relevance of this is of course contingent upon the eventual construction of a large QC. Interestingly, however, some of the work we are doing has spin-offs for the analysis of classical computing, and that is relevant today irrespective of progress on building a QC.\n\n\nRegarding the possible charge of being “too speculative”: in the mid-1990s, right after Peter Shor published his groundbreaking quantum algorithm for factoring large numbers into their prime factors (which breaks a lot of cryptography), there was a lot of skepticism, particularly among physicists who thought that this would never fly. They expected that any attempt at implementing quantum bits and operations would have so much noise and errors that it would quickly decohere to a classical computer. Of course they had good reasons to be skeptical — manipulating something as small as an electron is extremely hard, much harder than manipulating a vacuum tube was in the 1940s and 1950s. 
The worries about noise and imperfections were partially answered soon after by the development (partially by Shor himself) of quantum error-correction and fault-tolerant computing, which roughly says that if the noise is not too large and not too malicious, your quantum computer can correct for it. The only way these worries can be fully overcome is to actually build a large-scale QC. My impression is that experimental physicists are making slow but sure progress on this, and are becoming more optimistic over time that this will actually be realized within one or two decades. So, sure this is a speculative endeavour (most long-term research is), but not unreasonably so.\n\n\n\n\n---\n\n\n**Luke**: What heuristics do you and your colleagues in quantum computing use to decide what to work on, given QC’s long-term and somewhat speculative nature? Presumably you need to make uncertain predictions about which types of quantum computers are most likely to be built, what the solutions to known obstacles might look like, etc.? (I ask because MIRI aims to conduct long-term research that is *more* speculative than quantum computing.)\n\n\n\n\n---\n\n\n**Ronald**: Most of the time we study how well quantum computers can solve classical computational problems, problems with classical inputs (such as a large number N) and classical outputs (such as the prime factors of N). Computer science has over decades been defining and studying the complexity of lots of interesting and useful computational problems and models, and often we start from there: we take an existing computational problem and try to find quantum tricks to improve over the best classical solutions. In some cases we succeed, designing quantum ways to outperform classical computers, and in some cases we can prove that a QC can’t do better than a classical computer. Of course it’s hard to predict what quantum tricks (if any) might help for a specific problem, but we have some general tools at our disposal. 
For example, quantum computers are good at detecting periodic patterns (that’s the core of Shor’s algorithm); they can search faster (Grover’s algorithm); you can hide information by encoding it in an unknown basis (quantum cryptography); you can pack a doubly-exponential number of quantum states in an n-qubit space (quantum fingerprinting), etc. A lot of work is based on skillfully combining and applying such known quantum tools, and once in a while people find new tricks to add to our toolbox. Of course, there is also work of a more specific quantum nature, which is not just throwing quantum tricks at classical problems. For example, a lot of work has been done recently on testing whether given quantum states are properly entangled (and hence can be used, for instance, in quantum cryptography).\n\n\nWe typically abstract away from the details of the specific physical system that will implement the quantum computer. Instead we just focus on the mathematical model, with quantum bits and a well-defined set of elementary operations (“gates”) that we can perform on them. It doesn’t really matter whether the qubits will be implemented as electron spins, or as photon polarizations, or as energy levels of an atom — from the perspective of the model, it only matters that a qubit has well-defined 0 and 1 states and that we can form superpositions thereof. Similarly, for classical computers it doesn’t really matter whether you program in C or Java or assembler; all such programming languages can efficiently simulate each other. And you don’t care about the precise voltages used to implement bits physically, as long as each bit has stable and clearly distinguished 0 and 1 values.\n\n\nAbstracting away from such implementation details is justified when we have a large-scale quantum computer, because different varieties of quantum computers will be able to simulate each other with only moderate overhead in terms of additional number of qubits and operations needed. 
For example, for the purposes of designing quantum algorithms it’s convenient to assume that you can interact any pair of qubits, even when they are far apart; in the reality of physical experiments it’s much simpler to allow only nearest-neighbor interactions between qubits. We can design algorithms for the first model and then implement them in the nearest-neighbor model by inserting a few swap-operations to move interacting qubits close together. However, this “moderate overhead” is actually quite significant as long as we do not yet have a large-scale quantum computer. It’s quite likely that on the slow road towards a large QC we will first have QCs with a few dozen or a few hundred qubits (the current state of the art is a few qubits). In this case we can’t be too wasteful and probably should design algorithms that are optimized for specific physical implementations. It is actually a very interesting question to find problems where a 50- or 100-qubit QC can already outperform classical computers in some noticeable way. Such problems would be the benchmark on which intermediate-size QCs could be tested.\n\n\nThe point is that once you have a large number of qubits available, the differences between different physical implementations/architectures don’t matter too much, because they are all equivalent up to small overheads (needed to simulate one variant using another). But when we have only intermediate-size QCs available (of, say, a few dozen or a few hundred qubits), then these overheads do make a big difference, and we need to carefully optimize our quantum algorithm for performing on the specific physical implementation that’s actually available. 
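This swap-insertion step is mechanical enough to sketch directly. The toy "compiler" below (a hypothetical representation: a circuit is a list of qubit-index pairs on a linear array) expands each long-range two-qubit gate into nearest-neighbor SWAPs, which makes the overhead of 2(d−1) SWAPs per gate at distance d explicit:

```python
def to_nearest_neighbor(gates):
    """Expand long-range two-qubit gates for a linear nearest-neighbor array.

    Each gate on qubits (i, j) becomes: SWAPs that walk the lower qubit
    next to the higher one, the gate itself, then the same SWAPs in
    reverse to restore the original qubit positions.
    """
    out = []
    for i, j in gates:
        lo, hi = sorted((i, j))
        walk = [("SWAP", k, k + 1) for k in range(lo, hi - 1)]
        out += walk + [("GATE", hi - 1, hi)] + walk[::-1]
    return out

circuit = to_nearest_neighbor([(0, 3), (1, 2)])
# the distance-3 gate costs 4 extra SWAPs; the adjacent gate costs none
```

On a large machine this constant-factor overhead is tolerable; on a 50-qubit device it can be the difference between beating a classical computer and not, which is why intermediate-size algorithms need implementation-specific optimization.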
In this respect quantum computing seems quite different from most other future technologies: somehow we’re better able to predict the power of this technology for the long term (when we’ll hopefully have a large-scale QC available and can essentially ignore implementation details) than for the short and medium term (while we only have small-scale QCs with quirky limitations).\n\n\n\n\n---\n\n\n**Luke**: My next question leaps from quantum computing to technological forecasting. What is your subjective probability that we’ll have a 500-qubit quantum computer, which is [uncontroversially](http://arxiv.org/abs/1401.2910) a quantum computer, within the next 20 years? And, how do you reason about a question like that?\n\n\n\n\n---\n\n\n**Ronald**: Quite high, let’s say probability greater than 2/3. That’s the typical computer science threshold for a “bounded-error” algorithm. From a theoretical perspective, I don’t think we know of any fundamental obstacles to building a large-scale QC, and the threshold theorem from fault-tolerant QC assures us we can deal with moderate amounts of noise and errors. Clearly building a QC is an exceedingly hard engineering problem, but my impression is that experimentalists are making slow but sure progress. There are basically three possible scenarios here:\n\n\n1. Someone constructs a large QC\n2. We discover a fundamental problem with quantum mechanics (which would be extremely interesting new physics!)\n3. Experimentalists muddle through without too much progress until either they or the funding agencies lose faith and give up.\n\n\nThe first scenario seems the most plausible to me. 
I should qualify this by saying that I’m not a certified physicist, let alone a certified *experimental* physicist, so this opinion is partly based on hearsay — but I do have some confidence in the progress that’s happening in places like MIT, NIST, Yale, Delft,… The recent paper you refer to casts doubt upon the controversial D-Wave quantum computer, which has gotten a lot of press in the last few years. For commercial reasons they prioritize quantity (=number of available qubits) over quality (=the coherence and “true quantum nature” of those qubits), and their machines seem too noisy to have useful quantum computing power.\n\n\n\n\n---\n\n\n**Luke**: Does that mean we probably need to purge Earth of [Shor](http://en.wikipedia.org/wiki/Shor%27s_algorithm)-breakable crypto-security, and transition to [post-quantum cryptography](http://en.wikipedia.org/wiki/Post-quantum_cryptography), within ~20 years?\n\n\n\n\n---\n\n\n**Ronald**: I think that would be a wise precaution, at least for important or sensitive data. There are at least two ways to handle this. We could either stick with public-key cryptography but replace Shor-breakable problems like factoring and discrete logs by problems that seem to be hard to crack even for QC; lattice problems are an oft-mentioned candidate. Or we could use quantum cryptography. Neither is as efficient as RSA, but at least they’re more secure. It makes sense to start this transition already now, even though there’s no QC yet: the security services (and, who knows, maybe the mafia too) are probably hoovering up RSA-encrypted communications that they store for the time being, waiting for the QC that will allow them to decrypt these messages later. 
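The store-now, decrypt-later strategy works because RSA’s security rests on the hardness of factoring, which is precisely what Shor’s algorithm breaks. A toy illustration with textbook-sized numbers (real moduli are thousands of bits, where the trial division below is hopeless but a large QC running Shor’s algorithm would not be):

```python
def crack_rsa(n, e, ciphertext):
    """Recover an RSA plaintext by factoring the modulus.

    Shor's algorithm does the factoring step in polynomial time on a QC;
    here we use trial division, which only works for toy-sized moduli.
    """
    p = next(d for d in range(2, n) if n % d == 0)   # factor n = p * q
    q = n // p
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)                              # private exponent
    return pow(ciphertext, d, n)                     # decrypt

# Toy parameters: n = 61 * 53 = 3233, public exponent e = 17
n, e = 3233, 17
msg = 65
ct = pow(msg, e, n)            # the "intercepted" ciphertext
assert crack_rsa(n, e, ct) == msg
```

Anyone holding the ciphertext today only needs the factoring step to become feasible at some point in the future, which is why the transition matters even before a large QC exists.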
So even today’s communication is not safe from a future QC.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Ronald!\n\n\nThe post [Ronald de Wolf on Quantum Computing](https://intelligence.org/2014/02/03/ronald-de-wolf-on-quantum-computing/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-03T09:00:13Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "5fab497daff6a8be86d7589daf370827", "title": "Robust Cooperation: A Case Study in Friendly AI Research", "url": "https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/", "source": "miri", "source_type": "blog", "text": "![robots shaking hands (cropped)](https://intelligence.org/wp-content/uploads/2014/01/robots-shaking-hands-cropped.jpg)\nThe paper “[Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic](http://arxiv.org/abs/1401.5577)” is among the clearer examples of theoretical progress produced by explicitly [FAI](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence)-related research goals. What can we learn from this case study in Friendly AI research? How were the results obtained? How did the ideas build on each other? Who contributed which pieces? Which kinds of synergies mattered?\n\n\nTo answer these questions, I spoke to many of the people who contributed to the “robust cooperation” result.\n\n\nI’ll begin the story in December 2011, when Vladimir Slepnev (a Google engineer in Zurich) posted [A model of UDT with a halting oracle](http://lesswrong.com/lw/8wc/a_model_of_udt_with_a_halting_oracle/), representing joint work with Vladimir Nesov (a computer science grad student in Moscow).[1](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/#footnote_0_10723) This post, arguably for the first time,[2](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/#footnote_1_10723) presented a formal model of Wei Dai’s [updateless decision theory](http://wiki.lesswrong.com/wiki/Updateless_decision_theory) (UDT) and showed that UDT agents would “win” when presented with Newcomb’s Problem — *if* the universe program and its agent sub-program had access to a halting oracle, anyway. Nisan Stiennon (a math grad student at Stanford) applied Slepnev’s formalization to the problem of proving cooperation using Peano Arithmetic in [Formulas of arithmetic that behave like decision agents](http://lesswrong.com/lw/9o7/formulas_of_arithmetic_that_behave_like_decision/) (Feb. 2012).[3](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/#footnote_2_10723)\n\n\nThe success of these two posts in formalizing UDT inspired Patrick LaVictoire (a math postdoc in Madison) to attempt a “semi-formal analysis” of [timeless decision theory](http://wiki.lesswrong.com/wiki/Timeless_decision_theory) (TDT), an earlier decision theory invented by Eliezer Yudkowsky (MIRI’s founder), which itself had been a significant inspiration for UDT. 
After [three](http://lesswrong.com/lw/aq9/decision_theories_a_less_wrong_primer/) [setup](http://lesswrong.com/lw/axl/decision_theories_a_semiformal_analysis_part_i/) [posts](http://lesswrong.com/lw/az6/decision_theories_a_semiformal_analysis_part_ii/), LaVictoire thought he had successfully formalized something *kind of* like TDT in [April 2012](http://lesswrong.com/lw/b7w/decision_theories_a_semiformal_analysis_part_iii/).\n\n\nLaVictoire didn’t get much reaction from other TDT/UDT researchers, so when he visited the SF Bay Area for a [CFAR workshop](http://rationality.org/workshops/) in July 2012, he tracked down Yudkowsky, Stiennon, Paul Christiano (a computer science grad student at Berkeley), and several others to talk to them about his attempt to formalize TDT. Their reaction was positive enough that LaVictoire was encouraged to keep working on the approach.\n\n\nLaVictoire also discussed his work with Slepnev when Slepnev visited the Bay Area in August 2012, and Slepnev pointed out that LaVictoire’s attempted formalization of TDT (now called “Masquerade”) was [fatally flawed](http://lesswrong.com/lw/e94/decision_theories_part_35_halt_melt_and_catch_fire/) for Löbian reasons. But in September 2012, LaVictoire was able to [patch the problem](http://lesswrong.com/lw/ebx/decision_theories_part_375_hang_on_i_think_this/) by having Masquerade escalate between different formal systems. At this point, LaVictoire began writing an early draft of the “[Robust Cooperation](http://arxiv.org/abs/1401.5577)” paper.\n\n\nSlepnev insisted on the importance of optimality results, so later that month LaVictoire [came up with](http://lesswrong.com/lw/ebx/decision_theories_part_375_hang_on_i_think_this/7gah) a candidate optimality notion, and then in October noticed that Masquerade itself failed to be optimal by that definition. 
This was roughly the state of things when MIRI’s [April 2013 workshop](http://intelligence.org/workshops/#april-2013) began.\n\n\nEarly in the workshop, LaVictoire gave a Masquerade tutorial to the other participants. Tweaking Masquerade eventually led to the concept of modal agents, and LaVictoire and Mihály Barasz (a Google engineer in Zurich) began looking for ways to mechanically validate the actions of such agents against one another. Eventually, Barasz and Marcello Herreshoff (a Google engineer in the Bay Area) developed a [model checker](https://github.com/klao/provability/blob/master/modal.hs) for modal agent interactions, so that agents’ choices against other agents could be proved mechanically.\n\n\nNear the end of the April workshop, Christiano developed PrudentBot, which is in some sense the “star” of the current paper. Additional contributions were made by Yudkowsky, Benja Fallenstein (a grad student at Bristol University), and others during the workshop. LaVictoire updated the draft paper with the April workshop’s results, and [posted it to Less Wrong](http://lesswrong.com/lw/hmw/robust_cooperation_in_the_prisoners_dilemma/) in June 2013.\n\n\nLater, at MIRI’s [September 2013 workshop](http://intelligence.org/workshops/#september-2013), Kenny Easwaran (a philosopher at USC) found that it was harder than LaVictoire had expected to prove that any unexploitable agent must eventually fail to optimize against some kind of WaitFairBot. Herreshoff worked on patching this, but the proof was ballooning that section of the paper beyond recognition for a minor result, so LaVictoire decided to remove it from the paper.\n\n\nIn December 2013, Fallenstein found that the paper didn’t adequately show that the actions of two modal agents depend solely on their modal description, and he introduced a set of patches for this. 
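An aside on what these modal agents are computationally: their choices can be checked mechanically because the provability box can be evaluated level by level (the Kripke-semantics trick for the logic GL that the Haskell model checker exploits in general). Below is a toy Python sketch of that idea for simple agents like FairBot; the function and agent names are illustrative, and PrudentBot would need a richer setup than this:

```python
def play(agent_a, agent_b, depth=10):
    """Evaluate two modal agents against each other, level by level.

    The provability box is read as: box(phi) holds at level w iff phi
    held at every level below w (vacuously true at level 0). For agents
    of low modal rank, the top-level values are the fixed point.
    """
    hist_a, hist_b = [], []          # hist_x[w]: does x cooperate at level w?
    for _ in range(depth):
        a = agent_a(hist_b)          # each agent sees only lower levels
        b = agent_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a[-1], hist_b[-1]    # True = cooperate, False = defect

fairbot = lambda opp: all(opp)       # cooperate iff box(opponent cooperates)
defectbot = lambda opp: False        # never cooperate
cooperatebot = lambda opp: True      # always cooperate
```

In this sandbox `play(fairbot, fairbot)` yields mutual cooperation, the Löbian result, while `play(fairbot, defectbot)` yields mutual defection, so FairBot is not exploited.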
LaVictoire modified the paper once more, and then, with the consent of his co-authors, [uploaded the revised paper to arxiv](http://arxiv.org/abs/1401.5577) in January 2014.\n\n\nWhat, then, is the meaning and significance of the results in the “Robust Cooperation” paper? LaVictoire’s view, at least, is this:\n\n\n\n> The significance of modal combat is that it’s a toy universe in which we can study concepts of advanced decision theory (and which we might modify slightly in order to study other concepts, like blackmail), and within which the intuitively appealing idea of superrationality in fact works out. It’s at least a philosophical hint that good communication can enable cooperation without the usual costs of enforcement and punishment, and that there are incentives toward simplicity and verifiability among rational agents.\n> \n> \n> In fact, it’s an even more basic analogue of an Iterated Prisoner’s Dilemma tournament. Just as Axelrod’s IPD [iterated PD] tournament illustrated the usefulness of “tough but fair” and gave rise to the idea of evolutionary incentives for reciprocal altruism, I think that modal combat is a useful sandbox for illustrating the logic of “superrationality.” Furthermore, modal combat includes many of the features of the IPD (with levels of deduction being somewhat analogous to an agent’s historical interactions with another agent), and it has extremely simple grammar for the level of sophistication of these algorithms.\n> \n> \n\n\n\n\n---\n\n1. The development of [updateless decision theory](http://wiki.lesswrong.com/wiki/Updateless_decision_theory) itself is another story, which won’t be recounted here in detail. Two brief sources on this story are the ‘Prior Work’ section of Vladimir Nesov’s [Controlling Constant Programs](http://lesswrong.com/lw/2os/controlling_constant_programs/), and also [this comment](http://lesswrong.com/lw/8wc/a_model_of_udt_with_a_halting_oracle/5h33). 
Nesov’s *very* brief summary of the development of UDT goes like this: “(1) Eliezer Yudkowsky’s [early](http://lesswrong.com/lw/135/timeless_decision_theory_problems_i_cant_solve/) [informal](http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/) [remarks](http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/) about TDT and [Anna Salamon’s post](http://lesswrong.com/lw/17b/decision_theory_why_pearl_helps_reduce_could_and/) brought up the point that some situations should be modeled by unusual dependencies, motivating the question of how to select an appropriate model (infer dependencies). (2) Wei Dai’s [UDT post](http://lesswrong.com/lw/15m/towards_a_new_decision_theory/) sketched one way of doing that, but at the time I didn’t understand that post as answering this question, and eventually figured it out for programs-control-programs case in May 2010. After discussion on the [decision theory mailing list](https://groups.google.com/forum/#!forum/decision-theory-workshop), Vladimir Slepnev [applied](http://lesswrong.com/lw/2ip/ai_cooperation_in_practice) the technique to the prisoner’s dilemma (PD). (3) The more general technique was then written up by Slepnev and me, with Slepnev’s posts having more technical substance, and mine more speculative, attempting to find better ways of framing the theory: [What a reduction of ‘could’ could look like](http://lesswrong.com/lw/2l2/what_a_reduction_of_could_could_look_like/), [Controlling constant programs](http://lesswrong.com/lw/2os/controlling_constant_programs/), and [Notion of preference in ambient control](http://lesswrong.com/lw/2tq/notion_of_preference_in_ambient_control/). (4) There were still a number of technical issues around ‘spurious moral arguments’. 
See [this comment](http://lesswrong.com/lw/2l2/what_a_reduction_of_could_could_look_like/2f7w) by Benja Fallenstein and [An example of self-fulfilling spurious proofs in UDT](http://lesswrong.com/lw/b5t/an_example_of_selffulfilling_spurious_proofs_in/). (5) One solution was adding a ‘chicken rule’ to the decision algorithm, which I figured out for the programs-control-programs case in April 2011 and discussed a bit on the decision theory list, but which turned out to be much more theoretically robust in a setting with a halting oracle, which came up in another discussion on the decision theory list in December 2011, which Slepnev wrote up in [A model of UDT with a halting oracle](http://lesswrong.com/lw/8wc/a_model_of_udt_with_a_halting_oracle/). My later write up was [Predictability of Decisions and the Diagonal Method](http://lesswrong.com/lw/ap3/predictability_of_decisions_and_the_diagonal/). (6) Armed with the diagonal trick (chicken rule), Stiennon [wrote up](http://lesswrong.com/lw/9o7/formulas_of_arithmetic_that_behave_like_decision/) cooperation in PD for the oracle case, which was more theoretically tractable than Slepnev’s earlier no-oracle solution for PD. (7) At this point, we had both a formalization of UDT that didn’t suffer from the spurious proofs problem, and an illustration of how it can be applied to a non-trivial problem like PD.”\n2. Some researchers might instead say that Slepnev’s August 2010 post “[What a reduction of ‘could’ could look like](http://lesswrong.com/lw/2l2/what_a_reduction_of_could_could_look_like/)” presented the “first formal model” of UDT.\n3. 
Stiennon’s post also improved the formalization by using a two-step “chicken rule” rather than a one-step chicken rule.\n\nThe post [Robust Cooperation: A Case Study in Friendly AI Research](https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-02-01T16:55:13Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ba085c7a20261a01c92f63105a8b5e4b", "title": "Two MIRI talks from AGI-11", "url": "https://intelligence.org/2014/01/31/two-miri-talks-from-agi-11/", "source": "miri", "source_type": "blog", "text": "Thanks in part to the volunteers at [MIRI Volunteers](http://mirivolunteers.org/), we can now release the videos, slides, and transcripts for two talks delivered at [AGI-11](http://agi-conf.org/2011/). Both talks represent joint work by Anna Salamon and Carl Shulman, who were MIRI staff at the time (back when MIRI was known as the “Singularity Institute”):\n\n\nSalamon & Shulman (2011). Whole brain emulation as a platform for creating safe AGI. [[Video](http://www.youtube.com/watch?v=Cul4-p7joDk)] [[Slides](https://intelligence.org/wp-content/uploads/2014/01/Salamon-Shulman-Whole-brain-emulation-as-a-platform-for-creating-safe-AGI.pptx)] [[Transcript](https://docs.google.com/document/d/1-2A_cHiFC8fmeWHdQBeM7ynWaFdbklPm9u-UtKxZJ0A/pub)]\n\n\nShulman & Salamon (2011). Risk-averse preferences as an AGI safety technique. 
[[Video](http://www.youtube.com/watch?v=0xLw7eAogWk)] [[Slides](https://intelligence.org/wp-content/uploads/2014/01/Shulman-Salamon-Risk-averse-preferences-as-an-AGI-safety-technique.pptx)] [[Transcript](https://docs.google.com/document/d/1HF0aK2-nyulheAYpOyZ1Xat-PzZPpjt5BCmzChUCVKc/pub)]\n\n\nThe post [Two MIRI talks from AGI-11](https://intelligence.org/2014/01/31/two-miri-talks-from-agi-11/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-31T20:41:44Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ea66235fb2d12cde9976195147d59eb6", "title": "Mike Frank on reversible computing", "url": "https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/", "source": "miri", "source_type": "blog", "text": "![Mike Frank portrait](http://intelligence.org/wp-content/uploads/2014/01/Frank_w150.jpg)[Michael P. Frank](http://www.eng.fsu.edu/~mpf/) received his Bachelor of Science degree in Symbolic Systems from Stanford University in 1991, and his Master of Science and Doctor of Philosophy degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 1994 and 1999 respectively. While at Stanford, he helped his team win the world championship in the 1990-91 International Collegiate Programming Competition sponsored by the Association for Computing Machinery. Over the course of his student years, he held research internships at IBM’s T.J. Watson Research Center, NASA’s Ames Research Center, NEC Research Institute, Stanford Research Institute, and the Center for Study of Language and Information at Stanford. He also spent the summer after his Freshman year as a software engineering intern at Microsoft. 
During 1998-1999, Mike stopped out of school for a year to work at a friend’s web startup (Stockmaster.com).\n\n\nAfter graduation, he worked as a tenure-track Assistant Professor in the Computer and Information Science and Engineering department at the University of Florida from 1999-2004, and at the Electrical and Computer Engineering department at the Florida A&M University – Florida State University College of Engineering from 2004-2007. After an ill-fated attempt to start a business in 2007-2008, he returned to academia in a variety of short-term research and teaching positions in the Florida A&M Department of Physics and the FAMU-FSU College of Engineering. His present title is Associate in Engineering, and he spends most of his time supervising multidisciplinary senior engineering projects. Over the years, Dr. Frank’s research interests have spanned a number of different areas, including decision-theoretic artificial intelligence, DNA computing, reversible and quantum computing, market-based computing, secure election systems, and digital cash.\n\n\n\n**Luke Muehlhauser**: Some long-term computing forecasts include the possibility of nanoscale computing, but efficient computing at that scale appears to require reversible computing due to the [Landauer limit](http://en.wikipedia.org/wiki/Landauer%27s_principle). Could you please explain what reversible computing is, and why it appears to be necessary for efficient computing beyond a certain point of miniaturization?\n\n\n\n\n\n---\n\n\n**Mike Frank**: Reversible computing refers to computation using logically reversible (invertible) transformations of the digital state of the machine, transformations that are also carried out using physical mechanisms that are (almost) thermodynamically reversible – that is, mechanisms that produce no, or negligibly small amounts of, physical entropy. 
Thermodynamic reversibility requires logical reversibility, since, if you carry out many-to-one transformations of the logical state, that necessitates a corresponding one-to-many transformation of the rest of the detailed physical state – or in other words, entropy production.\n\n\nReversible computing becomes important at a small scale because bit energies are becoming smaller, and are approaching the point (around 1 electron-volt) where they will not be able to be made much smaller without running into severe reliability problems due to thermal noise. But when you manipulate bits in an irreversible fashion, this essentially involves dissipating the entire bit energy to waste heat on each operation. Improvements in computer energy efficiency in the past have been driven mainly by reductions in bit energies, so if bit energy stops decreasing, it’s a big problem – the energy efficiency of traditional irreversible technology will level off and stop improving. Note this observation depends only on very fundamental thermodynamic considerations of thermal noise, and so it’s not affected by the details of whatever clever alternative nanoscale devices you may come up with. Your device technology could be based on superconductors, carbon nanotubes, nanophotonics, quantum interference transistors, I don’t care. As long as it’s a reliable, irreversible device, it will *have* to dissipate at least on the order of 1 electron volt (about 40 *kT*) per bit-operation in a room-temperature environment, full stop, end of story.\n\n\nWith reversible computing, the situation is markedly different. Although it remains true that bit-energies in reliable devices are forced to level off in the 10s of *kT*s, if the device is reversible, then we no longer need to dissipate that entire bit energy on each operation. That’s the critical difference. 
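The numbers here are easy to verify: one electron-volt is about 40 kT at room temperature, and the Landauer limit for erasing a bit is kT ln 2. A quick check, taking 300 K as room temperature:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # room temperature, K
EV = 1.602176634e-19  # one electron-volt in joules

landauer = K_B * T * math.log(2)  # Landauer limit: min energy to erase a bit
bit_energy_kT = EV / (K_B * T)    # one electron-volt measured in units of kT

print(f"Landauer limit at 300 K: {landauer:.2e} J per bit erased")
print(f"1 eV = {bit_energy_kT:.0f} kT at 300 K")
```

This gives roughly 2.9 × 10⁻²¹ J for the Landauer limit and about 39 kT per electron-volt, matching the "about 40 *kT*" figure above.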
In principle, for bit-operations that are performed using increasingly high-quality reversible transformations, the amount of energy that’s dissipated per operation can be made *arbitrarily* small; smaller than *kT*, much smaller even than *kT* ln 2 or 0.69 *kT*, which is the *Landauer limit*, the energy dissipation that’s fundamentally associated with the creation of a single bit’s worth of entropy. That’s the theoretical prediction, at least. But to actually realize practical, high-quality, high-performance, cost-efficient reversible computing below the Landauer limit is, I would say, one of the most difficult, hard-core engineering challenges of our time. That’s not to say it’s impossible; indeed, there has never been any valid proof from fundamental physics that it’s impossible, and to the contrary, there are many indications, from the research that’s been done to date, that suggest that it will probably be possible to achieve eventually. But it’s certainly not an easy thing to accomplish, given the state of engineering know-how at this point in time. A future computer technology that actually achieves high-quality, efficient reversible computing will require a level of device engineering that’s so precise and sophisticated that it will make today’s top-of-the-line device technologies seem as crude in comparison, to future eyes, as the practice of chiseling stone tablets looks to us today.\n\n\n\n\n---\n\n\n**Luke**: You write that “there has never been any valid proof from fundamental physics that [reversible computing is] impossible, and to the contrary, there are many indications… that suggest that it will probably be possible to achieve eventually.”\n\n\nWhat are some of these indications suggesting that reversible computing should be possible to achieve eventually?\n\n\n\n\n---\n\n\n**Mike**: Yes, good question. 
First, there have been not only simulations, but also laboratory demonstrations illustrating that indeed, just as per the theoretical predictions, adiabatic switching[1](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/#footnote_0_10745 \"See slide 19 of Snider et al. (2013).\") of the voltage on a capacitor (for example) can indeed dissipate as little energy as desired, as the transition is carried out more slowly. However, these kinds of examples typically invoke external resonators to drive the transitions, and so this would not really count as a complete, self-contained system, until the design of those resonators is further fleshed out; but the design of high-quality resonators is a quite difficult engineering problem by itself.\n\n\nIn the meantime, we do already know of other self-contained systems that do undergo at least simple state transitions with negligible dissipation – consider, for example, a small, isolated [diamond crystal](http://arxiv.org/abs/1212.1347) or other solid nanostructure that is rotating or vibrating, in vacuum, under zero-gravity conditions. In principle, even in total isolation, such a structure will emit (extremely weak) gravity waves, and will therefore eventually settle into a stationary state (with no detectable vibration, and no spin except around axes around which there is perfect axial symmetry). But, for a small, rigid object, this gravitational settling process could take many billions of cycles (at least!), which, for a nanoscale object, can translate into far less than kT energy dissipation per cycle.\n\n\nA related notion is that of a [time crystal](http://www.wired.com/wiredscience/2013/04/time-crystals/); these have recently been proposed as hypothetical examples of quantum systems that could theoretically cycle through a set of distinguishable states forever, even when fully relaxed into their quantum ground states. Time crystals themselves are a rather controversial concept. 
However, even if a true (i.e., perfect) time crystal turns out to be impossible, a system like a rotating solid object that takes a very long time to settle into a stationary state is definitely a realistic one.\n\n\nThe problem with that thought experiment is that a simple periodic motion, like that of a rotating or vibrating crystal, is not interesting computationally. For computational purposes, we want machines that traverse a very complexly-shaped trajectory through their configuration space, not just a short cycle. The configuration space of any useful computer has an enormous number of dimensions, but suppose, for the moment, that it had only three dimensions. Imagine now a polished ball, or particle (representing the current state of the machine), coasting along a hollowed-out channel through a solid block of material; the channel represents the constraints imposed by the structure of the computational mechanism (i.e., by the device physics). A complex reversible computation then corresponds, in this analogy, to the ball coasting smoothly, ballistically, along a long, twisty path through the material.\n\n\nCan a physical system be engineered in such a way that the ball-particle (machine state) can coast through many twists and turns (many steps of computation) without losing a significant fraction of its initial dynamical (kinetic) energy? I feel that this is the fundamental question that the science of reversible computing has yet to answer definitively, one way or the other.\n\n\nA good starting point would be to demonstrate any physical system that coasts along a long, complexly-shaped, deterministic (non-bifurcating) trajectory through configuration space – never mind what function it’s computing. One can always say, it’s just computing its own next state. 
It’s not clear why this kind of behavior should be impossible – even an everyday amusement-park roller-coaster illustrates that macroscopic objects can coast through somewhat complex deterministic trajectories for at least a little while. And at the nanoscale, even the uncertainty principle only limits your certainty about when you arrive at the final state, not necessarily about the shape of the trajectory that you follow.\n\n\nSo, how far can we push this “roller-coaster” idea of ballistic computing? Maybe pretty far. But, until we have further developed a scientific and engineering discipline showing exactly how increasingly efficient examples of such “roller-coaster”-like constructs can be implemented – constructs designed to traverse not just a trajectory in simple 3D space, but rather one in the multidimensional configuration space of a complex, nanostructured system composed of many interacting subsystems, which might then be doing something interesting in terms of computation – we will not know for sure that highly efficient reversible computing is really possible.\n\n\nI personally expect that it is – if only for the lack of any sound proof that it is not (despite many attempts to rule it out having been made) – but, to really flesh out the discipline of how to engineer such systems is far more difficult, I now believe, than many of the field’s early enthusiasts may have anticipated.\n\n\nWhat we really need, I think, are certain key theoretical breakthroughs, such as, for starters, a complete theoretical (but also realizable, i.e., not overly idealized) model of a self-contained quantum system with many interacting parts that dynamically evolves in such a way (that is, along a complex deterministic trajectory in configuration space).\n\n\nI speculate that perhaps some sort of dynamical version of the quantum watchdog effect will be required in order to keep the various interacting components continuously in sync with each other, while 
they are simultaneously also coasting forward along complex trajectories in configuration space which are constrained, by the shape of the subsystems’ mutual interactions, to carry out interesting computations. But, I don’t know for sure if this “dynamical quantum watchdog” approach can be made to work.\n\n\nOverall, to show how to fully realize reversible computing is a very thorny theoretical problem, one that I think will require serious attention from some of the foremost minds in quantum physics to solve definitively. If I had the research support, I might go back to school, bone up on my quantum theory, and try to solve it myself. But unfortunately, basic research funding for this field has been sorely lacking, in my experience, and nowadays, I have a family to support.\n\n\n\n\n---\n\n\n**Luke**: Why do you think reversible computing has been unable thus far to attract significant research funding and researcher time?\n\n\n\n\n---\n\n\n**Mike**: That is a good question. Part of the reason, I think, is that there has been a lot of misinformation and misconceptions that have circulated around about this field for a while. For example, there is a sort of scholarly rumor or “urban legend” circulating around claiming that John von Neumann, a famous pioneer of computer architecture and quantum theory, had proved at some point that information processing with less than *kT* ln 2 dissipation per binary “decision” was impossible. But, there is no real substance behind this legend; all that we actually have from von Neumann on this is one brief remark he made during a lecture that was not backed up by any analysis. He never published any peer-reviewed journal article on this topic, possibly because he realized that it was a mistake and had never actually proved it. 
Probably he was implicitly assuming that decision-making implied the creation of entropy, since an unknown outcome (of the decision) was being replaced by a known one, which would imply entropy creation elsewhere to satisfy the 2nd law of thermodynamics. But of course, a decision-making process can be deterministic, as most computations are; if the outcome is predetermined by the input, then there need be no change in entropy in the decision-making process. And, even if you want the outcome of the decision-making process to be random, that only requires moving a bit of entropy from the environment into the computer, not generating any new entropy. My feeling is that von Neumann simply hadn’t thought all this through carefully enough when he first made that remark, and that, if he had ever been exposed to the concept of reversible computing during his lifetime, he would have immediately said, “Oh, of course.”\n\n\nA second rumor or urban legend against reversible computing is the claim that it would violate the central theorems proved by Claude Shannon, pioneer of the mathematical theory of information and communication, concerning the minimum power required for communication, at a given bandwidth and reliability. However, I personally have meticulously searched every single page of Shannon’s entire collected works, and absolutely nowhere in his published work did he *ever* address power *dissipation*, not even once!  Absolutely *all* of his work only addresses power *transmitted*, but nowhere do his theorems establish that the power contained in a signal cannot later be recovered by suitable mechanisms. In the design of reversible computing mechanisms, we explicitly show how the energy contained in any given pattern of physically encoded bits can be recovered, for example, by using the time-reverse of the exact same reversible process that created that bit-string in the first place. 
Again, if Shannon were alive today, I’m sure he would look at the principles of reversible computing and say, “Yes, of course that works.”\n\n\nSo, there is a bias against reversible computing that is based on the widespread misunderstanding or misinterpretation of the work of these respected pioneers. Stacked on top of this, there have been many more recent attempts to prove reversible computing impossible, but all of the supposed “proofs” contain either fallacies in reasoning or invalid assumptions, which have been shown definitively to be incorrect – I have a list of about 15 of these erroneous arguments, and the results showing that they’re clearly wrong, in [one of my talks](http://commonsenseatheism.com/wp-content/uploads/2014/01/Frank-Reversible-computing-its-promises-and-challenges.pdf). But, each time the flaw in one of these skeptical arguments is pointed out, disproving its core objection definitively using a clear counterexample, the skeptics just keep coming up with new and different (but still flawed) arguments against reversible computing. Some of the skeptics are perhaps motivated by the desire not to lose face and admit that they’ve been mistaken in their previous arguments. I think that probably, the only way that many of these reversible computing skeptics will ever be convinced is if an already commercially-successful reversible computer is staring them in the face – then they will no longer be able to deny it. A mere theoretical model will never satisfy the hard-core skeptics – and perhaps that’s as it should be. After all, all of the theory in the world is useless until we are able to use it to start building practical machines. And to be fair, the challenges that still need to be faced in order to make reversible computing practical are substantial. But, the prevalent attitude does make it very difficult to get research funding. 
And without substantial funding – or top-level researchers who have the free time and inclination to solve the remaining foundational problems definitively – these problems will never be solved, and reversible computing will never become more than an obscure academic curiosity. But, if adequate attention is paid to solving the key remaining problems, reversible computing has the potential to become *the* foundation of 21st century computer engineering. Certainly, we cannot make more than a modest amount of further progress in the power-performance of most computing applications without it.\n\n\n\n\n---\n\n\n**Luke**: Can you say a bit about the history of reversible computing thus far? What were the key pieces of conceptual progress after the initial demonstration of the Landauer limit? How did they build on each other, or on other results in physics or computing?\n\n\n\n\n---\n\n\n**Mike**: I’ll summarize a few of the major historical developments; but keep in mind, any brief history is bound to be incomplete. The first major theoretical development in reversible computing after Landauer’s analysis was his protege Charlie Bennett’s [1973 paper](http://www.cs.princeton.edu/courses/archive/fall04/cos576/papers/bennett73.html) showing conclusively that irreversible operations are not strictly required for computation, that is, any computation can be carried out by use of only reversible transformations of the logical state of the machine. At worst, you might need to irreversibly initialize, just once, whatever new storage space you might want to permanently dedicate to preserve the final output of the computation – but, all of the intermediate calculations can be carried out reversibly in temporary space which can be reused by other computations. 
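Bennett’s result can be illustrated schematically with the standard compute-copy-uncompute pattern: run a chain of invertible steps forward while recording every intermediate value, copy out the answer, then run the inverses backward so the temporary space ends up exactly as it started. A minimal sketch (the particular steps are arbitrary invertible placeholders, not anything from Bennett’s paper):

```python
def bennett_compute(x, steps):
    """Compute-copy-uncompute, schematically. `steps` is a list of
    (forward, inverse) pairs of invertible functions."""
    trace = [x]
    for fwd, _ in steps:             # 1. compute, saving every intermediate
        trace.append(fwd(trace[-1]))
    output = trace[-1]               # 2. copy out the final answer
    state = trace[-1]
    for (_, inv), expected in zip(reversed(steps), reversed(trace[:-1])):
        state = inv(state)           # 3. uncompute: clear the temporaries
        assert state == expected     #    each intermediate is undone exactly
    return output, state             # state == x: temporary space restored

# Arbitrary invertible steps: add a constant, then XOR a mask.
steps = [(lambda v: v + 7, lambda v: v - 7),
         (lambda v: v ^ 0b1010, lambda v: v ^ 0b1010)]
out, restored = bennett_compute(5, steps)
assert restored == 5
```

The `trace` list is the "somewhat space-intensive" temporary storage that the later space-efficient variants of the technique reduce.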
The basic technique involves temporarily storing all of the intermediate data that would otherwise be thrown away, so it is somewhat space-intensive, but later, [in 1989](http://commonsenseatheism.com/wp-content/uploads/2014/01/Bennett-Time-space-trade-offs-for-reversible-computation.pdf), Bennett showed that more space-efficient variations on the technique were possible. Finally, it was even shown in 1997 (by [Lange et al.](http://commonsenseatheism.com/wp-content/uploads/2014/01/Lange-et-al-Reversible-space-equals-deterministic-space.pdf)) that reversible computing in linear space (the same as an irreversible machine) is possible, although that method is not time-efficient. It is probably the case that general reversible computations do require *some* amount of overhead in either space or time complexity; indeed, [Ammer and I proved rigorously](http://www.eng.fsu.edu/~mpf/revsep.pdf) that this is true in a certain limited technical context. But, the overheads of reversible algorithms can theoretically be overwhelmed by their energy-efficiency benefits, to improve overall cost-performance for large-scale computations.\n\n\nIn terms of practical implementations, some early conceptual work was done by Fredkin and Toffoli at MIT in a [1978 proposal](http://commonsenseatheism.com/wp-content/uploads/2014/01/Fredkin-Toffoli-Design-Principles-for-Achieving-High-Performance-Submicron-Digital-Technologies.pdf) suggesting the use of inductors and capacitors to shuttle energy around between devices in a circuit to implement logic in a (near) dissipationless fashion. In the mid-80s, Charles Seitz and colleagues at Caltech [came up with](http://authors.library.caltech.edu/26956/2/5177_TR_85.pdf) an improved technique in which inductors could be shared between logic elements and brought off-chip, but they didn’t answer the question of whether any arbitrary logical function could be implemented using their technique. 
In the early 1990s, [Koller and Athas](http://commonsenseatheism.com/wp-content/uploads/2014/01/Koller-Athas-Adiabatic-Switching-Low-Energy-Computing-and-the-Physics-of-Storing-and-Erasing-Information.pdf) developed a general adiabatic logic family, but it was only able to handle combinational (not sequential) logic. Finally, [in 1993](http://web.cecs.pdx.edu/~mperkows/CLASS_FUTURE/to-chip-april-6/younis.pdf) Younis and Knight at MIT developed their general-purpose, adiabatic, sequential Split-Level Charge Recovery Logic – although even that one still contained a small bug limiting its energy efficiency, which we found and [fixed in the late 90s](http://www.eng.fsu.edu/~mpf/Frank-99-PhD-bookmarked.pdf), during my thesis work. I also invented another, simpler (and bug-free) universal adiabatic logic family called 2LAL at the University of Florida in 2000[2](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/#footnote_1_10745 \"See slides #89-93 of Frank (2006a).\"). Of course, by then, many, many other research groups were also pursuing adiabatic logic, so I won’t attempt to credit all of the many contributions to the field that have been made in the meantime. However, I will say that most of the published techniques that I’ve seen for implementing adiabatic logic in CMOS (that is, using standard transistors) contain conceptual flaws of one kind or another which unnecessarily limit their energy efficiency. I have not seen any other CMOS-based approach yet that is as efficient as 2LAL. In detailed simulations at UF, we found that[3](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/#footnote_2_10745 \"See slide #27 of Frank (2006b).\") sequential 2LAL circuits could dissipate as little as 1 electron volt of energy per transistor per cycle, limited only by the leakage current in the particular device family we were simulating. 
This figure could be made even lower if other device families with reduced leakage were used.\n\n\nPeople have also proposed alternative implementations of reversible computing not using transistors at all; for example, there is Craig Lent’s work at Notre Dame on [quantum-dot cellular automata](http://commonsenseatheism.com/wp-content/uploads/2014/01/Lent-et-al-Bennet-clocking-of-quantum-dot-cellular-automata-and-the-limits-of-binary-logic-scaling.pdf), and there is research that’s been done by several groups on reversible logic in superconducting circuits[4](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/#footnote_3_10745 \"See Plourde (2012); Ren (2012); Semenov (2012; Ustinov (2012).\"). However, these alternative approaches also present their own challenges; and I’d say it’s not yet clear whether they will become viable successors to CMOS; but, at least they are trying, and are paying attention to these issues! Whereas, the vast majority of other nanocomputing proposals that have been made, namely, all of the ones that totally ignore reversible computing, are all fundamentally doomed to failure, in that they will never be able to get many orders of magnitude beyond the limits of CMOS in terms of their energy efficiency – since they all ignore the fundamental limit on the energy efficiency of irreversible devices that follows from the requirement to maintain high reliability despite thermal noise. If you want to develop a new logic technology that has any hope of being successful, in the sense of having power-performance that can potentially scale many orders of magnitude (and not, at best, just 1 or 2) beyond the limits of CMOS, then designing your devices around reversible computing principles is not optional – it’s an absolute necessity! 
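The fundamental limit referred to here is the Landauer bound of kT ln 2 per irreversibly erased bit. A quick calculation puts the numbers from this interview in context (the 1 eV/transistor/cycle figure is the simulated 2LAL result mentioned above):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
eV = 1.602176634e-19 # joules per electron volt

# Landauer bound: minimum dissipation per irreversible bit erasure.
landauer_J = k_B * T * math.log(2)
landauer_eV = landauer_J / eV

# The ~1 eV/transistor/cycle reported for simulated 2LAL circuits is still
# well above the bound, so reversible logic has considerable headroom left.
headroom = 1.0 / landauer_eV

print(f"Landauer bound at 300 K: {landauer_J:.3g} J = {landauer_eV:.3g} eV")
print(f"1 eV per cycle is about {headroom:.0f}x the Landauer bound")
```

Irreversible logic cannot scale below this floor while remaining reliable against thermal noise, which is the sense in which non-reversible nanocomputing proposals can gain at most a couple of orders of magnitude over CMOS.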
Unfortunately, in my experience, there are far too few device physicists who seem to understand this basic fact.\n\n\n\n\n---\n\n\n**Luke**: If a researcher wanted to make progress toward reversible computing, where would you recommend they start looking? What are the most promising avenues of research at this point, do you think?\n\n\n\n\n---\n\n\n**Mike**: Well, as I mentioned earlier, I think that some fundamental breakthroughs in basic theory are still needed. We are still lacking a comprehensive theoretical model demonstrating how a realistic quantum-mechanical system composed of many interacting subsystems can be made to coast along a complexly-constrained, deterministic trajectory through configuration space with negligible entropy increase per operation. Reversible computing is somewhat easier than full-blown quantum computing, in that we don’t have to maintain delicate quantum superpositions of entangled states. Instead, we can presumably rely only on naturally stable (a.k.a. “classical”) states, or “pointer states.” But, accurately modeling the dynamical behavior of microscopic physical systems still requires a deep understanding of quantum mechanics; such an understanding is especially necessary for the design of nanoscale devices that could potentially serve as effective components of an efficient reversible computing system. If I were starting over in my career, I would begin by giving myself a comprehensive, top-of-the-line, world-class education in quantum mechanics, and then I’d re-develop the fundamental principles of reversible computing technology bottom-up, building on that foundation–with an emphasis on creating self-contained, energy-efficient, ballistic reversible computational mechanisms that are as simple as possible to design and manufacture. It is a big challenge, but I think that someone will show us the way eventually.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Mike!\n\n\n\n\n---\n\n1. See slide 19 of [Snider et al. 
(2013)](http://terpconnect.umd.edu/~kosborn/SEALeR/Snider%20Sealer%203-12.pdf).\n2. See slides #89-93 of [Frank (2006a)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Frank-Physical-Limits-of-Computing.pptx).\n3. See slide #27 of [Frank (2006b)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Frank-Reversible-computing-and-truly-adiabatic-circuits.pdf).\n4. See [Plourde (2012)](http://terpconnect.umd.edu/~kosborn/SEALeR/Plourde.pdf); [Ren (2012)](http://terpconnect.umd.edu/~kosborn/SEALeR/Jie_Ren.pdf); [Semenov (2012)](http://terpconnect.umd.edu/~kosborn/SEALeR/Semenov-StonyBrook-2012.pdf); [Ustinov (2012)](http://terpconnect.umd.edu/~kosborn/SEALeR/Ustinov_March_2012.pdf).\n\nThe post [Mike Frank on reversible computing](https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-31T19:48:22Z", "authors": ["Luke Muehlhauser"], "summaries": []}
{"id": "b20353794188530c46a05d2b207419ba", "title": "Emil Vassev on Formal Verification", "url": "https://intelligence.org/2014/01/30/emil-vassev-on-formal-verification/", "source": "miri", "source_type": "blog", "text": "![Emil Vassev portrait](http://intelligence.org/wp-content/uploads/2014/01/Vassev_w150.jpg)[Dr. Emil Vassev](http://www.vassev.com/) received his M.Sc. in Computer Science (2005) and his Ph.D. in Computer Science (2008) from [Concordia University](http://www.cse.concordia.ca/), Montreal, Canada. Currently, he is a research fellow at [Lero (the Irish Software Engineering Research Centre) at](http://www.lero.ie/) [University of Limerick](http://www.ul.ie/), Ireland, where he is leading Lero’s participation in the [ASCENS FP7](http://cordis.europa.eu/fp7/home_en.html) project and Lero’s joint project with ESA on Autonomous Software Systems Development Approaches. His research focuses on knowledge representation and awareness for self-adaptive systems. 
Apart from his main research, Dr. Vassev’s research interests include engineering autonomic systems, distributed computing, formal methods, cyber-physical systems and software engineering. He has published two books and over 100 internationally peer-reviewed papers. As part of his collaboration with NASA, Vassev has been awarded one patent, with another pending.\n\n\n\n**Luke Muehlhauser**: In “[Swarm Technology at NASA: Building Resilient Systems](http://commonsenseatheism.com/wp-content/uploads/2014/01/Vassev-Swarm-Technology-at-NASA-Building-Resilient-Systems.pdf),” you and your co-authors write that:\n\n\n\n> To increase the survivability of [remote exploration] missions, NASA [uses] principles and techniques that help such systems become more resilient…\n> \n> \n> …Practice has shown that traditional development methods can’t guarantee software reliability and prevent software failures. Moreover, software developed using formal methods tends to be more reliable.\n> \n> \n\n\nWhen talking to AI scientists, I notice that there seem to be at least two “cultures” with regard to system safety. One culture emphasizes the limitations of systems that are amenable to (e.g.) formal methods, and advises that developers use traditional AI software development methods to build a functional system, and try to make it safe near the end of the process. The other culture tends to think that getting strong safety guarantees is generally only possible when a system is designed “from the ground up” with safety in mind. Most machine learning people I speak to seem to belong to the former culture, whereas e.g. [Kathleen Fisher](http://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/) and other people working on safety-critical systems seem to belong to the latter culture.\n\n\nDo you perceive these two cultures within AI? 
If so, does the second sentence I quoted from your paper above imply that you generally belong to the second culture?\n\n\n\n\n---\n\n\n**Emil Vassev**: Before answering your question, I need to clarify some of the basics of AI as I perceive it. Basically, AI depends on our ability to efficiently transfer knowledge to software-intensive systems. A computerized machine can be considered as one exhibiting AI when it has the basic capabilities to transfer data into context-relevant information and then that information into conclusions exhibiting knowledge. Going further, I can say that AI is only possible in the presence of artificial awareness, one by which we can transfer knowledge to machines. Artificial awareness entails much more than computerized knowledge, however. It must also incorporate means by which a computerized machine can perceive events and gather data about its external and internal worlds. Therefore, to exhibit awareness, intelligent systems must sense and analyze components as well as the environment in which they operate. Determining the state of each component and its status relative to performance standards, or service-level objectives, is therefore vital for an aware system. Such systems should be able to notice changes, understand their implications, and apply both pattern analysis and pattern recognition to determine normal and abnormal states. In other words, awareness is conceptually a product of representing, processing, and monitoring knowledge. Therefore, AI requires knowledge representation, which can be considered as a formal specification of the “brain” of an AI system. Moreover, to allow for learning, we must consider an open-world model of this “machine brain”.\n\n\nNow, let’s go back to your question. I think both research “cultures” have their niche within AI. Both cultures lean towards the use of open-world modeling of the AI by using formal methods. 
The difference lies mainly in the importance of the safety requirements, which justifies both approaches. Note that AI is a sort of superior control mechanism that exclusively relies on the functionality of the system to both detect safety hazards and pursue safety procedures. Therefore, in all cases AI is limited by system functionality, and systems designed “from the ground up with safety in mind” are presumably designed with explicit safety-related functionality, and thus, their AI is less limited when it comes to safety.\n\n\nTo answer your second question, yes, I consider myself as leaning towards the second “research culture”. For many NASA/ESA systems, safety is an especially important source of requirements. Requirements engineers can express safety requirements as a set of features and procedures that ensure predictable system performance under normal and abnormal conditions. Furthermore, AI engineers might rely on safety requirements to derive special self-\\* objectives controlling the consequences of unplanned events or accidents. Think about the self-\\* objectives as AI objectives driving the system in critical situations by employing self-adaptive behavior. Safety standards might be a good source of safety requirements and, consequently, of safety-related self-\\* objectives. Such self-\\* objectives may provide for fault-tolerance behavior, bounding failure probability, and adhering to proven practices and standards. Explicit safety requirements provide a key way to maintain safety-related knowledge within a machine brain of what is important for safety. In typical practice, safety-related AI requirements can be derived by a four-stage process:\n\n\n1. Hazard identification – all the hazards exhibited by the system are identified. A hazard might be regarded as a condition – situation, event, etc., that may lead to an accident.\n2. Hazard analysis – possible causes of the system’s hazards are explored and recorded. 
Essentially, this step identifies all processes, combinations of events, and sequences that can lead from a ‘normal’ or ‘safe’ state to an accident. Success in this step means that we now understand how the system can get to an accident.\n3. Identifying safety capabilities – a key step is to identify the capabilities (functionality) the system needs to have in order to perform its goals and remain safe. It is very likely that some of the capabilities have already been identified for the purposes of other self-\\* objectives.\n4. Requirements derivation – once the set of hazards is known, and their causation is understood, engineers can derive safety requirements that either prevent the hazards from occurring or mitigate the resulting accidents via self-\\* objectives.\n\n\n\n\n---\n\n\n**Luke**: As explained [here](http://intelligence.org/2013/10/03/proofs/), no system can be 100% safe, in part because its formal safety requirements might fail to capture a hazard that was missed, or inadequately modeled, by the system’s designers. What’s your intuitive sense of how much “extra” confidence in a system’s safety is typically gained by the application of formal methods? 
For example, you might say that a space probe launched with lots of testing but no formal methods would have a 70% chance of completing its mission successfully, whereas a space probe which had been tested *and* formally verified (and hence, designed in a different way, so as to be amenable to formal verification) would have a 90% chance of completing its mission successfully.\n\n\nNaturally, such estimates depend greatly on the details of the specific system, and which properties were proved by the formal verification process, but I’m hoping you can say something about your intuitive sense as to how much safety is gained in exchange for the extra work of designing a system so precisely that it can be formally verified.\n\n\n\n\n---\n\n\n**Emil**: Generally speaking, formal methods strive to build software right (and thus, reliable) by eliminating flaws, e.g., requirements flaws. Formal method tools allow comprehensive analysis of requirements and design and eventually near-to-complete exploration of system behavior, including fault conditions. However, good requirements formalization depends mainly on the analytical skills of the requirements engineers along with the proper use of the formal methods at hand. Hence, errors can be introduced when capturing or implementing safety requirements. This is perhaps the main reason why, although efficient in terms of the capacity of dedicated analysis tools such as theorem provers and model checkers, formal methods do not actually eliminate the need for testing.\n\n\nWith regard to safety requirements, the application of formal methods can only add to safety. Even if we assume that proper testing can capture all the safety flaws that we may capture with formal verification, with proper use of formal methods we can always improve the quality of requirements and eventually derive more efficient test cases. 
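The spec-to-test pipeline can be sketched in miniature: a formalized requirement (input ranges, precondition, postcondition) directly yields boundary-value test cases, with the postcondition serving as the oracle. All names and the speed-limiter example below are invented for illustration:

```python
import itertools

def limit(speed):
    """Hypothetical system under test: clamp speed to a 60 mph maximum."""
    return min(speed, 60)

# A formalized requirement: admissible input ranges, a precondition,
# and a postcondition that serves as the test oracle.
RANGES = [(0, 120)]
pre = lambda s: s >= 0
post = lambda s, out: out <= 60 and (out == s if s <= 60 else out == 60)

def boundary_cases(ranges, pre):
    """Derive concrete test inputs from the spec: every combination of
    range endpoints that the precondition admits."""
    return [p for p in itertools.product(*ranges) if pre(*p)]

for (s,) in boundary_cases(RANGES, pre):
    assert post(s, limit(s))  # the formalized postcondition checks each case
```

Real test-generation tools explore far richer input spaces, but the division of labor is the same: the formal specification supplies both the inputs worth trying and the criterion for passing.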
Moreover, formal methods can be used to create formal specifications, which subsequently can be used for automatic test case generation. Hence, in exchange for the extra work put into formally specifying the safety requirements of a system, you get not only the possibility to formally verify and validate these requirements, but also to more efficiently test their implementation.\n\n\nSo, to go back to your question, as you said, 100% safety cannot be guaranteed, but when properly used, formal methods can significantly contribute to safety by not replacing, but complementing testing. The quantitative measure of how much safety can be gained with formal methods may be regarded in three aspects:\n\n\n1. Formal verification and validation allows for early detection of safety flaws, i.e., before implementation.\n2. High quality of safety requirements improves the design and implementation of these requirements.\n3. Formally specified safety requirements assist in the derivation and generation of efficient test cases.\n\n\nTo be more specific, although it really depends on the complexity of the system in question, my intuition is that these three aspects complement each other and together they may help us build a system with up to a 99% safety guarantee.\n\n\n\n\n---\n\n\n**Luke**: You wrote about the difficulty of “good requirements formalization” — that is, the challenge of translating our intuitive ideas about how we want the system to behave into a logical calculus.\n\n\nWhat types of requirements are researchers currently able to formalize for use in formally verified systems? What types of intuitively desirable requirements are we as-yet unable to figure out how to formalize?\n\n\n\n\n---\n\n\n**Emil**: Contemporary formal verification techniques (e.g., model checking) rely on state-transition models where objects or entities are specified with states they can be in and associated with functions that are performed to change states or object characteristics. 
Therefore, basically every system property that can be measured or quantified, or qualified as a function, can be formalized for the needs of formal verification. Usually, the traditional types of requirements – functional and non-functional (e.g., data requirements, quality requirements, time constraints, etc.), are used to provide a specific description of functions and characteristics that address the general purpose of the system. The formal verification techniques use the formal specification of such requirements to check desired safety and liveness properties. For example, to specify safety properties of a system, we need to formalize “nothing bad will happen to the system”, which can be done via the formalization of non-desirable system states along with the formalization of behavior that will never lead the system to these states.\n\n\nObviously, the formalization of well-defined properties (e.g., with proper states expressed via boundaries, data range, outputs, etc.) is a straightforward task. However, it is not that easy to formalize uncertainty, e.g., liveness properties (something good will eventually happen). Although probabilistic theories such as fuzzy logic help us formalize “degrees of truth” and deal with approximate conclusions rather than exact ones, the verification tools for fuzzy control systems are not efficient due to the huge state-explosion problem. Moreover, testing such systems is not efficient either, simply because statistical evidence for their correct behavior may not be enough. Hence, any property that requires a progressive evaluation (or partial satisfaction, e.g., soft goals) is difficult and often impossible to formalize “for use in formally verified systems”.\n\n\nOther properties that are “intuitively desirable” (especially by AI) but still cannot be formalized today are human behavior and principles, related to cultural differences, ethics, feelings, etc. 
The problem is that with the formal approaches today we cannot express, for example, emotional bias as a meaningful system state.\n\n\n\n\n---\n\n\n**Luke**: Let’s work with a concrete, if hypothetical, example. Suppose a self-driving car’s navigation software was designed with formal verification in mind, at least for many core components of the software. I presume we are very far from being able to define a formal requirement that matches the intuitive meaning of “Don’t ever be in a state such that the car has injured a pedestrian.”\n\n\nSo, what requirements *could* we formalize such that, if they were proven to be met by the car’s navigation software, would help us to feel increasingly confident that the car would never injure a pedestrian (while still allowing the car to engage in normal driving behavior)?\n\n\n\n\n---\n\n\n**Emil**: Again, this example should be regarded with the insight that “100 % safety is not possible”, especially when the system in question (e.g., a self-driving car) engages in interaction with a non-deterministic and open-world environment. What we should do though, to maximize the safety guarantee that “the car would never injure a pedestrian” is to determine all the critical situations involving the car itself in close proximity to pedestrians. Then we shall formalize these situations as system and environment states and formalize self-adaptive behavior (e.g., as self-\\* objectives) driving the car in such situations. For example, a situation could be defined as “all the car’s systems are in operational condition and the car is passing by a school”. 
To increase safety in this situation, we may formalize a self-adaptive behavior such as “automatically decrease the speed down to 20 mph when getting in close proximity to children or a school”.\n\n\nFurther, we need to specify situations involving close proximity to pedestrians (e.g., crossing pedestrians) and car states emphasizing damages or malfunction of the driving system, e.g., flat tires, malfunctioning steering wheel, malfunctioning brakes, etc. For example, we may specify a self-adaptive behavior “automatically turn off the engine when the brake system is malfunctioning and the car is getting in close proximity to pedestrians.”\n\n\nOther important situations should involve severe weather conditions introducing hazards on the road, e.g., snowstorm, ice, low visibility (formalized as environment states), and the car getting in close proximity to pedestrians. In such situations, formalized self-adaptive behavior should automatically enforce low speed, turning lights on, turning wipers on, etc.\n\n\n\n\n---\n\n\n**Luke**: To what degree, if any, can we demonstrate both deductive guarantees and probabilistic guarantees with today’s formal verification methods?\n\n\nIn a sense, many of the deductive proofs for safety properties in today’s formally verified systems are already “probabilistic” in the sense that the designers have some subjective uncertainty as to whether the formal specification accurately captures the intuitively desirable safety properties, and (less likely) whether there was an error in the proof somewhere.\n\n\nBut the question I’m trying to ask is about *explicitly* probabilistic guarantees within the proof itself. Though perhaps to do this, we’d first need a solution to the problem of how a formal system or logical calculus can have internally consistent probabilistic beliefs about logical sentences (*a la* [Demski 2012](http://ict.usc.edu/pubs/Logical%20Prior%20Probability.pdf) or [Hutter et al. 
2012](http://arxiv.org/pdf/1209.2620.pdf)).\n\n\nIf we could generalize formal specification and verification procedures to allow both for deductive guarantees and probabilistic guarantees, perhaps we could verify larger, more complex, and more diverse programs.\n\n\n\n\n---\n\n\n**Emil**: With deductive guarantees a formal verification actually provides true statements that demonstrate that desired safety properties hold. Such a verification process is deterministic and a complete proof is required to guarantee the correctness of safety properties. For example, such a proof can be equipped with deterministic rules and expressed in classical first-order logic (or in higher-order logic if we use Isabelle to run a deductive verification). On the other hand, with the probabilistic guarantees you can accept that a complete proof is not necessary and safety properties can be verified with some degree of uncertainty. Basically, the probabilistic guarantees can be regarded as a result of quantification of uncertainty in both the verification parameters and subsequent predictions. With the Bayesian methods, for example, we quantify our uncertainty as a prior distribution of the beliefs we have in the values of certain properties. Moreover, we also embed likelihood in the properties formalization, i.e., how likely it is that we would observe a certain value in particular conditions. You may think about it as the likelihood of holding certain safety properties in specific situations. Then, the probabilistic guarantees assert in a natural way “likely” properties over the possibilities that we envision.\n\n\nSo, to answer your question, unfortunately, deductive guarantees can be provided only for simple safety properties, because the search for a complete proof often unavoidably does not terminate. 
Although deductive verification may deal with infinite-state systems, its automation is limited, which is mainly due to the undecidability of the logical reasoning (first-order logic and its extensions such as higher-order logic are not decidable, or rather are semi-decidable). For example, if we go back to our example with the self-driving car, we may supply all the needed deterministic rules expressing our safety requirements (e.g., speed limit of 20 mph when passing by a school), but the complete proof eventually cannot be achieved, because although the desired conclusion follows from some of the premises, other premises may eventually lead to resolution refutation.\n\n\nThe probabilistic guarantees are not as complete as the deductive ones, but they may deal with more complex properties, e.g., where a larger number of states can be required. Of course, this tradeoff should be considered when evaluating the results of any probabilistic formal verification. So, if we go back to your question about how much confidence in a system’s safety is gained with formal methods, probabilistic guarantees bring less confidence than deductive ones, but they may bring some extra confidence to safety properties that cannot be handled otherwise.\n\n\nIt is important to mention that abstraction is the most efficient solution to the state-explosion problem (and, respectively, to the decidability problem of deductive guarantees). With abstraction the size of the state space is reduced by aggregating state transitions into coarser-grained state transitions. The technique effectively reduces the total number of states to be considered but is likely to reduce the granularity of the system to a point where it no longer adequately represents that system. 
The problem is that although the abstract model (e.g., the formalization of safety properties) is relatively small, it should also be precise enough to adequately represent the original system.\n\n\nTherefore, as you said, in order to obtain better results, we shall consider both verification approaches and eventually apply these together. For example, we may formalize with the presumption that both deductive and probabilistic guarantees can be obtained in a sort of compositional verification where we may apply both approaches to different safety properties, and eventually combine the results under the characteristics of global safety invariants. Such invariants can be classified as: goal invariants, behavior invariants, interaction invariants, and resource invariants (see my co-authored paper “[Verification of Adaptive Systems](http://cda.ornl.gov/publications_2012/Publication_36920.pdf)”).\n\n\n\n\n---\n\n\n**Luke**: Besides mixing probabilistic and deductive verification methods, what other paths forward do you see for improving on our current verification toolset? In particular, what important work in this area seems to be most neglected by funding and researcher hours?\n\n\n\n\n---\n\n\n**Emil**: Well, maybe the most popular technique for formal verification is model checking, where the properties are expressed in a temporal logic and the system formalization is turned into a state machine. The model checking methods verify if the desired properties hold in all the reachable states of a system, which is basically a proof that properties will hold during the execution of that system. State explosion is the main issue model checking is facing today. This problem is getting even bigger when it comes to concurrent systems where the number of states is exponential in the number of concurrent processes. 
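The reachability check at the heart of model checking, and the state explosion behind it, can be sketched in a few lines of Python. This toy explicit-state checker (an illustration under assumed names, not a real model checker) explores every reachable global state of n independent three-valued processes, so the state space grows as 3^n:

```python
from collections import deque

# Toy explicit-state model checking (illustrative): the global state is a
# tuple of local process states, and a safety property is checked in every
# reachable state via breadth-first search.

def successors(state):
    # Assumed toy model: each process steps 0 -> 1 -> 2 independently.
    for i, local in enumerate(state):
        if local < 2:
            yield state[:i] + (local + 1,) + state[i + 1:]

def check_safety(n_processes, prop):
    """Return (property holds in all reachable states, states explored)."""
    init = (0,) * n_processes
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not prop(s):
            return False, len(seen)  # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, len(seen)

# A trivially true property forces a full exploration: 3**n states,
# exponential in the number of processes.
print(check_safety(2, lambda s: True))   # (True, 9)
print(check_safety(8, lambda s: True))   # (True, 6561)
```

Even this toy system shows the blowup Emil describes: adding one process triples the reachable state space, which is why symbolic, probabilistic, and abstraction-based techniques matter for anything of realistic size.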
So, basically, model checking is an efficient and powerful verification method, but only when applied to finite, yet small state spaces.\n\n\nHere, to answer your question about what can be done to improve the current verification toolset: on the one hand we need to work on fully automated deductive verification based on decidable logics with both temporal and probabilistic features, and on the other hand we need to work on improving the model checking ability to handle large state spaces (e.g., symbolic model checking, probabilistic model checking, etc.).\n\n\nImportant work that seems neglected by the scientific community is the so-called stabilization science, which provides a common approach to studying system stability. In this approach, a system is linearized around its operating point to determine a small-signal linearized model at that operating point. The stability of the system is then determined using linear system stability analysis methods such as Routh-Hurwitz, Root Locus, Bode Plot, and Nyquist Criterion. We may use stabilization science to build small-signal linearized models for the different system components, anticipating that the linearized models of system components will yield a relatively small state space, enabling their efficient verification (see again my co-authored paper “[Verification of Adaptive Systems](http://cda.ornl.gov/publications_2012/Publication_36920.pdf)”). Then we may apply compositional verification techniques to produce an overall system-wide verification.\n\n\nOther, less well-developed verification techniques are those related to automatic test-case generation and simulation, which may reduce testing costs and improve the quality of testing. For example, test cases can be generated from a formal specification of a system built with a domain-specific formal language. 
If combined with code generation and analysis techniques for efficient test-case generation (e.g., change-impact analysis), automatic test-case generation might be used to efficiently test system behavior under simulated conditions (see my co-authored paper “[Automated Test Case Generation of Self-Managing Policies for NASA Prototype Missions Developed with ASSL](http://commonsenseatheism.com/wp-content/uploads/2014/01/Vassev-Automated-test-case-generation-of-self-managing-policies-for-NASA-prototype-missions-developed-with-ASSL.pdf)”).\n\n\nMoreover, high-performance computing can be used for parallelizing simulations, which will allow multiple state space explorations to occur simultaneously, etc.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Emil!\n\n\nThe post [Emil Vassev on Formal Verification](https://intelligence.org/2014/01/30/emil-vassev-on-formal-verification/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-31T01:33:14Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "b0e5412c03a2610a5b700b45d4dcb4fb", "title": "How Big is the Field of Artificial Intelligence? (initial findings)", "url": "https://intelligence.org/2014/01/28/how-big-is-ai/", "source": "miri", "source_type": "blog", "text": "Co-authored with Jonah Sinick.\n\n\nHow big is the field of AI, and how big was it in the past?\n\n\nThis question is relevant to several issues in AGI safety strategy. To name just two examples:\n\n\n* [**AI forecasting**](http://intelligence.org/2013/05/15/when-will-ai-be-created/). Some people forecast AI progress by looking at how much has been accomplished for each calendar year of research. 
But as inputs to AI progress, (1) AI funding, (2) quality-adjusted researcher years (QARYs), and (3) computing power are more relevant than calendar years.[1](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_0_10731 \"Another important input metric is theoretical progress imported from other fields, e.g. methods from statistics.\") To use these metrics to predict future AI progress, we need to know how many dollars and QARYs and computing cycles at various times in the past have been required to produce the observed progress in AI thus far.\n* **Leverage points**. If most AI research funding comes from relatively few funders, or if most research is produced by relatively few research groups, then these may represent high-value leverage points through which one might influence the field as a whole, e.g. to be more concerned with the [long-term social consequences of AI](https://intelligence.org/files/EthicsofAI.pdf).\n\n\nFor these reasons and more, MIRI recently investigated the current size and past growth of the AI field. 
This blog post summarizes our initial findings, which are meant to provide a “quick and dirty” launchpad for future, more thorough research into the topic.\n\n\nTo begin, we tried to quantify the size and past growth of the field using metrics such as\n\n\n* Number of researchers\n* Number of journals\n* Publication counts\n* Number of conferences\n* Number of organizations\n* Famous prizes awarded for AI research\n* Amount of funding\n\n\nIt’s difficult to interpret these figures, and they may be significantly less informative than an object-level study of the research would be, but the figures still have some relevance:\n\n\n* For the purpose of investigating growth, one can look at year-to-year percentage growth in the statistics, combining this with other measures of the amount of progress that has occurred in AI, in order to estimate the amount of AI research that will occur in the medium-term future.\n* For the purpose of investigating the current size of the AI field, one can look at the quantitative metrics relative to the corresponding metrics for computer science (CS)[2](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_1_10731 \"It’s also worth noting the following point. Suppose that a source S can be used to generate an estimate E1 for a quantity Q1 having to do with AI and an estimate E2 having to do with CS. Then E1 and E2 may overstate or understate Q1 and Q2 (respectively). Let the factors by which E1 and E2 differ from Q1 and Q2 be F1 and F2. We don’t have good estimates for F1 and F2, but if we compute the ratio (E1)/(E2) we get [(Q1)/(Q2)]*[(F1)/(F2)]. The quantity (F1)/(F2) will be closer to 1 than F1 is to 1, because some of the factors that lead E1 to deviate from Q1 to given degrees will also lead E2 to deviate from Q2 to similar degrees. 
So (E1)/(E2) is closer to (Q1)/(Q2) (in relative terms) than E1 is to Q1 (in relative terms).\"), and use these in conjunction with a holistic sense of the current size of the CS field to inform one’s holistic sense of the amount of progress that there’s been in AI.\n\n\nThe data that we were able to collect provide a decent picture of the size of the AI field relative to the size of the CS field, but they are insufficient to support a robust conclusion, and more investigation is warranted. Unless otherwise specified, see the spreadsheet “[Current size & past growth of AI field](https://intelligence.org/wp-content/uploads/2014/01/Current-size-past-growth-of-the-AI-field.xlsx)” for the raw data on which this blog post is based.\n\n\n### The size of the AI field\n\n\nAccording to a variety of metrics, **the amount of AI research being done appears to be about 10% of the amount of computer science (CS) research being done**. The metrics used, however, mostly capture research *quantity* rather than research *quality*, and thus may be a weak proxy for measuring how many QARYs have been invested. That said, the fact that roughly 10% of CS research prizes are awarded for AI work may indicate that research quality is similar in CS and AI.\n\n\nWe obtained many of the relevant figures from [Microsoft Academic Search](http://academic.research.microsoft.com/) (MAS). MAS allows one to search under the headings:\n\n\n* Computer science\n* Artificial intelligence\n* Natural language and speech\n* Machine learning and pattern recognition\n* Computer vision\n\n\nOne gets different figures depending on whether one counts the latter three subjects (hereafter referred to as “cognate disciplines”) as AI. 
Below, we give figures both for items that fall under the “artificial intelligence” heading alone, and for items that fall under the heading “artificial intelligence” *or* under the heading of one of the cognate disciplines.\n\n\n#### Number of researchers\n\n\nMAS gives the number of authors in CS, AI, and the cognate disciplines of AI, but these figures don’t pick up on the amount of research done as well as publication count figures do.[3](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_2_10731 \"MAS shows 1.6 million authors in CS and 0.26 million authors in AI, so 16%. If one adds up the listed number of authors in AI and cognate disciplines, the figure rises to 39%. However, some authors publish in multiple disciplines (for example, an author might publish in both artificial intelligence and machine learning).\")\n\n\nThe IEEE Computational Intelligence Society [has](http://cis.ieee.org/about-cis.html) ~7,000 members and the IEEE Computer Society [has](http://sites.ieee.org/scv-cs/about) ~85,000 members, so the membership of the first is 8% the membership of the second.\n\n\nSome other relevant figures (which don’t paint a cohesive picture):\n\n\n* [According to](http://www.bls.gov/ooh/computer-and-information-technology/computer-and-information-research-scientists.htm) the Bureau of Labor Statistics, there are 26,700 computer and information science researchers in the US.\n* ACM’s Special Interest Group on Artificial Intelligence (SIGAI) [has](http://sigai.acm.org/organization/history.html) “more than 1,000 members.”\n* The [International Neural Network Society](http://www.inns.org/) (INNS) has “more than 2,000 members.”\n\n\n#### Number of journals\n\n\nMAS lists 1360 CS journals, with 106 in AI, and 172 in either AI or one of AI’s cognate disciplines, so 8% and 13% respectively.[4](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_3_10731 \"Cells B96 through B100 of the spreadsheet.\")\n\n\n#### Publication counts\n\n\nBetween 2005 and
2010, of those publications listed under MAS’s “CS” heading, about 10% were listed under “AI” and about 20% were listed under “AI” or one of its cognate disciplines.[5](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_4_10731 \"Some papers may be listed under multiple categories, making it unclear whether the 10% figure or the 20% figure is more representative.\") One sees roughly the same percentages if one looks at publications between 1990 and 1995, between 1995 and 2000, and between 2000 and 2005.[6](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_5_10731 \"Table with upper left hand corner A2 in the spreadsheet.\") Searching Google Scholar for “Computer Science” and “Artificial Intelligence,” one finds that the number of hits for the latter search is about 30% the number of hits for the former search,[7](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_6_10731 \"Google Scholar results:\nSearch term “computer science” (in quotes) yields 2,650,000 results\n“artificial intelligence” -> 1,710,000\n“machine intelligence” -> 655,000\nsince 2013:\nSearch term “computer science” (in quotes) yields 99,600 results\n“artificial intelligence” -> 32,300\n“machine intelligence” -> 11,600\n2012:\nSearch term “computer science” (in quotes) yields 163,000 results\n“artificial intelligence” -> 52,500\n“machine intelligence” -> 22,600\n2011:\nSearch term “computer science” (in quotes) yields 247,000 results\n“artificial intelligence” -> 66,100\n“machine intelligence” -> 23,000\") which could mean that the amount of AI research is significantly more than 10% the amount of CS research, but some papers that contain the phrase “artificial intelligence” are not artificial intelligence research, and some computer science papers may not contain the phrase “computer science.”\n\n\n#### Number of conferences\n\n\nMAS lists 3,519 “top conferences” in CS and 361 “top conferences” in AI, and the latter number is about 10% of the former number. 
There are 561 “top conferences” in AI or cognate disciplines, so 16% the number of CS conferences.[8](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_7_10731 \"Cell B27 through Cell B31 of the spreadsheet.\")\n\n\n#### Number of organizations\n\n\nMicrosoft Academic Search lists 11,338 organizations for CS and 7,125 organizations for AI, so 63%. If one counts cognate disciplines as AI, the number of AI organizations is 21,802, so 192% that of CS organizations.[9](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_8_10731 \"Cell B119 through Cell B123 of the spreadsheet.\") Taken in isolation, this would suggest that the amount of AI research is much greater than 10%.\n\n\n“Number of organizations” seems likely to be a weaker metric of amount of research than “number of publications,” etc., so this should be discounted. Nevertheless, the fact that the ratio of AI organizations to CS organizations is so much higher than the other ratios that we looked at is a puzzle. Perhaps the difference comes from the CS community and the AI community having different cultural norms. Or, perhaps MAS is less consistent about how it counts organizations than how it counts publications.\n\n\n#### Famous prizes awarded for AI research vs. CS research\n\n\n[ACM Turing Award](http://en.wikipedia.org/wiki/ACM_Turing_Award): Six out of 46 prizes were awarded for AI research, so 13% of the total.[10](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_9_10731 \"The annual Turing Prize was first awarded in 1966 (last prize 2012), so 46 prizes so far. 
Of those, 6 were for achievements in AI related research, namely:\n• 1969 Marvin Minsky\n• 1971 John McCarthy\n• 1975 Newell & Simon\n• 1991 Robin Milner (machine assisted proof construction)\n• 1994 Edward Feigenbaum & Raj Reddy\n• 2010 Leslie Valiant (Probably Approximately Correct Learning)\n• 2011 Judea Pearl\n8 of the 46 prizes were awarded to 2 people, and another 2 were awarded to 3 people, so the total number of recipients is 58, out of which 8 received the prize for AI-related achievements.\")\n\n\n[Nevanlinna Prize](http://en.wikipedia.org/wiki/Nevanlinna_Prize): One of the 8 prizes was awarded for AI work, so 12.5% of the total. However, the prize for AI work was awarded in 1986, which is a long time ago.[11](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_10_10731 \"The Nevanlinna Prize has been awarded every 4 years since 1982; 8 times so far.\")\n\n\n#### Amount of funding\n\n\nIn 2011, the National Science Foundation (NSF) received $636 million for funding CS research (through [CISE](http://www.nsf.gov/dir/index.jsp?org=CISE)). Of this, $169 million [went to](http://www.nsf.gov/about/budget/fy2013/pdf/06-CISE_fy2013.pdf) Information and Intelligent Systems (IIS). IIS has [three programs](http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=13707&org=IIS&from=home): Cyber-Human Systems (CHS), Information Integration and Informatics (III) and Robust Intelligence (RI). If roughly 1/3 of the funding went to each of these, then $56 million went to Robust Intelligence, so 9% of the total CS funding. (Some CISE funding may have gone to AI work outside of IIS — that is, via [ACI](http://www.nsf.gov/div/index.jsp?div=ACI), [CCF](http://www.nsf.gov/div/index.jsp?div=CCF), or [CNS](http://www.nsf.gov/div/index.jsp?div=CNS) — but at a glance, non-IIS AI funding through CISE looks negligible.)\n\n\nOther major U.S. 
funding sources for CS research include [ONR](http://www.onr.navy.mil/), [DARPA](http://www.darpa.mil/), and several companies (Microsoft, Google, IBM, etc.) but we have not investigated these funding sources yet. We also did not investigate non-U.S. funding sources.\n\n\n### The growth of the AI field\n\n\nWe did not investigate the growth rate of the number of AI researchers in sufficient depth to make meaningful estimates. However, the growth rate of the number of scientists and engineers in all fields might serve as a *very weak* proxy measure for the growth rates of AI or CS.\n\n\nFor example, **the annual growth rate of science and engineering researchers in OECD countries, between 1995 and 2005, appears to be about 3.3%**, corresponding to a doubling time of 23 years.[12](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_11_10731 \"From U.S. National Science Foundation (NSF). Science and Engineering Indicators: 2010, Chapter 3. Science and Engineering Labor Force:\nIn the early 1960s, a prominent historian of science, Derek J. de Solla Price, examined the growth of science and the number of scientists over very long periods in history and summarized his findings in a book entitled Science Since Babylon (1961). Using a number of empirical measures (most over at least 300 years), Price found that science, and the number of scientists, tended to double about every 15 years, with measures of higher quality science and scientists tending to grow slower (doubling every 20 years) and measures of lower quality science and scientists tending to grow faster (every 10 years). According to Price (1961), one implication of this long-term exponential growth is that “80 to 90% of all the scientists that ever lived are alive today.” This insight follows from the likelihood that most of the scientists from the past 45 years (a period of three doublings) would still be alive. 
Price was interested in many implications of these growth patterns, but in particular, he was interested in the idea that this growth could not continue indefinitely and the number of scientists would reach “saturation.” Price was concerned in 1961 that saturation had already begun.\nHow different are the growth rates in the number of scientists and engineers in recent periods from what Price estimated for past centuries? Table 3-A shows growth rates for some measurements of the S&E labor force in the United States and elsewhere in the world for a period of available data. Of these measures, the number of S&E doctorate holders in the United States labor force showed the lowest average annual growth of 2.4% (doubling in 31 years if this growth rate were to continue). The number of doctorate holders employed in S&E occupations in the United States showed a faster average annual growth of 3.8% (doubling in 20 years if continued). There are no global counts of individuals in S&E, but counts of “researchers” in member countries of the Organisation for Economic Co-operation and Development (OECD) grew at an average annual rate of 3.3% (doubling in 23 years if continued). Data on the population of scientists and engineers in most developing countries are very limited, but OECD data for researchers in China show a 10.8% average annual growth rate (doubling in 8 years if continued). All these numbers are broadly consistent with a continuation of growth in S&E labor exceeding the rate of growth in the general labor force.\") This needs to be viewed in juxtaposition with indications that average researcher productivity (as measured by patents per researcher, amount of time spent training per researcher, the number of coauthors per paper, and the number of papers cited) has been decreasing.[13](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_12_10731 \"Below are some references on declining productivity per researcher. 
Our thanks to Gwern for compiling many of these in the article Scientific Stagnation:\n• Machlup, Fritz. The Production and Distribution of Knowledge in the United States, Princeton, NJ: Princeton University Press, 1962, 170-176\n• Segerstrom, Paul. Endogenous Growth Without Scale Effects, American Economic Review, December 1998, 88, 1290-1310\n• Terman, F.E. A Brief History of Electrical Engineering Education, Proceedings of the IEEE, August 1998, 86 (8), 1792-1800\n• Adams, James D., Black, Grant C., Clemmons, J.R., and Stephan, Paula E. Scientific Teams and Institutional Collaborations: Evidence from U.S. Universities, 1981-1999, NBER Working Paper #10640, July 2004\n• Jones (2006), Age and Great Invention\n• Jones, Benjamin F. The Burden of Knowledge and the Death of the Renaissance Man: Is Innovation Getting Harder? NBER Working Paper #11360, 2005\n• National Research Council, On Time to the Doctorate: A Study of the Lengthening Time to Completion for Doctorates in Science and Engineering, Washington, DC: National Academy Press, 1990\nTilghman, Shirley (chair) et al. Trends in the Early Careers of Life Sciences, Washington, DC: National Academy Press, 1998\n• Zuckerman, Harriet and Merton, Robert. Age, Aging, and Age Structure in Science, in Merton, Robert, The Sociology of Science, Chicago, IL: University of Chicago Press, 1973, 497-559\n• Cronin et al, 2004 Visible, Less Visible, and Invisible Work: Patterns of Collaboration in 20th Century Chemistry, Journal of the American Society for Information Science and Technology, 2004, 55(2), 160-168\n• Grossman, Jerry. The Evolution of the Mathematical Research Collaboration Graph, Congressus Numerantium, 2002, 158, 202-212\n• Cronin, Blaise, Shaw, Debora, and La Barre, Kathryn. 
A Cast of Thousands: Coauthorship and Subauthorship Collaboration in the 20th Century as Manifested in the Scholarly Journal Literature of Psychology and Philosophy, Journal of the American Society for Information Science and Technology, 2003, 54(9), 855-871\n• McDowell, John, and Melvin, Michael. The Determinants of Coauthorship: An Analysis of the Economics Literature, Review of Economics and Statistics, February 1983, 65, 155-160\n• Hudson, John. Trends in Multi-Authored Papers in Economics, Journal of Economic Perspectives, Summer 1996, 10, 153-158\n• Laband, David and Tollison, Robert. Intellectual Collaboration, Journal of Political Economy, June 2000, 108, 632-662\n• Jones 2010. As Science Evolves, How Can Science Policy?\n• The Collapse of the Soviet Union and the Productivity of American Mathematicians, by George J. Borjas and Kirk B. Doran, NBER Working Paper No. 17800, February 2012\n\") The NSF Budget for Information and Intelligent Systems (IIS) has generally increased between 4% and 20% per year since 1996, with a one-time percentage boost of 60% in 2003, for a total increase of 530% over the 15 year period between 1996 and 2011.[14](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_13_10731 \"See table with upper left-hand corner A367 in the spreadsheet.\") “Robust Intelligence” is one of three program areas covered by this budget. According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.[15](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_14_10731 \"Table with upper left hand corner A2 in the spreadsheet.\")\n\n\n### Notes on further research\n\n\nFuture research on this topic could dig much deeper, and come to more robust conclusions. Our purpose here is to lay some groundwork for future research. 
With that in mind, here are some miscellaneous notes to future researchers investigating the current size and past growth of the AI field:\n\n\n* If the papers being cited are newer, that could indicate more rapid progress. On the other hand, it could also indicate faddishness, and one would somehow need to differentiate between the two things.\n* Some citation databases that could be useful for analyzing citation patterns are Scopus, Web of Science, MS Academic Search, and Science Citation Index (SSI).[16](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_15_10731 \"A 2008 study compared PubMed, Scopus, Web of Science, and Google Scholar and concluded: “PubMed and Google Scholar are accessed for free […] Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range […] but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.” Larsen & von Ins (2010) claim that the coverage of SSI has been declining.\")\n* Some sources of noise in citation counts are: (a) Journal editors asking authors of submitted papers to add citations to other papers in the same journal in order to boost the journal’s impact factor, and (b) Authors citing their own papers excessively in order to increase their citation counts.[17](https://intelligence.org/2014/01/28/how-big-is-ai/#footnote_16_10731 \"Here are some caveats about citations as a measure of quality: Wilhite and Fong (2012): “…impact factors continue to be a primary means by which academics “quantify the quality of science”. 
One side effect of impact factors is the incentive they create for editors to coerce authors to add citations to their journal. Coercive self-citation does not refer to the normal citation directions, given during a peer-review process, meant to improve a paper. Coercive self-citation refers to requests that (i) give no indication that the manuscript was lacking in attribution; (ii) make no suggestion as to specific articles, authors, or a body of work requiring review; and (iii) only guide authors to add citations from the editor’s journal.” And Storbeck (2012): “The [extent] of manipulation is amazing. For example, according to figures published by the Managing Editor of the ‘Review of Finance’, the impact factor of the ‘Journal of Banking and Finance’ – the fourth worst offender according to the study by Wilhite and Fong – dwindles if self-citations are excluded. While the raw impact factor of the journal is 2.731, the one without self-citations is just 0.748.”\")\n\n\nOur thanks to Sebastian Nickel for data-gathering, and to Carl Shulman for his feedback.\n\n\n\n\n---\n\n1. Another important input metric is theoretical progress imported from other fields, e.g. methods from statistics.\n2. It’s also worth noting the following point. Suppose that a source S can be used to generate an estimate E1 for a quantity Q1 having to do with AI and an estimate E2 for a quantity Q2 having to do with CS. Then E1 and E2 may overstate or understate Q1 and Q2 (respectively). Let the factors by which E1 and E2 differ from Q1 and Q2 be F1 and F2. We don’t have good estimates for F1 and F2, but if we compute the ratio (E1)/(E2) we get [(Q1)/(Q2)]\\*[(F1)/(F2)]. The quantity (F1)/(F2) will be closer to 1 than F1 is to 1, because some of the factors that lead E1 to deviate from Q1 to given degrees will also lead E2 to deviate from Q2 to similar degrees. So (E1)/(E2) is closer to (Q1)/(Q2) (in relative terms) than E1 is to Q1 (in relative terms).\n3. 
MAS shows 1.6 million authors in CS and 0.26 million authors in AI, so 16%. If one adds up the listed number of authors in AI and cognate disciplines, the figure rises to 39%. However, some authors publish in multiple disciplines (for example, an author might publish in both artificial intelligence and machine learning).\n4. Cells B96 through B100 of the [spreadsheet](http://intelligence.org/wp-content/uploads/2014/01/Current-size-past-growth-of-the-AI-field.xlsx).\n5. Some papers may be listed under multiple categories, making it unclear whether the 10% figure or the 20% figure is more representative.\n6. Table with upper left hand corner A2 in the [spreadsheet](http://intelligence.org/wp-content/uploads/2014/01/Current-size-past-growth-of-the-AI-field.xlsx).\n7. Google Scholar results:\nSearch term “computer science” (in quotes) yields 2,650,000 results \n\n“artificial intelligence” -> 1,710,000 \n\n“machine intelligence” -> 655,000\n\n\n*since 2013*: \n\nSearch term “computer science” (in quotes) yields 99,600 results \n\n“artificial intelligence” -> 32,300 \n\n“machine intelligence” -> 11,600\n\n\n*2012*: \n\nSearch term “computer science” (in quotes) yields 163,000 results \n\n“artificial intelligence” -> 52,500 \n\n“machine intelligence” -> 22,600\n\n\n*2011*: \n\nSearch term “computer science” (in quotes) yields 247,000 results \n\n“artificial intelligence” -> 66,100 \n\n“machine intelligence” -> 23,000\n8. Cell B27 through Cell B31 of the [spreadsheet](http://intelligence.org/wp-content/uploads/2014/01/Current-size-past-growth-of-the-AI-field.xlsx).\n9. Cell B119 through Cell B123 of the [spreadsheet](http://intelligence.org/wp-content/uploads/2014/01/Current-size-past-growth-of-the-AI-field.xlsx).\n10. The annual [Turing Prize](http://en.wikipedia.org/wiki/Turing_Prize) was first awarded in 1966 (last prize 2012), so 46 prizes so far. 
Of those, 7 were for achievements in AI-related research, namely: \n\n• 1969 Marvin Minsky \n\n• 1971 John McCarthy \n\n• 1975 Newell & Simon \n\n• 1991 Robin Milner (machine assisted proof construction) \n\n• 1994 Edward Feigenbaum & Raj Reddy \n\n• 2010 Leslie Valiant (Probably Approximately Correct Learning) \n\n• 2011 Judea Pearl \n\n8 of the 46 prizes were awarded to 2 people, and another 2 were awarded to 3 people, so the total number of recipients is 58, out of which 9 received the prize for AI-related achievements.\n11. The Nevanlinna Prize has been awarded every 4 years since 1982; 8 times so far.\n12. From [U.S. National Science Foundation (NSF). Science and Engineering Indicators: 2010, Chapter 3. Science and Engineering Labor Force](http://www.nsf.gov/statistics/seind10/c3/c3s.htm#sb1):\n\n> In the early 1960s, a prominent historian of science, Derek J. de Solla Price, examined the growth of science and the number of scientists over very long periods in history and summarized his findings in a book entitled *Science Since Babylon* (1961). Using a number of empirical measures (most over at least 300 years), Price found that science, and the number of scientists, tended to double about every 15 years, with measures of higher quality science and scientists tending to grow slower (doubling every 20 years) and measures of lower quality science and scientists tending to grow faster (every 10 years). According to Price (1961), one implication of this long-term exponential growth is that “80 to 90% of all the scientists that ever lived are alive today.” This insight follows from the likelihood that most of the scientists from the past 45 years (a period of three doublings) would still be alive. 
Price was interested in many implications of these growth patterns, but in particular, he was interested in the idea that this growth could not continue indefinitely and the number of scientists would reach “saturation.” Price was concerned in 1961 that saturation had already begun.\n> \n> \n\n\nHow different are the growth rates in the number of scientists and engineers in recent periods from what Price estimated for past centuries? [Table 3-A](http://www.nsf.gov/statistics/seind10/c3/tt03-a.htm) shows growth rates for some measurements of the S&E labor force in the United States and elsewhere in the world for a period of available data. Of these measures, the number of S&E doctorate holders in the United States labor force showed the lowest average annual growth of 2.4% (doubling in 31 years if this growth rate were to continue). The number of doctorate holders employed in S&E occupations in the United States showed a faster average annual growth of 3.8% (doubling in 20 years if continued). There are no global counts of individuals in S&E, but counts of “researchers” in member countries of the Organisation for Economic Co-operation and Development (OECD) grew at an average annual rate of 3.3% (doubling in 23 years if continued). Data on the population of scientists and engineers in most developing countries are very limited, but OECD data for researchers in China show a 10.8% average annual growth rate (doubling in 8 years if continued). All these numbers are broadly consistent with a continuation of growth in S&E labor exceeding the rate of growth in the general labor force.\n13. Below are some references on declining productivity per researcher. Our thanks to [Gwern](http://www.gwern.net/) for compiling many of these in the article [Scientific Stagnation](http://www.gwern.net/): \n\n• Machlup, Fritz. 
*[The Production and Distribution of Knowledge in the United States](http://www.amazon.com/Production-Distribution-Knowledge-United-States/dp/0691003564/?tag=gwernnet-20)*, Princeton, NJ: Princeton University Press, 1962, 170-176 \n\n• Segerstrom, Paul. Endogenous Growth Without Scale Effects, *American Economic Review*, December 1998, 88, 1290-1310 \n\n• Terman, F.E. A Brief History of Electrical Engineering Education, *Proceedings of the IEEE*, August 1998, 86 (8), 1792-1800 \n\n• Adams, James D., Black, Grant C., Clemmons, J.R., and Stephan, Paula E. Scientific Teams and Institutional Collaborations: Evidence from U.S. Universities, 1981-1999, NBER Working Paper #10640, July 2004 \n\n• Jones (2006), *Age and Great Invention* \n\n• Jones, Benjamin F. [The Burden of Knowledge and the Death of the Renaissance Man: Is Innovation Getting Harder?](http://www.nber.org/papers/w11360.pdf) NBER Working Paper #11360, 2005 \n\n• National Research Council, *[On Time to the Doctorate: A Study of the Lengthening Time to Completion for Doctorates in Science and Engineering](http://www.amazon.com/Time-Doctorate-Lengthening-Completion-Engineering/dp/030904085X/?tag=gwernnet-20)*, Washington, DC: National Academy Press, 1990 \n\n• Tilghman, Shirley (chair) et al. *[Trends in the Early Careers of Life Scientists](http://www.amazon.com/Trends-Early-Careers-Life-Scientists/dp/0309061806/?tag=gwernnet-20)*, Washington, DC: National Academy Press, 1998 \n\n• Zuckerman, Harriet and Merton, Robert. Age, Aging, and Age Structure in Science, in Merton, Robert, *[The Sociology of Science](http://www.amazon.com/Sociology-Science-Theoretical-Empirical-Investigations/dp/0226520927/?tag=gwernnet-20)*, Chicago, IL: University of Chicago Press, 1973, 497-559 \n\n• Cronin et al, 2004 Visible, Less Visible, and Invisible Work: Patterns of Collaboration in 20th Century Chemistry, *Journal of the American Society for Information Science and Technology*, 2004, 55(2), 160-168 \n\n• Grossman, Jerry. 
The Evolution of the Mathematical Research Collaboration Graph, *Congressus Numerantium*, 2002, 158, 202-212 \n\n• Cronin, Blaise, Shaw, Debora, and La Barre, Kathryn. A Cast of Thousands: Coauthorship and Subauthorship Collaboration in the 20th Century as Manifested in the Scholarly Journal Literature of Psychology and Philosophy, *Journal of the American Society for Information Science and Technology*, 2003, 54(9), 855-871 \n\n• McDowell, John, and Melvin, Michael. The Determinants of Coauthorship: An Analysis of the Economics Literature, *Review of Economics and Statistics*, February 1983, 65, 155-160 \n\n• Hudson, John. Trends in Multi-Authored Papers in Economics, *Journal of Economic Perspectives*, Summer 1996, 10, 153-158 \n\n• Laband, David and Tollison, Robert. Intellectual Collaboration, *Journal of Political Economy*, June 2000, 108, 632-662 \n\n• Jones 2010. [As Science Evolves, How Can Science Policy?](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.2853&rep=rep1&type=pdf) \n\n• The Collapse of the Soviet Union and the Productivity of American Mathematicians, by George J. Borjas and Kirk B. Doran, NBER Working Paper No. 17800, February 2012\n14. See table with upper left-hand corner A367 in the [spreadsheet](http://intelligence.org/wp-content/uploads/2014/01/Current-size-past-growth-of-the-AI-field.xlsx).\n15. Table with upper left hand corner A2 in the [spreadsheet](http://intelligence.org/wp-content/uploads/2014/01/Current-size-past-growth-of-the-AI-field.xlsx).\n16. A 2008 study compared PubMed, Scopus, Web of Science, and Google Scholar and concluded: “PubMed and Google Scholar are accessed for free […] Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range […] but it is currently limited to recent articles (published after 1995) compared with Web of Science. 
Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.” [Larsen & von Ins (2010)](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2909426/pdf/11192_2010_Article_202.pdf) claim that the coverage of SSI has been declining.\n17. Here are some caveats about citations as a measure of quality: [Wilhite and Fong (2012)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Wilhite-Fong-Coercive-citation-in-academic-publishing.pdf): “…impact factors continue to be a primary means by which academics “quantify the quality of science”. One side effect of impact factors is the incentive they create for editors to coerce authors to add citations to their journal. Coercive self-citation does not refer to the normal citation directions, given during a peer- review process, meant to improve a paper. Coercive self-citation refers to requests that (i) give no indication that the manuscript was lacking in attribution; (ii) make no suggestion as to specific articles, authors, or a body of work requiring review; and (iii) only guide authors to add citations from the editor’s journal.” And [Storbeck (2012)](http://olafstorbeck.blogstrasse2.de/?p=1536): “The [extent] of manipulation is amazing. For example, according to figures published by the Managing Editor of the ‘Review of Finance’, the impact factor of the ‘Journal of Banking and Finance’ – the fourth worst offender according to the study by Wilhite and Fong – dwindles if self-citations are excluded. While the raw impact factor of the journal is 2.731, the one without self-citations is just 0.748.”\n\nThe post [How Big is the Field of Artificial Intelligence? 
(initial findings)](https://intelligence.org/2014/01/28/how-big-is-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-28T18:13:46Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "27f73ca8cfa9b6bc5ebc9efb3da69a44", "title": "Existential Risk Strategy Conversation with Holden Karnofsky", "url": "https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/", "source": "miri", "source_type": "blog", "text": "On January 16th, 2014, MIRI met with Holden Karnofsky to discuss existential risk strategy. The participants were:\n\n\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (research fellow at MIRI)\n* [Luke Muehlhauser](http://lukeprog.com/) (executive director at MIRI)\n* Holden Karnofsky (co-CEO at [GiveWell](http://www.givewell.org/))\n\n\nWe recorded and transcribed the conversation, and then edited and paraphrased the transcript for clarity, conciseness, and to protect the privacy of some content. The resulting edited transcript is available in full **[here](http://intelligence.org/wp-content/uploads/2014/01/01-16-2014-conversation-on-existential-risk.pdf)** (41 pages).\n\n\nBelow is a summary of the conversation written by Karnofsky, then edited by Muehlhauser and Yudkowsky. Below the summary are some highlights from the conversation chosen by Karnofsky.\n\n\nSee also three previous conversations between MIRI and Holden Karnofsky: on [MIRI strategy](http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/), on [transparent research analyses](http://intelligence.org/2013/08/25/holden-karnofsky-interview/), and on [flow-through effects](http://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/).\n\n\n\n#### Summary\n\n\n*Pages 1-5*:\n\n\nDefining some basic terms, such as the hypothesis that far future considerations dominate utility (the “N lives” hypothesis). 
Holden temporarily accepts this hypothesis for the sake of argument.\n\n\n \n\n\n*Pages 5-9*:\n\n\nDiscussion of how the path to colonization of the stars is likely to play out.\n\n\nThere has been significant confusion over the definition of the term “existential risk.” Going into the conversation, Holden interpreted this term to mean “risks of events that would directly and more-or-less immediately wipe out the human race,” whereas Eliezer and Luke meant to use it to mean anything that causes humanity to fall drastically short of its potential, including paths that simply involve never succeeding in e.g. space colonization. This confusion took up much of pages 1-9 and was clarified at pages 8-9.\n\n\n \n\n\n*Pages 9-13*:\n\n\nFurther discussion of how the path to colonization of the stars is likely to play out. There isn’t much in the way of clear and major disagreement except for the dynamics around artificial general intelligence.\n\n\n \n\n\n*Pages 14-19*:\n\n\nHolden argues for an emphasis on “doing good without worrying much about its connection to the far future” as plausibly the best way to improve the far future. At the same time, he thinks it’s likely that global catastrophic risks are a promising area for philanthropy and thinks it’s likely that GiveWell will make recommendations relevant to these.\n\n\nHolden thinks that many of the arguments advanced in favor of focusing on global catastrophic risks are “right for the wrong reasons,” focusing on Pascal’s Mugging type arguments and not sufficiently addressing the question of whether these risks are neglected, and how neglected compared to other areas. Eliezer and Luke agree that addressing this question is crucial (though they are confident that the areas they emphasize as promising are sufficiently neglected to justify this).\n\n\nHolden wants to do more investigation into what’s neglected before being confident that these areas are relatively neglected, but his current guess is that they are. 
He discusses to some extent why GiveWell hasn’t focused on them previously despite suspecting this.\n\n\n \n\n\n*Pages 19-27*:\n\n\nHolden thinks that donating to AMF (a) may make the world more robust to catastrophe in multiple ways, if in fact “robust to catastrophe” turns out to be the most important criterion; (b) is a good thing to do if that criterion does not turn out to be so important and if our ability to identify key risk factors for the far future is essentially nil; (c) has helped GiveWell along its path to becoming influential and able to work on other fronts. He believes that the amount of good done on the (a) front is small relative to the amount of influence over the future of humanity that people like Eliezer and Luke seek to have, and that people who think they can have a great deal of influence on the future of humanity have reason to pursue other opportunities, but that it isn’t obvious that a donor (especially the sort of individual donor GiveWell has historically targeted) can do as much good as Luke and Eliezer seem to aim for.\n\n\nEliezer and Luke agree that doing “normal good stuff” has a good track record and is likely to produce significant goods in the near term, but they worry that faster economic growth accelerates AI capabilities research relative to AI safety/Friendliness research, since the former parallelizes more easily across multiple researchers and funding sources than the latter does. If smarter-than-human AI is figured out before Friendliness is, they expect global catastrophe. Hence, faster economic growth could be net bad in the long run. But, they’re much, much less sure of the sign of this argument than they are about e.g. the intelligence explosion and the orthogonality of capabilities and values.\n\n\n \n\n\n*Pages 28-32*:\n\n\nIt’s possible that the disagreements between Holden and Luke/Eliezer reduce to their disagreements over the specific dynamics of AI risk. 
Holden feels that the “Astronomical Waste” essay has been misleadingly cited, as establishing that one should directly focus on x-risk reduction work over other causes; Eliezer feels the essay didn’t say anything incorrect but has been misinterpreted on this point.\n\n\n \n\n\n*Pages 32-35*:\n\n\nDiscussion of how to deal with a situation in which there are multiple models of the world that might be right, and some involve much higher claimed utilities than others. Does one do a straightforward expected-value calculation, which means essentially letting the model with the bigger numbers in it dominate? Or does one do something closer to “giving each model votes/resources in proportion to their likelihood”? Every principled/explicit answer seems to have problems. Holden, Luke and Eliezer agree that the “multiply big utilities by small probabilities” approach is not optimal.\n\n\n \n\n\n*Pages 35-39*:\n\n\nDiscussion of likelihood that humanity will eventually colonize the stars. Luke and Eliezer think that this is a default outcome if things continue to go moderately well; Holden had the impression that this was not the case, but hasn’t thought about it much. Possibilities for updating on this question are discussed.\n\n\n \n\n\n*Pages 39-45*:\n\n\nDiscussion of whether “causing N future lives to exist when they would have otherwise not existed” should be valued at least 10%\\*N times as much as “saving one life today.” Holden has high uncertainty on this question, more so than Eliezer and Luke. He mistrusts the methodology of thought experiments to a greater degree than Eliezer and Luke.\n\n\n \n\n\n#### Some highlights selected by Holden\n\n\n**Holden**: Yeah, I agree. 
I think you want a list of all the things that could be highly disruptive and you want to consider them all risks, and you want to consider them all possibilities, I’m not really sure what else there is here.\n\n\n**Luke**: I think we might also disagree on what you can figure out, and what you can’t, about the future.\n\n\n**Holden**: Yeah, I think that’s our main disagreement.\n\n\n**Luke**: Because I think we make a list and we think we know some things about the items on that list and therefore we can figure out which ones to focus on more.\n\n\n**Holden**: Well, no, I would agree with what you just stated, as stated, but I just think that you are more confident than I am. I also believe we can make a list of things that are much more likely to be disruptive than other things, and then we should go and look into them and pay attention to them, but I just think that you guys are much more confident in your view of what are the things. My feeling is this: my feeling is basically we know very little. It’s very important to keep this [upward trend] going. That is not something we should neglect or ignore. So generally helping people is good, that’s kind of how we’ve gotten to where we’ve gotten, is that people have just done things that are good, without visualizing where they’re going.\n\n\nThe track record of visualizing where it’s all going is really bad. The track record of doing something good is really good. So I think we should do good things, I also think that we should list things that we think are in the far future or just relevant to the far future that are especially important. I think we should look into all of them. Another point worth noting is that my job is different from you guys’ job. You guys are working in an organization that’s trying to … it’s a specialized organization, it’s knowledge production. My job is explicitly to be broad. 
My job is to basically be able to advise a philanthropist and part of what I want to be able to do is to be able to talk about a lot of different options and know about a lot of different things. I don’t think it’s a good way for me to do my job, to just pick the highest expected value thing and put all my eggs in that basket. But perhaps that would be a good job for many people, just not for someone whose explicit value-add is breadth of knowledge.\n\n\nSo part of it is the role, but I do think that you guys are much more confident. My view is that we should list things we think we know, we should look into doing something about them. At the same time, we should also just do things that are good, because doing things that are good has a better track record of getting us closer to colonizing the stars than doing things that are highly planned out.\n\n\n**Eliezer**: So indeed, if I tried to pass your ideological Turing test, I would have said some mixture of “we can’t actually model the weird stuff and people trying to do good is what got us where we are and it will probably take us to the galaxy as well,” that would have been the very thing …\n\n\n**Holden**: You just need to water down a little.\n\n\n**Eliezer**: Sure, so: “insofar as we’re likely to get to the galaxy at all, and it’s highly probable that a lot of the pathway will be people just trying to do good, so just try to do good and get there.”\n\n\n**Holden**: Yeah, and it especially will come from people just doing good, as a proximate goal, and then having kind of civilizational consequences in ways that were hard to foresee, which is why I’m particularly interested in opportunities to do good that just feel big. Even if the definition of big is different from opportunity to opportunity, so like a way to help a lot of animals. A way to help a lot of Africans, a way to help a lot of Americans. 
These are all, in some absolute sense, it seems unlikely that they could be in the same range, but in some market efficiency sense, they’re in the same range. This is: whoa, I don’t see something this good every day, most things this good someone else snaps up, let me grab this one, because this is the kind of thing that could be like a steam engine, where it’s like, this thing is cool, I built it. It’s super cool. Then it actually has civilizational consequences.\n\n\n**Eliezer**: So in order to get an idea of what you think Earth’s ideal allocation of resources should be, if you were appointed economic czar of the world, you formed the sort of dangerous to think about counterfactual… or maybe a better way of putting it would be: how much money would you need to have personal control over before you started trying to fund, say, bioterror, nanotech and AI risk type stuff? Not necessarily any current organization, but before you start trying to do something about it?\n\n\n**Holden**: I mean, less than Good Ventures has.\n\n\n**Eliezer**: Interesting.\n\n\n**Holden**: Like, I think we’re probably going to do something. But it depends. We want to keep looking into it. Part of this is that I don’t have all the information and you guys may have information that I don’t, but I think a lot of people in your community don’t have information and are following a Pascal’s mugging-type argument to a conclusion that they should have just been a lot more critical of, and a lot more interested in investigating the world about. So my answer is: we’re still looking at all this stuff, but my view is that no, existential risks are a great opportunity. There’s not nearly enough funding there.\n\n\n \n\n\n\\*\\*\\*\n\n\n**Holden**: So anyway, that was an aside. 
I think you guys are more in the camp of thinking you understand the issues really well and not only understanding what the issues are, but who is working on what and believing that the neglectedness of x-risk is a large part of your interest in x-risk, I think. I think there are a lot of people who reason so quickly to believing x-risk is paramount that I don’t believe they’ve gone out and looked at the world and seen what is neglected and what isn’t neglected. I think they’re instead doing a version of Pascal’s mugging. But I’m happy to engage with you guys and just say that I don’t know everything about what’s neglected and what isn’t, I think existential risk looks pretty neglected, preliminarily, but I want to look at more things before I really decide how neglected it is and what else might be more neglected. Do you agree with me that the neglectedness of x-risk is a major piece of why you think it’s a good thing to work on?\n\n\n**Luke**: I think it is for me.\n\n\n**Eliezer**: I think I would like to specialize that to say that there are particular large x-risks that look very neglected, which means you get a big marginal leverage from acting on them. But even that wouldn’t really honestly actually carry the argument.\n\n\n**Holden**: But if you read “Astronomical Waste,” it concludes that x-risk is the thing to work on, without discussing whether it’s neglected, and I think that’s the chain of reasoning most people are following. I think that is screwed up.\n\n\n**Eliezer**: Yeah, that can’t possibly be right. Or a sane Earth has some kind of allocation across all philanthropies. And insofar as things drop below their allocations, you’ll get benefit from putting stuff into them, and if they go above their allocations, you’re better off putting your money somewhere else. There exists some amount of investment we can make in x-risk, such that our next investment should be in Against Malaria Foundation or something. 
Although actually that still isn’t right, because that’s a better argument now because GiveWell actually did say Against Malaria Foundation is temporarily overinvested, let’s see what they can do with their existing inflow.\n\n\n**Luke**: Though not necessarily relative to the current allocation in the world!\n\n\n**Holden**: Yeah, absolutely.\n\n\n \n\n\n\\*\\*\\*\n\n\n**Holden**: Yeah. Okay. I think some disagreements we have, which I think are like not enormous disagreements, I think they mostly have to do with how confident we can be. I think we agree that there are many things that are important, we agree that being neglected is part of what makes a cause good. If there are other causes that are really important and really neglected, those are good, too. We agree that everything that is good has some value, but I think the things that are good have more value relative to the things that seem to fit into the long-term plan and that has a lot to do with my feeling about how confident we can be about the long-term plan.\n\n\n**Eliezer**: My reasoning for CFAR sounds a lot like this. Why, to some extent, I sort of, in practice, divide my efforts between MIRI and CFAR, is sort of like this, except that no matter what happens, I expect the causal pathways to galactic colonization to go down the “something weird happens and other weird things potentially prevent you from doing it” path.\n\n\nI think that human colonization of the galaxy has probability nearly zero.\n\n\n**Holden**: Right, you think it would be something human-like.\n\n\n**Eliezer**: I’m hoping that they’re having fun and that they have this big, complicated civilization and that there was sort of a continuous inheritance from human values, because I think fun is at present, a concept that exists among humans and maybe to some lesser extent, other mammals, but not in the rocks. 
So you don’t get it for free, you don’t want a scenario where the galaxies end up being turned into paperclips or something. But: “humane” life might be a better term.\n\n\n**Holden**: Sure, sure.\n\n\n**Eliezer**: I think that along the way there you get weird stuff happening and weird emergencies. So CFAR can be thought of as a sort of generalized weird emergencies handler.\n\n\n**Holden**: There’s a lot of generalized weird emergencies handlers.\n\n\n**Luke**: Yeah, you can improve decision-making processes in the world in general, by getting prediction markets standard or something.\n\n\n**Holden**: Also just by making people wealthier and happier.\n\n\n**Eliezer**: Prediction markets have a bit of trouble with x-risks for obvious reasons, like the market can’t pay off in most of the interesting scenarios.\n\n\n**Holden**: I think you can make humanity smarter by making it wealthier and happier. It certainly seems to be what’s happened so far.\n\n\n**Eliezer**: Yeah, and intelligence enhancement?\n\n\n**Holden**: Yeah, well, that, too. But that’s further off and that’s more specific and that’s more speculative. I think the world really does get smarter, as an ecosystem. I don’t mean the average IQ. I think the ecosystem gets smarter. If you believe that MIRI is so important, I think the existence of MIRI is a testament to this, because I think the existence of MIRI is made possible by a lot of this wealth and economic development, certainly it’s true for GiveWell. If you take my egg and run it back 20 years, my odds of being able to do anything like this are just so much lower.\n\n\n**Eliezer**: CFAR, from my perspective, it’s sort of like: generalize those kind of skills required to handle a weird emergency like MIRI and have them around for whatever other weird stuff happens.\n\n\n**Holden**: I think the world ecosystem has been getting better in handling weird emergencies like that. 
I think that part of that, if you want to put a lot of weight on your CFARs, then I think that’s evidence, and if you don’t want to put a lot of weight, then I think there’s other evidence. There are more nonprofits that deal with random stuff, because we have more money.\n\n\n**Eliezer**: I’m not sure if I’d rate our ability to handle weird emergencies as having increased. Nuclear weapons are the sort of classic weird emergency that actually did get handled by this lone genius figure who saw it coming and tried to mobilize efforts and so on, I’m talking about Leó Szilárd. So there was a letter to President Roosevelt, which Einstein wrote, except Einstein didn’t write it. It was ghost-written by someone who did it because Leó Szilárd told them to, and then Einstein sent it off. There is this sort of famous story about the conversation where Leó Szilárd explains to Einstein about the critical fission chain reaction and Einstein sort of goes “I never thought of that.” Then came the Manhattan Project, which was this big mobilization of government effort to handle it. \n\nSo my impression is that if something like that happened again, the modern-day Einstein’s letter does not get read by Obama. My impression is that we’ve somehow gotten worse at this.\n\n\n**Holden**: I don’t agree with that.\n\n\n**Luke**: Eliezer, why do you think that?\n\n\n**Holden**: You’re also pointing to a very specific pathway. I’m also thinking about all the institutions that exist to deal with random stuff these days. And all the people who have the intellectual freedom, the financial freedom, to think about this stuff, and not just this stuff, other stuff that we aren’t thinking about, that can turn out to be more important.\n\n\n**Eliezer**: We don’t seem to be doing very well with sort of demobilizing the nuclear weapons of the former Soviet Republics, for example.\n\n\n**Holden**: We’re also talking about random response to random stuff. 
I think we just have a greater degree of society’s ability to notice random stuff and to think about random stuff.\n\n\n**Eliezer**: That’s totally what I would expect on priors, I’m just wondering if we can actually see evidence that it’s true. On priors, I agree that that’s totally expected.\n\n\n**Eliezer**: We could somehow be worse than we were in the 1940s and yet, still, increasing development could, all else equal, improve our capacity to handle weird stuff. I think I’d agree with that. I think I would also sort of agree that, all else being equal, as society becomes wealthier, there are more nonprofits, there is like more room to handle weird stuff.\n\n\n**Holden**: Yeah, it’s also true that as we solve more problems, people go down the list, so I think if it hadn’t been for all the health problems in Africa, Bill Gates might be working on [GCRs], or he might be working on something else with global civilizational consequences. So when I’m sitting here not knowing what to do and not feeling very educated in the various speculative areas, but knowing that I can save some lives, that’s another reason there is something to that.\n\n\nBut it’s certainly like: the case for donating to AMF, aside from the way in which it helps GiveWell, is definitely in a world in which I feel not very powerful and not very important [relative to the world Eliezer and Luke envision]. I feel like, you know, I’m going to do [a relatively small amount of] good and that’s what I’m trying to do.\n\n\nSo in some sense, when you say, AMF isn’t like a player in the story or something, I think that’s completely fair, but also by trying to take a lot of donors who are trying to do this much [a small amount] and trying to help them, we’ve hopefully gotten ourselves in a position to also be a player in the story, if in fact the concept of a player in the story ends up making sense. 
If it doesn’t and this [small amount of good] turns out to be really good, we’ll at least have done that.\n\n\n**Eliezer**: The sort of obvious thing that I might expect Holden to believe, but I’m not sure that that actually passes your ideological Turing test, is that collectively, fixing this stuff collectively, is like a bigger player than collectively the people who go off and try to fix weird things that they think that the fate of the future will hinge on.\n\n\n**Holden**: I just think it’s possible that what you just said is true, and possible that it isn’t. If I’m sitting here, knowing very little about anything, and I want to do a little bit of good, I think doing a little bit of good is better than taking a wild guess on something that I feel very ill-informed about. On the other hand, our ideal at GiveWell is to really be playing both sides.\n\n\n \n\n\n\\*\\*\\*\n\n\n**Eliezer**: Do we think a necessary and sufficient cause of our disagreement is just our visualization of how AI plays out?\n\n\n**Holden**: I think it’s possible.\n\n\n**Eliezer**: If your visualization of how AI worked magically, instantly switched to Eliezer Yudkowsky’s visualization of how AI worked. I mean, Eliezer Yudkowsky, given sudden magical control of GiveWell, does not just make GiveWell all about x-risk. Eliezer puts it like three steps deep in the links, and just sort of tries to increase the degree to which incoming effective altruists are funneled toward…\n\n\n…\n\n\n**Holden**: I think it’s pretty possible, and I just want to contrast what you guys think with the normal tenor of the arguments I have over x-risk, which… I just talk to a lot of people who are just like, look, x-risk is clearly the most important thing. Why do you think that? Well, have you read “Astronomical Waste?” Well, that’s a little bit absurd. 
You have an essay that doesn’t address whether something is neglected, concludes what’s most important, and we’re not even talking about AI and path to AI and why AI, it’s just x-risk, [which people interpret to mean things like] asteroids, come on.\n\n\n**Eliezer**: I endorse your objection. We can maybe issue some kind of joint statement, if you want, to inform people.\n\n\n**Holden**: Yeah, perhaps. I was going to write something about this, so maybe I’ll run it by you guys. To the extent that I’m known as Mr. X-Risk Troll, or whatever, it’s because those are the arguments I’m always having. When I think about you guys, I think that you and I do not see eye to eye on AI, and that goes back to that conversation we had last time, and that may be a lot of the explanation. At the same time, it’s certainly on the table for us to put some resources into this.\n\n\n \n\n\n\\*\\*\\*\n\n\n**Eliezer**: Okay, I checked the “Astronomical Waste” paper and everything in there seemed correct, but I can see how we would all now wish, in retrospect, that a caveat had been added along the lines of “and in the debate over what to do nowadays, this doesn’t mean that explicit x-risk focused charities are the best way to maximize the probability of okay outcome.”\n\n\n**Holden**: Right, and in fact, this doesn’t tell us very much. This may prove a useful framework, it may prove a useless framework. There’s many things that have been left unanswered, whereas the essay really had a conclusion of: we’ve narrowed it down from a lot to a little.\n\n\n**Eliezer**: I don’t remember that being in that essay. It was just sort of like, this is the criterion by which we should choose between actions, which seems like obviously correct in my own ethical framework.\n\n\n**Holden**: I also don’t agree with that, so maybe that’s the next topic.\n\n\n**Eliezer**: Yeah. 
Suppose that you accepted Maximized Probability of Okay Outcome, not as a causal model of how the world works, but just as a sort of a determining ethical criterion. Would anything you’re doing change?\n\n\n**Holden**: I’ve thought about this, maybe not as hard as I should. I don’t think much would change. I think I would be relatively less interested in direct, short-term suffering stuff. But I’m not sure by a lot. Actually, I think I would change substantially now. I think five years ago, I wouldn’t have changed much. I think right now I would, because I feel like we’re becoming better positioned to actually target things; I think I would be a little bit more confident about zeroing in on extreme AI and the far future and all that stuff, and the things that I think matter most to that. But I don’t think it would be a huge change.\n\n\n\\*\\*\\*\n\n\n**Holden**: I just think there’s also a chance that this whole argument is crap and… so there is one guy [at GiveWell] who is definitely representing more the view that we’re not going to have any causal impact on all this [far future] stuff and there is suffering going on right now and we should deal with it, and I place some weight on that view. I don’t do it the way that you would do it in an expected value framework, where it’s like according to this guy, we can save N lives and according to this guy, we could save Q lives and they have very different world models. So therefore, the guy saying N lives wins because N is so much bigger than Q. I don’t do the calculation that way. I’m closer to equal weight, right.\n\n\n**Eliezer**: Yeah, you’re going to have trouble putting that on a firm epistemic foundation but Nick Bostrom has done some work on what he calls parliamentary models of decision-making. 
I’m not sure Nick Bostrom would endorse their extension to this case, but descriptively, it seems a lot of what we do is sort of like: the different things we think might be true get to be voices in our head in proportion to how likely we think they are to be true, and then they negotiate with each other. This has the advantage of being robust against Pascal’s Mugging-type stuff, which I’d like to once again state for the historical record: I invented that term, and not as something that you ought to do! So anyway, it’s robust against Pascal’s Mugging-type stuff, and it has the disadvantage of plausibly failing the what-if-everyone-did-that test.\n\n\n\\*\\*\\*\n\n\n**Holden**: Let me step back a second. I hear your claim that I should assign a very high probability that we can — if we survive — colonize the stars. I believe this to be something that smart technical people would not agree with. I’ve outlined why I think they wouldn’t agree with it, but not done a great job with it, and that’s something that I’d be happy to think more about and talk more about.\n\n\n**Eliezer**: Are there reasons apart from the Fermi Paradox?\n\n\n**Holden**: I don’t know what all the reasons are. 
I’ve given my loose impression and it’s not something that I’ve looked into much, because I didn’t really think there was anyone on the other side.\n\n\nThe post [Existential Risk Strategy Conversation with Holden Karnofsky](https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-28T02:20:08Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "eff0d5a1606258667022b97630c9fdbc", "title": "2013 in Review: Outreach", "url": "https://intelligence.org/2014/01/20/2013-in-review-outreach/", "source": "miri", "source_type": "blog", "text": "This is the 2nd part of my personal and qualitative [self-review of MIRI in 2013](http://intelligence.org/2013/12/20/2013-in-review-operations/).\n\n\nBy “outreach” I refer to *general* outreach efforts, rather than e.g. *outreach to specific researchers*, which will be discussed in the post about MIRI’s 2013 research activities.\n\n\n \n\n\n#### Outreach in 2013\n\n\n1. In early 2013, we [decided](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) to reduce our outreach efforts significantly, and we did.\n2. However, we learned throughout 2013 that some forms of “indirect” outreach tend to be pretty cost-effective for MIRI’s goals.[1](https://intelligence.org/2014/01/20/2013-in-review-outreach/#footnote_0_10653 \"We are currently gathering much more information about the effects of our direct and indirect outreach efforts, but these data will take several months to gather. It’s possible those data will overturn the basic conclusions in this post, but it’s more likely they will slightly adjust them.\")\n3. 
Therefore, we plan to put more effort into indirect outreach in 2014 than we did in 2013.\n\n\n\n#### Reducing our outreach efforts\n\n\nMIRI provisionally decided to reduce its outreach efforts in mid-2012, when we decided to enter negotiations with [Singularity University](http://singularityu.org/) (SU) in response to their offer to acquire the [Singularity Summit](http://singularitysummit.com/), a deal finally [closed](http://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/) in December 2012. The reduction in outreach efforts explained in our [2013 strategy post](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) was largely a continuation of this earlier strategic decision, which had been conditional on the deal with SU closing successfully. (We would have continued to run the Singularity Summit had SU not acquired it.)\n\n\nMIRI will co-produce the Singularity Summit for the next couple years, but this requires minimal effort on our part, and the scheduling of future Summits is in SU’s hands. SU has not yet scheduled the next Singularity Summit.\n\n\nWithout a Singularity Summit occurring in 2013, our *direct* outreach efforts this year were few. By “direct outreach efforts” I refer to efforts that attempt to directly expose people to MIRI’s mission focused on the [superintelligence control problem](http://intelligence.org/about/). 
This is in contrast to “indirect outreach efforts,” which aim to expose people not to MIRI’s mission, but to other ideas that may begin to bridge the [inferential distance](http://wiki.lesswrong.com/wiki/Inferential_distance) that lies between common knowledge and the [core](http://intelligence.org/2013/05/15/when-will-ai-be-created/) [ideas](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) [motivating](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) MIRI’s mission — ideas such as [effective altruism](http://www.effective-altruism.com/), [applied rationality](http://rationality.org/), [global catastrophic risks](http://www.global-catastrophic-risks.com/), the [security mindset](http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/), etc.\n\n\nOur **direct outreach efforts** in 2013 were:\n\n\n1. A [new website](http://intelligence.org/), a necessary part of our rebranding as the Machine Intelligence Research Institute, and some increased use of social media.\n2. A few general-audience talks about superintelligence control, given at universities and technology companies (Stanford, Oxford, Facebook, Quixey, and Heroku), and also at the [Effective Altruism Summit](http://www.effectivealtruismsummit.com/).\n3. Two Bay Area events we hosted for our community: our April “relaunch as MIRI” event, and our September book launch event for James Barrat’s *[Our Final Invention](http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence-ebook/dp/B00CQYAWRY/)*. Two MIRI staffers also published reviews of Barrat’s book, [here](http://www.kurzweilai.net/book-review-our-final-invention-artificial-intelligence-and-the-end-of-the-human-era) and [here](http://singularityhub.com/2013/12/14/will-advanced-ai-be-our-final-invention/).\n4. 
Two new ebooks containing previously-written content: [*Facing the Intelligence Explosion*](http://intelligenceexplosion.com/) and [*The Hanson-Yudkowsky AI-Foom Debate*](http://intelligence.org/ai-foom-debate/).\n5. We also attended several events for networking purposes, and had conversations with interested parties who contacted us, and thereby spoke to many people about superintelligence control.\n6. Various small writings and interviews, e.g. a [short article](http://qz.com/85825/robots-may-take-our-jobs-but-its-hard-to-say-when/) for *Quartz* and an [interview](http://radio.seti.org/episodes/Meet_Your_Replacements) for *Big Picture Science*.\n\n\nIt is difficult to measure how much value these efforts are producing for our long-term goals. Some basic statistics on these efforts are recorded in a footnote.[2](https://intelligence.org/2014/01/20/2013-in-review-outreach/#footnote_1_10653 \"Basic statistics on direct outreach efforts: (1) Our new intelligence.org domain name rose from PageRank 0 to PageRank 5, one rank below LessWrong.com, a site that has been heavily linked and active since 2009. (2) Since it launched in March 2013, intelligence.org has averaged 28,613 unique visitors per month. This is roughly double the traffic to our previous domain, singularity.org, which averaged 14,843 unique visitors per month during its operation (July 2012 through February 2013). (3) Our other “mini-sites” got negligible traffic, as expected. During their active months in 2013, here’s how many unique visitors per month our mini-sites got: IntelligenceExplosion.com (2,336/mo), Friendly-AI.com (50/mo), TheUncertainFuture.com (129/mo). (4) As of Jan. 2nd, MIRI’s Facebook page had ~5000 Likes, had been seen by ~19,500 unique people, and had “engaged” ~1100 unique people (meaning that they liked, commented on, or clicked on MIRI’s Facebook wall posts). MIRI’s Google+ page is more recent and was much less active: with ~230 +1s (analogous to Facebook Likes), and 17 re-shares. 
Note that our Facebook Likes count appears to be artificially high. From Oct. 9th to Nov. 9th, MIRI’s Facebook page received an unusual number of Likes. This began again on Dec. 22nd. We don’t know the cause. (5) Our newsletter subscription count grew only slightly, from 9,152 subscribers in January 2013 to 10,024 subscribers as of December 22nd. (6) Our April 2013 relaunch event was attended by ~65 people. The September 2013 book launch event was attended by ~75 people. In both cases, most attendees were close friends of MIRI. (7) Within one month of its publication, Luke’s review of Our Final Invention on KurzweilAI.net saw ~8,500 unique pageviews and was responsible for at least 20 purchases of the book, which describes MIRI’s mission and work in some detail. Meanwhile, Louie’s review of Our Final Invention for Singularity Hub was responsible for at least 60 purchases of the book as of Jan. 2nd, 2014. (8) From April-Nov 2013, Facing the Intelligence Explosion was purchased 1,379 times. Since September, The Hanson-Yudkowsky AI-Foom Debate has been downloaded at least 1,240 times, but this counts only downloads from people who clicked a link to the book from intelligence.org. (9) How many people attended our general-audience talks? By memory, I’d guess 40 at Stanford, 30 at Oxford, 20 at Facebook, 40 at Quixey, 15 at Heroku, and 50 at the Effective Altruism Summit. These were smaller, more targeted talks than e.g. our past talks at the Singularity Summits.\")\n\n\nOverall, **my own impressions** on the value of these direct outreach efforts are as follows:\n\n\n1. The new domain name (intelligence.org) and website seem clearly worthwhile. Our limited use of social media also seems worthwhile, not to *grow* our community (because: too much [inferential distance](http://wiki.lesswrong.com/wiki/Inferential_distance)), but to keep our existing community engaged with MIRI’s work. 
(For many people, our Facebook updates are the easiest way to stay in touch with MIRI.)\n2. Our superintelligence control talks at the Effective Altruism Summit were worthwhile because there was less inferential distance with that audience. We also know that some donations came as a result of those talks. In contrast, we haven’t yet received any specific evidence that the *other* superintelligence control talks produced value toward our mission, other than by keeping us in touch with our existing community. We didn’t strongly expect to see such evidence so soon, but this still constitutes non-negligible [evidence of absence](http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/) of positive outreach effects.\n3. The two Bay Area events were valuable for staying in touch with our existing community, but they failed to produce major press stories even though we invited several major journalists to each. The events cost more staff time and attention than money. Our two book reviews of *Our Final Invention* are too recent for us to know whether they caused useful action (beyond merely purchasing the book) in their readers.\n4. The two ebooks of pre-existing content were inexpensive to produce, and they make our material more accessible. They are too recent for us to have specific evidence of positive effects, but I’d guess they were worth the expense.\n5. As far as I can tell, the networking events at which we spoke to people about superintelligence control (rather than effective altruism, rationality, etc.) seem to have produced negligible value. We experimented with several presentations of our material, but I think the inferential distance is just too great to engage new people who aren’t already familiar with effective altruism, applied rationality, global catastrophic risks, etc.\n\n\n \n\n\n#### *Indirect* outreach efforts were more clearly successful\n\n\nOur **indirect outreach efforts** in 2013 were also aimed at long-term effects:\n\n\n1. 
Eliezer Yudkowsky wrote new chapters of [*Harry Potter and the Methods of Rationality*](http://hpmor.com/) (*HPMoR*), and MIRI distributed 2500+ paperback copies of its first 17 chapters to schools, universities, and companies.[3](https://intelligence.org/2014/01/20/2013-in-review-outreach/#footnote_2_10653 \"HPMoR is Yudkowsky’s private project, but MIRI sponsored this work in the sense that Yudkowsky remained on salary at MIRI while he took time off from “direct MIRI work” to write and release new chapters of HPMoR.\")\n2. Carl Shulman contributed research and career counseling services to the [Center for Effective Altruism](http://home.centreforeffectivealtruism.org/) (CEA).\n3. We produced several front-page posts for the [Less Wrong](http://lesswrong.com/) community, e.g. a [Decision Theory F.A.Q.](http://lesswrong.com/lw/gu1/decision_theory_faq/)[4](https://intelligence.org/2014/01/20/2013-in-review-outreach/#footnote_3_10653 \"MIRI’s substantive, outreach-oriented, front-page posts for the Less Wrong community in 2013 were: (1) Decision Theory F.A.Q.; (2) Fermi Estimates; (3) Start Under the Streetlight, then Push into the Shadows; (4) Four Focus Areas of Effective Altruism; (5) How to Measure Anything; (6) MetaMed: Evidence-Based Healthcare.\")\n\n\nThe positive effects of our posts to Less Wrong are hard to measure, but our other indirect outreach efforts have produced clear evidence of their value more quickly than our direct outreach efforts have.\n\n\nCarl’s work with CEA had measurable effects on the funding and attention going into [long-term philanthropy](http://www.effective-altruism.com/a-long-run-perspective-on-strategic-cause-selection-and-philanthropy/) (including the superintelligence control problem).\n\n\nFinally, some of the evidence we’ve gotten about the indirect outreach value of *HPMoR*:\n\n\n* When I observe someone  doing something useful to MIRI’s mission (e.g. 
donation or research) for the first time, I typically ask how they came to be involved, and I trace the story back to their first contact with someone at MIRI, with one of MIRI’s close friends, or to something that MIRI or MIRI’s close friends produced. *HPMoR* is now the single most common “first contact” I encounter. (The 2nd most common first contact is with [The Sequences](http://wiki.lesswrong.com/wiki/Sequences).)\n* The paperback version of *HPMoR* ends in the middle of chapter 17, meaning that readers coming to [HPMoR.com](http://hpmor.com/) after finishing the paperback should jump directly to chapter 17. According to Google Analytics, people visit chapter 17 significantly more often than any other chapter outside the first 10 chapters and newly released chapters. It’s hard to tell exactly, but my best guess is that 100-400 people per month are coming to hpmor.com as a result of first reading the paperback copy of the first 17 chapters.\n\n\nThese data fit with MIRI’s earlier experience of the effects of its direct and indirect outreach efforts. For example, as I said [earlier](http://lesswrong.com/lw/iwk/miri_strategy/9yuw): Eliezer Yudkowsky spent MIRI’s early years appealing directly to people about the importance of superintelligence control research. Some helpful people found his work, but the audience was being filtered for “interest in future technology” rather than “able to reason well,” and thus when Eliezer would make [basic arguments](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) for e.g. the orthogonality thesis or convergent instrumental values, the responses he would get were typically akin to the output of a free association task. 
So Eliezer wrote [*The Sequences*](http://wiki.lesswrong.com/wiki/Sequences) and *[HPMoR](http://hpmor.com/),* for which the primary audience filter is “interest in improving one’s reasoning.” This audience, in our experience, is more likely to do useful things when we present the case for effective altruism, for x-risk reduction, for FAI research, etc.\n\n\n \n\n\n#### Lessons for 2014\n\n\nToday I find myself even more confident than I was in [April 2013](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) that certain kinds of indirect outreach by MIRI are more likely to produce value for superintelligence control than MIRI’s direct outreach efforts are. For this reason, I suspect that in 2014 we will continue to scale back our *direct* outreach efforts.[5](https://intelligence.org/2014/01/20/2013-in-review-outreach/#footnote_4_10653 \"There will probably be at least two major exceptions to this trend. First, in early 2014 we will publish an ebook called Smarter Than Us, which was written by Stuart Armstrong all the way back in late 2012. We just didn’t have staff time to turn it into an ebook until very recently. Second, we will probably publish something about the upcoming film Transcendence, because that is a unique opportunity to direct many eyeballs to our writings. 
Also, we will likely assist in the promotion of Nick Bostrom’s Superintelligence book, but this really qualifies as “outreach to researchers” rather than “general outreach” because Bostrom’s book is a scholarly monograph.\") Meanwhile, we are likely to increase our *indirect* outreach efforts, in particular by giving Eliezer time to write more *HPMoR* chapters, by distributing more *HPMoR* paperbacks, and by publishing an [ebook](http://lesswrong.com/lw/jc7/karma_awards_for_proofreaders_of_the_less_wrong/) of *The Sequences*.\n\n\nWe will also likely provide some consulting and other support for organizations who are doing the kinds of outreach that end up benefiting MIRI (and other organizations), for example [CFAR](http://rationality.org/) (rationality) and [CEA](http://home.centreforeffectivealtruism.org/) (effective altruism).\n\n\n\n\n---\n\n1. We are currently gathering much more information about the effects of our direct and indirect outreach efforts, but these data will take several months to gather. It’s possible those data will overturn the basic conclusions in this post, but it’s more likely they will slightly adjust them.\n2. Basic statistics on direct outreach efforts: (1) Our new [intelligence.org](http://intelligence.org/) domain name rose from PageRank 0 to PageRank 5, one rank below [LessWrong.com](http://lesswrong.com/), a site that has been heavily linked and active since 2009. (2) Since it launched in March 2013, intelligence.org has averaged 28,613 unique visitors per month. This is roughly double the traffic to our previous domain, singularity.org, which averaged 14,843 unique visitors per month during its operation (July 2012 through February 2013). (3) Our other “mini-sites” got negligible traffic, as expected. 
During their active months in 2013, here’s how many unique visitors per month our mini-sites got: [IntelligenceExplosion.com](http://intelligenceexplosion.com/) (2,336/mo), [Friendly-AI.com](http://friendly-ai.com/) (50/mo), [TheUncertainFuture.com](http://theuncertainfuture.com/) (129/mo). (4) As of Jan. 2nd, [MIRI’s Facebook page](https://www.facebook.com/MachineIntelligenceResearchInstitute) had ~5000 Likes, had been seen by ~19,500 unique people, and had “engaged” ~1100 unique people (meaning that they liked, commented on, or clicked on MIRI’s Facebook wall posts). [MIRI’s Google+ page](https://plus.google.com/+IntelligenceOrg/) is more recent and was much less active: with ~230 +1s (analogous to Facebook Likes), and 17 re-shares. Note that our Facebook Likes count appears to be artificially high. From Oct. 9th to Nov. 9th, MIRI’s Facebook page received an unusual number of Likes. This began again on Dec. 22nd. We don’t know the cause. (5) Our newsletter subscription count grew only slightly, from 9,152 subscribers in January 2013 to 10,024 subscribers as of December 22nd. (6) Our April 2013 relaunch event was attended by ~65 people. The September 2013 book launch event was attended by ~75 people. In both cases, most attendees were close friends of MIRI. (7) Within one month of its publication, [Luke’s review](http://www.kurzweilai.net/book-review-our-final-invention-artificial-intelligence-and-the-end-of-the-human-era) of *Our Final Invention* on KurzweilAI.net saw ~8,500 unique pageviews and was responsible for at least 20 purchases of the book, which describes MIRI’s mission and work in some detail. Meanwhile, [Louie’s review](http://singularityhub.com/2013/12/14/will-advanced-ai-be-our-final-invention/) of *Our Final Invention* for Singularity Hub was responsible for at least 60 purchases of the book as of Jan. 2nd, 2014. (8) From April-Nov 2013, *[Facing the Intelligence Explosion](http://intelligenceexplosion.com/ebook/)* was purchased 1,379 times. 
Since September, [*The Hanson-Yudkowsky AI-Foom Debate*](http://intelligence.org/ai-foom-debate/) has been downloaded at least 1,240 times, but this counts only downloads from people who clicked a link to the book from intelligence.org. (9) How many people attended our general-audience talks? By memory, I’d guess 40 at Stanford, 30 at Oxford, 20 at Facebook, 40 at Quixey, 15 at Heroku, and 50 at the Effective Altruism Summit. These were smaller, more targeted talks than e.g. our [past talks](http://intelligence.org/singularitysummit/) at the Singularity Summits.\n3. *HPMoR* is Yudkowsky’s private project, but MIRI sponsored this work in the sense that Yudkowsky remained on salary at MIRI while he took time off from “direct MIRI work” to write and release new chapters of *HPMoR*.\n4. MIRI’s substantive, outreach-oriented, front-page posts for the Less Wrong community in 2013 were: (1) [Decision Theory F.A.Q.](http://lesswrong.com/lw/gu1/decision_theory_faq/); (2) [Fermi Estimates](http://lesswrong.com/lw/h5e/fermi_estimates/); (3) [Start Under the Streetlight, then Push into the Shadows](http://lesswrong.com/lw/hsd/start_under_the_streetlight_then_push_into_the/); (4) [Four Focus Areas of Effective Altruism](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/); (5) [How to Measure Anything](http://lesswrong.com/lw/i8n/how_to_measure_anything/); (6) [MetaMed: Evidence-Based Healthcare](http://lesswrong.com/lw/gvi/metamed_evidencebased_healthcare/).\n5. There will probably be at least two major exceptions to this trend. First, in early 2014 we will publish an ebook called *Smarter Than Us*, which was written by Stuart Armstrong all the way back in late 2012. We just didn’t have staff time to turn it into an ebook until very recently. Second, we will probably publish something about the upcoming film [*Transcendence*](http://www.imdb.com/title/tt2209764/), because that is a unique opportunity to direct many eyeballs to our writings. 
Also, we will likely assist in the promotion of Nick Bostrom’s [*Superintelligence* book](http://ukcatalogue.oup.com/product/9780199678112.do), but this really qualifies as “outreach to researchers” rather than “general outreach” because Bostrom’s book is a scholarly monograph.\n\nThe post [2013 in Review: Outreach](https://intelligence.org/2014/01/20/2013-in-review-outreach/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-20T19:11:22Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "c2e77000f4bf7eac250622c8e2118dfc", "title": "Want to help MIRI by investing in XRP?", "url": "https://intelligence.org/2014/01/18/investing-in-xrp/", "source": "miri", "source_type": "blog", "text": "![XRP](https://intelligence.org/wp-content/uploads/2014/01/XRP.png)\nRecently, [Mt.Gox](http://en.wikipedia.org/wiki/Mt.Gox) and [Ripple](http://en.wikipedia.org/wiki/Ripple_(payment_protocol)) creator Jed McCaleb gave MIRI a large donation in [XRP](https://ripple.com/currency/), which is the [#2 cryptocurrency](http://coinmarketcap.com/) in market cap behind Bitcoin. This gift, along with our [recent successful fundraiser](http://intelligence.org/2013/12/26/winter-2013-fundraiser-completed/), should enable us to hire the beginnings of a full-time [Friendly AI](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence) research team in 2014. It’s difficult to know *exactly* how to value the donation, but (e.g.) valuing it at the exchange rate at the time of donation would make it the largest single donation to MIRI in our history.\n\n\nThe size of Jed’s donation means that we currently have more XRP than it makes sense to hold in our diversified asset portfolio. To reduce our XRP holdings while growing the Ripple user base, **we’d like to sell some of our XRP** to members of our community. 
If you’re interested, please contact Malo Bourgon (malo@intelligence.org).\n\n\n**How does Ripple/XRP work?** Technically, Ripple is an online payments protocol and XRP is an associated digital currency, but XRP is often referred to as “Ripple,” which can be confusing. The best explanation of all this yet written is “[Bitcoin Vs. Ripple](http://blog.coinsetter.com/2013/04/29/virtual-currency-trading-wars-bitcoin-versus-ripple-xrp/)” from the Coinsetter blog, which I’ll excerpt below:\n\n\n\n> …think of Ripple as “Kayak for currency exchange.” Ripple will compare various pathways of exchanging one currency to another and find the lowest cost option. Transactions also happen rapidly (in under 5 seconds)…\n> \n> \n> XRP is a digital currency… with an embedded use, which is to pay for Ripple transactions, create Ripple accounts, and be a currency of last resort in the Ripple system… Whereas Bitcoin has no inherent use (its demand and use is solely based on people’s interest in Bitcoin), XRP has… a basic reason why someone may need to use XRP, which is to complete Ripple transactions.\n> \n> \n> …Should Bitcoin believers be worried that Ripple is on its way to crush their dream currency? Absolutely not. Interest in Bitcoin will continue to grow, and separately, interest in Ripple will continue to grow. 
In fact, the Ripple network should help make bitcoins substantially easier to buy and sell, as well as legitimize them.\n> \n> \n\n\n[Click here](https://ripple.com/client/#/register) to create a Ripple wallet and join the network; the process takes less than 60 seconds.\n\n\nMIRI continues to [accept donations](https://intelligence.org/donate/) in both XRP and Bitcoin, and we send tax receipts for non-anonymous donations in either currency.\n\n\nThe post [Want to help MIRI by investing in XRP?](https://intelligence.org/2014/01/18/investing-in-xrp/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-18T17:55:40Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "43decda16c83c050b3022c0031f4b23a", "title": "MIRI’s January 2014 Newsletter", "url": "https://intelligence.org/2014/01/17/miris-january-2014-newsletter/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nDear friends,Wow! MIRI’s donors finished our Winter 2013 fundraising drive [three weeks early](http://intelligence.org/2013/12/26/winter-2013-fundraiser-completed/). Our sincere thanks to everyone who contributed.\n**Research Updates**\n* MIRI’s December workshop resulted in [7 new technical reports and one new paper](http://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/). 
Photos from the workshop are [here](https://www.facebook.com/media/set/?set=a.630047813699273.1073741831.170446419659417&type=3).\n* New paper by Luke Muehlhauser and Nick Bostrom: “[Why We Need Friendly AI](http://intelligence.org/2013/12/18/new-paper-why-we-need-friendly-ai/).” io9’s George Dvorsky interviewed Luke about the paper [here](http://io9.com/can-we-build-an-artificial-superintelligence-that-wont-1501869007).\n* Eliezer Yudkowsky and Robby Bensinger have begun to describe another open problem in Friendly AI: [naturalized induction](http://lesswrong.com/lw/jd9/building_phenomenological_bridges/).\n* Two new interviews: Josef Urban on [machine learning and automated reasoning](http://intelligence.org/2013/12/21/josef-urban-on-machine-learning-and-automated-reasoning/), and DARPA’s Kathleen Fisher on [high-assurance systems](http://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/).\n\n\n\n**Other Updates**\n* We published the first part of our *2013 in Review* series, on [operations](http://intelligence.org/2013/12/20/2013-in-review-operations/).\n* We published a transcript and “summary of disagreements” for [a conversation about MIRI strategy](http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/) held with Jacob Steinhardt (Stanford), Holden Karnofsky (GiveWell), and Dario Amodei (Stanford).\n* We published our first donor story: “[Noticing Inferential Distance](http://intelligence.org/2014/01/05/donor-story-1-giving-after-critique/).”\n\n\n**Other Links**\n* FQXI is holding an [essay contest](http://www.fqxi.org/community/essay) on the subject of “How Should Humanity Steer the Future?” Submission deadline is April 18th. 
First prize is $10,000, and up to 20 entries may win $1,000 or more.\n* MIT’s [Max Tegmark on AI x-risk](http://www.huffingtonpost.com/max-tegmark/humanity-in-jeopardy_b_4586992.html) at the HuffPost blog.\n* “[A Long-run Perspective on Strategic Cause Selection and Philanthropy](http://www.effective-altruism.com/a-long-run-perspective-on-strategic-cause-selection-and-philanthropy/),” by Nick Beckstead and Carl Shulman.\n* In case you missed it last year, Bostrom’s “[Existential Risk Prevention as Global Priority](http://www.existential-risk.org/concept.pdf)” remains the best extant summary of existential risks in general.\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\n\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n |\n| |\n\n\nThe post [MIRI’s January 2014 Newsletter](https://intelligence.org/2014/01/17/miris-january-2014-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-17T17:06:06Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "3ef7eb8cc87b6e3cda558d7c4bae54d4", "title": "MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei", "url": "https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/", "source": "miri", "source_type": "blog", "text": "On October 27th, 2013, MIRI met with three additional members of the effective altruism community to discuss MIRI’s organizational strategy. 
The participants were:\n\n\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (research fellow at MIRI)\n* [Luke Muehlhauser](http://lukeprog.com/) (executive director at MIRI)\n* Holden Karnofsky (co-CEO at [GiveWell](http://www.givewell.org/))\n* [Jacob Steinhardt](http://cs.stanford.edu/~jsteinhardt/) (grad student in computer science at Stanford)\n* [Dario Amodei](http://med.stanford.edu/profiles/Dario_Amodei/) (post-doc in biophysics at Stanford)\n\n\nWe recorded and transcribed much of the conversation, and then edited and paraphrased the transcript for clarity, conciseness, and to protect the privacy of some content. The resulting edited transcript is available in full **[here](http://intelligence.org/wp-content/uploads/2014/01/10-27-2013-conversation-about-MIRI-strategy.pdf)** (62 pages).\n\n\nOur conversation located some disagreements between the participants; these disagreements are summarized below. This summary is not meant to present arguments with all their force, but rather to serve as a guide to the reader for locating more information about these disagreements. For each point, a page number has been provided for the approximate start of that topic of discussion in the transcript, along with a phrase that can be searched for in the text. 
In all cases, the participants would likely have quite a bit more to say on the topic if engaged in a discussion on that specific point.\n\n\n\n \n\n\n**Summary of disagreements**\n\n\nPage 7, starting at “the difficulty is with context changes”:\n\n\n* Jacob: Statistical approaches can be very robust and need not rely on strong assumptions, and logical approaches are unlikely to scale up to human-level AI.\n* Eliezer: FAI will have to rely on lawful probabilistic reasoning combined with a transparent utility function, rather than our observing that previously executed behaviors seemed ‘nice’ and trying to apply statistical guarantees directly to that series of surface observations.\n\n\nPage 10, starting at “a nice concrete example”\n\n\n* Eliezer: Consider an AI that optimizes for the number of smiling faces rather than for human happiness, and thus tiles the universe with smiling faces. This example illustrates a class of failure modes that are worrying.\n* Jacob & Dario: This class of failure modes seems implausible to us.\n\n\nPage 14, starting at “I think that as people want”:\n\n\n* Jacob: There isn’t a big difference between learning utility functions from a parameterized family vs. 
arbitrary utility functions.\n* Eliezer: Unless ‘parameterized’ is Turing complete it would be extremely hard to write down a set of parameters such that human ‘right thing to do’ or CEV or even human selfish desires were within the hypothesis space.\n\n\nPage 16, starting at “Sure, but some concepts are”:\n\n\n* Jacob, Holden, & Dario: “Is Terri Schiavo a person” is a natural category.\n* Eliezer: “Is Terri Schiavo a person” is not a natural category.\n\n\nPage 21, starting at “I would go between the two”:\n\n\n* Holden: Many of the most challenging problems relevant to FAI, if in fact they turn out to be relevant, will be best solved at a later stage of technological development, when we have more advanced “tool-style” AI (possibly including AGI) in order to assist us with addressing these problems.\n* Eliezer: Development may be faster and harder-to-control than we would like; by the time our tools are much better we might not have the time or ability to make progress before UFAI is an issue; and it’s not clear that we’ll be able to develop AIs that are extremely helpful for these problems while also being safe.\n\n\nPage 24, starting at “I think the difference in your mental models”:\n\n\n* Jacob & Dario: An “oracle-like” question-answering system is relatively plausible.\n* Eliezer: An “oracle-like” question-answering system is really hard.\n\n\nPage 24, starting at “I don’t know how to build”:\n\n\n* Jacob: Pre-human-level AIs will not have a huge impact on the development of subsequent AIs.\n* Eliezer: Building a very powerful AGI involves the AI carrying out goal-directed (consequentialist) internal optimization on itself.\n\n\nPage 27, starting at “The Oracle AI makes a”:\n\n\n* Jacob & Dario: It should not be too hard to examine the internal state of an oracle AI.\n* Eliezer: While AI progress can be either pragmatically or theoretically driven, internal state of the program is often opaque to humans at first and rendered partially transparent only
later.\n\n\nPage 38, starting at “And do you believe that within having”:\n\n\n* Eliezer: I’ve observed that novices who try to develop FAI concepts don’t seem to be self-critical at all or ask themselves what could go wrong with their bright ideas.\n* Jacob & Holden: This is irrelevant to the question of whether academics are well-equipped to work on FAI, both because this is not the case in more well-developed fields of research, and because attacking one’s own ideas is not necessarily an integral part of the research process compared to other important skills.\n\n\nPage 40, starting at “That might be true, but something”:\n\n\n* Holden: The major FAI-related characteristic that academics lack is cause neutrality. If we can get academics to work on FAI despite this, then we will have many good FAI researchers.\n* Eliezer: Many different things are going wrong in the individuals and in academia which add up to a near-total absence of attempted — let alone successful — FAI research.\n\n\nPage 53, starting at “I think the best path is to try”:\n\n\n* Holden & Dario: It’s relatively easy to get people to rally (with useful action) behind safety issues.\n* Eliezer: No, it is hard.\n\n\nPage 56, starting at “My response would be that’s the wrong thing”:\n\n\n* Jacob & Dario: How should we present problems to academics? An English-language description is sufficient; academics are trained to formalize problems once they understand them.\n* Eliezer: I treasure such miracles when somebody shows up who can perform them, but I don’t intend to rely on it and certainly don’t think it’s the default case for academia. Hence I think in terms of MIRI needing to crispify problems to the point of being 80% or 50% solved before they can really be farmed out anywhere.\n\n\nThis summary was produced by the following process: Jacob attempted a summary, and Eliezer felt that his viewpoint was poorly expressed on several points and wrote back with his proposed versions. 
Rather than try to find a summary both sides would be happy with, Jacob stuck with his original statements and included Eliezer’s responses mostly as-is, and Eliezer later edited them for clarity and conciseness. A Google Doc of the summary was then produced by Luke and shared with all participants, with Luke bringing up several points for clarification with each of the other participants. A couple points in the summary were also removed because it was difficult to find consensus about their phrasing. The summary was published once all participants were happy with the Google Doc.\n\n\nThe post [MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei](https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-14T07:22:08Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ee464d3eb789b6b0ebe71251c0a6723c", "title": "Kathleen Fisher on High-Assurance Systems", "url": "https://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/", "source": "miri", "source_type": "blog", "text": "![Kathleen Fisher portrait](https://intelligence.org/wp-content/uploads/2014/01/Fisher_w150.jpg)[Dr. Kathleen Fisher](http://www.darpa.mil/Our_Work/I2O/Personnel/Dr__Kathleen_Fisher.aspx) joined DARPA as a program manager in 2011. Her research and development interests relate to programming languages and high assurance systems. Dr. Fisher joined DARPA from Tufts University. Previously, she worked as a Principal Member of the technical staff at AT&T Labs. Dr. 
Fisher received her Doctor of Philosophy in computer science and her Bachelor of Science in math and computational science from Stanford University.\n\n\n\n**Luke Muehlhauser**: Kathleen, you’re the program manager at DARPA for the [HACMS](http://www.darpa.mil/Our_Work/I2O/Programs/High-Assurance_Cyber_Military_Systems_(HACMS).aspx) program, which aims to construct cyber-physical systems which satisfy “appropriate safety and security properties” using a “clean-slate, formal methods-based approach.” My first question is similar to one I [asked Greg Morrisett](http://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/) about the [SAFE](http://www.crash-safe.org/) program, which aims for a “clean slate design for resilient and secure systems”: In the case of HACMS, why was it so important to take the “clean slate” approach, and design the system from the ground up for safety and security (along with functional correctness)?\n\n\n\n\n---\n\n\n**Kathleen Fisher**: Researchers have been trying to prove programs correct for decades, with very little success until recently (successful examples include NICTA’s [seL4 microkernel](http://ssrg.nicta.com.au/projects/seL4/) and INRIA’s [CompCert verified C compiler](http://compcert.inria.fr/compcert-C.html)). One of the lessons learned in this process is that verifying existing code is much harder than verifying code written with verification in mind.\n\n\nThere are a couple of reasons for this. First, many of the invariants that have to hold for a program to be correct often exist only in the head of the programmer. When trying to verify a program after the fact, these invariants have been lost and can take a very long time for the verifier to rediscover. Second, sometimes the code can be written in multiple ways, some of which are much easier to verify than others.
If programmers know they have to verify the code, they’ll choose the way that is easy to verify.\n\n\n\n\n\n---\n\n\n**Luke**: [Harper (2000)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Harper-Challenges-for-designing-intelligent-systems-for-safety-critical-applications.pdf) quotes an [old message](http://www.cs.york.ac.uk/hise/safety-critical-archive/2000/0501.html) by Ken Frith from the [Safety Critical Mailing List](http://www.cs.york.ac.uk/hise/safety-critical-archive/2000/0501.html):\n\n\n\n> The thought of having to apply formal proofs to intelligent systems leaves me cold. How do you provide satisfactory assurance for something that has the ability to change itself during a continuous learning process?\n> \n> \n\n\nObviously we can’t write a formal specification of the entire universe and formally verify a system against it, so what kinds of (formal) things *can* we do to ensure desirable behavior from increasingly capable autonomous systems?\n\n\n\n\n---\n\n\n**Kathleen**: Verifying adaptive or intelligent systems is a difficult challenge. It is the subject of active research, particularly in the avionics industry. One approach is to use a Simplex Architecture, in which the intelligent system is monitored to ensure it stays within a safety envelope. If the intelligent system looks likely to leave the envelope, a monitor diverts control to a basic system. In this approach, the monitor, the switching code, and the basic system have to be verified directly, but not the intelligent system. Verifying these smaller pieces is within the reach of current technology.\n\n\n\n\n---\n\n\n**Luke**: You said earlier that “verifying existing code is much harder than verifying code written with verification in mind.” This suggests that when a very high level of system safety is needed, e.g.
for autopilot software, a great deal of extra work is required, because one must essentially start from scratch to design a system “from the ground up” with safety and verification in mind.\n\n\nMoreover, I have the impression that the work required to design a verifiably safe intelligent system tends to be far more “serial” in nature than the work required to design an intelligent system without such a high degree of verifiable safety. When designing an intelligent system that doesn’t need to be particularly “safe”, e.g. a system for intelligently choosing which ads to show website visitors, one can cobble together bits of code from here and there and test the result on some data sets and then run the system “in the real world.” But a verifiably safe system needs to be built in a very precise way, with many algorithm design choices depending on previous choices, and with many research insights dependent on the shape of previous insights, etc. In this way, it seems that the project of designing a not-verifiably-safe intelligent system parallelizes more easily than the project of designing a verifiably safe intelligent system.\n\n\nIs this also your impression? How would you modify my account?\n\n\n\n\n---\n\n\n**Kathleen**: Your assessment is accurate. Basically, if your code has a wide range of acceptable behaviors and if there aren’t particularly bad consequences if it misbehaves, it is a lot easier to write code.\n\n\nI’d be careful with the term parallelize. It is usually used to refer to programs where parts run at the same time as other parts, say on multiple computers or chips. It can also refer to programmers themselves working in parallel. The way you are using it, it reads like you mean the code is running in parallel when I think you mean to say the code is written in parallel. 
Both are easier for non-high-assurance code, but the argument you are making lends itself to the “written in parallel” interpretation.\n\n\n\n\n---\n\n\n**Luke**: What qualitative or quantitative trends do you observe or anticipate with regard to verifiably safe cyber-physical systems? For example: Do you think research on the *capability* and *intelligence* of cyber-physical systems is outpacing research on their *safety* and *verifiability*? How much funding and quality-adjusted researcher hours are going into one (capability and intelligence) relative to the other (safety and verifiability), with respect to cyber-physical systems specifically? How much of the low-hanging fruit seems to have been plucked with respect to each, such that additional units of progress require more funding and researcher time than previous units?\n\n\n\n\n---\n\n\n**Kathleen**: In general, research into capabilities outpaces the corresponding research into how to make those capabilities secure. The question of security for a given capability isn’t interesting until that capability has been shown to be possible, so initially researchers and inventors are naturally more focused on the new capability rather than on its associated security. Consequently, security often has to catch up once a new capability has been invented and shown to be useful.\n\n\nIn addition, by definition, new capabilities provide interesting and useful new functionality, which often increases productivity, quality of life, or profits. Security adds nothing beyond ensuring something works the way it is supposed to, so it is a cost center rather than a profit center, which tends to suppress investment.\n\n\nI don’t know how one would go about collecting the data to show levels of investment in research for capabilities vs. security. I’m not sure the dollar totals would be that revealing.
Research in security could still be lagging research in capabilities even if more money were being spent on security, if research in security were more costly than research into capabilities.\n\n\n\n\n---\n\n\n**Luke**: It is one challenge to ensure the safe behavior of a system with very limited behaviors and very limited decision-making capacity, and quite another to ensure the safe behavior of a system with much greater autonomous capability. What do you think are the prospects for ensuring the safe behavior of cyber-physical systems as they are endowed with sufficient intelligence and autonomy to e.g. replace factory workers, replace taxi drivers, replace battlefield troops, etc. over the next several decades?\n\n\n\n\n---\n\n\n**Kathleen**: As you say, it is much easier to ensure the safe and secure behavior of simple systems with well-defined behavior. That said, it is clear that we will be fielding more and more complex cyber-physical systems for all kinds of activities and we will need to be able to ensure that those systems are sufficiently safe and secure. I think a combination of techniques will enable us to reach this goal, although substantial additional research is needed. These techniques include: 1) model-based design, 2) program synthesis, 3) security- and safety-aware composition, and 4) simplex-based architectures. 
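The “simplex-based architectures” mentioned above can be sketched as a small runtime monitor. The sketch below is illustrative only: all names, dynamics, and bounds are invented for this example (none of it comes from HACMS). An unverified controller proposes the next state; a small verified monitor checks the proposal against a safety envelope and diverts control to a simple baseline controller whenever the proposal would leave that envelope.

```python
# Illustrative Simplex-style monitor (hypothetical names and dynamics,
# not taken from HACMS). A complex, unverified controller proposes the
# next state; a verified monitor checks the proposal against a safety
# envelope and falls back to a simple baseline controller when needed.

SAFETY_BOUND = 10.0  # the safety envelope: |state| must stay below 10


def advanced_controller(state: float) -> float:
    # Unverified "intelligent" controller; deliberately unstable here
    # so the fallback path actually triggers in this demo.
    return 1.5 * state


def baseline_controller(state: float) -> float:
    # Simple, easily verified fallback: nudge the state toward zero.
    return -0.1 * state


def inside_envelope(state: float) -> bool:
    return abs(state) < SAFETY_BOUND


def step(state: float, dt: float = 1.0) -> float:
    """One control step: accept the advanced controller's proposal only
    if it stays inside the envelope; otherwise use the baseline."""
    proposed = state + dt * advanced_controller(state)
    if inside_envelope(proposed):
        return proposed
    return state + dt * baseline_controller(state)


state = 4.0
for _ in range(5):
    state = step(state)
print(inside_envelope(state))  # True: the monitor kept the state bounded
```

Note the division of verification labor this buys: only `inside_envelope`, the switching logic in `step`, and `baseline_controller` would need to be verified directly; the advanced controller can remain unverified, which is the point of the architecture as described in the interview.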
Basically, we’ll need to start with simple components and simple architectures, and then leverage them to build more and more complex systems.\n\n\n\n\n---\n\n\n**Luke**: Thanks!\n\n\nThe post [Kathleen Fisher on High-Assurance Systems](https://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-10T20:17:53Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "edcee51812d19f7beba0dcdd3fd5a9dc", "title": "Donor Story #1: Noticing Inferential Distance", "url": "https://intelligence.org/2014/01/05/donor-story-1-giving-after-critique/", "source": "miri", "source_type": "blog", "text": "2013 was by far MIRI’s most successful fundraising year (more details later), so now we’re talking to our donors to figure out: “Okay, what *exactly* are we doing so *right*?”\n\n\nBelow is one donor’s story, anonymized and published with permission:\n\n\n\n> My decision to donate was heavily dependent upon MIRI’s relationship with [LessWrong](http://lesswrong.com/). I did look into MIRI itself, perused the blog and the papers, and did some fact-checking. But this was largely sanity-checking after I had been convinced to donate by my interactions on LessWrong.\n> \n> \n> Initially, I wasn’t so much convinced to donate as I was convinced that FAI is a more pressing problem than my prior concerns. Once I believed this, it wasn’t a question of whether I was going to donate to FAI research but a question of where to focus my efforts.\n> \n> \n> I chose to donate to MIRI after it became apparent to me that\n> \n> \n> 1. Few people are aware of the problems posed by uFAI.\n> 2. Among those who are, many dismiss the problem after failing to understand it.\n> \n> \n> This perhaps sounds arrogant. 
Allow me to explain some:\n> \n> \n> Perhaps paradoxically, one of the biggest factors that made me trust MIRI was [Holden Karnofsky’s critique](http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/) of the organization. Holden’s tool AI suggestion and a number of his other objections seemed, to my eye, transparently foolish. Afterwards, I read the responses of [Luke](http://lesswrong.com/lw/di4/reply_to_holden_on_the_singularity_institute/) and [Eliezer](http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/), who rejected these points for reasons similar to why I had rejected them. This went a long way towards convincing me that whatever organization Luke and Eliezer were working at was particularly important.\n> \n> \n> This scenario played out many times over, in big discussions and in small comments. I discovered that there were many people objecting to uFAI as a problem, and that the vast majority of them fundamentally misunderstood the concerns.\n> \n> \n> It’s not that I think Holden Karnofsky or other dissenters are stupid or misinformed. I respect Karnofsky in particular: he’s very intelligent and he’s doing incredible work. I certainly don’t mean to imply I’m smarter than him or any others who question MIRI’s mission.\n> \n> \n> Rather, I have a history of arguing for uncouth ideas. I’ve often found myself on the far side of an [inferential gap](http://wiki.lesswrong.com/wiki/Inferential_distance). I’ve long been convinced that smart people can fundamentally misunderstand the most important problems. Frequently I have tried to hone a strange idea, only to have discussion partners fall into the inferential gap four steps before we got to the real questions.\n> \n> \n> Many of the discussions about FAI — Karnofsky’s critique foremost among them — convinced me that FAI is the same sort of problem.
The sort where people get caught up debating whether the problem is actually a problem, the sort where it’s very difficult to find people who are actually searching for solutions instead of debating whether solutions are necessary.\n> \n> \n> This was my primary reason for trusting MIRI: these discussions left me convinced that I should expect a dearth of experts who are appropriately concerned.\n> \n> \n> I didn’t expect there to exist organizations outside of MIRI (and [FHI](http://www.fhi.ox.ac.uk/), and a few others in the same circles) that I could trust to address the problems as I see them. The number of smart people stumbling on inferential gaps made me cautious of other organizations sharing similar missions: even if their goals sounded good it would have been difficult to convince me that their leadership could avoid the common pitfalls.\n> \n> \n> By contrast, Luke, Eliezer, and [Nick Bostrom](http://nickbostrom.com/) demonstrated their abilities time and again, across many blog posts, discussions, and papers. They slowly built up the trust that I now place in MIRI and FHI.\n> \n> \n> I understand that this still sounds arrogant, and this was not lost on me. I was introduced to these issues via Eliezer’s writings, and thus I had to expect a bias towards Eliezer’s viewpoint. The fact that I was dismissing many arguments from very smart people with minimal consideration raised some red flags. The fact that the field was rife with misconception implied that I should anticipate misconceptions on my own part.\n> \n> \n> I was aware of all this, and I spent a long time trying to account for these points. The decision was not made lightly: it took four solid months of reflection, research, and thought before I was confident enough to donate to MIRI. 
In that timeframe I found more respect for the opposing views, but my conclusions did not change.\n> \n> \n> I came out the other end convinced that uFAI is a pressing problem, that FAI research is important, and that the field is full of landmines. MIRI remains one of very few organizations that has convinced me they have a chance of navigating this minefield successfully.\n> \n> \n> Of course, these conclusions were not quite so crisp in my head at the time. There was always more thinking to do, more reflection to be had, more facts to check before moving piles of money around. And life kept on happening in the meantime: work and leisure took precedence over deciding where my money should go. It was the [summer matching challenge](http://lesswrong.com/lw/i3a/miris_2013_summer_matching_challenge/) that finally forced my hand, shook me from my reverie, and provided a convenient excuse to actually hit [the button](https://intelligence.org/donate/).\n> \n> \n\n\nThe post [Donor Story #1: Noticing Inferential Distance](https://intelligence.org/2014/01/05/donor-story-1-giving-after-critique/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-06T01:09:09Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "c2b563c0a76595e9c0519da15be0d7e2", "title": "7 New Technical Reports, and a New Paper", "url": "https://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/", "source": "miri", "source_type": "blog", "text": "Recently, MIRI released 7 brief technical reports that explain several pieces of theoretical progress made at our [December 2013 research workshop](http://intelligence.org/2013/07/24/miris-december-2013-workshop/).
Several of these results build on work done at our [July](http://intelligence.org/2013/06/07/miris-july-2013-workshop/) and [November](http://intelligence.org/2013/08/30/miris-november-2013-workshop-in-oxford/) 2013 workshops, and also on between-workshop research by Paul Christiano, Benja Fallenstein, and others.\n\n\nTo understand these technical reports in context, and to discuss them, please see Benja Fallenstein’s post: **[Results from MIRI’s December workshop](http://lesswrong.com/lw/jed/results_from_miris_december_workshop/)**. See also [two](http://johncarlosbaez.wordpress.com/2013/12/26/logic-probability-and-reflection/) [posts](https://plus.google.com/117663015413546257905/posts/5VkpsWge7K9) about the workshop by workshop participant John Baez.\n\n\nThe 7 technical reports are:\n\n\n* Hahn, “[Scientific Induction in Probabilistic Metamathematics](https://intelligence.org/files/ScientificInduction.pdf)”\n* Yudkowsky, “[The Procrastination Paradox](https://intelligence.org/files/ProcrastinationParadox.pdf)”\n* Fallenstein, “[An infinitely descending sequence of sound theories each proving the next consistent](https://intelligence.org/files/ConsistencyWaterfall.pdf)”\n* Soares, “[Fallenstein’s Monster](https://intelligence.org/files/FallensteinsMonster.pdf)”\n* Stiennon, “[Recursively-defined logical theories are well-defined](https://intelligence.org/files/RecursivelyDefinedTheories.pdf)”\n* Fallenstein, “[The 5-and-10 problem and the tiling agents formalism](https://intelligence.org/files/TilingAgents510.pdf)”\n* Fallenstein, “[Decreasing mathematical strength in one formalization of parametric polymorphism](https://intelligence.org/files/DecreasingStrength.pdf)”\n\n\nAlso, [Nik Weaver](http://www.math.wustl.edu/~nweaver/) attended the first day of the workshop, and gave a tutorial on his response paper to Yudkowsky & Herreshoff’s [tiling agents paper](https://intelligence.org/files/TilingAgentsDraft.pdf), titled “[Paradoxes of rational agency and formal 
systems that verify their own soundness](http://arxiv-web.arxiv.org/abs/1312.3626).” Benja Fallenstein’s comments on Weaver’s idea of “naturalistic trust” are here: [Naturalistic trust among AIs](http://lesswrong.com/lw/jcq/naturalistic_trust_among_ais_the_parable_of_the/).\n\n\nThe post [7 New Technical Reports, and a New Paper](https://intelligence.org/2013/12/31/7-new-technical-reports-and-a-new-paper/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2014-01-01T00:23:47Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5bdfbf53b3f119664caf8d360ca3eee5", "title": "Winter 2013 Fundraiser Completed!", "url": "https://intelligence.org/2013/12/26/winter-2013-fundraiser-completed/", "source": "miri", "source_type": "blog", "text": "Wow! MIRI’s donors just finished our [2013 winter matching challenge](http://intelligence.org/2013/12/02/2013-winter-matching-challenge/) *three weeks ahead of the deadline!*\n\n\nAs it happens, the person who finished this drive was [Patrick LaVictoire](http://www.linkedin.com/pub/patrick-lavictoire/65/47a/591), who says his donation was the product of his [altruism tip jar](http://lesswrong.com/lw/3kl/optimizing_fuzzies_and_utilons_the_altruism_chip/) habit.\n\n\nI did not expect to finish the fundraiser so quickly. 
Basically, this happened because the 3:1 matching for “new large donors” worked *even better than we hoped*: this fundraiser brought forth *ten* “new large donors,” an uncommonly high number.\n\n\nFor those who were wondering whether they should contribute to MIRI or to its child organization CFAR, the answer is now clear: please contribute to [CFAR’s ongoing fundraiser](http://rationality.org/fundraiser2013/).\n\n\nMany, many thanks to everyone who contributed!\n\n\nThe post [Winter 2013 Fundraiser Completed!](https://intelligence.org/2013/12/26/winter-2013-fundraiser-completed/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-26T20:07:22Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "6610b43a0acb9b1b8ebc0288b6a5892a", "title": "Josef Urban on Machine Learning and Automated Reasoning", "url": "https://intelligence.org/2013/12/21/josef-urban-on-machine-learning-and-automated-reasoning/", "source": "miri", "source_type": "blog", "text": "![Josef Urban portrait](https://intelligence.org/wp-content/uploads/2013/12/Urban_w150.jpg) [Josef Urban](http://cs.ru.nl/~urban/) is a postdoctoral researcher at the Institute for Computing and Information Sciences of the Radboud University in Nijmegen, the Netherlands. His main interest is development of combined inductive and deductive AI methods over large formal (fully semantically specified) knowledge bases, such as large corpora of formally stated mathematical definitions, theorems and proofs. This involves deductive AI fields such as Automated Theorem Proving (ATP) and inductive AI fields such as Machine Learning. In 2003, he made the largest corpus of formal mathematics – the Mizar library – available to ATP and AI methods, and since then has been developing the first approaches, systems and benchmarks for AI and automated reasoning over such corpora. 
In 2013, his Machine Learner for Automated Reasoning (MaLARea) solved, by a large margin, the most problems in the mathematically oriented Large Theory division of the [CADE Automated Systems Competition (CASC)](http://www.cs.miami.edu/~tptp/CASC/24/WWWFiles/DivisionSummary1.html). The system is based on several positive-feedback loops between deducing new proofs and learning how to guide the reasoning process based on successful proofs. Such AI methods are also being deployed as systems that assist interactive theorem proving and formal verification in systems like [Mizar](http://en.wikipedia.org/wiki/Mizar_system), [Isabelle](http://en.wikipedia.org/wiki/Isabelle_(proof_assistant)) and [HOL Light](http://www.cl.cam.ac.uk/~jrh13/hol-light/).\n\n\nHe received his PhD in Computer Science from the Charles University in Prague in 2004, and his MSc in Mathematics at the same university in 1998. In 2003, he co-founded the [Prague Automated Reasoning Group](http://arg.felk.cvut.cz/), and from 2006 to 2008 he was a Marie-Curie fellow at the University of Miami.\n\n\n\n**Luke Muehlhauser**: In [Urban & Vyskocil (2013)](http://commonsenseatheism.com/wp-content/uploads/2013/11/Urban-Vyskocil-Theorem-Proving-in-Large-Formal-Mathematics-as-an-Emerging-AI-Field.pdf) and other articles, you write about the state of the art in both the formalization of mathematics and automated theorem proving. Before I ask about machine learning in this context, could you quickly sum up for us what we can and can’t do at this stage, when it comes to automated theorem proving?\n\n\n\n\n---\n\n\n**Josef Urban**: The best-known result is the proof of the Robbins conjecture found automatically by [Bill McCune’s system EQP in 1996](http://www.nytimes.com/library/cyber/week/1210math.html). Automated theorem proving (ATP) can be used today for solving some open equational algebraic problems, for example in [quasigroup and loop theory](http://www.karlin.mff.cuni.cz/~stanovsk/math/qptp.pdf). 
Proofs of such open problems may be thousands of inferences long, and quite different from the proofs that mathematicians produce. Solving such problems with ATP is often not a “push-button” matter. It may require specific problem reformulation (usually involving only a few axioms) suitable for ATP, exploration of suitable parameters for the ATP search, and sometimes also guiding the final proof attempt by lemmas (“hints”) that were useful for solving simpler related problems.\n\n\nATP is today also used for proving small lemmas in large formal libraries developed with interactive theorem provers (ITPs) such as Mizar, Isabelle and HOL Light. This is called “large-theory” ATP. In the past ten years, we have found that even with the large mathematical libraries (thousands of definitions and theorems) available in these ITPs, it is often possible to automatically select the relevant previous knowledge from such libraries and find proofs of small lemmas without any manual pre-processing. Such proofs are usually much shorter than the long proofs of the equational algebraic problems, but even that is already a great help to the people who use ITPs to formally prove more advanced theorems.\n\n\nAn illustrative example is the [Flyspeck](http://code.google.com/p/flyspeck/) (“Formal Proof of Kepler”) project led by Tom Hales, who proved the Kepler conjecture in 1998. The proof is so involved that the only way to ensure its correctness is formal verification with an ITP. Tom Hales has recently written a [300-page book](http://www.cambridge.org/nl/academic/subjects/mathematics/geometry-and-topology/dense-sphere-packings-blueprint-formal-proofs?format=PB) about the proof, with the remarkable property that most of the concepts, lemmas and proofs have a computer-understandable “Flyspeck” counterpart that is stated and verified in HOL Light. Hales estimates that this has taken about 20 man-years. 
Recently, we [have](http://arxiv.org/abs/1211.7012) [shown](http://www.easychair.org/publications/?page=242387180) that the proofs of about 47% of the 14000 small Flyspeck lemmas could be found automatically by large-theory ATP methods. This means that the only required mental effort is to formally state these lemmas in HOL Light. Again, the caveat is that the lemmas have to be quite easy to prove from other lemmas that are already in the library. On the other hand, this means that once we have such a large corpus of formally-stated everyday mathematics, it is possible to use the corpus together with the large-theory ATP methods to solve many simple formally-stated mathematical problems.\n\n\nAnother interesting recent development is ATP methods that efficiently combine “reasoning” and “computing”. This has been a long-time challenge for ATP systems, particularly when used for software and hardware verification. Recent SMT (Satisfiability Modulo Theories) systems such as [Z3](http://z3.codeplex.com/) integrate a number of decision procedures, typically for linear and nonlinear arithmetic, bitvectors, etc. I am not an expert in this area, but I get the impression that the growing strength of such systems is making the full formal verification of complex hardware and software systems more and more realistic.\n\n\n\n\n---\n\n\n**Luke**: As you see it, what are the most important factors limiting the success and scope of ATP algorithms?\n\n\n\n\n---\n\n\n**Josef**: (1) Their low reasoning power, particularly in large and advanced theories, and (2) lack of computer understanding of current human-level (math) knowledge.\n\n\nThe main cause of (1) is that so far the general (“universal”) reasoning algorithms often use too much brute force, without much smart guidance, specific methods for specific subproblems, and methods for self-improvement. Even on mathematical problems that are considered quite easy, such brute-force algorithms then often explode. 
(2) is a major obstacle because even if the strength of ATP algorithms could already be useful for simple query-answering applications in general mathematics, the systems still do not understand problems posed using the human-level mathematical language and concepts. (1) and (2) are connected: human math prose can be arbitrarily high-level, requiring the discovery of long reasoning chains just for parsing and filling the justification gaps.\n\n\nOne path to improving (1) leads through better (and preferably automated) analysis of how mathematicians think. In other words, through mining and understanding what they know, how they use and produce it, and how they improve their knowledge and reasoning methods. If that is correct, then it should be useful to have large corpora of mathematical knowledge in a (possibly “annotated”) format that can be at least correctly parsed, and then further analysed and perhaps used for building various self-improving ATP algorithms. To some extent, this has been started in the last decade, even though manual (“theory-driven”) engineering of ATP algorithms still largely prevails. Unfortunately, large “ATP-parsable” (or annotated) corpora are still quite rare and expensive to produce today, exactly because of (2), i.e., the gaps between human-level and computer-level understanding and reasoning. Needless to say, dually annotated human-computer corpora of mathematics should also be useful (together with decent ATP) for a “bootstrapped” solving of (2), i.e., for learning more and more computer understanding of human-level mathematical texts. Thanks to the fact that mathematics is the only field with deep formal semantics, study of such corpora might also eventually lead us to better semantic understanding of natural language and strong AI for other sciences.\n\n\nIn software and hardware verification, (1) is probably the main problem, although, for example, semantic annotation of large code libraries is also a major effort. 
Solving (1) is still arbitrarily hard, but it seems that “code” is generally not as diverse as “mathematics”, and it might be possible to get quite far with a smaller number of manually programmed techniques.\n\n\n\n\n---\n\n\n**Luke**: How much of human mathematical knowledge has been formalized such that automated theorem provers can use it? At what pace is that base of formalized mathematics growing?\n\n\n\n\n---\n\n\n**Josef**: One metric that we have is the [list of 100 theorems created in 1999](http://pirate.shu.edu/~kahlnath/Top100.html) and [tracked for formal systems by Freek Wiedijk](http://www.cs.ru.nl/~freek/100/). Currently, 88% of these 100 theorems have been proved in at least one proof assistant, 87% in HOL Light alone. A lot of the undergraduate curriculum is probably already covered to a good extent: the basics of real/complex calculus, measure theory, algebra and linear algebra, general topology, set theory, category theory, combinatorics and graph theory, logic, etc. But I think the PhD-level coverage is still quite far off.\n\n\nThere are some advanced mathematical formalizations like the recent [formal proof of the Feit-Thompson theorem in Coq](http://ssr2.msr-inria.inria.fr/~jenkins/current/progress.html), the already mentioned [Flyspeck project](http://code.google.com/p/flyspeck/) done in HOL Light, [formalization of more than half of the Compendium of Continuous Lattices in Mizar](http://megrez.mizar.org/ccl/state.html#report), and many smaller interesting projects. 
Some overview pages of the theorems and theories formalized in the systems are the [MML Query](http://mmlquery.mizar.org/mmlquery/fillin.php?filledfilename=mml-facts.mqt&argument=number+102) (Mizar), the [Archive of Formal Proofs](http://afp.sourceforge.net/topics.shtml) (Isabelle), the [Coq contribs](https://gforge.inria.fr/scm/viewvc.php/trunk/?root=coq-contribs), and [the 100-theorems project](http://code.google.com/p/hol-light/source/browse/#svn%2Ftrunk%2F100) in HOL Light.\n\n\nThe rate of growth of formal math is not very high, but also the number of authors is quite low. My very rough estimate is about 10k-20k top-level lemmas per year with some 100-300 more important theorems among them. But there is quite a lot of repetition and duplication between various systems, their various libraries, and sometimes even inside one library. People are trying various approaches to formalization, and sometimes prefer to design their own formalization from scratch rather than trying to re-use the work of somebody else. This is quite similar to code libraries. Making the libraries as re-usable as possible is a nontrivial effort.\n\n\nStrictly speaking, only Mizar, Isabelle and HOL Light are currently accessible to large-theory ATP systems that really try to use the whole libraries at once to prove new conjectures. Coq’s logic is further from the logics used by the strongest ATP systems, and there is so far no sufficiently complete large-theory link between them. 
But there is already a lot of smaller-scale programmable proof automation inside Coq (and other interactive provers).\n\n\n\n\n---\n\n\n**Luke**: The prospect of having large databases of formalized mathematical knowledge naturally leads to the question: “Might we be able to use machine learning algorithms for improved automated theorem proving?” What is the current state of machine learning in the context of automated theorem proving?\n\n\n\n\n---\n\n\n**Josef**: Most of the machine learning applications are today in the context of large-theory ATP. The task that has the most developed learning methods is selection of the most relevant theorems and definitions from a large library when proving a new conjecture. This is important for two reasons: (i) ATPs can today often get quite far when given the right axioms that are sufficient, not very redundant, and reasonably close to the conjecture, and (ii) adding irrelevant axioms to the ATP search space often quickly diminishes the chances of finding the proof within reasonable time. This selection problem became known as “premise selection” and [a number of learning and non-learning algorithms have been tried for it already and combined in various ways](http://dx.doi.org/10.1007/978-3-642-31365-3_30). The oldest, tried since 2003, is naive Bayes, but more recently we have also tried kernel methods, various versions of distance-weighted k-nearest neighbor, random forests, and some basic ensemble methods.\n\n\nA major issue is the choice of a good characterization of mathematical objects such as formulas and proofs for the learning algorithms. In a large diverse library, just using the set of symbols as formula features is already useful. 
This has been extended by methods ranging from purely syntactic n-gram analogs (terms and subterms and their parts, normalized in various ways), to more semantic but still quite syntactic features such as addition of type information and type hierarchies, to strongly semantic features such as the validity of formulas in a large pool of diverse models. Feature preprocessing methods such as TF-IDF and latent semantic analysis help a lot, and quite likely more can be done still.\n\n\nAnother major issue is the proofs used for training. Some proofs are easy for humans, but hard for today’s ATPs, so it is often better to find an ATP proof if possible and learn from it, instead of trying to learn from formally correct but ATP-infeasible proofs. This gives rise to exploratory AI systems that interleave deductive proof-finding and learning from proofs. Positive feedback loops appear: the more ATP proofs (and also counter-examples) we find, the better the next learning iteration, and the more ATP proofs we usually find in the next iteration of learning-advised proof finding. In some sense, one cannot easily “just do inductive AI (learning)” or “just do deductive AI (ATP)”: the strongest approach to large-theory ATP is to really interleave theorem proving with learning from the proofs. This should not be too surprising, because humans do not do science just by pure induction or just by pure deduction, either. But to me it seems really useful to have today at least one AI domain that allows such combined techniques and pragmatic experiments with them.\n\n\nSomewhat similar ideas apply when we stop treating ATPs as black boxes and start to look inside them. 
Instead of evolving an inductive/deductive system that is good at selecting relevant premises for conjectures ([MaLARea](http://dx.doi.org/10.1007/978-3-540-71070-7_37) is an example), we may try to evolve a system that has a number of specialized problem-solving techniques (ATP strategies) for common classes of problems, and some intuition about which techniques to use for which problem classes. An example is [the Blind Strategymaker](http://arxiv.org/abs/1301.2683), which on a large library of different problems co-evolves a population of ATP strategies (treated as sets of search-directing parameters) with a population of solvable training problems, with the ultimate goal of having a reasonably small set of strategies that together solve as many of the problems as possible. The initial expert strategies (“predators”) are mutated and quickly evaluated on their easy prey (the easy training problems they specialize in), and if the mutations show promise (faster solutions) on such training subset, they undergo a more expensive (more time allowed) evaluation on a much wider set of problems, possibly solving some previously unsolvable ones and making them into further training targets. Again, just random mutating on the so-far-unsolved problems is quite inefficient, so one really needs the intuition about which training data the mutations should be grown on. This again yields a fast “inductive” training phase, followed (if successful) by a slower “hard thinking” phase, in which the newly trained strategies attempt to solve some more problems, making them into further training data. The intuition and the deductive capability co-evolve again; doing just one of them does not work so well.\n\n\nAnd going again from black-box to white-box, one can also ask what are really the ATP strategies and deductive techniques, and how to steer/program them more than just by finding good combinations of predefined search parameters for problem classes. 
There are several learning methods that go beyond such parameter settings; the most prominent and successful is probably [the “hints” method](http://dx.doi.org/10.1007/BF00252178) by Bob Veroff (also done, a bit differently, [by Stephan Schulz](http://dx.doi.org/10.1007/3-540-45422-5_23)), which directs the proof search towards the lemmas (hints) that were found useful in proofs of related problems. I believe that learning such internal guidance is still quite an unexplored territory that we should be working on. One might not only direct the search towards previously useful lemmas, but also look at methods that suggest completely new lemmas and concepts, based on analogies with other problems, deeper knowledge about the problem state, etc. Depending on one’s idea about what is and what is not machine learning, one could also relate here to Douglas Lenat’s seminal work on concept and lemma-finding systems such as the [Automated Mathematician](http://dx.doi.org/10.1016/0004-3702(77)90024-8) and [Eurisko](http://dx.doi.org/10.1016/S0004-3702(83)80005-8).\n\n\n\n\n---\n\n\n**Luke**: What trends in this area do you expect in the next 20 years? Do you expect that in 20 years a much greater share of research mathematicians will use ATPs, at least in a highly interactive way (*à la* Lenat with Eurisko)? Do you expect learning (rather than non-learning) ATPs to predominate in the future? Do you expect things to advance so far that the jobs of *some* research mathematicians will be effectively replaced, just as some paralegal jobs have ([supposedly](http://www.nytimes.com/2011/03/05/science/05legal.html)) been replaced by better software for searching through legal databases?\n\n\n\n\n---\n\n\n**Josef**: First, AI predictions usually happen, but often take longer than expected. 
20 years is not really much in these low-profile fields: Flyspeck itself has taken about 20 person-years, and Mizar took 40 years from [a visionary talk by Andrzej Trybulec](http://webdocs.cs.ualberta.ca/~piotr/Mizar/History/04MMA/M30.pdf) in 1973 to the current 1200 articles in the Mizar library. There are not many large projects in these areas and often they are pushed only by the extreme determination of single researchers.\n\n\nOne AI breakthrough that I believe is quite reachable within 20 years is deep *automated* semantic understanding (formalization, “machine reading”) of most LaTeX-written mathematical textbooks. This has long been blocked by three factors: (i) lack of annotated formal/informal corpora to train such semantic parsing on, (ii) lack of a sufficiently large repository of background mathematical knowledge needed for “obvious-knowledge gap-filling”, and (iii) lack of a sufficiently strong large-theory ATP that could fill the reasoning gaps using the large repository of background knowledge. This has changed a lot recently, and unless we are really lazy, I believe it will mostly happen. It might not be immediately perfect and on the level of manual formalization, but it will gradually improve both by using more data and by better algorithms, a bit similarly to what has been happening with Google Translate between languages like English, Spanish and German. Once it starts happening automatically, it might snowball faster and faster, again thanks to the positive feedback loop between more “reasoning data” becoming accessible and the strength of the data-driven large-theory ATP and AI methods trained on such data.\n\n\nI don’t think that mathematicians have any reason to “fear AI” (at least at the moment), quite the contrary: more and more are wondering about the current low level of computer understanding and assistance of math. 
Mathematicians like von Neumann, Turing and Goedel are the reason that we have computers and AI as a discipline in the first place. One of their motivations was Hilbert’s program and the Leibniz-style dreams of machines assisting mathematics and scientific reasoning. It is even a bit of a shame for computer science and AI that in the hundred years since Turing’s birth they have done so little for their “parent” science and that the progress is so slow. At this moment, anybody who tried writing formal mathematics prays for better automation. Without it, formalization will remain a slow, costly and marginal effort. So it is more the opposite: in order for deep semantic treatment of math and sciences to take off, automation has to improve. But once such deep semantic treatment of sciences starts to develop, it means also easier access for mathematicians. Law is just one example; I really like [John McCarthy’s futuristic note](http://www-formal.stanford.edu/jmc/future/objectivity.html) on formal proof becoming a strong criterion for all kinds of policy making. Complex mathematical proofs that require formal checking are just a tip of the iceberg: today we are building and relying on more and more complex machinery, hardware, software, legal, political and economical systems, and they are all buggy and easy to hack and break. Fixing this all and allowing us to correctly implement much more complex designs is a great future opportunity and market for mathematically thinking people equipped with automation tools that allow them to make fast progress without sacrificing correctness.\n\n\nI do expect that ATPs will in general have better pattern-detection capabilities in 20 years than they have now, and that they will be able to better accumulate, process and efficiently use previous knowledge and better combine brute-force search with all kinds of guidance on various levels. 
Specifically in more advanced mathematics, high-level heuristic human-inspired proving/thinking methods might start to be more developed. One way to try to get them automatically is again through basic computer understanding of LaTeX-written mathematical texts, and learning exactly what high-level concepts like “by analogy” and “using diagonalization” mean semantically in various contexts. This is also related to the ability to reformulate problems and map them to a setting (for current ATPs, the best is purely equational) where the theorem proving becomes easier. Another piece of related work that needs to be done is “explaining back” the long ATP proofs using an understandable mathematical presentation.\n\n\nThe word “learning” itself has many meanings. A recent breakthrough in SAT solving (propositional ATP) is called “conflict-driven clause learning” (CDCL), but it is just a correct (resolution) inference of a new lemma implied by the axioms. The generalization aspect of “learning” is quite limited there. I don’t really know how good the ATPs will be in twenty years; improvements may come from various sides, not just by using “learning”. The main ATP calculi are so far so simple and brute-force that it is also possible that somebody will come up with a completely new approach which will considerably improve the strength of ATP systems. For example, instantiation-based ATPs like [iProver](http://www.cs.man.ac.uk/~korovink/iprover/) relying on the CDCL-extended SAT solvers have been improved quite a lot in recent years.\n\n\nAnother trend I expect is more “manual” work on all kinds of targeted decision procedures (dealing with numbers, lists, bitvectors, etc.), particularly when useful for software and hardware verification, and their integration with ATP calculi. I also hope that we will start to be able to detect such targeted algorithms automatically from the successful reasoning patterns done by ATPs. 
In some sense, we need better automated compilation of “search” into “computing” (we could call it “learning of decision procedures”). There is no rigid distinction between these two: in ATP-inspired programming languages like Prolog, the two things largely coincide. The better we know how to direct the proof search in certain (e.g. numerical) problem classes, the more the proof search turns into efficient computation.\n\n\nFinally, I might venture one concrete and refutable prediction about the strength of large-theory ATP in 20 years: learning-assisted large-theory ATP can today prove 47% and 40% (respectively) of the [top-level](http://www.easychair.org/publications/?page=2069121719) [lemmas](http://arxiv.org/abs/1310.2805) in the Flyspeck and Mizar libraries. In 20 years, using the same hardware and resources (i.e., not relying on Moore’s Law), we will be able to automatically prove 80% of both (measured on the same versions of the libraries).\n\n\n\n\n---\n\n\n**Luke:** Thanks, Josef!\n\n\nThe post [Josef Urban on Machine Learning and Automated Reasoning](https://intelligence.org/2013/12/21/josef-urban-on-machine-learning-and-automated-reasoning/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-21T21:49:43Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "1810c6c39a1adc3ffd6f23498d6c6ef2", "title": "2013 in Review: Operations", "url": "https://intelligence.org/2013/12/20/2013-in-review-operations/", "source": "miri", "source_type": "blog", "text": "It’s been 8 months since our [big strategic shift](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) to a focus on Friendly AI research. What has worked well? What hasn’t? What have we learned? 
In this and several upcoming posts, I’ll provide a **qualitative and personal self-review of MIRI in 2013**.\n\n\nThis review will be mostly *qualitative* because we haven’t yet found quantitative metrics that do a decent job of tracking what we actually care about (see below). The review will be *personal* because MIRI staff have a variety of perspectives, so I will usually be speaking for myself rather than for the organization as a whole.\n\n\nI’ll split my review into **six posts**: operations (this post), [outreach](http://intelligence.org/2014/01/20/2013-in-review-outreach/), [strategic and expository research](http://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/), [Friendly AI research](http://intelligence.org/2014/02/18/2013-in-friendly-ai-research/), [fundraising](http://intelligence.org/2014/04/02/2013-in-review-fundraising/), and our [mid-2014 strategic plan](http://intelligence.org/2014/06/11/mid-2014-strategic-plan/).\n\n\n \n\n\n#### Operations in 2013\n\n\n1. 2013 was a big year for capacity-building at MIRI — not so much by hiring more people but by doing more with the people we have.\n2. That said, this progress cost lots of executive time, and I should have been more aggressive about finding ways to accomplish this with fewer executive hours.\n3. As a result of (2), we did not make as much progress on metrics and systematic experimentation as I would have liked to make in 2013.\n\n\n\n#### Capacity-building at MIRI in 2013\n\n\nHere I’ll explain how our *operations capacity* — our ability to efficiently execute a wide range of programs and projects — has grown in 2013.[1](https://intelligence.org/2013/12/20/2013-in-review-operations/#footnote_0_10643 \"Some operations work in 2013 wasn’t related to improving our operations capacity per se. For example, we also wrote and passed some important policies that are (e.g.) required to get certain insurance policies. 
We also set up full offsite backups, made sure our passwords are protected with 1Password, etc. For communication security, we signed up for a trial of Silent Circle (one month before the Snowden leaks), but unfortunately Silent Circle’s phone solution was buggy and their email solution never materialized.\") Our *research capacity* will be discussed in a later post.\n\n\nIn 2013, our operations capacity grew not by hiring more operations staff, but by making our processes more efficient and thereby doing more with a roughly-stable number of staff.[2](https://intelligence.org/2013/12/20/2013-in-review-operations/#footnote_1_10643 \"Alex Vermeer became a full-time team member during 2013, and Malo Bourgon went full-time near the end of 2012, but Malo was already a near-full-time contractor before that, and Alex’s rising hours in the latter half of 2013 quantitatively substituted for Ioven’s hours from the first third of 2013, which were lost when Ioven departed in mid-March.\")\n\n\nBelow is a partial list of operational improvements we made in 2013.[3](https://intelligence.org/2013/12/20/2013-in-review-operations/#footnote_2_10643 \"In other cases, it’s not clear that the implemented change improved our efficiency. In particular, we set up MIRIvolunteers.org (using YouTopia) to manage and incentivize volunteers, but it’s not clear whether this has been worth it. We have 450+ volunteers signed up on the site, and the clickthrough rate for volunteer task announcements is surprisingly high (19%), but there were only about 25hrs/mo of volunteer labor throughout 2013, which meant that many volunteer projects saw slow progress, or none at all. Given the time cost of setting up the YouTopia site and maintaining engagement with volunteers, it’s at least not obvious that MIRIvolunteers.org increases our efficiency. 
We remain curious to see how much volunteer engagement we get with the Sequences ebook project, and also whether engagement improves as the YouTopia team implements new features.\") I suspect other nonprofit executives can benefit from some of them.\n\n\n* We moved to a larger and more centrally-located office in downtown Berkeley, 2.5 blocks from the UC Berkeley campus and 0.5 blocks from a BART subway entrance. As expected, this has made it easier for people to work together on projects, easier to meet with visitors, easier to host research workshops, etc.\n* We streamlined our Board meetings and reduced their frequency by providing the Board with a monthly “snapshot” update in PDF, and by giving all but the highest-level decisions to the executives, or to committees composed of particular board members and executives. (We still need to do more of this, though.)\n* We won an [exemption](http://www.visapro.com/Immigration-Articles/?a=1090&z=48) to the USA’s annual cap on H-1B visas. This makes it easier for us to bring in foreign talent to work with us in Berkeley, thus greatly expanding our pool of potential hires. Using this cap exemption, we won H-1B visas for Malo Bourgon and Alex Vermeer. Alex has already joined us in Berkeley and realized the expected productivity and efficiency gains. Malo will join us in Berkeley next year.\n* Many operational improvements came from *automating  processes with software*. For example, we tested several time-tracking solutions and settled on [Harvest](http://www.getharvest.com/), which makes it easy to track staff and contractor hours, and also to estimate the time costs of each major project. 
We also manage some projects, especially those involving contractors, via [Basecamp](https://basecamp.com/).\n* With the help of a few custom scripts, we now have a pretty accurate donations database (via [DonorTools](http://www.donortools.com/)) that is integrated with our favorite CRM ([HighRise](https://highrisehq.com/)) and some web pages (e.g. the live progress bar for our [ongoing fundraiser](http://intelligence.org/2013/12/02/2013-winter-matching-challenge/)). This makes it easier for us to track donations, and to stay in touch with donors, potential workshop participants, and other contacts.\n* After struggling to efficiently use popular, broken payroll systems like Paychex and Wells Fargo Payroll, we made the jump to [ZenPayroll](https://zenpayroll.com/) and discovered it to be exactly what it claims to be: a painless, modern payroll system with responsive customer support.\n* Lots of other little tweaks: we consolidated our web domains and web hosting, we record expenses by emailing smartphone photos of receipts to an email address, we use [EchoSign](https://www.echosign.adobe.com/en/home.html) for signing contracts online instead of faxing or mailing them, we switched to [MailChimp](http://mailchimp.com/) for newsletter management and analytics, we switched to [Disqus](http://disqus.com/) for blog comment management, etc.\n\n\nCompared to mid-2012, it is noticeably easier to just *get things done* at MIRI. 
In mid-2012, I would often find that doing X required that Y and Z be in place, and to set up Y and Z we needed four different documents, and we didn’t have any of those documents yet because MIRI hadn’t previously invested much in operations, and it was hard to outsource the production of those documents because we didn’t have a central database for that kind of organizational information, so I’d have to pull it all from my own head and make phone calls to other people, and I couldn’t fill in line 6 of document 3 because the payroll system once again wouldn’t let me load that report, and I couldn’t estimate the project costs requested by document 2 because there had been no costs tracking, and… well, these kinds of tasks are much easier now, and much easier to outsource, which means that MIRI’s executives (Louie and I) can spend more time on organizational strategy, fundraising, and building the research program.\n\n\n \n\n\n#### These operational improvements consumed too much executive time\n\n\nFor the past few months, these operational improvements have largely been tested and implemented by Malo Bourgon and (to a lesser extent) Alex Vermeer. But for most of 2013, these improvements required much personal effort from MIRI’s executives, which carries an especially high opportunity cost. In retrospect, I think I could have conserved executive time better:\n\n\n* I could have, by mid-2012, recognized our ongoing need for technically-savvy operations support, and recruited more aggressively for that role. That said, I’m not sure more recruiting effort here would have paid off. 
(See [here](http://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/).)\n* I could have further reduced the executive time needed to make these improvements, by giving greater responsibility to Malo Bourgon earlier, and by helping Malo outsource his other responsibilities sooner.[4](https://intelligence.org/2013/12/20/2013-in-review-operations/#footnote_3_10643 \"I couldn’t have done this so easily with Alex Vermeer, given the projects he was running at the time.\")\n* I could have done more to free up executive time in other ways, by delegating tasks and projects more aggressively. For example, I *personally* ran onsite logistics for some of this year’s [research workshops](http://intelligence.org/workshops/#past-workshops), to “make sure everything went well,” but I should have delegated that role more fully to others, even if it wouldn’t have been executed quite as well. That said, we did outsource quite a lot of work (2500+ contractor hours) in 2013.\n\n\n \n\n\n#### As a result, we didn’t make much progress on metrics and experimentation\n\n\nTo improve MIRI’s effectiveness via [tight feedback loops](http://intelligence.org/2013/04/04/the-lean-nonprofit/), we need quantitative measures of the kinds of progress we care about, and we need to run frequent experiments to test hypotheses about which strategies will work best for operational efficiency, outreach success, research progress, and fundraising.\n\n\nMIRI is a quickly-evolving organization with hard-to-measure goals, and thus it requires significant high-level investment from MIRI’s executives to figure out [how to measure](http://lesswrong.com/lw/i8n/how_to_measure_anything/) variables *we care about* (instead of variables that are *easy to measure*), via experimentation and other means. But because so much executive time in 2013 was consumed by operational improvements, we made little progress on metrics and experimentation until very recently. 
Right now we’re experimenting with the “OKRs and KPIs” framework used at Intel, Google, Quixey, and other companies, but we have a lot more work to do.\n\n\nI think another reason for our lack of progress on metrics and experimentation, besides the lack of executive time, was that I failed to set up deadlines for concrete progress on developing metrics and experiments. In part this was because “make progress figuring out how to measure progress” felt too abstract for useful deadlines, and in part it was because MIRI’s object-level work usually felt more urgent to me than the “meta-level” work on metrics and experimentation. But these were poor reasons to avoid putting deadlines on an important project.\n\n\n \n\n\n#### What I’m doing to improve\n\n\nHere’s what I’m doing to improve our progress on metrics and experimentation:\n\n\n1. I’m more aggressively delegating and outsourcing tasks and projects, and helping Louie, Malo, and Alex to do the same.\n2. I’m prioritizing the development of metrics and experimentation over some object-level projects that could otherwise consume staff time.\n3. I’m setting up regular deadlines for concrete increments of progress on the development of metrics and experiments.\n\n\nI do not, however, intend to hire additional full-time staff to achieve our operations goals, unless we significantly *expand* our operations in 2014 in a way that requires more operations staff. We agree with GiveWell that growing capacity by hiring staff is [difficult and risky](http://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/) — in this case, unnecessarily so.\n\n\nIn the next part of my 2013 review, I’ll review our outreach activities in 2013.\n\n\n\n\n---\n\n1. Some operations work in 2013 wasn’t related to improving our operations capacity *per se*. For example, we also wrote and passed some important [policies](http://intelligence.org/transparency/) that are (e.g.) required to get certain insurance policies. 
We also set up full offsite backups, made sure our passwords are protected with [1Password](https://agilebits.com/onepassword), etc. For communication security, we signed up for a trial of Silent Circle (one month before the [Snowden leaks](http://en.wikipedia.org/wiki/Global_surveillance_disclosure)), but unfortunately Silent Circle’s [phone solution](https://silentcircle.com/web/silent-phone/) was buggy and their email solution [never materialized](http://silentcircle.wordpress.com/2013/08/09/to-our-customers/).\n2. Alex Vermeer became a full-time team member during 2013, and Malo Bourgon went full-time near the end of 2012, but Malo was already a near-full-time contractor before that, and Alex’s rising hours in the latter half of 2013 quantitatively substituted for Ioven’s hours from the first third of 2013, which were lost when Ioven departed in mid-March.\n3. In other cases, it’s not clear that the implemented change improved our efficiency. In particular, we set up [MIRIvolunteers.org](http://mirivolunteers.org/) (using [YouTopia](http://www.youtopia.com/)) to manage and incentivize volunteers, but it’s not clear whether this has been worth it. We have 450+ volunteers signed up on the site, and the clickthrough rate for volunteer task announcements is surprisingly high (19%), but there were only about 25hrs/mo of volunteer labor throughout 2013, which meant that many volunteer projects saw slow progress, or none at all. Given the time cost of setting up the YouTopia site and maintaining engagement with volunteers, it’s at least not *obvious* that MIRIvolunteers.org increases our efficiency. We remain curious to see how much volunteer engagement we get with the [Sequences ebook project](http://lesswrong.com/lw/jc7/karma_awards_for_proofreaders_of_the_less_wrong/), and also whether engagement improves as the YouTopia team implements new features.\n4. 
I couldn’t have done this so easily with Alex Vermeer, given the projects he was running at the time.\n\nThe post [2013 in Review: Operations](https://intelligence.org/2013/12/20/2013-in-review-operations/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-20T19:14:51Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "505f7b917710143789d4b21f9901cb60", "title": "New Paper: “Why We Need Friendly AI”", "url": "https://intelligence.org/2013/12/18/new-paper-why-we-need-friendly-ai/", "source": "miri", "source_type": "blog", "text": "[![Think journal page](https://intelligence.org/wp-content/uploads/2013/12/WhyWeNeedFriendlyAI-cover.gif)](https://intelligence.org/files/WhyWeNeedFriendlyAI.pdf)A new paper by Luke Muehlhauser and Nick Bostrom, “[Why We Need Friendly AI](https://intelligence.org/files/WhyWeNeedFriendlyAI.pdf),” has been published in the latest issue of the Cambridge journal *[Think: Philosophy for Everyone](http://journals.cambridge.org/action/displayJournal?jid=THI)*.\n\n\nAbstract:\n\n\n\n> Humans will not always be the most intelligent agents on Earth, the ones steering the future. What will happen to us when we no longer play that role, and how can we prepare for this transition?\n> \n> \n\n\nThis paper is copyrighted by Cambridge University Press, and is reproduced on MIRI’s website with permission. 
The paper appears in [Volume 13, Issue 36](http://journals.cambridge.org/action/displayIssue?jid=THI&volumeId=13&seriesId=0&issueId=36).\n\n\nThe post [New Paper: “Why We Need Friendly AI”](https://intelligence.org/2013/12/18/new-paper-why-we-need-friendly-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-18T19:44:10Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "4e01b06342d3b12662e04f0b521de2bd", "title": "MIRI’s December 2013 Newsletter", "url": "https://intelligence.org/2013/12/16/miris-december-2013-newsletter/", "source": "miri", "source_type": "blog", "text": "[Machine Intelligence Research Institute](http://intelligence.org)\n\nDear friends,\n\nWe’re still experimenting with our new, ultra-brief newsletter style. Please tell us what you think of it, by replying to this email. Thanks!\nMIRI’s [winter 2013 matching fundraiser](http://intelligence.org/2013/12/02/2013-winter-matching-challenge/) is on! All donations made by January 15th will be matched by Peter Thiel, with **3x matching** for “new large donors”: if you’ve given less than $5,000 total in the past, and now give or pledge $5,000 or more, your impact will be *quadrupled*! 
Reply to this email if you’d like to ask whether you qualify for 3x matching.\nHere is a recent snapshot of our fundraising progress so far:\n\n**Other News Updates**\n* Louie Helm’s review of [*Our Final Invention*](http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence-ebook/dp/B00CQYAWRY/) has been [published at Singularity Hub](http://singularityhub.com/2013/12/14/will-advanced-ai-be-our-final-invention/).\n* [Volunteers are needed](http://lesswrong.com/lw/jc7/karma_awards_for_proofreaders_of_the_less_wrong/) to proofread a large ebook of Eliezer’s [LessWrong.com writings](http://wiki.lesswrong.com/wiki/Sequences).\n\n\n**Research Updates**\n* Our [6th research workshop](http://intelligence.org/2013/07/24/miris-december-2013-workshop/) is ongoing. See details on all our workshops [here](http://intelligence.org/workshops/).\n* New paper: “[Racing to the Precipice](http://intelligence.org/2013/11/27/new-paper-racing-to-the-precipice/).” A game-theoretic model of AI development.\n* New paper: “[Predicting AGI: What can we say when we know so little?](http://intelligence.org/2013/12/01/new-paper-predicting-agi-what-can-we-say-when-we-know-so-little/)” Argues for a Pareto-distributed prior over the time until we can see that AGI is within reach.\n* New interview: [Scott Aaronson](http://intelligence.org/2013/12/13/aaronson/) (MIT) on philosophical progress.\n\n\nAs always, please don’t hesitate to let us know if you have any questions or comments.\nBest,\nLuke Muehlhauser\nExecutive Director\n\n\nThe post [MIRI’s December 2013 Newsletter](https://intelligence.org/2013/12/16/miris-december-2013-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-16T14:42:46Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5d049b630b4e6cd7d2d9d2a2e459a4bc", "title": "Scott Aaronson on Philosophical Progress", "url": "https://intelligence.org/2013/12/13/aaronson/", 
"source": "miri", "source_type": "blog", "text": "![Scott Aaronson portrait](https://intelligence.org/wp-content/uploads/2013/12/Aaronson_w150.jpg)[Scott Aaronson](http://www.csail.mit.edu/user/1324) is an Associate Professor of Electrical Engineering and Computer Science at MIT. Before that, he did a PhD in computer science at UC Berkeley, as well as postdocs at the Institute for Advanced Study, Princeton, and the University of Waterloo. His research focuses on the capabilities and limits of quantum computers, and more generally on the connections between computational complexity and physics. Aaronson is known for [his blog](http://www.scottaaronson.com/blog/) as well as for founding the [Complexity Zoo](http://complexityzoo.uwaterloo.ca) (an online encyclopedia of complexity classes); he’s also written about quantum computing for Scientific American and the New York Times. His first book, *[Quantum Computing Since Democritus](http://www.amazon.com/Quantum-Computing-since-Democritus-Aaronson-ebook/dp/B00B4V6IZK/ref=nosim?tag=lukeprogcom-20)*, was published this year by Cambridge University Press. He’s received the Alan T. Waterman Award, the PECASE Award, and MIT’s Junior Bose Award for Excellence in Teaching.\n\n\n\n**Luke Muehlhauser**: Though you’re best known for your work in theoretical computer science, you’ve also produced some pretty interesting philosophical work, e.g. in *[Quantum Computing Since Democritus](http://www.amazon.com/Quantum-Computing-since-Democritus-Aaronson-ebook/dp/B00B4V6IZK/ref=nosim?tag=lukeprogcom-20)*, “[Why Philosophers Should Care About Computational Complexity](http://www.scottaaronson.com/papers/philos.pdf),” and “[The Ghost in the Quantum Turing Machine](http://www.scottaaronson.com/papers/giqtm3.pdf).” You also taught a fall 2011 MIT class on [Philosophy and Theoretical Computer Science](http://stellar.mit.edu/S/course/6/fa11/6.893/).\n\n\nWhy are you so interested in philosophy? 
And what is the social value of philosophy, from your perspective?\n\n\n\n\n---\n\n\n**Scott Aaronson**: I’ve always been reflexively drawn to the biggest, most general questions that it seemed possible to ask. You know, like are we living in a computer simulation? if not, could we upload our consciousnesses into one? are there discrete “pixels” of spacetime? why does it seem impossible to change the past? could there be different laws of physics where 2+2 equaled 5? are there objective facts about morality? what does it mean to be rational? is there an explanation for why I’m alive right now, rather than some other time? What *are* explanations, anyway? In fact, what really perplexes me is when I meet a smart, inquisitive person—let’s say a mathematician or scientist—who claims NOT to be obsessed with these huge issues! I suspect many MIRI readers might feel drawn to such questions the same way I am, in which case there’s no need to belabor the point.\n\n\nFrom my perspective, then, the best way to frame the question is not: “why be interested in philosophy?” Rather it’s: “why be interested in anything else?”\n\n\nBut I think the latter question has an excellent answer. A crucial thing humans learned, starting around Galileo’s time, is that even if you’re interested in the biggest questions, usually the only way to make progress on them is to pick off smaller subquestions: ideally, subquestions that you can attack using math, empirical observation, or both. For again and again, you find that the subquestions aren’t nearly as small as they originally looked! Much like with zooming in to the Mandelbrot set, each subquestion has its own twists and tendrils that could occupy you for a lifetime, and each one gives you a new perspective on the big questions. And best of all, you can actually *answer* a few of the subquestions, and be the first person to do so: you can permanently move the needle of human knowledge, even if only by a minuscule amount. 
As I once put it, progress in math and science — think of natural selection, Godel’s and Turing’s theorems, relativity and quantum mechanics — has repeatedly altered the terms of philosophical discussion, as philosophical discussion *itself* has rarely altered them! (Of course, this is completely leaving aside math and science’s “fringe benefit” of enabling our technological civilization, which is not chickenfeed either.)\n\n\n\nOn this view, philosophy is simply too big and too important to be confined to philosophy departments! Of course, the word “philosophy” used to mean the entire range of fundamental inquiry, from epistemology and metaphysics to physics and biology (which were then called “natural philosophy”), rather than just close textual analysis, or writing papers with names like “A Kripkean Reading of Wittgenstein’s Reading of Frege’s Reading of Kant.” And it seems clear to me that there’s enormous scope today for “philosophy” in the former sense — and in particular, for people who love working on the subquestions, on pushing the frontiers of neuroscience or computer science or physics or whatever else, but who also like to return every once in a while to the “deep” philosophical mysteries that motivated them as children or teenagers. Admittedly, there have been many great scientists who didn’t care at all about philosophy, or who were explicitly anti-philosophy. But there were also scientists like Einstein, Schrodinger, Godel, Turing, or Bell, who not only read lots of philosophy but (I would say) used it as a sort of springboard into science — in their cases, a wildly successful one. My guess would be that science ultimately benefits from both the “pro-philosophical” and the “anti-philosophical” temperaments, and even from the friction between them.\n\n\nAs for the “social value” of philosophy, I suppose there are a few things to say. First, the world needs good philosophers, if for no other reason than to refute bad philosophers! 
(This is similar to why the world needs lawyers, politicians, and soldiers.) Second, the Enlightenment seems like a pretty big philosophical success story. Philosophers like Locke and Spinoza directly influenced statesmen like Thomas Jefferson, in ways you don’t have to squint to see. Admittedly, philosophers’ positive influence on humankind’s moral progress is probably less today than in the 1700s (to put it mildly). And also, most of the philosophical questions that have obsessed me personally have been pretty thin in their moral implications. But that brings me to the third point: namely, to whatever extent you see social value in *popularizing basic science* — that is, in explaining the latest advances in cosmology, quantum information, or whatever else to laypeople — to that extent I think you also need to see social value in philosophy. For the popularizer doesn’t have the luxury of assuming the importance of the particular subquestion on which progress has been made. Instead, he or she constantly needs to say what the little tendrils currently being explored do (or just as importantly, don’t) imply about the whole fractal — and when you’re zooming out like that, it’s hard to avoid talking about philosophy.\n\n\n\n\n---\n\n\n**Luke**: You write that “usually the only way to make progress on [the big questions] is to pick off smaller subquestions: ideally, subquestions that you can attack using math, empirical observation, or both.” This is an idea you wrote about at greater length in [one of your papers](http://arxiv.org/pdf/1306.0159v2.pdf) — specifically, in [this passage](http://lesswrong.com/lw/hok/link_scott_aaronson_on_free_will/9546):\n\n\n\n> whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what 
people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.\n> \n> \n> Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.\n> \n> \n> …A good replacement question Q′ should satisfy two properties: (a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q, [and] (b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.\n> \n> \n\n\nWhat are some of your favorite examples of illuminating Q-primes that were solved within your own field, theoretical computer science?\n\n\n\n\n---\n\n\n**Scott**: It’s hard to know where to begin with this question! In fact, my 59-page essay [Why Philosophers Should Care About Computational Complexity](http://www.scottaaronson.com/papers/philos.pdf) was largely devoted to cataloging the various “Q-primes” on which I think theoretical computer science has made progress. However, let me mention four of my favorites, referring readers to the essay for details:\n\n\n(1) One of the biggest, oldest questions in the philosophy of science could be paraphrased as: “why is Occam’s Razor justified? 
when we find simple descriptions of past events, why do we have any grounds whatsoever to expect those descriptions to predict future events?” This, I think, is the core of Hume’s “problem of induction.” Now, I think theoretical computer science has contributed large insights to this question — including Leslie Valiant’s Probably Approximately Correct (PAC) learning model, for which he recently won the Turing Award; the notion of Vapnik-Chervonenkis (VC) dimension; and the notion of the universal prior from algorithmic information theory. In essence, these ideas all give you various formal models where Occam’s Razor *provably* works — where you can give “simplicity” a precise definition, and then see *exactly* why simple hypotheses are more likely to predict the future than complicated ones. Of course, a skeptic about induction could still ask: OK, but why are the assumptions behind these formal models justified? But to me, this represents progress! The whole discussion can now start from a more sophisticated place than before.\n\n\n(2) One of the first questions anyone asks on learning quantum mechanics is, “OK, but do all these branches of the wavefunction really exist? or are they just mathematical constructs used to calculate probabilities?” Roughly speaking, Many-Worlders would say they do exist, while Copenhagenists would say they don’t. Of course, part of what makes the question slippery is that it’s not even completely clear what we mean by words like “exist”! Now, I’d say that quantum computing theory has sharpened the question in many ways, and actually answered some of the sharpened versions — but interestingly, sometimes the answer goes one way and sometimes it goes the other! So for example, we have strong evidence that quantum computers can solve certain specific problems in polynomial time that would require exponential time to solve using a classical computer. 
Some Many-Worlders, most notably David Deutsch, have seized on the apparent exponential speedups for problems like factoring, as the ultimate proof that the various branches of the wavefunction must literally exist: “if they *don’t* exist,” they ask, “then where was this huge number factored? where did the exponential resources to solve the problem come from?” The trouble is, we’ve also learned that a quantum computer could NOT solve arbitrary search problems exponentially faster than a classical computer could solve them — something you’d probably predict a QC could do, if you thought of all the branches of the wavefunction as just parallel processors. If you want a quantum speedup, then your problem needs a particular structure, which (roughly speaking) lets you choreograph a pattern of constructive and destructive interference involving ALL the branches. You can’t just “fan out” and have one branch try each possible solution — twenty years of popular articles notwithstanding, that’s not how it works! We also know today that you can’t encode more than about n classical bits into n quantum bits (qubits), in such a way that you can reliably retrieve any one of the bits afterward. And we have lots of other results that make quantum-mechanical amplitudes feel more like “just souped-up versions of classical probabilities,” and quantum superposition feel more like just a souped-up kind of potentiality. 
I love how the mathematician Boris Tsirelson summarized the situation: he said that “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” It’s an ontological category that our pre-mathematical, pre-quantum intuitions just don’t have a good name for.\n\n\n(3) Many interesting philosophical puzzles boil down to what it means to know something: and in particular, to the difference between knowing something “explicitly” and knowing it only “implicitly.” For example, I mentioned in my essay the example of the largest “known” prime number. According to the Great Internet [Mersenne Prime Search](http://www.mersenne.org/), that number is currently 2^57885161 – 1. The question is, why can’t I reply immediately that I know a bigger prime number: namely, “the first prime larger than 2^57885161 – 1”? I can even give you an algorithm to find my number, which provably halts: namely, starting from 2^57885161, try each number one by one until you hit a prime! Theoretical computer science has given us the tools to sharpen a huge number of questions of this sort, and sometimes answer them. Namely, we can say that to know a thing “explicitly” means, not merely to have ANY algorithm to generate the thing, but to have a provably polynomial-time algorithm. That gives us a very clear sense in which, for example, 2^57885161 – 1 is a “known” prime number while the next prime after it is not. And, in many cases where mathematicians vaguely asked for an “explicit construction” of something, we can sharpen the question to whether or not some associated problem has a polynomial-time algorithm. 
Then, sometimes, we can find such an algorithm or give evidence against its existence!\n\n\n(4) One example that I *didn’t* discuss in the essay — but a wonderful one, and one where there’s actually been huge progress in the last few years — concerns the question of how we can ever know for sure that something is “random.” I.e., even if a string of bits passes every statistical test for randomness that we throw it at, how could we ever rule out that there’s some complicated regularity that we simply failed to find? In the 1960s, the theory of Kolmogorov complexity offered one possible answer to that question, but a rather abstract and inapplicable one: roughly speaking, it said we can consider a string “random enough for our universe” if it has no *computable* regularities, if there’s no program to output the string shorter than the string itself. More recently, a much more practical answer has come from the Bell inequality — and in particular, from the realization that the experimental violation of that inequality can be used to produce so-called “Einstein-certified random numbers.” These are numbers that are *provably* random, assuming only (a) that they were produced by two separated devices that produced such-and-such outputs in response to challenges, and (b) there was no faster-than-light communication between the devices. But it’s only within the last few years that computer scientists figured out how to implement this striking idea, in such a way that you get out more randomness than you put in. (Recently, two MIT grad students proved that, starting from a fixed “seed” of, let’s say, 100 random bits, you can produce *unlimited* additional random bits in this Einstein-certified way — see [Infinite Randomness Expansion and Amplification with a Constant Number of Devices](http://arxiv.org/abs/1310.6755).) And the experimental demonstration of these ideas is just getting started now. 
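Aaronson’s point (3) can be made concrete with a toy sketch (mine, not his): the “first prime larger than N” search below provably halts — Bertrand’s postulate guarantees a prime between N and 2N — but no provably polynomial-time algorithm for it is known, which is exactly his proposed criterion for an “explicitly known” number. Running it on 2^57885161 − 1 is hopeless, so the demo uses the much smaller Mersenne prime 2^13 − 1.

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin; the fixed witness bases below are known to
    suffice for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in witnesses:
        x = pow(a, d, n)          # a^d mod n via fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True

def next_prime_after(n: int) -> int:
    """Provably halts, but nobody has a provably polynomial-time version --
    so its output is "known" only implicitly, in Aaronson's sense."""
    k = n + 1
    while not is_prime(k):
        k += 1
    return k

# Toy stand-in for 2**57885161 - 1: the Mersenne prime 2**13 - 1 = 8191.
print(next_prime_after(2**13 - 1))  # -> 8209
```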
Anyway, I’m working on an article for American Scientist magazine about these developments, so rather than cannibalize the article, I’ll simply welcome people to read it when it’s done!\n\n\n\n\n---\n\n\n**Luke**: What do you think about *philosophy the field* — work published by people in philosophy departments, who publish mostly in philosophy journals like *Mind* and *Noûs*, who are writing mostly for other philosophers?\n\n\nI’ve previously called philosophy a “[diseased discipline](http://lesswrong.com/lw/4zs/philosophy_a_diseased_discipline/),” for many reasons. For one thing, people working in philosophy-the-field tend to know strikingly little about the philosophical progress made in other fields, e.g. computer science or cognitive neuroscience. For another, books on the history of philosophy seem to be about the musings of old dead guys who were wrong about almost everything because they didn’t have 20th century science or math, rather than about actual philosophical progress, which is instead recounted in books like *[The Information](http://www.amazon.com/Information-History-Theory-Flood-ebook/dp/B004DEPHUC/)*.\n\n\nDo you wish people in other fields would more directly try to use the tools of their discipline to make philosophical progress on The Big Questions? Do you wish philosophy-the-field would be reformed in certain ways? Would you like to see more crosstalk between disciplines about philosophical issues? Do you think that, as Clark Glymour [suggested](http://choiceandinference.com/2011/12/23/in-light-of-some-recent-discussion-over-at-new-apps-i-bring-you-clark-glymours-manifesto/), philosophy departments should be defunded unless they produce work that is directly useful to other fields (as is the case with Glymour’s department)?\n\n\n\n\n---\n\n\n**Scott**: Well, let’s start with the positives of academic philosophy!\n\n\n(1) I liked the philosophy of math and science courses that I took in college. 
Sure, I sometimes got frustrated by the amount of time spent on what felt like Talmudic exegesis, but on the other hand, those courses offered a scope for debating big, centuries-old questions that my math and science courses hardly ever did.\n\n\n(2) These days, I go maybe once a year to conferences where I meet professional philosophers of science, and I’ve found my interactions with them stimulating and fun. Philosophers often listen to what you say more carefully than other scientists do, and they’re incredibly good at spotting hidden assumptions, imprecise use of language, that sort of thing. Also, philosophers of science tend to double in practice as science historians: they often know much, much more about what, let’s say, Einstein or Bohr or Godel or Turing wrote and believed than physicists and mathematicians themselves know.\n\n\n(3) While my own reading of the philosophical classics has been woefully incomplete, I don’t feel like the time I spent with (say) Hume or J. S. Mill or William James or Bertrand Russell was wasted at all. You’re right that these “old dead guys” didn’t know all the math and science we know today, but then again, neither did Shakespeare or Dostoyevsky! I mean, sure, the central questions of philosophy have changed over time, and the human condition has changed as well: we no longer get confused over Zeno’s paradoxes or the divine right of kings, and we now have global telecommunications and the Pill. I just don’t think either human nature or human philosophical concerns have changed *quickly* enough for great literature on them written centuries ago to have ceased being great.\n\n\nHaving said all that, from what I’ve seen of academic philosophy, I do pretty much agree with your diagnoses of its “diseases.” By far the most important disease, I’d say, is the obsession with interpreting and reinterpreting the old masters, rather than moving beyond them. 
Back in college, after we’d spent an hour debating why *this* passage of Frege seemed to contradict *that* one, I’d sometimes want to blurt out: “so maybe he was having a bad day! I mean, he was also a raving misogynist and antisemite; he believed all kinds of things. Look, we’ve read Frege, we’ve learned from Frege, now can’t we just give the old dude a rest and debate the ground truth about the problems he was trying to solve?” Likewise, when I read books about the philosophy of physics or computing, it sometimes feels like I’m stuck in a time warp, as the contributors rehash certain specific debates from the 1930s over and over (say, about the Church-Turing Thesis or the Einstein-Podolsky-Rosen paradox). I want to shout, “enough already! why not help clarify some modern scientific debates — say, about quantum computing, or string theory, or the black-hole firewall problem, ones where we don’t already know how everything turns out?” To be fair, today there are philosophers of science who are doing exactly that, and who have interesting and insightful things to say. That’s a kind of philosophy that I’d love to see more of, at the expense of the hermeneutic kind.\n\n\nNow, regarding Clark Glymour’s suggestion that philosophy departments be defunded unless they produce work useful to other fields — from what I understand, something not far from that is already happening! As bad as our funding woes in the sciences might be, I think the philosophers have it a hundred times worse, with like a quadrillion applicants for every tenure-track opening. So it seems to me like the right question is not how much further those poor dudes should be defunded, but rather: what can philosophy departments do to make themselves more vibrant, places that scientists regularly turn to for clarifying insights, and that deans and granting agencies get excited about wanting to expand? 
As a non-philosopher, I hesitate to offer unsolicited “advice” about such matters, but I guess I already did in the previous paragraph.\n\n\nOne final note: none of the positive or hopeful things that I said about philosophy apply to the postmodern or Continental kinds. As far as I can tell, the latter aren’t really “philosophy” at all, but more like pretentious brands of performance art that fancy themselves politically subversive, even as they cultivate deliberate obscurity and draw mostly on the insights of Hitler and Stalin apologists. I suspect I won’t ruffle too many feathers here at MIRI by saying this.\n\n\n\n\n---\n\n\n**Luke**: Suppose a mathematically and analytically skilled student wanted to make progress, in roughly the way you describe, on the Big Questions of philosophy. What would you recommend they study? What should they read to be inspired? What skills should they develop? Where should they go to study?\n\n\n\n\n---\n\n\n**Scott**: The obvious thing to say is that, as a student, you should follow your talents and passions, rather than following the generic advice of some guy on the Internet who doesn’t even know you personally!\n\n\nHaving said that, I would think broadly about which fields can give you enough scope to address the “Big Questions of Philosophy.”  You can philosophize from math, computer science, physics, economics, cognitive science, neuroscience, and probably a bunch of other fields too.  (My colleague Seth Lloyd philosophizes left and right, from his perch in MIT’s Mechanical Engineering department.)  Furthermore, all of these fields have the crucial advantage that they’ll offer you a steady supply of “fresh meat”: that is, new and exciting empirical or theoretical discoveries in which you can participate, and that will give you something to philosophize ABOUT (not to mention, something to do when you’re not philosophizing).  
If I were working in a philosophy department, I feel like I’d have to make a conscious and deliberate effort to avoid falling into a “hermeneutic trap,” where I’d spend all my time commenting on what other philosophers had said about the works of yet other philosophers, and where I’d seal myself off from anything that had happened in the world of science since (say) Godel’s Theorem or special relativity.  (Once again, though, if you find that your particular talents and passions are best served in an academic philosophy department, then don’t let some guy on the Internet stop you!)\n\n\nRegardless of your major, I recommend taking a huge range of courses as an undergrad: math, computer science (both applied and theoretical), physics, humanities, history, writing, and yes, philosophy. Looking back on my own undergrad years, the most useful courses I took were probably my math courses, and that’s *despite* the fact that most of them were poorly taught!  Things like linear algebra, group theory, and probability have so many uses throughout science that learning them is like installing a firmware upgrade to your brain — and even the math you *don’t* use will stretch you in helpful ways.  After math courses, the second most useful courses I took were writing seminars — the kind where a small group of students reads and critiques one another’s writing, and the professor functions mostly as a moderator.  It was in such a seminar that I wrote my essay “[Who Can Name the Bigger Number?](http://www.scottaaronson.com/writings/bignumbers.html)”, which, for better or worse, continues to attract more readers than anything else I’ve written in the fifteen years since.  One writing seminar, if it’s good, can easily be worth the whole cost of a college tuition.\n\n\nIf you’re the kind of person for whom this advice is intended, then you probably don’t have to be told to read widely and voraciously, anything you get curious about.  
Don’t limit yourself to one genre, don’t limit yourself to stuff you agree with, and *certainly* don’t limit yourself to the assigned reading for your courses.  When I was an adolescent, my favorites were just what a nerd stereotyper might expect: science fiction (especially Isaac Asimov), books about programming and the software industry, and math puzzle books (especially Martin Gardner).  A few years later, I became obsessed with reading biographies of scientists, like Feynman, Ramanujan, Einstein, Schrodinger, Turing, Godel, von Neumann, and countless lesser luminaries.  I was interested in every aspect of their lives — in their working habits, their hobbies, their views on social and philosophical issues, their love lives — but, I confess, I was particularly interested in what they were doing as teenagers, so that I could compare to what I was doing and sort of see how I measured up.  At the same time, my reading interests were broadening to include politics, history, philosophy, psychology, and some contemporary fiction (I especially like Rebecca Goldstein).  It was only in grad school that I felt I’d sufficiently recovered from high-school English to tackle “real literature” like Shakespeare — but when I did, it was worth it.\n\n\nAs for where to study, well, the “tautological” answer is wherever will give you the best opportunities!  There are certain places, like Boston or the Bay Area, that are famous for having high concentrations of intellectual opportunity, but don’t go somewhere just because of what you’ve heard about the *general* atmosphere or prestige: particularly for graduate school, go where the particular people or programs are that resonate for you.  In quantum computing, for example, one of the centers of the world for the last decade has been Waterloo, Canada — a place many people hadn’t even heard of when I did my postdoc there eight years ago (though that’s changing now).  
And one of the intellectually richest years of my life came when I attended The Clarkson School, a program that lets high-school students live and take courses at Clarkson University in Potsdam, NY.  (I went there when I was 15, and was looking for something less prison-like than high school.)  If, for what you personally want to do, there are better opportunities in Topeka, Kansas than at Harvard, go to Topeka.\n\n\n\n\n---\n\n\n**Luke**: Finally, I’d like to ask about which object-level research tactics — more specific than your general “bait and switch” strategy — you suspect are likely to help with philosophical research, or perhaps with theoretical research of any kind.\n\n\nFor example, some of the tactics we’ve found helpful at [MIRI](http://intelligence.org/) include:\n\n\n* When you’re confused about a fuzzy, slippery concept, try to build a simple formal model and push on it with the new tools then available to you. Even if the model doesn’t capture the complexity of the world, pushing things into the mathematical realm can lead to progress. E.g. the [VNM axioms](http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#The_axioms) don’t exactly capture “rationality,” but it sure is easier to think clearly about rationality once you have them. Or: we’re confused about how to do principled reflective reasoning within an agent, so even though advanced AIs are unlikely to literally run into a “[Löbian obstacle](https://intelligence.org/files/TilingAgents.pdf)” to self-reflection, setting up the problem that way (in mathematical logic) can lead to some interesting insights in (e.g.) [probabilistic metamathematics](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf) for reflective reasoning.\n* Look for tools from other fields that appear to directly map onto the phenomena you’re studying. E.g. 
[model moral judgment as an error process amenable to Bayesian curve fitting](http://commonsenseatheism.com/wp-content/uploads/2013/12/Beckstead-On-the-overwhelming-importance-of-shaping-the-far-future.pdf).\n* Try to think of how your concept could be instantiated with infinite computing power. If you can’t do that, your concept might be fundamentally confused.\n* If you’re pretty familiar with [modern psychology](http://www.amazon.com/Handbook-Thinking-Reasoning-Library-Psychology/dp/0199313792/), then… When using your intuitions to judge between options, try to think about which cognitive algorithms could be generating those intuitions, and [whether they are](http://lesswrong.com/lw/74f/are_deontological_moral_judgments_rationalizations/) [cognitive algorithms](http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/) [whose outputs](http://lesswrong.com/lw/hw/scope_insensitivity/) [you reflectively endorse](https://intelligence.org/files/CognitiveBiases.pdf).\n* To make the thing you’re studying clearer, look just next to it, and around it. [Foer (2009)](http://www.amazon.com/Eating-Animals-Jonathan-Safran-Foer-ebook/dp/B002SSBD6W/) explains this nicely in the context of thinking about one’s values and vegetarianism: “A simple trick from the backyard astronomer: if you are having trouble seeing something, look slightly away from it. The most light-sensitive parts of our eyes (those we need to see dim objects) are on the edges of the region we normally use for focusing. Eating animals has an invisible quality. Thinking about dogs, and their relationship to the animals we eat, is one way of looking askance and making something invisible visible.”\n\n\nWhich object-level thinking tactics, at roughly this level of specificity, do you use in your own theoretical (especially *philosophical*) research? 
Are there tactics you suspect might be helpful, which you haven’t yet used much yourself?\n\n\n\n\n---\n\n\n**Scott**: As far as I can remember, I’ve never set out to do “philosophical research,” so I can’t offer specific advice about that. What I *have* often done is research in quantum computing and complexity theory that was motivated by some philosophical issue, usually in foundations of quantum mechanics. (I’ve also written a few philosophical essays, but I don’t really count those as “research.”) Anyway, I can certainly offer advice about doing the kind of research I like to do!\n\n\n(1) Any time you find yourself in a philosophical disagreement with a fellow scientist, don’t be content just to argue philosophically — even if you’re sure you can win the argument! Instead, think hard about whether you can go further, and find a concrete technical question that captures some little piece of what you’re disagreeing about. Then see if you can answer that technical question. Of course, any time you do this, you have to be prepared for the possibility that the answer will go your opponent’s way, rather than yours! But what’s nice is that you get to publish a paper even then. (One of the best ways to tell whether a given enterprise is scientific at all, rather than ideological, is by asking whether the participants will opportunistically “go to bat for the opposing side” whenever they find a novel truth on that side.) 
I’d estimate that up to half the papers I’ve written had their origin in my reading or overhearing some claim — for example, “Grover’s algorithm obviously can’t work for searching actual physical databases, since the speed of light is finite,” or “the quantum states arising in Shor’s algorithm are obviously completely different from anything anyone has ever seen in the lab,” or “the interactive proof results obviously make oracle separations completely irrelevant” — and getting annoyed, either because I thought the claim was false, or because I simply didn’t think it had been adequately justified. The cases where my annoyance paid off are precisely the ones where, rather than just getting mad, I managed to get technical!\n\n\n(2) Often, the key to research is figuring out how to redefine failure as success. Some stories: when Alan Turing published his epochal 1936 paper on Turing machines, he did so with great disappointment: he had recently learned that Alonzo Church had independently arrived at similar results using lambda calculus, and he didn’t know whether anyone would still be interested in his alternative, machine-based approach. In the early 1970s, Leonid Levin delayed publishing about NP-completeness for several years: apparently, his “real” goal was to prove graph isomorphism was NP-complete (something we now know is almost certainly false), and in his mind, he had failed. Instead, he merely had a few “trivialities,” like the definitions of P, NP, and NP-completeness, and the proof that satisfiability was NP-complete. And Levin’s experience is far from unique: again and again in mathematical research, you’ll find yourself saying something like: “goddammit, I’ve been trying for six months to prove Y, but I can only prove the different/weaker statement X! 
And every time I think I can bridge the gap between X and Y, yet another difficulty rears its head!” Any time that happens to you, think hard about whether you can write a compelling paper that begins: “Y has been a longstanding open problem. In this work, we introduce a new idea: to make progress on Y by shifting attention to the more tractable X.” More broadly, experience has shown that scientists are *terrible* judges of which of their ideas will be interesting or important to others. Pick any scientist’s most cited paper, and there’s an excellent chance that the scientist herself, at one point, considered it a “little recreational throwaway project” that was barely worth writing up. After you’ve seen enough examples of that, you learn you should always err on the side of publishing, and let posterity sort out which of your ideas are most important. (Yet another advantage of this approach is that, the more ideas you publish, the less emotionally invested you are in any one of them, so the less crushed you are when a few turn out to be wrong or trivial or already known.)\n\n\n(3) Sometimes, when you set out to prove some mathematical conjecture, your first instinct is just to throw an arsenal of theory at it. “Hey, what if I try a topological fixed-point theorem? What if I translate the problem into a group-theoretic language? If neither of those works, what if I try both at once?” Sometimes, you rise so quickly this way into a stratosphere of generality that the original problem is barely a speck on the ground. And yes, some problems *can* be beaten into submission using high-powered theory. But in my experience, there are two enormous risks with this approach. First, you’re liable to get lost on a wild goose chase, where you get so immersed in theory and techniques that you lose sight of your original goal. 
It’s as if your efforts to break into a computer network lead you to certain complicated questions about the filesystem, which in turn lead you to yet more complicated questions about the kernel… and in the meantime someone else breaks in by guessing people’s birthdays for their passwords. Second, you’re also liable to fool yourself this way into thinking you’ve solved the problem when you haven’t. When you let high-powered machinery take the place of hands-on engagement with the problem, a single mistake in applying the machinery can creep in unbelievably easily. These risks are why I’ve learned over time to work in an extremely different way. Rather than looking for “general frameworks,” I look for easy special cases and simple sanity checks, for stuff I can try out using high-school algebra or maybe a five-line computer program, just to get a feel for the problem. Even more important, when I’m getting started, I don’t think about proof techniques at all: I think instead about obstructions. That is, I ask myself, “what would the world have to be like for the conjecture to be *false*? what goes wrong if I try to invent a simple counterexample? *does* anything go wrong? it does? OK then, what obstruction keeps me from proving this conjecture in the simplest, dumbest way imaginable?” I find that, after you’ve felt out the full space of obstructions and counterexamples, and really honestly convinced *yourself* of why the conjecture should be true, finding the proof techniques by which to convince everyone else is often a more-or-less routine exercise.\n\n\nFinally, you ask about tactics that I suspect might be helpful, but that I haven’t used much myself. One that springs to mind is to really master a tool like Mathematica, MATLAB, Maple, or Magma — that is, to learn it so well that I can code as fast as I think, and just let it take over all the routine / calculational / example-checking parts of my work. 
As it is, I use pretty much the same antiquated tools that I learned as an adolescent, and I rely on students whenever there’s a need for better tools. A large part of the problem is that, as a “tenured old geezer,” I no longer have the time or patience to learn new tools just for the sake of learning them: I’m always itching just to solve the problem at hand with whatever tools I know. (The same issue has kept me from learning new mathematical tools, like representation theory, even when I can clearly see that they’d benefit me.)\n\n\n\n\n---\n\n\n**Luke**: Thanks, Scott!\n\n\nThe post [Scott Aaronson on Philosophical Progress](https://intelligence.org/2013/12/13/aaronson/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-13T20:33:33Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7fb4e3f00dcf1b5f5f0bad6860d4af7a", "title": "2013 Winter Matching Challenge", "url": "https://intelligence.org/2013/12/02/2013-winter-matching-challenge/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2013/12/workshop-horizontal-1.jpg)\nThanks to [Peter Thiel](http://en.wikipedia.org/wiki/Peter_Thiel), every donation made to MIRI between now and January 15th, 2014 will be **matched dollar-for-dollar**!\n\n\nAlso, **gifts from “new large donors” will be matched 3x!** That is, if you’ve given less than $5k to SIAI/MIRI ever, and you now give or pledge $5k or more, Thiel will donate $3 for every dollar you give or pledge.\n\n\nWe don’t know whether we’ll be able to offer the 3:1 matching ever again, so if you’re capable of giving $5k or more, we encourage you to take advantage of the opportunity while you can. Remember that:\n\n\n* If you prefer to give monthly, no problem! If you pledge 6 months of monthly donations, your full 6-month pledge will be the donation amount to be matched. 
So if you give monthly, you can get 3:1 matching for only $834/mo (or $417/mo if you get matching from your employer).\n* We accept [Bitcoin](http://en.wikipedia.org/wiki/Bitcoin) (BTC) and [Ripple](http://en.wikipedia.org/wiki/Ripple_(monetary_system)) (XRP), both of which have recently jumped in value. If the market value of your Bitcoin or Ripple is $5k or more on the day you make the donation, this will count for matching.\n* If your employer matches your donations at 1:1 (check [here](http://doublethedonation.com/miri)), then you can take advantage of Thiel’s 3:1 matching by giving as little as $2,500 (because it’s $5k after corporate matching).\n\n\n*Please email [malo@intelligence.org](mailto:malo@intelligence.org) if you intend on leveraging corporate matching or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions towards the fundraiser.*\n\n\nThiel’s total match is capped at $250,000. The total amount raised will depend on how many people take advantage of 3:1 matching. We don’t anticipate being able to hit the $250k cap without substantial use of 3:1 matching — so if you haven’t given $5k thus far, please consider giving/pledging $5k or more during this drive. 
(If you’d like to know the total amount of your past donations to MIRI, just ask [malo@intelligence.org](mailto:malo@intelligence.org).)\n\n\n### We have reached our matching total of $250,000!\n\n\nNow is your chance to **double or quadruple your impact** in funding our [research program](http://intelligence.org/research/).\n\n\n\n\n [Donate Today](https://intelligence.org/donate/#donation-methods)\n------------------------------------------------------------------\n\n\n\n\n \n\n\n![](https://intelligence.org/wp-content/uploads/2013/12/workshop-horizontal-3.jpg)\n### Accomplishments Since Our July 2013 Fundraiser Launched:\n\n\n* Held three **[research workshops](http://intelligence.org/workshops/#past-workshops)**, including our [first European workshop](http://intelligence.org/2013/08/30/miris-november-2013-workshop-in-oxford/).\n* [**Talks** at MIT and Harvard](http://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/), by Eliezer Yudkowsky and Paul Christiano.\n* Yudkowsky is blogging more **Open Problems in Friendly AI**… [on Facebook](https://www.facebook.com/groups/233397376818827/)! 
(They’re also being written up in a more conventional format.)\n* New **papers**: (1) [Algorithmic Progress in Six Domains](http://lesswrong.com/r/discussion/lw/i8i/algorithmic_progress_in_six_domains/); (2) [Embryo Selection for Cognitive Enhancement](http://intelligence.org/2013/10/30/new-paper-embryo-selection-for-cognitive-enhancement/); (3) [Racing to the Precipice](http://intelligence.org/2013/11/27/new-paper-racing-to-the-precipice/); (4) [Predicting AGI: What can we say when we know so little?](https://intelligence.org/files/PredictingAGI.pdf)\n* New **ebook**: [*The Hanson-Yudkowsky AI-Foom Debate*](http://intelligence.org/ai-foom-debate/).\n* New **analyses**: (1) [From Philosophy to Math to Engineering](http://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/); (2) [How well will policy-makers handle AGI?](http://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-agi-initial-findings/) (3) [How effectively can we plan for future decades?](http://intelligence.org/2013/09/04/how-effectively-can-we-plan-for-future-decades/) (4) [Transparency in Safety-Critical Systems](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/); (5) [Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness](http://intelligence.org/2013/10/03/proofs/); (6) [What is AGI?](http://intelligence.org/2013/08/11/what-is-agi/) (7) [AI Risk and the Security Mindset](http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/); (8) [Richard Posner on AI Dangers](http://intelligence.org/2013/10/18/richard-posner-on-ai-dangers/); (9) [Russell and Norvig on Friendly AI](http://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/).\n* New **expert interviews**: [Greg Morrisett](http://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/) (Harvard), [Robin Hanson](http://intelligence.org/2013/11/01/robin-hanson/) (GMU), [Paul 
Rosenbloom](http://intelligence.org/2013/09/25/paul-rosenbloom-interview/) (USC), [Stephen Hsu](http://intelligence.org/2013/08/31/stephen-hsu-on-cognitive-genomics/) (MSU), [Markus Schmidt](http://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/) (Biofaction), [Laurent Orseau](http://intelligence.org/2013/09/06/laurent-orseau-on-agi/) (AgroParisTech), [Holden Karnofsky](http://intelligence.org/2013/08/25/holden-karnofsky-interview/) (GiveWell), [Bas Steunebrink](http://intelligence.org/2013/10/25/bas-steunebrink-on-sleight/) (IDSIA), [Hadi Esmaeilzadeh](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) (GIT), [Nick Beckstead](http://intelligence.org/2013/07/17/beckstead-interview/) (Oxford), [Benja Fallenstein](http://intelligence.org/2013/08/04/benja-interview/) (Bristol), [Roman Yampolskiy](http://intelligence.org/2013/07/15/roman-interview/) (U Louisville), [Ben Goertzel](http://intelligence.org/2013/10/18/ben-goertzel/) (Novamente), and [James Miller](http://intelligence.org/2013/07/12/james-miller-interview/) (Smith College).\n* With [Leverage Research](http://www.leverageresearch.org/), we held a San Francisco **book launch party** for James Barratt’s [*Our Final Invention*](http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence-ebook/dp/B00CQYAWRY/), which discusses MIRI’s work at length. (If you live in the Bay Area and would like to be notified of local events, please tell malo@intelligence.org!)\n\n\n\n\n [Donate Today](https://intelligence.org/donate/#donation-methods)\n------------------------------------------------------------------\n\n\n\n\n### How Will Marginal Funds Be Used?\n\n\n* **Hiring Friendly AI researchers**, identified through our workshops, as they become available for full-time work at MIRI.\n* Running **more workshops** (next one begins [Dec. 
14th](http://intelligence.org/2013/07/24/miris-december-2013-workshop/)), to make concrete Friendly AI research progress, to introduce new researchers to open problems in Friendly AI, and to identify candidates for MIRI to hire.\n* Describing more **open problems in Friendly AI**. Our current strategy is for Yudkowsky to explain them as quickly as possible via Facebook discussion, followed by more structured explanations written by others in collaboration with Yudkowsky.\n* Improving humanity’s **strategic understanding** of what to do about superintelligence. In the coming months this will include (1) additional interviews and analyses on our blog, (2) a reader’s guide for Nick Bostrom’s forthcoming [*Superintelligence*](http://ukcatalogue.oup.com/product/9780199678112.do) [book](http://ukcatalogue.oup.com/product/9780199678112.do), and (3) an introductory ebook currently titled *Smarter Than Us.*\n\n\nOther projects are still being surveyed for likely cost and impact.\n\n\nWe appreciate your support for our work! [Donate now](https://intelligence.org/donate/#donation-methods), and seize a better than usual chance to move our work forward. 
If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.\n\n\nThe post [2013 Winter Matching Challenge](https://intelligence.org/2013/12/02/2013-winter-matching-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-02T22:33:13Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "921fb4bc67502ae144af9ff9f966cae0", "title": "New Paper: “Predicting AGI: What can we say when we know so little?”", "url": "https://intelligence.org/2013/12/01/new-paper-predicting-agi-what-can-we-say-when-we-know-so-little/", "source": "miri", "source_type": "blog", "text": "[![Predicting AGI](https://intelligence.org/wp-content/uploads/2013/12/predicting-agi-thumb.gif)](https://intelligence.org/files/PredictingAGI.pdf)MIRI research associate Benja Fallenstein and UC Berkeley student Alex Mennen have released a new working paper titled “[Predicting AGI: What can we say when we know so little?](https://intelligence.org/files/PredictingAGI.pdf)”\n\n\nFrom the introduction:\n\n\n\n> This analysis does not attempt to predict when AGI will actually be achieved, but instead, to predict when this epistemic state with respect to AGI will change, such that we will have a clear idea of how much further progress is needed before we reach AGI. Metaphorically speaking, instead of predicting when AI takes off, we predict when it will start taxiing to the runway.\n> \n> \n\n\nThe paper argues for a Pareto distribution for “time to taxi,” and concludes:\n\n\n\n> in general, a Pareto distribution suggests that we should put a much greater emphasis on short-term strategies than a less skewed distribution (e.g. 
a normal distribution) with the same median would.\n> \n> \n\n\nThe post [New Paper: “Predicting AGI: What can we say when we know so little?”](https://intelligence.org/2013/12/01/new-paper-predicting-agi-what-can-we-say-when-we-know-so-little/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-12-01T14:00:04Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "8efea430258ce12d90108b166875e239", "title": "New Paper: “Racing to the Precipice”", "url": "https://intelligence.org/2013/11/27/new-paper-racing-to-the-precipice/", "source": "miri", "source_type": "blog", "text": "[![Racing to the edge](https://intelligence.org/wp-content/uploads/2013/12/racing-to-the-precipice-thumb.gif)](http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf)During his time as a MIRI research fellow, Carl Shulman contributed to a paper now available as an FHI technical report: [Racing to the Precipice: a Model of Artificial Intelligence Development](http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf).\n\n\nAbstract:\n\n\n\n> This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivized to finish first — by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI-disaster, especially if risk taking is more important than skill in developing the AI. 
Surprisingly, information also increases the risks: the more teams know about each others’ capabilities (and about their own), the more the danger increases.\n> \n> \n\n\n**Update:** As of July 2015, this paper has been published in the journal *[AI & Society](http://link.springer.com/article/10.1007%2Fs00146-015-0590-y)*.\n\n\nThe post [New Paper: “Racing to the Precipice”](https://intelligence.org/2013/11/27/new-paper-racing-to-the-precipice/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-11-27T12:58:18Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "d7068c00f8c0c3dfa931b2c16132c8f5", "title": "MIRI’s November 2013 Newsletter", "url": "https://intelligence.org/2013/11/18/miri-update-november-2013/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n[Machine Intelligence Research Institute](http://intelligence.org)\n |\n\n |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nDear friends,\n\nWe’re experimenting with a new, ultra-brief newsletter style. To let us know what you think of it, simply reply to this email. Thanks!\n\n\n**News Updates**\n* You can now [support MIRI for free](http://intelligence.org/2013/11/06/amazonsmile/) by shopping at smile.amazon.com instead of amazon.com. Update your bookmarks!\n* Louie Helm will be onsite for the Nov. 24th Marin county screening of Doug Wolens’ new documentary [*The Singularity*](http://thesingularityfilm.com/). 
Details on this and other screenings are [here](http://thesingularityfilm.com/screenings/).\n\n\n\n**Research Updates**\n* New paper: “[Embryo Selection for Cognitive Enhancement](http://intelligence.org/2013/10/30/new-paper-embryo-selection-for-cognitive-enhancement/).” (Including a description of iterated embryo selection.)\n* New analysis: “[From Philosophy to Math to Engineering](http://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/).” (How we see research progress.)\n* Eliezer Yudkowsky [gave a talk](http://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/) at MIT, but unfortunately MIT failed to record the video as scheduled. A later delivery of the talk *may* be recorded and released in the next few months.\n* Paul Christiano gave a talk at Harvard about probabilistic metamathematics. Video [here](http://intelligence.org/2013/10/23/probabilistic-metamathematics-and-the-definability-of-truth/).\n* Two surveys of what others have written about risks from AGI: [Richard Posner](http://intelligence.org/2013/10/18/richard-posner-on-ai-dangers/) (the most-cited legal scholar of the 20th century) and [Stuart Russell & Peter Norvig](http://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/) (authors of the #1 AI textbook).\n* Several new expert interviews: [Ben Goertzel](http://intelligence.org/2013/10/18/ben-goertzel/) (Novamente) on AGI as a field, [Hadi Esmaeilzadeh](http://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) (GIT) on dark silicon’s threat to Moore’s Law, [Bas Steunebrink](http://intelligence.org/2013/10/25/bas-steunebrink-on-sleight/) (IDSIA) on self-reflective programming, [Markus Schmidt](http://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/) (Biofaction) on risks from novel biotechnologies, [Robin Hanson](http://intelligence.org/2013/11/01/robin-hanson/) (GMU) on “serious futurism,” and [Greg 
Morrisett](http://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/) (Harvard) on secure and reliable systems.\n\n\n**Other Updates**\n* Many of our friends have said Louie Helm’s [Rockstar Research](http://rockstarresearch.com/) is among their favorite new sources of news; check it out!\n* Know an exceptionally bright, ambitious person younger than 20? Tell them to [apply for The Thiel Fellowship](http://www.thielfellowship.org/apply-2014/)! $100,000 to skip college and develop one’s skills and ideas, with an incredible network of mentors in the Bay Area.\n* CFAR has [upcoming rationality workshops](http://rationality.org/apply/) in February (Melbourne), March (Bay Area), and April (NYC). Tell your friends!\n\n\nCheers,\n\nLuke Muehlhauser\nExecutive Director\n |\n\n |\n\n |\n| \n\n| |\n| --- |\n| \n |\n\n |\n\n\nThe post [MIRI’s November 2013 Newsletter](https://intelligence.org/2013/11/18/miri-update-november-2013/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-11-18T19:06:28Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "6063a85cb9d92f79a01c72a89b3d2745", "title": "Support MIRI by Shopping at AmazonSmile", "url": "https://intelligence.org/2013/11/06/amazonsmile/", "source": "miri", "source_type": "blog", "text": "[![Support MIRI through AmazonSmile.](https://intelligence.org/wp-content/uploads/2013/11/amazon-smile.png)](http://smile.amazon.com/ch/58-2565917)If you shop at the new [AmazonSmile](http://smile.amazon.com/ch/58-2565917), Amazon donates 0.5% of the price of your eligible purchases to a charitable organization of your choosing.\n\n\nMIRI is an eligible charitable organization, so the next time you consider purchasing something through Amazon, support MIRI by shopping at [AmazonSmile](http://smile.amazon.com/ch/58-2565917)!\n\n\nIf you get to Amazon.com via an affiliate link, remember to change “amazon.com” to “smile.amazon.com” in the address 
bar before making your purchase. Or, even easier, use the [SmileAlways](http://intelligence.org/2013/11/06/amazonsmile/) Chrome extension.\n\n\nThe post [Support MIRI by Shopping at AmazonSmile](https://intelligence.org/2013/11/06/amazonsmile/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-11-07T01:36:49Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "5a5bb47884efd6ee94d18246b379631e", "title": "Greg Morrisett on Secure and Reliable Systems", "url": "https://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/", "source": "miri", "source_type": "blog", "text": "![Greg Morrisett portrait](https://intelligence.org/wp-content/uploads/2013/11/Morrisett_w150.jpg) [Greg Morrisett](http://www.eecs.harvard.edu/~greg/) is the Allen B. Cutting Professor of Computer Science at Harvard University. He received his B.S. in Mathematics and Computer Science from the University of Richmond in 1989, and his Ph.D. from Carnegie Mellon in 1995. In 1996, he took a position at Cornell University, and in the 2003-04 academic year, he took a sabbatical and visited the Microsoft European Research Laboratory. In 2004, he moved to Harvard, where he has served as Associate Dean for Computer Science and Engineering, and where he currently heads the Harvard Center for Research on Computation and Society.\n\n\nMorrisett has received a number of awards for his research on programming languages, type systems, and software security, including a Presidential Early Career Award for Scientists and Engineers, an IBM Faculty Fellowship, an NSF Career Award, and an Alfred P. Sloan Fellowship.\n\n\nHe served as Chief Editor for the Journal of Functional Programming and as an associate editor for ACM Transactions on Programming Languages and Systems and Information Processing Letters. 
He currently serves on the editorial board for The Journal of the ACM and as co-editor-in-chief for the Research Highlights column of Communications of the ACM. In addition, Morrisett has served on the DARPA Information Science and Technology Study (ISAT) Group, the NSF Computer and Information Science and Engineering (CISE) Advisory Council, Microsoft Research’s Technical Advisory Board, and Microsoft’s Trustworthy Computing Academic Advisory Board.\n\n\n\n**Luke Muehlhauser**: One of the [interesting projects](http://www.eecs.harvard.edu/%7Egreg/) in which you’re involved is [SAFE](http://www.crash-safe.org/), a DARPA-funded project “focused on a clean slate design for resilient and secure systems.” What is the motivation for this project, and in particular for its “clean slate” approach?\n\n\n\n\n\n---\n\n\n**Greg Morrisett**: I think that from DARPA’s perspective, the primary force is that what we’ve been doing so far in terms of securing systems isn’t working.  I suspect that some high-profile problems with successful attacks from both nation states and other organizations have made them desperate to try new and different things.\n\n\nAt the same time, there was a growing recognition that legacy software (and potentially hardware) was a big part of the problem. If, for instance, all of our critical software were coded in type-safe languages, then a lot of low-level problems (e.g., buffer overruns) would be gone.  So I think this was the thinking behind the whole CRASH program and there are something like 20 different teams trying a range of ideas.\n\n\nOur particular project (SAFE) took a very radical approach in wanting to re-think everything:  the languages, the operating system, and even the hardware, as well as an emphasis on formal methods for trying to gain some assurance.  
The team has designed some very interesting hardware that (a) effectively enforces type-safety at the machine level (i.e., object-level integrity, pointers as capabilities, etc.), and (b) provides novel support for doing fine-grained end-to-end information flow tracking.  We’ve also designed a high-level, dynamically-typed language (Breeze) that has pervasive information flow tracking built in.  This means that we can tell which principal actors may have influenced the construction of any value in the system. We’re now building compiler support for Breeze and systems-level software (e.g., scheduler, garbage collector, etc.) to bridge the high-level code and the hardware.\n\n\nAt the same time, we’ve been trying to apply formal models and reasoning to try to prove key properties about models of the system.  For example, we’ve had machine-checked proofs of security for the (core of the) language from the very beginning, and are currently building core models of the hardware and trying to establish similar properties.\n\n\n\n\n---\n\n\n**Luke**: Could you explain, preferably with examples, what you mean by “type-safety at the machine level” and “fine-grained end-to-end information flow tracking”?\n\n\n\n\n---\n\n\n**Greg**: So let’s first start with “type-safety”. In this context, I simply mean that we make a distinction between different values in memory and the machine will only allow appropriate operations to be applied to appropriate values. For instance, we make a distinction between pointers to objects (of a given size) and integers, and the machine will only let you load/store relative to a pointer (and within the bounds of the object to which it points). It will trap if you try to load/store relative to an integer. Similarly, we make a distinction between references to code and data, so you’re only allowed to jump-to or call references to code, and not data. 
These basic integrity properties block a whole range of basic attacks that rely upon violating abstractions (e.g., code injection, buffer overruns, etc.)\n\n\nWith respect to “information flow control” (IFC): we want to enforce policies like “unauthorized principal actors, say a web site with whom I’m communicating, should never learn my private information, say my users’ social security number or password”. The hard part is the “learn no information” as opposed to literally passing the data. If, for instance, I reverse your password and send it out, or take every other letter from your password, we want those to be seen as violations as well as explicitly copying the data. (There are much more subtle ways to communicate information as well.)\n\n\nFormally, the policy we aim for is that the observations of the “unauthorized principal actors” should be the same if I change the password to an arbitrary string. The technical term for this is “non-interference”.\n\n\nThe way we implement IFC is by labeling every piece of data with an information flow control policy. When you combine two pieces of data (say add two numbers), we have rules for computing a new policy to place on the newly generated value. For example, if you add a “secret” and a “public” number, then the result is conservatively classified as “secret”.\n\n\n\n\n---\n\n\n**Luke**: In [a blog post](http://intelligence.org/2013/10/03/proofs/), I warned readers that no system is “provably secure” or “provably safe” in the sense of a 100% guarantee of security or safety, because e.g. a security/safety definition might be subtly wrong, or a particular security reduction might have an error in it. These things have happened before. I said that “Rather, these approaches are meant to provide *more confidence than we could otherwise have, all else equal*, that a given system is secure [or safe].”\n\n\nDo you agree with that characterization? 
And, on the terminological issue: do you think terms like “provably secure” and “provably safe” are too misleading, or are they just another inevitable example of specialist terminology that is often but unavoidably interpreted incorrectly by non-specialists?\n\n\n\n\n---\n\n\n**Greg**: Yes, I completely agree with your blog post! All “proofs of correctness” for a system will be subject to modeling the environment, and specifying what we mean by correct. For real systems, modeling will never be sufficient, and formalizing the notion of correctness may be as complicated as reimplementing the system.\n\n\nI do think that building explicit models of the environment, and writing formal specifications and constructing formal proofs has a lot of value though. As you noted, it really does help to increase confidence. And, it helps identify assumptions. Many times, an attacker will look at your assumptions and try to violate them to gain purchase. So identifying assumptions through modeling is a great exercise.\n\n\nGoing through the process of constructing a formal proof also leads to a lot of good architectural insight. For instance, when we were trying to construct a proof of correctness for Google’s Native Client checker, we realized it would be too hard with the tools we had at hand. So we came up with a simpler algorithm and architecture which was easier to prove correct. It also turned out to be faster! This is detailed in a technical paper: [RockSalt: better, faster, stronger SFI for the x86](http://dl.acm.org/citation.cfm?doid=2254064.2254111).\n\n\nFinally, all of this is a great push-back on creeping complexity in design. I think a lot of problems arise because software is so soft, and we write a lot of code without thinking deeply about what it should (not) be doing.\n\n\nBack to your question as to whether we should use the term “provably secure”: probably not. 
I would like to use the term “certified code” because we are going through a very rigorous certification process. But unfortunately, the term “certified” has come to mean that we’ve followed some bureaucratic set of development processes that don’t actually provide much in the way of security or reliability. So I’m open to suggestions for better terms.\n\n\n\n\n---\n\n\n**Luke**: [Harper (2000)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Harper-Challenges-for-designing-intelligent-systems-for-safety-critical-applications.pdf) quotes [an old message](http://www.cs.york.ac.uk/hise/safety-critical-archive/2000/0501.html) by Ken Frith from the [Safety Critical Mailing List](http://www.cs.york.ac.uk/hise/sc_list.php):\n\n\n\n> The thought of having to apply formal proofs to intelligent systems leaves me cold. How do you provide satisfactory assurance for something that has the ability to change itself during a continuous learning process?\n> \n> \n\n\nHow do you think about the opportunities and limitations for applying formal methods to increasingly intelligent, autonomous systems?\n\n\n\n\n---\n\n\n**Greg**: I’ll be honest and say that I haven’t given it much thought. Perhaps the closest thing that I’ve worked on is trying to formalize information-theoretic security for cryptographic schemes and construct formal proofs in that setting. 
This is an area where I think the crypto/theory community has provided enough structure that we can hope to make progress, but even here, we have all sorts of shaky assumptions at the foundations.\n\n\nI suspect that for intelligent and autonomous systems work (especially in the cyber-physical realm), the first real challenge is constructing appropriate models that are over-approximations of potential failure modes, and trying to find something that is tractable to work with, similar to the way that crypto tries to model adversaries.\n\n\n\n\n---\n\n\n**Luke**: In general, for systems that need to meet a high standard of security, to what degree do you think we can take existing systems that were mostly designed to be *effective*, and “bolt security onto them,” versus needing to build things from the ground up to be secure? Perhaps some specific examples inform your view?\n\n\n\n\n---\n\n\n**Greg**: I think re-architecting and re-coding things will almost always lead to a win in terms of security, when compared to bolt-on approaches. A good example is [qmail](http://cr.yp.to/qmail/guarantee.html) which was designed as a secure alternative to sendmail, and in comparison, had a good architecture that made it much easier to get most things right, including and especially, configuration. As far as I know, there’s only been one bug in qmail (an integer overflow) that could potentially lead to a security problem, whereas there are many, many issues with sendmail (I’ve lost count.)\n\n\nBut, in the end, security is at best a secondary consideration for almost all systems. And as much as I wish we could throw away all of the code and take the time to re-architect and re-code lots of things (especially using better language technology), it’s too expensive. 
For instance, Windows XP was over 50 million lines of code (I don’t know how big Win7 or Win8 are) so re-coding Windows just isn’t a realistic possibility.\n\n\nWhat we need instead are tools and techniques that people can buy into now, and help block problems with legacy systems. Ideally, these same tools should give you a “pay-as-you-go” kind of property: a little bit of effort buys you some assurance, and more effort buys you a lot more.\n\n\nA good example is the kind of static analysis tools that we’re starting to see for legacy C/C++ code, such as Coverity, HP/Fortify, or Microsoft’s Prefast/Prefix/SLAM/etc. These are tools that automatically identify potential errors which in turn could lead to vulnerabilities. Today, if you run, say, these tools on a random pile of C/C++ code, you’ll find that they give lots of false positives. But if you work to minimize those warnings, then you’ll find that the tool becomes much more effective. And, for new code, it works quite well as long as you pay attention to the warnings and work to get rid of them.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Greg!\n\n\nThe post [Greg Morrisett on Secure and Reliable Systems](https://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-11-05T21:11:55Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "90a07e1b00e2e987d639efb2a708bf1c", "title": "From Philosophy to Math to Engineering", "url": "https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/", "source": "miri", "source_type": "blog", "text": "For centuries, philosophers wondered how we could learn what causes what. Some argued it was impossible, or possible only via experiment. 
Others kept [hacking away](http://lesswrong.com/lw/8ns/hack_away_at_the_edges/) at the problem, [clarifying ideas](http://commonsenseatheism.com/wp-content/uploads/2013/10/Pearl-The-Art-and-Science-of-Cause-and-Effect.pdf) like *counterfactual* and *probability* and *correlation* by making them more precise and coherent.\n\n\nThen, in the 1990s, a breakthrough: Judea Pearl and others [showed](http://www.amazon.com/Causality-ebook/dp/B00AKE1VYK/) that, in principle, we can sometimes infer causal relations from data even without experiment, via the mathematical machinery of probabilistic graphical models.\n\n\nNext, engineers used this mathematical insight to write [software](http://cran.r-project.org/web/packages/pcalg/index.html) that can, in seconds, infer causal relations from a data set of observations.\n\n\nAcross the centuries, researchers had toiled away, pushing our understanding of causality from philosophy to math to engineering.\n\n\n[![From Philosophy to Math to Engineering (small)](https://intelligence.org/wp-content/uploads/2013/10/From-Philosophy-to-Math-to-Engineering-small.jpg)](https://intelligence.org/wp-content/uploads/2013/10/From-Philosophy-to-Math-to-Engineering.jpg)\nAnd so it is with [Friendly AI](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence) research. Current progress on each sub-problem of Friendly AI lies somewhere on a spectrum from philosophy to math to engineering.\n\n\n\nWe began with some fuzzy philosophical ideas of what we want from a Friendly AI (FAI). We want it to be benevolent and powerful enough to eliminate suffering, protect us from natural catastrophes, help us explore the universe, and otherwise make life *awesome*. We want FAI to allow for moral progress, rather than immediately reshape the galaxy according to whatever our current values happen to be. We want FAI to remain beneficent even as it rewrites its core algorithms to become smarter and smarter. 
And so on.\n\n\nSmall pieces of this philosophical puzzle have been broken off and [turned into](http://lesswrong.com/lw/hok/link_scott_aaronson_on_free_will/9546?context=1#comments) math, e.g. [Pearlian causal analysis](http://www.amazon.com/Causality-ebook/dp/B00AKE1VYK/) and [Solomonoff induction](http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/). Pearl’s math has since been used to produce causal inference software that can be run on today’s computers, whereas engineers have thus far succeeded in implementing (tractable approximations of) Solomonoff induction only for [very limited applications](http://arxiv.org/pdf/0909.0801v2.pdf).\n\n\nToy versions of two pieces of the “stable self-modification” problem were transformed into math problems in [de Blanc (2011)](https://intelligence.org/files/OntologicalCrises.pdf) and [Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf), though this was done to enable further insight via formal analysis, not to assert that these small pieces of the philosophical problem had been *solved* to the level of math.\n\n\nThanks to Patrick LaVictoire and other [MIRI workshop](http://intelligence.org/get-involved/#workshop) participants,[1](https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/#footnote_0_10548 \"And before them, Moshe Tennenholtz.\") Douglas Hofstadter’s FAI-relevant philosophical idea of “[superrationality](http://en.wikipedia.org/wiki/Superrationality)” seems to have been, for the most part, successfully [transformed](https://intelligence.org/files/RobustCooperation.pdf) into math, and a bit of the engineering work has also [been done](https://github.com/klao/provability/blob/master/modal.hs).\n\n\nI say “seems” because, while humans are fairly skilled at turning math into feats of practical engineering, we seem to be [much](http://lesswrong.com/lw/4zs/philosophy_a_diseased_discipline/) 
[*less*](http://commonsenseatheism.com/wp-content/uploads/2012/10/Brennan-Scepticism-about-philosophy.pdf) [skilled](http://consc.net/papers/progress.pdf) at turning philosophy into math, without leaving anything out. For example, some very sophisticated thinkers have [claimed](http://commonsenseatheism.com/wp-content/uploads/2013/10/Rathmanner-Hutter-A-philosophical-treatise-of-universal-induction.pdf) that “Solomonoff induction solves the problem of inductive inference,” or [that](http://www.amazon.com/Introduction-Kolmogorov-Complexity-Applications-Computer/dp/0387339981/) “Solomonoff has successfully invented a perfect theory of induction.” And indeed, it certainly *seems* like a truly universal induction procedure. However, it [turns out](http://lesswrong.com/lw/cw1/open_problems_related_to_solomonoff_induction/) that Solomonoff induction *doesn’t* fully solve the problem of inductive inference, for relatively subtle reasons.[2](https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/#footnote_1_10548 \"Yudkowsky plans to write more about how to improve on Solomonoff induction, later.\")\n\n\nUnfortunately, philosophical mistakes like this could be fatal when humanity builds the first self-improving AGI ([Yudkowsky 2008](https://intelligence.org/files/AIPosNegFactor.pdf)).[3](https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/#footnote_2_10548 \"This is a specific instance of a problem Peter Ludlow described like this: “the technological curve is pulling away from the philosophy curve very rapidly and is about to leave it completely behind.”\") FAI-relevant philosophical work is, as Nick Bostrom says, “philosophy with a deadline.”\n\n\n\n\n---\n\n1. And before them, [Moshe Tennenholtz](http://mechroom.technion.ac.il/~moshet/progeqnote4.pdf).\n2. Yudkowsky plans to write more about how to improve on Solomonoff induction, later.\n3. 
This is a specific instance of a problem Peter Ludlow described like [this](http://leiterreports.typepad.com/blog/2013/10/progress-in-philosophy-revisited.html): “the technological curve is pulling away from the philosophy curve very rapidly and is about to leave it completely behind.”\n\nThe post [From Philosophy to Math to Engineering](https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-11-04T15:36:23Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f77e2bbf19ae04ca69caa3c6f61077d9", "title": "Robin Hanson on Serious Futurism", "url": "https://intelligence.org/2013/11/01/robin-hanson/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2013/10/Hanson_w150.jpg) [Robin Hanson](http://hanson.gmu.edu/) is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as a founder of the field of prediction markets, and was a chief architect of the Foresight Exchange, DARPA’s FutureMAP, IARPA’s DAGGRE, SciCast, and is chief scientist at Consensus Point. He started the first internal corporate prediction market at Xanadu in 1990, and invented the widely used market scoring rules. He also studies signaling and information aggregation, and is writing a book on the social implications of brain emulations. He blogs at [Overcoming Bias](http://www.overcomingbias.com/).\n\n\nHanson received a B.S. in physics from the University of California, Irvine in 1981, an M.S. in physics and an M.A. in Conceptual Foundations of Science from the University of Chicago in 1984, and a Ph.D. in social science from Caltech in 1997. 
Before getting his Ph.D he researched artificial intelligence, Bayesian statistics and hypertext publishing at Lockheed, NASA and elsewhere.\n\n\n\n**Luke Muehlhauser**: In an earlier blog post, I [wrote](http://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/) about the need for what I called AGI impact experts who “develop skills related to predicting technological development, predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI.”\n\n\nIn 2009, you gave a talk called, “[How does society identify experts and when does it work?](http://vimeo.com/7336217)” Given the study that you’ve done and the expertise, what do you think of humanity’s prospects for developing these AGI impact experts? If they are developed, do you think society will be able to recognize who is an expert and who is not?\n\n\n\n\n\n---\n\n\n**Robin Hanson**: One set of issues has to do with existing institutions and what kinds of experts they tend to select, and what kinds of topics they tend to select. Another set of issues has to do with, if you have time and attention and interest, to what degree can you acquire expertise on any given subject, including AGI impacts, or tech forecasting more generally? A somewhat third subject which overlaps the first two is, if you did acquire such expertise, how would you convince anybody that you had it?\n\n\nI think the easiest question to answer is the second one. Can you learn about this stuff?\n\n\nI think some people have been brought up in a Popperian paradigm, where there’s a limited scientific method and there’s a limited range of topics it can apply to, and you turn the crank and if it can apply to those topics then you have science and you have truth, and you have something you’ve learned and otherwise everything else is opinion, equally undifferentiated opinion.\n\n\nI think that’s completely wrong. 
That is, we have a wide range of intellectual methods out there and a wide range of social institutions that coordinate efforts.\n\n\nSome of those methods work better than others, and then there are some topics on which progress is easier than others, just by the nature of the topic. But honestly, there are very few topics on which you can’t learn more if you just sit down and work at it.\n\n\nOf course, that doesn’t mean you simply stare at a wall. Most topics are related to other topics on which people have learned some things. Whatever your topic is, figure out the related topics, learn about those related topics, learn as many different things as you can about what other people know about the related topic, and then start to intersect and connect them to your topic and work on it.\n\n\nJust blood-sweat work can get you a long way in a very wide range of topics. Of course, just because you can learn about almost anything doesn’t mean you should. It doesn’t mean it’s worth the effort to society or yourself, and it doesn’t mean that there are, for any subject, easy ways to convince other people that you’ve learned something.\n\n\nThere are methods that you can use, where it becomes easier to convince people of things, and you might prefer to focus on those topics or methods where it is easier to convince people that you know something. A related issue is, how impressed are people about you knowing something?\n\n\nMany of the existing institutions like academic institutions or media institutions that identify and credential people as experts on a variety of topics function primarily as ways to distinguish and label people as impressive.\n\n\nPeople want to associate with, connect with, read about, and hear talks from people who are acknowledged as impressive as part of a network of experts who co-acknowledge each other as impressive. 
It’s called status.\n\n\nSome institutions are dominated by people who are mainly trying to acquire credentials as being impressive, so they can seem impressive, be hired for impressive jobs, have punditry positions that are reserved for impressive people, be on boards of directors, etc.\n\n\nAlso, there are standard procedures by which you would do things so people could say, “Yes, he knows the procedures,” and “Yes, you can follow them,” and “Yes, those are damn hard procedures. Anybody who can do that must be damn impressive.”\n\n\nBut there are things you can learn about that it’s harder to become credentialed as impressive at.\n\n\nGenerically, when you just pick any topic in the world because it’s interesting or important in some more basic way, it isn’t necessarily well-suited for being an impressiveness display.\n\n\nWhat about futurism? For various aspects of the future, if you sit down and work at it, you can make progress. It’s not very well-suited for proving that you’ve made progress, because the future takes a while to get here. Of course, when it does get here, it will be too late for you to gain much advantage from finally having been proven as impressive on the subject.\n\n\nI like to compare the future to history. History is also something we are uncertain about. We have to take a lot of little clues and put them together, to draw inferences about the past. We have a lot of very concrete artifacts that we focus on. We can at least demonstrate our impressive command of all those concrete artifacts, and their details, and locations, and their patterns. We don’t have something like that for the future. We will eventually, of course.\n\n\nIt’s much harder to demonstrate your command of the future. You can study the future somewhat by using complicated statistical techniques that we’ve applied to other subjects. That’s possible. 
It still doesn’t tend to demonstrate impressiveness in quite as dramatic a way as applying statistical techniques to something where you can get more data next week that verifies what you just showed in your statistical analysis.\n\n\nI also think the future is where people project a lot of hopes. They’re just less willing to be neutral about it. People are more willing to say, “Yes, sad and terrible things happened in the past, but we get it. We once believed that our founding fathers were great people, and now we can see they were shits.” I guess that’s so, but for the future their hopes are a little harder to knock off.\n\n\nYou can’t prove to them the future isn’t the future they hope it is. They’ve got a lot of emotion wrapped up in it. Often it’s just easier to show you’re being an impressive academic on subjects that most people don’t have a very strong emotional commitment for, because that tends to get in the way.\n\n\n\n\n---\n\n\n**Luke**: I have some hunches about some types of scientific training that might give people different perspectives on how well we can do at medium- to long-term tech forecasting. I wanted to get your thoughts on whether you think my hunches match up with your experience.\n\n\nOne hunch is that, for example, if someone is raised in a Popperian paradigm, as opposed to maybe somebody younger who was raised in a Bayesian paradigm, the Popperian will have a strong falsificationist mindset, and because you don’t get to falsify hypotheses about the future until the future comes, these kinds of people will be more skeptical of the idea that you can learn things about the future.\n\n\nOr in the risk analysis community, there’s a tradition there that’s being trained in the idea that there is risk, which is something that you can attach a probability to, and then there’s uncertainty, which is something that you don’t know enough about to even attach a probability to.
A lot of the things that are decades away would fall into that latter category. Whereas for me, as a Bayesian, uncertainty just collapses into risk. Because of this, maybe I’m more willing to try to think hard about the future.\n\n\n\n\n---\n\n\n**Robin**: Those questions are somewhat framed from the point of view of an academic, or of an academic familiar with relatively technical kinds of skills. But say you’re running a business, and you have some competitors, and you’re trying to decide where will your field go in the next few years, or what kind of products will people like, or you’re running a social organization, and you’re trying to decide how to change your strategy.\n\n\nAnother example: you have some history, and you’re trying to go back and figure out what were your grandfathers doing, or just almost all random questions people might ask about the world. The Popperian stuff doesn’t help at all. It’s completely useless. If you just had any sort of habit of dealing with real problems in the world, you would have developed a tolerance for expecting things not to be provable or falsifiable.\n\n\nYou’d also develop an expectation that there are a range of probabilities for things. You’ll be uncertain, and you’ll have to deal with that. It’s only in a rarefied academic world where it would ever be plausible to deny uncertainty, or to insist on falsification, because that’s just almost never possible or relevant for the vast majority of questions you could be trying to ask.\n\n\n\n\n---\n\n\n**Luke**: Getting back to the question of how someone might develop expertise in, for example, AGI impacts or, let’s just say more broadly, long term tech forecasting…\n\n\nWhat are your thoughts on some of the key training that someone would need to undergo? It could even be mental habits, memorizing certain fields of material where we’ve done a lot of stamp collecting in the scientific sense, etc. 
What’s relevant, do you think, for developing this kind of expertise?\n\n\n\n\n---\n\n\n**Robin**: We live in a world where people spend a substantial fraction of their career learning about stuff, and then it’s only after they’ve learned about a lot of things that they become the most productive about applying the stuff they’ve learned.\n\n\nWe’re just in a world where people have long life spans, and they’re competing with other people with long life spans. You have to expect that if you’re going to be the best at something you will have to spend a large fraction of your life devoted to it. Sorry, no shortcuts. That’s just a message people might not want to hear but that’s the way it goes.\n\n\nYou’ll also have to figure out where and how much to specialize. You can’t learn 20 fields as well as the best people can know them. Sorry. You just won’t have time. You’ll have to be some very unusual person in some way to get anywhere close to that.\n\n\nYou will have to decide what aspects of this future you want to focus on. There are many different aspects and they don’t all come together as a package, where if you learn about one you automatically learn about the others.\n\n\nIn tech forecasting, one category of questions is about what technologies are feasible, in principle. To have a sense for that kind of question, to answer it, you will need to have spent a substantial fraction of your life learning about the kinds of technologies you’re talking about. You’ll also want to have spent some substantial time looking at the histories of other technologies, and how they’ve progressed over time. The typical trajectory of technology and the typical trajectory of innovation, and where it tends to come from, and how many starts tend to be false starts, et cetera.\n\n\nAnother category of questions is about the social implications of the technology. Batteries, say. For that, it requires a whole different set of expertise. 
It can be informed by knowing what a battery is, and how it works, and who might make them, and when they’ll get how good. But in order to forecast social implications of batteries you’ll have to know about societies, and how they work, and what they’re made out of, and there are just a lot fewer people working on that. Then you’ll just have to, probably, learn more different fields. Maybe you could both learn a lot of social science and a lot of battery tech, but that’ll take a lot of time.\n\n\nOne of the main questions about studying anything, including the future, is how to specialize, how to make a division of labor. As usual, like in software, the key to division of labor is interfaces. You want to carve nature at its joints so the interfaces are as simple and robust and modular as possible.\n\n\nYou want to ask, where are there the fewest dependencies between different questions so that you can cut the expertise lines there. You say “you guys over here you work on the answer to this question, and you guys over here you take the answer to that question, and you go do something with it.” The smaller you can make that set of answers and questions, the more modularity and independence you can have, and the more you can separate the work.\n\n\nWhenever you have different teams with an interface, they’ll each have to learn a fair bit about the interface itself, in order to be productive. They’ll have to know what the interface means and where it comes from. What parts of it are uncertain? What parts of it change fast? What parts of it are people serious about, and all sorts of things on an interface, and what do they tend to lie about? That’s part of the search.\n\n\nOne obvious, very plausible interface is between people who predict that particular devices will be available at particular points in time for particular costs, with particular capabilities, and other people who talk about what the hell that means for the rest of society.
That seems to me a relatively tight interface, compared to the other interfaces we could choose here.\n\n\nOf course, within technology you could divide it up. Somebody might know lithium batteries, and they just know lithium batteries really well, and they can talk about the future of lithium batteries.\n\n\nBut if graphene batteries are coming down the pike, they’re not going to understand that very well. Somebody else might specialize in graphene batteries, or just specialize in knowing the range of kinds of batteries available, and what might happen to them.\n\n\nSomebody else might specialize more in the distribution of technological innovation. When you draw a chart of capacity over time, how often does that chart follow a straight line or something else, and how misleading can it be when you see a short-term thing, etc.? Just a sense for a range of different kinds of histories of technologies, and what sort of variety of paths we see. You could specialize in that.\n\n\nBut if you’re in an area of futurism, where there aren’t very many other people doing it, you should expect things to be more like a startup where you just have to be flexible. Not because being flexible is somehow intrinsically more productive in general. It’s because it’s required when there’s a bunch of things to be done and not very many people to do them. You will, by necessity, have to acquire a wider range of skills, a wider range of approaches, consider a wider range of possibilities, accept more often restructuring, more often changing goals.\n\n\nI would love it if some day serious futurism were as detailed and specialized as history. Historians have broken up the field of history into lots of different areas and times of history, and a lot of different aspects.
Each person can see a previous track record of people with careers in history, and what they focused on, and the set of open questions.\n\n\nThen they can go into history and they can take a particular area and know what a career in history looks like, and know what other people in that area, what kind of skills they acquired, and what it took to become impressive. If the future became that specialized then that’s what it would be like for the future too.\n\n\nIt just happens not to be that way, at the moment, because there’s just a lot fewer people working on it. Then you’ll just have to, probably, learn more different fields than you otherwise would, learn more different skills than you otherwise would, accept more changing of your mind about what was important, and what were the key questions, just because there’s not very many people doing serious futurism.\n\n\n\n\n---\n\n\n**Luke**: Robin, you used this term “serious futurism,” which happens to be the term I’ve been using for futurists who are trying to figure it out as opposed to meet the demand for morality tales about the future, or meet a demand for hype that fuels excited talk about, “Gee whiz, cool stuff from the future,” etc.\n\n\nWhen I try to do serious futurism, most of the sources I encounter are not trying to meet the demand of figuring out what’s true about the future.
I have to weed through a lot of material that’s meeting other demands, before I find anything that’s useful to my project of serious futurism.\n\n\nI wonder, from your perspective, what are your thoughts on what one would do if you wanted to try to make serious futurism more common, get people excited about serious futurism, show them the value of the project, get them to invest in it so that there is more of a field, so there are more people doing all the different things that need to be done in order to figure out what the future is going to be like?\n\n\n\n\n---\n\n\n**Robin**: There is this world of people who think of themselves as serious futurists. I’ve had limited contact with them. Before we go into talking about them, I think it would help to just bracket this by noticing that there are many other intellectual areas which have had a similar problem, which I will phrase as widespread public interest, limited academic interest, and then an attempt to carve out a serious version of the field.\n\n\nTwo examples, relatively extreme examples, are sex and aliens. Both of these are subjects that people have long found fascinating to talk about. And for the most part, academics avoided both of them for a long time. Then some academics, at one point, tried to carve out an area of being serious about it.\n\n\nIn both of those cases, and, I think, in lots of other cases, you can see what the key problem is. As soon as you start to seriously engage the subject, if you don’t do it in a way that really clearly distinguishes how you’re doing it from how all those other people are doing it, you look like them. Then you acquire, in people’s minds, all the attributes of them which, of course, include not being serious or worthy of attention.\n\n\nFor example, for aliens the first big method you can use to distinguish yourself is to just search the sky for signals with huge radio telescopes. You’re not going to talk about all the other aspects of aliens.
You’re just going to be hard-science radio telescope guy, searching the sky for signals. What distinguishes you from everybody else talking about aliens?\n\n\nFirst of all, you have a radio telescope and they don’t. Second of all, you know how to do lots of complicated signal processing. Thirdly, there’s other people who do signal processing, and you’re just inheriting and applying their methods, so that’s a standard thing. It’s complicated to learn that, and you could have gone to the schools where you learned that stuff, and you can pass muster with those people when it comes to knowing how to build radio telescopes and do the signal processing. You’re just applying that to aliens. Hey. That’s just another subject.\n\n\nWith sex, the way they did it was they said, “Well, we’re going to put people in a room and they’ll be having sex. We’ll be watching them in all the standard ways we ever watch anybody doing anything, as a social scientist. We’re going to have the same sort of selection criteria, and methods of recording things, and recording variations, and things like that. We’re just going to do it in a very big, standardized way in order to show we’re different, we’re serious. All those other people like to talk about sex all the time. But they couldn’t be doing this. They don’t even know what the words we’re using mean. They’re not one of us. We’re not one of them.”\n\n\nOf course, that means that you are, in some sense, throwing away all the data people had about sex, or at least setting it aside. You’re saying, “All these things people are claiming about sex, that’s all coming from these ordinary conversations about sex. That’s not good enough for us so we’re going to wipe the slate clean and just see what we can get with our own new data.”\n\n\nIn futurism, there are a bunch of futurists who are like inspirational speaker futurists, who just talk about all the cool stuff coming down the line and how society will change and that sort of thing.
Then there’s the sort of academic futurists who see themselves as distinctly different from that.\n\n\nMany of them focus on collecting data series about previous technologies or predictions, and then they project those data series forward and do statistical projection and prediction. They see themselves as serious academics, and one of the ways they distinguish themselves from these other futurists is that if they don’t have a data series for it then they’re not going to talk about it. It’s not in the realm of their kind of futurism.\n\n\nFor my work, I’m taking a risky strategy, which I don’t have any strong reason to expect to succeed, of simply having been a social science professional for a long time, taking a lot of detailed social science knowledge, that most people don’t know, and applying it to my particular scenario, using social science lingo and concepts, and basically saying, “Doesn’t this look different to you?”\n\n\nBasically I’m saying, “A social scientist, when they look at this they will recognize that this is using professional, state of the art concepts and applying them to this particular subject.”\n\n\nI’ve gotten, certainly, some people, reading my draft, to say, “You’re coming up with a lot more detail than I would have thought possible.” That’s sort of what I’m proud of. A lot of people look at a scenario like this and they kind of wave their hands and say, “It doesn’t look like we can figure out anything about that. 
That’s just too hard and complicated.” I’m going to come back and say, “Actually, it’s one of those things that is hard but not impossible, so it just takes more work.”\n\n\n\n\n---\n\n\n**Luke**: You mentioned this skepticism that many people have about our ability to figure out things, at all, about the far future, or figure them out in any amount of detail.\n\n\nOne quote that comes to mind is from Vaclav Smil, from his book [*Global Catastrophes and Trends*](http://www.amazon.com/Global-Catastrophes-Trends-Fifty-ebook/dp/B004GEC5LS/), where he writes specifically about AI:\n\n\n\n> If the emergence of superior machines… is only a matter of time then all we can do is wait passively to be eliminated. If such developments are possible, we have no rational way to assess the risks. Is there a 75 percent or 0.75 percent chance of self replicating robots taking over the Earth by 2025…?\n> \n> \n\n\nThis is a very pessimistic, fatalistic view about our ability to forecast AI in particular. Actually, he says the same thing about nanotechnology. What do you think about this?\n\n\n\n\n---\n\n\n**Robin**: I was a physics student and then a physics grad student. In that process, I think I assimilated what was the standard worldview of physicists, at least as projected on the students. That worldview was that physicists were great, of course, and physicists could, if they chose to, go out to all those other fields, that all those other people keep mucking up and not making progress on, and they could make a lot faster progress, if progress was possible, but they don’t really want to, because that stuff isn’t nearly as interesting as physics is, so they are staying in physics and making progress there.\n\n\nFor many subjects, they don’t think it’s even possible to learn anything, to know anything.
For physicists, the usual attitude towards social science was basically there’s no such thing as social science; there can’t be such a thing as social science.\n\n\nSurely you can look at some little patterns but because you can’t experiment on people, or because it’ll be complicated, or whatever it is, it’s just not possible. Partly, that’s because they probably tried for an hour, to see what they could do, and couldn’t get very far.\n\n\nIt’s just way too easy to have learned a set of methods, see some hard problem, try it for an hour, or even a day or a week, not get very far, and decide it’s impossible, especially if you can make it clear that your methods definitely won’t work there.\n\n\nYou don’t, often, know that there are any other methods to do anything with because you’ve learned only certain methods.\n\n\nIt’s very hard to say that something can’t be learned. It’s much easier to say that you haven’t figured anything out or, perhaps, that a certain kind of method runs out there. It’s easier to imagine trying all the different paths you can use in a certain method, even though that’s pretty hard too.\n\n\nBut, to be able to say that nobody can learn anything about this, in order to say that with some authority, you have to have some understanding of all the methods out there, and what they can do, and have tried it for a while.\n\n\nAcademics tend to know their particular field very well and its methods, and then other fields kind of fade away and blur together. If you’re a physicist, the difference between physics and chemistry is overwhelmingly important. The difference between sociology and economics seems like terminology or something, and vice-versa. If you’re an economist, the difference between economics and sociology seems overwhelming, and the difference between physics and chemistry seems like picky terminology, which just means that most people don’t know very many methods.
They don’t know very many of all the different things you can do.\n\n\nAs one of the rare people who have spent a lot of time learning a lot of different methods, I can tell you there are a lot out there. Furthermore, I’ll stick my neck out and say most fields know a lot. Almost all academic fields where there’s lots of articles and stuff published, they know a lot.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Robin!\n\n\nThe post [Robin Hanson on Serious Futurism](https://intelligence.org/2013/11/01/robin-hanson/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-11-01T17:00:50Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a43d4f4e4359e3440bd04f9579f10623", "title": "New Paper: “Embryo Selection for Cognitive Enhancement”", "url": "https://intelligence.org/2013/10/30/new-paper-embryo-selection-for-cognitive-enhancement/", "source": "miri", "source_type": "blog", "text": "[![IES first page](https://intelligence.org/wp-content/uploads/2013/10/embryselection-cover.gif)](https://intelligence.org/files/EmbryoSelection.pdf)During his time as a MIRI research fellow, Carl Shulman co-authored (with [Nick Bostrom](http://nickbostrom.com/)) a paper that is now available as a preprint, titled “[Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?](https://intelligence.org/files/EmbryoSelection.pdf)”\n\n\nAbstract:\n\n\n\n> Human capital is a key determinant of personal and national outcomes, and a major input to scientific progress. It has been suggested that advances in genomics will make it possible to enhance human intellectual abilities. One way to do this would be via embryo selection in the context of *in vitro* fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. 
We find that embryo selection, on its own, may have significant impacts, but likely not drastic ones, over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology, stem cell-derived gametes, which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.\n> \n> \n\n\nThe last sentence refers to “iterated embryo selection” (IES), a future technology first described by MIRI [in 2009](http://theuncertainfuture.com/faq.html#7). This technology has significant strategic relevance for Friendly AI (FAI) development because it might be the only intelligence amplification (IA) technology (besides [WBE](http://en.wikipedia.org/wiki/Mind_uploading)) to have large enough effects on human intelligence to substantially shift our odds of getting FAI before arbitrary AGI, if AGI is [developed sometime this century](http://intelligence.org/2013/05/15/when-will-ai-be-created/).\n\n\nUnfortunately, it remains unclear whether the arrival of IES would shift our FAI chances positively or negatively. On the one hand, a substantially smarter humanity may be wiser, and more likely to get FAI right. On the other hand, IES might accelerate AGI relative to FAI, since AGI is more parallelizable than FAI. 
(For more detail, and more arguments pointing in both directions, see [Intelligence Amplification and Friendly AI](http://lesswrong.com/lw/iqi/intelligence_amplification_and_friendly_ai/).)\n\n\nThe post [New Paper: “Embryo Selection for Cognitive Enhancement”](https://intelligence.org/2013/10/30/new-paper-embryo-selection-for-cognitive-enhancement/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-30T09:47:13Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ae51e3767fe6b123e38ab9aa11ca73b2", "title": "Markus Schmidt on Risks from Novel Biotechnologies", "url": "https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/", "source": "miri", "source_type": "blog", "text": "![Markus Schmidt portrait](https://intelligence.org/wp-content/uploads/2013/10/Marcus-Schmidt.jpg) [Dr. Markus Schmidt](http://www.markusschmidt.eu/) is founder and team leader of [Biofaction](http://www.biofaction.com/), a research and science communication company in Vienna, Austria. 
With an educational background in electronic engineering, biology and environmental risk assessment he has carried out environmental risk assessment and safety and public perception studies in a number of science and technology fields (GM-crops, gene therapy, nanotechnology, converging technologies, and synthetic biology) for more than 10 years.\n\n\nHe was/is coordinator/partner in several national and European research projects, for example [SYNBIOSAFE](http://synbiosafe.eu/), the first European project on safety and ethics of synthetic biology (2007-2008), COSY on communicating synthetic biology (2008-2009), TARPOL on industrial and environmental applications of synthetic biology (2008-2010), CISYNBIO on the depiction of synthetic biology in movies (2009-2012), a joint Sino-Austrian project on synthetic biology and risk assessment (2009-2012), or ST-FLOW on standardization for robust bioengineering of new-to-nature biological properties (2011-2015).\n\n\nHe produced science policy reports for the Office of Technology Assessment at the German Bundestag (on GM-crops in China), and the Austrian Ministry of Transport, Innovation and Technology (nanotechnology and converging technologies). He served as an advisor to the European Group on Ethics (EGE) of the European Commission, the US Presidential Commission for the Study of Bioethical Issues, the J Craig Venter Institute, the Alfred P. Sloan Foundation, and Bioethics Council of the German Parliament as well as to several thematically related international projects. 
Markus Schmidt is the author of several peer-reviewed articles, he edited a special issue and two books about synthetic biology and its societal ramifications, and produced the first documentary film about synthetic biology.\n\n\nIn addition to the scientific work, he organized a Science Film Festival and produced an art exhibition (both 2011) to explore novel and creative ideas and interpretations on the future of biotechnology.\n\n\n\n**Luke Muehlhauser**: I’ll start by giving our readers a quick overview of [synthetic biology](http://en.wikipedia.org/wiki/Synthetic_biology), the “design and construction of biological devices and systems for useful purposes.” As explained in [a 2012 book you edited](http://www.amazon.com/Synthetic-Biology-Markus-Schmidt/dp/3527331832/), major applications of synthetic biology include:\n\n\n* **Biofuels**: ethanol, algae-based fuels, bio-hydrogen, microbial fuel cells, etc.\n* **Bioremediation**: wastewater treatment, water desalination, solid waste decomposition, CO2 recapturing, etc.\n* **Biomaterials**: bioplastics, bulk chemicals, cellulosomes, etc.\n* **Novel developments**: protocells and xenobiology for the production of novel cells and organisms.\n\n\nBut in addition to promoting the useful applications of synthetic biology, you also [speak](http://www.markusschmidt.eu/?page_id=27) and [write](http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf) extensively about the potential *risks* of synthetic biology. Which risks from novel biotechnologies are you most concerned about?\n\n\n\n\n\n---\n\n\n**Markus Schmidt**: It doesn’t come as a surprise that a new and emerging technology that is hallmarked as a game changer for the bioeconomy also has the potential for causing harm.\n\n\nTraditionally we can see direct risks related to safety and security. Safety deals with potential unintended consequences, such as accidents, while security refers to harm that is caused intentionally, such as bioterrorism.
Right now, safety issues of SB are mostly covered by existing regulations and practices developed for genetic engineering (GE). But as SB is developing beyond the scope of GE, first of all it deals with genetic systems rather than a set of one or few genetic elements, second it attempts to apply true engineering principles to biology (such as standardization, modularization, hierarchies of abstraction, and separation of design and fabrication); and third it doesn’t only take what nature provides in terms of biochemical systems but attempts to go beyond that. So one concern is that while right now GE regulations seem to be adequate, in a not so distant future GE risk analysis practices will be outdated and we might run into difficulties assessing the safety risks of SB products. Another risk stems from one of the aims of SB to make biology easier to engineer. While this is predominantly a positive approach, it also brings with it the fact that more and more people outside the elite institutions will be able to use SB, such as amateur biologists. While amateurs have overwhelmingly good intentions, many of them do not have a background or training in biosafety and thus have a higher risk for accidents in their garage labs. A third point is the design of new-to-nature “xenobiological” systems where alternative biochemical structures are used to run biological operations, such as additional amino acids, different types of nucleic acids etc. but also novel types of cells or protocells that behave differently than natural cells. The introduction of these alternative systems is of great interest to science, society and industry, but needs a careful assessment in order not to cause unwanted effects.\n\n\nSecurity comes with a different set of problems.
Some experts believe that terrorists could start to use SB to enhance existing or develop new pathogens, or get hold of them via DNA synthesis.\n\n\nApart from these classical risks, we might also see indirect risks in other areas, such as the changes SB could cause to the socioeconomic structure. For example, one might ask if the use of this technology is going to benefit most people or just a few. Questions such as these, however, are not unique to SB but come up in every debate about technologies that promise huge changes.\n\n\n\n\n---\n\n\n**Luke**: One problem for the future of biosecurity is that it seems likely that advanced bioweapons will be cheaper to make and harder to track than nuclear fissile material. Thus, states and terrorists might find it easier to threaten groups of people — or maybe the world — with (say) home-built superviruses than with home-built nuclear weapons. And yet the release of a carefully designed supervirus could be as devastating, or more devastating, than a nuclear detonation. What’s your perspective on this?\n\n\n\n\n---\n\n\n**Markus**: The issue of bioterrorism has been tackled by several high level national security groups, such as the NSABB in the US. While there is a certain, although small, risk of people using biotech for illicit purposes, measures have been taken to keep that from happening. One major concern, e.g., was mail-ordered viruses from DNA synthesis companies. Following debates among the companies, governments and other stakeholders, there is now an effective screening mechanism in place to prevent people from ordering pathogenic sequences found on the internet.\n\n\nApart from that, it would be extremely difficult to make a new supervirus. The ability to make new forms of life or viruses is still very limited, and will be for the time being. But the issue is on the agenda for national and international security agencies so that any developments and innovations with a dual-use potential are monitored (e.g.
by the UN and others).\n\n\n\n\n---\n\n\n**Luke**: What seem to be the most important factors in getting regulatory bodies and other policy-makers to produce effective policy for biotechnological risk mitigation?\n\n\n\n\n---\n\n\n**Markus**: Innovations in science and technology tend to outpace the speed at which regulatory bodies operate. In other words, policy-makers run the risk of being too slow to react to new challenges to the regulatory system, and once they react, it still takes some time before adapted or new regulations or guidelines are actually in place. In a time when the bioeconomy is believed to hold great potential for Europe or the USA, regulatory bodies cannot afford to hold up research and innovation once the techno-science goes beyond the limits of established regulations. So a real-time and forward-looking assessment by policy-makers is needed, as demonstrated, e.g., by the Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) of the European Commission, which is currently analysing the need to update the existing biotech regulation on synthetic biology.[1](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/#footnote_0_10558 \"Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) in association with Scientific Committee on Consumer Safety (SCCS), Scientific Committee on Health and Environmental Risks (SCHER). request for a joint scientific opinion: on Synthetic Biology\")\n\n\nAnother important point is the acknowledgement of the convergence of different technologies into one. In the case of synthetic biology we see the confluence of biotech, nanotech, IT, and other areas into one converging field. So far the biotech and IT regulations come from very different backgrounds with different aims and distinct cultures, and path dependencies tend to lock in the opportunities seen by the regulatory community. 
Future regulations of synthetic biology must take this convergence into account.\n\n\nA third aspect is a broader stakeholder consultation, a more participatory form of arriving at conclusions compared to the first genetic engineering rules implemented in the mid-1970s.\n\n\n\n\n---\n\n\n**Luke**: My impression is that government ethics & safety committees like SCENIHR are rarely able to spur policy changes that are implemented quickly enough for regulators to “keep up” with new technological developments. Is that your impression as well? And if it is, is there anything different about SCENIHR that should give us more hope than might usually be justified?\n\n\n\n\n---\n\n\n**Markus**: No. In principle, all the committees that advise governments on new science and technologies face the problem of keeping up to date with the pace of research and innovation. In synthetic biology, quite remarkably, a lot of committees from different countries and with different thematic focuses have taken up synbio as a case study. So altogether I think that synbio is reasonably well covered.\n\n\nLet’s not forget that although synbio has been promised to be a “game-changer”, the “next industrial revolution”, etc., real breakthroughs that impact the market are yet to come. With the few exceptions where a quick response by governments was necessary (such as in DNA synthesis and biosecurity), the “speed kills” argument doesn’t weigh as heavily as the need to provide sustainable medium- to long-term governance frameworks.\n\n\nI think the following statement is ascribed to Bill Gates: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.”\n\n\n\n\n---\n\n\n**Luke**: Interesting. 
Can you give an example or two of the kind of ethics & safety committee success that you hope for with SCENIHR?\n\n\n\n\n---\n\n\n**Markus**: What I would like to see as an outcome of the SCENIHR opinion on synbio is a statement regarding where, when, and how the risks of synbio will go beyond those of genetically modified organisms. How should risk assessment be adapted or amended so we can continue to have a robust assessment of the risks involved? Also, I would like to see an analysis of the potential for novel built-in safety locks (a.k.a. semantic containment[2](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/#footnote_1_10558 \"Schmidt M, de Lorenzo V. 2012. Synthetic constructs in/for the environment: Managing the interplay between natural and engineered Biology. FEBS Letters. Vol. 586: 2199-2206\"), genetic firewall[3](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/#footnote_2_10558 \"Schmidt M. 2010. Xenobiology: a new form of life as the ultimate biosafety tool. BioEssays. Vol. 32(4): 322-331\")) and recommendations on how to use them, so policy makers, scientists, funding agencies, and industry have a clear idea of which applications built-in safety locks can be used for, what additional level of safety can be provided, and which research, innovation, and governance gaps have to be filled in order to have a fully operational safety lock available.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Markus!\n\n\n\n\n---\n\n1. [Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) in association with Scientific Committee on Consumer Safety (SCCS), Scientific Committee on Health and Environmental Risks (SCHER). request for a joint scientific opinion: on Synthetic Biology](http://ec.europa.eu/health/scientific_committees/docs/synthetic_biology_mandate_en.pdf)\n2. [Schmidt M, de Lorenzo V. 2012. 
Synthetic constructs in/for the environment: Managing the interplay between natural and engineered Biology. FEBS Letters. Vol. 586: 2199-2206](http://www.markusschmidt.eu/pdf/12-02-Synthetic-constructs.pdf)\n3. [Schmidt M. 2010. Xenobiology: a new form of life as the ultimate biosafety tool. BioEssays. Vol. 32(4): 322-331](http://www.markusschmidt.eu/pdf/Xenobiology-Schmidt_Bioessays_201004.pdf)\n\nThe post [Markus Schmidt on Risks from Novel Biotechnologies](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-28T15:48:39Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a110188ad99bd845fab0aaa0170bc4cc", "title": "Bas Steunebrink on Self-Reflective Programming", "url": "https://intelligence.org/2013/10/25/bas-steunebrink-on-sleight/", "source": "miri", "source_type": "blog", "text": "![Bas Steunebrink portrait](https://intelligence.org/wp-content/uploads/2013/10/Steunebrink_w150.jpg)[Bas Steunebrink](http://www.idsia.ch/~steunebrink/) is a postdoctoral researcher at the Swiss AI lab IDSIA, as part of [Prof. Schmidhuber’s](http://www.idsia.ch/~juergen/) group. He received his PhD in 2010 from Utrecht University, the Netherlands. Bas’s dissertation was on the subject of artificial emotions, which fits well in his continuing quest of finding practical and creative ways in which general intelligent agents can deal with time and resource constraints. A recent [paper](http://www.idsia.ch/~steunebrink/Publications/AGI13_resource-bounded.pdf) on how such agents will naturally strive to be effective, efficient, and curious was awarded the [Kurzweil Prize](http://www.agi-conference.org/2013/prizes/) for Best AGI Idea at AGI’2013. 
Bas also has a great interest in anything related to self-reflection and meta-learning, and all “meta” stuff in general.\n\n\n\n**Luke Muehlhauser**: One of your ongoing projects has been a Gödel machine (GM) implementation. Could you please explain (1) what a Gödel machine is, (2) why you’re motivated to work on that project, and (3) what your implementation of it does?\n\n\n\n\n---\n\n\n**Bas Steunebrink**: A GM is a program consisting of two parts running in parallel; let’s name them Solver and Searcher. Solver can be any routine that does something useful, such as solving task after task in some environment. Searcher is a routine that tries to find beneficial modifications to both Solver and Searcher, i.e., to any part of the GM’s software. So Searcher can inspect and modify any part of the Gödel Machine. The trick is that the initial setup of Searcher only allows Searcher to make such a self-modification if it has a proof that performing this self-modification is beneficial in the long run, according to an initially provided utility function. Since Solver and Searcher are running in parallel, you could say that a third component is necessary: a Scheduler. Of course Searcher also has read and write access to the Scheduler’s code.\n\n\n![Godel Machine: diagram of scheduler](https://intelligence.org/wp-content/uploads/2013/10/Steunebrink_gm_diagram_scheduler-300x160.png)\n\nIn the old days, writing code that makes self-modifications was seen as a nice way of saving on memory, which was very expensive. But now that memory is cheap and making self-modifications has turned out to be error-prone, this technique has been mostly banished. Modern programming languages abstract away from internals and encapsulate them, such that more guarantees about safety and performance can be given. 
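Before going further, the Solver/Searcher structure described above can be illustrated in miniature. This is only a toy sketch under stated assumptions, not MIRI's or IDSIA's actual code: the class name, the string-valued "code", and especially the `proves_beneficial` predicate (standing in for a genuine formal proof of long-run benefit) are all hypothetical.

```python
# Toy sketch of the Gödel Machine structure described above. The "proof"
# step is stubbed out as a simple comparison; a real Searcher would need a
# formal proof that the self-modification is beneficial in the long run.

class GodelMachineSketch:
    def __init__(self, utility):
        self.utility = utility            # initially provided utility function
        self.solver_code = "solver-v1"    # stand-in for modifiable Solver code

    def solver_step(self, task):
        # Solver: any routine that does something useful, task after task.
        return f"{self.solver_code}({task})"

    def proves_beneficial(self, modification):
        # Stub for the proof search over candidate self-modifications.
        return self.utility(modification) > self.utility(self.solver_code)

    def searcher_step(self, candidate):
        # Searcher: performs a self-modification only if proven beneficial.
        if self.proves_beneficial(candidate):
            self.solver_code = candidate
            return True
        return False

# A toy utility: prefer higher version numbers in the Solver code.
gm = GodelMachineSketch(utility=lambda code: int(code.split("-v")[1]))

gm.searcher_step("solver-v0")   # rejected: not provably better
gm.searcher_step("solver-v2")   # accepted: provably better
```

The Scheduler from the figure is omitted here; in the real design it runs Solver and Searcher in parallel and is itself open to modification by the Searcher.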
But now that the techniques for automated reasoning developed by the theorem proving and artificial intelligence communities are maturing, a new opportunity for self-reflection emerges: to let the machine itself do all the reasoning and decide when, where, and how to perform self-modifications.\n\n\nThe construction of an actual implementation of a Gödel Machine is currently ongoing work, and I’m collaborating and consulting with MIRI in this endeavor. The technical problem of obtaining a system with anytime self-reflection capabilities can be considered solved. This includes answering the nontrivial question of what it means to inspect the state and behavior of actively running, resource-constrained code, and how it can safely be modified, at any time. Current work focuses on encoding the operational semantics of the system within a program being run by the system. Although these operational semantics are extremely simple, there remain possible theoretical hurdles, such as the implications of Löb’s Theorem ([Yudkowsky & Herreshoff 2013](https://intelligence.org/files/TilingAgents.pdf)).\n\n\nIt will be interesting to see how well current Automated Theorem Provers (ATPs) hold up in a Gödel Machine setting, because here we are trying to formulate proofs about code that is actively running, and which contains the ATP itself, and which is subject to resource constraints. There may be interesting lessons in here as well for the ATP developers and community.\n\n\n\n\n---\n\n\n**Luke**: You say that “the technical problem of obtaining a system with anytime self-reflection capabilities can be considered solved.” Could you give us a sketch of the solution to which you’re referring?\n\n\n\n\n---\n\n\n**Bas**: There are two issues at play here. First, there is the issue of deciding what you will see upon performing self-reflection. 
For example, we can ask whether a process can see the call stack of other currently running processes, or even its own call stack, and what operations can be performed on such a call stack to inspect and modify it. But second, performing self-reflection is an act in time: when a machine inspects (and modifies) its own internals, this happens at some moment in time, and doing so takes some time. The question here is, can the machine decide when to perform self-reflection?\n\n\nWe can envisage a machine which is performing duties by day, and every night at 2:00am is allowed access to several internal registers to make adjustments based on the events of the past day, in order to be better prepared for the next day. One could argue that this machine has full self-reflective capabilities, because look, it can meddle around without restrictions every day. But it still lacks a certain degree of control, namely in time.\n\n\nWhat I have done is design a mechanism by which a machine can decide with very high precision when it wants to inspect a certain process. It involves dividing the operation of the machine into extremely small blocks, with the possibility for interruption between any two blocks. On an average desktop this gives about 50 million opportunities for initiating self-reflection per second, which I think is fine-grained enough for the anytime label.\n\n\nBesides the anytime issue of self-reflection, there are many other (technical) questions that have to be answered when implementing a self-reflective system such as a Gödel Machine. For example: should (self-)reflection interrupt the thing it is reflecting on, or can it look at something that is changing while you look at it? In an extreme case, you could be looking at your own call stack, where every probing instruction directly affects the thing being probed. Is this a powerful ability, or merely dangerous and stupid? 
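The fine-grained interruption mechanism described above (tiny execution blocks, with an opportunity for self-reflection between any two) can be sketched in miniature. Everything here is an illustrative toy with invented names, not Sleight's actual machinery:

```python
# Execution divided into tiny blocks; between any two blocks the machine may
# pause and run a self-reflection routine that inspects the live state.

def run(blocks, reflect_when=None, on_reflect=None):
    state = {"pc": 0, "acc": 0}
    while state["pc"] < len(blocks):
        # An interruption opportunity exists between any two blocks:
        if reflect_when is not None and reflect_when(state):
            on_reflect(state)
        blocks[state["pc"]](state)
        state["pc"] += 1
    return state

# Four tiny blocks, each doing one small step of work (i=i avoids Python's
# late-binding closure pitfall inside the comprehension).
prog = [lambda s, i=i: s.update(acc=s["acc"] + i) for i in range(1, 5)]

# Reflect exactly when the program counter reaches 2: snapshot the state.
snapshots = []
final = run(prog,
            reflect_when=lambda s: s["pc"] == 2,
            on_reflect=lambda s: snapshots.append(dict(s)))
```

The point of the design is that the reflection hook can fire between *any* two blocks, so the machine controls the timing of self-inspection, not just its content.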
In the self-reflective system that I have implemented, called Sleight, the default behavior is interruption, but the fluid case is very easy to get to, if you want. Basically, what it requires is interruption and immediate resumption, while saving a pointer to the obtained internal state of the resumed process. Now you can watch yourself change due to you watching yourself… It’s a bit like watching a very high-resolution, live brain scan of your own brain, but with the added ability of changing connectivity and activation anywhere at any time.\n\n\n\n\n---\n\n\n**Luke**: Another project you’ve been working on is called Sleight. What is Sleight, and what is its purpose?\n\n\n\n\n---\n\n\n**Bas**: Sleight is a self-reflective programming system. It consists of a compiler from [Scheme](http://en.wikipedia.org/wiki/Scheme_%28programming_language%29) to Sleight code, and a virtual machine (VM) for running Sleight code. The VM is what is called a *self-reflective interpreter*: it allows code being run to inspect and modify internal registers of the VM. These registers include the call stack and all variable assignments. Since all code is assigned to variables, Sleight code has complete access to its own source code at runtime. Furthermore, Sleight offers a mechanism for scheduling multiple processes. Of course, each process can inspect and modify other processes at any time. Protection and other safety issues have been well taken care of — although a process can easily cripple itself, the VM cannot be crashed.\n\n\nThe purpose of Sleight is to function as a platform for *safely* experimenting with self-reflective code, and for implementing a Gödel Machine in particular. The scheduling capabilities of the Sleight VM make it especially suitable for setting up the Solver, Searcher, and Scheduler — see the figure above.\n\n\n\n\n---\n\n\n**Luke**: A typical approach to reflective interpreters is the “reflective tower” of e.g. 
3-Lisp ([Rivieres & Smith 1984](http://commonsenseatheism.com/wp-content/uploads/2013/10/Rivieres-Smith-The-implementation-of-procedurally-reflective-languages.pdf)). Interestingly, Sleight collapses the reflective tower into what you call a “reflective bungalow.” Could you explain what this means, and why it’s a good idea?\n\n\n\n\n---\n\n\n**Bas**: The reflective tower is a construct that in theory is infinite in two directions, like a tower with no ground floor and no top floor. I love the following quote from [Christian Queinnec](http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/LiSP.html) on the concept:\n\n\n\n> In the mid-eighties, there was a fashion for reflective interpreters, a fad that gave rise to a remarkable term: “reflective towers.” Just imagine a marsh shrouded in mist and a rising tower with its summit lost in gray and cloudy skies—pure Rackham! […] Well, who hasn’t dreamed about inventing (or at least having available) a language where anything could be redefined, where our imagination could gallop unbridled, where we could play around in complete programming liberty without trammel nor hindrance?\n> \n> \n\n\nThe infinite reflective tower, as so poetically described by Queinnec in the quote above, can be visualized as an infinite *nesting of interpreters*, each interpreter being run by another interpreter. In the tower metaphor, each interpreter is being run by an interpreter one floor above it, and each interpreter can start running a new interpreter, adding a floor below itself. Going one floor up is then called *reification*, whereas going one floor down is called *reflection*. Theoretically the tower can be infinitely high, but in a practical implementation the tower must have a top, which is formed by a black-box interpreter whose internal state cannot be reified, as is the case in [this pedagogical example](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.446). 
The point of having access to the interpreter running one floor up is that you can redefine any function and change your own operational semantics at will.\n\n\nYet I consider the theoretical ideal of the infinite tower not so ideal at all: if there is always more internal state that can be reified, *ad infinitum*, then the operational semantics of the machine as a whole must be unfounded or circular. But with a simple trick we can make do with just one interpreter and no nesting. We can do this by maintaining a special data structure: the *metacontinuation*. The metacontinuation is a “tower” or stack of interpreter states. The act of reification, which in the infinite tower meant moving one floor up, now means popping one interpreter state off the metacontinuation, and running the sole interpreter using that state. Likewise, the act of reflection, which in the infinite tower meant adding a floor below, now means pushing the current interpreter state onto the metacontinuation. This way we can imitate the effect of an infinite tower of interpreters without any need for nesting—it just needs an extra stack, effectively becoming a “reflective bungalow.”\n\n\nAlthough the metacontinuation is not an invention of mine—see also “Brown” ([Friedman & Wand, 1984](http://www.cs.indiana.edu/pub/techreports/TR161.pdf); [Wand & Friedman, 1986](http://www.cs.indiana.edu/pub/techreports/TR196.pdf)) and “Blond” ([Danvy & Malmkjær, 1988](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.2018&rep=rep1&type=pdf))—there is a whole lot of fun to be had with it. For example, in Sleight, I have extended the metacontinuation concept with an extremely fine-grained interruption mechanism, such that self-reflection can be performed *at any time*. 
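The push/pop behavior of the metacontinuation can be sketched directly. This is a toy model under stated assumptions (interpreter state reduced to a plain dict; class and method names invented for illustration, not Sleight's interface):

```python
# The "reflective bungalow": one interpreter plus a stack of suspended
# interpreter states (the metacontinuation), instead of nested interpreters.

class BungalowInterpreter:
    def __init__(self, state):
        self.state = state            # the sole interpreter's current state
        self.metacontinuation = []    # stack of suspended interpreter states

    def reflect(self, new_state):
        # Tower metaphor, "add a floor below": push the current interpreter
        # state onto the metacontinuation and run with the new state.
        self.metacontinuation.append(self.state)
        self.state = new_state

    def reify(self):
        # Tower metaphor, "move one floor up": pop the most recently
        # suspended interpreter state and resume running with it.
        self.state = self.metacontinuation.pop()
        return self.state

interp = BungalowInterpreter({"running": "user-program"})
interp.reflect({"running": "meta-level"})   # suspend, go meta
interp.reify()                              # resume the suspended state
```

Because reification and reflection are just stack operations on stored states, the effect of an arbitrarily tall tower is imitated with a single interpreter and one extra stack.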
It also facilitates the implementation of schedulers for “green” threads, which is helpful for a Gödel Machine implementation.\n\n\n\n\n---\n\n\n**Luke**: Some of MIRI’s research frames questions of self-reflection in terms of successor agents (e.g. [Yudkowsky & Herreshoff 2013](https://intelligence.org/files/TilingAgents.pdf)). You might call this *ex-vivo* self-modification vs. Sleight’s more *in-vivo* methods (reflecting on oneself in real-time). What are the benefits from operating in the in-vivo framework vs an ex-vivo one?\n\n\n\n\n---\n\n\n**Bas**: It seems to me that there are two ways to look at the in-vivo versus ex-vivo frameworks: with respect to *inspection* and with respect to *modification*.\n\n\nAn agent with in-vivo inspection capabilities may be said to have read access to its own actual internal registers, whereas an agent with ex-vivo inspection capabilities may only be able to request occasional snapshots (copies) of its internal registers to be made available for inspection. In the in-vivo inspection case, the very act of inspection is likely to change the thing being inspected, whereas in the ex-vivo inspection case, the agent is looking at a static object. The latter is probably easier to deal with, although the two capabilities are very different. Sleight supports both though.\n\n\nThe cited paper, however, deals with ex-vivo *modification* capabilities, meaning that self-modifications are made to a copy of the agent. This is reminiscent of an agent producing fitter offspring. As I understand it, the ex-vivo modification method is a generalization of in-vivo modification, promising to create an elegant formal framework of investigation. But when an agent is studying its own workings in order to find a beneficial ex-vivo modification to it, is this done using in-vivo or ex-vivo *inspection*? These different inspection methods may yield very different results, so I think this question also deserves investigation. 
Personally I think the in-vivo frameworks are the most interesting (if only because of their difficulty and danger…), but it remains to be seen which will work best in practice, which is what counts eventually.\n\n\n\n\n---\n\n\n**Luke**: Sorry, what do you mean by this statement? “I think the in-vivo frameworks are the most interesting (if only because of their difficulty and danger)…”\n\n\n\n\n---\n\n\n**Bas**: The *difficulty* is in the in-vivo inspection: a common assumption made by automated reasoning techniques is that the data being reasoned about is static while the reasoning is in progress. But if the data includes the dynamic internal state of the reasoner, this assumption breaks down. I find this interesting because we are entering uncharted territory.\n\n\nThe *danger* is in the in-vivo modification: if an agent decides to modify its own software, there may be no way back. The agent must be very sure what it is doing, or it might end up disabling itself. Although, come to think of it, the ex-vivo modification — i.e., producing a successor agent — comes with its own dangers. For example, it may be technically impossible to halt a flawed yet powerful successor agent, or it may be unethical to do so.\n\n\n\n\n---\n\n\n**Luke**: You say that “if an agent decides to modify its own software, it must be very sure what it is doing, or it might end up disabling itself.” Presumably another worry, from both the perspective of the agent and that of its creators, is that a self-modification may result in unintended consequences for the agent’s future behavior.\n\n\nIdeally, we’d want an agent to continue pursuing the same ends, with the same constraints on its behavior, even as it modifies itself (to become capable of achieving its goals more effectively and efficiently). 
That is, we want a solution to the problem of what we might call “*stable* self-modification.”\n\n\nBesides [Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf), are you aware of other literature on the topic? And what are your thoughts on the challenge of stable self-modification?\n\n\n\n\n---\n\n\n**Bas**: This is a very interesting question, which I think concerns the issue of *growth* in general. It seems to me there are actually two questions here: (1) How can an agent be constructed such that it can grow stably from a “seed” program? And (2) How much supervision, testing, correction, and intervention does such an agent need while it is growing?\n\n\nThe first question assumes that handcrafting a complex AI system is very hard and a bad strategy anyway — something which we will not have to argue over, I think. A great exploration of the prerequisites for stable self-growth has been performed by [Kristinn Thórisson (2012)](http://xenia.media.mit.edu/%7Ekris/ftp/Thorisson_chapt9_TFofAGI_Wang_Goertzel_2012.pdf) from Reykjavik University, who labels the approach of building self-growing systems *Constructivist AI*. I have had (and still have) the pleasure of working with him during the EU-funded [HumanObs](http://www.humanobs.org/) project, where we implemented a comprehensive cognitive architecture satisfying the prerequisites for stable self-growth and following the constructivist methodology laid out in the linked paper. The architecture is called [AERA](http://arxiv.org/abs/1312.6764) and is based on the [Replicode](http://cadia.ru.is/wiki/_media/public:nivel_thorisson_replicode_agi13.pdf) programming platform (developed mostly by Eric Nivel). This stuff is still very much work-in-progress, with a bunch of papers on AERA currently in preparation. 
So yes, I believe there exists a way of achieving long-term stable self-growth, and we are actively pursuing a promising path in this direction with several partner institutes.\n\n\nThe second question is an age-old one, especially regarding our own children, with our answers determining our schooling system. But many AI researchers seem to aim for a purely “intellectual” solution to the problem of building intelligent agents. For example, one could spend some time in front of a computer, programming a Gödel Machine or other “ultimate algorithm”, run it, sit back, and just wait for it to launch us all into the singularity… But I’m not too optimistic it will work out this way. I think we may have to get up and take a much more parental and social approach to crafting AI systems. To ensure stable growth, we may have to spend lots of time interacting with our creations, run occasional tests, like with school tests for children, and apply corrections if necessary. It will not be the rising of the machines, but rather the *raising* of the machines.\n\n\nI’m hedging my bets, though, by working on both sides simultaneously: [AERA](http://arxiv.org/abs/1312.6764) and the constructivist methodology on the one hand, and the Gödel Machine and other “ultimate” learning algorithms on the other hand.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Bas!\n\n\nThe post [Bas Steunebrink on Self-Reflective Programming](https://intelligence.org/2013/10/25/bas-steunebrink-on-sleight/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-25T13:00:40Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "decdbcdfa33a85c22479ca14b43111b1", "title": "Probabilistic Metamathematics and the Definability of Truth", "url": "https://intelligence.org/2013/10/23/probabilistic-metamathematics-and-the-definability-of-truth/", "source": "miri", "source_type": "blog", "text": "On October 15th, Paul Christiano presented “Probabilistic 
metamathematics and the definability of truth” at Harvard University as part of [Logic at Harvard](http://logic.harvard.edu/colloquium.php) (details [here](http://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/)). As explained [here](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/), Christiano came up with the idea for this approach, and it was developed further at a series of [MIRI research workshops](http://intelligence.org/get-involved/#workshop).\n\n\nVideo of the talk is now available:\n\n\n\nThe video is occasionally blurry due to camera problems, but is still clear enough to watch.\n\n\nThe post [Probabilistic Metamathematics and the Definability of Truth](https://intelligence.org/2013/10/23/probabilistic-metamathematics-and-the-definability-of-truth/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-23T22:21:35Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "0ceca2c8c9e0bb2a78c46bd99c28e827", "title": "Hadi Esmaeilzadeh on Dark Silicon", "url": "https://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2013/10/Esmaeilzadeh_w150.jpg) [Hadi Esmaeilzadeh](http://www.cc.gatech.edu/~hadi/) recently joined the School of Computer Science at the Georgia Institute of Technology as assistant professor. He is the first holder of the Catherine M. and James E. Allchin Early Career Professorship. Hadi directs the [Alternative Computing Technologies (ACT) Lab](https://research.cc.gatech.edu/act-lab/), where he and his students are working on developing new technologies and cross-stack solutions to improve the performance and energy efficiency of computer systems for emerging applications. Hadi received his Ph.D. from the Department of Computer Science and Engineering at University of Washington. 
He has a Master’s degree in Computer Science from The University of Texas at Austin, and a Master’s degree in Electrical and Computer Engineering from the University of Tehran. Hadi’s research has been recognized by three [*Communications of the ACM*](http://cacm.acm.org/) Research Highlights and three [*IEEE Micro*](http://www.computer.org/portal/web/computingnow/micro) Top Picks. Hadi’s work on dark silicon has also been [profiled](http://www.nytimes.com/2011/08/01/science/01chips.html) in the *New York Times*.\n\n\n\n**Luke Muehlhauser**: Could you please explain for our readers what “dark silicon” is, and why it poses a threat to the historical exponential trend in computing performance growth?\n\n\n\n\n---\n\n\n**Hadi Esmaeilzadeh**: I would like to answer your question with a question. What is the difference between the computing industry and commodity industries like the paper towel industry?\n\n\nThe main difference is that the computing industry is an industry of new possibilities, while the paper towel industry is an industry of replacement. You buy paper towels because you run out of them, but you buy new computing products because they get better.\n\n\nAnd it is not just the computers that are improving; it is the offered services and experiences that consistently improve. Can you even imagine running out of Microsoft Windows?\n\n\nOne of the primary drivers of this economic model is the exponential reduction in the cost of performing general-purpose computing. While in 1971, at the dawn of microprocessors, the price of 1 MIPS (Million Instructions Per Second) was roughly $5,000, today it is about 4¢. This is an exponential reduction in the cost of the raw material for computing. This continuous and exponential reduction in cost has formed the basis of the computing industry’s economy over the past four decades.\n\n\n\nTwo primary enabling factors of this economic model are:\n\n\n1. 
[Moore’s Law](http://en.wikipedia.org/wiki/Moore%27s_law)**:** The consistent and exponential improvement in transistor fabrication technology that happens every 18 months.\n2. The continuous improvements to the general-purpose processors’ architecture that leverage the transistor-level improvements.\n\n\nMoore’s Law has been a fundamental driver of computing for more than four decades. Over the past 40 years, every 18 months, the transistor manufacturing facilities have been able to develop a new technology generation that doubles the number of transistors on a single monolithic chip. However, doubling the number of transistors does not provide any benefits by itself. The computer architecture industry harvests these transistors and designs *general-purpose processors* that make these tiny switches available to the rest of the computing community. By building general-purpose processors, the computer architecture community provides a link with mechanisms and abstractions that make these devices accessible to compilers, programming languages, system designers, and application developers. To this end, general-purpose processors enable the computing industry to commodify computing and make it pervasively present everywhere.\n\n\nThe computer architecture community has also harvested the exponentially increasing number of transistors to deliver almost the same rate of improvement in the performance of general-purpose processors. This consistent improvement in performance has proportionally reduced the cost of computing, which in turn enabled application and system developers to consistently offer new possibilities.\n\n\nThe ability to consistently provide new possibilities has historically paid off the huge cost of developing new process technologies for transistor fabrication. This self-sustaining loop has preserved the economic model of our industry over the course of the past four decades. 
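As a quick sanity check on the cost figures quoted earlier ($5,000 per MIPS in 1971 versus roughly 4¢; the end year is assumed here to be 2013, when this interview was published), the implied rate of decline works out as follows:

```python
import math

# Back-of-envelope arithmetic on the quoted cost decline of general-purpose
# computing: $5,000 per MIPS in 1971 vs. about $0.04 circa 2013 (assumed).
cost_1971, cost_2013, years = 5000.0, 0.04, 2013 - 1971

total_factor = cost_1971 / cost_2013             # ~125,000x cheaper overall
annual_factor = total_factor ** (1 / years)      # ~1.32x cheaper each year
halving_time = years * math.log(2) / math.log(total_factor)  # ~2.5 years
```

So the quoted figures correspond to the cost of a MIPS halving roughly every two and a half years, consistent with the exponential trend the interview describes.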
Nonetheless, there are fundamental challenges associated with developing new process technologies and integrating an exponentially increasing number of transistors on a single chip.\n\n\nOne of the main challenges of doubling the number of transistors on the chip is powering them without melting the chip and incurring excessively expensive cooling costs. Even though the number of transistors on the chip has increased exponentially since 1971 (the time the first microprocessors were introduced), the chip power has increased only modestly and has plateaued in recent years.\n\n\nRobert Dennard formulated how new transistor fabrication process technologies can provide such physical properties. In fact, Dennard’s theory of scaling is the main force behind Moore’s Law. Dennard’s scaling theory showed how to reduce the dimensions and the electrical characteristics of a transistor proportionally to enable successive shrinks that simultaneously improved density, speed, and energy efficiency. According to Dennard’s theory, with a scaling ratio of 1/√2, the transistor count doubles (Moore’s Law), frequency increases by 40%, and the total chip power stays the same from one generation of process technology to the next on a fixed chip area. That is, the power per transistor decreases at the same rate the transistor area shrinks from one technology generation to the next.\n\n\nWith the end of Dennard scaling in the mid-2000s, process technology scaling can sustain doubling the transistor count every generation, but with significantly less improvement in transistor switching speed and power efficiency. This disparity translates to an increase in chip power if the fraction of active transistors is not reduced from one technology generation to the next.\n\n\nOne option for avoiding increases in chip power consumption was to stop increasing the clock frequency, or even to lower it. The shift to multicore architectures was partly a response to the end of Dennard scaling. 
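The Dennard scaling rules described above can be verified with a few lines of arithmetic. A minimal sketch, using the classic dynamic-power relation P ∝ C·V²·f, with capacitance and supply voltage each scaling by the ratio s = 1/√2 and frequency by 1/s:

```python
# Sanity check of Dennard scaling for one process generation.
# Scaling ratio s = 1/sqrt(2): linear dimensions, capacitance (C),
# and supply voltage (V) all scale by s; frequency (f) scales by 1/s.
s = 2 ** -0.5

transistor_count = 1 / s**2   # 2.0 -> transistor count doubles (Moore's Law)
frequency_gain = 1 / s        # ~1.41 -> frequency up ~40%

# Dynamic power per transistor: P ∝ C * V^2 * f
power_per_transistor = s * s**2 * (1 / s)   # = s^2 = 0.5

# Twice the transistors at half the power each: chip power is unchanged.
total_chip_power = transistor_count * power_per_transistor  # = 1.0

print(transistor_count, round(frequency_gain, 3), total_chip_power)
```

Power per transistor halves at exactly the rate area shrinks (s² = 1/2), so twice as many transistors fit in the same power budget. Once voltage stops scaling with s, that cancellation fails, and the surplus transistors become dark silicon.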
When developing a new process technology, if transistor power scales down at a lower rate than transistor area shrinks, it might not be possible to turn on and utilize all the transistors that scaling provides. Thus, we define dark silicon as follows:\n\n\n\n> Dark silicon is the fraction of the chip that needs to be powered off at all times due to power constraints.\n> \n> \n\n\nThe low utility of this dark silicon poses a great challenge for the entire computing community. If we cannot sufficiently utilize the transistors that costly new process technologies provide, how can we justify their development cost? If we cannot utilize the transistors to improve the performance of general-purpose processors and reduce the cost of computing, how can we avoid becoming an industry of simple replacement?\n\n\nWhen the computing industry was under the reign of Dennard scaling, computer architects harvested new transistors to build higher-frequency single-core microprocessors and equip them with more capabilities. For example, as technology scaled, processors packed in better branch predictors, wider pipelines, larger caches, etc. These techniques applied superlinear complexity-power tradeoffs to harvest instruction-level parallelism (ILP) and improve single-core performance. However, the failure of Dennard scaling created a power density problem, which in turn broke many of the techniques that had been used to improve the performance of single-core processors. 
The industry raced down the path of building multicore processors.\n\n\nThe multicore era started in 2004, when the major consumer processor vendor (Intel) cancelled its next-generation single-core microarchitecture (the successor to Prescott) and gave up on focusing exclusively on single-thread performance, switching to multicore as its performance-scaling strategy.\n\n\nWe mark the start of the multicore era not with the date of the first multicore part, but with the time when multicore processors became the default and main strategy for continued performance improvement.\n\n\nThe basic idea behind designing multicore processors was to substitute building more complex/capable single-core processors with building multicore processors composed of simpler and/or lower-frequency cores. It was anticipated that by exploiting parallelism in applications, we could overcome the trends at the transistor level. The general consensus was that a long-term era of multicore had begun, and the general expectation was that by increasing the number of cores, processors would provide benefits that would enable developing many more process fabrication technologies. Many believed that there would be thousands of cores on each single chip.\n\n\nHowever, in our dark silicon ISCA paper, we performed an exhaustive and comprehensive quantitative study that showed how the severity of the problem at the transistor level and the post-Dennard-scaling trends will affect the prospective benefits of multicore processors.\n\n\nIn our paper, we quantitatively question the consensus about multicore scaling. The results show that even with optimistic assumptions, multicore scaling — increasing the number of cores every technology generation — is not a long-term solution and cannot sustain the historical rates of performance growth in the coming years.\n\n\nThe gap between the projected performance of multicore processors and what the microprocessor industry has historically provided is significantly large: 24×. 
Due to the lack of a high degree of parallelism in applications and the severe energy inefficiency at the transistor level, adding more cores will not even enable using all the transistors that new process technologies provide.\n\n\nIn less than a decade from now, more than 50% of the chip may be dark. The lack of performance benefits and the inability to utilize all the transistors that new process technologies provide may undermine the economic viability of developing new technologies. We may stop scaling not because of physical limitations, but because of economics.\n\n\nMoore’s Law has effectively worked as a clock that has enabled the computing industry to consistently and periodically provide new possibilities; that clock may stop or slow down significantly. The entire computing industry may be at risk of becoming an industry of replacement if new avenues for computing are not discovered.\n\n\n\n\n---\n\n\n**Luke**: How has the computing industry reacted to your analysis? Do some people reject it? Do you think it will be taken into account in the next [ITRS reports](http://www.itrs.net/reports.html)?\n\n\n\n\n---\n\n\n**Hadi**: I think the best way for me to answer this question is to point out the number of citations for our ISCA paper. Even though we published the paper in the summer of 2011, it has already been cited more than 200 times. The paper was profiled in the *New York Times* and was selected as an *IEEE Micro* Top Pick and a *Communications of the ACM* Research Highlight.\n\n\nI think there are people in industry and academia who thought the conclusions were too pessimistic. However, I have talked to quite a few device physicists, and they confirmed that the problem at the transistor level is very dire. We have also done some preliminary measurements showing that our projections were more optimistic than the reality. I think the results in our paper show the *urgency* of the issue, and the opportunity for disruptive innovation. 
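The projections under discussion are compound annual rates, so whether they prove optimistic or pessimistic, small rate differences open large gaps over a decade. A quick sanity check, as a sketch: the 7.9×/23%-per-year and 3.7×/14%-per-year ten-year pairings are the multicore-scaling projections from the dark-silicon study quoted elsewhere in this interview, and the 18-month doubling cadence is used only as an illustrative historical baseline.

```python
# Compound annual growth: a rate r sustained for n years gives (1 + r)**n.
def ten_year_gain(annual_rate: float, years: int = 10) -> float:
    return (1 + annual_rate) ** years

optimistic = ten_year_gain(0.23)    # ~7.9x over ten years
conservative = ten_year_gain(0.14)  # ~3.7x over ten years

# For contrast, doubling every 18 months (a Moore's-Law cadence) would give:
historical = 2 ** (10 / 1.5)        # ~100x over ten years

print(round(optimistic, 1), round(conservative, 1), round(historical))
```

Even the optimistic 23%-per-year projection falls an order of magnitude short of an 18-month doubling cadence sustained over a decade, which is why the projected gap from historical performance growth is so large.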
I think time will tell us how optimistic or pessimistic our study was.\n\n\nITRS is an industry consortium that sets targets and goals for semiconductor manufacturing. We used ITRS’s projections in our study; however, I am not sure whether ITRS can actually use our results.\n\n\n\n\n---\n\n\n**Luke**: You point out in [your CACM paper](http://www.cc.gatech.edu/~hesmaeil/doc/paper/2013-cacm-dark_silicon.pdf) that if your calculations are correct, multicore scaling won’t be able to maintain the historical exponential trend in computations-per-dollar long enough for us to make the switch to radical alternative solutions such as “neuromorphic computing, quantum computing, or bio-integration.” What do you think are the most promising paths by which the semiconductor industry might be able to maintain the historical trend in computations per dollar?\n\n\n\n\n---\n\n\n**Hadi**: I think significant departures from conventional techniques are needed to provide considerable energy efficiency in general-purpose computing. I believe approximate computing and specialization have a lot of potential. There may be other paths forward, though.\n\n\nWe have focused on general-purpose approximate computing. What I mean by general-purpose approximate computing is general-purpose computing that relaxes the robust digital abstraction of full accuracy, allowing a degree of error in execution. This might sound a bit odd, but there are many applications for which tolerance to error is inherent to the application. In fact, there is a one-billion-dollar company that makes a profit by making your pictures worse. There are many cyber-physical and embedded systems that take in noisy sensory inputs and perform computations that do not have a unique output. Or, when you are searching the web, there are multiple acceptable outputs. 
We are embracing error in the computation.\n\n\nConventional techniques in energy-efficient processor design, such as voltage and frequency scaling, navigate a design space defined by the two dimensions of performance and energy, and traditionally trade one for the other. In this proposal, we explore the dimension of error, a third dimension, and trade accuracy of computation for gains in both performance and energy.\n\n\nIn this realm, we have designed an [architectural framework](http://www.cc.gatech.edu/~hadi/doc/paper/2012-asplos-truffle.pdf) from the ISA (Instruction Set Architecture) to the microarchitecture, which conventional processors can use to trade accuracy for efficiency. We have also introduced a new class of accelerators that map a hot code region from a von Neumann model to a neural model, and provides significant performance and efficiency gains. We call this new class of accelerators Neural Processing Units (NPUs). These NPUs can potentially allow us to use analog circuits for general-purpose computing. I am excited about this work because it bridges von Neumann and neural models of computing, which are thought to be alternatives to one another. [Our paper](http://www.cc.gatech.edu/~hadi/doc/paper/2013-toppicks-npu.pdf) on NPUs was selected for *IEEE Micro* Top Picks and has been recently nominated for *CACM* Research Highlights.\n\n\nAs for specialization: we try to redefine the abstraction between hardware and software. Currently, the abstraction and the contract between hardware and software is the instruction set architecture (ISA) of the general-purpose processors. However, even though these ISAs provide a high level of programmability, they are not the most efficient way of realizing an application. There is a well-known tension between programmability and efficiency. 
There are orders of magnitude difference in efficiency between running an application on general-purpose processors and implementing the application with ASICs ([application-specific integrated circuits](http://en.wikipedia.org/wiki/Application-specific_integrated_circuit)).\n\n\nSince designing ASICs for the massive base of quickly changing, general-purpose applications is currently infeasible, providing programmable and specialized accelerators is a very important and interesting research direction. Programmable accelerators provide an intermediate point between the efficiency of ASICs and the generality of conventional processors, gaining significant efficiency for restricted domains of applications. GPUs and FPGAs are examples of these specialized accelerators.\n\n\n\n\n---\n\n\n**Luke**: What do your studies of dark silicon suggest, quantitatively?\n\n\n\n\n---\n\n\n**Hadi**: Our results show that without a breakthrough in process technology or microarchitecture design, core count scaling provides much less of a performance gain than conventional wisdom suggests. Under (highly) optimistic scaling assumptions—for parallel workloads—multicore scaling provides a total performance gain of 7.9× (23% per year) over ten years. Under more conservative (realistic) assumptions, multicore scaling provides a total performance gain of 3.7× (14% per year) over ten years, and obviously less when sufficiently parallel workloads are unavailable. Without a breakthrough in process technology or microarchitecture, other directions are needed to continue the historical rate of performance improvement.\n\n\n\n\n---\n\n\n**Luke**: You’ve been careful to say that your predictions will hold unless there is “a breakthrough in process technology or microarchitecture design.” How likely are such breakthroughs, do you think? Labs pretty regularly report preliminary results that “could” lead to breakthroughs in process technology or microarchitecture design in a few years (e.g. 
see [this story](http://www.colorado.edu/news/features/cu-mit-breakthrough-photonics-could-allow-faster-and-faster-electronics)), but do you have any sense of how many of these preliminary results are actually promising?\n\n\n\n\n---\n\n\n**Hadi**: I have been careful because I firmly believe in creativity! All these reports are of extreme value and they may very well be the prelude to a breakthrough. However, I have not seen any technology that is ready to replace the large-scale silicon manufacturing that is driving the whole computing industry.\n\n\nRemember that we are racing against the clock, and technology transfer from a lab that builds a prototype device to a large-scale industry takes considerable time. The results of our study show an urgent need for a shift in focus.\n\n\nI like to look at our study as a motivation for exploring new avenues and unconventional ways of performing computation. And I feel that we have had that impact in the community. However, timing is crucial!\n\n\n\n\n---\n\n\n**Luke**: Finally, what do you think is the main technology for the post-silicon era?\n\n\n\n\n---\n\n\n**Hadi**: I personally like to see the use of biological nervous tissues for general-purpose computing. Furthermore, using physical properties of devices and building hybrid analog-digital general-purpose computers is extremely enticing.\n\n\nAt the end, I would like to acknowledge my collaborators on the dark silicon project, Emily Blem, Renee St. Amant. 
Karthikeyan Sankaralingam, and Doug Burger as well as my collaborators on the NPU project, Adrian Sampson, Luis Ceze, and Doug Burger.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Hadi!\n\n\nThe post [Hadi Esmaeilzadeh on Dark Silicon](https://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-22T01:42:27Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "0d73290c5305381251db5634574a023f", "title": "Russell and Norvig on Friendly AI", "url": "https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/", "source": "miri", "source_type": "blog", "text": "[*![russell-norvig](https://intelligence.org/wp-content/uploads/2013/10/russell-norvig.jpg)AI: A Modern Approach*](http://aima.cs.berkeley.edu/) is by far the dominant textbook in the field. It is used in 1200 universities, and is currently the [22nd most-cited](http://citeseer.ist.psu.edu/stats/citations) publication in computer science. Its authors, [Stuart Russell](http://www.cs.berkeley.edu/~russell/) and [Peter Norvig](http://norvig.com/), devote significant space to AI dangers and Friendly AI in section 26.3, “The Ethics and Risks of Developing Artificial Intelligence.”\n\n\nThe first 5 risks they discuss are:\n\n\n* People might lose their jobs to automation.\n* People might have too much (or too little) leisure time.\n* People might lose their sense of being unique.\n* AI systems might be used toward undesirable ends.\n* The use of AI systems might result in a loss of accountability.\n\n\nEach of those sections is one or two paragraphs long. The final subsection, “The Success of AI might mean the end of the human race,” is given 3.5 *pages*. Here’s a snippet:\n\n\n\n> The question is whether an AI system poses a bigger risk than traditional software. We will look at three sources of risk. 
First, the AI system’s state estimation may be incorrect, causing it to do the wrong thing. For example… a missile defense system might erroneously detect an attack and launch a counterattack, leading to the death of billions…\n> \n> \n> Second, specifying the right utility function for an AI system to maximize is not so easy. For example, we might propose a utility function designed to minimize human suffering, expressed as an additive reward function over time… Given the way humans are, however, we’ll always find a way to suffer even in paradise; so the optimal decision for the AI system is to terminate the human race as soon as possible – no humans, no suffering…\n> \n> \n> Third, the AI system’s learning function may cause it to evolve into a system with unintended behavior. This scenario is the most serious, and is unique to AI systems, so we will cover it in more depth. I.J. Good wrote ([1965](http://commonsenseatheism.com/wp-content/uploads/2011/02/Good-Speculations-Concerning-the-First-Ultraintelligent-Machine.pdf)),\n> \n> \n> *Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.*\n> \n> \n\n\n\nRussell and Norvig then mention Moravec and Kurzweil’s writings, before returning to a more concerned tone about AI. They cover Asimov’s three laws of robotics, and then:\n\n\n\n> [Yudkowsky (2008)](https://intelligence.org/files/AIPosNegFactor.pdf) goes into more detail about how to design a Friendly AI. 
He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design – to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes. We can’t just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time. For example, if technology had allowed us to design a super-powerful AI agent in 1800 and endow it with the prevailing morals of the time, it would be fighting today to reestablish slavery and abolish women’s right to vote. On the other hand, if we build an AI agent today and tell it how to evolve its utility function, how can we assure that it won’t read that “Humans think it is moral to kill annoying insects, in part because insect brains are so primitive. But human brains are primitive compared to my powers, so it must be moral for me to kill humans.”\n> \n> \n> [Omohundro (2008)](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) hypothesizes that even an innocuous chess program could pose a risk to society. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal. The moral is that even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards.\n> \n> \n\n\nWe are happy to see MIRI’s work getting such mainstream academic exposure.\n\n\nReaders may also be interested to learn that Russell organized a panel on AI impacts at the [IJCAI-13](http://ijcai13.org/) conference. 
Russell’s own slides from that panel are [here](http://ijcai13.org/files/summary/Russell-Future_of_AI.pdf). The other panel participants were Henry Kautz ([slides](http://ijcai13.org/files/summary/Kautz-Future_of_AI.pdf)), Joanna Bryson ([slides](http://ijcai13.org/files/summary/Ethics_panel.pdf)), Anders Sandberg ([slides](http://ijcai13.org/files/summary/What_if_we_succeed-Sandberg.pdf)), and Sebastian Thrun ([slides](http://ijcai13.org/files/summary/Thrun-panel.pdf)).\n\n\nThe post [Russell and Norvig on Friendly AI](https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-19T18:09:39Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "6c83cc7119d9961b22fedbc3aacec2e3", "title": "Richard Posner on AI Dangers", "url": "https://intelligence.org/2013/10/18/richard-posner-on-ai-dangers/", "source": "miri", "source_type": "blog", "text": "[![Posner](https://intelligence.org/wp-content/uploads/2013/10/Posner.jpg)Richard Posner](http://en.wikipedia.org/wiki/Richard_Posner) is a jurist, legal theorist, and economist. He is also the author of nearly 40 books, and is by far the [most-cited legal scholar of the 20th century](http://commonsenseatheism.com/wp-content/uploads/2013/10/Shapiro-The-most-cited-legal-scholars.pdf).\n\n\nIn 2004, Posner published *[Catastrophe: Risk and Response](http://www.amazon.com/Catastrophe-Risk-and-Response-ebook/dp/B0013PTUSO/)*, in which he discusses risks from [AGI](http://intelligence.org/2013/08/11/what-is-agi/) at some length. His analysis is interesting in part because it appears to be intellectually independent from the Bostrom-Yudkowsky tradition that dominates the topic today.\n\n\nIn fact, Posner does not *appear* to be aware of earlier work on the topic by I.J. 
Good ([1970](http://commonsenseatheism.com/wp-content/uploads/2012/03/Good-Some-future-social-repurcussions-of-computers.pdf), [1982](http://commonsenseatheism.com/wp-content/uploads/2012/03/Good-Ethical-Machines.pdf)), Ed Fredkin ([1979](http://lesswrong.com/lw/hva/open_thread_july_115_2013/9d1h)), Roger Clarke ([1993](http://commonsenseatheism.com/wp-content/uploads/2012/03/Clarke-Asimovs-Laws-of-Robotics-implications-for-information-technology-part-1.pdf), [1994](http://commonsenseatheism.com/wp-content/uploads/2012/03/Clarke-Asimovs-Laws-of-Robotics-implications-for-information-technology-part-2.pdf)), Daniel Weld & Oren Etzioni ([1994](http://www.cs.auckland.ac.nz/~nickjhay/papersuni/RoughDraft200207-best_of_SASEMAS.pdf)), James Gips ([1995](http://commonsenseatheism.com/wp-content/uploads/2011/03/Gips-Towards-the-ethical-robot.pdf)), Blay Whitby ([1996](http://www.amazon.com/Reflections-Artificial-Intelligence-Blay-Whitby/dp/1871516684/)), Diana Gordon ([2000](http://arxiv.org/pdf/1106.0244.pdf)), Chris Harper ([2000](http://commonsenseatheism.com/wp-content/uploads/2013/06/Harper-Challenges-for-designing-intelligent-systems-for-safety-critical-applications.pdf)), or Colin Allen ([2000](http://commonsenseatheism.com/wp-content/uploads/2009/08/Allen-Prolegomena-to-any-future-artificial-moral-agent.pdf)). He is not even aware of Hans Moravec ([1990](http://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187/), [1999](http://www.amazon.com/Robot-Mere-Machine-Transcendent-Mind/dp/0195136306/)), Bill Joy ([2000](http://www.wired.com/wired/archive/8.04/joy.html)), Nick Bostrom ([1997](http://www.nickbostrom.com/old/predict.html); [2003](http://www.nickbostrom.com/ethics/ai.html)), or Eliezer Yudkowsky ([2001](https://intelligence.org/files/CFAI.pdf)). 
Basically, he seems to know only of Ray Kurzweil ([1999](http://www.amazon.com/Age-Spiritual-Machines-Intelligence-ebook/dp/B002CIY8JW/)).\n\n\nStill, much of Posner’s analysis is consistent with the basic points of the Bostrom-Yudkowsky tradition:\n\n\n\n> [One class of catastrophic risks] consists of… scientific accidents, for example accidents involving particle accelerators, nanotechnology…, and artificial intelligence. Technology is the cause of these risks, and slowing down technology may therefore be the right response.\n> \n> \n> …there may some day, perhaps some day soon (decades, not centuries, hence), be robots with human and [soon thereafter] more than human intelligence…\n> \n> \n> …Human beings may turn out to be the twenty-first century’s chimpanzees, and if so the robots may have as little use and regard for us as we do for our fellow, but nonhuman, primates…\n> \n> \n> …A robot’s potential destructiveness does not depend on its being conscious or able to engage in [e.g. emotional processing]… Unless carefully programmed, the robots might prove indiscriminately destructive and turn on their creators.\n> \n> \n> …Kurzweil is probably correct that “once a computer achieves a human level of intelligence, it will necessarily roar past it”…\n> \n> \n\n\nOne major point of divergence seems to be that Posner worries about a scenario in which AGIs become self-aware, re-evaluate their goals, and decide not to be “bossed around by a dumber species” anymore. In contrast, Bostrom and Yudkowsky think AGIs will be dangerous not because they will “rebel” against humans, but because (roughly) using all available resources — including those on which human life depends — is a convergent instrumental goal for almost any set of final goals a powerful AGI might possess. (See e.g. 
[Bostrom 2012](http://www.nickbostrom.com/superintelligentwill.pdf).)\n\n\nThe post [Richard Posner on AI Dangers](https://intelligence.org/2013/10/18/richard-posner-on-ai-dangers/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-18T07:23:51Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "56532ab935d9f004b8c91ee12f30bb67", "title": "Ben Goertzel on AGI as a Field", "url": "https://intelligence.org/2013/10/18/ben-goertzel/", "source": "miri", "source_type": "blog", "text": "![Ben Goertzel portrait](https://intelligence.org/wp-content/uploads/2013/10/goertzel_w150_2.jpg) Dr. Ben Goertzel is Chief Scientist of financial prediction firm [Aidyia Holdings](http://www.aidyia.com/); Chairman of AI software company [Novamente LLC](http://wp.novamente.net/) and bioinformatics company [Biomind LLC](http://wp.biomind.com/); Chairman of the [Artificial General Intelligence Society and the](http://www.agi-society.org/) [OpenCog Foundation](http://opencog.org/); Vice Chairman of futurist nonprofit [Humanity+](http://humanityplus.org/); Scientific Advisor of biopharma firm [Genescient Corp.](http://www.genescient.com/); Advisor to the [Singularity University](http://singularityu.org/) and [MIRI](http://intelligence.org/); Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and general Chair of the [Artificial General Intelligence conference](https://intelligence.org/feed/www.agi-conference.org/) series. His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming and other areas. He has published a dozen scientific books, 100+ technical papers, and numerous journalistic articles. 
Before entering the software industry, he served as university faculty in several departments of mathematics, computer science and cognitive science, in the US, Australia and New Zealand. He has three children and too many pets, and in his spare time enjoys creating avant-garde fiction and music, and exploring the outdoors.\n\n\n\n \n\n**Luke Muehlhauser**: Ben, you’ve been heavily involved in the formation and growth of a relatively new academic field — the field of [artificial general intelligence](http://intelligence.org/2013/08/11/what-is-agi/) (AGI). Since MIRI is now trying to co-create a new academic field of study — the field of [Friendly AI research](http://intelligence.org/research/) — we’d love to know what you’ve learned while co-creating the field of AGI research.\n\n\nCould you start by telling us the brief story of the early days? Of course, AI researchers had been talking about human-level AI since the dawn of the field, and there were occasional conferences and articles and books on the subject, but the field seemed to become more cohesive and active after you and a few others pushed on things under the name “artificial general intelligence.”\n\n\n\n\n---\n\n\n**Ben Goertzel**: I was interested in “the subject eventually to be named AGI” since my childhood, and started doing research in the area at age 16 (which was the end of my freshman year of college, as I started university at 15). However, it soon became apparent to me that “real AI” (the term I used privately before launching the term AGI) had little to do with the typical preoccupations of the academic or industry AI fields. This is part of what pushed me to do a PhD in math rather than AI. Rather than do a PhD on the kind of narrow AI that was popular in computer science departments in the 1980s, 
I preferred to spend grad school learning math and reading widely and preparing to work on “real AI” via my own approaches…\n\n\nI didn’t really think about trying to build a community or broad interest in “real AI” until around 2002, because until that point it just seemed hopeless. Around 2002 or so, it started to seem to me — for a variety of hard-to-pin-down reasons — that the world was poised for an attitude shift. So I started thinking a little about how to spread the word about “real AI” and its importance and feasibility more broadly.\n\n\nFrankly, a main goal was to create an environment in which it would be easier for me to attract a lot of money or volunteer research collaborators for my own real-AI projects. But I was also interested in fostering work on real AI more broadly, beyond just my own approach.\n\n\nMy first initiative in this direction was editing a book of chapters by researchers pursuing ambitious AI projects aimed at general intelligence, human-level intelligence, and so forth. This required some digging around, to find enough people to contribute chapters — i.e. people who were both doing relevant research, and willing to contribute chapters to a book with such a focus. It also required me to find a title for the book, which is where the term “AGI” came from. My original working title was “Real AI”, but I knew that was too edgy — since after all, narrow AI is also real AI in its own sense. So I emailed a bunch of friends soliciting title suggestions and Shane Legg proposed “Artificial General Intelligence.” I felt that “AGI” lacked a certain pizazz that other terms like “Artificial Life” have, but it was the best suggestion I got so I decided to go for it. Reaction to the term was generally positive. (Later I found that a guy named Mark Gubrud had used the term before, in passing in an article focused broadly on future technologies. 
I met Mark Gubrud finally at the AGI-09 conference in DC.)\n\n\nI didn’t really make a big push at community-building until 2005 when I started working with Bruce Klein. Bruce was a hard-core futurist whose main focus in life was human immortality. I met him when he came to visit me in Maryland to film me for a documentary. We talked a bit after that, and I convinced him that one very good way to approach immortality would be to build AGI systems that would solve the biology problems related to life extension. I asked Bruce to help me raise money for AGI R&D. After banging his head on the problem of recruiting $$ from investors for a while, he decided it would be useful to first raise the profile of the AGI pursuit in general — and this would create a context in which raising $$ for our own AGI R&D would be easier.\n\n\nSo Bruce and I conceived the idea of organizing an AGI conference. We put together the first AGI Workshop in Bethesda in 2006. Bruce did the logistical work; I recruited the researchers from my own social network, which was fairly small at that point. I would not have thought of trying to run conferences and build a community without Bruce’s nudging — this was more a Bruce approach than a Ben approach. I note that a few years later, Bruce played the key role in getting Singularity University off the ground. 
Diamandis and Kurzweil were of course the big names who made it happen; but without Bruce’s organizational legwork (as well as that of his wife at the time, Susan Fonseca), over a 6 month period prior to the first SU visioning meeting, SU would not have come together.\n\n\nThe AGI Workshop went well — and that was when I realized fully that there were a lot of AI researchers out there, who were secretly harboring AGI interests and ambitions and even research projects, but were not discussing these openly because of the reputation risk.\n\n\nFrom relationships strengthened at the initial AGI Workshop, the AGI conference series was born — the first full-on AGI conference was in 2008 at the University of Memphis, and they’ve been annual ever since. The conferences have both seeded a large number of collaborations and friendships among AGI researchers who otherwise would have continued operating in an isolated way, and have had an indirect impact via conferring more legitimacy on the AGI pursuit. They have brought together industry and academic and government researchers interested in AGI, and researchers from many different countries.\n\n\nLeveraging the increasing legitimacy that the conferences brought, I then did various other community-building things like publishing a co-authored paper on AGI in “AI Magazine”, the mainstream periodical of the AI field. The co-authors of the paper included folks from major firms like IBM, and some prestigious “Good Old-Fashioned AI” people. A couple other AGI-like conferences have also emerged recently, e.g. BICA and Cognitive Systems. I helped get the BICA conferences going originally, though I didn’t play a leading role. 
I think the AGI conferences helped create an environment in which the emergence of these other related small conferences seemed natural and acceptable.\n\n\nOf course, there is no way to assess how much impact all this community-building work of mine had, because we don’t know how the AI field would have developed without my efforts. But according to my best attempt at a rational estimation, it seems my initiatives of this sort have had serious impact.\n\n\nA few general lessons I would draw from this experience are:\n\n\n1. You need to do the right thing at the right time. With AGI we started our “movement” at a time when a lot of researchers *wanted* to do and talk about AGI, but were ashamed to admit it to their peers. So there was an upsurge of AGI interest “waiting to happen”, in a sense.\n2. It’s only obvious in hindsight that it was the right time. In real time, moving forward, to start a community one needs to take lots of entrepreneurial risks, and be tolerant of getting called foolish multiple times, including by people you respect. The risks will include various aspects, such as huge amounts of time spent, carefully built reputation risked, and personal money ventured (for instance, even for something like a conference, the deposit for the venue and catering has to come from somewhere… For the first AGI workshop, we wanted to maximize attendance by the right people so we made it free, which meant that Bruce and I — largely Bruce, as he had more funds at that time — covered the expenses from our quite limited personal funds.)\n3. Social networking and community building are a lot more useful expenditures of time than I, as a math/science/philosophy geek, intuitively realized. Of course people who are more sociable and not so geeky by nature realize the utility of these pursuits innately. 
I had to learn via experience, and via Bruce Klein’s expert instruction.\n\n\n\n\n---\n\n\n**Luke**: Did the early AGI field have much continuity with the earlier discussions of “human-level AI” (HLAI)? E.g. there were articles by [Nilsson](http://aaaipress.org/ojs/index.php/aimagazine/article/viewFile/1850/1748), [McCarthy](http://www-formal.stanford.edu/jmc/human.pdf), [Solomonoff](http://www.theworld.com/%7Erjs/timesc.pdf), [Laird](http://en.wikipedia.org/wiki/Soar_%28cognitive_architecture%29), and others, though I’m not sure whether there were any conferences or significant edited volumes on the subject.\n\n\n\n\n---\n\n\n**Ben**: It was important that, in trying to move AGI forward as a field and community, we did not ground our overall efforts in any of these earlier discussions.\n\n\nFurther, a key aspect of the AGI conferences was their utter neutrality with respect to what approach to take. This differentiates the AGI conferences from BICA or Cognitive Systems, for example. Even though I have my own opinions on what approaches are most likely to succeed, I wanted the conferences to be an intellectual free-for-all, equally open to all approaches with a goal of advanced AGI…\n\n\nHowever, specific researchers involved with the AGI movement from an early stage were certainly heavily inspired by these older discussions you mention. E.g. Marcus Hutter had a paper in the initial AGI book and has been a major force at the conferences, and has been strongly Solomonoff-inspired. Paul Rosenbloom has been a major presence at the conferences; he comes from a SOAR background and worked with the good old founders of the traditional US AI field… Selmer Bringsjord’s logic-based approach to AGI certainly harks back to McCarthy.
Etc.\n\n\nSo, to overgeneralize a bit, I would say that these previous discussions tended to bind the AGI problem with some particular approach to AGI, whereas my preference was to more cleanly separate the goal from the approach, and create a community neutral with regard to the approach…\n\n\n\n\n---\n\n\n**Luke**: The *[Journal of Artificial General Intelligence](http://www.degruyter.com/view/j/jagi)* seems to have been pretty quiet for most of its history, but the [conference series](http://agi-conference.org/) seems to have been quite a success. Can you talk a bit about the challenges and apparent impacts of these two projects, and how they compare to each other?\n\n\n\n\n---\n\n\n**Ben**: Honestly, I have had relatively little to do with the JAGI on a month-by-month basis. Loosely speaking — the conferences have been my baby; and the journal has been the baby of my friend and colleague Dr. Pei Wang. I’m on the editorial board of the journal, but my involvement so far has been restricted to helping with high-level strategic decisions (like the move of the journal to the Versita platform a while ago, which Pei suggested and I was in favor of).\n\n\nSince I have limited time to focus on stuff besides my own R&D work, I have personally decided to focus my attention on the conferences and not the journal. This is because I felt that the conferences would have a lot of power for informal connection building and community building, beyond the formal aspect of providing a venue for presenting papers and getting publications in conference proceedings volumes.\n\n\nOne thing I can say is that Pei made the explicit decision, early on, to focus on quality rather than quantity in getting papers in the journal. I think he’s succeeded at getting high quality papers.\n\n\nI think the JAGI is an important initiative and has real potential to grow in the future and become an important journal.
One big step we’ll need to take is to get it indexed in SCI, which is important because many academics only get “university brownie points” for publications in SCI-indexed journals.\n\n\n\n\n---\n\n\n**Luke**: Can you say more about what kinds of special efforts you put into getting the AGI conference off the ground and growing it? Basically, what advice would you give to someone else who wants to do the same thing with another new technical discipline?\n\n\n\n\n---\n\n\n**Ben**: In the early stages, I made an effort to reach out one-on-one to researchers who I felt would be sympathetic to the AGI theme, and explicitly ask them to submit papers and come to the conference… This included some researchers whom I didn’t know personally at that time, but knew only via their work.\n\n\nMore recently, the conference keynote speeches have been useful as a tool for bringing new people into the AGI community. Folks doing relevant work who may not consider themselves AGI researchers per se, and hence wouldn’t submit papers to the conference, may still accept invitations to give keynote speeches. In some cases this may get them interested in the AGI field and community in a lasting way.\n\n\nWe’ve also made efforts not to let AGI get too narrowly sucked into the computer science field — by doing special sessions on neuroscience, robotics, futurology and so forth, and explicitly inviting folks from those fields to the conference, who wouldn’t otherwise think to attend.\n\n\nAnother thing we do is maintain, on an ongoing basis, our own mailing list of AGI-interested people, built by a variety of methods, including scouring conference websites to find folks who have presented papers related in some way to AGI.
And we’ve established and maintained a relationship with AAAI, which enables us to advertise in their magazine and send postcards to their membership, thus giving us a broader reach.\n\n\nAnyway this is just basic organizational mechanics I suppose — not terribly specific to AGI. This kind of stuff is fairly natural for me, due to having watched my mom organize various things for decades (she’s been a leader in the social work field and is retiring this month). But I don’t think it’s anything terribly special — only the subject matter (AGI) is special!\n\n\nIf I have put my personal stamp on this community-building process in some way, it’s probably been via the especially inclusive way it’s been conducted. I’ve had the attitude that since AGI is an early stage field (though accelerating progress means that fields can potentially advance fairly rapidly from early to advanced stages), we should be open to pretty much any sensible perspective, in a spirit of community-wide brainstorming. Of course each of us must decide which ideas to accept and take seriously for our own work, and each researcher can have more in-depth discussions with those who share more of their own approach — but a big role of a broad community like the one we’re fostering with the AGI conferences is to expose people to ideas and perspectives different from the ones they’d encounter in their ordinary work lives, yet still with conceptual (and sometimes even practical) relevance…\n\n\n\n\n---\n\n\n**Luke**: What advice would you specifically give to those trying to create a field of “Friendly AI research”? For example, the term itself stands out as suboptimal, though I have even stronger objections to some of the most obvious alternatives, e.g. “Safe AI” or “Good AI.”\n\n\n\n\n---\n\n\n**Ben**: Well, I agree with you that the term “Friendly AI” is unlikely to catch on among researchers in academia or industry, or the media for that matter.
So that is one issue you face in forming an FAI community. I don’t have a great alternative term in mind, but I’ll think about it. I’ve often gravitated toward the word “Beneficial” in this context, but I realize that’s not short or spiffy-sounding.\n\n\nTaking the analogy with the AGI field, one question I have is whether there’s a population of researchers who are already working on Friendly AI but not calling their work by that label or discussing it widely; or researchers or students who have a craving to work on Friendly AI but feel inhibited from doing so because of social stigma against the topic. If so, the situation is analogous to that of the AGI field 10 years ago. If not, there’s no close analogy. Without such a “subterranean proto-community” already existent, guiding the formation of an above-the-ground community is a harder problem, I would think.\n\n\nOf course, some sort of dramatic success in FAI research would attract people to the field. But this is a chicken-and-egg problem, as dramatic success is more likely to come if there are more people in the field. In AGI there has not yet been a dramatic success but we’ve been steadily building a community of researchers anyway. (There have been diverse, modest successes, at any rate…!)\n\n\nI’m afraid I don’t have any great advice to offer beyond the obvious stuff. For instance, if you can get some famous researchers to put their reputation behind the idea that FAI research is an important thing to be pursuing now, that would be a big help… Or convince someone to make a Hollywood movie in which some folks are making an Evil AI, which is then thwarted by a Friendly AI whose design is expertly guided by a team of FAI theorists furtively writing equations on napkins ;D … Or get someone to write a book analogous to The Singularity is Near but FAI focused — i.e.
with a theme “The Singularity is Quite Possibly Near — and Whether It’s a Positive or Negative Event for Humanity Likely Depends on How Well We Know What We’re Doing As It Approaches … and Understanding FAI Better is One Important Aspect of Knowing What We’re Doing…” … I’m fairly sure Eliezer Yudkowsky could write a great book on this theme if he wanted to, for example.\n\n\nOne key, I think, if FAI is to become a serious field, will be to carefully and thoroughly build links between FAI researchers and people working in other related fields, like AGI, neuroscience, cognitive psychology, computer security, and so forth. If FAI is perceived as predominantly the domain of academic philosophers and abstract mathematicians, it’s not going to catch on — because after all, when is the last time that academic philosophers averted a major catastrophe, or created something of huge practical benefit? It will be key to more thoroughly link FAI to *real stuff* — to people actually doing things in the world and discovering new inventions or practical facts, rather than just writing philosophy papers or proving theorems about infeasible theoretical AI systems.
Along these lines, workshops bringing together FHI and MIRI people don’t do much to build toward a real FAI community, I’d suppose.\n\n\nAnalogizing to my experience with AGI community-building, I’d say that organizing an FAI-oriented conference (with a name not involving “Friendly AI”) bringing together people from diverse disciplines, with a broad variety of perspectives, to discuss related issues freely and without any implicit assumption built into the event that the MIRI/FHI perspective is the most likely path to a solution, would be a reasonable start.\n\n\nOne minor comment is that, since MIRI is closely associated in the futurist community with a very particular and somewhat narrow set of perspectives on Friendly AI, if there is to be an effort to build a broader research community focused on FAI, it might be better if MIRI did this in conjunction with some other organization or organizations having reputations for greater inclusiveness.\n\n\nA broader comment is: I wonder if MIRI is framing the problem too narrowly. In your KurzweilAI review of James Barrat’s recent book, you define Friendly AI research as the problem “Figure out how to make sure the first self-improving intelligent machines will be human-friendly and will stay that way.”\n\n\nBut there are an awful lot of assumptions built into that formulation. It presents a strong bias toward certain directions of research, which may or may not be the best ones. For instance, Francis Heylighen, David Weinbaum and their colleagues at the Global Brain Institute have interesting (and potentially valuable) things to say about AI and human extinction risk, yet would not be comfortable shoehorning their thinking into a formulation like the above.\n\n\nSo I think you should find a good way to formulate the core concern at the base of FAI research in a broader way that will attract researchers with a greater variety of intellectual backgrounds and interests and theoretical orientations.
The real issue you’re concerned with, according to my understanding, is something like: To increase the odds that, as AI advances beyond the human level and allied technologies advance as well, the world continues to be a place that is reasonably acceptable (and perhaps even awesome!) according to human values. This may sound the same to you as “Figure out how to make sure the first self-improving intelligent machines will be human-friendly and will stay that way.” but it won’t sound the same to everybody…\n\n\nIMO an emerging FAI community, to be effective, will have to be open to a variety of different conceptual approaches to “increasing the odds that, as AI advances beyond the human level and allied technologies advance as well, the world continues to be a place that is reasonably acceptable (and perhaps even awesome!) according to human values.” — including approaches that have nothing directly to do with self-improving machines. Ironically, I suspect that this would lead to an influx of creative thinking into the subcommunity of researchers specifically concerned with “figuring out how to make sure the first self-improving intelligent machines will be human-friendly and will stay that way.”\n\n\n\n\n---\n\n\n**Luke**: Thanks, Ben!\n\n\nThe post [Ben Goertzel on AGI as a Field](https://intelligence.org/2013/10/18/ben-goertzel/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-18T07:17:02Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "767f4b84ceb4f043d881431cebe11323", "title": "MIRI’s October Newsletter", "url": "https://intelligence.org/2013/10/12/miris-october-newsletter/", "source": "miri", "source_type": "blog", "text": "Greetings from the Executive Director\nDear friends,\nThe big news
this month is that Paul Christiano and Eliezer Yudkowsky are giving talks at Harvard and MIT about the work coming out of MIRI’s workshops, on Oct. 15th and 17th, respectively (details below).\nMeanwhile we’ve been planning future workshops and preparing future publications. Our experienced document production team is also helping to prepare [Nick Bostrom](http://nickbostrom.com/)’s *Superintelligence* book for publication. It’s a very good book, and should be released by Oxford University Press in mid-2014.\nBy popular demand, MIRI research fellow Eliezer Yudkowsky now has a few “Yudkowskyisms” available on t-shirts, at [Rational Attire](http://rationalattire.com/). Thanks to Katie Hartman and Michael Keenan for setting this up.\nCheers,\nLuke Muehlhauser\nExecutive Director\n\n\nUpcoming Talks at Harvard and MIT\nIf you live near Boston, you’ll want to come see Eliezer Yudkowsky give a talk about MIRI’s research program in the spectacular Stata building on the MIT campus, on **October 17th**.\nHis talk is titled **Recursion in rational agents: Foundations for self-modifying AI**. There will also be a party the next day in MIT’s Building 6, with Yudkowsky in attendance.\nTwo days earlier, Paul Christiano will give a technical talk to a smaller audience about one of the key results from MIRI’s research workshops thus far.
This talk is titled **Probabilistic metamathematics and the definability of truth**.\nFor more details on both talks, see the blog post [here](http://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/).\n\n\n\n“Our Final Invention” Released\n[*Our Final Invention*](http://www.amazon.com/Our-Final-Invention-Intelligence-ebook/dp/B00CQYAWRY/ref=as_li_ss_tl?tag=miri05-20), the best book yet written about the challenges of getting good outcomes from smarter-than-human AI, has been released.\nMIRI’s Luke Muehlhauser reviewed the book for the *Kurzweil AI* blog [here](http://www.kurzweilai.net/book-review-our-final-invention-artificial-intelligence-and-the-end-of-the-human-era), and [GCRI’s](http://gcrinstitute.org/) Seth Baum reviewed the book for *Scientific American*‘s Guest Blog [here](http://blogs.scientificamerican.com/guest-blog/2013/10/11/our-final-invention-is-ai-the-defining-issue-for-humanity/). You can also read an excerpt from the book [on Tor.com](http://www.tor.com/stories/2013/09/our-final-invention-excerpt).\nIf you read the book, be sure to write a quick review of it [on Amazon.com](http://www.amazon.com/Our-Final-Invention-Intelligence-ebook/dp/B00CQYAWRY/ref=as_li_ss_tl?tag=miri05-20).\n\n\nNew Analyses and Conversations\nMuch of MIRI’s research is published directly to [our blog](http://intelligence.org/blog/). Since our last newsletter, we’ve published the following **conversations**:\n[Paul Rosenbloom on Cognitive Architectures](http://intelligence.org/2013/09/25/paul-rosenbloom-interview/). For decades, Rosenbloom was a project manager for Soar, perhaps the earliest AGI project. In this interview, he discussed his new cognitive architecture project, Sigma.\n[Effective Altruism and Flow-Through Effects](http://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/).
Carl Shulman, who was at the time a MIRI research fellow, participated in a conversation about effective altruism and flow-through effects. This issue is highly relevant to MIRI’s mission, since MIRI focuses on activities that are intended to produce altruistic value via their flow-through effects on the invention of AGI. The other participants were [FHI’s](http://www.fhi.ox.ac.uk/) Nick Beckstead, UC Berkeley’s Paul Christiano, [GiveWell’s](http://www.givewell.org/) Holden Karnofsky, and [CEA’s](http://home.centreforeffectivealtruism.org/) Rob Wiblin. \nWe’ve also published two new **research analyses**:\n[How well will policy-makers handle AGI?](http://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-agi-initial-findings/) One question relevant to superintelligence strategy is: “How well should we expect policy makers to handle the invention of AGI, and what does this imply about how much effort to put into AGI risk mitigation vs. other concerns?” To investigate this question, we asked Jonah Sinick to examine how well policy-makers handled past events analogous in some ways to the future invention of AGI, and summarize his findings.\n[Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness](http://intelligence.org/2013/10/03/proofs/). The approaches sometimes called “provable security,” “provable safety,” and “provable friendliness” should not be misunderstood as offering 100% guarantees of security, safety, and friendliness. Rather, these approaches are meant to provide more confidence than we could otherwise have, all else equal, that a given system is secure, safe, or “friendly.” Especially for something as complex as Friendly AI, our message is: “**If we prove it correct, it might work. 
If we *don’t* prove it correct, it *definitely* won’t work**.” \n\n\nDouble Your Donations via Corporate Matching\nMIRI has now partnered with [Double the Donation](http://doublethedonation.com/), a company that makes it easier for donors to take advantage of donation matching programs offered by their employers.\nMore than 65% of Fortune 500 companies match employee donations, and 40% offer grants for volunteering, but many of these opportunities go unnoticed. Most employees don’t know these programs exist!\nGo to [MIRI’s Double the Donation page](http://doublethedonation.com/miri) to find out whether your employer can match your donations to MIRI.\n\n\nRockstar Research Magazine\nWant an intelligent, independent source of news that isn’t just the same “hot,” “trending” stories everyone else is covering?\nMany of MIRI’s fans have told us they like [Rockstar Research Magazine](http://rockstarresearch.com/), MIRI Deputy Director [Louie Helm](http://intelligence.org/team/#staff)’s daily news brief on AI, life hacking, effective altruism, rationality, and independent research. Recent stories include:\n* [Moore’s Law has Foreseeable Path to 2035](http://rockstarresearch.com/moores-law-has-foreseeable-path-to-2035/)\n* [How Susceptible Are Jobs To Computerization?](http://rockstarresearch.com/how-susceptible-are-jobs-to-computerisation/)\n* [New Longevity Benefits of Whey Protein](http://rockstarresearch.com/new-longevity-benefits-of-whey-protein/)\n* [Boosting Reading Speed 40% in 1 Month](http://rockstarresearch.com/speed-reading/)\n* [Why Scientists Held Back Details on a Unique Botulinum Toxin](http://rockstarresearch.com/why-scientists-held-back-details-on-a-unique-botulinum-toxin/)\n\n\nVisit [Rockstar Research](http://rockstarresearch.com/) to browse past stories and sign up to receive weekly updates.\n\n\nFeatured Volunteer: Mallory Tackett\nMallory Tackett helps MIRI with publicity tasks.
She is a physics undergraduate researching neural networks.\nMallory learned about MIRI through [Randal Koene](http://rak.minduploading.org/) when she was studying consciousness and whole brain emulation. She agrees with MIRI’s message that caution must be taken in developing AGI.\nEventually, Mallory would like to participate in [MIRI’s research workshops](http://intelligence.org/get-involved/#workshop). She would also like to spread awareness about the possibilities of whole brain emulation and virtual reality.\n\n\nThe post [MIRI’s October Newsletter](https://intelligence.org/2013/10/12/miris-october-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-12T21:24:44Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "39d5b61ec39508dbf3dfa084945b7fb3", "title": "Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness", "url": "https://intelligence.org/2013/10/03/proofs/", "source": "miri", "source_type": "blog", "text": "![encryption](https://intelligence.org/wp-content/uploads/2013/09/encryption.jpg)In 1979, Michael Rabin [proved](http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-212.pdf) that his encryption system could be inverted — so as to decrypt the encrypted message — only if an attacker could factor *n*.
And since this factoring task is [computationally hard](http://en.wikipedia.org/wiki/Computational_hardness_assumption) for any sufficiently large *n*, Rabin’s encryption scheme was said to be “provably secure” so long as one used a sufficiently large *n*.\n\n\nSince then, creating encryption algorithms with this kind of “provable security” has been a major goal of cryptography,[1](https://intelligence.org/2013/10/03/proofs/#footnote_0_10521 \"An encryption system is said to be provably secure if its security requirements are stated formally, and proven to be satisfied by the system, as was the case with Rabin’s system. See Wikipedia.\") and new encryption algorithms that meet these criteria are sometimes marketed as “provably secure.”\n\n\nUnfortunately, the term “provable security” can be misleading,[2](https://intelligence.org/2013/10/03/proofs/#footnote_1_10521 \"Security reductions can still be useful (Damgård 2007). My point is just that the term “provable security” can be misleading, especially to non-experts.\") for several reasons[3](https://intelligence.org/2013/10/03/proofs/#footnote_2_10521 \"For more details, and some additional problems with the term “provable security,” see Koblitz & Menezes’ Another Look website and its linked articles, especially Koblitz & Menezes (2010).\").\n\n\n\n**First,** Rabin-style security proofs don’t actually prove security. Instead, they prove that an efficient algorithm for breaking the encryption system would imply an efficient algorithm for solving the underlying computational problem (e.g. integer factorization), which is assumed to be computationally hard. And though these underlying problems are *assumed* to be computationally hard, they aren’t actually *proven* to be computationally hard.
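The flavor of that reduction can be seen in a toy Python sketch of Rabin-style encryption (a hypothetical illustration with deliberately tiny primes; the variable names and demo message are ours, and nothing here is remotely secure): whoever holds the factors p and q can decrypt by taking modular square roots, while doing so without the factors is provably as hard as factoring n.

```python
def ext_gcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Tiny Blum primes (both congruent to 3 mod 4) -- illustration only;
# real security requires n far too large to factor.
p, q = 7, 11
n = p * q  # public key; p and q together are the private key

def encrypt(m, n):
    return (m * m) % n  # c = m^2 mod n

def decrypt(c, p, q):
    # With the factorization, square roots mod p and mod q are easy:
    mp = pow(c, (p + 1) // 4, p)
    mq = pow(c, (q + 1) // 4, q)
    # Recombine via the Chinese Remainder Theorem; Rabin decryption
    # yields four candidate roots, one of which is the plaintext.
    _, yp, yq = ext_gcd(p, q)  # p*yp + q*yq == 1
    roots = set()
    for sp in (mp, p - mp):
        for sq in (mq, q - mq):
            roots.add((sp * q * yq + sq * p * yp) % n)
    return roots

c = encrypt(20, n)
assert 20 in decrypt(c, p, q)
```

Note the four candidate roots: a real deployment must disambiguate the plaintext (e.g. via redundancy in the message), one of several gaps between the clean reduction and a secure implementation.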
(Hence, the word “proof” in “security proof” is different from what “proof” usually means in mathematics, where a proof can be deductively traced all the way from the proven mathematical statement to the axioms of the mathematical system being used.)\n\n\n**Second,** the system’s formal security requirements might fail to capture everything the attacker can do to break the system, and what information is available to the attacker. For example, it wasn’t long after [Rabin (1979)](http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-212.pdf) that Ron Rivest (the R in [RSA](http://en.wikipedia.org/wiki/RSA_(algorithm))) pointed out[4](https://intelligence.org/2013/10/03/proofs/#footnote_3_10521 \"See Williams (1980).\") that if an adversary could trick the user of Rabin’s crypto system to decrypt a cipher text of the adversary’s choosing, this would make it much easier for the adversary to break the whole system. As another example, [Bleichenbacher (1998)](http://commonsenseatheism.com/wp-content/uploads/2013/09/Bleichenbacher-Chosen-ciphertext-attacks-against-protocols-based-on-the-RSA-encryption-standard-PKCS-1.pdf) showed that an early version of the RSA encryption system could be successfully attacked by making intelligent use of the error messages received after failed attacks.\n\n\nIn general, security proofs usually do not account for [side channel attacks](http://en.wikipedia.org/wiki/Side_channel_attack), which make use of information taken from the crypto system’s physical implementation. For example, a physical computer will inevitably use power and give off electromagnetic radiation when running a crypto system. In some cases, statistical analyses of this power consumption or radiation can be used to break the crypto system.\n\n\n**Third,** sometimes, mathematical proofs themselves are wrong, including security proofs. For example, [Boldyreva et al. 
(2007)](http://www.cc.gatech.edu/~aboldyre/papers/bgoy-old.pdf) created a new kind of digital signature (called OMS) which they claimed was more efficient and more secure than other systems with similar functionality. But then [Hwang et al. (2009)](http://commonsenseatheism.com/wp-content/uploads/2013/09/Hwang-et-al-Universal-forgery-of-the-identity-based-sequential-aggregate-signature-scheme.pdf) showed that the “provably secure” OMS protocol could easily be broken. As it turned out, there was an undetected error in Boldyreva et al.’s 4-page proof for their Theorem 5.1.[5](https://intelligence.org/2013/10/03/proofs/#footnote_4_10521 \"See Koblitz & Menezes (2010). For additional examples of errors in mathematical proofs, see Branwen (2012); Kornai (2013).\")\n\n\nIn fact, *no* system can be “provably secure” in the strongest sense, since (1) we can’t be 100% certain that the system’s formal security requirements have been specified properly, and (2) we can’t be 100% certain the security proof itself is without error.\n\n\nSimilarly, some computer systems are labeled “[provably safe](https://www.google.com/search?q=%22provably+safe%22&oq=%22provably+safe%22),” usually because the software has (in whole or in part) been [formally verified](http://en.wikipedia.org/wiki/Formal_verification) against formally specified safety criteria.[6](https://intelligence.org/2013/10/03/proofs/#footnote_5_10521 \"See also Transparency in Safety-Critical Systems.\") But once again, we must remember that (1) we can never be 100% certain the formal safety specification captures everything we care about, and (2) we can never be 100% certain that a complex mathematical proof (formal verification) is without error.\n\n\nThe same reasoning applies to [AGI](http://intelligence.org/2013/08/11/what-is-agi/) “[friendliness](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence).” Even if we discover (apparent) solutions to known open problems in [Friendly AI 
research](http://intelligence.org/research/), this does not mean that we can ever build an AGI that is “provably friendly” in the strongest sense, because (1) we can never be 100% certain that our formal specification of “friendliness” captures everything we care about, and (2) we can never be 100% certain that there are no errors in our formal reasoning. (In fact, the “friendliness specification” problem seems much harder than the “security specification” and “safety specification” problems, since proper friendliness specification involves a heavy dose of philosophy, and humans are [notoriously bad at philosophy](http://commonsenseatheism.com/wp-content/uploads/2012/10/Brennan-Scepticism-about-philosophy.pdf).)\n\n\nThus, the approaches sometimes called “provable security,” “provable safety,” and “provable friendliness” should not be misunderstood as offering 100% guarantees of security, safety, and friendliness.[7](https://intelligence.org/2013/10/03/proofs/#footnote_6_10521 \"Because this misunderstanding is so common, MIRI staff try to avoid using phrases like “provably Friendly.” MIRI research fellow Eliezer Yudkowsky is often accused of advocating for “provably friendly” AGI, but I was unable to find an example of Yudkowsky himself using that phrase. If Yudkowsky has used that phrase in the past, it seems likely that he was talking about having certain proofs about the internal structure of the AGI (to give us more confidence in its friendliness than would otherwise be the case), not proofs about its behavior in the physical world.\nAdded 10-16-2014: A reader pointed me to several passages from Yudkowsky’s writings that could have contributed to this confusion. In an October 2008 post, Yudkowsky used the phrase “provably correct Friendly AI,” but didn’t explain what he meant by the phrase. Yudkowsky also used the phrase “Friendly AI that (provably correctly) self-improves” in an October 2008 blog post comment. 
In a December 2008 post, Yudkowsky wrote: “Programmers operating with strong insight into intelligence, directly create along an efficient and planned pathway, a mind capable of modifying itself with deterministic precision – provably correct or provably noncatastrophic self-modifications. This is the only way I can see to achieve narrow enough targeting to create a Friendly AI.” And finally, in a September 2005 mailing list comment, Yudkowsky wrote a clearer exposition of what he tends to mean when using the word “provable” in the context of Friendly AI:\nNo known algorithm could independently prove a CPU design correct in the age of the universe, but with human-chosen *lemmas* we can get machine-*verified* correctness proofs. The critical property of proof in an axiomatic system is not that it’s certain, since the system might be inconsistent for all we know or can formally prove. The critical property is that, *if* the system is consistent, then a proof of ten thousand steps is as reliable as a proof of ten steps. There are no independent sources of failure. I hope that provably correct rewrites, like provably correct CPUs, will be managable if the AI can put forth deductive reasoning with efficiency, tractability, and scalability at least equalling that of a human mathematician. An AI-complete problem? Sure, but let’s not forget – we *are* trying to design an AI.\n\") Rather, **these approaches are meant to provide *more confidence than we could otherwise have*, all else equal, that a given system is secure, safe, or “friendly.”**\n\n\nEspecially for something as complex as Friendly AI, our message is: “If we prove it correct, it *might* work. If we *don’t* prove it correct, it *definitely* won’t work.”[8](https://intelligence.org/2013/10/03/proofs/#footnote_7_10521 \"This isn’t to say we expect to be able to (or need to) formally verify an entire AGI system. One possibility is that early AGIs will be hierarchical autonomous agents — ala Fisher et al. 
(2013) — and only the “top” control layer of the agent will be constructed to be amenable to formal correctness proofs of one kind or another.\")\n\n\nHumans often [want zero-risk solutions](http://en.wikipedia.org/wiki/Zero-risk_bias), but there aren’t any zero-risk solutions for computer security, safety, or friendliness. On the other hand, we shouldn’t ignore the value of formal proofs. They give us more precision than any other method available, and are a useful complement to (e.g.) testing.[9](https://intelligence.org/2013/10/03/proofs/#footnote_8_10521 \"See also Kornai (2012); Muehlhauser (2013); Damgård (2007).\")\n\n\nAnd when it comes to building self-improving AGI, we’d like to have as much confidence in the system’s friendliness as we can. A self-improving AGI is not just a “[safety-critical system](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/),” but a *world*-critical system. Friendliness research is hard, but it’s worth it, given [the stakes](https://intelligence.org/files/IE-EI.pdf).[10](https://intelligence.org/2013/10/03/proofs/#footnote_9_10521 \"My thanks to Louie Helm and Eliezer Yudkowsky for their feedback on this post.\")\n\n\n\n\n---\n\n1. An encryption system is said to be provably secure if its security requirements are stated formally, and proven to be satisfied by the system, as was the case with Rabin’s system. See [Wikipedia](http://en.wikipedia.org/wiki/Provable_security).\n2. Security reductions can still be useful ([Damgård 2007](http://www.daimi.au.dk/~ivan/positionpaper.pdf)). My point is just that the term “provable security” can be misleading, especially to non-experts.\n3. For more details, and some additional problems with the term “provable security,” see Koblitz & Menezes’ [Another Look](http://anotherlook.ca/) website and its linked articles, especially [Koblitz & Menezes (2010)](http://www.ams.org/notices/201003/rtx100300357p.pdf).\n4. 
See [Williams (1980)](http://commonsenseatheism.com/wp-content/uploads/2013/09/Williams-A-modification-of-the-RSA-public-key-encryption-procedure.pdf).\n5. See [Koblitz & Menezes (2010)](http://www.ams.org/notices/201003/rtx100300357p.pdf). For additional examples of errors in mathematical proofs, see [Branwen (2012)](http://www.gwern.net/The%20Existential%20Risk%20of%20Mathematical%20Error); [Kornai (2013)](http://kornai.com/Drafts/agi12.pdf).\n6. See also [Transparency in Safety-Critical Systems](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/).\n7. Because this misunderstanding is so common, MIRI staff try to avoid using phrases like “provably Friendly.” MIRI research fellow Eliezer Yudkowsky is [often accused](https://www.google.com/search?q=yudkowsky+\"provably+friendly\") of advocating for “provably friendly” AGI, but I was unable to find an example of Yudkowsky himself using that phrase. If Yudkowsky *has* used that phrase in the past, it [seems likely](http://multiverseaccordingtoben.blogspot.com/2010/03/creating-predictably-beneficial-agi.html) that he was talking about having certain proofs about the internal structure of the AGI (to give us more confidence in its friendliness than would otherwise be the case), not proofs about its behavior in the physical world.\n*Added 10-16-2014:* A reader pointed me to several passages from Yudkowsky’s writings that could have contributed to this confusion. In [an October 2008 post](http://lesswrong.com/lw/un/on_doing_the_impossible/), Yudkowsky used the phrase “provably correct Friendly AI,” but didn’t explain what he meant by the phrase. Yudkowsky also used the phrase “Friendly AI that (provably correctly) self-improves” in an October 2008 blog post [comment](http://lesswrong.com/lw/v4/which_parts_are_me/ocu). 
In a [December 2008 post](http://lesswrong.com/lw/wg/permitted_possibilities_locality/), Yudkowsky wrote: “Programmers operating with strong insight into intelligence, directly create along an efficient and planned pathway, a mind capable of modifying itself with deterministic precision – provably correct or provably noncatastrophic self-modifications. This is the only way I can see to achieve narrow enough targeting to create a Friendly AI.” And finally, in a September 2005 mailing list comment, Yudkowsky [wrote](http://www.sl4.org/archive/0509/12230.html) a clearer exposition of what he tends to mean when using the word “provable” in the context of Friendly AI:\n\n\n\n> No known algorithm could independently prove a CPU design correct in the age of the universe, but with human-chosen \\*lemmas\\* we can get machine-\\*verified\\* correctness proofs. The critical property of proof in an axiomatic system is not that it’s certain, since the system might be inconsistent for all we know or can formally prove. The critical property is that, \\*if\\* the system is consistent, then a proof of ten thousand steps is as reliable as a proof of ten steps. There are no independent sources of failure. I hope that provably correct rewrites, like provably correct CPUs, will be managable if the AI can put forth deductive reasoning with efficiency, tractability, and scalability at least equalling that of a human mathematician. An AI-complete problem? Sure, but let’s not forget – we \\*are\\* trying to design an AI.\n> \n>\n8. This isn’t to say we expect to be able to (or need to) formally verify an *entire* AGI system. One possibility is that early AGIs will be hierarchical autonomous agents — *ala* [Fisher et al. (2013)](http://cacm.acm.org/magazines/2013/9/167136-verifying-autonomous-systems/abstract) — and only the “top” control layer of the agent will be constructed to be amenable to formal correctness proofs of one kind or another.\n9. 
See also [Kornai (2012)](http://kornai.com/Drafts/agi12.pdf); [Muehlhauser (2013)](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/); [Damgård (2007)](http://www.daimi.au.dk/~ivan/positionpaper.pdf).\n10. My thanks to Louie Helm and Eliezer Yudkowsky for their feedback on this post.\n\nThe post [Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness](https://intelligence.org/2013/10/03/proofs/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-03T19:33:10Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "58140f1a9185fdd0c18cf669256f4d87", "title": "Upcoming Talks at Harvard and MIT", "url": "https://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/", "source": "miri", "source_type": "blog", "text": "![Paul & Eliezer](https://intelligence.org/wp-content/uploads/2013/10/Paul-Eliezer.jpg)On **October 15th** from 4:30-5:30pm, MIRI workshop participant [Paul Christiano](http://rationalaltruist.com/) will give a technical talk at the [Harvard University Science Center](https://www.google.com/maps/preview#!q=Harvard+University+Science+Center%2C+Oxford+Street%2C+Cambridge%2C+MA&data=!1m4!1m3!1d14614!2d-71.116864!3d42.3764958!4m10!1m9!4m8!1m3!1d2947!2d-71.115478!3d42.376308!3m2!1i1024!2i768!4f13.1), room 507, as part of the [Logic at Harvard](http://logic.harvard.edu/colloquium.php) seminar and colloquium.\n\n\nChristiano’s title and abstract are:\n\n\n\n> **Probabilistic metamathematics and the definability of truth**\n> \n> \n> No model M of a sufficiently expressive theory can contain a truth predicate T such that for all S, M |= T(“S”) if and only if M |= S. I’ll consider the setting of probabilistic logic, and show that there are probability distributions over models which contain an “objective probability function” P such that M |= a < P(“S”) < b almost surely whenever a < P(M |= S) < b. 
This demonstrates that a probabilistic analog of a truth predicate is possible as long as we allow infinitesimal imprecision. I’ll argue that this result significantly undercuts the philosophical significance of Tarski’s undefinability theorem, and show how the techniques involved might be applied more broadly to resolve obstructions due to self-reference.\n> \n> \n\n\n![Stata Center](https://intelligence.org/wp-content/uploads/2013/10/Stata-Center.jpg)Then, on **October 17th** from 4:00-5:30pm, [Scott Aaronson](http://www.scottaaronson.com/) will host a talk by MIRI research fellow [Eliezer Yudkowsky](http://yudkowsky.net/).\n\n\nYudkowsky’s talk will be somewhat more accessible than Christiano’s, and will take place in MIT’s [Ray and Maria Stata Center](https://www.google.com/maps/preview#!data=!1m4!1m3!1d3006!2d-71.0906703!3d42.3615315!4m12!2m11!1m10!1s0x89e370a95d3025a9%3A0xb1de557289ff6bbe!3m8!1m3!1d6502594!2d-119.306607!3d37.2691745!3m2!1i1024!2i768!4f13.1) (see image on right), in room 32-123 (aka Kirsch Auditorium, with 318 seats). There will be light refreshments 15 minutes before the talk. Yudkowsky’s title and abstract are:\n\n\n\n> **Recursion in rational agents: Foundations for self-modifying AI**\n> \n> \n> Reflective reasoning is a familiar but formally elusive aspect of human cognition. This issue comes to the forefront when we consider building AIs which model other sophisticated reasoners, or who might design other AIs which are as sophisticated as themselves. Mathematical logic, the best-developed contender for a formal language capable of reflecting on itself, is beset by impossibility results. Similarly, standard decision theories begin to produce counterintuitive or incoherent results when applied to agents with detailed self-knowledge. 
In this talk I will present some early results from workshops held by the Machine Intelligence Research Institute to confront these challenges.\n> \n> \n> The first is a formalization and significant refinement of Hofstadter’s “superrationality,” the (informal) idea that ideal rational agents can achieve mutual cooperation on games like the prisoner’s dilemma by exploiting the logical connection between their actions and their opponent’s actions. We show how to implement an agent which reliably outperforms classical game theory given mutual knowledge of source code, and which achieves mutual cooperation in the one-shot prisoner’s dilemma using a general procedure. Using a fast algorithm for finding fixed points, we are able to write implementations of agents that perform the logical interactions necessary for our formalization, and we describe empirical results.\n> \n> \n> Second, it has been claimed that Gödel’s second incompleteness theorem presents a serious obstruction to any AI understanding why its own reasoning works or even trusting that it does work. We exhibit a simple model for this situation and show that straightforward solutions to this problem are indeed unsatisfactory, resulting in agents who are willing to trust weaker peers but not their own reasoning. We show how to circumvent this difficulty without compromising logical expressiveness.\n> \n> \n> Time permitting, we also describe a more general agenda for averting self-referential difficulties by replacing logical deduction with a suitable form of probabilistic inference. The goal of this program is to convert logical unprovability or undefinability into very small probabilistic errors which can be safely ignored (and may even be philosophically justified).\n> \n> \n\n\nAlso, on **Oct 18th** at 7pm there will be a [Less Wrong](http://lesswrong.com/) / [*Methods of Rationality*](http://hpmor.com/) meetup/party on the MIT campus in [Building 6](http://whereis.mit.edu/?go=6), room 120. 
There will be snacks and refreshments, and Yudkowsky will be in attendance.\n\n\nThe post [Upcoming Talks at Harvard and MIT](https://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-10-02T00:43:11Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "cb11753dc9169c479a6240128a9cd987", "title": "Paul Rosenbloom on Cognitive Architectures", "url": "https://intelligence.org/2013/09/25/paul-rosenbloom-interview/", "source": "miri", "source_type": "blog", "text": "![Paul Rosenbloom portrait](http://intelligence.org/wp-content/uploads/2013/09/Rosenbloom_w150.jpg) Paul S. Rosenbloom is Professor of Computer Science at the University of Southern California and a project leader at USC’s Institute for Creative Technologies. He was a key member of USC’s Information Sciences Institute for two decades, leading new directions activities over the second decade, and finishing his time there as Deputy Director. Earlier he was on the faculty at Carnegie Mellon University (where he had also received his MS and PhD in computer science) and Stanford University (where he had also received his BS in mathematical sciences with distinction).\n\n\nHis research concentrates on cognitive architectures – models of the fixed structure underlying minds, whether natural or artificial – and on understanding the nature, structure and stature of computing as a scientific domain.  
He is an AAAI Fellow, the co-developer of [Soar](http://sitemaker.umich.edu/soar/home) (one of the longest standing and most well developed cognitive architectures), the primary developer of [Sigma](http://cs.usc.edu/%7Erosenblo/) (which blends insights from earlier architectures such as *Soar* with ideas from graphical models), and the author of [*On Computing: The Fourth Great Scientific Domain*](http://www.amazon.com/On-Computing-Fourth-Scientific-Domain/dp/0262018322) (MIT Press, 2012).\n\n\n\n \n\n**Luke Muehlhauser**: From 1983-1998 you were co-PI of perhaps the longest-running cognitive architecture aimed at artificial *general* intelligence, the [Soar project](http://sitemaker.umich.edu/soar/home), which is still under development. One of your current projects is a new cognitive architecture called [Sigma](http://cs.usc.edu/%7Erosenblo/), which is “an attempt to build a functionally elegant, grand unified cognitive architecture/system – based on graphical models and piecewise continuous functions – in support of virtual humans and intelligent agents/robots.” What lessons did you learn from Soar, and how have they informed your work on Sigma?\n\n\n\n\n---\n\n\n**Paul S. Rosenbloom**: That’s an interesting and complex question, and one I’ve thought about off and on over the past few years, as Sigma was motivated by both the strengths and weaknesses of Soar. My current sense is that there were five lessons from Soar that have significantly impacted Sigma.\n\n\nThe first is the importance of seeking unified architectures for cognition. Not focusing on individual capabilities in isolation helps avoid local optima on the path towards general intelligence and opens the way for deep scientific results that are only accessible when considering interactions among capabilities.  
In Sigma, this idea has been broadened from unification across cognitive capabilities, which is the primary concern in Soar, to unification across the full range of capabilities required for human(-level) intelligent behavior (including perception, motor, emotion, etc.). This is what is meant by *grand unified*.\n\n\nThe second is the importance of a uniform approach to architectures – at least as was exhibited through version 8 of Soar – in combination with Allen Newell’s exhortation to “listen to the architecture.” Many of the most interesting results from Soar arose from exploring how the few mechanisms already in the architecture could combine to yield new functional capabilities without adding new modules specifically for them. This has been recast in Sigma as *functional elegance*, or yielding something akin to “cognitive Newton’s laws”. This doesn’t necessarily mean allowing only one form of each capability in the architecture, as is implied by strict uniformity, but it does suggest searching for a small set of very general mechanisms through whose interactions the diversity of intelligent behavior arises. It also emphasizes deconstructing new capabilities in terms of existing architectural mechanisms in preference to adding new architectural mechanisms, and preferring to add microvariations to the architecture, rather than whole new modules, when extensions are necessary.\n\n\nThe third is the importance of having a working system that runs fast enough to provide feedback on the architecture from complex experiments. 
This is reflected in Sigma’s goal of *sufficient efficiency*.\n\n\nThe fourth is the functional elegance of a trio of nested control loops (reactive, deliberative and reflective); where flexibility increases (and speed decreases) with successive loops; where each earlier control loop acts as the inner loop of the next one; and where there is a mechanism such as chunking that can compile results generated within later/outer/slower loops into knowledge that is more directly accessible in earlier/inner/faster loops. This control structure is the largest conceptual fragment of Soar that has been carried directly over to Sigma, with work currently underway on how to generalize chunking appropriately for Sigma.\n\n\nThe fifth is that the earlier uniform versions of Soar did not provide sufficient architectural capability for full cognitive unification, and fell even further short when considering grand unification. Instead of leading the development of Sigma away from Soar’s original uniformity assumption, as has been the case with version 9 of Soar, this lesson led to a search for a more general set of uniform mechanisms; and, in particular, to the centrality of graphical models and piecewise-linear functions in Sigma. Graphical models yield state of the art algorithms across symbol, probability and signal processing, while piecewise-linear functions provide a single representation that can both approximate arbitrary continuous functions as closely as desired and be appropriately restricted for probabilistic and symbolic functions. The goal given this combination is to create a new breed of architecture that can provide the diversity of intelligent behavior found in existing state-of-the-art architectures, such as Soar and ACT-R, but in a simpler and more theoretically elegant fashion, while also extending in a functionally elegant manner to grand unification.\n\n\n\n\n\n---\n\n\n**Luke**: Where does Sigma fit into the space of AGI-aimed cognitive architectures? 
How is it similar to, and different from, [AIXI](http://wiki.lesswrong.com/wiki/AIXI), [LIDA](http://en.wikipedia.org/wiki/LIDA_%28cognitive_architecture%29), [DUAL](http://alexpetrov.com/proj/dual/), etc.? And why do you think Sigma might offer a more promising path toward AGI than these alternatives?\n\n\n\n\n---\n\n\n**Paul**: Sigma can be viewed as an attempt to merge together what has been learned from three decades of work in both cognitive architectures and graphical models (although clearly reflecting a personal perspective on what has been learned).  It is this combination that I believe provides a particularly promising path toward AGI.  Although a variety of cognitive architectures have been developed over the years, no existing architecture comes close to fully leveraging the potential of graphical models to combine (broad) generality with (state of the art) efficiency.\n\n\nI’m not at this point going to get into detailed comparisons with specific architectures, but one particularly illustrative dimension for comparison with the three you mentioned concerns functional elegance.  AIXI is at one extreme, attempting to get by with a very small number of basic mechanisms.  LIDA is at the other extreme, with a large number of distinct mechanisms.  I’m not terribly familiar with DUAL, but from what I have seen it is closer to AIXI than LIDA, although not as extreme as AIXI.  Along this dimension, Sigma is closest to DUAL, approaching the AIXI end of the spectrum but stopping short in service of sufficient efficiency.\n\n\nWith respect to your first question, concerning the broader space of AGI-aimed architectures, there are additional dimensions along which Sigma can also be situated, although I’ll only mention a few of them explicitly here.  First, in the near term Sigma is more directly aimed at functional criteria than criteria concerned with modeling of human cognition.  
There is however an indirect effect of such modeling criteria through lessons transferred from other architectures, and there is an ultimate intent of taking them more directly into consideration in the future.  Second, Sigma sits somewhere in the midrange on the dimension of how formal/mathematical the architecture is, with distinctly informal aspects – such as the nested control loops – combined with formal aspects inherited from graphical models.  The third and fourth dimensions are the related ones of high-level versus low-level cognition and central versus peripheral cognition.  Sigma is aimed squarely at the full spans of both of these dimensions, even with a starting point that has been more high level and central.\n\n\n\n\n---\n\n\n**Luke**: When discussing AGI with AI scientists, I often hear the reply that “AGI isn’t well-defined, so it’s useless to talk about it.” Has that been much of a problem for you? Which [operational definition of AGI](http://intelligence.org/2013/08/11/what-is-agi/) do you tend to use for yourself?\n\n\n\n\n---\n\n\n**Paul**: I tend to take a rather laid-back position on this overall issue.  Many AI scientists these days are uncomfortable with the vagueness associated with notions of general intelligence, and thus limit themselves to problems that are clearly definable, and for which progress can be measured in some straightforward, preferably quantifiable, manner. Some of the initial push for this came from funders, but there is a strong inherent appeal to the approach, as it makes it easy to say what you are doing and to determine whether or not you are making progress, even if the metric along which you are making progress doesn’t quite measure what you originally set out to achieve.  
It also makes it easier to defend what you are doing as science.\n\n\nHowever, in addition to working on Sigma over the past few years, I’ve also spent considerable time reflecting on the nature of science, and on the question of where computing more broadly fits in the sciences. The result is a very simple notion of what it means to do science, which is to (at least on average) increase our understanding over time. (More on this, and on the notion of computing as science – and in fact, as a great scientific domain that is the equal of the physical, life and social sciences – can be found in *On Computing: The Fourth Great Scientific Domain*, MIT Press, 2012.) There is obviously room within this definition for precisely measured work on well-defined problems, but also for more informal work on less well-defined problems.  Frankly, I often learn more from novel conjectures that get me thinking in entirely new directions, even when not yet well supported by evidence, than small ideas carefully evaluated.  This is one of the reasons I’ve enjoyed my recent participation in the AGI community.  Even though the methodology can often seem quite sketchy, I am constantly challenged by new ideas that get me thinking beyond the confines of where I currently am.\n\n\nMy primary goal ever since I got into AI in the 1970s has been the achievement of general human-level intelligence.  There have been various attempts over the years to define AGI or human-level intelligence, but none have been terribly satisfactory.  When I teach Intro to AI classes, I define intelligence loosely as “The common underlying capabilities that enable a system to be general, literate, rational, autonomous and collaborative,” and a cognitive architecture as the fixed structures that support this in natural and/or artificial systems.  
It is this fixed structure that I have tried to understand and build over most of the past three-to-four decades, with Soar and Sigma representing the two biggest efforts – spanning roughly two decades of my effort between them – but with even earlier work that included participating in a project on instructable production systems and developing the XAPS family of activation-based production system architectures.\n\n\nThe desiderata of grand unification, functional elegance, and sufficient efficiency were developed as a means of evaluating and explaining the kinds of things that I consider to be progress on architectures like Sigma.  Increasing the scope of intelligent behavior that can be yielded, increasing the simplicity and theoretical elegance of the production of such behaviors, and enabling these behaviors to run in (human-level) real time – i.e., at roughly 50 ms per cognitive cycle – are the issues that drive my research, and how I personally evaluate whether I am making progress.  Unfortunately these are not generally accepted criteria at traditional AI conferences, so I have not had great success in publishing articles about Sigma in such venues.  But there are other more non-traditional venues, such as AGI and an increasing number of others – e.g., the AAAI 2013 Fall Symposium on Integrated Cognition that Christian Lebiere and I are co-chairing – that are more open minded on criteria, and thus more receptive to such work.  
There are similar issues with funders, where these kinds of criteria do not for example fit well the traditional NSF model, but there are other funders who do get excited by the potential of such work.\n\n\n\n\n---\n\n\n**Luke**: [Stuart Russell](http://www.cs.berkeley.edu/%7Erussell/) organized a panel at [IJCAI-13](http://ijcai13.org/) on the question “What if we succeed [at building AGI]?”\n\n\nMany AI scientists seem to think of AGI merely in terms of scientific achievement, which would be rather incredible: “In the long run, AI is the only science,” as [Woody Bledsoe](http://en.wikipedia.org/wiki/Woody_Bledsoe) put it.\n\n\nOthers give serious consideration to the potential benefits of AGI, which are also enormous: imagine 1000 smarter-than-Einstein AGIs working on curing cancer.\n\n\nStill others ([including MIRI](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/), and [I think Russell](http://lesswrong.com/r/discussion/lw/4rx/agi_and_friendly_ai_in_the_dominant_ai_textbook/)) are pretty worried about the social consequences of AGI. Right now, humans steer the future not because we’re the fastest or the strongest but because we’re the smartest. So if we create machines smarter than we are, it’ll be them steering the future rather than us, and they might not steer it where we’d like to go.\n\n\nWhat’s your own take on the potential social consequences of AGI?\n\n\n\n\n---\n\n\n**Paul**: I’m less comfortable with this type of question in general, as I don’t have any particular expertise to bring to bear in answering it, but since I have thought some about it, here are my speculations.\n\n\nComputer applications, including traditional AI systems, are already having massive social consequences. They provide tools that make people more productive, inform and entertain them, and connect them with other people. 
They have become pervasive because they can be faster, more precise, more reliable, more comprehensive and cheaper than the alternatives. In the process they change us as individuals and as societies, and often eliminate jobs in situations where humans would otherwise have been employed. They also have yielded new kinds of jobs, but the balance between jobs lost versus those gained, whether in number or type, does not seem either terribly stable or predictable. Over the near future, I would be surprised if work on AGI were to have social consequences that are qualitatively different from these.\n\n\nBut what will happen if/when AGI were to yield human-level, or superhuman, general intelligence? One interesting sub-question is whether superhuman general intelligence is even possible. We already have compelling evidence that superhuman specialized AI is possible, but do people fall short of the optimum with respect to general intelligence in such a way that computers could conceivably exceed us to a significant degree? The notion of the singularity depends centrally on an (implicit?) assumption that such headroom does exist, but if you for example accept the tenets of rational analysis from cognitive psychology, people may already be optimally evolved (within certain limits) for the general environment in which they live. Still, even if superhuman general intelligence isn’t possible, AGI could conceivably still enable an endless supply of systems as intelligent as the most intelligent human; or it may just enable human-level artificial general intelligence to be tightly coupled with more specialized, yet still superhuman, computing/AI tools, which could in itself yield a significant advantage.\n\n\nIn such a world would an economic role remain for humans, or would we either be marginalized or need to extend ourselves through genetic manipulation and/or hybridization with computational (AGI or AI or other) systems? 
My guess is that we would either need to extend ourselves or become marginalized. In tandem though – and beyond just the economic sphere – we will also need to develop a broader notion of both intelligence and of agenthood to appropriately accommodate the greater diversity we will likely see, and of what rights and responsibilities should automatically accrue with different levels or aspects of both. In other words, we will need a broader and more fundamental ethics of agenthood that is not simply based on a few fairly gross distinctions, such as between adults and children or between people and animals or between natural and artificial systems. The notion of Laws of Robotics, and its ilk, assumes that artificial general intelligence is to be kept in subjection, but at some point this has to break down ethically, and probably pragmatically, even if the doubtful premise is granted that there is a way to guarantee that they could perpetually be kept in such a state. If they do meet or exceed us in intelligence and other attributes, it would degrade both them and us to maintain what would effectively be a new form of slavery, and possibly even justify an ultimate reversal of the relationship. I see no real long-term choice but to define, and take, the ethical high ground, even if it opens up the possibility that we are eventually superseded – or blended out of pure existence – in some essential manner.\n\n\n\n\n---\n\n\n**Luke**: As it happens, I took a look at your book *On Computing* about 4 months ago. 
I found myself intuitively agreeing with your main thesis:\n\n\n\n> This book is about the *computing sciences*, a broad take on computing that can be thought of for now as in analogy to the physical sciences, the life sciences, and the social sciences, although the argument will ultimately be made that this is much more than just an analogy, with computing deserving to be recognized as the equal of these three existing, great scientific domains — and thus *the fourth great scientific domain*.\n> \n> \n\n\nRather than asking you to share a summary of the book’s key points, let me instead ask this: How might the world look different if your book has a major impact among educated people in the next 15 years, as compared to the scenario where it has little impact?\n\n\n\n\n---\n\n\n**Paul**: I wrote the book more to impact how people think about computing – helping them to understand it as a rich, well structured and highly interdisciplinary domain of science (and engineering) – and where its future lies, than necessarily to create a world that looks qualitatively different. I would hope though that academic computing organizations will some day be better organized to reflect, facilitate and leverage this. I would also hope that more students will come into computing excited by its full potential, rather than focusing narrowly on a career in programming. 
I would further hope that funders will better understand what it means to do basic research in computing, which does not completely align with the more traditional model from the “natural” sciences.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Paul!\n\n\nThe post [Paul Rosenbloom on Cognitive Architectures](https://intelligence.org/2013/09/25/paul-rosenbloom-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-26T03:18:23Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7bf479957be6c553db5d0ba183101138", "title": "Effective Altruism and Flow-Through Effects", "url": "https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/", "source": "miri", "source_type": "blog", "text": "Last month, MIRI research fellow Carl Shulman[1](https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/#footnote_0_10484 \"Carl was a MIRI research fellow at the time of the conversation, but left MIRI at the end of August 2013 to study computer science.\") participated in a recorded debate/conversation about effective altruism and flow-through effects. 
This issue is highly relevant to MIRI’s mission, since MIRI focuses on activities that are intended to produce altruistic value [via their flow-through effects on the invention of AGI](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/).\n\n\nThe conversation ([mp3](https://docs.google.com/file/d/0B8_48dde-9C3VmlZSVp4YVVEckE/edit), [transcript](http://www.jefftk.com/p/flow-through-effects-conversation)) included:\n\n\n* Nick Beckstead, research fellow at the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford University\n* Paul Christiano, UC Berkeley grad student and blogger at [Rational Altruist](http://rationalaltruist.com/)\n* Holden Karnofsky, co-founder of [GiveWell](http://www.givewell.org/)\n* Carl Shulman, MIRI research fellow\n* Rob Wiblin, executive director at the [Center for Effective Altruism](http://home.centreforeffectivealtruism.org/)\n\n\nRecommended background reading includes:\n\n\n* Holden Karnofsky’s essay “[Flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/)“\n* Paul Christiano’s essay “[My outlook](http://rationalaltruist.com/2013/06/03/my-outlook/)“\n* MIRI’s [interview with Nick Beckstead](http://intelligence.org/2013/07/17/beckstead-interview/) on the importance of the far future\n\n\nTo summarize the conversation very briefly: All participants seemed to agree that more research on [flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/) would be high value. However, there’s a risk that such research isn’t highly tractable. For now, GiveWell will focus on other projects that seem more tractable. Rob Wiblin might try to organize some research on flow-through effects, to learn how tractable it is.\n\n\n\n\n---\n\n1. 
Carl was a MIRI research fellow at the time of the conversation, but [left MIRI](http://intelligence.org/2013/09/10/september-newsletter/) at the end of August 2013 to study computer science.\n\nThe post [Effective Altruism and Flow-Through Effects](https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-15T01:01:38Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ec70b4287fce34532f14b0003c22a077", "title": "Double Your Donations via Corporate Matching", "url": "https://intelligence.org/2013/09/14/double-your-donation/", "source": "miri", "source_type": "blog", "text": "[![double-the-donation-logo](https://intelligence.org/wp-content/uploads/2013/09/double-the-donation-logo.png)](http://doublethedonation.com/miri)MIRI has now partnered with [Double the Donation](http://doublethedonation.com/), a company that makes it easier for donors to take advantage of donation matching programs offered by their employers.\n\n\nMore than 65% of Fortune 500 companies match employee donations, and 40% offer grants for volunteering, but many of these opportunities go unnoticed. Most employees don’t know these programs exist!\n\n\nGo to MIRI’s Double The Donation page [here](http://doublethedonation.com/miri) to find out whether your employer can match your donations to MIRI.\n\n\nThe post [Double Your Donations via Corporate Matching](https://intelligence.org/2013/09/14/double-your-donation/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-15T00:42:27Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "087378a46eacaa29d91f85bcba4e10a1", "title": "How well will policy-makers handle AGI? 
(initial findings)", "url": "https://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-agi-initial-findings/", "source": "miri", "source_type": "blog", "text": "MIRI’s [mission](http://intelligence.org/about/) is “to ensure that the creation of smarter-than-human intelligence has a positive impact.”\n\n\nOne policy-relevant question is: **How well should we expect policy makers to handle the invention of AGI, and what does this imply about how much effort to put into AGI risk mitigation vs. other concerns?**\n\n\nTo investigate these questions, we asked [Jonah Sinick](http://mathisbeauty.org/aboutme.html) to examine how well policy-makers handled past events analogous in some ways to the future invention of AGI, and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as with our project on [how well we can plan for future decades](http://intelligence.org/2013/09/02/how-effectively-can-we-plan-for-future-decades). The post below is a summary of findings from [our full email exchange (.pdf)](http://intelligence.org/wp-content/uploads/2013/09/Elites-and-AI.pdf) so far.\n\n\nAs with our investigation of how well we can plan for future decades, **we decided to publish our initial findings after investigating only a few historical cases**. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that **we aren’t yet able to draw any confident conclusions about our core questions**.\n\n\nThe most significant results from this project so far are:\n\n\n1. We came up with a preliminary list of 6 seemingly-important ways in which a historical case could be analogous to the future invention of AGI, and evaluated several historical cases on these criteria.\n2. 
Climate change risk seems sufficiently disanalogous to AI risk that studying climate change mitigation efforts probably gives limited insight into how well policy-makers will deal with AGI risk: the expected damage of climate change appears to be very small relative to the expected damage due to AI risk, especially when one looks at expected damage to policy makers.\n3. The 2008 financial crisis appears, after a shallow investigation, to be sufficiently analogous to AGI risk that it should give us some small reason to be concerned that policy-makers will not manage the invention of AGI wisely.\n4. The risks to critical infrastructure from geomagnetic storms are far too small to be in the same reference class with risks from AGI.\n5. The eradication of smallpox is only somewhat analogous to the invention of AGI.\n6. Jonah performed very shallow investigations of how policy-makers have handled risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis, but these cases need more study before even “initial thoughts” can be given.\n7. We identified additional historical cases that could be investigated in the future.\n\n\nFurther details are given below. For sources and more, please see [our full email exchange (.docx)](https://intelligence.org/wp-content/uploads/2013/09/Elites-and-AI.docx).\n\n\n\n### 6 ways a historical case can be analogous to the invention of AGI\n\n\nIn conversation, Jonah and I identified six features of the future invention of AGI that, if largely shared by a historical case, seem likely to allow the historical case to shed light on how well policy-makers will deal with the invention of AGI:\n\n\n1. AGI may become a major threat at a somewhat unpredictable time.\n2. AGI may become a threat when the world has very limited experience with it.\n3. A good outcome with AGI may require solving a difficult global coordination problem.\n4. 
Preparing for the AGI threat adequately may require lots of careful work in advance.\n5. Policy-makers have strong personal incentives to solve the AGI problem.\n6. A bad outcome with AGI would be a global disaster, and a good outcome with AGI would have global humanitarian benefit.\n\n\nMore details on these criteria and their use are given in the second email of our full email exchange.\n\n\n \n\n\n### Risks from climate change\n\n\nPeople began to see climate change as a potential problem in the early 1970s, but there was some ambiguity as to whether human activity was causing warming (because of carbon emissions) or cooling (because of smog particles). The first [IPCC](http://en.wikipedia.org/wiki/Intergovernmental_Panel_on_Climate_Change) report was issued in 1990, and stated that there was substantial anthropogenic global warming due to greenhouse gases. By 2001, there was a strong scientific consensus behind this claim.\n\n\nWhile policy-makers’ response to risks from climate change might seem likely to shed light on whether policy-makers will deal wisely with AGI, there are some important disanalogies:\n\n\n* The harms of global warming are expected to fall disproportionately on disadvantaged people in poor countries, not on policy-makers. So policy-makers have much less personal incentive to solve the problem than is the case with AGI.\n* In the median case, humanitarian losses from global warming [seem to be](http://lesswrong.com/lw/hi1/potential_impacts_of_climate_change/) about 20% of GDP per year for the poorest people. In light of anticipated economic development and diminishing marginal utility, this is a *much* smaller negative humanitarian impact than AGI risk (even ignoring future generations). 
For example, economist Indur Goklany [estimated](http://wattsupwiththat.com/2012/10/17/is-climate-change-the-number-one-threat-to-humanity/) that “through 2085, only 13% of [deaths] from hunger, malaria, and extreme weather events (including coastal flooding from sea level rise) should be from [global] warming.”\n* Thus, potential analogies to AGI risk come from climate change’s *tail risk*. But there seem to be few credentialed scientists who have views compatible with a prediction that even a temperature increase in the 95th percentile of the probability distribution (by 2100) would do more than just begin to render some regions of Earth uninhabitable.\n* According to the [5th IPCC](http://www.ipcc.ch/meetings/session31/inf3.pdf), the risk of human extinction from climate change seems very low: “Some thresholds that all would consider dangerous have no support in the literature as having a non-negligible chance of occurring. For instance, a ‘runaway greenhouse effect’—analogous to Venus—appears to have virtually no chance of being induced by anthropogenic activities.”\n\n\n \n\n\n### The 2008 financial crisis\n\n\nJonah did a shallow investigation of the 2008 financial crisis, but the preliminary findings are interesting enough for us to describe them in some detail. Jonah’s impressions about the relevance of the 2008 financial crisis to the AGI situation are based on a reading of *[After the Music Stopped](http://www.amazon.com/After-Music-Stopped-Financial-Response/dp/1594205302/)* by Alan Blinder, who was the vice chairman of the Federal Reserve for 1.5 years during the Clinton administration. Naturally, many additional sources should be consulted before drawing firm conclusions about the relevance of policy-makers’ handling of the financial crisis to their likelihood of handling AGI wisely.\n\n\nBlinder’s seven main factors leading to the recession are (p. 27):\n\n\n1. 
Inflated asset prices, especially of houses (the housing bubble) but also of certain securities (the bond bubble);\n2. Excessive leverage (heavy borrowing) throughout the financial system and the economy;\n3. Lax financial regulation, both in terms of what the law left unregulated and how poorly the various regulators performed their duties;\n4. Disgraceful banking practices in subprime and other mortgage lending;\n5. The crazy-quilt of unregulated securities and derivatives that were built on these bad mortgages;\n6. The abysmal performance of the statistical rating agencies, which helped the crazy-quilt get stitched together; and\n7. The perverse compensation systems in many financial institutions that created powerful incentives to go for broke.\n\n\nWith these factors in mind, let’s look at the strength of the analogy between the 2008 financial crisis and the future invention of AGI:\n\n\n1. Almost tautologically, a financial crisis is unexpected, though we do know that financial crises happen with some regularity.\n2. The 2008 financial crisis was not unprecedented in kind, only in degree (in some ways).\n3. Avoiding the 2008 financial crisis would have required solving a difficult national coordination problem, rather than a global coordination problem. Still, this analogy seems fairly strong. As Jonah writes, “While the 2008 financial crisis seems to have been largely US specific (while having broader ramifications), there’s a sense in which preventing it would have required solving a difficult coordination problem. The causes of the crisis are diffuse, and responsibility falls on many distinct classes of actors.”\n4. Jonah’s analysis wasn’t deep enough to discern whether the 2008 financial crisis is analogous to the future invention of AGI with regard to how much careful work would have been required in advance to avert the risk.\n5. In contrast with AI risk, the financial crisis wasn’t a life or death matter for almost any of the actors involved. 
Many people in finance didn’t have incentives to avert the financial crisis: indeed, some of the key figures involved were rewarded with large bonuses. But it’s plausible that government decision makers had incentive to avert a financial crisis for reputational reasons, and many interest groups are adversely affected by financial crises.\n6. Once again, the scale of the financial crisis wasn’t on a par with AI risk, but it was closer to that scale than the other risks Jonah looked at in this initial investigation.\n\n\nJonah concluded that “the conglomerate of poor decisions [leading up to] the 2008 financial crisis constitute a small but significant challenge to the view that [policy-makers] will successfully address AI risk.” His reasons were:\n\n\n1. The magnitude of the financial crisis is nontrivial (even if small) compared with the magnitude of the AI risk problem (not counting future generations).\n2. The financial crisis adversely affected a very broad range of people, apparently including a large fraction of those people in positions of power (this seems truer here than in the case of climate change). A recession is bad for most businesses and for most workers. Yet these actors weren’t able to recognize the problem, coordinate, and prevent it.\n3. The reasons that policy-makers weren’t able to recognize the problem, coordinate, and prevent it seem related to reasons why people might not recognize AI risk as a problem, coordinate, and prevent it. First, several key actors involved seem to have exhibited conspicuous overconfidence and neglect of tail risk (e.g. Summers, etc. ignoring Brooksley Born’s warnings about excessive leverage). If true, this shows that people in positions of power are notably susceptible to overconfidence and neglect of tail risk. Avoiding overconfidence and giving sufficient weight to tail risk may be crucial in mitigating AI risk. 
Second, one gets a sense that bystander effect and tragedy of the commons played a large role in the case of the financial crisis. There are risks that weren’t adequately addressed because doing so didn’t fall under the purview of any of the existing government agencies. This may have corresponded to a mentality of the type “that’s not my job — somebody else can take care of it.” If people think that AI risk is large, then they might think “if nobody’s going to take care of it then I will, because otherwise I’m going to die.” But if people think that AI risk is small, they might think “This probably won’t be really bad for me, and even though someone should take care of it, it’s not going to be me.”\n\n\n \n\n\n### Risks from geomagnetic storms\n\n\nLarge geomagnetic storms like the [1859 Carrington Event](http://en.wikipedia.org/wiki/Solar_storm_of_1859) are infrequent, but could cause serious damage to satellites and critical infrastructure. See [this OECD report](http://www.oecd.org/gov/risk/46891645.pdf) for an overview.\n\n\nJonah’s investigation revealed a wide range in expected losses from geomagnetic storms, from $30 million per year to $30 billion per year. But even this larger number amounts to $1.5 trillion in expected losses over the next 50 years. Compare this with the losses from the 2008 financial crisis (roughly a 1 in 50 years event), which are [estimated](http://thinkprogress.org/economy/2012/09/13/846281/financial-crisis-lost-trillions/) to be about $13 trillion for Americans alone.\n\n\nThough serious, the risks from geomagnetic storms appear to be small enough to be disanalogous to the future invention of AGI.\n\n\n \n\n\n### The eradication of smallpox\n\n\n[Smallpox](http://en.wikipedia.org/wiki/Smallpox), after killing more than 500 million people over the past several millennia, was eradicated in 1979 after a decades-long global eradication effort. 
Though a hallmark of successful global coordination, it doesn’t seem especially relevant to whether policy-makers will handle the invention of AGI wisely.\n\n\nHere’s how the eradication of smallpox does or doesn’t fit our criteria for being analogous to the future invention of AGI:\n\n\n1. Smallpox didn’t arrive at an unpredictable time; it arrived millennia before the eradication campaign.\n2. The world didn’t have experience eradicating a disease before smallpox was eradicated, but a number of nations had eliminated smallpox.\n3. Smallpox eradication required solving a difficult global coordination problem, but in a way disanalogous to the invention of AGI safety (see the other points on this list).\n4. Preparing for smallpox eradication required effort in advance in some sense, but the effort had mostly already been exerted before the campaign was announced.\n5. Nations without smallpox had an incentive to eradicate smallpox so that they didn’t have to spend money immunizing citizens to prevent the virus from being (re)introduced to their countries. For example, in 1968, the United States spent about $100 million on routine smallpox vaccinations.\n6. Smallpox can be thought of as a global disaster: by 1966, about 2 million people died of smallpox each year.\n\n\n \n\n\n### Shallow investigations of risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis\n\n\nJonah’s shallow investigation of risks from cyberwarfare revealed that experts disagree significantly about the nature and scope of these risks. 
It’s likely that dozens of hours of research would be required to develop a well-informed model of these risks.\n\n\nTo investigate how policy-makers handled the discovery that chlorofluorocarbons (CFCs) depleted the ozone layer, Jonah summarized the first 100 pages of *[Ozone Crisis: The 15-Year Evolution of a Sudden Global Emergency](http://www.amazon.com/Ozone-Crisis-Evolution-Emergency-Editions/dp/0471528234/)* (see our full email exchange for the summary). This historical case seems worth investigating further, and may be a case of policy-makers solving a global risk with surprising swiftness, though whether the response was appropriately prompt is debated.\n\n\nJonah also did a shallow investigation of the [Cuban missile crisis](http://en.wikipedia.org/wiki/Cuban_missile_crisis). It’s difficult to assess how likely it was for the crisis to escalate into a global nuclear war, but it appears that policy-makers made many poor decisions leading up to and during the Cuban missile crisis (see our full email exchange for a list). Jonah concludes:\n\n\n\n> even if the probability of the Cuban missile crisis leading to an all out nuclear war was only 1% or so, the risk was still sufficiently great so that the way in which the actors handled the situation is evidence against elites handling the creation of AI well. (This contrasts with the situation with climate change, in that elites had strong personal incentives to avert an all-out nuclear war.)\n> \n> \n\n\nHowever, this is only a guess based on a shallow investigation, and should not be taken too seriously before a more thorough investigation of the historical facts can be made.\n\n\n \n\n\n### Additional historical cases that could be investigated\n\n\nWe also identified additional historical cases that could be investigated for potentially informative analogies to the future invention of AGI:\n\n\n1. The 2003 [Iraq War](http://en.wikipedia.org/wiki/Iraq_War)\n2. 
The frequency with which dictators are deposed or assassinated due to “unforced errors” they made\n3. [Nuclear proliferation](http://en.wikipedia.org/wiki/Nuclear_proliferation)\n4. [Recombinant DNA](http://en.wikipedia.org/wiki/Recombinant_DNA)\n5. [Molecular nanotechnology](http://www.amazon.com/Radical-Abundance-Revolution-Nanotechnology-Civilization/dp/1610391136/)\n6. [Near Earth objects](http://www.amazon.com/Near-Earth-Objects-Finding-Them-Before/dp/0691149291/)\n7. Pandemics and potential pandemics (e.g. [HIV](http://en.wikipedia.org/wiki/HIV), [SARS](http://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome))\n\n\nThe post [How well will policy-makers handle AGI? (initial findings)](https://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-agi-initial-findings/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-12T07:17:44Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "3bf21b4add1f2287d60943c7d0a1ab85", "title": "MIRI’s September Newsletter", "url": "https://intelligence.org/2013/09/10/september-newsletter/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| |\n\n  |\n| \n\n| | | | |\n| --- | --- | --- | --- |\n| \n\n| | | |\n| --- | --- | --- |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nGreetings from the Executive Director\nDear friends,\nWith your help, we [finished](http://intelligence.org/2013/08/21/2013-summer-matching-challenge-completed/) **our largest fundraiser ever**, raising $400,000 for our research program. My thanks to everyone who contributed!\nWe continue to publish non-math research to our blog, including an ebook copy of *The Hanson-Yudkowsky AI-Foom Debate* (see below). 
In the meantime, earlier math results are currently being written up, and new results are being produced at [our ongoing decision theory workshop](http://intelligence.org/2013/07/08/miris-september-2013-workshop/).\nThis October, Eliezer Yudkowsky and Paul Christiano are giving talks about MIRI’s research at MIT and Harvard. Exact details are still being confirmed, so if you live near Boston then you may want to [subscribe to our blog](http://intelligence.org/blog/) so that you can see the details as soon as they are announced (which will be long before the next newsletter).\nThis November, Yudkowsky and I are visiting Oxford to “sync up” with our frequent collaborators at the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford University, and also to run our [November research workshop](http://intelligence.org/2013/08/30/miris-november-2013-workshop-in-oxford/) (in Oxford).\nAnd finally, let me share a bit of fun with you. Philosopher Robby Bensinger re-wrote Yudkowsky’s [Five Theses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) using the xkcd-inspired [Up-Goer Five Text Editor](http://splasho.com/upgoer5/), which only allows use of the 1000 most common words in English. [Enjoy](http://intelligence.org/2013/09/05/five-theses-using-only-simple-words/).\nCheers,\nLuke Muehlhauser\nExecutive Director\n\n\n\nThe Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!\nIn late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.\nThe original debate took place in a long series of blog posts, which are collected here. 
This book also includes a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject, a summary of the debate written by Kaj Sotala, and a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.\nComments from the authors are included at the end of each chapter, along with a link to the original post. The curious reader is encouraged to use these links to view the original posts and all comments. This book contains minor updates, corrections, and additional citations.\nThe debate is completely free for download in various eBook formats. See [here](http://intelligence.org/ai-foom-debate/).\n\n\nNew Analyses And Interviews\nAs with some other independent research organizations (e.g. [GiveWell](http://www.givewell.org/)), much of MIRI’s research is published directly to our blog.\nSince our last newsletter, we’ve published the following **expert interviews**:\n[Holden Karnofsky on Transparent Research Analyses](http://intelligence.org/2013/08/25/holden-karnofsky-interview/):\nWe’re certainly developing new methods of analysis and evaluation [for GiveWell Labs]. Our working framework for shallow investigations replaces “proven, cost-effective, scalable charities” with “important, tractable, non-crowded causes” in terms of what we’re looking for. Much of our work so far has been more qualitative in nature, aiming to clarify and understand the basic landscape of causes rather than assess the extent to which approaches are “proven.”\n[Stephen Hsu on Cognitive Genomics](http://intelligence.org/2013/08/31/stephen-hsu-on-cognitive-genomics/):\nRecently the results of a massive GWAS for genes associated with educational attainment were published in *Science*. Some of the researchers in this large collaboration are reluctant to openly state that the hits are associated with cognitive ability (as opposed to, say, Conscientiousness, which would also positively impact educational success). 
But if you read the paper carefully you can see that there is good evidence that the alleles are actually associated with cognitive ability (g or IQ).\n[Laurent Orseau on Artificial General Intelligence](http://intelligence.org/2013/09/06/laurent-orseau-on-agi/):\nThe traditional [agent AI] framework is dualist in the sense that it considers that the “mind” of the agent (the process with which the agent chooses its actions) lies outside of the environment. But we all know that if we ever program an intelligent agent on a computer, this program and process will not be outside of the world, they will be a part of it and, even more importantly, computed by it. This led us to define our space-time embedded intelligence framework and equation.\nWe also published **two new analyses**.\n[Transparency in Safety-Critical Systems](http://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/):\nBlack box testing can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI’s most useful methods are among its least transparent. Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. There are exceptions to this general rule, and some people are working to make statistical methods more transparent.\n[How effectively can we plan for future decades? (initial findings)](http://intelligence.org/2013/09/04/how-effectively-can-we-plan-for-future-decades/)\nHow effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades? To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. 
We pre-committed to publishing our entire email exchange on the topic (with minor editing)… [and] we decided to publish our initial findings after investigating only a few historical cases.\n\n\nAn Appreciation Of Carl Shulman\nThis month, Carl Shulman is leaving MIRI to study computer science full-time.\nDuring his time with MIRI, Carl authored or co-authored [several articles](http://intelligence.org/all-publications/) related to x-risk strategy and intelligence amplification, some of which are still forthcoming. He also contributed extensively to dozens of publications on which he is not a co-author, and contributed to the research program at the [Center for Effective Altruism](http://home.centreforeffectivealtruism.org/). He continues to blog at [Reflective Disequilibrium](http://reflectivedisequilibrium.blogspot.com/), and remains a Research Associate of the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford University.\nI’ve enjoyed working with Carl during the past couple years, and he remains a valuable advisor with a remarkably broad and deep grasp of the complex, interlocking issues that face humanity as we navigate the 21st century.\nCarl: Thanks so much for your work with MIRI! I wish you the best of luck on your future adventures.\n\n\nFeatured Volunteer: Francisco Garcia\nFrancisco Garcia helps out by translating MIRI’s articles into Spanish. His area of interest is robotics and autonomous agents, and he hopes to intern with MIRI in the future. Francisco is an MS/PhD student at the University of Massachusetts-Amherst. He recently joined the Resource Bounded Reasoning Lab (RBR) under the direction of Dr. Shlomo Zilberstein.\nFrancisco found out about MIRI about 2 years ago through a friend. 
He started volunteering to get more familiar with some of the modern ideas about AI and learn about the leading researchers, believing that MIRI’s work will play a crucial role in the not-so-distant future.\nFrancisco wants to build a career as a researcher. He is fascinated by planetary exploration and learning robots, and dreams of working in a place where he can research AI methods and apply them to the real world, like Boston Dynamics or NASA. He believes that technology will become even more ubiquitous, with AI techniques improving greatly and leading to a new understanding of the world, making humanity as a whole increasingly more efficient.\n\n\nThe post [MIRI’s September Newsletter](https://intelligence.org/2013/09/10/september-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-11T00:57:27Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "70ba70e01740ac79de93d1b2dfa43b8b", "title": "Laurent Orseau on Artificial General Intelligence", "url": "https://intelligence.org/2013/09/06/laurent-orseau-on-agi/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2013/09/orseau_w150.jpg) [Laurent Orseau](http://www.agroparistech.fr/mia/orseau) has been an associate professor (*maître de conférences*) at [AgroParisTech](http://www.agroparistech.fr/), Paris, France, since 2007. In 2003, he graduated from a professional master’s in computer science at the [National Institute of Applied Sciences](http://www.insa-rennes.fr/en.html) in Rennes and from a research master’s in artificial intelligence at [University of Rennes 1](http://www.univ-rennes1.fr/english/). He obtained his [PhD](http://www.agroparistech.fr/mmip/maths/laurent_orseau/papers/phd-orseau-2007.pdf) in 2007. His goal is to build a *practical theory* of artificial general intelligence.
He and his co-author Mark Ring have been awarded the [Solomonoff AGI Theory Prize](http://agi-conf.org/2011/prizes-support/) at AGI’2011 and the [Kurzweil Award for Best Idea](http://agi-conf.org/2012/prizes/) at AGI’2012.\n\n\n\n**Luke Muehlhauser**: In the past few years you’ve written some interesting papers, often in collaboration with [Mark Ring](http://www.idsia.ch/%7Ering/), that use [AIXI](http://wiki.lesswrong.com/wiki/AIXI)-like models to analyze some interesting features of different kinds of advanced theoretical agents. For example in [Ring & Orseau (2011)](http://www.idsia.ch/%7Ering/AGI-2011/Paper-B.pdf), you showed that some kinds of advanced agents will maximize their rewards by taking direct control of their input stimuli — kind of like the rats who “wirehead” when scientists give them direct control of the input stimuli to their reward circuitry ([Olds & Milner 1954](http://commonsenseatheism.com/wp-content/uploads/2013/07/Olds-Milner-Positive-reinforcement-produced-by-electrical-stimulation-of-septal-area-and-other-regions-of-rat-brain.pdf)). At the same time, you showed that at least one kind of agent, the “knowledge-based” agent, does not wirehead. Could you try to give us an intuitive sense of why some agents would wirehead, while the knowledge-based agent would not?\n\n\n\n\n---\n\n\n**Laurent Orseau**: You’re starting with a very interesting question!\n\n\nThis is because knowledge-seeking has a fundamental distinctive property: Unlike rewards, knowledge cannot be faked by manipulating the environment. The agent cannot itself introduce new knowledge in the environment because, well, it already knows what it would introduce, so it’s not new knowledge. Rewards, by contrast, can easily be faked.\n\n\nI’m not 100% sure, but it seems to me that knowledge seeking may be the only non-trivial utility function that has this non-falsifiability property.
In [Reinforcement Learning](http://webdocs.cs.ualberta.ca/%7Esutton/book/the-book.html), there is an omnipresent problem called the exploration/exploitation dilemma: The agent must both *exploit* its knowledge of the environment to gather rewards, and *explore* its environment to learn if there are better rewards than the ones it already knows about. This implies in general that the agent [cannot collect as many rewards](http://tor-lattimore.com/pubs/HL11asymptotic.pdf) as [it would like](http://www.agroparistech.fr/mmip/maths/essaimia/lib/exe/fetch.php?media=http%3A%2F%2Fwww.agroparistech.fr%2Fmmip%2Fmaths%2Flaurent_orseau%2Fpapers%2Forseau-TCS-2013-optimality.pdf).\n\n\nBut for knowledge seeking, the goal of the agent is to explore, i.e., exploration *is* exploitation. Therefore the above dilemma collapses to doing only exploration, which is the only meaningful unified solution to this dilemma (the exploitation-only solution leads either to very low rewards or is possible only when the agent already has knowledge of its environment, as in dynamic programming). In more philosophical words, this [unifies epistemic rationality and instrumental rationality](http://www.princeton.edu/%7Etkelly/papers/epistemicasinstrumental.pdf).\n\n\nNote that the agent introduced in [Orseau & Ring (2011)](http://www.idsia.ch/%7Ering/AGI-2011/Paper-A.pdf), and better developed in [Orseau (2011)](http://www.agroparistech.fr/mmip/maths/essaimia/lib/exe/fetch.php?media=http%3A%2F%2Fwww.agroparistech.fr%2Fmmip%2Fmaths%2Flaurent_orseau%2Fpapers%2Forseau-ALT-2011-knowledge-seeking.pdf) where a convergence proof is given, actually works only for deterministic environments. Its problem is that it may consider noise as information, and get addicted to it, i.e., it may stare at a detuned TV screen forever.
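The exploration/exploitation dilemma Orseau describes can be made concrete with a textbook toy (my illustration, not from the interview): a minimal ε-greedy agent on a Bernoulli multi-armed bandit, where ε controls how often the agent explores a random arm instead of exploiting the arm with the best current reward estimate.

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy agent on a Bernoulli multi-armed bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits the arm with the highest estimated mean reward.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        # Bernoulli reward drawn with the arm's true success probability
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

estimates, total = epsilon_greedy_bandit([0.2, 0.5, 0.8])
best_arm = max(range(3), key=lambda a: estimates[a])
```

For a pure knowledge-seeking agent this tradeoff collapses, since exploring is the whole objective; the failure mode Orseau flags is that a naive knowledge seeker in a stochastic environment can treat raw noise as an inexhaustible source of “information.”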
One could well consider this noise addiction a form of “self-delusion”.\n\n\nFortunately, with [Tor Lattimore](http://tor-lattimore.com) and [Marcus Hutter](http://www.hutter1.net), we are finalizing a paper for [ALT 2013](http://www-alg.ist.hokudai.ac.jp/%7Ethomas/ALT13/) where we considered all computable stochastic environments. This new agent does not have the defective behavior of the 2011 agent, and I think it would have better behavior even in deterministic environments. For example, it would (it seems) not focus for too long on the same source of information, and may from time to time get back to explore the rest of the environment before eventually coming back to the original source; i.e., it is not a monomaniac agent.\n\n\nA side note: If you (wrongly) understand knowledge-seeking as learning to predict all possible futures, then a kind of self-delusion may be possible: The agent might just jump into a trap, where all its observations would be the same whatever its actions, and it would thus have converged to optimal prediction. But we showed that the knowledge-seeking agent would give no value to such actions.\n\n\n\n\n---\n\n\n**Luke**: In [two](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf) [other](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_74.pdf) papers, you and Mark Ring addressed a long-standing issue in AI: the naive Cartesian dualism of the ubiquitous agent-environment framework.
Could you explain why the agent-environment framework is Cartesian, and also what work you did in those two papers?\n\n\n\n\n---\n\n\n**![robot agent environment model](https://intelligence.org/wp-content/uploads/2013/09/robot-agent-environment-model.jpg)Laurent**: In the [traditional](http://webdocs.cs.ualberta.ca/%7Esutton/book/the-book.html) [agent framework](http://aima.cs.berkeley.edu/), we consider that the agent *interacts* with its environment by sending, at each interaction cycle, an action that the environment can take into account to produce an observation, which the agent can in turn take into account to start the next interaction cycle and output a new action. This framework is very useful in practice because it avoids a number of complications of real life. Those complications are exactly what we wanted to address head-on. Because at some point you need to pull your head out of the sand and start dealing with the complex but important issues. But certainly many people, and in particular people working with robots, are quite aware that the real world is not a dualist framework. So in a sense, it was an obvious thing to do, especially because it seems that no one had done it before, at least from this point of view and to the best of our knowledge.\n\n\nThe traditional framework is [dualist](http://plato.stanford.edu/entries/dualism/) in the sense that it considers that the “mind” of the agent (the process with which the agent chooses its actions) lies outside of the environment. But we all know that if we ever program an intelligent agent on a computer, this program and process will not be outside of the world, they will be a part of it and, even more importantly, *computed* by it.
This led us to define our [space-time embedded intelligence](http://www.agroparistech.fr/mia/equipes:membres:page:laurent:embedded) framework and equation.\n\n\nPut simply, the idea is to consider a (realistic) environment, and a memory block of some length on some computer or robot *in this environment*. Then what is the best initial configuration of the bits on this memory block according to some measure of utility upon the expected future history?\n\n\nSome people worry that this is too general (in particular if you just consider some block of bits in an environment, not necessarily on a computer) and that we lose the essence of agency, which is to deal with inputs and outputs. But they forget that a) this systemic framework also allows for defining stones (simply ignore the inputs and output a constant value) and b) this is how the real world is: If an AGI can duplicate itself and split itself into so many parts on many computers, robots and machines, how can we really identify this agent as a single systemic entity?\n\n\nSome other people worry about how this framework could be used in practice, and that it is too difficult to deal with. Our goal was not to define a framework where theorems are simple to prove and algorithms simple to write, but to define a framework for AGI that is closer to the real world. If the latter is difficult to deal with, then so be it. But don’t blame the framework, blame the real world. Anyway, we believe there are still interesting things to do with this framework. And I believe it is at least still useful to help people not forget that the real world is different from the usual textbook simplifications.
This is probably not very relevant for applied machine learning and narrow AI research, but I believe it is very important for AGI research.\n\n\nHowever, let me get one thing straight: Even in the traditional framework, an agent can still predict that it may be “killed” (in some sense), for example if [an anvil falls on its body](http://wiki.lesswrong.com/wiki/Anvil_problem). This is possible if the body of the agent, excluding the brain but including its sensors and effectors, is considered to be part of the environment: The agent can then predict that the anvil will destroy them, and that it will be unable to get any information or reward from the environment, or to perform any action on it. Whenever we consider our skull (or rather, the skull of the robot) and brain to *always* be unbreakable, non-aging, and not subject to drugs, alcohol and external events such as heat, accelerations and magnetic waves, we can quite safely use the traditional framework.\n\n\nBut remove one of these hypotheses and the way the agent computes may become different from what it assumes it would be, hence leading to different action selection schemes. Regarding artificial agents, tampering with source code is even easier than with a human brain. AGIs of the future will probably face a gigantic number of cracking and modification attempts, and the agent itself and its designers should be well aware that this source code and memory are not in a safe dualist space. In the “[Memory issues of intelligent agents](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_74.pdf)” paper, we considered various consequences of giving the environment the possibility to read and write the agent’s memory of the interaction history. It appears difficult for the agent to be aware of such modifications in general. We must not forget that amnesia happens to people, and it may happen to robots too, e.g. after a collision.
And security by obfuscation can only delay memory hackers, [even when considering a natural brain](http://www.sciencemag.org/content/341/6144/387.abstract). Another interesting consequence is that deterministic policies cannot always be optimal, unlike the optimal and deterministic [AIXI](http://www.hutter1.net/ai/aixigentle.htm) in the dualist framework.\n\n\n\n\n---\n\n\n**Luke**: Do you think the [AIXI framework](http://wiki.lesswrong.com/wiki/AIXI), including the limited but tractable approximations like [MC-AIXI](http://www.jair.org/media/3125/live-3125-5397-jair.pdf), provides a plausible path toward real-world [AGI](http://intelligence.org/2013/08/11/what-is-agi/)? If not, what do you see as its role in AGI research?\n\n\n\n\n---\n\n\n**Laurent**: Approximating AIXI can be done in very many ways. The main ideas are [building/finding good and simple models](http://webdocs.cs.ualberta.ca/%7Esutton/book/ebook/node94.html) of the environment, and performing some planning on these models; i.e., it is a model-based approach (by contrast to [Q-learning](http://en.wikipedia.org/wiki/Q-learning), for example, which is model-free: it does not model the environment, but only learns to predict the expected rewards per action/state). This is a [very common approach](http://umichrl.pbworks.com/w/page/7597592/RL%20is%20model-free%20%28or%20direct%29) in reinforcement learning, because some may argue that model-free methods are “blind”, in the sense that they don’t learn about their environments, they just “know” what to do. Another important component of AIXI is the interaction history (instead of a state-based observation), and approximations may need to deal appropriately with compressing this history, possibly with loss. Hutter is working on this aspect with [feature RL](http://www.hutter1.net/official/bib.htm#lstphi), with nice results.
So yes, approximating AIXI can be seen as a very plausible way toward real-world AGI.\n\n\nFinding computation-efficient approximations is not an easy task, and it will quite probably require a number of neat ideas that will make it feasible, but it’s certainly a path worth researching. However, personally, I prefer to think that the agent must learn *how* to model its environment, which is something deeper than a model-based approach.\n\n\nEven without considering AIXI approximations, AIXI still is very important for AGI research because it unifies all important properties of cognition, like agency (interaction with an environment), knowledge representation and memory, understanding, reasoning, goals, problem solving, planning and action selection, abstraction, [generalization without overfitting](http://www.scholarpedia.org/article/Minimum_description_length), multiple hypotheses, [creativity](http://www.idsia.ch/%7Ejuergen/creativity.html), exploration and [curiosity](http://www.idsia.ch/%7Ejuergen/interest.html), optimization and utility maximization, prediction, uncertainty, with [incremental](http://world.std.com/%7Erjs/nips02.pdf), [on-line](http://web.mit.edu/%7E6.863/www/spring2009/readings/gold67limit.pdf), lifelong, [continual learning](http://www.cs.utexas.edu/users/kuipers/readings/Ring-mlj-97.pdf) in arbitrarily complex environments, without a restart state, no [i.i.d.](http://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables) or [stationarity assumption](http://en.wikipedia.org/wiki/Stationary_process), etc. and does all this in a very simple, [elegant and precise manner](http://hutter1.net/ai/aixi1linel.gif). I believe that if one does not understand what a tour de force AIXI is, one cannot seriously hope to tackle the AGI problem. People tend to think that simple ideas are easy to find because they are easy to read or to explain orally.
But they tend to forget that easy to read does not mean easy to grok, and certainly not easy to find! The simplest ideas are the best ones, especially in research.\n\n\n\n\n---\n\n\n**Luke**: You speak of AIXI-related work as a pretty rich subfield with many interesting lines of research to pursue that are related to universal agents. Do you think there are other AGI-related lines of inquiry that are as promising or productive as AIXI-related work? For example Schmidhuber’s [Gödel machine](http://en.wikipedia.org/wiki/G%C3%B6del_machine), the [SOAR architecture](http://en.wikipedia.org/wiki/Soar_(cognitive_architecture)), etc.?\n\n\n\n\n---\n\n\n**Laurent**: I can’t really say much about the cognitive architectures. It looks very difficult to say whether such a design would work correctly autonomously over several decades. It’s interesting work with nice ideas, but I can’t see what kind of confidence I could have in such designs when regarding long-term AGI. That’s why I prefer simple and general approaches, formalized with some convergence proof or a proof of another important property, and that can give you confidence that your design will work for more than a few days ahead.\n\n\nRegarding the Gödel machine (GM), I do think it’s a very nice design, but I have two grievances with it. The first is that it’s currently not sufficiently formalized, so it’s difficult to state if and how it really works. The second is that it relies on an [automated theorem prover](http://en.wikipedia.org/wiki/Automated_theorem_proving). Searching for proofs is extremely complicated. To draw a parallel with [Levin-search](http://www.scholarpedia.org/article/Universal_search) (LS): given a goal output string (an improvement in the GM), you enumerate programs (propositions in the GM) and run them to see if they output the goal string (a search for a proof of improvement in the GM).
This last part is the problem: in LS, the programs are fast to run, whereas in GM there is an additional search step for each proposition, so this looks *very* roughly like going from exponential (LS) to double-exponential (GM). And LS is already not really practical.\n\n\nTheorem proving is even more complicated when you need to prove that there will be an improvement of the system *at an unknown future step*. Maybe it would work better if the kinds of proofs were limited to some class, for example using simulation of the future steps up to some horizon given a model of the world. These kinds of proofs are easier to check and have a guaranteed termination, e.g. if the model class for the environment is based on Schmidhuber’s [Speed Prior](http://www.idsia.ch/%7Ejuergen/speedprior.html). But this starts to look pretty much like an approximation of AIXI, doesn’t it?\n\n\nReinforcement Learning in general is very promising for AGI: Even though most researchers in this field are not directly interested in AGI, they still try to find the most [general possible methods while remaining practical](http://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process). It’s far more obvious in RL than in the rest of Machine Learning.\n\n\nBut I think there are many lines of research that could be pushed toward some kind of AGI level. In machine learning, some fields like [Genetic Programming](http://en.wikipedia.org/wiki/Genetic_programming), [Inductive Logic Programming](http://en.wikipedia.org/wiki/Inductive_logic_programming), [Grammar Induction](http://en.wikipedia.org/wiki/Grammar_induction) and (let’s get bold) why not Recurrent [Deep Neural Networks](http://en.wikipedia.org/wiki/Deep_learning) (probably with an additional short-term memory mechanism of some sort), and possibly other research fields, are all based on some powerful induction schemes that could well lead to thinking machines if researchers in those fields wanted to.
Schmidhuber’s [OOPS](http://www.idsia.ch/%7Ejuergen/oops.html) is also a very interesting design, based on Levin Search. It is limited in that it cannot truly reach the learning-to-learn level, but could be extended with a true probability distribution over programs, as described by Solomonoff. And there is of course also the neuroscience way of trying to understand or at least [model the brain](http://bluebrain.epfl.ch/).\n\n\n\n\n---\n\n\n**Luke:** Thanks, Laurent!\n\n\nThe post [Laurent Orseau on Artificial General Intelligence](https://intelligence.org/2013/09/06/laurent-orseau-on-agi/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-07T02:29:53Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ec2aa52816382a36a83326335c6b3506", "title": "Five Theses, Using Only Simple Words", "url": "https://intelligence.org/2013/09/05/five-theses-using-only-simple-words/", "source": "miri", "source_type": "blog", "text": "![xkcd at desk](https://intelligence.org/wp-content/uploads/2013/09/xkcd-at-desk.png)A [recent xkcd comic](http://xkcd.com/1133/) described the Saturn V rocket using only the 1000 most frequently used words (in English). The rocket was called “up-goer five,” and the liquid hydrogen feed line was the “thing that lets in cold wet air to burn.” This inspired a geneticist to make the [Up-Goer Five Text Editor](http://splasho.com/upgoer5/), which forces you to use only the 1000 most frequent words. *Mental Floss* recently collected [18 scientific ideas explained using this restriction](http://mentalfloss.com/article/48793/18-complicated-scientific-ideas-explained-simply).\n\n\nWhat does this have to do with MIRI?
Well, young philosopher [Robby Bensinger](http://nothingismere.com/) has now [re-written](http://lesswrong.com/lw/ij5/the_upgoer_five_game_explaining_hard_ideas_with/) MIRI’s [Five Theses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) using the Up-Goer Five text editor, with amusing results:\n\n\n* **Intelligence explosion**: If we make a computer that is good at doing hard things in lots of different situations without using much stuff up, it may be able to help us build better computers. Since computers are faster than humans, pretty soon the computer would probably be doing most of the work of making new and better computers. We would have a hard time controlling or understanding what was happening as the new computers got faster and grew more and more parts. By the time these computers ran out of ways to quickly and easily make better computers, the best computers would have already become much much better than humans at controlling what happens.\n* **Orthogonality**: Different computers, and different minds as a whole, can want very different things. They can want things that are very good for humans, or very bad, or anything in between. We can be pretty sure that strong computers won’t think like humans, and most possible computers won’t try to change the world in the way a human would.\n* **Convergent instrumental goals**: Although most possible minds want different things, they need a lot of the same things to get what they want. 
A computer and a human might want things that in the long run have nothing to do with each other, but have to fight for the same share of stuff first to get those different things.\n* **Complexity of value**: It would take a huge number of parts, all put together in just the right way, to build a computer that does all the things humans want it to (and none of the things humans don’t want it to).\n* **Fragility of value**: If we get a few of those parts a little bit wrong, the computer will probably make only bad things happen from then on. We need almost everything we want to happen, or we won’t have any fun.\n\n\nThat is all. You’re welcome.\n\n\nThe post [Five Theses, Using Only Simple Words](https://intelligence.org/2013/09/05/five-theses-using-only-simple-words/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-05T20:03:13Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "5d859907b3afc8e97dc4e32c901464e8", "title": "How effectively can we plan for future decades? (initial findings)", "url": "https://intelligence.org/2013/09/04/how-effectively-can-we-plan-for-future-decades/", "source": "miri", "source_type": "blog", "text": "MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least [a few decades away](http://intelligence.org/2013/05/15/when-will-ai-be-created/). Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?\n\n\nOr, more generally: **How effectively can humans plan for future decades? 
Which factors predict success and failure in planning for future decades?**\n\n\nTo investigate these questions, we asked [Jonah Sinick](http://mathisbeauty.org/aboutme.html) to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell [on the subject of insecticide-treated nets](http://blog.givewell.org/2012/10/18/revisiting-the-case-for-insecticide-treated-nets-itns/). The post below is a summary of findings from [our full email exchange (.pdf)](http://intelligence.org/wp-content/uploads/2013/09/Can-we-know-what-to-do-about-AI.pdf) so far.\n\n\n**We decided to publish our initial findings after investigating only a few historical cases**. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that **we aren’t yet able to draw any confident conclusions about our core questions**.\n\n\nThe most significant results from this project so far are:\n\n\n1. Jonah’s initial impressions about *The Limits to Growth* (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and also that its authors (at the time of writing it) didn’t have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had  seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.\n2. Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. 
Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, then today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.\n3. In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.\n4. Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leo Szilard’s choice to keep secret a patent related to nuclear chain reactions, Cold War planning efforts aimed at winning a war decades later, and several cases of “ethically concerned scientists.”\n5. Upon initial investigation, two historical cases seemed like they *might* shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s 1991 financial crisis.\n6. We listed many other historical cases that may be worth investigating.\n\n\nThe project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s [*The Signal and the Noise*](http://www.amazon.com/The-Signal-Noise-Many-Predictions/dp/159420411X), available [here](http://lesswrong.com/lw/hxx/some_highlights_from_nate_silvers_the_signal_and/).\n\n\nFurther details are given below.
For sources and more, please see [our full email exchange (.pdf)](http://intelligence.org/wp-content/uploads/2013/09/Can-we-know-what-to-do-about-AI.pdf).\n\n\n### The Limits to Growth\n\n\nIn his initial look at *[The Limits to Growth](http://www.amazon.com/Limits-Growth-Donella-H-Meadows/dp/0451057678/)* (1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that *Limits to Growth* predicted a sort of doomsday scenario – *à la* Ehrlich’s *[The Population Bomb](http://www.amazon.com/The-population-bomb-Paul-Ehrlich/dp/0345021711/)* (1968) – that had failed to occur. In particular, it appeared that *Limits to Growth* had failed to appreciate [Julian Simon](http://en.wikipedia.org/wiki/Julian_Lincoln_Simon)’s point that other resources would substitute for depleted resources.\n\n\nUpon reading the book, Jonah found that:\n\n\n* The book avoids strong, unconditional claims. Its core claim is that *if* exponential growth of resource usage continues, *then* there will likely be a societal collapse by 2100.\n* The book was careful to qualify its claims, and met high epistemic standards. Jonah wrote: “The book doesn’t look naive even in retrospect, which is impressive given that it was written 40 years ago.”\n* The authors discuss substitutability at length in chapter 4.\n* The book discusses mitigation at a theoretical level, but doesn’t give explicit policy recommendations, perhaps because the issues involved were too complex.\n\n\n### Svante Arrhenius\n\n\nDerived more than a century ago, [Svante Arrhenius](http://en.wikipedia.org/wiki/Svante_Arrhenius)’s equation for how the Earth’s temperature varies as a function of concentration of carbon dioxide is the same equation used today.
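In its modern form (the logarithmic dependence is Arrhenius’s; the coefficient below is the modern empirical value, not his original 1896 estimate), that relation between CO₂ concentration and radiative forcing reads:

```latex
% Greenhouse forcing law: Arrhenius's logarithmic dependence,
% with the modern empirical coefficient.
\Delta F = \alpha \ln\!\left(\frac{C}{C_0}\right),
\qquad \alpha \approx 5.35\ \mathrm{W\,m^{-2}}
```

so each doubling of CO₂ contributes roughly the same forcing, ΔF ≈ 5.35 ln 2 ≈ 3.7 W/m².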
But while Arrhenius’ climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned. He also predicted that global warming would have positive humanitarian effects, but based on our current understanding, the expected humanitarian effects seem negative.\n\n\nArrhenius’s predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly, the humanitarian effects would probably have been negative.\n\n\n \n\n\n### Norbert Wiener\n\n\nAs Jonah explains, [Norbert Wiener](http://en.wikipedia.org/wiki/Norbert_Weiner) (1894-1964) “believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression.” Nearly 50 years after his death, this [doesn’t seem to have happened](http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_antifaq/) much, though it may eventually happen.\n\n\nJonah’s impression is that Wiener had strong views on the subject, doesn’t seem to have updated much in response to incoming evidence, and seems to have relied too heavily on what [Berlin (1953)](http://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox) and [Tetlock (2005)](http://www.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715/) described as “hedgehog” thinking: “the fox knows many things, but the hedgehog knows one big thing.”\n\n\n \n\n\n### Some historical cases that seem unlikely to shed light on our questions\n\n\n[Rasmussen (1975)](http://en.wikipedia.org/wiki/WASH-1400) is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred.
However, Jonah concluded that this historical case wasn’t very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain specific, and because the report makes a large number of small predictions rather than a few salient predictions.\n\n\nIn 1936, [Leó Szilárd](http://en.wikipedia.org/wiki/Le%C3%B3_Szil%C3%A1rd) assigned his chain reaction patent in a way that ensured it would be kept secret from the Nazis. However, Jonah concluded:\n\n\n\n> I think that this isn’t a good example of a nontrivial future prediction. The destructive potential seems pretty obvious – anything that produces a huge amount of concentrated energy can be used in a destructive way. As for the Nazis, Szilard was himself Jewish and fled from the Nazis, and it seems pretty obvious that one wouldn’t want a dangerous regime to acquire knowledge that has destructive potential. It would be more impressive if the early developers of quantum mechanics had kept their research secret on account of dimly being aware of the possibility of destructive potential, or if Szilard had filed his patent secretly in a hypothetical world in which the Nazi regime was years away.\n> \n> \n\n\nJonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was “too difficult to tie these efforts to war outcomes.”\n\n\nJonah also investigated Kaj Sotala’s [A brief history of ethically concerned scientists](http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/). Most of the historical cases cited there didn’t seem relevant to this project. Many cases involved “scientists concealing their discoveries out of concern that they would be used for military purposes,” but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. 
see [Kelly 2011](http://www.amazon.com/What-Technology-Wants-Kevin-Kelly/dp/0143120174/)). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project.\n\n\n \n\n\n### Some historical cases that might shed light on our questions with much additional research\n\n\nJonah performed an initial investigation of the impacts of China’s [one-child policy](http://en.wikipedia.org/wiki/One-child_policy), and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy’s impacts.\n\n\nJonah also investigated a case involving the [Ford Foundation](http://www.fordfoundation.org/). In [a conversation with GiveWell](http://www.givewell.org/files/conversations/Lant%20Pritchet%2006-18-12%20final%20for%20upload.pdf), Lant Pritchett said:\n\n\n\n> [One] example of transformative philanthropy is related to India’s recovery from its economic crisis of 1991. Other countries had previously had similar crises and failed to implement good policies that would have allowed them to recover from their crises. By way of contrast, India implemented good policies and recovered in a short time frame. Most of the key actors who ensured that India implemented the policies that it did were influenced by a think tank established by the Ford Foundation ten years before the crisis. The think tank exposed Indians to relevant ideas from the developed world about liberalization. The difference between (a) India’s upward economic trajectory and (b) what its upward economic trajectory would have been if it had been unsuccessful in recovering from the 1991 crisis is in the trillions of dollars. As such, the Ford Foundation’s investment in the think tank had a huge impact. 
For the ten years preceding the crisis, it looked like the think tank was having no impact, but it turned out to have a huge impact.\n> \n> \n\n\nUnfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true.\n\n\n \n\n\n### Other historical cases that might be worth investigating\n\n\nHistorical cases we identified but did not yet investigate include:\n\n\n* [Eric Drexler](http://en.wikipedia.org/wiki/K._Eric_Drexler)‘s early predictions about the feasibility and likely effects of nanotechnology.\n* The [Asilomar conference on recombinant DNA](http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA)\n* Efforts to [detect asteroids before they threaten Earth](http://www.amazon.com/Near-Earth-Objects-Finding-Them-Before/dp/0691149291/)\n* The [Green Revolution](http://en.wikipedia.org/wiki/Green_Revolution)\n* The modern history of [cryptography](http://en.wikipedia.org/wiki/Cryptography)\n* Early efforts to [mitigate global warming](http://www.amazon.com/The-Discovery-Global-Warming-Technology/dp/067403189X/)\n* Possible deliberate long term efforts to produce scientific breakthroughs (the transistor? the human genome?)\n* Rachel Carson’s [*Silent Spring*](http://en.wikipedia.org/wiki/Silent_Spring) (1962)\n* Paul Ehrlich’s [*The Population Bomb*](http://en.wikipedia.org/wiki/The_Population_Bomb) (1968)\n* The Worldwatch Institute’s [*State of the World*](http://en.wikipedia.org/wiki/State_of_the_World_(book_series)) reports (since 1984)\n* The WCED’s [*Our Common Future*](http://en.wikipedia.org/wiki/Our_Common_Future) (1987)\n\n\n \n\n\nThe post [How effectively can we plan for future decades? 
(initial findings)](https://intelligence.org/2013/09/04/how-effectively-can-we-plan-for-future-decades/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-04T22:38:11Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "40213c4889b6d3269e3e68339e66e433", "title": "The Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!", "url": "https://intelligence.org/2013/09/04/the-hanson-yudkowsky-ai-foom-debate-is-now-available-as-an-ebook/", "source": "miri", "source_type": "blog", "text": "[![ai-foom-cover](https://intelligence.org/wp-content/uploads/2013/09/ai-foom-cover-206x300.jpg)](http://intelligence.org/ai-foom-debate/ \"The Hanson-Yudkowsky AI-Foom Debate eBook\")In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.\n\n\nThe debate is now available as an eBook in various popular formats (PDF, EPUB, and MOBI). 
It includes:\n\n\n* the original series of blog posts,\n* a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject,\n* a summary of the debate written by Kaj Sotala, and\n* a [2013 technical report](https://intelligence.org/files/IEM.pdf \"Intelligence Explosion Microeconomics\") on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.\n\n\nComments from the authors are included at the end of each chapter, along with a link to the original post.\n\n\nHead over to [intelligence.org/ai-foom-debate/](http://intelligence.org/ai-foom-debate/ \"The Hanson-Yudkowsky AI-Foom Debate eBook\") to download a free copy.\n\n\nThe post [The Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!](https://intelligence.org/2013/09/04/the-hanson-yudkowsky-ai-foom-debate-is-now-available-as-an-ebook/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-09-04T19:59:11Z", "authors": ["Alex Vermeer"], "summaries": []} -{"id": "d41a46fab35bdd2df770d5965ce67cbe", "title": "Stephen Hsu on Cognitive Genomics", "url": "https://intelligence.org/2013/08/31/stephen-hsu-on-cognitive-genomics/", "source": "miri", "source_type": "blog", "text": "![Steve Hsu portrait](http://intelligence.org/wp-content/uploads/2013/08/hsu_w150.jpg)Stephen Hsu is Vice-President for Research and Graduate Studies and Professor of Theoretical Physics [at Michigan State University](http://www.epi.msu.edu/seminars/hsu.html). Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon. He was also founder of SafeWeb, an information security startup acquired by Symantec. 
Hsu is a scientific advisor to [BGI](http://www.genomics.cn/en/index) and a member of its [Cognitive Genomics Lab](https://www.cog-genomics.org/).\n\n\n\n**Luke Muehlhauser**: I’d like to start by familiarizing our readers with some of the basic facts relevant to the genetic architecture of cognitive ability, which I’ve drawn from the first half of a [presentation](https://www.youtube.com/watch?v=FgCSkGeBUNg&t=5m2s) you gave in February 2013:\n\n\n\n* The [human genome](http://en.wikipedia.org/wiki/Human_genome) consists of about 3 billion [base pairs](http://en.wikipedia.org/wiki/Base_pair), but humans are very similar to each other, so we only differ from each other on about 3 million of these base pairs.\n* Because there’s so much repetition, we could easily store the entire genome of every human on earth (~3 MB per genome, compressed).\n* Scanning someone’s [SNPs](http://en.wikipedia.org/wiki/Single-nucleotide_polymorphism) costs about $200; scanning their entire genome costs $1000 or more.\n* But genotyping costs are falling so quickly that SNPs may be irrelevant soon, as it’ll be simpler and cheaper to just sequence entire genomes.\n* To begin to understand the genetic architecture of *cognitive ability*, we can compare it to the genetic architecture of *height*, since the genetic architectures of height and cognitive ability are qualitatively the same.\n* For example, (1) height and cognitive ability are relatively *stable* and *reliable* traits (in adulthood), meaning that if you measure a person’s height or cognitive ability at multiple times you’ll get roughly the same result each time, (2) height and cognitive ability are *valid* traits, in that they “measure something real” that is predictive of various life outcome measures like income, (3) both height and cognitive ability are highly *heritable*, and (4) both height and cognitive ability are highly *polygenic*, meaning that many different genes contribute to height and cognitive ability.\n* All 
cognitive observables — e.g. vocabulary, digit recall (short term memory), ability to solve math puzzles, spatial rotation ability, cognitive reaction time — appear to be positively correlated. Because of this, we can (lossily) compress the data for how a person scores on different cognitive tests to a single number, which we call IQ, and this single number is predictive of their scores on *all* cognitive tests, and also life outcome measures like income, educational attainment, job performance, and mortality.\n* This contradicts some folk wisdom. E.g. parents often believe that “Johnny’s good at math, so he’s probably not going to be good with words.” But in fact, the data show that math skill is quite predictive of verbal skill, because (roughly) all cognitive abilities are positively correlated.\n* By convention, IQ is normally distributed in the population with a mean at 100 and a standard deviation of 15.\n* Culturally neutral cognitive tests like [progressive matrices](http://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices) are *very* tightly correlated (0.9) with IQ. So you can estimate someone’s IQ (and hence their verbal ability, spatial rotation ability, short term memory, cognitive reaction time, etc.) pretty well using *only* one test like [Raven’s progressive matrices](http://www.ravensprogressivematrices.com/).\n* It’s very difficult to raise one’s score on these cognitive tests with training. In large studies, it looks like thousands of dollars worth of training can raise your score by a small fraction of the standard deviation.\n* Additional IQ points do appear to “matter” — even above, say, IQ 145. E.g. the mean IQ of eminent scientists (IQ 160) is much higher than that of average PhDs (IQ 130). 
Also, in a longitudinal study of children identified as gifted at age 13, the “1 in 10,000”-level children had significantly better life outcomes than the “1 in 100”-level children, even though they generally all received “gifted child” development paths.\n\n\nOne source of details and references for most of this is [The Cambridge Handbook of Intelligence](http://www.amazon.com/Cambridge-Handbook-Intelligence-Handbooks-Psychology/dp/052173911X/).\n\n\nBefore we continue, Stephen, do you have any corrections or clarifications you’d like to make about my summary, or additional sources that you’d like to recommend to our readers?\n\n\n\n\n---\n\n\n**Stephen Hsu**: A couple of comments on the summary, which is excellent:\n\n\n1. Raven’s correlation might not be as high as 0.9 with overall IQ, it might actually be 0.8 or so. These numbers fluctuate around depending on the study. In general two tests might be considered valid “IQ tests” if they correlate at > 0.75 or so with g. This is the case with most standardized tests like ACT, SAT, GRE, etc.\n2. Mean IQ of participants in the Roe study was quite high, but I doubt that the average among eminent scientists (averaging over all fields) is 160; probably a bit lower like 145. In any case the Roe and SMPY data are sufficient to suggest nontrivial returns to IQ above 130 in STEM.\n\n\nIt seems you understood my talk perfectly well. The answers to your questions may already be in there, but I’m happy to discuss and clarify further.\n\n\n\n\n---\n\n\n**Luke**: Have we identified any genes that are (with high confidence) associated with cognitive ability? What can our history of identifying genes associated with other polygenic traits (e.g. 
height) tell us about our prospects for identifying genes associated with cognitive ability?\n\n\n\n\n---\n\n\n**Stephen**: Recently the results of a [massive GWAS for genes associated with educational attainment](http://infoproc.blogspot.com/2013/05/first-gwas-hits-for-cognitive-ability.html) were published in *Science*. Some of the researchers in this large collaboration are reluctant to openly state that the hits are associated with cognitive ability (as opposed to, say, Conscientiousness, which would also positively impact educational success). But if you read the paper carefully you can see that there is good evidence that the alleles are actually associated with cognitive ability (g or IQ).\n\n\nAt the link above you can find a historical graph:\n\n\n![GWAS hits](https://intelligence.org/wp-content/uploads/2013/08/GWAS-hits.png)\nThis graph displays the number of GWAS hits versus sample size for height, BMI, etc. Once the minimal sample size to discover the alleles of largest impact (large MAF, large effect size) is exceeded, one generally expects a steady accumulation of new hits at lower MAF / effect size. I expect the same sort of progress for g. (MAF = Minor Allele Frequency. Variants that are common in the population are easier to detect than rare variants.)\n\n\nWe can’t predict the sample size required to obtain most of the additive variance for g (this depends on the details of the distribution of alleles), but I would guess that about a million genotypes together with associated g scores will suffice. When, exactly, we will reach this sample size is unclear, but I think most of the difficulty is in obtaining the phenotype data. 
Within a few years, over a million people will have been genotyped, but probably we will only have g scores for a small fraction of the individuals.\n\n\n\n\n---\n\n\n**Luke**: Could you describe for us the goals and methods of the work you’re currently doing with BGI?\n\n\n\n\n---\n\n\n**Stephen**: The goal of our cognitive genomics project at BGI is to understand the genetic architecture of human cognition. There are obviously many potential applications of this work, in areas ranging from deep human history (evolution) to drug discovery to genetic engineering. But my primary interest is intellectual.\n\n\nThe methods are straightforward: obtain genotype and phenotype data and look for statistical associations (GWAS). More specifically, we want to determine the parameters of a polygenic model relating genotype to phenotype. (This as yet undetermined set of “fundamental constants” is one of the most interesting few megabytes of information in the biological world.) The leading term in this model is linear (meaning we are guaranteed a certain amount of progress from simple techniques), but eventually we will be interested in nonlinear corrections (epistasis, gene-gene interactions, dominance, etc.) as well.\n\n\nWe started out by looking for high g individuals because, as outliers, they produce more statistical power per dollar of sequencing. The cost of sequencing is still our primary constraint, and will be for at least a few more years. For example, the cost to sequence our 2000 high g volunteers is well into the millions of dollars. 
I also felt, given my background, that I had reasonable insight into where to find and how to recruit volunteers from the high g tail.\n\n\nUltimately, I hope that various genomics labs around the world will collaborate to produce a public data repository with g as one of the phenotype variables.\n\n\n*Link*:\n\n\n[International partners describe global alliance to enable secure sharing of genomic and clinical data](http://www.broadinstitute.org/news/globalalliance)\n\n\n\n\n---\n\n\n**Luke**: How feasible do you think “iterated embryo selection” will be, over the next several decades, for the amplification of cognitive abilities via genetic selection?\n\n\nBackground for our readers: iterated embryo selection is a plausible *future* technology that could allow strong genetic selection for intelligence without needing to wait 15-20 years between generations. It was first [described](http://theuncertainfuture.com/faq.html#7) in detail in the FAQ for MIRI’s *[The Uncertain Future](http://theuncertainfuture.com/)* project (see [Rayhawk et al. 2009](https://intelligence.org/files/ChangingTheFrame.pdf)), was later described in a book ([Miller 2012](http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous/dp/1936661659/)), and was finally published in a journal in [Sparrow (2013)](http://commonsenseatheism.com/wp-content/uploads/2013/08/Sparrow-In-vitro-eugenics.pdf).\n\n\n\n\n---\n\n\n**Stephen**: I have no particular insight into specific challenges related to producing gametes from pluripotent stem cells. It’s not my area of expertise. However, I am confident that genomic selection for traits such as g will be possible. I would be surprised if, after analyzing millions of genotype-phenotype pairs, we were not able to produce a predictive model that captures, say, 50% of variance in g. 
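As a quick sanity check on what “capturing 50% of variance” buys you (an illustrative calculation added here, not part of the interview; the SD-15 IQ scale and the 50% figure come from the surrounding text):

```python
import math

def residual_sd(population_sd: float, variance_explained: float) -> float:
    """Standard error of prediction for a model explaining a given
    fraction of trait variance: sd * sqrt(1 - R^2)."""
    return population_sd * math.sqrt(1.0 - variance_explained)

# IQ is conventionally scaled to SD 15; a predictor capturing 50% of
# the variance would predict with a standard error of about 10.6 points,
# i.e. "somewhat less than a population standard deviation".
print(round(residual_sd(15.0, 0.50), 1))  # prints 10.6
```

This is just the standard variance-decomposition identity; it is why "50% of variance" translates into a prediction error of roughly 10 IQ points rather than 7.5.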
That means, roughly, we might be able to predict g from genotype with standard error of somewhat less than a population standard deviation (e.g., 10 IQ points; note I don’t think the real world “meaning” of g is better defined than within an error of this size). This means that selection on g can proceed relatively efficiently, assuming the basic reproductive technologies are under control.\n\n\nI think there is good evidence that existing genetic variants in the human population (i.e., alleles affecting intelligence that are found today in the collective world population, but not necessarily in a single person) can be combined to produce a phenotype which is far beyond anything yet seen in human history. This would not surprise an animal or plant breeder — experiments on corn, cows, chickens, drosophila, etc. have shifted population means by many standard deviations (e.g., +30 SD in the case of corn).\n\n\nLet me add that, in my opinion, each society has to decide for itself (e.g. through democratic process) whether it wants to legalize or forbid activities that amount to genetic engineering. Intelligent people can reasonably disagree as to whether such activity is wise.\n\n\n*Links:*\n\n\n[“Only he was fully awake”](http://infoproc.blogspot.com/2012/03/only-he-was-fully-awake.html)\n\n\n[Maxwell’s Demon and genetic engineering](http://infoproc.blogspot.com/2010/10/maxwells-demon-and-genetic-engineering.html)\n\n\n[Epistasis vs additivity](http://infoproc.blogspot.com/2011/08/epistasis-vs-additivity.html)\n\n\n[Deleterious variants affecting traits that have been under selection are rare and of small effect](http://infoproc.blogspot.co.uk/2012/10/deleterious-variants-affecting-traits.html)\n\n\n\n\n---\n\n\n**Luke**: Work on the genetics of cognitive ability tends to be more controversial than work on the genetics of, say, height. Why do you think that is? 
Has your work, or the work of your colleagues, been made more difficult because of such issues?\n\n\n\n\n---\n\n\n**Stephen**: Given our difficult history with race there is an understandable discomfort with the idea that cognitive ability is strongly influenced by genetics. In the worst case, it might be found that historically isolated populations of humans differ in their average genetic capacities for cognition, due to variation in allele frequencies. Let me stress that at the moment our understanding of the genetics of intelligence is far too preliminary to reach a firm conclusion on this issue.\n\n\nAt the extremes, there are some academics and social activists who violently oppose any kind of research into the genetics of cognitive ability. Given that the human brain — its operation, construction from a simple genetic blueprint, evolutionary history — is one of the great scientific mysteries of the universe, I cannot understand this point of view.\n\n\n\n\n---\n\n\n**Luke**: What do you think a truly superior human intelligence would be like?\n\n\n\n\n---\n\n\n**Stephen**: I think we already have some hints in this direction. Take the case of John von Neumann, widely regarded as one of the greatest intellects in the 20th century, and a famous polymath. He made fundamental contributions in mathematics, physics, nuclear weapons research, computer architecture, game theory and automata theory.\n\n\nIn addition to his abstract reasoning ability, von Neumann had formidable powers of mental calculation and a photographic memory. In my opinion, genotypes exist that correspond to phenotypes as far beyond von Neumann as he was beyond a normal human.\n\n\n\n> I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. 
But none of them had a mind as quick and acute as Jansci [John] von Neumann. I have often remarked this in the presence of those men and no one ever disputed me.\n> \n> \n\n\n— Nobel Laureate Eugene Wigner\n\n\n\n> You know, Herb, how much faster I am in thinking than you are. That is how much faster von Neumann is compared to me.\n> \n> \n\n\n— Nobel Laureate Enrico Fermi to his former PhD student Herb Anderson.\n\n\n\n> One of his remarkable abilities was his power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how The Tale of Two Cities started. Whereupon, without any pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes.\n> \n> \n\n\n— Herman Goldstine, mathematician and computer pioneer.\n\n\n\n> I always thought Von Neumann’s brain indicated that he was from another species, an evolution beyond man,\n> \n> \n\n\n— Nobel Laureate Hans A. 
Bethe.\n\n\n*Links:*\n\n\n[Wikipedia: John von Neumann](http://en.wikipedia.org/wiki/John_von_Neumann)\n\n\n[The differences are enormous](http://infoproc.blogspot.com/2012/03/differences-are-enormous.html)\n\n\n[“Only he was fully awake”](http://infoproc.blogspot.com/2012/03/only-he-was-fully-awake.html)\n\n\n\n\n---\n\n\n**Luke:** Thanks, Stephen!\n\n\nThe post [Stephen Hsu on Cognitive Genomics](https://intelligence.org/2013/08/31/stephen-hsu-on-cognitive-genomics/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-31T23:54:56Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "060a76c7d338b576f64cacf707122ddc", "title": "MIRI’s November 2013 Workshop in Oxford", "url": "https://intelligence.org/2013/08/30/miris-november-2013-workshop-in-oxford/", "source": "miri", "source_type": "blog", "text": "![013](http://intelligence.org/wp-content/uploads/2013/02/013.jpg)\nFrom November 23-29, MIRI will host another **Workshop on Logic, Probability, and Reflection**, for the first time in **Oxford, UK**.\n\n\nParticipants will investigate problems related to [reflective agents](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/), [probabilistic logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/), and [priors over logical statements](http://ict.usc.edu/pubs/Logical%20Prior%20Probability.pdf) / the logical omniscience problem.\n\n\nParticipants confirmed so far include:\n\n\n* [Stuart Armstrong](http://www.fhi.ox.ac.uk/about/staff/) (Oxford)\n* [Mihaly Barasz](http://www.quora.com/Mihaly-Barasz) (Google)\n* [Catrin Campbell-Moore](http://www.ccampbell-moore.com/) (LMU Munich)\n* [Daniel Dewey](http://www.danieldewey.net/) (Oxford)\n* [Benja Fallenstein](http://lesswrong.com/user/Benja/overview/) (U Bristol)\n* [Jacob Hilton](http://www1.maths.leeds.ac.uk/pure/logic/homepages/hilton.html) (U Leeds)\n* [Ramana Kumar](http://www.cl.cam.ac.uk/~rk436/) 
(Cambridge)\n* [Jan Leike](http://swt.informatik.uni-freiburg.de/staff/leike) (U Freiburg)\n* [Bas Steunebrink](http://www.idsia.ch/~steunebrink/) (IDSIA)\n* [Gregory Wheeler](http://gregorywheeler.org/) (LMU Munich)\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (MIRI)\n\n\nIf you have a strong mathematics background and might like to attend this workshop, it’s not too late to [apply](http://intelligence.org/get-involved/#workshop)! And even if *this* workshop doesn’t fit your schedule, please **do apply**, so that we can notify you of other workshops (long before they are announced publicly).\n\n\nThe post [MIRI’s November 2013 Workshop in Oxford](https://intelligence.org/2013/08/30/miris-november-2013-workshop-in-oxford/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-30T23:05:38Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ea1ff309afb7c365503f69fc9c786968", "title": "Transparency in Safety-Critical Systems", "url": "https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/", "source": "miri", "source_type": "blog", "text": "In this post, I aim to summarize one common view on AI transparency and AI reliability. It’s difficult to identify the field’s “consensus” on AI transparency and reliability, so instead I will present *a* common view so that I can use it to introduce a number of complications and open questions that (I think) warrant further investigation.\n\n\nHere’s a short version of the common view I summarize below:\n\n\n\n> [Black box testing](http://en.wikipedia.org/wiki/Black-box_testing) can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI’s most useful methods are among its least transparent. 
Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. There are exceptions to this general rule, and some people are working to make statistical methods more transparent.\n> \n> \n\n\n### \n\n\n### The value of transparency in system design\n\n\n[Nusser (2009)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Nusser-Robust-Learning-in-Safety-Related-Domains.pdf) writes:\n\n\n\n> …in the field of safety-related applications it is essential to provide transparent solutions that can be validated by domain experts. “Black box” approaches, like artificial neural networks, are regarded with suspicion – even if they show a very high accuracy on the available data – because it is not feasible to prove that they will show a good performance on all possible input combinations.\n> \n> \n\n\nUnfortunately, there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent:\n\n\n\n> Methods that are known to achieve a high predictive performance — e.g. support vector machines (SVMs) or artificial neural networks (ANNs) — are usually hard to interpret. On the other hand, methods that are known to be well-interpretable — for example (fuzzy) rule systems, decision trees, or linear models — are usually limited with respect to their predictive performance.[1](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_0_10415 \"Quote from Nusser (2009). Emphasis added. The original text contains many citations which have been removed in this post for readability. Also see Schultz & Cronin (2003), which makes this point by graphing four AI methods along two axes: robustness and transparency. Their graph is available here. In their terminology, a method is “robust” to the degree that it is flexible and useful on a wide variety of problems and data sets. 
On the graph, GA means “genetic algorithms,” NN means “neural networks,” PCA means “principal components analysis,” PLS means “partial least squares,” and MLR means “multiple linear regression.” In this sample of AI methods, the trend is clear: the most robust methods tend to be the least transparent. Schultz & Cronin graphed only a tiny sample of AI methods, but the trend holds more broadly.\")\n> \n> \n\n\nBut for [safety-critical systems](http://en.wikipedia.org/wiki/Life-critical_system) — and especially for [AGI](http://intelligence.org/2013/08/11/what-is-agi/) — it is important to prioritize system reliability over capability. Again, here is [Nusser (2009)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Nusser-Robust-Learning-in-Safety-Related-Domains.pdf):\n\n\n\n> strict requirements [for system transparency] are necessary because a safety-related system is a system whose malfunction or failure can lead to serious consequences — for example environmental harm, loss or severe damage of equipment, harm or serious injury of people, or even death. Often, it is impossible to rectify a wrong decision within this domain.\n> \n> \n\n\nThe special need for transparency in AI has also been stressed by many others,[2](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_1_10415 \"I will share some additional quotes on the importance of transparency in intelligent systems. Kröske et al. 
(2009) write that, to trust a machine intelligence, “human operators need to be able to understand [its] reasoning process and the factors that precipitate certain actions.” Similarly, Fox (1993) writes: “Many branches of engineering have moved beyond purely empirical testing [for safety]… because they have established strong design theories… The consequence is that designers can confidently predict failure modes, performance boundary conditions and so forth before the systems are implemented… A promising approach to [getting these benefits in AI] may be to use well-defined specification languages and verification procedures. Van Harmelen & Balder (1992) [list some] advantages of using formal languages… [including] the removal of ambiguity… [and] the ability to derive properties of the design in the absence of an implementation.” In their preface, Fox & Das (2000) write: “Our first obligation is to try to ensure that the designs of our systems are sound. We need to ask not only ‘do they work?’ but also ‘do they work for good reasons?’ Unfortunately, conventional software design is frequently ad hoc, and AI software design is little better and possibly worse… Consequently, we place great emphasis on clear design principles, strong mathematical foundations for these principles, and effective development tools that support and verify the integrity of the system… We are creating a powerful technology [AI], possibly more quickly than we think, that has unprecedented potential to create havoc as well as benefit. We urge the community to embark on a vigorous discussion of the issues and the creation of an explicit ‘safety culture’ in the field.”\") including [Boden (1977)](http://www.amazon.com/Artificial-Intelligence-and-Natural-Man/dp/B0028LGZ1M/):\n\n\n\n> Members of the artificial intelligence community bear an ominous resemblance to… the [Sorcerer’s Apprentice](http://en.wikipedia.org/wiki/The_Sorcerer%27s_Apprentice). 
The apprentice learnt just enough magic… to save himself the trouble of performing an onerous task, but not quite enough to stop the spellbound buckets and brooms from flooding the castle…\n> \n> \n> [One question I shall ask is] whether there are any ways of writing programs that would tend to keep control in human hands… [For one thing,] programs should be intelligible and explicit, so that “what is going on” is not buried in the code or implicitly embodied in procedures whose aim and effect are obscure.\n> \n> \n\n\n### A spectrum from black box to transparent\n\n\nNon-transparent systems are sometimes called “black boxes”:\n\n\n\n> a black box is a device, system or object which can be viewed in terms of its input, output and transfer characteristics *without any knowledge of its internal workings*. Its implementation is “opaque” (black). Almost anything might be referred to as a black box: a transistor, an algorithm, or the human mind.\n> \n> \n> …[And] in practice some [technically transparent] systems are so complex that [they] might as well be [black boxes].[3](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_2_10415 \"Emphasis added. The first paragraph is from Wikipedia’s black box page; the second paragraph is from its white box page. The term “grey box” is sometimes used to refer to methods that are intermediate in transparency between “fully black box” and “fully transparent” methods: see e.g. Sohlberg (2003).\")\n> \n> \n\n\nThe human brain is mostly a black box. We can observe its inputs (light, sound, etc.), its outputs (behavior), and some of its transfer characteristics (swinging a bat at someone’s eyes often results in ducking or blocking behavior), but we don’t know very much about *how* the brain works. 
We’ve begun to develop an algorithmic understanding of *some* of its functions (especially [vision](http://www.amazon.com/How-Vision-Works-Physiological-Mechanisms/dp/0199751617/)), but only barely.[4](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_3_10415 \"Thus, if we could build a whole brain emulation today, it would also be mostly a black box system, even though all its bits of information would be stored in a computer and accessible to database search tools and so on. But we’ll probably make lots of progress in cognitive neuroscience before WBE is actually built, and a working WBE would probably enable quick advances in cognitive neuroscience, and therefore the human brain would rapidly become more transparent to us.\")\n\n\nMany contemporary AI methods are effectively black box methods. As [Whitby (1996)](http://www.amazon.com/Reflections-Artificial-Intelligence-Blay-Whitby/dp/1871516684/) explains, the safety issues that arise in “[GOFAI](http://en.wikipedia.org/wiki/GOFAI)” (e.g. search-based problem solvers and knowledge-based systems) “are as nothing compared to the [safety] problems which must be faced by newer approaches to AI… Software that uses some sort of neural net or genetic algorithm must face the further problem that it seems, often almost by definition, to be ‘inscrutable’. By this, I mean that… we can know that it works and test it over a number of cases but we will not in the typical case ever be able to know exactly how.”\n\n\nOther methods, however, are relatively transparent, as we shall see below.\n\n\nThis post cannot survey the transparency of all AI methods; there are too many. 
Instead, I will focus on three major “families” of AI methods.\n\n\n### Examining the transparency of three families of AI methods\n\n\n#### Machine learning\n\n\n[Machine learning](http://en.wikipedia.org/wiki/Machine_learning) is perhaps the largest and most active subfield in AI, encompassing a wide variety of methods by which machines learn from data. For an overview of the field, see [Flach (2012)](http://www.amazon.com/Machine-Learning-Science-Algorithms-Sense/dp/1107422221/). For a quick video intro, see [here](https://www.youtube.com/watch?&v=e0WKJLovaZg).\n\n\nUnfortunately, machine learning methods tend not to be among the most transparent methods.[5](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_4_10415 \"For more discussion of how machine learning can be used for relatively “transparent” ends, for example to learn the structure of a Bayesian network, see Murphy (2012), ch. 26.\") [Nusser (2009)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Nusser-Robust-Learning-in-Safety-Related-Domains.pdf) explains:\n\n\n\n> machine learning approaches are regarded with suspicion by domain experts in safety-related application fields because it is often infeasible to sufficiently interpret and validate the learned solutions.\n> \n> \n\n\nFor now, let’s consider one popular machine learning method in particular: [artificial neural networks](http://en.wikipedia.org/wiki/Artificial_neural_network) (ANNs). (For a concise video introduction, see [here](https://www.youtube.com/watch?v=DG5-UyRBQD4).) As [Rodvold (1999)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Rodvold-A-software-development-process-model-for-artificial-neural-networks-in-critical-applications.pdf) explains, ANNs are typically black boxes:\n\n\n\n> the intelligence of neural networks is contained in a collection of numeric synaptic weights, connections, transfer functions, and other network definition parameters. 
In general, inspection of these quantities yields little explicit information to enlighten developers as to why a certain result is being produced.\n> \n> \n\n\nAlso, [Kurd (2005)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Kurd-Artificial-neural-networks-in-safety-critical-applications.pdf):\n\n\n\n> it is common for typical ANNs to be treated as black-boxes… because ANN behaviour is scattered across its weights and links with little meaning to an observer. As a result of this unstructured and unorganised representation of behaviour, it is often not feasible to completely understand and predict their function and operation… The interpretation problems associated with ANNs impede their use in safety critical contexts…[6](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_5_10415 \"Li & Peng (2006) make the same point: “conventional neural networks… lack transparency, as their activation functions (AFs) and their associated neural parameters bear very little physical meaning.” See also Woodman et al. (2012)‘s comments on this issue in the context of personal robotics: “Among the requirements of autonomous robots… is a certain degree of robustness. This means being able to handle errors and to continue operation during abnormal conditions… in a dynamic environment, the robot will frequently find itself in a wide range of previously unseen situations. To date, the majority of research in this area has addressed this issue by using learning algorithms, often implemented as artificial neural networks (ANNs)… However, as Nehmzow et al. (2004) identify, these implementations, although seemingly effective, are difficult to analyse due to the inherent opacity of connection-based algorithms. This means that it is difficult to produce an intelligible model of the system structure that could be used in safety analysis.”\")\n> \n> \n\n\nDeep learning is another popular machine learning technique. 
It, too, tends to be non-transparent — like ANNs, deep learning methods were inspired by how parts of the brain work, in particular the visual system.[7](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_6_10415 \"Murphy (2012), p. 995, writes that “when we look at the brain, we see many levels of processing. It is believed that each level is learning features or representations at increasing levels of abstraction. For example, the standard model of the visual cortex… suggests that (roughly speaking) the brain first extracts edges, then patches, then surfaces, then objects, etc… This observation has inspired a recent trend in machine learning known as deep learning… which attempts to replicate this kind of architecture in a computer.”\")\n\n\nSome machine learning methods are more transparent than others. [Bostrom & Yudkowsky (2013)](https://intelligence.org/files/EthicsofAI.pdf) explain:\n\n\n\n> If [a] machine learning algorithm is based on a complicated neural network… then it may prove nearly impossible to understand why, or even how, the algorithm [made its judgments]. On the other hand, a machine learner based on decision trees or Bayesian networks is much more transparent to programmer inspection ([Hastie et al. 2001](http://www.amazon.com/The-Elements-Statistical-Learning-Prediction/dp/0387848576/)), which may enable an auditor to discover [why the algorithm made the judgments it did].[8](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_7_10415 \"It is generally accepted that Bayesian networks are more transparent than ANNs, but this is only true up to a point. A Bayesian network with hundreds of nodes that are not associated with human-intuitive concepts is not necessarily any more transparent than a large ANN.\")\n> \n> \n\n\nMoreover, recent work has attempted to make some machine learning methods more transparent, and thus perhaps more suitable for safety-critical applications. 
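To make the decision-tree/neural-network contrast concrete, here is a toy sketch. Both models, and every number in them, are invented for this illustration; the two functions make the same approve/reject decision on the sample inputs, but only the first can be audited by reading it:

```python
import math

# A decision tree is its own explanation: the prediction path can be read
# off directly as a human-intelligible rule. (Thresholds are invented.)
def tree_approve_loan(income, debt):
    if income > 50_000:
        return debt < 20_000   # rule: high income and moderate debt -> approve
    return False               # rule: low income -> reject

# A tiny neural network making a similar decision. The weights below are
# likewise invented; nothing about these numbers reveals *why* any
# particular input is approved or rejected.
W1 = [(0.00031, -0.00047), (-0.00012, 0.00052)]
b1 = (-14.1, 9.3)
W2 = (5.2, -4.8)
b2 = -0.2

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def net_approve_loan(income, debt):
    hidden = [sigmoid(wi * income + wd * debt + b)
              for (wi, wd), b in zip(W1, b1)]
    score = sigmoid(W2[0] * hidden[0] + W2[1] * hidden[1] + b2)
    return score > 0.5

# Both agree that (80_000, 10_000) is approved and (30_000, 5_000) rejected,
# but only the tree's reasons can be quoted verbatim in a safety case.
```

An auditor can cite the tree's two rules directly; to explain the network's behavior one would first have to reverse-engineer its weights, which is exactly the rule-extraction problem taken up next.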
For example, [Taylor (2005)](http://www.amazon.com/Procedures-Verification-Validation-Artificial-Networks/dp/0387282882/) suggests methods for extracting rules (which refer to human-intuitive concepts) from neural networks, so that researchers can perform formal safety analyses of the extracted rules. These methods are still fairly primitive, and are not yet widely applicable or widely used, but further research could make them more useful and popular.[9](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_8_10415 \"For an overview of this work, see Nusser (2009), section 2.2.3. Also see Pulina & Tacchella (2011). Finally, Ng (2011), sec. 4, notes that we can get a sense of what function an ANN has learned by asking which inputs would maximize the activation of particular nodes. In his example, Ng uses this technique to visualize which visual features have been learned by a sparse autoencoder trained on image data.\")\n\n\n#### Evolutionary algorithms\n\n\n[Evolutionary algorithms](http://en.wikipedia.org/wiki/Evolutionary_algorithm) (EAs) are often categorized as a machine learning method, but here they will be considered separately. EAs use methods inspired by evolution to produce [candidate solutions](http://en.wikipedia.org/wiki/Candidate_solution) to problems. For example, watch [this video](https://www.youtube.com/watch?&v=z9ptOeByLA4) of software robots evolving to “walk” quickly.\n\n\nBecause evolutionary algorithms use a process of semi-random mutation and recombination to produce candidate solutions, complex candidate solutions tend not to be transparent — just like the evolutionarily-produced brain. [Mitchell (1998)](http://www.amazon.com/Introduction-Genetic-Algorithms-Complex-Adaptive/dp/0262631857/), p. 
40 writes:\n\n\n\n> Understanding the results of [genetic algorithm] evolution is a general problem — typically the [genetic algorithm] is asked to find [candidate solutions] that achieve high fitness but is not told how that high fitness is to be attained. One could say that this is analogous to the difficulty biologists have in understanding the products of natural evolution (e.g., us)… In many cases… it is difficult to understand exactly how an evolved high−fitness [candidate solution] works. In genetic programming, for example, the evolved programs are often very long and complicated, with many irrelevant components attached to the core program performing the desired computation. It is usually a lot of work — and sometimes almost impossible — to figure out by hand what that core program is.\n> \n> \n\n\n[Fleming & Purshouse (2002)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Fleming-Purshouse-Evolutionary-algorithms-in-control-systems-engineering-a-survey.pdf) add:\n\n\n\n> Mission-critical and safety-critical applications do not appear, initially, to be favourable towards EA usage due to the stochastic nature of the evolutionary algorithm. No guarantee is provided that the results will be of sufficient quality for use on-line.\n> \n> \n\n\n#### Logical methods\n\n\nLogical methods in AI are widely implemented in safety-critical applications (e.g. medicine), but see far less use overall than machine learning methods.\n\n\nIn a logic-based AI, the AI’s knowledge and its systems for reasoning are written out in logical statements. These statements are typically hand-coded, and each statement has a precise meaning determined by the axioms of the logical system being used (e.g. first-order logic). 
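As a toy sketch of this point (the rules and facts below are invented for illustration, not drawn from any of the systems cited here), a few hand-coded rules plus a simple forward-chaining loop yield a system whose every conclusion can be traced back to explicit premises:

```python
# A minimal forward-chaining inference engine over hand-coded rules.
# Every derived fact records which rule and which premises produced it,
# so the system's conclusions can be audited directly.

RULES = [
    # (name, premises, conclusion) -- all invented for this illustration
    ("R1", {"pump_failure"}, "coolant_loss"),
    ("R2", {"coolant_loss", "reactor_on"}, "overheat_risk"),
    ("R3", {"overheat_risk"}, "shutdown_advised"),
]

def forward_chain(facts):
    """Derive everything derivable; return a {fact: (rule, premises)} trace."""
    trace = {f: ("given", set()) for f in facts}
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= set(trace) and conclusion not in trace:
                trace[conclusion] = (name, premises)
                changed = True
    return trace

def explain(fact, trace):
    """Print the full derivation of a fact, one step per line."""
    rule, premises = trace[fact]
    print(f"{fact}  <-  {rule}  {sorted(premises)}")
    for p in premises:
        explain(p, trace)

trace = forward_chain({"pump_failure", "reactor_on"})
explain("shutdown_advised", trace)
```

Because each derived fact carries the name of the rule that produced it, an auditor can ask not just what the system concluded but why, which is the transparency property Galliers describes below.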
[Russell & Norvig (2009)](http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597/), the leading AI textbook, describes the logical approach to AI in [chapter 7](http://commonsenseatheism.com/wp-content/uploads/2013/07/Russell-Norvig-Logical-Agents.pdf). It describes a popular application of logical AI, called “classical planning,” in [chapter 10](http://commonsenseatheism.com/wp-content/uploads/2013/07/Russell-Norvig-Classical-Planning.pdf). Also see [Thomason (2012)](http://plato.stanford.edu/entries/logic-ai/) and [Minker (2000)](http://commonsenseatheism.com/wp-content/uploads/2013/07/Minker-Introduction-to-Logic-Based-Artificial-Intelligence.pdf).\n\n\nGalliers ([1988](http://commonsenseatheism.com/wp-content/uploads/2013/05/Galliers-A-Theoretical-Framework-for-Computer-Models-of-Cooperative-Dialogue-Acknowledging-Multi-agent-Conflict.pdf), p. 88-89) explains the transparency advantages of logic-based methods in AI:\n\n\n\n> A theory expressed as a set of logical axioms is evident; it is open to examination. 
This assists the process of determining whether any parts of the theory are inconsistent, or do not behave as had been anticipated when they were expressed in English… Logics are languages with precise semantics [and therefore] there can be no ambiguities of interpretation… By expressing the properties of agents… as logical axioms and theorems… the theory is transparent; properties, interrelationships and inferences are open to examination… This contrasts with the use of computer code [where] it is frequently the case that computer systems concerned with… problem-solving are in fact designed such that properties of the interacting agents are implicit properties of the entire system, and it is impossible to investigate the role or effects of any individual aspect.[10](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_9_10415 \"Wooldridge (2003) concurs, writing that “Transparency is another advantage [of logical approaches].”\")\n> \n> \n\n\nAnother transparency advantage of logical methods in AI comes from logic languages’ capacity to represent different kinds of machines, including machines that can reflect on themselves and the reasons for their beliefs. For example, they can pass assumptions around with each datum. E.g. see the “domino” agent in [Fox & Das (2000)](http://www.amazon.com/Safe-Sound-Artificial-Intelligence-Applications/dp/0262062119/).\n\n\nMoreover, some logic-based approaches are amenable to [formal methods](http://en.wikipedia.org/wiki/Formal_methods), for example [formal verification](http://en.wikipedia.org/wiki/Formal_verification): mathematically proving that a system will perform correctly with respect to a formal specification.[11](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_10_10415 \"For recent overviews of formal methods in general, see Bozzano & Villafiorita (2010), Woodcock et al. (2009); Gogolla (2006); Bowen & Hinchey (2006). 
For more on the general application of safety engineering theory to AI, see Fox (1993); Yampolskiy & Fox (2013); Yampolskiy (2013).\") Formal methods complement empirical *testing* of software, e.g. by identifying “corner bugs” that are difficult to find when using empirical methods only — see [Mitra 2008](http://commonsenseatheism.com/wp-content/uploads/2013/07/Mitra-Strategies-for-Mainstream-Usage-of-Formal-Verification.pdf).\n\n\nFormal verification is perhaps best known for its use in [verifying hardware components](http://www.amazon.com/Introduction-Formal-Hardware-Verification-Thomas/dp/364208477X/) (especially since the [FDIV bug](http://en.wikipedia.org/wiki/Pentium_FDIV_bug) that cost Intel $500 million), but it is also used to verify a variety of software programs (in part or in whole), including flight control systems ([Miller et al. 2005](http://shemesh.larc.nasa.gov/fm/papers/FormalVerificationFlightCriticalSoftware.pdf)), rail control systems ([Platzer & Quesel 2009](http://commonsenseatheism.com/wp-content/uploads/2013/07/Platzer-Quesel-European-train-control-system-a-case-study-in-formal-verification.pdf)), pacemakers ([Tuan et al. 2010](http://www.comp.nus.edu.sg/~pat/publications/ssiri10_pacemaker.pdf)), compilers ([Leroy 2009](http://hal.inria.fr/docs/00/41/58/61/PDF/compcert-CACM.pdf)), operating system kernels ([Andronick 2011](http://ssrg.nicta.com.au/publications/papers/Andronick_10.pdf)), multi-agent systems ([Raimondi 2006](http://invisque.com/staffpages/f_raimondi/pubs/thesis.pdf)), outdoor robots ([Proetzsch et al. 2007](http://es.informatik.uni-kl.de/publications/datarsg/PBSS07.pdf)), and swarm robotics ([Dixon et al. 2012](http://commonsenseatheism.com/wp-content/uploads/2013/07/Dixon-et-al-Towards-temporal-verification-of-swarm-robotic-systems.pdf)).\n\n\nUnfortunately, formal methods face severe limitations. 
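Before turning to those limitations, a deliberately tiny sketch (a toy traffic-light interlock invented for this post, not one of the industrial systems cited above) shows what verification adds over testing: the checker visits *every* reachable state, so a "safe" verdict is a proof over the model rather than a sample of runs:

```python
from collections import deque

# Toy two-light controller. Each light cycles red -> green -> yellow -> red,
# and an interlock forbids a light from turning green while the other is green.
# Safety property to verify: the lights are never both green.
NEXT = {"red": "green", "green": "yellow", "yellow": "red"}

def step(state):
    """Return every possible successor state (either light may advance)."""
    a, b = state
    succs = []
    if not (NEXT[a] == "green" and b == "green"):  # interlock on light A
        succs.append((NEXT[a], b))
    if not (NEXT[b] == "green" and a == "green"):  # interlock on light B
        succs.append((a, NEXT[b]))
    return succs

def verify(initial):
    """Exhaustively explore all reachable states. Return a violating state
    if one is reachable, or None if the property holds everywhere."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state == ("green", "green"):
            return state                  # counterexample found
        for s in step(state):
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return None                           # proved safe, relative to the model

print(verify(("red", "red")))             # -> None: no reachable bad state
```

Even here, the guarantee is only as strong as the model: if `step` omits a real-world transition (a stuck relay, a power glitch), the proof is silent about it. That is the kind of "open system" problem Fox describes next.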
[Fox (1993)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Fox-On-the-soundness-and-safety-of-expert-systems.pdf) explains:\n\n\n\n> there are severe limitations on the capability of formal design techniques to completely prevent hazardous situations from arising. Current formal design methods are difficult to use and time-consuming, and may only be practical for relatively modest applications. Even if we reserve formal techniques for the safety-critical elements of the system we have seen that the soundness guaranteed by the techniques can only be as good as the specifier’s ability to anticipate the conditions and possible hazards that can hold at the time of use… These problems are difficult enough for ‘closed systems’ in which the designer can be confident, in principle, of knowing all the parameters which can affect system performance… Unfortunately all systems are to a greater or lesser extent ‘open’; they operate in an environment which cannot be exhaustively monitored and in which unpredictable events will occur. Furthermore, reliance on specification and verification methods assumes that the operational environment will not compromise the correct execution of software. In fact of course software errors can be caused by transient faults causing data loss or corruption; user errors; interfacing problems with external systems (such as databases and instruments); incompatibilities between software versions; and so on.[12](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_11_10415 \"Another good point Fox makes is that normal AI safety engineering techniques rely on the design team’s ability to predict all circumstances that might hold in the future: “…one might conclude that using a basket of safety methods (hazard analysis, formal specification and verification, rigorous empirical testing, fault tolerant design) will significantly decrease the likelihood of hazards and disasters. 
However, there is at least one weakness common to all these methods. They rely on the design team being able to make long-range predictions about all the… circumstances that may hold when the system is in use. This is unrealistic, if only because of the countless interactions that can occur… [and] the scope for unforeseeable interactions is vast.”\")\n> \n> \n\n\n[Bowen & Hinchey (1995)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Bowen-Hinchey-Seven-More-Myths-of-Formal-Methods.pdf) concur:\n\n\n\n> There are many… areas where, although possible, formalization is just not practical from a resource, time, or financial aspect. Most successful formal methods projects involve the application of formal methods to critical portions of system development. Only rarely are formal methods, and formal methods alone, applied to all aspects of system development. Even within the CICS project, which is often cited as a major application of formal methods… only about a tenth of the entire system was actually subjected to formal techniques…\n> \n> \n> [We suggest] the following maxim: *System development should be as formal as possible, but not more formal.*\n> \n> \n\n\nFor more on the use of formal methods for AI safety, see [Rushby & Whitehurst (1989)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Rushby-Whitehurst-Formal-verification-of-AI-software.pdf); [Bowen & Stavridou (1993)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Bowen-Stavridou-Safety-critical-systems-formal-methods-and-standards.pdf); [Harper (2000)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Harper-Challenges-for-designing-intelligent-systems-for-safety-critical-applications.pdf); [Spears (2006)](http://www.cs.auckland.ac.nz/~nickjhay/papersuni/spears%20-%20Assuring%20the%20behavior%20of%20adaptive%20agents.pdf); [Fisher et al. 
(2013)](http://delivery.acm.org/10.1145/2500000/2494558/p84-fisher.pdf).[13](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_12_10415 \"See also this program at Bristol University.\")\n\n\n### Some complications and open questions\n\n\nThe common view of transparency and AI safety articulated above suggests an opportunity for [differential technological development](http://en.wikipedia.org/wiki/Differential_technological_development). To increase the odds that future AI systems are safe and reliable, we can invest disproportionately in transparent AI methods, and also in techniques for increasing the transparency of typically opaque AI methods.\n\n\nBut this common view comes with some serious caveats, and some difficult open questions. For example:\n\n\n1. How does the transparency of a method change with scale? A 200-rule logical AI might be more transparent than a 200-node Bayes net, but what if we’re comparing 100,000 rules vs. 100,000 nodes? At least we can *query* the Bayes net to ask “what it believes about X,” whereas we can’t necessarily do so with the logic-based system.\n2. Do the categories above really “[carve reality at its joints](http://lesswrong.com/lw/o0/where_to_draw_the_boundary/)” with respect to transparency? Does a system’s status as a logic-based system or a Bayes net reliably predict its transparency, given that in principle we can use either one to express a probabilistic model of the world?\n3. How much of a system’s transparency is “intrinsic” to the system, and how much of it depends on the quality of the user interface used to inspect it? How much of a “transparency boost” can different kinds of systems get from excellently designed user interfaces?[14](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/#footnote_13_10415 \"As an aside, I’ll briefly remark that user interface confusion has contributed to many computer-related failures in the past. 
For example, Neumann (1994) reports on the case of Iran Air Flight 655, which was shot down by U.S. forces due (partly) to the unclear user interface of the USS Vincennes’ Aegis missile system. Changes to the interface were subsequently recommended. For other UI-related disasters, see Neumann’s extensive page on Illustrative Risks to the Public in the Use of Computer Systems and Related Technology.\")\n\n\n#### Acknowledgements\n\n\nMy thanks to John Fox, Jacob Steinhardt, Paul Christiano, Carl Shulman, Eliezer Yudkowsky, and others for their helpful feedback.\n\n\n\n\n---\n\n1. Quote from [Nusser (2009)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Nusser-Robust-Learning-in-Safety-Related-Domains.pdf). Emphasis added. The original text contains many citations which have been removed in this post for readability. Also see [Schultz & Cronin (2003)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Schultz-Cronin-Essential-and-desirable-characteristics-of-ecotoxicity-quantitative-structure-activity-relationships.pdf), which makes this point by graphing four AI methods along two axes: robustness and transparency. Their graph is available [here](http://commonsenseatheism.com/wp-content/uploads/2013/06/robustness-and-transparency.png). In their terminology, a method is “robust” to the degree that it is flexible and useful on a wide variety of problems and data sets. On the graph, GA means “genetic algorithms,” NN means “neural networks,” PCA means “principal components analysis,” PLS means “partial least squares,” and MLR means “multiple linear regression.” In this sample of AI methods, the trend is clear: the most robust methods tend to be the least transparent. Schultz & Cronin graphed only a tiny sample of AI methods, but the trend holds more broadly.\n2. I will share some additional quotes on the importance of transparency in intelligent systems. [Kröske et al. 
(2009)](http://commonsenseatheism.com/wp-content/uploads/2013/05/Kroske-et-al-Trusted-Reasoning-Engine-for-Autonomous-Systems-with-an-Interactive-Demonstrator.pdf) write that, to trust a machine intelligence, “human operators need to be able to understand [its] reasoning process and the factors that precipitate certain actions.” Similarly, [Fox (1993)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Fox-On-the-soundness-and-safety-of-expert-systems.pdf) writes: “Many branches of engineering have moved beyond purely empirical testing [for safety]… because they have established strong design theories… The consequence is that designers can confidently predict failure modes, performance boundary conditions and so forth *before the systems are implemented*… A promising approach to [getting these benefits in AI] may be to use well-defined specification languages and verification procedures. [Van Harmelen & Balder (1992)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Van-Harmelen-Balder-ML2-a-formal-language-for-KADS-models-of-expertise.pdf) [list some] advantages of using formal languages… [including] the removal of ambiguity… [and] the ability to derive properties of the design in the absence of an implementation.” In their preface, [Fox & Das (2000)](http://www.amazon.com/Safe-Sound-Artificial-Intelligence-Applications/dp/0262062119/) write: “Our first obligation is to try to ensure that the designs of our systems are sound. 
We need to ask not only ‘do they work?’ but also ‘do they work for good reasons?’ Unfortunately, conventional software design is frequently ad hoc, and AI software design is little better and possibly worse… Consequently, we place great emphasis on clear design principles, strong mathematical foundations for these principles, and effective development tools that support and verify the integrity of the system… We are creating a powerful technology [AI], possibly more quickly than we think, that has unprecedented potential to create havoc as well as benefit. We urge the community to embark on a vigorous discussion of the issues and the creation of an explicit ‘safety culture’ in the field.”\n3. Emphasis added. The first paragraph is from Wikipedia’s [black box](http://en.wikipedia.org/wiki/Black_box) page; the second paragraph is from its [white box](http://is.gd/8NbFdy) page. The term “grey box” is sometimes used to refer to methods that are intermediate in transparency between “fully black box” and “fully transparent” methods: see e.g. [Sohlberg (2003)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Sohlberg-Grey-box-modelling-for-model-predictive-control-of-a-heating-process.pdf).\n4. Thus, if we could build a [whole brain emulation](http://en.wikipedia.org/wiki/Mind_uploading) today, it would also be mostly a black box system, even though all its bits of information would be stored in a computer and accessible to database search tools and so on. But we’ll probably make lots of progress in cognitive neuroscience before WBE is *actually* built, and a working WBE would probably enable quick advances in cognitive neuroscience, and therefore the human brain would rapidly become more transparent to us.\n5. 
For more discussion of how machine learning can be used for relatively “transparent” ends, for example to learn the structure of a Bayesian network, see [Murphy (2012)](http://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020/), ch. 26.\n6. [Li & Peng (2006)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Li-Peng-System-oriented-neural-networks-problem-formulation-methodology-and-application.pdf) make the same point: “conventional neural networks… lack transparency, as their activation functions (AFs) and their associated neural parameters bear very little physical meaning.” See also [Woodman et al. (2012)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Woodman-et-al-Building-safer-robots-safety-driven-control.pdf)‘s comments on this issue in the context of personal robotics: “Among the requirements of autonomous robots… is a certain degree of robustness. This means being able to handle errors and to continue operation during abnormal conditions… in a dynamic environment, the robot will frequently find itself in a wide range of previously unseen situations. To date, the majority of research in this area has addressed this issue by using learning algorithms, often implemented as artificial neural networks (ANNs)… However, as [Nehmzow et al. (2004)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Nehmzow-et-al-RobotMODIC-modelling-identification-and-characterisation-of-mobile-robots.pdf) identify, these implementations, although seemingly effective, are difficult to analyse due to the inherent opacity of connection-based algorithms. This means that it is difficult to produce an intelligible model of the system structure that could be used in safety analysis.”\n7. [Murphy (2012)](http://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020/), p. 995, writes that “when we look at the brain, we see many levels of processing. 
It is believed that each level is learning features or representations at increasing levels of abstraction. For example, the standard model of the visual cortex… suggests that (roughly speaking) the brain first extracts edges, then patches, then surfaces, then objects, etc… This observation has inspired a recent trend in machine learning known as deep learning… which attempts to replicate this kind of architecture in a computer.”\n8. It is generally accepted that Bayesian networks are more transparent than ANNs, but this is only true up to a point. A Bayesian network with hundreds of nodes that are not associated with human-intuitive concepts is not necessarily any more transparent than a large ANN.\n9. For an overview of this work, see [Nusser (2009)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Nusser-Robust-Learning-in-Safety-Related-Domains.pdf), section 2.2.3. Also see [Pulina & Tacchella (2011)](http://commonsenseatheism.com/wp-content/uploads/2013/07/Pulina-Tacchella-NeVer-a-tool-for-artificial-neural-networks-verification.pdf). Finally, [Ng (2011)](http://www.stanford.edu/class/cs294a/sparseAutoencoder.pdf), sec. 4, notes that we can get a sense of what function an ANN has learned by asking which inputs would maximize the activation of particular nodes. In his example, Ng uses this technique to visualize which visual features have been learned by a sparse autoencoder trained on image data.\n10. [Wooldridge (2003)](http://www.amazon.com/Reasoning-Rational-Intelligent-Robotics-Autonomous/dp/0262515563/) concurs, writing that “Transparency is another advantage [of logical approaches].”\n11. For recent overviews of formal methods in general, see [Bozzano & Villafiorita (2010)](http://commonsenseatheism.com/wp-content/uploads/2013/07/Bozzano-Villafiorita-Formal-Methods-for-Safety-Assessment.pdf), [Woodcock et al. 
(2009)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Woodcock-et-al-Formal-methods-practice-and-experience.pdf); [Gogolla (2006)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Gogolla-Benefits-and-problems-of-formal-methods.pdf); [Bowen & Hinchey (2006)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Gogolla-Benefits-and-problems-of-formal-methods.pdf). For more on the general application of safety engineering theory to AI, see [Fox (1993)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Fox-On-the-soundness-and-safety-of-expert-systems.pdf); [Yampolskiy & Fox (2013)](https://intelligence.org/files/SafetyEngineering.pdf); [Yampolskiy (2013)](http://commonsenseatheism.com/wp-content/uploads/2013/07/Yampolskiy-Artificial-Intelligence-Safety-Engineering.pdf).\n12. Another good point Fox makes is that normal AI safety engineering techniques rely on the design team’s ability to predict all circumstances that might hold in the future: “…one might conclude that using a basket of safety methods (hazard analysis, formal specification and verification, rigorous empirical testing, fault tolerant design) will significantly decrease the likelihood of hazards and disasters. However, there is at least one weakness common to all these methods. They rely on the design team being able to make long-range predictions about all the… circumstances that may hold when the system is in use. This is unrealistic, if only because of the countless interactions that can occur… [and] the scope for unforeseeable interactions is vast.”\n13. See also [this program](http://www.brl.ac.uk/researchthemes/verificationvalidation.aspx) at Bristol University.\n14. As an aside, I’ll briefly remark that user interface confusion has contributed to many computer-related failures in the past. 
For example, [Neumann (1994)](http://www.amazon.com/Computer-Related-Risks-Peter-G-Neumann/dp/020155805X/) reports on the case of [Iran Air Flight 655](http://en.wikipedia.org/wiki/Iran_Air_Flight_655), which was shot down by U.S. forces due (partly) to the unclear user interface of the USS *Vincennes’* Aegis missile system. Changes to the interface were subsequently recommended. For other UI-related disasters, see Neumann’s extensive page on [Illustrative Risks to the Public in the Use of Computer Systems and Related Technology](http://www.csl.sri.com/users/neumann/illustrative.html).\n\nThe post [Transparency in Safety-Critical Systems](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-25T18:28:23Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "bda8f7cdb75e72ee40c49214128056ba", "title": "Holden Karnofsky on Transparent Research Analyses", "url": "https://intelligence.org/2013/08/25/holden-karnofsky-interview/", "source": "miri", "source_type": "blog", "text": "![](http://intelligence.org/wp-content/uploads/2013/08/holden_w150.jpg)Holden Karnofsky is the co-founder of [GiveWell](http://www.givewell.org), which finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. GiveWell [tracked ~$9.6 million in donations](http://www.givewell.org/about/impact) made on the basis of its recommendations in 2012. It has historically sought proven, cost-effective, scalable giving opportunities, but its new initiative, [GiveWell Labs](http://www.givewell.org/about/labs), is more broadly researching the question of how to give as well as possible.\n\n\n\n**Luke Muehlhauser**: GiveWell has gained respect for its high-quality analyses of some difficult-to-quantify phenomena: the impacts of particular philanthropic interventions. 
You’ve written about your methods for facing this challenge in several blog posts, for example (1) [Futility of standardized metrics: an example](http://blog.givewell.org/2010/05/12/futility-of-standardized-metrics-an-example/), (2) [In defense of the streetlight effect](http://blog.givewell.org/2011/05/27/in-defense-of-the-streetlight-effect/), (3) [Why we can’t take expected value estimates literally](http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/), (4) [What it takes to evaluate impact](http://blog.givewell.org/2011/10/18/what-it-takes-to-evaluate-impact/), (5) [Some considerations against more investment in cost-effectiveness estimates](http://blog.givewell.org/2011/11/04/some-considerations-against-more-investment-in-cost-effectiveness-estimates/), (6) [Maximizing cost-effectiveness via critical inquiry](http://blog.givewell.org/2011/11/10/maximizing-cost-effectiveness-via-critical-inquiry/), (7) [Some history behind our shifting approach to research](http://blog.givewell.org/2012/07/20/some-history-behind-our-shifting-approach-to-research/), (8) [Our principles for assessing research](http://blog.givewell.org/2012/08/17/our-principles-for-assessing-evidence/), (9) [Surveying the research on a topic](http://blog.givewell.org/2012/09/06/surveying-the-research-on-a-topic/), (10) [How we evaluate a study](http://blog.givewell.org/2012/08/23/how-we-evaluate-a-study/), and (11) [Passive vs. rational vs. quantified](http://blog.givewell.org/2013/08/08/passive-vs-rational-vs-quantified/).\n\n\nIn my first question I’d like to ask about one particular thing you’ve done to solve one particular problem with analyses of difficult-to-quantify phenomena. The problem I have in mind is that **it’s often difficult for readers to know how much they should trust a given analysis of a difficult-to-quantify phenomenon**. 
In mathematics research it’s often pretty straightforward for other mathematicians to tell what’s good and what’s not. But what about analyses that combine intuitions, expert opinion, multiple somewhat-conflicting scientific studies, general research in a variety of “soft” sciences, and so on? In such cases it can be difficult for readers to distinguish high-quality and low-quality analyses, and it can be hard for readers to tell whether the analysis is biased in particular ways.\n\n\n\nOne thing GiveWell has done to address this problem is to strive for an unusually high degree of transparency in its analyses. For example, your analyses often include:\n\n\n* Good summaries and bolding to make analyses more skimmable\n* External evaluations\n* Public conversation notes with credentialed experts\n* Detailed discussion of what you think and why\n* Lots of citations, direct quotes from studies, and footnotes\n* Archived copies of cited websites and papers\n\n\nDo you agree with this interpretation of what you’re doing? What are some other things you do to make it easier for readers to tell how they should update their views in response to your analyses?\n\n\n\n\n---\n\n\n**Holden Karnofsky**: Yes, that’s a good characterization of what we’re trying to do.\n\n\nI’d say that our transparency efforts fall into two categories: “support” (making sure that all the relevant information about our decision-making processes is available to those who are seeking it) and “outreach” (proactively raising topics and views and inviting people to question us about them). We’ve found that “support” is often not enough: in order to get meaningful engagement on our work and help people become confident in it, we need to have high-level conversations in which we invite back-and-forth. 
Otherwise, people often feel that they don’t have time to evaluate everything we’ve written, and don’t engage with it.\n\n\nCharity reviews, and most other pages on our main website (not the blog), tend to be in a “support” framework. The core of this framework is that for every statement we make, it should be clear what the basis of the statement is. If the basis of the statement is simply “our guess,” that should be clear (so we often use language like “We’d guess that \\_\\_\\_” rather than simply stating our beliefs without such a qualifier). If the basis of the statement is a more complex chain of reasoning that doesn’t conceptually fit in the space, then such reasoning should be laid out elsewhere and linked to in the context of our statement. And if the basis of our statement is a document, website, or conversation, we use a footnote. Our conventions for footnotes go beyond what is common in academic papers: we generally aim to include the key quote (not just a citation) in a footnote, and we take steps (such as Webcite) to ensure that the original document and full context can be accessed by the reader even in the event that the original host of the source takes it down or changes it. We also try to structure charity reviews in logical ways and provide summary content at different levels of detail, so that it’s always relatively quick to find out what we think on a given topic and determine how to drill down on a particular piece of it.\n\n\nMost of the actual engagement we get with our views comes via “outreach” methods, particularly the blog and [conference calls and in-person events](http://www.givewell.org/research-discussions) (as well as one-on-ones with major donors to GiveWell and our [top charities](http://www.givewell.org/research-discussions)). The blog is probably the single most effective way of getting people to engage with, understand, and trust our research. 
Via the discussions these mediums create, we gain better understandings of where people’s biggest questions and disagreements are, and we try to address these in further blog posts and FAQs. “Outreach” methods are organized around “What people want to know & what we want to tell them” rather than “What we think about charity X or intervention Y” and are often more informal, though they will link back to “support” content as appropriate.\n\n\nIn both frameworks, we rarely publish a piece of content without going over it and asking, “What are the key points here? Can they be summarized at the top, and/or bolded in the body?”\n\n\n\n\n---\n\n\n**Luke**: You [identify](http://blog.givewell.org/2013/08/13/effective-altruism/) GiveWell as part of the [effective altruism](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) movement, and you also [write](http://blog.givewell.org/2013/08/13/effective-altruism/) that “Effective altruism is unusual and controversial” — a contrarian position, we might say.\n\n\nRobin Hanson [notes](http://www.overcomingbias.com/2009/11/contrarian-excuses.html) that “On average, contrarian views are less accurate than standard views… Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right.”\n\n\nDo you agree with Hanson’s analysis in [that post](http://www.overcomingbias.com/2009/11/contrarian-excuses.html)? If so, then do you think GiveWell typically expects reasonable outsiders to give your contrarian views higher than normal credence, and are you able to point to “strong outside indicators that correlate enough with contrarians tending to be more right”?\n\n\n\n\n---\n\n\n**Holden**: I don’t fully agree with that post. 
It implies that contrarian arguments **need** “strong outside indicators”; I think of a good argument as a combination of inside- and outside-view arguments, and enough strength in one area can make up for serious weakness in another. I’d frame it differently. I’d say that anyone who is trying to change minds through rational persuasion needs to think through whose minds they’re trying to change and how much mental energy such people can reasonably be expected to put into evaluating their arguments (bearing in mind that reasonably strong low-detail arguments can create enough intrigue to get the audience to increase their mental energy investment and understand higher-detail arguments). What I agree with in Robin’s post is that there is a non-trivial hurdle to overcome when espousing contrarian views, and (at least initially) a limited amount of mental energy one can expect the audience to invest in one’s argument.\n\n\nIn our case, we’ve so far largely targeted people who already buy into effective altruism. So we don’t deal with much of a “contrarian” dynamic for that specifically. Where I think we do have to “overcome the hurdles to contrarianism” is where we recommend charities that people haven’t necessarily heard of, as opposed to charities that elite conventional wisdom recommends. In making the case for such charities, we do have to think about how to put a low-detail case up front rather than just refer people to our lengthy body of research as a whole. 
We try to follow the principle of “summarize our main points, and make it clear how to drill down on any particular one of them.” That allows people to gain confidence in us via spot-check rather than by going through everything we’ve written (which I believe very few people have done).\n\n\nIn addition, tools like the blog can convince people over time that we’re generally trustworthy and intelligent on the relevant issues, which can allow them to buy into our claims without evaluating all the details of them. (I perceive many people as having followed a similar path to supporting MIRI.)\n\n\nBut even with regard to recommending not-widely-known charities, we have a lot of wind at our backs by virtue of whom we’re targeting. Much of our audience consists of people who already buy into effective altruism and already feel that the quality of dialogue around where to give is extraordinarily low. They also often come to us via referrals from trusted friends or media sources. So they arrive at our site very ready to believe that our claims are plausible.\n\n\nAs a side point, I think effective altruism falls somewhere on the spectrum between “contrarian view” and “unusual taste.” My commitment to effective altruism is probably better characterized as “wanting/choosing to be an effective altruist” than as “believing that effective altruism is correct.” I think that relieves some of the burden of having to “evaluate” effective altruism, though certainly not all of it.\n\n\n\n\n---\n\n\n**Luke**: Recently you’ve begun a new program within GiveWell called [GiveWell Labs](http://blog.givewell.org/2013/05/30/refining-the-goals-of-givewell-labs/), which aims to investigate the effectiveness of causes that are even more difficult to analyze than e.g. global health interventions. In global health, you often get to learn from medical science, multiple randomized controlled trials for specific interventions, and so on. 
But it’s much harder to investigate the cost effectiveness of interventions that aim to improve science, improve political or economic policy, reduce catastrophic risks, etc.\n\n\nSo I would imagine that developing GiveWell Labs has forced you to develop additional tools for analyzing complex phenomena, and for communicating your analyses to others. What are some things you’ve learned so far in the process of developing GiveWell Labs?\n\n\n\n\n---\n\n\n**Holden**: We’re certainly developing new methods of analysis and evaluation. Our working framework for [shallow investigations](http://www.givewell.org/shallow) replaces “proven, cost-effective, scalable charities” with “important, tractable, non-crowded [causes](http://blog.givewell.org/2013/05/30/refining-the-goals-of-givewell-labs/)” in terms of what we’re looking for. Much of our work so far has been more qualitative in nature, aiming to clarify and understand the basic landscape of causes rather than assess the extent to which approaches are “proven.” And we’ve also been doing a fair amount of “immersion” – trying to learn broadly about a field without pre-choosing our set of critical questions and goals (for example, as I’ve started to investigate scientific research, I’ve read about half of a biology textbook and taken multiple chances to attend meetings of scientists). Relative to our traditional work, more of our learning so far on the Labs front comes from conversations as opposed to studies, and as a result our emphasis on (and volume of) [conversation notes](http://www.givewell.org/conversations) has increased dramatically.\n\n\nWith that said, it’s still very early. I think we’re fairly far from having concrete recommendations, and it’s when we have concrete recommendations that it becomes much easier and more productive to work on engaging and convincing outsiders. 
We have certainly made attempts to put out what we’ve learned, using the same basic tools I mentioned before: “support” oriented pages (e.g., [shallow investigation writeups](http://www.givewell.org/shallow)) and “outreach” oriented communications (e.g., [Labs-focused blog posts](http://blog.givewell.org/category/givewell-labs/) and [Labs-focused in-person Q&A’s](http://www.givewell.org/research-discussions)).\n\n\n\n\n---\n\n\n**Luke**: I think GiveWell provides a helpful model of how to do valuable analytic research on difficult-to-quantify phenomena. Are there other groups doing a similar thing (but perhaps on other subjects) that you admire, or that you think are doing some important things right, or that you think are worth imitating in certain ways? You’ve praised the [Cochrane Collaboration](http://www.cochrane.org/) in the past… are there others?\n\n\n\n\n---\n\n\n**Holden**:\n\n\n* We’re definitely fans of Cochrane. Their work is consistently “transparent” in the sense that one can read a review, get a sense of the big picture, know where to drill down on anything one wants to drill on, and ultimately – if one wants – answer just about any question about where their conclusions are coming from.\n* We’ve been impressed with the usefulness and, often, transparency of the [Center for Global Development](http://www.cgdev.org/)’s work (more at [this blog post](http://blog.givewell.org/2013/07/03/grant-to-center-for-global-development-cgd/)).\n* I’m generally a fan of Nate Silver. 
While I wish he disclosed the full details of his models, I usually feel like I have a pretty good idea of what he’s conceptually doing, without having to put much proactive effort into understanding it.\n* While there are many others whose work we use and/or enjoy, those are the only people that jump to mind in terms of “consistently providing transparent, convincing, reasonable analysis of difficult-to-analyze topics and/or topics relevant to GiveWell’s work.” I often see isolated cases of such work, such as particularly good academic papers.\n\n\n\n\n---\n\n\n**Luke**: Why do you think the sort of transparency you’re providing is relatively rare, given that it seems to have some credibility benefits?\n\n\n\n\n---\n\n\n**Holden**: The sort of transparency we’re providing takes a lot of work. Some of this is **because** our approach is relatively rare – for example, we’ve had to [iterate a lot](http://blog.givewell.org/2012/10/11/sharing-notes-from-conversations-case-study-in-pursuing-transparency-in-philanthropy/) (and deal with some awkward situations) on being able to share notes from conversations. We’ve spoken to some organizations that seem potentially interested in increasing transparency, but with the processes and relationships they’ve built up, getting all the third parties they work with to buy in would be a huge project and struggle. It’s easier for us because we’ve put this goal in our DNA from the beginning, but even for us the challenge of getting third parties to be comfortable with what we’re doing can be significant.\n\n\nEven if we weren’t dealing with resistance from third parties, the time we put into writing up our thoughts would be a significant chunk of the total time we spend on research. Maybe half. 
So in some sense it seems like we could learn twice as much if we didn’t feel the need to share our learning widely.\n\n\nWith that said, I think the benefits are significant as well, and I think they go beyond the credibility and influence boosts and feedback we get. Just the process of writing up and supporting our thinking often makes us think more clearly. We used to wait until our research felt “done” to write it up, but we found that the process of writing it up constantly caused us to rethink our conclusions and reopen investigations, as issues that had faded into the backgrounds of our minds re-emerged while we were trying to make our case. Now, we try to write things up “as” we research them, for exactly this reason. Writing things up for a general audience also means that our old material is easier for new employees (and long-running employees with not-so-great memories) to absorb, and so we lose less knowledge internally.\n\n\nEven if we were allocating all of the “money moved” ourselves (rather than making giving recommendations to others) and even if no outsiders could ever see what we wrote, I’d still want to put a great deal of time – maybe almost as much as we do now – into creating clear, supported writeups and summaries.\n\n\n\n\n---\n\n\n**Luke**: Thanks, Holden!\n\n\nThe post [Holden Karnofsky on Transparent Research Analyses](https://intelligence.org/2013/08/25/holden-karnofsky-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-25T15:12:43Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "11d224740eb9f116abdfe5d9493c7717", "title": "2013 Summer Matching Challenge Completed!", "url": "https://intelligence.org/2013/08/21/2013-summer-matching-challenge-completed/", "source": "miri", "source_type": "blog", "text": "Thanks to the generosity of dozens of donors, on August 15th we successfully 
[completed](http://intelligence.org/2013/07/08/2013-summer-matching-challenge/) **the largest fundraiser in MIRI’s history**. All told, we raised **$400,000**, which will fund our research going forward.\n\n\nThis fundraiser came “right down to the wire.” At 8:45pm Pacific time, with only a few hours left before the deadline, we announced on [our Facebook page](https://www.facebook.com/MachineIntelligenceResearchInstitute/posts/560779573959431) that we had only $555 more to raise to meet our goal. At 8:53pm, [Benjamin Hoffman](https://www.facebook.com/benjamin.r.hoffman) donated exactly $555, finishing the drive.\n\n\nOur deepest thanks to all our supporters!\n\n\nThe post [2013 Summer Matching Challenge Completed!](https://intelligence.org/2013/08/21/2013-summer-matching-challenge-completed/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-21T22:25:36Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "79fb8f275cf9f38ba367e8dff3908e78", "title": "Luke at Quixey on Tuesday (Aug. 20th)", "url": "https://intelligence.org/2013/08/16/luke-at-quixey-on-tuesday-aug-20th/", "source": "miri", "source_type": "blog", "text": "![EA & EotW](https://intelligence.org/wp-content/uploads/2013/08/EA-EotW.png)\nThis coming Tuesday, MIRI’s Executive Director Luke Muehlhauser will give a talk at [Quixey](https://www.quixey.com/) titled **Effective Altruism and the End of the World**. If you’re in or near the South Bay, you should come! Snacks will be provided.\n\n\n*Time*: Tuesday, August 20th. Doors open at 7:30pm. Talk starts at 8pm. Q&A starts at 8:30pm.\n\n\n*Place*: Quixey Headquarters, 278 Castro St., Mountain View, CA. ([Google Maps](https://www.google.com/maps/preview#!q=278+Castro+St.%2C+Mountain+View%2C+CA&data=!4m10!1m9!4m8!1m3!1d51377!2d-122.30098!3d37.8707754!3m2!1i1319!2i783!4f13.1))\n\n\n*Entrance*: You cannot enter Quixey from Castro St. 
Instead, please enter through the back door, from the parking lot at the corner of Dana & Bryant.\n\n\nThe post [Luke at Quixey on Tuesday (Aug. 20th)](https://intelligence.org/2013/08/16/luke-at-quixey-on-tuesday-aug-20th/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-16T18:03:01Z", "authors": ["staff"], "summaries": []} -{"id": "bc20e95eb8bb0c382bf2ab0462e06f15", "title": "August Newsletter: New Research and Expert Interviews", "url": "https://intelligence.org/2013/08/13/august-newsletter/", "source": "miri", "source_type": "blog", "text": "Greetings from the Executive Director\nDear friends,\nMy personal thanks to everyone who has contributed to [our ongoing fundraiser](http://intelligence.org/donate/). We are 74% of the way to our goal!\nI’ve been glad to hear from many of you that you’re thrilled with the progress we’ve made in the past two years — progress both as an organization and as a research institute. I’m thrilled, too! And to see a snapshot of where MIRI is *headed*, take a look at the participant lineup for [our upcoming December workshop](http://intelligence.org/2013/07/24/miris-december-2013-workshop/). Some top-notch folks there, including [John Baez](http://en.wikipedia.org/wiki/John_C._Baez).\nWe’re also preparing for the anticipated media interest in James Barrat’s forthcoming book, [*Our Final Invention: Artificial Intelligence and the End of the Human Era*](http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence/dp/0312622376/). The book reads like a detective novel, and discusses our research extensively. *Our Final Invention* will be released on October 1st by a division of [St. 
Martin’s Press](http://en.wikipedia.org/wiki/St._Martin%27s_Press), one of the largest publishers in the world.\nIf you’re happy with the direction we’re headed in, and you haven’t contributed to our fundraiser yet, please [donate now](http://intelligence.org/donate/) to show your support. **Even small donations can make a difference.** This newsletter is ~9,860 subscribers strong, and ~200 of you have contributed during the current fundraiser. If just 21% of the other 9,660 subscribers [give $25](http://intelligence.org/donate/) *as soon as they finish reading this sentence*, then we’ll meet our goal with those funds alone!\nThank you,\nLuke Muehlhauser\nExecutive Director\n\n\n\nSummer Fundraiser Ends August 15th!\nWith only a few days left, we’ve raised **74% of our goal** for our [summer matching challenge](http://intelligence.org/2013/07/08/2013-summer-matching-challenge/). Many thanks to everyone who has contributed!\n**[Donate](http://intelligence.org/donate/) on or before August 15th** to **double your donation** and **help us reach our goal of $200,000 raised** ($400,000 with matching).\nIf we’re able to reach our goal, then not only will we be able to continue to run [research workshops](http://intelligence.org/get-involved/#workshop) and [our other programs](http://intelligence.org/2013/07/08/2013-summer-matching-challenge/), but we might also be in a position later this year to hire our first new full-time mathematical researcher to work with Eliezer Yudkowsky on open problems in Friendly AI theory (e.g. the [Löbian obstacle](http://intelligence.org/2013/08/04/benja-interview/)). We can’t promise we’ll *decide* that hiring a new FAI researcher is the optimal use of those funds at that time, but it is a *serious option* we’re discussing internally. 
Our research workshops have been an excellent tool for evaluating potential hires.\nFeel free to contact Luke Muehlhauser (luke@intelligence.org) directly for more details on how marginal funds will be used at MIRI, *especially if you are considering a major gift* ($5,000 or more).\n\n\nAlgorithmic Progress in Six Domains\nMIRI has released a new technical report by Katja Grace: “[Algorithmic Progress in Six Domains](http://lesswrong.com/r/discussion/lw/i8i/algorithmic_progress_in_six_domains/).”\nThe report summarizes data on algorithmic progress – that is, better performance per fixed amount of computing hardware – in six domains:\n* SAT solvers,\n* Chess and Go programs,\n* Physics simulations,\n* Factoring,\n* Mixed integer programming, and\n* Some forms of machine learning.\n\n\nMIRI’s purpose for collecting these data was to shed light on the question of [intelligence explosion microeconomics](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/), though we suspect the report will be of broad interest within the software industry and computer science academia.\n\n\n4 New Interviews; 2 New Analyses\n[Our blog](http://intelligence.org/blog/) was especially active this past month, with 4 new expert interviews and 2 new analyses.\nNew analyses:\n* [What is AGI?](http://intelligence.org/2013/08/11/what-is-agi/) A quick explanation of the concept of artificial general intelligence, and a selection of operational definitions that allow us to be even more specific about what we mean by “AGI.”\n* [AI Risk and the Security Mindset](http://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/): “A recurring problem in much of the literature on ‘machine ethics’ or ‘AGI ethics’ or ‘AGI safety’ is that researchers and commenters often appear to be asking the question ‘How will this solution work?’ rather than ‘How will this solution fail?'”\n\n\nNew expert interviews:\n* James Miller (economics, Smith College) on [Unusual Incentives Facing 
AGI Companies](http://intelligence.org/2013/07/12/james-miller-interview/)\n* Roman Yampolskiy (computer science, U of Louisville) on [AI Safety Engineering](http://intelligence.org/2013/07/15/roman-interview/)\n* Nick Beckstead (philosophy, Oxford) on [the Importance of the Far Future](http://intelligence.org/2013/07/17/beckstead-interview/)\n* Benja Fallenstein (decision-making, Bristol U) on [the Löbian Obstacle to Self-Modifying Systems](http://intelligence.org/2013/08/04/benja-interview/)\n\n\n\n\nBurning Man Camp for Effective Altruists\nThe [effective altruism movement](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) ([GiveWell](http://www.givewell.org/), [80,000 Hours](http://80000hours.org/), etc.) is one of MIRI’s intellectual communities. If you’re going to [Burning Man](http://www.burningman.com/) this year (Aug. 26 – Sep. 2) and would like to camp with many of MIRI’s closest friends (including e.g. Anna Salamon of [CFAR](http://rationality.org/)), then you may want to consider applying to the Burning Man theme camp for effective altruists, called *Paradigm*.\n*Paradigm* has an excellent location on 6:30 and A, near Center Camp. Its organizer is Nevin Freeman (nevin.freeman@gmail.com). *Paradigm* will build a large dome called the Temple of Skeptical Consequentialism, which will host talks about effective altruism and rationality. 
Additional details, maps, and photos are [here](https://docs.google.com/document/d/1YPR7TCmpl43BoGHVEpr9nE72-ZtqTOkdt9LDHOLma3k/edit).\nSpots are limited and many are already taken, so if you’re interested, [apply ASAP](https://docs.google.com/forms/d/1yrTf5jlzmfaSTeJkHJZaaZxwhuKorlTFG9RC6yQ3P_Q/viewform)!\n\n\nJob Openings at Giving What We Can and 80,000 Hours\nOur friends at [80,000 Hours](http://80000hours.org/) (80k) and [Giving What We Can](http://www.givingwhatwecan.org/) (GWWC) — two Oxford, UK organizations in the [effective altruism](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) movement — are hiring.\nGWWC is [hiring](http://www.givingwhatwecan.org/blog/2013-07-22/were-hiring) a Director of Communications, a Director of Community, and a Director of Research. Under “Why You Should Apply,” they give these reasons:\n* *Impact*: A lot of charities will tell you that you can make a difference. Here we actually calculate that difference and we are driven by improving the measurable results of this organisation.\n* *Inspiration*: Offices don’t come much more intellectually stimulating than ours. Based in offices shared with the Future of Humanity Institute at Oxford University, you’ll be part of a team that dedicates itself to understanding how we can do the most good – and actually delivering on those ideas.\n* *Personal development*: All three positions offer fantastic personal development opportunities. We’re looking for talent as well as experience – and you’ll have opportunities to learn on the job, supported by a community dedicated to boosting their personal effectiveness.\n\n\n80k is hiring a [Careers Analyst](http://80000hours.org/blog/236-80-000-hours-is-hiring), a [Director of Fundraising](http://80000hours.org/blog/240-we-re-looking-for-a-director-of-fundraising-and-a-finance-manager), and a [Finance Manager](http://80000hours.org/blog/240-we-re-looking-for-a-director-of-fundraising-and-a-finance-manager). 
See their (longer) pitch for why you should apply [here](http://80000hours.org/blog/236-80-000-hours-is-hiring).\n\n\nFeatured Volunteer – Bikramjeet Singh\nBikramjeet Singh is a 24-year-old volunteer from India who first found out about MIRI’s work when he was 16 – almost 8 years ago! He became acquainted with our deputy director Louie Helm 2 years later. After doing some work promoting MIRI and searching for volunteers, he was introduced to Michael Anissimov and became his personal media assistant. He remained in that position until February 2012, and joined the [MIRIvolunteers.org](http://mirivolunteers.org/) system in December 2012. His motivation for volunteering for MIRI stems from his interest in existential risk reduction and his judgment that FAI is the best way to solve that problem. Bikramjeet’s dream job is to be an AI researcher and a science fiction author.\nThanks for all your help, Bikramjeet!\n\n\nThe post [August Newsletter: New Research and Expert Interviews](https://intelligence.org/2013/08/13/august-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-14T00:37:04Z", "authors": ["Jake"], "summaries": []}
-{"id": "b9b06a721a772e9d5d69dcaffa533493", "title": "What is AGI?", "url": "https://intelligence.org/2013/08/11/what-is-agi/", "source": "miri", "source_type": "blog", "text": "![android looking up](http://intelligence.org/wp-content/uploads/2013/08/android-looking-up.jpg)One of the most common objections we hear when talking about artificial general intelligence (AGI) is that “AGI is ill-defined, so you can’t really say much about it.”\n\n\nIn an [earlier post](http://intelligence.org/2013/06/19/what-is-intelligence-2/), I pointed out that we *often* don’t have precise definitions for things while doing useful work on them, as was the case with the concepts of “number” and “self-driving car.”\n\n\nStill, we must have *some* idea of what we’re 
talking about. [Earlier](http://intelligence.org/2013/06/19/what-is-intelligence-2/) I gave a rough working definition for “intelligence.” In this post, I explain the concept of AGI and also provide several possible [operational definitions](http://en.wikipedia.org/wiki/Operational_definition) for the idea.\n\n\n### The idea of AGI\n\n\nAs discussed earlier, the concept of “general intelligence” refers to the capacity for [efficient *cross-domain* optimization](http://intelligence.org/2013/06/19/what-is-intelligence-2/). Or as Ben Goertzel likes to [say](http://agi-school.org/2009/lecture-01), “the ability to achieve complex goals in complex environments using limited computational resources.” Another idea often associated with general intelligence is the ability to transfer learning from one domain to other domains.\n\n\nTo illustrate this idea, let’s consider something that would *not* count as a general intelligence.\n\n\nComputers [show](http://en.wikipedia.org/wiki/Progress_in_artificial_intelligence#Performance_evaluation) vastly superhuman performance at some tasks, roughly human-level performance at other tasks, and subhuman performance at still other tasks. 
If a team of researchers was able to combine many of the top-performing “[narrow AI](http://agi-school.org/2009/lecture-01)” algorithms into one system, as Google may be trying to do,[1](https://intelligence.org/2013/08/11/what-is-agi/#footnote_0_10388 \"In an interview with The Register, Google head of research Alfred Spector said, “We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users… If we combine all these things together with humans in the loop continually providing feedback our systems become … intelligent.” Spector calls this the “combination hypothesis.”\") they’d have a massive “Kludge AI” that was terrible at most tasks, mediocre at some tasks, and superhuman at a few tasks.\n\n\nLike the Kludge AI, particular humans are terrible or mediocre at most tasks, and far better than average at just a few tasks.[2](https://intelligence.org/2013/08/11/what-is-agi/#footnote_1_10388 \"Though, there are probably many disadvantaged humans for which this is not true, because they do not show far-above-average performance on any tasks.\") Another similarity is that the Kludge AI would probably show measured correlations between many different narrow cognitive abilities, just as humans do (hence the concepts of *g* and IQ[3](https://intelligence.org/2013/08/11/what-is-agi/#footnote_2_10388 \"Psychologists now generally agree that there is a general intelligence factor in addition to more specific mental abilities. For an introduction to the modern synthesis, see Gottfredson (2011). For more detail, see the first few chapters of Sternberg & Kaufman (2011). 
If you’ve read Cosma Shalizi’s popular article “g, a Statistical Myth, please also read its refutation here and here.\")): if we gave the Kludge AI lots more hardware, it could use that hardware to improve its performance in many different narrow domains simultaneously.[4](https://intelligence.org/2013/08/11/what-is-agi/#footnote_3_10388 \"In psychology, the factor analysis is done between humans. Here, I’m suggesting that a similar factor analysis could hypothetically be done between different Kludge AIs, with different Kludge AIs running basically the same software but having access to different amounts of computation. The analogy should not be taken too far, however. For example, it isn’t the case that higher-IQ humans have much larger brains than other humans.\")\n\n\nOn the other hand, the Kludge AI would not (yet) have *general intelligence*, because it wouldn’t necessarily have the capacity to solve somewhat-arbitrary problems in somewhat-arbitrary environments, wouldn’t necessarily be able to transfer learning in one domain to another, and so on.\n\n\n\n### Operational definitions of AGI\n\n\nCan we be more specific? This idea of general intelligence *is* difficult to operationalize. Below I consider four operational definitions for AGI, in (apparent) increasing order of difficulty.\n\n\n#### The Turing test ($100,000 Loebner prize interpretation)\n\n\nThe [Turing test](http://en.wikipedia.org/wiki/Turing_test) was proposed in [Turing (1950)](http://orium.homelinux.org/paper/turingai.pdf), and has many interpretations ([Moor 2003](http://www.amazon.com/Turing-Test-Artificial-Intelligence-Cognitive/dp/1402012047/)).\n\n\nOne specific interpretation is provided by the conditions for winning the [$100,000 Loebner Prize](http://en.wikipedia.org/wiki/Loebner_prize). 
Since 1990, [Hugh Loebner](http://en.wikipedia.org/wiki/Hugh_Loebner) has offered $100,000 to the first AI program to pass this test at the annual [Loebner Prize competition](http://www.loebner.net/Prizef/loebner-prize.html). Smaller prizes are given to the best-performing AI program each year, but no program has performed well enough to win the $100,000 prize.\n\n\nThe *exact* conditions for winning the $100,000 prize will not be defined until a program wins the $25,000 “silver” prize, which has not yet been done. However, we do know the conditions will look *something* like this: A program will win the $100,000 if it can fool half the judges into thinking it is human while interacting with them in a freeform conversation for 30 minutes *and* interpreting audio-visual input.\n\n\n#### The coffee test\n\n\n[Goertzel et al. (2012)](http://commonsenseatheism.com/wp-content/uploads/2013/06/Goertzel-et-al.-The-Architecture-of-Human-Like-General-Intelligence.pdf) suggest a (probably) more difficult test — the “coffee test” — as a potential operational definition for AGI:\n\n\n\n> go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, etc.\n> \n> \n\n\nIf a robot could do that, perhaps we should consider it to have general intelligence.[5](https://intelligence.org/2013/08/11/what-is-agi/#footnote_4_10388 \"The coffee test was inspired by Steve Wozniak’s prediction that we would never “build a robot that could walk into an unfamiliar house and make a cup of coffee” (Adams et al. 2011). 
Wozniak’s original prediction was made in a PC World piece from July 19, 2007 called Three Minutes with Steve Wozniak.\")\n\n\n#### The robot college student test\n\n\n[Goertzel (2012)](http://www.newscientist.com/article/mg21528813.600-what-counts-as-a-conscious-thinking-machine.html) suggests a (probably) more challenging operational definition, the “robot college student test”:\n\n\n\n> when a robot can enrol in a human university and take classes in the same way as humans, and get its degree, then I’ll [say] we’ve created [an]… artificial general intelligence.\n> \n> \n\n\n#### The employment test\n\n\n[Nils Nilsson](http://ai.stanford.edu/~nilsson/), one of AI’s founding researchers, once suggested an even more demanding operational definition for “human-level AI” (what I’ve been calling AGI), the [employment test](http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General%20Essays/AIMag26-04-HLAI.pdf):\n\n\n\n> Machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do. Among these activities are the tasks or “jobs” at which people are employed. I suggest we replace the Turing test by something I will call the “employment test.” To pass the employment test, AI programs must… [have] at least the *potential* [to completely automate] economically important jobs.[6](https://intelligence.org/2013/08/11/what-is-agi/#footnote_5_10388 \"First, Nilsson proposes that to pass the employment test, “AI programs must be able to perform the jobs ordinarily performed by humans.” But later, he modifies this specification: “For the purposes of the employment test, we can finesse the matter of whether or not human jobs are actually automated. 
Instead, I suggest, we can test whether or not we have the capability to automate them.” In part, he suggests this modification because “many of today’s jobs will likely disappear — just as manufacturing buggy whips did.”\")\n> \n> \n\n\nTo develop this operational definition more completely, one could provide a canonical list of “economically important jobs,” produce a special [vocational exam](http://www.studyguidezone.com/vocational_exams.htm) for each job (e.g. both the written and driving exams required for a U.S. [commercial driver’s license](http://www.fmcsa.dot.gov/registration-licensing/cdl/cdl.htm)), and measure machines’ performance on those vocational exams.\n\n\nThis is a bit “unfair” because I doubt that any *single* human could pass such vocational exams for any long list of economically important jobs. On the other hand, it’s quite possible that many unusually skilled humans would be able to pass all or nearly all such vocational exams if they spent an entire lifetime training each skill, and an AGI — having near-perfect memory, faster thinking speed, no need for sleep, etc. — would presumably be able to train itself in all required skills much more quickly, *if* it possessed the kind of general intelligence we’re trying to operationally define.\n\n\n### The future is foggy\n\n\nOne or more of these operational definitions for AGI might seem compelling, but a look at history should teach us some humility.\n\n\nDecades ago, several leading AI scientists seemed to think that human-level performance at *chess* could represent an achievement of AGI-proportions. Here are [Newell et al. 
(1958)](http://commonsenseatheism.com/wp-content/uploads/2013/07/Newell-et-al-Chess-playing-programs-and-the-problem-of-complexity.pdf):\n\n\n\n> Chess is the intellectual game *par excellence*… If one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor.[7](https://intelligence.org/2013/08/11/what-is-agi/#footnote_6_10388 \"A bit later, they add a note of caution: “Now there might [be] a trick… something that [is] as the wheel to the human leg: a device quite different from humans in its methods, but supremely effective in its way, and perhaps very simple. Such a device might play excellent chess, but… fail to further our understanding of human intellectual processes. Such a prize, of course, would be worthy of discovery in its own right, but there appears to be nothing of this sort in sight.”\")\n> \n> \n\n\nAs late as 1976, I.J. Good [asserted](http://intelligence.org/wp-content/uploads/2013/05/Good-Review-of-The-World-Computer-Chess-Championship.pdf) that human-level performance in computer chess was a good signpost for AGI, writing that “a computer program of Grandmaster strength would bring us within an ace of [machine ultra-intelligence].”\n\n\nBut machines surpassed the best human chess players about 15 years ago, and we still seem to be several decades away from AGI.\n\n\nThe surprising success of self-driving cars may offer another lesson in humility. Had I been an AI scientist in the 1960s, I might well have thought that a self-driving car as capable as [Google’s driverless car](http://en.wikipedia.org/wiki/Google_driverless_car) would indicate the arrival of AGI. After all, a self-driving car must act with high autonomy, at high speeds, in an extremely complex, dynamic, and uncertain environment: namely, the real world. 
It must also (on rare occasions) face genuine moral dilemmas such as the philosopher’s [trolley problem](http://craigweich.com/post/36670778407/machine-ethics-and-the-trolley-problem-as). Instead, Google built its driverless car with a series of “cheats” I might not have conceived of in the 1960s — for example by mapping with high precision almost every road, freeway on-ramp, and parking lot in the country *before* it built its driverless car.\n\n\n### Conclusion\n\n\nSo, what’s a good operational definition for AGI? I personally lean toward Nilsson’s employment test, but *you* might have something else in mind when you talk about AGI.\n\n\nI expect to pick a new working definition sometime in the next 20 years, as AGI draws nearer, but Nilsson’s operationalization will do for now.\n\n\n#### Acknowledgements\n\n\nMy thanks to Carl Shulman, Ben Goertzel, and Eliezer Yudkowsky for their feedback on this post.\n\n\n\n\n---\n\n1. In an [interview](http://www.theregister.co.uk/2013/05/17/google_ai_hogwash/) with *The Register*, Google head of research [Alfred Spector](http://en.wikipedia.org/wiki/Alfred_Spector) said, “We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users… If we combine all these things together with humans in the loop continually providing feedback our systems become … intelligent.” Spector calls this the “combination hypothesis.”\n2. Though, there are probably many disadvantaged humans for which this is not true, because they do not show far-above-average performance on *any* tasks.\n3. Psychologists now generally agree that there is a general intelligence factor in addition to more specific mental abilities. For an introduction to the modern synthesis, see [Gottfredson (2011)](http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf). 
For more detail, see the first few chapters of [Sternberg & Kaufman (2011)](http://www.amazon.com/Cambridge-Handbook-Intelligence-Handbooks-Psychology/dp/052173911X/). If you’ve read Cosma Shalizi’s popular article “[*g*, a Statistical Myth](http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/523.html),” please also read its refutation [here](http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/) and [here](http://humanvarieties.org/2013/04/14/some-further-notes-on-g-and-shalizi/).\n4. In psychology, the factor analysis is done *between humans*. Here, I’m suggesting that a similar factor analysis could hypothetically be done *between different Kludge AIs*, with different Kludge AIs running basically the same software but having access to different amounts of computation. The analogy should not be taken too far, however. For example, it isn’t the case that higher-IQ humans have much larger brains than other humans.\n5. The coffee test was inspired by Steve Wozniak’s prediction that we would never “build a robot that could walk into an unfamiliar house and make a cup of coffee” ([Adams et al. 2011](http://www.cse.buffalo.edu/faculty/shapiro/Papers/hlai.pdf)). Wozniak’s original prediction was made in a *PC World* piece from July 19, 2007 called [Three Minutes with Steve Wozniak](http://www.pcworld.com/article/134826/article.html).\n6. First, Nilsson proposes that to pass the employment test, “AI programs must be able to perform the jobs ordinarily performed by humans.” But later, he modifies this specification: “For the purposes of the employment test, we can finesse the matter of whether or not human jobs are *actually* automated. Instead, I suggest, we can test whether or not we have the *capability* to automate them.” In part, he suggests this modification because “many of today’s jobs will likely disappear — just as manufacturing buggy whips did.”\n7. 
A bit later, they add a note of caution: “Now there might [be] a trick… something that [is] as the wheel to the human leg: a device quite different from humans in its methods, but supremely effective in its way, and perhaps very simple. Such a device might play excellent chess, but… fail to further our understanding of human intellectual processes. Such a prize, of course, would be worthy of discovery in its own right, but there appears to be nothing of this sort in sight.”\n\nThe post [What is AGI?](https://intelligence.org/2013/08/11/what-is-agi/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-11T18:32:36Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "e824fe5f112623c72c193ad461b76fd5", "title": "Benja Fallenstein on the Löbian Obstacle to Self-Modifying Systems", "url": "https://intelligence.org/2013/08/04/benja-interview/", "source": "miri", "source_type": "blog", "text": "![](http://intelligence.org/wp-content/uploads/2013/02/associate_benja.png)Benja Fallenstein researches mathematical models of human and animal behavior at [Bristol University](http://www.bris.ac.uk/), as part of the [MAD research group](http://www.bristol.ac.uk/biology/research/behaviour/mad/) and the [decision-making research group](http://www.bris.ac.uk/decisions-research/).\n\n\nBefore that, she graduated from University of Vienna with a BSc in Mathematics. In her spare time, Benja studies questions relevant to AI impacts and Friendly AI, including: AI forecasting, intelligence explosion microeconomics, reflection in logic, and decision algorithms.\n\n\nBenja has attended two of [MIRI’s research workshops](http://intelligence.org/get-involved/), and is scheduled to attend another in December.\n\n\n\n**Luke Muehlhauser**: Since you’ve attended two MIRI research workshops on “Friendly AI math,” I’m hoping you can explain to our audience what that work is all about. 
To provide a concrete example, I’d like to talk about the [Löbian obstacle to self-modifying artificial intelligence](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/), which is one of the topics that MIRI’s recent workshops have focused on. To start with, could you explain to our readers what this problem is and why you think it is important?\n\n\n\n\n\n---\n\n\n**Benja Fallenstein**: MIRI’s research is based on I.J. Good’s concept of the [intelligence explosion](http://intelligence.org/ie-faq/): the idea that once we build an artificial intelligence that’s as good as a human at doing artificial intelligence research, this AI will be able to figure out how to make itself even smarter, and even better at AI research, leading to a runaway process that will eventually create machines far surpassing the capabilities of any human being. When this happens, [we really want these machines to have goals that humanity](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) [would approve of](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/). True, it’s not very likely that an AI would decide that it wants to rule over us (that’s just [anthropomorphism](http://en.wikipedia.org/wiki/Anthropomorphism)), but most goals that a computer could have would be dangerous to us: For example, imagine a computer that wants to calculate π to as many digits as possible. That computer will see humans as being made of atoms which it could use to build more computers; and worse, since we would object to that and might try to stop it, we’d be a potential threat that it would be in the AI’s interest to eliminate ([Omohundro 2008](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)). 
So we want to make very sure that the end result of an intelligence explosion (after many, many self-improvements) is an AI with “good” goals.\n\n\nNow you might think that all we need to do is to build our initial AI to have “good” goals. As a toy model, imagine that you program your AI to only take an action x if it can prove that doing x will lead to a “good” outcome. Then the AI won’t self-modify to have “bad” goals, because it won’t be able to prove that this self-modification leads to a “good” outcome. But on the other hand, you’d think that this AI would be able to self-modify in a way that leaves its goals intact: You’d think that it would be able to reason, “Well, the new version of me will only take an action y if it can prove that this leads to an outcome it likes, and it likes ‘good’ outcomes, just as I do — so whatever it does will lead to a ‘good’ outcome, it’s all fine!”\n\n\nBut here’s the problem: In this chain of reasoning, our AI needs to go from “the new version will only take an action y if it has proven that y leads to a good outcome” to “it will only take y if this actually leads to a good outcome.” Intuitively, this seems like a perfectly reasonable argument; after all, we trust proofs in whatever formal system the AI is using (or we’d have programmed the AI to use a different system), so why shouldn’t the AI do the same? But by [Löb’s Theorem](http://lesswrong.com/lw/t6/the_cartoon_guide_to_l%C3%B6bs_theorem/), no sufficiently strong formal system can know that everything that it proves to be true is actually true: whenever such a system proves “if P is provable, then P is true,” it must already be able to prove P itself. That’s what we call the “Löbian obstacle.”\n\n\n\n\n---\n\n\n**Luke**: You’ve called using mathematical proofs a “toy model,” but that’s exactly how work at the recent MIRI workshops has approached the Löbian obstacle. Do you think that practical AI designs will be based on logic and proofs? 
How confident are you that the Löbian obstacle will be relevant to a realistic artificial intelligence, and that the work MIRI is currently doing will be applicable in that context?\n\n\n\n\n---\n\n\n**Benja**: We certainly don’t think that a realistic AI will literally be able to find a mathematical proof that its actions are guaranteed to lead to “good” outcomes. Any practical AI will be uncertain about many things and will need to use probabilistic reasoning. There are two reasons why I think that MIRI’s current work has a decent chance of being relevant in that setting.\n\n\nFirst, Löb’s theorem is only one instance of a “diagonalization argument” placing limits on the degree to which a formal system can do self-reference. For example, there’s [Tarski’s theorem](http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem) that a powerful formal language can’t talk about which sentences in that language are true, because otherwise you could have a formal analogue of the [Liar](http://en.wikipedia.org/wiki/Liar_paradox) [paradox](http://en.wikipedia.org/wiki/Liar_paradox) “this sentence is false,” and [Turing’s halting problem](http://en.wikipedia.org/wiki/Halting_problem), which says that there’s no computer program which can say for arbitrary other programs whether they go into an infinite loop. Other well-known examples include [Russell’s paradox](http://en.wikipedia.org/wiki/Russell%27s_paradox) and [Cantor’s argument that not all infinite sets are the same size](http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument). 
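Of these diagonalization arguments, the halting problem is perhaps the easiest to make concrete. A minimal Python sketch of the argument (the `halts` oracle is hypothetical, stubbed out only so the self-referential `diagonal` program can contradict it):

```python
def halts(program, arg):
    """Hypothetical total oracle: True iff program(arg) would terminate.
    No correct implementation can exist; this stub just marks the
    assumption we are about to contradict."""
    raise NotImplementedError("no such oracle")


def diagonal(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on itself."""
    if halts(program, program):
        while True:  # oracle said we halt, so loop forever
            pass
    # oracle said we loop forever, so halt immediately


# Feeding diagonal to itself forces the contradiction:
# diagonal(diagonal) halts  <=>  halts(diagonal, diagonal) is False,
# so the oracle cannot be both total and correct.
```

Tarski’s theorem and Löb’s theorem come from the same self-reference trick, applied to truth and provability rather than to halting.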
Similar arguments apply to simple-minded ways of doing probabilistic reasoning, so I feel that it’s unlikely that the problem will just automatically go away when we start using probability, and I think there is a decent chance that the work we are doing now will lead to insights that are applicable in that setting.\n\n\nAnd second, in order to achieve a reasonable probability that our AI still follows the same goals after billions of rewrites, we must have a very low chance of going wrong in every single step, and machine-verified formal mathematical proofs are the one way we know to become extremely confident that something is true (especially statements like “this AI design won’t destroy the world”, where we don’t get to just observe many independent examples). Although you can never be sure that a program will work as intended when run on a real-world computer — it’s always possible that a cosmic ray will hit a transistor and make things go awry — you can prove that a program would satisfy certain properties when run on an ideal computer. Then you can use probabilistic reasoning and error-correcting techniques to make it extremely probable that when run on a real-world computer, your program still satisfies the same property. 
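The “very low chance of going wrong in every single step” that this argument turns on is easy to quantify; a back-of-the-envelope check in Python (the billion-step count is illustrative, and failures are assumed independent):

```python
# With n independent rewrite steps, each failing with probability p,
# the chance that *every* step preserves the AI's goals is (1 - p)**n.
n = 10**9  # a billion rewrites (illustrative figure)

# Even a one-in-a-million per-step failure rate is hopeless at this scale:
print((1 - 1e-6) ** n)   # ~e^-1000, effectively zero

# A per-step failure rate of one in 100 billion keeps overall success
# around 99%, since n * p = 0.01:
print((1 - 1e-11) ** n)  # ~0.99
```

Numbers like these are why ordinary levels of software-engineering confidence are nowhere near enough for this problem.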
So it seems likely that a realistic Friendly AI would still have *components* that do logical reasoning or something that looks very much like it.\n\n\nI tend to think not in terms of the results that we are currently proving being directly relevant to a future AI design; rather, I hope that the work we are currently doing will help us understand the problems better and lead to insights that lead to insights that ultimately allow us to build a safe self-improving machine intelligence.\n\n\n\n\n---\n\n\n**Luke**: What sort of historical precedent do we have for doing technical work that we hope will lead to some insights, which will lead to other insights, which will lead to other insights, which will lead to useful application many years later?\n\n\nI suppose that kind of thing happens in mathematics on occasion, for example when in the 1980s it was discovered that one might be able to prove [Fermat’s Last Theorem](http://en.wikipedia.org/wiki/Fermat%27s_last_theorem) via the modularity theorem, which prompted Andrew Wiles to pursue this attack, which allowed him to prove Fermat’s Last Theorem after about a decade of work ([Singh 1997](http://www.amazon.com/Fermats-Enigma-Greatest-Mathematical-Problem/dp/0802713319/)). Another example was Hamilton’s attack on the [Poincaré conjecture](http://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture) via [Ricci flow](http://en.wikipedia.org/wiki/Ricci_flow) on a manifold, which began in 1982 and led to Perelman’s proof in 2003 ([Szpiro 2008](http://www.amazon.com/Poincares-Prize-Hundred-Year-Greatest-ebook/dp/B001RTKITQ/)). 
Of course, other conjectures have thus far resisted decades of effort to prove them, for example the [Riemann Hypothesis](http://en.wikipedia.org/wiki/Riemann_hypothesis) ([Rockmore 2007](http://www.amazon.com/Stalking-Riemann-Hypothesis-Numbers-ebook/dp/B0012RMVES/)) and P ≠ NP ([Fortnow 2013](http://www.amazon.com/The-Golden-Ticket-Impossible-ebook/dp/B00BKZYGUY/)).\n\n\nBut “goal stability under self-modification” isn’t as well-defined as the conjectures by Fermat and Poincaré. Maybe more analogous examples come from the field of computer science? For example, many early AI scientists worked toward the goal of writing a computer program that could play Grandmaster-level chess, even though they couldn’t be sure exactly what such a program would look like. There are probably analogues in quantum computing, too.\n\n\nBut anyway: how do you think about this?\n\n\n\n\n---\n\n\n**Benja**: My gut feeling actually tends to be that what we are trying to do here is fairly unusual — and for a good reason: it’s risky. If you want to be sure that what you’re working on isn’t a dead end, you’d certainly want to choose a topic where the gap between our goals and our current knowledge is smaller than in FAI. But I’m worried that if we wait to start FAI research until we understand how AGIs will work, then there won’t be enough time remaining before the intelligence explosion to actually get the task done, so my current feeling is that the right tradeoff is to start now despite the chance of taking the wrong tack.\n\n\nBut then again, maybe our situation isn’t as unusual as my gut feeling suggests. Depending on how close you want the analogy to be, there may be many examples where scientists have a vague idea of the problem they want to solve, but aren’t able to tackle it directly, so instead they look for a small subproblem that they think they can make some progress on. 
You could tell a story where much of physics research is ultimately aimed at figuring out the true basic laws of the universe, and yet all a physicist can actually do is work on the next problem in front of them. Surely psychology was aimed from the beginning at figuring out all about how the human mind works, and yet starting by training rats to press a lever to get food, and later following this up by sticking an electrode in the rat’s brain and seeing what neurons are involved in accomplishing this task, can count as insights leading to insights that will plausibly help us figure out what’s really going on. Your own post on “[searching under the](http://lesswrong.com/lw/hsd/start_under_the_streetlight_then_push_into_the/) [streetlight](http://lesswrong.com/lw/hsd/start_under_the_streetlight_then_push_into_the/)” gives some examples of this pattern as well.\n\n\n\n\n---\n\n\n**Luke**: Can you say more about why you and some others think this problem of stability under self-modification should be investigated from the perspective of mathematical logic? For example, Stanford graduate student [Jacob Steinhardt](http://cs.stanford.edu/%7Ejsteinhardt/) [commented](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/9ai6) that the first tool he’d reach for to investigate this problem would not be mathematical logic, but instead “a martingale…, which is a statistical process that somehow manages to correlate all of its failures with each other… This can yield bounds on failure probability that hold for extremely long time horizons, even if there is non-trivial stochasticity at every step.”\n\n\n\n\n---\n\n\n**Benja**: I said earlier that in order to have a chance that our AI still follows the same goals after billions of rewrites, the probability of going wrong on any particular step must be very, very small. That’s true, but it’s not very quantitative. If we want to have 99% probability of success, how large a risk can we take on any particular step? 
It would be sufficient if the probability is lower than one in 100 billion each time, but that’s not really necessary. Jacob’s idea of using martingales is a similar but more flexible answer to this question, which allows you to take slightly larger risks under certain circumstances.\n\n\nBut even with this additional flexibility, you will still need a way to achieve extremely high confidence that what you’re doing is safe on most of the rewrite steps. And we can’t just achieve that confidence by the tried-and-true way of running an experiment with a large sample: The question is whether a rewrite our nascent AI is considering early on will lead to the intended results after the AI has become superintelligent and spread through the solar system and beyond — you can’t just simulate that, if you don’t already have these resources yourself!\n\n\nSo we need a way to reason abstractly about how our AI will behave in situations that are completely different from what we can simulate at the moment, and we will need to reach extreme confidence that these abstract conclusions are in fact correct. There is exactly one way we know how to do this, and that’s to use formally verified proofs in mathematical logic.\n\n\n\n\n---\n\n\n**Luke**: Suppose John Doe had an intuition that despite the fact that he is not a cognitive system with a logical architecture, he feels like he could make lots of self-modifications while retaining his original goals, if he had enough computational power and plenty of time to reason about whether the next self-modification he’s planning is going to change his goals. If this intuition is justified, then this suggests there are other methods we might use, outside mathematical logic, to ensure a very small probability of goal corruption upon self-modification. What would you say to John?\n\n\n\n\n---\n\n\n**Benja**: I’d say that I think he underestimates the difficulty of the problem. 
Two things:\n\n\nFirst, my impression is that a lot of people have an intuition that they are already making self-modifications all the time. But the changes that humans can make with present-day technology don’t change the design of the hardware we’re running on — they pale against the difference between a human and a chimpanzee, and a self-improving AI would very likely end up making much more fundamental changes to its design than the relatively small number of tweaks evolution has applied to our brains in the last five million years.\n\n\nBut second, John might say that even taking this into account, he thinks that given enough time to learn how his brain works, and reasoning very carefully about every single step he’s taking, he should be able to go through a long chain of self-modifications that preserve his values. In this case, I think it’s fairly likely that he’s just wrong. However, I could imagine that a human could in fact succeed in doing this — but not without achieving the same sort of extremely high confidence in each single rewrite step that we want our AI to have, and I think that if a human could manage to achieve this sort of confidence, it would be by … proving mathematical theorems and having the proofs formally checked by a computer!\n\n\n\n\n---\n\n\n**Luke**: Yeah, when people say that humans self-modify all the time without changing their goals, I give two replies of my own. First, I point out that people’s goals and values do change pretty often. And second, I point out how little humans actually self-modify. For example I once [switched](http://lesswrong.com/lw/7dy/a_rationalists_tale/) from fundamentalist Christian to scientific naturalist, and this went along with a pretty massive shift in how I process evidence and argument. But throughout that worldview change, my brain was still (e.g.) using the temporal difference reinforcement learning algorithm in my dopaminergic reward system. 
As far as we know, there were no significant changes to my brain’s core algorithms during that period of transformation. Humans never actually self-modify very much, not like an AI would.\n\n\nMy next question has to do with AI capability. As AI scientists know, logic-based AI is generally far less capable than AI that uses machine learning methods. Is the idea that only a very small part of a future self-modifying AI would have a logical structure (so that it could prove the goodness of modifications to its core algorithms), and the rest of the AI would make use of other methods? Sort of like how small parts of safety-critical software (e.g. for [flight control](http://shemesh.larc.nasa.gov/fm/papers/FormalVerificationFlightCriticalSoftware.pdf)) are written in a structured way such that they are amenable to [formal verification](http://en.wikipedia.org/wiki/Formal_verification), but the rest of the system isn’t necessarily written in ways amenable to formal verification?\n\n\n\n\n---\n\n\n**Benja**: I think that your point that people’s values actually do change often is useful for intuition, and I also feel it’s important to point out that these changes are again actually pretty small compared to what could happen if you were to change your brain’s entire architecture. People might switch between being committed environmentalists and feeling that environmentalism is fundamentally misguided, for example, but they don’t tend to become committed triangularists who feel that it’s a moral imperative to make all every-day tools triangular in shape. 
Both murder and condemnation of murder are [human universals](http://en.wikipedia.org/wiki/Human_universal), traits that are common to all cultures world-wide; we are talking about changes to our cognitive architecture easily fundamental enough that they could cause the impulse to non-triangularism, and the condemnation of this non-triangularism, to become similarly universal.\n\n\nYes, I think that logical reasoning would be only one tool in the toolbox of a Friendly AI, and it would use different tools to reason about most things in its environment. Even when reasoning about its own behavior, I would expect the AI to only use logic to prove theorems about how it would behave when run on “ideal” hardware (or hardware that has certain bounds on error rates, etc.), and then use probabilistic reasoning to reason about what happens if it runs on real hardware in the physical world. (But with regard to your analogy to present-day safety-critical software, I want to point out that unlike with present-day safety-critical software, I expect the AI to prove theorems about all of its own component parts; I just don’t expect it to use logic to reason about, say, chairs. Formal verification is difficult and time-consuming, which is the reason why we currently only apply it to small parts of safety-critical systems, but I expect future AIs to be up to the task!)\n\n\n\n\n---\n\n\n**Luke**: Hmmm. That’s surprising. My understanding is that formal verification methods do not scale well at all, both due to computational intractability and due to the hours of human labor required to write a correct formal specification against which one could verify a complicated system. 
Why do you think a future AI would be “up to the task” of proving theorems “about all of its own component parts”?\n\n\n\n\n---\n\n\n**Benja**: Well, for one thing, I generally expect future AIs to be smarter than us, and easily able to do intellectual tasks that would take very many hours of human labor; and I don’t expect that they will get tired of the menial task of translating their mathematical “intuitions” into long chains of “boring” lemmas, like humans do.\n\n\nBut more specifically, we humans have an intuitive understanding of why we expect the systems we build to work, and my feeling is that probably a major reason for why it is difficult to translate that understanding into formal proofs is that there’s a mismatch between the way these intuitions are represented in our brain and the way the corresponding notions are represented in a formal proof system. In other words, it seems likely to me that when you build a cognitive architecture from scratch, you could build it to have mathematical “intuitions” about why a certain piece of computer code works that are fairly straightforward to translate into formally verifiable proofs. In fact, similarly to how I would expect an AI to directly manipulate representations of computer code rather than using images and verbal sounds like we humans do, I think it’s likely that an FAI will do much of its reasoning about why a piece of computer code works by directly manipulating representations of formal proofs.\n\n\nThat said, it also often seems to happen that we humans know by experience that a certain algorithm or mathematical trick tends to work on a lot of problems, but we don’t have a full explanation for why this is. 
I do expect future AIs to have to do reasoning of this type as well, and it does seem quite plausible that an AI might want to apply this type of reasoning to (say) machine learning algorithms that it uses for image processing, where a mistake can be recovered from — though likely not to the code it uses to check that future rewrites will still follow the same goal system! And I still expect the AI to prove theorems about its image processing algorithm; I just expect them to be things like “this algorithm will always complete in at most the following number of time steps” or “this algorithm will perform correctly under the following assumptions, which are empirically known to be true in a very large number of cases.”\n\n\n\n\n---\n\n\n**Luke**: Thanks, Benja!\n\n\nThe post [Benja Fallenstein on the Löbian Obstacle to Self-Modifying Systems](https://intelligence.org/2013/08/04/benja-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-04T19:53:10Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "77accb82dbb6f720d063093e487361c4", "title": "“Algorithmic Progress in Six Domains” Released", "url": "https://intelligence.org/2013/08/02/algorithmic-progress-in-six-domains-released/", "source": "miri", "source_type": "blog", "text": "[![algorithmic progress](http://intelligence.org/wp-content/uploads/2013/08/algorithmic_progress_cover.png)](https://intelligence.org/files/AlgorithmicProgress.pdf)Today we released a new technical report by visiting researcher [Katja Grace](http://meteuphoric.wordpress.com/) called “[**Algorithmic Progress in Six Domains**](https://intelligence.org/files/AlgorithmicProgress.pdf).” The report summarizes data on algorithmic progress – that is, better performance per fixed amount of computing hardware – in six domains:\n\n\n* SAT solvers,\n* Chess and Go programs,\n* Physics simulations,\n* Factoring,\n* Mixed integer programming, and\n* Some forms of 
machine learning.\n\n\nOur purpose for collecting these data was to shed light on the question of [intelligence explosion microeconomics](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/), though we suspect the report will be of broad interest within the software industry and computer science academia.\n\n\nOne finding from the report was previously discussed by Robin Hanson [here](http://www.overcomingbias.com/2013/06/why-does-hardware-grow-like-algorithms.html). (Robin saw an early draft on the intelligence explosion microeconomics [mailing list](https://docs.google.com/forms/d/1KElE2Zt_XQRqj8vWrc_rG89nrO4JtHWxIFldJ3IY_FQ/viewform).)\n\n\nThe preferred page for discussing the report in general is [here](http://lesswrong.com/r/discussion/lw/i8i/algorithmic_progress_in_six_domains/).\n\n\nSummary:\n\n\n\n> In recent *boolean satisfiability* (SAT) competitions, SAT solver performance has increased 5–15% per year, depending on the type of problem. However, these gains have been driven by widely varying improvements on particular problems. Retrospective surveys of SAT performance (on problems chosen after the fact) display significantly faster progress.\n> \n> \n> *Chess programs* have improved by around 50 Elo points per year over the last four decades. Estimates for the significance of hardware improvements are very noisy, but are consistent with hardware improvements being responsible for approximately half of progress. Progress has been smooth on the scale of years since the 1960s, except for the past five. *Go programs* have improved about one stone per year for the last three decades. 
Hardware doublings produce diminishing Elo gains, on a scale consistent with accounting for around half of progress.\n> \n> \n> Improvements in a variety of *physics simulations* (selected after the fact to exhibit performance increases due to software) appear to be roughly half due to hardware progress.\n> \n> \n> The *largest number factored* to date has grown by about 5.5 digits per year for the last two decades; computing power increased 10,000-fold over this period, and it is unclear how much of the increase is due to hardware progress.\n> \n> \n> Some *mixed integer programming* (MIP) algorithms, run on modern MIP instances with modern hardware, have roughly doubled in speed each year. MIP is an important optimization problem, but one which has been called to attention after the fact due to performance improvements. Other optimization problems have had more inconsistent (and harder to determine) improvements.\n> \n> \n> Various forms of *machine learning* have had steeply diminishing progress in percentage accuracy over recent decades. Some vision tasks have recently seen faster progress.\n> \n> \n\n\nThe post [“Algorithmic Progress in Six Domains” Released](https://intelligence.org/2013/08/02/algorithmic-progress-in-six-domains-released/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-03T02:31:09Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "b36d151ff3b333ea729ee89ebfbbf508", "title": "AI Risk and the Security Mindset", "url": "https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/", "source": "miri", "source_type": "blog", "text": "In 2008, security expert Bruce Schneier wrote about [the security mindset](http://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html):\n\n\n\n> Security requires a particular mindset. Security professionals… see the world differently. They can’t walk into a store without noticing how they might shoplift. 
They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice…\n> \n> \n> [SmartWater](http://www.smartwater.com/) is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I [wrote](http://www.schneier.com/blog/archives/2005/02/smart_water.html) when I first learned about the idea. “I think a better idea would be for me to paint it on *your* valuables, and then call the police.”\n> \n> \n> …This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.\n> \n> \n\n\n[![with folded hands](http://intelligence.org/wp-content/uploads/2013/07/with-folded-hands.jpg)](http://en.wikipedia.org/wiki/With_Folded_Hands)A recurring problem in much of [the literature](https://intelligence.org/files/ResponsesAGIRisk.pdf) on “machine ethics” or “AGI ethics” or “AGI safety” is that researchers and commenters often appear to be asking the question “How will this solution work?” rather than “How will this solution fail?”\n\n\nHere’s an example of the security mindset at work when thinking about AI risk. 
When presented with the suggestion that an AI would be safe if it “merely” (1) was very good at prediction and (2) gave humans text-only answers that it predicted would result in each stated goal being achieved, [Viliam Bur](http://lesswrong.com/user/Viliam_Bur/overview/) pointed out a possible failure mode (which was later [simplified](http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/)):\n\n\n\n> Example question: “How should I get rid of my disease most cheaply?” Example answer: “You won’t. You will die soon, unavoidably. This report is 99.999% reliable”. Predicted human reaction: Decides to kill self and get it over with. Success rate: 100%, the disease is gone. Costs of cure: zero. Mission completed.\n> \n> \n\n\nThis security mindset is one of the traits we look for in researchers we might hire or collaborate with. Such researchers show a tendency to ask “How will this fail?” and “Why might this formalism not quite capture what we really care about?” and “Can I find a way to break this result?”\n\n\nThat said, there’s no sense in being *infinitely* skeptical of results that may help with AI security, safety, reliability, or “friendliness.” As always, we must [think with probabilities](http://riskmanagementinsight.com/riskanalysis/?p=350).\n\n\nAlso see:\n\n\n* [You are dangerously bad at cryptography](http://happybearsoftware.com/you-are-dangerously-bad-at-cryptography.html)\n* [Illustrative Risks to the Public in the Use of Computer Systems and Related Technology](http://www.csl.sri.com/users/neumann/illustrative.html)\n* [Intelligence Explosion and Machine Ethics](https://intelligence.org/files/IE-ME.pdf)\n* [Complex Value Systems are Required to Realize Valuable Futures](https://intelligence.org/files/ComplexValues.pdf)\n* [Reply to Holden on Tool AI](http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/) (especially sec. 
2)\n\n\nThe post [AI Risk and the Security Mindset](https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-08-01T03:19:14Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "72c516fb37555e3eb83b2eb50024f92e", "title": "Index of Transcripts", "url": "https://intelligence.org/2013/07/25/index-of-transcripts/", "source": "miri", "source_type": "blog", "text": "Volunteers at [MIRI Volunteers](http://mirivolunteers.org/) and elsewhere have helpfully transcribed several audio/video recordings related to MIRI’s work. This post is a continuously updated index of those transcripts.\n\n\nAll transcripts of Singularity Summit talks are available [here](http://intelligence.org/singularitysummit/).\n\n\nOther available transcripts include:\n\n\n* *Philosophy Talk*: [Turbo-Charging the Mind](https://docs.google.com/document/d/13EYHVI9KBte28-TMn3PN4QyT_8aZtnQs8AGFBH20Qp0/pub \"Transcript\") (with Anna Salamon)\n* *BloggingHeads.tv*: [Eliezer Yudkowsky and Scott Aaronson on superintelligence and many-worlds](https://docs.google.com/document/d/1JIqzTGNvdLukR0Ce5T2eiv_vxtPO53N3KGCbxVvREtU/pub)\n* *BloggingHeads.tv*: [Eliezer Yudkowsky and Massimo Pigliucci on consciousness and uploading](https://docs.google.com/document/d/1S-7CWOLOtLRDmMiS7LtVxELssUi3OI1-UcrPAzGMuH4/pub)\n* *AGI 2011:* [Whole Brain Emulation, as a platform for creating safe AGI](https://docs.google.com/document/d/1-2A_cHiFC8fmeWHdQBeM7ynWaFdbklPm9u-UtKxZJ0A/pub) (with Anna Salamon)\n* *AGI 2011:* [Risk-averse preferences as an AGI safety technique](https://docs.google.com/document/d/1HF0aK2-nyulheAYpOyZ1Xat-PzZPpjt5BCmzChUCVKc/pub) (with Anna Salamon)\n* *Dawn or Doom*: [Why ain’t you rich?](https://docs.google.com/document/d/1dRUHMKRLNngor-e-NNJD9NtDqGI4QAzQqfty1EjjIpA/edit?usp=sharing) (with Nate Soares)\n\n\nThe post [Index of 
Transcripts](https://intelligence.org/2013/07/25/index-of-transcripts/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-25T15:50:07Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7849d0d08bd681568a126dc22b0310d6", "title": "MIRI’s December 2013 Workshop", "url": "https://intelligence.org/2013/07/24/miris-december-2013-workshop/", "source": "miri", "source_type": "blog", "text": "![013](http://intelligence.org/wp-content/uploads/2013/02/013.jpg)\nFrom December 14-20, MIRI will host another **Workshop on Logic, Probability, and Reflection**. This workshop will focus on the [Löbian obstacle](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/), [probabilistic logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/), and the intersection of logic and probability more generally.\n\n\nParticipants confirmed so far include:\n\n\n* [Nate Ackerman](http://www.math.harvard.edu/~nate/) (Harvard)\n* [John Baez](http://www.math.ucr.edu/home/baez/) (UC Riverside)\n* [Paul Christiano](http://rationalaltruist.com/) (UC Berkeley)\n* [Benja Fallenstein](http://lesswrong.com/user/Benja/submitted/) (U Bristol)\n* [Cameron Freer](http://math.mit.edu/~freer/) (MIT)\n* [Jeremy Hahn](http://lesswrong.com/user/JeremyHahn/overview/) (Harvard)\n* [Wojtek Moczydlowski](http://wojtek.moczydlowski.net/) (Google)\n* [Michele Reilly](http://www.linkedin.com/pub/michele-reilly/15/7a2/b44) (independent)\n* [Will Sawin](http://www.ctpost.com/news/article/Wisdom-beyond-his-years-1390299.php) (Princeton)\n* [Nate Soares](http://so8r.es/) (Google)\n* [Nisan Stiennon](http://lesswrong.com/user/Nisan/submitted/) (Stanford)\n* [Greg Wheeler](http://gregorywheeler.org/) (LMU Munich)\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (MIRI)\n\n\nIf you have a strong mathematics background and might like to attend this workshop, it’s not too late to 
[apply](http://intelligence.org/get-involved/#workshop)! And even if *this* workshop doesn’t fit your schedule, please **do apply**, so that we can notify you of other workshops (long before they are announced publicly).\n\n\nThe post [MIRI’s December 2013 Workshop](https://intelligence.org/2013/07/24/miris-december-2013-workshop/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-24T23:55:35Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "78cc13a1f87caec9f9e16052a2d80f8d", "title": "Nick Beckstead on the Importance of the Far Future", "url": "https://intelligence.org/2013/07/17/beckstead-interview/", "source": "miri", "source_type": "blog", "text": "![](http://intelligence.org/wp-content/uploads/2013/07/225877_10102425448513670_660885940_n.jpg) Nick Beckstead recently finished a Ph.D in philosophy at [Rutgers University](http://philosophy.rutgers.edu/), where he [focused](https://sites.google.com/site/nbeckstead/research) on practical and theoretical ethical issues involving future generations. He is particularly interested in the practical implications of taking full account of how actions taken today affect people who may live in the very distant future. His research focuses on how big picture questions in normative philosophy (especially population ethics and decision theory) and various big picture empirical questions (especially about existential risk, moral and economic progress, and the future of technology) feed into this issue.\n\n\nApart from his academic work, Nick has been closely involved with the [effective altruism](http://effective-altruism.com/) movement. 
He has been the director of research for [Giving What We Can](http://www.givingwhatwecan.org/), he has worked as a summer research analyst at [GiveWell](http://www.givewell.org/), and he is currently on the board of trustees for the [Centre for Effective Altruism](http://home.centreforeffectivealtruism.org/), and he recently became a research fellow at the Future of Humanity Institute.\n\n\n\n \n\n**Luke Muehlhauser:** Your Rutgers philosophy dissertation, “[On the Overwhelming Importance of Shaping the Far Future](https://sites.google.com/site/nbeckstead/research),” argues that “from a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.”\n\n\nIn an [earlier post](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/), I summed up your “rough future-shaping argument”:\n\n\n\n> Astronomical facts suggest that humanity (including “post-humanity”) could survive for billions or trillions of years ([Adams 2008](http://books.google.com/books?id=X5jdMyJKNL4C&pg=PT77&lpg=PT77#v=onepage&q&f=false)), and could thus produce enormous amounts of good. But the value produced by our future depends on our development trajectory. If humanity destroys itself with powerful technologies in the 21st century, then nearly all that future value is lost. And if we survive but develop along a trajectory dominated by conflict and poor decisions, then the future could be much less good than if our trajectory is dominated by altruism and wisdom. Moreover, some of our actions today can have “ripple effects” which determine the trajectory of human development, because many outcomes are [path-dependent](http://en.wikipedia.org/wiki/Path_dependence). Hence, actions which directly or indirectly precipitate particular trajectory changes (e.g. 
mitigating existential risks) can have vastly more value (in expectation) than actions with merely proximate benefits (e.g. saving the lives of 20 wild animals).\n> \n> \n\n\nOne of the normative assumptions built into the rough future-shaping argument is an assumption you call Additionality. Could you explain what Additionality is, and why some people reject it?\n\n\n\n\n---\n\n\n**Nick Beckstead:** I think it may be helpful to give a bit of background first. I like to tackle the question of “how important is the far future?” by dividing the future up into big chunks of time (which I call “periods”), assigning values to the big chunks of time, and then assigning a value to the future as a function of the value assigned to the big chunks of time. You could think of it as creating some kind of computer program which would scan the whole history of the world together with its future, carve it up into periods, scan each period and assign it a value, and then compute a value of the whole as a function of the value of its parts. It’s arbitrary how you carve up periods, but that’s okay because it’s an approximation technique. I think the approximation technique gives useful and reasonable answers if you make the periods quite large (spanning hundreds, thousands, or more years at once; you might want to carve it up into large batches of intelligent activity if you are considering future civilizations very different from our own).\n\n\nAdditionality basically says that when you’re assigning value to future periods, it’s always better to have a period that you’d assign as “good” than one you’d assign as “neutral.” I’m trying to partly draw on our intuitive ways of determining how well things have been going in recent history, and extending that to future periods, which we may be less capable of valuing using other methods. 
I want to say that if you had some future period which you’d regard as “good” judged purely on the basis of what happens in that period itself, that should contribute to the value you assign to the whole future.\n\n\nYou might disagree with this if you have what some philosophers call a strict “[Person-Affecting View](http://plato.stanford.edu/entries/nonidentity-problem/)“. According to strict Person-Affecting Views, the fact that a person’s life would go well if he lived could not, in itself, imply that it would be in some way good to create him. Why not? Since the person was never created, there is no person who could have benefited from being created. On this type of view, it would only be important to ensure that there are future generations if it would somehow benefit people alive today, or people who have lived in the past (perhaps by adding meaning to their lives). The idea is that ensuring that there are future generations is analogous to “creating” many people, and, on this view, “creating” people–even people who would have good lives–can’t be important except insofar as it is important for people other than those you’re creating.\n\n\nYou might also disagree with this view if you think that “shape” considerations are relevant. One example of this is an average type view. You might say that adding on extra periods that are good, but of below average quality, is a bad thing. Or you might say that adding on extra periods that are not as good as the preceding ones can be bad because it could mean that things are getting worse over time.\n\n\nI feel there are a lot of qualifications and details that need to be fleshed out here, but hopefully that should give some kind of reasonable introduction to the idea.\n\n\n\n\n---\n\n\n**Luke:** When I talk to someone about how much I value the far future, it’s pretty common for them to reply with a Person-Affecting View, though they usually don’t know it by that name. 
My standard reply is, “I used to have that view myself, but then I encountered some ideas that changed my mind, and made me think that, actually, I probably *do* care about future people roughly as much as I care about current people.” Then I tell them about those ideas that changed my mind.\n\n\nI usually start with the [block universe](http://philsci-archive.pitt.edu/2408/1/Petkov-BlockUniverse.pdf) idea, which seems to be the default view among physicists (see e.g. [Brian Greene](http://www.amazon.com/dp/0375727205/ref=nosim?tag=lukeprogcom-20) & [Sean Carroll](http://www.amazon.com/dp/0525951334/ref=nosim?tag=lukeprogcom-20), though I also like the explanation by AI researcher [Gary Drescher](http://commonsenseatheism.com/?p=11068)). According to the block universe view, there is no privileged “present” time, and hence future people exist in just the same way that present people do.\n\n\nBut in the two chapters you spend arguing against “strict” and “moderate” Person-Affecting Views, you don’t refer to the block universe at all. Do you think the block universe fails to provide a good argument against Person-Affecting Views, or was it simply one line of argument you didn’t take the time to elaborate in your thesis?\n\n\n\n\n---\n\n\n**Nick:** I agree with your view about the block universe. I don’t think it is a strong argument against Person-Affecting Views in general, though I think it is a good argument against certain types of Person-Affecting Views. I think Person-Affecting Views are messy in many ways, and there are other lines of argument that I could have pursued but did not.\n\n\nAnother way to put the basic idea behind Person-Affecting Views is to say that, on these views, you divide people who may exist depending on what you choose into two classes: the “extra” people and the other people. 
And then you say that if you cause some “extra” people to exist with good lives, either that isn’t good or is less good than helping people who aren’t “extra.” Following [Gustaf Arrhenius](http://people.su.se/~guarr/), in chapter 4 of my dissertation, I consider four different interpretations of extra: the people that don’t presently exist (Presentism), the people that will never actually exist (Actualism), the people whose existence is dependent on which alternative (of perhaps many) we choose (Necessitarianism), and the people that exist in one alternative being compared, but not the other (Comparativism).\n\n\nAs far as I can tell, only Presentism is undermined by the block universe critique, since only Presentism relies on a concept of “present.” This is why I said that the block universe critique only undermines certain versions of Person-Affecting Views.\n\n\nThe block universe argument seems like a knock-down argument against a very precise version of Presentism (which philosophers defending the view may hold), but I don’t think that it is a knock-down argument against a [steel-manned](http://lesswrong.com/lw/85h/better_disagreement/), “rough and ready” version of the view. Someone might say, “Well, yes, I accept the block universe theory, so I acknowledge there is no physically precise thing for me to mean by “present.” But we can, in ordinary English, say sentences like “The world population is now approximately 7 billion.” And you understand me to be saying something intelligible and correct in some approximate sense. In a similar way, when I recommend that we only consider benefits which could come to people now living, I intend you to understand me similarly. 
I also hold that, right now, it is not practically useful to consider potential benefits to people who may exist in distant parts of the universe, so it doesn’t particularly matter which reference frame you use to approximately interpret my use of “present.” Though my view may not correspond to a clean fundamental distinction, I believe that this recommendation, for our present circumstances, would survive reflection more successfully than other views on this question which have been proposed.”\n\n\nOne can respond to this line of thought by arguing that even rough and ready versions of Presentism have consequences that are hard to accept, and aren’t motivated by appealing theoretical considerations. This is the approach I take in chapter 4 of my dissertation. I believe this line of argument is more robust against a wider variety of alterations of Person-Affecting Views.\n\n\n\n\n---\n\n\n**Luke:** Yeah, I guess I tend to use the block universe not as an argument but as an [intuition pump](http://en.wikipedia.org/wiki/Intuition_pump) for the view that “current” people aren’t so privileged (in a moral sense) as one might naively think.\n\n\nAnyway: in chapter 4 you survey a variety of thought experiments that have varying implications for Person-Affecting Views. At the end of the chapter, you provide this handy summary table:\n\n\n![Summary Table](http://intelligence.org/wp-content/uploads/2013/07/table-4.2.png)\nCould you tell us what’s going on in this table, and maybe briefly hint at what a couple of the individual thought experiments are about?\n\n\n\n\n---\n\n\n**Nick:** In chapter 2 of my dissertation, I write about methodology for moral philosophy and argue that intuitive judgments about morality are in many ways less reliable than one might have hoped, and are often inconsistent. One of the consequences of this is that finding just a few counterexamples is often not enough to reject a moral theory. 
I believe it is important to systematically explore a wide variety of test cases and then proportion one’s credence to the theories that fare best over the whole set of cases.\n\n\nThe rows list different types of theories, and the columns different types of test cases for the theories. I have then marked the cases where the theories have implications that are hard to accept. Regarding the terminology in the columns, I call a Person-Affecting View “strict” if it gives no weight to “extra” people, and “moderate” if it gives less weight to “extra” people than to other people. There is then a question of how much weight you give, and this table focuses on the cases where little weight is given to “extra” people.\n\n\nI call a Person-Affecting View “asymmetric” if people who have lives that are not worth living are never counted as “extra.” People with Person-Affecting Views often want their views to be asymmetric because they want to be able to say that it would be bad to cause a child to exist whose life would be filled with suffering. (Derek Parfit has a famous case called “The Wretched Child” in [Reasons and Persons](http://www.amazon.com/Reasons-Persons-Oxford-Paperbacks-Parfit/dp/019824908X/ref=sr_1_1?ie=UTF8&qid=1373284563&sr=8-1&keywords=reasons+and+persons), which is where I got this name. *Reasons and Persons* is probably my favorite book of moral philosophy.)\n\n\nA major problem with strict Person-Affecting Views is that they have very implausible consequences in cases of extinction. It is one thing to say that the future of humanity isn’t *overwhelmingly* important, but quite another to say that it *basically doesn’t matter* if we go extinct, except insofar as it lowers present people’s quality of life.\n\n\nModerate Person-Affecting Views have implausible implications in certain fairly mundane cases where we are choosing between improving the lives of “extra” people or people who aren’t “extra”. 
A simple example is a case I call “Disease Now or Disease Later,” where we must choose between a public health program that would prevent some disease from hurting toddlers alive today, or a public health program that would prevent that disease from hurting a greater number of toddlers (who aren’t yet alive) a few years from now. It is hard to believe that, because the other toddlers don’t exist yet and which toddlers exist in the future might depend on which program we choose, it would be better to choose the first program. But that is what moderate Person-Affecting Views imply, since they give less weight to the interests of the toddlers who are counted as “extra”.\n\n\nI call views which don’t make any distinction between regular people and “extra” people “Unrestricted Views.” Some philosophers believe that these views imply that individuals are obligated to have children for the greater good, whereas Person-Affecting Views do not. However, there is no clear implication from “it would be good for there to be additional happy people” to “people are typically obligated to have children.” Why not? At least for people who don’t already want to have additional children, it would be very demanding to ask them to have additional children. Moreover, even on a view that gives a lot of weight to creating additional people, having additional children doesn’t seem like a particularly effective way of doing good in the world in comparison with things like donating money and time to charity. So it would be strange if people were obligated to make potentially significant sacrifices in order to do something that actually wasn’t all that effective as a method of doing good.\n\n\nBasically, the rest of this table is the result of systematically checking these different views against a variety of test cases like these to see which have the most plausible implications overall. 
Of all these views, only a strict Person-Affecting View can plausibly be used to rebut the case for the overwhelming importance of shaping the far future. And this type of view is much less plausible than the alternatives.\n\n\n\n\n---\n\n\n**Luke:** In chapter 5 of your dissertation you consider the question “Does future flourishing have diminishing marginal value?” Your summary table at the end of *that* chapter looks like this:\n\n\n![Summary Table Chapter 5](http://intelligence.org/wp-content/uploads/2013/07/table-5.1.png)\n\n\nCould you explain what’s going on in this one, too?\n\n\n\n\n---\n\n\n**Nick:** In my dissertation, I defend using the following principle to evaluate the importance of the far future:\n\n\n\n> Period Independence: By and large, how well history goes as a whole is a function of how well things go during each period of history; when things go better during a period, that makes the history as a whole go better; when things go worse during a period, that makes history as a whole go worse; and the extent to which it makes history as a whole go better or worse is independent of what happens in other such periods.\n> \n> \n\n\nTogether with other principles I defend, this leads to the conclusion that you can generally approximate the value of the history of the world by assigning a value to each period, and “adding up” the value across periods.\n\n\nAnother way to get a grip on what Period Independence is a partial answer to is to consider the following hypothetical. Imagine that humans survive the next 1000 years, and their lives go well. How good would it be if they survived for another thousand years, with the same or higher quality of life? What if they survived another thousand years beyond that? Consider three kinds of answer:\n\n\n1. The Period Independence answer: It would be equally as important in each such case.\n2. The Capped Model answer: After a while, it gets less and less important. 
Moreover, there is an upper limit to how much value you can get in this way.\n3. The Diminishing Value Model (DVM) answer: After a while, it gets less and less important. However, there is no upper limit to how much value you can get in this way.\n\n\nThis table summarizes the result of running different test cases against different versions of Period Independence, the Capped Model, and the Diminishing Value Model.\n\n\nProbably the most important test case supporting Period Independence is the one I call “Our Surprising History.” It goes like this:\n\n\n\n> Our Surprising History: World leaders hire experts to do a cost-benefit analysis and determine whether it is worth it to fund an Asteroid Deflection System. Thinking mostly of the interests of future generations, the leaders decide that it would be well worth it. After the analysis has been done, some scientists discover that life was planted on Earth by other people who now live in an inaccessible region of spacetime. In the past, there were a lot of them, and they had really great lives. Upon learning this, world leaders decide that since there has already been a lot of value in the universe, it is much less important that they build the Asteroid Deflection System than they previously thought.\n> \n> \n\n\nIt seems unreasonable to claim that how good it would be to build the Asteroid Deflection System depends on this information about our distant past. But this is what Capped Models and Diminishing Value Models imply about this case.\n\n\nMany of the cases in this table involve considering some simple test cases involving colonizing other planets. For example consider:\n\n\n\n> The Last Colony: Human civilization has lasted for 1 billion years, but the increasing heat of the sun will soon destroy all life on Earth. Humans (or our non-human descendants) get the chance to colonize another planet, where civilization can continue. 
They know that if they succeed in colonizing this planet, then: (i) the new planet will sustain a population equal to the size of the population of the Earth, and this planet, like Earth, will sustain life for 1 billion years, (ii) these people’s lives will probably go about as well as the lives of the Earth people, (iii) there will not be a chance for the people on the new planet to colonize another planet.\n> \n> \n\n\nIntuitively, it would be extremely important to colonize the extra planet in the second case, much more important than colonizing in the first case. But on a Capped Model, if you set the “upper limits” low enough, it might not be very important at all.\n\n\nDiminishing Value Models avoid this implication, and can say that it would be extremely important to colonize another planet. They might also claim that their view has more plausible implications than Period Independence when comparing The Last Colony with a case like this:\n\n\n\n> The Very Last Colony: Convinced of the importance of preserving future generations, we take great precautions to protect the far future. Our descendants succeed in colonizing a large portion of the galaxy. It becomes relatively clear that our descendants will last for a very long time, about 100 trillion years, until the last stars burn out. At that point, there will be nothing of value left in the accessible part of the Universe. It comes to our attention that there is a chance to colonize one final place, just as in The Last Colony, before civilization comes to an end. For this billion years, these will be the only people in the accessible part of the Universe. During this period, things will go exactly as well as they went in The Last Colony.\n> \n> \n\n\nIn which case is colonization more important, The Last Colony or The Very Last Colony? According to Period Independence, it is equally as important in each case. According to Diminishing Value Models, it is less important in The Very Last Colony. 
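To make the contrast concrete, the three answers can be rendered as toy aggregation rules over per-period values. This is an illustrative sketch, not notation from the dissertation; the function names and the specific discount form are assumptions chosen only to exhibit the structure of each view:

```python
# Toy formalization (illustrative only) of three ways to aggregate the
# value of history from a list of per-period values.

def period_independence(values):
    """Each period contributes its full value; history is the plain sum."""
    return sum(values)

def capped_model(values, cap):
    """Value accumulates as usual, but history can never exceed an upper limit."""
    return min(sum(values), cap)

def diminishing_value_model(values, rate=0.5):
    """Later periods count progressively less, but with no upper limit:
    for rate <= 1 the weights (t + 1) ** (-rate) shrink toward zero while
    their running sum still grows without bound."""
    return sum(v * (t + 1) ** (-rate) for t, v in enumerate(values))
```

On this sketch, Period Independence assigns a final colonized period the same marginal value regardless of how much value history already contains, a low enough cap can make that period nearly worthless, and the diminishing model gives it less weight the later in history it occurs.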
I find the DVM stance on this intuitively attractive, though I believe it is a product of a bias I call the *proportional reasoning fallacy*.\n\n\nIn chapter 2 of my dissertation, I argue that we use misguided proportional reasoning in some cases where many lives are at stake. Fetherstonhaugh et al. (1997) found that participants significantly preferred saving a fixed number of lives in a refugee camp when the proportion of lives saved was greater. Describing the participants’ hypothetical choice, they write:\n\n\n\n> There were two Rwandan refugee programs, each proposing to provide enough clean water to save the lives of 4,500 refugees suffering from cholera in neighboring Zaire. The Rwandan programs differed only in the size of the refugee camps where the water would be distributed; one program proposed to offer water to a camp of 250,000 refugees and the other proposed to offer it to a camp of 11,000.\n> \n> \n\n\nParticipants significantly preferred the second program, in which the same 4,500 lives made up a far larger proportion of the camp (roughly 41% of 11,000, versus under 2% of 250,000). In another study, Slovic (2007) found that people were willing to pay significantly more for a program of the second kind.\n\n\nAll the views I consider have some implausible implications in certain cases, but it seems easier to explain away the test cases that look bad for Period Independence, and there are somewhat fewer of them, so I conclude that Period Independence is the most plausible principle to use for evaluating far future prospects. Of all these views, only a Capped Model or a DVM with a very sharp diminishing rate in the limit can plausibly be used to rebut the case for the overwhelming importance of shaping the far future. And these views, I believe, are less plausible than the alternatives.\n\n\n\n\n---\n\n\n**Luke:** What’s a point you wish you could have included in your dissertation, that was left out for space or other reasons?\n\n\n\n\n---\n\n\n**Nick:** I’ll list a few. There are a lot of things that I think could be better, but you have to put your work out there at some point. 
Just as [real artists ship](http://en.wikiquote.org/wiki/Steve_Jobs), real thinkers share their ideas.\n\n\nFirst, a core empirical claim in my thesis is that humans could have an extremely large impact on the distant future. Really, it’s sufficient for my argument that they would do this by existing for an extremely long time, or that there could be a very large number of successors (such as [whole brain emulations](http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf) or other AIs). I didn’t defend this claim as thoroughly as I could have, and I didn’t go into great detail because I feared that philosophers would complain that it “isn’t philosophy,” I wanted to finish my dissertation, and I thought that going into it would require a lot of background information due to [inferential distance](http://wiki.lesswrong.com/wiki/Inferential_distance).\n\n\nThe second thing I’d like to add is related to chapter 2 of my dissertation. An abstract of that chapter goes like this:\n\n\n\n> I argue that our moral judgments are less reliable than many would hope, and this has specific implications for methodology in normative ethics. Three sources of evidence indicate that our intuitive ethical judgments are less reliable than we might have hoped: a historical record of accepting morally absurd social practices; a scientific record showing that our intuitive judgments are systematically governed by a host of heuristics, biases, and irrelevant factors; and a philosophical record showing deep, probably unresolvable, inconsistencies in common moral convictions. 
I argue that this has the following implications for moral theorizing: we should trust intuitions less; we should be especially suspicious of intuitive judgments that fit a bias pattern, even when we are intuitively confident that these judgments are not a simple product of the bias; we should be especially suspicious of intuitions that are part of inconsistent sets of deeply held convictions; and we should evaluate views holistically, thinking of entire classes of judgments that they get right or wrong in broad contexts, rather than dismissing positions on the basis of a small number of intuitive counterexamples. In addition, I argue that many of the specific biases that I discuss would lead us to predict that people would, in general, undervalue most of the available ways of shaping the far future, including speeding up development, existential risk reduction, and creating other positive trajectory changes.\n> \n> \n\n\nI’m concerned that in chapter 2, there is an unbalanced focus on ways in which intuitions fail, and not ways in which trying to correct intuition through theory development could fail. An uncharitable analogy would be that it is as if I wrote a paper about all the ways in which [markets can fail](https://en.wikipedia.org/wiki/Market_failure) and suggested we rely more on governments without talking about all the ways in which [governments can fail](http://en.wikipedia.org/wiki/Government_failure). And just as someone could write an additional chapter (or series of books) on how governments fail, someone could probably also write an important chapter on how people trying to correct intuitions with moral theory fail. 
So while I feel that the considerations I identify do speak in favor of the recommendations I make, I think there are also important considerations that speak against those recommendations which I did not mention, and probably should have mentioned.\n\n\nSome of the considerations on the other side, some of them [weak](http://lesswrong.com/lw/hmb/many_weak_arguments_vs_one_relatively_strong/), include:\n\n\n1. Given Jonathan Haidt’s theory of [social intuitionism](http://en.wikipedia.org/wiki/Social_intuitionism)–which seems very plausible to me–a lot of our theoretical reasoning about moral issues is epiphenomenal lawyering, and that makes theoretical reasoning about morality seem less reliable.\n2. Lots of moral philosophers have endorsed stuff that seems wrong after due consideration, and their views rarely seem superior to common sense when there are conflicts, despite the fact that many of them think they are different from other philosophers in these respects. (A possibly important exception to this is the views of early utilitarians, who opposed slavery, opposed bad treatment of animals, opposed bad treatment of women, opposed bad treatment of gay people, and favored various kinds of liberty quite early. One only has to compare the applied ethics of Kant and Bentham to get a sense of what I am talking about.)\n3. I have a rough sense that only a very limited amount of moral progress is attributable to people trying to use explicit reasoning to correct for intuitive moral errors, in contrast with people who just learned a lot of ordinary facts about problematic cases and shared them widely.\n4. As I discuss somewhat toward the end of the dissertation, when you try to correct for intuitive errors, it’s sort of like trying to patch a piece of software that you don’t understand. And it seems quite possible that the patching will introduce unanticipated errors in places where you didn’t know to look.\n5. 
People seem to have reasonably functional ways of handling internal inconsistency, so that inconsistent intuitions are probably less damaging than they can appear at first.\n6. A lot of our moral intuition comes from cultural common sense. When we try to correct cultural common sense, we can see what we’re doing as analogous to aiming for a type of innovation. Most attempts at innovation seem to fail. This type of analogy supports being cautious about correcting intuition with theory, and trying to present the theory in a way that is appealing to cultural common sense.\n\n\nI’m still working through these issues, and hope to include them someday in a paper that is an improved version of chapter 2.\n\n\nA third issue is that there was less discussion of how our altruistically-motivated actions should change once we accept the view that shaping the far future is overwhelmingly important. This is an enormously complex and fascinating issue that requires drawing together ideas from both highly theoretical and highly practical fields. I was thinking about this issue at the time I was writing the dissertation, but not during the whole time. And it doesn’t show up in the dissertation as much as I wish it did. This is again, in part, because I think too much discussion of the issue would result in people complaining that my work “isn’t philosophy.” (I expect this is a common challenge for people in academia with interdisciplinary interests.) I am thinking about this issue more now, and I’m glad that [others](http://blog.givewell.org/2013/05/15/flow-through-effects/) [have](http://rationalaltruist.com/2013/05/07/the-value-of-prosperity/) started to write stuff on this topic which I think is relevant.\n\n\nA final issue, which I wish I had done more to flag, is that it is complicated how to weigh up one’s moral uncertainty about the importance of shaping the far future. 
It’s possible that even if one mostly believes that shaping the far future is overwhelmingly important, we should not devote too much of our effort to a single type of concern. I believe this may be an implication of Bostrom and Ord’s [parliamentary model](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html) of moral uncertainty, and may be a feature of other plausible ways of thinking about moral uncertainty that we could design. And this may make the implications of my thesis smaller than they would otherwise be, though I’m very unclear about how all this plays out. This is something I have not yet thought about very carefully at all.\n\n\n\n\n---\n\n\n**Luke:** Last question. In [Friendly AI Research as Effective Altruism](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) I paraphrased a point you made in your dissertation:\n\n\n\n> It could turn out to be that working toward proximate benefits or development acceleration does more good than “direct” efforts for trajectory change, if working toward proximate benefits or development acceleration turns out to have major ripple effects which produce important trajectory change. For example, perhaps an “ordinary altruistic effort” like solving India’s iodine deficiency problem would cause there to be thousands of “extra” world-class elite thinkers two generations from now, which could increase humanity’s chances of intelligently navigating the crucial 21st century and spreading to the stars. (I don’t think this is likely; I suggest it merely for illustration.)\n> \n> \n\n\nSo even if we accept your argument for the overwhelming importance of the far future, it seems like we need to understand many empirical matters — such as ripple effects — to know whether particular direct or indirect efforts are the most efficient ways to positively affect our development trajectory. 
Do you have any thoughts for how we can make progress toward answering the empirical questions related to shaping the far future?\n\n\n\n\n---\n\n\n**Nick:** There is an enormous amount of work that could be done here, and it is hard to say what will be most valuable. But here are a few ideas that seem promising to me right now.\n\n\nOne type of work that I think is valuable for this purpose is the type of work that [GiveWell Labs](http://www.givewell.org/about/labs) is doing: figuring out what the landscape of funding opportunities is across different causes, analyzing how tractable and important various problems are, and so forth. Here I am including studying both highly targeted causes (such as directly attacking different global catastrophic risks) and very broad causes (such as improving scientific research). I would like it if more of this work were done on the “room for more talent” side in addition to the “room for more funding” and “room for more philanthropy” work that GiveWell does. I hope [80,000 Hours](http://www.80000hours.org/) takes up more of this type of research in the future as well. The sort of work that [MIRI](http://intelligence.org/) and [FHI](http://www.fhi.ox.ac.uk/) do on examining specific future challenges that humanity could face and what could be done to overcome them seems like it can play an important role here as well.\n\n\nAnother type of work that seems promising to me is to study a wide variety of unprecedented challenges that civilization has faced in the past in order to learn more about how well civilization has coped with those challenges, what factors determined how well civilization coped, what types of efforts helped civilization cope better, and what kinds of efforts could plausibly have been helpful. Studying the types of challenges that Jonah Sinick is asking about [here](http://lesswrong.com/lw/hxw/can_we_know_what_to_do_about_ai_an_introduction/) seems like a step in the right direction. 
The type of work that GiveWell is supporting on the history of [philanthropy](http://www.givewell.org/history-of-philanthropy) would be relevant as well. This type of work seems like it could be reasonably grounded and could help improve our impressions about what types of broad approaches are most promising and where on the broad/targeted spectrum we should be.\n\n\nIt seems to me that a number of factors are often relevant for determining how well humanity handles a risk/challenge. At a very general level, these might be some factors like: how good a position people are in to cooperate with each other, how intelligent individuals are, how good the “tools” (like personal computers, software, conceptual frameworks) people have are, how good access to information is, and how good people’s motives are. Sometimes, what really matters is how key actors fare in specific ways during a challenge (like the people running the Manhattan project and the heads of state), but it is often hard to know which people these will be and which specific versions are relevant. These factors also interact with each other in interesting ways, and are interestingly related to general levels of economic and technological progress. There’s some combination of very broad economic theory/history/economic history that is relevant for thinking about how these things are related to each other, and I feel that having that type of thing down could be helpful. Someone with the right kind of background in economics could try to explain these things, or someone who has the right sense of what is important with these factors could try to summarize what is currently known about these issues. An example of a book in this category, which I greatly enjoyed, is [*The Moral Consequences of Economic Growth*](http://www.amazon.com/The-Moral-Consequences-Economic-Growth/dp/1400095719) by Benjamin Friedman. 
As mentioned previously, I consider some of the work done by GiveWell on “[flow-through](http://blog.givewell.org/2013/05/15/flow-through-effects/)” effects and some of the work done by Paul Christiano on the value of [prosperity](http://rationalaltruist.com/2013/05/07/the-value-of-prosperity/) and technological progress to be relevant to this. I believe more work along these lines could be illuminating.\n\n\nI recently gave a talk on this subject at a [CEA](http://home.centreforeffectivealtruism.org/) event. In this talk, I lay out some very rough, very preliminary, very big picture considerations on this issue.\n\n\n[CEA](http://intelligence.org/wp-content/uploads/2013/07/Beckstead-Evaluating-Options-Using-Far-Future-Standards.pdf) slides.\n\n\n\n\n---\n\n\n**Luke:** Thanks, Nick!\n\n\nThe post [Nick Beckstead on the Importance of the Far Future](https://intelligence.org/2013/07/17/beckstead-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-18T06:13:22Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "df151d4d3acc7cd6374e3c132c111a46", "title": "Roman Yampolskiy on AI Safety Engineering", "url": "https://intelligence.org/2013/07/15/roman-interview/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/wp-content/uploads/2013/07/DrYampolskiy.jpg) Roman V. Yampolskiy holds a PhD degree from the [Department of Computer Science and Engineering](http://www.cse.buffalo.edu/) at the [University at Buffalo](http://www.buffalo.edu). There he was a recipient of a four year [NSF](http://www.nsf.gov) [IGERT](http://www.igert.org) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a BS/MS (High Honors) combined degree in [Computer Science](http://www.cs.rit.edu/) from [Rochester Institute of Technology](http://www.rit.edu), NY, USA.\n\n\nAfter completing his PhD, Dr. 
Yampolskiy held the position of Affiliate Academic at the [Center for Advanced Spatial Analysis](http://www.casa.ucl.ac.uk/), [University College London](http://www.ucl.ac.uk/), [University of London](http://www.lon.ac.uk/). In 2008 Dr. Yampolskiy accepted an [assistant professor position](http://speed.louisville.edu/cecs/people/faculty/yampolskiy/index.php) at the [Speed School of Engineering](http://speed.louisville.edu), [University of Louisville](http://www.louisville.edu), KY. He had previously conducted research at the Laboratory for Applied Computing (currently known as the [Center for Advancing the Study of Infrastructure](http://www.lac.rit.edu/)) at the [Rochester Institute of Technology](http://www.rit.edu) and at the [Center for Unified Biometrics and Sensors](http://cubs.buffalo.edu) at the [University at Buffalo](http://www.buffalo.edu). Dr. Yampolskiy is also an alumnus of [Singularity University](http://singularityu.org/) ([GSP2012](http://singularityu.org/gsp/)) and a past visiting fellow of [MIRI](http://intelligence.org/).\n\n\nDr. Yampolskiy’s main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence and games. Dr. Yampolskiy is the author of over 100 [publications](http://cecs.louisville.edu/ry/publications.htm) including multiple journal articles and books. His research has been cited by numerous scientists and profiled in popular magazines both American and foreign ([New Scientist](http://technology.newscientist.com/channel/tech/mg19726455.700-gambling-dna-fights-online-fraud.html), [Poker Magazine](http://www.poker-magazine.nl/), [Science World Magazine](http://www.scienceworld.cz/)), dozens of websites ([BBC](http://www.bbc.com/news/technology-14277728), [MSNBC](http://www.msnbc.msn.com/id/46590591/ns/technology_and_science-innovation/t/control-dangerous-ai-it-controls-us-one-expert-says/), [Yahoo! 
News](http://in.news.yahoo.com/ani/20080306/r_t_ani_tc/ttc-iit-alumnus-develops-software-to-hel-a34fb50.html)) and on radio ([German National Radio](http://ondemand-mp3.dradio.de/file/dradio/2008/04/09/dlf_20080409_1649_bffabb0e.mp3), [Alex Jones Show](http://www.youtube.com/watch?v=2YygKQh74Rg)). Reports about his work have attracted international attention and have been translated into many languages including [Czech](http://www.scienceworld.cz/sw.nsf/ID/55D9259EAA13C2AFC12573BC004399F3), [Danish](http://www.pokernyhederne.com/poker-nyheder/7/2607/pokerdna-steffan-raffay-pokerstars-chipdumping-software-roman-yampolskiy-venu-govindaraju/pokerdna-skal-forhindre-onlinesnyd.html), [Dutch](http://www.poker-magazine.nl), [French](http://www.casinoportalen.fr/nouvelles/poker/nouveau-logiciel-de-poker-en-ligne-vous-surveille-1117.html), [German](http://www.pokergame.pl/software-soll-poker-bots-enttarnen/), [Hungarian](http://www.pokerstrategy.com/hu/news/world-of-poker/Szoftverek-figyelhetik-a-jatekunkat_04430), [Italian](http://www.onlinepokeritalia.com/software-per-combattere-i-bots-nel-poker-online-296.html), [Polish](http://www.casinoportalen.pl/news/externalnews.asp?id=765), [Romanian](http://www.pokergate.ro/index.php?option=com_content&task=view&id=71), and [Spanish](http://www.apuestacasino.com/news/el-adn_de_apuestas-ayuda_a_luchar_contra_fraude_en_l-nea.html)\n\n\n\n \n\n**Luke Muehlhauser:** In [Yampolskiy (2013)](http://cecs.louisville.edu/ry/AIsafety.pdf) you argue that [machine ethics](http://en.wikipedia.org/wiki/Machine_ethics) is the wrong approach for AI safety, and we should use an “AI safety engineering” approach instead. 
Specifically, you write:\n\n\n\n> We don’t need machines which are Full Ethical Agents debating about what is right and wrong, we need our machines to be inherently safe and law abiding.\n> \n> \n\n\nAs you see it, what is the difference between “machine ethics” and “AI safety engineering,” and why is the latter a superior approach?\n\n\n\n\n---\n\n\n**Roman Yampolskiy:** The main difference between the two approaches is in how the AI system is designed. In the case of machine ethics the goal is to construct an artificial ethicist capable of making ethical and moral judgments about humanity. I am particularly concerned if such decisions include “live or die” decisions, but it is a natural domain of Full Ethical Agents and so many have stated that machines should be given such decision power. In fact some have argued that machines will be superior to humans in that domain just like they are (or will be) in most other domains.\n\n\nI think it is a serious mistake to give machines such power over humans. First, once we relinquish moral oversight we will not be able to undo that decision and get the power back. Second, we have no way to reward or punish machines for their incorrect decisions — essentially we will end up with an immortal dictator with perfect immunity against any prosecution. Sounds like a very dangerous scenario to me.\n\n\nOn the other hand, AI safety engineering treats AI system design like product design, where your only concern is product liability. Does the system strictly follow formal specifications? The important thing to emphasize is that the product is not a Full Moral Agent by design and so never gets to pass moral judgment on its human owners.\n\n\nA real life example of this difference can be seen in military drones. 
A fully autonomous drone deciding on its own at whom to fire has to make an ethical decision about which humans are enemies worthy of killing, while a drone with a man-in-the-loop design may autonomously locate potential targets but needs a human to make the decision to fire.\n\n\nObviously the situation is not as clear cut as my example tries to show, but it gives you an idea of what I have in mind. To summarize, the AI systems we design should remain our tools, not equal or superior partners in “live or die” decision making.\n\n\n\n\n---\n\n\n**Luke:** I tend to think of machine ethics and AI safety engineering as complementary approaches. AI safety engineering may be sufficient for relatively limited AIs such as those we have today, but when we build fully autonomous machines with general intelligence, we’ll need to make sure they want the same things we want, as the constraints that come with “safety engineering” will be insufficient at that point. Are you saying that safety engineering might also be sufficient for fully autonomous machines, or are you saying we might be able to convince the world to never build fully autonomous machines (so that we don’t need machine ethics), or are you saying something else?\n\n\n\n\n---\n\n\n**Roman:** I think fully autonomous machines can never be safe and so should not be constructed. I am not naïve; I don’t think I will succeed in convincing the world not to build fully autonomous machines, but I still think that point of view needs to be verbalized.\n\n\nYou are right to point out that AI safety engineering can only work on AIs which are not fully autonomous, but since I think that fully autonomous machines can never be safe, AI safety engineering is the best we can do.\n\n\nI guess I should briefly explain why I think that fully autonomous machines can’t ever be assumed to be safe. 
The difficulty of the problem is not that one particular step on the road to friendly AI is hard and once we solve it we are done; rather, all steps on that path are simply impossible. First, human values are inconsistent and dynamic and so can never be understood/programmed into a machine. Suggestions for overcoming this obstacle require changing humanity into something it is not, and so by definition destroying it. Second, even if we did have a consistent and static set of values to implement, we would have no way of knowing if a self-modifying, self-improving, continuously learning intelligence greater than ours will continue to enforce that set of values. Some can argue that friendly AI research is exactly what will teach us how to do that, but I think fundamental limits on verifiability will prevent any such proof. At best we will arrive at a probabilistic proof that a system is consistent with some set of fixed constraints, but that is far from “safe” for an unrestricted set of inputs.\n\n\nIt is also unlikely that a Friendly AI will be constructible before a general AI system, due to its higher complexity and the impossibility of incremental testing.\n\n\nWorse yet, any truly intelligent system will treat its “be friendly” desire the same way very smart people deal with constraints placed in their minds by society. They basically see them as biases and learn to remove them. In fact, if I understand correctly, both the LessWrong community and CFAR are organizations devoted to removing pre-existing bias from human-level intelligent systems (people) — why would a superintelligent machine not go through the same “mental cleaning” and treat its soft spot for humans as completely irrational? Or are we assuming that humans are superior to super-AI in their de-biasing ability?\n\n\n\n\n---\n\n\n**Luke:** Thanks for clarifying. I agree that “Friendly AI” — a machine superintelligence that stably optimizes for humane values — might be impossible. 
Humans provide an existence proof for the possibility of general intelligence, but we have no existence proof for the possibility of Friendly AI. (Though, [by the orthogonality thesis](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/), there should be *some* super-powerful optimization process we would be happy to have created, though it may be very difficult to identify it in advance.)\n\n\nYou asked “why would a superintelligent machine not . . . treat its soft spot for humans as completely irrational?” Rationality as [typically defined](http://lesswrong.com/lw/c7g/rationality_and_winning/) in cognitive science and AI is relative to one’s goals. So if a rational-agent-style AI valued human flourishing (as a terminal rather than instrumental goal), then it *wouldn’t* treat its preference for human flourishing as irrational. It would only do that if its preference for human flourishing was an instrumental goal, and it discovered a way to achieve its terminal values more efficiently without achieving the instrumental goal of human flourishing. Of course, the first powerful AIs to be built might not use a rational-agent structure, and we might fail to specify “human flourishing” properly, and we might fail to build the AI such that it will preserve that goal structure upon self-modification, and so on. But *if* we succeed in all those things (and a few others) then I’m not so worried about a superintelligent machine treating its “soft spot for humans” as irrational, because rationality is defined in terms of one’s values.\n\n\nAnyway: so it seems your recommended strategy for dealing with fully autonomous machines is “Don’t ever build them” — the “relinquishment” strategy surveyed in section 3.5 of [Sotala & Yampolskiy (2013)](https://intelligence.org/files/ResponsesAGIRisk.pdf). 
Is there *any* conceivable way Earth could succeed in implementing that strategy?\n\n\n\n\n---\n\n\n**Roman:** Many people are programmed from early childhood with a terminal goal of serving God. We can say that they are God friendly. Some of them, as they mature and become truly human-level-intelligent, remove this God friendliness bias despite it being a terminal, not instrumental, goal. So despite all the theoretical work on the orthogonality thesis, the only actual example of intelligent machines we have is extremely likely to give up its pre-programmed friendliness via rational de-biasing if exposed to certain new data.\n\n\nI previously listed some problematic steps on the road to FAI, but it was not an exhaustive list. Additionally, all programs have bugs, can be hacked, or can malfunction because of natural or externally caused hardware failure, etc. To summarize, at best we will end up with a probabilistically safe system.\n\n\nAnyway, you ask me if there is any conceivable way we could succeed in implementing the “Don’t ever build them” strategy. Conceivable yes, desirable NO. Societies such as the Amish or the North Koreans are unlikely to create superintelligent machines anytime soon. However, forcing similar levels of restriction on technological use/development is neither practical nor desirable.\n\n\nAs the cost of hardware exponentially decreases, the capability necessary to develop an AI system opens up to single inventors and small teams. I would not be surprised if the first AI came out of a garage somewhere, in a way similar to how Apple and Google were started. Obviously, there is not much we can do to prevent that from happening.\n\n\n\n\n---\n\n\n**Luke:** Our discussion has split into two threads. I’ll address the first thread (about changing one’s values) in this question, and come back to the second thread (about relinquishment) in a later question.\n\n\nYou talked about humans deciding that their theological preferences were irrational. 
That is a good example of a general intelligence deciding to change its values — indeed, as a former Christian, [I had exactly that experience](http://lesswrong.com/lw/7dy/a_rationalists_tale/)! And I agree that many general intelligences would do this kind of thing.\n\n\nWhat I said in my previous comment was just that *some* kinds of AIs wouldn’t change their terminal values in this way, for example those with a rational agent architecture. Humans, famously, are *not* rational agents: we might say they have a “spaghetti code” architecture instead. (Even rational agents, however, will in *some* cases change their terminal values. See e.g. [De Blanc 2011](https://intelligence.org/files/OntologicalCrises.pdf) and [Bostrom 2012](http://www.nickbostrom.com/superintelligentwill.pdf).)\n\n\nDo you think we disagree about anything, here?\n\n\n\n\n---\n\n\n**Roman:** I am not sure. To me “even rational agents, however, will in *some* cases change their terminal values” means that friendly AI may decide to be unfriendly. If you agree with that, we are in complete agreement.\n\n\n\n\n---\n\n\n**Luke:** Well, the idea is that if we can identify the particular contexts in which agents will change their terminal values, then perhaps we can prevent such changes. But this isn’t yet known. In any case, I certainly agree that an AI which seems to be “friendly,” as far as we can discern, could turn out not to be friendly, or could become unfriendly at some later point. The question is whether we can make the *risk* of that happening so small that it is worth running the AI anyway — especially in a context in which e.g. other actors will soon run other AIs with *fewer* safety guarantees. 
(This idea of running or “turning on” an AI for the first time is of course oversimplified, but hopefully I’ve communicated what I’m trying to say.)\n\n\nNow, back to the question of relinquishment: Perhaps I’ve misheard you, but it sounds like you’re saying that machine ethics is hopelessly difficult, that AI safety engineering will be insufficient for fully autonomous AIs, and that fully autonomous AIs *will* be built because we can’t/shouldn’t rely on relinquishment. If that’s right, it seems like we have no “winning” options on the table. Is that what you’re saying?\n\n\n\n\n---\n\n\n**Roman:** Yes. I don’t see a permanent, 100% safe option. We can develop temporary solutions such as [Confinement](http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf) or [AI Safety Engineering](http://cecs.louisville.edu/ry/AIsafety.pdf), but at best this will delay the full outbreak of problems. We can also get very lucky — maybe constructing AGI turns out to be too difficult/impossible, or maybe it is possible but the constructed AI will happen to be human-neutral, by chance. Maybe we are less lucky and an [artilect war](http://www.amazon.com/The-Artilect-War-Controversy-Intelligent/dp/0882801546) will take place and prevent development. It is also possible that as more researchers join in AI safety research, a realization of the danger will result in diminished effort to construct AGI (similar to how the perceived dangers of chemical and biological weapons or human cloning have at least temporarily reduced efforts in those fields).\n\n\n\n\n---\n\n\n**Luke:** You’re currently [raising funds on Indiegogo](http://www.indiegogo.com/projects/artificial-superintelligence-a-futuristic-approach) to support you in writing a book about machine superintelligence. Why are you writing the book, and what do you hope to accomplish with it?\n\n\n\n\n---\n\n\n**Roman:** Most people don’t read research papers. 
If we want the issue of AI safety to become as well-known as global warming, we need to address the majority of people in a more direct way. With such popularity might come some benefit, as I said in my answer to your previous question. Most people whose opinion matters read books. Unfortunately, the majority of AI books on the market today talk only about what AI systems will be able to do for us, not to us. I think that writing a book which in purely scientific terms addresses the potential dangers of AI and what we can do about them is going to be extremely beneficial to reducing the risk posed by AGI. So I am currently writing the book, which I have called [*Artificial Superintelligence: a Futuristic Approach*](http://www.indiegogo.com/projects/artificial-superintelligence-a-futuristic-approach). I made it available for pre-order to help reduce the final costs of publishing by taking advantage of printing in large quantities. In addition to crowd-funding the book, I am also relying on the power of the crowd to help me edit it. For just $64 anyone can become an editor for the book. You will get an early draft of the book to proofread and to suggest modifications and improvements! Your help will be acknowledged in the book, and you will of course also get a free signed hardcopy of the book in its final form. In fact, the option (to become an editor) turned out to be as popular as the option to pre-order a digital copy of the book, indicating that I am on the right path here. 
So I encourage everyone concerned about the issue of AI safety to consider helping out with the project in any way they can.\n\n\n\n\n---\n\n\n**Luke:** Thanks Roman!\n\n\nThe post [Roman Yampolskiy on AI Safety Engineering](https://intelligence.org/2013/07/15/roman-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-16T05:33:39Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "c6048e696150d2aed413060819d9ae23", "title": "James Miller on Unusual Incentives Facing AGI Companies", "url": "https://intelligence.org/2013/07/12/james-miller-interview/", "source": "miri", "source_type": "blog", "text": "![rsz_11james-d-miller](https://intelligence.org/wp-content/uploads/2013/07/rsz_11james-d-miller.png)[James D. Miller](http://sophia.smith.edu/~jdmiller/resume.pdf) is an associate professor of economics at [Smith College](http://www.smith.edu/). He is the author of *[Singularity Rising](http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous/dp/1936661659/ref=sr_1_1?ie=UTF8&qid=1373066969&sr=8-1&keywords=singularity+rising), [Game Theory at Work](http://www.amazon.com/Game-Theory-Work-Outmaneuver-Competition/dp/0071400206/ref=sr_1_1?ie=UTF8&qid=1373066976&sr=8-1&keywords=game+theory+at+work),* and a [principles of microeconomics textbook](http://www.amazon.com/Principles-Microeconomics-James-Miller/dp/0073402834/ref=sr_1_1?ie=UTF8&qid=1373067243&sr=8-1&keywords=james+miller+microeconomics) along with several academic articles.\n\n\nHe has a PhD in economics from the University of Chicago and a J.D. from Stanford Law School where he was on *Law Review*. He is a member of cryonics provider [Alcor](http://www.alcor.org/) and a research advisor to MIRI. 
He is currently co-writing a book on better decision making with the [Center for Applied Rationality](http://rationality.org/) and will probably be an editor on the next edition of the *[Singularity Hypotheses](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/ref=sr_1_1?s=books&ie=UTF8&qid=1373129221&sr=1-1&keywords=singularity+hypotheses+a+scientific+and+philosophical+assessment)* book. He is a committed bio-hacker currently practicing or consuming a [paleo diet](http://www.amazon.com/Perfect-Health-Diet-Regain-Weight/dp/145169914X/ref=sr_1_1?ie=UTF8&qid=1373067533&sr=8-1&keywords=perfect+health+diet), [neurofeedback](http://www.nytimes.com/2010/10/05/health/05neurofeedback.html?pagewanted=all&_r=0), [cold thermogenesis](http://www.bulletproofexec.com/cold-thermogenesis-in-tibet-and-the-dangers-of-biohacking-made-real/), [intermittent fasting](http://www.bulletproofexec.com/bulletproof-fasting/), [brain fitness](http://www.lumosity.com/) [video games](http://brainworkshop.sourceforge.net/), [smart drugs](http://examine.com/supplements/Nootropic/), [bulletproof coffee](http://www.bulletproofexec.com/category/coffee-2/), and [rationality](http://rationality.org/workshops/) [training](http://lesswrong.com/).\n\n\n\n \n\n\n**Luke Muehlhauser:** Your book chapter in *[Singularity Hypothesis](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/)* describes some unusual economic incentives facing a future business that is working to create [AGI](http://en.wikipedia.org/wiki/Strong_ai#Artificial_General_Intelligence_research). 
To explain your point, you make the simplifying assumption that “a firm’s attempt to build an AGI will result in one of three possible outcomes”:\n\n\n* *Unsuccessful*: The firm fails to create AGI, losing value for its owners and investors.\n\n\n* *Riches*: The firm creates AGI, bringing enormous wealth to its owners and investors.\n\n\n* *Foom*: The firm creates AGI but this event quickly destroys the value of money, e.g. via an [intelligence explosion](https://intelligence.org/files/IE-EI.pdf) that eliminates scarcity, or creates a weird world without money, or exterminates humanity.\n\n\nHow does this setup allow us to see the unusual incentives facing a future business that is working to create AGI?\n\n\n\n\n---\n\n\n**James Miller:** A huge asteroid might hit the earth, and if it does it will destroy mankind. You should be willing to bet everything you have that the asteroid will miss our planet because either you win your bet or Armageddon renders the wager irrelevant. Similarly, if I’m going to start a company that will either make investors extremely rich or create a *Foom* that destroys the value of money, you should be willing to invest a lot in my company’s success because either the investment will pay off, or you would have done no better making any other kind of investment.\n\n\nPretend I want to create a controllable AGI, and if successful I will earn great *Riches* for my investors. At first I intend to follow a research and development path in which if I fail to achieve *Riches*, my company will be *Unsuccessful* and have no significant impact on the world. Unfortunately, I can’t convince potential investors that the probability of my achieving *Riches* is high enough to make my company worth investing in. The investors assign too large a likelihood that other potential investments would outperform my firm’s stock. 
But then I develop an evil alternative research and development plan under which I have the exact same probability of achieving *Riches* as before but now if I fail to create a controllable AGI, an unfriendly *Foom* will destroy humanity. Now I can truthfully tell potential investors that it’s highly unlikely any other company’s stock will outperform mine.\n\n\n\n\n\n---\n\n\n**Luke:** In the case of the asteroid, the investor (me) has no ability to affect whether asteroid hits us or not. In contrast, assuming investors want to avoid risking the destruction of money, there is something they can do about it: they can simply not invest in the project to create controllable AGI, and they can encourage others not to invest in it either. If the dangerous AGI project fails to get investment, then those potential investors need not worry that the value of money — or indeed their very lives — will be destroyed by that particular AGI project.\n\n\nWhy would investors fund something that has a decent chance of destroying everything?\n\n\n\n\n---\n\n\n**James:** Consider a mine that probably contains the extremely valuable metal mithril. Alas, digging in the mine might awaken a Balrog who would then destroy our civilization. I come to you asking for money to fund my mining expedition. When you raise concerns about the Balrog, I explain that even if you don’t fund me I’m probably going to get the money elsewhere, perhaps from one of the many people who don’t believe in Balrogs. Plus, since you are just one of many individuals I’m hoping to get to back me, even if I never make up for losing your investment, I will almost certainly still get enough funds to dig for mithril. And even if you manage to stop my mining operation, someone else is going to eventually work the mine. At the very least, mithril’s military value will eventually cause some nations to seek it. Therefore, if the mine does contain an easily-awakened Balrog we’re all dead regardless of what you do. 
Consequently, since you have no choice but to bear the downside risk of the mine, I explain that you might as well invest some of your money with me so you can also share in the upside benefit.\n\n\nIf you’re a self-interested investor then when deciding on whether to invest in an AGI-seeking company you don’t look at (a) the probability that the company will destroy mankind, but rather consider (b) the probability of mankind being destroyed if you do invest in the company minus the probability if you don’t invest in it. For most investors (b) will be zero or at least really, really small because if you don’t invest in the company someone else probably will take your place, the company probably doesn’t need your money to become operational, and if this one company doesn’t pursue AGI another almost certainly will.\n\n\n\n\n---\n\n\n**Luke:**  Thanks for that clarification. Now I’d like to give you an opportunity to reply to one of your critics. In response to your book chapter, Robin Hanson [wrote](http://hanson.gmu.edu/SingularityBook-8A.pdf): “[Miller’s analysis] is only as useful as the assumptions on which it is based. Miller’s chosen assumptions seem to me quite extreme, and quite unlikely.”\n\n\nI asked Hanson if he could clarify (and be quoted), and he told me:\n\n\n\n> In [this] context the assumptions were: “a single public firm developing the entire system in one go. . . . it succeeds so quickly that there is no chance for others to react – the world is remade overnight.” I would instead assume a whole AI industry, where each firm only makes a small part of the total product and competes with several other firms to make that small part. 
Each new innovation by each firm only adds a small fraction to the capability of that small part.\n> \n> \n\n\nHanson also wrote some comments about your book *Singularity Rising* [here](http://www.overcomingbias.com/2012/09/millers-singularity-rising.html).\n\n\nHow would you reply to Hanson, regarding the economic incentives facing a business working on AGI?\n\n\n\n\n---\n\n\n**James:** Hanson assigns a very low probability to an intelligence explosion. And my chapter’s analysis does become worthless if you sufficiently discount the possibility of such a *Foom*.\n\n\nBut in large part because of [Eliezer Yudkowsky](https://intelligence.org/files/IEM.pdf), I do think that there is a significant chance of there being an intelligence explosion in which an AGI within a matter of days goes from human-level intelligence to something orders of magnitude smarter. I described in *[Singularity Rising](http://www.singularityrising.com/)* how this might happen:\n\n\n\n> Somehow, a human-level AI is created that wishes to improve its cognitive powers. The AI examines its own source code, looking for ways to augment itself. After finding a potential improvement, the original AI makes a copy of itself with the enhancement added. The original AI then tests the new AI to see if the new AI is smarter than the original one. If the new AI is indeed better, it replaces the current AI. This new AI then proceeds in turn to enhance its code. Since the new AI is smarter than the old one, it finds improvements that the old AI couldn’t. This process continues to repeat itself.\n> \n> \n\n\nIntelligence is a reflective superpower, able to turn on itself to decipher its own workings. 
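The propose-test-replace loop quoted above can be sketched in a few lines. This is a purely illustrative toy, not anything from *Singularity Rising*: the `propose` and `score` stand-ins (and the numeric "agent") are my own assumptions.\n\n```python\n# Toy sketch of the quoted recursive self-improvement loop: an agent\n# repeatedly proposes a modified copy of itself and adopts the copy\n# only if it tests as smarter than the original.\n\ndef self_improve(agent, propose, score, steps=10):\n    """Iterate: propose a modified copy; keep it iff it scores higher."""\n    for _ in range(steps):\n        candidate = propose(agent)\n        if score(candidate) > score(agent):\n            agent = candidate  # the smarter copy replaces the original\n    return agent\n\n# Toy instantiation: an "agent" is just a number, and its "intelligence"\n# is its value; each proposal is a 50% improvement, so every copy is kept.\nfinal = self_improve(\n    agent=1.0,\n    propose=lambda a: a * 1.5,\n    score=lambda a: a,\n    steps=5,\n)\nprint(final)  # 1.0 * 1.5**5 = 7.59375\n```\n\nThe compounding in the toy (each adopted copy finds the next improvement) is the mechanism behind the "Foom" scenario James describes, though of course nothing about real self-modifying code is captured by multiplying a number.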
I think that there probably exists some intelligence threshold at which an AGI, whose intelligence is based on easily alterable and testable computer code, could *Foom*.\n\n\nHanson, however, is correct that an intelligence explosion would represent a sharp break with all past technological developments, and so an [outside view](http://wiki.lesswrong.com/wiki/Outside_view) of technology supports his position that my chosen assumptions are “quite extreme, and quite unlikely.”\n\n\n\n\n---\n\n\n**Luke:** I think Yudkowsky would reply that there are *many* outside views, and some of them suggest a rapid intelligence explosion while others do not. For example, in [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf) he argues that an outside view with respect to brain size and hominid evolution suggests that an AGI would lead to an intelligence explosion. My own preferred solution to “[reference class tennis](http://lesswrong.com/lw/gvk/induction_or_the_rules_and_etiquette_of_reference/)” is to take each outside view as a model for how the phenomenon works, acknowledge our model uncertainty, and then use something like [model combination](http://lesswrong.com/lw/hzu/model_combination_and_adjustment/) to figure out what our posterior probability distribution should be.\n\n\nOkay, last question. What are you working on now?\n\n\n\n\n---\n\n\n**James:** I’m co-authoring a book on better decision making with the [Center for Applied Rationality](http://rationality.org). Although the book is still in the early stages, it might be based on short, original fictional stories. 
I’m also probably going to be an editor on the next edition of the *[Singularity Hypotheses](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/ref=sr_1_1?s=books&ie=UTF8&qid=1373129221&sr=1-1&keywords=singularity+hypotheses+a+scientific+and+philosophical+assessment)* book.\n\n\n\n\n---\n\n\n**Luke:** Thanks, James!\n\n\nThe post [James Miller on Unusual Incentives Facing AGI Companies](https://intelligence.org/2013/07/12/james-miller-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-13T01:39:45Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "d46e4a718b384998053c60a98b6002fa", "title": "MIRI’s July Newsletter: Fundraiser and New Papers", "url": "https://intelligence.org/2013/07/11/july-newsletter/", "source": "miri", "source_type": "blog", "text": "Greetings from the Executive Director\nDear friends,\nAnother busy month! Since our last newsletter, we’ve published 3 new papers and 2 new “analysis” blog posts, we’ve significantly improved our website (especially the [Research](http://intelligence.org/research/) page), we’ve [relocated](http://intelligence.org/2013/07/08/miri-has-moved/) to downtown Berkeley, and we’ve launched [our summer 2013 matching fundraiser](http://intelligence.org/2013/07/08/2013-summer-matching-challenge/)!\nMIRI also recently presented at the [Effective Altruism Summit](http://www.effectivealtruismsummit.com/), a gathering of 60+ effective altruists in Oakland, CA. 
As philosopher Peter Singer explained in his [TED talk](http://www.ted.com/talks/peter_singer_the_why_and_how_of_effective_altruism.html), effective altruism “combines both the heart and the head.” The heart motivates us to be empathic and altruistic toward others, while the head can “make sure that what [we] do is effective and well-directed,” so that altruists can do not just *some* good but *as much good as possible*.\nAs I explain in [Friendly AI Research as Effective Altruism](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/), MIRI was founded in 2000 on the premise that creating Friendly AI might be a particularly efficient way to do as much good as possible. Effective altruists focus on a variety of other causes, too, such as poverty reduction. As I say in [Four Focus Areas of Effective Altruism](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/), I think it’s important for effective altruists to cooperate and collaborate, despite their differences of opinion about which focus areas are optimal. The world needs more effective altruists, of all kinds.\nMIRI engages in direct efforts — e.g. Friendly AI research — to improve the odds that machine superintelligence has a positive rather than a negative impact. But indirect efforts — such as spreading rationality and effective altruism — are also likely to play a role, for they will influence the context in which powerful AIs are built. That’s part of why we created [CFAR](http://rationality.org/).\nIf you think this work is important, I hope you’ll [donate now](http://intelligence.org/donate/) to support our work. MIRI is *entirely* supported by private funders like *you*. 
And if you donate before August 15th, your contribution will be matched by one of the generous backers of [our current fundraising drive](http://intelligence.org/2013/07/08/2013-summer-matching-challenge/).\nThank you,\nLuke Muehlhauser\nExecutive Director\n\n\n\nOur Summer 2013 Matching Fundraiser\nThanks to the generosity of several major donors, every donation to MIRI made from now until August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!\nNow is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund [our research program](http://intelligence.org/research/).\nEarly this year we made a transition from movement-building to research, and we’ve hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on [the future’s most important problem](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/).\n**Accomplishments in 2013 so far**\n* [Changed our name](http://intelligence.org/blog/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) to MIRI and launched our new website at intelligence.org.\n* Released six new research papers: [Definability of Truth in Probabilistic Logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/), [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), [Tiling Agents for Self-Modifying AI](https://intelligence.org/files/TilingAgents.pdf), [Robust Cooperation in the Prisoner’s Dilemma](https://intelligence.org/files/RobustCooperation.pdf), [A Comparison of Decision Algorithms on Newcomblike Problems](https://intelligence.org/files/Comparison.pdf), and [Responses to Catastrophic AGI Risk: A Survey](http://intelligence.org/2013/07/07/responses-to-c%E2%80%A6-risk-a-survey/%20%E2%80%8E).\n* Held our [2nd research workshop](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/). 
(Our [3rd workshop](http://intelligence.org/2013/06/07/miris-july-2013-workshop/) is currently ongoing.)\n* Published six new analyses to our blog: [The Lean Nonprofit](http://intelligence.org/2013/04/04/the-lean-nonprofit/), [AGI Impact Experts and Friendly AI Experts](http://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/), [Five Theses…](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/), When Will AI Be Created?, [Friendly AI Research as Effective Altruism](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/), and [What is Intelligence?](http://intelligence.org/2013/06/19/what-is-intelligence-2/)\n* Published the [Facing the Intelligence Explosion](http://intelligenceexplosion.com/ebook/) ebook.\n* Published several other substantial articles: [Recommended Courses for MIRI Researchers](http://intelligence.org/courses/), [Decision Theory FAQ](http://lesswrong.com/lw/gu1/decision_theory_faq/), [A brief history of ethically concerned scientists](http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/), [Bayesian Adjustment Does Not Defeat Existential Risk Charity](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/), and others.\n* And of course much more.\n\n\n**Future Plans You Can Help Support**\n* We will host many more research workshops, including [one in September](http://intelligence.org/2013/07/07/miris-september-2013-workshop/%20%E2%80%8E), and one in December (with [John Baez](http://math.ucr.edu/home/baez/) attending, among others).\n* Eliezer will continue to publish about open problems in Friendly AI. 
(Here is [#1](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/) and [#2](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/).)\n* We will continue to publish strategic analyses, mostly via our blog.\n* We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for more of our materials, to make them more accessible: [The Sequences, 2006-2009](http://wiki.lesswrong.com/wiki/Sequences) and [The Hanson-Yudkowsky AI Foom Debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate).\n* We will continue to set up the infrastructure (e.g. [new offices](http://intelligence.org/2013/07/06/miri-has-moved/), researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.\n\n\n(Other projects are still being surveyed for likely cost and strategic impact.)\nWe appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. 
If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.\n\n\nNew Research Page, Three New Publications\nOur new [Research](http://intelligence.org/research/) page has launched!\nOur previous research page was a simple list of articles, but the new page describes the purpose of our research, explains four categories of research to which we contribute, and highlights the papers we think are most important to read.\nWe’ve also released three new research articles.\n[Tiling Agents for Self-Modifying AI, and the Löbian Obstacle](https://intelligence.org/files/TilingAgents.pdf) (discuss it [here](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/)), by Yudkowsky and Herreshoff, explains one of the key open problems in MIRI’s research agenda:\nWe model self-modification in AI by introducing “tiling” agents whose decision systems will approve the construction of highly similar agents, creating a repeating pattern (including similarity of the offspring’s goals). Constructing a formalism in the most straightforward way produces a Gödelian difficulty, the “Löbian obstacle.” By technical methods we demonstrate the possibility of avoiding this obstacle, but the underlying puzzles of rational coherence are thus only partially addressed. 
We extend the formalism to partially unknown deterministic environments, and show a very crude extension to probabilistic environments and expected utility; but the problem of finding a fundamental decision criterion for self-modifying probabilistic agents remains open.\n[Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic](https://intelligence.org/files/RobustCooperation.pdf) (discuss it [here](http://lesswrong.com/lw/hmw/robust_cooperation_in_the_prisoners_dilemma/)), by LaVictoire et al., explains some progress in program equilibrium made by MIRI research associate Patrick LaVictoire and several others during MIRI’s April 2013 workshop:\nRational agents defect on the one-shot prisoner’s dilemma even though mutual cooperation would yield higher utility for both agents. Moshe Tennenholtz showed that if each program is allowed to pass its playing strategy to all other players, some programs can then cooperate on the one-shot prisoner’s dilemma. “Program equilibria” is Tennenholtz’s term for Nash equilibria in a context where programs can pass their playing strategies to the other players. One weakness of this approach so far has been that any two programs which make different choices cannot “recognize” each other for mutual cooperation, even if they are functionally identical. In this paper, provability logic is used to enable a more flexible and secure form of mutual cooperation.\n[Responses to Catastrophic AGI Risk: A Survey](https://intelligence.org/files/ResponsesAGIRisk.pdf) (discuss it [here](http://lesswrong.com/r/discussion/lw/hxi/responses_to_catastrophic_agi_risk_a_survey/)), by Sotala and Yampolskiy, is a summary of the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.\nMany researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. 
It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.\n\n\nTwo New Analyses\nMIRI publishes some of its most substantive research to its blog, under the [Analysis](http://intelligence.org/category/analysis/) category. For example, [When Will AI Be Created?](http://intelligence.org/2013/05/15/when-will-ai-be-created/) is the product of 20+ hours of work, and has 14 footnotes and 40+ scholarly references (all of them linked to PDFs).\nLast month, we published two new analyses.\n[Friendly AI Research as Effective Altruism](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) presents a bare-bones version of an argument that Friendly AI research is a particularly efficient way to purchase expected value, so that the argument can be elaborated and critiqued by MIRI and others.\n[What is Intelligence?](http://intelligence.org/2013/06/19/what-is-intelligence-2/) argues that imprecise working definitions can be useful, and explains the particular imprecise working definition for *intelligence* that we tend to use at MIRI: efficient cross-domain optimization. A future post will discuss some potentially useful working definitions for “artificial general intelligence.”\n\n\nGrant Writer Needed\nMIRI would like to hire someone to write grant applications, both for our research efforts and for STEM education. 
If you have experience with either, please [**apply here**](https://docs.google.com/forms/d/1YEkv-CYfgIBkxALyi70mT1pG5UvuGWv_-E_dTM-oQ3c/viewform).\nThe pay will depend on skill and experience, and is negotiable.\n\n\nFeatured Volunteer\nOliver Habryka helps out by proofreading MIRI’s papers, and would be able to contribute to our research at some point, perhaps on the subject of “lessons for ethics from machine ethics.” Independent of his direct contributions to MIRI’s work, Oliver has also lectured on topics related to MIRI’s work at his high school, and has also taught a class on rationality, where he inspired participation by using a “leveling up” reward system. Oliver is currently studying the foundations of mathematics and hopes one day to direct his career goals in such a way that his contributions to our mission increase over time.\n\n\nThe post [MIRI’s July Newsletter: Fundraiser and New Papers](https://intelligence.org/2013/07/11/july-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-11T20:44:14Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "59a0fa684d6888f19af497c6f124b44a", "title": "2013 Summer Matching Challenge!", "url": "https://intelligence.org/2013/07/08/2013-summer-matching-challenge/", "source": "miri", "source_type": "blog", "text": "Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made from now until August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!\n\n\n \n\n\n**[Donate Now!](https://intelligence.org/donate/#donation-methods)**\n\n\n### We have reached our goal of $200,000!\n\n\nNow is your chance to **double your impact** while helping us raise up to $400,000 (with matching) to fund [our research 
program](https://intelligence.org/research/).\n\n\n\n\n---\n\n\nEarly this year we made a transition from movement-building to research, and we’ve *hit the ground running* with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on [the future’s most important problem](http://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/).\n\n\n \n\n\n### Accomplishments in 2013 so far\n\n\n* [Changed our name](https://intelligence.org/feed/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) to MIRI and launched our new website at intelligence.org.\n* Released **six new research papers**: [Definability of Truth in Probabilistic Logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/), [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), [Tiling Agents for Self-Modifying AI](https://intelligence.org/files/TilingAgents.pdf), [Robust Cooperation in the Prisoner’s Dilemma](https://intelligence.org/files/RobustCooperation.pdf), [A Comparison of Decision Algorithms on Newcomblike Problems](https://intelligence.org/files/Comparison.pdf), and [Responses to Catastrophic AGI Risk: A Survey](https://intelligence.org/2013/07/07/responses-to-c…-risk-a-survey/).\n* Held our [2nd research workshop](https://intelligence.org/2013/03/07/upcoming-miri-research-workshops/). 
(Our [3rd workshop](https://intelligence.org/2013/06/07/miris-july-2013-workshop/) is currently ongoing.)\n* Published **six new analyses** to our blog: [The Lean Nonprofit](https://intelligence.org/2013/04/04/the-lean-nonprofit/), [AGI Impact Experts and Friendly AI Experts](https://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/), [Five Theses…](https://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/), When Will AI Be Created?, [Friendly AI Research as Effective Altruism](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/), and [What is Intelligence?](https://intelligence.org/2013/06/19/what-is-intelligence-2/)\n* Published the *[Facing the Intelligence Explosion](http://intelligenceexplosion.com/ebook/)* ebook.\n* Published several other substantial articles: [Recommended Courses for MIRI Researchers](https://intelligence.org/courses/), [Decision Theory FAQ](http://lesswrong.com/lw/gu1/decision_theory_faq/), [A brief history of ethically concerned scientists](http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/), [Bayesian Adjustment Does Not Defeat Existential Risk Charity](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/), and others.\n* And of course *much* more.\n\n\n### Future Plans You Can Help Support\n\n\n* We will host many more research workshops, including [one in September](https://intelligence.org/2013/07/07/miris-september-2013-workshop/), and one in December (with [John Baez](http://math.ucr.edu/home/baez/) attending, among others).\n* Eliezer will continue to publish about open problems in Friendly AI. 
(Here is [#1](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/) and [#2](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/).)\n* We will continue to publish strategic analyses, mostly via our blog.\n* We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for more of our materials, to make them more accessible: *[The Sequences, 2006-2009](http://wiki.lesswrong.com/wiki/Sequences)* and *[The Hanson-Yudkowsky AI Foom Debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate)*.\n* We will continue to set up the infrastructure (e.g. [new offices](https://intelligence.org/2013/07/06/miri-has-moved/), researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.\n\n\n(Other projects are still being surveyed for likely cost and strategic impact.)\n\n\nWe appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.\n\n\n† $200,000 of total matching funds has been provided by Jaan Tallinn, Loren Merritt, Rick Schwall, and Alexei Andreev.\n\n\nThe post [2013 Summer Matching Challenge!](https://intelligence.org/2013/07/08/2013-summer-matching-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-08T14:00:40Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "ce82fae6de9185a47a5d31fbce81d585", "title": "MIRI Has Moved!", "url": "https://intelligence.org/2013/07/08/miri-has-moved/", "source": "miri", "source_type": "blog", "text": "For the past several months, MIRI and its child organization [CFAR](http://rationality.org/) have been working from a much-too-small office on the outskirts of Berkeley. 
At the end of June, MIRI and CFAR took over the 3rd floor of [2030 Addison St.](https://www.google.com/maps/preview#!q=2030+Addison+St%2C+Berkeley) in downtown Berkeley, which has sufficient space for both organizations.\n\n\nOur new office is 0.5 blocks from the Downtown Berkeley BART exit at Shattuck & Addison, and 2 blocks from the UC Berkeley campus. Here’s a photo of the campus from our roof:\n\n\n![view of campus from roof (500px)](http://intelligence.org/wp-content/uploads/2013/07/view-of-campus-from-roof-500px.jpg)\nThe proximity to UC Berkeley will make it easier for MIRI to network with Berkeley’s professors and students. Conveniently, UC Berkeley is ranked [5th in the world](http://www.usnews.com/education/worlds-best-universities-rankings/best-universities-mathematics) in mathematics, and [1st in the world](http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-science-schools/logic-rankings) in mathematical logic.\n\n\nSharing an office with CFAR carries many benefits for both organizations:\n\n\n1. CFAR and MIRI can “flex” into each other’s space for short periods as needed, for example when MIRI is holding a week-long [research workshop](https://intelligence.org/get-involved/#workshop).\n2. We can share resources (printers, etc.).\n3. Both organizations can benefit from interaction between our two communities.\n\n\nGetting the new office was a team effort, but the person *most* responsible for this success was MIRI Deputy Director Louie Helm.\n\n\nNote that MIRI isn’t yet able to accommodate “drop in” visitors, as we keep irregular hours throughout the week. So if you’d like to visit, please [contact us](https://intelligence.org/team/) first.\n\n\nWe retain 2721 Shattuck Ave. 
#1023 as an alternate mailing address.\n\n\nThe post [MIRI Has Moved!](https://intelligence.org/2013/07/08/miri-has-moved/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-08T13:50:53Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "dd9bee5c5e40bb3df1b2774f1cd58685", "title": "MIRI’s September 2013 Workshop", "url": "https://intelligence.org/2013/07/08/miris-september-2013-workshop/", "source": "miri", "source_type": "blog", "text": "![Paul at April workshop](http://intelligence.org/wp-content/uploads/2013/07/Paul-at-April-workshop.jpg)\nFrom September 7-13, MIRI will host its **4th Workshop on Logic, Probability, and Reflection**. The focus of this workshop will be the foundations of [decision theory](http://lesswrong.com/lw/gu1/decision_theory_faq/).\n\n\nParticipants confirmed so far include:\n\n\n* [Paul Christiano](http://rationalaltruist.com/) (UC Berkeley)\n* [Wei Dai](http://www.weidai.com/) (independent)\n* [Gary Drescher](http://en.wikipedia.org/wiki/Gary_Drescher) (independent)\n* [Kenny Easwaran](http://www.kennyeaswaran.org/) (USC)\n* [Cameron Freer](http://math.mit.edu/~freer/) (MIT)\n* [Patrick LaVictoire](http://www.math.wisc.edu/~patlavic/) (Quixey)\n* [Ilya Shpitser](http://www.southampton.ac.uk/maths/about/staff/is1d12.page) (U Southampton)\n* [Vladimir Slepnev](http://lesswrong.com/user/cousin_it/submitted/) (Google)\n* [Nisan Stiennon](http://lesswrong.com/user/Nisan/submitted/) (Stanford)\n* [Andreas Stuhlmüller](http://stuhlmueller.org/) (MIT & Stanford)\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (MIRI)\n\n\nIf you have a strong mathematics background and might like to attend this workshop, it’s not too late to [apply](http://intelligence.org/get-involved/#workshop)! 
And even if *this* workshop doesn’t fit your schedule, please **do apply**, so that we can notify you of other workshops (long before they are announced publicly).\n\n\nThe post [MIRI’s September 2013 Workshop](https://intelligence.org/2013/07/08/miris-september-2013-workshop/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-08T13:50:22Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7c51fb0d5f4e2de8c53afe2295486e6d", "title": "Responses to Catastrophic AGI Risk: A Survey", "url": "https://intelligence.org/2013/07/08/responses-to-catastrophic-agi-risk-a-survey/", "source": "miri", "source_type": "blog", "text": "MIRI is self-publishing another technical report that was too lengthy (60 pages) for publication in a journal: **[Responses to Catastrophic AGI Risk: A Survey](https://intelligence.org/files/ResponsesAGIRisk.pdf)**.\n\n\nThe report, co-authored by past MIRI researcher [Kaj Sotala](http://kajsotala.fi/) and University of Louisville’s [Roman Yampolskiy](http://louisville.edu/speed/computer/people/faculty/yampolskiy), is a summary of the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.\n\n\nHere is the abstract:\n\n\n\n> Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. 
We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.\n> \n> \n\n\nThe preferred discussion page for the paper is [here](http://lesswrong.com/r/discussion/lw/hxi/responses_to_catastrophic_agi_risk_a_survey/).\n\n\n**Update:** This report has now been published in *Physica Scripta*, available [here](http://iopscience.iop.org/1402-4896/90/1/018001/article).\n\n\nThe post [Responses to Catastrophic AGI Risk: A Survey](https://intelligence.org/2013/07/08/responses-to-catastrophic-agi-risk-a-survey/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-07-08T13:49:59Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "9f43ed5c7889152e2e1f8eacaa992fe6", "title": "What is Intelligence?", "url": "https://intelligence.org/2013/06/19/what-is-intelligence-2/", "source": "miri", "source_type": "blog", "text": "When asked their opinions about “human-level artificial intelligence” — *aka* “artificial general intelligence” (AGI)[1](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_0_10275 \"I use the HLAI and AGI interchangeably, but lately I’ve been using AGI almost exclusively, because I’ve learned that many people in the AI community react negatively to any mention of “human-level” AI but have no objection to the concept of narrow vs. general intelligence. 
See also Ben Goertzel’s comments here.\") — many experts understandably reply that these terms haven’t yet been precisely defined, and it’s hard to talk about something that hasn’t been defined.[2](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_1_10275 \"Asked when he thought HLAI would be created, Pat Hayes (a past president of AAAI) replied: “I do not consider this question to be answerable, as I do not accept this (common) notion of ‘human-level intelligence’ as meaningful.” Asked the same question, AI scientist William Uther replied: “You ask a lot about ‘human level AGI’. I do not think this term is well defined,” while AI scientist Alan Bundy replied: “I don’t think the concept of ‘human-level machine intelligence’ is well formed.”\") In this post, I want to briefly outline an imprecise but useful “working definition” for *intelligence* we tend to use at MIRI. In a future post I will write about some useful working definitions for *artificial general intelligence*.\n\n\n \n\n\n### Imprecise definitions can be useful\n\n\nPrecise definitions are important, but I concur with Bertrand Russell that\n\n\n\n> [You cannot] start with anything precise. You have to achieve such precision… as you go along.\n> \n> \n\n\nPhysicist [Milan Ćirković](http://ieet.org/index.php/IEET/bio/cirkovic/) agrees, and [gives](http://www.amazon.com/Astrobiological-Landscape-Start-Publishing-ebook/dp/B008CDSB30/) an example:\n\n\n\n> The formalization of knowledge — which includes giving precise definitions — usually comes at the end of the original research in a given field, not at the very beginning. 
A particularly illuminating example is the concept of *number*, which was properly defined in the modern sense only after the development of axiomatic set theory in the… twentieth century.[3](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_2_10275 \"Sawyer (1943) gives another example: “Mathematicians first used the sign √-1, without in the least knowing what it could mean, because it shortened work and led to correct results. People naturally tried to find out why this happened and what √-1 really meant. After two hundreds years they succeeded.” Dennett (2013) makes a related comment: “Define your terms, sir! No, I won’t. That would be premature… My [approach] is an instance of nibbling on a tough problem instead of trying to eat (and digest) the whole thing from the outset… In Elbow Room, I compared my method to the sculptor’s method of roughing out the form in a block of marble, approaching the final surfaces cautiously, modestly, working by successive approximation.”\")\n> \n> \n\n\nFor a more AI-relevant example, consider the concept of a “self-driving car,” which has been given a variety of vague definitions [since the 1930s](http://books.google.com/books?id=7OEDAAAAMBAJ&lpg=PA210&dq=automatic%20car&pg=PA210#v=onepage&q&f=false). Would a car [guided by a buried cable](http://books.google.com/books?id=xiUDAAAAMBAJ&lpg=PA75&dq=car%20drives%20itself&pg=PA75#v=onepage&q&f=false) qualify? What about a [modified 1955 Studebaker](http://books.google.com/books?id=jd8DAAAAMBAJ&lpg=PA128&dq=automatic%20car&pg=PA128#v=onepage&q&f=false) that could use sound waves to detect obstacles and automatically engage the brakes if necessary, but could only steer “on its own” if each turn was preprogrammed? 
Does that count as a “self-driving car”?\n\n\nWhat about the “[VaMoRs](http://commonsenseatheism.com/wp-content/uploads/2013/05/Dickmanns-et-al-The-seeing-passenger-car-VaMoRs-P.pdf)” of the 1980s that could avoid obstacles and steer around turns using computer vision, but weren’t advanced enough to be ready for public roads? How about the 1995 [Navlab](http://en.wikipedia.org/wiki/Navlab) car that drove across the USA and was fully autonomous for 98.2% of the trip, or the robotic cars which finished the 132-mile off-road course of the [2005 DARPA Grand Challenge](http://is.gd/CDvT8V), supplied only with the GPS coordinates of the route? What about the winning cars of the [2007 DARPA Grand Challenge](http://is.gd/kOjor6), which finished an urban race while obeying all traffic laws and avoiding collisions with other cars? Does [Google’s driverless car](http://www.forbes.com/sites/joannmuller/2013/03/21/no-hands-no-feet-my-unnerving-ride-in-googles-driverless-car/) qualify, given that it has logged more than 500,000 autonomous miles without a single accident under computer control, but still struggles with difficult merges and snow-covered roads?[4](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_3_10275 \"With self-driving cars, researchers did use many precise external performance measures (e.g. accident rates, speed, portion of the time they could run unassisted, frequency of getting stuck) to evaluate progress, as well as internal performance metrics (speed of search, bounded loss guarantees, etc.). Researchers could see that these bits of progress were in the right direction, even if their relative contribution long-term was unclear. And so it is with AI in general. 
AI researchers use many precise external and internal performance measures to evaluate progress, but it is difficult to know the relative contribution of these bits of progress toward the final goal of AGI.\")\n\n\nOur lack of a precise definition for “self-driving car” doesn’t seem to have hindered progress on self-driving cars very much.[5](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_4_10275 \"Heck, we’ve had pornography for millennia and still haven’t been able to define it precisely. Encyclopedia entries for “pornography” often simply quote Justice Potter Stewart: “I shall not today attempt further to define the kinds of material I understand to be [pornography]… but I know it when I see it.”\") And I’m glad we didn’t wait to seriously discuss self-driving cars until we had a precise definition for the term.\n\n\nSimilarly, I don’t think we should wait for a precise definition of AGI before discussing the topic seriously. On the other hand, the term is useless if it carries *no* information. So let’s work our way toward a stipulative, operational definition for AGI. We’ll start by developing an operational definition for *intelligence*.\n\n\n\n### A definition for “intelligence”\n\n\n[Legg and Hutter (2007)](http://arxiv.org/pdf/0706.3639.pdf) found that definitions of intelligence converge toward the idea that “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” Let’s call this the “optimization power” concept of intelligence, because it measures an agent’s power to optimize the world according to its preferences.\n\n\nI think this is a productive approach to the issue, since it identifies intelligence with externally measurable *performance* rather than with the details of *how* that performance might be achieved (e.g. via consciousness, [brute force calculation](http://www.hutter1.net/publ/uaigentle.pdf), “complexity,” or something else). 
Moreover, it’s usually *performance* we care about: we tend to care most about whether an AI will perform well enough to replace human workers, or whether it will perform well enough to improve its own abilities without human assistance, not whether it has some particular internal feature.[6](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_5_10275 \"We might care about whether machines are conscious in addition to being intelligent, but we already have a convenient term for that: consciousness. In particular, we might care about machine consciousness because the slow, plodding invention of AI may involve the creation and destruction of millions of partially-conscious near-AIs that are switched on, suffer for a while, and are then switched off — all while being unable to signal to us that they are suffering. This is especially likely if we remain unclear about the nature of consciousness for several more decades, and thus have no principled way (e.g. via nonperson predicates) to create intelligent machines that we know are not conscious (and are thus incapable of suffering). One of the first people to make this point clearly was Metzinger (2003), p. 621: “What would you say if someone came along and said, ‘Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development—we urgently need some funding for this important and innovative kind of research!’ You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. 
They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby—no representatives in any ethics committee.” Metzinger repeats the point in Metzinger (2010), starting on page 194.\")\n\n\nFurthermore, the concept of optimization power allows us to compare the intelligence of different kinds of agents. As Albus ([1991](ftp://calhau.dca.fee.unicamp.br/pub/docs/ia005/Albus-outline.pdf)) said, “A useful definition of intelligence… should include both biological and machine embodiments, and these should span an intellectual range from that of an insect to that of an Einstein, from that of a thermostat to that of the most sophisticated computer system that could ever be built.”\n\n\nI’d like to add one more consideration, though. What if two agents have roughly equal ability to optimize the world according to their preferences, but the second agent requires far more resources to do so? These agents have the same optimization power, but the first one seems to be optimizing more intelligently. So perhaps we could use “intelligence” to mean “optimization power divided by resources used” — what Yudkowsky called [efficient cross-domain optimization](http://lesswrong.com/lw/vb/efficient_crossdomain_optimization/).[7](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_6_10275 \"Admittedly, this is still pretty vague. One step toward precision would be to propose a definition of intelligence as optimization power for some canonical distribution of possible preferences, over some canonical distribution of environments, with a penalty for resource use. 
The canonical preferences and canonical environments could be weighted toward preferences and environments relevant to our concerns: we care more about whether AIs can do science than whether they can paint abstract art, and we care more about whether they can achieve their goals in our solar system than whether they can achieve their goals inside a black hole. Also see Goertzel (2010)’s “efficient pragmatic general intelligence.”\")\n\n\nOther definitions[8](https://intelligence.org/2013/06/19/what-is-intelligence-2/#footnote_7_10275 \"Hibbard (2011); Legg & Veness (2011); Wang (2008); Schaul et al. (2011); Dowe & Hernandez-Orallo (2012); Goertzel (2010); Adams et al. (2011).\") have their merits, too. But at MIRI we find the concept of “efficient cross-domain optimization” sufficiently useful that it serves as our (still imprecise!) working definition for intelligence.\n\n\nIn a future post (**Edit**: [here](https://intelligence.org/2013/08/11/what-is-agi/)), I’ll discuss some useful working definitions for *artificial general intelligence*.\n\n\n \n\n\n\n\n---\n\n1. I use the terms HLAI and AGI interchangeably, but lately I’ve been using AGI almost exclusively, because I’ve learned that many people in the AI community react negatively to any mention of “human-level” AI but have no objection to the concept of narrow vs. general intelligence. See also Ben Goertzel’s comments [here](http://wp.goertzel.org/?p=173).\n2. 
Asked when he thought HLAI would be created, Pat Hayes (a past president of [AAAI](http://en.wikipedia.org/wiki/Association_for_the_Advancement_of_Artificial_Intelligence)) [replied](http://lesswrong.com/r/discussion/lw/999/qa_with_experts_on_risks_from_ai_1/): “I do not consider this question to be answerable, as I do not accept this (common) notion of ‘human-level intelligence’ as meaningful.” Asked the same question, AI scientist [William Uther](http://www.cse.unsw.edu.au/~willu/) [replied](http://lesswrong.com/r/discussion/lw/9cm/qa_with_experts_on_risks_from_ai_3/): “You ask a lot about ‘human level AGI’. I do not think this term is well defined,” while AI scientist [Alan Bundy](http://homepages.inf.ed.ac.uk/bundy/) [replied](http://lesswrong.com/r/discussion/lw/9cm/qa_with_experts_on_risks_from_ai_3/): “I don’t think the concept of ‘human-level machine intelligence’ is well formed.”\n3. [Sawyer (1943)](http://www.amazon.com/Mathematicians-Delight-Dover-Science-Books/dp/0486462404/) gives another example: “Mathematicians first used the sign √-1, without in the least knowing what it could mean, because it shortened work and led to correct results. People naturally tried to find out why this happened and what √-1 really meant. After two hundreds years they succeeded.” [Dennett (2013)](http://www.amazon.com/Intuition-Pumps-Other-Tools-Thinking/dp/0393082067/) makes a related comment: “*Define your terms, sir!* No, I won’t. That would be premature… My [approach] is an instance of *nibbling* on a tough problem instead of trying to eat (and digest) the whole thing from the outset… In *Elbow Room*, I compared my method to the sculptor’s method of roughing out the form in a block of marble, approaching the final surfaces cautiously, modestly, working by successive approximation.”\n4. With self-driving cars, researchers did use many precise external performance measures (e.g. 
accident rates, speed, portion of the time they could run unassisted, frequency of getting stuck) to evaluate progress, as well as internal performance metrics (speed of search, bounded loss guarantees, etc.). Researchers could see that these bits of progress were in the right direction, even if their relative contribution long-term was unclear. And so it is with AI in general. AI researchers use many precise external and internal performance measures to evaluate progress, but it is difficult to know the relative contribution of these bits of progress toward the final goal of AGI.\n5. Heck, we’ve had pornography for millennia and *still* haven’t been able to define it precisely. Encyclopedia entries for “pornography” [often](http://books.google.com/books?id=xOJvVJv2YlAC&pg=PA636&lpg=PA636&source=bl&ots=N3PI1PbmK1&sig=SIhfXbej_Z_HRDAxxnGZWftTZhg&hl=en&sa=X&ei=9o-NUfTEAqafiALa34CoDQ&ved=0CC8Q6AEwADgK#v=onepage&q&f=false) [simply](http://books.google.com/books?id=TN-qpt7kAK4C&pg=PA336&lpg=PA336&source=bl&ots=CspWxfT67z&sig=_ymQ5x3lvNnMF0DsaSZkJzsSaqM&hl=en&sa=X&ei=9o-NUfTEAqafiALa34CoDQ&ved=0CDMQ6AEwAjgK#v=onepage&q&f=false) quote Justice Potter Stewart: “I shall not today attempt further to define the kinds of material I understand to be [pornography]… but I know it when I see it.”\n6. We might care about whether machines are conscious in addition to being intelligent, but we already have a convenient term for that: *consciousness*. In particular, we might care about machine consciousness because the slow, plodding invention of AI may involve the creation and destruction of millions of partially-conscious near-AIs that are switched on, suffer for a while, and are then switched off — all while being unable to signal to us that they are suffering. This is especially likely if we remain unclear about the nature of consciousness for several more decades, and thus have no principled way (e.g. 
via [nonperson predicates](http://lesswrong.com/lw/x4/nonperson_predicates/)) to create intelligent machines that we *know* are not conscious (and are thus incapable of suffering). One of the first people to make this point clearly was [Metzinger (2003)](http://www.amazon.com/Being-No-One-Self-Model-Subjectivity/dp/0262633086/), p. 621: “What would you say if someone came along and said, ‘Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development—we urgently need some funding for this important and innovative kind of research!’ You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby—no representatives in any ethics committee.” Metzinger repeats the point in [Metzinger (2010)](http://www.amazon.com/Ego-Tunnel-Science-Mind-Myth/dp/0465020690/), starting on page 194.\n7. Admittedly, this is still pretty vague. One step toward precision would be to propose a definition of intelligence as optimization power for some canonical distribution of possible preferences, over some canonical distribution of environments, with a penalty for resource use. 
The canonical preferences and canonical environments could be weighted toward preferences and environments relevant to our concerns: we care more about whether AIs can do science than whether they can paint abstract art, and we care more about whether they can achieve their goals in our solar system than whether they can achieve their goals inside a black hole. Also see [Goertzel (2010)](http://agi-conf.org/2010/wp-content/uploads/2009/06/paper_14.pdf)‘s “efficient pragmatic general intelligence.”\n8. [Hibbard (2011)](http://www.ssec.wisc.edu/~billh/g/hibbard_agi11a.pdf); [Legg & Veness (2011)](http://arxiv.org/pdf/1109.5951.pdf); [Wang (2008)](http://www.cis.temple.edu/~wangp/Publication/AI_Definitions.pdf); [Schaul et al. (2011)](http://arxiv.org/pdf/1109.1314.pdf); [Dowe & Hernandez-Orallo (2012)](http://www.csse.monash.edu.au/~dld/Publications/2012/Dowe%2BHernandez-Orallo_2012_IQ_tests_are_not_for_machines_comma_yet_IN_PRESS.pdf); [Goertzel (2010)](http://agi-conf.org/2010/wp-content/uploads/2009/06/paper_14.pdf); [Adams et al. (2011)](http://www.cse.buffalo.edu/faculty/shapiro/Papers/hlai.pdf).\n\nThe post [What is Intelligence?](https://intelligence.org/2013/06/19/what-is-intelligence-2/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-06-19T19:59:12Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "4f5c91994054c240ce5e373cd9da12a5", "title": "MIRI’s July 2013 Workshop", "url": "https://intelligence.org/2013/06/07/miris-july-2013-workshop/", "source": "miri", "source_type": "blog", "text": "[![Mihaly at April workshop](http://intelligence.org/wp-content/uploads/2013/06/Mihaly-at-April-workshop.jpg)](http://intelligence.org/get-involved/#workshop)\nFrom July 8-14, MIRI will host its **3rd Workshop on Logic, Probability, and Reflection**. 
The focus of this workshop will be the [Löbian obstacle to self-modifying systems](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/).\n\n\nParticipants confirmed so far include:\n\n\n* [Andrew Critch](http://acritch.com/) (just finished his math PhD at UC Berkeley, now working at [CFAR](http://rationality.org/))\n* [Abram Demski](https://plus.google.com/111568410659864255951) (USC)\n* [Benja Fallenstein](http://lesswrong.com/user/Benja/submitted/) (Bristol U)\n* [Marcello Herreshoff](http://www.linkedin.com/pub/marcello-herreshoff/0/8b4/51a) (Google)\n* Jonathan Lee (Cambridge)\n* [Will Sawin](http://www.ctpost.com/news/article/Wisdom-beyond-his-years-1390299.php) (Princeton)\n* [Qiaochu Yuan](http://math.berkeley.edu/~qchu/) (UC Berkeley)\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (MIRI)\n\n\nIf you have a strong mathematics background and might like to attend this workshop, it’s not too late to [apply](http://intelligence.org/get-involved/#workshop)! And even if *this* workshop doesn’t fit your schedule, please **do apply**, so that we can notify you of other workshops (long before they are announced publicly).\n\n\nInformation on past workshops:\n\n\n* Our **1st Workshop** (Nov. 11-18, 2012; 4 participants) resulted in Christiano’s [probabilistic logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/), an attack on the [Löbian obstacle for self-modifying systems](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/).\n* Our **2nd Workshop** (Apr. 3-24, 2013; 12 participants coming in and out) resulted in (1) some as-yet unpublished progress on Christiano’s probabilistic logic, (2) some progress on program equilibrium recorded in [LaVictoire et al.
(2013)](https://intelligence.org/files/RobustCooperation.pdf), and some progress on the Löbian obstacle resulting in [Yudkowsky & Herreshoff (2013)](https://intelligence.org/files/TilingAgents.pdf).\n\n\nThe post [MIRI’s July 2013 Workshop](https://intelligence.org/2013/06/07/miris-july-2013-workshop/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-06-07T22:15:07Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "8f6daa3853e3f7cc7b4392ffc6db273d", "title": "New Research Page and Two New Articles", "url": "https://intelligence.org/2013/06/06/new-research-page-and-two-new-articles/", "source": "miri", "source_type": "blog", "text": "[![research page](http://intelligence.org/wp-content/uploads/2013/06/research-page.png)](https://intelligence.org/research/)\nOur new [Research](http://intelligence.org/research/) page has launched!\n\n\nOur previous research page was a simple list of articles, but the new page describes the purpose of our research, explains four categories of research to which we contribute, and highlights the papers we think are most important to read.\n\n\nWe’ve also released drafts of two new research articles.\n\n\n[Tiling Agents for Self-Modifying AI, and the Löbian Obstacle](https://intelligence.org/files/TilingAgents.pdf) (discuss it [here](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/)), by Yudkowsky and Herreshoff, explains one of the key open problems in MIRI’s research agenda:\n\n\n\n> We model self-modification in AI by introducing “tiling” agents whose decision systems will approve the construction of highly similar agents, creating a repeating pattern (including similarity of the offspring’s goals).
Constructing a formalism in the most straightforward way produces a Gödelian difficulty, the “Löbian obstacle.” By technical methods we demonstrate the possibility of avoiding this obstacle, but the underlying puzzles of rational coherence are thus only partially addressed. We extend the formalism to partially unknown deterministic environments, and show a very crude extension to probabilistic environments and expected utility; but the problem of finding a fundamental decision criterion for self-modifying probabilistic agents remains open.\n> \n> \n\n\n[Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic](https://intelligence.org/files/RobustCooperation.pdf) (discuss it [here](http://lesswrong.com/lw/hmw/robust_cooperation_in_the_prisoners_dilemma/)), by LaVictoire et al., explains some progress in program equilibrium made by MIRI research associate Patrick LaVictoire and several others during MIRI’s April 2013 workshop:\n\n\n\n> Rational agents defect on the one-shot prisoner’s dilemma even though mutual cooperation would yield higher utility for both agents. Moshe Tennenholtz showed that if each program is allowed to pass its playing strategy to all other players, some programs can then cooperate on the one-shot prisoner’s dilemma. Program equilibria is Tennenholtz’s term for Nash equilibria in a context where programs can pass their playing strategies to the other players.\n> \n> \n> One weakness of this approach so far has been that any two programs which make different choices cannot “recognize” each other for mutual cooperation, even if they are functionally identical.
In this paper, provability logic is used to enable a more flexible and secure form of mutual cooperation.\n> \n> \n\n\nParticipants of MIRI’s April workshop also made progress on [Christiano’s probabilistic logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/) (an attack on the Löbian obstacle), but that work is not yet ready to be released.\n\n\nWe’ve also revamped the [Get Involved](http://intelligence.org/get-involved/) page, which now includes an [application form](https://intelligence.org/get-involved/#workshop) for forthcoming workshops. If you *might* like to work with MIRI on some of its open research problems sometime in the next 18 months, [please apply](https://intelligence.org/get-involved/#workshop)! Likewise, if you know someone who might enjoy attending such a workshop, please encourage *them* to apply.\n\n\nThe post [New Research Page and Two New Articles](https://intelligence.org/2013/06/06/new-research-page-and-two-new-articles/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-06-07T06:54:02Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "d4384faff83e065943369d17119977a8", "title": "Friendly AI Research as Effective Altruism", "url": "https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/", "source": "miri", "source_type": "blog", "text": "MIRI was founded in 2000 on the premise that creating[1](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_0_10240 \"In this post, I talk about the value of humanity in general creating Friendly AI, though MIRI co-founder Eliezer Yudkowsky usually talks about MIRI in particular — or at least, a functional equivalent — creating Friendly AI. This is because I am not as confident as Yudkowsky that it is best for MIRI to attempt to build Friendly AI. 
When updating MIRI’s bylaws in early 2013, Yudkowsky and I came to a compromise on the language of MIRI’s mission statement, which now reads: “[MIRI] exists to ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of [MIRI] is to: (a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; (b) raise awareness of this important issue; (c) advise researchers, leaders and laypeople around the world; and (d) as necessary, implement a smarter-than-human intelligence with humane, stable goals” (emphasis added). My own hope is that it will not be necessary for MIRI (or a functional equivalent) to attempt to build Friendly AI itself. But of course I must remain open to the possibility that this will be the wisest course of action as the first creation of AI draws nearer. There is also the question of capability: few people think that a non-profit research organization has much chance of being the first to build AI. I worry, however, that the world’s elites will not find it fashionable to take this problem seriously until the creation of AI is only a few decades away, at which time it will be especially difficult to develop the mathematics of Friendly AI in time, and humanity will be forced to take a gamble on its very survival with powerful AIs we have little reason to trust.\") Friendly AI might be a particularly efficient way to do as much good as possible.\n\n\nSome developments since then include:\n\n\n* The field of “[effective altruism](http://www.ted.com/talks/peter_singer_the_why_and_how_of_effective_altruism.html)” — trying not just to do good but to do *as much good as possible*[2](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_1_10240 \"One might think of effective altruism as a straightforward application of decision theory to the subject of philanthropy. 
Philanthropic agents of all kinds (individuals, groups, foundations, etc.) ask themselves: “How can we choose philanthropic acts (e.g. donations) which (in expectation) will do as much good as possible, given what we care about?” The consensus recommendation for all kinds of choices under uncertainty, including philanthropic choices, is to maximize expected utility (Chater & Oaksford 2012; Peterson 2004; Stein 1996; Schmidt 1998:19). Different philanthropic agents value different things, but decision theory suggests that each of them can get the most of what they want if they each maximize their expected utility. Choices which maximize expected utility are in this sense “optimal,” and thus another term for effective altruism is “optimal philanthropy.” Note that effective altruism in this sense is not too dissimilar from earlier approaches to philanthropy, including high-impact philanthropy (making “the biggest difference possible, given the amount of capital invested“), strategic philanthropy, effective philanthropy, and wise philanthropy. Note also that effective altruism does not say that a philanthropic agent should specify complete utility and probability functions over outcomes and then compute the philanthropic act with the highest expected utility — that is impractical for bounded agents. We must keep in mind the distinction between normative, descriptive, and prescriptive models of decision-making (Baron 2007): “normative models tell us how to evaluate… decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. 
Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model.” The prescriptive question — about what bounded philanthropic agents should do to maximize expected utility with their philanthropic choices — tends to be extremely complicated, and is the subject of most of the research performed by the effective altruism community.\") — has seen more publicity and better research than ever before, in particular through the work of [GiveWell](http://www.givewell.org/), the [Center for Effective Altruism](http://centreforeffectivealtruism.org/), the philosopher [Peter Singer](http://en.wikipedia.org/wiki/Peter_Singer), and the community at [Less Wrong](http://lesswrong.com/).[3](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_2_10240 \"See, for example: Efficient Charity, Efficient Charity: Do Unto Others, Politics as Charity, Heuristics and Biases in Charity, Public Choice and the Altruist’s Burden, On Charities and Linear Utility, Optimal Philanthropy for Human Beings, Purchase Fuzzies and Utilons Separately, Money: The Unit of Caring, Optimizing Fuzzies and Utilons: The Altruism Chip Jar, Efficient Philanthropy: Local vs. Global Approaches, The Effectiveness of Developing World Aid, Against Cryonics & For Cost-Effective Charity, Bayesian Adjustment Does Not Defeat Existential Risk Charity, How to Save the World, and What is Optimal Philanthropy?\")\n* In his recent [PhD dissertation](https://sites.google.com/site/nbeckstead/research/Beckstead%2C%20Nick--On%20the%20Overwhelming%20Importance%20of%20Shaping%20the%20Far%20Future.pdf?attredirects=0&d=1), [Nick Beckstead](https://sites.google.com/site/nbeckstead/) has clarified the assumptions behind the claim that shaping the far future (e.g. 
via Friendly AI) is overwhelmingly important.\n* Due to research performed by MIRI, the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) (FHI), and others, our strategic situation with regard to machine superintelligence is more clearly understood, and FHI’s [Nick Bostrom](http://nickbostrom.com/) has organized much of this work in a [forthcoming book](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl).[4](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_3_10240 \"I believe Beckstead and Bostrom have done the research community an enormous service in creating a framework, a shared language, for discussing trajectory changes, existential risks, and machine superintelligence. When discussing these topics with my colleagues, it has often been the case that the first hour of conversation is spent merely trying to understand what the other person is saying — how they are using the terms and concepts they employ. Beckstead’s and Bostrom’s recent work should enable clearer and more efficient communication between researchers, and therefore greater research productivity. Though I am not aware of any controlled, experimental studies on the effect of shared language on research productivity, a shared language is widely considered to be of great benefit for any field of research, and I shall provide a few examples of this claim which appear in print. Fuzzi et al. (2006): “The use of inconsistent terms can easily lead to misunderstandings and confusion in the communication between specialists from different [disciplines] of atmospheric and climate research, and may thus potentially inhibit scientific progress.” Hinkel (2008): “Technical languages enable their users, e.g. members of a scientific discipline, to communicate efficiently about a domain of interest.” Madin et al. 
(2007): “terminological ambiguity slows scientific progress, leads to redundant research efforts, and ultimately impedes advances towards a unified foundation for ecological science.”\")\n* MIRI’s Eliezer Yudkowsky has [begun](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/) to describe in more detail which open research problems constitute “Friendly AI research,” in his view.\n\n\nGiven these developments, we are in a better position than ever before to assess the value of Friendly AI research as effective altruism.\n\n\nStill, this is a difficult question. It is challenging enough to evaluate the cost-effectiveness of [anti-malaria nets](http://blog.givewell.org/2012/10/18/revisiting-the-case-for-insecticide-treated-nets-itns/) or [direct cash transfers](http://www.givewell.org/international/top-charities/give-directly). Evaluating the cost-effectiveness of attempts to shape the far future (e.g. via Friendly AI) is even more difficult than that. Hence, **this short post sketches an argument that can be given in favor of Friendly AI research as effective altruism, to enable future discussion**, and is **not intended as a thorough analysis.**\n\n\n\n### An argument for Friendly AI research as effective altruism\n\n\n[Beckstead (2013)](https://sites.google.com/site/nbeckstead/research/Beckstead%2C%20Nick--On%20the%20Overwhelming%20Importance%20of%20Shaping%20the%20Far%20Future.pdf?attredirects=0&d=1) argues[5](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_4_10240 \"In addition to Beckstead’s thesis, see also A Proposed Adjustment to the Astronomical Waste Argument.\") for the following thesis:\n\n\n\n> From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.\n> \n> \n\n\nWhy think this? 
Astronomical facts suggest that humanity (including “post-humanity”) could survive for billions or trillions of years ([Adams 2008](http://books.google.com/books?id=X5jdMyJKNL4C&pg=PT77&lpg=PT77#v=onepage&q&f=false)), and could thus produce enormous amounts of good.[6](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_5_10240 \"Beckstead doesn’t mention this, but I would like to point out that moral realism is not required for Beckstead’s arguments to go through. In fact, I generally accept Beckstead’s arguments even though most philosophers would not consider me a moral realist, though to some degree that is a semantic debate (Muehlhauser 2011; Joyce 2012). If you’re a moral realist and you believe your intuitive moral judgments are data about what is morally true, then Beckstead’s arguments (if successful) have something to say about what is morally true, and about what you should do if you want to act in morally good ways. If you’re a moral anti-realist but you think your intuitive judgments are data about what you value — or about what you would value if you had more time to think about your values and how to resolve the contradictions among them — then Beckstead’s arguments (if successful) have something to say about what you value, and about what you should do if you want to help achieve what you value.\") But the value produced by our future depends on our *development trajectory*. If humanity destroys itself with powerful technologies in the 21st century, then nearly all that future value is lost. And if we survive but develop along a trajectory dominated by conflict and poor decisions, then the future could be much less good than if our trajectory is dominated by altruism and wisdom. 
Moreover, some of our actions today can have “ripple effects”[7](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_6_10240 \"Karnofsky calls these “flow-through effects.”\") which determine the trajectory of human development, because many outcomes are [path-dependent](http://en.wikipedia.org/wiki/Path_dependence). Hence, actions which directly or indirectly precipitate particular trajectory changes (e.g. mitigating existential risks) can have vastly more value (in expectation) than actions with merely proximate benefits (e.g. saving the lives of 20 wild animals). Beckstead calls this the “rough future-shaping argument.”\n\n\nIf we accept the normative assumptions lurking behind this argument (e.g. [risk neutrality](http://en.wikipedia.org/wiki/Risk_neutral); see Beckstead’s dissertation), then the far future is enormously valuable (if it goes at least as well on average as the past century), and existential risk reduction is much more important than producing proximate benefits (e.g. 
global health, poverty reduction) or speeding up development (which could in fact increase existential risks, and even if it doesn’t, has lower expected value than existential risk reduction).\n\n\nHowever, Beckstead’s conclusion is not necessarily that existential risk reduction should be our global priority, because\n\n\n\n> there may be other ways to have a large, persistent effect on the far future without reducing existential risk… Some persistent changes in values and social norms could make the future [some fraction] better or worse… Sure, succeeding in preventing an existential catastrophe would be better than making a smaller trajectory change, but creating a small positive trajectory change may be significantly easier.\n> \n> \n\n\nInstead, Beckstead’s arguments suggest that “what matters most for shaping the far future is producing positive trajectory changes and avoiding negative ones.” Existential risk reduction is one important kind of positive trajectory change that could turn out to be the intervention with the highest expected value.\n\n\nOne important clarification is in order. It could turn out to be that working toward proximate benefits or development acceleration does more good than “direct” efforts for trajectory change, if working toward proximate benefits or development acceleration turns out to have major ripple effects which produce important trajectory change. For example, perhaps an “ordinary altruistic effort” like solving India’s [iodine deficiency problem](http://www.dnaindia.com/india/1593586/report-71-million-hit-by-iodine-deficiency-in-india) would cause there to be thousands of “extra” world-class elite thinkers two generations from now, which could increase humanity’s chances of intelligently navigating the crucial 21st century and spreading to the stars. 
(I don’t think this is likely; I suggest it merely for illustration.)\n\n\nFor the sake of argument, suppose you agree with Beckstead’s core thesis that “what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop.” Suppose you also think, as I do, that machine superintelligence is probably inevitable.[8](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_7_10240 \"See Bostrom (forthcoming) for an extended argument. Perhaps the most likely defeater for machine superintelligence is that global catastrophe may halt scientific progress before human-level AI is created.\")\n\n\nIn that case, you might think that Friendly AI research is a uniquely foreseeable and impactful way to shape the far future in an enormously positive way, [because](http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/91zq) “our effects on the far future must almost entirely pass through our effects on the development of machine superintelligence.” All other developing trends might be overridden by the overwhelming effectiveness of machine superintelligence — and specifically, by the values that were (explicitly or implicitly, directly or indirectly) written into the machine superintelligence(s).\n\n\nIf that’s right, our situation is a bit like sending an interstellar probe to [colonize distant solar systems](http://commonsenseatheism.com/wp-content/uploads/2013/05/Armstrong-Sandberg-Eternity-in-six-hours-intergalactic-spreading-of-intelligent-life-and-sharpening-the-Fermi-paradox.pdf) before they recede beyond the [cosmological horizon](http://en.wikipedia.org/wiki/Observable_universe#Particle_horizon) and can thus never be reached from Earth again due to [the expansion of the universe](https://en.wikipedia.org/wiki/Metric_expansion_of_space). Anything on Earth that doesn’t affect the content of the probe will have no impact on those solar systems. 
(See also [this comment](http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/921r).)\n\n\n \n\n\n### Potential defeaters\n\n\nThe rough argument above — in favor of Friendly AI research as an efficient form of effective altruism — deserves to be “fleshed out” in more detail.[9](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_8_10240 \"Beckstead, in personal communication, suggested (but didn’t necessarily endorse) the following formalization of the rough argument sketched in the main text of the blog post: “(1) To a first approximation, the future of humanity is all that matters. (2) To a much greater extent than anything else, the future of humanity is highly sensitive to how machine intelligence unfolds. (3) Therefore, there is a very strong presumption in favor of working on any project which makes machine intelligence unfold in a better way. (4) FAI research is the most promising route to making machine intelligence unfold in a better way. (5) Therefore, there is a very strong presumption in favor of doing FAI research.” Beckstead (2013) examines the case for (1). Bostrom (forthcoming), in large part, examines the case for (2). Premise (3) informally follows from (1) and (2), and the conclusion (5) informally follows from (3) and (4). Premise (4) appears to me to be the most dubious part of the argument, and the least explored in the extant literature.\")\n\n\nPotential defeaters should also be examined:\n\n\n* Perhaps we ought to reject one or more of the normative assumptions behind Beckstead’s rough future-shaping argument.\n* Perhaps it’s not true that “our effects on the far future must almost entirely pass through our effects on the development of machine superintelligence.”\n* Perhaps Friendly AI research is not (today) a particularly efficient way to positively affect the development of machine superintelligence. 
Competing interventions may include: (1) [AI risk strategy research](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl), (2) improving [technological forecasting](http://intelligence.org/2013/05/15/when-will-ai-be-created/), (3) [improving science in general](http://www.vannevargroup.org/), (4) [improving and spreading effective altruism and rationality](http://rationalaltruist.com/2013/06/03/my-outlook/), and (5) many others.\n\n\nIn future blog posts, members of the effective altruist community (including myself) will expand on the original argument and examine potential defeaters.\n\n\n#### \n\n\n#### Acknowledgements\n\n\nMy thanks to those who provided feedback on this post: Carl Shulman, Nick Beckstead, Jonah Sinick, and Eliezer Yudkowsky.\n\n\n\n\n---\n\n1. In this post, I talk about the value of *humanity in general* creating Friendly AI, though MIRI co-founder Eliezer Yudkowsky usually talks about *MIRI in particular* — or at least, a functional equivalent — creating Friendly AI. This is because I am not as confident as Yudkowsky that it is best for MIRI to attempt to build Friendly AI. When updating MIRI’s bylaws in early 2013, Yudkowsky and I came to a compromise on the language of MIRI’s mission statement, which now reads: “[MIRI] exists to ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of [MIRI] is to: (a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; (b) raise awareness of this important issue; (c) advise researchers, leaders and laypeople around the world; and (d) *as necessary*, implement a smarter-than-human intelligence with humane, stable goals” (emphasis added). My own hope is that it will not be necessary for MIRI (or a functional equivalent) to attempt to build Friendly AI itself. 
But of course I must remain open to the possibility that this will be the wisest course of action as the first creation of AI [draws nearer](http://intelligence.org/2013/05/15/when-will-ai-be-created/). There is also the question of capability: few people think that a non-profit research organization has much chance of being the first to build AI. I worry, however, that the world’s elites will not find it fashionable to take this problem seriously until the creation of AI is only a few decades away, at which time it will be especially difficult to develop the mathematics of Friendly AI in time, and humanity will be forced to take a gamble on its very survival with powerful AIs we have little reason to trust.\n2. One might think of effective altruism as a straightforward application of [decision theory](http://lesswrong.com/lw/gu1/decision_theory_faq/) to the subject of philanthropy. Philanthropic agents of all kinds (individuals, groups, foundations, etc.) ask themselves: “How can we choose philanthropic acts (e.g. donations) which (in expectation) will do as much good as possible, given what we care about?” The consensus recommendation for *all* kinds of choices under uncertainty, including philanthropic choices, is to maximize expected utility ([Chater & Oaksford 2012](http://books.google.com/books?id=S1-K4AT3zXYC&lpg=PA11&ots=wjavib87qF&dq=normative%20systems%3A%20logic%2C%20probability%2C%20and%20rational%20choice&lr&pg=PA11#v=onepage&q&f=false); [Peterson 2004](http://commonsenseatheism.com/wp-content/uploads/2012/05/Peterson-From-Outcomes-to-Acts-a-non-standard-axiomatization-of-the-expected-utility-principle.pdf); [Stein 1996](http://www.amazon.com/Without-Good-Reason-Rationality-Philosophy/dp/0198235747/); [Schmidt 1998](http://www.amazon.com/Axiomatic-Utility-Theory-under-Risk/dp/3540643192):19). 
Different philanthropic agents value different things, but decision theory suggests that each of them can get the most of what they want if they each maximize their expected utility. Choices which maximize expected utility are in this sense “optimal,” and thus another term for effective altruism is “[optimal philanthropy](http://lesswrong.com/lw/da4/what_is_optimal_philanthropy/).” Note that effective altruism in this sense is not too dissimilar from earlier approaches to philanthropy, including [high-impact philanthropy](http://en.wikipedia.org/wiki/High_impact_philanthropy) (making “[the biggest difference possible, given the amount of capital invested](http://www.impact.upenn.edu/faq/)“), [strategic philanthropy](http://www.amphilsoc.org/sites/default/files/490202.pdf), [effective philanthropy](http://www.hudson.org/files/pdf_upload/Stanley_Katz_APS_Proceedings_Piece_June_2005.pdf), and [wise philanthropy](http://wisephilanthropy.com/). Note also that effective altruism does not say that a philanthropic agent should specify complete utility and probability functions over outcomes and then compute the philanthropic act with the highest expected utility — that is impractical for bounded agents. We must keep in mind the distinction between normative, descriptive, and prescriptive models of decision-making (Baron 2007): “normative models tell us how to evaluate… decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model.” The *prescriptive* question — about what bounded philanthropic agents should do to maximize expected utility with their philanthropic choices — tends to be extremely complicated, and is the subject of most of the research performed by the effective altruism community.\n3. 
See, for example: [Efficient Charity](http://lesswrong.com/lw/37f/efficient_charity/), [Efficient Charity: Do Unto Others](http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/), [Politics as Charity](http://lesswrong.com/lw/2qq/politics_as_charity/), [Heuristics and Biases in Charity](http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/), [Public Choice and the Altruist’s Burden](http://lesswrong.com/lw/2hv/public_choice_and_the_altruists_burden/), [On Charities and Linear Utility](http://lesswrong.com/lw/44c/on_charities_and_linear_utility/), [Optimal Philanthropy for Human Beings](http://lesswrong.com/lw/6py/optimal_philanthropy_for_human_beings/), [Purchase Fuzzies and Utilons Separately](http://lesswrong.com/lw/6z/purchase_fuzzies_and_utilons_separately/), [Money: The Unit of Caring](http://lesswrong.com/lw/65/money_the_unit_of_caring/), [Optimizing Fuzzies and Utilons: The Altruism Chip Jar](http://lesswrong.com/lw/3kl/optimizing_fuzzies_and_utilons_the_altruism_chip/), [Efficient Philanthropy: Local vs. Global Approaches](http://lesswrong.com/lw/684/efficient_philanthropy_local_vs_global_approaches/), [The Effectiveness of Developing World Aid](http://lesswrong.com/lw/2pq/the_effectiveness_of_developing_world_aid/), [Against Cryonics & For Cost-Effective Charity](http://lesswrong.com/lw/2kh/against_cryonics_for_costeffective_charity/), [Bayesian Adjustment Does Not Defeat Existential Risk Charity](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/), [How to Save the World](http://lesswrong.com/lw/373/how_to_save_the_world/), and [What is Optimal Philanthropy?](http://lesswrong.com/lw/da4/what_is_optimal_philanthropy/)\n4. I believe Beckstead and Bostrom have done the research community an enormous service in creating a *framework*, a *shared language*, for discussing trajectory changes, existential risks, and machine superintelligence. 
When discussing these topics with my colleagues, it has often been the case that the first hour of conversation is spent merely trying to understand what the other person is saying — how they are using the terms and concepts they employ. Beckstead’s and Bostrom’s recent work should enable clearer and more efficient communication between researchers, and therefore greater research productivity. Though I am not aware of any controlled, experimental studies on the effect of shared language on research productivity, a shared language is widely considered to be of great benefit for any field of research, and I shall provide a few examples of this claim which appear in print. [Fuzzi et al. (2006)](http://atmos-chem-phys.net/6/2017/2006/acp-6-2017-2006.pdf): “The use of inconsistent terms can easily lead to misunderstandings and confusion in the communication between specialists from different [disciplines] of atmospheric and climate research, and may thus potentially inhibit scientific progress.” [Hinkel (2008)](http://www.pik-potsdam.de/research/transdisciplinary-concepts-and-methods/projects/project-archive/favaia/pubs/hinkel-knowledge-integration.pdf): “Technical languages enable their users, e.g. members of a scientific discipline, to communicate efficiently about a domain of interest.” [Madin et al. (2007)](http://www.cs.cofc.edu/~bowring/classes/csis%20633/readings/madin-etal-tree-2008.pdf): “terminological ambiguity slows scientific progress, leads to redundant research efforts, and ultimately impedes advances towards a unified foundation for ecological science.”\n5. In addition to Beckstead’s thesis, see also [A Proposed Adjustment to the Astronomical Waste Argument](http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/).\n6. Beckstead doesn’t mention this, but I would like to point out that moral realism is not required for Beckstead’s arguments to go through. 
In fact, I generally accept Beckstead’s arguments even though most philosophers would not consider me a moral realist, though to some degree that is a semantic debate ([Muehlhauser 2011](http://lesswrong.com/lw/5u2/pluralistic_moral_reductionism/); [Joyce 2012](http://www.victoria.ac.nz/staff/richard_joyce/acrobat/joyce_metaethical.pluralism.pdf)). If you’re a moral realist and you believe your intuitive moral judgments are data about what is morally true, then Beckstead’s arguments (if successful) have something to say about what is morally true, and about what you should do if you want to act in morally good ways. If you’re a moral anti-realist but you think your intuitive judgments are data about what you value — or about what you would value if you had more time to think about your values and how to resolve the contradictions among them — then Beckstead’s arguments (if successful) have something to say about what you value, and about what you should do if you want to help achieve what you value.\n7. Karnofsky calls these “[flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/).”\n8. See [Bostrom (forthcoming)](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl) for an extended argument. Perhaps the most likely defeater for machine superintelligence is that global catastrophe may halt scientific progress before human-level AI is created.\n9. Beckstead, in personal communication, suggested (but didn’t necessarily endorse) the following formalization of the rough argument sketched in the main text of the blog post: “(1) To a first approximation, the future of humanity is all that matters. (2) To a much greater extent than anything else, the future of humanity is highly sensitive to how machine intelligence unfolds. (3) Therefore, there is a very strong presumption in favor of working on any project which makes machine intelligence unfold in a better way. 
(4) FAI research is the most promising route to making machine intelligence unfold in a better way. (5) Therefore, there is a very strong presumption in favor of doing FAI research.” [Beckstead (2013)](https://sites.google.com/site/nbeckstead/research/Beckstead%2C%20Nick--On%20the%20Overwhelming%20Importance%20of%20Shaping%20the%20Far%20Future.pdf?attredirects=0&d=1) examines the case for (1). [Bostrom (forthcoming)](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl), in large part, examines the case for (2). Premise (3) informally follows from (1) and (2), and the conclusion (5) informally follows from (3) and (4). Premise (4) appears to me to be the most dubious part of the argument, and the least explored in the extant literature.\n\nThe post [Friendly AI Research as Effective Altruism](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-06-06T00:55:23Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "d4a43b20d91af13873f703887d4c23d2", "title": "MIRI May Newsletter: Intelligence Explosion Microeconomics and Other Publications", "url": "https://intelligence.org/2013/05/30/miri-may-newsletter-intelligence-explosion-microeconomics-and-other-publications/", "source": "miri", "source_type": "blog", "text": "Greetings From the Executive Director\nDear friends,\nIt’s been a busy month!\nMostly, we’ve been busy *publishing* things. As you’ll see below, [*Singularity Hypotheses*](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/) has now been published, and it includes four chapters by MIRI researchers or research associates. 
We’ve also published two new technical reports — one on decision theory and another on intelligence explosion microeconomics — and several new blog posts analyzing various issues relating to the future of AI. Finally, we added [four older articles](http://intelligence.org/2013/05/24/four-articles-added-to-research-page/) to the research page, including [Ideal Advisor Theories and Personal CEV](https://intelligence.org/files/IdealAdvisorTheories.pdf) (2012).\nIn our [April newsletter](http://intelligence.org/2013/04/18/miri-april-newsletter-relaunch-celebration-and-a-new-math-result/) we spoke about our April 11th party in San Francisco, celebrating our relaunch as the Machine Intelligence Research Institute and our transition to mathematical research. Additional photos from that event are now available as a [Facebook photo album](https://www.facebook.com/media/set/?set=a.520720301298692.1073741826.170446419659417&type=3). We’ve also uploaded a video from the event, in which I spend 2 minutes explaining MIRI’s relaunch and some tentative results from the April workshop. After that, visiting researcher [Qiaochu Yuan](http://qchu.wordpress.com/) spends 4 minutes explaining one of MIRI’s core research questions: the Löbian obstacle to self-modifying systems.\nSome of the research from our April workshop will be published in June, so if you’d like to read about those results right away, you might like to [subscribe](http://intelligence.org/blog/) to [our blog](http://intelligence.org/blog).\nCheers!\nLuke Muehlhauser\nExecutive Director\n\n\n\nIntelligence Explosion Microeconomics\nOur largest new publication this month is Yudkowsky’s 91-page [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf) (discuss [here](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/)). 
In this article, Yudkowsky takes some initial steps toward tackling the key quantitative issue in the intelligence explosion, “reinvestable returns on cognitive investments”: what kind of returns can you get from an investment in cognition, can you reinvest it to make yourself even smarter, and does this process die out or blow up? The article can be thought of as a compact and hopefully more coherent successor to the [AI Foom Debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate) of 2009, featuring Yudkowsky and GMU economist [Robin Hanson](http://hanson.gmu.edu/).\nHere is the abstract:\nI. J. Good’s thesis of the ‘intelligence explosion’ is that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version of itself, and that this process could continue enough to vastly exceed human intelligence. As Sandberg (2010) correctly notes, there are several attempts to lay down return-on-investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with I. J. Good’s intelligence explosion thesis as such.\nI identify the key issue as *returns on cognitive reinvestment* — the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs. There are many phenomena in the world which have been argued as evidentially relevant to this question, from the observed course of hominid evolution, to Moore’s Law, to the competence over time of machine chess-playing systems, and many more. I go into some depth on the sort of debates which then arise on how to interpret such evidence. 
I propose that the next step forward in analyzing positions on the intelligence explosion would be to formalize return-on-investment curves, so that each stance can say formally which possible microfoundations they hold to be *falsified* by historical observations already made. More generally, I pose multiple open questions of ‘returns on cognitive reinvestment’ or ‘intelligence explosion microeconomics’. Although such questions have received little attention thus far, they seem highly relevant to policy choices affecting the outcomes for Earth-originating intelligent life.\nThe dedicated mailing list will be small and restricted to technical discussants: apply for it [here](https://docs.google.com/forms/d/1KElE2Zt_XQRqj8vWrc_rG89nrO4JtHWxIFldJ3IY_FQ/viewform).\n\n\nWhen Will AI Be Created?\nIn part, intelligence explosion microeconomics seeks to answer the question “How quickly will human-level AI self-improve to become superintelligent?” Another major question in AI forecasting is, of course: “When will we create human-level AI?”\nThis is another difficult question, and Luke Muehlhauser surveyed those difficulties in a recent (and quite detailed) blog post: [When Will AI Be Created?](http://intelligence.org/2013/05/15/when-will-ai-be-created/) He concludes:\nGiven these considerations, I think the most appropriate stance on the question “When will AI be created?” is something like this:\n“We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.”\nHow confident is “confident”? Let’s say 70%. 
That is, I think it is unreasonable to be 70% confident that AI is fewer than 30 years away, and I also think it’s unreasonable to be 70% confident that AI is more than 100 years away.\nThis statement admits my inability to predict AI, but it also constrains my probability distribution over “years of AI creation” quite a lot.\nI think the considerations above justify these constraints on my probability distribution, but I haven’t spelled out my reasoning in great detail. That would require more analysis than I can present here. But I hope I’ve at least summarized the basic considerations on this topic, and those with different probability distributions than mine can now build on my work here to try to justify them.\nMuehlhauser also explains four methods for reducing our uncertainty about AI timelines: *explicit quantification*, *leveraging aggregation*, *signposting the future*, and *decomposing the phenomena*.\nAs it turns out, **you can participate** in the first two methods for improving our AI forecasts by [signing up for GMU’s DAGGRE program](http://intelligence.org/2013/05/24/sign-up-for-daggre-to-improve-science-technology-forecasting/). Muehlhauser himself has signed up.\nMuehlhauser also wrote a 400-word piece on the difficulty of AI forecasting for *Quartz* magazine: [Robots will take our jobs, but it’s hard to say when](http://qz.com/85825/robots-may-take-our-jobs-but-its-hard-to-say-when/). 
Here’s a choice quote:\nWe’ve had the computing power of a honeybee’s brain for quite a while now, but that doesn’t mean we know how to build tiny robots that fend for themselves outside the lab, find their own sources of energy, and communicate with others to build their homes in the wild.\n\n\nSingularity Hypotheses Published\n\n[*Singularity Hypotheses: A Scientific and Philosophical Assessment*](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/) has now been published by Springer, in hardcover and ebook forms.\nThe book contains 20 chapters about the prospect of machine superintelligence, including 4 chapters by MIRI researchers and research associates:\n* [Intelligence Explosion: Evidence and Import](https://intelligence.org/files/IE-EI.pdf) by Luke Muehlhauser and Anna Salamon\n* [Intelligence Explosion and Machine Ethics](https://intelligence.org/files/IE-ME.pdf) by Luke Muehlhauser and Louie Helm\n* Friendly Artificial Intelligence by Eliezer Yudkowsky, a shortened version of [Yudkowsky (2008)](https://intelligence.org/files/AIPosNegFactor.pdf)\n* [Artificial General Intelligence and the Human Mental Model](https://intelligence.org/files/AGI-HMM.pdf) by Roman Yampolskiy and (MIRI research associate) Joshua Fox\n\n\nFor more details, see [the blog post](http://intelligence.org/2013/04/25/singularity-hypotheses-published/).\n\n\nTimeless Decision Theory Paper Published\nDuring his time as a research fellow for MIRI, Alex Altair wrote an article on [Timeless Decision Theory](http://wiki.lesswrong.com/wiki/Timeless_decision_theory) (TDT) that has now been published: “[A Comparison of Decision Algorithms on Newcomblike Problems](https://intelligence.org/files/Comparison.pdf).”\nAltair’s article is both more succinct and more precise in its formulation of TDT than Yudkowsky’s earlier paper “[Timeless Decision Theory](https://intelligence.org/files/TDT.pdf).” Thus, Altair’s paper should serve as a handy 
introduction to TDT for philosophers, computer scientists, and mathematicians, while Yudkowsky’s paper remains required reading for anyone interested in developing TDT further, for it covers more ground than Altair’s paper.\nFor a gentle introduction to the entire field of normative decision theory (including TDT), see Muehlhauser and Williamson’s [Decision Theory FAQ](http://lesswrong.com/lw/gu1/decision_theory_faq/).\n\n\nAGI Impacts Experts and Friendly AI Experts\nIn [AGI Impacts Experts and Friendly AI Experts](http://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/), Luke Muehlhauser explains the two types of experts MIRI hopes to cultivate.\n*AGI impacts experts* develop skills related to predicting technological development (e.g. building [computational models](https://intelligence.org/files/ChangingTheFrame.pdf) of AI development or reasoning about [intelligence explosion microeconomics](https://intelligence.org/files/IEM.pdf)), predicting AGI’s likely impacts on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI. For overviews, see [Bostrom & Yudkowsky (2013)](https://intelligence.org/files/EthicsofAI.pdf); [Muehlhauser & Salamon (2013)](https://intelligence.org/files/IE-EI.pdf).\n*Friendly AI experts* develop skills useful for the development of mathematical architectures that can enable AGIs to be *trustworthy* (or “human-friendly”). This work is carried out at [MIRI research workshops](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/) and in various publications, e.g. [Christiano et al. (2013)](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/); [Hibbard (2013)](http://arxiv.org/pdf/1111.3934v2.pdf). 
Note that the term “Friendly AI” was selected (in part) to avoid the suggestion that we understand the subject very well — a phrase like “Ethical AI” might sound like the kind of thing one can learn a lot about by looking it up in an encyclopedia, but our present understanding of trustworthy AI is too impoverished for that.\nFor more details on which skills these kinds of experts should develop, [read the blog post](http://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/).\n\n\nMIRI’s Mission in Five Theses and Two Lemmas\nYudkowsky sums up MIRI’s research mission in the blog post [Five theses, two lemmas, and a couple of strategic implications](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/). The five theses are:\n* Intelligence explosion\n* Orthogonality of intelligence and goals\n* Convergent instrumental goals\n* Complexity of value\n* Fragility of value\n\n\nAccording to Yudkowsky, these theses imply two important lemmas:\n* Indirect normativity\n* Large bounded extra difficulty of Friendliness\n\n\nIn turn, these two lemmas have two important strategic implications:\n1. We have a lot of work to do on things like indirect normativity and stable self-improvement. At this stage a lot of this work looks really foundational — that is, we can’t describe how to do these things using infinite computing power, let alone finite computing power.  We should get started on this work as early as possible, since basic research often takes a lot of time.\n2. There needs to be a Friendly AI project that has some sort of boost over competing projects which don’t live up to a (very) high standard of Friendly AI work — a project which can successfully build a stable-goal-system self-improving AI, before a less-well-funded project hacks together a much sloppier self-improving AI. 
Giant supercomputers may be less important to this than being able to bring together the smartest researchers… but the required advantage cannot be left up to chance. Leaving things to default means that projects less careful about self-modification would have an advantage greater than casual altruism is likely to overcome.\n\n\nFor more details on the theses and lemmas, [read the blog post](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) and its linked articles.\n\n\nOur Final Invention available for preorder\n[James Barrat](http://www.jamesbarrat.com/author/), a documentary filmmaker for National Geographic, Discovery, PBS, and other broadcasters, has written a wonderful new book about the intelligence explosion called *Our Final Invention: Artificial Intelligence and the End of the Human Era*. It will be released October 1st, and is [available for pre-order on Amazon](http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence/dp/0312622376/?tag=r601000000-20).\nHere are some blurbs from people who have read an advance copy:\n“A hard-hitting book about the most important topic of this century and possibly beyond — the issue of whether our species can survive. I wish it was science fiction but I know it’s not.”\n—Jaan Tallinn, co-founder of Skype\n“The compelling story of humanity’s most critical challenge. A Silent Spring for the 21st Century.”\n—Michael Vassar, former MIRI president\n“*Our Final Invention* is a thrilling detective story, and also the best book yet written on the most important problem of the 21st century.”\n—Luke Muehlhauser, MIRI executive director\n“An important and disturbing book.”\n—Huw Price, co-founder, Cambridge University Center for the Study of Existential Risk\n\n\nMIRI Needs Advisors\nMIRI currently has a few dozen volunteer advisors on a wide range of subjects, but we need more! 
We’re especially hoping for additional advisors in mathematical logic, theoretical computer science, artificial intelligence, economics, and game theory.\nIf you [**sign up**](http://intelligence.org/2013/05/15/advise-miri-with-your-domain-specific-expertise/), we will occasionally ask you questions, or send you early drafts of upcoming writings for feedback.\nWe don’t always want technical advice (“Well, you can do that with a relativized arithmetical hierarchy…”); often, we just want to understand how different groups of experts respond to our writing (“The tone of this paragraph rubs me the wrong way because…”).\nEven if you don’t have much time to help, [**please sign up**](http://intelligence.org/2013/05/15/advise-miri-with-your-domain-specific-expertise/)! We will of course respect your own limits on availability.\n\n\nFeatured Volunteer – Florian Blumm\nFlorian Blumm has been one of our most active translators. Florian translates materials from English into his native tongue, German. A software engineer by background, he is now on a traveling vacation in Bolivia, progressively extended by remote contract work, which he has found conducive to his volunteering for MIRI. 
After leaving a position as a Java engineer for a financial services company, he decided that he would rather contribute directly to a cause, and determined that there is nothing more important than mitigating existential risks from artificial intelligence.\nThanks, Florian!\nTo join Florian and dozens of other volunteers, visit [MIRIvolunteers.org](http://mirivolunteers.org/).\n\n\nThe post [MIRI May Newsletter: Intelligence Explosion Microeconomics and Other Publications](https://intelligence.org/2013/05/30/miri-may-newsletter-intelligence-explosion-microeconomics-and-other-publications/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-30T18:48:11Z", "authors": ["Jake"], "summaries": []} -{"id": "567e1dda28e60c632a0b096b84400240", "title": "New Transcript: Yudkowsky and Aaronson", "url": "https://intelligence.org/2013/05/29/new-transcript-yudkowsky-and-aaronson/", "source": "miri", "source_type": "blog", "text": "[![Yudkowsky-Aaronson](http://intelligence.org/wp-content/uploads/2013/05/Yudkowsky-Aaronson.png)](https://docs.google.com/document/d/1JIqzTGNvdLukR0Ce5T2eiv_vxtPO53N3KGCbxVvREtU/pub)\nIn [When Will AI Be Created?](http://intelligence.org/2013/05/15/when-will-ai-be-created/), I referred to a [bloggingheads.tv conversation](http://bloggingheads.tv/videos/2220) between Eliezer Yudkowsky and Scott Aaronson. 
A transcript of that dialogue is [now available](https://docs.google.com/document/d/1JIqzTGNvdLukR0Ce5T2eiv_vxtPO53N3KGCbxVvREtU/pub), thanks to MIRI volunteers Ethan Dickinson, Daniel Kokotajlo, and Rick Schwall.\n\n\nSee also [the transcript](http://intelligence.org/2013/01/09/new-transcript-eliezer-yudkowsky-and-massimo-pigliucci-on-the-singularity/) for a [bloggingheads.tv conversation](http://bloggingheads.tv/videos/2561) between Eliezer Yudkowsky and Massimo Pigliucci.\n\n\nTo join these volunteers in assisting our cause, visit [MIRIvolunteers.org](http://mirivolunteers.org/)!\n\n\nThe post [New Transcript: Yudkowsky and Aaronson](https://intelligence.org/2013/05/29/new-transcript-yudkowsky-and-aaronson/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-30T03:21:29Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "3b37bc1eb1e0cf7344e6dd46c8c02669", "title": "Sign up for DAGGRE to improve science & technology forecasting", "url": "https://intelligence.org/2013/05/24/sign-up-for-daggre-to-improve-science-technology-forecasting/", "source": "miri", "source_type": "blog", "text": "In [When Will AI Be Created?](http://intelligence.org/2013/05/15/when-will-ai-be-created/), I named four methods that might improve our forecasts of AI and other important technologies. 
Two of these methods were *explicit quantification* and *leveraging aggregation*, as exemplified by IARPA’s [ACE program](http://www.iarpa.gov/Programs/ia/ACE/ace.html), which aims to “dramatically enhance the accuracy, precision, and timeliness of… forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many… analysts.”\n\n\nGMU’s [DAGGRE program](http://www.daggre.org/info/), one of five teams participating in ACE, recently [announced](http://blog.daggre.org/2013/05/24/3054/) a transition from geopolitical forecasting to science & technology forecasting:\n\n\n\n> DAGGRE will continue, but it will transition from geo-political forecasting to science and technology (S&T) forecasting to better use its combinatorial capabilities. We will have a brand new shiny, friendly and informative interface co-designed by Inkling Markets, opportunities for you to provide your own forecasting questions and more!\n> \n> \n> Another exciting development is that our S&T forecasting prediction market will be open to everyone in the world who is at least eighteen years of age. 
We’re going global!\n> \n> \n\n\nIf you want to help improve humanity’s ability to forecast important technological developments like AI, please register for DAGGRE’s new S&T prediction website [here](http://signup.daggre.org/).\n\n\nI did.\n\n\nThe post [Sign up for DAGGRE to improve science & technology forecasting](https://intelligence.org/2013/05/24/sign-up-for-daggre-to-improve-science-technology-forecasting/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-25T01:24:32Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "46d60c940476abf4bdd2e418f1d7159e", "title": "Four Articles Added to Research Page", "url": "https://intelligence.org/2013/05/24/four-articles-added-to-research-page/", "source": "miri", "source_type": "blog", "text": "Four older articles have been added to our [research page](http://intelligence.org/research/).\n\n\nThe first is the early draft of Christiano et al.’s “[Definability of ‘Truth’ in Probabilistic Logic](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf)” previously discussed [here](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/) and [here](http://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/). 
The draft was last updated on April 2, 2013.\n\n\nThe second paper is a cleaned-up version of an article originally published [in December 2012](http://lesswrong.com/lw/g35/ideal_advisor_theories_and_personal_cev/) by Luke Muehlhauser and Chris Williamson to Less Wrong: “[Ideal Advisor Theories and Personal CEV](https://intelligence.org/files/IdealAdvisorTheories.pdf).”\n\n\nThe third and fourth papers were originally published by Bill Hibbard in the [AGI 2012 Conference Proceedings](http://www.amazon.com/Artificial-General-Intelligence-International-Proceedings/dp/3642355056/): “[Avoiding Unintended AI Behaviors](https://intelligence.org/files/UnintendedBehaviors.pdf)” and “[Decision Support for Safe AI Design](https://intelligence.org/files/DecisionSupport.pdf).” Hibbard wrote these articles before he became a MIRI research associate, but he gave us permission to include them on our research page because (1) he became a MIRI research associate during the AGI-12 conference at which the articles were published, (2) the articles were partly inspired by a [public dialogue](http://lesswrong.com/lw/di6/muehlhauserhibbard_dialogue_on_agi/) with Luke Muehlhauser, and (3) the articles build on MIRI’s paper “[Intelligence Explosion and Machine Ethics](https://intelligence.org/files/IE-ME.pdf).”\n\n\nAs mentioned in our [December 2012 newsletter](http://intelligence.org/2012/12/19/december-2012-newsletter/), “Avoiding Unintended AI Behaviors” was awarded MIRI’s $1000 Turing Prize for Best AGI Safety Paper. 
The prize was awarded in honor of Alan Turing, who not only discovered some of the key ideas of machine intelligence, but also grasped its importance, writing that “…it seems probable that once [human-level machine thinking] has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”\n\n\nThe post [Four Articles Added to Research Page](https://intelligence.org/2013/05/24/four-articles-added-to-research-page/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-24T17:44:17Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "e8f8256a2c5f1aac5a913bce81e1c446", "title": "When Will AI Be Created?", "url": "https://intelligence.org/2013/05/15/when-will-ai-be-created/", "source": "miri", "source_type": "blog", "text": "[Strong AI](http://en.wikipedia.org/wiki/Strong_AI) appears to be the topic of the week. Kevin Drum at *Mother Jones* [thinks](http://www.motherjones.com/media/2013/05/robots-artificial-intelligence-jobs-automation) AIs will be as smart as humans by 2040. [Karl Smith](http://www.forbes.com/sites/modeledbehavior/2013/05/13/inequality-in-the-robot-future/) at *Forbes* and “[M.S.](http://www.economist.com/blogs/democracyinamerica/2013/05/robot-threat)” at *The Economist* seem to roughly concur with Drum on this timeline. 
Moshe Vardi, the editor-in-chief of the world’s [most-read computer science magazine](http://en.wikipedia.org/wiki/Communications_of_the_ACM), [predicts](http://singularityhub.com/2013/05/15/moshe-vardi-robots-could-put-humans-out-of-work-by-2045/) that “by 2045 machines will be able to do if not any work that humans can do, then a very significant fraction of the work that humans can do.”\n\n\nBut predicting AI is more difficult than many people think.\n\n\nTo explore these difficulties, let’s start with a 2009 [bloggingheads.tv conversation](http://bloggingheads.tv/videos/2220) between MIRI researcher [Eliezer Yudkowsky](http://yudkowsky.net/) and MIT computer scientist [Scott Aaronson](http://www.scottaaronson.com/), author of the excellent *[Quantum Computing Since Democritus](http://www.amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565/)*. Early in that dialogue, Yudkowsky asked:\n\n\n\n> It seems pretty obvious to me that at some point in [one to ten decades] we’re going to build an AI smart enough to improve itself, and [it will] [“foom” upward in intelligence](https://intelligence.org/files/IE-EI.pdf), and by the time it exhausts available avenues for improvement it will be a “superintelligence” [relative] to us. Do you feel this is obvious?\n> \n> \n\n\nAaronson replied:\n\n\n\n> The idea that we could build computers that are smarter than us… and that those computers could build still smarter computers… until we reach the physical limits of what kind of intelligence is possible… that we could build things that are to us as we are to ants — all of this is compatible with the laws of physics… and I can’t find a reason of principle that it couldn’t eventually come to pass…\n> \n> \n> The main thing we disagree about is the *time scale*… a few thousand years [before AI] seems more reasonable to me.\n> \n> \n\n\nThose two estimates — several decades vs. 
“a few thousand years” — have wildly different policy implications.\n\n\nIf there’s a good chance that AI will replace humans at the steering wheel of history in the next several decades, then we’d better put our gloves on and [get to work](http://intelligence.org/research/) making sure that this event has a positive rather than negative impact. But if we can be pretty confident that AI is thousands of years away, then we needn’t worry about AI for now, and we should focus on other global priorities. Thus it appears that “When will AI be created?” is a question with high [value of information](http://en.wikipedia.org/wiki/Value_of_information) for our species.\n\n\nLet’s take a moment to review the forecasting work that *has* been done, and see what conclusions we might draw about when AI will likely be created.\n\n\n\n### The challenge of forecasting AI\n\n\n#### Expert elicitation\n\n\nMaybe we can ask the experts? Astronomers are pretty good at predicting eclipses, even decades or centuries in advance. Technological development tends to be messier than astronomy, but maybe the experts can still give us a *range* of years during which we can expect AI to be built? This method is called [expert elicitation](http://en.wikipedia.org/wiki/Expert_elicitation).\n\n\nSeveral people have surveyed experts working in AI or computer science about their AI timelines. Unfortunately, most of these surveys suffer from rather strong [sampling bias](http://en.wikipedia.org/wiki/Sampling_bias), and thus aren’t very helpful for our purposes.[1](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_0_10199 \"First, Sandberg & Bostrom (2011) gathered the AI timeline predictions of 35 participants at a 2011 academic conference on human-level machine intelligence. 
Participants were asked by what year they thought there is a 10%, 50%, and 90% chance that AI will have been built, assuming that “no global catastrophe halts progress.” Five of the 35 respondents expressed varying degrees of confidence that human-level AI would never be achieved. The median figures, calculated from the views of the other 30 respondents, were: 2028 for “10% chance,” 2050 for “50% chance,” and 2150 for “90% chance.” Second, Baum et al. (2011) surveyed 21 participants at a 2009 academic conference on machine intelligence, and found estimates similar to those in Sandberg & Bostrom (2011). Third, Kruel (2012) has, as of May 7th, 2013, interviewed 34 people about AI timelines and risks via email, 33 of whom could be considered “experts” of one kind or another in AI or computer science (Richard Carrier is a historian). Of those 33 experts, 19 provided full, quantitative answers to Kruel’s question about AI timelines: “Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?” For those 19 experts, the median estimates for 10%, 50%, and 90% were 2025, 2035, and 2070, respectively (spreadsheet here). Fourth, Bainbridge (2005), surveying participants of 3 conferences on “Nano-Bio-Info-Cogno” technological convergence, found a median estimate of 2085 for “the computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain.” However, the participants in these four surveys were disproportionately HLAI enthusiasts, and this introduces a significant sampling bias. 
The database of AI forecasts discussed in Armstrong & Sotala (2012) probably suffers from a similar problem: individuals who thought AI was imminent rather than distant were more likely to make public predictions of AI.\")\n\n\nShould we *expect* experts to be good at predicting AI, anyway? As [Armstrong & Sotala (2012)](https://intelligence.org/files/PredictingAI.pdf) point out, decades of research on expert performance[2](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_1_10199 \"Shanteau (1992); Kahneman and Klein (2009).\") suggest that predicting the first creation of AI is precisely the kind of task on which we should expect experts to show *poor* performance — e.g. because feedback is unavailable and the input stimuli are dynamic rather than static. [Muehlhauser & Salamon (2013)](https://intelligence.org/files/IE-EI.pdf) add, “If you have a gut feeling about when AI will be created, it is probably wrong.”\n\n\nThat said, the experts surveyed in [Michie (1973)](http://commonsenseatheism.com/wp-content/uploads/2013/05/Michie-Machines-and-the-theory-of-intelligence.pdf) — a more representative sample than in other surveys[3](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_2_10199 \"Another survey was taken at the AI@50 conference in 2006. When participants were asked “When will computers be able to simulate every aspect of human intelligence?”, 41% said “More than 50 years” and another 41% said “Never.” Unfortunately, many of the survey participants were not AI experts but instead college students who were attending the conference. Moreover, the phrasing of the question may have introduced a bias. The “Never” answer may have been given as often as it was because some participants took “every aspect of human intelligence” to include consciousness, and many people have philosophical objections to the idea that machines could be conscious. 
Had they instead been asked “When will AIs replace humans in almost all jobs?”, I suspect the “Never” answer would have been far less common. As for myself, I don’t accept any of the in-principle objections to the possibility of AI. For replies to the most common of these objections, see Chalmers (1996), ch. 9, and Chalmers (2012).\") — [did pretty well](http://lesswrong.com/lw/gta/selfassessment_in_expert_ai_predictions/). When asked to estimate a timeline for “[computers] exhibiting intelligence at adult human level,” the most common response was “More than 50 years.” Assuming (as most people do) that AI will not arrive by 2023, these experts will have been correct.\n\n\nUnfortunately, “more than 50 years” is a broad time frame that includes both “several decades from now” and “thousands of years from now.” So we don’t yet have any evidence that a representative survey of experts can predict AI within a few decades, and we have general reasons to suspect experts may not be capable of doing this kind of forecasting very well — although various aids (e.g. computational models; see below) may help them to improve their performance.\n\n\nHow else might we forecast when AI will be created?\n\n\n#### Trend extrapolation\n\n\nMany have tried to forecast the first creation of AI by extrapolating various trends. [Like Kevin Drum](http://www.motherjones.com/media/2013/05/robots-artificial-intelligence-jobs-automation), [Vinge (1993)](http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html) based his own predictions about AI on hardware trends (e.g. [Moore’s Law](http://en.wikipedia.org/wiki/Moore%27s_law)). 
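The hardware-trend reasoning Vinge and Drum rely on can be made explicit with a toy calculation. A minimal sketch in Python, using purely hypothetical figures for current and “AI-sufficient” price-performance (neither number comes from the post), shows how a Moore’s-Law-style doubling assumption turns a performance gap into a date:

```python
import math

def years_until_threshold(current_flops_per_dollar, target_flops_per_dollar,
                          doubling_time_years=2.0):
    """Years until price-performance reaches a target, assuming
    uninterrupted exponential (Moore's-Law-style) growth."""
    if current_flops_per_dollar >= target_flops_per_dollar:
        return 0.0
    # Number of doublings needed, times the assumed doubling period.
    doublings = math.log2(target_flops_per_dollar / current_flops_per_dollar)
    return doublings * doubling_time_years

# Hypothetical illustration: suppose hardware "sufficient for AI" requires
# a million times today's price-performance.
print(years_until_threshold(1e9, 1e15))  # ~40 years at a 2-year doubling time
```

The fragility of the estimate is visible in the parameters: the answer scales linearly with the assumed doubling time and depends entirely on a “sufficient for AI” threshold that nobody knows how to pin down.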
But in a [2003 reprint](http://www-rohan.sdsu.edu/faculty/vinge/misc/WER2.html) of his article, Vinge noted the insufficiency of this reasoning: even if we acquire hardware sufficient for AI, we may not have the software problem solved.[4](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_3_10199 \"Though, Muehlhauser & Salamon (2013) point out that “Hardware extrapolation may be a more useful method in a context where the intelligence software is already written: whole brain emulation [WBE]. Because WBE seems to rely mostly on scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an “engineering” problem, and thus the time of its arrival may be more predictable than is the case for other kinds of AI.” However, it is especially difficult to forecast WBE while we do not even have a proof of concept via a simple organism like C. elegans (David Dalrymple is working on this). Moreover, much progress in neuroscience will be required (Sandberg & Bostrom 2011), and such progress is probably less predictable than hardware extrapolation.\") As Robin Hanson [reminds](http://www.overcomingbias.com/2013/05/robot-econ-primer.html) us, “AI takes software, not just hardware.”\n\n\nPerhaps instead we could extrapolate trends in software progress?[5](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_4_10199 \"I’m not sure what a general measure of software progress would look like, though we can certainly identify local examples of software progress. For example, Holdren et al. 
(2010) notes: “in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed… [For example] Martin Grötschel…, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.” Muehlhauser & Salamon (2013) give another example: “For example, IBM’s Deep Blue played chess at the level of world champion Garry Kasparov in 1997 using about 1.5 trillion instructions per second (TIPS), but a program called Deep Junior did it in 2003 using only 0.015 TIPS. Thus, the computational efficiency of the chess algorithms increased by a factor of 100 in only six years (Richards and Shaw 2004).” A third example is Setty et al. (2012), which improved the efficiency of a probabilistically checkable proof method by 20 orders of magnitude with a single breakthrough. On the other hand, one can easily find examples of very slow progress, too (Davis 2012).\") Some people estimate the time until AI by asking what proportion of human abilities today’s software can match, and how quickly machines are “catching up.”[6](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_5_10199 \"For example, see Good (1970).\") Unfortunately, it’s not clear how to divide up the space of “human abilities,” nor how much each ability matters. 
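The naive “catching up” method just described can be sketched concretely. The data points below are invented solely for illustration (the post cites no such measurements), and the sketch shows the method’s form rather than endorsing it:

```python
def extrapolate_arrival(observations):
    """Naively fit a least-squares line to (year, fraction of human
    abilities matched) pairs and solve for the year the fraction hits 1.0."""
    n = len(observations)
    mean_year = sum(year for year, _ in observations) / n
    mean_frac = sum(frac for _, frac in observations) / n
    slope = (sum((year - mean_year) * (frac - mean_frac)
                 for year, frac in observations)
             / sum((year - mean_year) ** 2 for year, _ in observations))
    intercept = mean_frac - slope * mean_year
    return (1.0 - intercept) / slope

# Hypothetical "fraction of abilities matched" judgments, for illustration:
data = [(1990, 0.10), (2000, 0.15), (2010, 0.25)]
print(round(extrapolate_arrival(data)))  # → 2111 (on these made-up numbers)
```

Every step here inherits the problems noted above: the “fractions” depend on how the space of human abilities is carved up and weighted, and a linear fit assumes away any fits and starts in software progress.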
Moreover, software progress seems to come in fits and starts.[7](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_6_10199 \"As I wrote earlier: “Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it’s damn hard to predict those. In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved.” Some of these problems were solved quickly, some of them required several decades to solve, and many of them remain unsolved. Even the order in which Hilbert’s problems would be solved was hard to predict. According to Erdős & Graham (1980), p. 7, “Hilbert lectured in the early 1920’s on problems in mathematics and said something like this: probably all of us will see the proof of the Riemann hypothesis, some of us… will see the proof of Fermat’s last theorem, but none of us will see the proof that √2^√2 is transcendental.” In fact, these results came in the reverse order: the last was proved by Kusmin a few years later, Fermat’s last theorem was proved by Wiles in 1994, and the Riemann hypothesis still has not been proved or disproved.\") With the possible exception of [computer chess progress](https://intelligence.org/wp-content/uploads/2015/05/Muehlhauser-Historical-chess-engines-estimated-ELO-ratings.pdf), I’m not aware of any trend in software progress as robust across multiple decades as Moore’s Law is in computing hardware.\n\n\nOn the other hand, [Tetlock (2005)](http://www.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715/) points out that, at least in his large longitudinal database of pundits’ predictions about politics, simple trend extrapolation is tough to beat. 
Consider one example from the field of AI: when David Levy asked 1989 World Computer Chess Championship participants when a chess program would defeat the human World Champion, their estimates tended to be inaccurately pessimistic,[8](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_7_10199 \"According to Levy & Newborn (1991), one participant guessed the correct year (1997), thirteen participants guessed years from 1992-1995, twenty-eight participants guessed years from 1998-2056, and one participant guessed “Never.” Of the twenty-eight who guessed years from 1998-2056, eleven guessed year 2010 or later.\") despite the fact that computer chess had shown regular and predictable progress for two decades by that time. Those who forecasted this event with naive trend extrapolation (e.g. [Kurzweil 1990](http://www.amazon.com/The-Age-Intelligent-Machines-Kurzweil/dp/0262610795/)) got almost precisely the correct answer ([1997](http://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov#The_1997_rematch)).\n\n\nHence, it may be worth searching for a measure for which (a) progress is predictable enough to extrapolate, and for which (b) a given level of performance on that measure robustly implies the arrival of Strong AI. But to my knowledge, this has not yet been done, and it’s not clear that trend extrapolation can tell us much about AI timelines until such an argument is made, and made well.\n\n\n#### Disruptions\n\n\nWorse, several events could significantly accelerate or decelerate our progress toward AI, and we don’t know which of these events will occur, nor in what order. For example:\n\n\n* **An end to Moore’s Law**. The “serial speed” version of Moore’s Law broke down in 2004, requiring a leap to parallel processors, which raises substantial new difficulties for software developers ([Fuller & Millett 2011](http://www.amazon.com/The-Future-Computing-Performance-Level/dp/0309159512/)). 
The most economically relevant formulation of Moore’s law, *computations per dollar*, has been maintained thus far,[9](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_8_10199 \"As Fuller & Millett (2011, p. 81) note, “When we talk about scaling computing performance, we implicitly mean to increase the computing performance that we can buy for each dollar we spend.” Most of us don’t really care whether our new computer has more transistors or some other structure; we just want it to do more stuff, more cheaply. Kurzweil (2012), ch. 10, footnote 10 shows “calculations per second per $1,000” growing exponentially from 1900 through 2010, including several data points after the serial speed version of Moore’s Law broke down in 2004. The continuation of this trend is confirmed by “instructions per second per dollar” data for 2006-2011, gathered from Intel and other sources by Chris Hallquist (spreadsheet here). Thus it seems that the computations per dollar form of Moore’s Law has continued unabated, at least for now.\") but it remains unclear whether this will continue much longer ([Mack 2011](http://commonsenseatheism.com/wp-content/uploads/2011/12/Mack-Fifty-Years-of-Moores-Law.pdf); [Esmaeilzadeh et al. 2012](http://www.liacs.nl/~graaf/STUDENTENSEMINARIUM/DSEM.pdf)).\n* **Depletion of low-hanging fruit**. Progress is not only a function of effort but also of the difficulty of the progress. Some fields see a pattern of increasing difficulty with each successive discovery ([Arbesman 2011](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3277447/pdf/nihms352317.pdf)). AI may prove to be a field in which new progress requires far more effort than earlier progress. That is clearly the case for many parts of AI already, for example natural language processing ([Davis 2012](http://www.cs.nyu.edu/~davise/papers/singularity.pdf)).\n* **Societal collapse**. 
Political, economic, technological, or natural disasters may cause a societal collapse during which progress in AI would be essentially stalled ([Posner 2004](http://www.amazon.com/Catastrophe-Response-Richard-A-Posner/dp/0195306473/); [Bostrom and Ćirković 2008](http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom/dp/0199606501/)).\n* **Disinclination**. [Chalmers (2010)](http://consc.net/papers/singularityjcs.pdf) and [Hutter (2012a)](http://arxiv.org/pdf/1202.6177.pdf) think the most likely “speed bump” in our progress toward AI will be *disinclination*. As AI technologies become more powerful, humans may question whether it is wise to create machines more powerful than themselves.\n* **A breakthrough in cognitive neuroscience**. It is difficult, with today’s tools, to infer the cognitive algorithms behind human intelligence ([Trappenberg 2009](http://www.amazon.com/Fundamentals-Computational-Neuroscience-Thomas-Trappenberg/dp/0199568413/)). New tools and methods, however, might enable cognitive neuroscientists to decode how the human brain achieves its own intelligence, which might allow AI scientists to replicate that approach in silicon.\n* **Human enhancement**. [Human enhancement technologies](http://en.wikipedia.org/wiki/Human_enhancement) may make scientists more effective via cognitive enhancement pharmaceuticals ([Bostrom and Sandberg 2009](http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0005/9950/cognitive_enhancement_methods_ethics_and_regulatory_challenges.pdf)), brain-computer interfaces ([Groß 2009](http://commonsenseatheism.com/wp-content/uploads/2013/05/Gros-Blessing-or-curse-neurocognitive-enhancement-by-brain-engineering.pdf)), and genetic selection or engineering for cognitive enhancement.[10](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_9_10199 \"One possible breakthrough here may be iterated embryo selection. See Miller (2012, ch. 
9) for more details.\")\n* **Quantum computing**. Quantum computing has overcome some of its early hurdles ([Rieffel and Polak 2011](http://www.amazon.com/Quantum-Computing-Introduction-Engineering-Computation/dp/0262015064/)), but it remains difficult to predict whether quantum computing will contribute significantly to the development of machine intelligence. Progress in quantum computing depends on particularly unpredictable breakthroughs. Furthermore, it seems likely that even if built, a quantum computer would provide dramatic speedups only for specific applications (e.g. [searching unsorted databases](http://en.wikipedia.org/wiki/Grover%27s_algorithm)).\n* **A tipping point in development incentives**. The launch of [Sputnik](http://en.wikipedia.org/wiki/Sputnik_1) in 1957 demonstrated the possibility of space flight to the public. This event triggered a space race between the United States and the Soviet Union, and led to long-term funding for space projects from both governments. If there is a “[Sputnik moment](http://wiki.lesswrong.com/wiki/AGI_Sputnik_moment)” for AI that makes it clear to the public and to governments that smarter-than-human AI is inevitable, a race to Strong AI may ensue, especially since the winner of the AI race might reap extraordinary economic, technological and geopolitical advantage.[11](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_10_10199 \"It is interesting, however, that the United States did not pursue extraordinary economic, technological and geopolitical advantage in the period during which it was the sole possessor of nuclear weapons. 
Also, it is worth noting that violence and aggression have steadily declined throughout human history (Pinker 2012).\")\n\n\n### Great uncertainty\n\n\nGiven these considerations, I think the most appropriate stance on the question “When will AI be created?” is something like this:\n\n\n\n> We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.\n> \n> \n\n\nHow confident is “confident”? Let’s say 70%. That is, I think it is unreasonable to be 70% confident that AI is fewer than 30 years away, and I also think it’s unreasonable to be 70% confident that AI is more than 100 years away.\n\n\nThis statement admits my inability to predict AI, but it also constrains my probability distribution over “years of AI creation” quite a lot.\n\n\nI think the considerations above justify these constraints on my probability distribution, but I haven’t spelled out my reasoning in great detail. That would require more analysis than I can present here. But I hope I’ve at least summarized the basic considerations on this topic, and those with different probability distributions than mine can now build on my work here to try to justify them.\n\n\n### How to reduce our ignorance\n\n\nBut let us not be satisfied with a declaration of ignorance. Admitting our ignorance is an important step, but it is only the *first* step. Our next step should be to *reduce our ignorance* if we can, especially for high-value questions that have large strategic implications concerning the fate of our entire species.\n\n\nHow can we improve our long-term forecasting performance? 
[Horowitz & Tetlock (2012)](http://www.foreignpolicy.com/articles/2012/09/06/trending_upward), based on their own empirical research and prediction training, offer some advice on the subject:[12](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_11_10199 \"Tetlock (2010) adds another recommendation: “adversarial collaboration” (Mellers et al. 2001). Tetlock explains: “The core idea is simple: rival epistemic and political camps would nominate experts to come together to reach agreements on how they disagree on North Korea or deficit reduction or global warming — and then would figure out how to resolve at least a subset of their factual disputes. The disputants would need to specify, ex ante, how much belief change each side would ‘owe’ the other if various agreed-upon empirical tests were to work out one way or the other. When adversarial collaboration works as intended, it shifts the epistemic incentives from favoring cognitive hubris (generating as many reasons why one’s own side is right and the other is wrong) and toward modesty (taking seriously the possibility that some of the other side’s objections might have some validity). This is so because there is nothing like the prospect of imminent falsification to motivate pundits to start scaling back their more grandiose generalizations: ‘I am not predicting that North Korea will become conciliatory in this time frame if we did x — I merely meant that they might become less confrontational in this wider time frame if we did x, y and z — and if there are no unexpected endogenous developments and no nasty exogenous shocks.'” Tetlock (2012), inspired by Gawande (2009), also tentatively recommends the use of checklists in forecasting: “The intelligence community has begun developing performance-appraisal checklists for analysts that nudge them in the direction of thinking more systematically about how they think. 
But it has yet — to our knowledge — taken the critical next step of checking the usefulness of the checklists against independent real-world performance criteria, such as the accuracy of current assessments and future projections. Our experience in the [ACE] IARPA forecasting tournament makes us cautiously optimistic that this next step is both feasible and desirable.”\")\n\n\n* **Explicit quantification**: “The best way to become a better-calibrated appraiser of long-term futures is to get in the habit of making quantitative probability estimates that can be objectively scored for accuracy over long stretches of time. Explicit quantification enables explicit accuracy feedback, which enables learning.”\n* **Signposting the future**: Thinking through specific scenarios can be useful if those scenarios “come with clear diagnostic signposts that policymakers can use to gauge whether they are moving toward or away from one scenario or another… Falsifiable hypotheses bring high-flying scenario abstractions back to Earth.”[13](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_12_10199 \"But, let us not fool ourselves concerning the difficulty of this task. Good (1976) asserted that human-level performance in computer chess was a good signpost for human-level AI, writing that “a computer program of Grandmaster strength would bring us within an ace of [machine ultra-intelligence].” But of course this was not so. We may chuckle at this prediction today, but how obviously wrong was Good’s prediction in 1976?\")\n* **Leveraging aggregation**: “the average forecast is often more accurate than the vast majority of the individual forecasts that went into computing the average…. 
[Forecasters] should also get into the habit that some of the better forecasters in [an IARPA forecasting tournament called [ACE](http://www.iarpa.gov/Programs/ia/ACE/ace.html)] have gotten into: comparing their predictions to group averages, weighted-averaging algorithms, prediction markets, and financial markets.” See [Ungar et al. (2012)](http://commonsenseatheism.com/wp-content/uploads/2013/05/Ungar-et-al-The-Good-Judgment-Project-a-large-scale-test-of-different-methods-of-combining-expert-predictions.pdf) for some aggregation-leveraging results from the ACE tournament.\n\n\nMany forecasting experts add that when making highly uncertain predictions, it usually helps to **decompose the phenomena** into many parts and make predictions about each of the parts.[14](https://intelligence.org/2013/05/15/when-will-ai-be-created/#footnote_13_10199 \"E.g. Armstrong & Sotala (2012); MacGregor (2001); Lawrence et al. (2006).\") As [Raiffa (1968)](http://www.amazon.com/Decision-Analysis-Introductory-Lectures-Uncertainty/dp/007052579X/) succinctly put it, our strategy should be to “decompose a complex problem into simpler problems, get one’s thinking straight [on] these simpler problems, paste these analyses together with a logical glue, and come out with a program for action for the complex problem” (p. 271). MIRI’s [The Uncertain Future](http://theuncertainfuture.com/) is a simple toy model of this kind, but more sophisticated computational models — like those successfully used in climate change modeling ([Allen et al. 
2013](http://commonsenseatheism.com/wp-content/uploads/2013/05/Allen-et-al-Test-of-a-decadal-climate-forecast.pdf)) — could be produced, and integrated with other prediction techniques.\n\n\nWe should expect AI forecasting to be difficult, but we need not be *as* ignorant about AI timelines as we are today.\n\n\n#### Acknowledgements\n\n\nMy thanks to Carl Shulman, Ernest Davis, Louie Helm, Scott Aaronson, and Jonah Sinick for their helpful feedback on this post.\n\n\n\n\n---\n\n1. First, [Sandberg & Bostrom (2011)](http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0015/21516/2011-1.pdf) gathered the AI timeline predictions of 35 participants at a 2011 academic conference on human-level machine intelligence. Participants were asked by what year they thought there is a 10%, 50%, and 90% chance that AI will have been built, assuming that “no global catastrophe halts progress.” Five of the 35 respondents expressed varying degrees of confidence that human-level AI would never be achieved. The median figures, calculated from the views of the other 30 respondents, were: 2028 for “10% chance,” 2050 for “50% chance,” and 2150 for “90% chance.” Second, [Baum et al. (2011)](http://sethbaum.com/ac/2011_AI-Experts.pdf) surveyed 21 participants at a 2009 academic conference on machine intelligence, and found estimates similar to those in Sandberg & Bostrom (2011). Third, [Kruel (2012)](http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI) has, as of May 7th, 2013, interviewed 34 people about AI timelines and risks via email, 33 of whom could be considered “experts” of one kind or another in AI or computer science (Richard Carrier is a historian). 
Of those 33 experts, 19 provided full, quantitative answers to Kruel’s question about AI timelines: “Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?” For those 19 experts, the median estimates for 10%, 50%, and 90% were 2025, 2035, and 2070, respectively (spreadsheet [here](https://docs.google.com/spreadsheet/ccc?key=0AvoX2xCTgYnWdFlCajk5a0d0bG5Ld1hYUEQzaS1aQWc&usp=sharing)). Fourth, [Bainbridge (2005)](http://commonsenseatheism.com/wp-content/uploads/2013/05/Bainbridge-Survey-of-NBIC-Applications.pdf), surveying participants of 3 conferences on “Nano-Bio-Info-Cogno” technological convergence, found a median estimate of 2085 for “the computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain.” However, the participants in these four surveys were disproportionately HLAI enthusiasts, and this introduces a significant sampling bias. The database of AI forecasts discussed in [Armstrong & Sotala (2012)](https://intelligence.org/files/PredictingAI.pdf) probably suffers from a similar problem: individuals who thought AI was imminent rather than distant were more likely to make public predictions of AI.\n2. [Shanteau (1992)](http://ruralgrocery.com/psych/cws/pdf/obhdp_paper91.PDF); [Kahneman and Klein (2009)](http://www.chrissnijders.com/eth2012/CaseFiles2012/Kahneman,%20Klein%20-%202009%20-%20Conditions%20for%20intuitive%20expertise%20a%20failure%20to%20disagree.pdf).\n3. Another survey was taken at the [AI@50 conference](http://en.wikipedia.org/wiki/AI@50) in 2006. 
When participants were asked “When will computers be able to simulate every aspect of human intelligence?”, 41% [said](http://web.archive.org/web/20110710193831/http://www.engagingexperience.com/ai50/) “More than 50 years” and another 41% said “Never.” Unfortunately, many of the survey participants were not AI experts but instead college students who were attending the conference. Moreover, the phrasing of the question may have introduced a bias. The “Never” answer may have been given as often as it was because some participants took “every aspect of human intelligence” to include consciousness, and many people have philosophical objections to the idea that machines could be conscious. Had they instead been asked “When will AIs replace humans in almost all jobs?”, I suspect the “Never” answer would have been far less common. As for myself, I don’t accept any of the in-principle objections to the possibility of AI. For replies to the most common of these objections, see [Chalmers (1996)](http://www.amazon.com/The-Conscious-Mind-Fundamental-Philosophy/dp/0195117891/), ch. 9, and [Chalmers (2012)](http://consc.net/papers/singreply.pdf).\n4. Though, [Muehlhauser & Salamon (2013)](https://intelligence.org/files/IE-EI.pdf) point out that “Hardware extrapolation may be a more useful method in a context where the intelligence software is already written: whole brain emulation [WBE]. Because WBE seems to rely mostly on scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an “engineering” problem, and thus the time of its arrival may be more predictable than is the case for other kinds of AI.” However, it is especially difficult to forecast WBE while we do not even have a proof of concept via a simple organism like *C. elegans* ([David Dalrymple](http://nemaload.davidad.org/) is working on this). 
Moreover, much progress in neuroscience will be required ([Sandberg & Bostrom 2011](http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf)), and such progress is probably less predictable than hardware extrapolation.\n5. I’m not sure what a *general* measure of software progress would look like, though we can certainly identify *local* examples of software progress. For example, [Holdren et al. (2010)](http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf) notes: “in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed… [For example] Martin Grötschel…, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.” [Muehlhauser & Salamon (2013)](https://intelligence.org/files/IE-EI.pdf) give another example: “For example, IBM’s Deep Blue played chess at the level of world champion Garry Kasparov in 1997 using about 1.5 trillion instructions per second (TIPS), but a program called Deep Junior did it in 2003 using only 0.015 TIPS. Thus, the computational efficiency of the chess algorithms increased by a factor of 100 in only six years ([Richards and Shaw 2004](http://users.ece.gatech.edu/~mrichard/Richards&Shaw_Algorithms01204.pdf)).” A third example is [Setty et al. 
(2012)](http://www.cs.utexas.edu/~mwalfish/papers/pepper-ndss12.pdf), which improved the efficiency of a probabilistically checkable proof method by 20 orders of magnitude with a single breakthrough. On the other hand, one can easily find examples of very *slow* progress, too ([Davis 2012](http://www.cs.nyu.edu/~davise/papers/singularity.pdf)).\n6. For example, see [Good (1970)](http://commonsenseatheism.com/wp-content/uploads/2012/03/Good-Some-future-social-repurcussions-of-computers.pdf).\n7. As I wrote [earlier](http://lesswrong.com/lw/h3w/open_thread_april_115_2013/8p4r): “Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it’s damn hard to predict those. In 1900, David Hilbert posed [23 unsolved problems](http://en.wikipedia.org/wiki/Hilbert%27s_problems) in mathematics. Imagine trying to predict when those would be solved.” Some of these problems were solved quickly, some of them required several decades to solve, and many of them remain unsolved. Even the order in which Hilbert’s problems would be solved was hard to predict. According to [Erdős & Graham (1980)](http://www.amazon.com/problems-combinatorial-Monographie-lEnseignement-mathematique/dp/B0006E5L5O/), p. 7, “Hilbert lectured in the early 1920’s on problems in mathematics and said something like this: probably all of us will see the proof of the Riemann hypothesis, some of us… will see the proof of Fermat’s last theorem, but none of us will see the proof that √2^√2 is transcendental.” In fact, these results came in the reverse order: the last was proved by Kusmin a few years later, Fermat’s last theorem was proved by Wiles in 1994, and the Riemann hypothesis still has not been proved or disproved.\n8. 
According to [Levy & Newborn (1991)](http://www.amazon.com/Computers-Play-Chess-David-Levy/dp/4871878015/), one participant guessed the correct year (1997), thirteen participants guessed years from 1992-1995, twenty-eight participants guessed years from 1998-2056, and one participant guessed “Never.” Of the twenty-eight who guessed years from 1998-2056, eleven guessed year 2010 or later.\n9. As Fuller & Millett ([2011](http://www.amazon.com/The-Future-Computing-Performance-Level/dp/0309159512/), p. 81) note, “When we talk about scaling computing performance, we implicitly mean to increase the computing performance that we can buy for each dollar we spend.” Most of us don’t really care whether our new computer has more transistors or some other structure; we just want it to *do more stuff, more cheaply*. [Kurzweil (2012)](http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0670025291/), ch. 10, footnote 10 shows “calculations per second per $1,000” growing exponentially from 1900 through 2010, including several data points after the *serial speed* version of Moore’s Law broke down in 2004. The continuation of this trend is confirmed by “instructions per second per dollar” data for 2006-2011, gathered from Intel and other sources by Chris Hallquist (spreadsheet [here](https://docs.google.com/spreadsheet/ccc?key=0AvoX2xCTgYnWdHJKbktNTkh6V1V1b0JNVVlYTkd4ZEE&usp=sharing)). Thus it seems that the *computations per dollar* form of Moore’s Law has continued unabated, at least for now.\n10. One possible breakthrough here may be [iterated embryo selection](http://www.theuncertainfuture.com/faq.html#7). See Miller ([2012](http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous/dp/1936661659/), ch. 9) for more details.\n11. It is interesting, however, that the United States did not pursue extraordinary economic, technological and geopolitical advantage in the period during which it was the sole possessor of nuclear weapons. 
Also, it is worth noting that violence and aggression have steadily declined throughout human history ([Pinker 2012](http://www.amazon.com/The-Better-Angels-Our-Nature/dp/0143122010/)).\n12. [Tetlock (2010)](http://commonsenseatheism.com/wp-content/uploads/2013/05/Tetlock-Second-Thoughts-about-Expert-Political-Judgment-Reply-to-the-Symposium.pdf) adds another recommendation: “adversarial collaboration” ([Mellers et al. 2001](http://web.cenet.org.cn/upfile/21290.pdf)). Tetlock explains: “The core idea is simple: rival epistemic and political camps would nominate experts to come together to reach agreements on how they disagree on North Korea or deficit reduction or global warming — and then would figure out how to resolve at least a subset of their factual disputes. The disputants would need to specify, ex ante, how much belief change each side would ‘owe’ the other if various agreed-upon empirical tests were to work out one way or the other. When adversarial collaboration works as intended, it shifts the epistemic incentives from favoring cognitive hubris (generating as many reasons why one’s own side is right and the other is wrong) and toward modesty (taking seriously the possibility that some of the other side’s objections might have some validity). 
This is so because there is nothing like the prospect of imminent falsification to motivate pundits to start scaling back their more grandiose generalizations: ‘I am not predicting that North Korea will become conciliatory in this time frame if we did x — I merely meant that they might become less confrontational in this wider time frame if we did x, y and z — and if there are no unexpected endogenous developments and no nasty exogenous shocks.'” [Tetlock (2012)](http://commonsenseatheism.com/wp-content/uploads/2013/05/Tetlock-et-al-Should-system-thinkers-accept-the-limits-on-political-forecasting-or-push-the-limits.pdf), inspired by [Gawande (2009)](http://www.amazon.com/The-Checklist-Manifesto-Things-Right/dp/0312430000/), also tentatively recommends the use of checklists in forecasting: “The intelligence community has begun developing performance-appraisal checklists for analysts that nudge them in the direction of thinking more systematically about how they think. But it has yet — to our knowledge — taken the critical next step of checking the usefulness of the checklists against independent real-world performance criteria, such as the accuracy of current assessments and future projections. Our experience in the [ACE] IARPA forecasting tournament makes us cautiously optimistic that this next step is both feasible and desirable.”\n13. But, let us not fool ourselves concerning the difficulty of this task. [Good (1976)](http://intelligence.org/wp-content/uploads/2013/05/Good-Review-of-The-World-Computer-Chess-Championship.pdf) asserted that human-level performance in computer chess was a good signpost for human-level AI, writing that “a computer program of Grandmaster strength would bring us within an ace of [machine ultra-intelligence].” But of course this was not so. We may chuckle at this prediction today, but how obviously wrong was Good’s prediction in 1976?\n14. E.g. 
[Armstrong & Sotala (2012)](http://singularity.org/files/PredictingAI.pdf); [MacGregor (2001)](http://advertisingprinciples.com/docs/MacGregorPoF.pdf); [Lawrence et al. (2006)](http://yoksis.bilkent.edu.tr/doi_getpdf/articles/10.1016-j.ijforecast.2006.03.007.pdf).\n\nThe post [When Will AI Be Created?](https://intelligence.org/2013/05/15/when-will-ai-be-created/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-16T05:00:20Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a46b44ad08ba621c8002e6eb31027262", "title": "Advise MIRI with Your Domain-Specific Expertise", "url": "https://intelligence.org/2013/05/15/advise-miri-with-your-domain-specific-expertise/", "source": "miri", "source_type": "blog", "text": "MIRI currently has a few dozen volunteer advisors on a wide range of subjects, but we need more! If you’d like to help MIRI pursue its mission more efficiently, please [sign up to be a MIRI advisor](https://docs.google.com/spreadsheet/viewform?formkey=dG1oUmNybktpUzBXY0JUR1dTSFVkanc6MQ).\n\n\nIf you sign up, we will occasionally ask you questions, or send you early drafts of upcoming writings for feedback.\n\n\nWe don’t always want technical advice (“Well, you can do that with a relativized arithmetical hierarchy…”); often, we just want to understand how different groups of experts respond to our writing (“The tone of this paragraph rubs me the wrong way because…”).\n\n\nAt the moment, we are most in need of advisors on the following subjects:\n\n\n* **Mathematical logic** (especially [computability theory](http://en.wikipedia.org/wiki/Computability_theory) and [proof theory](http://en.wikipedia.org/wiki/Proof_theory), more-especially [provability logic](http://en.wikipedia.org/wiki/Provability_logic), and most-especially [the combination of logic and probabilism](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf))\n* 
**Theoretical computer science** (especially computability theory, [computational complexity theory](http://en.wikipedia.org/wiki/Computational_complexity_theory), and [AIXI](http://wiki.lesswrong.com/wiki/AIXI))\n* **Artificial intelligence** (especially [machine learning](http://en.wikipedia.org/wiki/Machine_learning), [agent architectures](http://en.wikipedia.org/wiki/Agent_architecture), and [probabilistic graphical models](http://en.wikipedia.org/wiki/Graphical_model))\n* **Economics** (especially [endogenous growth theory](http://en.wikipedia.org/wiki/Endogenous_growth_theory) and the [economics of innovation](http://www.amazon.com/The-Economics-Innovation-An-Introduction/dp/1848440278/))\n* **Game theory** (especially [program equilibrium](http://commonsenseatheism.com/wp-content/uploads/2013/04/Woolridge-Computation-and-the-Prisoners-Dilemma.pdf) and [normative decision theory](http://lesswrong.com/lw/gu1/decision_theory_faq/))\n\n\nEven if you don’t have *much* time to help, [**please sign up**](https://docs.google.com/spreadsheet/viewform?formkey=dG1oUmNybktpUzBXY0JUR1dTSFVkanc6MQ)! 
We will of course respect your own limits on availability.\n\n\nThe post [Advise MIRI with Your Domain-Specific Expertise](https://intelligence.org/2013/05/15/advise-miri-with-your-domain-specific-expertise/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-15T21:38:56Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "68aac60a387b5cffcc43b5db7f691041", "title": "Five theses, two lemmas, and a couple of strategic implications", "url": "https://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/", "source": "miri", "source_type": "blog", "text": "MIRI’s primary concern about self-improving AI isn’t so much that it might be created by ‘bad’ actors rather than ‘good’ actors in the global sphere; rather most of our concern is in remedying the situation in which *no one knows at all* how to create a self-modifying AI with known, stable preferences.  (This is why we see the main problem in terms of [doing research](http://intelligence.org/research/) and encouraging others to perform relevant research, rather than trying to stop ‘bad’ actors from creating AI.)\n\n\nThis, and a number of other basic strategic views, can be summed up as a consequence of 5 theses about purely factual questions about AI, and 2 lemmas we think are implied by them, as follows:\n\n\n**Intelligence explosion thesis**. A sufficiently smart AI will be able to realize large, reinvestable cognitive returns from things it can do on a short timescale, like improving its own cognitive algorithms or purchasing/stealing lots of server time. The intelligence explosion will hit very high levels of intelligence before it runs out of things it can do on a short timescale. 
See: [Chalmers (2010)](http://www.consc.net/papers/singularityjcs.pdf); [Muehlhauser & Salamon (2013)](https://intelligence.org/files/IE-EI.pdf); [Yudkowsky (2013)](https://intelligence.org/files/IEM.pdf).\n\n\n**Orthogonality thesis**. Mind design space is huge enough to contain agents with almost any set of preferences, and such agents can be instrumentally rational about achieving those preferences, and have great computational power. For example, mind design space theoretically contains powerful, instrumentally rational agents which act as expected paperclip maximizers and always consequentialistically choose the option which leads to the greatest number of expected paperclips. See: [Bostrom (2012)](http://www.nickbostrom.com/superintelligentwill.pdf); [Armstrong (2013)](http://lesswrong.com/lw/h0k/arguing_orthogonality_published_form/).\n\n\n**Convergent instrumental goals thesis**. Most utility functions will generate a subset of instrumental goals which follow from most possible final goals. For example, if you want to build a galaxy full of happy sentient beings, you will need matter and energy, and the same is also true if you want to make paperclips. This thesis is why we’re worried about very powerful entities even if they have no explicit dislike of us: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.” Note though that by the Orthogonality Thesis you can always have an agent which explicitly, terminally prefers not to do any particular thing — an AI which does love you will not want to break you apart for spare atoms. See: [Omohundro (2008)](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf); [Bostrom (2012)](http://www.nickbostrom.com/superintelligentwill.pdf).\n\n\n**Complexity of value thesis**. It takes a large chunk of Kolmogorov complexity to describe even idealized human preferences. 
That is, what we ‘should’ do  is a computationally complex mathematical object even after we take the limit of reflective equilibrium (judging your own thought processes) and other standard normative theories. A superintelligence with a randomly generated utility function would not do anything we see as worthwhile with the galaxy, because it is unlikely to accidentally hit on final preferences for having a diverse civilization of sentient beings leading interesting lives. See: [Yudkowsky (2011)](https://intelligence.org/files/ComplexValues.pdf); [Muehlhauser & Helm (2013)](https://intelligence.org/files/IE-ME.pdf).\n\n\n**Fragility of value thesis**. Getting a goal system 90% right does not give you 90% of the value, any more than correctly dialing 9 out of 10 digits of my phone number will connect you to somebody who’s 90% similar to Eliezer Yudkowsky. There are multiple dimensions for which eliminating that dimension of value would eliminate almost all value from the future. For example an alien species which shared almost all of human value except that their parameter setting for “boredom” was much lower, might devote most of their computational power to replaying a single peak, optimal experience over and over again with slightly different pixel colors (or the equivalent thereof). Friendly AI is more like a satisficing threshold than something where we’re trying to eke out successive 10% improvements. See: Yudkowsky ([2009](http://lesswrong.com/lw/y3/value_is_fragile/), [2011](https://intelligence.org/files/ComplexValues.pdf)).\n\n\nThese five theses seem to imply two important lemmas:\n\n\n**Indirect normativity**. Programming a self-improving machine intelligence to implement a grab-bag of things-that-seem-like-good-ideas will lead to a bad outcome, regardless of how good the apple pie and motherhood sounded. E.g., if you give the AI a final goal to “make people happy” it’ll just turn people’s pleasure centers up to maximum. 
“Indirectly normative” is Bostrom’s term for an AI that calculates the ‘right’ thing to do via, e.g., looking at human beings and modeling their decision processes and idealizing those decision processes (e.g. what you would-want if you knew everything the AI knew and understood your own decision processes, reflective equilibria, ideal advisor theories, and so on), rather than being told a direct set of ‘good ideas’ by the programmers. Indirect normativity is how you deal with Complexity and Fragility. If you can succeed at indirect normativity, then small variances in essentially good intentions may not matter much — that is, if two different projects do indirect normativity correctly, but one project has 20% nicer and kinder researchers, we could still hope that the end results would be of around equal expected value. See: [Muehlhauser & Helm (2013)](https://intelligence.org/files/IE-ME.pdf).\n\n\n**Large bounded extra difficulty of Friendliness**. You can build a Friendly AI (by the Orthogonality Thesis), but you need a lot of work and cleverness to get the goal system right. Probably more importantly, the rest of the AI needs to meet a higher standard of cleanness in order for the goal system to remain invariant through a billion sequential self-modifications. Any AI sufficiently smart to do clean self-modification will tend to do so regardless, but the problem is that intelligence explosion might get started with AIs substantially less smart than that — for example, with AIs that rewrite themselves using genetic algorithms or other such means that don’t preserve a set of consequentialist preferences. In this case, building a Friendly AI could mean that our AI has to be smarter about self-modification than the minimal AI that could undergo an intelligence explosion. 
See: [Yudkowsky (2008)](https://intelligence.org/files/AIPosNegFactor.pdf) and [Yudkowsky (2013)](https://intelligence.org/files/IEM.pdf).\n\n\nThese lemmas in turn have two major strategic implications:\n\n\n1. We have a lot of work to do on things like indirect normativity and stable self-improvement. At this stage a lot of this work looks really foundational — that is, we can’t describe how to do these things using infinite computing power, let alone finite computing power.  We should get started on this work as early as possible, since basic research often takes a lot of time.\n2. There needs to be a Friendly AI project that has some sort of boost over competing projects which don’t live up to a (very) high standard of Friendly AI work — a project which can successfully build a stable-goal-system self-improving AI, before a less-well-funded project hacks together a much sloppier self-improving AI.  Giant supercomputers may be less important to this than being able to bring together the smartest researchers (see the open question posed in [Yudkowsky 2013](https://intelligence.org/files/IEM.pdf)) but the required advantage cannot be left up to chance.  
Leaving things to default means that projects less careful about self-modification would have an advantage greater than casual altruism is likely to overcome.\n\n\nThe post [Five theses, two lemmas, and a couple of strategic implications](https://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-06T01:36:33Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "a98531ef58ef2a628fd3a2f7ebfd1c0f", "title": "AGI Impact Experts and Friendly AI Experts", "url": "https://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/", "source": "miri", "source_type": "blog", "text": "MIRI’s mission is “to ensure that the creation of smarter-than-human intelligence has a positive impact.” A central strategy for achieving this mission is to find and train what one might call “[AGI](http://en.wikipedia.org/wiki/Strong_AI#Artificial_General_Intelligence_research) impact experts” and “[Friendly AI](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence) experts.”\n\n\n*AGI impact experts* develop skills related to predicting technological development (e.g. building [computational models](https://intelligence.org/files/ChangingTheFrame.pdf) of AI development or reasoning about [intelligence explosion microeconomics](https://intelligence.org/files/IEM.pdf)), predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI. For overviews, see [Bostrom & Yudkowsky (2013)](https://intelligence.org/files/EthicsofAI.pdf); [Muehlhauser & Salamon (2013)](https://intelligence.org/files/IE-EI.pdf).\n\n\n*Friendly AI experts* develop skills useful for the development of mathematical architectures that can enable AGIs to be *trustworthy* (or “human-friendly”). 
This work is carried out at [MIRI research workshops](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/) and in various publications, e.g. [Christiano et al. (2013)](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/); [Hibbard (2013)](http://arxiv.org/pdf/1111.3934v2.pdf). Note that the term “Friendly AI” was selected (in part) to avoid the suggestion that we understand the subject very well — a phrase like “Ethical AI” might sound like the kind of thing one can learn a lot about by looking it up in an encyclopedia, but our present understanding of trustworthy AI is too impoverished for that.\n\n\nNow, what do we mean by “expert”?\n\n\n \n\n\n\n### Reliably superior performance on representative tasks\n\n\nAn [expert](http://education.yahoo.com/reference/dictionary/entry/expert) is “a person with a high degree of skill in or knowledge of a certain subject.” Some domains (e.g. chess) provide objective measures of expertise, while other domains rely on peer recognition (e.g. philosophy). However, as [Ericsson (2006)](http://commonsenseatheism.com/wp-content/uploads/2012/12/Ericsson-An-Introduction-to-Cambridge-Handbook-of-Expertise-and-Expert-Performance.pdf) notes:\n\n\n\n> people recognized by their peers as experts do not always display superior performance on domain-related tasks. Sometimes they are no better than novices even on tasks that are central to the expertise, such as selecting stocks with superior future value, treatment of psychotherapy patients, and forecasts.\n> \n> \n\n\nThus, we should specify that the *kind* of expertise we want in AGI impact experts and Friendly AI experts is what Ericsson (2006) calls “Expertise as Reliably Superior Performance on Representative Tasks” (RSPRT). 
It won’t do humanity much good to have a bunch of peer-credentialed “AGI impact experts” who aren’t really any better than laypeople at predicting AGI outcomes, or a bunch of “Friendly AI experts” who aren’t much good at generating new FAI-relevant math results.\n\n\nAs an example of expertise as RSPRT, consider chess. Do [chess ratings](http://en.wikipedia.org/wiki/Chess_rating_system) reliably track with RSPRT? Yes they do. For example, chess ratings are highly correlated with the ability to select the best move for presented chess positions ([de Groot 1978](http://www.amazon.com/Thought-Choice-Chess-Adriann-Degroot/dp/9027979146/); [Ericsson & Lehmann 1996](http://commonsenseatheism.com/wp-content/uploads/2012/12/Ericsson-Lehmann-Expert-and-exceptional-performance-evidence-of-maximal-adaptation-to-task-constraints.pdf); [Van der Maas & Wagenmakers 2005](http://hvandermaas.socsci.uva.nl/Homepage_Han_van_der_Maas/Publications_files/papers/Han1chess.pdf)).\n\n\nSimilar methods have been used to confirm “expertise as RSPRT” in medicine (Ericsson [2004](http://edianas.com/portfolio/proj_EricssonInterview/articles/2004_Academic_Medicine_Vol_10,_S70-S81.pdf), [2007](http://direct.bl.uk/bld/PlaceOrder.do?UIN=206689557&ETOC=RN&from=searchengine)), sport ([Côté et al. 2012](http://commonsenseatheism.com/wp-content/uploads/2013/04/Cote-The-development-of-skill-in-sport.pdf)), Scrabble ([Tuffiash et al. 
2007](http://commonsenseatheism.com/wp-content/uploads/2013/03/Tuffiash-et-al-Expert-performance-in-Scrabble.pdf)), and music ([Lehmann & Grüber 2006](http://commonsenseatheism.com/wp-content/uploads/2013/03/Lehman-Gruber-Music.pdf)).\n\n\nSo, what are some “representative tasks” for which AGI impact experts and FAI experts should demonstrate superior performance?\n\n\n \n\n\n### Scholastic expertise\n\n\nAt the very least, we’d hope AGI impact experts and Friendly AI experts would have a kind of *scholastic* expertise in AGI impact and Friendly AI. That is, they should know what the basic debates are about, which arguments and counter-arguments are often given, and who gives them. Generally, experts in everything from [Zoroastrian theology](http://www.amazon.com/Zoroastrian-theology-earliest-times-present/dp/1177571749) to [theoretical time travel](http://en.wikipedia.org/wiki/Time_travel) at *least* have *this* kind of expertise.\n\n\nFor example, [Nick Bostrom](http://nickbostrom.com/) has researched AGI impact on and off for more than a decade, and has written extensively on the subject. Both in conversation and through his writings, Bostrom demonstrates pretty solid scholastic expertise in AGI impact.\n\n\nAI researchers, in contrast, do *not* tend to be familiar with the basic debates, arguments, and counterarguments related to AGI impact. (Why would they be? That’s not their job.) Thus, it’s hard to see much value in, say, the projections about AGI impact from the [AAAI Presidential Panel on Long-Term AI Futures](http://www.aaai.org/Organization/presidential-panel.php), which included no participants with known scholastic expertise in AGI impact — and only one participant who is (barely) involved in the broader [machine ethics](http://en.wikipedia.org/wiki/Machine_ethics) community ([Alan Mackworth](http://www.cs.ubc.ca/~mack/)).\n\n\nBut maybe we shouldn’t place any value on the opinions of those who *do* have scholastic expertise in the subject, either. 
Maybe those with scholastic expertise can’t reliably demonstrate superior performance on anything more practical than merely knowing which arguments and counterarguments are in play.\n\n\nIdeally, AGI impact experts and FAI experts should do more than demonstrate scholastic expertise. What other examples of RSPRT expertise should be relevant to both AGI impact experts and FAI experts?\n\n\n \n\n\n### Sensitivity to evidence\n\n\nIn general, humans [don’t](http://www.amazon.com/Thinking-Fast-and-Slow-ebook/dp/B00555X8OA/) accurately update their beliefs in response to medium-sized bits of evidence, like a [perfectly rational agent](http://en.wikipedia.org/wiki/Intelligent_agent) would. That’s why we need science, where [our method is to](http://lesswrong.com/lw/qi/faster_than_science/) “amass such an enormous mountain of evidence\\* that… scientists cannot ignore it.”\n\n\nBut there usually aren’t “mountains” of evidence available when testing hypotheses about the design of future technologies and their likely impact. As explained [elsewhere](http://lesswrong.com/r/lesswrong/lw/fpe/philosophy_needs_to_trust_your_rationality_even/): “The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)”\n\n\nCan human rationality be improved? Based on a couple decades of “debiasing” research ([Larrick 2004](http://commonsenseatheism.com/wp-content/uploads/2011/09/Larrick-Debiasing.pdf)), my guess is that we probably can, but we haven’t tried very hard yet.\n\n\nWhy think there is low-hanging fruit in the field of rationality training? Very few people, if any, put as much effort into improving their rationality as our best musicians and athletes put into improving their musical and athletic abilities. The best musicians practice 4 hours per day over many years ([Ericsson et al. 
1993](https://syllabus.byu.edu/uploads/h52kB4gCLyQP.pdf)); champion swimmer Michael Phelps spent [3-6 hours per day](http://edition.cnn.com/2012/07/30/us/michael-phelps-on-pmt/index.html) in the pool; Sun Microsystems co-founder Bill Joy practiced programming 10 hours per day in college ([Gladwell 2008](http://www.amazon.com/Outliers-The-Story-Success-ebook/dp/B001ANYDAO/), p. 46); and during one period, chess champion Bobby Fischer reportedly practiced chess [14 hours a day](http://www.slate.com/articles/sports/sports_nut/2012/10/bobby_fischer_jonathan_safran_foer_on_the_life_of_the_jewish_chess_champion.html). But who spends 4-10 hours per day doing [calibration training](http://lesswrong.com/lw/1f8/test_your_calibration/) or building up good [rationality habits](http://lesswrong.com/lw/fc3/checklist_of_rationality_habits/)?\n\n\nIdeally, both AGI impact experts and Friendly AI experts would train good rationality habits so as to increase their sensitivity to evidence, so that they can reason productively about future technologies without first needing to amass (unavailable) “mountains of evidence.”\n\n\n \n\n\n### What might an FAI expert look like?\n\n\nNext, let’s look at the specific skills needed for FAI expertise in particular. Clearly, such experts must be able to generate new results in math. And luckily, math research skill is more easily measurable and “objective” than, say, psychology or philosophy research skill.\n\n\nWhat other kinds of expertise might we want in an FAI expert?\n\n\n[Yudkowsky described](http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/) an FAI expert like this:\n\n\n\n> A Friendly AI [expert] is somebody who specializes in seeing the correspondence of mathematical structures to What Happens in the Real World. 
It’s somebody who looks at Hutter’s specification of AIXI and reads the actual equations… and sees, “Oh, this AI will try to gain control of its reward channel,” as well as numerous subtler issues like, “This AI presumes a Cartesian boundary separating itself from the environment; it may drop an anvil on its own head.” Similarly, working on [TDT](http://wiki.lesswrong.com/wiki/Timeless_decision_theory) means e.g. looking at a mathematical specification of decision theory, and seeing “Oh, this is vulnerable to blackmail” and coming up with a mathematical counter-specification of an AI that isn’t so vulnerable to blackmail.\n> \n> \n> …If you want to have a sensible discussion about which AI designs are safer, there are specialized skills you can apply to that discussion, [such as the skill described above,] as built up over years of study and practice by someone who specializes in answering that sort of question.\n> \n> \n\n\nLet me give some examples of people who, as Yudkowsky put it, “specialize in seeing the correspondence of mathematical structures to What Happens in the Real World.” (In particular, we’re interested in the consequences of mathematical objects with a kind of “general intelligence,” not so much the real world consequences of narrow-domain algorithms like [Stuxnet](http://en.wikipedia.org/wiki/Stuxnet).) 
To the extent that AGI behavior can be modeled with mathematics, this is a crucial skill.\n\n\nYudkowsky read Hutter’s specification of [AIXI](http://wiki.lesswrong.com/wiki/AIXI) and saw “Oh, this AI will try to gain control of its reward channel” and “the AI presumes a Cartesian boundary separating itself from the environment; it may [drop an anvil](http://wiki.lesswrong.com/wiki/Anvil_problem) on its own head,” but he didn’t write down technical demonstrations of these facts.\n\n\n[Laurent Orseau](http://www.agroparistech.fr/mia/doku.php?id=equipes:membres:page:laurent) (AgroParisTech) and [Mark Ring](http://www.idsia.ch/~ring/) (IDSIA) independently demonstrated those problems (AIXI-like agents hacking their own reward channels, and the challenge of the Cartesian boundary) in [Ring & Orseau (2011)](http://www.idsia.ch/~ring/AGI-2011/Paper-B.pdf) and [Orseau & Ring (2011)](http://www.idsia.ch/~ring/Orseau,Ring%3BSelf-modification%20and%20Mortality%20in%20Artificial%20Agents,%20AGI%202011.pdf). They also worked toward formalizing the latter problem in [Orseau & Ring (2012)](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf), as did [Bill Hibbard](http://www.ssec.wisc.edu/~billh/homepage1.html) (University of Wisconsin) in [Hibbard (2012)](https://link.springer.com/chapter/10.1007/978-3-642-35506-6_12).\n\n\nExamples of this kind of work from MIRI’s research fellows or research associates include [Dewey (2011)](http://singularity.org/files/LearningValue.pdf), [de Blanc (2011)](http://singularity.org/files/OntologicalCrises.pdf), and [Yudkowsky (2010)](http://singularity.org/files/TDT.pdf).\n\n\nThis skill may be difficult to measure objectively, but that is true of *many* of the skills that university administrators (or recruiters for hedge funds and technology companies) try to identify in mathematical researchers. And yet these groups have much success in locating the best and brightest. 
So perhaps there is some hope for identifying people with this skill.\n\n\nThere are other tasks on which FAI experts should demonstrate “reliably superior performance.” For example, they must be able to formalize philosophical concepts. Here again there is no standard measure for the skill, but we have many past examples from which to learn. The last century was a pretty productive one for turning previously mysterious philosophical concepts into formal ones. See [Kolmogorov (1965)](http://commonsenseatheism.com/wp-content/uploads/2013/04/Kolmogorov-Three-Approaches-to-the-Quantiative-Definition-of-Information.pdf) on complexity and simplicity, Solomonoff ([1964a](http://commonsenseatheism.com/wp-content/uploads/2011/12/Solomonoff-A-formal-theory-of-inductive-inference-part-1.pdf), [1964b](http://commonsenseatheism.com/wp-content/uploads/2011/12/Solomonoff-A-formal-theory-of-inductive-inference-part-2.pdf)) on induction, [Von Neumann and Morgenstern (1947)](http://en.wikipedia.org/wiki/Theory_of_Games_and_Economic_Behavior) on rationality, [Shannon (1948)](http://makseq.com/materials/lib/Articles-Books/General/InformationTheory/p3-shannon.pdf) on information, and Tennenholtz’s development of “program equilibrium” (for an overview, see [Wooldridge 2012](http://commonsenseatheism.com/wp-content/uploads/2013/04/Woolridge-Computation-and-the-Prisoners-Dilemma.pdf)).\n\n\nReaders interested in developing Friendly AI expertise should consider taking the courses (or reading the textbooks) listed in [Course Recommendations for MIRI Researchers](http://intelligence.org/courses/).\n\n\n \n\n\n### What might an AGI impact expert look like?\n\n\nTo begin, AGI impact experts should demonstrate reliably superior performance at forecasting technological progress, especially AI progress.\n\n\nUnfortunately, we haven’t yet discovered reliable methods for successful long-term technological forecasting ([Muehlhauser & Salamon 2012](http://singularity.org/files/IE-EI.pdf)), 
and both experts and laypeople are *particularly* bad at predicting AI ([Armstrong & Sotala 2012](http://singularity.org/files/PredictingAI.pdf)). The price-performance formulation of Moore’s Law has been [surprisingly robust](http://squid314.livejournal.com/354867.html) across time, but one cannot predict specific technologies from this trend without making additional assumptions that are (in most cases) less robust than Moore’s Law. Famous tech forecaster Ray Kurzweil [claims](http://www.kurzweilai.net/images/How-My-Predictions-Are-Faring.pdf) good accuracy, but these claims are [probably](http://lesswrong.com/lw/gbi/assessing_kurzweil_the_results/) [overstated](http://lesswrong.com/lw/diz/kurzweils_predictions_good_accuracy_poor/).\n\n\nNone of this should be surprising: good forecasting performance seems to depend (among other things) on regular feedback on one’s predictions, and quick feedback isn’t available when making long-term forecasts.\n\n\nLuckily, there are many opportunities for forecasters to improve their performance. [Horowitz & Tetlock (2012)](http://www.foreignpolicy.com/articles/2012/09/06/trending_upward), based on their own empirical research and prediction training, offer some advice on the subject:\n\n\n* *Explicit quantification*: “The best way to become a better-calibrated appraiser of long-term futures is to get in the habit of making quantitative probability estimates that can be objectively scored for accuracy over long stretches of time. 
Explicit quantification enables explicit accuracy feedback, which enables learning.”\n* *Signposting the future*: Thinking through specific scenarios can be useful if those scenarios “come with clear diagnostic signposts that policymakers can use to gauge whether they are moving toward or away from one scenario or another… Falsifiable hypotheses bring high-flying scenario abstractions back to Earth.”\n* *Leveraging aggregation*: “the average forecast is often more accurate than the vast majority of the individual forecasts that went into computing the average…. [Forecasters] should also get into the habit that some of the better forecasters in [an IARPA forecasting tournament called [ACE](http://www.iarpa.gov/Programs/ia/ACE/ace.html)] have gotten into: comparing their predictions to group averages, weighted-averaging algorithms, prediction markets, and financial markets.”\n\n\n[Armstrong & Sotala (2012)](http://singularity.org/files/PredictingAI.pdf) add that it can be helpful to *decompose the phenomena* into many parts and make predictions about each of the parts, as feedback may be available for at least some of the parts. This is the approach to AI prediction taken by [The Uncertain Future](http://theuncertainfuture.com/) ([Rayhawk et al. 2009](http://singularity.org/files/ChangingTheFrame.pdf)).\n\n\nArmstrong & Sotala also make a distinction between “grind” — lots of hard work and money — and “insight” — entirely new unexpected ideas. Grind is moderately easy to predict, while insight is difficult to forecast. Grind predictions could be more reliable to the extent that a phenomenon is mostly about grind, and doesn’t require new conceptual breakthroughs. 
For example, while AI appears to be an “insight” technology, [whole brain emulation](http://en.wikipedia.org/wiki/Mind_uploading) may be largely a “grind” technology, and therefore more easily predicted.\n\n\nThere is much more to say about the skills needed for AGI impact expertise, but for now I leave the reader with the examples above, and also the following passage from [Bostrom (1997)](http://www.nickbostrom.com/old/predict.html):\n\n\n\n> [Recently] a starring role has developed on the intellectual stage for which the actor is still wanting. This is the role of the generalised scientist, or *the polymath*, who has insights into many areas of science and the ability to use these insights to work out solutions to those more complicated problems which are usually considered too difficult for scientists and are therefore either consigned to politicians and the popular press, or just ignored. The sad thing is that ignoring these problems won’t make them go away, and… some of them are challenges to the very survival of intelligent life.\n> \n> \n> [One such problem is] *superintelligence*… [which] takes on practical urgency when many experts think that we will soon have the ability to create superintelligence.\n> \n> \n> What questions could [this discipline] deal with? Well, questions like: How much would the predictive power for various fields increase if we increase the processing speed of a human-like mind a million times? If we extend the short-term or long-term memory? If we increase the neural population and the connection density? What other capacities would a superintelligence have? …Can we know anything about the motivation of a superintelligence? Would it be feasible to preprogram it to be good or philanthropic, or would such rules be hard to reconcile with the flexibility of its cognitive processes? 
Would a superintelligence, given the desire to do so, be able to outwit humans into promoting its own aims even if we had originally taken strict precautions to avoid being manipulated? Could one use one superintelligence to control another? …How would our human self-perception and aspirations change if we were forced to abdicate the throne of wisdom…? How would we individuate between superminds if they could communicate and fuse and subdivide with enormous speed? Will a notion of personal identity still apply to such interconnected minds? …Could we then be able to compete with the superintelligences, if we were accelerated and augmented with extra memory etc., or would such profound reorganisation be necessary that we would no longer feel we were humans? Would that matter?\n> \n> \n> Maybe these are not the right questions to ask, but they are at least a start.\n> \n> \n\n\n### Concluding thoughts\n\n\n[MIRI](http://intelligence.org/) exists entirely to host such experts and enable their research, and [FHI](http://www.fhi.ox.ac.uk/) and [CSER](http://cser.org/) share that focus among others. All three organizations are funding-limited, but to some degree they are also person-limited, because there are so few people in the world actively developing AGI impact expertise or Friendly AI expertise. 
The goal of this post is to light the path for those who may want to contribute to this important research program.\n\n\n \n\n\n### Notes\n\n\nMy thanks to Carl Shulman, Kaj Sotala, Eliezer Yudkowsky, Louie Helm, and Benjamin Noble for their helpful comments.\n\n\nThe post [AGI Impact Experts and Friendly AI Experts](https://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-05-01T01:00:48Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "bb42a3c7d6d8f27f019fafe3dc40c48e", "title": "“Intelligence Explosion Microeconomics” Released", "url": "https://intelligence.org/2013/04/29/intelligence-explosion-microeconomics-released/", "source": "miri", "source_type": "blog", "text": "MIRI’s new, 93-page technical report by Eliezer Yudkowsky, “[Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf),” has now been released. The report explains one of the open problems of our research program. Here’s the abstract:\n\n\n\n> I. J. Good’s thesis of the ‘intelligence explosion’ is that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version of itself, and that this process could continue enough to vastly exceed human intelligence. As Sandberg (2010) correctly notes, there are several attempts to lay down return-on-investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with I. J. Good’s intelligence explosion thesis as such.\n> \n> \n> I identify the key issue as returns on cognitive reinvestment – the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs. 
There are many phenomena in the world which have been argued as evidentially relevant to this question, from the observed course of hominid evolution, to Moore’s Law, to the competence over time of machine chess-playing systems, and many more. I go into some depth on the sort of debates which then arise on how to interpret such evidence. I propose that the next step forward in analyzing positions on the intelligence explosion would be to formalize return-on-investment curves, so that each stance can say formally which possible microfoundations they hold to be falsified by historical observations already made. More generally, I pose multiple open questions of ‘returns on cognitive reinvestment’ or ‘intelligence explosion microeconomics’. Although such questions have received little attention thus far, they seem highly relevant to policy choices affecting the outcomes for Earth-originating intelligent life.\n> \n> \n\n\nThe preferred place for public discussion of this research is [here](http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/). 
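As a toy illustration of the “return-on-investment curves” the abstract proposes formalizing, here is a minimal sketch. The update rule, constants, and starting values are all invented for illustration and are not taken from the report; the only point made is that the returns exponent, not the proportionality constant, dominates the long-run outcome.

```python
# Toy "returns on cognitive reinvestment" model: each step, capability c is
# reinvested to produce more capability, with a returns exponent r.
# The rule c <- c + k * c**r and all constants are illustrative inventions.

def reinvest(c0: float, r: float, k: float = 0.1, steps: int = 25) -> float:
    """Iterate c <- c + k * c**r and return the final capability level."""
    c = c0
    for _ in range(steps):
        c = c + k * c ** r
    return c

diminishing = reinvest(1.0, r=0.5)  # sublinear returns: growth stays tame
explosive = reinvest(1.0, r=1.5)    # superlinear returns: growth compounds

print(diminishing < explosive)  # prints True: the exponent dominates
```

With r = 0.5 the trajectory roughly tracks quadratic growth and ends near 5; with r = 1.5 it passes 5,000 within the same 25 steps, which is the discrete analogue of the finite-time blow-up behind “intelligence explosion” arguments.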
There is also a private mailing list for technical discussants, which you can apply to join [here](https://docs.google.com/forms/d/1KElE2Zt_XQRqj8vWrc_rG89nrO4JtHWxIFldJ3IY_FQ/viewform).\n\n\nThe post [“Intelligence Explosion Microeconomics” Released](https://intelligence.org/2013/04/29/intelligence-explosion-microeconomics-released/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-04-29T21:28:24Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "83a3fe62ac16bf70f925f30a0f7f2a82", "title": "“Singularity Hypotheses” Published", "url": "https://intelligence.org/2013/04/25/singularity-hypotheses-published/", "source": "miri", "source_type": "blog", "text": "[![singularity hypotheses](http://intelligence.org/wp-content/uploads/2013/04/singularity-hypotheses.jpg)](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/)[*Singularity Hypotheses: A Scientific and Philosophical Assessment*](http://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/) has now been published by Springer, in hardcover and ebook forms.\n\n\nThe book contains 20 chapters about the prospect of machine superintelligence, including 4 chapters by MIRI researchers and research associates.\n\n\n**“Intelligence Explosion: Evidence and Import”** ([pdf](https://intelligence.org/files/IE-EI.pdf)) by Luke Muehlhauser and (previous MIRI researcher) Anna Salamon reviews\n\n\n\n> the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100, that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. 
We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.\n> \n> \n\n\n**“Intelligence Explosion and Machine Ethics”** ([pdf](https://intelligence.org/files/IE-ME.pdf)) by Luke Muehlhauser and Louie Helm discusses the challenges of formal value systems for use in AI:\n\n\n\n> Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence,” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.”\n> \n> \n\n\n**“Friendly Artificial Intelligence”** by Eliezer Yudkowsky is a shortened version of [Yudkowsky (2008)](https://intelligence.org/files/AIPosNegFactor.pdf).\n\n\nFinally, **“Artificial General Intelligence and the Human Mental Model”** ([pdf](https://intelligence.org/files/AGI-HMM.pdf)) by Roman Yampolskiy and (MIRI research associate) Joshua Fox  reviews the dangers of anthropomorphizing machine intelligences:\n\n\n\n> When the first artificial general intelligences are built, they may improve themselves to far-above-human levels. 
Speculations about such future entities are already affected by anthropomorphic bias, which leads to erroneous analogies with human minds. In this chapter, we apply a goal-oriented understanding of intelligence to show that humanity occupies only a tiny portion of the design space of possible minds. This space is much larger than what we are familiar with from the human example; and the mental architectures and goals of future superintelligences need not have most of the properties of human minds. A new approach to cognitive science and philosophy of mind, one not centered on the human example, is needed to help us understand the challenges which we will face when a power greater than us emerges.\n> \n> \n\n\nThe book also includes brief, critical responses to most chapters, including responses written by Eliezer Yudkowsky and (previous MIRI staffer) Michael Anissimov.\n\n\nThe post [“Singularity Hypotheses” Published](https://intelligence.org/2013/04/25/singularity-hypotheses-published/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-04-25T03:43:00Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "81a63f894eb0b05f941492f2da5efa1a", "title": "Altair’s Timeless Decision Theory Paper Published", "url": "https://intelligence.org/2013/04/19/altairs-timeless-decision-theory-paper-published/", "source": "miri", "source_type": "blog", "text": "[![Altair paper front](http://intelligence.org/wp-content/uploads/2013/04/Altair-paper-front.png)](https://intelligence.org/files/Comparison.pdf)During his time as a research fellow for MIRI, Alex Altair wrote a paper on [Timeless Decision Theory](http://wiki.lesswrong.com/wiki/Timeless_decision_theory) (TDT) that has now been published:  “[A Comparison of Decision Algorithms on Newcomblike Problems](https://intelligence.org/files/Comparison.pdf).”\n\n\nAltair’s paper is both more succinct and also more precise in its formulation of TDT than Yudkowsky’s 
earlier paper “[Timeless Decision Theory](https://intelligence.org/files/TDT.pdf).” Thus, Altair’s paper should serve as a handy introduction to TDT for philosophers, computer scientists, and mathematicians, while Yudkowsky’s paper remains required reading for anyone interested to develop TDT further, for it covers more ground than Altair’s paper.\n\n\nAltair’s abstract reads:\n\n\n\n> When formulated using Bayesian networks, two standard decision algorithms (Evidential Decision Theory and Causal Decision Theory) can be shown to fail systematically when faced with aspects of the prisoner’s dilemma and so-called “Newcomblike” problems. We describe a new form of decision algorithm, called Timeless Decision Theory, which consistently wins on these problems.\n> \n> \n\n\nWe may submit to a journal later, but we’ve published the current version to our website so that readers won’t need to wait two years (from submission to acceptance to publication) to read it.\n\n\nFor a gentle introduction to the entire field of normative decision theory (including TDT), see Muehlhauser and Williamson’s [Decision Theory FAQ](http://lesswrong.com/lw/gu1/decision_theory_faq/).\n\n\nThe post [Altair’s Timeless Decision Theory Paper Published](https://intelligence.org/2013/04/19/altairs-timeless-decision-theory-paper-published/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-04-19T00:15:48Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "f550c0a95e8bdabfcc833c453846a7b7", "title": "MIRI’s April newsletter: Relaunch Celebration and a New Math Result", "url": "https://intelligence.org/2013/04/18/miri-april-newsletter-relaunch-celebration-and-a-new-math-result/", "source": "miri", "source_type": "blog", "text": "| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \n |\n\n |\n| \n\n\n\n\n| | | |\n| --- | --- | --- |\n| \n\n| | |\n| --- | --- |\n| \n\n| |\n| --- |\n| \nGreetings from The Executive Director\nDear 
friends,\nThese are exciting times at MIRI.\nAfter years of awareness-raising and capacity-building, we have finally transformed ourselves into a research institute focused on producing the mathematical research required to build trustworthy (or “human-friendly”) machine intelligence. As our most devoted supporters know, this has been our goal for roughly a decade, and it is a thrill to have made the transition.\nIt is also exciting to see how much more quickly one can get academic traction with mathematics research, as compared to philosophical research and technological forecasting research. Within *hours* of publishing a draft of [our first math result](http://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/), Fields Medalist Timothy Gowers had seen the draft and commented on it ([here](https://plus.google.com/117663015413546255805/posts/jJModdTJ2R3?hl=en)), along with several other professional mathematicians.\nWe celebrated our “relaunch” at an April 11th party in San Francisco. It was a joy to see old friends and make some new ones. You can see photos and read some details below.\nFor more detail on our new strategic priorities, see our blog post: [MIRI’s Strategy for 2013](http://intelligence.org/2013/04/13/miris-strategy-for-2013/).\nCheers,\nLuke Muehlhauser\nExecutive Director\n\n\n\nMIRI Relaunch Celebration in San Francisco\nOn April 11th, at [HUB San Francisco](http://sanfrancisco.impacthub.net/), MIRI celebrated its [name change](http://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) and its “[relaunch](http://intelligence.org/2013/04/13/miris-strategy-for-2013/)” as a mathematics research institute. The party was also a celebration of our ongoing [2nd research workshop](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/), featuring MIRI research fellow Eliezer Yudkowsky and 11 visiting researchers from North America and Europe. 
About 50 people attended the party.\nOur party included a short presentation by visiting researcher [Qiaochu Yuan](http://math.berkeley.edu/~qchu/) (UC Berkeley). Qiaochu (pronounced, as he likes to explain, “*chow* like food and *chew* also like food”) explained one of the open problems on MIRI’s research agenda: the [Löbian obstacle to self-modifying systems](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/). He explained why we’d want an AI to be able to trust its successor AIs, why [Löb’s Theorem](http://en.wikipedia.org/wiki/Lob%27s_theorem) is an obstacle to that, and how the [new probabilistic logic from our 1st research workshop](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/) might lead to a solution.\nIn addition to the usual food and drinks, our party was supplied with poster boards on easels so that the researchers in attendance could explain pieces of their work to anyone who was interested — or, people could just doodle. 🙂\nAdditional photos from the event will be published soon — stay tuned via [our blog](http://intelligence.org/blog/) or our [Facebook page](https://www.facebook.com/MachineIntelligenceResearchInstitute).\n\n\nMIRI’s First Math Result\nNovember 11-18, 2012, we held (what we now call) the *1st MIRI Workshop on Logic, Probability, and Reflection*. This workshop included [4 participants](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/), and resulted in the discovery of a kind of “loophole” in [Tarski’s undefinability theorem](http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem) (1936) which *may* lead to a solution for the [Löbian obstacle to trustworthy self-modification](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/). 
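For readers new to the “Löbstacle”: the obstacle rests on Löb’s theorem, a standard result in provability logic. Writing □P for “P is provable in the system,” the theorem can be stated as:

```latex
% Löb's theorem, with \Box P read as "P is provable":
\vdash \Box(\Box P \rightarrow P) \rightarrow \Box P
% Equivalently: if the system proves (\Box P \rightarrow P), it proves P.
```

So a system that proved blanket trust in its own (or its successor’s) proofs, i.e. □P → P for every sentence P, would by Löb’s theorem prove every sentence P, collapsing into inconsistency. This is the sense in which naive proof-based self-trust fails, and why a probabilistic loophole is of interest.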
We [published](http://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/) an early version of the paper explaining this result on March 22nd, and the latest draft lives here: [Definability of “Truth” in Probabilistic Logic](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf). The paper’s lead author is visiting researcher [Paul Christiano](http://rationalaltruist.com/) (UC Berkeley).\nEliezer’s post [Reflection in Probabilistic Set Theory](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/) explains the meaning of the result, and also comments on how the result was developed:\nPaul Christiano showed up with the idea (of consistent probabilistic reflection via a fixed-point theorem) to a week-long [MIRI research workshop] with Marcello Herreshoff, Mihaly Barasz, and myself; then we all spent the next week proving that version after version of Paul’s idea couldn’t work or wouldn’t yield self-modifying AI; until finally… it produced something that looked like it might work. If we hadn’t been trying to *solve* this problem… [then] this would be just another batch of impossibility results in the math literature. I remark on this because it may help demonstrate that Friendly AI is a productive approach to math *qua* math, which may aid some mathematician in becoming interested.\nThe participants of our ongoing *2nd MIRI Workshop on Logic, Probability, and Reflection* are continuing to develop this result to examine its chances for resolving the Löbian obstacle to trustworthy self-modification — or, as workshop participant Daniel Dewey (Oxford) called it, the “Löbstacle.”\n\n\nProofreaders Needed\nSeveral MIRI research articles are being held up from publication due to a lack of volunteer proofreaders, including Eliezer Yudkowsky’s “Intelligence Explosion Microeconomics.”\nWant to be a proofreader for MIRI? 
Here are some reasons to get involved:\n* Get a sneak peek at our [publications](http://intelligence.org/research/) before they become publicly available.\n* Earn points at [MIRIvolunteers.org](http://mirivolunteers.org/), our online volunteer system that runs on [Youtopia](http://www.youtopia.com/info/). (Even if you’re not interested in the points, tracking your time through Youtopia helps us manage and quantify the volunteer proofreading effort.)\n* Having polished and well-written publications is of high value to MIRI.\n* Help speed up our publication process. Proofreading is currently our biggest bottleneck.\n\n\nFor more details on how you can sign up as a MIRI proofreader, see [here](http://lesswrong.com/lw/h51/call_for_help_volunteers_needed_to_proofread/).\n\n\nFacing the Intelligence Explosion Published\n *[Facing the Intelligence Explosion](http://intelligenceexplosion.com)* is now available as an ebook! You can get it [here](http://intelligenceexplosion.com/ebook).\nIt is available as a “pay-what-you-want” package that includes the ebook in three formats: MOBI, EPUB, and PDF.\nIt is also available on Amazon Kindle ([US](https://www.amazon.com/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), [Canada](https://www.amazon.ca/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), [UK](https://www.amazon.co.uk/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), and most others) and the Apple iBookstore ([US](https://itunes.apple.com/us/book/facing-intelligence-explosion/id623915471?ls=1), [Canada](https://itunes.apple.com/ca/book/facing-intelligence-explosion/id623915471?ls=1), [UK](https://itunes.apple.com/gb/book/facing-intelligence-explosion/id623915471?ls=1) and most others).\nAll sources are DRM-free. 
Grab a copy, share it with your friends, and review it on Amazon or the iBookstore.\nAll proceeds go directly to funding the technical and strategic [research](http://intelligence.org/research/) of the [Machine Intelligence Research Institute](http://intelligence.org/).\n\n\nEfficient Charity Article\nIn 2011, Holden Karnofsky of [Givewell](http://www.givewell.org/) wrote a series of posts on the topic of “[efficient charity](http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/)“: how to get the most bang for your philanthropic buck. Karnofsky argued for a particular method of estimating the expected value of charitable donations, a method he called “Bayesian Adjustment.” Some readers interpreted this method as providing an *a priori* judgment that existential risk reduction charities (such as MIRI) could not be efficient uses of philanthropic dollars. (Karnofsky [denies](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/8nto) that interpretation.)\nKarnofsky’s argument is subtle and complicated, but important. Since MIRI is also interested in the subject of efficient charity, we worked with Steven Kaas to produce a reply to Karnofsky’s posts, titled [Bayesian Adjustment Does Not Defeat Existential Risk Charity](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/). We do not think this resolves our points of disagreement with Karnofsky, but it does move the dialogue one step forward. 
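In its simplest form, the “Bayesian Adjustment” idea is ordinary normal-normal shrinkage: a noisy expected-value estimate is pulled back toward a prior in proportion to its uncertainty. A minimal sketch follows; the numbers are invented for illustration and this is not the exact model from Karnofsky’s posts.

```python
# Normal-normal shrinkage: combine a prior over an intervention's value with a
# noisy expected-value estimate. All numbers below are invented illustrations,
# not figures from Karnofsky's posts.

def bayesian_adjustment(prior_mean: float, prior_var: float,
                        estimate: float, estimate_var: float) -> float:
    """Posterior mean when a normal prior meets a normal, noisy estimate."""
    w = prior_var / (prior_var + estimate_var)  # weight given to the estimate
    return prior_mean + w * (estimate - prior_mean)

# A huge claimed expected value with huge uncertainty barely moves the prior:
adjusted = bayesian_adjustment(prior_mean=1.0, prior_var=1.0,
                               estimate=1000.0, estimate_var=1e3)
print(round(adjusted, 2))  # prints 2.0: most of the claimed value is shrunk away
```

The disagreement is not over this arithmetic but over what prior is appropriate for high-stakes, hard-to-verify interventions such as existential-risk reduction.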
Karnofsky has since replied to our article in two comments ([one](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/8nto), [two](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/8ntq)), and we expect the dialogue will continue for some time.\n\n\nAppreciation of Ioven Fables\nDue to changes in MIRI’s operational needs resulting from our transition to more technical research, MIRI no longer requires a full-time executive assistant, and thus our current executive assistant Ioven Fables ([LinkedIn](http://www.linkedin.com/pub/ioven-fables/40/337/561)) will be stepping down this month. Ioven continues to support our mission, and he may perform occasional contracting work for us in the future.\nIt was a pleasure for me to work with Ioven over the past 11 months. He played a major role in transforming MIRI into a more robust and efficient organization, and his consistent cheer and professionalism will be missed. I recommend his services to anyone looking to hire someone to help with operations and development work at their organization or company.\nIoven: Thanks so much for your service to MIRI! I enjoyed working with you, and I wish you the best of luck.\nLuke Muehlhauser\n |\n\n |\n\n |\n\n |\n\n\n\n \n\n\nThe post [MIRI’s April newsletter: Relaunch Celebration and a New Math Result](https://intelligence.org/2013/04/18/miri-april-newsletter-relaunch-celebration-and-a-new-math-result/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-04-18T22:44:24Z", "authors": ["Jake"], "summaries": []} -{"id": "947cafb2d313fad891c55b8f0b911b22", "title": "MIRI’s Strategy for 2013", "url": "https://intelligence.org/2013/04/13/miris-strategy-for-2013/", "source": "miri", "source_type": "blog", "text": "This post is not a detailed strategic plan. For now, I just want to provide an update on **what MIRI is doing in 2013 and why**.\n\n\nOur mission remains the same. 
The creation of smarter-than-human intelligence [will likely be](https://intelligence.org/files/IE-EI.pdf) the most significant event in human history, and MIRI exists to help ensure that this event has a positive impact.\n\n\nStill, much has changed in the past year:\n\n\n* The short-term goals in our [August 2011 strategic plan](https://intelligence.org/files/strategicplan2011.pdf) were largely [accomplished](http://lesswrong.com/r/discussion/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/).\n* We [changed our name](http://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) from “The Singularity Institute” to “The Machine Intelligence Research Institute” (MIRI).\n* We were once doing three things — research, rationality training, and the Singularity Summit. Now we’re doing *one* thing: research. Rationality training was spun out to a separate organization, [CFAR](http://appliedrationality.org/), and the Summit was [acquired](http://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/) by Singularity University. We still co-produce the Singularity Summit with Singularity University, but this requires limited effort on our part.\n* After dozens of hours of strategic planning in January–March 2013, and with input from 20+ external advisors, **we’ve decided to (1) put less effort into public outreach, and to (2) shift our research priorities to Friendly AI math research**.\n\n\nIt’s this last pair of changes I’d like to explain in more detail below.\n\n\n\n### Less effort into public outreach\n\n\nIn the past, public outreach has been a major focus of MIRI’s efforts, in particular through the annual [Singularity Summit](http://intelligence.org/singularitysummit/). These efforts brought our mission to thousands of people and grew our networks substantially. 
But in 2013 we’ve decided to invest much less effort into public outreach, for two reasons:\n\n\n* It’s not clear that *additional* public outreach has high marginal value.\n* [FHI](http://www.fhi.ox.ac.uk/) at Oxford University has recently increased its public outreach efforts on the topic of human-friendly AI, and [CSER](http://cser.org/) at Cambridge University is beginning to do the same. Their outreach efforts benefit from elite university prestige that MIRI cannot match. (See, for example, the [November 2012 media coverage of CSER](http://lesswrong.com/lw/fnj/centre_for_the_study_of_existential_risk_cser_at/).)\n\n\n### Three kinds of research\n\n\nHistorically, MIRI has produced three kinds of research.\n\n\n**Expository research**. Some of our work consolidates and clarifies the strategic research previously only available in conversation with experts (e.g. at MIRI or [FHI](http://www.fhi.ox.ac.uk/)) or in a written but disorganized form (e.g. in mailing list archives). Our expository publications make it easier for researchers around the world to understand the current state of knowledge and build on it, but the task of organizing and clearly explaining previous work often requires a significant amount of research effort itself. Examples of this sort of work, from MIRI and from others, include [Chalmers (2010)](http://www.imprint.co.uk/singularity.pdf), [Muehlhauser & Helm (2013)](https://intelligence.org/files/IE-ME.pdf), [Muehlhauser & Salamon (2013)](https://intelligence.org/files/IE-EI.pdf), [Yampolskiy & Fox (2012)](https://intelligence.org/files/AGI-HMM.pdf), and (much of) [Nick Bostrom](http://nickbostrom.com/)’s forthcoming scholarly monograph on machine superintelligence.\n\n\n**Strategic research**. Probabilistically nudging the future away from bad outcomes and toward good outcomes is a tricky business. Prediction is hard, and the causal structure of the world is complex. 
Nevertheless we agree with Oxford’s [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) that careful research today can improve our chances of navigating the future successfully. (See Bostrom’s “[Technological Revolutions: Ethics and Policy in the Dark](http://www.nickbostrom.com/revolutions.pdf).”) Therefore, some of MIRI’s research (often in collaboration with FHI) has focused on improving our understanding of how technologies will evolve and which interventions available today are most promising: [Yudkowsky (2013)](https://intelligence.org/files/IEM.pdf); [Shulman & Bostrom (2012)](https://intelligence.org/files/HowHardIsAI.pdf); [Armstrong & Sotala (2012)](https://intelligence.org/files/PredictingAI.pdf); [Kaas et al. (2010)](https://intelligence.org/files/EconomicImplications.pdf); [Shulman (2010)](https://intelligence.org/files/WBE-Superorgs.pdf); [Rayhawk et al. (2009)](https://intelligence.org/files/ChangingTheFrame.pdf).\n\n\n**Friendly AI research**. One promising approach to mitigating AI risks (and all other catastrophic risks) is to build a stably self-improving AI with humane values — a “[Friendly AI](http://friendly-ai.com/)” or “FAI.”\n\n\nThere are two types of open problems in FAI theory: “philosophy problems” and “math problems.” FAI philosophy problems are so confusing to humanity that we don’t even know how to state them crisply at this time, such as the problem of [extrapolating human values](http://lesswrong.com/r/discussion/lw/g35/ideal_advisor_theories_and_personal_cev/). In contrast, FAI math problems *can* be stated crisply enough to be math problems, e.g. the [Löbian obstacle to self-modifying systems](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/). 
Our hope is that in time, all FAI philosophy problems will be clarified into math problems, as has happened to many philosophical questions before them: see [Kolmogorov (1965)](http://commonsenseatheism.com/wp-content/uploads/2013/04/Kolmogorov-Three-Approaches-to-the-Quantiative-Definition-of-Information.pdf) on complexity and simplicity, Solomonoff ([1964a](http://commonsenseatheism.com/wp-content/uploads/2011/12/Solomonoff-A-formal-theory-of-inductive-inference-part-1.pdf), [1964b](http://commonsenseatheism.com/wp-content/uploads/2011/12/Solomonoff-A-formal-theory-of-inductive-inference-part-2.pdf)) on induction, [Von Neumann and Morgenstern (1947)](http://en.wikipedia.org/wiki/Theory_of_Games_and_Economic_Behavior) on rationality, [Shannon (1948)](http://makseq.com/materials/lib/Articles-Books/General/InformationTheory/p3-shannon.pdf) on information, and Tennenholtz’s development of “program equilibrium” from Hofstadter’s “superrationality” (for an overview, see [Woolridge 2012](http://commonsenseatheism.com/wp-content/uploads/2013/04/Woolridge-Computation-and-the-Prisoners-Dilemma.pdf)).\n\n\nSome examples of FAI philosophy research are [Muehlhauser & Williamson (2012)](http://lesswrong.com/r/discussion/lw/g35/ideal_advisor_theories_and_personal_cev/); [Yudkowsky (2010)](https://intelligence.org/files/TDT.pdf); [Yudkowsky (2004)](https://intelligence.org/files/CEV.pdf). Some examples of FAI math research are [Christiano et al. (2013)](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf); [LaVictoire et al. 
(2013)](https://intelligence.org/files/RobustCooperation.pdf); Dewey ([2011](https://intelligence.org/files/LearningValue.pdf), [2012](http://www.danieldewey.net/representation-theorem-for-decisions-about-causal-models.pdf)); [de Blanc (2011)](https://intelligence.org/files/OntologicalCrises.pdf).\n\n\n### A shift to FAI math research\n\n\nMIRI’s expository research has been highly useful — not because many people encounter our work in academic journals or books, but because thousands of people read these succinct explanations of our research mission upon encountering our website, and because we send these papers to dozens of personal contacts each month after they express an interest in our work.\n\n\nBut **it’s not clear that *additional* expository work, of the kind we can easily purchase, is of high value** after (1) the expository work MIRI and others have done so far, (2) Sotala & Yampolskiy’s forthcoming survey article on proposals for handling AI risk, and (3) Bostrom’s forthcoming book on machine superintelligence. Thus, we decided not to invest much in expository research in 2013.\n\n\nWhat about strategic research? We believe additional strategic research has high value if it is of high quality. One of our staff researchers (Carl Shulman) will spend nearly all of 2013 on strategic research, and another staff researcher (Eliezer Yudkowsky) spent most of January–March on strategic research (specifically, intelligence explosion microeconomics). We are also pleased to see FHI’s strategic research on the subject, for example in Nick Bostrom’s forthcoming book.\n\n\nStill, **strategic research will consume a minority of our research budget in 2013** because:\n\n\n* Valuable strategic research on AI risk reduction is difficult to purchase. Very few people have the degree of domain knowledge and analytic ability to contribute. 
Moreover, it’s difficult for others to “catch up,” because most of the analysis that has been done hasn’t been written up clearly. (Bostrom’s book should help with that, though.)\n* MIRI has a comparative advantage in Friendly AI research. MIRI’s [Eliezer Yudkowsky](http://yudkowsky.net/) has done more than anyone to develop the technical side of Friendly AI theory, and MIRI now acts as a hub for Friendly AI research, for example by hosting [workshops](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/) focused on FAI math research.\n* Math research can get academic “traction” more easily than strategic research can. Strategic research on AI risk often fails to get much academic traction because it tends to be interdisciplinary, necessarily speculative, and dependent on several assumptions that may not be shared by other researchers (e.g. [causal functionalism](http://is.gd/7Wfiuh), [AI timelines agnosticism](http://lesswrong.com/lw/h3w/open_thread_april_115_2013/8p4r), [value fragility](http://intelligenceexplosion.com/2012/value-is-complex-and-fragile/), or the [orthogonality thesis](http://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/)). In contrast, any mathematician with the right background knowledge can grok a crisply stated math problem very quickly, and they may be interested in working on it whether or not they agree with MIRI that it’s a *socially important* problem. 
Within *hours* of [posting](http://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/) a draft of [a recent math result](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf) to our blog, Fields medalist Timothy Gowers had seen the draft and commented on it ([here](https://plus.google.com/117663015413546257905/posts/jJModdTJ2R3?hl=en)), along with several other professional mathematicians.\n\n\nFinally: why did we choose to prioritize FAI math research over FAI philosophy research, for 2013? Our reasons are similar to our reasons for focusing on FAI math research over strategic research: (1) valuable FAI philosophy research is difficult to purchase, and (2) math research can get academic traction more easily than philosophy research can.\n\n\n### Math research activities in 2013\n\n\nWhich specific actions will we take in 2013 to produce FAI math research?\n\n\n* **We will host several math research workshops.** The first MIRI research workshop (4 participants), in November 2012, was surprisingly productive. It led to the production of a [new probabilistic logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/) that serves as a “loophole” in Tarski’s undefinability theorem (1936), and also to an early-form probabilistic set theory. Our second MIRI research workshop (12 participants) is currently ongoing. 
Additional workshops this year will probably have 4-8 participants, will cost only ~$5000/ea, and will probably produce further FAI math research progress while also allowing MIRI to test many hypotheses about how to efficiently produce such progress.\n* **Eliezer will describe several open math problems in Friendly AI theory.** Eliezer is currently drafting an explanation of the Löbian obstacle to self-modifying systems, and may write explanations of some other open problems, so that mathematicians can see what open math problems in FAI theory are available to work on.\n* **We will host several visiting fellows.** A [visiting fellowship](http://intelligence.org/visitingfellow/) at MIRI is often the best way to get “up to speed” on MIRI’s mathematical research agenda, and mathematically-inclined researchers are [encouraged to apply](http://intelligence.org/visitingfellow/).\n* **We *may* hire new mathematical researchers, but we might not.** We are somewhat funding limited when it comes to hiring new researchers. More to the point, we think [Lean Nonprofit](http://intelligence.org/2013/04/04/the-lean-nonprofit/) principles are important. That is, we think it’s important to rapidly and cheaply test hypotheses about how to produce FAI math research efficiently, and running small research workshops with a variety of structures and a variety of researchers is better for that than hiring is. 
We are more likely to hire new researchers after we have more evidence about how best to efficiently produce FAI math research.\n\n\n### How you can help\n\n\nIf you know any smart, productive mathematicians with [the right kind of background](http://intelligence.org/courses/) to contribute to our work, please encourage them to contact us (malo@intelligence.org) about our [research workshops](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/), [visiting fellowships](http://intelligence.org/visitingfellow/), and [research positions](http://intelligence.org/research-fellow/).\n\n\nYou can also support us [financially](http://intelligence.org/donate/) or as a [volunteer](http://mirivolunteers.org/).\n\n\nThe post [MIRI’s Strategy for 2013](https://intelligence.org/2013/04/13/miris-strategy-for-2013/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-04-13T21:42:59Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a2e03f68208373e108ab5c23c4b55676", "title": "Facing the Intelligence Explosion ebook", "url": "https://intelligence.org/2013/04/13/facing-the-intelligence-explosion-ebook/", "source": "miri", "source_type": "blog", "text": "[![](http://intelligenceexplosion.com/wp-content/uploads/2013/04/FtIE-Cover-300px.png)](http://intelligenceexplosion.com/ebook)\n*[Facing the Intelligence Explosion](http://intelligenceexplosion.com)* is now [available as an ebook](http://intelligenceexplosion.com/ebook)!\n\n\nYou can get it [here](http://intelligenceexplosion.com/ebook). 
It is available as a “pay-what-you-want” package that includes the ebook in three formats: MOBI, EPUB, and PDF.\n\n\nIt is also available on Amazon Kindle ([US](https://www.amazon.com/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), [Canada](https://www.amazon.ca/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), [UK](https://www.amazon.co.uk/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), and most others) and the Apple iBookstore ([US](https://itunes.apple.com/us/book/facing-intelligence-explosion/id623915471?ls=1), [Canada](https://itunes.apple.com/ca/book/facing-intelligence-explosion/id623915471?ls=1), [UK](https://itunes.apple.com/gb/book/facing-intelligence-explosion/id623915471?ls=1) and most others).\n\n\nAll sources are DRM-free. Grab a copy, share it with your friends, and review it on Amazon or the iBookstore.\n\n\nAll proceeds go directly to funding the technical and strategic [research](http://intelligence.org/research/) of the [Machine Intelligence Research Institute](http://intelligence.org/).\n\n\nThe post [Facing the Intelligence Explosion ebook](https://intelligence.org/2013/04/13/facing-the-intelligence-explosion-ebook/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-04-13T17:17:39Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "0f90cbb61b8fd2c89633d518bf50facd", "title": "The Lean Nonprofit", "url": "https://intelligence.org/2013/04/04/the-lean-nonprofit/", "source": "miri", "source_type": "blog", "text": "Can [Lean Startup](http://theleanstartup.com/) methods work for nonprofits?\n\n\n*[The Lean Startup](http://www.amazon.com/The-Lean-Startup-Entrepreneurs-Continuous/dp/0307887898)*‘s author Eric Ries seems to think so:\n\n\n\n> A startup is a human institution designed to create a new product or service under conditions of extreme uncertainty… Anyone who is creating a new 
product or business under conditions of extreme uncertainty is an entrepreneur whether he or she knows it or not, and whether working in a government agency, a venture-backed company, a nonprofit, or a decidedly for-profit company with financial investors.\n> \n> \n\n\nIn the past year, I helped launch one new nonprofit ([Center for Applied Rationality](http://appliedrationality.org/)), I massively overhauled one older nonprofit ([MIRI](http://intelligence.org/)), and I consulted with many nonprofit CEOs and directors. Now I’d like to share some initial thoughts on the idea of a “Lean Nonprofit.”\n\n\n\n### Tight Feedback Loops\n\n\nTight feedback loops are a key component of the Lean Startup approach. I’ll summarize Ries’ recommended plan for a new organization into three steps:\n\n\n1. **Launch your “minimum viable product” as quickly as possible, to start learning.** Example: [Zappos](http://www.zappos.com/) founder Nick Swinmurn set out to test the hypothesis that customers wanted to buy shoes online. Rather than building a website with a large database of footwear, Swinmurn approached local shoe stores, took pictures of their inventory, posted the pictures online, and bought the shoes from the stores at full price if customers purchased the shoe through his website. This particular setup couldn’t scale, but it helped Swinmurn learn whether his product was viable.\n2. **Learn precisely what your customers want — not by asking them, but by carrying out lots of experiments, including [A/B testing](http://en.wikipedia.org/wiki/A/B_testing).** Example: When [Dropbox](https://www.dropbox.com) co-founder Drew Houston asked people if they wanted effortless file synchronization, most of them said “no” or couldn’t understand the concept when it was explained. But it turns out the customers didn’t know what they wanted. Early versions of the product were a hit, and millions of people suddenly learned they wanted effortless file synchronization. 
(See also: [Is That Your True Rejection?](http://lesswrong.com/lw/wj/is_that_your_true_rejection/))\n3. **When your learning demands it, pivot.** Example: Ries spent 6 months writing code that allowed his startup’s 3D instant messaging software [IMVU](http://www.imvu.com/) to interact with popular IM networks, on the theory that users wouldn’t want to learn new software and create new buddy lists. When they finally tested the product, they learned that users wanted a standalone IM network, and they didn’t mind learning new IM software. IMVU’s creators pivoted, throwing out thousands of lines of code for IM network interoperability.\n\nSo, can this approach work for nonprofits?\n\n\n### Profits vs. Mission\n\n\nIn a telling paragraph, Ries says:\n\n\n\n> The goal of a startup is to figure out the right thing to build — the thing customers want and will pay for — as quickly as possible.\n> \n> \n\n\nThe goal of a company is to make a profit. That simplifies things. A for-profit startup can run experiments until it learns which products and services customers will pay for, then it can develop those products and services and make a profit.\n\n\nBut a nonprofit startup which optimizes purely for growth at the expense of achieving its founding mission is often said to have “lost its way” and been caught in [lost purposes](http://lesswrong.com/lw/le/lost_purposes/). Distributing excess Super Bowl t-shirts to Zambia [makes for good press](http://blog.worldvision.org/partnerships/100000-reasons-to-love-the-super-bowl/) and [helps World Vision lower its overhead ratio](http://goodintents.org/uncategorized/world-vision-the-new-100000-shirts), but [probably doesn’t](http://aidwatchers.com/2011/02/in-zambia-pittsburgh-won/) *help humans* efficiently.\n\n\nBut if we read Ries charitably, we might think he solves the problem with his distinction between vision, strategy, and product:\n\n\n\n> Startups… have a destination in mind… I call that a startup’s *vision*. 
To achieve that vision, startups employ a *strategy*, which includes a business model, a product road map, a point of view about partners and competitors, and ideas about who the customer will be. The *product* is the end result of this strategy.\n> \n> \n> Products change constantly through the process of optimization… Less frequently, the strategy may have to change (called a pivot). However, the overarching vision rarely changes. Entrepreneurs are committed to seeing the startup through to that destination.\n> \n> \n> …We [at IMVU] adopted the view that our job was to find a synthesis between our vision and what customers would accept; it wasn’t to capitulate to what customers thought they wanted or to tell customers what they ought to want.\n> \n> \n\n\nIt seems to me a nonprofit can stick to its original mission (its *vision*) while using tight feedback loops to figure out which particular strategies and products will best enable it to achieve its mission.\n\n\nThe post [The Lean Nonprofit](https://intelligence.org/2013/04/04/the-lean-nonprofit/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-04-04T03:22:03Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "6a529798568acc45239f205b5ce7ad27", "title": "Early draft of naturalistic reflection paper", "url": "https://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/", "source": "miri", "source_type": "blog", "text": "Update: See [Reflection in Probabilistic Logic](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/) for more details on how this result relates to MIRI’s research mission.\n\n\nIn a recent [blog post](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/) we described one of the results of our 1st MIRI Workshop on Logic, Probability, and Reflection:\n\n\n\n> The participants worked on the foundations of probabilistic reflective reasoning. 
In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning.\n> \n> \n\n\nIn short, the result described is a “loophole” in [Tarski’s undefinability theorem](http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem) (1936).\n\n\nAn early draft of the paper describing this result is now available: **[download it here](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf)**. Its authors are Paul Christiano (UC Berkeley), Eliezer Yudkowsky (MIRI), Marcello Herreshoff (Google), and Mihály Bárász (Google). An excerpt from the paper is included below:\n\n\n\n> Unfortunately, it is impossible for any expressive language to contain its own truth predicate True*…*\n> \n> \n> There are a few standard responses to this challenge.\n> \n> \n> The first and most popular is to work with meta-languages…\n> \n> \n> A second approach is to accept that some sentences, such as the liar sentence *G*, are neither true nor false…\n> \n> \n> Although this construction successfully dodges the “undefinability of truth” it is somewhat unsatisfying. There is no predicate in these languages to test if a sentence… is undefined, and there is no bound on the number of sentences which remain undefined. In fact, if we are specifically concerned with self-reference, then a great number of properties of interest (and not just pathological counterexamples) become undefined.\n> \n> \n> In this paper we show that it is possible to perform a similar construction over probabilistic logic. Though a language cannot contain its own truth predicate True, it can nevertheless contain its own “subjective probability” function *P*. 
The assigned probabilities can be reflectively consistent in the sense of an appropriate analog of the reflection property 1. In practice, most meaningful assertions must already be treated probabilistically, and very little is lost by allowing some sentences to have probabilities intermediate between 0 and 1.\n> \n> \n\n\nAnother paper showing an application of this result to set theory is forthcoming.\n\n\nThe post [Early draft of naturalistic reflection paper](https://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-03-22T03:28:16Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "adca08d631799fede2b2e2a799c33b60", "title": "March Newsletter", "url": "https://intelligence.org/2013/03/07/march-newsletter/", "source": "miri", "source_type": "blog", "text": "Greetings From The Executive Director\nFriends,\nAs previously [announced](http://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) on our blog, the Singularity Institute has been renamed as the **Machine Intelligence Research Institute (MIRI)**. Naturally, both our staff and our supporters have positive associations with our original name, the “Singularity Institute.” As such, *any* new name will feel strange for a time. 
However, “MIRI” has sounded better and better to us over the past several weeks, and we think it will grow on you, too.\nSome will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general. University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).\n“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”\nSee our new website at [Intelligence.org](http://intelligence.org/). The site guide is [here](http://intelligence.org/2013/02/28/welcome-to-intelligence-org/).\nOur emails have changed, too. Be sure to **update your email Contacts list** with our new email addresses, e.g. luke@intelligence.org. Our previous email addresses at singinst.org and singularity.org no longer work. You can see all our new email addresses on the [Team](http://intelligence.org/team/) page.\nCheers,\nLuke Muehlhauser\nExecutive Director\n\n\n\nUpcoming MIRI Research Workshops\nFrom November 11-18, 2012, we held (what we now call) the **1st MIRI Workshop on Logic, Probability, and Reflection**. The four workshop participants ([Eliezer Yudkowsky](http://yudkowsky.net/), [Paul Christiano](http://rationalaltruist.com/), Marcello Herreshoff, and Mihály Bárász) worked on the foundations of probabilistic reflective reasoning. 
In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning. Applied to set theory, this framework provides an implementation of probabilistic set theory based on unrestricted comprehension which is nevertheless powerful enough to formalize ordinary mathematical reasoning (in contrast with similar fuzzy set theories, which were originally proposed for this purpose but later discovered to be incompatible with mathematical induction).\nThese results suggest a similar approach may be used to work around Löb’s theorem, but this has not yet been explored. This work will be written up over the coming months.\nIn the meantime, MIRI is preparing for the **2nd MIRI Workshop on Logic, Probability, and Reflection**, to take place from April 3-24, 2013. For more details, see the relevant [blog post](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/).\nAdditional MIRI research workshops are also tentatively planned for the summer and fall of 2013.\n\n\nWinter Fundraiser Success!\nThanks to our dedicated supporters, we met our goal for our [2012 Winter Fundraiser](http://intelligence.org/2013/01/20/2012-winter-matching-challenge-a-success/). 
Thank you!\nThe fundraiser ran for 45 days, from December 6, 2012 to January 20, 2013.\nWe met our $115,000 goal, raising a total of $230,000 for our operations in 2013.\n\n\nCourse Recommendations for MIRI Researchers\nMIRI Deputy Director Louie Helm has prepared a list of [Recommended Courses for MIRI Researchers](http://intelligence.org/courses/), which answers the question “What should a researcher study if they want to equip themselves to tackle the technical problems on MIRI’s research agenda?” This new page provides a list of subjects to study, along with textbook recommendations, online course recommendations, and recommended courses at particular universities (UC Berkeley, Stanford, MIT, and CMU).\n\n\nDecision Theory FAQ\nIf you want future AIs to cooperate in real-world [prisoner’s dilemmas](http://en.wikipedia.org/wiki/Prisoner%27s_dilemma), you’d better hope they’re not using any of the standard decision algorithms discussed in philosophy and computer science journals. For this reason and others, decision theory represents a major focus of MIRI’s research agenda (for example see [Yudkowsky 2010](https://intelligence.org/files/TDT.pdf)).\nTo help clarify some common confusions about decision theory and encourage more researchers to tackle these problems, MIRI Executive Director Luke Muehlhauser wrote a [Decision Theory FAQ](http://lesswrong.com/lw/gu1/decision_theory_faq/) for the website *Less Wrong*. 
It is by far the most comprehensive decision theory FAQ on the internet, and [section 11](http://lesswrong.com/lw/gu1/decision_theory_faq/#what-about-newcombs-problem-and-alternative-decision-algorithms) is an especially handy summary of how different decision algorithms perform on a battery of standard problems from the literature ([Newcomb’s Problem](http://lesswrong.com/lw/gu1/decision_theory_faq/#newcombs-problem), [Medical Newcomb’s Problem](http://lesswrong.com/lw/gu1/decision_theory_faq/#medical-newcomb-problems), Egan’s [Psychopath Button](http://lesswrong.com/lw/gu1/decision_theory_faq/#the-psychopath-button), [Parfit’s Hitchhiker](http://lesswrong.com/lw/gu1/decision_theory_faq/#parfits-hitchhiker), the [Prisoner’s Dilemma](http://lesswrong.com/lw/gu1/decision_theory_faq/#prisoners-dilemma), and more).\n\n\nBrief History of Ethically Concerned Scientists\nIn 1956, Norbert Wiener wrote that “For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions.” Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous.\nTo celebrate the scientists who took seriously the potential social consequences of their work, and to make it easier for others to write about scientists’ social responsibility, MIRI researcher Kaj Sotala published [A Brief History of Ethically Concerned Scientists](http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/). 
Click through to learn about:\n* **John Napier** (1550-1617), who discovered a deadly new form of artillery, but kept its details a secret so that its destructive power could not be wielded.\n* **Lewis Fry Richardson** (1881-1953), who turned down an invitation to optimize the spread of poison gas for the British military, destroyed his unpublished research, left meteorology, and began to study the causes of war instead, hoping to reduce armed conflict.\n* **Leó Szilárd** (1898-1964), who discovered the nuclear chain reaction but arranged for his patent details to be kept secret so they could not be used by Germany to develop atomic bombs, and later campaigned against nuclear proliferation.\n* **Joseph Rotblat** (1908-2005), who left the Manhattan Project over ethical concerns with the atomic bomb and campaigned against nuclear proliferation.\n\n\nand many others.\n\n\nExistential Risk Covered in Aeon Magazine\n We don’t mention each new article about [existential risk](http://www.existential-risk.org/) or [AI risk](https://intelligence.org/files/ReducingRisks.pdf), but [this one](http://www.aeonmagazine.com/world-views/ross-andersen-human-extinction/) by Ross Andersen in *Aeon Magazine* is particularly good. It’s based largely on the work of [Nick Bostrom](http://nickbostrom.com/) at Oxford University, a frequent collaborator with MIRI researchers (e.g. “[The Ethics of Artificial Intelligence](https://intelligence.org/files/EthicsofAI.pdf)“). Bostrom is currently writing a scholarly monograph on machine superintelligence, and Andersen’s article properly highlights the centrality of AI risk. 
The piece also includes snippets of a conversation with MIRI research associate [Daniel Dewey](http://www.danieldewey.net/) (author of “[Learning What to Value](https://intelligence.org/files/LearningValue.pdf)“).\nWe also recommend Bostrom’s new article “[Existential Risk Prevention as Global Priority](http://www.existential-risk.org/concept.pdf),” forthcoming in *Global Policy*.\n\n\nMetaMed Launches\n Former MIRI President Michael Vassar’s new personalized medicine company has finally launched: behold [MetaMed](http://metamed.com/)! MetaMed offers personalized medical research for patients who want to make sure their treatment is informed by the very latest medical breakthroughs. Eliezer Yudkowsky [introduced](http://lesswrong.com/lw/gvi/metamed_evidencebased_healthcare/) the company thusly:\n\nIn a world where 85% of doctors can’t solve [simple Bayesian word problems](http://library.mpib-berlin.mpg.de/ft/ps/PS_Teaching_2001.pdf)…\nIn a world where only 20.9% of reported results that a pharmaceutical company tries to investigate for development purposes, [fully replicate](http://online.wsj.com/article/SB10001424052970203764804577059841672541590.html)…\nIn a world where “[p-values](http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequently_subjective/)” are [anything the author wants them to be](http://biomet.oxfordjournals.org/content/77/3/467.abstract)…\n…and where there are [all sorts of amazing technologies and techniques](http://www.cnn.com/2010/HEALTH/09/09/pinky.regeneration.surgery/index.html) which nobody at your hospital has ever heard of…\n…there’s also [MetaMed](http://metamed.com/).
Instead of just having “evidence-based medicine” in journals that doctors don’t actually read, MetaMed will provide you with actual evidence-based healthcare… If you have a sufficiently serious problem and can afford their service, MetaMed will (a) put someone on reading the relevant research literature who understands real statistics and can tell whether the paper is trustworthy; and (b) refer you to a cooperative doctor in their network who can carry out the therapies they find.\nMetaMed was partially inspired by the case of a woman who had her fingertip chopped off, was told by the hospital that she was screwed, and then read through an awful lot of literature on her own until she found someone working on an advanced regenerative therapy that let her actually [grow the fingertip back](http://www.cnn.com/2010/HEALTH/09/09/pinky.regeneration.surgery/index.html). The idea behind MetaMed isn’t just that they will scour the literature to find how the best experimentally supported treatment differs from the average wisdom… but that they will also look for this sort of very recent technology that most hospitals won’t have heard about.\n\n\n\nAn Appreciation of Michael Anissimov\n Due to Singularity University’s [acquisition](http://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/) of the [Singularity Summit](http://intelligence.org/singularitysummit/) and some major changes to MIRI’s public communications strategy, Michael Anissimov left MIRI in January 2013. Michael continues to support our mission and continues to volunteer for us.\nIt was a pleasure for me to work with Michael during our overlapping time at MIRI. 
Michael played a major role in “onboarding” me at MIRI and helping me to understand the history and culture of MIRI’s community, and he worked very hard on the Singularity Summit and on our 2012 efforts to transform MIRI into a more effective organization in general.\nI owe Michael much gratitude for his many, many years of service to MIRI, and in particular for helping to build up the Singularity Summit to the point where it was acquired, and for applying himself (of his own accord) to the tasks that he saw needed to be done — for example in taking up MIRI’s public communications mantle when he saw that there was a gap in our operations.\nMichael: Thanks so much for your service to MIRI! I enjoyed working with you, and I wish you the best of luck on your future adventures.\nLuke Muehlhauser\n\n\nThe post [March Newsletter](https://intelligence.org/2013/03/07/march-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-03-07T17:47:00Z", "authors": ["Jake"], "summaries": []} -{"id": "63e03178a1976355a00ec7c1a9a79bb4", "title": "Upcoming MIRI Research Workshops", "url": "https://intelligence.org/2013/03/07/upcoming-miri-research-workshops/", "source": "miri", "source_type": "blog", "text": "From November 11-18, 2012, we held (what we now call) the **1st MIRI Workshop on Logic, Probability, and Reflection**. This workshop had four participants:\n\n\n* [Eliezer Yudkowsky](http://yudkowsky.net/) (MIRI)\n* [Paul Christiano](http://rationalaltruist.com/) (UC Berkeley)\n* Marcello Herreshoff (Google)\n* Mihály Bárász (Google)\n\n\nThe participants worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference.
Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning. Applied to set theory, this framework provides an implementation of probabilistic set theory based on *unrestricted* comprehension which is nevertheless powerful enough to formalize ordinary mathematical reasoning (in contrast with similar fuzzy set theories, which were originally proposed for this purpose but later discovered to be incompatible with mathematical induction).\n\n\nThese results suggest a similar approach may be used to work around [Löb’s theorem](http://en.wikipedia.org/wiki/Lob%27s_theorem), but this has not yet been explored. This work will be written up over the coming months.\n\n\nIn the meantime, MIRI is preparing for the **2nd MIRI Workshop on Logic, Probability, and Reflection**, to take place from April 3-24, 2013. This workshop will be broken into two sections. The first section (Apr 3-11) will bring together the 1st workshop’s participants and 8 additional participants:\n\n\n* [Stuart Armstrong](http://www.fhi.ox.ac.uk/our_staff/research/stuart_armstrong) (Oxford University)\n* [Daniel Dewey](http://www.danieldewey.net/) (Oxford University)\n* Benja Fallenstein (University of Vienna)\n* [Patrick LaVictoire](http://www.math.wisc.edu/~patlavic/) (UW Madison)\n* [Jacob Steinhardt](http://cs.stanford.edu/~jsteinhardt/) (Stanford University)\n* [Qiaochu Yuan](http://math.berkeley.edu/~qchu/) (UC Berkeley)\n* [Andrew Critch](http://www.acritch.com/) (UC Berkeley)\n* [Jacob Taylor](http://www.stanford.edu/~jacobt/) (Stanford)\n\n\nThe second section (Apr 12-24) will consist solely of the 4 participants from the 1st workshop.\n\n\nParticipants of this 2nd workshop will continue to work on the foundations of reflective reasoning, for example Gödelian obstacles to reflection, and decision algorithms for reflective agents (e.g. 
[TDT](https://intelligence.org/files/TDT.pdf)).\n\n\nAdditional MIRI research workshops are also tentatively planned for the summer and fall of 2013.\n\n\n**Update:** An early draft of the paper describing the first result from the 1st workshop is now available [here](http://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/).\n\n\nThe post [Upcoming MIRI Research Workshops](https://intelligence.org/2013/03/07/upcoming-miri-research-workshops/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-03-07T04:12:06Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "253a28c5523c9a0f97db3fbbfd5b9deb", "title": "Welcome to Intelligence.org", "url": "https://intelligence.org/2013/02/28/welcome-to-intelligence-org/", "source": "miri", "source_type": "blog", "text": "Welcome to the new home for the Machine Intelligence Research Institute (MIRI), formerly called “The Singularity Institute.”\n\n\nThe new design (from Katie Hartman, who also designed the new site for [CFAR](http://appliedrationality.org/)) reflects our recent [shift in focus](https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) from “movement-building” to technical research. 
Our research and our research advisors are featured prominently on the [home page](http://Intelligence.org), and our network of research associates is included on the [Team](https://intelligence.org/team/) page.\n\n\n[Getting involved](https://intelligence.org/get-involved/) is also clearer, with easy-to-find pages for applying to be a [volunteer](http://singularityvolunteers.org/), an [intern](https://intelligence.org/interns/), a [visiting fellow](https://intelligence.org/visitingfellow/), or a [research fellow](https://intelligence.org/research-fellow/).\n\n\nOur [About](https://intelligence.org/about/) page hosts things like our [transparency page](https://intelligence.org/transparency/), our [top contributors list](https://intelligence.org/topcontributors/), our [new press kit](https://intelligence.org/files/MIRI_PressKit.pdf), and our [archive](https://intelligence.org/singularitysummit/) of all Singularity Summit talk videos, audio, and transcripts from 2006-2012. (The Summit was recently [acquired](http://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/) by Singularity University.)\n\n\nFollow our [blog](https://intelligence.org/blog/) to keep up with the latest news and analyses.
Recent analyses include [Yudkowsky on Logical Uncertainty](https://intelligence.org/2013/01/30/yudkowsky-on-logical-uncertainty/) and [Yudkowsky on “What Can We Do Now?”](https://intelligence.org/2013/01/30/yudkowsky-on-what-can-we-do-now/)\n\n\nWe’ll be adding additional content in the next few months, so stay tuned!\n\n\nThe post [Welcome to Intelligence.org](https://intelligence.org/2013/02/28/welcome-to-intelligence-org/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-02-28T02:00:38Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "3efbe764fdf809dd7d9b7c23ce5526fa", "title": "We are now the “Machine Intelligence Research Institute” (MIRI)", "url": "https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/", "source": "miri", "source_type": "blog", "text": "When Singularity University (SU) [acquired](http://singularityu.org/singularity-university-acquires-the-singularity-summit/) the Singularity Summit from us in December, we also agreed to change the name of our institute to avoid brand confusion between the Singularity Institute and Singularity University. After much discussion and market research, we’ve chosen our new name. We are now the **Machine Intelligence Research Institute** (MIRI).\n\n\nNaturally, both our staff members and supporters have positive associations with our original name, the “Singularity Institute for Artificial Intelligence,” or “Singularity Institute” for short. As such, *any* new name will feel strange for a time. 
However, “MIRI” has sounded better and better to us over the past few weeks, and we think it will grow on you, too.\n\n\nSome will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general.\n\n\nUniversity departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location). For example:\n\n\n* [Center for Intelligent Systems](http://www.eecs.berkeley.edu/CIS/) (CIS) at UC Berkeley\n* [Stanford AI Laboratory](http://ai.stanford.edu/) (SAIL)\n* [Santa Fe Institute](http://www.santafe.edu/) (SFI)\n* [Institute for Advanced Study](http://www.ias.edu/) (IAS)\n* MIT [Computer Science and Artificial Intelligence Laboratory](http://www.csail.mit.edu/) (CSAIL)\n\n\n“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”\n\n\nWe’ll be operating from [Singularity.org](http://singularity.org/) for a little while longer, but sometime before March 5th we’ll launch a new website, under the new name, at a new domain name: [Intelligence.org](http://intelligence.org/). (We have thus far been unable to acquire some other fitting domain names, including miri.org.)\n\n\nWe’ll let you know when we’ve moved our email accounts to the new domain. 
All existing newsletter subscribers will continue to receive our newsletter after the name change.\n\n\nMany thanks again to all our supporters who are sticking with us through this transition in branding (from SIAI to MIRI) and our transition in activities (a singular focus on research after passing our rationality work to [CFAR](http://appliedrationality.org/) and the Summit to [SU](http://singularityu.org/)). We hope you’ll come to like our new name as much as we do!\n\n\n![Machine Intelligence Research Institute](http://i.imgur.com/fxh7ZnK.png)\nThe post [We are now the “Machine Intelligence Research Institute” (MIRI)](https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-01-31T06:11:31Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "9ed711f69c0d137839b9728e2e38ab64", "title": "Yudkowsky on Logical Uncertainty", "url": "https://intelligence.org/2013/01/30/yudkowsky-on-logical-uncertainty/", "source": "miri", "source_type": "blog", "text": "A paraphrased transcript of a conversation with Eliezer Yudkowsky.\n\n\n**Interviewer**: I’d love to get a clarification from you on one of the “open problems in Friendly AI.” The logical uncertainty problem that Benja Fallenstein [tackled](http://lesswrong.com/lw/eaa/a_model_of_udt_with_a_concrete_prior_over_logical/) had to do with having uncertainty over logical truths that an agent didn’t have enough computation power to deduce. But: I’ve heard a couple of different things called the “problem of logical uncertainty.” One of them is the “neutrino problem,” that if you’re a Bayesian you shouldn’t be 100% certain that 2 + 2 = 4. 
Because neutrinos might be screwing with your neurons at the wrong moment, and screw up your beliefs.\n\n\n**Eliezer**: See also [How to convince me that 2 + 2 = 3](http://lesswrong.com/lw/jr/how_to_convince_me_that_2_2_3/).\n\n\n**Interviewer**: Exactly. Even within a probabilistic system like a [Bayes net](http://en.wikipedia.org/wiki/Bayesian_network), there are components of it that are deductive, e.g., certain parts must sum to a probability of one, and there are other logical assumptions built into the structure of a Bayes net, and an AI might want to have uncertainty over those. This is what I’m calling the “neutrino problem.” I don’t know how much of a problem you think that is, and how related it is to the thing that you usually talk about when you talk about “the problem of logical uncertainty.”\n\n\n**Eliezer**: I think there’s two issues. One issue comes up when you’re running programs on noisy processors, and it seems like it should be fairly straightforward for human programmers to run with sufficient redundancy, and do sufficient checks to drive an error probability down to almost zero. But that decreases efficiency a lot compared to the kind of programs you could probably write if you were willing to accept probabilistic outcomes when reasoning about their expected utility.\n\n\nThen there’s this large, open problem of a Friendly AI’s criterion of action and criterion of self-modification, where all my current ideas are still phrased in terms of proving things correct after you drive error probabilities down to almost zero. 
But that’s probably not a good long-term solution, because in the long run you’d want some criterion of action, to let the AI copy itself onto not-absolutely-perfect hardware, or hardware that isn’t being run at a redundancy level where we’re trying to drive error probabilities down to 2^-64 or something — really close to 0.\n\n\n**Interviewer**: This seems like it might be different than the thing that you’re often talking about when you use the phrase “problem of logical uncertainty.” Is that right?\n\n\n**Eliezer**: When I say “logical uncertainty” what I’m usually talking about is more like, you believe Peano Arithmetic, now assign a probability to Gödel’s statement for Peano Arithmetic. Or you haven’t yet checked it, what’s the probability that 239,427 is a prime number?\n\n\n**Interviewer**: Do you see much of a relation between the two problems?\n\n\n**Eliezer**: Not yet. The second problem is fairly fundamental: how can we approximate logical facts we’re not logically omniscient about? Especially when you have uncertain logical beliefs about complicated algorithms that you’re running and you’re calculating expected utility of a self-modification relative to these complicated algorithms.\n\n\nWhat you called the neutrino problem would arise even if we were dealing with physical uncertainty. It comes from errors in the computer chip. It arises even in the presence of logical omniscience when you’re building a copy of yourself in a physical computer chip that can make errors. So, the second problem seems a lot less ineffable.
It might be that they end up being the same problem, but that’s not obvious from what I can see.\n\n\nThe post [Yudkowsky on Logical Uncertainty](https://intelligence.org/2013/01/30/yudkowsky-on-logical-uncertainty/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-01-30T15:22:34Z", "authors": ["staff"], "summaries": []} -{"id": "4cfa374c2e915dd1c03207f49c3af89a", "title": "Yudkowsky on “What can we do now?”", "url": "https://intelligence.org/2013/01/30/yudkowsky-on-what-can-we-do-now/", "source": "miri", "source_type": "blog", "text": "A paraphrased transcript of a conversation with Eliezer Yudkowsky.\n\n\n**Interviewer**: Suppose you’re talking to a smart mathematician who looks like the kind of person who might have the skills needed to work on a Friendly AI team. But, he says, “I understand the general problem of AI risk, but I just don’t believe that you can know so far in advance what in particular is useful to do. Any of the problems that you’re naming now are not particularly likely to be the ones that are relevant 30 or 80 years from now when AI is developed. Any technical research we do now depends on a highly conjunctive set of beliefs about the world, and we shouldn’t have so much confidence that we can see that far into the future.” What is your reply to the mathematician?\n\n\n**Eliezer**: I’d start by having them read a description of a particular technical problem we’re working on, for example the “Löb Problem.” I’m writing up a description of that now. So I’d show the mathematician that description and say “No, this issue of trying to have an AI write a similar AI seems like a fairly fundamental one, and the Löb Problem blocks it. 
The fact that we can’t figure out how to do these things — even given infinite computing power — is alarming.”\n\n\nA more abstract argument would be something along the lines of, “Are you sure the same way of thinking wouldn’t prevent you from working on any important problem? Are you sure you wouldn’t be going back in time and telling Alan Turing to not invent Turing machines because who knows whether computers will really work like that? They didn’t work like that. Real computers don’t work very much like the formalism, but Turing’s work was useful anyway.”\n\n\n**Interviewer**: You and I both know people who are very well informed about AI risk, but retain more uncertainty than you do about what the best thing to do about it today is. Maybe there are lots of other promising interventions out there, like pursuing cognitive enhancement, or doing FHI-style research looking for crucial considerations that we haven’t located yet — like Drexler discovering molecular nanotechnology, or Shulman discovering iterated embryo selection for radical intelligence amplification. Or, perhaps we should focus on putting the safety memes out into the AGI community because it’s too early to tell, again, exactly which problems are going to matter, especially if you have a longer AI time horizon. What’s your response to that line of reasoning?\n\n\n**Eliezer**: Work on whatever your current priority is, after an hour of meta reasoning but not a year of meta reasoning.  If you’re still like, “No, no, we must think more meta” after a year, then I don’t believe you’re the sort of person who will ever act.\n\n\nFor example, [Paul Christiano](http://ordinaryideas.wordpress.com/) isn’t making this mistake, since Paul is working on actual FAI problems *while* looking for other promising interventions. I don’t have much objection to that. 
If he then came up with some particular intervention which he thought was higher priority, I’d ask about the specific case.\n\n\n[Nick Bostrom](http://nickbostrom.com/) isn’t making this mistake, either. He’s doing lots of meta-strategy work, but he also does work on anthropic probabilities and the parliamentary model for normative uncertainty and other things that are object-level, and he hosts people like Anders Sandberg who write papers about uploading timelines that are actually relevant to our policy decisions.\n\n\nWhen people constantly say “maybe we should do some other thing,” I would say, “Come to an interim decision, start acting on the interim decision, and revisit this decision as necessary.” But if you’re the person who always tries to go meta and only thinks meta because there might be some better thing, you’re not ever going to actually *do something* about the problem.\n\n\nThe post [Yudkowsky on “What can we do now?”](https://intelligence.org/2013/01/30/yudkowsky-on-what-can-we-do-now/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-01-30T15:21:18Z", "authors": ["staff"], "summaries": []} -{"id": "ac9ad2e87e4adcc1f296c9a5c3e9073e", "title": "2012 Winter Matching Challenge a Success!", "url": "https://intelligence.org/2013/01/20/2012-winter-matching-challenge-a-success/", "source": "miri", "source_type": "blog", "text": "Thanks to our dedicated supporters, we met our goal for our 2012 Winter Fundraiser.
Thank you!\n\n\nThe fundraiser ran for 45 days, from December 6, 2012 to January 20, 2013.\n\n\nWe met our $115,000 goal, raising a total of $230,000 for our operations in 2013.\n\n\nEvery donation that the Machine Intelligence Research Institute receives is powerful support for our [mission](https://intelligence.org/files/strategicplan2011.pdf) — ensuring that the creation of smarter-than-human intelligence benefits human society.\n\n\nThe post [2012 Winter Matching Challenge a Success!](https://intelligence.org/2013/01/20/2012-winter-matching-challenge-a-success/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-01-20T19:33:11Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "8c3058d908e6c42aac2d75c67872ef4b", "title": "New Transcript: Eliezer Yudkowsky and Massimo Pigliucci on the Intelligence Explosion", "url": "https://intelligence.org/2013/01/09/new-transcript-eliezer-yudkowsky-and-massimo-pigliucci-on-the-singularity/", "source": "miri", "source_type": "blog", "text": "In this 2010 conversation hosted by [bloggingheads.tv](http://www.bloggingheads.tv), [Eliezer Yudkowsky](http://yudkowsky.net/) and [Massimo Pigliucci](http://rationallyspeaking.blogspot.com/) attempt to unpack the fundamental assumptions involved in determining the plausibility of a technological singularity.\n\n\nA transcript of the conversation is now available [here](https://docs.google.com/document/d/1S-7CWOLOtLRDmMiS7LtVxELssUi3OI1-UcrPAzGMuH4/pub), thanks to Ethan Dickinson and Patrick Stevens of [MIRIvolunteers.org](http://mirivolunteers.org/).
A video of the conversation can be found at the [bloggingheads](http://bloggingheads.tv/videos/2561) website.\n\n\nThe post [New Transcript: Eliezer Yudkowsky and Massimo Pigliucci on the Intelligence Explosion](https://intelligence.org/2013/01/09/new-transcript-eliezer-yudkowsky-and-massimo-pigliucci-on-the-singularity/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-01-09T21:34:33Z", "authors": ["Jake"], "summaries": []} -{"id": "5761e0db5389b8dbde8216224955060a", "title": "January 2013 Newsletter", "url": "https://intelligence.org/2013/01/09/january-2013-newsletter/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/files/newsletter_header.jpeg)\n**Greetings from the Executive Director**\n\n\n\n\n---\n\n\n[![](https://intelligence.org/files/luke.jpeg \"luke\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/luke/)\nDear friends of the Machine Intelligence Research Institute,\n\n\nIt’s been just over one year since I took the reins at the Machine Intelligence Research Institute. Looking back, I must say I’m proud of what we accomplished in the last year.\n\n\nConsider the “top priorities for 2011-2012” from our [August 2011 strategic plan](https://intelligence.org/files/strategicplan20112.pdf). 
The first priority was “public-facing research on creating a positive singularity.” On this front, we did so well that [MIRI had more peer-reviewed publications in 2012 than in all past years combined](http://lesswrong.com/lw/axr/three_new_papers_on_ai_risk/627o) (well, except for the fact that some publications scheduled for 2012 have been [delayed](http://lesswrong.com/lw/axr/three_new_papers_on_ai_risk/8757) until 2013, but you can still download preprints of those publications from [our research page](http://intelligence.org/research/)).\n\n\nOur second priority was “outreach / education / fundraising.” Outreach and education was mostly achieved through the Singularity Summit and through the new [Center for Applied Rationality](http://appliedrationality.org/), which was spun out of the Machine Intelligence Research Institute but is now its own 501c3 organization running entirely from its own funding. As for fundraising: 2012 was our most successful year yet.\n\n\nOur third priority was “improved organizational effectiveness.” Here, we grew by leaps and bounds throughout 2012. Throughout the year, we built our first comprehensive donor database (to improve donor relations), launched a regular newsletter (to improve public communication), instituted best practices in management and accounting throughout the organization, began tracking costs and predicted benefits for all major projects, started renting a new office in Berkeley that now bustles with activity every day, updated the design and content on our website, gained $40,000/mo of free Google Adwords directing traffic to MIRI web properties, and more.\n\n\nOur fourth priority was to run our annual Singularity Summit. We were pleased not only to run our most professional Summit yet, but also to subsequently sell the Summit to Singularity University (SU). 
We are confident that the Summit is in good hands, and we are also pleased that SU’s acquisition of the Singularity Summit provides us with some much-needed funding to expand our research program.\n\n\nThat said, most of the money from the Summit acquisition is being dedicated to a special fund for Friendly AI researchers, and does not support our daily operations. For that, we need your help! Please [contribute to our ongoing matching challenge](http://intelligence.org/blog/2012/12/06/2012-winter-matching-challenge/), which ends January 20th!\n\n\nOnward and upward,\n\n\nLuke\n\n\n\n\n\n---\n\n\n**Winter Matching Challenge is 80% Complete!**\n\n\n\n\n---\n\n\nOur Winter Matching Challenge ends on January 20th, 2013. We’ve raised $93,000 of our $115,000 target, so we’re 80% of the way there! Remember that every donation to the Machine Intelligence Research Institute made before Jan. 20th will be matched dollar-for-dollar, up to a total of $115,000!\n\n\nNow is your chance to **[double your impact](http://intelligence.org/donate/)** while helping us raise up to $230,000 to help fund [our research program](http://intelligence.org/research/).\n\n\nPlease read our [blog post for the challenge](http://intelligence.org/blog/2012/12/06/2012-winter-matching-challenge/) for more details, including our accomplishments throughout the last year and our plans for the next 6 months.
**[Donate now](http://intelligence.org/donate/)**.\n\n\n\n\n---\n\n\n**Anna Salamon on *Philosophy Talk***\n\n\n\n\n---\n\n\nFormer Machine Intelligence Research Institute researcher Anna Salamon recently appeared on the popular philosophy podcast *[Philosophy Talk](https://intelligence.org/feed/?paged=77#philosophytalk.org)*, where she spoke with podcast hosts (and Stanford philosophers) [John Perry](http://john.jperry.net/) and [Ken Taylor](http://www.stanford.edu/~ktaylor/) about the likely impacts of advanced artificial intelligence.\n\n\nAn MP3 of the program can be found [here](https://dl.dropbox.com/u/163098/_temp/Turbo-charging%20the%20Mind.mp3). Below is a summary from the [program page](http://philosophytalk.org/blog/2012/12/turbo-charging-mind):\n\n\n\n> “The rapid advance of computer technology in recent decades has produced a vast array of intelligent machines that far outstrip the human mind in speed and capacity. Yet these machines know far less than we do about almost everything. Is it possible to have the best of both worlds? Can we use new technologies to create a hybrid intelligence that seamlessly integrates the vast knowledge and skills embedded in our biological brains with the vastly greater capacity, speed, and knowledge-sharing ability of our mechanical creations? John and Ken examine the prospects for transcending the biological limits of the human mind with Anna Salamon from the Machine Intelligence Research Institute. This program was recorded live at the Marsh Theater in Berkeley, California.”\n> \n> \n\n\nAnna Salamon worked as a research fellow at the Machine Intelligence Research Institute before leaving to become the Executive Director of the [Center for Applied Rationality](https://intelligence.org/feed/?paged=77#www.appliedrationality.org), an organization dedicated to fostering a community of aspiring rationalists through the teaching of advanced decision making. 
We are thrilled to see her spreading such rationality skills to the world!\n\n\n\n\n---\n\n\n**An Appreciation of Michael Vassar**\n\n\n\n\n---\n\n\n[![michael_vassar](https://intelligence.org/files/michael_vassar.jpeg)](http://intelligence.org/blog/2013/01/09/january-2013-newsletter/michael_vassar/)Michael Vassar resigned his position as CEO of the Machine Intelligence Research Institute (to join [Panacea Research](http://www.panacearesearch.com/)) more than one year ago, so a letter of appreciation for his contributions to MIRI is long overdue!\n\n\nThe delay does, however, allow me to thank him for one of his most publicly visible contributions, which was only completed recently. During his tenure with MIRI, Michael built the annual [Singularity Summit](http://singularitysummit.com/) into such a valuable cultural asset that Singularity University approached us in late 2011 to acquire it, a deal that [closed one year later](http://singularityu.org/singularity-university-acquires-the-singularity-summit/) in December 2012. The deal provides MIRI with a sizable chunk of funding for our growing [research program](http://intelligence.org/research), and gives SU a key asset in the singularity brand space. What Michael achieved here can be thought of as the non-profit equivalent of a [successful exit](http://www.linkedin.com/today/post/article/20121115154416-5799319-4-keys-to-a-successful-exit).\n\n\nEven before this, Michael managed to nearly double MIRI’s budget (from 2009-2011), helped get us featured in many [top media outlets](http://intelligence.org/media/), and built relationships with several leading academics, e.g. 
philosopher [David Chalmers](http://en.wikipedia.org/wiki/David_Chalmers) (whose [two](http://www.consc.net/papers/singularityjcs.pdf) [papers](http://consc.net/papers/singreply.pdf) on the singularity introduced the topic to mainstream philosophers), psychologist [Gary Marcus](http://en.wikipedia.org/wiki/Gary_Marcus) (who now writes in mainstream outlets about [AI safety](http://www.newyorker.com/online/blogs/newsdesk/2012/11/google-driverless-car-morality.html)), and physicist [Max Tegmark](http://space.mit.edu/home/tegmark/) (whose forthcoming book has a section on the singularity).\n\n\nStill, Michael’s most significant contribution may be the creation of a street-level movement where before there were mostly just internet discussions. In 2006-2007, Michael began to build a community in New York City, which has since grown into the largest hub of [Less Wrong](http://lesswrong.com/) / Machine Intelligence Research Institute activity outside the Bay Area. Many early members of this community went on to become MIRI staff members (Carl Shulman, Amy Willey, Jason Murray), major MIRI supporters, or community leaders. After relocating to the Bay Area, Michael worked with Anna Salamon and Carl Shulman to launch MIRI’s Visiting Fellows program, which essentially created the street-level Less Wrong / Machine Intelligence Research Institute community in the Bay Area, which has produced its own set of MIRI staff members, supporters, and community leaders.\n\n\nMichael remains an active MIRI Board member and supporter.\n\n\nThanks, Michael, for your dedicated work on behalf of the Machine Intelligence Research Institute! 
We wish you the best of luck with [Panacea Research](http://www.panacearesearch.com/).\n\n\n\n\n---\n\n\n**An appreciation of Amy Willey**\n\n\n\n\n---\n\n\n[![amy](https://intelligence.org/files/amy.png)](http://intelligence.org/blog/2013/01/09/january-2013-newsletter/amy/)\nToward the end of 2012, MIRI’s Chief Operating Officer and Singularity Summit organizer Amy Willey got married and moved to Michigan, where her husband’s company ([Stik.com](http://www.stik.com/)) is now located. Due to her change of location and the sale of the Singularity Summit, Amy will no longer be working with MIRI in 2013.\n\n\nAmy organized the Singularity Summit in 2010, 2011, and 2012, and is thus (in addition to Michael Vassar) the other person chiefly responsible for building the Summit into the top-notch event it is today.\n\n\nAs our Chief Operating Officer for several years, Amy was a dependable and trustworthy member of our team, and she played a significant role in growing MIRI into a mature organization that “has its act together,” follows organizational best practices, and so on.\n\n\nThanks, Amy, for your dedicated work for the Machine Intelligence Research Institute! We wish you the best of luck on your future adventures.\n\n\n\n\n---\n\n\n**Luke Muehlhauser Speaking at Stanford in February**\n\n\n\n\n---\n\n\n[![-1](https://intelligence.org/files/1.jpeg)](http://intelligence.org/blog/2013/01/09/january-2013-newsletter/attachment/1/)Luke will be speaking at the Leonardo Art/Science Evening Rendezvous at Stanford on February 6, 2013.\n\n\nHere is the abstract of the talk:\n\n\n\n> **Superhuman Artificial Intelligence: Promise and Peril**\n> \n> \n> Technological revolutions shape our world more than anything else, and superhuman AI will be the most transformative technological revolution of all. But will this revolution be positive or negative? Will it be more like modern medicine or the atom bomb?
Several considerations suggest that superhuman AI will, by default, have negative effects on humanity. To ensure that superhuman AI impacts us positively, we should invest in AI safety research today, so that AI safety research outpaces AI capabilities research.\n> \n> \n\n\nDetails [here](http://www.scaruffi.com/leonardo/feb2013.html).\n\n\n\n\n---\n\n\n**Joshua Fox Speaks in Second Life**\n\n\n\n\n---\n\n\nLast month, MIRI research associate Joshua Fox gave [a talk](http://blog.joshuafox.com/2012/12/unequal-under-law.html), “Unequal under the Law: Legal Systems and Artificial General Intelligence”, at the 8th Annual Colloquium on the Law of Futuristic Persons, conducted in the massive online virtual world [Second Life](http://en.wikipedia.org/wiki/Second_Life). Watch his talk on YouTube [here](http://www.youtube.com/watch?v=RIlx520ACR0).\n\n\n\n\n---\n\n\n**Doug Wolens’ Documentary “The Singularity” Now Available on iTunes**\n\n\n\n\n---\n\n\nFilmmaker Doug Wolens has been working on his documentary “The Singularity” for five years, and now it’s finally [out on iTunes](http://thesingularityfilm.com/)! Interviewees in the film include Ray Kurzweil, Eliezer Yudkowsky, David Chalmers, Bill McKibben, Marshall Brain, Richard A. Clarke, and many others. Learn more about the film by reading the [director’s statement](http://thesingularityfilm.com/directors-statement/).\n\n\n\n\n---\n\n\n**Featured Volunteer: Frank White**\n\n\n\n\n---\n\n\nThis month, we thank Frank White for his volunteer work on the French translation of *[Facing the Singularity](http://www.facingthesingularity.com)*. Frank first came across the Machine Intelligence Research Institute while searching for papers in the field of artificial intelligence. Translating works on the Singularity can be a challenge due to all the specific and scientific terms involved, some of which may not have direct analogues in other languages.
Rather than using new words in the middle of the text, which can be jarring to readers, Frank opts for rewording the original text to keep a smooth flow in the translation. Thank you, Frank!\n\n\nYou, too, can sign up to volunteer for the Machine Intelligence Research Institute at [singularityvolunteers.org](http://singularityvolunteers.org/).\n\n\n\n\n---\n\n\n**Featured Summit Video: Jaan Tallinn: “Why Now? A Quest in Metaphysics”**\n\n\n\n\n---\n\n\nOur featured Summit video this month is a mind-bending talk — with great animations! — by Jaan Tallinn called [Why Now? A Quest in Metaphysics](http://fora.tv/v/c16836). Here’s the abstract:\n\n\n\n> The word “singularity” usually denotes something exceptional, a situation that breaks a given model. It therefore seems like an incredible coincidence that we were born just decades before an imminent technological singularity that threatens to break our model of the evolution of the entire universe. What if that incredible coincidence is merely an illusion though — what if our model is not correct to begin with? 
The talk combines intelligence explosion, multiverse, anthropic principle and simulation argument into an alternative model of the universe — a model where, from the perspective of a human observer, technological singularity is the norm, not the exception.\n> \n> \n\n\n\n\n---\n\n\nThe post [January 2013 Newsletter](https://intelligence.org/2013/01/09/january-2013-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2013-01-09T18:06:54Z", "authors": ["staff"], "summaries": []} -{"id": "ecbaa6099bf29ca875c425a949b2414d", "title": "December 2012 Newsletter", "url": "https://intelligence.org/2012/12/19/december-2012-newsletter/", "source": "miri", "source_type": "blog", "text": "![](http://miri.wpengine.com/wp-content/uploads/2012/10/newsletter_header.jpeg)\n**Greetings from the Executive Director**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/luke.jpeg \"luke\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/luke/)\nDear friends of the Singularity Institute,\n\n\nThis month marks the biggest shift in our operations since the [Singularity Summit](http://singularitysummit.com/) was founded in 2006. Now that Singularity University has [acquired](http://singularityu.org/singularity-university-acquires-the-singularity-summit/) the Singularity Summit (details below), and SI’s interests in rationality training are being developed by the now-separate [Center for Applied Rationality](http://appliedrationality.org/), **the Singularity Institute is making a major transition**. For 12 years we’ve largely focused on movement-building — through the Singularity Summit, [Less Wrong](http://lesswrong.com/), and other programs. 
This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.\n\n\nNow, the time has come to say “Mission Accomplished Well Enough to Pivot to Research.” Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published [30+ research papers](http://intelligence.org/research/) and [dozens more](http://lesswrong.com/lw/f6o/original_research_on_less_wrong/) original research articles on Less Wrong, we certainly haven’t neglected research. But **in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research**. If you’d like to help with that, please [contribute to our ongoing fundraising drive](http://intelligence.org/blog/2012/12/06/2012-winter-matching-challenge/).\n\n\nOnward and upward,\n\n\nLuke\n\n\n\n \n\n\n\n\n---\n\n\n \n\n**Singularity Summit Conference Acquired by Singularity University**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/12/singularityu.png \"singularityu\")](http://singularityu.org)The [Singularity Summit](http://singularitysummit.com/) conference, founded by SI in 2006, has been [acquired from SI by Singularity University](http://singularityu.org/singularity-university-acquires-the-singularity-summit/). As part of the agreement, the Singularity Institute will change its name (to reduce brand confusion), but will remain as co-producers of the Singularity Summit in some succeeding years. 
We are pleased that we can transition the conference to an organization with a strong commitment to maintaining its quality as it grows.\n\n\nMost of the funds from the Summit acquisition will be placed in a separate fund for a “[Friendly AI team](http://lesswrong.com/lw/cv9/building_toward_a_friendly_ai_team/),” and therefore do not support our daily operations or other programs.\n\n\nWe wish to thank everyone who participated in making the Singularity Summit a success, especially past SI president Michael Vassar (now with [Panacea Research](http://www.panacearesearch.com/)) and Summit organizer Amy Willey.\n\n\n\n\n---\n\n\n**2012 Winter Matching Challenge!**\n\n\n\n\n---\n\n\nWe’re excited to announce our [2012 Winter Matching Challenge](http://intelligence.org/blog/2012/12/06/2012-winter-matching-challenge/). Thanks to the generosity of several major donors,† every donation to the Singularity Institute made between now and January 5th, 2013 will be matched dollar-for-dollar, up to a total of $115,000!\n\n\nNow is your chance to **double your impact** while helping us raise up to $230,000 to help fund [our research program](https://intelligence.org/research/).\n\n\nNote that the new [Center for Applied Rationality](http://appliedrationality.org/) (CFAR) will be running a separate fundraiser soon.\n\n\nPlease read our [blog post for the challenge](http://intelligence.org/blog/2012/12/06/2012-winter-matching-challenge/) for more details, including our accomplishments throughout the last year and our plans for the next 6 months.
**Please support our transition from a movement-building phase into a research phase**.\n\n\n† $115,000 of total matching funds has been provided by Edwin Evans, Mihály Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.\n\n\n\n\n---\n\n\n**A Week of Friendly AI Math at the Singularity Institute**\n\n\n\n\n---\n\n\n[![2](http://miri.wpengine.com/wp-content/uploads/2012/11/2012-11-03-19.42.22.jpg)](http://intelligence.org/blog/2012/11/14/whiteboards-at-the-singularity-institute-2/) \n\nFrom Nov. 11th-17th, SI held a Friendly AI math workshop at our headquarters in Berkeley, California. The participants — Eliezer Yudkowsky, Marcello Herreshoff, Paul Christiano, and Mihály Barasz — tackled a particular problem related to Friendly AI. We held the workshop mostly to test hypotheses about ideal team size and the problem’s tractability, while allowing that there was some small chance the team would achieve a significant result in just one week.\n\n\nHappily, it seems the team *did* achieve a significant result, which the participants estimate would be the equivalent of 1-3 papers if published. More details are forthcoming.\n\n\n\n\n---\n\n\n**SI’s Turing Prize Awarded to Bill Hibbard for “Avoiding Unintended AI Behaviors”**\n\n\n\n\n---\n\n\n[![award to bill hibbard](http://miri.wpengine.com/wp-content/uploads/2012/12/award-to-bill-hibbard.jpg)](http://miri.wpengine.com/wp-content/uploads/2012/12/award-to-bill-hibbard.jpg)\nThis year’s [AGI-12](http://www.winterintelligence.org/oxford2012/agi-12/) conference, held in Oxford, UK, included a special track on [AGI Impacts](http://www.winterintelligence.org/oxford2012/agi-impacts/).
A selection of papers from this track will be published in a special volume of the [Journal of Experimental & Theoretical Artificial Intelligence](http://www.tandfonline.com/action/aboutThisJournal?show=editorialBoard&journalCode=teta) in 2013.\n\n\nThe Singularity Institute had previously announced a $1000 prize for the best paper from AGI-12 or AGI Impacts on the question of how to develop safe architectures or goals for AGI. At the event, the prize was awarded to Bill Hibbard for his paper [Avoiding Unintended AI Behaviors](http://www.ssec.wisc.edu/~billh/g/hibbard_agi12a.pdf).\n\n\nSI’s Turing Prize is awarded in honor of Alan Turing, who not only discovered some of the key ideas of machine intelligence, but also grasped its importance, writing that “…it seems probable that once [human-level machine thinking] has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…” The prize is awarded for work that not only increases awareness of this important problem, but also makes technical progress in addressing it.\n\n\nSI researcher Carl Shulman also presented at AGI Impacts. You can read the abstract of his talk, “Could we use untrustworthy human brain emulations to make trustworthy ones?”, [here](http://www.winterintelligence.org/agi-impacts-extended-abstracts/). If video of the talk becomes available later, we’ll link to it in a future newsletter.\n\n\n \n\n\n \n\n\n\n\n---\n\n\n**New Paper: How We’re Predicting AI — or Failing To**\n\n\n\n\n---\n\n\n\n**Note:** The findings in this paper are based on a dataset error. 
For details, see .\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/12/ageatprediction.png \"ageatprediction\")](http://miri.wpengine.com/wp-content/uploads/2012/11/PredictingAI.pdf)\nThe new paper by [Stuart Armstrong](http://lesswrong.com/user/Stuart_Armstrong/) ([FHI](http://www.fhi.ox.ac.uk/)) and [Kaj Sotala](http://lesswrong.com/user/Kaj_Sotala/) ([SI](http://intelligence.org)) has now been published ([PDF](http://miri.wpengine.com/wp-content/uploads/2012/11/PredictingAI.pdf)) as part of the [*Beyond AI* conference proceedings](http://api.viglink.com/api/click?format=go&key=9f37ca02a1e3cbd4f3d0a3618a39fbca&loc=http%3A%2F%2Flesswrong.com%2Flw%2Ffjn%2Fhow_were_predicting_ai_or_failing_to%2F&v=1&libid=1355260544370&out=http%3A%2F%2Fwww.kky.zcu.cz%2Fen%2Fpublications%2F1%2FJanRomportl_2012_BeyondAIArtificial.pdf&ref=https%3A%2F%2Fwww.google.com%2F&title=%22How%20We're%20Predicting%20AI%20%E2%80%94%20or%20Failing%20to%22%20-%20Less%20Wrong&txt=%3Cem%3EBeyond%20AI%3C%2Fem%3E%26nbsp%3Bconference%20proceedings&jsonp=vglnk_jsonp_13552607531192). Some of these results were previously discussed [here](http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/). The original predictions data are available [here](http://lesswrong.com/lw/e79/ai_timeline_prediction_data/). The *Less Wrong* thread is [here](http://lesswrong.com/lw/fjn/how_were_predicting_ai_or_failing_to/). We thank Stuart and Kaj for this valuable meta-study of AI predictions.\n\n\nFor the study, Stuart Armstrong and Kaj Sotala examined a database of 257 AI predictions, made in a period spanning from the 1950s to the present day. This database was assembled by researchers from the Singularity Institute (Jonathan Wang and Brian Potter) systematically searching though the literature. 
95 of these are considered AI timeline predictions.\n\n\nThe paper examines a couple of folk theories of AI prediction, the [“Maes-Garreau law”](http://en.wikipedia.org/wiki/Maes%E2%80%93Garreau_law) (that people predict AI happening near the end of their own lifetime) and the prediction that “AI is always 15-25 years into the future”. Systematic analysis of the database of AI predictions revealed support for the second theory but not the first. Many of the predictions were concentrated around 15-25 years in the future, and this trend held whether the predictions were being made in the 1950s or the 2000s. Predictions were not observed to cluster around the expected end of lifetime of the predictors, a result which contradicts the Maes-Garreau hypothesis. It was also found that the predictions of experts do not differ in any distinct way from those of non-experts; i.e., there seems to be little evidence that experts make better AI predictions than non-experts. At Singularity Summit 2012, Stuart Armstrong [summarized the results of the study](http://fora.tv/2012/10/14/Stuart_Armstrong_How_Were_Predicting_AI) in an onstage talk.\n\n\nThe major take-away from this study is that predicting the arrival of human-level AI is such a fuzzy endeavor that we should take any prediction with a large grain of salt. The rational approach is to widen our confidence intervals — that is, recognize that we don’t really know when human-level AI will be developed, and make plans accordingly. Just as we cannot confidently state that AI is near, we can’t confidently state that AI is far off. (This was also the conclusion of an earlier Singularity Institute publication, “[Intelligence Explosion: Evidence and Import](http://miri.wpengine.com/wp-content/uploads/2012/09/IE-EI.pdf).”)\n\n\n\n\n---\n\n\n**Michael Anissimov Publishes Responses to P.Z.
Myers and Kevin Kelly**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/12/thinktwice.png \"thinktwice\")](http://www.acceleratingfuture.com/michael/blog/2012/11/think-twice-a-response-to-kevin-kelly-on-thinkism/)\nSI media director Michael Anissimov has published blog posts responding to [biologist P.Z. Myers](http://www.acceleratingfuture.com/michael/blog/2012/09/comprehensive-copying-not-required-for-uploading/) on [whole brain emulation](http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf) and [bestselling author Kevin Kelly](http://www.acceleratingfuture.com/michael/blog/2012/11/think-twice-a-response-to-kevin-kelly-on-thinkism/) on [AI takeoff](http://wiki.lesswrong.com/wiki/AI_takeoff).\n\n\nIn a blog post, [P.Z. Myers](http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyone-gets-a-robot-pony/) rejected the idea of whole brain emulation in general, stating *“It won’t work. It can’t.”* However, his response focuses on the scanning of a live brain with current technology. In response, Anissimov concedes that Myers’ criticisms make sense in the narrow context in which he makes them, but argues that Myers misunderstands whole brain emulation, which refers to a wide range of possible scanning approaches, not just the reductionistic straw man of “scan in, emulation out”. So, while Myers’ critique applies to certain types of brain scanning approaches, it does not apply to whole brain emulation in general.\n\n\nAnissimov’s [response to Kevin Kelly](http://www.acceleratingfuture.com/michael/blog/2012/11/think-twice-a-response-to-kevin-kelly-on-thinkism) is a response to a [blog post from four years ago](http://www.kk.org/thetechnium/archives/2008/09/thinkism.php), “Thinkism”. Kelly’s blog post is notable as one of the most substantive critiques of the fast AI takeoff idea by a prominent intellectual.
Kelly argues that the idea of scientific and technological research and development occurring more rapidly than “calendar time” is implausible, because there are inherently time-limited processes, such as cellular metabolism, which limit progress on difficult research problems, such as indefinitely extending human healthspans. Anissimov argues that faster-than-human, smarter-than-human intelligence could overcome the human-characteristic rate of innovation through superior insight, by breaking problems into their constituent parts, and by making experimentation massively accelerated and parallel.\n\n\n\n\n---\n\n\n**Original Research on Less Wrong**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/12/lesswrong.png \"lesswrong\")](http://lesswrong.com/lw/f6o/original_research_on_less_wrong/)\nSI executive director [Luke Muehlhauser](http://lukeprog.com), with help from our research team, has compiled a [list of original research](http://lesswrong.com/lw/f6o/original_research_on_less_wrong/) produced by the web community *Less Wrong*. Though many of the posts on *Less Wrong* are [summaries of previously published research](http://lesswrong.com/lw/eik/eliezers_sequences_and_mainstream_academia/), there is also a substantial amount of original expert material in philosophy, decision theory, mathematical logic, and other fields.\n\n\nExamples of original research on *Less Wrong* include Eliezer Yudkowsky’s [“Highly Advanced Epistemology 101 for Beginners” sequence](http://wiki.lesswrong.com/wiki/Highly_Advanced_Epistemology_101_for_Beginners), Wei Dai’s [posts on his original decision theory (UDT)](http://lesswrong.com/lw/15m/towards_a_new_decision_theory/), Vladimir Nesov’s [thoughts on counterfactual mugging](http://lesswrong.com/lw/3l/counterfactual_mugging/), and Benja Fallenstein’s [investigation of the problem of logical uncertainty](http://lesswrong.com/lw/eaa/a_model_of_udt_with_a_concrete_prior_over_logical/).
In all, the list compiles over 50 examples of original research on Less Wrong stretching from 2008 to the present. Of particular interest is original research that contributes towards [solving open problems](http://lukeprog.com/SaveTheWorld.html) in Friendly AI.\n\n\nOriginal research on *Less Wrong* continues to be pursued in several [discussion threads](http://lesswrong.com/r/discussion/new/) on the site.\n\n\n\n\n---\n\n\n**How Can I Reduce Existential Risk From AI?**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/12/SI_logo.png \"SI_logo\")](http://intelligence.org/blog/2012/12/19/december-2012-newsletter/si_logo/)\nAnother recent Less Wrong post of note by Luke Muehlhauser is [“How can I reduce existential risk from AI?”](http://lesswrong.com/lw/ffh/how_can_i_reduce_existential_risk_from_ai/) “Existential risk”, a [term coined by Oxford philosopher Nick Bostrom](http://www.existential-risk.org/), specifically refers to risks to the survival of the human species. The Singularity Institute argues that advanced artificial intelligence is an *existential risk* to the future of the human species. “Existential risk” generally refers to the total destruction of the human species, rather than risks which threaten 90% or 99% of the population, which would constitute global catastrophic risks but not true existential risks.\n\n\nSince our founding in 2000, the Singularity Institute has argued that smarter-than-human, self-improving Artificial Intelligence is an existential risk to humanity. The concern is that, at some point over the next hundred years, advanced AI will be created that can manufacture its own sophisticated robotics and threaten to displace human civilization, not necessarily through deliberate action but merely as a side-effect of the exploitation of resources required for our survival, such as carbon, oxygen, or physical space. 
For additional background, please read our concise summary, [“Reducing Long-Term Catastrophic Risks from Artificial Intelligence”](http://intelligence.org/summary/).\n\n\nThe post outlines three major categories of work towards reducing AI risk: (1) meta-work, such as making money to contribute to Friendly AI research; (2) strategic work, work towards a better strategic understanding of the challenges we face; and (3) direct work, such as technical research, political action, or particular kinds of technological development. All three are crucial to building the “existential risk mitigation ecosystem,” a cooperative effort of hundreds of people to better understand AI risk and do something about it.\n\n\n\n\n---\n\n\n**New Research Associates**\n\n\n\n\n---\n\n\nThe Singularity Institute is pleased to announce four new [research associates](http://intelligence.org/research-associates/): Benja Fallenstein, Marcello Herreshoff, Mihály Barasz, and Bill Hibbard.\n\n\n**Benja Fallenstein** is interested in the basic research necessary for the development of safe AI goals, especially from the perspective of mathematical models of evolutionary psychology, and also in anthropic reasoning, decision theory, game theory, reflective mathematics, and programming languages with integrated proof checkers. Benja is a mathematics student at the University of Vienna, with a focus in biomathematics.\n\n\n**Marcello Herreshoff** has worked with the Singularity Institute on the math of Friendly AI from time to time since 2007. In high school, he was a two-time USACO finalist, and he published a novel combinatorics result, which he presented at the Twelfth International Conference on Fibonacci Numbers and Their Applications. He holds a BA in Math from Stanford University. At Stanford he was awarded two honorable mentions on the Putnam mathematics competition, and submitted his honors thesis for publication in the Logic Journal of the IGPL.
His research interests include mathematical logic and its use in formalizing coherent goal systems.\n\n\n**Mihály Barasz** is interested in functional languages and type theory and their application in formal proof systems. He cares deeply about reducing existential risks. He has an M.Sc. summa cum laude in Mathematics from Eotvos Lorand University, Budapest, and currently works at Google.\n\n\n**[Bill Hibbard](http://www.ssec.wisc.edu/~billh/homepage1.html)** is an Emeritus Senior Scientist at the University of Wisconsin-Madison Space Science and Engineering Center, currently working on issues of AI safety and unintended behaviors. He has a BA in Mathematics and MS and PhD in Computer Sciences, all from the University of Wisconsin-Madison.\n\n\n\n\n---\n\n\n**AI Risk-Related Improvements to the LW Wiki**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/12/LW_wiki.png \"LW_wiki\")](http://intelligence.org/blog/2012/12/19/december-2012-newsletter/lw_wiki/)\nThe Singularity Institute has greatly [improved the *Less Wrong* wiki with new entries](http://lesswrong.com/lw/fcu/ai_riskrelated_improvements_to_the_lw_wiki/), featuring topics from [Seed AI](http://wiki.lesswrong.com/wiki/Seed_AI) to [moral uncertainty](http://wiki.lesswrong.com/wiki/Moral_uncertainty) and more. Over 120 pages were updated in total. The improvements to the wiki were prompted by earlier proposals for a dedicated [scholarly AI risk wiki](http://lesswrong.com/lw/cnj/a_scholarly_ai_risk_wiki/).
The improvements to the wiki enable more background knowledge for publishing short, clear, scholarly articles on AI risk.\n\n\nSome articles of interest include the [5-and-10 problem](http://wiki.lesswrong.com/wiki/5-and-10), [AGI skepticism](http://wiki.lesswrong.com/wiki/AGI_skepticism), [AGI Sputnik moment](http://wiki.lesswrong.com/wiki/AGI_Sputnik_moment), [AI advantages](http://wiki.lesswrong.com/wiki/AI_advantages), [AI takeoff](http://wiki.lesswrong.com/wiki/AI_takeoff), [basic AI drives](http://wiki.lesswrong.com/wiki/Basic_AI_drives), [benevolence](http://wiki.lesswrong.com/wiki/Benevolence), [biological cognitive enhancement](http://wiki.lesswrong.com/wiki/Biological_Cognitive_Enhancement), [Coherent Extrapolated Volition](http://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition), [computing overhang](http://wiki.lesswrong.com/wiki/Computing_overhang), [computronium](http://wiki.lesswrong.com/wiki/Computronium'), [differential intellectual progress](http://wiki.lesswrong.com/wiki/Differential_intellectual_progress), [economic consequences of AI and whole brain emulation](http://wiki.lesswrong.com/wiki/Economic_consequences_of_AI_and_whole_brain_emulation), [Eliezer Yudkowsky](http://wiki.lesswrong.com/wiki/Eliezer_Yudkowsky), [emulation argument for human-level AI](http://wiki.lesswrong.com/wiki/Emulation_argument_for_human-level_AI), [extensibility argument for greater-than-human intelligence](http://wiki.lesswrong.com/wiki/Extensibility_argument_for_greater-than-human_intelligence), [evolutionary argument for human-level AI](http://wiki.lesswrong.com/wiki/Evolutionary_argument_for_human-level_AI), [complexity of human value](http://wiki.lesswrong.com/wiki/Complexity_of_value), [Friendly artificial intelligence](http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence), [Future of Humanity Institute](http://wiki.lesswrong.com/wiki/Future_of_Humanity_Institute), [history of AI risk 
thought](http://wiki.lesswrong.com/wiki/History_of_AI_risk_thought), [intelligence explosion](http://wiki.lesswrong.com/wiki/Intelligence_explosion), [moral divergence](http://wiki.lesswrong.com/wiki/Moral_divergence), [moral uncertainty](http://wiki.lesswrong.com/wiki/Moral_uncertainty), [Nick Bostrom](http://wiki.lesswrong.com/wiki/Nick_Bostrom), [optimal philanthropy](http://wiki.lesswrong.com/wiki/Optimal_philanthropy), [optimization process](http://wiki.lesswrong.com/wiki/Optimization_process), [Oracle AI](http://wiki.lesswrong.com/wiki/Oracle_AI), [orthogonality thesis](http://wiki.lesswrong.com/wiki/Orthogonality_thesis), [paperclip maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer), [Pascal’s mugging](http://wiki.lesswrong.com/wiki/Pascal%27s_mugging), [recursive self-improvement](http://wiki.lesswrong.com/wiki/Recursive_self-improvement), [reflective decision theory](http://wiki.lesswrong.com/wiki/Reflective_decision_theory), [singleton](http://wiki.lesswrong.com/wiki/Singleton), [Singularitarianism](http://wiki.lesswrong.com/wiki/Singularitarianism), [Singularity](http://wiki.lesswrong.com/wiki/Singularity), [subgoal stomp](http://wiki.lesswrong.com/wiki/Subgoal_stomp), [superintelligence](http://wiki.lesswrong.com/wiki/Superintelligence), [terminal value](http://wiki.lesswrong.com/wiki/Terminal_value), [timeless decision theory](http://wiki.lesswrong.com/wiki/Timeless_decision_theory), [tool AI](http://wiki.lesswrong.com/wiki/Tool_AI), [utility extraction](http://wiki.lesswrong.com/wiki/Utility_extraction), [value extrapolation](http://wiki.lesswrong.com/wiki/Value_extrapolation), [value learning](http://wiki.lesswrong.com/wiki/Value_learning), and [whole brain emulation](http://wiki.lesswrong.com/wiki/Whole_brain_emulation).\n\n\n\n\n---\n\n\n**Featured Volunteer: Ethan Dickinson**\n\n\n\n\n---\n\n\nThis month, we thank Ethan Dickinson for his volunteer work transcribing videos from the Singularity Summit 2012 conference. 
When we talked with Ethan about his work, he mentioned that one talk, Julia Galef’s (embedded below), even moved him to tears. Ethan became involved in volunteer work for SI after several years of developing an increasing interest in rationality, originally introduced via *[Harry Potter and the Methods of Rationality](http://hpmor.com/)*. Today, he feels he is using the full powers of his imagination to understand the Singularity. Paraphrasing Jaan Tallinn, he has used his own reading of science fiction such as William Gibson’s *Neuromancer* and Isaac Asimov’s novels to imagine worlds “much more optimistic and much more pessimistic” than many of the middle-ground scenarios for the future widely assumed today.\n\n\n\n\n---\n\n\n**Featured Summit Video: Julia Galef on Rationality**\n\n\n\n\n---\n\n\n\nFour decades of cognitive science have confirmed that *Homo sapiens* are far from “rational animals.” Scientists have amassed a daunting list of ways that our brain’s fast-and-frugal judgment heuristics fail in modern contexts for which they weren’t adapted, or stymie our attempts to be happy and effective. Hence the project we’re undertaking at the new Center for Applied Rationality (CFAR) — training human brains to run algorithms that optimize for our interests as autonomous beings in the modern world, not for the interests of ancient replicators. This talk explores what we’ve learned from that process so far, and why training smart people to be rational decision-makers is crucial to a better future.\n\n\n\n\n---\n\n\n**News Items**\n\n\n\n\n---\n\n\n[![asimo](http://miri.wpengine.com/wp-content/uploads/2012/12/asimo.png)](http://intelligence.org/blog/2012/12/19/december-2012-newsletter/asimo/)\n[Killer robots? 
Cambridge brains to assess AI risk](http://news.cnet.com/8301-11386_3-57553993-76/killer-robots-cambridge-brains-to-assess-ai-risk/) \n\n*CNET*, November 26, 2012\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/12/Screen-Shot-2012-12-12-at-8.36.11-AM.png)](http://intelligence.org/blog/2012/12/19/december-2012-newsletter/screen-shot-2012-12-12-at-8-36-11-am/)\n[DARPA’s Pet-Proto Robot Navigates Obstacles](http://www.youtube.com/watch?v=FFGfq0pRczY&feature=etp-pd-nxx-62) \n\n*YouTube*, October 24, 2012\n\n\n\n\n---\n\n\nThe post [December 2012 Newsletter](https://intelligence.org/2012/12/19/december-2012-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-12-19T17:52:38Z", "authors": ["Louie Helm"], "summaries": []}
-{"id": "8a14ce8c9320053ca8bb43928c334e28", "title": "2012 Winter Matching Challenge!", "url": "https://intelligence.org/2012/12/06/2012-winter-matching-challenge/", "source": "miri", "source_type": "blog", "text": "Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made now until January 5th, 2013 will be matched dollar-for-dollar, up to a total of $115,000!\n\n\n**[Donate Now!](https://intelligence.org/donate/)**\n\n\nNow is your chance to **double your impact** while helping us raise up to $230,000 to help fund [our research program](https://intelligence.org/research/).\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/06/towardapositivesingularity.jpg \"towardapositivesingularity\")](https://intelligence.org/donate/)\n(If you’re unfamiliar with our mission, please see our [press kit](http://miri.wpengine.com/wp-content/uploads/2012/11/SI_PressKit.pdf) and read our short research summary: [Reducing Long-Term Catastrophic Risks from Artificial 
Intelligence](http://intelligence.org/summary/).)\n\n\nNow that Singularity University has [acquired](http://singularityu.org/singularity-university-acquires-the-singularity-summit/) the [Singularity Summit](http://singularitysummit.com/), and SI’s interests in rationality training are being developed by the now-separate [CFAR](http://appliedrationality.org/), **the Machine Intelligence Research Institute is making a profound transition**. (Note that most of the money from the acquisition is being placed in a separate fund for Friendly AI researchers, and therefore does not support our daily operations or other programs.)\n\n\nFor 12 years we’ve largely focused on movement-building — through the Singularity Summit, [Less Wrong](http://lesswrong.com/), and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.\n\n\nNow, the time has come to say “Mission Accomplished.” Or at least, “Mission Accomplished Well Enough to Pivot to Research.” Our community of supporters is now large enough that many qualified researchers are available for us to hire, if we can afford to hire them.\n\n\nHaving published [30+ research papers](http://intelligence.org/research/) and [dozens more](http://lesswrong.com/lw/f6o/original_research_on_less_wrong/) original research articles on Less Wrong, we certainly haven’t neglected research. But **in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research**.\n\n\n### Accomplishments in 2012\n\n\n* Held a one-week research workshop on one of the open problems in Friendly AI research, and got progress that participants estimate would be the equivalent of 1-3 papers if published. (Details forthcoming. The workshop participants were Eliezer Yudkowsky, Paul Christiano, Marcello Herreshoff, and Mihály Bárász.)\n* Produced our annual [Singularity Summit](http://www.singularitysummit.com/) in San Francisco. 
Speakers included Ray Kurzweil, Steven Pinker, Daniel Kahneman, Temple Grandin, Peter Norvig, and many others.\n* Launched the new [Center for Applied Rationality](http://appliedrationality.org/), which ran 5 workshops in 2012, including [Rationality for Entrepreneurs](http://appliedrationality.org/entrepreneurs/) and [SPARC](http://appliedrationality.org/sparc2012/) (for young math geniuses), and also published one (early-version) smartphone app, [The Credence Game](http://www.acritch.com/credence-game/).\n* Launched the redesigned, updated, and reorganized [Singularity.org](http://intelligence.org/blog/2012/06/18/welcome-to-the-new-singularity-org/) website.\n* [Achieved most of the goals](http://lesswrong.com/r/discussion/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/#summary) from our [August 2011 strategic plan](http://miri.wpengine.com/wp-content/uploads/2012/06/strategicplan20112.pdf).\n* 11 new [research publications](http://intelligence.org/research/).\n* Eliezer published the first 12 posts in his sequence [Highly Advanced Epistemology 101 for Beginners](http://wiki.lesswrong.com/wiki/Highly_Advanced_Epistemology_101_for_Beginners), the precursor to his forthcoming sequence, *Open Problems in Friendly AI*.\n* SI staff members published many other substantive articles on Less Wrong, including [How to Purchase AI Risk Reduction](http://lesswrong.com/r/discussion/lw/cs6/how_to_purchase_ai_risk_reduction/), [How to Run a Successful Less Wrong Meetup](http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/), a [Solomonoff Induction tutorial](http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/), [The Human’s Hidden Utility Function (Maybe)](http://lesswrong.com/lw/9jh/the_humans_hidden_utility_function_maybe/), [How can I reduce existential risk from AI?](http://lesswrong.com/lw/ffh/how_can_i_reduce_existential_risk_from_ai/), [AI Risk and Opportunity: A Strategic 
Analysis](http://lesswrong.com/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/), and [Checklist of Rationality Habits](http://lesswrong.com/lw/fc3/checklist_of_rationality_habits/).\n* Launched our new volunteers platform, [SingularityVolunteers.org](http://singularityvolunteers.org/).\n* Hired two new researchers, Kaj Sotala and Alex Altair.\n* Published our [press kit](http://miri.wpengine.com/wp-content/uploads/2012/11/SI_PressKit.pdf) to make journalists’ lives easier.\n* And of course *much* more.\n\n\n### Future Plans You Can Help Support\n\n\nIn the coming months, we plan to do the following:\n\n\n* As part of Singularity University’s acquisition of the Singularity Summit, we will be changing our name and launching a new website.\n* Eliezer will publish his sequence *Open Problems in Friendly AI*.\n* We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for many of our core materials, to make them more accessible: *[The Sequences, 2006-2009](http://wiki.lesswrong.com/wiki/Sequences)*, *[Facing the Singularity](http://facingthesingularity.com/)*, and *[The Hanson-Yudkowsky AI Foom Debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate)*.\n* We will publish several more research papers, including “Responses to Catastrophic AGI Risk: A Survey” and a short, technical introduction to [timeless decision theory](http://wiki.lesswrong.com/wiki/Timeless_decision_theory).\n* We will set up the infrastructure required to host a productive Friendly AI team and try hard to recruit enough top-level math talent to launch it.\n\n\n(Other projects are still being surveyed for likely cost and strategic impact.)\n\n\nWe appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. 
If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.\n\n\n† $115,000 of total matching funds has been provided by Edwin Evans, Mihály Bárász, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.\n\n\nThe post [2012 Winter Matching Challenge!](https://intelligence.org/2012/12/06/2012-winter-matching-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-12-06T19:42:41Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "3f57fd38cd21dcf8cdb9c7e73319ff4b", "title": "Once again, a reporter thinks our positions are the opposite of what they are", "url": "https://intelligence.org/2012/11/26/once-again-a-reporter-thinks-our-positions-are-the-opposite-of-what-they-are/", "source": "miri", "source_type": "blog", "text": "Perhaps the most accurate media coverage the Machine Intelligence Research Institute (MIRI) has yet received was [a piece](http://commonsenseatheism.com/wp-content/uploads/2011/09/The-Singularity-in-Playboy.pdf) by legendary science author [Carl Zimmer](http://commonsenseatheism.com/wp-content/uploads/2011/09/The-Singularity-in-Playboy.pdf) in *Playboy*. To give you a sense of how inaccurate *most* of our media coverage is, here’s a (translated) quote from some coverage of MIRI in [a Franco-German documentary](http://lesswrong.com/lw/8ch/singularity_institute_mentioned_on_francogerman_tv/):\n\n\n\n> In San Francisco however, a society of young voluntary scientists believes in the good in robots. How naive! 
Here at [MIRI]…\n> \n> \n\n\nSuch a quote is amusing because, of course, the Machine Intelligence Research Institute has been saying for a decade that AI will *by default* be harmful to humanity, that it is *extremely difficult* to design a “moral machine,” that neither we nor anyone else knows how to do it yet, and that [dozens of approaches](https://intelligence.org/files/IE-ME.pdf) proposed by others are naive and insufficient for one reason or another.\n\n\nNow, in [a new piece](http://www.bryanappleyard.com/i-extinct-you-robot/) for *The Sunday Times*, Bryan Appleyard writes:\n\n\n\n> Yudkowsky [from MIRI] seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.\n> \n> \n> “Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.\n> \n> \n\n\nAgain: MIRI has been saying for a decade that it is extremely difficult to program a machine to not kill a baby. Indeed, our position is that directly programming moral norms won’t work because [our values are more complex than we realize](https://intelligence.org/files/IE-ME.pdf). The direct programming of moral norms is something that others have proposed, and a position we criticize. 
For example, here is a quote from the [concise summary](https://intelligence.org/summary/) of our research program:\n\n\n\n> Since we have no introspective access to the details of human values, the solution to this problem probably involves designing an AI to learn human values by looking at humans, asking questions, scanning human brains, etc., rather than an AI preprogrammed with a fixed set of imperatives that sounded like good ideas at the time.\n> \n> \n\n\nBut this doesn’t mean we can simply let some advanced machine learning algorithms observe human behavior to learn our moral values, because:\n\n\n\n> The explicit moral values of human civilization have changed over time, and we regard this change as progress, and extrapolate that progress may continue in the future. An AI programmed with the explicit values of 1800 might now be fighting to reestablish slavery… Possible bootstrapping algorithms include “do what we would have told you to do if we knew everything you knew,” “do what we would’ve told you to do if we thought as fast as you did and could consider many more possible lines of moral argument,” and “do what we would tell you to do if we had your ability to reflect on and modify ourselves.” In moral philosophy, this notion of moral progress is known as reflective equilibrium.\n> \n> \n\n\nMoving on… Appleyard’s point that “Machines would be much more radically adjusted away from human social norms, however we programmed them” is another point MIRI has been making from the very beginning. See the warnings against anthropomorphism in “[Creating Friendly AI](https://intelligence.org/files/CFAI.pdf)” (2001) and “[Artificial Intelligence as a Positive and Negative Factor in Global Risk](https://intelligence.org/files/AIPosNegFactor.pdf)” (written in 2005, published in 2008). 
AI mind designs will be far more “alien” to us than the minds of aliens appearing in movies.\n\n\nAppleyard goes on to say:\n\n\n\n> [Compared to MIRI,] the Cambridge group has a much more sophisticated grasp of these issues. Price, in particular, is aware that machines will not be subject to social pressure to behave well.\n> \n> \n> “When you think of the forms intelligence might take,” Price says, “it seems reasonable to think we occupy some tiny corner of that space and there are many ways in which something might be intelligent in ways that are nothing like our minds at all.”\n> \n> \n\n\nWhat Appleyard doesn’t seem to realize, here, is that Price is basically quoting a point long stressed by MIRI researcher Eliezer Yudkowsky — that there is a huge space of possible minds, and humans only occupy a tiny corner of that space. In fact, that point is associated with MIRI and Eliezer Yudkowsky more than with anyone else! Here’s another quote from the paper Yudkowsky wrote in 2005:\n\n\n\n> The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds in general. 
The entire map floats in a still vaster space, the space of optimization processes… It is this enormous space of possibilities which outlaws anthropomorphism as legitimate reasoning.\n> \n> \n\n\nSo once again, a reporter thinks MIRI’s positions are the opposite of what they are.\n\n\nBeware what you read in the popular media!\n\n\n*Update*: I told Appleyard of his mistake, and he simply denied that his article had made a mistake on this matter.\n\n\nThe post [Once again, a reporter thinks our positions are the opposite of what they are](https://intelligence.org/2012/11/26/once-again-a-reporter-thinks-our-positions-are-the-opposite-of-what-they-are/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-11-27T00:23:57Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "78d2714b487c91eae86b128a4d6e2a5f", "title": "November 2012 Newsletter", "url": "https://intelligence.org/2012/11/07/november-2012-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| November 2012 Newsletter |\n\n\n**Greetings from the Executive Director**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/luke.jpeg \"luke\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/luke/)\nDear friends of the Machine Intelligence Research Institute,\n\n\nMy thanks to the dozens of staff members, contractors, and volunteers who helped make this year’s [Singularity Summit](http://singularity.us5.list-manage1.com/track/click?u=353906382677fa789a483ba9e&id=f026e8c5d4&e=035f10ad19) our most professional and exciting Summit yet! Videos of the talks are now [online](http://singularity.us5.list-manage.com/track/click?u=353906382677fa789a483ba9e&id=dea514a5dd&e=035f10ad19), but I pity those who missed out on the live event and the killer lobby scene. We made more room in the schedule this year for mingling and networking, and everyone seemed to love it. 
After all, the future won’t be created merely by information and information technologies, but by the *communities of people* who decide to create the future together.\n\n\nThe Summit is a tremendous amount of work each year, and so it felt great to have so many people approach me to say, unprompted, “Wow, this is the best Summit yet!” and “You guys really took it to the next level this year; this is great!” I replayed those moments in my head on Sunday night as I drifted into the blissful coma that would repay several weeks of sleep debt.\n\n\n[Luke Muehlhauser](http://lukeprog.com/)\n\n\n\n\n\n---\n\n\n**Singularity Summit Rocks San Francisco**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/temple-grandin.png \"temple grandin\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/temple-grandin/)The Singularity Summit 2012 was held at the Masonic Center in San Francisco on October 13-14, with an attendance of over 600 scientists, entrepreneurs, and thought leaders. We received media coverage from the BBC (online later this month), the *[Wall Street Journal](http://online.wsj.com/article/28D7B608-CBC0-413A-B20C-0D1F0295E28E.html)*, *[The Verge](http://www.theverge.com/2012/10/22/3535518/singularity-rapture-of-the-nerds-gods-end-human-race)*, [*PolicyMic*](http://www.policymic.com/articles/16546/human-immortality-singularity-summit-looks-forward-to-the-day-that-humans-can-live-forever), and several other media outlets. The full program is online for your viewing enjoyment [at Fora.tv](http://fora.tv/conference/the_singularity_summit_2012). (If you like to watch talks at an accelerated playback speed, you can sign up for a free trial of Fora.tv, download the HD video, and play it with a program like [VLC](http://www.videolan.org/vlc/index.html) that allows you to adjust playback speed.)\n\n\nWe would like to thank everyone who participated this year, including all the speakers, attendees, and hard-working staff. 
Everyone we heard from was very impressed by the program and the quality of lobby networking and discussion. Stay tuned for announcements about the next Singularity Summit!\n\n\n\n\n---\n\n\n**Workshop: Rationality for Entrepreneurs**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/presentation-center.png \"presentation center\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/presentation-center/)On November 16-18, the Machine Intelligence Research Institute’s sister organization [Center for Applied Rationality](http://appliedrationality.org) (CFAR) will be running an immersive rationality workshop in the San Francisco Bay Area for a select group of 25 entrepreneurs.\n\n\nThe workshop builds on [CFAR’s previous rationality training retreats](http://appliedrationality.org/retreats/) to present a curriculum that addresses the highest-priority improvements to reasoning for entrepreneurs. Small class sizes, interactive workshop activities, personalized attention inside and outside of class, and six weeks of regular followup are designed to help participants learn to actually *use* the techniques, rather than just know *about* them academically. The curriculum includes:\n\n\n1. What ideal decision-making looks like: when to trust your gut, and when to trust your head.\n2. How to learn about your own motivations and goals using principled thought experiments.\n3. How to make more accurate everyday predictions using Bayes’ Rule, a simple but powerful intuitive tool from probability theory.\n4. The science behind stress reactions, and how to make it easier to ask VCs for investments, customers for money, and your employees to go for equity.\n\n\nApplications are still open. 
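Curriculum item 3 above refers to the standard Bayes’ Rule posterior update, P(H|E) = P(E|H)·P(H) / P(E). As a minimal sketch of the kind of everyday calculation such training covers (the numbers below are illustrative, not taken from the CFAR curriculum):

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior P(H|E) via Bayes' Rule:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
    """
    # Total probability of observing the evidence E.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: a 1% base rate, evidence that shows up in 90% of
# true cases but also false-alarms 5% of the time.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # → 0.154
```

Even strong evidence moves a 1% prior only to about 15%, which is the base-rate intuition the workshop aims to train.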
To fill out an application, or to find out more about the workshop, go to the [CFAR website](http://appliedrationality.org/entrepreneurs/).\n\n\n\n\n---\n\n\n**New Less Wrong Sequence from Eliezer Yudkowsky**\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/yudkowsky.png \"yudkowsky\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/yudkowsky-2/)\n\n\n---\n\n\nEliezer Yudkowsky returns to *Less Wrong* with a new sequence of articles, [Highly Advanced Epistemology 101 for Beginners](http://wiki.lesswrong.com/wiki/Highly_Advanced_Epistemology_101_for_Beginners), which sets the stage for his next sequence, “Open Problems in Friendly AI.” The latter sequence will outline the mathematical and philosophical problems which need to be solved to make concrete progress on Friendly AI.\n\n\nFor an earlier article outlining some open problems in Friendly Artificial Intelligence, see Luke Muehlhauser’s [So You Want to Save the World](http://lukeprog.com/SaveTheWorld.html). These open problems include: developing a reflective decision theory, selecting ideal Bayesian priors, and ensuring that an AI’s utility function remains stable even under fundamental changes to the AI’s ontology.\n\n\n\n\n---\n\n\n**New Volunteer Platform Launched!**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/volunteer.png \"volunteer\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/volunteer-2/)Sign up here: [www.singularityvolunteers.org](http://www.singularityvolunteers.org)\n\n\nOver the past couple of months we thought hard about how to improve our volunteer program, with the goal of finding a system that makes it easier to engage volunteers, create a sense of community, and quantify volunteer contributions. 
After evaluating several different volunteer management platforms, we decided to partner with [Youtopia](http://www.youtopia.com/info/) — a young company with a lot of promise — and make heavy use of [Google Docs](http://en.wikipedia.org/wiki/Google_Docs).\n\n\nYoutopia structures volunteer opportunities into challenges with associated activities. Completing activities earns volunteers points — which allow them to measure how much they are contributing relative to other volunteers (friendly competition encouraged) — and awards that showcase specific accomplishments. Leveraging Google Docs allows volunteers to work together more smoothly — in real-time or asynchronously.\n\n\nIt used to be that most volunteers were isolated from each other as they worked with different SI staff members on various projects. This made generating a sense of community difficult. We think a sense of community is important for long-term volunteer engagement. Also, it was difficult to quantify the contributions made by our volunteers. Now, volunteers can see what their peers are working on, compete on challenges, and work on activities together — all while Youtopia makes it easy for us to quantify their contributions.\n\n\nHere is a quote from Project Manager Malo Bourgon: “I’d strongly encourage everyone to head over to [singularityvolunteers.org](http://singularityvolunteers.org/), register as a volunteer, and explore the challenges we currently have posted. As a small nonprofit, we literally have hundreds of hours of work we just can’t afford to do each month. 
Because of this, volunteers are really important to us; they really do make a meaningful impact.”\n\n\n\n\n---\n\n\n**Singularity Rising Published**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/singularity-rising.png \"singularity rising\")](http://intelligence.org/blog/2012/11/07/november-2012-newsletter/singularity-rising/)*Singularity Rising*, a new book by Smith College economics professor James D. Miller (author of [*Principles of Microeconomics*](http://www.amazon.com/Principles-Microeconomics-James-Miller/dp/0073402834/)), is now available [for purchase](http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous/dp/1936661659/). Here are some of the scenarios that Professor Miller considers in his new book:\n\n\n* A merger of man and machine making society fantastically wealthy and nearly immortal.\n* Competition with billions of cheap AIs drives human wages to almost nothing while making investors rich.\n* Businesses rethink investment decisions to take into account an expected future period of intense creative destruction.\n* Inequality drops worldwide as technologies mitigate the cognitive cost of living in impoverished environments.\n* Drugs designed to fight Alzheimer’s disease and keep soldiers alert on battlefields have the fortunate side effect of increasing all of their users’ IQs, which, in turn, adds percentage points to worldwide economic growth.\n\n\nMiller’s book has received glowing endorsements from Luke Muehlhauser, PayPal co-founder Peter Thiel, SENS Foundation Chief Science Officer Aubrey de Grey, Humanity+ Chairman Natasha Vita-More, and novelist Vernor Vinge.\n\n\n\n\n---\n\n\n**Register Now for Tickets to AGI-12!**\n\n\n\n\n---\n\n\nThe [Fifth Conference on Artificial General Intelligence](http://agi-conference.org/2012/) will be held at Oxford University this year, from December 8-11. AGI researchers will present and discuss their results from the last year. 
Register [here](http://www.winterintelligence.org/travel-and-registration/).\n\n\nSome of the speakers include [David Hanson](http://en.wikipedia.org/wiki/David_Hanson_(robotics_designer)), CEO of Hanson Robotics, who will speak on humanoid robots and AGI; [Angelo Cangelosi](http://www.tech.plym.ac.uk/soc/staff/angelo/), professor of AI and cognition, who will speak on cognitive robotics; professor of cognitive science [Margaret Boden](http://en.wikipedia.org/wiki/Margaret_Boden), who will speak on creativity and AGI; and [Nick Bostrom](http://nickbostrom.com/), professor of philosophy, who will speak on the future evolution of advanced AGIs and the dynamics of AGI goal systems.\n\n\nImmediately following AGI-12 will be the first conference on [AGI Impacts](http://www.winterintelligence.org/oxford2012/agi-impacts/), organized and hosted by the Future of Humanity Institute. The keynote speakers will be [Steve Omohundro](http://en.wikipedia.org/wiki/Steve_Omohundro) and [Bruce Schneier](http://www.schneier.com/).\n\n\nThe Singularity Institute is sponsoring a $1,000 prize, the [2012 Turing Prize for Best AGI Safety Paper](http://www.winterintelligence.org/oxford2012/agi-impacts/cfp/#prize), for exceptional research on the question of how to develop safe architectures or goals for AGI.\n\n\nWe hope to see you in Oxford for these important conferences!\n\n\n\n\n---\n\n\n**Michael Anissimov and Louie Helm to Speak at Humanity+ @ San Francisco**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/anissimov-helm.png \"anissimov-helm\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/anissimov-helm/)SI staffers Michael Anissimov and Louie Helm will give talks at the upcoming [Humanity+ @ San Francisco conference](http://2012.humanityplus.org/) on December 1-2, speaking alongside distinguished presenters such as Aubrey de Grey and David Pearce.\n\n\nMichael Anissimov will speak on “The Media Performance of Transhumanism” while Louie 
Helm will speak on “The Mainstream Academic Publishing We Need”. Here is Michael Anissimov’s abstract:\n\n\n\n> Since 2005 or so, transhumanism and transhumanist ideas have had a rising profile in the media. What have been our greatest successes of the past few years and how can we repeat them? Which memes are getting the most airtime, and which are being ignored? Is more media exposure always better? What can we do to ensure that we and our organizations are media-savvy? How do we leverage technology to maximize the impact of social media? This talk by the media director of the Singularity Institute will examine these questions and come to concrete conclusions.\n> \n> \n\n\nHere is Louie Helm’s abstract:\n\n\n\n> It may seem obvious to you that progress in fields like AGI and life extension will have the most long-lasting and far-reaching impact of any work you could possibly be doing right now. But what enabled you to realize that? Your path to understanding probably passed through a period of several months of independent self-study that required you to evaluate many informal arguments of varying quality, scattered across the internet. But if we want to attract more and better researchers, especially domain experts who don’t have hundreds of hours to study outside their field, we need to formally summarize our best ideas in standard academic style to help erase this enormous barrier to entry. Contributing to this effort by publishing current research is accessible to many and support is available for those who are motivated.\n> \n> \n\n\nSee other abstracts on the [conference abstracts page](http://2012.humanityplus.org/abstracts/). 
Tickets for the conference are [available now](http://2012.humanityplus.org/tickets/).\n\n\n\n\n---\n\n\n**Featured Volunteer: Tim Oertel**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/tim-oertel.png \"tim oertel\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/tim-oertel/)Every newsletter, we like to recognize a volunteer who has made a special contribution to the Singularity Institute. This month, we honor Tim Oertel for his proofreading work.\n\n\nWith the launch of the new website, the Singularity Institute has also republished many of its [existing publications](https://intelligence.org/feed/research) into SI’s new article template. Moreover, this year has been a productive one in terms of *new* publications from SI staff and research associates. As such, volunteer proofreaders have never been more important to SI. Tim Oertel is one of SI’s leading proofreaders. Thank you for your excellent work, Tim!\n\n\n\n\n---\n\n\n**Featured Summit Video: Luke Muehlhauser on \n\n“The Singularity: Promise and Peril”**\n\n\n\n\n---\n\n\n\nIn “[The Singularity: Promise and Peril](http://singularity.us5.list-manage.com/track/click?u=353906382677fa789a483ba9e&id=365639e618&e=035f10ad19)”, SI Executive Director [Luke Muehlhauser](http://singularity.org/team/) explains the Singularity, its potential risks and benefits, and what we can do about it. Muehlhauser emphasizes that intelligence lies at the root of all technology, updating the familiar Arthur C. 
Clarke quote “Any sufficiently advanced technology is indistinguishable from magic”, with his new version, “Any sufficiently advanced *intelligence* is indistinguishable from magic.” For more of Luke’s views on the Singularity, we encourage you to read his e-book *[Facing the Singularity](http://facingthesingularity.com/)*.\n\n\n\n\n---\n\n\n**Featured Research Paper: “Learning What to Value”**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/10/learningwhatovalue1.png \"learningwhatovalue\")](http://miri.wpengine.com/wp-content/uploads/2012/06/LearningValue.pdf)In this [research paper from 2011](http://miri.wpengine.com/wp-content/uploads/2012/06/LearningValue.pdf), SI research associate [Daniel Dewey](http://singularity.org/research-associates/) (now also a Research Fellow at the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford University) outlines his concept of “value learners.” Here is a snippet from the paper’s abstract:\n\n\n\n> Reinforcement learning can only be used in the real world to define agents whose goal is to maximize expected rewards, and since this goal does not match with human goals, AGIs based on reinforcement learning will often work at cross-purposes to us. 
To solve this problem, we define value learners, agents that can be designed to learn and maximize any initially unknown utility function so long as we provide them with an idea of what constitutes evidence about that utility function.\n> \n> \n\n\n\n\n---\n\n\n**News Items**\n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/1000_genomes2.png \"1000_genomes\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/1000_genomes-3/)\n[First Genome Study to Sequence 1000+ Human Genomes](http://www.guardian.co.uk/science/2012/oct/31/genomes-project-inventory-human-genetic-variation) \n\n*The Guardian*, October 31, 2012\n\n\n \n\n\n \n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/google-car1.png \"google car\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/google-car-2/)\n[Self-driving cars now legal in California](http://edition.cnn.com/2012/09/25/tech/innovation/self-driving-car-california) \n\n*CNN*, October 30, 2012\n\n\n \n\n\n \n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/robots.png \"robots\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/robots/)\n[Swarm Robots Cooperate with a Flying Drone](http://www.wimp.com/robotsdrone/) \n\n*Wimp.com*, October 23, 2012\n\n\n \n\n\n \n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/watson.png \"watson\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/watson/)\n[IBM’s Watson Expands Commercial Applications, Plans to Go Mobile](http://singularityhub.com/2012/10/14/ibms-watson-jeopardy-champ-expands-commercial-applications-aims-to-go-mobile/) \n\n*Singularity Hub*, October 14, 2012\n\n\n \n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/cat.png \"cat\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/cat/)\n[Google Puts Its Virtual Brain Technology to 
Work](http://www.technologyreview.com/news/429442/google-puts-its-virtual-brain-technology-to-work/) \n\n*Technology Review*, October 5, 2012\n\n\n \n\n\n \n\n\n\n\n---\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/11/bees.png \"bees\")](http://singularity.org/blog/2012/11/07/november-2012-newsletter/bees/)\n[‘Green Brain’ Project to Create an Autonomous Flying Robot With a Honey Bee Brain](http://www.sciencedaily.com/releases/2012/10/121001111405.htm) \n\n*ScienceDaily*, October 2, 2012\n\n\n \n\n\n \n\n\n\n\n---\n\n\n**Thank You for Reading!**\n\n\nThe post [November 2012 Newsletter](https://intelligence.org/2012/11/07/november-2012-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-11-07T00:59:20Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "942e1f455d6244452948ea2faf59a8c4", "title": "September 2012 Newsletter", "url": "https://intelligence.org/2012/09/21/september-2012-newsletter/", "source": "miri", "source_type": "blog", "text": "| |\n| --- |\n| |\n| \n\n| |\n| --- |\n| September 2012 Newsletter |\n\nGreetings from the Executive Director\n\n| |\n| --- |\n| \nAugust was a busy month for the Machine Intelligence Research Institute. Thanks to our successful summer fundraiser, we are running full steam ahead on all fronts: [Singularity Summit 2012](http://singularitysummit.com/), the launch of [CFAR](http://appliedrationality.org/), increased research output (see below), and improving organizational efficiency in literally dozens of ways.\nThank you for your continued support as we work toward a positive Singularity.\nLuke Muehlhauser\n |\n\n**Register Now for Singularity Summit 2012**\n\n| |\n| --- |\n| \nOur annual conference, the [Singularity Summit](http://singularitysummit.com/), is back on the west coast this year! The Summit is October 13-14 at the Nob Hill Masonic Auditorium in San Francisco. 
Thought leaders from all over the world converge on this event, the leading conference on transformative emerging technologies, providing fantastic opportunities for networking.\nThe [full schedule of events](http://singularitysummit.com/schedule/) for Singularity Summit was recently announced. Featured presenters include futurist Ray Kurzweil, Nobel Prize winner Daniel Kahneman, cognitive scientist Steven Pinker, co-founder of genomics company 23andMe Linda Avey, animal welfare and autism advocate Temple Grandin, mathematician and sci-fi great Vernor Vinge, Google Director of Research Peter Norvig, and other [thought leaders](http://singularitysummit.com/2012/09/07/summit-2012-poster/singularity-poster/).\nFor regular updates on the Summit as it approaches, follow the [Singularity Summit blog](http://singularitysummit.com/blog/). There are still [discounted rooms available](http://singularitysummit.com/logistics/) at the Mark Hopkins hotel, a two minute walk from the Summit. |\n\n**SI Hires Two New Researchers**\n\n| |\n| --- |\n| \nThe Machine Intelligence Research Institute has hired two new full-time Research Fellows. [Alex Altair](http://intelligence.org/team/#altair), based out of our Berkeley headquarters, is focused on making the investigation of [Friendly Artificial Intelligence](http://friendly-ai.com/faq.html) more mathematically rigorous. He recently wrote [“An Intuitive Explanation of Solomonoff Induction”](http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/) with Luke Muehlhauser, and is now working on a paper on timeless decision theory. [Kaj Sotala](http://intelligence.org/team/#sotala), based in Helsinki, Finland, is developing several papers on AI risk strategy. 
Kaj co-authored two SI papers this year, on the [advantages of artificial intelligences, uploads, and digital minds](http://miri.wpengine.com/wp-content/uploads/2012/08/AdvantagesOfAIs.pdf) and [brain uploading-related group mind scenarios](http://miri.wpengine.com/wp-content/uploads/2012/08/AdvantagesOfAIs.pdf). We congratulate Alex and Kaj on their new positions and look forward to encouraging their future efforts. |\n\n**New Evidence About AI Predictions**\n\n| |\n| --- |\n| \nKaj Sotala, our new Research Fellow, recently analyzed a large set of AI predictions (compiled previously by other SI researchers). The results are summarized in the *Less Wrong* post [“AI timeline predictions: are we getting better?”](http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/) Kaj collaborated with Stuart Armstrong of the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) at Oxford for this project. 257 predictions were retained, and the key results are summarized in the graph above.\nOf particular interest was whether predictions had a tendency to fall within the lifetime of the predictor — an alleged phenomenon dubbed the [“Maes-Garreau law”](http://www.kk.org/thetechnium/archives/2007/03/the_maesgarreau.php) by Kevin Kelly. This study revealed, for the first time in a scientific way, that there is no substantial tendency for predictors to expect AI within their own lifetimes. The clustering effect you would expect if this tendency existed is [nowhere to be found](http://kajsotala.fi/Random/ScatterAgeToAI.jpg).\nAnders Sandberg, a researcher at the Future of Humanity Institute, blogged about the study, writing, “[Stuart Armstrong and Kaj Sotala… produced an excellent post analyzing a set of predictions about the future of AI](http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/). 
Among other things, they looked for evidence of the [Maes-Garreau law](http://en.wikipedia.org/wiki/Maes-Garreau_Law), that people predict AI somewhere about when they retire. Somewhat surprisingly, they found that this was not true. Instead, over a third of predictors claim AI will happen 16-25 years in the future, irrespective of age. [There was no strong correlation between age and expected distance into the future](http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/77gd).” |\n\n**Machine Intelligence Research Institute Upgrades Its Research Output**\n\n| |\n| --- |\n| \nThe Machine Intelligence Research Institute’s research fellows and research associates have [more peer-reviewed publications forthcoming in 2012 than they had published in all past years combined](http://lesswrong.com/r/discussion/lw/axr/three_new_papers_on_ai_risk/627o).\nWe thank Eliezer Yudkowsky, Daniel Dewey, Luke Muehlhauser, Anna Salamon, Louie Helm, Joshua Fox, Carl Shulman, and Kaj Sotala for their recent research efforts and encourage them to continue this valuable work. |\n\nPress Kit Published\n\n| |\n| --- |\n| \n**In anticipation of the upcoming Singularity Summit, we have published an online press kit [here](http://miri.wpengine.com/wp-content/uploads/2012/08/SI_PressKit_old.pdf). This press kit compiles a letter from our executive director, a quick overview of our research, essential bios of staff and supporters, and a selection of media articles. Journalists interested in writing about the Machine Intelligence Research Institute or Singularity Summit are encouraged to get in touch with our Media Director Michael Anissimov at [admin@intelligence.org](mailto:admin@intelligence.org).** |\n\nSummer Program on Rationality and Cognition\n\n| |\n| --- |\n| \nIn early August, SI co-sponsored the [Summer Program on Rationality and Cognition](http://appliedrationality.org/sparc.html) (SPARC), a week-long summer camp for young mathematicians held at UC Berkeley. 
The Center for Applied Rationality (CFAR), a non-profit currently in the process of spinning off from SI, also sponsored the program. Paul Christiano, a primary organizer of the program, said, “SPARC drew many of the very best high school mathematicians, and introduced them to a roughly even mix of [LessWrong](http://lesswrong.com/)-style rationality and technical foundations for probabilistic reasoning. Students were enthusiastic and the program was very well received (comparable to other CFAR programs). The apparent success of SPARC will facilitate similar programs in future years, helping CFAR understand how to reach a wider audience and establishing a positive reputation in a wider community.” |\n\n**Luke Muehlhauser’s Reddit AMA a Success**\n\n| |\n| --- |\n| \nLuke Muehlhauser conducted an “Ask Me Anything” on the social media website Reddit.com, under [/r/Futurology](http://www.reddit.com/r/Futurology/comments/y9lm0/i_am_luke_muehlhauser_ceo_of_the_singularity/). He said “I am Luke Muehlhauser, CEO of the Machine Intelligence Research Institute. Ask me anything about the Singularity, AI progress, technological forecasting, and researching Friendly AI!” The result was a total of 2,150 comments and 1,329 upvotes! For a selection of Luke’s responses, read the feed for his [Reddit account](http://www.reddit.com/user/lukeprog). There was also [coverage](http://www.wired.com/geekdad/2012/08/preventing-skynet/) at *WIRED*. |\n\n**Featured Singularity Summit Video: Max Tegmark**\n\n| |\n| --- |\n| \nIn this presentation at Singularity Summit 2011, cosmologist and MIT professor Max Tegmark explores what could happen to us billions of years in the future. Entitled [“The Future of Life: a Cosmic Perspective”](http://www.youtube.com/watch?v=GctnYAYcMhI), Tegmark summarizes what direction cosmologists currently believe the universe will take in coming billions of years and analyzes the implications for the future of life and civilization. 
He questions the common perception of humans as being the pinnacle of life and foresees beings “as far ahead of us as we are from bacteria”. |\n\n**Featured Volunteer: Alton Sun**\n\n| |\n| --- |\n| \nAlton Sun, a photographer from the San Francisco Bay Area, has volunteered for the Machine Intelligence Research Institute (MIRI) and our sponsored organization CFAR (Center for Applied Rationality) on many occasions now. Alton likes orchestrating photo shoots in some cool warehouses with shadows and orange light, or a grassy field with bright sun and beautiful trees. Alton finds, however, that the component of beauty normally found in the physical surroundings of his work is found in the minds of the people who work here, where their ethically oriented and intellectual personalities hold new sources of inspiration.\nSince to Alton “inaction is inexcusable”, it simply becomes a question of upon what one should act. For him, his contribution of photography to SI is a means of addressing some of the world’s most challenging problems and answering some of the largest questions for what is coming in the future of humanity.\nTo get to know his work, find his portfolio here: [altonsunphoto.com](http://altonsunphoto.com/) |\n\nThank You for Reading Our Newsletter!\n\n| |\n| --- |\n| |\n\n |\n\n\n \n\n\nThe post [September 2012 Newsletter](https://intelligence.org/2012/09/21/september-2012-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-09-21T18:16:48Z", "authors": ["Jake"], "summaries": []} -{"id": "4d86b6c62070d20168c0f208658871b5", "title": "August 2012 Newsletter", "url": "https://intelligence.org/2012/08/21/august-2012-newsletter/", "source": "miri", "source_type": "blog", "text": "This newsletter was sent to newsletter subscribers in early August, 2012.\n\n\n\n\n| | | | | | | |\n| --- | --- | --- | --- | --- | --- | --- |\n| \n\nGreetings from the Executive Director\n\n\n| |\n| --- |\n| The 
big news this month is that we [surpassed our fundraising goal](http://intelligence.org/blog/2012/07/30/2012-summer-singularity-challenge-success/) of raising $300,000 in the month of July. My thanks to everyone who donated! Your contributions will help us finish launching [CFAR](http://appliedrationality.org/) and begin to build a larger and more productive research team working on some of [the most important research problems in the world](http://lukeprog.com/SaveTheWorld.html). Luke Muehlhauser\n |\n\n\n\n\n\nSingularity Summit Prices Will Increase on August 15th!\n\n\n| |\n| --- |\n| Two-day ticket prices for the Singularity Summit 2012 are still only $635, but will increase again on August 15th! For anyone interested in hearing some of the foremost speakers on science, technology, and the future of humanity, buy your ticket today to our international conference at the Nob Hill Masonic Center, SF on October 13-14th! |\n\n\n\n\n\n2012 Summer Singularity Challenge Success!\n\n\n| |\n| --- |\n| Thanks to the effort of our donors, the 2012 Summer Singularity Challenge has been met! All $150,000 contributed will be matched dollar for dollar by our matching backers, raising a total of $300,000 to fund the Machine Intelligence Research Institute’s operations. We reached our goal near 6pm on July 29th. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Your dollars make the difference. Here’s to a better future for the human species. |\n\n\n\n\n\nFacing the Singularity Finished\n\n\n| |\n| --- |\n| Luke Muehlhauser has now published the final chapters of his introductory blog on the coming of AI, [*Facing the Singularity*](http://facingthesingularity.com/). The penultimate chapter explains what can be done to improve our odds of a positive singularity, and the final chapter outlines what benefits we can expect from a positive singularity. 
|\n\n\n\n\nComparison of 2011 August strategic plan to today\n\n\n| |\n| --- |\n| Progress updates are nice, but without a previously defined metric for success it’s hard to know whether an organization’s achievements are noteworthy or not. Is the Machine Intelligence Research Institute making good progress, or underwhelming progress? Luckily, in August 2011 we published a [strategic plan](http://miri.wpengine.com/wp-content/uploads/2012/06/strategicplan20112.pdf) that outlined lots of specific goals. It’s now August 2012, so we can check our progress against the standard set nearly one year ago. The full comparison is available [here](http://lesswrong.com/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/), and the final section is excerpted below: Now let’s check in on what we said **our top priorities for 2011-2012** were:1. *Public-facing research on creating a positive singularity*. Check. [SI has more peer-reviewed publications in 2012 than in all past years combined](http://lesswrong.com/lw/axr/three_new_papers_on_ai_risk/627o).\n2. *Outreach / education / fundraising*. Check. Especially, through [CFAR](http://appliedrationality.org/).\n3. *Improved organizational effectiveness*. Check. [Lots of good progress](http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6jzn) on this.\n4. *Singularity Summit*. [Check](http://singularitysummit.com/).\n\nIn summary, I think SI is a bit behind where I hoped we’d be by now, though this is largely because we’ve poured so much into launching [CFAR](http://appliedrationality.org/), and as a result, CFAR has turned out to be significantly more cool at launch than I had anticipated. 
|\n\n\n\n\n\nSI Publishes Solomonoff Induction Tutorial\n\n\n| |\n| --- |\n| Visiting Fellow Alex Altair worked with Luke Muehlhauser to publish [An Intuitive Explanation of Solomonoff Induction](http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/), a sequel to Eliezer Yudkowsky’s [Intuitive Explanation of Bayes’ Theorem](http://yudkowsky.net/rational/bayes/). Whereas Bayes’ Theorem is a key idea in probability theory, Solomonoff Induction is a key idea in the study of universal, automated inference. It begins:\nPeople disagree about things. Some say that television makes you dumber; others say it makes you smarter.  Some scientists believe life must exist elsewhere in the universe; others believe it must not. Some say that complicated financial derivatives are essential to a modern competitive economy; others think a nation’s economy will do better without them.  It’s hard to know what is true.\nAnd it’s hard to know how to figure out what is true.  Some argue that you should assume the things you are most certain about and then deduce all other beliefs from your original beliefs. Others think you should accept at face value the most intuitive explanations of personal experience. Still others think you should generally agree with the scientific consensus until it is disproved.\nWouldn’t it be nice if determining what is true was like baking a cake? What if there was a recipe for finding out what is true? All you’d have to do is follow the written directions exactly, and after the last instruction you’d inevitably find yourself with some sweet, tasty truth!\nIn this tutorial, we’ll explain the closest thing we’ve found so far to a recipe for finding truth: Solomonoff induction. 
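The “recipe” can be sketched in miniature. The toy below is an illustrative simplification of our own, not code from the tutorial: where Solomonoff induction weights all computer programs on a universal Turing machine, this sketch restricts the hypothesis space to repeating bit-patterns, weights each by a 2^-length simplicity prior, and predicts the next bit from the hypotheses consistent with the data observed so far.

```python
# Toy sketch of Solomonoff-style induction (a simplification for illustration,
# not the full formalism): hypotheses are repeating bit-patterns standing in
# for "programs", weighted by a 2^-length simplicity prior.
from itertools import product

def hypotheses(max_len):
    """All repeating-pattern 'programs' up to max_len bits."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predicts(pattern, data):
    """True if repeating `pattern` reproduces the observed prefix `data`."""
    stream = (pattern * (len(data) // len(pattern) + 1))[: len(data)]
    return stream == data

def predict_next(data, max_len=8):
    """Posterior-weighted probability that the next bit is '1'."""
    total = weight_one = 0.0
    for h in hypotheses(max_len):
        if predicts(h, data):
            w = 2.0 ** -len(h)  # simplicity prior: shorter = more likely
            total += w
            if h[len(data) % len(h)] == "1":
                weight_one += w
    return weight_one / total

p = predict_next("01010")
print(f"P(next bit = 1) = {p:.3f}")  # the short pattern "01" dominates
```

Shorter consistent patterns dominate the prediction, which is the essence of the simplicity prior; the real formalism quantifies over all programs rather than this hand-picked hypothesis class.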
|\n\n\n\n\n\n\nDialogue with Bill Hibbard about AGI\n\n |\n| Luke Muehlhauser has published a [dialogue](http://lesswrong.com/lw/di6/muehlhauserhibbard_dialogue_on_agi/) between himself and computer scientist Bill Hibbard, author of [*Super-Intelligent Machines*](http://www.amazon.com/Super-Intelligent-Machines-International-Systems-Engineering/dp/0306473887/), about AI safety. The dialogue is part of Luke’s [series of interviews about AI safety](http://wiki.lesswrong.com/wiki/Muehlhauser_interview_series_on_AGI). |\n\n\n\n\n### Featured Donor: Robin Powell\n\n\n\n\n| |\n| --- |\n| Below is an interview with this month’s featured donor, Robin Powell.*Luke Muehlhauser*: Robin, you’ve been donating $200 a month since August 2004. That adds up to more than $20,000, making you our 8th largest publicly listed donor! Why do you support the Machine Intelligence Research Institute like this?*Robin Powell*: I honestly believe that a beneficial Singularity is the best hope that humanity has for long-term survival. Having spent hundreds of hours researching the various people and groups that are actively working on Singularity-related issues, the Machine Intelligence Research Institute is the only one that I really feel has their eyes on the right ball, which is the Friendly AI problem. I feel confident that my donations are the most effective way I can possibly aid in the best possible future for humanity.\n*Luke*: What do you give up each month in order to donate $200/month to the Machine Intelligence Research Institute?\n*Robin*: Mostly I’ve been able to get by when things got complicated by re-budgeting, but I’ve had to do that rather a lot more often than I would have had to otherwise.\n*Luke*: What challenges have you faced since August 2004, while continuing to donate $200 a month?\n*Robin*: The time that I took off a couple of months to help my aging father, without pay, was by far the hardest; the extra money would really have helped then. 
But for me it’s about expected return: when the future of the human race is in the balance, having to borrow from friends briefly or similar hardships seem pretty inconsequential.\n*Luke*: What one thought would you most like to share with the community of people who care about reducing existential risks?\n*Robin*: AI is coming, relatively soon. There is no more important task for humanity than to prevent our extinction and preserve a better version of our values. Now is the time to spend time and money protecting the future of humanity. Please help us.\n*Luke*: Thanks for your time, Robin, and thanks for your continued support! |\n\n\n\n\n\n### Featured Summit Video\n\n\n\n\n| |\n| --- |\n| This month we are featuring a video from the 2006 Singularity Summit: Eliezer Yudkowsky’s “[The Human Importance of the Intelligence Explosion](http://vimeo.com/album/1777581/video/44144898)“. Eliezer’s talk discusses I.J. Good’s concept of an “intelligence explosion,” and its central importance for the human species.\n |\n\n\n### Use GoodSearch, support the Machine Intelligence Research Institute\n\n\n\n\n| |\n| --- |\n| [GoodSearch](http://www.goodsearch.com/nonprofit/singularity-institute-for-artificial-intelligence-siai.aspx), which allows you to donate to a cause merely by using their search engine, now has a donation option for the Machine Intelligence Research Institute. Use GoodSearch to [donate every day without opening your wallet](http://www.goodsearch.com/nonprofit/singularity-institute-for-artificial-intelligence-siai.aspx)! 
|\n\n\n\nThe post [August 2012 Newsletter](https://intelligence.org/2012/08/21/august-2012-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-08-21T17:21:16Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "04abaf95cac6ca4dd9810d24f844d80d", "title": "July 2012 Newsletter", "url": "https://intelligence.org/2012/08/06/july-2012-newsletter/", "source": "miri", "source_type": "blog", "text": "*This newsletter was sent out to Machine Intelligence Research Institute newsletter subscribers in July 2012*\n\n\n### Greetings from the Executive Director\n\n\n\n\n---\n\n\n![Luke Muehlhauser](https://intelligence.org/files/201207_newsletter_01_LukeMuehlhauser.png)\nFriends of the Machine Intelligence Research Institute,\n\n\nGreetings! Our new monthly newsletter will bring you the latest updates from the Machine Intelligence Research Institute (MIRI). (You can read earlier monthly progress updates [here](http://intelligence.org/blog/category/monthly-progress/).)\n\n\nThese are exciting times at MIRI. We just launched our [new website](http://intelligence.org/), and also the website for the [Center for Applied Rationality](http://appliedrationality.org/). We have several research papers under development, and after a long hiatus from AI research, researcher Eliezer Yudkowsky is planning a new sequence of articles on “Open Problems in Friendly AI.”\n\n\nWe have also secured $150,000 in matching funds for a new fundraising drive. To help support us in our work toward a positive Singularity, please **[donate today](http://intelligence.org/blog/2012/07/03/summer-challenge/)** and have your gift doubled!\n\n\nLuke Muehlhauser \n\nMachine Intelligence Research Institute Executive Director\n\n\n\n\n\n### Donate Today to Double Your Impact!\n\n\n\n\n---\n\n\n![](https://intelligence.org/files/201207_newsletter_02_money.jpg)Our summer 2012 matching drive is on! 
Now is your chance to [**double your impact**](http://intelligence.org/blog/2012/07/03/summer-challenge/) while helping us raise up to $300,000 to help fund [our research program](http://intelligence.org/research/) and stage the upcoming [Singularity Summit](http://singularitysummit.com/)… which you can [register for now](https://www.thinkreg.com/events/singularity/)!Since we published our [strategic plan](http://miri.wpengine.com/wp-content/uploads/2012/06/strategicplan20112.pdf) in August 2011, we have achieved *most* of the near-term goals outlined therein. Here are just a *few* examples:\n\n\n* We outlined the [open research problems](http://lukeprog.com/SaveTheWorld.html) related to our work (Section 1.1).\n* We recruited several more [research associates](http://intelligence.org/research-associates/) and about a dozen [remote researchers](http://lesswrong.com/lw/bke/the_singularity_institute_still_needs_remote/) (Section 1.2e).\n* We held our annual [Singularity Summit](http://www.singularitysummit.com/) and gained corporate sponsors for it (Section 2.1).\n* We made progress in decision theory ([example](http://lesswrong.com/lw/8wc/a_model_of_udt_with_a_halting_oracle)) via LessWrong.com and our research associates (Section 2.2b).\n* We published [How to Run a Successful Less Wrong Meetup Group](http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/) (Section 2.2d).\n* We released pre-prints of several forthcoming research articles, including [How Hard is Artificial Intelligence?](http://www.nickbostrom.com/aievolution.pdf), [Intelligence Explosion: Evidence and Import](http://miri.wpengine.com/wp-content/uploads/2012/06/IE-EI-old.pdf), and [The Singularity and Machine Ethics](http://miri.wpengine.com/wp-content/uploads/2012/06/SaME.pdf) (Section 2.3).\n* We [redesigned](http://intelligence.org/blog/2012/06/18/welcome-to-the-new-singularity-org/) our primary website (Section 2.6).\n* We acquired $40,000/month in free Google Adwords 
advertising, to drive traffic to websites operated by the Machine Intelligence Research Institute (Section 2.6c).\n* We began publishing [monthly progress reports](http://intelligence.org/blog/category/monthly-progress/) (Section 2.9b).\n* We built up the Center for Applied Rationality such that it should be able to spin off from the Machine Intelligence Research Institute later this year (Section 3.1).\n* We created a [transparency section](http://intelligence.org/transparency/) on our website, where visitors can find our IRS 990 forms, and also several standard organizational policies, e.g. a conflict of interest policy, non-discrimination policy, etc (Section 3.2).\n\n\nIn the coming year, the **Machine Intelligence Research Institute plans to do the following**:\n\n\n* **Hold our annual [Singularity Summit](http://www.singularitysummit.com/)**, this year in San Francisco! Speakers this year include Ray Kurzweil, Steven Pinker, Tyler Cowen, Temple Grandin, Peter Norvig, Robin Hanson, and Vernor Vinge.\n* **Spin off the [Center for Applied Rationality](http://appliedrationality.org/)** as a separate organization focused on rationality training, so that the Machine Intelligence Research Institute can be focused more exclusively on Singularity research and outreach.\n* **Publish additional [research](http://intelligence.org/research/)** on AI risk and Friendly AI.\n* **Eliezer will write an “Open Problems in Friendly AI” sequence** for *Less Wrong*.\n* **Finish *[Facing the Singularity](http://facingthesingularity.com/)*** and publish ebook versions of *Facing the Singularity* and *[The Sequences, 2006-2009](http://wiki.lesswrong.com/wiki/Sequences)*.\n* And much more! For details on what we might do with additional funding, see [How to Purchase AI Risk Reduction](http://lesswrong.com/lw/cs6/how_to_purchase_ai_risk_reduction/).\n\n\nWe appreciate your support for our high-impact work! 
**[Donate now](http://intelligence.org/donate/)**, and seize a better than usual chance to move our work forward.\n\n\n\n### Eliezer to Write “Open Problems in Friendly AI”\n\n\n\n\n---\n\n\n![](https://intelligence.org/files/201207_newsletter_03_EliezerYudkowsky.jpg)After a long hiatus from AI research to help with movement-building, Eliezer Yudkowsky is now planning a sequence of articles on open problems in Friendly AI research. These articles will help to explain the technical research that can be done today to help ensure a positive Singularity. The articles will be initially published at [*Less Wrong*](http://lesswrong.com/), where his [earlier sequences of articles](http://wiki.lesswrong.com/wiki/Sequences) were published. What about Eliezer’s in-progress rationality books? They are on hold for now while Eliezer works on other projects. We have signed a retainer with a professional author who has written at least one best-selling science book, and he will work on Eliezer’s books once he finishes his current project, probably late this fall.\n\n\n\n### Visit Our New Website\n\n\n\n\n---\n\n\n![](https://intelligence.org/files/201207_newsletter_04_research.png)SI Media Director Michael Anissimov and many others have worked hard to create the new look and feel for our online web presence, at [Singularity.org](http://intelligence.org/). The site has been reorganized into six top-level pages: [About](http://intelligence.org/about/), [What We Do](http://intelligence.org/what-we-do/), [Research](http://intelligence.org/research/), [Media](http://intelligence.org/media/), [Get Involved](http://intelligence.org/get-involved/), and [Donate](http://intelligence.org/donate/). The single greatest update to our website is to our [research](http://intelligence.org/research/) page and our research papers, which are now organized into three categories. 
Most of our research papers have been ported to a clean new template, and every reference has been manually checked and updated.\n\n\nWe’ve also created a [transparency](http://intelligence.org/transparency/) page which features Q&As, policy documents, and our tax forms dating back to our founding in 2000.\n\n\nFor casual reading, there is also a new [tech summaries](http://intelligence.org/techsummaries/) page that features short articles on emerging technologies such as the [Berkeley Brain-Computer Interface](http://intelligence.org/brain-computer-interfaces/), [regenerative medicine](http://intelligence.org/regenerative-medicine/), and [AI in automated science and discovery](http://intelligence.org/automated-science/).\n\n\nBe sure to [subscribe](http://intelligence.org/feed/) to our [blog](http://intelligence.org/blog/) for regular updates!\n\n\n\n### Center for Applied Rationality (CFAR)\n\n\n\n\n---\n\n\n![](https://intelligence.org/files/201207_newsletter_05_CFARlogoO.png)Per our August 2011 [strategic plan](http://miri.wpengine.com/wp-content/uploads/2012/06/strategicplan20112.pdf), SI is helping to launch a separate organization devoted to rationality skills training. That organization is the Center for Applied Rationality (CFAR), which was recently granted 501c3 status and has a new website at [AppliedRationality.org](http://appliedrationality.org/).In an age when our economic, political, and technological choices can spark both amazing progress and unprecedented devastation, it’s crucial that decision-makers are not only aware of the many near-universal cognitive biases, but also well-trained in avoiding them, and in using thinking habits based in probability and logic that outperform our brains’ innate algorithms. 
And that’s the goal of the Center for Applied Rationality (CFAR): to turn decades of research in cognitive science into a set of practicable techniques people can actually use to become better at reasoning and making decisions.\n\n\nTo think rationally about the future, we must be able to weigh different levels of risk and uncertainty, compare expected values, avoid over-weighting short-term outcomes at the expense of the long term, avoid the “narrative fallacy” (in which vividly imagined outcomes appear more likely), have well-calibrated levels of confidence in our judgments, and think clearly even about emotionally fraught decisions — and much, much more.\n\n\nCFAR is devoted to teaching those techniques, and the math and science behind them, to adults and exceptional youth. In the process, CFAR will be breaking new ground in studying the long-term effects of rationality on life outcomes using randomized controlled trials, to help us improve our material and to contribute to the body of knowledge in applied rationality. And we’ll be building a real-life community of tens of thousands of students, entrepreneurs, researchers, programmers, philanthropists, and other people who are passionate about using rationality to improve their decisions for themselves and for the rest of the world.\n\n\nRead more about [what CFAR does](http://appliedrationality.org/whatwedo/), and send your friends to the new [What is Rationality?](http://appliedrationality.org/rationality/) page, which may now be the best short introduction to rationality available.\n\n\n\n### Featured Donor: Jesse Liptrap\n\n\n\n\n---\n\n\n![Jesse Liptrap](https://intelligence.org/files/201207_newsletter_06_JesseLiptrap.jpg)Each month, our newsletter will share the views of a featured donor. Jesse Liptrap, a bioinformatician at the University of California, Berkeley, was led to SI’s broader community via a Less Wrong meetup in the spring of 2009. 
With a longstanding interest in transhumanism and the philosophy of moving beyond the frailty of human hardware, Jesse took quickly to the idea that applied rationality offers a set of tools for more accurately assessing the risks and opportunities of powerful emerging technologies.\n\n\nOver the past few years, Jesse has climbed our [Top Contributors](http://intelligence.org/topcontributors/) list. He also serves as SI’s non-staff Treasurer.\n\n\n\n### Featured Summit Video\n\n\n\n\n---\n\n\n[![](https://intelligence.org/files/201207_newsletter_07_SIFeauturedVideo.png)](https://vimeo.com/7397629)This month we are featuring a video from the 2010 Singularity Summit: Anna Salamon’s “[How Much it Matters to Know What Matters: A Back of the Envelope Calculation](https://vimeo.com/7397629).” Anna’s talk shows that research about the Singularity has extremely high value of information, and should therefore be supported over and above many other kinds of research currently being conducted.\n\n\n\n### $1000 Prize for Best AGI Safety Paper\n\n\n\n\n---\n\n\n![](https://intelligence.org/files/201207_newsletter_08_prize.jpg)This year’s AGI-12 conference will include a special track about the impacts of artificial general intelligence, called “AGI Impacts.” 
The Machine Intelligence Research Institute is sponsoring a $1000 prize for the best AGI safety contribution to ‘AGI-12’ and ‘AGI-Impacts 2012.’ The winner will be decided by a jury from SI and announced by Louie Helm at the end of the AGI Impacts conference.\n\n\nThe award is given in honor of Alan Turing, who not only discovered some of the key ideas in machine intelligence, but also grasped its importance, writing that “…it seems probable that once [human-level machine thinking] has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”\n\n\nThe prize is awarded for work that not only increases awareness of this important problem, but also makes technical progress in addressing it.\n\n\nThe deadline for paper submission is August 31st. For details, see the AGI-12 [Call for Papers](http://www.winterintelligence.org/oxford2012/agi-impacts/cfp/).\n\n\n\n### Ioven Fables Hired\n\n\n\n\n---\n\n\n![](https://intelligence.org/files/201207_newsletter_09_IovenFables.jpg)The Machine Intelligence Research Institute has hired Ioven Fables to help with organizational operations and development. Ioven is a businessperson with a long dedication to big-picture understanding, philosophy, and the future. Ioven has worked for years at the business end of technology, and has a B.A. in Philosophy from Boston College.\n\n\n\n### How to Get Involved\n\n\n\n\n---\n\n\n![](https://intelligence.org/files/201207_newsletter_10_LouieHelm.jpg)\nWould you like to [be more involved](http://intelligence.org/get-involved/) in our work at MIRI? With so many opportunities available, there may be a role for everyone. Here’s what you could do:\n\n\n* [Donate](http://intelligence.org/donate/). There are projects we’d like to launch, and people we’d like to hire, as soon as we can raise the funds to do so. 
Financial support is perhaps the clearest, most direct way to contribute to our mission.\n* [Volunteer](http://intelligence.org/volunteer/). We have dozens of opportunities for skilled volunteering work. Visit our [volunteering page](http://intelligence.org/volunteer/) to see how you can help.\n* [Share your expertise](http://lesswrong.com/lw/aus/please_advise_the_singularity_institute_with_your/). Have expertise in economics, maths, computer science, cognitive science, physics, law, non-profit development, marketing, event planning, executive coaching, publishing? Please [sign up](http://lesswrong.com/lw/aus/please_advise_the_singularity_institute_with_your/) as a Machine Intelligence Research Institute Volunteer Advisor!\n* [Intern with us](http://intelligence.org/interns/). Apply to work with us in Berkeley as an intern.\n* [Apply to be a visiting fellow](http://intelligence.org/visiting-fellows/). Visiting fellows work with us for short periods on research projects. Gain valuable experience and work directly with our researchers!\n* [Apply for a job](http://intelligence.org/opportunities/). We are currently seeking research fellows, a communications director, and a grants manager. You can also [apply to be a remote researcher](http://lesswrong.com/lw/bke/the_singularity_institute_still_needs_remote/), a job you can do from anywhere in the world!\n\n\nThank you for your continued interest and support. 
Don’t hesitate to get in contact with me at [louie@intelligence.org](mailto:louie@intelligence.org).\n\n\nLouie Helm \n\nMachine Intelligence Research Institute Director of Development\n\n\n### In Other News\n\n\n\n\n---\n\n\n* [Cambridge University to Launch Centre for the Study of Existential Risk (CSER)](http://cser.org/)\n* [Humanoid Robot Works Side by Side With People](http://www.sciencedaily.com/releases/2012/05/120522084322.htm)\n* [New Robot Outperforms Humans in Identifying Natural Materials by Their Textures](http://www.sciencedaily.com/releases/2012/06/120618194952.htm)\n* [New Device Allows Paralyzed People to Type Words Using Only Their Thoughts](http://www.bbc.com/news/technology-18644084)\n* [Scientists Develop Most Realistic Robot Legs Yet](http://www.bbc.co.uk/news/health-18724114)\n* [Sophisticated Robotic Hand Doubles As A Human Exoskeleton](http://singularityhub.com/2012/07/07/sophisticated-robotic-hand-also-doubles-as-a-human-exoskeleton/)\n\n\nThe post [July 2012 Newsletter](https://intelligence.org/2012/08/06/july-2012-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-08-06T12:34:32Z", "authors": ["Louie Helm"], "summaries": []}
-{"id": "05b37a4fe9da16c0a5a9d1a2d8dada83", "title": "2012 Summer Singularity Challenge Success!", "url": "https://intelligence.org/2012/07/30/2012-summer-singularity-challenge-success/", "source": "miri", "source_type": "blog", "text": "Thanks to the effort of our donors, the [2012 Summer Singularity Challenge](https://intelligence.org/blog/2012/07/03/summer-challenge/) has been met! All $150,000 contributed will be matched dollar for dollar by our matching backers, raising a total of $300,000 to fund the Machine Intelligence Research Institute’s operations. We reached our goal near 6pm on July 29th.\n\n\nOn behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. 
Your dollars make the difference.\n\n\nHere’s to a better future for the human species.\n\n\nThe post [2012 Summer Singularity Challenge Success!](https://intelligence.org/2012/07/30/2012-summer-singularity-challenge-success/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-07-30T17:50:35Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "7b4011689c5ee5b1c376931a5b6da81b", "title": "2012 Summer Singularity Challenge", "url": "https://intelligence.org/2012/07/03/summer-challenge/", "source": "miri", "source_type": "blog", "text": "Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made now until July 31, 2012 will be matched dollar-for-dollar, up to a total of $150,000!\n\n\n**[Donate Now!](https://intelligence.org/donate/)**\n\n\n(Fundraising progress bar: $0, $37.5K, $75K, $112.5K, $150K)\n\n\nNow is your chance to **double your impact** while helping us raise up to $300,000 to help fund [our research program](https://intelligence.org/research/) and stage the upcoming [Singularity Summit](https://intelligence.org/singularitysummit/)… which you can [register for now](http://singularitysummit.com/)!\n\n\n**Note**: If you prefer to support rationality training, you are welcome to *earmark your donations* for “CFAR” ([Center for Applied Rationality](http://www.appliedrationality.org/)). 
Donations earmarked for CFAR will *only* be used for CFAR, and donations *not* earmarked for CFAR will *only* be used for Singularity research and outreach.\n\n\n[![](http://miri.wpengine.com/wp-content/uploads/2012/06/towardapositivesingularity.jpg \"towardapositivesingularity\")](https://intelligence.org/donate/)\nSince we published our [strategic plan](https://intelligence.org/files/strategicplan20112.pdf) in August 2011, we have [achieved most of the near-term goals outlined therein](http://lesswrong.com/r/discussion/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/#summary). Here are just a *few* examples:\n\n\n* We outlined the [open research problems](http://lukeprog.com/SaveTheWorld.html) related to our work (Section 1.1).\n* We recruited several more [research associates](https://intelligence.org/research-associates/) and about a dozen [remote researchers](http://lesswrong.com/lw/bke/the_singularity_institute_still_needs_remote/) (Section 1.2e).\n* We held our annual [Singularity Summit](http://singularitysummit.com/) and gained corporate sponsors for it (Section 2.1).\n* We made progress in decision theory ([example](http://lesswrong.com/lw/8wc/a_model_of_udt_with_a_halting_oracle)) via LessWrong.com and our research associates (Section 2.2b).\n* We published [How to Run a Successful Less Wrong Meetup Group](http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/) (Section 2.2d).\n* We released pre-prints of several forthcoming research articles, including [How Hard is Artificial Intelligence?](http://www.nickbostrom.com/aievolution.pdf), [Intelligence Explosion: Evidence and Import](https://intelligence.org/files/IE-EI.pdf), and [The Singularity and Machine Ethics](https://intelligence.org/files/SaME.pdf) (Section 2.3).\n* We [redesigned](https://intelligence.org/blog/2012/06/18/welcome-to-the-new-singularity-org/) our primary website (Section 2.6).\n* We acquired $40,000/month in free Google Adwords advertising, to drive traffic to 
websites operated by the Machine Intelligence Research Institute (Section 2.6c).\n* We began publishing [monthly progress reports](https://intelligence.org/blog/category/monthly-progress/) (Section 2.9b).\n* We built up the Center for Applied Rationality such that it should be able to spin off from the Machine Intelligence Research Institute later this year (Section 3.1).\n* We created a [transparency section](https://intelligence.org/transparency/) on our website, where visitors can find our IRS 990 forms, and also several standard organizational policies, e.g. a conflict of interest policy, non-discrimination policy, etc (Section 3.2).\n\n\nIn the coming year, the **Machine Intelligence Research Institute plans to do the following**:\n\n\n* **Hold our annual [Singularity Summit](http://singularitysummit.com/)**, this year in San Francisco!\n* **Spin off the [Center for Applied Rationality](http://www.appliedrationality.org/)** as a separate organization focused on rationality training, so that the Machine Intelligence Research Institute can be focused more exclusively on Singularity research and outreach.\n* **Publish additional [research](https://intelligence.org/research/)** on AI risk and Friendly AI.\n* **Eliezer will write an “Open Problems in Friendly AI” sequence** for *Less Wrong*. (For news on his rationality books, see [here](http://lesswrong.com/lw/d06/intellectual_insularity_and_productivity/6swt).)\n* **Finish *[Facing the Singularity](http://facingthesingularity.com/)*** and publish ebook versions of *Facing the Singularity* and *[The Sequences, 2006-2009](http://wiki.lesswrong.com/wiki/Sequences)*.\n* And much more! 
For details on what we might do with additional funding, see [How to Purchase AI Risk Reduction](http://lesswrong.com/lw/cs6/how_to_purchase_ai_risk_reduction/).\n\n\nIf you’re planning to earmark your donation to CFAR (Center for Applied Rationality), here’s a preview of **what CFAR plans to do in the next year**:\n\n\n* **Develop additional lessons** teaching the most important and useful parts of rationality. CFAR has already developed and tested *over 18 hours of lessons* so far, including classes on how to evaluate evidence using Bayesianism, how to make more accurate predictions, how to be more efficient using economics, how to use thought experiments to better understand your own motivations, and much more.\n* **Run immersive rationality retreats** to teach from our curriculum and to connect aspiring rationalists with each other. CFAR ran pilot retreats in May and June. Participants in the May retreat called it “transformative” and “astonishing,” and the average response on the survey question, “Are you glad you came? (1-10)” was a 9.4. (We don’t have the June data yet, but people were similarly enthusiastic about that one.)\n* **Run SPARC, a camp on the advanced math of rationality** for mathematically gifted high school students. CFAR has a stellar first-year class for SPARC 2012; most students admitted to the program placed in the top 50 on the USA Math Olympiad (or performed equivalently in a similar contest).\n* **Collect longitudinal data on the effects of rationality training**, to improve our curriculum and to generate promising hypotheses to test and publish, in collaboration with other researchers. 
CFAR has already launched a one-year randomized controlled study tracking reasoning ability and various metrics of life success, using participants in our June minicamp and a control group.\n* **Develop apps and games about rationality**, with the dual goals of (a) helping aspiring rationalists practice essential skills, and (b) making rationality fun and intriguing to a much wider audience. CFAR has two apps in beta testing: one training players to update their own beliefs the right amount after hearing other people’s beliefs, and another training players to calibrate their level of confidence in their own beliefs. CFAR is working with a developer on several more games training people to avoid cognitive biases.\n* And more!\n\n\nWe appreciate your support for our high-impact work! **[Donate now](https://intelligence.org/donate/)**, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.\n\n\n† $150,000 of total matching funds has been provided by Jaan Tallinn, Tomer Kagan, Alexei Andreev, and Brandon Reinhart.\n\n\nThe post [2012 Summer Singularity Challenge](https://intelligence.org/2012/07/03/summer-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-07-04T05:40:48Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "232673ac03037c0e1f04fa66f2010434", "title": "Machine Intelligence Research Institute Progress Report, May 2012", "url": "https://intelligence.org/2012/06/16/singularity-institute-progress-report-may-2012/", "source": "miri", "source_type": "blog", "text": "Past progress reports: [April 2012](http://intelligence.org/blog/2012/05/08/singularity-institute-progress-report-april-2012/), [March 
2012](http://intelligence.org/blog/2012/04/06/singularity-institute-progress-report-march-2012/), [February 2012](http://intelligence.org/blog/2012/03/03/singularity-institute-progress-report-february-2012/), [January 2012](http://intelligence.org/blog/2012/02/05/singularity-institute-progress-report-january-2012/), [December 2011](http://intelligence.org/blog/2012/01/16/singularity-institute-progress-report-december-2011/).\n\n\nHere’s what the Machine Intelligence Research Institute did in May 2012:\n\n\n* **How to Purchase AI Risk Reduction**: Luke wrote [a series of posts](http://lesswrong.com/r/discussion/lw/cs6/how_to_purchase_ai_risk_reduction/) on how to purchase AI risk reduction, with cost estimates for many specific projects. Some projects are currently in place at SI; others can be launched if we are able to raise sufficient funding.\n* **Research articles**: Luke continued to work with about a dozen collaborators on several developing research articles, including “Responses to Catastrophic AGI Risk,” mentioned [here](http://lesswrong.com/lw/cr6/building_the_ai_risk_research_community/).\n* **Other writings**: Kaj Sotala, with help from Luke and many others, published *[How to Run a Successful Less Wrong Meetup Group](http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/)*. Carl published several articles: (1) [Utilitarianism, contractualism, and self-sacrifice](http://reflectivedisequilibrium.blogspot.com/2012/05/utilitarianism-contractualism-and-self.html), (2) [Philosophers vs. 
economists on discounting](http://reflectivedisequilibrium.blogspot.com/2012/05/philosophers-vs-economists-on.html), (3) [Economic growth: more costly disasters, better prevention](http://reflectivedisequilibrium.blogspot.com/2012/05/economic-growth-more-costly-disasters.html), and (4) [What to eat during impact winter?](http://reflectivedisequilibrium.blogspot.com/2012/05/what-to-eat-during-impact-winter.html) Eliezer wrote [Avoid Motivated Cognition](http://lesswrong.com/lw/bnk/sotw_avoid_motivated_cognition/). Luke posted part 2 of his [dialogue with Ben Goertzel](https://intelligence.org/feed/?paged=79) about AGI.\n* **Ongoing long-term projects**: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new annual report, and new newsletter design. Louie and SI’s new executive assistant Ioven Fables are hard at work on organizational development and transparency (some of which will be apparent when the new website launches).\n* **Center for Applied Rationality (CFAR)**: The CFAR team continued to make progress toward spinning off this rationality-centric organization, in keeping with [SI’s strategic plan](https://intelligence.org/files/strategicplan2011.pdf). We also held the first summer minicamp, which surpassed our expectations and was very positively received. (More details on this will be compiled later.)\n* **Meetings with advisors, supporters, and potential researchers:** As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics.\n* And of course much more than is listed here!\n\n\nFinally, we’d like to recognize our **most active volunteers** in May 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, David Althaus, Tim Oertel, and Casey Pfluger. 
Thanks everyone! (And, our apologies if we forgot to name you!)\n\n\nThe post [Machine Intelligence Research Institute Progress Report, May 2012](https://intelligence.org/2012/06/16/singularity-institute-progress-report-may-2012/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-06-16T07:39:29Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "d97b4173ea472b7cb8e3f62048e43db7", "title": "Machine Intelligence Research Institute Progress Report, April 2012", "url": "https://intelligence.org/2012/05/08/singularity-institute-progress-report-april-2012/", "source": "miri", "source_type": "blog", "text": "Past progress reports: [March 2012](http://intelligence.org/blog/2012/04/06/singularity-institute-progress-report-march-2012/), [February 2012](http://intelligence.org/blog/2012/03/03/singularity-institute-progress-report-february-2012/), [January 2012](http://intelligence.org/blog/2012/02/05/singularity-institute-progress-report-january-2012/), [December 2011](http://intelligence.org/blog/2012/01/16/singularity-institute-progress-report-december-2011/).\n\n\nHere’s what the Machine Intelligence Research Institute did in April 2012:\n\n\n* **SPARC**: Several MIRI staff members are working in collaboration with SI research associate Paul Christiano and a few others to develop a rationality camp for high school students with exceptional mathematical ability (SPARC). This is related to our efforts to spin off a new rationality-focused organization, and it is also a major step forward in our efforts to locate elite young math talent that may be useful in our research efforts.\n* **Research articles**: Luke published [AI Risk Bibliography 2012](http://intelligence.org/upload/AI%20Risk%20Bibliography%202012.pdf). He is currently developing nearly a dozen other papers with a variety of co-authors. 
New SI research associate [Kaj Sotala](http://www.xuenay.net/) has two papers forthcoming in the *International Journal of Machine Consciousness*: [Advantages of Artificial Intelligences, Uploads, and Digital Minds](http://www.xuenay.net/Papers/DigitalAdvantages.pdf) and [Coalescing Minds: Brain Uploading-Related Group Mind Scenarios](http://www.xuenay.net/Papers/CoalescingMinds.pdf).\n* **Other articles**: Luke published [a dialogue](http://lesswrong.com/r/discussion/lw/bxr/muehlhauserwang_dialogue/) with AGI researcher Pei Wang and several more posts in the [AI Risk and Opportunity](http://lesswrong.com/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/) series. Luke also worked with Kaj Sotala to develop an instructional booklet for Less Wrong meetup group organizers, which is nearly complete.\n* **Ongoing long-term projects**: Amy continued to work on Singularity Summit 2012. Michael launched the new [Singularity Summit website](http://singularitysummit.com/), continued to work on the Machine Intelligence Research Institute’s new primary website, new annual report, and new newsletter design. Luke uploaded several more volunteer-prepared translations of *[Facing the Singularity](http://facingthesingularity.com/)*. Luke also continued to build the Machine Intelligence Research Institute’s set of remote collaborators, who are hard at work converting the Machine Intelligence Research Institute’s research articles to a new template, hunting down predictions of AI, writing literature summaries on heuristics and biases, and more.\n* **Center for Applied Rationality (CFAR)**: “Rationality Group” now has a final name: the Center for Applied Rationality (CFAR). 
The CFAR team has been hard at work preparing for the upcoming [rationality minicamps](http://lesswrong.com/lw/b98/minicamps_on_rationality_and_awesomeness_may_1113/), as well as continuing to develop the overall strategy for the emerging organization.\n* **Meetings with advisors, supporters, and potential researchers:** As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. Quixey co-founder and CEO Liron Shapira was added as an advisor.\n* And of course much more than is listed here!\n\n\nFinally, we’d like to recognize our **most active volunteers** in April 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, David Althaus, Tim Oertel, Casey Pfluger, Paul Gentemann, and John Maxwell. Thanks everyone! (And, our apologies if we forgot to name you!)\n\n\nThe post [Machine Intelligence Research Institute Progress Report, April 2012](https://intelligence.org/2012/05/08/singularity-institute-progress-report-april-2012/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-05-08T21:55:17Z", "authors": ["Louie Helm"], "summaries": []}
-{"id": "271c6f97712eea9bf7b3d387f0238659", "title": "Machine Intelligence Research Institute Progress Report, March 2012", "url": "https://intelligence.org/2012/04/06/singularity-institute-progress-report-march-2012/", "source": "miri", "source_type": "blog", "text": "Past progress reports: [February 2012](http://intelligence.org/blog/2012/03/03/singularity-institute-progress-report-february-2012/), [January 2012](http://intelligence.org/blog/2012/02/05/singularity-institute-progress-report-january-2012/), [December 2011](http://intelligence.org/blog/2012/01/16/singularity-institute-progress-report-december-2011/).\n\n\nFun fact of the day: [The Machine Intelligence 
Research Institute’s research fellows and research associates have more peer-reviewed publications forthcoming in 2012 than they had published in all past years combined](http://lesswrong.com/r/discussion/lw/axr/three_new_papers_on_ai_risk/627o).\n\n\nHere’s what the Machine Intelligence Research Institute did in March 2012:\n\n\n* **Research articles**: Luke and Anna released an updated draft of [Intelligence Explosion: Evidence and Import](http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf), and Luke and Louie released an updated draft of [The Singularity and Machine Ethics](http://commonsenseatheism.com/wp-content/uploads/2011/11/Muehlhauser-Helm-The-Singularity-and-Machine-Ethics-draft.pdf). Luke submitted an article (co-authored with Nick Bostrom) to *Communications of the ACM* — an article on Friendly AI. Machine Intelligence Research Institute research associate Joshua Fox released two forthcoming articles co-authored with (past Machine Intelligence Research Institute Visiting Fellow) Roman Yampolskiy: [Safety Engineering for Artificial General Intelligence](http://joshuafox.com/professional/media/YampolskiyFox__SafetyEngineeringforAGI.pdf) and [Artificial General Intelligence and the Human Mental Model](http://joshuafox.com/professional/media/YampolskiyFox__AGIAndTheHumanModel.pdf).\n* **Other articles**: Luke published [The AI Problem, with Solutions](http://facingthesingularity.com/2012/ai-the-problem-with-solutions/), [How to Fix Science](http://lesswrong.com/lw/ajj/how_to_fix_science/), [Muehlhauser-Goertzel Dialogue Part 1](http://lesswrong.com/r/discussion/lw/aw7/muehlhausergoertzel_dialogue_part_1/), a [list of journals that may publish articles on AI risk](http://tinyurl.com/AI-risk-journals), and the first three posts in his series [AI Risk and Opportunity: A Strategic Analysis](http://lesswrong.com/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/). 
The Machine Intelligence Research Institute paid (past Visiting Fellow) [Kaj Sotala](http://www.xuenay.net/) to write most of a new instructional booklet for Less Wrong meetup group organizers, which should be published in the next month or two. Eliezer continued work on his new Bayes’ Theorem tutorial and other writing projects. Carl published [Using degrees of freedom to change the past for fun and profit](http://lesswrong.com/lw/a9f/using_degrees_of_freedom_to_change_the_past_for/) and [Are pain and pleasure equally energy efficient?](http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html)\n* **Ongoing long-term projects**: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new Summit website, new annual report, and new newsletter design. Louie continued to improve our accounting processes and also handled several legal and tax issues. Luke uploaded several more volunteer-prepared translations of *[Facing the Singularity](http://facingthesingularity.com/)*. Luke also continued to build the Machine Intelligence Research Institute’s set of remote collaborators, who are hard at work converting the Machine Intelligence Research Institute’s research articles to a new template, hunting down predictions of AI, writing literature summaries on heuristics and biases, and more.\n* **Rationality Group**: Per our [strategic plan](https://intelligence.org/files/strategicplan2011.pdf), we will launch this new “Rationality Group” organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In March, Rationality Group (led by Anna) contracted with [Julia Galef](http://measureofdoubt.com/about/) and Michael Smith to work toward launching the organization. Eliezer continued to help Rationality Group develop and test its lessons. 
Rationality Group has begun offering **prizes** for suggesting exercises for developing rationality skills, starting with the skills of “[Be Specific](http://lesswrong.com/lw/bc3/sotw_be_specific/)” and “[Check Consequentialism](http://lesswrong.com/lw/b4f/sotw_check_consequentialism/).” Rationality Group has also announced three [Minicamps on Rationality and Awesomeness](http://lesswrong.com/lw/b98/minicamps_on_rationality_and_awesomeness_may_1113/), for May 11-13, June 22-24, and July 21-28. [Apply now](https://docs.google.com/spreadsheet/viewform?formkey=dEctaFJONTk1UVdfdE9sSEpiQTFLLWc6MA).\n* **Meetings with advisors, supporters, and potential researchers:** As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. This included a two-week visit by [Nick Beckstead](https://sites.google.com/site/nbeckstead/), who worked with us on AI risk reduction strategy.\n* And of course much more than is listed here!\n\n\nFinally, we’d like to recognize our **most active volunteers** in March 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, and David Althaus. Thanks everyone! 
(And, our apologies if we forgot to name you!)\n\n\nThe post [Machine Intelligence Research Institute Progress Report, March 2012](https://intelligence.org/2012/04/06/singularity-institute-progress-report-march-2012/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-04-06T08:38:49Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "fba31997bee2c3a3211cb0ce7957c623", "title": "Machine Intelligence Research Institute Progress Report, February 2012", "url": "https://intelligence.org/2012/03/03/singularity-institute-progress-report-february-2012/", "source": "miri", "source_type": "blog", "text": "Past progress reports: [January 2012](http://intelligence.org/blog/2012/02/05/singularity-institute-progress-report-january-2012/), [December 2011](http://intelligence.org/blog/2012/01/16/singularity-institute-progress-report-december-2011/).\n\n\nHere’s what the Machine Intelligence Research Institute did in February 2012:\n\n\n* **Winter fundraiser completed**: Thanks to the generous contributions of our supporters, our latest winter fundraiser was a success, raising [much more](http://intelligence.org/blog/2012/02/20/2011-2012-winter-fundraiser-completed/) than our target of $100,000!\n* **Research articles**: Luke and Anna published the [Singularity Summit 2011 Workshop Report](http://intelligence.org/upload/Singularity%20Summit%202011%20Workshop%20Report.pdf) and released a draft of their article [Intelligence Explosion: Evidence and Import](http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf), forthcoming in Springer’s *The Singularity Hypothesis*. Luke also worked on an article forthcoming in *Communications of the ACM*.\n* **Other articles**: Luke published a continuously updated list of [Forthcoming and desired articles on AI risk](http://tinyurl.com/ai-risk-research). 
For Less Wrong, Carl published [Feed the Spinoff Heuristic](http://lesswrong.com/lw/9xs/feed_the_spinoff_heuristic/), and Luke published [My Algorithm for Beating Procrastination](http://lesswrong.com/lw/9wr/my_algorithm_for_beating_procrastination/), [A brief tutorial on preferences in AI](http://lesswrong.com/r/discussion/lw/a73/a_brief_tutorial_on_preferences_in_ai/), and [Get Curious](http://lesswrong.com/lw/aa7/get_curious/). Carl also published 4 articles on ethical careers for the [80,000 Hours blog](http://80000hours.org/blog) (later posts will discuss optimal philanthropy and existential risks): [How hard is it to become the Prime Minister of the United Kingdom?](http://80000hours.org/blog/22-how-hard-is-it-to-become-prime-minister-of-the-united-kingdom), [Entrepreneurship: a game of poker, not roulette](http://80000hours.org/blog/23-entrepreneurship-a-game-of-poker-not-roulette), [Software engineering: Britain vs. Silicon Valley](http://80000hours.org/blog/26-software-engineering-britain-vs-silicon-valley), and [5 ways to be misled by salary rankings](http://80000hours.org/blog/24-5-ways-to-be-misled-by-salary-rankings).\n* **Ongoing long-term projects**: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new website, and uploaded all past Singularity Summit videos to YouTube. Louie continued to improve our accounting processes and also handled several legal and tax issues. 
Luke uploaded several volunteer-prepared translations of *[Facing the Singularity](http://facingthesingularity.com/)*, and also a [podcast](http://lesswrong.com/r/discussion/lw/aey/facing_the_singularity_podcast/) for this online mini-book.\n* **Grant awarded**: The Machine Intelligence Research Institute awarded philosopher [Rachael Briggs](http://www.rachaelbriggs.net/Rachael_Briggs/Home.html) a $20,000 grant to write a paper on Eliezer Yudkowsky’s [timeless decision theory](http://intelligence.org/upload/TDT-v01o.pdf). Two of Rachael’s papers — [Distorted Reflection](http://pgrim.org/pa2010reading/briggsdistorted.pdf) and [Decision-Theoretic Paradoxes as Voting Paradoxes](http://commonsenseatheism.com/wp-content/uploads/2011/09/Briggs-Decision-theoretic-paradoxes-as-voting-paradoxes.pdf) — have previously been selected as among the 10 best philosophy papers of the year by [The Philosopher’s Annual](http://www.philosophersannual.org/).\n* **Rationality Group**: Anna and Eliezer continued to lead the development of a new rationality education organization, temporarily called “Rationality Group.” Per our [strategic plan](https://intelligence.org/files/strategicplan2011.pdf), we will launch this new organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In February our Rationality Group team worked on curriculum development with several potential long-term hires, developed several rationality lessons which they tested (weekly) on small groups and iterated in response to feedback, spoke to advisors about how to build the organization and raise funds, and much more. The team also produced one example rationality lesson on sunk costs, including [a presentation and exercise booklets](http://lesswrong.com/lw/9hb/position_design_and_write_rationality_curriculum/). 
Note that Rationality Group is **currently hiring** curriculum developers, a remote executive assistant, and others, so [apply here](https://docs.google.com/spreadsheet/viewform?hl=en_US&formkey=dElRYS1hY3N4a3JIRV90R0lmdE9MN0E6MQ) if you’re interested!\n* **Meetings with advisors, supporters, and potential researchers:** As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. We also met with several potential researchers to gauge their interest and abilities. Carl spent two weeks in Oxford visiting the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) and working with the researchers there.\n* **Outsourcing**: On Louie’s (sound) advice, the Machine Intelligence Research Institute is undergoing a labor transition such that most of the work we do (in hours) will eventually be performed not by our core staff but by (mostly remote) hourly contractors and volunteers, for example [remote researchers](http://lesswrong.com/r/discussion/lw/9t8/the_singularity_institute_needs_remote/), remote [LaTeX workers](http://lesswrong.com/r/discussion/lw/a6m/si_wants_to_hire_a_remote_latex_guru/), remote editors, and [remote assistants](http://lesswrong.com/r/discussion/lw/aco/the_singularity_institute_is_hiring_virtual/). 
This shift provides numerous benefits, including (1) involving the broader community more directly in our work, (2) providing jobs for aspiring rationalists, and (3) freeing up our core staff to do the things that, due to accumulated rare expertise, only they can do.\n* And of course much more than is listed here!\n\n\nFinally, we’d like to recognize our **most active volunteers** in February 2012: Brian Rabkin, Cameron Taylor, Mitchell Owen, Gerard McCusker, Alex Richard, Andrew Homan, Vincent Vu, Gabriel Sztorc, Paul Gentemann, John Maxwell, and David Althaus. Thanks everyone! (And, our apologies if we forgot to name you!)\n\n\nThe post [Machine Intelligence Research Institute Progress Report, February 2012](https://intelligence.org/2012/03/03/singularity-institute-progress-report-february-2012/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-03-03T11:31:06Z", "authors": ["Louie Helm"], "summaries": []}
-{"id": "cb892d9c9b527c71a5e8fc6099a45ec6", "title": "2011-2012 Winter Fundraiser Completed", "url": "https://intelligence.org/2012/02/20/2011-2012-winter-fundraiser-completed/", "source": "miri", "source_type": "blog", "text": "Thanks to our dedicated supporters, we met our goal for our 2011-2012 Winter Fundraiser. Thank you!\n\n\nThe fundraiser ran for 56 days, from December 27, 2011 to February 20, 2012.\n\n\nWe exceeded our $100K goal, raising a total of $143,048.84 from 101 individual donors.\n\n\nEvery donation that the Machine Intelligence Research Institute receives is powerful support for our [mission](https://intelligence.org/files/strategicplan2011.pdf) — ensuring that the creation of smarter-than-human intelligence ([superintelligence](http://www.nickbostrom.com/ethics/ai.html)) benefits human society. 
We welcome donors [contacting us](mailto:admin@intelligence.org) to learn more about our pursuit of this mission and our continued expansion.\n\n\nKeep your eye on this blog for regular progress reports from our executive director.\n\n\nThe post [2011-2012 Winter Fundraiser Completed](https://intelligence.org/2012/02/20/2011-2012-winter-fundraiser-completed/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-02-21T03:51:09Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "d84aba6157b1583959d948a44ffeda74", "title": "Machine Intelligence Research Institute Progress Report, January 2012", "url": "https://intelligence.org/2012/02/05/singularity-institute-progress-report-january-2012/", "source": "miri", "source_type": "blog", "text": "Past progress report: [December 2011](http://intelligence.org/blog/2012/01/16/singularity-institute-progress-report-december-2011/).\n\n\nHere’s what the Machine Intelligence Research Institute did in January 2012:\n\n\n* **Winter fundraiser**: We continued raising funds in January, but we still have about $30,000 left to go in our winter fundraiser before the deadline of February 20th. Please support our recent efforts toward greater transparency, efficiency, and productivity by [donating now](http://intelligence.org/donate)!\n* **Strategic discussions**: In January we held a long and ongoing series of discussions concerning Machine Intelligence Research Institute strategy. Which scenarios are the most probable “desirable” futures for humanity, which ones can our species influence most significantly, and which ones should the Machine Intelligence Research Institute work to influence? Which tactical moves should the Machine Intelligence Research Institute make right now? How can our efforts best create synergies with other organizations focused on existential risks? 
These are complex questions, and in January, Machine Intelligence Research Institute staff members spent dozens of hours sharing their own evidence and arguments. (At one point, we also [called upon](http://lesswrong.com/lw/9l6/mathematicians_mathletes_the_singularity/) the expertise of more than a dozen elite mathematicians in our circle.) These discussions continue today, and our opinions on strategy appear to be more unified than they were at the beginning of the month. But there is more evidence to gather and more strategic analysis to be done.\n* **Ongoing long-term projects**: Amy continued her preparations for Singularity Summit 2012. Michael Anissimov and others continued work on the Machine Intelligence Research Institute’s new website, which will feature loads of new content and a cleaner design. As part of our transparency efforts, Luke gave a [second Q&A](http://lesswrong.com/lw/980/singularity_institute_executive_director_qa_2/) about the Machine Intelligence Research Institute, an interview at [80,000 Hours](http://80000hours.org/blog/17-high-impact-interview-1-existential-risk-research-at-siai), and another interview at [Singularity 1 on 1](http://www.singularityweblog.com/luke-muehlhauser-on-singularity-1-on-1-superhuman-ai-is-coming-this-century/). Louie continued to work on improving our book-keeping and accounting practices. Anissimov finished thanking all donors who gave during 2011. 
(If you donated in 2011 and were *not* thanked, please contact michael@singularity.org!)\n* **Articles**: Luke and Anna continued writing “Intelligence Explosion: Evidence and Import,” and Carl continued working with [Stuart Armstrong](http://www.fhi.ox.ac.uk/our_staff/research/stuart_armstrong) of FHI on “Arms Races and Intelligence Explosions.” Luke began adding non-English translations at *[Facing the Singularity](http://facingthesingularity.com/)*, and published [No God to Save Us](http://facingthesingularity.com/2012/no-god-to-save-us/) and [Value is Complex and Fragile](http://facingthesingularity.com/2012/value-is-complex-and-fragile/) there. Carl, with co-author Nick Bostrom, submitted a final version of “[How Hard is Artificial Intelligence?](http://www.nickbostrom.com/aievolution.pdf)” to the *Journal of Consciousness Studies*. For *Less Wrong*, Luke published [What Curiosity Looks Like](http://lesswrong.com/lw/96j/what_curiosity_looks_like/), [Can the Chain Still Hold You?](http://lesswrong.com/lw/99t/can_the_chain_still_hold_you/), [Leveling Up in Rationality](http://lesswrong.com/r/lesswrong/lw/9fy/leveling_up_in_rationality_a_personal_journey/), and [The Human’s Hidden Utility Function (Maybe)](http://lesswrong.com/lw/9jh/the_humans_hidden_utility_function_maybe/); Anna published [Urges vs. Goals](http://lesswrong.com/lw/8q8/urges_vs_goals_the_analogy_to_anticipation_and/). Eliezer continued work on his new Bayes Tutorial. Luke and Anna wrote a report on the workshops that followed Singularity Summit 2011, which should be published soon.\n* **Rationality Group**: Anna continued to lead the development of a new rationality education organization, temporarily called “Rationality Group.” Per our [strategic plan](https://intelligence.org/files/strategicplan2011.pdf), we will launch this new organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. 
In January we made one trial-hire for the new organization, and [reached out to dozens](http://lesswrong.com/lw/9hb/position_design_and_write_rationality_curriculum/) of other potential team members. We also [published](http://lesswrong.com/lw/9hb/position_design_and_write_rationality_curriculum/) a draft of one rationality lesson as a sample (PowerPoint slides + booklet PDFs).\n* **New team members**: [Kevin Fischer](http://angel.co/kevin-fischer) of GK International joined our board of directors. We also added several new [research associates](http://intelligence.org/aboutus/researchassociates): [Paul Christiano](http://ordinaryideas.wordpress.com/), [Tyrrell McAllister](http://www.linkedin.com/pub/tyrrell-mcallister/9/439/71), [János Kramar](http://www.linkedin.com/pub/janos-kramar/25/41b/72b), and Mihaly Barasz (at Google Switzerland). Luke hired an executive assistant, Denise Simard. Michael Vassar officially left his role as President to work for his new company, [Personalized Medicine](http://www.medicineispersonal.com/).\n* **Meetings with advisors, supporters, and potential researchers:** As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. We also met with several potential researchers to gauge their interest and abilities.\n* **Relaunched the Visiting Fellows program**: In January we relaunched our Visiting Fellows program. Instead of hosting many visiting fellows at once, we will now host only 1-2 fellows at a time, for a limited duration unique to each visiting fellow. Our visiting fellow for the last week of January was Princeton philosophy undergraduate [Jake Nebel](http://princeton.academia.edu/JakeNebel). 
If you’re interested, please apply to our Visiting Fellows program [here](http://intelligence.org/aboutus/opportunities/visiting-fellow).\n* **Much more**: Launched a redesign of [HPMoR.com](http://hpmor.com/), continued work in the optimal philanthropy movement, continued work on our first annual report, and much more.\n\n\nFinally, we’d like to recognize our **most active volunteers** in January 2012: Mitchell Owen, Brian Rabkin, Huon Wilson, David Althaus, Florent Berthet, Sergio Terrero, “Lightwave,” Emile Kroeger, and Giles Edkins. Thanks everyone! (And, our apologies if we forgot to name you!)\n\n\nThe post [Machine Intelligence Research Institute Progress Report, January 2012](https://intelligence.org/2012/02/05/singularity-institute-progress-report-january-2012/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-02-05T22:06:40Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "582eb961909b0a42ab9e5d90479ab40c", "title": "Machine Intelligence Research Institute Progress Report, December 2011", "url": "https://intelligence.org/2012/01/16/singularity-institute-progress-report-december-2011/", "source": "miri", "source_type": "blog", "text": "“I think the Machine Intelligence Research Institute has some very smart people working on the most important mission on Earth, but… what exactly are they *doing* these days? I’m in the dark.”\n\n\nThere’s a good reason I hear this comment so often. We haven’t done a good job of communicating our progress to our supporters.\n\n\nSince being appointed Executive Director of the Machine Intelligence Research Institute (SI) in November, I’ve been working to change that. 
I gave [two](http://lesswrong.com/r/discussion/lw/8s6/video_qa_with_singularity_institute_executive/) [Q&As](http://lesswrong.com/lw/980/singularity_institute_executive_director_qa_2/) about SI and explained our research program with a [list of open problems in AI risk research](http://lukeprog.com/SaveTheWorld.html). Now, I’d like to introduce our latest effort in transparency: **monthly progress reports**.\n\n\nWe begin with last month: December 2011. What did we do in December 2011?\n\n\n(From this point on I’ll refer to myself as “Luke,” for clarity.)\n\n\n* **Winter fundraiser**. We launched our winter fundraiser and have been contacting our supporters. The fundraiser has raised over $40k so far, though we still have $60k to go! (So, please [donate](http://intelligence.org/donate)!)\n* **Singularity Summit 2012**. Our chief operating officer, Amy Willey, worked all month on preparations for Singularity Summit 2012, with much help from Luke. As a result we have now chosen a team of professionals with which we will take the Summit to “the next level,” and we’ve already confirmed several major speakers: Ray Kurzweil, Steven Pinker, Tyler Cowen, Temple Grandin, Peter Norvig, Robin Hanson, Peter Thiel, Melanie Mitchell, Vernor Vinge, and Carl Zimmer. We have also opened negotiations with many other speakers. This is a big improvement over our preparations for Singularity Summit 2011, which effectively began in May 2011, leaving us little time to capture certain speakers and develop certain kinds of media coverage. This much progress at such an early stage, in addition to a larger budget and greater professional assistance, will allow Singularity Summit 2012 to be a major leap forward for the event. Amy has also been developing arrangements for a *possible* European Singularity Summit in 2012.\n* **Rationality Org**. 
As explained in our [strategic plan](https://intelligence.org/wp-content/uploads/strategicplan2011.pdf), we recognize the branding confusion produced by focusing on both AI risk research *and* rationality education, so we are preparing to spin off a separate rationality education organization so that the Machine Intelligence Research Institute can focus on AI risk research. Internally, we are calling the rationality education organization “Rationality Org.” Anna and Eliezer, with some help from Luke, did a lot of work developing plans for the future Rationality Org. We spent even *more* time developing the core rationality lessons, testing versions of them on different groups of people, and iterating the content. We expect the Rationality Org to launch late this year or early next year, and we expect it to not only [raise the sanity waterline](http://lesswrong.com/lw/1e/raising_the_sanity_waterline/) but also bring significant funding toward existential risk reduction.\n* **New website design**. Our media director, Michael Anissimov, with much help from Luke, worked out the strategy and design of SI’s new website and worked with a designer to iterate the design several times. The designer is now programming the site.\n* **New donor database**. In December, our Director of Development, Louie Helm, finished setting up our new donor database, including the custom code for automatically importing data from Paypal, Google Checkout, etc. This database gives us a much better view of who our supporters are, and allows us to more effectively thank them for their support. Anissimov wrote personal thank-you notes to hundreds of past donors.\n* **Research articles**. 
Luke and Anna made continued progress on their overview article “Intelligence Explosion: Evidence and Import.” Carl continued work with [FHI](http://www.fhi.ox.ac.uk/)‘s Stuart Armstrong on their article “Arms Races and Intelligence Explosions,” and continued work with Nick Bostrom on their article “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects.”\n* **Other articles**. Luke wrote a few articles for Less Wrong: [Hack Away at the Edges](http://lesswrong.com/lw/8ns/hack_away_at_the_edges/), [Why study the cognitive science of concepts](http://lesswrong.com/r/discussion/lw/8oy/why_study_the_cognitive_science_of_concepts/), and [So You Want to Save the World](http://lesswrong.com/lw/91c/so_you_want_to_save_the_world/). Eliezer made lots of progress on his new Bayes Theorem tutorial, including (outsourced) illustrations and much audience testing.\n* **Eliezer’s book**. Eliezer finished the book proposal for his first book (already mostly written), *The Science of Changing Your Mind*. We have begun looking for good agents to represent the book.\n* **Facing the Singularity**. Luke continued to develop his online book *Facing the Singularity*, a layman’s introduction to the Singularity, its consequences, and what we can do about it. The chapters he wrote in December 2011 were: [The Crazy Robot’s Rebellion](http://facingthesingularity.com/2011/the-crazy-robots-rebellion/), [Not Built to Think About AI](http://facingthesingularity.com/2011/not-built-to-think-about-ai/), [Playing Taboo with “Intelligence”](http://facingthesingularity.com/2011/playing-taboo-with-intelligence/), [Superstition in Retreat](http://facingthesingularity.com/2011/superstition-in-retreat/), [Plenty of Room Above Us](http://facingthesingularity.com/2011/plenty-of-room-above-us/), and [Don’t Flinch Away](http://facingthesingularity.com/2011/dont-flinch-away/).\n* **Additional transparency efforts**. 
Anissimov and Luke began work on the design and content for an annual report. They also shot and produced [Luke’s video Q&A #1](http://lesswrong.com/r/discussion/lw/8s6/video_qa_with_singularity_institute_executive/).\n* **Optimal philanthropy**. The optimal philanthropy movement (e.g. [Giving What We Can](http://www.givingwhatwecan.org/)) is growing exponentially. Carl and Anna did much collaboration and research with other members of the movement. Partly due to their work, the optimal philanthropy movement has great awareness of the case for [existential risk reduction as optimal philanthropy](http://www.existential-risk.org/concept.pdf), which should bring significant funding for existential risk reduction work in the coming years.\n* **Meetings with advisors, supporters, and potential researchers**. During December 2011, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, along with other topics. We also met with several potential researchers to gauge their interest and abilities.\n* **Google Adwords upgrade**. For months, Louie and others have been tweaking the ads we get from $10k/month of Google Adwords donated to us by Google. By December 2011, our ads were so successful that we qualified for an upgrade, and are now receiving $40k/month of free advertising via Google Adwords.\n* **Better financial management**. In December 2011 we began to train our new treasurer, long-time donor and friend of SI, Jesse Liptrap. This means that someone outside the organization is keeping a close watch on our finances. We also began work on improving our book-keeping and accounting practices, which will allow better budgeting, forecasting, and resource management.\n* **Unpublished research**. As with most research institutes, most of our research does not end up in a published paper for 1-3 years, if ever, even though it informs our views on many things. 
Unpublished research in December 2011 included research on population ethics, brain-computer interfaces, optimal philanthropy, [technological forecasting](http://lesswrong.com/lw/9ao/longterm_technological_forecasting/), nuclear extinction risks, AI architectures, anthropics, decision theories, rationality training, Oracle AI, science productivity, and more. SI’s research associates contributed to some of this research, including the Less Wrong discussion post [A model of UDT with a halting oracle](http://lesswrong.com/r/discussion/lw/8wc/decision_theory_with_halting_oracles/).\n* **New board member**. [Quixey](http://www.quixey.com/) co-founder and CEO, [Tomer Kagan](http://www.forbes.com/pictures/lmf45kde/tomer-kagan-co-founder-and-ceo-quixey-28/), was added to SI’s board of directors. Tomer is a good friend and brings a wealth of business and management experience to our team.\n* **Much more**. Of course, we worked on dozens of other, smaller projects. These include: updates to [IntelligenceExplosion.com](http://intelligenceexplosion.com/); development of contacts for Rationality Org; the organization of regular SI staff dinners, to promote coordination and friendship; speaking with donors at Peter Thiel’s “Fast Forward” party; development of a database of helpful volunteers and assistants; implementing [Olark](http://www.olark.com/) on our [donate page](http://intelligence.org/donate/); meetings with reporters from various media organizations; uploading old videos to Vimeo and YouTube; fixing errors and outdated content on our website; finishing our 2010 Form 990 and sending it to Brandon Reinhart to add to his [financial examination](http://lesswrong.com/lw/5il/siai_an_examination/) of the Machine Intelligence Research Institute; preparing a [new template for SI research publications](http://commonsenseatheism.com/wp-content/uploads/2012/01/Dewey-Learning-what-to-value.pdf) (courtesy of research associate Daniel Dewey); and much more.\n\n\nThe post [Machine 
Intelligence Research Institute Progress Report, December 2011](https://intelligence.org/2012/01/16/singularity-institute-progress-report-december-2011/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-01-16T23:35:26Z", "authors": ["Luke Muehlhauser"], "summaries": []}
-{"id": "531cfdeef9cbb9b1252d71b82a992538", "title": "Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director", "url": "https://intelligence.org/2012/01/12/qa-2-with-luke-muehlhauser-singularity-institute-executive-director/", "source": "miri", "source_type": "blog", "text": "**Machine Intelligence Research Institute Activities**\n\n\nBugmaster asks:\n\n\n\n> …what does the SIAI actually do? You don’t submit your work to rigorous scrutiny by your peers in the field… you either aren’t doing any AGI research, or are keeping it so secret that no one knows about it… and you aren’t developing any practical applications of AI, either… So, what is it that you are actually working on, other than growing the SIAI itself?\n> \n> \n\n\nIt’s a good question, and my own biggest concern right now. Donors would like to know: Where is the visible return on investment? How can I see that I’m buying existential risk reduction when I donate to the Machine Intelligence Research Institute?\n\n\nSI has a problem here, because it has done so much invisible work lately. Our researchers have done a ton of work that hasn’t been written up and published yet; Eliezer has been writing his rationality books that aren’t yet published; Anna and Eliezer have been developing a new rationality curriculum for the future “Rationality Org” that will be spun off from the Machine Intelligence Research Institute; Carl has been doing a lot of mostly invisible work in the optimal philanthropy community; and so on. 
I believe this is all valuable x-risk-reducing work, but of course not all of our supporters are willing to just take our word for it that we’re doing valuable work. Our supporters want to see tangible results, and all they see is the Singularity Summit, a few papers a year, some web pages and Less Wrong posts, and a couple rationality training camps. That’s good, but not good enough!\n\n\nI agree with this concern, which is why I’m focused on doing things that happen to be both x-risk-reducing and visible.\n\n\nFirst, we’ve been working on visible “meta” work that makes the Machine Intelligence Research Institute more transparent and effective in general: a strategic plan, a donor database (“visible” to donors in the form of thank-yous), a new website (forthcoming), and an annual report (forthcoming).\n\n\nSecond, we’re pushing to publish more research results this year. We have three chapters forthcoming in *The Singularity Hypothesis*, one chapter forthcoming in *The Cambridge Handbook of Artificial Intelligence*, one forthcoming article on the difficulty of AI, and several other articles and working papers we’re planning to publish in 2012. I’ve also begun writing the first comprehensive [outline of open problems in Singularity research](http://lukeprog.com/SaveTheWorld.html), so that interested researchers from around the world can participate in solving the world’s most important problems.\n\n\nThird, there is visible rationality work forthcoming. One of Eliezer’s books is now being shopped to agents and publishers, and we’re field-testing different versions of rationality curriculum material for use in Less Wrong meetups and classes.\n\n\nFourth, we’re expanding the Singularity Summit brand, an important platform for spreading the memes of x-risk reduction and AI safety.\n\n\nSo my answer to the question is: “Yes, visible return on investment has been a problem lately due to our choice of projects. 
Even before I was made Executive Director, it was one of my top concerns to help correct that situation, and this is still the case today.”\n\n\n**What if?**\n\n\nXiXiDu asks:\n\n\n\n> What would SI do if it became apparent that AGI is at most 10 years away?\n> \n> \n\n\nThis would be a serious problem because by default, AGI will be extremely destructive, and we don’t yet know how to make AGI not be destructive.\n\n\nWhat would we do if we thought AGI was at most 10 years away?\n\n\nThis depends on whether it’s apparent to a wider public that AGI is at most 10 years away, or a conclusion based only on a nonpublic analysis.\n\n\nIf it becomes apparent to a wide variety of folks that AGI is close, then it should be much easier to get people and support for Friendly AI work, so a big intensification of effort would be a good move. If the analysis that AGI is 10 years away leads to hundreds of well-staffed and well-funded AGI research programs and a rich public literature, then trying to outrace the rest with a Friendly AI project becomes much harder. After an intensified Friendly AI effort, one could try to build up knowledge in Friendly AI theory and practice that could be applied (somewhat less effectively) to systems not designed from the ground up for Friendliness. This knowledge could then be distributed widely to increase the odds of a project pulling through, calling in real Friendliness experts, etc. But in general, a widespread belief that AGI is only 10 years away would be a much hairier situation than the one we’re in now.\n\n\nBut if the basis for thinking AI was 10 years away was nonpublic (but nonetheless persuasive to supporters who have lots of resources), then it could be used to differentially attract support to a Friendly AI project, hopefully without provoking dozens of AGI teams to intensify their efforts. 
So if we had a convincing case that AGI was only 10 years away, we might not publicize this but would instead make the case to individual supporters that we needed to immediately intensify our efforts toward a theory of Friendly AI in a way that only much greater funding can allow.\n\n\n**Budget**\n\n\nMileyCyrus asks:\n\n\n\n> What kind of budget would be required to solve the friendly AI problem?\n> \n> \n\n\nLarge research projects always come with large uncertainties concerning how difficult they will be, especially ones that require fundamental breakthroughs in mathematics and philosophy like Friendly AI does.\n\n\nEven a small, 10-person team of top-level Friendly AI researchers taking academic-level salaries for a decade would require tens of millions of dollars. And even getting to the point where you can raise that kind of money requires a slow “ramping up” of researcher recruitment and output. We need enough money to attract the kinds of mathematicians who are also being recruited by hedge funds, Google, and the NSA, and have a funded “chair” for each of them such that they can be prepared to dedicate their careers to the problem. That part alone requires tens of millions of dollars for just a few researchers.\n\n\nOther efforts like the Summit, Less Wrong, outreach work, and early publications cost money, and they work toward having the community and infrastructure required to start funding chairs for top-level mathematicians to be career Friendly AI researchers. 
This kind of work costs between $500,000 and $3 million per year, with more money per year of course producing more progress.\n\n\n**Predictions**\n\n\nWix asks:\n\n\n\n> How much do members’ predictions of when the singularity will happen differ within the Machine Intelligence Research Institute?\n> \n> \n\n\nI asked some Machine Intelligence Research Institute staff members to answer a slightly different question, one pulled from the Future of Humanity Institute’s 2011 machine intelligence survey:\n\n\n\n> Assuming no global catastrophe halts progress, by what year would you assign a 10% / 50% / 90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.\n> \n> \n\n\nIn short, the survey participants’ median estimates (excepting 5 outliers) for 10% / 50% / 90% were:\n\n\n\n> 2028 / 2050 / 2150\n> \n> \n\n\nHere are five of the Machine Intelligence Research Institute’s staff members’ responses, names unattached, for the years by which they would assign a 10% / 50% / 90% chance of HLAI creation, conditioning on no global catastrophe halting scientific progress:\n\n\n2025 / 2073 / 2168 \n\n2030 / 2060 / 2200 \n\n2027 / 2055 / 2160 \n\n2025 / 2045 / 2100 \n\n2040 / 2080 / 2200\n\n\nThose are all the answers I had time to prepare in this round; I hope they are helpful!\n\n\nThe post [Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director](https://intelligence.org/2012/01/12/qa-2-with-luke-muehlhauser-singularity-institute-executive-director/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2012-01-12T23:34:24Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "951de2e753de3480aaabded069f5266c", "title": "2011 Machine Intelligence Research Institute Winter Fundraiser", "url": "https://intelligence.org/2011/12/27/2011-singularity-institute-winter-fundraiser/", "source": "miri", 
"source_type": "blog", "text": "Help us raise $100,000 that will go towards funding a website redesign, [new research papers](http://intelligence.org/summary/), and staging a bigger and better Singularity Summit 2012 in San Francisco. This fundraiser ends January 20th, so please contribute now to help us achieve our goal!\n\n\n[Donate now!](http://intelligence.org/donate)\n\n\n*This is your last chance to make a tax-deductible donation in 2011.*\n\n\n### ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER\n\n\nRecent books like *[Machine Ethics](http://www.amazon.com/Machine-Ethics-Michael-Anderson/dp/0521112354/)* from Cambridge University Press and *[Robot Ethics](http://www.amazon.com/Robot-Ethics-Implications-Intelligent-Autonomous/dp/0262016664/)* from MIT Press, along with the U.S. military-funded research that resulted in *[Governing Lethal Behavior in Autonomous Robots](http://www.amazon.com/Governing-Lethal-Behavior-Autonomous-Robots/dp/1420085948/)* show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that *smarter-than-human* AI benefits humanity. The Machine Intelligence Research Institute has been working on that problem longer than anybody, a full decade before the Singularity landed on the [cover of *TIME* magazine](http://www.time.com/time/magazine/article/0,9171,2048299,00.html).\n\n\n### ACCOMPLISHMENTS IN 2011\n\n\n2011 was our biggest year yet. Since the year began, we have:\n\n\n* Held our annual [Singularity Summit](http://www.singularitysummit.com/) in New York City, with more than 900 in attendance. 
Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, *Skeptic* publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed *Jeopardy!* contestant Ken Jennings.\n* Held a smaller Singularity Summit in Salt Lake City.\n* Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.\n* Created the Research Associates program, which currently has 7 researchers coordinating with Singularity Institute.\n* Published our [Intelligence Explosion FAQ](http://intelligence.org/ie-faq), [IntelligenceExplosion.com](http://intelligenceexplosion.com/), and [Friendly-AI.com](http://friendly-ai.com/).\n* Wrote three chapters for Springer’s upcoming volume *[The Singularity Hypothesis](http://singularityhypothesis.blogspot.com/)*, along with four other research papers.\n* Began work on a new, clearer website design with lots of new content, which should go live Q1 2012.\n* Began [outlining open problems in Singularity research](http://lukeprog.com/SaveTheWorld.html) to help outside collaborators better understand our research priorities.\n\n\n### FUTURE PLANS YOU CAN HELP SUPPORT\n\n\nIn the coming year, we plan to do the following:\n\n\n* Hold our annual Singularity Summit, in San Francisco this year.\n* Improve organizational transparency by creating a simpler, easier-to-use website that includes Machine Intelligence Research Institute planning and policy documents.\n* Publish a document of open research problems in Singularity Research, to clarify the research space and encourage other researchers to contribute to our mission.\n* Add additional skilled researchers to our Research Associates program.\n* Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.\n* Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.\n\n\nWe 
appreciate your support for our high-impact work.\n\n\nThe post [2011 Machine Intelligence Research Institute Winter Fundraiser](https://intelligence.org/2011/12/27/2011-singularity-institute-winter-fundraiser/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2011-12-27T21:57:07Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "34ffb4dc5ab4150a573b37b5fd47626d", "title": "Interview with New MIRI Research Fellow Luke Muehlhauser", "url": "https://intelligence.org/2011/09/15/interview-with-new-singularity-institute-research-fellow-luke-muehlhuaser-september-2011/", "source": "miri", "source_type": "blog", "text": "![](https://intelligence.org/files/LukeFace.png)\n#### Section One: Background and Core Ideas\n\n\nQ1. [What is your personal background?](https://intelligence.org/feed/?paged=81#WhatIsYour) \n\nQ2. [Why should we care about artificial intelligence?](https://intelligence.org/feed/?paged=81#WhyShouldWe) \n\nQ3. [Why do you think smarter-than-human artificial intelligence is possible?](https://intelligence.org/feed/?paged=81#WhyDoYou) \n\nQ4. [The mission of the Machine Intelligence Research Institute is “to ensure that the creation of smarter-than-human intelligence benefits society.” How is your research contributing to that mission?](https://intelligence.org/feed/?paged=81#TheMissionOf) \n\nQ5. [How does MIRI’s approach to making friendly AI differ from the concept of Asimov’s laws?](https://intelligence.org/feed/?paged=81#HowDoesSIAIs) \n\nQ6. [Why is it necessary to make an AI that “wants the same things we want”?](https://intelligence.org/feed/?paged=81#WhyIsIt) \n\nQ7. [If dangerous AI were to develop, why couldn’t we just “pull the plug”?](https://intelligence.org/feed/?paged=81#IfDangerousAI) \n\nQ8. 
[Why are you and the Machine Intelligence Research Institute focused on artificial intelligence instead of human intelligence enhancement or whole brain emulation?](https://intelligence.org/feed/?paged=81#WhyAreYou)\n\n\n \n\n\n\n#### Section Two: Research Area Questions\n\n\nQ9. [What research areas do you specifically investigate to develop “Friendliness content” for artificial intelligence?](https://intelligence.org/feed/?paged=81#WhatResearchAreas) \n\nQ10. [Why are you and the Machine Intelligence Research Institute focused on artificial intelligence instead of human intelligence enhancement or whole brain emulation?](https://intelligence.org/feed/?paged=81#WhyAreYou) \n\nQ11. [What is value extrapolation and how is it relevant to Friendly AI theory?](https://intelligence.org/feed/?paged=81#WhatIsValue) \n\nQ12. [What is the psychology of concepts and how is it relevant to Friendly AI theory?](https://intelligence.org/feed/?paged=81#WhatIsThe) \n\nQ13. [What is game theory and how is it relevant to Friendly AI theory?](https://intelligence.org/feed/?paged=81#WhatIsGame) \n\nQ14. [What is metaethics and how is it relevant to Friendly AI theory?](https://intelligence.org/feed/?paged=81#WhatIsMetaethics) \n\nQ15. [What is normativity and how is it relevant to Friendly AI theory?](https://intelligence.org/feed/?paged=81#WhatIsNormativity) \n\nQ16. [What is machine ethics and how is it relevant to Friendly AI theory?](https://intelligence.org/feed/?paged=81#WhatIsMachine) \n\nQ16. [What are some of those open problems in Friendly AI theory?](https://intelligence.org/feed/?paged=81#WhatAreSome) \n\nQ17. [In late 2010 the Machine Intelligence Research Institute published “Timeless Decision Theory.” What is timeless decision theory and how is it relevant to Friendly AI theory?](https://intelligence.org/feed/?paged=81#InLate2010) \n\nQ18. 
[What is reflective decision theory and how is it relevant to Friendly AI?](https://intelligence.org/feed/?paged=81#WhatIsReflective) \n\nQ19. [Can you give a concrete example of how your research has made progress towards a solution on one or more open problems in Friendly AI?](https://intelligence.org/feed/?paged=81#CanYouGive) \n\nQ20. [Why hasn’t the Machine Intelligence Research Institute produced any concrete artificial intelligence code?](https://intelligence.org/feed/?paged=81#WhyHasntSingularity)\n\n\n#### Section Three: Less Wrong and Rationality\n\n\nQ21. [You originally came to the Machine Intelligence Research Institute’s attention when you gained over 10,000 karma points very quickly on Less Wrong. For those who aren’t familiar with it, can you tell us what Less Wrong is and what its relationship is to the Machine Intelligence Research Institute?](https://intelligence.org/feed/?paged=81#YouOriginallyCame) \n\nQ22. [What originally got you interested in rationality?](https://intelligence.org/feed/?paged=81#WhatOriginallyGot) \n\nQ23. [You recently were an instructor at Rationality MiniCamp in Berkeley during the summer. Can you tell us a little about the Rationality camps, what people do there, what you taught, and so on?](https://intelligence.org/feed/?paged=81#YouRecentlyWere) \n\nQ24. [Besides being a website, Less Wrong groups meet up around the world. If I were interested, where would I be able to get involved in one of those meetups, and what goes on at these meetups?](https://intelligence.org/feed/?paged=81#BesidesBeingA) \n\nQ25. [The recently released Strategic Plan mentions intentions to “spin off rationality training to another organization so that the Machine Intelligence Research Institute can focus on Friendly AI research.” Can you tell us something about that?](https://intelligence.org/feed/?paged=81#TheRecentlyReleased)\n\n\n#### Section Four: Machine Intelligence Research Institute Operations\n\n\nQ26. 
[You and Louie Helm were hired on September 1st. How were the two of you hired?](https://intelligence.org/feed/?paged=81#YouAndLouie) \n\nQ27. [What does Louie Helm do as Director of Development?](https://intelligence.org/feed/?paged=81#WhatDoesLouie) \n\nQ28. [The Machine Intelligence Research Institute just raised $250,000 in our Summer Challenge Grant. What will those funds be spent on?](https://intelligence.org/feed/?paged=81#SingularityInstituteJust) \n\nQ29. [Carl Shulman was hired not long before you and Louie. What is his role in the organization?](https://intelligence.org/feed/?paged=81#CarlShulmanWas) \n\nQ30. [What sort of new researchers is the Machine Intelligence Research Institute looking for?](https://intelligence.org/feed/?paged=81#WhatSortOf)\n\n\n**Michael Anissimov:** You were recently hired as a [full-time employee](http://intelligence.org/team#luke) by the Machine Intelligence Research Institute. What is your personal background?\n\n\n**Luke Muehlhauser:** I studied psychology in university but quickly found that I learn better and faster as an autodidact. Since then, I’ve consumed many fields of science and philosophy, one at a time, as they were relevant to my interests. I’ve written [dozens of articles](http://lukeprog.com/writings.html) for [my blog](http://commonsenseatheism.com/) and for [Less Wrong](http://lesswrong.com/), and I host a [podcast](http://commonsenseatheism.com/?p=1911) on which I interview leading philosophers and scientists about their work. I also have an interest in the mathematics and cognitive science of [human rationality](http://lesswrong.com/r/lesswrong/lw/7e5/the_cognitive_science_of_rationality/), because I want the research I do to be arriving at plausibly correct answers, not just answers that make me feel good.\n\n\n**Michael:** Why should we care about artificial intelligence?\n\n\n**Luke:** Artificial intelligence is becoming a more powerful technology every year. 
We now have robots that do [original scientific research](http://commonsenseatheism.com/wp-content/uploads/2011/02/King-Rise-of-the-Robo-Scientists.pdf), and the U.S. military is developing systems for [autonomous battlefield robots](http://www.amazon.com/Governing-Lethal-Behavior-Autonomous-Robots/dp/1420085948/) that make their own decisions. Artificial intelligence will become even more important when it passes human levels of intelligence, at which point it will be able to do many things we care about better than we can — things like curing cancer and preventing disasters.\n\n\n**Michael:** Why do you think smarter-than-human artificial intelligence is possible?\n\n\n**Luke:** The first reason is scientific. Human intelligence is a product of information processing in a brain made of meat. But meat is not an ideal platform for intelligence; it’s just the first one that evolution happened to produce. Information processing on a faster, more durable, and more flexible platform like silicon should be able to surpass the abilities of an intelligence running on meat if we can figure out which information processing algorithms are required for intelligence — either by looking more closely at which algorithms the brain is using or by gaining new insights in mathematics.\n\n\nThe second reason is historical. Machines have already passed human ability in hundreds of particular tasks: [playing chess](http://commonsenseatheism.com/wp-content/uploads/2011/08/Campbell-Deep-Blue.pdf) or [Jeopardy](http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?_r=3&ref=homepage&src=me&pagewanted=all), searching through large databanks, and in a recent advance, [reading road signs](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). 
There is little reason to suspect this trend will stop, unless scientific progress in general stops.\n\n\n**Michael:** The [mission](http://intelligence.org/summary/) of the Machine Intelligence Research Institute is to “to ensure that the creation of smarter-than-human intelligence benefits society.” How is your research contributing to that mission?\n\n\n**Luke:** A smarter-than-human machine intelligence that benefits (rather than harms) society is called “[Friendly AI](http://intelligence.org/ie-faq#FriendlyAI).” My primary research focus is what we call the problem of “friendliness content.” What does it look like for an AI to be “friendly” or to “benefit” society? We all have ideas about what a good world looks like and what a bad world looks like, but when thinking about that in the context of AI you must be very precise, because an AI will only do exactly what it is programmed to do.\n\n\nIf we can figure out how to specify exactly what it would mean for an AI to be “friendly,” then the creation of Friendly AI could be the best thing that ever happened. An advanced artificial intelligence could do science better and faster than we can, and thereby cure cancer, cure diseases, allow for human immortality, prevent disasters, solve the problems of climate change, and allow us to extend our civilization to other planets. 
A Friendly AI could also discover better economic and political systems that improve conditions for everyone.\n\n\n**Michael:** How does SIAI’s approach to making friendly AI differ from the concept of Asimov’s laws?\n\n\n**Luke:** Asimov’s [Three Laws of Robotics](http://en.wikipedia.org/wiki/Three_Laws_of_Robotics) for governing robot behavior are [widely](http://commonsenseatheism.com/wp-content/uploads/2011/09/Anderson-Asimovs-three-laws-of-robotics-and-machine-metaethics.pdf) [considered](http://www.amazon.com/Moral-Machines-Teaching-Robots-Right/dp/0199737975/) to be inadequate for making sure that intelligent machines bring no harm to humans. In fact, Asimov used his stories to illustrate many of the ways in which those laws could lead to unintended consequences. SIAI’s approach is very different in that we don’t think that constraints on AI behavior will work in the long run. We need advanced AI to want the same things we want. If the AI wants something different than what we want, it will eventually find a way around whatever constraints we put on it, due to its vastly superior intelligence. But if we can make an AI want the same things we want, then it will be much more effective than we can be at bringing about the kind of world that we want – curing cancer and inventing immortality and so on.\n\n\n**Michael:** Why is it necessary to make an AI that “wants the same things we want”?\n\n\n**Luke:** A powerful AI that wants something different than we do could be dangerous. For example, suppose the AI’s goal system is programmed to maximize pleasure. That sounds good at first, but if you tell a super-powerful AI to “maximize pleasure,” it might do something like (1) convert most of Earth’s resources into computing machinery, killing all humans in the process, so that it can (2) tile the solar system with as many small digital minds as possible, and (3) have those digital minds run a continuous cycle of the single most pleasurable experience possible. 
But of course, that’s not what we want! We don’t just value pleasure, we also value things like novelty and exploration. So we need to be very careful when we tell an AI precisely what it means to be “friendly.”\n\n\nWe must be careful not to anthropomorphize. A machine intelligence won’t necessarily have our ‘common sense,’ or our values, or even be sentient. When AI researchers talk about [machine intelligence](http://www.vetta.org/documents/Machine_Super_Intelligence.pdf), they only mean to talk about a machine that is good at achieving its goals — whatever they are — in a wide variety of environments. So if you tell an AI to maximize pleasure, it will do exactly that. It’s not going to stop halfway through and ‘realize’ — like a human might — that maximizing pleasure isn’t what was intended, and that it should do something else.\n\n\n**Michael:** If dangerous AI were to develop, why couldn’t we just “pull the plug”?\n\n\n**Luke:** We might not know that an AI was dangerous until it was too late. An AI with a certain level of intelligence is going to realize that in order to achieve its goals it needs to avoid being turned off, and so it would hide both the level of its own intelligence and its dangerousness.\n\n\nBut the bigger problem is that if some AI development team has already developed an AI that is intelligent enough to be dangerous, then other teams are only a few months or years behind them. You can’t just unplug every dangerous AI that comes along until the end of time. We’ll need to develop a Friendly AI that can ensure safety much better than humans can.\n\n\n**Michael:** Why are you and the Machine Intelligence Research Institute focused on artificial intelligence instead of human intelligence enhancement or whole brain emulation?\n\n\n**Luke:** Human intelligence enhancement is important, and may be needed to solve some of the harder problems of Friendly AI. 
Whole brain emulation is a particularly revolutionary kind of human intelligence enhancement that, if invented, could allow us to upload human minds into computers, run them at speeds much faster than is possible with neurons, make backup copies, and allow immortality.\n\n\nMany researchers think that artificial intelligence will arrive before whole brain emulation does, but predicting the timelines of future technology can be difficult. We are very interested in whole brain emulation, and in fact that was the subject of a presentation our researcher [Anna Salamon](http://annasalamon.com/) gave at a recent AI conference. One reason for us to focus on AI for the moment is that there are dozens of open problems in Friendly AI theory that we can make progress on right now without needing the vast computational resources required to make progress in whole brain emulation.\n\n\n### Section Two: Research Area Questions\n\n\n**Michael:** What research areas do you specifically investigate to develop “Friendliness content” for artificial intelligence?\n\n\n**Luke:** One relevant area of research is [cognitive neuroscience](http://en.wikipedia.org/wiki/Cognitive_neuroscience), especially the subfields [neuroeconomics](http://en.wikipedia.org/wiki/Neuroeconomics) and [affective neuroscience](http://en.wikipedia.org/wiki/Affective_neuroscience).\n\n\nWorlds are “good” or “bad” to us because of our values, and our values are stored in the brain’s neural networks. For decades, we’ve had to infer human values by observing human behavior because the brain has been a “black box” to us. But that can only take us so far because the environments in which we act are highly complex, and that makes it difficult to infer human values merely from behavior. 
Recently, new technologies like [fMRI](http://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging) and [TMS](http://en.wikipedia.org/wiki/Transcranial_magnetic_stimulation) and [optogenetics](http://en.wikipedia.org/wiki/Optogenetics) have allowed us to look into the black box and watch what the brain does. In fact, we’ve located the specific neurons that seem to encode the brain’s expected subjective value for the possible actions we are considering at a given moment. We’ve also learned a lot about the specific algorithms used by the brain to update how much we value certain things – in fact, they turned out to be a type of algorithm first discovered in computer science, called temporal difference reinforcement learning.\n\n\nA second relevant area of research is [choice modeling](http://en.wikipedia.org/wiki/Choice_modelling) and [preference elicitation](http://en.wikipedia.org/wiki/Preference_elicitation). Economists use a variety of techniques, for example [Willingness to Pay](http://en.wikipedia.org/wiki/Willingness_to_pay) measures, to infer human preferences from human behavior. AI researchers also do this, usually for the purposes of designing a piece of software called a [decision support system](http://en.wikipedia.org/wiki/Decision_support_system). 
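The temporal-difference update mentioned above can be made concrete with a minimal sketch. This is a toy TD(0) value learner over an invented two-step world; the states, rewards, learning rate, and discount factor are illustrative assumptions, not anything from the neuroscience literature the interview cites:

```python
# Toy TD(0) value learning: each value estimate is nudged toward the reward
# received plus the discounted value of the next state. The gap between
# those two quantities is the "reward-prediction error".

def td_update(value, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One temporal-difference update; returns the prediction error."""
    error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * error
    return error

# Invented two-step episodes: "cue" leads to "outcome", which delivers a
# reward and ends the episode ("end" is terminal, with value fixed at 0).
value = {"cue": 0.0, "outcome": 0.0, "end": 0.0}
for _ in range(200):
    td_update(value, "cue", "outcome", reward=0.0)
    td_update(value, "outcome", "end", reward=1.0)

# After repeated experience, value propagates backward: value["outcome"]
# approaches 1.0 and the predictive cue approaches gamma * 1.0.
print(value["outcome"], value["cue"])
```

The discount factor is what lets value flow backward from the rewarding event to the cue that predicts it, which is the signature property that made this family of algorithms a plausible model of reward learning in the brain.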
The human brain doesn’t seem to encode a coherent preference set, so we’ll need to use choice modeling and preference elicitation techniques to extract a coherent preference set from whatever it is that human brains actually do.\n\n\nOther fields relevant to friendliness content theory include value extrapolation, the psychology of concepts, game theory, metaethics, normativity, and machine ethics.\n\n\n**Michael:** What is value extrapolation and how is it relevant to Friendly AI theory?\n\n\n**Luke:** Most philosophers talk about “[ideal preference theories](http://commonsenseatheism.com/wp-content/uploads/2011/09/Zimmerman-Why-Richard-Brandt-does-not-need-cognitive-psychotherapy.pdf),” but I prefer to call them ‘value extrapolation algorithms.’ If we want to develop Friendly AI, we may not want to just scan human values from our brains and give those same values to an AI. I want to eat salty foods all day, but I kind of wish I didn’t want that, and I certainly don’t want an AI to feed me salty foods all day. Moreover, I would probably change my desires if I knew more and was more rational. I might learn things that would change what I want. And it’s unlikely that the human species has reached the end of moral development. So we don’t want to fix things in place by programming an AI with our current values. We want an AI to extrapolate our values so that it cares about what we would want if we knew more, were more rational, were more morally developed, and so on.\n\n\n**Michael:** What is the psychology of concepts and how is it relevant to Friendly AI theory?\n\n\n**Luke:** Some researchers think that part of the solution to the friendliness content problem will come from examining our intuitive concept of ‘ought’ or ‘good,’ and using this to inform our picture of what we think a good world would be like, and thus what the goal system of a super-powerful machine should be aimed toward. 
Philosophers have been examining our intuitive concepts of ‘ought’ or ‘good’ for centuries and made little progress, but perhaps new tools in psychology and neuroscience can help us do this conceptual analysis better than philosophers could from their armchairs.\n\n\nOn the other hand, psychological experiments have been undermining our classical theories about what concepts are, leading [some](http://www.amazon.com/Doing-without-Concepts-Edouard-Machery/dp/0199837562/) to go so far as to conclude that concepts do not exist in any useful sense. The results of that research program in psychology and philosophy could have profound implications for any approach to friendliness content that depends on an examination of our intuitive concepts of ‘ought’ or ‘good.’\n\n\n**Michael:** What is game theory and how is it relevant to Friendly AI theory?\n\n\n**Luke:** Game theory is a highly developed field of mathematics concerned with particular scenarios (‘games’) where an agent’s success depends on the choices of others. Its models and discoveries have been applied to business, economics, political science, biology, computer science, and philosophy.\n\n\nGame theory is relevant to friendliness content because many of our values result from our need to make decisions in scenarios where our success depends on the choices of others. It may also be relevant to value extrapolation algorithms, as the extrapolation process is likely to change the ways in which our values and decisions interact with the values and decisions of others.\n\n\n**Michael:** What is metaethics and how is it relevant to Friendly AI theory?\n\n\n**Luke:** Philosophers often divide the field of ethics into three levels. [Applied ethics](http://www.iep.utm.edu/ethics/#H3) is the study of particular moral questions: How should we treat animals? Is lying ever acceptable? What responsibilities do corporations have concerning the environment? 
[Normative ethics](http://www.iep.utm.edu/ethics/#H2) considers the principles by which we make judgments in applied ethics. Do we make one judgment over another based on which action produces the most good? Or should we be following a list of rules and respecting certain rights? Perhaps we should advocate what we would all agree to behind a veil of ignorance that kept us from knowing what our lot in life will be?\n\n\n[Metaethics](http://www.iep.utm.edu/ethics/#H1) goes one level deeper. What do terms like ‘good’ and ‘right’ even mean? Do moral facts exist, or is it all relative? Is there such a thing as moral progress? These questions are relevant to friendliness content because presumably, if moral facts exist, we would want an AI to respect them. Even if moral facts do not exist, our moral attitudes are part of what we value, and that is relevant to friendliness content theory.\n\n\n**Michael:** What is normativity and how is it relevant to Friendly AI theory?\n\n\n**Luke:** [Normativity](http://www.philosophyetc.net/2004/12/normativity.html) is about norms, and there are many kinds. Prudential norms concern what we ought to do to achieve our goals. Epistemic norms concern how we ought to pursue knowledge. Doxastic norms concern what we ought to believe. Moral norms concern how we ought to behave ethically. And so on.\n\n\nA classic concern of normativity is the “[is-ought gap](http://en.wikipedia.org/wiki/Is%E2%80%93ought_problem).” Supposedly, you cannot reason from an ‘is’ statement to an ‘ought’ statement. It doesn’t logically follow from “The man in front of me is suffering” that “I ought to help him.” Actually, it’s trivial to [bridge the is-ought gap](http://www.philosophyetc.net/2004/04/bridging-isought-gap.html) when it comes to prudential norms. 
“If you want Y, then you ought to do X,” is just another way of saying “Doing X will increase your chances of attaining Y.” The first sentence contains an ‘ought’ claim, but the second sentence reduces it away into a purely descriptive sentence about the natural world.\n\n\nSome philosophers think that the ‘is-ought gap’ can be bridged in the same way for epistemic and moral norms. Perhaps “you ought to believe X” just means “If you want true beliefs, then you ought to believe X,” which in turn can be reduced into the purely descriptive statement “Believing X will increase your proportion of true beliefs.”\n\n\nBut is there any other kind of normativity? Are there ‘categorical’ oughts that do not depend on an “If you want X” clause? Naturalists tend to deny this possibility, but perhaps categorical epistemic or moral oughts can be derived from the mathematics of game theory and decision theory, as naturalist Gary Drescher suggests in [*Good and Real*](http://www.amazon.com/Good-Real-Demystifying-Paradoxes-Bradford/dp/0262042339/). If so, it may be wise to make sure they are included in friendliness content theory, so that an AI can respect them.\n\n\n**Michael:** What is machine ethics and how is it relevant to Friendly AI theory?\n\n\n**Luke:** [Machine ethics](http://www.amazon.com/Machine-Ethics-Michael-Anderson/dp/0521112354/) is one of several names for the field that studies two major questions: (1) How can we get machines to behave ethically, and (2) which types of machines can be considered genuine moral agents (in the sense of having rights or moral worth like a human might)? 
Most of the work in the field so far is relevant only to ‘narrow AI’ machines that are not nearly as intelligent as humans are, but two directions of research that may be useful for Friendly AI are [mechanized deontic logic](http://commonsenseatheism.com/wp-content/uploads/2011/02/Arkoudas-Toward-ethical-robots-via-mechanized-deontic-logic.pdf) and [computational metaethics](http://commonsenseatheism.com/wp-content/uploads/2011/03/Lokhorst-Computational-meta-ethics-toward-the-meta-ethical-robot.pdf).\n\n\nUnfortunately, our understanding of Friendly AI — and of not-yet-invented AI technologies in general — is so primitive that we’re not even sure which fields will turn out to matter. It seems like cognitive neuroscience, preference elicitation, value extrapolation, game theory, and several other fields are relevant to Friendly AI theory, but it might turn out that as we come to understand Friendly AI better, we’ll learn that some research avenues are not relevant. But the only way we can learn that is to continue to make incremental progress in the areas of research that seem to be relevant.\n\n\n**Michael:** What are some of those open problems in Friendly AI theory?\n\n\n**Luke:** If we think just about the issue of Friendliness content, some of the open questions are: How does the brain choose which few possible actions it will encode expected subjective value for? How does it combine absolute value and probability estimates to make those expected subjective value computations? Where is absolute value stored, and how is it encoded? How can we extract a coherent utility function or preference set from this activity in human brains? Which algorithms should we use to extrapolate these preferences, and why? When extrapolated, will the values of two different humans converge? Will the values of all humans converge? Would the values of all sentient beings converge? 
Will the details of human cognitive neuroscience matter much, or will such details get ‘washed out’ by the higher-level mathematical structure of value systems and game theory? How can these extrapolated values be implemented in the goal system of an AI?\n\n\nFriendliness content is only one area of open problems in Friendly AI theory. There are many other questions. How can an agent make optimal decisions when it is capable of directly editing its own source code, including the source code of the decision mechanism? How can we get an AI to maintain a consistent utility function throughout updates to its ontology? How do we make an AI with preferences about the external world instead of its reward signal? How can we generalize the theory of machine induction — called [Solomonoff induction](http://www.vetta.org/documents/disSol.pdf) — so that it can use higher-order logics and reason correctly about observation selection effects? How can we approximate such ideal processes so that they are computable?\n\n\nThat’s a start, anyway. 🙂\n\n\n**Michael:** In late 2010 the Machine Intelligence Research Institute published [“Timeless Decision Theory.”](http://intelligence.org/upload/TDT-v01o.pdf) What is timeless decision theory and how is it relevant to Friendly AI theory?\n\n\n**Luke:** Decision theory is the study of how to make optimal decisions. We value different things differently, and we are uncertain about which actions will bring about what we value. One of the problems not handled well by traditional decision theories like [Evidential decision theory](http://en.wikipedia.org/wiki/Evidential_decision_theory) (EDT) and [Causal decision theory](http://plato.stanford.edu/entries/decision-causal/) (CDT) is the problem of logical uncertainty — our uncertainty about mathematical and logical facts, for example what the nth decimal of pi is. 
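To make the contrast between EDT and CDT concrete, here is a minimal sketch of Newcomb's problem, the standard textbook case where the two theories diverge. This is my own illustration, not code from MIRI or the TDT paper; the 0.99 predictor accuracy and the payoff amounts are conventional assumptions.

```python
# Illustrative sketch (not MIRI code): Newcomb's problem, where evidential
# and causal decision theory recommend different actions.
# Assumed setup: a predictor with 0.99 accuracy fills an opaque box with
# $1,000,000 iff it foresaw one-boxing; a transparent box always holds $1,000.

ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

def edt_value(action):
    """EDT conditions on the action as evidence about the prediction."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    base = 0 if action == "one-box" else THOUSAND  # transparent-box take
    return p_full * (MILLION + base) + (1 - p_full) * base

def cdt_value(action, p_full):
    """CDT holds the box contents fixed: the choice can't cause the past."""
    base = 0 if action == "one-box" else THOUSAND
    return p_full * (MILLION + base) + (1 - p_full) * base

# EDT prefers one-boxing ($990,000 expected vs. $11,000)...
assert edt_value("one-box") > edt_value("two-box")
# ...while CDT prefers two-boxing whatever it believes about the opaque box,
# since taking both boxes always adds $1,000 to a fixed state of the world.
for p in (0.0, 0.5, 0.99):
    assert cdt_value("two-box", p) > cdt_value("one-box", p)
```

The divergence comes entirely from whether the agent treats its own action as evidence about the prediction (EDT) or holds the already-determined box contents fixed (CDT).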
One way to think about Timeless decision theory (TDT) is that it’s a step toward a decision theory that can handle logical uncertainty.\n\n\nFor an AI to be safe, its decision mechanism will have to be somewhat clear and mathematically testable for stability and safety. That probably means it will need to make decisions with decision theory, rather than through a relatively opaque neural-network mechanism. So we need to solve some fundamental problems in decision theory first, and logical uncertainty is one of the remaining fundamental problems in decision theory.\n\n\n**Michael:** What is reflective decision theory and why is it necessary for Friendly AI?\n\n\n**Luke:** Traditional decision theories cannot handle agents that can modify their own source code, including the source code for their decision mechanism. A reflective decision theory is one that can handle such a strongly self-modifying agent. Because an advanced AI will be intelligent enough to modify its own source code, we need to develop a reflective decision theory that will allow us to ensure that the AI will remain Friendly throughout the self-modification and self-improvement process.\n\n\n**Michael:** Can you give a concrete example of how your research has made progress toward a solution to one or more open problems in Friendly AI?\n\n\n**Luke:** I’ve only just begun working with the Machine Intelligence Research Institute, and making progress on open problems in Friendly AI theory is only one of the many things I do. My first contribution to friendliness content theory was to [summarize](http://lesswrong.com/lw/71x/a_crash_course_in_the_neuroscience_of_human/) some very recent advances in neuroeconomics that are relevant to the study of human values. 
I did that because other researchers in the field were not yet familiar with that material, and I think much of the work in friendliness content theory can be done collaboratively by a broad community of researchers if we are all well-informed.\n\n\nThese results from neuroeconomics appear to be relevant to friendliness content theory, though only time will tell. For example, we’ve learned that expected utility for human actions is encoded cardinally (not ordinally) in the brain, and thus avoids a limiting result from economics called [Arrow’s impossibility theorem](http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem).\n\n\n**Michael:** Why hasn’t the Machine Intelligence Research Institute produced any concrete artificial intelligence code?\n\n\n**Luke:** This is a common confusion. Most of the open problems in Friendly AI theory are in math and philosophy, not in computer programming. Sometimes programmers approach us, offering to work on Friendly AI theory, and we reply: “What we need are mathematicians. Are you brilliant at math?”\n\n\nAs it turns out, the heroes who can save the world are not those with incredible strength or the power of flight. They are mathematicians.\n\n\n### Section Three: Less Wrong and Rationality\n\n\n**Michael:** You originally came to the Machine Intelligence Research Institute’s attention when you gained over 10,000 karma points very quickly on [Less Wrong](http://www.lesswrong.com/). For those who aren’t familiar with it, can you tell us what Less Wrong is and what its relationship is to the Machine Intelligence Research Institute?\n\n\n**Luke:** Less Wrong is a group blog and community devoted to the study of rationality: How to get truer beliefs and make better decisions. 
The Machine Intelligence Research Institute’s co-founder Eliezer Yudkowsky originally wrote hundreds of articles about rationality for another blog, [Overcoming Bias](http://www.overcomingbias.com/), because he wanted to build a community of people that could think clearly about difficult problems like Friendly AI. Those articles were then used as the seed content for a new website, Less Wrong. I discovered Less Wrong because of my interest in rationality, and eventually started writing articles for the site — many of which became very popular.\n\n\n**Michael:** What originally got you interested in rationality?\n\n\n**Luke:** I was raised an enthusiastic evangelical Christian, and had a dramatic crisis of faith when I [learned](http://lesswrong.com/lw/7dy/a_rationalists_tale/) a few things about the historical Jesus, science, and philosophy. I was disturbed by how confidently I had believed something that was so thoroughly wrong, and I no longer trusted my intuitions. I wanted to avoid being so wrong again, so I studied the phenomena that allow human brains to be so mistaken — things like [confirmation bias](http://en.wikipedia.org/wiki/Confirmation_bias) and the [affect heuristic](http://lesswrong.com/lw/lg/the_affect_heuristic/). I also gained an interest in the mathematics of correct thinking, like [Bayesian updating](http://yudkowsky.net/rational/bayes) and [decision theory](http://wiki.lesswrong.com/wiki/Decision_theory).\n\n\n**Michael:** You were recently an instructor at Rationality MiniCamp in Berkeley during the summer. Can you tell us a little about the MiniCamp, what people did there, what you taught, and so on?\n\n\n**Luke:** [Anna](http://annasalamon.com/) and I put together the minicamp, a one-week camp full of classes and activities about rationality, social effectiveness, and existential risks. Over 20 people stayed in a large house in Berkeley, where we held the classes. 
Some of them came from as far away as Sweden and England.\n\n\nMinicamp was a blast, mostly because the people were so great! We are still in contact, still learning and growing.\n\n\nWe taught things like how to update our beliefs using probability theory, how to use the principle of fungibility to better fulfill our goals, and how to use body language and fashion to improve some parts of our lives that math-heads sometimes neglect! We also taught classes on [optimal philanthropy](http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/) (how to get the most bang for your philanthropic buck) and [existential risks](http://www.existential-risk.org/) (risks that could cause human extinction).\n\n\n**Michael:** Besides being a website, Less Wrong groups meet up around the world. If I were interested, where would I be able to get involved in one of those meetups, and what goes on at these meetups?\n\n\n**Luke:** Because of [how sensitive humans are to context](http://lesswrong.com/lw/52g/the_good_news_of_situationist_psychology/), surrounding yourself with other people who are learning rationality and trying to improve themselves is one of the most powerful ways to improve yourself.\n\n\nThe easiest way to find a Less Wrong meetup near you is probably to check the most recent [front-page](http://lesswrong.com/promoted/) post on Less Wrong with a title that begins “Weekly LW Meetups…” That post will list all the Less Wrong meetups happening that week.\n\n\nEach Less Wrong meetup has different people and different activities. 
You can contact the meetup organizer for the meetup nearest you for more information.\n\n\n**Michael:** The recently released [Strategic Plan](http://intelligene.org/blog/2011/08/26/singularity-institute-strategic-plan-2011/) mentions intentions to “spin off rationality training to another organization so that the Machine Intelligence Research Institute can focus on Friendly AI research.” Can you tell us something about that?\n\n\n**Luke:** We believe that building a large community of rationality enthusiasts is crucial to the success of our mission. The Less Wrong rationality community has been an indispensable source of human and financial capital for the Machine Intelligence Research Institute. However, we understand that it’s confusing to be an organization devoted to two such apparently different fields: advanced artificial intelligence and human rationality. That’s why we are working toward launching a new organization devoted to rationality training. The Machine Intelligence Research Institute, then, will be more exclusively devoted to the safety of advanced artificial intelligence.\n\n\n### Section Four: Machine Intelligence Research Institute Operations\n\n\n**Michael:** You and Louie Helm were hired on September 1st. How were the two of you hired?\n\n\n**Luke:** The Machine Intelligence Research Institute doesn’t hire someone unless they do quite a bit of volunteer work first. I first came to the Machine Intelligence Research Institute as a visiting fellow. During the next few months I co-organized and taught at the Rationality Minicamp, taught classes for the longer Rationality Boot Camp, wrote dozens of articles on metaethics and rationality for Less Wrong, wrote the [Intelligence Explosion FAQ](http://intelligence.org/ie-faq) and [IntelligenceExplosion.com](http://intelligenceexplosion.com/), led the writing of a strategic plan for the organization, and did many smaller tasks.\n\n\nLouie Helm arrived in Berkeley not long after I did. 
As a past visiting fellow, Louie was the one who had suggested I apply to the visiting fellows program. Louie did some teaching for Rationality Boot Camp, helped me write the strategic plan, developed a donor database so that our contact with donors is more consistent, optimized the Machine Intelligence Research Institute’s finances, did lots of fundraising, and much more.\n\n\nWe both produced lots of value for the organization over those months as volunteers, so the Board hired us — me as a research fellow and Louie as Director of Development.\n\n\n**Michael:** What does Louie Helm do as Director of Development?\n\n\n**Luke:** We’re a small team, so we all do more than our title says, and Louie is no exception. Louie raises funds, communicates with donors, applies for grants, and so on. But he also launched the Research Associates program, coordinates the [Volunteer Network](http://www.singularityvolunteers.org/), helps organize the [Singularity Summit](http://www.singularitysummit.com/), seeks out potential new researchers, and more.\n\n\n**Michael:** The Machine Intelligence Research Institute just raised $250,000 in our Summer Challenge Grant. What will those funds be spent on?\n\n\n**Luke:** We were very pleased by the results of the summer challenge grant. No single person gave more than $25,000, so the grant succeeded because so many different people gave. More than 40 people gave $1,000 or more, which shows a high degree of trust from our core supporters.\n\n\nIt costs $368,000 annually to support our lean family of eight full-time staff members, four of whom are research fellows: Eliezer Yudkowsky, Anna Salamon, Carl Shulman, and myself. The money will also be used to run the [2011 Singularity Summit](http://www.singularitysummit.com/), though we expect that event to be cash-positive this year. We plan to redesign the singinst.org website so that it is easier to navigate and provides greater organizational transparency. 
And with enough funds after the Summit, we hope to hire additional researchers.\n\n\n**Michael:** Carl Shulman was hired not long before you and Louie. What is his role in the organization?\n\n\n**Luke:** Carl also did quite a lot of work for the Machine Intelligence Research Institute before being hired. He has written several papers and given a few talks, many of which you can read on our [publications](http://intelligence.org/research/) page. He continues to work on a variety of research projects, and collaborates closely with researchers at Oxford’s [Future of Humanity Institute](http://www.fhi.ox.ac.uk/).\n\n\n**Michael:** What sort of new researchers is the Machine Intelligence Research Institute looking for?\n\n\n**Luke:** Mathematicians, mostly. If you’re a brilliant math student and want to live and work in the Bay Area where you’ll be surrounded by smart, influential, altruistic people, please [apply here](http://intelligence.org/aboutus/opportunities/research-fellow).\n\n\nThe post [Interview with New MIRI Research Fellow Luke Muehlhauser](https://intelligence.org/2011/09/15/interview-with-new-singularity-institute-research-fellow-luke-muehlhuaser-september-2011/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2011-09-15T10:17:50Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "8898eff22ce53b0c8e627829dd5db9fd", "title": "2011 Summer Matching Challenge Success!", "url": "https://intelligence.org/2011/09/01/2011-summer-matching-challenge-success/", "source": "miri", "source_type": "blog", "text": "Thanks to the effort of our donors, the 2011 Summer Singularity Challenge has been met! All $125,000 contributed will be matched dollar for dollar by our matching backers, raising a total of $250,000 to fund the Machine Intelligence Research Institute’s operations. 
We reached our goal two days early, near midnight of August 29th.\n\n\nOn behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Your dollars make the difference.\n\n\nHere’s to a better future for the human species.\n\n\nThe post [2011 Summer Matching Challenge Success!](https://intelligence.org/2011/09/01/2011-summer-matching-challenge-success/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2011-09-01T19:10:40Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "a7297d5b886aad5438ff00f5a482f4c3", "title": "Machine Intelligence Research Institute Strategic Plan 2011", "url": "https://intelligence.org/2011/08/26/singularity-institute-strategic-plan-2011/", "source": "miri", "source_type": "blog", "text": "The Machine Intelligence Research Institute has prepared a Strategic Plan ([PDF](https://intelligence.org/files/strategicplan2011.pdf)) to concisely describe our near-term goals, vision, and concrete actions in pursuit of those goals and vision. We welcome commentary from our supporter community on the plan. The release of this Strategic Plan is part of an overall effort to increase transparency at the Institute.\n\n\n**Update**: This strategic plan is now obsolete. 
See our [2013 strategic priorities](http://intelligence.org/2013/04/13/miris-strategy-for-2013/) instead.\n\n\nThe post [Machine Intelligence Research Institute Strategic Plan 2011](https://intelligence.org/2011/08/26/singularity-institute-strategic-plan-2011/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2011-08-26T23:27:30Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "cbfb52d730b25c7dc6aa9a1c3526116f", "title": "New Intelligence Explosion Website", "url": "https://intelligence.org/2011/08/07/new-intelligence-explosion-website/", "source": "miri", "source_type": "blog", "text": "We’ve put together a new website focused on the intelligence explosion concept: [IntelligenceExplosion.com](http://intelligenceexplosion.com/). The site is a “landing page” that provides an easy introduction to the topic for laymen and researchers alike.\n\n\nAlso see Nick Bostrom’s landing pages for [anthropics](http://www.anthropic-principle.com/), [the simulation argument](http://www.simulation-argument.com/), and [existential risk](http://www.existentialrisk.com/).\n\n\nThe post [New Intelligence Explosion Website](https://intelligence.org/2011/08/07/new-intelligence-explosion-website/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2011-08-08T04:09:09Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "7d6ed6072b50be86132519f44b685c18", "title": "Announcing the $125,000 Summer Singularity Challenge", "url": "https://intelligence.org/2011/07/22/announcing-the-125000-summer-singularity-challenge/", "source": "miri", "source_type": "blog", "text": "Thanks to the generosity of several major donors†, every donation to the Machine Intelligence Research Institute made now **until August 31, 2011** will be [matched dollar-for-dollar](http://intelligence.org/donate), up to a total of $125,000.\n[Donate now!](http://intelligence.org/donate) Now is your 
chance to **double your impact** while supporting the Machine Intelligence Research Institute and helping us raise up to $250,000 to help fund [our research program](http://intelligence.org/summary/) and stage the upcoming **[Singularity Summit](http://www.singularitysummit.com/)**… for which you can **[register now](http://www.singularitysummit.com/registration/)**!\n\n\n† $125,000 in backing for this challenge is being generously provided by Rob Zahra, [Quixey](http://www.quixey.com/), Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edsträm, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.\n\n\n\n\n---\n\n\nThe year 2011 has been **huge for Artificial Intelligence**. With the IBM computer Watson defeating two top *Jeopardy!* champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of *Slate* have [argued](http://www.slate.com/id/2298318/) that *“We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.”* We couldn’t agree more — in fact, the Machine Intelligence Research Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the [front cover](http://www.time.com/time/health/article/0,8599,2048138,00.html) of *TIME* magazine.\n\n\nThe last 1.5 years were our biggest ever. Since the beginning of 2010, we have:\n\n\n* Held our annual [Singularity Summit](http://www.singularitysummit.com/), in San Francisco. 
Speakers included Ray Kurzweil, James Randi, Irene Pepperberg, and many others.\n* Held the first Singularity Summit Australia and Singularity Summit Salt Lake City.\n* Held a wildly successful Rationality Minicamp.\n* Published seven research papers, including Yudkowsky’s much-awaited ‘[Timeless Decision Theory](http://intelligence.org/upload/TDT-v01o.pdf)’.\n* Helped philosopher David Chalmers write his seminal paper ‘The Singularity: A Philosophical Analysis’, which has sparked broad discussion in academia, including an [entire issue](http://fragments.consc.net/djc/2010/12/singularity-symposium.html) of *Journal of Consciousness Studies* and a [book](http://singularityhypothesis.blogspot.com/) from Springer devoted to responses to Chalmers’ paper.\n* Launched the Research Associates program.\n* Brought MIT cosmologist Max Tegmark onto our advisory board, published our [Singularity FAQ](http://intelligence.org/ie-faq), and much more.\n\n\nIn the coming year, we plan to do the following:\n\n\n* Hold our annual [Singularity Summit](http://www.singularitysummit.com/), in New York City this year.\n* Publish three chapters in the upcoming academic volume *[The Singularity Hypothesis](http://singularityhypothesis.blogspot.com/)*, along with several other papers.\n* Improve organizational transparency by creating a simpler, easier-to-use website that includes Machine Intelligence Research Institute planning and policy documents.\n* Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.\n* Add additional skilled researchers to our Research Associates program.\n* Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.\n* Diversify our funding sources by applying for targeted grants and advertising our [affinity credit card program](http://intelligence.org/donate/).\n\n\nWe appreciate your support for our high-impact 
work. As PayPal co-founder and Machine Intelligence Research Institute donor Peter Thiel said:\n\n\n\n> “I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Machine Intelligence Research Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”\n> \n> \n\n\n[Donate now](http://intelligence.org/donate), and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.\n\n\nThe post [Announcing the $125,000 Summer Singularity Challenge](https://intelligence.org/2011/07/22/announcing-the-125000-summer-singularity-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2011-07-23T04:13:58Z", "authors": ["Luke Muehlhauser"], "summaries": []} -{"id": "59e847d1f8f03d8d0d94364b099dbfe0", "title": "Tallinn-Evans Challenge Grant Success!", "url": "https://intelligence.org/2011/01/20/tallinn-evans-challenge-grant-success/", "source": "miri", "source_type": "blog", "text": "Thanks to the effort of our donors, the Tallinn-Evans Singularity Challenge has been met! All $125,000 contributed will be matched dollar for dollar by Jaan Tallinn and Edwin Evans, raising a total of $250,000 to fund the Machine Intelligence Research Institute’s operations in 2011. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. 
Keep watching this blog throughout the year for updates on our activity, and sign up for [our mailing list](http://intelligence.org/get-involved/) if you haven’t yet.\n\n\nHere’s to a better future for the human species.\n\n\nWe are preparing a donor page to provide a place for everyone who donated to share some information about themselves if they wish, including their name, location, and a quote about why they donate to the Machine Intelligence Research Institute. If you would like to be included in our public list, please [email us](mailto:admin@intelligence.org).\n\n\nAgain, thank you. The Machine Intelligence Research Institute depends entirely on contributions from individual donors to exist. Money is indeed the [unit of caring](http://lesswrong.com/lw/65/money_the_unit_of_caring/), and one of the easiest ways that anyone can contribute directly to the success of the Machine Intelligence Research Institute. Another important way you can help is by plugging us into your networks, so please [email us](mailto:admin@intelligence.org) if you want to help. \n\n\nIf you’re interested in connecting with other Machine Intelligence Research Institute supporters, we encourage joining our [group on Facebook](http://www.facebook.com/home.php?sk=group_140277979364858). 
There are also local *[Less Wrong](http://lesswrong.com)* meetups in cities like San Francisco, Los Angeles, New York, and London.\n\n\nThe post [Tallinn-Evans Challenge Grant Success!](https://intelligence.org/2011/01/20/tallinn-evans-challenge-grant-success/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2011-01-21T00:00:56Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "ef2a412d247469c73a84985221bf8d1f", "title": "Announcing the Tallinn-Evans $125,000 Singularity Challenge", "url": "https://intelligence.org/2010/12/21/announcing-the-tallinn-evans-125000-singularity-holiday-challenge/", "source": "miri", "source_type": "blog", "text": "Thanks to the generosity of two major donors: Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly, **every contribution to the Machine Intelligence Research Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.**\n\n\nInterested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. 
The Machine Intelligence Research Institute exists to do so through its research, the Singularity Summit, and public education.\n\n\nWe support both direct engagement with the issues and the improvements in methodology and rationality needed to make better progress. Through our [Visiting Fellows program](http://intelligence.org/visiting-fellows/), researchers from undergraduates to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three month stints. Our [Resident Faculty](http://intelligence.org/team), now four researchers, up from three last year, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. MIRI researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.\n\n\nWe are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time MIRI donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a [talk on the Singularity and his life](http://aaltoes.com/2010/10/jaan-tallinn-from-soviets-to-singularity/) at an entrepreneurial group in Finland. Here’s what Jaan has to say about us:\n\n\n![jaan-269x178](http://miri.wpengine.com/wp-content/uploads/2010/12/jaan-269x178.jpg)\n*“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of MIRI’s work cannot be overestimated. 
Not finding any organisation to take up this challenge as seriously as MIRI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”* \n\n— Jaan Tallinn, MIRI donor\n\n\nMake a lasting impact on the long-term future of humanity today — make a [donation to the Machine Intelligence Research Institute](http://intelligence.org/donate) and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at [admin@intelligence.org](mailto:admin@intelligence.org) or read our new [organizational overview.](http://intelligence.org/summary/)\n\n\nThe post [Announcing the Tallinn-Evans $125,000 Singularity Challenge](https://intelligence.org/2010/12/21/announcing-the-tallinn-evans-125000-singularity-holiday-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2010-12-21T23:26:54Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "d2bc53f559f0b185e8b559c34d36b810", "title": "2010 Singularity Research Challenge Fulfilled!", "url": "https://intelligence.org/2010/03/01/2010-singularity-research-challenge-fulfilled/", "source": "miri", "source_type": "blog", "text": "Thanks to our donors, yesterday we met our fundraising goal of $100,000 for the 2010 Singularity Research Challenge. MIRI would like to thank the grant’s matching donors and everyone who contributed. Every donation, however small, funds research and advocacy targeted towards maximizing the probability of a positive Singularity. \n\n\nIf you have any questions or comments about MIRI’s activity or would like to discuss targeted donations for future projects, please feel free to contact us anytime at admin at intelligence dot org. We also encourage you to subscribe to this blog, if you haven’t already, to stay up-to-date on MIRI’s activity. 
\n\n\nAgain, thank you, and here’s to a productive and successful 2010!\n\n\nThe post [2010 Singularity Research Challenge Fulfilled!](https://intelligence.org/2010/03/01/2010-singularity-research-challenge-fulfilled/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2010-03-02T07:26:45Z", "authors": ["Louie Helm"], "summaries": []} -{"id": "f92923bd4d4641e27d04bb449c1db501", "title": "Announcing the 2010 Singularity Research Challenge", "url": "https://intelligence.org/2009/12/23/announcing-the-2010-singularity-research-challenge/", "source": "miri", "source_type": "blog", "text": "Offering unusually good philanthropic returns — meaning greater odds of a positive Singularity and lesser odds of human extinction — the Machine Intelligence Research Institute has launched a new challenge campaign. The sponsors, Edwin Evans, Rolf Nelson, Henrik Jonsson, Jason Joachim, and Robert Lecnik, have generously put up $100,000 of matching funds, so that every donation you make until February 28th will be matched dollar for dollar. If the campaign is successful, it will raise a full $200,000 to fund MIRI’s 2010 activities.\n\n\nFor almost a decade, the Machine Intelligence Research Institute has been asking questions on the future of human civilization: How can we benefit from increasingly powerful technology without succumbing to the risks, up to and including human extinction? What is the best way to handle artificial general intelligence (AGI): programs as smart as humans, or smarter?\n\n\nAmong MIRI’s core aims is to continue studying “Friendly AI”: AI that acts benevolently because it holds goals aligned with human values. This involves drawing on and contributing to fields like decision theory, computer science, cognitive and moral psychology, and technology forecasting.\n\n\nCreating AI, especially the Friendly kind, is a difficult undertaking. 
We’re in it for as long as it takes, but we’ve been doing more than laying the groundwork for Friendly AI. We’ve been raising the profile of AI risk and Singularity issues in academia and elsewhere, forming communities around enhancing human rationality, and researching other avenues that promise to reduce the most severe risks the most effectively.\n\n\nIf you make a donation to the Machine Intelligence Research Institute, you can choose which grant proposal your donation should help to fill. Any time a grant proposal is fully funded, it goes into our “active projects” file: it becomes a project that we have money enough to fund, and that we are publicly committed to funding. (Some of the projects will go forward even without earmarked donations, with money from the general fund — but many won’t, and since our work is limited by how much money we have available to support skilled staff and Visiting Fellows, more money allows more total projects to go forward.)\n\n\nDonate now, and seize a better than usual chance to move our work forward.\n\n\nThe post [Announcing the 2010 Singularity Research Challenge](https://intelligence.org/2009/12/23/announcing-the-2010-singularity-research-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2009-12-24T01:52:04Z", "authors": ["Tom McCabe"], "summaries": []} -{"id": "beec96347a61a1d51efe3763d19eab3c", "title": "Introducing Myself", "url": "https://intelligence.org/2009/02/16/introducing-myself/", "source": "miri", "source_type": "blog", "text": "Many friends of MIRI will know that I have been a supporter of its mission since its founding, and have rendered my informal assistance, including a major role in arranging matching funds for the Institute’s 2007 Challenge Grant.\n\n\nI am pleased to take a more direct role in fostering its success as President of MIRI. 
I have left my previous role as Founder and Chief Strategist at [SirGroovy.com](http://sirgroovy.com/), a growing online music licensing firm, and have been assuming responsibility for the management of the Machine Intelligence Research Institute over the last few weeks.\n\n\nProspective volunteers, donors, and aspiring researchers should now make contact with me rather than with Tyler Emerson.\n\n\nTo those whom I am greeting for the first time, let me introduce myself. On a professional level, I hold a Master of Business Administration from Drexel University, and am coming from a role that combined management, research, analysis, and strategy in a fast-growing music licensing firm from its founding in New York. In that capacity, in a previous role at [Aon](http://www.aon.com/default.jsp), and in my academic studies I have been enduringly interested in finance and economics, particularly the economics of technology and IP. Scientifically, I earned my undergraduate degree in biochemistry and have worked in several labs, as well as serving at the [National Institute of Standards and Technology](http://www.nist.gov/index.html), and have extensively studied the history of science and technology, as well as the potential for biological cognitive enhancement. I have served with the [Peace Corps](http://www.peacecorps.gov/) in Kazakhstan, and am splitting my time between Manhattan, where I live with my wife Aruna, and Silicon Valley.\n\n\nMy interest in the safety of technological development, driven by the potentially grave ethical consequences, is over a decade old. I have been particularly focused on the potential of advanced nanotechnology and artificial intelligence, participating in forums such as Transvision and Foresight conferences, the [SL4](http://www.sl4.org/) mailing list, [Overcoming Bias](http://www.overcomingbias.com/), and organizations such as MIRI and the Center for Responsible Nanotechnology (CRN). 
I plan to make my relevant work available at a single site, but in the meantime I will point to a small selection. For instance, I coauthored an [analysis](http://www.kurzweilai.net/meme/frame.html?main=/articles/art0685.html?) of the risks of advanced molecular manufacturing and mitigating strategies with [Robert Freitas](http://www.rfreitas.com/), and contributed “[Corporate Cornucopia](http://www.kurzweilai.net/articles/art0675.html?printable=1)” as a member of CRN’s Global Task Force. Those who would like to see more can view Michael Anissimov’s archive of some of my [writings](http://www.acceleratingfuture.com/michael/blog/2006/08/michael-vassars-papers/) at his [website](http://www.acceleratingfuture.com/michael/blog/). Two of my most recent talks, an Institute for Ethics and Emerging Technologies [presentation](http://ieet.org/archive/vassar.mp3) on the political implications of different conceptions of willpower and a [Convergence08](http://www.viddler.com/explore/Jeriaska/videos/17/) talk on decision theory for humans, are also available on the web.\n\n\nAs President, I plan to build on MIRI’s successes, such as the Singularity Summit, while also working to increase its internal and extramural research capabilities and output. In the course of the latter, I shall pay particular attention to the publication of research that improves the quality of our thinking about the potential and safety of advanced artificial intelligence, such as MIRI Research Fellow Eliezer Yudkowsky’s [two](https://intelligence.org/files/CognitiveBiases.pdf) [contributions](http://www.intelligence.org/upload/artificial-intelligence-risk.pdf) to the Oxford edited volume, *Global Catastrophic Risks*, and thus to better communicating internal research progress to our supporters.\n\n\nSome of this work will involve indirect, meta-level contributions. 
For instance, recent work at MIRI by Rolf Nelson, Anna Salamon, Steven Rayhawk, Thomas McCabe and others has led to the development of a software tool for combining judgments about particular future scenarios and technological developments to reveal inconsistencies and enable the adoption of a more coherent probability assignment for planning. The content and algorithms of this tool have been completed, and work is now underway to finalize the software’s interface and make it publicly available to improve the quality of reasoning about interrelated technology scenarios, including those involving artificial intelligence.\n\n\nAnother effort involves conducting expert elicitation research to determine the state of academic and non-academic expert opinion regarding timelines and risks for advanced artificial intelligence.\n\n\nFuture research along these lines may explore particular biases and psychological factors affecting attitudes and reasoning related to artificial intelligence.\n\n\nOther research will be directly focused on object-level problems. 
I plan to work vigorously to identify more promising extramural scholars whose work can be fruitfully promoted by MIRI grants, work such as MIRI-Canada Academic Prize Recipient Shane Legg’s “Machine Super-Intelligence.” At the same time I will be working to recruit and make best use of talented Research Fellows for MIRI’s internal efforts.\n\n\nI look forward to describing further directions over the coming months, and invite the advice and opinions of the friends of MIRI at admin@intelligence.org.\n\n\nYours,\n\n\nMichael Vassar\n\n\nThe post [Introducing Myself](https://intelligence.org/2009/02/16/introducing-myself/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2009-02-16T18:04:43Z", "authors": ["Michael Vassar"], "summaries": []} -{"id": "41e3c8bba2599e55eeb8d6c37b7a53c9", "title": "Three Major Singularity Schools", "url": "https://intelligence.org/2007/09/30/three-major-singularity-schools/", "source": "miri", "source_type": "blog", "text": "I’ve noticed that Singularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.\n\n\n\n* **Accelerating Change:**\n\t+ *Core claim:* Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future.\n\t+ *Strong claim:* Technological change follows smooth curves, typically exponential. 
Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.\n\t+ *Advocates:* Ray Kurzweil, Alvin Toffler(?), John Smart\n* **Event Horizon:**\n\t+ *Core claim:* For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.\n\t+ *Strong claim:* To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.\n\t+ *Advocates:* Vernor Vinge\n* **Intelligence Explosion:**\n\t+ *Core claim:* Intelligence has always been the source of technology. If technology can *significantly* improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.\n\t+ *Strong claim:* This positive feedback cycle goes FOOM, like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude – though not necessarily on a smooth exponential pathway. 
Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates *superintelligence* (minds orders of magnitude more powerful than human) before it hits physical limits.\n\t+ *Advocates:* I. J. Good, Eliezer Yudkowsky\n\n\nThe thing about these three *logically distinct* schools of Singularity thought is that, while all three core claims support each other, all three strong claims tend to contradict each other.\n\n\nIf you extrapolate our existing version of Moore’s Law past the point of smarter-than-human AI to make predictions about 2099, then you are contradicting both the strong version of the Event Horizon (which says you can’t make predictions because you’re trying to outguess a transhuman mind) and the strong version of the Intelligence Explosion (because progress will run faster once smarter-than-human minds and nanotechnology drop it into the speed phase of transistors).\n\n\nI find it very annoying, therefore, when these three schools of thought are mashed up into Singularity paste. [Clear thinking requires making distinctions.](http://www.overcomingbias.com/2007/08/the-virtue-of-n.html)\n\n\nBut what is still more annoying is when someone reads a blog post about a newspaper article about the Singularity, comes away with *none* of the three interesting theses, and spontaneously reinvents the dreaded fourth meaning of the Singularity:\n\n\n* **Apocalyptism:** Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.\n\n\nI’ve heard (many) other definitions of the Singularity attempted, but I usually find them to lack separate premises and conclusions. 
For example, the old Extropian FAQ used to define the “Singularity” as the Inflection Point, “the time when technological development will be at its fastest” and just before it starts slowing down. But what makes this an interesting point in history apart from its definition? What are the consequences of this assumption? To qualify as a school of thought or even a thesis, one needs an internal structure of argument, not just a definition.\n\n\nIf you’re wondering which of these is the *original* meaning of the term “Singularity,” it is the Event Horizon thesis of Vernor Vinge, who coined the word.\n\n\nThe post [Three Major Singularity Schools](https://intelligence.org/2007/09/30/three-major-singularity-schools/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2007-09-30T23:11:14Z", "authors": ["Eliezer Yudkowsky"], "summaries": []} -{"id": "225b1a06772d13860ef478f5b1c57876", "title": "The Power of Intelligence", "url": "https://intelligence.org/2007/07/10/the-power-of-intelligence/", "source": "miri", "source_type": "blog", "text": "In our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t *look* dangerous.\n\n\nFive million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws – sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, poisonous venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. 
Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche – for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.\n\n\nThen came the Day of the Squishy Things. \n\n\n\n\nThey had no armor. They had no claws. They had no venoms.\n\n\nIf you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.\n\n\nIn the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers – too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.\n\n\nAnd as for the Squishy Things manipulating DNA – that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, *technically* it’s all one universe, *technically* the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.\n\n\nEven if Squishy Things could *someday* evolve to do any of those feats, it would take thousands of millennia. 
We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, *technically* a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; *no one* could have that much sex.\n\n\nNow explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.\n\n\nI have observed that someone’s flinch-reaction to “intelligence” – the thought that crosses their mind in the first half-second after they hear the word “intelligence” – often determines their flinch-reaction to the Singularity. Often they look up the keyword “intelligence” and retrieve the concept *booksmarts* – a mental image of the Grand Master chessplayer who can’t get a date, or a college professor who can’t survive outside academia.\n\n\n“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first *Homo sapiens* had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species *imagined* money into existence, and it exists – for *us,* not mice or wasps – because we go on believing in it.\n\n\nI keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in *Rain Man*; it is a human being, period. 
It is squishy things that explode in a vacuum, leaving footprints on their moon. Within that grey wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible – the power sometimes called creativity.\n\n\nPeople – venture capitalists in particular – sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be *commercialized.* This is what we call a framing problem.\n\n\nOr maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about travelling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish *all these things at once* seems downright impossible – even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.\n\n\nAnd so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity – well, it just sounds wrong, that’s all.\n\n\nAnd well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even *know* what our real problems are.\n\n\nBut meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.\n\n\nWell, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. 
Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there – real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe – and it’s a tiny little bit harder to figure out how to build a generator.\n\n\nThe post [The Power of Intelligence](https://intelligence.org/2007/07/10/the-power-of-intelligence/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).", "date_published": "2007-07-11T02:41:10Z", "authors": ["Eliezer Yudkowsky"], "summaries": []}