HTML

            
<div class="jumbotron">
  <h1 class="text-center" id="header-name">Eliezer Yudkowsky<br><small>fellow rationalist</small></h1>
</div>
<div class="container-fluid">
  <div class="row">
    <div class="col-md-7 col-lg-7 col-xs-12">
      <p class="lead"><b>Eliezer Shlomo Yudkowsky</b> (born September 11, 1979) is an American artificial intelligence researcher known for popularizing the idea of friendly artificial intelligence. He is a Research Fellow and co-founder at the Machine Intelligence Research
        Institute, a private research nonprofit based in Berkeley, California.</p>
      <p class="lead">Eliezer Yudkowsky writes about the fine art of human rationality. Over the last few decades, science has found an increasing amount to say about sanity. Probability theory and decision theory give us the formal math; and experimental psychology,
        particularly the subfield of cognitive biases, has shown us how human beings think in practice. Now the challenge is to apply this knowledge to life – to see the world through that lens.</p>
    </div>
    <div class="col-md-4 col-lg-4 col-xs-12 col-md-offset-1 col-lg-offset-1">
      <img class="img-responsive" src="https://upload.wikimedia.org/wikipedia/commons/3/35/Eliezer_Yudkowsky%2C_Stanford_2006_%28square_crop%29.jpg" alt="Image of Eliezer" />
    </div>
  </div>

  <div>
    <h2>Academic publications</h2>
    <ul>
      <li>Yudkowsky, Eliezer (2007). <small>"Levels of Organization in General Intelligence" (PDF). Artificial General Intelligence. Berlin: Springer.</small></li>
      <li>Yudkowsky, Eliezer (2008). <small>"Cognitive Biases Potentially Affecting Judgement of Global Risks" (PDF). In Bostrom, Nick; Ćirković, Milan. Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.</small></li>
      <li>Yudkowsky, Eliezer (2008). <small>"Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan. Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.</small></li>
      <li>Yudkowsky, Eliezer (2011). <small>"Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.</small></li>
      <li>Yudkowsky, Eliezer (2012). <small>"Friendly Artificial Intelligence". In Eden, Amnon; Moor, James; Søraker, John; et al. Singularity Hypotheses: A Scientific and Philosophical Assessment. Berlin: Springer. ISBN 978-3-642-32559-5.</small></li>
      <li>Bostrom, Nick; Yudkowsky, Eliezer (2014). <small>"The Ethics of Artificial Intelligence" (PDF). In Frankish, Keith; Ramsey, William. The Cambridge Handbook of Artificial Intelligence. New York: Cambridge University Press. ISBN 978-0-521-87142-6.</small></li>
      <li>LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014).<small> "Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.</small></li>
      <li>Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). <small>"Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.</small></li>
    </ul>
  </div>

  <h2>Famous works</h2>
  <ul>
    <li>Harry Potter and the Methods of Rationality <small><a href="http://hpmor.com/">link</a></small></li>
    <li>Rationality: From AI to Zombies <small><a href="https://intelligence.org/rationality-ai-zombies/">link</a></small></li>
  </ul>
  <br />
  <hr />
  <div class="footer">
    <p class="lead text-center">To learn more, visit <a href="https://en.wikipedia.org/wiki/Eliezer_Yudkowsky">wikipedia page</a></p>
    <p class="lead text-center"><small>Created By: <a href="https://www.freecodecamp.com/xRahul">RJ</a></small></p>
  </div>
</div>
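
The markup above leans on Bootstrap 3 classes (jumbotron, container-fluid, img-responsive, and the col-* grid), which the Pen pulls in through CodePen's external-stylesheet setting rather than in the markup itself. Outside CodePen, a Bootstrap 3 stylesheet has to be linked in the document head; a minimal sketch follows, where the specific CDN URL is an assumption and not part of the original Pen.

<!-- Assumed dependency: any Bootstrap 3.x build; 3.3.7 from BootstrapCDN is used here as an example -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">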
            
          
CSS

.jumbotron {
  background-image: url("http://www.yudkowsky.net/assets/10/home.jpg");
}

#header-name {
  color: #FFF;
}

.footer {
  background-color: #F1F1F1;
}
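
The jumbotron background image is applied at its natural size, so on wide viewports it may tile or crop rather than fill the header. A background-size rule would scale it to cover the area; this is an assumed enhancement, not part of the original Pen.

/* Assumed enhancement: scale the header background to fill the jumbotron */
.jumbotron {
  background-size: cover;
  background-position: center;
}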
            
          