<script src="https://cdn.freecodecamp.org/testable-projects-fcc/v1/bundle.js"></script>
<nav id="navbar">
<h1><header id="title">An Overview of HTTP</header></h1>
<a class="nav-link" href="#introduction">Introduction</a>
<a class="nav-link" href="#Components_of_HTTP-based_systems">Components of HTTP-based systems</a>
<a class="nav-link" href="#Basic_aspects_of_HTTP">Basic aspects of HTTP</a>
<a class="nav-link" href="#What_can_be_controlled_by_HTTP">What can be controlled by HTTP</a>
<a class="nav-link" href="#http_flow">HTTP flow</a>
<a class="nav-link" href="#http_messages">HTTP Messages</a>
<a class="nav-link" href="#apis_based_on_http">APIs based on HTTP</a>
<a class="nav-link" href="#conclusion">Conclusion</a>
</nav>
<main id="main-doc">
<section class="main-section" id="introduction">
<header>Introduction</header>
<p>HTTP is a protocol which allows the fetching of resources, such as HTML documents. It is the foundation of any data exchange on the Web and it is a client-server protocol, which means requests are initiated by the recipient, usually the Web browser. A complete document is reconstructed from the different sub-documents fetched, for instance text, layout description, images, videos, scripts, and more.
</p>
<img src="https://mdn.mozillademos.org/files/13677/Fetching_a_page.png" alt="Internet traffic flow">
<p>Clients and servers communicate by exchanging individual messages (as opposed to a stream of data). The messages sent by the client, usually a Web browser, are called requests and the messages sent by the server as an answer are called responses.
</p>
<p>Designed in the early 1990s, HTTP is an extensible protocol which has evolved over time. It is an application layer protocol that is sent over TCP, or over a TLS-encrypted TCP connection, though any reliable transport protocol could theoretically be used. Due to its extensibility, it is used to not only fetch hypertext documents, but also images and videos or to post content to servers, like with HTML form results. HTTP can also be used to fetch parts of documents to update Web pages on demand.
</p>
<img src="https://mdn.mozillademos.org/files/13673/HTTP%20&%20layers.png" alt="HTTP in relation to the stack">
</section>
<section class="main-section" id="Components_of_HTTP-based_systems">
<header>Components of HTTP-based systems</header>
<p>HTTP is a client-server protocol: requests are sent by one entity, the user-agent (or a proxy on behalf of it). Most of the time the user-agent is a Web browser, but it can be anything, for example a robot that crawls the Web to populate and maintain a search engine index.
</p>
<p>
Each individual request is sent to a server, which handles it and provides an answer, called the response. Between the client and the server there are numerous entities, collectively called proxies, which perform different operations and act as gateways or caches, for example.
</p>
<img src="https://mdn.mozillademos.org/files/13679/Client-server-chain.png" alt="Component Network Flow">
<p>In reality, there are more computers between a browser and the server handling the request: there are routers, modems, and more. Thanks to the layered design of the Web, these are hidden in the network and transport layers. HTTP is on top, at the application layer. Although important to diagnose network problems, the underlying layers are mostly irrelevant to the description of HTTP.
</p>
<ul>
<li class="li-title">Client: the user-agent</li>
<p>The user-agent is any tool that acts on the behalf of the user. This role is primarily performed by the Web browser; other possibilities are programs used by engineers and Web developers to debug their applications.
</p>
<p>The browser is always the entity initiating the request. It is never the server (though some mechanisms have been added over the years to simulate server-initiated messages).
</p>
<p>To present a Web page, the browser sends an original request to fetch the HTML document that represents the page. It then parses this file, making additional requests corresponding to execution scripts, layout information (CSS) to display, and sub-resources contained within the page (usually images and videos). The Web browser then mixes these resources to present to the user a complete document, the Web page. Scripts executed by the browser can fetch more resources in later phases and the browser updates the Web page accordingly.
</p>
<p>A Web page is a hypertext document. This means some parts of the displayed text are links which can be activated (usually by a click of the mouse) to fetch a new Web page, allowing the user to direct their user-agent and navigate through the Web. The browser translates these directions into HTTP requests, and further interprets the HTTP responses to present the user with a clear response.
</p>
<li class="li-title">The Web server</li>
<p>On the opposite side of the communication channel is the server, which serves the document as requested by the client. A server may appear to be only a single machine virtually: this is because it may actually be a collection of servers sharing the load (load balancing), or a complex piece of software interrogating other computers (like a cache, a DB server, or e-commerce servers), totally or partially generating the document on demand.
</p>
<p>A server is not necessarily a single machine, and several server software instances can be hosted on the same machine. With HTTP/1.1 and the Host header, they may even share the same IP address; the example after this list shows two requests distinguished only by their Host header.
</p>
<li class="li-title">Proxies</li>
<p>Between the Web browser and the server, numerous computers and machines relay the HTTP messages. Due to the layered structure of the Web stack, most of these operate at the transport, network or physical levels, becoming transparent at the HTTP layer and potentially making a significant impact on performance. Those operating at the application layers are generally called proxies. These can be transparent, forwarding on the requests they receive without altering them in any way, or non-transparent, in which case they will change the request in some way before passing it along to the server. Proxies may perform numerous functions:
<ul>
<li>caching (the cache can be public or private, like the browser cache)</li>
<li>filtering (like an antivirus scan or parental controls)</li>
<li>load balancing (to allow multiple servers to serve the different requests)</li>
<li>authentication (to control access to different resources)</li>
<li>logging (allowing the storage of historical information)</li>
</ul>
</p>
</ul>
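<p>For illustration, both requests below could arrive at the same IP address; with HTTP/1.1, the server uses the Host header to decide which site's content to serve. The domain names are placeholders.
</p>
<code>GET /index.html HTTP/1.1
Host: shop.example.com

GET /index.html HTTP/1.1
Host: blog.example.com</code>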
</section>
<section class="main-section" id="Basic_aspects_of_HTTP">
<header>Basic aspects of HTTP</header>
<ul>
<li class="li-title">HTTP is simple</li>
<p>HTTP is generally designed to be simple and human readable, even with the added complexity introduced in HTTP/2 by encapsulating HTTP messages into frames. HTTP messages can be read and understood by humans, providing easier testing for developers, and reduced complexity for newcomers.
</p>
<li class="li-title">HTTP is extensible</li>
<p>Introduced in HTTP/1.0, HTTP headers make this protocol easy to extend and experiment with. New functionality can even be introduced by a simple agreement between a client and a server about a new header's semantics.
</p>
<li class="li-title">HTTP is stateless, but not sessionless</li>
<p>HTTP is stateless: there is no link between two requests being successively carried out on the same connection. This can be problematic for users attempting to interact with certain pages coherently, for example, when using e-commerce shopping baskets. But while the core of HTTP itself is stateless, HTTP cookies allow the use of stateful sessions. Using header extensibility, HTTP cookies are added to the workflow, allowing each HTTP request in a session to share the same context, or the same state (an example exchange appears after this list).
</p>
<li class="li-title">HTTP and connections</li>
<p>A connection is controlled at the transport layer, and is therefore fundamentally out of scope for HTTP. HTTP doesn't require the underlying transport protocol to be connection-based; it only requires it to be reliable, that is, to not lose messages (or at least to report an error when it does). Among the two most common transport protocols on the Internet, TCP is reliable and UDP isn't. HTTP therefore relies on the TCP standard, which is connection-based.
</p>
<p>Before a client and server can exchange an HTTP request/response pair, they must establish a TCP connection, a process which requires several round-trips. The default behavior of HTTP/1.0 is to open a separate TCP connection for each HTTP request/response pair. This is less efficient than sharing a single TCP connection when multiple requests are sent in close succession.
</p>
<p>In order to mitigate this flaw, HTTP/1.1 introduced pipelining (which proved difficult to implement) and persistent connections: the underlying TCP connection can be partially controlled using the Connection header. HTTP/2 went a step further by multiplexing messages over a single connection, helping keep the connection warm and more efficient.
</p>
<p>Experiments are in progress to design a better transport protocol more suited to HTTP. For example, Google is experimenting with QUIC, which builds on UDP to provide a more reliable and efficient transport protocol.
</p>
</ul>
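<p>As a sketch of how cookies layer sessions on top of stateless HTTP, the exchange below shows a server setting a session cookie in its response and the client sending it back on a later request; the cookie name and value are invented for illustration.
</p>
<code>HTTP/1.1 200 OK
Content-Type: text/html
Set-Cookie: sessionId=abc123; HttpOnly

GET /cart HTTP/1.1
Host: www.example.com
Cookie: sessionId=abc123</code>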
</section>
<section class="main-section" id="What_can_be_controlled_by_HTTP">
<header>What can be controlled by HTTP</header>
<p>This extensible nature of HTTP has, over time, allowed for more control and functionality of the Web. Cache or authentication methods were functions handled early in HTTP history. The ability to relax the origin constraint, by contrast, has only been added in the 2010s.
</p>
<p>Here is a list of common features controllable with HTTP.
<ul>
<li class="li-title">Caching</li>
<p>How documents are cached can be controlled by HTTP. The server can instruct proxies and clients about what to cache and for how long. The client can instruct intermediate cache proxies to ignore the stored document. (Example response headers appear after this list.)
</p>
<li class="li-title">Relaxing the origin constraint</li>
<p>To prevent snooping and other privacy invasions, Web browsers enforce strict separation between Web sites. Only pages from the same origin can access all the information of a Web page. Though such a constraint is a burden to the server, HTTP headers can relax this strict separation on the server side, allowing a document to become a patchwork of information sourced from different domains; there could even be security-related reasons to do so.
</p>
<li class="li-title">Authentication</li>
<p>Some pages may be protected so that only specific users can access them. Basic authentication may be provided by HTTP, either using the WWW-Authenticate and similar headers, or by setting a specific session using HTTP cookies.
</p>
<li class="li-title">Proxy and tunneling</li>
<p>Servers or clients are often located on intranets and hide their true IP address from other computers. HTTP requests then go through proxies to cross this network barrier. Not all proxies are HTTP proxies. The SOCKS protocol, for example, operates at a lower level. Other protocols, like FTP, can be handled by these proxies.
</p>
<li class="li-title">Sessions</li>
<p>Using HTTP cookies allows you to link requests with the state of the server. This creates sessions, despite basic HTTP being a stateless protocol. This is useful not only for e-commerce shopping baskets, but also for any site allowing user configuration of the output.
</p>
</ul>
</p>
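<p>The illustrative responses below bring several of these features together: a 401 response challenging the client to authenticate with the WWW-Authenticate header, and a 200 response carrying caching and cross-origin (CORS) headers. All values are placeholders.
</p>
<code>HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Example Admin Area"

HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Access-Control-Allow-Origin: https://app.example.com
Content-Type: application/json</code>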
</section>
<section class="main-section" id="http_flow">
<header>HTTP Flow</header>
<p>When a client wants to communicate with a server, either the final server or an intermediate proxy, it performs the following steps:
<ol>
<li>Open a TCP connection: The TCP connection is used to send a request, or several, and receive an answer. The client may open a new connection, reuse an existing connection, or open several TCP connections to the servers.</li>
<li>Send an HTTP message: HTTP messages (before HTTP/2) are human-readable. With HTTP/2, these simple messages are encapsulated in frames, making them impossible to read directly, but the principle remains the same. For example:
</li>
<code>GET / HTTP/1.1
Host: developer.mozilla.org
Accept-Language: fr</code>
<li>Read the response sent by the server, such as:</li>
<code>HTTP/1.1 200 OK
Date: Sat, 09 Oct 2010 14:28:02 GMT
Server: Apache
Last-Modified: Tue, 01 Dec 2009 20:18:22 GMT
ETag: "51142bc1-7449-479b075b2891b"
Accept-Ranges: bytes
Content-Length: 29769
Content-Type: text/html
&lt;!DOCTYPE html&gt;... (here come the 29769 bytes of the requested web page)</code>
<li>Close or reuse the connection for further requests.</li>
</ol>
</p>
<p>If HTTP pipelining is activated, several requests can be sent without waiting for the first response to be fully received. HTTP pipelining has proven difficult to implement in existing networks, where old pieces of software coexist with modern versions. HTTP pipelining has been superseded in HTTP/2 by more robust multiplexing of requests within a frame.
</p>
</section>
<section class="main-section" id="http_messages">
<header>HTTP Messages</header>
<p>HTTP messages, as defined in HTTP/1.1 and earlier, are human-readable. In HTTP/2, these messages are embedded into a binary structure, a frame, allowing optimizations like compression of headers and multiplexing. Even if only part of the original HTTP message is sent in this version of HTTP, the semantics of each message is unchanged and the client reconstitutes (virtually) the original HTTP/1.1 request. It is therefore useful to comprehend HTTP/2 messages in the HTTP/1.1 format.</p>
<p>There are two types of HTTP messages, requests and responses, each with its own format.
</p>
<ul>
<li class="li-title">Requests</li>
<p>An example HTTP request:
</p>
<img src="https://mdn.mozillademos.org/files/13687/HTTP_Request.png" alt="Example HTTP Request">
<p>Requests consist of the following elements:
<ul>
<li>An HTTP method, usually a verb like GET, POST or a noun like OPTIONS or HEAD that defines the operation the client wants to perform. Typically, a client wants to fetch a resource (using GET) or post the value of an HTML form (using POST), though more operations may be needed in other cases.</li>
<li>The path of the resource to fetch; the URL of the resource stripped of elements that are obvious from the context, for example without the protocol (http://), the domain (here, developer.mozilla.org), or the TCP port (here, 80).</li>
<li>The version of the HTTP protocol.</li>
<li>Optional headers that convey additional information for the servers.</li>
<li>Optionally, a body, for some methods like POST, similar to those in responses, which contains the resource sent (see the annotated example after this list).</li>
</ul>
</p>
<li class="li-title">Responses</li>
<p>An example response:
</p>
<img src="https://mdn.mozillademos.org/files/13691/HTTP_Response.png" alt="Example HTTP Response">
<p>Responses consist of the following elements:
<ul>
<li>The version of the HTTP protocol they follow.</li>
<li>A status code, indicating if the request was successful, or not, and why.</li>
<li>A status message, a non-authoritative short description of the status code.</li>
<li>HTTP headers, like those for requests.</li>
<li>Optionally, a body containing the fetched resource.</li>
</ul>
</p>
</ul>
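<p>To tie these elements to concrete syntax, here is a sketch of a POST request carrying a small form body, followed by a possible response. The first line holds the method, path, and protocol version; headers follow; a blank line separates them from the body. The URL, header values, and body are made up for illustration.
</p>
<code>POST /contact HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 27

name=Alice&amp;message=Hello%21

HTTP/1.1 201 Created
Content-Length: 0</code>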
</section>
<section class="main-section" id="apis_based_on_http">
<header>APIs based on HTTP</header>
<p>The most commonly used API based on HTTP is the XMLHttpRequest API, which can be used to exchange data between a user agent and a server. The modern Fetch API provides the same capabilities with a more powerful and flexible feature set.
</p>
<p>Another API, server-sent events, is a one-way service that allows a server to send events to the client, using HTTP as a transport mechanism. Using the EventSource interface, the client opens a connection and establishes event handlers. The client browser automatically converts the messages that arrive on the HTTP stream into appropriate Event objects, delivering them to the event handlers that have been registered for the event's type if known, or to the onmessage event handler if no type-specific event handler was established.
</p>
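<p>As a sketch of what such an event stream can look like on the wire, the response below uses the text/event-stream content type and delivers two events: the first would reach the onmessage handler, while the named one would reach a handler registered for the "alert" type. The data values are invented for illustration.
</p>
<code>HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache

data: {"temperature": 21}

event: alert
data: {"temperature": 35}</code>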
</section>
<section class="main-section" id="conclusion">
<header>Conclusion</header>
<p>HTTP is an extensible protocol that is easy to use. The client-server structure, combined with the ability to simply add headers, allows HTTP to advance along with the extended capabilities of the Web.
</p>
<p>Though HTTP/2 adds some complexity by embedding HTTP messages in frames to improve performance, the basic structure of messages has stayed the same since HTTP/1.0. Session flow remains simple, allowing it to be investigated and debugged with a simple HTTP message monitor.
</p>
<code>codebox unused but required for test.</code>
<code>codebox unused but required for test.</code>
<code>codebox unused but required for test.</code>
</section>
</main>
<footer>
<p>2019 WebDesign by KevinR-Thompson</p>
<p>*All the documentation on this page is taken from <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Overview" target="_blank">MDN</a></p>
</footer>
#navbar {
min-width: 290px;
position: fixed;
top: 0;
left: 0;
display: flex;
flex-direction: column;
width: 300px;
height: 88%;
overflow-y: auto;
overflow-x: hidden;
}
header {
border-bottom: solid;
font-size: 1.8em;
font-weight: bold;
}
.nav-link {
border: solid;
border-color: black;
padding: 10px;
font-size: 1.5em;
}
#main-doc {
width: 73%;
height: 100%;
position: absolute;
top: 0;
left: 26%;
display: flex;
flex-direction: column;
}
.main-section {
position: relative;
top: 5.3%;
padding-bottom: 10%;
}
img {
max-width: 75%;
}
code {
display: block;
text-align: left;
white-space: pre;
position: relative;
word-break: normal;
word-wrap: normal;
line-height: 2;
background-color: #f7f7f7;
padding: 15px;
margin: 10px;
border-radius: 5px;
}
.li-title {
font-weight: bold;
background-color: hsl(0 0% 90%);
}
#conclusion {
padding-bottom: 20%;
}
footer {
position: fixed;
bottom: 0;
background-color: white;
width: 100%;
text-align: center;
}
@media only screen and (max-width: 1200px) {
#navbar {
background-color: white;
position: fixed;
top: 0;
padding: 0;
margin: 0;
width: 98%;
height: 265px;
z-index: 1;
}
#main-doc {
position: relative;
left: 0px;
width: 99%;
margin-left: 0px;
margin-top: 270px;
}
}
@media only screen and (max-width: 400px) {
#main-doc {
margin-left: -10px;
}
code {
margin-left: -20px;
width: 100%;
padding: 15px;
padding-left: 10px;
padding-right: 45px;
min-width: 233px;
}
}
// !! IMPORTANT README:
// You may add additional external JS and CSS as needed to complete the project, however the current external resource MUST remain in place for the tests to work. BABEL must also be left in place.