

About The Author

Christoph Erdmann has been a web developer since 2001 and has specialized in the frontend since 2019. He has implemented numerous projects for numerous national …

The Embedded Image Preview (EIP) technique introduced in this article allows us to load preview images during lazy loading using progressive JPEGs, Ajax and HTTP range requests without having to transfer additional data.

Low Quality Image Preview (LQIP) and the SVG-based variant SQIP are the two predominant techniques for lazy image loading. What both have in common is that you first generate a low-quality preview image. This will be displayed blurred and later replaced by the original image. What if you could present a preview image to the website visitor without having to load additional data?

JPEG files, for which lazy loading is mostly used, have the possibility, according to the specification, to store the data contained in them in such a way that first the coarse and then the detailed image contents are displayed. Instead of having the image built up from top to bottom during loading (baseline mode), a blurred image can be displayed very quickly, which gradually becomes sharper and sharper (progressive mode).

Representation of the temporal structure of a JPEG in baseline mode
Baseline mode (Large preview)
Representation of the temporal structure of a JPEG in progressive mode
Progressive mode (Large preview)

In addition to the better user experience that comes from an image appearing sooner, progressive JPEGs are usually also smaller than their baseline-encoded counterparts. According to Stoyan Stefanov of the Yahoo development team, for files larger than 10 kB there is a 94 percent probability that the progressive version is the smaller one.

If your website consists of many JPEGs, you will notice that even progressive JPEGs load one after the other. This is because modern browsers only allow six simultaneous connections to a domain. Progressive JPEGs alone are therefore not the solution to give the user the fastest possible impression of the page. In the worst case, the browser will load an image completely before it starts loading the next one.

The idea presented here is now to load only so many bytes of a progressive JPEG from the server that you can quickly get an impression of the image content. Later, at a time defined by us (e.g. when all preview images in the current viewport have been loaded), the rest of the image should be loaded without requesting the part already requested for the preview again.

Shows the way the EIP (Embedded image preview) technique loads the image data in two requests.
Loading a progressive JPEG with two requests (Large preview)

Unfortunately, you can’t tell an img tag in an attribute how much of the image should be loaded at what time. With Ajax, however, this is possible, provided that the server delivering the image supports HTTP Range Requests.

Using HTTP range requests, a client can tell the server in an HTTP request header which bytes of the requested file should be contained in the HTTP response. This feature, supported by all major web servers (Apache, IIS, nginx), is mainly used for video playback: if a user jumps to the end of a video, it would not be very efficient to load the complete video before the user can finally see the desired part. Instead, only the video data around the requested playback position is requested from the server, so that the user can start watching as quickly as possible.
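Such a request/response pair might look like this (path, host and byte values are illustrative; the total size matches the 31.7 kB example image mentioned later):

```
GET /progressive.jpg HTTP/1.1
Host: example.com
Range: bytes=0-8343

HTTP/1.1 206 Partial Content
Content-Type: image/jpeg
Content-Range: bytes 0-8343/31700
Content-Length: 8344
```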

We now face the following three challenges:

  1. Creating The Progressive JPEG
  2. Determining The Byte Offset Up To Which The First HTTP Range Request Must Load The Preview Image
  3. Creating The Frontend JavaScript Code

1. Creating The Progressive JPEG

A progressive JPEG consists of several so-called scan segments, each of which contains a part of the final image. The first scan shows the image only very roughly, while the ones that follow later in the file add more and more detailed information to the already loaded data and finally form the final appearance.

How exactly the individual scans look is determined by the program that generates the JPEGs. In command-line programs like cjpeg from the mozjpeg project, you can even define which data these scans contain. However, this requires more in-depth knowledge, which would go beyond the scope of this article. For this, I would like to refer to my article “Finally Understanding JPG“, which teaches the basics of JPEG compression. The exact parameters that have to be passed to the program in a scan script are explained in the wizard.txt of the mozjpeg project. In my opinion, the parameters of the scan script (seven scans) used by mozjpeg by default are a good compromise between fast progressive structure and file size and can, therefore, be adopted.

To transform our initial JPEG into a progressive JPEG, we use jpegtran from the mozjpeg project. This is a tool for making lossless changes to an existing JPEG. Pre-compiled builds for Windows and Linux are available; if you prefer to play it safe for security reasons, it's better to build them yourself.

From the command line we now create our progressive JPEG:

$ jpegtran input.jpg > progressive.jpg

jpegtran from the mozjpeg project produces a progressive JPEG by default, so this does not need to be specified explicitly. The image data itself is not changed in any way; only its arrangement within the file is.

Metadata irrelevant to the appearance of the image (such as Exif, IPTC or XMP data) should ideally be removed from the JPEG, since metadata decoders can only read the corresponding segments if they precede the image content. Since we therefore cannot move them behind the image data in the file, they would already be delivered with the preview image and enlarge the first request accordingly. With the command-line program exiftool you can easily remove this metadata:

$ exiftool -all= progressive.jpg

If you don’t want to use a command-line tool, you can also use an online compression service to generate a progressive JPEG without metadata.

2. Determining The Byte Offset Up To Which The First HTTP Range Request Must Load The Preview Image

A JPEG file is divided into different segments, each containing different components (image data, metadata such as IPTC, Exif and XMP, embedded color profiles, quantization tables, etc.). Each of these segments begins with a marker introduced by a hexadecimal FF byte, followed by a byte indicating the type of segment. For example, D8 after FF forms the SOI marker FF D8 (Start Of Image), with which every JPEG file begins.

Each start of a scan is marked by the SOS marker (Start Of Scan, hexadecimal FF DA). Since the data behind the SOS marker is entropy coded (JPEGs use the Huffman coding), there is another segment with the Huffman tables (DHT, hexadecimal FF C4) required for decoding before the SOS segment. The area of interest for us within a progressive JPEG file, therefore, consists of alternating Huffman tables/scan data segments. Thus, if we want to display the first very rough scan of an image, we have to request all bytes up to the second occurrence of a DHT segment (hexadecimal FF C4) from the server.

Shows the SOS markers in a JPEG file
Structure of a JPEG file (Large preview)

In PHP, we can use the following code to read the number of bytes required for all scans into an array:

We have to add the value of two to the found position because the browser only renders the last row of the preview image when it encounters a new marker (which consists of two bytes as just mentioned).

Since we are interested in the first preview image in this example, we find the correct position in $positions[1] up to which we have to request the file via HTTP Range Request. To request an image with a better resolution, we could use a later position in the array, e.g. $positions[3].

3. Creating The Frontend JavaScript Code

First of all, we define an img tag, to which we give the just evaluated byte position:
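A sketch of such a tag, with illustrative attribute values (data-bytes holds the byte offset determined above):

```html
<img data-src="progressive.jpg" data-bytes="8343" alt="Example image">
```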

As is often the case with lazy load libraries, we do not define the src attribute directly so that the browser does not immediately start requesting the image from the server when parsing the HTML code.

With the following JavaScript code we now load the preview image:

var $img = document.querySelector("img[data-src]");
var URL = window.URL || window.webkitURL;

var xhr = new XMLHttpRequest();
xhr.onload = function(){
    if (this.status === 206){
        $img.src_part = this.response;
        $img.src = URL.createObjectURL(this.response);
    }
};'GET', $img.getAttribute('data-src'));
xhr.setRequestHeader("Range", "bytes=0-" + $img.getAttribute('data-bytes'));
xhr.responseType = 'blob';

This code creates an Ajax request that tells the server via an HTTP Range header to return the file from the beginning up to the position specified in data-bytes, and no more. If the server understands HTTP Range Requests, it returns the binary image data in an HTTP 206 response (HTTP 206 = Partial Content) in the form of a blob, from which we can generate a browser-internal URL using createObjectURL. We use this URL as the src of our img tag. Thus we have loaded our preview image.

We store the blob additionally at the DOM object in the property src_part, because we will need this data immediately.

In the network tab of the developer console you can check that we have not loaded the complete image, but only a small part. In addition, the loading of the blob URL should be displayed with a size of 0 bytes.

Shows the network console and the sizes of the HTTP requests
Network console when loading the preview image (Large preview)

Since we already load the JPEG header of the original file, the preview image has the correct size. Thus, depending on the application, we can omit the height and width of the img tag.

Alternative: Loading the preview image inline

For performance reasons, it is also possible to transfer the data of the preview image as a data URI directly in the HTML source code. This saves us the overhead of transferring the HTTP headers, but the base64 encoding makes the image data about one third larger. That is put into perspective if you deliver the HTML with a content encoding like gzip or brotli, but you should still reserve data URIs for small preview images.

Much more important is the fact that the preview images are available immediately and there is no noticeable delay for the user when building the page.

First of all, we have to create the data URI, which we then use in the img tag as src. For this, we create the data URI via PHP, whereby this code builds on the code just created, which determines the byte offsets of the scans:

The created data URI is now inserted directly into the img tag as src:
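Sketched with a truncated, illustrative base64 payload:

```html
<img src="data:image/jpeg;base64,/9j/4AAQSkZJRg…" data-src="progressive.jpg" alt="Example image">
```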

Of course, the JavaScript code must also be adapted:

Instead of requesting the data via an Ajax request, where we would immediately receive a blob, in this case we have to create the blob ourselves from the data URI. To do this, we strip the part of the data URI that does not contain image data: data:image/jpeg;base64. We decode the remaining base64-coded data with the atob function. In order to create a blob from the resulting binary string, we have to transfer the data into a Uint8Array, which ensures that the data is not treated as UTF-8 encoded text. From this array, we can now create a binary blob with the image data of the preview image.
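The steps just described can be sketched like this (the function name is hypothetical; atob and Blob are available in browsers and modern Node.js):

```javascript
// Rebuild a binary Blob from a base64 data URI.
function dataUriToBlob(dataUri) {
    // Remove the "data:image/jpeg;base64," prefix, keeping only the payload
    var base64 = dataUri.split(',')[1];
    // Decode base64 into a binary string
    var binary = atob(base64);
    // Copy into a Uint8Array so the bytes are not treated as UTF-8 text
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    // Build a binary blob containing the preview image data
    return new Blob([bytes], { type: 'image/jpeg' });
}
```

In the article's flow, dataUriToBlob($img.src) would produce the blob that is then stored in $img.src_part.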

So that we don’t have to adapt the following code for this inline version, we add the attribute data-bytes on the img tag, which in the previous example contains the byte offset from which the second part of the image has to be loaded.

In the network tab of the developer console, you can also check here that loading the preview image does not generate an additional request, while the file size of the HTML page has increased.

Shows the network console and the sizes of the HTTP requests
Network console when loading the preview image as data URI (Large preview)

Loading the final image

In a second step we load the rest of the image file after two seconds as an example:

setTimeout(function(){
    var xhr = new XMLHttpRequest();
    xhr.onload = function(){
        if (this.status === 206){
            var blob = new Blob([$img.src_part, this.response], { type: 'image/jpeg'} );
            $img.src = URL.createObjectURL(blob);
        }
    };'GET', $img.getAttribute('data-src'));
    xhr.setRequestHeader("Range", "bytes=" + (parseInt($img.getAttribute('data-bytes'), 10) + 1) + '-');
    xhr.responseType = 'blob';
}, 2000);

In the Range header, this time we specify that we want the image from the end position of the preview image to the end of the file. The response to the first request was stored in the src_part property of the DOM object. We use the responses from both requests to create a new blob with new Blob(), which contains the data of the whole image. The blob URL generated from it is again used as the src of the DOM object. Now the image is completely loaded.

Now we can again check the loaded sizes in the network tab of the developer console.

Shows the network console and the sizes of the HTTP requests
Network console when loading the entire image (31.7 kB) (Large preview)


I have provided a prototype where you can experiment with different parameters, and the GitHub repository for the prototype is available as well.

Considerations At The End

Using the Embedded Image Preview (EIP) technique presented here, we can load preview images of varying quality from progressive JPEGs with the help of Ajax and HTTP Range Requests. The data from these preview images is not discarded but instead reused to display the entire image.

Furthermore, no separate preview images need to be created. On the server side, only the byte offset at which the preview image ends has to be determined and saved. In a CMS, it should be possible to save this number as an attribute on an image and take it into account when outputting the img tag. A workflow that appends the offset to the file name, e.g. progressive-8343.jpg, would also be conceivable, so that the offset does not have to be stored separately from the image file; the JavaScript code could then extract it from the file name.

Since the preview image data is reused, this technique could be a better alternative to the usual approach of loading a preview image and then a WebP (with a JPEG fallback for browsers that don't support WebP). The preview image often cancels out the size advantage of WebP, which does not support a progressive mode.

Currently, preview images in classic LQIP are of deliberately low quality, since it is assumed that loading the preview data costs additional bandwidth. As Robin Osborne pointed out in a 2018 blog post, it doesn't make much sense to show placeholders that give no idea of the final image. With the technique suggested here, we can safely show more of the final image by presenting the user a later scan of the progressive JPEG.

If the user is on a weak network connection, it might make sense, depending on the application, not to load the whole JPEG but to omit, for example, the last two scans. This produces a much smaller JPEG with only slightly reduced quality. The user will thank us for it, and we don't have to store an additional file on the server.

Now I wish you a lot of fun trying out the prototype and look forward to your comments.

Smashing Editorial (dm, yk, il)


A quality website is a website with a fast page load. Readers don't like to wait, and it's no secret that page loading time plays an important role in how Google ranks websites. And when it comes to page loading time, image size and image optimization are very important factors.

Here’s a definitive guide on how to optimize images for the web as well as a few extra other techniques and guidelines.

Use The Right Image Format

The 3 most common image formats on the web are .jpg, .png and .gif. Here’s a brief summary of each image file format and when you should use it.

  • png: Use PNG images if the image has text in it, or if you need a transparent background.
  • gif: Use GIF for very small images such as a 5×5 px background tile, or for animated images.
  • jpg: Use JPG or JPEG images for displaying photos, illustration images, etc.

Use Thumbnails Instead of HTML Resizing

HTML and CSS let you resize images by specifying the desired width and height. While this is a useful feature, the image isn't actually resized; it's only displayed at a smaller size. Want to display an image 500px wide? Then resize your original image to 500px and display the resized version instead of the original. This results in a much faster page load and a better user experience.

If you’re using WordPress, the upload tool automatically resizes any uploaded image to various sizes (original, medium, thumb, etc) so you should always choose the appropriate size.

On PHP-based websites, there are many different libraries that allow you to easily generate thumbnails on the fly. ImageMagick is one of the most popular.

Use CSS3 Effects Instead of Images

Need a gradient or a fancy text effect on your website? Don’t use images! The CSS3 specification allows you to add lots of visual effects. One of my rules of thumb when it comes to web design and development is to avoid using images as much as possible.

In other words, if you can do something using CSS, do it with CSS, not images. There’s tons of things that you can do with CSS3 instead of using images, and your website will be faster.
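For example, a background gradient that would once have been an image can be a single CSS rule (selector and colors are illustrative):

```css
/* A CSS3 gradient instead of a background image */
.hero {
  background: linear-gradient(to bottom, #4a90d9, #12355b);
}
```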

Use web fonts instead of encoding text in images

Even in late 2019, I still see lots of people encoding text in images. This is definitely bad: in 90% of cases, you can use a webfont and CSS instead. Webfonts provide a faster page load than a whole bunch of encoded-text images.

Using webfonts is super easy. To ensure good cross-browser compatibility, you need the font you wish to use in the following formats: .ttf, .woff, .svg and .eot. If you only have one of those formats, there's a super useful online tool to help: the Font Squirrel webfont generator.

Drop your fonts somewhere on your web server, then add the following on your .css file:

@font-face {
  font-family: 'Tagesschrift';
  src: url('tagesschrift.eot');
  src: url('tagesschrift.woff') format('woff'),
       url('tagesschrift.ttf') format('truetype'),
       url('tagesschrift.svg#font') format('svg');
}

Once done, you assign the webfont to an element using the font-family property:

p {
    font-family: "Tagesschrift", Georgia, Serif;
}

Make Use of Photoshop’s “Save For Web” Tool

When it comes to web design, Photoshop is by far the most popular program, and most of you are probably using it. Despite its popularity, a lot of users never touch the “Save for Web” feature. That's a shame, because this function provides presets for saving an image optimized for display on a web page.

Basically, if you’re intending to display an image on your website, use Photoshop’s “Save for web” function. Always.

Online Tools for Image Optimization

If you don’t have Photoshop, don’t worry. Optimizing images online has never been easier, thanks to many free websites that provide online image compression. Here are a few tools you can use:

  • Optimizilla: This online image optimizer uses a smart combination of the best optimization and lossy compression algorithms to shrink JPEG and PNG images to the minimum possible size while keeping the required level of quality.
  • TinyPNG: TinyPNG uses image optimization and smart lossy compression techniques to reduce the file size of your PNG files. Although this handy tool focuses on PNG, it can work with other image formats as well.
  •: A very useful online tool for optimizing your images. It supports JPG, PNG, SVG and GIF, offers both lossy and lossless image compression, and can provide up to 90% file size reduction.

Using WordPress? Install an Image Optimization Plugin

If you’re using WordPress, you can save a lot of time by simply installing a plugin that takes care of optimizing your images. I’ve been using WP Smush. It works like a charm: install the plugin, then upload your images normally. WP Smush takes each file you upload and performs an advanced image compression technique that optimizes the file size without compromising image quality.

The results are impressive: file size can be reduced by up to 80%. This will make your website load much faster while keeping good image quality.

Another interesting plugin is Optimole. It features most of the options offered by Smush and adds new functionality: images can be served from a global content delivery network, and there is WebP image support, lazy loading, and more.

Use Caching Techniques to Display Your Images Faster

Although this isn’t really an image optimization technique in itself, caching an image file will make your web pages load faster for returning visitors.

Here’s a ready-to-use code snippet that will cache various file types (gif, png and jpeg images, as well as other kinds of documents such as pdf or flv).

This code has to be pasted into your website’s .htaccess file. Make sure you have a backup of it before applying this technique, just in case something goes wrong.

# Note: the <FilesMatch> extension groups shown here are illustrative

# 1 YEAR
<FilesMatch "\.(flv|pdf|ico)$">
Header set Cache-Control "max-age=29030400, public"
</FilesMatch>

# 1 WEEK
<FilesMatch "\.(jpg|jpeg|png|gif|swf)$">
Header set Cache-Control "max-age=604800, public"
</FilesMatch>

# 2 DAYS
<FilesMatch "\.(xml|txt|css|js)$">
Header set Cache-Control "max-age=172800, proxy-revalidate"
</FilesMatch>

# 1 MIN
<FilesMatch "\.(html|htm|php)$">
Header set Cache-Control "max-age=60, private, proxy-revalidate"
</FilesMatch>

Frequently Asked Questions

Why Is Image Optimization Important?

Image optimization makes your website lighter and therefore gives it a faster load time, resulting in a better user experience.

How do I Optimize An Image For Web Without Losing Quality?

Both Photoshop’s “Save for Web” function and the online tools listed in this article allow you to optimize an image for the web without losing quality.

How Does Image Optimization Work?

Image optimization is a technique that removes all the unnecessary data that is saved within the image, in order to reduce the file size of the image. Optimized images are up to 80% lighter than uncompressed images, resulting in a much faster page load time.