Building a search engine from scratch

A whirlwind tour of the big ideas powering our web search

December 6th, 2019

The previous blog post in this series explored our journey so far in building an independent, alternative search engine. If you haven’t read it yet, we would highly recommend checking it out first!

It is no secret that Google search is one of the most lucrative businesses on the planet. With quarterly revenues of Alphabet Inc. exceeding $40 Billion[1] and a big portion of that driven by the advertising revenue on Google’s search properties, it might be a little surprising to see the lack of competition to Google in this area[2]. We at Cliqz believe that this is partly due to the web search bootstrapping problem: the entry barriers in this field are so massive that the biggest, most successful companies in the world with the resources to tackle the problem shy away from it. This post attempts to detail the bootstrapping problem and explain the Cliqz approach to overcoming it. But let us first start by defining the search problem.

The expectation for a modern web search engine is to be able to answer any user question with the most relevant documents that exist for the topic on the internet. The search engine is also expected to be blazingly fast, but we can ignore that for the time being. At the risk of gross oversimplification, we can define the web search task as computing the content match of each candidate document with respect to the user question (query), computing the current popularity of the document and combining these scores with some heuristic.

The content match score measures how well a given document matches a given query. This could be as simple as an exact keyword match, where the score is proportional to the number of query words present in the document:

query avengers endgame
document avengers endgame imdb

If we could score all our documents this way, filter the ones that contain all the query words and sort them based on some popularity measure, we would have a functioning, albeit toy, search engine. Let us look at the challenges involved in building a system capable of handling just the exact keyword match scenario at a web scale, which is a bare minimum requirement of a modern search engine.
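A toy version of this exact keyword match scorer can be sketched in a few lines (illustrative only; the function name and scoring are made up for this example):

```python
def keyword_match_score(query, document):
    """Fraction of query words that appear in the document text."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    if not query_words:
        return 0.0
    return len(query_words & doc_words) / len(query_words)

# Every query word is present in the document, so the score is 1.0.
keyword_match_score("avengers endgame", "avengers endgame imdb")
```

Keeping only documents where this score is 1.0 and sorting them by popularity gives exactly the toy engine described above.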

According to a published study, a conservative estimate of the number of documents indexed by Google is around 60 billion.

1. The infrastructure costs involved in serving a massive, constantly updating inverted index at scale.

Considering just the text content of these documents, this represents at least a petabyte of data. A linear scan through these documents is technically not feasible, so a well understood solution to this problem is to build an inverted index. The big cloud providers like Amazon, Google or Microsoft are able to provide us with the infrastructure needed to serve this system, but it is still going to cost millions of euros each year to operate. And remember, this is just to get started.
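The idea behind an inverted index is that a query only touches the posting lists of its own words instead of scanning every document. A minimal in-memory sketch (real systems shard, compress and constantly update this structure):

```python
from collections import defaultdict

def build_inverted_index(documents):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in set(text.lower().split()):
            index[word].add(doc_id)
    return index

def lookup(index, query):
    """Ids of documents containing every query word (exact match)."""
    postings = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = {1: "avengers endgame imdb", 2: "avengers trailer"}
index = build_inverted_index(docs)
lookup(index, "avengers endgame")  # {1}
```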

2. The engineering costs involved in crawling and sanitizing the web at scale.

The crawler[3] infrastructure needed to keep this data up to date while detecting newer documents on the internet is another massive hurdle. The crawler needs to be polite (some form of domain level rate-limiting), be geographically distributed, handle multilingual data and aggressively avoid link farms and spider traps[4]. A huge portion of the crawlable[5] web is spam and duplicate content; sanitizing this data is another big engineering effort.

Also, a significant portion of the web is cut off from you if your crawler is not famous. Google has a huge competitive advantage in this regard: many site owners allow just GoogleBot (and maybe BingBot), making it an extremely tedious process for an unknown crawler to get whitelisted on these sites. We would have to handle these sites on a case-by-case basis, knowing that getting their owners' attention is not guaranteed.

3. You need users to get more users (Catch-22)

Even assuming we manage to index and serve these pages, measuring and improving the search relevance is a challenge. Manual evaluation of search results can help get us started, but we would need real users to measure changes in search relevance in order to be competitive.

4. Finding the relevant pages amongst all the noise on the web.

The biggest challenge in search, though, is the removal of noise. The web is so vast that almost any query can be answered by some document; the ability to discard the noise in the process makes all the difference between a useless search engine and a great one. We discussed this topic with some rigor in our previous post, providing a rationale for why using query logs is a smarter way to cut through the noise on the web. We also wrote in depth about how to collect these logs in a responsible manner using Human Web. Feel free to check those posts out for further information.

Query/URL pairs, typically referred to as query logs, are often used by search engines to optimize their ranking and for SEO to optimize incoming traffic. Here is a sample from the AOL query logs dataset[6].

Query Clicked Url
college savings plan
pennsylvania college savings plan
pennsylvania college savings plan

We can use these query logs to build a model of the page outside of its content, which we refer to as page models. The example below comes from a truncated version of the page model that we have at the moment for one particular CNN article on Tesla’s Cybertruck launch. The scores associated with the query are computed as a function of its frequency (i.e. the number of times the query/URL pair was seen in our logs) and its recency (i.e. recently generated query logs are a better predictor for relevance).

  "queries": [
      "tesla cybertruck",
      "tesla truck",
      "new tesla car",
      "pick up tesla",
      "new tesla truck",
      "cyber truck tesla price",
      "how much is the cybertruck",
      "cybertruck unveiling",
      "new tesla cybertruck",
      "cyber truck unveiling",
      ...
  ]

We have hundreds of queries on the page, but even this small sample should provide you with an intuition on how a page model helps us summarize and understand the contents of the page. Even without the actual page text, the page model suggests that the article is about a new Tesla car called the Cybertruck; it details an unveiling event and it contains potential pricing information.
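The exact scoring function is not spelled out here, but a plausible sketch of combining frequency and recency, assuming a log-scaled frequency and an exponential decay with a hypothetical half-life parameter, might look like this:

```python
import math

def query_score(frequency, age_days, half_life_days=30.0):
    """Toy score: damp the (log-scaled) frequency of a query/URL pair
    by an exponential recency decay. Illustrative only; the actual
    Cliqz scoring function is not published."""
    decay = 0.5 ** (age_days / half_life_days)
    return math.log1p(frequency) * decay

fresh = query_score(frequency=50, age_days=1)
stale = query_score(frequency=50, age_days=90)
# fresh > stale: recently observed pairs are better relevance predictors
```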

The more unique queries we can gather for a page, the better our model of the page will be. Our use of Human Web also enables us to collect anonymous statistics on the page, a part of which is shown below. This structure shows the current popularity of the page in different countries, which we use as a popularity signal. We can see that it is very popular in the UK, less so in Australia, and so on.

"counters": {
  "ar": 0.003380009657170449,
  "at": 0.016900048285852245,
  "au": 0.11492032834379527,
  "be": 0.02704007725736359,
  "br": 0.012071463061323033,
  "cn": 0.0014485755673587638,
  "cz": 0.008691453404152583,
  "de": 0.06422018348623854,
  "dk": 0.028971511347175277,
  "fr": 0.025108643167551906,
  "gb": 0.3355866731047803,
  "it": 0.00772573635924674,
  "jp": 0.005311443746982134,
  "ru": 0.0159343312409464,
  "se": 0.0294543698696282,
  "ua": 0.012071463061323033
}

Now that we understand how page models are generated, we can start stepping through the search process. We can break down this process into different stages, as described below.

The TL;DR Version

Here is a high-level overview, if you just want to know how Cliqz search is different:

  1. Our model of a web page is based on queries only. These queries could either be observed in the query logs or could be synthetic, i.e. we generate them. In other words, during the recall phase, we do not try to match query words directly with the content of the page. This is a crucial differentiating factor – it is the reason we are able to build a search engine with dramatically less resources in comparison to our competitors.
  2. Given a query, we first look for similar queries using a multitude of keyword and word vector based matching techniques.
  3. We pick the most similar queries and fetch the pages associated with them.
  4. At this point, we start considering the content of the page. We utilize it for feature extraction during ranking, filtering and dynamic snippet generation.

1. The Query Correction Stage

When the user enters a query into our search box, we first perform some potential corrections on it. This not only involves some normalization, but also expansions and spell corrections, if necessary. This is handled by a service which is called suggest – we will have a post detailing its inner workings soon. For now, we can assume that the service provides us with a list of possible alternatives to the user query.
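As a rough illustration of one thing such a service does, here is a toy spell corrector built on Python's difflib; the vocabulary and similarity cutoff are made up, and the real suggest service is far more sophisticated:

```python
import difflib

# A tiny stand-in for the query vocabulary the real service would use.
VOCAB = ["tesla", "cybertruck", "avengers", "endgame", "stockholm"]

def correct_query(query, vocab=VOCAB, cutoff=0.8):
    """Replace each query word with its closest vocabulary entry, if any."""
    corrected = []
    for word in query.lower().split():
        match = difflib.get_close_matches(word, vocab, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

correct_query("tesl cybertrukc")  # "tesla cybertruck"
```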

2. The Recall and Precision Stages

We can now start building the core of the search engine. Our index contains billions of pages; the job of the search engine is to find the N (usually around 50) most relevant pages for a given query. We can broadly split this problem into two parts: the recall stage and the precision stage.

The recall stage involves narrowing down the billions of pages to a much smaller set of, say, five hundred candidate pages, while trying to get as many relevant pages as possible into this set. The precision stage performs more intensive checks on these candidate pages to filter the top N pages and decide on the final ordering.

2.1 Recall Stage

A common technique used to perform efficient retrieval in search is to build an inverted index. Rather than building one with the words on the page as keys, we use ngrams of the queries in the page model as keys. This lets us build a much smaller and less noisy index.
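A simplified sketch of such a query-ngram index (the page ids and queries below are made up):

```python
from collections import defaultdict

def ngrams(query, n_max=2):
    """All word 1..n_max-grams of a query string."""
    words = query.lower().split()
    return {" ".join(words[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(words) - n + 1)}

def build_query_index(page_models):
    """Inverted index from query ngrams to page ids."""
    index = defaultdict(set)
    for page_id, queries in page_models.items():
        for query in queries:
            for gram in ngrams(query):
                index[gram].add(page_id)
    return index

pages = {"cnn.com/cybertruck": ["tesla cybertruck", "new tesla truck"]}
index = build_query_index(pages)
index["tesla cybertruck"]  # {"cnn.com/cybertruck"}
```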

We can perform various types of matching on this index:

  • Word based exact match: We look for the exact user query in our index and retrieve the linked pages.
  • Word based partial match: We split the user query into ngrams and retrieve the linked pages for each of these.
  • Synonym and stemming based partial match: We stem the words in the user query and retrieve the linked pages for each of its ngrams. We could also replace words in the query with their synonyms and repeat the operation. It should be clear that this approach, if not used with caution, could quickly result in an explosion of candidate pages.

This approach works well in many cases, e.g. when we need to match model numbers, names, codes or rare words: essentially, whenever a particular query token must be present, which is impossible to know beforehand. But it can also introduce a lot of noise, as shown below, because it lacks a semantic understanding of the query.

query soldier of fortune game
document 1 ps2 game soldier fortune
document 2 soldier of fortune games
document 3 (Noise) fortune games
document 4 (Noise) soldier games

An alternative approach to recall is to map these queries to a vector space and match in this higher dimensional space. Each query gets represented by a K dimensional vector, 200 in our case, and we find the nearest neighbors to the user query in this vector space.

The advantage of this approach is that it can match semantically. But aggressive semantic matching can also introduce noise, as illustrated below. The technique can also be unreliable when the query contains rare words, like model numbers or names, as their vector space neighbors could be essentially random.

query soldier of fortune game
document 1 soldier of fortune android
document 2 sof play
document 3 soldier of fortune playstation
document 4 (Noise) defence of wealth game

We train these query vectors based on some custom word embeddings learnt from our data. These are trained on billions of <good query, bad query> pairs collected from our data and use the Byte-Pair-Encoding implementation of SentencePiece[7] to address the issue with missing words in our vocabulary.

We then build an index with billions of these query vectors using Granne[8], our in-house solution for memory efficient vector matching. We will have a post on the internals of Granne soon on this blog, you should definitely look out for it if the topic is of interest.
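Conceptually, the matching performed here is nearest-neighbour search over normalized query vectors. A brute-force cosine-similarity sketch follows; Granne replaces this with a memory-efficient approximate index, and the toy 2-dimensional vectors stand in for the real 200-dimensional embeddings:

```python
import numpy as np

def nearest_queries(query_vec, index_vecs, k=3):
    """Indices and scores of the k most cosine-similar rows of index_vecs."""
    unit = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = unit @ q
    top = np.argsort(-sims)[:k]
    return top, sims[top]

# Toy 2-d "query embeddings"; real vectors are 200-dimensional.
index_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
top, sims = nearest_queries(np.array([1.0, 0.1]), index_vecs, k=2)
```

Brute force is linear in the number of vectors, which is why an approximate index becomes mandatory at the scale of billions of queries.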

We also generate synthetic queries out of titles, descriptions, words in the URL and the actual content of the page. These are, by definition, noisier than the query logs captured by Human Web. But they are needed; without them, newer pages with fresh content would not be retrievable.

Which model do we prefer? The need for semantics is highly dependent on the context of the query which is unfortunately difficult to know a priori. Consider this example:

Scenario 1 how tall are people in stockholm? how tall are people in sweden?
Scenario 2 restaurants in stockholm restaurants in sweden

As you can see, the same semantic matching could result in good or bad matches depending on the query. The semantic matching in Scenario 1 helps us retrieve good results, while the matching in Scenario 2 could return irrelevant ones. Both keyword and vector based models have their strengths and weaknesses. Combining them in an ensemble, together with multiple variations of those models, gives us much better results than any single model in isolation. As one would expect, there is no silver bullet model or algorithm that does the trick.

The recall stage combines the results from both the keyword and vector-based indices. It then scores them with some basic heuristics to narrow down our set of candidate pages. Given the strict latency requirements, the recall stage is designed to be as quick as possible while providing acceptable recall numbers.
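Combining the two candidate sets with a simple weighted heuristic could be sketched like this (the weights and the cutoff of 500 are illustrative; the production heuristics are not described here):

```python
def combine_candidates(keyword_hits, vector_hits,
                       w_keyword=0.6, w_vector=0.4, max_candidates=500):
    """Merge per-page scores from both indices with a weighted sum."""
    scores = {}
    for page, score in keyword_hits.items():
        scores[page] = scores.get(page, 0.0) + w_keyword * score
    for page, score in vector_hits.items():
        scores[page] = scores.get(page, 0.0) + w_vector * score
    ranked = sorted(scores.items(), key=lambda item: -item[1])
    return ranked[:max_candidates]

keyword_hits = {"page_a": 1.0, "page_b": 0.5}
vector_hits = {"page_b": 1.0, "page_c": 0.8}
combine_candidates(keyword_hits, vector_hits)[0]  # top candidate is "page_b"
```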

2.2 Precision Stage

The top pages from the recall stage enter the precision stage. Now that we are dealing with a smaller set of pages, we can subject them to additional scrutiny. Though the earlier versions of Cliqz search used a heuristic driven approach for this, we now use gradient boosted decision trees[9] trained on hundreds of heuristic and machine learned features. These are extracted from the query itself, the content of the page and the features provided by Human Web. The trees are trained on manually rated search results from our search quality team.
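We will not reproduce the production ranker here, but the mechanics of gradient boosting can be shown with a deliberately tiny version: decision stumps on a single feature, each fitted to the residuals of the ensemble so far. Everything below (the feature values, labels and learning rate) is made up for illustration.

```python
def fit_stump(xs, residuals):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for threshold in sorted(set(xs))[:-1]:  # skip max: right side would be empty
        left = [r for x, r in zip(xs, residuals) if x <= threshold]
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        left_mean = sum(left) / len(left)
        right_mean = sum(right) / len(right)
        err = (sum((r - left_mean) ** 2 for r in left)
               + sum((r - right_mean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, threshold, left_mean, right_mean)
    return best[1:]

def boost(xs, ys, rounds=20, learning_rate=0.3):
    """Fit a sequence of stumps, each on the residuals of the previous ones."""
    predictions = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, predictions)]
        threshold, left_value, right_value = fit_stump(xs, residuals)
        stumps.append((threshold, left_value, right_value))
        predictions = [p + learning_rate * (left_value if x <= threshold else right_value)
                       for x, p in zip(xs, predictions)]
    return stumps

def predict(stumps, x, learning_rate=0.3):
    return sum(learning_rate * (lv if x <= t else rv) for t, lv, rv in stumps)

# A made-up single "relevance feature" and manual ratings.
stumps = boost([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 3.0, 3.0])
```

Production-grade libraries implement the same idea over hundreds of features with deeper trees and many more rounds.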

3. Filter Stage

The pages that survive the precision stage are now subject to more checks, which we call filters. Some of these are:

  • Deduplication Filter: This filter improves diversity of results by removing pages that have duplicated or similar content.
  • Language Filter: This filter removes pages which do not match the user’s language or our acceptable languages list.
  • Adult Filter: This filter is used to control the visibility of pages with adult content.
  • Platform Filter: This filter replaces links with platform appropriate ones e.g. mobile users would see the mobile version of the web page, if available.
  • Status Code Filter: This filter removes obsolete pages, i.e. links we cannot resolve anymore.
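The filters compose naturally as a pipeline, each taking and returning a list of results. A sketch with made-up result dictionaries and simplified versions of three of the filters above:

```python
def language_filter(results, allowed=("en", "de")):
    """Drop pages that do not match the acceptable languages list."""
    return [r for r in results if r["lang"] in allowed]

def status_code_filter(results):
    """Drop obsolete pages, i.e. links that no longer resolve."""
    return [r for r in results if r["status"] == 200]

def dedup_filter(results):
    """Keep only the first page for each content fingerprint."""
    seen, unique = set(), []
    for r in results:
        if r["content_hash"] not in seen:
            seen.add(r["content_hash"])
            unique.append(r)
    return unique

def apply_filters(results, filters):
    for f in filters:
        results = f(results)
    return results

results = [
    {"url": "a", "lang": "en", "status": 200, "content_hash": "h1"},
    {"url": "b", "lang": "fr", "status": 200, "content_hash": "h2"},
    {"url": "c", "lang": "en", "status": 404, "content_hash": "h3"},
    {"url": "d", "lang": "en", "status": 200, "content_hash": "h1"},
]
final = apply_filters(results, [language_filter, status_code_filter, dedup_filter])
# only "a" survives all three filters
```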

4. Snippet Generation Stage

Once the result set is finalized, we enhance the links to provide well formatted and query-sensitive snippets. We also extract any relevant structured data the page may contain to enrich its snippet.

The final result set is returned to the user through our API gateway to the various frontend clients.

Maintaining Multiple Mutable Indices

The section on recall presented an extremely basic version of the index for the sake of clarity. But in reality, we have multiple versions of the index running in various configurations using different technologies. We use Keyvi[10], Granne, RocksDB and Cassandra in production to store different parts of our index based on their mutability, latency and compression constraints.

The total size of our index currently is around 50 TB. If you could find a server with the required disk space along with sufficient RAM, it is possible to run our search on localhost, completely disconnected from the internet. It doesn’t get more independent than that.

Search Quality

Search quality measurement plays an integral part in how we test and build our search. Apart from automated sanity checking of results against our competitors, we also have a dedicated team working on manually rating our results. Over the years, we have collected the ratings for millions of results. These ratings are used not only to test the system but to help train the ranking algorithms we mentioned before.


The query logs we collect from Human Web are unfortunately insufficient to build a good quality search. The actual content of the page is not only necessary to perform better ranking, it is required to be able to provide a richer user experience. Enriching the result with titles, snippets, geolocation and images helps the user make an informed decision about visiting a page.

It may seem like Common Crawl would suffice for this purpose, but its coverage outside of the US is poor and its update frequency is too low for use in a search engine.

While we do not crawl the web in the traditional sense, we maintain a distributed fetching infrastructure spread across multiple countries. Apart from fetching the pages in our index at periodic intervals, it is designed to respect politeness constraints and robots.txt while also dealing with blocklists and crawler unfriendly websites.

We still have issues getting our fetcher cliqzbot whitelisted on some very popular domains, like Facebook, Instagram, LinkedIn, GitHub and Bloomberg. If you can help in any way, please do reach out at beta[at]cliqz[dot]com. You’d be improving our search a lot!

Tech Stack

  • We maintain a hybrid deployment of services implemented primarily in Python, Rust, C and Golang.
  • Keyvi is our main index store. We built it to be space efficient and fast, while also providing various approximate matching capabilities through its FST data structure.
  • The mutable part of our indices are maintained on Cassandra and RocksDB.
  • We built Granne to handle our vector based matching needs. It is a lot more memory efficient than other solutions we could find – you can read more about it in tomorrow’s blog post.
  • We use qpick[11] for our keyword matching requirements. It is built to be both memory and space efficient, while being able to scale to billions of queries.

Granne and qpick have been open-sourced under MIT and GPL-2.0 licenses respectively, do check them out!

It is hard to summarize the infrastructural challenges of running a search engine within a small section of this post. We will soon have dedicated blog posts detailing the Kubernetes infrastructure that enables all of the above, so do look out for them!

While some of our early decisions allowed us to drastically reduce the effort required in setting up a web scale search engine, it should be clear by now that there are still a lot of moving parts which must work in unison to enable the final experience. The devil is in the details for each of these steps – topics like language detection, adult content filtering, handling redirects or multilingual word embeddings are challenging at web scale. We will have posts on more of the internals soon, but we would love to hear your thoughts on our approach. Feel free to share and discuss this post, we will try our best to answer your questions!


  1. Alphabet Q3 2019 earnings release – report ↩︎

  2. Statista search engine market share 2019 – link ↩︎

  3. Web Crawler – Wikipedia – link ↩︎

  4. Spamdexing – Wikipedia – link ↩︎

  5. Deep Web – Wikipedia – link ↩︎

  6. Web Search Query Log Downloads – link ↩︎

  7. SentencePiece – GitHub – link ↩︎

  8. Granne – GitHub – link ↩︎

  9. Gradient Boosting – Wikipedia – link ↩︎

  10. Keyvi – GitHub – link ↩︎

  11. qpick – GitHub – link ↩︎


  1. Hello it’s me, I am also visiting this web page regularly, this web site is actually fastidious
    and the people are truly sharing pleasant thoughts.

  2. This is very fascinating, You are a very professional
    blogger. I have joined your feed and stay up for in quest of more of your wonderful post.
    Additionally, I have shared your web site in my social networks

  3. Excellent weblog here! Additionally your website rather a lot up very fast!
    What host are you the usage of? Can I get your associate
    link for your host? I want my web site loaded up as quickly
    as yours lol

  4. Have you ever thought about adding a little bit more than just your articles?
    I mean, what you say is valuable and everything. But think of if you
    added some great images or video clips to give your posts more, “pop”!
    Your content is excellent but with pics and video clips,
    this blog could definitely be one of the
    very best in its niche. Awesome blog!

  5. I have to thank you for the efforts you’ve put in writing this website.
    I’m hoping to see the same high-grade content from you in the future as well.
    In fact, your creative writing abilities has inspired me
    to get my own website now 😉

  6. Thanks a lot for sharing this with all folks you actually
    realize what you are talking approximately! Bookmarked. Kindly
    additionally visit my site =). We may have a hyperlink change
    contract among us

  7. Good post. I learn something new and challenging
    on sites I stumbleupon on a daily basis. It will always be useful to
    read content from other writers and use a little something from other

  8. A person essentially help to make significantly posts I would state.
    This is the very first time I frequented your web page
    and up to now? I amazed with the analysis you made to make this actual
    submit amazing. Wonderful task!

  9. What’s up it’s me, I am also visiting this web page on a regular basis,
    this web page is actually nice and the people are genuinely sharing
    good thoughts.

  10. Wow that was strange. I just wrote an extremely long comment but after I clicked submit my comment didn’t
    appear. Grrrr… well I’m not writing all that over again. Anyway, just
    wanted to say fantastic blog!

  11. I’d like to thank you for the efforts you’ve put in penning this
    website. I’m hoping to see the same high-grade blog posts from you in the future as
    well. In fact, your creative writing abilities has motivated me to get my own,
    personal site now 😉

  12. Hello! I understand this is kind of off-topic however I needed to ask.
    Does managing a well-established blog like yours take a massive amount work?
    I am completely new to blogging however I do write in my diary
    daily. I’d like to start a blog so I will be able
    to share my personal experience and thoughts online.
    Please let me know if you have any suggestions or tips for new aspiring
    blog owners. Appreciate it!

  13. hi!,I love your writing very so much! proportion we communicate more approximately your
    post on AOL? I require a specialist in this area to solve my problem.
    Maybe that is you! Taking a look ahead to look you.

  14. Thanks for every other informative blog. Where else may just I get
    that kind of information written in such an ideal way?
    I’ve a undertaking that I’m just now operating on,
    and I have been at the glance out for such info.

  15. Currently it appears like WordPress is the preferred blogging platform available right now.
    (from what I’ve read) Is that what you’re using on your blog?

  16. Howdy! This is kind of off topic but I need some advice from an established blog.
    Is it tough to set up your own blog? I’m not very techincal but I can figure things out
    pretty fast. I’m thinking about setting up my own but I’m not sure where to begin. Do you
    have any ideas or suggestions? With thanks

  17. Howdy, There’s no doubt that your web site could be having
    web browser compatibility issues. Whenever I look at your blog in Safari, it looks fine but when opening in IE, it has
    some overlapping issues. I simply wanted to give you a quick heads up!
    Apart from that, great site!

  18. Hi! I know this is kinda off topic however , I’d figured
    I’d ask. Would you be interested in trading links or maybe guest
    writing a blog article or vice-versa? My blog covers a lot of the
    same subjects as yours and I feel we could greatly benefit from each
    other. If you are interested feel free to send
    me an email. I look forward to hearing from you!
    Terrific blog by the way!

  19. I really like your blog.. very nice colors & theme. Did you
    make this website yourself or did you hire someone to do it for you?
    Plz respond as I’m looking to construct my own blog and would like to find out where
    u got this from. appreciate it

  20. Hurrah! After all I got a blog from where I be capable of genuinely obtain valuable data regarding my study and

  21. Ahaa, its good conversation on the topic of
    this paragraph here at this webpage, I have read all that, so at this time me also commenting at this place.

  22. That is very interesting, You are an excessively skilled blogger.
    I have joined your rss feed and stay up for searching for more of your great post.
    Additionally, I have shared your site in my
    social networks

  23. I got this web site from my buddy who informed
    me regarding this web page and now this time I am browsing this web page and
    reading very informative articles or reviews
    at this place.

  24. Ahaa, its good conversation regarding this paragraph at this place at this
    webpage, I have read all that, so at this time me also commenting at this place.

  25. Pretty nice post. I just stumbled upon your blog and wished to say that I have truly enjoyed surfing around
    your blog posts. In any case I’ll be subscribing to your rss feed and I hope you write again soon!

  26. You have made some good points there. I looked on the net
    to find out more about the issue and found most individuals will
    go along with your views on this website.

  27. An outstanding share! I’ve just forwarded this onto a colleague who was conducting a little
    research on this. And he in fact bought me dinner simply because I
    found it for him… lol. So let me reword this….
    Thank YOU for the meal!! But yeah, thanx
    for spending time to discuss this matter here on your blog.

  28. Just wish to say your article is as surprising. The clearness for your post is simply cool and i can think you are a professional on this subject.

    Well along with your permission let me to clutch your RSS feed
    to keep up to date with approaching post. Thanks one million and please keep up
    the rewarding work.

  29. I’ve been surfing online more than three hours today, yet I never found
    any interesting article like yours. It’s pretty worth enough for me.

    In my view, if all web owners and bloggers made good content as you did,
    the web will be a lot more useful than ever before.

  30. Normally I don’t read post on blogs, but I wish to
    say that this write-up very pressured me to take a look at and do so!
    Your writing style has been surprised me. Thank you, quite great article.

  31. I like the valuable info you provide in your articles.
    I will bookmark your blog and check again here
    frequently. I am quite sure I will learn many new stuff right
    here! Best of luck for the next!

  32. This is really interesting, You’re an excessively professional
    blogger. I’ve joined your feed and look forward
    to in quest of more of your great post. Also, I have shared your
    website in my social networks

  33. Excellent blog here! Also your website lots up fast!
    What host are you the usage of? Can I am getting your associate link
    to your host? I wish my web site loaded up as fast as yours lol

  34. Wow, amazing weblog layout! How lengthy have you ever been running a blog for?
    you made running a blog glance easy. The total
    glance of your site is excellent, let alone the content material!

  35. Nice post. I used to be checking continuously this weblog and
    I’m inspired! Very helpful info particularly the closing phase 🙂
    I maintain such info a lot. I was seeking this particular information for a long time.
    Thank you and good luck.

  36. I’m not sure where you are getting your information, but good topic.
    I needs to spend some time learning more or understanding more.
    Thanks for great info I was looking for this info for my mission.

  37. Write more, thats all I have to say. Literally, it seems as
    though you relied on the video to make your point. You clearly know
    what youre talking about, why throw away your intelligence on just posting videos to your weblog when you could be giving us something informative to read?

  38. Howdy just wanted to give you a quick heads up. The text in your article seem to be
    running off the screen in Safari. I’m not sure if this is a formatting issue or
    something to do with web browser compatibility but I figured I’d
    post to let you know. The style and design look great though!
    Hope you get the problem fixed soon. Kudos

  39. Excellent post. Keep writing such kind of information on your
    page. Im really impressed by your blog.
    Hello there, You’ve performed a great job. I will definitely digg it and individually suggest to my friends.
    I am confident they will be benefited from this website.

  40. Thanks for your personal marvelous posting! I
    really enjoyed reading it, you may be a great author.
    I will make sure to bookmark your blog and may come back down the road.
    I want to encourage you to continue your great posts,
    have a nice evening!

  41. Hey! I could have sworn I’ve been to this website before but after checking through some of the post I realized it’s new to me.

    Anyways, I’m definitely happy I found it and I’ll be book-marking
    and checking back often!

  42. hello!,I like your writing very much! percentage we keep in touch more about