Findability Vocabulary
- Findability:
- The attribute of a person, object, or piece of information that describes how easily it can be located. In relation to search-engines and the information they enable people to locate, "findability" refers to making that information easier to find. To increase the "findability" of a web-site therefore means making it more prominent to search-engine algorithms, the ultimate aim of web-developers being to make their site stand out from the competition.
- Micro-Format:
- A structured form of mark-up embedded within a web-page's HTML (and read by computer applications) to relate data such as geographical information, meteorological data and product descriptions. The data in a micro-format is stored as key-value pairs under pre-determined class names, allowing the receiving application or device to extract the required information quickly. Common micro-formats include hCard (an HTML representation of the vCard contact format) and hResume, which is used to publish CVs; a brief example is shown below.
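As an illustrative sketch (the name, organisation and address are invented), an hCard marks up contact details using agreed class names that a parsing application can extract as key-value pairs:

```html
<!-- hCard micro-format: contact details exposed through standard class names -->
<div class="vcard">
  <span class="fn">Jane Example</span>
  <span class="org">Example Widgets Ltd.</span>
  <a class="url" href="https://www.example.com/">example.com</a>
  <span class="adr">
    <span class="locality">Springfield</span>,
    <span class="country-name">USA</span>
  </span>
</div>
```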
- Search-Engine:
- A software application created (and continually fine-tuned) by companies such as Google, Bing and others to collate information on the billions of web-pages spread across the internet. Search-engines use web-crawlers to seek out this information, which is then stored in databases. The search-engine queries these databases whenever a user enters a 'search term', returning a list of the pages that best match the entered term.
- Site-Map:
- A file containing an XML-based description of the pages and files that make up a web-site, and of how they relate to each other. The site-map is read by search-engine web-crawlers to help them navigate and classify a site more efficiently; a site-map is thus an essential tool of SEO, as any step taken to help the search-bots can only lead to better results. A minimal example is shown below.
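A sketch of a minimal site-map following the sitemaps.org protocol (the URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal site-map listing two pages of a hypothetical site -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/about.html</loc>
    <lastmod>2024-01-10</lastmod>
  </url>
</urlset>
```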
- Web-Traffic Analysis:
- An essential part of SEO is analysing the results generated by the search-engine algorithms to determine how effective the optimisation work has been, and how those results could be further improved. Analysis of the traffic to a web-site can show how many users are visiting, how long they stay, which links they came from, which pages they view, and how much money they spend (and on what). Proper web-traffic analysis is a 'big-picture' discipline; many factors must be taken into consideration to determine how best to improve results.
- Meta-Tag:
- A brief, descriptive term embedded within the <head> section of a web-page to relate data to browsers and web-crawlers. Meta-tags are not displayed on-screen; the information within them is used to modify settings or to allow search-engines to collate data. Commonly-used tags identify the creator of a site ("author"), summarise its purpose ("description") and list search-terms which may be used to define the site ("keywords"); an example is shown below.
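A sketch of how such tags might appear within a page's <head> (the values shown are invented):

```html
<head>
  <!-- Meta-tags: invisible to visitors, read by browsers and web-crawlers -->
  <meta charset="UTF-8">
  <meta name="author" content="Jane Example">
  <meta name="description" content="A short summary of what this site is about.">
  <meta name="keywords" content="findability, SEO, example">
  <title>Example Page</title>
</head>
```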
- mod_rewrite:
- A module of the Apache HTTP Server used to modify URL requests as they reach the server. Using whatever regex conditions have been set in 'RewriteCond' directives, mod_rewrite re-writes or re-directs the requested URL according to the matching 'RewriteRule'. For example, if 'RewriteRule ^example.html$ http://www.google.com/ [R=301]' were entered into the .htaccess file, then any request for 'example.html' would not load that page, but would instead be re-directed (with a 301 'moved permanently' status) to the Google home-page; a fuller sketch is shown below. For further information, refer to the Apache URL Rewriting Guide.
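An illustrative .htaccess sketch (the host names are placeholders) combining a 'RewriteCond' with a 'RewriteRule' to push all requests onto the www host:

```apache
# Switch on the re-writing engine
RewriteEngine On
# Condition: only apply the rule when the request did not arrive via the www host
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
# Rule: permanently (301) re-direct to the same path on www.example.com
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```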
- Search Engine Optimization (SEO):
- The science of making a web-site more findable to search-engine algorithms and, ultimately, to web-users. The goal of SEO is to make a web-site appear as high as possible in the Search Engine Results Pages (SERPs), and thus noticeable to users. Research has shown that users rarely look beyond the first page of results, so SEO involves techniques such as improving the content of a web-site to make it more appealing to the search-engines; repairing errors which might adversely affect results; and examining which key-words will produce the best outcomes for a web-site.
- Web-Crawler:
- Also known as a 'spider', 'spider-bot', or simply 'crawler', these are the 'librarians' of the World Wide Web, used by search-engine software applications to trawl the internet and collate data from web-sites so that search-engine algorithms can index those sites and determine where they will rank in the SERPs for a given search-term. Web-crawlers read a site's robots.txt and site-map files to aid their task (and follow any re-directs configured, for example, through .htaccess); an example robots.txt is sketched below. Web-crawlers employed by other applications can be used for web-scraping and data-driven programming.
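A short robots.txt sketch (the path and URL are placeholders) showing how a site can steer crawlers and point them at its site-map:

```txt
# Apply to every crawler; keep them out of the admin area
User-agent: *
Disallow: /admin/

# Tell crawlers where the site-map lives
Sitemap: https://www.example.com/sitemap.xml
```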
- WordPress:
- WordPress is an open-source content management system (CMS) which makes the creation of web-sites easy, even for those without any knowledge of web-development. Recent statistics show that WordPress is behind more than 43% of all web-sites on the Internet. Originally designed for producing blogs, changes to the core code - as well as a massive library of plug-ins and theme templates - mean that WordPress can now be used to create almost any type of web-site. WordPress was first released on May 27, 2003, by Matt Mullenweg and Mike Little.