This section describes the mechanics of the site, for coders and curious people.
I begin with some terminology: I compare a website to a tree. It always has a root (the home page), and each page (called a node) may have branches (links) to child pages. We can thus represent any site by drawing a kind of upside-down genealogical tree, with parents and children. The depth (or level) of a page in the tree is the minimal number of clicks needed to reach it from the home page.
I was trying to solve the following problem: imagine that I want to build a site with a section "Cheese" and a section "Goats". No problem, until the day I want to create a subsection about goat's milk cheese. Should I put it under "Cheese" or under "Goats"?
It's an impossible choice: I want this page to be reachable from "Cheese" as well as from "Goats".
But I don't want to write this page twice: if I make a mistake, I want to correct it only once.
A good way to solve this problem is to use keywords (or labels):
each page is indexed, not by a single fixed position in the site map, but by a set of keywords. In our example, I define the keywords "cheese" and "goat" and fill a database table like this:
| Title | Content | cheese | goat |
| Home page | Welcome to my website... | no | no |
| Cheese | Cheese is food made with milk... | yes | no |
| Goats | Goats are vegetarian animals... | no | yes |
| Goat's milk cheese | The goat's milk cheese, as its name indicates it... | yes | yes |
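Concretely, such a table could look like the following MySQL sketch (the table and column names are only an illustration, not the site's actual schema):

```sql
-- Illustrative schema: one boolean column per keyword
CREATE TABLE pages (
    id        INT AUTO_INCREMENT PRIMARY KEY,
    title     VARCHAR(255) NOT NULL,
    body      TEXT NOT NULL,
    kw_cheese BOOLEAN NOT NULL DEFAULT FALSE,
    kw_goat   BOOLEAN NOT NULL DEFAULT FALSE
);

INSERT INTO pages (title, body, kw_cheese, kw_goat) VALUES
    ('Home page',           'Welcome to my website...',         FALSE, FALSE),
    ('Cheese',              'Cheese is food made with milk...', TRUE,  FALSE),
    ('Goats',               'Goats are vegetarian animals...',  FALSE, TRUE),
    ('Goat''s milk cheese', 'The goat''s milk cheese...',       TRUE,  TRUE);
```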
What's great is that the whole structure of the site is determined by this binary coding.
The children of a given page, take for instance "Cheese", which carries only the keyword "cheese", are all the pages whose keywords are those of the parent plus exactly one more. The level of a page is exactly its number of keywords.
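Under the same illustrative schema, both rules translate directly into SQL (booleans count as 0 or 1 in MySQL):

```sql
-- Level of a page = number of keywords it carries
SELECT title, (kw_cheese + kw_goat) AS level
FROM pages;

-- Children of "Cheese" (one keyword): pages carrying that keyword plus exactly one other
SELECT title
FROM pages
WHERE kw_cheese = TRUE
  AND (kw_cheese + kw_goat) = 2;
```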
With this system, we get several natural ways to reach a page, and that's what I wanted.
So, instead of a tree, it is now more accurate to say that this site is a graph, since several branches may lead to the same node.
To sum up, each page of this site is indexed by some keywords (from a list of 64, see the Search section) in a database table.
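With 64 keywords, the keyword set of a page fits in a single 64-bit integer, one bit per keyword. I do not know the site's actual schema; the sketch below only shows how the parent/child rule could be expressed with such a bitmask in MySQL (the `pages64` table, its `keywords` column, and the page id are made up for the example):

```sql
-- Illustrative variant: the 64 keywords packed into a 64-bit mask
CREATE TABLE pages64 (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    title    VARCHAR(255) NOT NULL,
    body     TEXT NOT NULL,
    keywords BIGINT UNSIGNED NOT NULL DEFAULT 0   -- bit i set = keyword i present
);

-- Children of page 42: all of the parent's keywords, plus exactly one more
SELECT child.title
FROM pages64 AS child
JOIN pages64 AS parent ON parent.id = 42
WHERE (child.keywords & parent.keywords) = parent.keywords
  AND BIT_COUNT(child.keywords) = BIT_COUNT(parent.keywords) + 1;
```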
The PHP/MySQL pair is perfectly suited to this project. My first attempt computed the site map directly from the database on every page load. It worked, but it was a little slow.
You are now browsing the second, much faster version of this site: every time I enter or modify a page, a lexical indexation engine rebuilds a second table in which a page may appear several times, at different locations. You can see that some pages indeed appear several times (with different parents) in the lexical site map; that was my purpose.
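The actual engine is written in PHP and is not reproduced here. As a rough equivalent only, a recursive query (available in MySQL 8.0 and later, so not what the original engine relied on) can enumerate every path of the lexical site map over the illustrative `pages64` table above, which shows why a page with several parents ends up listed several times:

```sql
-- Enumerate every root-to-page path; a page reachable through several
-- parents appears once per path, exactly like in the lexical site map.
WITH RECURSIVE sitemap (page_id, path, keywords) AS (
    SELECT id, CAST(title AS CHAR(1000)), keywords
    FROM pages64
    WHERE keywords = 0                     -- the home page carries no keyword
  UNION ALL
    SELECT p.id, CONCAT(s.path, ' > ', p.title), p.keywords
    FROM sitemap AS s
    JOIN pages64 AS p
      ON (p.keywords & s.keywords) = s.keywords
     AND BIT_COUNT(p.keywords) = BIT_COUNT(s.keywords) + 1
)
SELECT path FROM sitemap ORDER BY path;
```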