Why (not) segmentation?

I am studying operating systems and the x86 architecture, and while reading about segmentation and paging I was naturally curious how modern OSes handle memory management. From what I found, Linux and most other operating systems essentially shun segmentation in favor of paging. A few of the reasons I found for this were simplicity and portability.
What practical uses are there for segmentation (x86 or otherwise)? Will we ever see robust operating systems using it, or will they continue to favor paging-based systems?
Now, I know this is a loaded question, but I am curious how segmentation would be handled in newly developed operating systems. Does it make so much sense to favor paging that no one will consider a more 'segmented' approach? If so, why?
And when I say 'shun' segmentation, I mean that Linux uses it only as far as it has to: just four segments, for user and kernel code/data. While reading the Intel documentation, I got the feeling that segmentation was designed with more robust solutions in mind. Then again, I have been told on many occasions how overcomplicated x86 can be.
I found this interesting anecdote after being linked to Linus Torvalds's original 'announcement' of Linux. He said this a few posts later:
Simply, I'd say that porting is impossible. It's mostly in C, but most people wouldn't call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As already mentioned, it uses a MMU, for both paging (not to disk yet) and segmentation. It's the segmentation that makes it REALLY 386 dependent (every task has a 64Mb segment for code & data - max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).
I guess my own experimentation with x86 led me to ask this question. Linus didn't have StackOverflow, so he just implemented it to try it out.