2012/11/04

Software Rants 4: The 21st Century Terminal

I can't use Vi or Emacs because both of them have keybinding sets that could fill an entire textbook, and I don't have 3 years to learn this stuff.  I just want to write code.  It shouldn't be that complicated.  Same thing with shell script (if ['""'''"]; then fi needs to be taken out back and shot).  Same thing with HTML and JavaScript too, really.  They are all entrenched standards that grew out of much simpler systems, and they worked much better in their original conception, when they were less shitty and less kitchen-sink than they are today.
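For the record, here is the kind of quoting ritual that joke is gesturing at; a generic POSIX sh example, not from any particular script:

    # The quoting ceremony being mocked: drop the quotes or the spaces around
    # the brackets and [ silently misbehaves on empty or space-containing values.
    if [ -z "${1:-}" ]; then
        echo "usage: $0 <file>" >&2
        exit 1
    fi

    # String comparison needs its own ritual: quotes, spaces around =, closing ].
    if [ "$answer" = "yes" ]; then
        echo "proceeding"
    fi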

So I like zsh over bash for no readily apparent reason, since I don't use any of its built-in features, but probably because it is not the ubiquitous default.  Assumed defaults are dangerous, and they are often where the cracks in the paint show most clearly.  So here is my ignorant rant about terminals.

TTYs are garbage.  The fact that X runs on TTY7 makes me feel nauseous, because 99% of Linux users never hit ctrl-alt-F1 through F6 and so never learn what the default terminals are even for.  The TTYs use their own graphics layer, font scaling, smoothing, resolution, and so on, because they are completely disconnected from the modern graphical environment built on X or Wayland or whatever your graphics stack wants to be.  They add significant undue complexity because they stand starkly apart from what anyone expects in a "modern" OS.  Terminal emulators, meanwhile, are just X frames designed to trick bash, zsh, etc. into thinking they are running in a TTY environment.  If you have a faulty system you might get dumped to a TTY with some random program running in place of whatever your shell of choice might be, but there is nothing wrapping your terminal to guard you; you just have a terminal, some daemons (NOT NORMAL PROGRAMS, NO, YOU NEED TO INTERACT WITH THOSE THROUGH ARBITRARY INIT CONTROLS!) and your shell, plus anything you run with it.  Maybe you fork something.  I wonder what tty7 would look like with all the standard pipes it deals with.

The root problem is a disconnect between the 3D graphical environment and the display environment.  Two decades ago, nobody had the 3D stuff and everyone assumed VGA was the be-all end-all of display technology.  The modern kernel has .ko modules for nvidia, radeon, fglrx, intel, or nouveau that act as display controllers but not necessarily 3D rendering engines.  They don't inherently provide OpenGL hooks; those come from X, and the entire stack looks like a big mess of multiple entry points into what is, in the end, the same hardware device.

In my theoretical perfect OS, you assume the presence of something that can run a standard graphics ABI library.  Even the simplest terminal runs on "hardware accelerated" (aka GPU-bound) code, which the system can fall back to running over an abstracted pipe layer if there is no dedicated graphics hardware (but by bacon, I'd mandate some kind of heterogeneous processing node with small massively parallel SIMD cores and big, deeply pipelined, heavily cached general-purpose cores on the same die, in the same ALU and FPU soup).  You would have a tiny microkernel consisting of a CPU scheduler, memory controller, virtual memory manager, and hardware device hooks.  Its payload is whatever is provided to it at boot, be it a network-boot host controller script, an init framework, or a shell executable.  If your init doesn't set up controllers for the various devices the OS can see but doesn't burden itself to manage (because in the end, all that hardware is exposed by firmware as virtual memory maps anyway, and you can just hook those into user-space applications that manage them), then that hardware just never does anything.  A kernel need be no more than "hey, the CPU is running, I'm scheduling executing code, maintaining a page table, maintaining a process table, and scheduling the execution of stuff; here is the rest of the devices the firmware gave me to play with, someone else can have them".

So you have an init framework.  Its job would probably be simple too: review the kernel device hooks, find the device drivers for the hardware, and bring them together in matrimony.  Along the way, you start a virtual file system server, device handlers for physical devices, and device controllers for any actual hardware storage devices.  The VFS handles translating file system calls into device lookups or calls to other applications.  You would probably also have a socket server that everything else relies on to make RPC calls across the system, and you can use the VFS to reference those sockets as files.  Because everything is a file.
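Sketched as a shell script, just to make the job concrete; to be clear, vfs-server, socketd, and devd are invented placeholder names for the userspace servers described above, not real programs:

    #!/bin/sh
    # Hypothetical init sketch.  vfs-server, socketd, and devd are made-up
    # names standing in for the userspace servers described above.

    # 1. Ask the kernel what devices the firmware handed over.
    devices=$(cat /kernel/devices)   # hypothetical kernel-exported device list

    # 2. Start the arbiters everything else depends on.
    vfs-server &                     # translates file calls into device lookups
    socketd &                        # RPC sockets, referenced as files via the VFS

    # 3. Marry each remaining device to a driver, as a userspace device host.
    for dev in $devices; do
        devd --attach "$dev" &
    done

    # 4. Hand off to the configured payload: a shell, a display manager, whatever.
    exec /bin/sh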

But you have your kernel of memory management and process scheduling, your device controllers, and then some abstraction-layer servers that act as arbiters of device control: the VFS server, a socket server, a display server, a sound server, an input server.  And once init is done, it could launch either a shell or a desktop manager of some kind.  Or nothing; it could just let the system sleep.  Or maybe you payload prime95 at a low execution priority to saturate the cores when idle.  This shouldn't be rocket science.  It should be a configuration file in some standardized serialization format (probably something like JSON) that just reads devices : discover { graphics : radeon, default : auto } (default auto would just deduce the controller for a device given kernel info about it), controllers : { vfs : some-userspace-vfs-host, display : xorg, network : network-manager }, payload : /bin/bash, or /bin/gdm, or something.  Not hard: specialized tags like XML has, and the documentation wouldn't be hard either.
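Spelled out as an actual JSON file, just to show how little there is to it; every key and value here is illustrative, taken from the sketch above rather than any existing spec:

    {
        "devices": {
            "discover": { "graphics": "radeon", "default": "auto" }
        },
        "controllers": {
            "vfs": "some-userspace-vfs-host",
            "display": "xorg",
            "network": "network-manager"
        },
        "payload": "/bin/bash"
    }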

The only hard part is that the init framework would need to figure out where and how to start a VFS server without already having one running.  Maybe that also belongs in the kernel; it seems important enough.  Virtual file system, virtual memory, and virtual execution seem like consistent concepts in kernels.

Anywho, your payload of whatever is now running with some kind of display controller available.  In Linux-space, OpenGL would be an assumed trait, not some tacked-on X sugar.  And your terminal could be graphically more than just bitmap glyphs on some pixel grid.

One of the greatest weaknesses of a modern terminal is the difficulty of translating an idea of a computation into a tangible thing happening.  Zsh has some primitive recommendations about what you typed when it can't figure it out, but that is still nothing big.  I'm talking Eclipse-style completion: as you type, you get a list of available completion terms (and not just tab-cycle-through-and-hope-for-the-best completion), with composed menus overlaying the terminal showing options as you type.  That alone would be huge, but reactive search would also be nice.  It already exists in some terminal apps (top, for example, live-reduces input search terms nicely), but the base terminal itself should be thinking about what you want.
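A faint shadow of this already exists in stock zsh, no plugins required; roughly the following in a .zshrc gets you a selectable completion menu, the spelling-correction prompts mentioned above, and as-you-type prediction from history:

    # Selectable completion menu instead of blind tab-cycling.
    autoload -Uz compinit && compinit
    zstyle ':completion:*' menu select

    # The "primitive recommendations": zsh's spelling correction prompt.
    setopt correct

    # As-you-type prediction from history (the predict-on widget from zshcontrib).
    autoload -Uz predict-on
    zle -N predict-on
    zle -N predict-off
    bindkey '^X^Z' predict-on    # toggle prediction on
    bindkey '^X^X' predict-off   # and off again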

And none of this really screams "oh noes, the embedded world is ending," because on a 20mm die at 22nm you can stick 25 million transistors, more than a Cell processor.  That thing runs a freaking PS3.  That is a tenth of some of the smallest desktop fab standards today, which have 10 to 100 times the transistors.  By the time this theoretical system could be vertically integrated, even if I got a hundred billion dollars tomorrow to do it, standard fab tech would be at most 6-10 nm and at least 10-20x more space efficient.  So a 5mm chip could outperform an i7 920.

Eventually you can't argue against sane defaults.  In the end, you can just set graphics : false and not load a graphics driver.  You can skip any device host you don't want in this theoretical init system anyway.  Of course you would use socket activation on the socket server (that might end up in the kernel too... oh dears).

Terminals being dumb text interfaces is getting old.  We can do better.  We should do better.  I'm too dumb to do things the hard way!
