2012/12/20

Software Rants 6: How to Reinvent the Wheel of Servicing

In my post about new OS paradigms, I remarked that you won't replace IP as *the* network protocol and that we should all just bow down to its glory.

However, one thing that can easily change (and does, all the time, multiple times a week) is how we interact over that protocol.  It is why we even have URIs, and why we have everything from ftp:// to http:// to steam:// protocols riding on IP packets.  I want to draw some parallels I see between this behavior, "classic" operating system metaphors, and the relatively modern concept of treating everything as a file, circa Plan 9 and my stupid ramblings.

If I were writing an application for a popular computing platform, I would be using a system call interface into the operating system and some kind of message bus service (like dbus) for communicating with most services.  I would personally use some kind of temp file as an interchange, but I could also open a Unix socket as a means of IPC.  Or maybe I go really crazy and start using OS primitives to share memory pages.  Any way you slice it, you are effectively picking and choosing protocols - be it the Unix socket "protocol", the system call "protocol", etc.  In the Plan 9 / crazy people's world, you forgo distinct protocols in favor of a file system, where you can access sockets, system calls, memory pages, etc. as files.  You use a directory tree structure rather than distinct programmatic syntaxes to interface with things, and that generic nature improves interchangeability and ease of learning; in some cases it can even be a performance gain, since a kernel VFS manager handling the abstractions carries significantly less overhead.
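To make the "sockets as files" idea concrete, here is a toy sketch modeled loosely on Plan 9's real /net interface (where you read a clone file to get a connection directory, then write commands to its ctl file). Since most systems don't expose such a tree, a dict stands in for the kernel's VFS so the idea is runnable anywhere; the paths and command strings mimic Plan 9 but this is an in-memory simulation, not the real thing.

```python
# Toy in-memory model of a Plan 9-style network-as-filesystem.
# On a real Plan 9 system these would be actual files under /net;
# here a dict stands in for the kernel's VFS.

class NetFS:
    """Every operation is a file read or write - no socket() calls."""
    def __init__(self):
        self.files = {}      # path -> contents
        self.next_conn = 0

    def open_clone(self):
        # Reading /net/tcp/clone allocates a fresh connection directory.
        conn = str(self.next_conn)
        self.next_conn += 1
        self.files[f"/net/tcp/{conn}/ctl"] = b""
        self.files[f"/net/tcp/{conn}/data"] = b""
        return conn

    def write(self, path, data):
        self.files[path] = data

    def read(self, path):
        return self.files[path]

net = NetFS()
conn = net.open_clone()
# "Dialing" is just writing a command string to the ctl file.
net.write(f"/net/tcp/{conn}/ctl", b"connect 93.184.216.34!80")
# Sending bytes is writing the data file; receiving is reading it.
net.write(f"/net/tcp/{conn}/data", b"GET / HTTP/1.0\r\n\r\n")
print(net.read(f"/net/tcp/{conn}/ctl"))
```

The payoff is exactly the interchangeability argued above: any tool that can open, read, and write files can drive the network, with no socket API to learn.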

If I took this concept to the net, I wouldn't want to specify a protocol in an address.  I would want the protocols to be abstracted by a virtual file system, so in the same way I mentioned /net/Google.com/reader should resolve as the address of Google's reader, you could be more specific and try something like /net/Google.com:80/reader.https (a generic example using the classic network protocols), where you can be specific about how resources are opened (in the same way you use static file system typing to declare how files are handled).  But this treats Google.com as a file system in and of itself - and if you consider how we navigate most of these protocols, we end up treating them as virtual file servers all the same.  The differentiation is in how we treat the server as a whole.
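A small parser makes the proposed syntax precise. The grammar here (host, optional :port, resource path, optional .protocol "extension") is my own reading of the examples above, not any standard:

```python
# Sketch of parsing the hypothetical /net path syntax described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetPath:
    host: str
    port: Optional[int]
    resource: str
    protocol: Optional[str]   # derived from the "file extension"

def parse_net_path(path: str) -> NetPath:
    assert path.startswith("/net/")
    rest = path[len("/net/"):]
    hostpart, _, resource = rest.partition("/")
    host, _, port = hostpart.partition(":")
    # Like static file typing: a trailing .https / .ftp names the handler.
    base, dot, ext = resource.rpartition(".")
    protocol = ext if dot else None
    return NetPath(host, int(port) if port else None,
                   base if dot else resource, protocol)

p = parse_net_path("/net/Google.com:80/reader.https")
print(p)  # NetPath(host='Google.com', port=80, resource='reader', protocol='https')
```

Note that both the port and the protocol extension are optional, matching the plainer /net/Google.com/reader form, where the handler would be chosen by the file system rather than the address.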

In current usage, interacting with ftp://mozilla.org and http://mozilla.org produces completely different results, because ftp requests are directed to an ftp server and http ones to an http server.  But https doesn't inherently mean a different server; it just means sticking a TLS layer on top of the communication, and the underlying behavior of either end resolves the same - packets are generated, boxed in an encrypted container, shipped, decrypted on the receiving end, and then processed all the same.  That is in many ways more elegant than opaquely designating which server process to interact with at an address based solely on the URI scheme.
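That layering argument can be shown in a few lines: the "secure" transport wraps a plain one and exposes the exact same send/recv interface, so callers never know the difference. The XOR "cipher" below is a stand-in for TLS, purely to keep the sketch self-contained and runnable:

```python
# Toy model of the point above: https is not a different protocol,
# just the same byte stream passed through one extra encrypting layer.

class PlainTransport:
    def __init__(self):
        self.wire = b""
    def send(self, data: bytes):
        self.wire = data           # what actually crosses the network
    def recv(self) -> bytes:
        return self.wire

class TLSLikeLayer:
    """Wraps any transport; callers see the identical send/recv API."""
    KEY = 0x5A                     # toy key; real TLS negotiates this
    def __init__(self, inner):
        self.inner = inner
    def _xor(self, data):
        return bytes(b ^ self.KEY for b in data)
    def send(self, data: bytes):
        self.inner.send(self._xor(data))     # box in encrypted container
    def recv(self) -> bytes:
        return self._xor(self.inner.recv())  # unbox on the other side

request = b"GET /reader HTTP/1.1\r\n"
secure = TLSLikeLayer(PlainTransport())
secure.send(request)
assert secure.inner.wire != request   # ciphertext on the wire
assert secure.recv() == request       # same bytes above the layer
```

Because the layer is transparent, the server process behind it stays the same - which is exactly why http vs. https shouldn't need to be part of the name.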

So what I would rather see, in keeping with that VFS model, is a virtual mount of a remote server under syntax like /net/Google.com producing a directory containing https, ftp, mail, jabber, etc.  An application could then easily just mount a remote server and, from the visible folders alone, derive the supported operations.
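Under that model, service discovery collapses into a single directory listing. A minimal sketch, with a hard-coded mount table standing in for a real remote mount (the hostname and protocol folders are illustrative):

```python
# Sketch of protocol discovery under the proposed model: the mount is
# a directory tree, so "what does this server speak?" is one readdir.

FAKE_MOUNTS = {
    "/net/Google.com": ["https", "ftp", "mail", "jabber"],
}

def supported_protocols(mountpoint: str) -> list[str]:
    # In the real design this would be os.listdir() on a live
    # 9P-style mount; no port probing, no trial connections.
    return sorted(FAKE_MOUNTS.get(mountpoint, []))

print(supported_protocols("/net/Google.com"))
# → ['ftp', 'https', 'jabber', 'mail']
```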

Likewise, authentication becomes important.  /net/zanny@google.com would be expected to produce (given an authentication token, be it a cached key or a password) *my* view of this server, in the same way users and applications get different views of a virtual file system.

This leads to a much cleaner distinction of tasks, because in the current web paradigm you usually have a kernel IP stack managing inbound packets on ports and deciding where to send them.  You register Apache on ports 80 and 443 and it decodes the packets received on those ports (which now sounds even more redundant, because you are specifying both a URI scheme and a port - but the problem is that ports are not nearly as self-describing as protocols).

So in a VFS network filesystem, determining the protocols available on a web server should be simpler: just look at the top-level directory of the public user on that server, instead of querying a bunch of protocols for responses.  Be it via file extensions or ports, it would still be an improvement.
