2013/03/27

An Exploration of Human Potential 4: The Next Generation of Investment

There was a Vsauce video that asked whether Kickstarter has the potential to replace Hollywood, and he touches on ways Hollywood might try to exploit it. So I figured I should write down what I think the inevitable conclusion of the crowdfunding "revolution" is, how and why it happened, and what comes next.

Kickstarter, Indiegogo, and their kin all center on the idea of consumers paying for the creation of things they want, often with "perks" for donating certain amounts, often (but not always) including a copy of the thing made. Small donations for things like video games often don't get you a copy of the game, which to me seems very backwards, but I'll get to that.

The major issues with crowdfunding are twofold. One, there is no investor protection: once a project reaches its funding goal, you are out the money and left hoping they make whatever you were banking on. There is nothing inherently bad about this - it just means you are foolish to invest in people you have no reasonable expectation will produce the product you want, and if they turn around and run off with your money, it's your fault for making a risky investment.

The problem with no protections is that unproven and untested developers / producers see orders of magnitude less interest and contribution than already entrenched groups that have delivered in the past. That is reasonable given the lack of protection: if you have to choose between something brand new from some random guy trying to launch a project out of their garage, or an industry expert trying to create a sequel to some IP they own, you are obviously going with the latter, because you can reasonably assume they will actually make the product. But I'll get to why this is catastrophically bad in a bit.

The second major issue is that crowdfunding conflates the real role of crowdfunders - people acting as investors in the creation of new products, ideas, or initiatives - with donation tier gifts meant to placate them for parting with their money. It is an unnecessary indirection, but it is in many ways symptomatic of the first point - since you have no guarantee your project will actually happen, much more realistic "prizes" work to abate the issue and appease the masses.

The problem is that this isn't a macroscopic solution to what I would argue is a systemic issue of the 21st century: automation and globalization will see the death of labor markets for physical unskilled work and an increase in the number of people who don't need to work mindless jobs. This means more people can, and should be driven to, enter creative ventures, and anything less than crowdfunding paired with pervasive information ubiquity is disingenuous of society and all the technological innovation made up to this point.

The end goal needs to be that consumers with money have a means of finding people offering to pursue and create new information, head new initiatives, and craft new products for those interested in them, and the ability to directly invest in what they want to see made. For one, it is the only ethical solution to the tyranny of IP and the resolution of broken property rights, and second, it is the best way to resolve the current economic spiral into extreme inequality between the wealthy investor and the paycheck to paycheck laborer.

Back to my first issue: the reason that having no protections is bad is that entrenched market forces get disproportionately invested in because they have proven track records. Their histories make them less risky to invest in, and that drives people to put their money into whatever feels least risky, to see the things they want made with the least chance of losing their investment. That means entering a market requires an excess of work, often producing something of similar caliber without the crowdfunding that is supposed to enable small ventures in the first place.

The only solution is to abate the risk. Any venture that operates in this new dichotomy will need to rigorously calculate its expenses and objectives and produce realistic goals so that people can invest in it without fear of the venture taking the money and running. It would require some legal enforcement, maybe via contract with the exchange operator (aka, the kickstarter.com in this scenario): any venture proposing needs to actually make what they say or else face a lawsuit for fraudulent business practices and monetary extortion. The legal system is a giant mess as well, but that is tangential - in a much more functional legal system, the provider of the exchange service would prosecute any project that fraudulently takes the money and doesn't deliver a product, on its investors' behalf, entirely to mitigate the risk of investment in unproven ventures.

Of course, that means anyone going into a crowdfunded project needs to fear being sued for not delivering. That is good. That means they have to be realistic, and that the project's investors can reasonably expect to get what they pay for.

The resolution to the IP atrocities comes in the form of these funded projects being the means of providing for the wellbeing and livelihoods of the creators for the duration of the venture - when they produce the product, they give a windowed release target, and meet it - otherwise they are liable for fraud. They propose how much they would need to live off for the duration of the venture plus additional expenses, and it is up to the crowdfunders to determine if they are a worthy investment. If they reach their target monetary goal, they get the money and are contractually obliged to deliver the product in the time frame specified.

This also means the donation goals are unnecessary and detract from the purpose - paying content creators (or any idea creator) for creating such ideas. The information based results of their labor (the blueprints for a robot they build, the movie they make, the program they write) should be released as close to public domain, or at least under an open attribution license, as possible, as part of the contract. Once the product is funded by people who want to put their money where their mouth is to see it made, it should be freely available, like the information it is. If you do crowdfund the invention of scarce resources - say, building a cold fusion reactor for a billion dollars - the schematics and blueprints had better be public domain, but the reactor itself is obviously owned by the original venture to sell electricity as they wish, because it is finite tangible property with scarcity, and you can't just steal it. Of course, it is up to the contract between the investors and the venture - if they want to build a fusion reactor and give it to the local government, that is entirely within their contractual rights, they just need to make that clear, unobfuscated, to their investors.

The reason this matters so much is that investment right now is a rigged, closed game for the economic elite. The stock market isn't an accurate investment scheme - many companies on the stock markets couldn't care less what their shares trade at, because they already did an IPO and got their cash reserves. After that, the trading rate never impacts them unless they release more shares into the market. People exchanging company ownership with other people doesn't impact the company at all unless the shareholders collectively take advantage of 51%+ ownership. Dividend payouts are unrelated to the trading value of a stock; they depend on profit margins. So in practice, the only way to actually invest in new ideas is to be in a closed circle of wealthy investors surrounding an industry, who play the chessboard of ideas to their advantage with behind the scenes agreements and ventures that the public can't engage in - be they agreements between friends to try something new, or a wealthy person just taking a million dollars and trying something for a profit privately. Those aren't open investments in what people want.

This becomes more important when we consider we are losing dramatic amounts of middle class power with the decline in income and savings - people don't have the spending power to drive the markets anymore because of gross inequality, and the best way to fix that is to open people to supporting one another without corporate overlord interests controlling what they can buy or enjoy. Moving the engine of creativity and investment back into the hands of the masses means the things people collectively want made actually get made, and those that traditionally pull the economic strings lose considerable power if the people paying the content creators are the people themselves, rather than money hungry investment firms and publishers.

It is absolutely an end game - the collective funding of idea creation is the endgame of every information based industry, but getting there will require considerable cultural, legal, and economic shifts away from the poisonous miasma we exist in today. I hope it can happen in my lifetime; the degree of information freedom we could see in such a world would be wonderful to behold.

2013/03/22

Software Rants 11: Build Systems

So in my forays into the world of KDE, I quickly came upon the fiasco surrounding KDE4, where the software compilation transitioned its build system from autotools to CMake (in the end, at least). They were originally targeting SCons, thought it was too slow, tried to spin their own solution (bad idea) and ended up with Waf, gave up on that, and settled on CMake.

First things first: any tool that uses some kind of scripting language (and make / cmake / qmake all absolutely qualify) with its own syntax, not derived from any actual scripting language, is inherently stupid. If you encounter a situation where you can say "a domain language is so much more efficient in terms of programmer productivity here than any script syntax," you are probably tackling the problem wrong. Now, 10 years ago, you would have had a legitimate argument - no script language was mature enough at the time to provide a terse syntax that delivered on brevity and ease of use. Now Python, Ruby, and arguably Javascript are all contenders for mainstream script languages with near-human readable syntax and rampant extensibility.

So if you see a problem and try to apply a domain language to it, I'm calling you right there as doing it wrong. The overhead of depending on a scripting language interpreter or JIT (like Python) is never as bad as the mental overhead of having to juggle dozens of domain languages.

And that brings me to build processes, which are one of the most obvious candidates for scripting but are almost always domain languages, from Ant and Maven to Make and Autotools and the gamut in between. Only SCons is reasonable to me, because it is a build environment in Python where you write your build scripts in Python.
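For flavor, here is roughly what a minimal SConstruct looks like - it is just Python, so ordinary variables, functions, and imports work anywhere in the build description (the file and target names here are placeholders):

```python
# SConstruct - the build description is just a Python script. SCons provides
# Environment() and friends when it runs this file; paths here are placeholders.
import os

env = Environment(CXX='g++', CXXFLAGS=['-O2', '-Wall'])

# Ordinary Python works anywhere in the build description.
sources = [os.path.join('src', f) for f in os.listdir('src') if f.endswith('.cpp')]

# Declare a program target; SCons handles the dependency tracking itself.
env.Program(target='myapp', source=sources)
```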

Now that is valuable. SCons is on track, I hope, to not only merge back with Waf, but to solve its performance hindrances and deliver a modern build system, divergent from the alien syntax of make, for a modern use case where external dependencies, caching, and deployment all need to be accounted for.

However, I am stuck looking at cmake for any new project I want to work with, solely due to qtcreator and kdevelop integration. And honestly, if it stays out of my way, I will put up with it. I want to see SCons succeed though, so - like the other hundred projects I want to get involved with - I want to see SCons integration in IDEs. I also want to see it solve its performance problems and deliver a solid product.

One thing I wonder is why they didn't keep the build files in Python but write the backend routines in C or C++, interfacing with the Python interpreter through some scons.so library.

I definitely think any software you write that you intend to be run across many machines should be native. Anything less is a disservice to your clientele, whether in microseconds of time wasted or in the measurable electrical consumption of script interpreters in use cases they don't fit.

A build description for a project? Absolutely script-worthy. The backend to process build scripts? Should be native. The project is a business app with low deployment? Python that sucka. The project is a consumer app? Probably native.

I used to think it would make more sense to write everything in Python and then delve into C++ where performance is needed, but the promise of cross platform porting of qml and qt apps is just too good to pass up.

But yeah, build systems are a fucking mess, and as I continue to write up my Magma manifesto, one of the core tenets is not only compiler level support for Stratos scripts, but the usage of Stratos as the build system from the get go. The modularization instead of textualization of files makes bundle finding and importing a breeze, and the usage of compiled libraries or even whole software packages is just a step away.

2013/03/14

Software Rants 10: The Crossroads of Toolkits, circa 2013

I'm making a prediction - in 5 years, the vast majority of application based software development will be done in one of two environments - html5, or qt.

Those sound radically different, right? What kind of stupid tangent am I going off on now? Well, the biggest thing happening in the application world is a transition - the world is slowly getting off the Windows monoculture thanks to mobile, and the most costly part of that shift is that every new device OS needs targeting, from Blackberry to iOS to Android to the upcoming Ubuntu Phone and Sailfish, plus the growing GNU/Linux gaming scene and OSX adoption. So the biggest deal is finding a development environment where you can write once and deploy everywhere.

And the only two runners in that race right now are qt and html5. I'd mention Mono and Xamarin, but the C# runtime is so slow and huge on mobile platforms the performance isn't there, and the momentum is moving towards qt anyway. Here are the respective pros and cons:

Qt
  • Optional native performance if written wholly in C++.
  • More practically, write applications in qml (for layout and styling) and javascript, and stick any performance critical parts in C++; since signals and slots make the transition seamless, you get rapid deployment and high performance (see the sketch after this list).
  • LGPL apps distribute the qt libraries through a bundled download assistant that will retrieve them once for all local qt apps, so they aren't redundantly cloned. Downside is that with low adoption the downloader is a hindrance for users.
  • Integrates nicely into most toolkits' appearances. For example, it uses the contextual options button on Android, and supports gestures.
  • As native apps, qt apps are local, offline capable, and are exposed to the file system and all other niceties of a first class citizen program.
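To make the signals and slots point concrete, here is a minimal sketch using PyQt5 rather than C++, purely for brevity - the class and signal names are invented, and the mechanism is the same one that bridges qml/js and C++:

```python
# Minimal signal/slot sketch with PyQt5 (class and signal names are invented;
# in a real app the heavy lifting would sit on the C++ side of this boundary).
from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot

class Worker(QObject):
    finished = pyqtSignal(str)           # signal carrying a string payload

    @pyqtSlot()
    def run(self):
        # pretend this is the performance-critical C++ part
        self.finished.emit("result ready")

worker = Worker()
worker.finished.connect(lambda msg: print("got:", msg))   # a slot is any callable
worker.run()                             # -> got: result ready
```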
Html
  • Most pervasive platform, but not concrete. qt5 is stable and shippable, and because Digia controls it you can expect forward updates to go through without a hitch. Banking on html5 webapps means you aren't as well supported on unupdated devices (not that much of a problem in, say, 2 years), but older devices (which tail out more and more as consumer compute power plateaus) mean fewer browser updates and the need for tag soup trying to figure out what features you have available.
  • Not solid. At all. Webaudio is still sorely lacking, webrtc isn't finalized, webgl is still experimental, and input handling is nonexistent. Local storage is too small to cache any significant amount of an app, especially for offline usage.
  • By being web based, you have inherent access to all the resources of the Internet, whereas qt requires web APIs to access the same things.
  • Inherently cloud based, explicit cloud configuration required for qt.
  • Qt generates installable apps for its target platforms as native local applications. html5 apps are cloud based and thus only as slow to get into as the page load time needed to access them. So, a lower barrier to entry.
So where is all this crazy "one or the other" mindset coming from? It is becoming increasingly silly and infeasible to use native widget toolkits and languages for every platform you target - what could be boxed up in one html5 or qt app with two skins (one mobile and one desktop) with shared logic and build infrastructure, debugging, and testing, in three languages (qml/js/c++ vs html/js/css) would require, in native form targeting every platform:
  • Objective-C + Cocoa Touch for iOS
  • Objective-C + Cocoa for OSX
  • Windows Forms + C#/C++ + win32 for Windows
  • WinRT + C++ for Windows Phone
  • GTK (or qt) + c (or C++) for Linux
  • Java + the Android SDK for Android
  • Qt for Blackberry, Ubuntu Phone, Sailfish (anyway).
With the exception of Windows Phone (which won't succeed, and will bomb pretty badly anyway, and qt could just get platform parity with it if it ever became popular), qt works everywhere, and is actually required on the newest mobile platforms anyway. Likewise, html5 apps will work everywhere as long as you are targeting IE10+, Firefox 16+, Chrome 20+, ios5, Android 4.0+, etc. Qt isn't as limited on backwards systems because it exists natively as a local app.

Nothing else comes close to the device parity of these two platforms. Any new application developer is naive not to use one of these, because all the others listed are dead ends with platform lock in. The plethora of backers of the w3c and Digia are from all these platforms and have an interest in promoting their continued growth, and the platforms themselves realize that being device transcendent makes them all the more useful.

What I find really interesting is that the interpreted languages, Java / C#, are nowhere. Mono is close to being device prolific, but Oracle is a sludge of an outdated bureaucratic death trap that hasn't realized an opportunity since they bought Sun, so they just let Java flounder into obscurity. Which is fine; the language grows at a molasses pace and makes me mad to even look at, with such critical flaws as no function objects and no default arguments.

But qt does it better, with C++ of all things. I guess GCC / Clang are useful in their architecture proliferation.

Which is one of the main reasons I'm focusing myself on qt, and will be doing my work in the next few months in it. I think it is the future, because at the end of the day, html is still a markup language. It has grown tumors of styling and scripting and has mutated over the years, but you are still browsing markup documents at the end of the day. I just like having access to a system to its core, and qt provides that option when necessary. So I'm betting on qt, and hope it pays off.

Software Rants 9: Capturing the Desktop

In my continuing thinking about next generation operating systems and the ilk, I wanted to outline the various aspects of a system necessary to truly win the world - all of the parts of a whole computing experience that, if presented and superior to all competitors, would probably change the world overnight. No piece can be missing, as can be said of the Linux space and its lack of non-linear video editors, GIMP's subpar feature parity against the competition, and audio's terrible architecture support. So here are the various categories and things a next generation desktop needs to capture the consumer space.

Core
  • Microkernel providing consistent device ABI and abstractions. Needs to be preemptive, have a fair low overhead scheduler, and be highly optimized in implementation. The kernel should provide a socket based IPC layer.
  • Driver architecture built around files, interfacing with kernel provided device hooks to control devices. Driver signing for security, but optional disabling for debugging and testing. Drivers need an explicit debug test harness since they are one of the most important components to minimize bugs in.
  • Init daemon that supports arbitrary payloads, service status logging, catastrophic error recovery, and controlled system failure. The init daemon should initialize the IPC layer for parallel service initialization (think systemd or launchd).
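As a toy illustration of the socket based IPC idea above, here is Python with a local socket pair standing in for whatever the kernel-provided layer would actually expose (the message format is invented):

```python
# Toy illustration of socket-based IPC between two "services" (Python and a
# local socketpair stand in for the hypothetical kernel-provided IPC layer).
import socket
import threading

parent, child = socket.socketpair()      # a connected pair of local sockets

def service(sock):
    # "Service" side: answer a single status request.
    request = sock.recv(1024)
    sock.sendall(b"status: running" if request == b"status?" else b"unknown request")

threading.Thread(target=service, args=(child,), daemon=True).start()

parent.sendall(b"status?")               # "init daemon" side: query the service
print(parent.recv(1024).decode())        # -> status: running
```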
Internals
  • Command shell using an elegant shell script (see: Stratos in shell context). Most applications need to provide CLI implementations to support server functionality.
  • Executor that will checksum and sign check binary payloads, has an intelligent fast library search and inject implementation, and supports debugger / profiler injection without any runtime overhead of standard apps.
  • Hardware side, two interface specifications - serial and parallel digital. Channels are modulated for bandwidth, and dynamic parallel channels allow for point to point bandwidth control on the proverbial northbridge. High privileged devices should use PDI, and low privileged should use SDI. Latency tradeoffs for bandwidth should be modulation specific, so one interface each should be able to transition cleanly from low latency low bandwidth to high latency pipelined bandwidth. Consider a single interface where a single channel parallel is treated as a less privileged interface. Disregard integrated analog interfaces. USB can certainly be implemented as an expansion card.
  • Consider 4 form factors of device profile - mobile, consumer, professional, and server. Each has different UX and thus size / allocation of buses requirements, so target appropriately. Consumer should be at most mini-ITX scale, professional should be at most micro-ATX - we are in the future, we don't need big boards.
Languages
  • Next generation low level systems language, that is meant to utilize every programming paradigm and supply the ability to inline script or ASM code (aka, Magma). Module based, optimized for compiler architecture.
  • A common intermediary bytecode standard to compile both low and middle level languages against, akin to LLVM bytecode. Should support external functionality hooks, like a GC or runtime sandbox. This bytecode should also be signable, checksum-able, and interchangeable over a network pipe (but deterministic execution of bytecode built for a target architecture in a systems programming context is not guaranteed).
  • Middle level garbage collected modularized contextual language for application development. Objectives are to promote device agnosticism and streamline library functionality while providing development infrastructure to support very large group development; it can also be compiled and used as a binary script language. See : Fissure.
  • High level script language able to tightly integrate into Magma and Fissure. Functions as the shell language, and as a textual script language for plugins and framework scripting on other applications. Meant to be very python-esque, in being a dynamic, unthreaded simple execution environment that promotes programmer productivity and readability at the cost of efficiency (see : Stratos).
  • Source control provided by the system database backend, and source control is pervasive on every folder and file in the system unless explicitly removed. Subvolumes can be declared for treatment like classic source control repositories. This also acts as system restore and if the database is configured redundant acts as backup.
Storage
  • Copy on write, online compressing transparent filesystem with branch caching, auto defragmentation, with distributed metadata, RAID support, and cross volume partitioning. Target ZFS level security and data integrity.
  • Everything-as-a-file transparent filesystem - devices, services, network locations, processes, memory, etc as filesystem data structures. Per-application (and thus per-user) filesystem view scopes. See the next gen FS layout specification for more information, and the small existing analogy sketched after this list.
  • Hardware side, target solid state storage with an everything-as-cache storage policy - provide metrics to integrate arbitrary cache layers into the system caching daemon, use learning readahead to predict usage, and use the tried and true dumb space local and time local caching policy.
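For a taste of what everything-as-a-file buys you, here is how a sliver of that idea already looks with Linux procfs, shown purely as an existing analogy (the paths vary by system):

```python
# Reading system state as plain files, in the spirit of the everything-as-a-file
# design above (Linux /proc paths shown as an existing analogy, not the proposal).
from pathlib import Path

# Processes appear as directories; their state is just text you can read.
for pid_dir in sorted(Path("/proc").glob("[0-9]*"))[:5]:
    comm = (pid_dir / "comm").read_text().strip()
    print(f"pid {pid_dir.name}: {comm}")

# Kernel knobs are files too - reading (or writing, with privileges) them
# replaces a pile of special-purpose system calls.
print(Path("/proc/sys/kernel/hostname").read_text().strip())
```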

Networking
  • Backwards compatibility with the ipv6 network transport layer, TCP/IP/UDP, TLS security, with full stack support for html / css / ecmascript compliant documents over them.
  • Rich markup document format with WYSIWYG editor support, scripting, and styling. Meant to work in parallel with a traditional TCP stack.
  • Next generation distributed network without centralization support, with point to point connectivity and neighborhood acknowledgement. Meant to act as a LAN protocol for simple file transfer and service publication (displays, video, audio, printers, inputs, through to software like video libraries, databases, etc), and to be deployable wideband as a public Internet without centralization.
  • Discard support for ipv4, ftp, nfs, smb, vnc, etc protocols in favor of modern solution.
Video
  • Only a 3d rendering API, where 2d is a reduced set case. All hardware is expected to be heterogeneous SIMD and complex processing, so this API is published on every device. Since Magma has SIMD instruction support, this API uses Magma in the simd context instead of something arbitrary like GLSL. Is a standard library feature of the low level language.
  • Hardware graphics drivers only need to support the rendering API in their device implementation, and the executor will allocate instructions against it. No special OS specific hooks necessary. Even better, one standard linkable library may be provided that backs onto present gpu hardware or falls back to pipelined core usage.
  • No need for a display server / service, since all applications work through a single rendering API. A desktop environment is just like any 3d application running in a virtual window, it just runs at the service level and can thus take control of a display (in terms of access privileges, user applications can't ever take control of a display, and the best they can do is negotiate with the environment to run in a chromeless fullscreen window).
  • Complete non-linear video editor and splicer that is on par with Vegas.
  • Complete 3d modeler / animator / scene propagator supporting dae, cad, and system formats.
  • System wide hardware video rendering backend library supporting legacy formats and system provided ones, found in Magma's std.
  • Complete 2d vector and raster image composer, better UX and feature parity than Gimp, at least on par with photoshop. Think Inkscape + sai.
  • 3d (and by extension, fallback 2d) ORM game engine implemented in Magma provided as a service for game makers. Should also have a complete SDK for development, use models developed in our modeler.
  • Cloud video publishing service baked into a complete content creation platform.
  • Art publishing service akin to DA on the content creation platform.
  • Saves use version control and continuous saving through DB caching to keep persistent save clones.
Audio
  • Like Video, a single 3d audio API devices need to support at the driver level (which means positional and point to point audio support). Standards should be a highly optimized variable bitrate container format.
  • Software only mixing and equalizing, supplied by the OS primary audio service, and controllable by the user. Each user would have a profile, like they would have a video profile.
  • Audio mixing software of at least the quality of Audacity and with much better UX.
  • Audio production suite much better than garageband.
  • System wide audio backend (provided in Magma's std) that supports legacy and system formats.
  • Audio publishing service akin to bandcamp in a content creation platform.

Textual
  • Systemic backend database assumed present in some object mapping API specified in Magma. Different runlevels have access to different table groups, and access privilege applies to the database server. This way, all applications can use a centralized database-in-filesystem repository rather than running their own. Note : database shards and tables are stored app-local rather than in a behemoth registry-style layout, and are loaded on demand rather than as one giant backend. The database server just manages the storage independently. The database files use the standard serialization format, so users can write custom configurations easily. These files, of course, can be encrypted. (A rough analogy of an app-local shard is sketched after this list.)
  • Since the database is inherently scriptable, you can store spreadsheets in it. It can also act as a version control repository, so all documents are version controlled. 
  • Singular document standard, supporting scripting and styling, used as local WYSIWYG based binary or textual saved documents, or as "web" pages.
  • Integrated development environment using gui source control hooks, support for the system debugger and profiler, consoles, collaborative editing, live previews, designer hooks, etc. Should be written in Magma, and load on demand features. Target qtcreator, not visual studio / eclipse.
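As that rough analogy of an app-local shard, here is the shape of it with sqlite3 standing in for the hypothetical system database service (the path, table, and values are invented for illustration):

```python
# App-local database shard, with sqlite3 standing in for the hypothetical
# system database service (file path and schema are invented for illustration).
import sqlite3
from pathlib import Path

shard = Path.home() / ".local/share/exampleapp/settings.db"
shard.parent.mkdir(parents=True, exist_ok=True)

con = sqlite3.connect(str(shard))
con.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
con.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)", ("theme", "dark"))
con.commit()

print(dict(con.execute("SELECT key, value FROM settings")))   # -> {'theme': 'dark'}
con.close()
```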
Security
  • Pervasive, executable based mandatory access control. Profiles are file based, scripted in the standard serialization format, and should be simple to modify and configure with admin privileges.
  • Contextual file system views, as a part of MAC, an application can only "see" what it is allowed to see, in a restricted context.
  • Binary signing pervasively, keys stored in central database.
  • Folder, file, and drive based encryption. An encrypted system partition can be unlocked by a RAMFS boot behavior.
  • Device layer passwords are supported as encryption keys. The disk is encrypted with the password as the key, instead of the traditional independent behavior where you can just read the contents off a password protected disk.
  • Network security implied - the firewall has a deny policy, as do system services. Fail2ban is included with reasonable base parameters that can be modified system wide or per service. All network connections on the system protocol negotiate secure connections and use a hosted key repository with the name server for credentials exchange and validation.
Input
  • Going to need to support arbitrary key layouts with arbitrary glyphic key symbol correlations. Think utf8 key codes. Vector based dimensional visual movement, which can be implemented as touch, mouse, rotation, joysticks, etc. So the two input standards are motion and key codes.
  • Input devices provided as files in the FS (duh) and simple input APIs provided in Magma.
If you can make the experience of content and software creators sufficiently extravagant, you can capture markets. We live in an era of constant global communication, such an OS needs to take full advantage at every level of pervasive communication, including network caching. Since the language stack is designed to provide a vertical contextual development paradigm, almost all resources are implemented in Magma libraries with bindings everywhere else up the stack as appropriate. Since most devices, services, etc are provided as files, the library implementations can be simple and platform agnostic given file provisioning.

2013/03/06

Reddit Rants 2: Mir Fallout Ranting

This time, in the wake of Mir's unveiling as the new Ubuntu display server, I was responding to someone saying that fragmentation isn't a problem here and that the competition Mir would produce would be positive and get Wayland developed faster. Here is my retort:

The correct way to go about FOSS development is:

Explore Options -> engage with open contribution projects in the same space -> attempt to contribute to the already established product, improving it into what you need, given community support -> if that doesn't happen, consider forking -> if forking is not good enough and you need to rebase, start from scratch.

Canonical skipped to the last step. It is fine if you have no other option but to fragment because then you are representing some market segment whose needs are not met.

A next generation display server that can run on any device with minimal overhead, sane input handling, and network abstraction already exists, in a stable API state with running examples: it is called Wayland.

The problem with Mir and Canonical is that unlike community projects and community engagement, Canonical doesn't give a crap about what the community thinks. They maintain Upstart because fuck you, they created bazaar in an era of git because fuck you, they maintain a pointless compositor named Compiz because fuck you, they invented a UI you could easily recreate in Plasma or Xfce or even Mate with slight modification but they did it from scratch and introduced a standardless mess of an application controls API because fuck you.

They want to control the whole stack, not play ball. They got way too much momentum from the Linux community in an era when Fedora was still mediocre, Arch didn't exist (and is still too user unfriendly) Debian was still slow as hell, opensuse was barely beginning, and the FOSS ecosystem wanted to rally around a big player in the consumer space, where redhat was in the server space.

Mir is bad because it will persist regardless of its merit. Solely because Canonical would never give up and deprecate it - the same way they are still trying to advertise an Ubuntu TV 2 years later with no working demo - Canonical will now steal any momentum X / Wayland have towards reasonable graphics driver support, and possibly steal the entire gpu manufacturers' support away from what is shaping up to be the much more technically promising and openly developed project in the form of Wayland.

SurfaceFlinger is a great comparison. Mir will be just like that. It will eat up hardware support into an unusable backend that can't easily mesh with modern display servers, and hardware manufacturers won't support multiple display servers. So if Mir crashes and burns, interest in Linux wanes because it looks like "same old fragmented unstable OS," and if it doesn't, it's completely detached from the FOSS community anyway under the CLA, and Canonical will control it entirely to their will.

It isn't a question of communal merit. Canonical doesn't play that way. That is why this is so bad. It is fine if the top level programs are fragmented and disparate, because that presents workflow choice. The OS display server, audio backend, network stack, init daemon are not traditionally user experience altering, they are developer altering. If you want developers, you point them to one technically potent stack of tools well implemented by smart people with collective support behind them so they can make cool things and expect them to run on the OS. That isn't the case when you have 3 display servers, 3 audio backends, 3 init daemons, 500 package formats, etc.

I also wrote a shorter response on an HN thread:

I'm personally not too worried here. The thing is both Wayland and Mir will be able to run X on top of them, so currently all available GUI programs will still work.

What matters is the "winner". They will both hit mainstream usage, we will see which one is easier to develop for, and that one will take off. If Mir's claims of fixing input / specialization issues in Wayland comes to fruition, then it will probably win. If Mir hits like Unity, or atrophies like Upstart, then Wayland will probably win.

The problem is that if Wayland fails, everyone can switch to Mir. If Mir proves weaker, though, we are stuck with a more fragmented desktop space, because Canonical doesn't change their minds on these things.

I also played prophet a bit on phoronix (moronix?) about how this will pan out:

There are only 3 real ways this will end.

1. Canonical, for pretty much the first time ever, produces original complex software that works on time, and does its job well enough to hit all versions of Ubuntu in a working state (aka, not Unity in 11.04). By nature of being a corporate entity pushing adoption, and in collusion with Valve + GPU vendors, Mir sees adoption in the steambox space (in a year) and gets driver support from Nvidia / ATI / Qualcomm / etc. Mir wins, regardless of technical merit, by just having the support infrastructure coalescing around it. Desktop Linux suffers as Canonical directs Mir to their needs and wants, closes development under the CLA, and stifles innovation in the display server space even worse than the decade-long stagnation of X did.

2. Mir turns out like most Canonical projects: fluff, delay, and unimpressive results. The consequence is that Ubuntu as a platform suffers, and mainstream adoption of GNU/Linux is once again kicked back a few pegs, since distributors like system76 / Dell / HP can't realistically be selling Ubuntu laptops with a defective display server and protocol, but nobody else has been pushing hard on hardware sold to consumers with any other distro (openSuse or Fedora seem like the runner up viable candidates, though). Valve probably withdraws some gaming support because of the whole fiasco, and gpu drivers don't improve at all because Mir flops and Wayland doesn't get the industry visibility it needs, and its potential is thrown into question by business since Canonical so eagerly ignored it. The result is we are practically stuck with X for an extended period of time, since nobody is migrating to Wayland because Mir took all the momentum out of the push to drop X.

3. The best outcome is that Mir crashes and burns, Wayland is perfect by year's end and can be shipping in mainstream distros, and someone at Free Desktop / Red Hat gets enough inroads with AMD / Nvidia to get them to either focus entirely on the open source drivers to support Wayland (best case) or refactor their proprietary ones to work well on Wayland (and better than they do right now on X). The pressure from desktop graphics and the portability of Wayland, given Nvidia supporting it on Tegra as well, might pressure hard line ARM gpu vendors to also support Wayland. The open development and removal of the burden of X mean a new era of Linux graphics, sunshine and rainbows. Ubuntu basically crashes and burns since toolkits and drivers don't support Mir well, or at all, and Canonical, being the bullheaded business it is, would never consider using the open standard (hic, systemd, git).

Sadly, the second one is the most likely.

My TLDR conclusion is that Mir is just a power grab by Canonical as they continue to rebuild the GNU stack under their CLA license and their control. I don't have a problem with them trying to do vertical integration of their own thing, but it hurts the Linux ecosystem that made them what they are today to fork off like this, and it will ruin the momentum of adoption the movement has right now, which makes me sad.

Reddit Rants 1 : Plan 9

I'm going to start posting some of the rants I put on reddit that get some traction here as well, for posterity's sake. Because all the crap I've been saying is so important I should immortalize it in blag form forever!

This one was in a thread about plan9, got the most upvotes in the thread I think, and illustrates what it was and why it happened. Consider it a follow up to my attempts at running plan9 after all that research I did.

> I wiki'd Plan 9 but can someone give me a summary of why Plan 9 is important?

  1. It was a solution to a problem that came up in the late 80s and never really got solved, because technology outpaced it. Back then, there were 2 kinds of computers - giant room sized behemoth servers to do any real work on, and workstations for terminals. And they barely talked. Plan 9, because of how the kernel is modularized, allows one user session to have its processors hosted on one machine, the memory on another, the hard disks some place else, the display completely independent of all those machines, the input coming from somewhere else still, and the 9p protocol lets all those communications be done over a network line securely. So you could have dozens of terminals running off a server, or you could just (in a live session) load up the computation server to do something cpu intensive. The entire system, every part, was inherently distributed.
  2. It treated every transaction as a single protocol, so that networked and local device access would both be done under 9p, and the real goal was to make it so that any resource anywhere could be mounted as a filesystem and retrieved as a file. It had files for every device in the system, well beyond what Linux /dev provides, and it had almost no system calls because most of that work was done by writing or reading system files. About the only major ones were read, write, open, and close, whose behavior depended on the type of interaction taking place and could do radically different things (call functions in a device driver, mount and stream a file over a network, or read a normal file from a local volume). (See the sketch after this list.)
  3. File systems could be overlayed on one another and had namespaces, so that you could have two distinct device folders, merge them into one in the VFS, and treat /dev as one folder even if the actual contents are in multiple places. Likewise, each running program got its own view of the file system specific to its privileges and requirements, so access to devices like keyboards, mice, network devices, disks, etc could be restricted on a per application basis by specifying what it can or can not read or write to in the file system.
  4. This might sound strange, but graphics are more of a first class citizen in plan9 than they were in Unix. The display manager is a kernel driver itself, so unlike X it isn't userspace. The system wasn't designed to have a layer of teletypes under the graphics environment; they were discrete concepts.
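Here is the sketch promised in point 2. Linux sysfs borrows a sliver of the files-instead-of-system-calls idea, so you can get the flavor today (the paths depend on your hardware, and this is an analogy, not Plan 9 itself):

```python
# Device state as plain files, in the spirit of Plan 9's files-instead-of-syscalls
# design (Linux sysfs shown as a partial analogy; paths depend on your hardware).
from pathlib import Path

backlight = Path("/sys/class/backlight")
for dev in (backlight.iterdir() if backlight.exists() else []):
    current = int((dev / "brightness").read_text())
    maximum = int((dev / "max_brightness").read_text())
    print(f"{dev.name}: {current}/{maximum}")
    # Writing a number back to 'brightness' (with privileges) changes the panel -
    # no ioctl, no dedicated system call, just a file write.
```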
Plan9 was full of great concepts and ideas in modern OS design. The problem became that it was always a research OS, so nobody tried using it in production. The same reason Minix is pretty low ball - it didn't have a major market driver for adoption. The problem it solved best, distributed systems using heterogeneous components spread across a network, became less of a problem as compute power improved, and the growth of the internet allowed similar behaviors to be overlayed on top of complete systems. The overhead of running entire operating systems to utilize network resources has never since been high enough to justify taking some of the radical departures (good ones, I think) plan9 made.
Today, it is kind of old (it doesn't fully support ANSI C, for example, and doesn't use the standard layout of libraries), and while it is realistically possible that, if GCC and glibc were ported to plan9 fully, you could build a pretty complete stack out of already available FOSS Linux programs, the target audience of plan9 is developers who really like dealing with files rather than arbitrary system calls, communication protocols, signal clones, etc.


I'll argue some flaws of plan9 (I also posted a lot of positives earlier up this thread...) on a lower level:

1.  It doesn't support ANSI C, and uses its own standard library layout for its C compiler.  Because the OS threw out the sink, porting the GNU coreutils, glibc, and GCC would take a ton of effort.  So nobody takes the initiative.

2.  9p is another case of xkcd-esque standard protocols mess.  Especially today - I would make the argument IP as a filesystem protocol would probably make the most sense in a "new" OS, because you can change local crap much easier than you can change the world from using the internet.  *Especially* since ipv6 has the lower bytes as local addressing - you can easily partition that space into a nice collection of addressable PIDs, system services, and can still use loopback to access the file system (and if you take the plan9 approach with application level filesystem scopes, its easy to get to the top of your personal vfs).
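A toy sketch of that addressing idea - packing a PID into the lower half of an IPv6 address (the fd00::/64 prefix and the pid-in-low-bits layout are made up purely for illustration):

```python
# Toy sketch: carve per-process addresses out of an IPv6 /64, as suggested above
# (the fd00::/64 prefix and the pid-in-low-bits layout are invented for illustration).
import ipaddress
import os

prefix = ipaddress.IPv6Network("fd00::/64")          # some local prefix
pid = os.getpid()

# Put the PID in the interface-identifier (lower 64 bits) half of the address.
addr = ipaddress.IPv6Address(int(prefix.network_address) | pid)
print(f"process {pid} -> {addr}")                    # e.g. fd00::4d2 for pid 1234
```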

3.  It is *too old*.  Linux of today is nothing like Linux of 1995 for the most part.  Almost every line since Linux 2.0 has been rewritten at least once.  plan9, due to not having as large a developer community, has a stale codebase that has aged a lot.  The consequence is that it is still built with coaxial ports, vga, svideo, IDE, etc in mind rather than modern interfaces and devices like PCIE, SATA, hdmi, usb, etc.  While they successfully added these on top, a lot of the behaviors of the OS were a product of its times when dealing with devices, and it shows.  This is the main reason I feel you have an issue with the GUI and text editor - they were written in the early 90s and have barely been updated since.  Compare rio to BeOS, OS/2, Windows 95, or Mac OS 8.

A lot of the *ideas* (system resources provided as files, application VFS scopes, a unified protocol to access every resource) are *amazing* and I want them everywhere else.  The problem is that those don't show themselves off to the people who make decisions to back operating systems projects as much.

In closing (of the blog post, now), I still think I'd love to dedicate a few years to making a more modern computing platform than NT / Unix / whatever iOS is. I've illustrated my ideas elsewhere, and I will soon be posting a blog post linking to a more conceptualized language definition of that low-level language I was thinking of (I have a formal grammar for it; I'm just speccing out a standard library).