So Bitcoin tanked today for a while after Mt. Gox got DDoSed. It came back, and is still sitting at the insane $200 valuation it has held for a few days. And it was $30 in January.
Long term, though, Bitcoin has one shortcoming that makes it both a pyramid scheme and invalid as a currency.
By restricting the monetary supply over time and halving the rate of coin creation every four years, Bitcoin not only concentrates a disproportionate share of the monetary base in the hands of early adopters, it deflates the currency more with every halving. After 2140, when generation stops entirely, the supply of Bitcoin will only ever decrease, as wallets are lost and the coins they contain are rendered unusable forever.
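For concreteness, the whole emission schedule fits in a few lines of Python. The 50 BTC starting reward and the 210,000-block halving interval are the real protocol constants, though the actual client works in integer satoshis rather than floats, so treat this as a ballpark sketch:

    # Bitcoin's emission: the block reward halves every 210,000 blocks
    subsidy = 50.0                # initial block reward, in BTC
    blocks_per_halving = 210000
    total = 0.0

    while subsidy >= 1e-8:        # rewards below one satoshi round to zero
        total += subsidy * blocks_per_halving
        subsidy /= 2.0

    print("asymptotic supply: ~%d BTC" % total)   # ~21 million

Every coin that will ever exist is in that sum by 2140; after that, lost wallets only subtract from it.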
This is catastrophically bad for any currency. When the supply shrinks over time, the currency naturally becomes scarcer, so prices deflate no matter the economic climate it exists in. In the future timeline where Bitcoin takes over, the holders of the majority of currency reserves (be they banks, exchanges, or super-wealthy citizens) have no reason to ever spend or invest, since their money gains value on its own. Of course, they would probably still invest some - but if the risk climate is awful now, with tremendous USD inflation on the horizon, it would take an absurdly safe and obvious investment to make a future bitcoin billionaire do anything with their money but sit on it, take it out of the money supply, and reap the deflation.
It means those who have money gain value without doing anything. Right now, holding cash means losing value, because more and more money enters the system; in a deflationary economy the opposite holds - as money leaves the system, the holders of an ever scarcer resource need only sit on their reserves to gain real value. That is disastrous for an economy when vast swathes of the monetary base sit out of circulation as an investment.
That is what is happening with bitcoin right now, but today's run-up is a currency exchange speculation bubble. The inevitable flight from heavily rigged fiat currency makes bitcoin one of the few "safe" investments left: what should be the reasonable investments - land, food, industry, metals - are so heavily regulated and controlled that they are invalid for the purpose, so artificially overvalued metals like gold and silver (a travesty of good conductors that could see industrial use if they weren't sitting in vaults wasting space) become the flight targets.
But gold is saturated and rigged by massively wealthy market holders; buying gold now is extremely foolish. Buying bitcoin now that its speculative bubble is in full swing is also foolish - it was trending towards $50, not $250, and will eventually correct back down (at which point I intend to throw a thousand or two at it, for the inevitable rise in valuation when the next major eCommerce site starts accepting it).
But it isn't a long term solution. One problem I see is that SHA-256 might be impossible to crack now, but I can't help predicting that in 50 years, cracking it will be costly yet no longer impossible. If I were trying to invent a cryptocurrency good enough to last centuries, I'd go with 512-bit SHA-3 right now - though to be fair, SHA-3 wasn't finalized in 2008, when Bitcoin's original designers were drafting it.
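(For the curious: comparing the two is a one-liner with Python's hashlib - sha3_512 ships in the standard library as of Python 3.6, long after Bitcoin's design was fixed - and the doubled digest length is the whole point.)

    import hashlib

    header = b"example block header"
    print(hashlib.sha256(header).hexdigest())    # 64 hex chars = 256 bits
    print(hashlib.sha3_512(header).hexdigest())  # 128 hex chars = 512 bits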
I also imagine there are ways to make a cryptocurrency more resistant to a majority deception attack than bitcoin is - it doesn't even take 51% of the compute power in the swarm to alter transactions; around 30% is enough to stand a sizable chance of pulling off fraud.
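That figure is easy to sanity-check against the attack model in the original Bitcoin whitepaper, which gives a closed-form probability that an attacker with hashrate share q rewrites a transaction buried under z confirmations. A direct Python transcription of Satoshi's formula (section 11 of the paper):

    from math import exp, factorial

    def attacker_success(q, z):
        """Chance an attacker with hashrate share q < 0.5 rewrites a
        transaction buried z blocks deep (Nakamoto 2008, section 11)."""
        p = 1.0 - q
        lam = z * q / p
        prob = 1.0
        for k in range(z + 1):
            poisson = lam ** k * exp(-lam) / factorial(k)
            prob -= poisson * (1.0 - (q / p) ** (z - k))
        return prob

    for z in (1, 2, 6):
        print(z, round(attacker_success(0.3, z), 3))
    # roughly 0.63, 0.45, 0.13 - even six confirmations leave a
    # 30% attacker with better than a 1-in-8 shot at fraud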
Bitcoin is a great first try. In the long run, the best money is mathematics - distilling a secure currency is not an impossible problem, just a complex one. But Bitcoin's weakness is not its security, it is its deflationary future, and any cryptocurrency worth its salt will need a fixed, constant increase in its monetary base (as a percentage of total money, I would say) to offset lost wallets and keep the money losing value gently over time, so that the currency stays a means of exchange and not a means of investment.
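To make that concrete, here is a toy model with made-up parameters - 2% of the base minted per year against an assumed 1% of coins lost per year; both numbers are purely illustrative, not a proposal:

    supply = 21000000.0    # starting monetary base
    MINT_RATE = 0.02       # fixed issuance: 2% of total money per year
    LOSS_RATE = 0.01       # assumed share of coins lost per year

    for year in range(1, 51):
        supply += supply * (MINT_RATE - LOSS_RATE)
        if year % 10 == 0:
            print(year, int(supply))
    # net +1% a year: gentle, predictable dilution instead of a
    # base that shrinks forever

Holders face a mild, predictable carrying cost, so hoarding stops being the dominant strategy.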
2013/04/10
2013/03/27
An Exploration of Human Potential 4: The Next Generation of Investment
There was a Vsauce video asking whether Kickstarter has the potential to replace Hollywood, touching on ways Hollywood might try to exploit it. So I figured I should write up what I think the inevitable conclusion of the crowdfunding "revolution" is, how and why it happened, and what comes next.
Kickstarter, Indiegogo, and their kin all center on the idea of consumers paying for the creation of things they want, often with "perks" for donating certain amounts, often (but not always) including a copy of the thing made. Small donations for things like video games often don't get you a copy of the game, which to me seems very backwards, but I'll get to that.
The major issues with crowdfunding are twofold. One, there is no investor protection: if a project reaches its funding goal, you are out your money and hoping they make whatever you are banking on. There is nothing inherently bad about this - it just means you are dumb to invest in people you have no reasonable expectation will produce the product you want, and if they turn around and run off with your money, it's your fault for making a risky investment.
The problem with no protections is that unproven and untested developers / producers see magnitudes less interest and contribution than entrenched groups that have delivered in the past. That is reasonable given the lack of protection: if you have to choose between something brand new from some random guy launching a project out of his garage and an industry veteran making a sequel to an IP they own, you obviously go with the latter, because you can reasonably assume they will actually make the product. But I'll get to why this is catastrophically bad in a bit.
The second major issue is that these sites conflate the real role of crowdfunders - people acting as investors in the creation of new products, ideas, or initiatives - with donation-tier gifts that are supposed to appease them for their money. It is an unnecessary indirection, but it is in many ways symptomatic of the first point: since you have no guarantee your project will actually happen, more tangible "prizes" work to abate the issue and appease the masses.
The problem is that this isn't a macroscopic solution to what I would argue is a systemic issue of the 21st century: automation and globalization will see the death of labor markets for unskilled physical work and an increase in the number of people who don't need to work mindless jobs. That means more people can, and should be driven to, enter creative ventures, and anything less than crowdfunding backed by pervasive information ubiquity squanders all the technological innovation society has made up to this point.
The end goal needs to be that consumers with money have a means of finding people offering to pursue and create new information, head new initiatives, and craft new products, and the ability to directly invest in what they want to see made. For one, it is the only ethical resolution to the tyranny of IP and broken property rights; second, it is the best way to resolve the current economic spiral into extreme inequality between the wealthy investor and the paycheck-to-paycheck laborer.
Back to my first issue: the reason having no protections is bad is that entrenched market forces get disproportionately invested in because they have proven track records. Their histories make them less risky, which drives people to put their money where they feel it is safest, to see the things they want made with the least chance of losing their investment. That means entering a market requires an excess of up-front work, often producing something of similar caliber without the crowdfunding that is supposed to enable small ventures in the first place.
The only solution is to abate the risk. Any venture operating in this new model will need to rigorously calculate its expenses and objectives and publish realistic goals, so that people can invest without fear of the venture taking the money and running. It would require some legal enforcement, maybe via contract with the exchange operator (the kickstarter.com of this scenario): any venture proposing a project must actually make what it says, or face suit for fraudulent business practices and monetary extortion. The legal system is a giant mess as well, but that is tangential - in a much more functional legal system, the provider of the exchange service would prosecute any project that fraudulently takes the money and doesn't deliver, in its "investors'" interests, entirely to mitigate the risk of investing in unproven ventures.
Of course, that means anyone going into a crowdfunded project needs to fear being sued for not delivering. That is good. It means they have to be realistic, and that the project's investors can reasonably expect to get what they pay for.
The resolution to the IP atrocities comes from these funded projects being the means of providing the wellbeing and livelihood of the creators for the duration of the venture. When they propose the product, they give a windowed release target and meet it - otherwise they are liable for fraud. They propose how much they need to live on for the duration of the venture plus additional expenses, and it is up to the crowdfunders to determine if they are a worthy investment. If they reach their funding goal, they get the money and are contractually obliged to deliver the product in the time frame specified.
This also means the donation tiers are unnecessary and detract from the purpose - paying content creators (or any idea creator) for creating those ideas. The information-based results of their labor (the blueprints for a robot they build, the movie they make, the program they write) should be released as close to public domain, or at the least under an open attribution license, as possible, as part of the contract. Once the product is funded by people willing to put their money where their mouth is to see it made, it should be freely available, like the information it is. If you do crowdfund the creation of scarce resources - say, building a cold fusion reactor for a billion dollars - the schematics and blueprints had better be public domain, but the reactor itself is obviously owned by the original venture, to sell electricity as they wish, because it is finite, tangible property with scarcity, and you can't just copy it. Of course, it is all up to the contract between the investors and the venture - if they want to build a fusion reactor and give it to the local government, that is entirely within their contractual rights; they just can't obscure that from their investors.
The reason this matters so much is that investment right now is a rigged, closed game for the economic elite. The stock market isn't a real investment vehicle - many companies on the stock markets couldn't care less what their shares trade at, because they already did an IPO and got their cash reserves. After that, the trading price never impacts them unless they release more shares into the market. People exchanging company ownership with other people doesn't impact the company at all, unless the shareholders collectively exercise 51%+ ownership. Dividend payouts are unrelated to the trading value of a stock; they depend on profits. So in practice, the only way to actually invest in new ideas is to be inside a closed circle of wealthy investors surrounding an industry, playing the chessboard of ideas to their advantage with behind-the-scenes agreements and ventures the public can't engage in - be they agreements between friends to try something new, or a wealthy person privately putting a million dollars into something for profit. Those aren't open investments in what people want.
This becomes more important when we consider how much middle class power we are losing to the decline in income and savings - people no longer have the spending power to drive the markets, because of gross inequality, and the best way to fix that is to open people to supporting one another without corporate overlords controlling what they can buy or enjoy. Moving the engine of creativity and investment back into the hands of the masses means people see what they collectively want made, made - and those who traditionally pull the economic strings lose considerable power when the people paying the content creators are the people themselves, rather than money-hungry investment firms and publishers.
It is absolutely an endgame - the collective funding of idea creation is the endgame of every information-based industry - but getting there will require considerable cultural, legal, and economic shifts away from the poisonous miasma we exist in today. I hope it can happen in my lifetime; the degree of information freedom we could see in such a world would be wonderful to behold.
2013/03/22
Software Rants 11: Build Systems
So in my forays into the world of KDE, I quickly came upon the fiasco surrounding KDE4, when the software compilation transitioned its build system from autotools to cmake (in the end, at least). They were originally targeting scons, thought it was too slow, tried to spin their own solution (bad idea), ended up with Waf, then gave up on that and settled on cmake.
First things first: any tool driven by a script language (and make / cmake / qmake all absolutely qualify) that uses its own syntax and doesn't derive from an actual script language is inherently stupid. If you find yourself in a situation where you can say "a domain language is so much more efficient for programmer productivity here than any script syntax," you are probably tackling the problem wrong. Ten years ago, you would have had a legitimate argument - no script language was mature enough to provide a terse syntax that delivered on brevity and ease of use. Now Python, Ruby, and arguably Javascript are all contenders for mainstream script languages with near-human-readable syntax and rampant extensibility.
So if you see a problem and reach for a domain language, I'm calling you out right there as doing it wrong. Avoiding a dependency on a script language interpreter or JIT (like Python's) is never worth the mental overhead of having to juggle dozens of domain languages.
And that brings me to build processes, which are among the most obvious candidates for scripting but are almost always domain languages, from Ant and Maven to Make and Autotools and the gamut in between. Only Scons is reasonable to me, because it is a build environment in Python where you write your build scripts in Python.
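A minimal SConstruct shows why that matters - the build description is ordinary Python, so loops, conditionals, and imports come for free (Environment, Glob, and Program are real SCons API; the file names are placeholders):

    # SConstruct - SCons injects Environment, Glob, etc. into scope
    env = Environment(CCFLAGS=['-O2', '-Wall'])

    sources = Glob('src/*.cpp')   # returns a plain list of file nodes
    env.Program(target='myapp', source=sources)

Try encoding even that much logic in make's macro language or cmake's homegrown syntax and the difference is immediate.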
Now that is valuable. Scons is on track, I hope, to not only merge back with Waf but also solve its performance hindrances and deliver a modern build system, divergent from the alien syntax of make, for a modern use case where external dependencies, caching, and deployment all need to be accounted for.
However, I am stuck looking at cmake for any new project I want to work with, solely due to qtcreator and kdevelop integration. And honestly, if it stays out of my way, I will put up with it. I want to see Scons succeed though, so like the other hundred projects I want to get involved with, I want to see Scons integration in IDEs. I also want to see it solve its performance problems and deliver a solid product.
One thing I wonder is why they didn't keep the build files in Python but write the backend routines in C or C++, interfacing with the Python interpreter through some scons.so library.
I definitely think any software you intend to run across many machines should be native. Anything less is a disservice to your clientele, whether in microseconds of time wasted or in the measurable electrical consumption of script interpreters in use cases they don't fit.
A build description for a project? Absolutely script-worthy. The backend to process build scripts? Should be native. The project is a business app with low deployment? Python that sucka. The project is a consumer app? Probably native.
I used to think it would make more sense to write everything in Python and then delve into C++ where performance is needed, but the promise of cross-platform porting of qml and qt apps is just too good to pass up.
But yeah, build systems are a fucking mess, and as I continue to write up my Magma manifesto, one of the core tenets is not only compiler-level support for Stratos scripts but the use of Stratos as the build system from the get-go. The modularization, instead of textualization, of files makes bundle finding and importing a breeze, and the usage of compiled libraries or even whole software packages is just a step away.
2013/03/14
Software Rants 10: The Crossroads of Toolkits, circa 2013
I'm making a prediction - in 5 years, the vast majority of application based software development will be done in one of two environments - html5, or qt.
Those sound radically different, right? What kind of stupid tangent am I going off on now? Well, the biggest thing happening in the application world is a transition: the world is slowly getting off the Windows monoculture thanks to mobile. With every new device OS - from blackberry to ios to Android to the upcoming Ubuntu Phone and Sailfish - plus the growing GNU/Linux gaming scene and OSX adoption, the biggest deal is finding a development environment where you can write once and deploy everywhere.
And the only two runners in that race right now are qt and html5. I'd mention Mono and Xamarin, but the C# runtime is so slow and huge on mobile platforms the performance isn't there, and the momentum is moving towards qt anyway. Here are the respective pros and cons:
Qt
- Optional native performance if written wholly in C++.
- More practically, write applications in qml (for layout and styling) and javascript, and stick any performance-critical parts in C++; since signals and slots make the transition seamless, you get rapid deployment and high performance at once.
- LGPL apps distribute the qt libraries through a bundled download assistant that will retrieve them once for all local qt apps, so they aren't redundantly cloned. Downside is that with low adoption the downloader is a hindrance for users.
- Integrates nicely into most toolkits' appearances. For example, it uses the options button on Android, and supports gestures.
- As native apps, qt apps are local, offline capable, and are exposed to the file system and all other niceties of a first class citizen program.
html5
- Most pervasive platform, but not concrete. qt5 is stable and shippable, and because Digia controls it you can expect forward updates to come through without a hitch. Banking on html5 webapps means unupdated devices aren't as well supported (not that much of a problem in, say, 2 years), but older devices (a tail that stretches out more and more as consumer compute power plateaus) mean fewer browser updates, and the need for tag-soup feature detection to figure out what you actually have available.
- Not solid. At all. Webaudio is still sorely lacking, webrtc isn't finalized, webgl is still experimental, and input handling is nonexistent. Local storage is too small to cache any significant amount of an app, especially for offline usage.
- By being web based, you have inherent access to all the resources of the Internet, whereas qt requires web APIs to access the same things.
- Inherently cloud based, explicit cloud configuration required for qt.
- Qt generates installable apps for its target platforms as native local applications; html5 apps are cloud based and thus only as slow to get into as a page load. So, a lower barrier to entry.
Compare the status quo, platform by platform:
- Objective C + Cocoa for ios
- Objective C + Quartz for OSX
- Windows Forms + C#/C++ + win32 for Windows
- WinRT + C++ for Windows Phone
- GTK (or qt) + c (or C++) for Linux
- Java + ADK for Android
- Qt for Blackberry, Ubuntu Phone, Sailfish (anyway).
Nothing else comes close to the device parity of these two platforms. Any new application developer is naive not to use one of them, because all the others listed are dead ends with platform lock-in. The plethora of backers behind the w3c and Digia come from all of these platforms and have an interest in promoting their continued growth, and the platforms themselves realize that being device-transcendent makes them all the more useful.
What I find really interesting is that the managed languages, Java and C#, are nowhere. Mono is close to being device-prolific, but Oracle has been a sludge of an outdated bureaucratic death trap that never realizes an opportunity since it bought Sun, so Java just flounders into obscurity. Which is fine - the language grows at a molasses pace and makes me mad to even look at, with such critical flaws as no function objects and no default arguments.
But qt does it better, with C++ of all things. I guess GCC / Clang are useful in their architecture proliferation.
Which is one of the main reasons I'm focusing myself on qt, and will be doing my work in the next few months in it. I think it is the future, because at the end of the day, html is still a markup language. It has grown tumors of styling and scripting and has mutated over the years, but you are still browsing markup documents. I just like having access to a system down to its core, and qt provides that option when necessary. So I'm betting on qt, and hope it pays off.
Software Rants 9: Capturing the Desktop
In my continuing thinking about next generation operating systems and the like, I wanted to outline the various aspects of a system necessary to truly win the world - all the parts of a whole computing experience that, if present and superior to all competitors, would probably change the world overnight. No piece can be missing, as can be said of Linux space with its lack of non-linear video editors, GIMP's subpar feature parity against the competition, and audio's terrible architecture support. So here are the categories of things a next generation desktop needs in order to capture the consumer space.
Core
Networking
Video
Audio
Textual
Security
Input
Core
- Microkernel providing consistent device ABI and abstractions. Needs to be preemptive, have a fair low overhead scheduler, and be highly optimized in implementation. The kernel should provide a socket based IPC layer.
- Driver architecture built around files, interfacing with kernel-provided device hooks to control devices. Driver signing for security, but optional disabling for debugging and testing. Drivers need an explicit debug test harness, since they are one of the most important components to minimize bugs in.
- Init daemon that supports arbitrary payloads, service status logging, catastrophic error recovery, and controlled system failure. The init daemon should initialize the IPC layer for parallel service initialization (think systemd or launchd).
- Command shell using an elegant shell script (see: Stratos in shell context). Most applications need to provide CLI implementations to support server functionality.
- Executor that will checksum and sign check binary payloads, has an intelligent fast library search and inject implementation, and supports debugger / profiler injection without any runtime overhead of standard apps.
- Hardware side, two interface specifications - serial and parallel digital. Channels are modulated for bandwidth, and dynamic parallel channels allow for point to point bandwidth control on the proverbial northbridge. High privileged devices should use PDI, and low privileged should use SDI. Latency tradeoffs for bandwidth should be modulation specific, so one interface each should be able to transition cleanly from low latency low bandwidth to high latency pipelined bandwidth. Consider a single interface where a single channel parallel is treated as a less privileged interface. Disregard integrated analog interfaces. USB can certainly be implemented as an expansion card.
- Consider 4 form factors of device profile - mobile, consumer, professional, and server. Each has different UX and thus size / allocation of buses requirements, so target appropriately. Consumer should be at most mini-ITX scale, professional should be at most micro-ATX - we are in the future, we don't need big boards.
- Next generation low level systems language, that is meant to utilize every programming paradigm and supply the ability to inline script or ASM code (aka, Magma). Module based, optimized for compiler architecture.
- A common intermediary bytecode standard to compile both low and middle level languages against, akin to LLVM bytecode. Should support external functionality hooks, like a GC or runtime sandbox. This bytecode should also be signable, checksum-able, and interchangeable over a network pipe (but deterministic execution of bytecode built for a target architecture in a systems programming context is not guaranteed).
- Middle level garbage collected modularized contextual language for application development. Objectives are to promote device agnosticism, streamline library functionality, while providing development infrastructure to support very large group development, but can also be compiled and used as a binary script language. See : Fissure.
- High level script language able to tightly integrate into Magma and Fissure. Functions as the shell language, and as a textual script language for plugins and framework scripting on other applications. Meant to be very python-esque, in being a dynamic, unthreaded simple execution environment that promotes programmer productivity and readability at the cost of efficiency (see : Stratos).
- Source control provided by the system database backend, and source control is pervasive on every folder and file in the system unless explicitly removed. Subvolumes can be declared for treatment like classic source control repositories. This also acts as system restore and if the database is configured redundant acts as backup.
- Copy on write, online compressing transparent filesystem with branch caching, auto defragmentation, with distributed metadata, RAID support, and cross volume partitioning. Target ZFS level security and data integrity.
- Everything-as-a-file transparent filesystem - devices, services, network locations, processes, memory, etc as filesystem data structures. Per-application (and thus per-user) filesystem view scopes. See the next gen FS layout specification for more information.
- Hardware side, target solid state storage with an everything-as-cache storage policy - provide metrics to integrate arbitrary cache layers into the system caching daemon, use learning readahead to predict usage, and use the tried and true dumb space local and time local caching policy.
Networking
- Backwards compatibility with the ipv6 network transport layer, TCP/IP/UDP, TLS security, with full stack support for html / css / ecmascript compliant documents over them.
- Rich markup document format with WYSIWYG editor support, scripting, and styling. Meant to work in parallel with a traditional TCP stack.
- Next generation distributed network without centralization, with point to point connectivity and neighborhood acknowledgement. Meant to act as a LAN protocol for simple file transfer and service publication (displays, video, audio, printers, inputs, up through software like video libraries and databases) that can also be deployed wideband as a public Internet without centralization.
- Discard support for ipv4, ftp, nfs, smb, vnc, etc. protocols in favor of modern solutions.
Video
- Only a 3d rendering API, where 2d is a reduced-set case. All hardware is expected to be heterogeneous SIMD and complex processing, so this API is published on every device. Since Magma has SIMD instruction support, this API uses Magma in the SIMD context instead of something arbitrary like GLSL. It is a standard library feature of the low level language (Magma).
- Hardware graphics drivers need only support the rendering API in their device implementation, and the executor will allocate instructions against it. No special OS-specific hooks necessary. Even better, one standard linkable could be provided that backs onto the present gpu hardware or falls back to pipelined core usage.
- No need for a display server / service, since all applications work through a single rendering API. A desktop environment is just like any 3d application running in a virtual window, it just runs at the service level and can thus take control of a display (in terms of access privileges, user applications can't ever take control of a display, and the best they can do is negotiate with the environment to run in a chromeless fullscreen window).
- Complete non-linear video editor and splicer that is on par with Vegas.
- Complete 3d modeler / animator / scene propagator supporting dae, cad, and system formats.
- System wide hardware video rendering backend library supporting legacy formats and system provided ones, found in Magma's std.
- Complete 2d vector and raster image composer, better UX and feature parity than Gimp, at least on par with photoshop. Think Inkscape + sai.
- 3d (and by extension, fallback 2d) ORM game engine implemented in Magma provided as a service for game makers. Should also have a complete SDK for development, use models developed in our modeler.
- Cloud video publishing service baked into a complete content creation platform.
- Art publishing service akin to DA on the content creation platform.
- Saves use version control and continuous saving through DB caching to keep persistent save clones.
Audio
- Like Video, a single 3d audio API devices need to support at the driver level (which means positional and point to point audio support). Standards should be a highly optimized variable bitrate container format.
- Software only mixing and equalizing, supplied by the OS primary audio service, and controllable by the user. Each user would have a profile, like they would have a video profile.
- Audio mixing software of at least the quality of Audacity and with much better UX.
- Audio production suite much better than garageband.
- System wide audio backend (provided in Magma's std) that supports legacy and system formats.
- Audio publishing service akin to bandcamp in a content creation platform.
Textual
- Systemic backend database assumed present, in some object mapping API specified in Magma. Different runlevels have access to different table groups, and access privilege applies at the database server. This way, all applications can use a centralized database-in-filesystem repository rather than running their own. Note: database shards and tables are stored app-local rather than in a behemoth registry-style layout, and are loaded on demand rather than as one giant backend. The database server just manages storage independently. The database files use the standard serialization format, so users can write custom configurations easily. These files, of course, can be encrypted.
- Since the database is inherently scriptable, you can store spreadsheets in it. It can also act as a version control repository, so all documents are version controlled.
- Singular document standard, supporting scripting and styling, used as local WYSIWYG based binary or textual saved documents, or as "web" pages.
- Integrated development environment using gui source control hooks, support for the system debugger and profiler, consoles, collaborative editing, live previews, designer hooks, etc. Should be written in Magma, and load on demand features. Target qtcreator, not visual studio / eclipse.
Security
- Pervasive, executable-based mandatory access control. Profiles are file based, scripted in the standard serialization format, and should be simple to modify and configure with admin privileges.
- Contextual file system views, as a part of MAC, an application can only "see" what it is allowed to see, in a restricted context.
- Binary signing pervasively, keys stored in central database.
- Folder, file, and drive based encryption. An encrypted system partition can be unlocked by a RAMFS boot behavior.
- Device layer passwords are supported as encryption keys. The disk is encrypted with the password as the key, instead of the traditional independent behavior where you can just read the contents off a password protected disk.
- Network security implied - the firewall has a deny policy, as do system services. Fail2ban is included with reasonable base parameters that can be modified system wide or per service. All network connections on the system protocol negotiate secure connections and use a hosted key repository with the name server for credentials exchange and validation.
Input
- Going to need to support arbitrary key layouts with arbitrary glyphic key-symbol correlations. Think utf8 key codes. Vector based dimensional visual movement, which can be implemented as touch, mouse, rotation, joysticks, etc. So the two input standards are motion and key codes.
- Input devices provided as files in the FS (duh) and simple input APIs provided in Magma.
2013/03/06
Reddit Rants 2: Mir Fallout Ranting
This time, in the wake of Mir's unveiling as the new Ubuntu display server, I was responding to someone claiming fragmentation isn't a problem here, and that the competition Mir produces would be positive and get Wayland developed faster. Here is my retort:
The correct way to go about FOSS development is:
Explore Options -> engage with open contribution projects in the same space -> attempt to contribute to the already established product, improving it into what you need, given community support -> if that doesn't happen, consider forking -> if forking is not good enough and you need to rebase, start from scratch.
Canonical skipped to the last step. It is fine if you have no other option but to fragment because then you are representing some market segment whose needs are not met.
A next generation display server that can run on any device with minimal overhead, sane input handling, and network abstraction already exists, in a stable API state with running examples. It is called Wayland.
The problem with Mir and Canonical is that unlike community projects and community engagement, Canonical doesn't give a crap about what the community thinks. They maintain Upstart because fuck you, they created bazaar in an era of git because fuck you, they maintain a pointless compositor named Compiz because fuck you, they invented a UI you could easily recreate in Plasma or Xfce or even Mate with slight modification but they did it from scratch and introduced a standardless mess of an application controls API because fuck you.
They want to control the whole stack, not play ball. They got way too much momentum from the Linux community in an era when Fedora was still mediocre, Arch didn't exist (and is still too user unfriendly), Debian was still slow as hell, opensuse was barely beginning, and the FOSS ecosystem wanted to rally around a big player in the consumer space the way redhat was in the server space.
Mir is bad because it will persist regardless of its merit. Canonical would never give up and deprecate it - the same way they are still advertising an Ubuntu TV two years later with no working demo - so Mir will steal whatever momentum X / Wayland have towards reasonable graphics driver support, and possibly steal gpu manufacturer support entirely away from what is shaping up to be the much more technically promising and openly developed project, Wayland.
SurfaceFlinger is a great comparison. Mir will be just like that: it will eat up hardware support into a backend that can't easily mesh with modern display servers, and hardware manufacturers won't support multiple display servers. So if Mir crashes and burns, interest in Linux wanes because it looks like the same old fragmented, unstable OS; and if it doesn't, it is completely detached from the FOSS community anyway under the CLA, and Canonical will control it entirely at their will.
It isn't a question of communal merit. Canonical doesn't play that way. That is why this is so bad. It is fine if the top level programs are fragmented and disparate, because that presents workflow choice. The display server, audio backend, network stack, and init daemon are not traditionally user experience altering; they are developer altering. If you want developers, you point them at one technically potent stack of tools, well implemented by smart people with collective support behind them, so they can make cool things and expect them to run on the OS. That isn't the case when you have 3 display servers, 3 audio backends, 3 init daemons, 500 package formats, etc.
I also wrote a shorter response on a HN thread:
I'm personally not too worried here. The thing is both Wayland and Mir will be able to run X on top of them, so currently all available GUI programs will still work.
What matters is the "winner". They will both hit mainstream usage, we will see which one is easier to develop for, and that one will take off. If Mir's claims of fixing input / specialization issues in Wayland come to fruition, then it will probably win. If Mir hits like Unity, or atrophies like Upstart, then Wayland will probably win.
The problem is the asymmetry: if Wayland fails, everyone can switch to Mir, but if Mir proves weaker, we are stuck with a more fragmented desktop space, because Canonical doesn't change their minds on these things.
I also played prophet a bit on phoronix (moronix?) about how this will pan out:
There are only 3 real ways this will end.
1. Canonical, for pretty much the first time ever, produces original complex software that works on time, and does its job well enough to hit all versions of Ubuntu in a working state (aka, not Unity in 11.04). By nature of being a corporate entity pushing adoption, and in collusion with Valve + GPU vendors, Mir sees adoption in the steambox space (in a year) and gets driver support from Nvidia / ATI / Qualcomm / etc. Mir wins, regardless of technical merit, just by having the support infrastructure coalesce around it. Desktop Linux suffers as Canonical directs Mir to their needs and wants, closes development under the CLA, and stifles innovation in the display server space even worse than a decade of X stagnation did.
2. Mir turns out like most Canonical projects: fluff, delay, and unimpressive results. The consequence is that Ubuntu as a platform suffers, and mainstream adoption of GNU/Linux is once again kicked back a few pegs, since distributors like system76 / Dell / HP can't realistically sell Ubuntu laptops with a defective display server and protocol, and nobody else has been pushing hard on consumer hardware with any other distro (openSuse or Fedora seem like the runner-up candidates, though). Valve probably withdraws some gaming support because of the whole fiasco, and gpu drivers don't improve at all, because Mir flops and Wayland doesn't get the industry visibility it needs, its potential thrown into question by business since Canonical so eagerly ignored it. The result is that we are practically stuck with X for an extended period of time, since nobody migrates to Wayland after Mir took all the momentum out of the push to drop X.
3. The best outcome is that Mir crashes and burns, Wayland is polished by year's end and shipping in mainstream distros, and someone at Free Desktop / Red Hat gets enough inroads with AMD / Nvidia to get them to either focus entirely on the open source drivers to support Wayland (best case) or refactor their proprietary ones to work well on Wayland (and better than they do right now on X). The pressure from desktop graphics and the portability of Wayland - given Nvidia supporting it on Tegra as well - might pressure hard-line ARM gpu vendors to also support Wayland. The open development and the removal of the burden of X would mean a new era of Linux graphics, sunshine and rainbows. Ubuntu basically crashes and burns, since toolkits and drivers don't support Mir well, or at all, and Canonical, being the bullheaded business it is, would never consider using the open standard (hic: systemd, git).
Sadly, the second one is the most likely.
My TLDR conclusion is that Mir is just a power grab by Canonical as they continue to rebuild the GNU stack under their CLA and their control. I don't have a problem with them trying vertical integration of their own thing, but forking off like this hurts the Linux ecosystem that made them what they are today, and it will ruin the momentum of adoption the movement has right now, which makes me sad.
Explore Options -> engage with open contribution projects in the same space -> attempt to contribute to the already established product, improving it into what you need, given community support -> if that doesn't happen, consider forking -> if forking is not good enough and you need to rebase, start from scratch.
Canonical skipped to the last step. It is fine if you have no other option but to fragment because then you are representing some market segment whose needs are not met.
The needs of a next generation display server that can run on any device with minimal overhead, sane input handling, and network abstraction already exists and it is in a stable API state with running examples, called Wayland.
The problem with Mir and Canonical is that unlike community projects and community engagement, Canonical doesn't give a crap about what the community thinks. They maintain Upstart because fuck you, they created bazaar in an era of git because fuck you, they maintain a pointless compositor named Compiz because fuck you, they invented a UI you could easily recreate in Plasma or Xfce or even Mate with slight modification but they did it from scratch and introduced a standardless mess of a application controls API because fuck you.
They want to control the whole stack, not play ball. They got way too much momentum from the Linux community in an era when Fedora was still mediocre, Arch was still obscure (and is still too user-unfriendly), Debian was still slow as hell, openSUSE was barely beginning, and the FOSS ecosystem wanted to rally around a big player in the consumer space the way it had around Red Hat in the server space.
Mir is bad because it will persist regardless of its merit. Canonical would never give up and deprecate it - the same way they are still advertising an Ubuntu TV two years later with no working demo - so Mir will now steal whatever momentum X / Wayland have towards reasonable graphics driver support, and possibly steal GPU manufacturers' support entirely away from what is shaping up to be the much more technically promising and openly developed project in Wayland.
SurfaceFlinger is a great comparison. Mir will be just like that. It will eat up hardware support into a siloed backend that can't easily mesh with other display servers, and hardware manufacturers won't support multiple display servers. So if Mir crashes and burns, interest in Linux wanes because it looks like the "same old fragmented unstable OS", and if it doesn't, it's completely detached from the FOSS community anyway under the CLA, and Canonical will bend it entirely to their will.
It isn't a question of communal merit. Canonical doesn't play that way. That is why this is so bad. It is fine if the top-level programs are fragmented and disparate, because that presents workflow choice. The display server, audio backend, network stack, and init daemon are not traditionally user-experience-altering; they are developer-altering. If you want developers, you point them to one technically potent stack of tools, well implemented by smart people with collective support behind them, so they can make cool things and expect them to run on the OS. That isn't the case when you have 3 display servers, 3 audio backends, 3 init daemons, 500 package formats, etc.
I also wrote a shorter response on an HN thread:
I'm personally not too worried here. The thing is both Wayland and Mir will be able to run X on top of them, so all currently available GUI programs will still work.
What matters is the "winner". They will both hit mainstream usage, we will see which one is easier to develop for, and that one will take off. If Mir's claims of fixing input / specialization issues in Wayland come to fruition, then it will probably win. If Mir hits like Unity, or atrophies like Upstart, then Wayland will probably win.
The problem is that if Wayland fails, everyone can switch to Mir. If Mir proves weaker, we are stuck with a more fragmented desktop space, because Canonical doesn't change their minds on these things.
I also played prophet a bit on Phoronix (Moronix?) about how this will pan out:
There are only 3 real ways this will end.
1. Canonical, for pretty much the first time ever, produces original complex software that works on time, and does its job well enough to hit all versions of Ubuntu in a working state (aka, not Unity in 11.04). By nature of being a corporate entity pushing adoption, and in collusion with Valve + GPU vendors, Mir sees adoption in the steambox space (in a year) and gets driver support from Nvidia / ATI / Qualcomm / etc. Mir wins, regardless of technical merit, just by having the support infrastructure coalesce around it. Desktop Linux suffers as Canonical directs Mir to their needs and wants, closes development under the CLA, and stifles innovation in the display server space even worse than a decade of X stagnation did.
2. Mir turns out like most Canonical projects: fluff, delay, and unimpressive results. The consequence is that Ubuntu as a platform suffers, and mainstream adoption of GNU is once again knocked back a few pegs, since distributors like System76 / Dell / HP can't realistically sell Ubuntu laptops with a defective display server and protocol, and nobody else has been pushing hard on consumer hardware with any other distro (openSUSE or Fedora seem like the runner-up viable candidates, though). Valve probably withdraws some gaming support because of the whole fiasco, and GPU drivers don't improve at all, because Mir flops and Wayland doesn't get the industry visibility it needs - its potential thrown into question by businesses since Canonical so eagerly ignored it. The result is we are practically stuck with X for an extended period of time, since nobody is migrating to Wayland after Mir took all the momentum out of the push to drop X.
3. The best outcome is that Mir crashes and burns, Wayland is perfect by year's end and shipping in mainstream distros, and someone at Freedesktop / Red Hat gets inroads enough with AMD / Nvidia to get them to either focus entirely on the open source drivers to support Wayland (best case) or refactor their proprietary ones to work well on Wayland (and better than they do right now on X). The pressure from desktop graphics and the portability of Wayland, given Nvidia supporting it on Tegra as well, might push hard-line ARM GPU vendors to also support Wayland. The open development and the removal of the burden of X mean a new era of Linux graphics, sunshine and rainbows. Ubuntu basically crashes and burns since toolkits and drivers don't support Mir well, or at all, and Canonical, being the bullheaded business it is, would never consider using the open standard (see: systemd, git).
Sadly, the second one is the most likely.
My TLDR conclusion is that Mir is just a power grab by Canonical as they continue to rebuild the GNU stack under their CLA and their control. I don't have a problem with them trying to do vertical integration of their own thing, but it hurts the Linux ecosystem that made them what they are today to fork off like this, and it will ruin the momentum of adoption the movement has right now, which makes me sad.
Reddit Rants 1: Plan 9
I'm going to start posting some of the rants I put on reddit that get some traction here as well, for posterity's sake. Because all the crap I've been saying is so important I should immortalize it in blag form forever!
This one was in a thread about Plan 9, got the most upvotes in the thread I think, and illustrates what it was and why it happened. Consider it a follow-up to my attempts at running Plan 9 after all that research I did.
> I wiki'd Plan 9 but can someone give me a summary of why Plan 9 is important?
- It was a solution to a late-80s problem that never really got solved, because technology outpaced it. Back then, there were two kinds of computers: giant room-sized behemoth servers to do any real work on, and workstations for terminals. And they barely talked. Plan 9, because of how the kernel is modularized, allows one user session to have its processors hosted on one machine, the memory on another, the hard disks someplace else, the display completely independent of all those machines, and input from somewhere else again, with the 9p protocol letting all those communications be done over a network line securely. So you could have dozens of terminals running off a server, or you could just (in a live session) load up the computation server to do something CPU-intensive. The entire system, every part, was inherently distributed.
- It treated every transaction as a single protocol, so that networked and local device access would both be done under 9p, and the real goal was to make it so that any resource anywhere could be mounted as a filesystem and retrieved as a file. It had files for every device in the system well beyond what Linux /dev provides, and it had almost no system calls, because most of that work was done writing or reading system files. About the only major ones were read, write, open, and close, which were dynamic on the type of interaction taking place and could do radically different things (call functions in a device driver, mount and stream a file over a network, or read a normal file from a local volume). There's a sketch of the idea just after this list.
- File systems could be overlaid on one another and had namespaces, so that you could have two distinct device folders, merge them into one in the VFS, and treat /dev as one folder even if the actual contents are in multiple places. Likewise, each running program got its own view of the file system specific to its privileges and requirements, so access to devices like keyboards, mice, network devices, disks, etc could be restricted on a per-application basis by specifying what it can or can not read or write in the file system.
- This might sound strange, but graphics are more of a first-class citizen in Plan 9 than they ever were in Unix. The display manager is a kernel driver itself, so unlike X it isn't userspace. The system wasn't designed to have a layer of teletypes under the graphics environment; they were discrete concepts.
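To make the "everything is a file served over 9p" point concrete, here is roughly what dialing a TCP connection looks like on Plan 9, written as a Python-flavored sketch. I'm going from memory of the /net interface, so treat the paths and control strings as illustrative rather than gospel:

    # Plan 9 exposes networking as files under /net. Allocating a
    # connection, dialing, and talking to the peer are all file I/O.
    ctl = open("/net/tcp/clone", "r+")     # allocate a connection directory
    conn = ctl.read().strip()              # reading yields its number, e.g. "4"
    ctl.write("connect 192.0.2.1!80\n")    # writing a ctl message dials out
    ctl.flush()

    # The byte stream is just another file in that directory.
    with open("/net/tcp/%s/data" % conn, "r+b") as data:
        data.write(b"GET / HTTP/1.0\r\n\r\n")
        print(data.read())
    ctl.close()                            # closing hangs up

No socket(), bind(), or ioctl() in sight - and because it's all files under 9p, the same code works whether /net is the local stack or one imported from another machine.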
Today, it is kind of old - it doesn't fully support ANSI C, for example, and doesn't use the standard layout of libraries. It is realistically possible that, if GCC and glibc were fully ported to Plan 9, you could build a pretty complete stack out of already available FOSS Linux programs, but the target audience of Plan 9 is developers who really like dealing with files rather than arbitrary system calls, communication protocols, signal clones, etc.
I'll argue some lower-level flaws of Plan 9 (I also posted a lot of positives earlier in this thread):
1. It doesn't support ANSI C, and uses its own standard library layout for its C compiler. Because the OS threw out everything, kitchen sink included, porting the GNU coreutils, glibc, and GCC would take a ton of effort. So nobody takes the initiative.
2. 9p is another case of the xkcd-esque competing-standards mess. Especially today - I would make the argument that IP as a filesystem protocol would probably make the most sense in a "new" OS, because you can change local crap much easier than you can change the world from using the internet. *Especially* since IPv6 has the lower bytes as local addressing - you can easily partition that space into a nice collection of addressable PIDs and system services, and can still use loopback to access the file system (and if you take the Plan 9 approach with application-level filesystem scopes, it's easy to get to the top of your personal VFS). There's a toy sketch of this partitioning right after this list.
3. It is *too old*. Linux of today is nothing like Linux of 1995 for the most part; almost every line since Linux 2.0 has been rewritten at least once. Plan 9, not having as large a developer community, has a stale codebase that has aged a lot. The consequence is that it is still built with coaxial ports, VGA, S-Video, IDE, etc in mind rather than modern interfaces and devices like PCIe, SATA, HDMI, USB, etc. While they successfully added these on top, a lot of the behaviors of the OS were a product of their times when dealing with devices, and it shows. This is the main reason I feel you have an issue with the GUI and text editor - they were written in the early 90s and have hardly been updated since. Compare rio to BeOS, OS/2, Windows 95, or Mac OS 8.
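Here is the kind of partitioning I mean for point 2, as a toy Python sketch. The fd00::/64 prefix and the service/PID layout are completely made up for illustration:

    import ipaddress

    # Hypothetical: reserve a local /64 and carve its 64 host bits
    # into a service class (high 32 bits) and a PID (low 32 bits).
    LOCAL = ipaddress.IPv6Network("fd00::/64")

    def resource_address(service: int, pid: int) -> ipaddress.IPv6Address:
        return LOCAL[(service << 32) | pid]

    # "Opening" process 4242 of service class 7 is then just dialing
    # an address, locally or across the network - no special casing.
    print(resource_address(7, 4242))   # fd00::7:0:1092

The point is that local resources and remote resources land in one address space, so the same access path works for both.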
A lot of the *ideas* (system resources provided as files, application VFS scopes, a unified protocol to access every resource) are *amazing*, and I want them everywhere else. The problem is that those ideas don't show themselves off as much to the people who make decisions about backing operating system projects.
In closing (of the blog post, now), I still think I'd love to dedicate a few years to making a more modern computing platform than NT / Unix / whatever iOS is. I've illustrated my ideas elsewhere, and I will soon be posting a blog post linking to a more conceptualized language definition of that low-level language I was thinking of (I have a formal grammar for it already; I'm just speccing out a standard library).
2013/02/09
An Exploration of Human Potential 3: Copyright, IP, and Knowledge
It occurs to me I have never written a reference for my stance on the copyright debacle of the 21st century, so I'll talk about my historical views of copyright, and where we go from here.
A long time ago, in a land far far away (at least an ocean away), the ideas of copyright were primarily motivated by preventing someone from taking your mathematical formulae or written works and claiming them as their own. It was wholly to protect an inventor of new things from having their "ideas" stolen.
In the US (this focuses on the US, because this is sadly the land of copyright enforcement worldwide), those policies meant that authors could publish their books with recourse if someone started making bootleg copies and trying to either sell or freely distribute them. The first US copyright terms lasted 14 years (renewable once for another 14), which meant an author had an effective monopoly on the distribution and usage of their ideas until that period expired, when the work would enter the public domain and anyone could use it, without even citing a source.
In practice, this just meant that once something went public domain, you could hardly profit off it anymore: the price of a public domain book was only limited by the costs of actually printing and distributing said book, since the actual work printed was now free to use and anyone could print it.
This also applied to works of art, where bootleg replicas would violate the copyright of a painter - if you were to trace-duplicate something, you could be taken to court by the original creator.
Disney came around in the 20th century and, on the nascent currents of a budding film industry, started creating animated films. Steamboat Willie - the historical point of reference for copyright terms today - was made in 1928. Since then, Disney has lobbied for extensions of copyright (I am still unaware of who the content creator of the film is attributed to in the lifetime + 70 years terminology) to keep that film out of the public domain so that they can still claim ownership of Mickey Mouse.
Now, my opinion is that that is highly toxic to culture and society. By perpetually preventing the creative media of long-dead authors from entering the public domain, it prevents modern artists from openly deriving works and perpetuating culture in a legally unambiguous way. Today, artists and authors create works of derivative value from still-copyrighted material, even that which has absolutely become a part of culture (Bugs Bunny, Star Wars, James Bond). Even works as recent as the Lord of the Rings films or Harry Potter I would argue are absolutely essential culture in a significant portion of western society and media.
Instead of having the unambiguous law of deriving art and creative endeavor from public domain works, modern artists live in a society where a huge portion of their inspiration is still under copyright of some corporate entity, even when the authors are long dead, and will be until there is some fundamental societal change that deems the continued perpetual copyright of everything unacceptable. Content creators are perpetually at the mercy of corporate entities that "own" the ideas behind almost everything in modern creative media, and that is tremendously harmful to society.
Likewise, very little culture from the last century is freely available. It is all under copyright, owned by some business that intends to sell and profit off limited distribution and legal monopoly for all time. This leads me to the critical point here, and why this is only becoming a really significant issue in the age of the internet.
Before everyone in the world became connected over electrical pulses across wires in the last 20 years, the act of distributing bootleg copies of creative works was itself costly. The physical video tapes or photo paper necessary to reproduce something were cost-prohibitive enough that people wouldn't give away the material they possessed for free. Under those grounds, it becomes very obvious that someone selling copies is making money where the original creator should have been - the original pretense of copyright. Likewise, it was never frowned upon (and in many ways this is the folly of the industries backing draconian copyright law) for average joes to make copies of the media they possessed to share with friends and family, at least for a time between the 70s and 90s. Children would share cassette tapes, and parents would replicate a VHS tape to give to a neighbor, or loan it. They would share the media.
This also brings up another important distinction - since the olden times of copyright law, we have shifted the channels in which we impart and distribute the things we place under copyright. It is a violation of intellectual property to take a car produced by Ford and rebuild it from scratch using the original car as a template; that violates Ford's patents on the designs of the vehicle. Patents, unlike copyright, exist as a way to say "I came up with this original invention rather than a work of art, and this is how I made it - nobody else can make this thing for some number of years, because I did it first". But the line between what was easily understood as patentable - mechanical parts, schemata, building plans - and what is copyrightable - works of art, writing, "ideas" that aren't "inventions" - blurs even in the definitions, though patents at least require a thorough specification of the patented good to be submitted to the patent office, whereas copyright is assumed.
In the same realm as why software patents are horribly wrong, a lot of the ideas behind patents are socially destructive - if someone comes up with a medical breakthrough, rather than having market forces drive the costs of their creation to just the money it takes to make the product, they get an exclusive right to authorize its creation. This is one of the drivers for why the pharmaceutical industry is as large as it is - the cost of making a pill is scant, but you need to recoup millions in investment in creating a cure.
Patents, however, are still another whole pie of wrong and disaster to be dealt with later. Back to copyright.
Entering the 21st century, we are now able to distribute the things relegated to copyright - art, music, and IP - for free. We can recreate them for free. We have intentionally been driven to represent them as information rather than any more physical form, because information is cheap in the computer age. The revolution of the transistor has allowed us to convey knowledge for very little cost. The next revolution will be to convey the physical world at a similar lack of expense. But for now, we have (and take advantage of) the capacity to replicate the numbers we store and transfer through these machines to whoever wants them, at an extremely negligible cost of electricity on our part. We do this freely now. Where the bootlegger in 1990 couldn't fathom shipping copies of Star Wars 6 to everyone in the US, today he can stick it in a torrent and let anyone who wants it download it. It might take forever if nobody else participates in the sharing, but it never financially impedes his ability to spread the knowledge he possesses.
And this requires a thorough definition. A reel of film contains physically imprinted images; the frames are magnified and displayed in rapid succession to create the appearance of moving pictures. Physical pictures are pigments embedded in some form of tree pulp or some other medium. Music is interesting in that even its "physical" media were never pictures of anything - a record groove or a tape is an encoding of a waveform, and we have long sent audio around as electrical impulses or some other information form. Speakers just reproduce waveforms as vibrations to create audible sounds. Sound itself is just pressure waves in air hitting eardrums. The only way to represent that is as a mathematical formula.
However, today, we don't store our videos on VHS tapes. Hell, our TVs never displayed VHS as magnified film - a VHS player would encode the images into a signal to be sent over coaxial cable, or S-Video, or some other electrical medium, as a pattern of electrical pulses. That was already information. It is why we could rip the VHS tapes we had. It is how broadcast television works.
The medium we use today skips the intermediary physical form to massively increase available space and minimize costs - DVDs and Blu-ray disks are just optical platters storing numbers. The formats we use to "encode" a pixel map (and successive pixel maps to create the appearance of moving pictures) - h264, or Daala, or vp8 - are all just mathematical formulae (that *&%# MPEG-LA can patent because of software patents). We take numbers, put them through a formula, and then use the resulting number as a pattern to electrically stimulate crystals in a display to generate certain colors in a grid. That produces a picture. We display pictures in rapid succession to create the appearance of motion.
That video is a number - the audio, also a number. Words are also numbers - Unicode characters are just up to 4 bytes denoting a glyph representing some symbol from some language or other utility case. The machine will take the number and display the corresponding glyph it knows. Still numbers. Still information, still knowledge.
Because in the end, knowing a number is possessing knowledge - to know Pi is 3.1415926535 is to have information. To know a formula to derive Pi is also knowledge.
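To make the "everything we store is a number" point concrete, here is a tiny Python demonstration - any text (and, by extension, any file) round-trips through a single integer:

    # Encode a sentence as UTF-8 bytes, then read those bytes as one
    # big integer. The integer *is* the sentence, losslessly.
    text = "knowledge is a number"
    n = int.from_bytes(text.encode("utf-8"), "big")
    print(n)
    print(n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8"))

The same trick works on a video file; the number is just astronomically larger.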
Knowledge is cheap. It is easy to convey - we have passed knowledge through ages where physical possessions were much more scarce. History by word of mouth existed long before writing on physical possessions. It is easier to convey now more than ever - with the computer revolution, the distribution of knowledge anywhere on Earth becomes exceedingly cheap compared to even half a century ago. A satellite can send electromagnetic radiation (radio waves) in a targeted direction to convey information. Interpret the wavelength or period of that waveform as a bit pattern, consider it in base 2, and you have any number. Any number can recreate any of the pictorial, videographic, or audible material ever produced, if such material has been realized as such a number. Scanners, microphones, and cameras all capture physical information (though all visual media is just capturing electromagnetic radiation in the visible spectrum through the reflection of light off a pigmented surface), and their output can all be interpreted as such a number.
As a consequence, all we see, all we hear, all we sense has to be numbers, because we interpret them with brains that experience through electrical signals. Just like computers. The mediums through which we experience our environments are analogous to the mediums computers operate in, and thus our "world" is easily digitized.
Today, information is easy to convey. It is so inexpensive to reproduce a number digitally with a computer that it is effectively free. To convey the number, we have laid wiring to send numbers almost anywhere in the world. These wires are cheap, and the power necessary to send a signal over them is negligible. We can effectively freely convey information.
So we possess numbers. We have duplicates of some source number, be it Pi, or a Beatles song, the Iliad, a picture of your grandparents, or Star Trek II: The Wrath of Khan. These numbers are easy to replicate, and easy to distribute. Culture and experience are defined by our senses and how we process the world around us - through the same medium, we can send information for free. This collective knowledge, and the culture and information contained therein, is physically able to be shared without cost and without hardship.
We don't do that, though, because we have laws originally meant to prevent bootleggers from undercutting an author selling their book. Even the bootlegger was fine by my philosophy - if you possess something, you should be able to do whatever you want with the knowledge you can derive from it, including recreating it and distributing said copy.
The creation of knowledge is not something to be funded on a profit motive. The cost associated is in creating the knowledge - in forging it - not in distributing it or replicating the result. A thousand years ago, distributing a book was by far the dominant cost, and it was logical to charge for that - to recoup the costs of creating the content by making money on units. Today, distribution and replication are completely free, and creation is as expensive as ever, if not more so.
Charge for the expense. If you want to create knowledge, ask people to pay you to create it. Don't abuse archaic law that artificially restricts the propagation of knowledge and information indefinitely as a means to create income. If you make something people want, people will pay you to make it. If those who possess the result never choose to distribute it, that is fine - it is a conscious choice. If someone does decide to freely release the knowledge, you should have no right to demand it not be given by others.
I am firmly against the ideas behind copyright and patent. I don't believe that true inventors and visionaries care about possessing an indefinite monopoly on the distribution of their creations. They create out of passion, and if they produce things of value, would easily find those who value their work to pay them to create more. Rather than having wealthy investors who are creating knowledge to profit from, knowledge should be funded by those who crave more knowledge and want to see it created.
We are at an extreme end of the spectrum - knowledge is never free unless the creator goes out of their way to make it so. If they don't actively make it free, it will remain forever restricted, and punishable by law to speak the numbers that reproduce this knowledge to the mind. Hopefully, we come back from the extreme. Maybe even one day, we will see the error of our ways as a species and realize knowledge is not something to profit from, but to freely share - yet it is something we need to value and put our resources towards seeing made.
As an addendum, I want to argue against the counterpoint to direct funding of knowledge creation - that people won't spend their money to see new content made if they don't have to pay to experience it. The thing is, people will consume the resources they possess, even when they don't need to. Very few people are actually rational actors that conserve resources. If, suddenly, all the lost culture from the last century and for the foreseeable future became freely available, people would probably spend the excess funds on physical possessions at first. Then they would realize that without monetary support, the creators of the knowledge they appreciate evaporate, and they would literally put their money where their mouth is - something they like, like Harry Potter, they would actively put money into to see it happen. I absolutely think the contract negotiations in investing in the creation of this content require something beyond the take-money-and-run guarantees of something like Kickstarter, but that is a negotiation of funding. It isn't some legal wall against information.
Another issue is attribution - I do believe in this. If you use something created by someone else, I would much prefer it require acknowledgement of the source. I do think that sufficiently ambient culture becomes pervasive enough that you can't deceive anyone about the original author, but in the limited case of few actors, you want to keep someone who created a painting from having someone else take it and claim it as their own. So while I am firmly against laws restricting the distribution of knowledge, I do think attribution is still important, and should be maintained for the life of the creator, including in the case of derivative works - though I don't want to see a judicial system bogged down with everyone claiming every other work was derivative, so I would rather only the direct usage of someone else's ideas, provable by direct quotation or reference, require attribution. It goes back to the original purpose of copyright - preventing someone else from claiming your work as their own - not preventing the spread of knowledge.
2013/01/13
Software Rants 9: Sensible Filesystem Layout
I run Arch as my main OS now, and it is pleasing how /lib and /lib64 in / symlink to /usr/lib. So in this post I'm going to express my ideas behind filesystem layout and a way to make one that makes sense.
First, the aspects of a good VFS:
- The ability to "search" on target directories to find things - libraries, executables, pictures - almost any searchable thing should have some simple directory you can search on and find the thing you are after.
- A layout that facilitates not searching at all, because search is expensive. You want the ability to say, with certainty, that the foo library is in /something/dir/libs/foo. Even if foo isn't on this machine, there should be one path to foo that produces the library you want, in any environment (Windows completely fails on this, and the /usr/local and /usr/share nonsense on Linux does too - good thing almost nobody uses those anymore).
- Directory names that make sense. System32, /etc, SysWOW64, /usr, and the like completely screw this up. So does Android: it calls the internal memory sdcard, and any SD card you actually plug into the device is external - and that is just from the non-root view of the filesystem.
- The ability to mount anywhere, and the ability to mount anything, be it actual storage media, a web server, a socket layer, process views, etc.
- The filesystem should naturally be a tree, with special files providing links. Symlinks should just be "links", because if you can mount anything, you can "link" to a website under /net (like /http/google.com/search?q=Google), to the root directory, or to a process ID. Links create graphs, but because graphs aren't inherent in the filesystem and are provided by specialized files, you can navigate the filesystem as a tree without risking loops if you ignore links.
- Forced file extensions, or static typing by some metric. Don't leave ambiguity in what a file does: make extensionless files illegal, and require definitions of what files do what - e.g., if you have a binary data package, use the extension .bin rather than nothing, because .bin would be registered as application-specific binary data. If you have a UTF-8 encoded text file, use .txt - the file would have a forced metadata header containing the encoding. Once you have an extension, you can embed file-specific metadata that can be easily understood and utilized. Without extensions, you read the file from the beginning hoping to find something useful, or rely on metadata in the filesystem, which is often not transferable, especially over network connections. (There's a rough sketch of such a header after this list, then my proposed tree.)
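As a rough sketch of that last bullet, here's what reading a file with a forced type header could look like. The magic bytes and field layout are invented for illustration; the point is that the loader never has to guess:

    import struct

    MAGIC = b"AFS1"  # hypothetical filesystem-level type tag

    def read_typed(path):
        with open(path, "rb") as f:
            magic, ext_len = struct.unpack("<4sB", f.read(5))
            if magic != MAGIC:
                raise ValueError("untyped/extensionless files are illegal")
            ext = f.read(ext_len).decode("ascii")        # e.g. "txt" or "bin"
            if ext == "txt":
                enc_len = f.read(1)[0]
                enc = f.read(enc_len).decode("ascii")    # e.g. "utf-8"
                return f.read().decode(enc)              # text, decoded for you
            return f.read()                              # raw application data

Because the type and encoding travel inside the file, they survive any transport - network copies included - unlike filesystem-level metadata.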
- Root
  - Boot
  - Resources
    - http
      - google.com
        - search?q=bacon
    - ftp
    - smb
    - ssh
    - temp
    - AltimitSocket
      - IPC Here
      - TCP
        - 80
          - http
  - Users
    - Zanny
      - Programs
      - Videos
      - Pictures
      - Music
      - Documents.dir
      - Config.dir
      - libs
    - root
    - guest (or default)
  - Groups
    - default
    - printers
    - display
    - audio
    - mount
    - admin
    - network
    - devices
    - virtual
  - System
    - power
    - hypervisor
    - firmware
    - proc
    - dev
      - usb
      - mice
      - keyboards
      - touchscreens
      - displays
      - disks
        - by-label
          - Storage.disk
        - by-uuid
      - printers
      - microphones
      - speakers
    - memory
    - mainboard
    - processors
    - accelerators
Resources is the collection of transport protocols the machine supports; subaddressing these directories accesses their resources (by address or DNS resolution). This would include any locally mounted media not forming some other component of the filesystem.
The implication is that all resources are treated similarly. The remote ones are mounted by protocol, and local disks can (if you want) be mounted by fs type or by the driver used to control them, the same way ftp, http, etc. are all served by different daemons.
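As a sketch of what that uniformity buys you: a program could fetch remote and local data with the same open-and-read, and never care which daemon answers. The paths below are hypothetical instances of the layout above:

```python
# Under this VFS, fetching a web page and reading a file off an smb
# share are the same operation: open a path under /Resources, read it.
def fetch(path):
    with open(path, "rb") as f:
        return f.read()

# Each subtree is served by whichever daemon owns the protocol:
page = fetch("/Resources/http/google.com/search?q=bacon")   # http daemon
data = fetch("/Resources/smb/fileserver/share/report.bin")  # smb daemon
```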
The socket layer is also in resources, and can provide external socket based network access or local IPC with message passing. Different daemons will process writes or reads from their own directories provided here. Resources is thus very dynamic, because it represents accessing everything provided by device controllers and daemons.
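The daemon side could be as simple as a loop over messages. This is a toy sketch, assuming the kernel forwards operations on a daemon's subtree over a unix socket as one JSON object per line - the registration path and message format are invented:

```python
import json
import socket

# Invented registration point for an "echo" daemon's subtree.
SOCK = "/Resources/AltimitSocket/IPC/echo"

def serve():
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK)
    srv.listen(1)
    conn, _ = srv.accept()
    stream = conn.makefile("rw")
    for line in stream:
        op = json.loads(line)  # e.g. {"op": "write", "data": "..."}
        if op["op"] == "write":
            # Echo daemon: answer every write with the same payload.
            stream.write(json.dumps({"ok": True, "data": op["data"]}) + "\n")
            stream.flush()
```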
Users are the discretization of anything on a per-user basis. This includes programs, libraries, configurations, etc. Root isn't a special user - it is just a member of every group, and owns the top-level directories. Each user has a personal Configuration directory to hold application configuration.
Groups are an abstraction, like many things in this vfs - they can either be policy controls or file systems to be merged into the homes of the users that are members of them. For example, all users are members of default, and a newly created user would inherit all application defaults from the default group. Until a user overrides a default configuration, you could just have a symlink to the default one, avoiding some redundancy. Any user who inherits a configuration from default could also have systemwide configuration changes applied to match. You could even create a default user to force applications to run with default settings: anything run as default would always run in its default configuration, and if a user doesn't have execute privileges on something, they might have to run it as default. That sounds very nice in a corporate or locked-down setting. I parenthesize guest with default in the tree because, in general, you want some "base" user everyone else inherits from. If you have a public user, that might be the guest account, or it might be a dedicated default account. Applications installed to this user would thus be accessible to everyone, and users with execute privileges on them could then have their own configurations and state stored locally, in one place.
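A minimal sketch of that inherit-until-overridden configuration, with invented paths - the user's Config.dir starts as symlinks into the default group, and the link is broken the first time the user changes a setting:

```python
import os
import shutil

def config_path(user, app):
    return "/Users/{}/Config.dir/{}.conf".format(user, app)

def override(user, app):
    """Turn an inherited (symlinked) default config into a private
    copy the first time this user changes a setting."""
    path = config_path(user, app)
    if os.path.islink(path):
        default = os.readlink(path)  # points into /Groups/default/Config.dir
        os.remove(path)
        shutil.copy(default, path)   # the user now owns a real copy
    return path
```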
Likewise, libraries could be treated the same way. The default or guest user could have their ~/libs as the fallback of all other library searches, through any other user and any groups they are members of (groups acting as skeleton users). If you don't have a dedicated guest or default user, you could have the default group be its own filesystem containing a libs folder to search, as could any other group. The idea is that user and group policy gives you a cascade search pattern from the perspective of a user: first the user itself, then the groups it is a member of, in some profile-defined precedence. This has the nice capacity to run applications, like on Linux, in user sandboxes. If a user has no group policy outside itself, and has all the applications it needs local to itself along with any libraries, you could in effect jail and sandbox that session so it can't affect other users. You could even give it no access privileges to other top-level directories, to prevent it from having any outside interaction.
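The cascade itself is just an ordered path walk. A sketch, assuming group precedence comes from the user's profile:

```python
import os

def find_library(user, groups, name):
    """Search the user's own libs, then each group's libs in
    profile-defined precedence, then the default fallback."""
    candidates = ["/Users/{}/libs/{}".format(user, name)]
    candidates += ["/Groups/{}/libs/{}".format(g, name) for g in groups]
    candidates.append("/Users/default/libs/{}".format(name))
    for path in candidates:
        if os.path.exists(path):
            return path
    raise FileNotFoundError(name)

# A user with no groups and every needed library under its own ~/libs
# never leaves /Users/<user> - which is exactly the jailed session.
```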
This also has the nice effect of providing easy mandatory access control - you can have default MAC policy in the default group, per-user execution control through the groups each user is a member of, and elevated access control in the root account. I would definitely expect any next-gen OS to have MAC and security at the deepest levels, including the filesystem - that is why this VFS has per-user and per-application views of the environment.
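A sketch of how such a MAC decision could ride the same hierarchy - most specific rule wins, falling back to the default group's policy. The rule format is invented:

```python
# Invented rule tables: (application, top-level directory) -> verdict.
ROOT_RULES    = {("backupd", "/Users"): "allow"}
USER_RULES    = {"zanny": {("game", "/System"): "deny"}}
DEFAULT_RULES = {("*", "/System"): "deny", ("*", "/Users"): "allow"}

def mac_allows(user, app, top_dir):
    """Check root policy, then the user's own, then the default
    group's - mirroring the config and library cascades."""
    for rules in (ROOT_RULES, USER_RULES.get(user, {}), DEFAULT_RULES):
        verdict = rules.get((app, top_dir)) or rules.get(("*", top_dir))
        if verdict:
            return verdict == "allow"
    return False  # default deny
```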
Devices are the hardware components of the system, accessible in their raw hardware form to the daemons that control them. Daemons can individually publicize their control directories to other processes, so devices can either hide or show themselves. They can expose virtual writable or read-only files for block IO directly with the hardware. The idea is that "displays" and "accelerators" would provide the resources for a display manager to build a graphical environment, by showing a display accelerator (GPU) and any screens it can broadcast to - even networked ones (provided by a link), which might be found under /Network/miracast/... as well.
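A display manager, for instance, could bootstrap from nothing but directory listings - the paths come from the tree sketched above:

```python
import os

# Each GPU a daemon publicizes appears under /System/dev/accelerators,
# each screen it can broadcast to under /System/dev/displays.
def enumerate_graphics():
    gpus    = os.listdir("/System/dev/accelerators")
    screens = os.listdir("/System/dev/displays")
    return gpus, screens

# A networked screen shows up the same way, just backed by a different
# daemon publishing into the tree.
```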
System is another abstraction, provided by the kernel or any daemons listening on it. You can expect hardware devices to be shown here, including hardware disks to mount, or network controllers to inherit. Since Resources is an abstraction, hardware controllers for those devices use the System dev folder to access them and their memory. In practice, a normal user shouldn't need a view on System at all, since application communication should be done over sockets, so /proc exists only for the purpose of terminating other processes. An application can have a view of /proc that shows itself, its children, and anything that makes itself visible. You shouldn't need signals, since with the socket layer you can just write a signal message to an application. The difference is that rather than having dedicated longjumps to handle signals, an application needs some means of processing the messages it receives. I think that is better practice than having an arbitrary collection of signals that may or may not be implemented in program logic, with background kernel magic to jump a program into executing them.
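Concretely: instead of registering a SIGTERM handler, an application would drain a control channel in its main loop. A sketch, with an invented control-socket path and message names:

```python
import json
import socket

def main_loop(ctl_path="/Resources/AltimitSocket/IPC/self"):
    ctl = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    ctl.connect(ctl_path)
    for line in ctl.makefile("r"):
        msg = json.loads(line)
        if msg["type"] == "terminate":
            break  # clean shutdown at a point the program chooses,
                   # rather than a longjmp into a handler mid-operation
        elif msg["type"] == "reload":
            pass   # re-read configuration, etc.
```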
I think this is much better than what we have in any OS, even Plan 9. Even if you don't consider the generic user, from a sysadmin standpoint, discretizing the difference between users, groups, and resources is a useful abstraction. I'd almost consider moving System itself into Resources, since it is just the kernel providing utilities itself. You might want to allow applications to generate their own /Resources/ directories, maybe under a user subdirectory, to allow even more generic sharing and access to other processes' goods.
2013/01/01
2013
One: January 1st at 12:00 AM EST, DST and all, is dumb. My New Years is now the Winter Solstice at 0:00 GMT.
I have three major objectives this year:
- Get a job in my industry. After 6 months I haven't found anything, but I will press on. I'm really starting to lean towards freelancing because I really don't want to be locked into a 40 hour a week job. That is so much of my time that I would rather spend working on my own projects and learning.
- Get active in some project. Probably KDE, but I want it somewhere in the complete package Linux stack. I'm not going to try to reinvent the wheel by forking what other people have done and letting it atrophy out in obscurity, I'm going to engage with persistent projects to make them better. It is the only way to get the desktop metaphor mature.
- Tear down my mother's old house. She pays it off this year, and the main motivator for me not going 12 hours a day job hunting is that I don't want to move out just to have to come back in a few weeks or months to help move everything out and tear the thing down. But it has to be done, this is the year to do it, I'll see it through to fruition.
I graduated. I think college is overpriced garbage for credentials that become meaningless when everyone and their mother with money can just toss them out lackadaisically. It is an outdated education model in an era when we can engage with knowledge at an infinite level through instantaneous, ubiquitous, unlimited communication over networks. Here is my takeaway from college, on a per-semester basis, in my major:
- Freshman 1: Writing functions, very basic data types, the Monty Hall problem.
- Freshman 2: Object orientation, static types, basic containers, big O.
- Sophomore 1: Stack overflows, basic assembly, more containers, testing.
- Sophomore 2: Design patterns, C, Swing, threading in Java.
- Junior 1: Basic kernel concepts, how shells and pipes work.
- Junior 2: Data visualization algorithms, OpenMP, source control.
I really feel that a lecture environment stifles creativity and specialization to a fault. It limits students to the views of the teacher, and the professors' time is split across the whole student body. And I had small class sizes - my largest CS class was by far CS1, and since then the most students in one course was AI, with ~20 students; the average was 10. I can't fault my professors, they put in the effort. They were also teaching computer science, which is mostly the theory of computation, and it isn't directly applicable to the field.
But that is the problem. The theory is great, but we aren't living in times of leisure and excess - we are losing that at a tremendous rate. We don't have the money to blow on these 4-year degrees that don't teach anything essential to livelihood. I don't feel inherently enlightened by going to college; I was teaching myself astronomy and physics my last two years of high school. I learned about quarks, neutron stars, the fundamental forces, and such through Wikipedia. I learned C++ from cppreference, which I have been contributing to. Teaching yourself what you want to know is more possible and easier than ever before, and it needs to take hold in culture as the way to learn, because it is the only way to truly learn and enjoy it. At least for me, and anyone like me. Maybe someone likes the lectures, the tangential topics, the boring information digesting. I didn't, and I look forward to the future.
I don't think there will be any dramatic tech shifts in 2013, by the way. I hope I come to eat these words, but it seems like 2013 is the maturity of Android as a gaming platform with the release of Tegra 4, and the year Google's global conquest really comes to fruition as they take over the computing space with their mobile OS. Windows will flounder, Qt 5 will be awesome, and I hope to see (maybe by my hand) KDE running on Wayland. I don't think we will get consumer-grade robotics, 3d printing, or automated vehicles this year. We might see hints of something coming in 2014, but this is a year of transition to the next thing. I hope I can get involved in whatever that is, not for profit, but for importance. I want to do big things. I live in fear of doing insignificant things in my time, and it is the biggest factor holding me back.