2014/09/12

Project Ouroboros

So I just made a random name for this post to reflect the content, which is also mostly random.

I have a few posts about MMO concepts already, but I want to get to the meat of that series (mainly because I have no motivation to be thorough with it without having an end game in mind).

I think MMOs, in general, are a great pastime. The whole point is to give someone a world to engross themselves in, to trade time for enjoyment regardless of the allotment. But the problem with most major games is that they are game-y - they take the fundamental shift WoW made in the industry and assume that is the future, and ignore the functional base upon which MMOs were founded: that they should be an engagement with a world, not an engagement with a dungeon or boss or quest or PvP zone.

So what I am interested in is comprehensive engagement - suspending one's disbelief, even at a superficial level without engaging in the world's actual lore, and just living in a virtual reality for a while. Modern titles fail entirely to achieve this - from SWTOR to Rift to GW2 to Neverwinter, the gamification of the world means you can never just live in it; you always have an immediate goal in a predetermined series of objectives to accomplish, as if the world were broken down into stages in Mario. And in effect it always is - zones are level appropriate, quest chains are linear, and the developers want to make sure you see everything neat they made without getting lost, so the objectives are straightforward and marked on your map.

That defeats the point, though. The point is that you should get lost. You should be confused, because reality is confusing. The game world should not be a straightforward playground of colorful events that attract players to ride the rides and then leave. You should find caves at random with show-not-tell stories built in, with no outward indication of adventure - you find it for yourself. And they should not always have great loot - maybe you just find an empty cave with a strange carving on the wall that will drive forum communities mad for years.

If your design is purely around function and form, you miss the magic. It is the difference between Morrowind and Skyrim - between Baldur's Gate and Neverwinter Nights. In the former titles, you are lost; there is no real direction on where to go, you just figure it out as you play along. But even in the absence of a tangible goal the game remains fun because you are engrossed in the world. You don't need to be saving the world for an experience to be enjoyable; it just has to be fun in and of itself.

I used those examples on purpose, because the latter games are still considered fantastic. You can have a game where everything is laid out for you, linear or otherwise directed, and have a fantastic time. But I think most of my target demographic (who I know exist, because I see this audience all the time in myself and my friends) find the magic of the former to be on a level of its own that few games can touch. It is why you constantly go back and replay, and still often get that feeling unless you have played these games to death, because the world is not arranged to play through in a directed path - it is willy nilly and fantastic for it.

I want to see more MMOs like that. So Project Ouroboros (working title) is my take on how you would achieve this while having a game your friends could play.

Firstly, the model of this idea is novel - development starts with whatever base engine... Figuring out the best course of action here would take weeks of planning in and of itself, but I will present a few options. First, a novel engine built from the ground up for this game, fully open source and community driven, built on a modern tech stack. Maybe you use QML for the UI and Ogre for the rendering engine; that sounds like fun. Time consuming, though, and achieving it would require external funding sources I am not fond of. Option two is to use an open engine - the most common I can think of is Godot, though it is not suited for a 3D MMO at all. You could use the staple Unity or some such, but the proprietary nature of it (and most other commercial engines) defeats what I'm getting into further on.

Regardless, once you have an engine of some sort, with a proper development toolchain (released to the public so users can create their own environments, items, etc.), you build the first zone. Probably the human race one, since it is a classic "default". And that is it, you ship. I envision a title where the "level" cap is around 20, so this zone could constitute levels 1 to 4 or 5. But anyone playing the game at this point - the 0.1 point - would hit max level and not have much else to do. So that is where the revenue model comes in.

The project would require a custom website where users can donate, a la Kickstarter or Patreon, to whatever idea they want to see implemented most. Whatever has the most money behind it is what gets worked on, so the game grows organically, dictated by what players are willing to pay for. If the first thing they want is large group raid content, they get that at level 5. If they want more questing zones, we can build those for years. If they want new races, those get made. Anyone can submit proposals, though user proposals require a minimum pledge contribution to see them happen. Additionally, anything the official development team does not take on as a project the users themselves can build and seek the pledges for. The difference is that the parent company of this project, whatever I would call it, would be able to claim a proposal to let the community know not to work on it. Community works cannot lock proposals in such an absolute way, because you cannot guarantee they will finish them; instead they can mark projects they intend to create as such, but you have to take those claims with a grain of salt.
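As a rough sketch of how that proposal pipeline might be modeled - all names here are hypothetical, nothing from an actual implementation - the core is just proposals carrying pledge totals and a status flag, where an official claim locks a proposal and a community "intent" marker does not:

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical data model for the pledge-driven proposal system.
    // An official claim locks a proposal; community intent is only advisory.
    enum class ProposalStatus { Open, ClaimedByOfficialTeam, CommunityIntent };

    struct Proposal {
        std::string title;
        double pledgedFunds = 0.0;   // total money pledged toward this idea
        ProposalStatus status = ProposalStatus::Open;
    };

    // The official team simply works on whatever proposal not left to the
    // community has the most money behind it.
    const Proposal* nextOfficialProject(const std::vector<Proposal>& proposals) {
        const Proposal* best = nullptr;
        for (const auto& p : proposals) {
            if (p.status == ProposalStatus::CommunityIntent) continue;  // community's to build
            if (!best || p.pledgedFunds > best->pledgedFunds) best = &p;
        }
        return best;
    }

    int main() {
        std::vector<Proposal> proposals = {
            {"Level 5 raid content", 12000.0, ProposalStatus::Open},
            {"New questing zone",     8000.0, ProposalStatus::Open},
            {"Player housing",        3000.0, ProposalStatus::CommunityIntent},
        };
        if (const Proposal* next = nextOfficialProject(proposals))
            std::cout << "Next official project: " << next->title << "\n";
    }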

The design of this proposal system is key to the entire game's success. It needs a great integrated discussion system, project management tools for both user and developer creations, the ability to get feedback and iterate rapidly, and the ability to beta test everything with one-click integration into the game client.

The revenue model is then that the main development team, servers, QA, etc. are all funded by whatever the users pay us to add to the game. Thus there are no microtransactions and no subscriptions, and if the game is a success the revenue should be huge - but only as a reflection of the community's desire to see new content added. It is a self-perpetuating cycle. Hence the project name.

Mechanically, a problem with modern MMOs (and I take this from ZybekTV on YouTube) is that a game like Wildstar is very demanding to play. Because it blends action gameplay and 40-man raiding, it demands a high degree of constant vigilance in your playstyle while raiding for hours on end. It is a very taxing concept that might be the downfall of that title - the demands of the gameplay, while fun, are so exhausting that the playerbase cannot commit the effort necessary to complete the hardest challenges. Which is a shame, but I would have forecast that outcome in advance. It is why, I feel, WoW remains so popular - and why its popularity has only diminished as the game has gotten more complicated.

I am not saying you should have a dumb-as-bricks gameplay experience - that will alienate whatever elite players you might have, and those players are essential to maintaining a vibrant community. But you need pacing - you need to let players play at those hyper-high skill-capped levels when appropriate, while falling back to simpler mechanics when the best is not needed, or not wanted. At the least, every player class (note: see further down for the distinction between a class and a spec) would have specs that are heavily skill dependent, leading to the highest possible performance, and some that are simplified or less skillful that do their job but do not have as much performance variance. This can even happen inside a spec - maybe you have two finishing moves in a combo system, one that requires a skillshot and one that applies its effect to a target. The skillshot does more damage and might have an added effect, but you trade that for the reality that you can miss, and that you have to take the time to aim. It is a tradeoff, and the mechanics are successful if that tradeoff is worth it to the high-skill, high-engagement player while the less skilled or less engaged player can get by with option B without feeling like they are intentionally playing badly.
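To make that tradeoff concrete, here is a minimal sketch with made-up numbers: the skillshot finisher only pays off once your hit rate is high enough that its expected damage beats the guaranteed finisher.

    #include <iostream>

    // Made-up values: the skillshot hits harder but can miss; the targeted
    // finisher always lands. The skillshot is only "worth it" when its
    // expected damage exceeds the guaranteed option.
    int main() {
        const double skillshotDamage = 1500.0;
        const double targetedDamage  = 1000.0;

        for (int hitPercent = 50; hitPercent <= 100; hitPercent += 10) {
            double expected = (hitPercent / 100.0) * skillshotDamage;
            std::cout << hitPercent << "% hit rate: skillshot expected damage "
                      << expected
                      << (expected > targetedDamage ? "  -> worth aiming\n"
                                                    : "  -> just use the targeted finisher\n");
        }
    }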

The class system, I also feel, is the optimization of modern class mechanics seen in MMOs today. You start the game classless - for engagement's sake, you are (normally) a potato-sack peasant who, depending on starting experience, picks their starting class and spec according to a need or want. You could legitimately stay level 0, running around flinging your fists at fence posts as an untrained brute, as long as you want. But eventually you find your way into an armor proficiency - a class - and you get the default spec of that class, which would be more reflective of its traditional role.

The four armor types would be cloth, leather, mail, and plate. The four starting specs would be wizard, mercenary, cleric, and warrior respectively. Each would have simplified mechanics, and they would all be jack-of-all-trades specs - somewhat durable, somewhat damaging, some AoE, some CC, some sustain. Not high skill specs, but with a few opportunities for skillful play. As you level up you unlock additional specs (funded by the desire for them) through quests or just leveling up, running the gamut of functionality that armor type might provide.

Which is an important distinction - you are really only limited by your armor class, more so than by a specific role or playstyle. I can easily imagine all the classes getting access to specs that can perform all the functions of the traditional trinity, meaning you never need to be without a tank - not because the mechanical benefits of specialization are lost, or the interplay between roles is absent, but because any class has access to a tank spec. This is a compromise - you don't want to give all player characters access to all specs and functions, because then you have no progression attachment to your character; you would be going from a plate-wearing hulk to a fireball-flinging wizard in a robe, which makes no sense. But having the means by which a wizard could transition into a warlock, or a cleric could become a druid, makes more sense. As such, there are four kinds of armor drops - one for each armor type - but that armor is mostly cross-spec applicable. Different stats would be better for different specs, so your elementalist wizard gear might not be optimal for a hellfire warlock, but you would use the same primary stats and benefit somewhat from all your secondaries.

The mechanics to change spec would be a little more involved though. You would want commitment to a playstyle, and unlike most MMOs the customization within a spec would be limited. There should be a sense of commitment to your mercenary being an assassin rather than an archer, and changing between them would take some resources and time, especially to unlock it the first time.

Again, this is a compromise - you do not want players spec-swapping every dungeon out of optimality concerns, but you do want players to be able to switch to a tank role when the guild's raid is missing one. I imagine you could switch specs in one of two ways - a class quest, or a lump of gold. The latter is for when it is really necessary for group play; the former is for when you just want to change things around.

Each class would, over time, accumulate a lot of specs to choose from - some would be given, some would require quest chains, and some might even be secret. They are effectively kits: you have base class spells common to all specs that define the class (i.e., all cloth wearers have some fundamental arcane magic talent, all mercenaries are skilled with locks, traps, and diplomacy, all clerics have a bond to some deity, all warriors are military trained), and you add a spec on top of that, which can range from just a shift in spells to a radical change in playstyle.
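A tiny sketch of that kit idea, with purely hypothetical spell and spec names: every character carries the base class kit, and a spec just layers its own spells on top.

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical sketch: a class defines a shared base kit, a spec adds to it.
    struct Spec {
        std::string name;
        std::vector<std::string> spells;  // what the spec layers on top
    };

    struct PlayerClass {
        std::string name;
        std::vector<std::string> baseSpells;  // common to every spec of this class
    };

    // A character's loadout is just the base kit plus the chosen spec.
    std::vector<std::string> loadout(const PlayerClass& cls, const Spec& spec) {
        std::vector<std::string> all = cls.baseSpells;
        all.insert(all.end(), spec.spells.begin(), spec.spells.end());
        return all;
    }

    int main() {
        PlayerClass cloth{"Cloth (wizard)", {"Arcane Bolt", "Blink"}};
        Spec warlock{"Hellfire Warlock", {"Hellfire", "Soul Drain"}};
        for (const auto& spell : loadout(cloth, warlock))
            std::cout << spell << "\n";
    }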

This diversity is the true specialization of this game - rather than micromanaged talent points or skill trees that you read a wiki page on and then just take the optimal route through, you get the skills and spells of a role, and your choice of spec dictates your distinction amongst your peers. That, and your blinging armor.

Appearances are important. TF2 is one of the only games to get this right. I would want to see every race added have a distinct silhouette for each armor class - the warriors would be buff, the clerics would be tall, the mercenaries could be short, the wizards lanky. Depending on race, of course. 

This provides a multitude of benefits that fix a lot of problems I perceive in class-based games today. Developers will add dozens of classes, which means each character is invested in but severely limited to its few specs. Instead, I want that turned on its head: few classes, but each with many specs, so that you can pick a fundamental playstyle you like - the swashbuckling, stealthy, charismatic rogue, the master of the arcane, the divine, or the weapon-slinging, plate-clad titan - and explore the diverse interpretations of each of these archetypes through all manner of specializations. And different specs would have different degrees of skill - for example, maybe the mage has a Timelord spec that requires juggling a bunch of disparate resources combined with everything being skillshots, maybe to such a degree that you are juggling a ball of sand around that needs to contact enemies from certain directions to combo into certain effects. It might be the highest theoretical single-target spec in the class, but it would require huge dedication and skill to play to its potential, and maybe it has a compounding buff effect where each distinct ball-tossing combo multiplies the effects of the other combos, meaning that if you cannot maintain your juggle your performance is orders of magnitude worse.

That kind of high-risk gameplay would be great for the veteran or pro who wants to be a true badass - and playing with one would be a marvel to behold. But there you see the difference between a lord of time who manipulates the field to annihilate everything and some goon with a ball of mud that does about as much damage as a wet mud ball to the face would. Likewise, you could be the elementalist who throws out fireballs with targeted lock-on. Maybe the highest performing one causes his spells to explode at the right point to hit the most targets and keep the most victims burning, but even the noob would still be lobbing fire grenades into the boss's face that burn everyone around him. The performance difference would be much less pronounced, but for this to succeed there must be a balance - it needs to be worth it to play the time wizard perfectly if you can, but it also needs to be worth it to bring an elementalist mage, because there are very few people who can play a perfect time wizard, especially all the time. And even then, the time mage is not the optimal solution all the time - maybe it is only best on single-mob or many-mob fights, but if you have two high value targets the buff effects only apply to one while the fire mage can stick burning effects on a few targets really well. That would mean a fire mage at its best outperforms the best time wizard under certain circumstances. It is a balancing act, but a rapidly rolling game like this could easily tweak numbers to the fraction to ensure the balance is maintained.

Back to how to pull this idea off. So you have your website, your engine, and you have a starting zone - it should encompass my above points, in that while you can go down a nice directed main story through the zone reaching a lot of areas, there should be chunks of the map left unexplored when you reach the end of the main quest, and you should have a motivation to wonder "what the hell is over there?".

There might be a few empty caves of nothing, and maybe a tower full of level 10 skeletons that rip you apart unless you can amass a forty man group of max level (5) players to fight them. Maybe they do not drop anything - maybe they have a really low chance of dropping fantastic items or reagents.

Maybe there is a hut with a book on a shelf that talks about the owner's hidden cellar in the swamp. You go to the exact location the book details and find a tiny interactable rock on the floor you would never have seen in passing, which opens an underground cavern containing monsters and unique items. Maybe a secret bookcase in a noble's house just leads to a shoe closet, after you thought you were so clever.

Likewise, you want a living world. Monsters would not just wander around from spawn locations. If an area has reason to contain monsters, it would contain them, and if it did not... it would not. The open fields should be just that - open - maybe some harvestable herbs, maybe a roaming band of bandits on occasion, but nothing spawning out of thin air to walk a square patrol. Think Skyrim, and how much of the world is empty, but where it makes sense to have actors they are there. If there is a village of gnolls, the gnolls respawn from their huts after a set time. And if you attack one, the entire village comes after you, so you need to actually try to pick them off slowly without drawing the attention of the whole horde - screw mechanics where aggro radii are so small you could throw a stone between the mob you just lopped the head off of and his friend who does not care what just happened in front of his face.
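Purely as an illustration of that village-wide aggro (no real engine API implied): attacking one gnoll alerts every packmate of the same village within a social radius, instead of each mob having a token personal leash.

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Illustrative only: when one villager is attacked, every member of the
    // same village within its social radius joins the fight.
    struct Mob {
        float x, y;
        int villageId;
        bool hostileToAttacker = false;
    };

    void onAttacked(std::vector<Mob>& mobs, std::size_t victim, float socialRadius) {
        const Mob attacked = mobs[victim];
        for (auto& m : mobs) {
            if (m.villageId != attacked.villageId) continue;
            float dx = m.x - attacked.x, dy = m.y - attacked.y;
            if (std::sqrt(dx * dx + dy * dy) <= socialRadius)
                m.hostileToAttacker = true;  // the whole camp comes after you
        }
    }

    int main() {
        std::vector<Mob> gnolls = {{0, 0, 1}, {5, 3, 1}, {40, 40, 1}, {2, 2, 2}};
        onAttacked(gnolls, 0, 30.0f);  // hit the first gnoll
        int alerted = 0;
        for (const auto& g : gnolls) alerted += g.hostileToAttacker;
        std::cout << alerted << " gnolls come after you\n";  // 2 of them
    }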

Player engagement is also important, and you do not want other players in your world to be your antagonists. Quest objectives would naturally be completable alongside out-of-group players attacking the same targets or interacting with the same objects. You want there to be an incentive to help others, so maybe attacking a mob another player already tagged nets you more gold drops than usual, and loot is per-player.

But that is all fluff and detail. The point is that you want a world not designed to be gamed, but designed to be lived. Where NPCs walk around town and most of them want nothing to do with you. Where the blacksmith might get mad and refuse to sell anything to you if you keep trying to pawn off rat ears to him as junk instead of going to a shady guy in a back alley who might just try to murder you if you are too low level.

It would be complicated, but I think quality over quantity can really win here. Take the time to make something truly fantastic and ask for money to create more, rather than unnaturally gating what you already have and producing as much as possible to sell more units. If it is good enough, and I think it could be, the runaway effect would be players throwing money at amazing ideas to such a degree that you could stop worrying about the money at some point. The community contributions would give you the ultimate hiring pool to draw new developers from, and you could naturally grow the business as popularity and funding grow in tandem.

And I would never want to obsolete content - if we made a raid at level four, and then increased the level cap by adding higher level zones, I'd want to see there be a good enough reason to form raid groups at low levels to run that old content, or to see that content scale. It is the greatest shame in most MMOs today that the real game is just whatever is the latest raid.

Oh, and one last mention - gear. Gear should be hard. You should start in rags, and you should feel amazing when you get a cloth hood that looks like a paper bag. If you see someone in blinged-out armor, you know they did not just get a random drop - they gathered reagents, paid huge sums of money, spent huge amounts of time in professions to craft it, and put in effort beyond just time, luck, or money to see it come to fruition. And the dungeons should be hard, and not accessible by all. Because the greatest feeling in an MMO is not being that decked-out mage with transdimensional rifts in their shoulders, it is seeing them walk down the street and realizing how much effort they put in to get there. And to think that you want to put in that effort too, and accomplish greatness like they did. The whole plate. To become great requires great commitments and deeds, something all these other titles forget because they want to theme-park their way to huge subscriber bases.

2014/09/03

Hardware Rants 2: Electric Notebookaru

I'm just dropping a forum post from HN here, because I repeat it so often I'd rather just link to a blog post at this point.

The subject always revolves around "What is the best Linux notebook?".


The right notebook for Linux is a Linux notebook. Vendors of such machines are still boutique, but unlike in the past they actually exist - ZaReason, ThinkPenguin, and System76, to name a few. I personally have an off-brand Clevo 740 SU (the same model as the System76 Galago) because I had confidence in the notebook's ability, and I got it OS-less since I'd have to install Arch anyway.

But recommending all these Samsung and Lenovo Windows machines is a disservice to everyone. It is a disservice to the Linux ecosystem because you cannot guarantee hardware support, and using a non-Windows OS will void your warranty most of the time (at least if they catch you with it). It means you get a bad impression of the experience due to whatever bugs or glitches you encounter, and rather than acknowledge that they crossed into the land of dragons and took the risk, many end up blaming an ecosystem that realistically cannot provide hardware support for parts the vendors do not support in the kernel themselves - especially when the vendors are actively hostile to attempts to implement support (video drivers are the worst, with Android blobs in an awful state right now and Nouveau having fought a tangible uphill battle to get where it is today, but plenty of wifi cards, PCI cards, etc. have no vendor support either).

It also means you are paying your MS tax, getting a Windows license you do not intend to use. I'm not even going to argue the price aspect, because we know it really does not matter - if Linux were ever a threat (and really, it already is, with Microsoft talking about Windows 9 possibly being free or ultra low cost under the looming threat of Android), they would just give away licenses, and that combined with bloatware contracts would provide the vendors more revenue than just shipping Ubuntu (or whatever distro).

What I care about is the message. Every Linux Thinkpad fanboy is one bullet point in the Lenovo board meeting affirming the need to never ship a non-Windows notebook (except in countries like Germany that actually force them to). It sends the message: "go ahead, bill me for a Windows license I'll never use, and make me fight the hardware to make it work, but I will still buy your stuff, because having a pleasant, straightforward, and painless Linux experience is not one of my priorities."
It also obscures how large the market segment is, because to these vendors every machine sold is a mark that Windows is still king. If they do not see retailers selling real Linux machines (including the Dell one), they have absolutely no reason to ever fathom selling Linux-native machines themselves.

And that hurts you, because it means less adoption, fewer options, and less pressure towards more widespread use of the platform. For whatever your reason, you want to use Linux right now, and buying these Windows machines denies others the chance to even know it exists, and supports the continued monopoly Microsoft has on the personal computing industry (Apple is not even on that radar).

So please, when you are looking for a new notebook with the intent to run a Linux distro on it, give some consideration to the vendors actually selling Linux machines, with support, as first-class citizens. If you cannot find one that meets your needs, so be it, but don't go out of your way to buy a Windows notebook and hope it can run a Linux distro flawlessly.

Original posting was here.

2014/08/17

Q3 2014 Ideas Summary

Approaching what is likely to be a necessary hiring season, I'm going to collect my ideas for startups in a nice, concise format for the sake of pitching. I believe I have written articles on these topics in the past, so those serve as more comprehensive references. I'm ranking these by a combination of feasibility, practicality, and profitability - consider the ranking a rough maximization across all three.

1. Distributed Patronage Funded Media

Ranked first because it is a straightforward profit center, and I can easily plan out the technical details. The only complexity is getting initial users - I believe in the idea enough that given a starter userbase it would explode. Getting the starters would be problematic.

The first stage is a QML-based, device-pervasive application that enables the distribution and consumption of creative content. For utility's sake, we would want to integrate as many third-party services as possible (YouTube embeds, Flickr / Imgur integration, etc.), but the core mechanism is that all users would passively use torrents in the background to seed and leech this content. This alleviates the need for huge amounts of money up front for petabytes of storage and network bandwidth to do a media startup. It also means you never run into the modern YouTube problem - where new uploads pile up by the petabyte and your datacenters grow out of control to contain it all.

The second stage is a web view to consume and upload content into this swarm. This requires enough money to subsidize it, though, since there is no good way to implement torrent-based technology in a browser, so you need to severely limit the bandwidth of web users to keep costs under control. But it is functionally mandatory to have a website version of this project just to compete with the entrenched parties involved.

The primary revenue source is integrated patronage. The system takes 5% (subject to change) of all donations to content creators, and the layout is such that users are presented constantly with ways to fund the creators they like. If an advertising company approached us down the line on favorable terms, I see no reason not to let users choose to put ads in their videos and on their pages, but the default will be none.

To avoid the copyright fiasco, the terms of use are simply that any uploaded content must be relicensed under a Creative Commons license and the uploader must have the relicensing rights - i.e., they are the recognized IP owners. All works would be licensed under the CC Attribution-ShareAlike license, and anyone who cannot reassign copyright to that gets their content taken down. This sidesteps the copyright morass and guarantees all content from this project is copyleft, so anyone can reuse it on their own.

The long-term feasibility of this project hinges on adoption. I feel that a properly guided patronage system is revolutionary, to such a degree that if I could get this project to take off it could approach YouTube and Facebook scale in a matter of years. If it grows fast enough, more centralization to spur adoption is always an option - no matter what, I want the distributed backend to stay an option because it decentralizes backing up the content swarm. I also expect that, in the same way popular torrents see thousands of seeders, a legitimate option that enables seeding of liberated content would be huge.

To get that adoption, I imagine outreach to content creators is necessary to get them to try it, along with some short-term exclusivity deals if possible. It is essential to get really good media works to start interest in the platform, or else it will wallow in obscurity on the edges of the Internet.

2. Continuously crowdfunded persistent world

This project is a combination of two concepts from recent years - patronage and game development - in a way I feel is much more mutually beneficial than the current progression towards super-alpha releases. Today, while many media categories are diversifying into Patreon, Subbable, et al., gaming remains rooted in Kickstarter campaigns that go radically over budget.

Well, the reason is obvious - they cannot properly budget how much a game will cost to develop when they are forecasting years of development time. While I do not see an easy way to combine patronage with fixed, point-release titles, a massively multiplayer game is a much different proposition.

I have drafts of a novel fantasy world that draws on Lovecraftian themes instead of Tolkienesque concepts. Quake was one of my favorite video games ever for its theming, and I think that atmosphere is dearly lacking in more modern titles. The most important distinction is the business model - this game would be entirely funded by patronage, with new content determined by what players want to pay for most. New endgame content? If it gets funded first. More questing zones? Fund it. So on and so forth. It would also, as per my other projects, be completely free and open source, under the GPL and CC Attribution-ShareAlike licenses for code and assets respectively.

This project is much grander than the former, but is poised to make significantly better margins. It basically has three components. The website, providing all the traditional gamer features like character profiles, item and monster databases, guides, forums, etc., plus the donation pages to fund content development, and infrastructure for users to upload, discuss, vote on, and patch their own in-engine creations, along with the means to easily download and try usermade content. The tools to build game assets, which, depending on the usability of existing tools, need to be usable by end users to produce content. And the game itself, which I would love to try building with Qt3D and QML to get a highly modular interface and a portable engine that can easily go ultra-cross-platform, with user-configurable QML UI elements.

Again, this is an industry shattering idea - if it worked, it would change gaming, funding, etc forever. It would not need many users to be successful because there is no colossal investment upfront to build the game - there is still a larger investment than the previous project, because you do need the website, world design, and game in a usable initial state to give players a taste - but that is the glory.

Modern games, primarily MMOs, have huge upfront development costs to provide thousands of hours of content to players right out of the gate - mainly because Ultima Online and Everquest did it. This game would start minuscule: a single starting zone, one dungeon, for one race, from levels - say - 1 to 5. That means the art design required is minimal; it is just the development time to prepare the game. Since we are not subscription or cash shop based, player retention is not as important in the long run - we just need players to find something they like, want more of, and be willing to put money where their mouth is. As long as the content is of sufficient quality, players will pay to get more. This enables constant evolution of the project as more players join, where higher revenues fund more developers on the project, and you get a positive feedback loop: faster and more diverse content attracts more players to a free software title with user contributions, which then captivates them to contribute back to see more of what they like.

3. Magma Language (and OS redesigns in general)

Low level languages suck. I start here, but this idea is broader than just a language. But I digress. To start: the world is built on C, and it sucks. It will take a startup to upset that, because large companies are already too invested in the status quo. At some point you realize the technical debt of the modern personal computer is becoming preposterous - for instance, when you notice that the locale library in C++ has more members than the entire containers library.

So Magma is not just a language but an idea - we have 50 years of bloat and cruft on our personal computers, and we are right near the end of Moore's Law. A project started today to correct for that could see performance and productivity gains of orders of magnitude for years, continuing to improve technology even when the silicon does not. Not only that, but as more people code each day, and code contributes more and more to global productivity, optimizing coding is more important than ever.

Rust is a language that tries to fix C++. But it is still like C++, which is kind of its weakness. The real weakness, though, is that all computers now run knee-deep in abstractions - faking cache, faking RAM speeds, faking disk IO, faking network IO, faking core counts, faking execution order, and faking parallelism. Rather than that, I'd want to design an assembly language around heterogeneous computing, treat all volatile and non-volatile storage as cache to the network, and return to simpler electrical interconnect designs based around one bitwise communication protocol with state negotiation - you have either one or many parallel pipes, giving you a serial or parallel port. It is always bidirectional, and the clock rates determine the line bandwidth, power draw, and latency. This lets you have everything from 5 Hz blinker LEDs to 16 GHz, 256-pipe parallel buses to graphics cards in one architecture. Think networking - you have a fundamental Ethernet layer that uses MAC addresses and is IP agnostic, and you use packet encapsulation to carry higher level protocols inside. The same concept applies here: you can forsake timings and bandwidth to get consistency and simplicity like packets, or you can do high-bandwidth bidirectional bitstream communication on one line.
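Since this whole interconnect is speculative, here is just a back-of-the-envelope sketch of what the negotiated state might boil down to: a device asks for some number of pipes and a clock, and raw line bandwidth falls out of that.

    #include <cstdint>
    #include <iostream>

    // Speculative sketch of the imagined interconnect: one bitwise protocol
    // where a device negotiates pipe count and clock, and the rest follows.
    struct LinkConfig {
        std::uint32_t pipes;    // 1 = serial line, N = parallel bus
        std::uint64_t clockHz;  // anything from a 5 Hz blinker to a 16 GHz bus
    };

    // Raw line bandwidth in bits per second: one bit per pipe per clock tick.
    std::uint64_t bandwidthBps(const LinkConfig& c) {
        return static_cast<std::uint64_t>(c.pipes) * c.clockHz;
    }

    int main() {
        LinkConfig blinkerLed{1, 5};                         // 5 Hz, single pipe
        LinkConfig gpuBus{256, 16ull * 1000 * 1000 * 1000};  // 256 pipes at 16 GHz
        std::cout << "LED link: " << bandwidthBps(blinkerLed) << " bps\n";
        std::cout << "GPU link: " << bandwidthBps(gpuBus) / 8.0e9 << " GB/s\n";
    }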

You take a modernized hardware architecture, add a modernized heterogeneous ISA, build a microkernel on top of it, and forsake the present device hegemony aside from whatever adapters you could integrate (which would not really be hard - active USB converters would be simple, and this system would need Ethernet-layer compatibility to communicate with other systems all the same).

This one is last because it's ridiculous. Nobody would fund it, yet it is probably the most lucrative project here. On the backs of reinventing OpenGL a dozen times, and reinventing C a hundred, maybe it's time to rethink the foundations to better enable the developers of the future. I have barely scratched the surface of the insanity mandated by wholesale rebuilding computing from the ground up for modern paradigms and capabilities, but I still think it is important that someone do it. Plan 9 did it at the turn of the 90s, and while it never became popular, that may be because they didn't go far enough. They were close, though - the benefits almost outweighed the cost of transitioning away. The fact that they called it a research project the whole time and never gave it a chance doomed it from the start, but fixing computing is essential for the future. I'm talking about the hardware you want the first sentient AI running on.

All three of these ideas are things I am deeply passionate about, to the point where I would work for free on any of them because I believe in them that much (just give me an ownership stake). So as I progress through this season, I'm actively going to be looking for like-minded individuals to pursue some of this insanity with me. Maybe we could change the world.

Update: just going to append additional ideas.

4. The cryptocurrency

Bitcoin is great and all, but really it is a replacement for gold, the store of value, not for dollars, the medium of exchange. Because Bitcoin and Litecoin have finite caps on the total number of coins generated, and generation tapers off over time, inflation of the monetary base only ever slows. Then there are coins like Dogecoin and Peercoin that perpetually generate more coins at a constant rate - in Dogecoin's case roughly 5 billion coins a year, in Peercoin's case about 1% of the supply every year. This means Dogecoin's inflation rate falls over time as the base grows larger, while Peercoin has constant inflation.

But the truth is that the size of the monetary base is largely irrelevant as long as money is well distributed. Inflation and deflation on their own do not matter as much either. What matters is the objective of money, and the metric by which a successful currency works: monetary velocity - how fast money moves through the economy.

No cryptocurrency today addresses this - constant inflation or finite coins mean the algorithm is blind to how the currency is being used. Instead, what you want is an algorithm that adjusts the coin generation rate according to how fast money is moving: as velocity slows, you generate more coins to promote inflation and spur money transfer; when it speeds up, you generate fewer coins to avoid oversaturating the market while preserving the coins' value.
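A toy sketch of that adjustment - not a real consensus algorithm, and every number here is invented: measure how much of the supply turned over in the last window, and scale the next block reward against a target velocity.

    #include <algorithm>
    #include <iostream>

    // Toy model only: the block reward scales with the gap between a target
    // velocity and the velocity observed over the last measurement window,
    // where velocity = value transacted in the window / total coin supply.
    double nextBlockReward(double baseReward, double observedVelocity,
                           double targetVelocity) {
        // Below-target velocity -> mint more to spur spending; above-target -> mint less.
        double adjustment = targetVelocity / std::max(observedVelocity, 1e-9);
        // Clamp so the reward never swings more than 2x per window.
        adjustment = std::clamp(adjustment, 0.5, 2.0);
        return baseReward * adjustment;
    }

    int main() {
        const double base = 50.0, target = 0.10;  // target: 10% of supply moving per window
        std::cout << "stagnant:   " << nextBlockReward(base, 0.02, target) << " coins/block\n";  // 100
        std::cout << "on target:  " << nextBlockReward(base, 0.10, target) << " coins/block\n";  // 50
        std::cout << "overheated: " << nextBlockReward(base, 0.30, target) << " coins/block\n";  // 25
    }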

On top of that, you maintain the staples of crypto - proof of stake and proof of work, fast transaction times, smaller blocks, etc. I'd also want to include a DNS system in the protocol - the ability to exchange with someone by name instead of by address. Just having a distributed DNS service could work, but integrating it into the blockchain would be novel as well.

2014/06/22

Tom's SteamOS Response

So I wrote a huge response on Tom's to a bunch of SteamOS questions and discussions. I'm posting it here because it was a lot of work - that way I can just link back here during discussions...

----


Just clearing up some misconceptions in this thread.

[quote]Does the steam client runs steamos games?[/quote]

SteamOS is just a modified Debian where the default session puts Steam in fullscreen Big Picture. It is very close to Debian unstable running Gnome; it just backports a bunch of kernel patches and DRM drivers to support newer hardware. It IS pure Linux - you can even use the Gnome desktop on the shipped images. It is just missing a huge chunk of the usual desktop features: root is locked, the provided repositories are missing a lot of stuff from upstream Debian, and there are very few default Gnome apps.

[quote]Silverstone sells a similar style case for those who want to DIY a steambox that still is similar in size. I am almost sure SilverStone makes the case as well because I saw some silverstone parts in the steam boxes they sent out. [/quote]


I wish that case had come out a few months earlier; I would have built my current rig in it. It is just insanely good space utilization. I have not yet seen anyone try putting a full water loop with a 240mm rad in it, which I really want to see - it should be able to handle it.

[quote]Anyone tech savvy enough to even be aware of steam machines[/quote]

That audience isn't the target of the Steamboxes. The initiative will live or die on how they compete in stores when put up next to the Xbone and PS4. They are not in any way a desktop computer replacement or surrogate and shouldn't be treated as such. There are Linux prebuilt desktops for that purpose if you don't want to use Windows, but those come with comprehensive application suites and are designed for computer use, not console use like SteamOS.

[quote]3 major considerations[/quote]

If you want a thousand dollar gaming computer, you buy a thousand dollar gaming machine. DirectX 12 is not out yet and will require games to explicitly support it - there is no free performance for any title already out. In addition, modern OpenGL can be pretty fast, but it requires writing your draw calls in a way unlike any other API right now to get asynchronous, low-overhead behavior. We should also see OGL5 at some point, and AMD keeps saying they want to open Mantle up and bring it to Linux, though at this point they might as well talk about making pigs fly.

[quote]OSX -> Linux should be easy for game developers(it is already OpenGL most of the time.).[/quote]

OpenGL is honestly a mess. I've tried porting GLES shader code onto the various desktops in the past and the experience is anything but simple. For one, OSX only recently adopted GL 4.1, and a lot of that functionality is still broken because it's been undertested. So you need to target GL 3.3, the same way a lot of games still ship DirectX 9 versions for cards that don't support tessellation; the same problem exists in the GL space. The good news is that all the Linux GPU drivers are now at 3.3 or higher as well, so you can write new GL apps targeting 3.3 with 4.0+ extensions when available, such as tessellation shaders.
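As an illustration of that baseline-plus-extensions approach - a sketch only, assuming a 3.3 core context is already current and a loader like GLEW or glad has been initialized - you target 3.3 and opt into the 4.0+ paths such as tessellation only when the driver advertises them:

    #include <GL/glew.h>  // any GL loader works; GLEW is just an example
    #include <cstring>

    // Returns true if the current context advertises the named extension.
    bool hasExtension(const char* name) {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext =
                reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, name) == 0) return true;
        }
        return false;
    }

    struct RendererCaps {
        bool tessellation = false;  // only used when the driver supports it
    };

    // GL 3.3 is the baseline; 4.0+ features are optional add-ons.
    RendererCaps detectCaps() {
        GLint major = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        RendererCaps caps;
        caps.tessellation =
            major >= 4 || hasExtension("GL_ARB_tessellation_shader");
        return caps;
    }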

[quote]I wonder if they will stop releasing future Valve games on Windows when the OS is ready for download forcing PC owners to use their OS if they want to play their newest games?[/quote]

This is highly unlikely. In the general case, if you write your games against OGL 3.3+ and SDL, you can port them to Windows with a week's effort even on huge titles, assuming you architected the engine well and intentionally played to platform agnosticism. If anything, DirectX, .NET, COM, and native Windows API usage might see a decline since they make a lot less sense - in the same way Mantle doesn't make sense for the average Joe developer - but the problem is always moving software away from the MS proprietary stack to open standards, not the other way around.

The legacy of the id game engines is that because they always used OpenGL, even on Windows, they have remained insanely portable. The same can be said for most Blizzard games, especially WoW, since they all have OGL backends. Most developers today that release on OSX and Windows are still using DirectX just because that's what their main development studio engineers know, and their porting studio knows OpenGL. It is kind of a waste of resources, but it is only recently that you could even attempt write-once-run-everywhere OpenGL, and even then it is just an attempt - like I said, OGL across OSes and drivers is always a huge mess. And it is even worse on mobile, with GLES in all the garbage blob Android drivers.


[quote]Here is what I want from Steam OS: I want to be able to use any controller or mouse and keyboard that I want, not the ridiculous looking Steam controller, I want Steam OS to use way less resources such as CPU, RAM and drive space vs Windows 7/8, I want to be able to use my AMD GPU, I want to be able to play every single game I own not just Linux games. If I can play the same Windows games on Steam - figure out a way to get it working like using something like Wine or some other virtual machine OR simply release a Steam OS version of those games which can be downloaded using the same key you used for your Windows game which would be the ideal way to go IMO - and those games run better because the OS is using less CPU and RAM overall due to less background processes and it equates to higher framerates and better performance of those games then I will immediately switch to Steam OS for gaming and use my laptop for everything else.[/quote]

Linux supports the PS3 and PS4 controllers, the 360 pad (but not the Xbone one yet), and standard USB keyboards and mice. Most gaming keyboards and mice work fine except for special function keys, because the vendors won't provide specialized drivers for them and the kernel ends up using the generic USB HID keyboard or mouse driver.

That works most of the time, though. I do intentionally shop only for peripherals that support Linux. Seven-button mice and any keyboard that has media keys but no programmable keys will work great. I have a QuickFire Rapid Pro and a G500 mouse and both work flawlessly, besides the Logitech mouse 9 button being mapped to mouse 3 in the driver. Across a half dozen mice, the DPI adjusters always work, because the devices handle the adjustment themselves rather than through some proprietary driver interface, so scaling DPI works just great.

I also only use Intel and AMD hardware because I'm active in the Mesa project. Catalyst on Linux is kind of always a mess, but the Mesa radeon driver has come very far in recent years, to the point where I recently played through Metro Last Light on high settings on my 7870 just fine, and it can run Civ 5, Witcher 2, Starcraft 2, WoW, etc. well. The Metro performance is really great, the games through Wine run at about 60-80% of Windows speeds, Witcher 2 was an awful port, and Civ 5 runs great, though it isn't a very framerate sensitive title.

SteamOS support for that driver, though, is lackluster. They include Catalyst anyway, which is notoriously buggy but supports the latest OpenGL versions and usually gets better results on higher end hardware. That driver is nowhere near as good as Nvidia's, but on the flip side AMD actively supports the FOSS driver and Nvidia pretty much hates FOSS. I hope in the next few years AMD just deprecates Catalyst on Linux entirely in favor of the Gallium driver, and relegates Catalyst to support mode for enterprise customers in compute clusters (who are the primary drivers of its continued development on Linux anyway).

SteamOS has an install size of around a gigabyte, and different distros have different footprints. For example, I have a huge number of programs installed on Arch, but all my programs (including Steam, but not the games) and OS parts take up 11GB on disk, compared to a completely blank Windows 7 using around 20GB. If I were only running, say, Openbox and Steam, I could easily get my install size before games below a gigabyte.

Memory usage is also significantly lower depending on the desktop. SteamOS uses XCompMgr instead of Gnome's Mutter as the window manager, but still has a lot of Gnome services using memory for some reason or other. Best case, a hand-crafted installation could get memory below 500MB before games, where Windows always uses at least a gigabyte by itself, before Steam, which itself uses a hundred megs or so.

Performance-wise, besides the ability to hand-trim all the background services on Linux, OS overhead is a solved problem, and Windows doesn't have any real efficiency advantage over Linux - if anything, you can hand-optimize your Linux kernel for your hardware. The basic fundamentals of an OS have not changed, and neither have their performance characteristics, for a good decade. The only real plus of Linux over Windows is that a device is either supported in the kernel or it is not. No driver hunting, because there is almost never a third-party driver outside the kernel tree, with a few exceptions like some Broadcom chips only working with their old wl driver that you have to get from them or a software repo.

I'm really surprised Valve has not adopted Wine at all yet. I expect GOG to ship a lot of Wine-wrapped games this fall, though. I often have better experiences running games through Wine than through some of the crappy native ports (ahem, Witcher 2), because Wine has gotten really good at DirectX 9. Blizzard games mostly work out of the box - on purpose. I know that Blizzard titles, Neverwinter, Tera, and Rage all have engineers explicitly making sure they run well in Wine.

And any game you bought on Steam is installable to your account on any supported device, including Linux. I had Civ 5 on Windows years ago, and when it got released on Linux earlier this month it just showed up in the installable games list. No keys or anything needed - anything you buy on Steam (or Desura, or GOG, or Humble) you can download and run on any supported platform with the original purchase. This isn't console bullshit.

In reality, though, game overhead comes mostly from the engines themselves and the graphics API, not so much the OS, beyond the raw space and memory wastage of Windows. You would see a real performance difference if Mantle were opened up, developed as a Gallium state tracker, and shipped supporting every GPU out there through that driver - which is where the community's free Nvidia driver lives too. If the API can be implemented against the winsys driver interface they use, everyone could see the performance benefits. The problem is that Mantle is really designed to run "on metal", while Gallium is designed to provide a hardware interface that APIs plug into - DirectX 9, OpenGL, OpenMAX, EGL, etc. - so it may not see as much benefit. Though winsys itself is a really modern design around how hardware works today, so it is probably close to what Mantle looks like in the first place.

[quote]I expect Linux ports will still take a while to really pick up speed[/quote]

It is mostly waiting on publishers. Almost every indie game is on Linux already, Ubisoft has said they intend to bring new titles to the system, and Metro Last Light was Deep Silver's latest game - they brought it over and will likely support new titles as well (and I heard they are backporting 2033). Paradox is probably the largest publisher right now that is really great on Linux support - they have brought almost their whole catalog over. 2K, Ubisoft, ZeniMax, EA, Activision, and Square have not launched a single title for Linux yet, and that is where most people's issues lie. But there is nothing we can really do to compel them. More likely they will just launch new titles on it for a while, and if it is popular they will backport classics.

[quote]I'm in for a linux box dual/triboot of android/steamos/some linux variant[/quote]

A lot of people are treating SteamOS as some distinct thing from desktop Linux, but it really is not. Running both is completely redundant, aside from the peace of mind of the driver support Valve expects for titles they ship. It is the same client on both; the SteamOS Steam client just starts at boot and runs in Big Picture as the desktop. You can have a desktop session for gaming at your login manager that does the same thing on any distro out there right now.

[quote]let me re-phrase I'd buy a linux dedicated machine and I'd download steam games and play them[/quote]

You can build your own or buy a prebuilt machine from System76 or another company (ThinkPenguin, for one - Dell and HP also ship Linux computers) any time.

[quote] Small devs could make a lot more money if they could easily get their game to run everyone very quickly and have a HUGE TAM to sell to. No more need for EA etc. [/quote]

You can already run OGL games on Windows. The problem is that OGL 3 was slow to release, and developers en masse learned and transitioned to DirectX in the early 2000s. Now all the graphics engineers are one-trick ponies sitting high up in the big publishers and studios. Additionally, the Windows GPU drivers have a lot of atrophied OpenGL support because it isn't used as much as DirectX, so when a title that uses it comprehensively, like Rage, comes out, the driver vendors get caught with their pants down.

If I were a dev today, I would be pushing for OpenGL ES in the core engine - that way you could bring it to nearly any computing device in existence, at least by the time the engine is done. OpenGL 4.3 requires symbol matching with GLES3, so your GLES3 code should run fine on any conforming driver. The problem is that nobody is doing this, so all those driver code paths are raw and untested. And you would be betting that Apple moves from 4.1 to 4.3 soon (I'm actually unsure whether Apple supports the ARB extension already or not; I know Mesa does even though it is only at 3.3).

You could still pull in extensions and pass the GLES context into desktop GL code to apply fancier volumetric shaders, tessellation, etc. The only real downside to that model is that GL draw calls before the ARB indirect functions introduced across 4.1-4.3 are insanely slow. GLES 3.1 actually includes them - and compute shaders - but that is just too new right now, although when I can eventually write for GLES 3.1 and get comprehensive support I'll be in developer heaven and finally be willing to work professionally in the engine space. Right now I'm a hobbyist in Mesa, because engine work today is honestly a morass of awful overhead and obtuse tricks to get around awful APIs. If I had compute shaders and indirect draws I'd be set.

[quote]Im sick of not being able to manage multiple displays the way do on windows[/quote]

This is actually a pitfall of X, the current Linux display server. And it is a huge piece of shit that IS finally dying. Once KDE ships Wayland support this summer, a huge swathe of distros are liable to adopt it instead of X in Q4 2014 to Q1 2015, and that display server fixes the thirty-year mess while maintaining reasonable backwards compatibility.

The problem is that the proprietary drivers are unlikely to support Wayland as soon as the Mesa drivers do, and SteamOS itself is certainly going to be using X for years, so those benefits might not be seen for a while from Valve's distro, or from Debian itself. Especially with Ubuntu jumping the shark with their own display server.

[quote]People will want to be able to actually use the computer(dualboot) to browse, do photoshop, edit video etc.[/quote]

Steam has a built-in browser, and SteamOS comes with Gnome and its stock browser as well. "Do photoshop" is kind of a corner to get backed into, since Adobe doesn't ship it for Linux and there is absolutely nothing anyone can do about it. Video editing, though - there are a lot of good video editors on Linux. My choice is Kdenlive; as a Vegas user in a past life, it does as much or more in some areas and I really love the UI customizability. OpenShot is another big popular one that got kickstarted recently.

And on the topic of Photoshop: Krita, GIMP, Inkscape, and Karbon exist. Your mileage may vary, but that is why there are options (note the last two are vector drawing apps, the former two are general purpose artistry and painting).

[quote]As long as the (Mobo, Graphics, Keyboards, Mice, ect ect) vendors make driver/app support and other software companys make Linux apps like Team Speak or Vent for starters I would be into the Steam OS. But Id rather build my own system and dual boot so I can still use my investment in current Windows games i have already bought. Also in the future I want to be able to make my own system and not be stuck to SteamOS specific systems... [/quote]

Motherboard manufacturers are absolutely awful with Linux. None of them provide out-of-the-box support, and none of them really contribute to the kernel in any way at all. If even one of them were proactive I'd exclusively buy their products and shout their praises from the hilltops.

In terms of GPUs, all the major vendors are doing something. Intel is really big on their FOSS graphics stack, AMD has Gallium and Catalyst (yes, two drivers - hopefully the former gets good enough to replace the latter soon), and Nvidia has a really up-to-date and performant port of their Windows driver.

I talked about keyboards and mice earlier in this book-length post...

TeamSpeak is available for Linux; Vent is not, but there is the reverse engineered Mangler that works with Vent servers. Neither is recommended though, since Mumble exists, is FOSS, and supports Linux as a first-class citizen with the in-game and positional audio features. There is also Skype, and several Jingle and SIP clients as well; Google supports their plugins on Linux, and WebRTC of course works.

Tangentially, I think dual booting is the wrong approach. If there is something keeping you on Windows, just keep using it. I dual booted for a year but barely used Ubuntu 8.04 at the time, because the only value add was lower power usage and snappier application loading. There will never be anything program-wise on Linux that is not available on Windows - all you get are a bunch of low-level benefits like better filesystems such as btrfs, tiny install sizes, tons of customization and control, and the ease of use of package management. But if you are also running Windows, all those benefits are moot.

But nothing stops you from building your own Linux machine - you just have to be careful about parts, because while they are all tested on Windows, you have to be explicit about Linux support. The good news is that Intel and AMD keep their chipsets and CPUs supported, so as long as you get boards without external controllers for USB or SATA ports, everything should work great out of the box. Hard drives speak a standard protocol, so they all work no matter what - the only blunders I've ever had are hard drives advertising full disk encryption that, in the small print, only works via a TPM-based setup tied to Windows. My 840 Pro does FDE via the ATA password through EFI though, which works great.

And you are absolutely not stuck with SteamOS. Like I said, it is just their own Debian spin that launches the Steam client in Big Picture mode. It is the same Steam available in the Ubuntu, Suse, Fedora, Arch, etc. repositories. And it's the same underlying OS - a game that runs on one will run on all of them, barring old kernels and drivers; with the latest versions across the board, everything should work the same.

[quote]For just about the same money you can get a windows machine and run literally everything. When there is no technical advantage (and mostly drawbacks) why do it? [/quote]

If you are budgeting a $500 custom built machine, $100 for a Windows license is a huge chunk of your budget, and if the games you want run on Linux then it seems like a waste of money. Most computer vendors get subsidized deals on the licenses, and often include bloatware to actually profit off the machine. You could bloatware-ify any Linux distro as well, though - even more so, since it is all FOSS you can run custom software repos and inject ads into almost anything.

I remarked on several technical advantages earlier on, though. The biggest reasons Valve is doing this are: one, the Windows Store and how Microsoft treated ARM in Windows 8 (locked down from the bootloader to the installable apps) scared them off; two, Windows 8 was an extreme flop; and three, by controlling the OS they can ship an image that boots straight into Steam and runs in a living room like a console. Trying to get that on top of Windows 8 would be a nightmare, if it were even doable, and then you have to consider the memory and storage overhead of carrying the entirety of Windows. They took a calculated risk that breaking compatibility was worth it.

[quote]30 fold from a nearly 0% market share is still low.

and no, I'm not trying to put down the idea of steam machines, I think it's awesome, but so far what have we seen? delayed OS launch, OS installation problems, OS optimization problems (almost all AAA titles have similar or lower FPS compared to Win8), and repeated controller delays. heck even Alienware's newest 'steam machine' will run Win8.1 out of the box. and NO, i'm not interested in buying a $500+ device just to play linux only titles.

at the end of the day, I expect steam machines to take off SLOWLY, with people who already own gaming PCs experimenting dual-booting Steam OS, and their market-share will grow ONLY if they perform well. [/quote]

Linux user results are around 1.2%, so a 30 fold increase would be pretty huge - that would put SteamOS at over a third of Steam's install base, a major market mover. I doubt that kind of adoption, especially in the short term, though - there are millions of Windows computers that have just Steam and maybe a game or two installed, and they all count towards the totals. There is a lot of inertia in those numbers.

They are delaying the launch because big publishers are not moving quickly to it, mainly because it is a disruption - EA et al. probably have MS, Sony, and Nintendo divisions that develop engines and bring games to each platform, but SteamOS requires OpenGL and Unix developers, which they probably don't have unless they were already porting to OS X.

The optimization problems are case by case. On any platform it is easy to make a bad port - see Dark Souls on Windows. Witcher 2 is kind of like that on Linux - if you port by just wrapping DirectX in a translation layer like Wine, you are going to have a bad time. Metro Last Light, Civ5, Portal 2, Team Fortress, Left 4 Dead, Dota 2, Dungeon Defenders, Garry's Mod, etc. all run really well for me and are competitive with Windows, varying from 75% to 150% of the Windows frame rate. It all comes down to the quality of the port.

Steam Machines compete with consoles. Hopefully the void left by none of the current crop actually being powerful gives Steam Machines room to prosper - the combination of deep sales and the ability to get a console that can actually do 1080p gaming for like $800 should be competitive.

... talk about walls of text.

2014/05/07

Hardware Rants 1: An Open Future

I keep going back to this (probably ignorant, and stupid) idealization that we are still wasting our time on ancient technology. We still use tar. But this is not about software, it is about hardware.

x86 is well past its 30th birthday, and it was never a well designed beast to begin with. Itanium was meant to replace it, and I wish it had, but today we don't have that luxury. I think in most ways Itanium failed because it didn't innovate enough.

There is this massive disconnect in consumer electronics. Everyone does things their own way - custom silicon from the ground up - mostly because the free software ideology has never translated into a free hardware one. There is essentially no modern transistor based machine, fabricated on a modern process, that is also open source. Which is the whole problem: if being open gimps your ability to function, you lose the benefits of being open.

I have been thinking in recent months about how to tackle this architecturally. We are still building on all these legacy proprietary monstrosities, so if we toss all that out for a moment and tackle the parts on their own, we might be able to come to some reasonable conclusions about where to go from here.

First, firmware. You need to start with something. In hardware terms, you have a ROM payload the CPU loads, and it goes from there. I think one principle to take to heart in future innovations in the computer space is the ability to package all the tech together on one die. Modern Intel CPUs, for example, have the entire northbridge onboard - circuitry that ten years ago was its own discrete chip. I don't see any reason a newer architecture could not go further with this - having a real system on a chip is hugely advantageous, because it lets you make assumptions.

What would we want on an SoC? Bandwidth channels to external interfaces (I'll get to that), a firmware, something to signal the chip to start and stop, local memory (which may one day simply be enough for most systems) to use before attaching external memory, voltage regulation, and processing "space". I say space because that can include branch predictors, cache, FPUs, SIMD core arrays, or generic pipelined cores. I'd call it core soup, though, because you go beyond AMD's HSA and just treat everything as a first class processor.

Which I think is really important. We are still developing graphics hardware with each generation having a (usually) proprietary microcode that is generated by (usually) proprietary drivers on the cpu side. That is so bad it hurts.

I really want to talk about interconnects, though. Modern systems are a hodgepodge of disparate technologies - legacy IDE, PCI, serial, and parallel. System specific inter-processor interconnects like QPI. Chipset links like DMI. PCI Express, which has done well in becoming a fairly standard transport interface - it is included in Thunderbolt, for example. SATA and SAS - and don't get me started on how we got into this mess where enterprise uses a different hard disk interface from the consumer space, or why SATA still requires dedicated power.

In truth, though, all these interfaces are just trade-offs along a few axes:

  • Bandwidth, where you need wider buses plus the synchronization mechanisms or timing control to regulate them, and since you are almost always frequency limited already, more bandwidth has to come from parallel interfaces.
  • Latency, which is the difference between real-time transports and packet-based ones.
  • Distance, where cabling that travels meters rather than millimeters, and has to be flexible, cannot carry signaling at the frequencies or with the low jitter you can manage on a PCB.
  • Interface, where SATA describes protocol operations as electrical signals, while something like DMI is direct, processor-specific hardware addressing a numeric node in a memory array. The latter is not portable, but the former is.
The modern problem is that we have all these interfaces differentiated around the first three, when all we really should be discretizing is the last one. Thirty years ago manual clocking and bus rate timings were a real issue, but today dynamic frequency and timing control is routine inside our processors, yet our interfaces never take advantage of that tech. Mainly because they are old.

In principle, what we really need are the following interfaces to almost any computer:
  • A high bandwidth, low latency interconnect between central system components (memory, other processors, graphics accelerators, local storage).
  • A low bandwidth, jitter tolerant peripheral interconnect for devices that don't demand much bandwidth, while still keeping latency reasonable.
In theory, you could have one negotiation API for all hardware on any interface, and communicate with each device in its protocol of choice. It is all electrons on copper anyway, right? We can adopt the PCI world's concept of lanes here - you have bandwidth lanes off the CPU nodes, and those lanes can be linked to memory, other processors, graphics accelerators, local storage, etc. You make trade-offs - devices further away run at lower frequencies with more spectrum spread, sacrificing bandwidth for transmit distance. All of this could be handled by a basic negotiation protocol - the board itself has ROM sectors describing each bus's locality, and the rated speeds and lanes for each device.

The serial version is just a single lane, with the same semantics. Rather than have addressing protocols at the controller level, you can have them on the devices and have communication negotiation with the host.

Each lane would have power delivery - probably on the order of 5 V at 1.1 A per lane for 5.5 W each, so a 30 lane GPU could pull 165 W before it needs external power, and an SSD could live off a single lane's 5.5 W. I could imagine a justification for a higher power variant, or just having power tuning on the chipset (which is what we are going for, after all) running off 12 V.

Speaking of voltage, this is an area I am just not involved in, but I'd definitely look into deprecating the 3.3 / 5 / 12 volt rails in favor of one common carrier DC voltage that the board itself can distribute appropriately. And getting rid of 24 pin connectors. Those things are so stupidly huge.

You would want internal and external variants of these interfaces - though I'd imagine you could just use the same connectors for both, really, like eSATA does. You would want to include some kind of mounting mechanism for heavy equipment, though.

And the most important policy in all this is that such a system would need to be open. Open standards, open consortium run technology. No patents, open design documentation - at least in the reference implementation.

As an aside, every standards organization that doesn't maintain a reference implementation is lazy. Looking at you, Khronos Group. Adopt Mesa, and if it isn't keeping up with your feature bulletins, then you might want to slow down pushing the envelope.

As usual, I'm interested in this goodness. If you have money and want to pay me to work on this full time, contact me via the usual channels. I got some passion here I'd need to spend a year draining to the bone to get out of my system.

2014/01/28

Software Rants 18: Things to do in KDE for 2014

So it has been a year and I'm still a KDE fanboy. Qt is a great toolkit that needs way more adoption, and QML fixes any gripes I had with C++ widgets. I'm still of the opinion that a top priority needs to be moving the entire Qt / KDE codebase back towards utilizing the STL, but honestly I can't blame anyone involved for not wanting to port tens of millions of lines of code to a new language version, especially when that would prompt a lot of architectural changes (smart pointers, atomics, std::thread, lambdas, constexpr).

My biggest issue is always finding something to do that isn't overwhelming. Today I had an idea - I use btrfs, and I was going to write a script to automate my snapshots. After a few minutes in bash, I found instructions to write an EOF to sudo, gagged a bit, and went to Python. And then I'm thinking, "why don't I throw a GUI on this and make it official?"
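
Whichever language ends up wrapping it, the core operation is just a dated, read-only snapshot - a minimal sketch, with the subvolume and snapshot directory made up for illustration (run as root, and both paths need to live on the same btrfs filesystem):

btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%Y%m%d-%H%M%S)

A GUI or KCM on top of that would mostly just be scheduling these, listing them, and cleaning old ones up with btrfs subvolume delete.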


My natural tendency is to move sysadmin features into a KCM module, but if I were to do that I would want to use Frameworks 5 and write it in QML, which isn't quite mature yet. In practice I'd be writing code I couldn't use on my own system, which is a bit of a no-no for such a short project.


So I want to go over everything I think could help KDE in 2014, in terms of bug fixes and features, assuming current trajectories place a Frameworks 5 release around June and the entire KDE ecosystem switching over sometime in early 2015.

  • Menu-in-titlebar : Currently, there is a library called appmenu that allows you to put application menus in an icon in the title bar. This is good, but the next iteration is to place the full menu in the titlebar - personally, I'd implement this as an extension of appmenu, so that on sufficiently small windows the menu collapses back into the app icon. When the titlebar isn't big enough to fit the whole menu, you keep a minimum title size (determined by study - whatever the minimum set of characters is to uniquely identify a window in, say, two standard deviations of all windows) and fill the rest of the bar as much as possible with the menu, from left to right. If the whole menu fits, the title can either right-align all the time, or center in the remaining space. If the whole menu doesn't fit, you have two arrows on either end to scroll the bar, which also works as a flickable and with the mousewheel. If you invoke a menu through a keybind, such as the file menu, it would animate a slide to bring it into the frame if it isn't already visible. I would also imagine other controls you could integrate into a titlebar, such as per-window brightness, contrast, volume, pitch, and bass.
  • Window / display sharing : Miracast is supposed to show up in Mesa this year, but as a technology it is woefully lacking. We should also see Daala released this year - hopefully taking the video codec world by storm. Whenever that happens, the open codecs will need evaluating for one that can do streaming with low latency and good compression, like the h264 streaming seen in Miracast, Shield, OnLive, etc. This technology needs to be standardized beyond KDE, so it can be used as a window share protocol for Wayland network transport, in RDP to help with its bandwidth constraints, and in open video conferencing. WebRTC and Jingle are both projects involved in this domain, so whatever matures out of there could be widely adopted for window sharing, screen sharing, and livestreaming across the desktop. An intuitive implementation could be a killer feature for years.
  • Timeline : Rosa has an implementation of an activity timeline already in release, and should see adoption in the broader ecosystem. A lot of features from Rosa should probably go mainline kde, since they all seem well integrated with Plasma Active and Desktop.
  • When I create a new folder, Dolphin shouldn't pop up a dialog; it should create the folder and let me type the name inline, which is what the Qt file dialog already does.
  • Rekonq : I really want to use Rekonq. Probably yesterday. The footprint is much lower, it has a super fast rendering engine, and it integrates well into the desktop. My problems with it all stem from core Firefox features I just need available, and whether it implements an addon API mimicking Chrome's, so it can use Chrome extensions, will be the deal breaker. The other aspect is syncing - it currently only does Chrome bookmark syncing. If it can act as a Chrome browser for syncing purposes, it would be set.
  • Deep screen recording integration : KSnapshot should at least have an option to record to vp8 or ogv with a start / stop record button that is keybindable. Since opengl recording requires injection, dedicated programs suffice, but having one of those as well would be nice. In theory, though, you would want to expand KSnapshot's sharing features to send images to others, and to broadcast live video or send the recorded footage. I'd definitely like to tackle this problem domain this year.
  • An active solution to small buttons : Plasma Active has one problem right now, which is that most buttons are too small. I expect that in kf5 the theming engine will be reworked enough to easily specify a scaling factor on all interactables (probably through some deep qt integration), rather than just individual elements, so the desktop can have its traditional metaphor that can elegantly scale down to the mobile form factor with one slider.
  • Distributed cloud accounts : It would be infeasible for a completely free desktop like KDE to provide per-user networked accounts - a verbose sharing policy with Akonadi, plus all your per-app settings, plus your saved passwords, etc. An alternative is to effectively combine BitTorrent Sync and wallet passphrases - run a daemon that seeds and shares wallet data with the swarm, with a few orders of duplication and information sharing (i.e., each account is duplicated in fractional parts across the whole network, and you can rebuild your own complete account database, with that access logged in case someone is trying to brute force you). Plus, every IP could be subject to fail2ban on a swarm scale, so anyone trying to brute force their way through all the accounts would have their IP ranges blocked quickly - short lockouts, but ones that make raiding the entire swarm infeasible. With a system like this, you could have a secure share of your passwords, bookmarks, and application preferences, and you could share files in your KDE cache, etc. It would require some kind of usage limit so someone doesn't just dump data into the cloud and overwhelm it, but it should be able to hold at least voice mails and several photos or short video clips.
I'll add to this list as I think of new things, but I hope this is a bright year for the KDE project, and that I can find the time to crank out some of these ideas. First I want to finish the Cadaques book and read the kf5 documentation in progress.

Display overclocking with XRandr

As the trends shift and more enthusiast class users start considering Linux, it is important to supply them with the tools for all the push-it-to-the-limit tuning they had available under Windows.

Monitor overclocking isn't a well documented art, especially since it can literally break the display, in a way reminiscent of 90s AMD processors melting their sockets. This isn't the coddled overclocking of today's desktops, where pushing things too far just results in an unexpected reboot and a need to lower your expectations. Try setting your monitor's refresh rate to twice or more what it is rated for and have fun with a dead panel.

That doesn't mean you can't overclock soundly. Trying to double your refresh rate is like trying to run an i7 at 6ghz with the stock cooler. Instead, look for incremental increases in performance like any other overclock, and hold it back from the literal edge of stability a bit so your screen lasts longer than a month.

Under Windows, as usual, the sizable install base has produced graphical tools to set refresh rates. And yet again, Linux proves its technological superiority by not requiring any such tool: xrandr has all the facilities you need to create custom display modes.

So yeah, we are using the cli here, but this is one situation where I'll defend it - one, it requires you to know what the hell you are doing, and two, it is integrated right into the display server itself and is literally a few shell lines; if you get a stable OC you can just add them to your xprofile or autostart scripts. In this example I'm taking a 60hz screen to 65 - start here, at the least (or at around 33 on a 30hz panel), and increment up until you start seeing graphical errors or glitches. Good tests for graphical errors are fast paced scene changes in 3d games, plus some repetitious pattern (to gauge burn-in), plus anything that rapidly fluctuates subpixel color intensity - I just use a gif of rapid color flicker at 48 fps.

First, we use cvt, provided by Xorg, which stands for Coordinated Video Timings, a VESA standard. It generates the modeline for our desired new display mode.

cvt 1920 1080 65

Modeline "1920x1080_65.00"  188.00  1920 2048 2248 2576  1080 1083 1088 1124 -hsync +vsync

We get back this Modeline, which we will add to xrandr - note changing the quoted part just changes what we name this mode, so you can name it "oc" or "65" and be fine, since xrandr by default populates modes by resolution (ie, 1920x1080).

xrandr --newmode "1920x1080_65.00" 188.00 1920 2048 2248 2576 1080 1083 1088 1124 -hsync +vsync

Now that we have a registered mode, we need to add it as a valid mode for our specific output - running xrandr without arguments prints the state of all the display outputs; find your screen and replace HDMI1 here with whatever it is named.

xrandr --verbose --addmode HDMI1 "1920x1080_65.00"

Then you need to actually set the mode. If after running this you see obvious glitches, revert quickly before your display is damaged. The good news is that on many panels, going ludicrously out of refresh bounds will just get you out-of-range errors on the display or in your Xorg log (e.g., my Asus IPS panel is probably only rated for HDMI 1.2, so it can't go beyond a 75hz refresh).

xrandr --output HDMI1 --mode "1920x1080_65.00"

If you encounter errors, remember you will probably need to specify the rate as well when setting a default mode, since most panels present multiple refresh rates and omitting it may not pick the one you expect.

xrandr --output HDMI1 --mode 1920x1080 --rate 60

But if it works, congratulations, you overclocked your monitor. You will have to add all the xrandr lines to an init script that runs every boot, since it is a non-standard mode. I'm definitely going to be prioritizing displays that overclock well in the future - you can save some bucks on that 120hz premium. I'd love to get an early 4k display that can be pushed to 90hz, since DisplayPort 1.3 can easily handle that.
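
For example, a minimal sketch of an xprofile / autostart snippet that re-applies the mode from this walkthrough at every login (substitute your own output name and modeline):

# register, attach, and apply the overclocked mode (values from the example above)
xrandr --newmode "1920x1080_65.00" 188.00 1920 2048 2248 2576 1080 1083 1088 1124 -hsync +vsync
xrandr --addmode HDMI1 "1920x1080_65.00"
xrandr --output HDMI1 --mode "1920x1080_65.00"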

2014/01/12

AUR Packages that should really end up in official repos!

What a strange blog topic, but something I'd like to document, since I really think the only failing of Arch is the rate at which AUR packages get officially adopted.

As a preface, I won't mention anything proprietary, because most of those packages can only be in the AUR due to the legality of redistribution. Also, why would the Arch support infrastructure bear the burden of distributing other people's proprietary blobs? They do have some proprietary software in their repos, such as Steam and Flash, but I'm not going to harp on anyone for not serving up server space for other people's binaries.

First group - games. Plenty of games and game engines are already in community, so any FOSS game engine (assuming it isn't brand new) has no excuse for not getting adopted.

  • darkplaces (quake engine)
  • eduke32 + eduke32-dukeplus (duke3d engine)
  • freedoom (entirely free game assets for doom!)
  • gzdoom (doom engine)
  • hexen2 (hammer of thyrion)
  • openarena (foss quake 3 fork)
  • zandronum (foss doom engine with multiplayer)
And the second group is a plethora of KDE and Qt apps and documentation that all seem to have 100+ votes but are still not in the main repos:
  • kcm-ufw
  • kcmsystemd
  • kde-thumbnailer-epub
  • kdeartwork-sounds
  • kdeplasma-applets-networkmanagement
  • kdeplasma-applets-starfield-wallpaper
  • kdeplasma-applets-veromix
  • skanlite
  • qt5-examples
  • qt5-jsbackend
I would say that a lot of these aforementioned packages include a lot of data files - models, textures, and sounds - that would take up huge amounts of server space, and I'd completely understand not including that in the main repos, if it were not for the fact that games like 0 A.D. and Xonotic have their data hosted in official repos.

I also get that a lot of the packages in the second set require input from the original authors, who may be MIA. That would explain a lot of it. And maybe some of the first group just outright said no to official Arch packaging, who knows.

I just think that moving forward, Arch will need to consider ramping up the process of pulling the most popular and longest running non-proprietary AUR packages into the official repos. My impression is that it doesn't happen nearly enough, and that leads to overdependence on the AUR when pacman and its repository system work so well. Being constrained to pkgbuilding everything you use isn't the right answer, but neither is a laissez-faire avenue where anything gets into the official repos on a whim.