2016/02/13

Moving to Wordpress

I have no idea why anyone would have read this entire disaster up to this point, but new content will be posted to https://zannyland.wordpress.com/.

Nothing against Blogger besides the whole Google thing; I just feel like a change of environment would be fun. Plus WordPress is pretty much the standard, and it's open source; throw in that it's a nice change of scenery and that's a strikeout. On the bright side, the export on Blogger and the import on WordPress worked perfectly.

2015/11/19

Software Rants 22: Saving SteamOS

Steam Machines have landed, and the results have been everything expected but nothing desired. Unsurprisingly, the uncoordinated release brings obsolete OS versions, broken controllers, and a store that doesn't even show games that actually work on the system. SteamOS still only supports one GPU vendor's proprietary driver, and there is still no compelling reason to adopt the platform.

So it should be no surprise that, while the Steam Link and the Steam Controller will probably be wildly successful, Valve has failed to move the industry towards open APIs and cross-platform-by-default through SDL / Vulkan. It does not help that Khronos is likewise dropping the ball catastrophically, with terrible disclosure about when they will actually publish their supposedly "open" and "collaboratively developed" API, still MIA almost two years after its announcement. But even OpenGL 4.5 would be fine if drivers existed that supported it, and if Valve had shouldered the burden of making sure things shipped at a quality standard that would move console units.

But that never really was the point. Fundamentally, SteamOS is a hedge, something to keep on the back shelf for whenever Microsoft goes too far for Gabe to handle and Steam stops working, as Windows continues to follow OS X towards total lockdown of a proprietary operating system. So Valve honestly has all they wanted out of this product - they got Big Picture mode and in-home streaming for their Steam Link, and the continued lack of broadcasting on the native Linux client shows where the priorities lie.

So how do you go back in time and fix this mess? Or, tackling a problem even harder than inventing time travel: how do you overcome the awful press this botched launch will generate and reforge the product into something that might actually see adoption, maybe even achieve the ultimate goal of usurping consoles entirely with an open standard base?

Well first, we need an open standard base. Part of the reason Valve will not see any industry fervor towards their platform is that they are taking as much of a cut as Sony or Microsoft on game sales. Even if Steam is just a program running on top of X rather than the whole OS, Android soundly demonstrates how alternative app stores do not work at all when a default one ships almost universally. And Steam is even worse - it's not only universal, you have to go out of your way to get out of it.

If Valve wants SteamOS to succeed, they have to admit that while they can control what is on their own Steam store, they need to open up the client software to work as a system shell, and they need to allow third-party stores to list themselves on Steam. That means if EA or Blizzard want to throw Origin or the Battle.net launcher on Steam so those companies can ship their games without Steam's cut, Valve has to enable that through some mechanism, so that average people can install third-party launchers and stores and play games through them.
Open sourcing the Linux Steam client would be a start. It would not remove any competitive advantage, because Valve's whole advantage is the web services they provide through the client - the screenshot backups, the chat, the video broadcasting, not to mention the games they sell and host themselves. To turn Valve's client software against them, you would have to reinvent all their back-end infrastructure and somehow persuade users and developers to switch, which is practically impossible, especially since Valve would never lose their trademarks on the Steam brand.

They could even release the source under a nonfree license, such as one that bars commercial use - the point is they need community involvement and a clear path for third parties to get their storefronts on Steam, so that the Linux gaming scene isn't just a one-stop shop of whatever sells on Steam.

One core problem is the lack of common services on SteamOS - mainly, that the Big Picture client needs to fill more roles as a TV OS than just playing games, yet Valve has not sufficiently provided for all the use cases average people expect out of a set-top box.

It would be a killer system if it included DVR functionality (when the machine provides video-in or coax-in), and it would need to bundle Chromium with its DRM module to stream Netflix and Hulu. Amazon Prime Video would be a mess, because Linux in its current state cannot support HDCP, and there is no way to ever implement proprietary-only video DRM in an open source OS like that. What a shame.

That means web app support in SteamOS. Most of the common services people use on other consoles - including YouTube, Pandora, and Netflix - already have ready-made web app versions for other set-top boxes. Integrating web apps as first-class launchers in Steam fixes almost all of these issues, if they also overhaul their internal browser with a Chromium implementation, while serving TV or mobile device IDs to get UIs better suited to long-distance viewing.

The second generation of Steam Machines will need quality control. No more $1500+ desktop computers sold on Steam. There need to be pricing limits and minimum feature sets required to use the Steam Machine branding, and a minimum of software support and build aesthetic to actually make a console. More important than any of that, we need a reference Steam Machine from Valve themselves, targeting baseline specs at a break-even price to define the floor of Steam Machines, preferably at a console-competitive price point of around $400. Valve definitely has the resources and scale to make it happen, far more than Dell did, who depended on per-unit profits to justify the commitment.

And Valve needs a timed exclusive for, let's say, "SteamOS 3.0". Half-Life 3, Portal 3, whatever - something needs to release on SteamOS a month or three early to move units. Valve games are legendary. They could easily move consoles by releasing a good enough title exclusively on their own OS, especially if they only announce it as a timed exclusive without giving concrete timetables for releases elsewhere. It's dirty, and it defeats the point somewhat, but it is also necessary to drive adoption; otherwise the chicken-and-egg problem of no customers, no games is never corrected.

But most of that is money and effort, and it is extremely unlikely to happen now that Valve has what they want out of their product. They put in the minimum effort to keep Microsoft from pushing the Windows Store as a competitor to Steam, much less locking them out of the platform, and now that they have what they want, SteamOS can rot until they need a hedge against the monopoly again. What they underestimate is how entrenched Windows already is, and how it can only get worse with lock-in like DirectX 12. Their hedge is never going to pack the punch it had in 2012 again, and it is unlikely Microsoft can be strong-armed this way again either, unless SteamOS proves a legitimate competitor before then.

It would be a shame to witness the death of PC gaming at the hands of console-grade proprietary lockdown that Valve was aware of but too apathetic to really act to stop. There are thousands of us who realize the danger, and Valve obviously tasted it, but it is naive to think it just goes away and you can keep using the proprietary monopoly product indefinitely. It's a never-ending battle of politics if you don't actually own your platform, and Valve has done everything they can to try to control a platform they have no control over.

2015/11/04

Software Rants 21: Wildstar

So I'm skipping a Plasma 5.4 rant. The only issues are that KRunner is now incredibly laggy and crashes a lot due to its history feature, my DisplayPort monitor starts slower than the others so plasmashell doesn't properly add it to the desktop set and I have to restart it after logging in most of the time, and the new audio widget doesn't have output switching. Highlights are lots of new icons, KRunner history (lol), and finally a good audio plasmoid.

But on to more important things, like video games! Particularly MMOs. Particularly, Wildstar. It went free-to-play about two weeks ago, and I have been playing quite a bit since then (I've played every class a smidge; my lowest is an Engineer at 3 and my highest is my level 50 Medic, followed by a level 8 Warrior, Spellslinger, and Stalker). I'm going to detail my likes and dislikes about this game, though I do like it and really hope it can succeed - amongst competition like Neverwinter and Tera, it stands out for its setting and its housing, and the PVE is much nicer when you actually need to use the trinity.

As for my qualifications: I have most leveling reputations maxed, I hit max level in Scientist, I decked out my house quite a bit (but have not bought an expanded house because I'm cheap and hoarding plat for riding skill), I have completed most veteran adventures at this point, and I have gone into a veteran dungeon to lag to death and die immediately (though I'm not counting the lag, since it's wholly because Wildstar cannot use any Wine performance optimizations, and I believe it throws a lot of rendering errors as well).

First up, the intro experience. This game lacks characters. Artemis Zin is pushed early on in both starting zones as this continuous character, but without a major introductory cutscene to introduce her, she's just words you follow like every other NPC. By the end of the game, when her story ends in Malgrave, I think I had picked up a bit of her personality, but only by reading lore items and actually paying attention, something MMO players rarely do.

I'm going to make large comparisons to WoW - of course - because this game was developed by a studio founded by ex-vanilla-WoW developers. In WoW, I do care about major lore characters, and there are three ways I develop an affinity: cutscenes that show them off (Varian Wrynn in the Ulduar cinematic), significantly scripted ingame actions they perform (think the duel between Garrosh and Thrall in the Wrath launch event), and user-generated lore hype (everyone loved the Warchief in vanilla just because of his WC3 lore, and it spread to other players).

Basically, you cannot make a character great on written text in a 3D world. They need to take action. In the OmniCore weekly at the end of the game, you escort a Mechari called Axis Pheydra, who is finally an NPC with voiceovers and some character beyond the bits of Artemis. Each race has a quest leader - Mondo Zax for the Chua, for example - who recurs in the world, but since all they do is hand out quests and they have very limited interaction with the world, they aren't really too interesting.

2015/06/01

Software Rants 20: Plasma 5.3 Problems

Blogger problems: why can't you set the default font to be something other than MS shovelware? I keep manually setting these blog posts to Helvetica.

I figure with each new release in the KDE 5 series I'll just make blog posts about what's broken. Gives me some clairvoyance on where I can direct any contributions I want to make.

This release was quite the landmark for me, though - I finally switched my desktop and notebook over to Plasma 5. First things first: something must have improved since last time for me to make the switch, and it definitely did. So let's address my complaints from last time and see what has gotten better, and add some more remarks now that it has been my daily driver for several weeks.


  1. Performance. With the release of KDE Applications 15.04, a lot of software has gotten a KF5 release, so much more of my desktop is now using the same shared libraries again. We still have that render thread problem - Qt 5.6 is not fixing it, so it will be at least next year before they reenable it. Which is insane. Besides that, opening almost anything is quite noticeably sluggish on Plasma 5 on both my computers - it seems there is no disk caching going on, and since almost every window now uses the GPU, it might be compiling shaders every time, which is probably the real culprit. At the same time I've put CyanogenMod 12 on my S4, and when my two-year-old phone is more responsive than my eight-thread i7 computers - which have ten times or more the CPU and GPU performance, both with SSDs that can pull 400 MB/s - something is still amiss. It could also just be the QML compiler, but that seems less likely, since both Android and KDE 5 use interpreters, so sluggishness should not be so much more pronounced on one than the other. And my own QML apps are usually pretty spiffy in startup time.
  2. Multiple monitors. Works alright now! I can set up my desktop properly, and it persists across boots... usually. I'm pretty sure my DRM driver sometimes reports my displays under different names at random, which breaks everything and gives me an ugly cloned 4:3 fallback mirror on all my screens, meaning I need to redo the configuration. But that is not KDE's fault - it sometimes happened under KDE4 too. The difference is that in the old version I could reorient the monitors for that one boot and not save the layout, whereas now the layout is autosaved. Honestly, I think the advantages of autosaving outweigh the inconvenience of sometimes having back-to-back boot cycles where I have to reconfigure my displays.
  3. The KCMs haven't improved much, but that is pretty much my fault. I've found trying to work on either the KCM projects or KDE Telepathy to be an incredibly frustrating experience. The system settings KCMs live in the Plasma repo rather than in systemsettings itself, so you have to rebuild the whole desktop to try modifying one KCM... on the flip side, KTP is super annoying in how you have to manually build binaries from a half dozen repos to test changes, because the project is - imo - pointlessly overarchitected into several git projects rather than just being one or two. But that's another blog post!
  4. Plasmashell still randomly crashes, but at least it restarts properly now. Opening programs still does randomly kill the shell, which is an oddity...
  5. Kmix is still awful, but I think someone was working on giving it a QML interface for the next release, which is what is needed.
  6. Kickoff works fine now. My only complaints are that you cannot resize it, system settings is no longer under "computer", and you cannot drag desktop files from Kickoff and drop them into Icon Tasks like you could in KDE4. It's a lot prettier now, though!
  7. The digital clock is still worse. I just have to live with having my time and date in the wrong format with no easy way to fix it.
  8. Why the hell does systemsettings open as a stupidly small box? Hell, a lot of KDE software is now doing that. I know what it is - default window dimensions. What I want to know is why Qt got stupid and stopped sizing containers around the elements therein. Under KDE4 it usually would. But a lot of different settings pages are now ignoring the actual dimensions of their subelements and presenting themselves as tiny boxes. File dialogs do it. Kwin effects does it. It seems like a major Qt5 regression where any resizable element containing lists of widgets just stops caring how much space the widgets take up and doesn't even try to size itself reasonably for its contents. Using it more, there have been a lot of improvements in individual systemsettings modules, but the UI is still barf.
  9. Breeze-GTK at least is in the works, judging from reddit posts I've read. Some super amazing guy is working on it, so hopefully it will be done one day. GTK still looks like butt.
  10. Muon hasn't changed pretty much at all.
  11. Bluedevil was fixed! And there is now a native Bluetooth widget like the networkmonitor one. Extremely positive changes here, everything is better!
  12. IconTasks now always delays tooltips. I don't mind... since I always do that... I just wonder if anyone else now misses instant tooltips.
Now that it is my daily driver, and now that all my deal breakers are resolved, I can bitch and moan about all the new problems I've run into! This is the list I'll use in my "why Plasma 5.4 sucks and free software developers should be doing a better job giving me amazing desktops for nothing, the brats" post on an internet blog where I complain about things too much.
  1. Responsiveness. I mentioned this above, but it's still an outstanding issue. I have a desktop with 16GB of RAM and an 840 Pro, and I get tangible application load delays and sluggish animations that never happened on KDE4. I do believe it is related to QML, seeing as it's everywhere now, which means shaders are constantly being compiled and my GPU is being hit quite often. It might be related to how I'm using my 290 on radeonSI as my only GPU now, since my onboard Intel DVI port could not drive my 144hz monitor at 144hz. I do want to consider writing the modeline myself and trying PRIME offloading again, though.
  2. plasmashell still has issues. Any real-time widget will constantly load the CPU, like it's drawing updates at infinite FPS or something - this includes the network, disk IO, and CPU monitors. Kind of annoying to not have that info anymore. Sometimes, applications launched from plasmashell stop having working file dialogs - the application locks up as if there is a modal window over it, but no window appears, and I imagine some hidden window somewhere else in the plasmashell sea is impersonating the dialog. Either way, when this happens, I must xkill the program and restart plasmashell - there's no other way to get working file dialogs back. It only happens with KF5 programs, too. And the shell does still crash occasionally.
  3. First boot problems. On my desktop, the time from login to working desktop is often at least a minute. It takes around 15 - 20 seconds for krandr to modeset the screens away from the SDDM mirrored monitor configuration, and then another minute or so of plasmashell and kwin jerking it in the corner for some reason. Logs are not informative about what is actually going on here, and I'm already in the habit of starting my desktop when I get up to go brush my teeth, and I sleep it throughout the day if I'm going anywhere, so I don't really feel the sting of this problem - but it is a problem all the same. On my notebook, desktop widgets flicker like mad for a while after first logging in. Restarting the shell fixes the flickering, but it's pretty wild while it's going on. It only impacts desktop widgets, not the panel, so once I open kmail / firefox I don't have to look at it anymore, and it does go away after a minute or two. Still a regression.
  4. Autostart is ignoring /usr/share/autostart. Just outright. Adding scripts is broken, so I had to make desktop files for all my shell scripts.
  5. KDED is trying to open ksysguard as ksysguard.desktop, but ksysguard got a KF5 port and was renamed org.kde.ksysguard.desktop. I have to symlink the new one as ksysguard.desktop in my applications directory and then have my custom shortcut of meta-g to open it point to that desktop file for it to work, or I get a "malformed ksysguard.desktop" error hitting the shortcut. This happens nowhere else.
  6. KDE Telepathy is a mess. It's now on KF5, has a new desktop widget, and looks gorgeous... but everything has regressed so badly. It now has a proper KCM to manage accounts, it's now using kpeople, and soon it should be using telepathy as its accounts backend. Kontact should also adopt kpeople and that backend at some point, and when it all works it should be the online accounts management we should have had years ago. But for now, it's a half-baked mess - there are no error messages when accounts fail to connect, and the various login providers spinlock themselves when external servers are unresponsive to login attempts. That means if I try to sign in to a Hangouts account while the telepathy part of KTP is broken, it will max out a CPU core and sit there waiting for a working pipe that will never be set up, until the connection times out. You can only add about a half dozen different account types now, down from several dozen in the 4.x series, and we end up with Gadu-Gadu and IBM Sametime rather than the accounts people actually use, like AIM... or SIP... or Bonjour... half my IM accounts are no longer supported even though Telepathy still works with them just fine. Even when an account is available, going online is a pain in the ass - you must not only enable chat in the internet accounts KCM, you have to go into the contact list and force online to actually retry connecting if it ever failed even once. The call UI is completely gone; I can't even dig up an experimental git repo of a KTP5 call UI. The Google Talk telepathy tube must be screwed up or something, because I cannot have two Hangouts accounts active at once anymore, and the client will randomly lose the connection on the only one I can keep online, then spinlock and error out until I relog the session. If it was not such a pain in the ass to get a working development environment for KTP, I'd be trying to fix this mess in an instant...
but every time I go to do it I have to pull down so much other stuff in git form, then build it all and try installing all this crud just to run the connection manager, and I always give up after an hour or so. Some projects like mozilla-central really need to learn how to separate their git repos, but I see no reason why a desktop IM client needs different git projects for its file transfer dialog, accounts manager, filetransfer-handler, desktop applets, text interface, call interface, and a kded module. None of this stuff works without all the rest; if anything it should be one ktp project with plugins for sendfile, call, etc.
  7. It's great that desktop widgets now snap to a grid rather than being randomly sized blobs of rectangle, but the alignment is wonky when mixing in old widgets like the notes one, which consumes much more space than the actual displayed notepad. Whenever my screen resets, it also dumps all the widgets in a corner, and I have to put them all back in their positions on my screen.
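On the modeline idea in point 1 above, a rough sketch of what I have in mind, for the record. The timing numbers below are only a sample of what `cvt 1920 1080 144` prints - regenerate your own - and the output name DVI-D-0 is a placeholder (check `xrandr -q` for the real one):

```shell
# Rough sketch of adding a 144hz mode by hand. The modeline is sample cvt
# output (regenerate with `cvt 1920 1080 144` for real use), and DVI-D-0 is
# a placeholder output name.
modeline='"1920x1080_144.00"  452.50  1920 2088 2296 2672  1080 1083 1088 1177 -hsync +vsync'
name=$(printf '%s' "$modeline" | cut -d'"' -f2)
echo "$name"
# Then, inside the X session:
#   xrandr --newmode $modeline
#   xrandr --addmode DVI-D-0 "$name"
#   xrandr --output DVI-D-0 --mode "$name"
```

No idea yet whether radeonSI will accept a 452 MHz pixel clock over dual-link DVI, which is the whole experiment.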
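For point 4, the desktop-file workaround looks roughly like this - a sketch with placeholder names and script path, writing into a demo directory instead of the real ~/.config/autostart so it is harmless to run as-is:

```shell
# Sketch of wrapping a shell script in a .desktop autostart entry.
# The real target directory is ~/.config/autostart; the script path and
# names here are placeholders.
dir=./autostart-demo
mkdir -p "$dir"
cat > "$dir/my-startup-script.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=My startup script
Exec=/home/me/bin/my-startup-script.sh
EOF
cat "$dir/my-startup-script.desktop"
```

One of these per shell script, which is exactly the busywork plain autostart scripts used to avoid.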
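And the point 5 symlink, spelled out - demoed against a local directory so it is safe to run anywhere; on my system the real source lives in /usr/share/applications and the symlink goes in ~/.local/share/applications:

```shell
# Demo of the ksysguard.desktop symlink workaround using a local directory.
# For real:
#   ln -s /usr/share/applications/org.kde.ksysguard.desktop \
#         ~/.local/share/applications/ksysguard.desktop
mkdir -p ./apps
touch ./apps/org.kde.ksysguard.desktop      # stand-in for the renamed file
ln -sf org.kde.ksysguard.desktop ./apps/ksysguard.desktop
ls -l ./apps/ksysguard.desktop
```

Then the meta-g shortcut gets pointed at the symlinked desktop file.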
So enough Negative Nancy - what's good about this release?
  1. QML everywhere is still great. CPU usage on most things is lower, animations are pretty, Breeze is gorgeous, and the new system tray is fantastic. These all are pretty much carryovers from 5.2, though.
  2. We are finally getting plasmoids, even if many are still broken. Plasmoids are probably one of the greatest features of KDE; it would be nice if they were treated as more first-class in the ecosystem, to make them easier to develop and package. I intend to try making a bitcoin plasmoid just for funsies today, for example.
  3. All the fixes I mentioned above, but I'll restate the big ones just for posterity - bluetooth works! Multiple monitors work! Modesetting works! Power management works! That cool feature where the system won't sleep on lid-close with an external display attached works and is great!
So despite the criticism, it's still the best desktop environment out there. And it's now better than KDE4, else I wouldn't have switched over already. Here's hoping the future is only brighter for KDE. We had the difficult UI transition from 3 to 4, and now the internals transition from 4 to 5, so hopefully the model in place today - GPU-accelerated QML, glorious Breeze styling, and huge configurability with the ability to effectively imitate any other workflow - can last a long time.

I'll add to this list over time as I bump into more issues. They are definitely out there!

2015/01/30

Software Rants 19: Plasma 5.2 Problems


So the KDE collection 5.2 just came out recently, including Kwin 5.2, a kcm to configure displays, Plasma 5.2, etc.

And after two days and endless problems I have to give up and go back to KDE4, again. So here is why:

  1. Performance. The moment-to-moment fluidity of Plasma is great now, but a lot of other aspects have hugely degraded in the transition, and a large part of it is probably the fact that QtQuick still will not spawn a render thread on Mesa drivers. You know, the drivers a significant portion of the Linux desktop uses. Because of that, opening any program via the panels locks the desktop up for a few milliseconds. It is so distracting and so adolescent a problem that it makes the whole experience feel like a joke. You get sort of the same issue on KDE4 opening KDE5 apps (Arch has gotten rid of the KDE4 konsole and kate, so you open the Frameworks 5 versions now), where it has to load all the shared libraries in addition to the program itself. But that is a good thing in a way - it shows off how much of a benefit shared libraries are, because under Plasma 5 those same apps open snappy and quick.
  2. Multiple monitors. The real nail in the coffin for running Plasma 5.2 is that it plain DOES NOT WORK with three screens on my system. I have an R9 290 on Mesa 10.4 with four displays running off of it - an ancient 1440x900 75hz Dell monitor on the left on DVI, a 144hz 24" 1080p "gaming" monitor in the middle on dual-link DVI, a 21" IPS 1080p panel on the right for coding and reading, running over a DP-to-HDMI cable, and a 35' HDMI line through the wall to the living room TV to play movies or screencast with. And if I try to even start kwin with all these monitors attached, arranged or not, it simply crashes, every time. I can get it to work with two monitors, but not three or four. It seems to take offense at the HDMI and DP screens. Regardless, I cannot use a desktop where all my monitors do not work.
  3. An addendum to the monitors - the new display configuration KCM is complete garbage. It has a single option, refresh rate, under "advanced", which is just a waste of screen space and makes it seem half-assed. It gives no indication of whether it is preserving its settings, and - surprise - it does not save a whole plethora of them, or at least does not remember them in the GUI: if you explicitly set refresh rates, resolutions, orientations, or positioning, it will just randomly reset these fields to defaults whenever you apply changes. The drag-and-drop snapping of monitors in the UI is great - that is real progress and the best part. But everything else is lackluster and horrible and needs fixing, and I hope to find the time to try it myself.
  4. Plasma Shell crashes all the time. This is on Qt5.4, on kernel 3.18, on Mesa 10.4. Newest everything, so any bugfixes are here. Dragging panels, resizing widgets, opening programs, and just clicking the application launcher too fast crashes it. In my experience QtQuick2 is super solid and hard to crash, so I have no idea what unholy sorcery the shell is trying to do to it to make it so unstable, but it is not acceptable in what is now shipping in a mainline Kubuntu release.
  5. Kmix is complete garbage and needs a huge overhaul. The volume bar is neat and works when you hit the keys, and gives a nice feedback sound. That works. Nothing else does. The UI is actually broken on Plasma 5 - on KDE4 with the Breeze theme the slider bars at least render correctly, although Veromix demonstrates a hugely superior interface for that. The control panel is also nice, and Veromix has nothing like it. But having this ugly-ass QWidgets slider amidst gorgeous clipboard and network management tray icons is a huge blemish on the experience. Again, if I had the time and skill to go into KMix's sources and give it a nice QML interface with nice animations, drag-and-drop sinks, and quick attach effects, I would.
  6. The kickoff plasmoid rewrite broke it somewhere: you can just lose the side buttons completely. I think adding favorites causes it. I could consistently reproduce it just by searching a few programs, adding some favorites, and opening and closing it a few times. For the stock launcher to literally lose the programs list at random, for most users, is a huge oversight.
  7. The digital clock plasmoid has taken a huge step backward. I like the idea of basing all the system-wide measurements and metrics off locales, but doing so means I'm stuck with the abhorrent month / day / year date everywhere, which I do not want. But I have no choice! I can't tell which other country uses ISO dates, if any, and I have no way to configure my own locale settings anymore. I guess this is more a KCM issue. However, the digital clock still has completely broken NTP support in however it plans to sync time, so while I would love to stop mucking around with ntpd conf files by just setting it to the NA NTP server in the GUI, that doesn't happen. At least it isn't making inactive root password dialogs behind the window anymore. There are more downsides, though - they removed the ability to set the font on the clock, and you can't put it in 24-hour mode anymore. I don't want AM / PM wasting space on my panel's width; I know military time, so let me use it. It should be an option right under "show seconds".
  8. System settings got a bit better, but the UI is still hugely painful. It should have adopted the layout of kmail's settings module - tab groups on the left, the group's tabs on top - so that you don't have these huge UI clashes where you go three levels deep into a menu structure to find some setting. It would also solve the problem of display configuration and window management sitting on completely opposite ends of the listing.
  9. Where the hell is Breeze GTK? GTK apps now look like burrito diarrhea with oxygen-gtk.
  10. Muon feels like a children's toy. I'm not sure how the UI was designed, but it seems to be all custom QML without using controls at all, which makes all the buttons and frames stand out as wrong. Its UI could also use a bit of mobile-ization - since it's a software store, it really should be a Tonka toy in how you use it. And little things, like being able to zoom in on broken images for apps, make it feel unprofessional. It is so much better than everything else we have that I have to root for it, and it still beats out the Ubuntu app store, but it could do so much more.
  11. Bluedevil 5 always starts with bluetooth off. This is incredibly annoying when using a bluetooth mouse. There is no obvious setting anywhere to make it... turn on.. when you turn it on? And I have no idea who thought it would be a good idea to start your bluetooth program and NOT TURN ON BLUETOOTH.
  12. IconTasks got rid of its delayed tooltip option. This is another thing I want back desperately, because now it is impossible to use the context menus of system tray items because they have tooltips immediately popping up. This is an issue with kmix and bluedevil.
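On the date format gripe in point 7: as far as I know, the usual trick for ISO dates without switching country is the en_DK locale - "English in Denmark", which happens to format dates as ISO 8601 (assuming en_DK.UTF-8 has actually been generated on the system). And `%F` in date format strings is ISO regardless of locale:

```shell
# en_DK.UTF-8 is the conventional "English text, ISO 8601 dates" locale
# (assumes it is uncommented in /etc/locale.gen and generated):
LC_TIME=en_DK.UTF-8 date +%x
# %F prints ISO 8601 regardless of locale (GNU date):
date -d "2015-01-30" +%F
```

None of which helps the clock plasmoid until the locale KCM comes back, but it at least fixes terminals and scripts.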
I'm still switching as soon as I can actually use my monitors on Kwin5. Now I have to dig through a thousand IRC channels, mailing lists, and bug trackers to try to find out if anyone else has reported it, and what I have to do to get upstream aware of the issue.

I guess I should mention the good while I'm here:

  • Breeze. Such a gorgeous theme in both light and dark. Everything about Breeze is glorious and makes me so happy KDE looks fantastic again. If anything it takes the sensibilities that Material Design approaches but does not go batshit crazy like Google did. My only complaint is that stain of a one-pixel-wide white bar on Breeze Dark when the panel is oriented vertically. That really needs to be removed. The icons are great; hell, even the Oxygen fonts are nice.
  • QtQuick2. When the shell is not freezing up due to not having a render thread, it is butter smooth, uses about a quarter fewer resources in total than KDE4, and looks great with the new notifications center. The icons on panels scale very fluidly now where they didn't in KDE4. The only thing missing is a Plasmoid grid - having the ability to arbitrarily position plasmoids is great, but you should also be able to orient them into a block grid that snaps them in place, so you need less finesse to make your desktop look nice. The grid does not need to be overly large - I could imagine a standard sized notes plasmoid taking up a 4x4 or 8x8 region.
  • Frameworks are a huge improvement over the monolithic kdelibs. Fundamentally KDE5 for everyone but the Plasma Designers was cleaning up technical debt, and I think it will pay off hugely in the coming years. The technologies underlying this cycle are top notch and hopefully these roadblocks don't last very long. I was not around for the KDE4 release debacle, but it seems to be less bad this time.
I'm still really worried about Kubuntu 15.04. I really want to put everyone I know on it - Muon is good, systemd is necessary, Plasma 5 should be great - but all these issues make me feel that the time is still not here. It might be Qt 5.5 or 5.6 and Plasma 5.3 or 5.4 before this stuff is resolved and I can actually start using it, and that will be just in time for Wayland to break everything again. I almost wish that Kubuntu and everyone else were pushing for a configuration with Weston as the system compositor and Kwin as a local one until Kwin can take over Weston's job, rather than continuing to use X11 and thus running into another wall of bugs when everyone starts beating up Kwin-wayland.

PS: A lot of these are turning out to be Arch-only problems, at least in my experience. Kubuntu 15.04 on a client's desktop on radeonSI is running pretty much flawlessly: no menu breakage, bluetooth turns on by default, and kmix inherits the theming from the system properly. The only difference is the GTK apps aren't getting Oxygen styled - they are getting the GTK fallback theme.

I really do think all these kinks are just another release cycle away from resolution, though, so hopefully four months from now 5.3 can pull me in fully.

2014/09/12

Project Ouroboros

So I just made a random name for this post to reflect the content, which is also mostly random.

I have a few posts about MMO concepts already, but I want to get to the meat of that series (mainly because I have no motivation to be thorough with it without having an end game in mind).

I think MMOs, in general, are a great pastime. The whole point is to give someone a world to engross themselves in, to trade time for enjoyment regardless of the allotment. But the problem with most major games is that they are game-y - they take the fundamental shift WoW made in the industry and assume that is the future, ignoring the functional base upon which MMOs were founded: that they should be an engagement with a world, not an engagement with a dungeon or boss or quest or pvp zone.

So what I am interested in is comprehensive engagement - suspending one's disbelief, even at a superficial level without engaging in the world's actual lore, and just living in a virtual reality for a while. Modern titles fail entirely to achieve this - from SWTOR to Rift to GW2 to Neverwinter, the gamification of the world means you can never just live in it; you always have an immediate goal in an endless procession of objectives to accomplish, as if the world were broken down into stages in Mario. And in effect it always is - zones are level appropriate, quest chains are linear, and the developers want to make sure you see everything neat they made without getting lost, so the objectives are straightforward and marked on your map.

That defeats the point, though. The point is that you should get lost. You should be confused, because reality is confusing. The game world should not be a straightforward playground of colorful events that attract players to ride the rides and then leave. You should find caves at random, with show-not-tell stories built in that have no outward indication of adventure - you find them for yourself. And they should not always have great loot - maybe you just find an empty cave with a strange carving on the wall that will drive forum communities mad for years.

If your design is around function and form, you miss the magic. It is the difference between Morrowind and Skyrim - between Baldur's Gate and Neverwinter Nights. In the former titles, you are lost; there is no real direction on where to go, you just figure it out as you play along. But even in the absence of a tangible goal the game remains fun, because you are engrossed in the world. You don't need to be saving the world for an experience to be enjoyable - it just has to be fun in and of itself.

I used those examples on purpose, because the latter games are still considered fantastic. You can have a game where everything is laid out for you, linear or otherwise directed, and have a fantastic time. But I think most of my target demographic (which I know exists, because I see this audience all the time in myself and my friends) find the magic of the former to be on a level of its own that few games can touch. It is why you constantly go back and replay, and still often recapture that sense unless you have played these games to death, because the world is not arranged to play through on a directed path - it is willy nilly, and fantastic for it.

I want to see more MMOs like that. So Project Ouroboros (working title) is my take on how you would achieve this while having a game your friends could play.

Firstly, the model of this idea is novel - development starts with whatever base engine... It would require weeks of planning in and of itself to figure out the best course of action here, but I will present a few options. First, a novel engine built from the ground up for this game, fully open source and community driven, built on some modern tech stack. Maybe you use QML for the UI and Ogre for the rendering engine; that sounds like fun. Time consuming, though, and achieving it would require external funding sources I am not fond of. Option two is to use an open engine - the most common I can think of is Godot - but that one is not suited to a 3D MMO at all. You could use the staple Unity or some such, but the proprietary nature of it (and most other commercial engines) defeats what I'm getting into further on.

Regardless, once you have an engine of some sort, with a proper development toolchain (released to the public so users can create their own environments, items, etc) you build the first zone. Probably the human race one, since it is a classic "default". And that is it, you ship. I envision a title where the "level" cap is around 20, so this zone could constitute levels 1 to 4 or 5. But anyone playing the game at this point - the 0.1 point - would hit max level and have not much else to do. So that is where the revenue model comes in.

The project would require a custom website where users can pledge, à la Kickstarter or Patreon, to whatever idea they want to see implemented most. Whatever has the most money behind it is what gets worked on, so the game grows organically, dictated by what players are willing to pay for. If the first thing they want is large group raid content, they get that at level 5. If they want more questing zones, we can build those for years. If they want new races, those get made. Anyone can submit proposals, though user proposals require a minimum pledge contribution toward seeing it happen. Additionally, anything the official development team does not take on, the users themselves can build and seek pledges for. The difference is that the parent company of this project, whatever I would call it, would be able to claim a proposal to let the community know not to work on it. Community works cannot lock proposals in such an absolute way, because you cannot guarantee they will be finished; instead, users can mark projects they intend to create as such, but those claims come with a grain of salt.

The design of this proposition system is key to the entire game's success. It needs a great integrated discussion system, project management tools for the user and developer creations, the ability to get feedback and iterate rapidly, and the ability to beta test everything with one-click integration into the game client.
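A minimal sketch of how that pledge-ranking could work. This is purely illustrative - every name here (`Proposal`, `next_official_project`, the proposal titles) is hypothetical, not an actual design commitment:

```python
# Hypothetical sketch: proposals ranked by total pledged money.
# The highest-funded proposal claimed by the official team is worked on next;
# unclaimed proposals stay open for community developers to build.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    official: bool                                # claimed by the parent company?
    pledges: dict = field(default_factory=dict)   # backer -> pledged amount

    def total(self) -> float:
        return sum(self.pledges.values())

def next_official_project(proposals):
    """Pick the highest-funded proposal claimed by the official team."""
    claimed = [p for p in proposals if p.official]
    return max(claimed, key=Proposal.total, default=None)

proposals = [
    Proposal("Level 5 raid", official=True, pledges={"a": 500, "b": 250}),
    Proposal("New questing zone", official=True, pledges={"c": 400}),
    Proposal("Player housing", official=False, pledges={"d": 900}),
]
best = next_official_project(proposals)
print(best.title)  # the raid, at 750 total pledged
```

The point of the model is that the ranking function is trivial; the hard part, as noted above, is everything around it - discussion, iteration, and beta testing.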

The revenue model is then that the main development team, servers, Q/A, etc. are all funded by whatever the users pay us to add to the game. Thus there are no microtransactions and no subscriptions, and if the game is a success the revenue should be huge - but it is only ever a reflection of the community's desire to see new content added. It is a self perpetuating cycle. Hence the project name.

Mechanically, a problem with modern MMOs (and I take this from ZybekTV on YouTube) is that a game like Wildstar is very demanding to play. Because it blends action gameplay and 40 man raiding, it requires a higher degree of constant vigilance in your playstyle while raiding for hours on end. It is a very taxing concept that might be the downfall of that title - the demands of the gameplay, while fun, are so exhausting that the playerbase cannot commit the effort necessary to complete the hardest of challenges. Which is a shame, but I would have forecast that outcome in advance. It is why, I feel, WoW remains so popular - and why its popularity has only diminished as the game has gotten more complicated.

I am not saying you should have a dumb as bricks gameplay experience - that will alienate whatever elite players you might have, and those players are essential to maintaining a vibrant community. But you need pacing - you need to let players play at those hyper-high skill-capped levels when appropriate, while falling back to simpler mechanics when the best is not needed, or not wanted. At the least, every player class (note: see further down for details on the distinction between a class and a spec) would have some specs that are heavily skill dependent, leading to the highest possible performances, and some that are simplified or less skillful, that do their job but do not have as much performance variance. This can even happen inside a spec - maybe you have two finishing moves with a combo system, one that requires a skillshot and one that applies its effect targeted. The skillshot does more damage and might have an added effect, but you trade that for the reality that you can miss, and that you have to take the time to aim. The mechanics are successful if that tradeoff is worth it to the high skill, high engagement player, while the less skilled or less engaged player can get by with option B without feeling like they are intentionally playing badly.
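That tradeoff can be sanity-checked with expected values. All the numbers below are invented for illustration - the design goal is just that the skillshot wins for a player who lands most of their shots and loses for one who does not:

```python
# Illustrative expected-damage comparison between a skillshot finisher
# (more damage, can miss) and a targeted finisher (less damage, always lands).
# Every number here is made up for the example.
def expected_damage(base: float, hit_chance: float) -> float:
    return base * hit_chance

skillshot = expected_damage(base=1000, hit_chance=0.8)   # skilled player
targeted  = expected_damage(base=700,  hit_chance=1.0)   # always lands

# Worth it for the player who lands 80% of shots...
assert skillshot > targeted                               # 800 > 700
# ...but at a 50% hit rate the simpler option is the better choice.
assert expected_damage(1000, 0.5) < targeted              # 500 < 700
```

Tuning is then a matter of where you put the break-even hit rate: here it is 70%, which decides how skilled a player has to be before option A pays off.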

The class system, I feel, is also the optimization of modern class mechanics seen in MMOs today. You start the game classless - for engagement's sake, you are a potato sack peasant (normally) who, depending on starting experience, picks their starting class and spec according to a need or want. You could legitimately stay level 0, running around flinging your fists at fence posts like an untrained brute as long as you want. But eventually you find your way into an armor proficiency - a class - and you get the default spec of that class, which would be more reflective of its traditional role.

The four armor types would be cloth, leather, mail, and plate. The four starting specs would be wizard, mercenary, cleric, and warrior respectively. Each would have simplified mechanics, and they would all be jack of all trades specs - all somewhat durable, somewhat damaging, with some aoe, some cc, some sustainability. Not high skill specs, but with a few opportunities for skillful play. As you level up you unlock additional specs (funded by demand for them) through quests or just leveling up, running the gamut of functionality that armor type might provide.

Which is an important distinction - you are really limited by your armor class more than by a specific role or playstyle. I can easily imagine all the classes getting access to specs that can perform all the functions of the traditional trinity, meaning you never need to be without a tank - not because the mechanical benefits of specialization are lost, or the interplay between roles is absent, but because any class has access to a tank spec. This is a compromise - you don't want to give all player characters access to all specs and functions, because then you have no progression attachment to your character; you would be going from a plate wearing hulk to a fireball flinging wizard in a robe, which makes no sense. But having the means by which a wizard could transition into a warlock, or a cleric could become a druid, makes more sense. As such, there are four kinds of armor drops - one for each armor type - but that armor is mostly cross-spec applicable. Different stats would be better for different specs, so your elementalist wizard gear might not be optimal for a hellfire warlock, but you would use the same primary stats and benefit somewhat from all your secondaries.

The mechanics to change spec would be a little more involved though. You would want commitment to a playstyle, and unlike most MMOs the customization within a spec would be limited. There should be a sense of commitment to your mercenary being an assassin rather than an archer, and changing between them would take some resources and time, especially to unlock it the first time.

Again, this is a compromise - you do not want players spec swapping every dungeon out of optimality concerns, but you do want players to be able to switch to a tank role when the guild's raid is missing one. I imagine you could switch specs in one of two ways - a class quest, or a lump of gold. The latter is for when it is really necessary for group play; the former is for when you just want to change things around.

Each class would, over time, accumulate a lot of specs to choose from - some would be given, some would require quest chains, and some might even be secret. They are effectively kits: you have base class spells common to all specs that define the class (i.e., all cloth wearers have some fundamental arcane magic talent, all mercenaries are skilled with locks, traps, and diplomacy, all clerics have a bond to some deity, all warriors are military trained), but you add on top of those with a spec, which can range from just a shift in spells to a radical change in playstyle.
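One way to model that kit idea - a character's spellbook as the union of the class's base kit and the active spec's kit. Everything here (class names, spec names, spell names) is hypothetical flavor, just to make the structure concrete:

```python
# Hypothetical data model: the spellbook is base class kit + active spec kit.
# Swapping specs swaps only the spec kit; the base kit always stays.
CLASS_BASE = {
    "wizard":    {"arcane bolt", "blink"},
    "mercenary": {"pick lock", "disarm trap"},
}

SPECS = {
    "wizard": {
        "elementalist": {"fireball", "frost nova"},
        "timelord":     {"sand toss", "rewind"},
    },
}

def spellbook(cls: str, spec: str) -> set:
    """Union of the class's base spells and the chosen spec's spells."""
    return CLASS_BASE[cls] | SPECS[cls][spec]

book = spellbook("wizard", "elementalist")
# Base spells persist across specs; only spec spells change.
assert "arcane bolt" in book and "fireball" in book
assert "sand toss" not in book   # that's the timelord's toy
```

The design point is that spec changes are cheap to represent: no talent tree diffing, just swapping one set for another on top of a stable base.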

This diversity is the true specialization of this game - rather than micromanaged talent points or skill trees you read a wiki page on and then just take the optimal route, you get the skills and spells of a role, and it is your choice of spec that dictates your distinction amongst your peers. That, and your blinging armor.

Appearances are important. TF2 is one of the only games to get this right. I would want to see every race added have a distinct silhouette for each armor class - the warriors would be buff, the clerics would be tall, the mercenaries could be short, the wizards lanky. Depending on race, of course. 

This provides a multitude of benefits that fix a lot of the problems I perceive in class based games today. Developers will add dozens of classes, which means each character is invested in but severely limited to its few specs. Instead, I want that turned on its head: few classes, but each one with many specs, so that you can pick a fundamental playstyle you like - the swashbuckling stealthy charismatic rogue, the master of the arcane, the divine, or the weapon slinging plate clad titan - and explore the diverse interpretations of each of these archetypes through all manner of specializations. And different specs would demand different degrees of skill - for example, maybe the mage has a Timelord spec that requires juggling a bunch of disparate resources combined with everything being skillshots, maybe to such a degree that you are juggling a ball of sand around that needs to contact enemies from certain directions to combo into certain effects. It might be the highest theoretical single target spec in the class, but it would require huge dedication and skill to play to its potential, and maybe it has a compounding buff effect where each differentiating combo of ball tossing multiplies the effects of the other combos, meaning that if you cannot maintain your juggle your performance is magnitudes worse.

That kind of high risk gameplay would be great for the veteran or pro who wants to be a true badass - and to play with one would be a marvel to behold. But there, you see the difference between a lord of time who manipulates the field to annihilate everything and some goon with a ball of mud that does about as much damage as a wet mud ball to the face would do. Likewise, you could be the elementalist who throws out fireballs with targeted lock on. Maybe the highest performing one causes his spells to explode at the right point to hit the most targets and keep the most victims burning, but even the noob would still be lobbing fire grenades into the boss's face that burn everyone around him. The performance difference would be much less pronounced, but for this to be a success there must be a balance - it needs to be worth it to play the time wizard perfectly if you can, but it also needs to be worth it to bring an elementalist mage, because there are very few people who can play a perfect time wizard, especially all the time. And even then, the time mage is not the optimal solution in every situation - maybe it is only best on single mob or many mob fights, but if you have two high value targets the buff effects only apply to one, while the fire mage can stick burning effects on a few targets really well. That would mean a fire mage at its best outperforms the best time wizard under certain circumstances. It is a balancing act, but a rapidly rolling game like this could easily tweak numbers to the fraction to ensure the balance is maintained.

Back to how to pull this idea off. So you have your website, your engine, and you have a starting zone - it should encompass my above points, in that while you can go down a nice directed main story through the zone reaching a lot of areas, there should be chunks of the map left unexplored when you reach the end of the main quest, and you should have a motivation to wonder "what the hell is over there?".

There might be a few empty caves of nothing, and maybe a tower full of level 10 skeletons that rip you apart unless you can amass a forty man group of max level (5) players to fight them. Maybe they do not drop anything - maybe they have a really low chance of dropping fantastic items or reagents.

Maybe there is a hut with a book on a shelf that talks about the owner's hidden cellar in the swamp. You go to the exact location the book details and find a tiny interactable rock on the floor you would never have seen in passing, which opens an underground cavern containing monsters and unique items. Maybe a secret bookcase in a noble's house just leads to a shoe closet, after you thought you were so clever.

Likewise, you want a living world. Monsters would not just wander around from spawn locations. If an area has reason to contain monsters, it would contain them, and if it did not... it would not. The open fields should be just that - open - maybe some harvestable herbs, maybe a roaming band of bandits on occasion, but nothing just spawning out of thin air to walk in a square area. Think Skyrim, and how much of the world is empty, but where it makes sense to have actors, they are there. If there is a village of gnolls, the gnolls respawn from their huts after a set time. And if you attack one, the entire village comes after you, so you need to actually try to pick them off slowly without drawing the attention of the whole horde - screw mechanics where aggro radii are so small you could throw a stone between the mob you just lopped the head off of and his friend who does not care what just happened in front of his face.
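That village-wide response could be modeled by propagating alerts between nearby mobs instead of fixed per-mob aggro radii. A toy sketch of the idea (the "earshot" cascade is my own illustrative mechanism, not a spec from the post):

```python
# Toy sketch: attacking one gnoll alerts every gnoll within earshot,
# and the alert cascades through the village (a simple BFS), so a tight
# cluster aggros as a group while a distant straggler stays oblivious.
from collections import deque

def alerted_mobs(positions, attacked: int, earshot: float = 10.0):
    """Return indices of all mobs aggroed once `attacked` is hit."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    aggro = {attacked}
    queue = deque([attacked])
    while queue:
        cur = queue.popleft()
        for i, pos in enumerate(positions):
            if i not in aggro and dist(positions[cur], pos) <= earshot:
                aggro.add(i)
                queue.append(i)
    return aggro

village = [(0, 0), (5, 0), (12, 0), (40, 40)]   # last gnoll is far away
# Mob 2 is out of earshot of mob 0, but the alert chains 0 -> 1 -> 2.
assert alerted_mobs(village, attacked=0) == {0, 1, 2}
```

The cascade is what kills the throw-a-stone-between-them absurdity: proximity to any already-alerted mob is enough, so picking a village apart means pulling from the true edges.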

Player engagement is also important, and you do not want others in your world to be your antagonists. Quest objectives would naturally be completable by out of group players attacking the same targets or interacting with the same objects. You want there to be an incentive to help others, so maybe attacking a mob another player already tagged nets you more gold drops than usual, and the loot is player independent.

But that is all fluff and detail. The point is that you want a world not designed to be gamed, but designed to be lived. Where NPCs walk around town and most of them want nothing to do with you. Where the blacksmith might get mad and refuse to sell anything to you if you keep trying to pawn off rat ears to him as junk instead of going to a shady guy in a back alley who might just try to murder you if you are too low level.

It would be complicated, but I think quality over quantity can really win here. Take the time to make something truly fantastic and ask for money to create more, not unnaturally gate what you already have and try to produce as much as possible to sell more units. If it is good enough, and I think it could be, the runaway effect would be that players would be throwing money at amazing ideas to such huge degrees that you could forget the money at some point. The community contributions would give you the ultimate hiring pool to draw new developers from, and you could naturally grow the business as popularity and funding grow in tandem.

And I would never want to obsolete content - if we made a raid at level four, and then increased the level cap by adding higher level zones, I'd want to see there be a good enough reason to form raid groups at low levels to run that old content, or to see that content scale. It is the greatest shame in most MMOs today that the real game is just whatever is the latest raid.

Oh, and one last mention - gear. Gear should be hard. You should start in rags, and you should feel amazing when you get a cloth hood that looks like a paper bag. If you see someone in blinged out armor, you know they did not just get a random drop - they gathered reagents, paid huge sums of money, spent huge amounts of time in professions to craft it, and put in effort beyond just time, luck, or money to see it come to fruition. And the dungeons should be hard, and not accessible by all. Because the greatest feeling in an MMO is not being that decked out mage with transdimensional rifts in their shoulders - it is seeing them walk down the street and realizing how much effort they put in to get there. And to think that you want to put in that effort too, and accomplish greatness like they did. The whole plate. To become great requires great commitments and deeds, something all these other titles forget because they want to theme park their way to huge subscriber bases.

2014/09/03

Hardware Rants 2: Electric Notebookaru

I'm just dropping a forum post on HN here because I repeat it so often I'd rather just link to a blog at this point.

The subject always revolves around "What is the best Linux notebook?".


The right notebook for Linux is a Linux notebook. Vendors of such machines are still boutique, but unlike in the past they actually exist - a few are ZaReason, ThinkPenguin, and System76. I personally have an off brand Clevo 740 SU (same model as the System76 Galago) because I had confidence in the notebook's ability, and I got it OS-less since I'd have to install Arch anyway.

But recommending all these Samsung and Lenovo Windows machines is a disservice to everyone. It is a disservice to the Linux ecosystem because you cannot guarantee hardware support, and using a non-Windows OS will void your warranty most of the time (at least if they catch you with it). It means you get a bad impression of the experience due to any bugs or glitches you encounter, and rather than acknowledge they crossed into the land of dragons and took the risk, many end up blaming the ecosystem - which cannot realistically provide hardware support for parts the vendors do not support in kernel themselves, especially when vendors are actively hostile to attempts at implementing it (video drivers - Android ones are really bad right now, and Nouveau had to wage a tangible uphill battle to get where it is today - but plenty of wifi cards, PCI cards, etc. have no vendor support either).

It also means you are paying your MS tax, getting a Windows license you intend not to use. I'm not even going to argue the price aspect, because we know it really does not matter - if Linux were ever a threat (and really, it already is, with Microsoft talking about version 9 possibly being free or ultra low cost under the looming threat of Android) they would just give away licenses, and that combined with bloatware contracts would provide the vendors more revenue than just shipping Ubuntu (or whatever distro).

What I care about is the message. Every Linux Thinkpad fanboy is one bullet point in the Lenovo board meeting affirming the need to never ship a non-Windows notebook (except in countries like Germany that actually force them to). It sends the message: "Go ahead, bill me for a Windows license I'll never use, and make me fight the hardware to make it work - I will still buy your stuff, because having a pleasant, straightforward, painless Linux experience is not one of my priorities."
It also obscures how large the market segment is, because to these vendors every machine sold is a mark that Windows is still king. If they do not see retailers selling real Linux machines (including the Dell one), they have absolutely no reason to ever fathom selling Linux native machines themselves.

And that hurts you, because that means there is less adoption, fewer options, and less pressure towards more widespread use of the platform. And for whatever your reason, you want to use Linux right now, and buying these Windows machines denies others from having the chance to even know it exists, and supports the continued monopoly Microsoft has on the personal computing industry (and Apple is not even on that radar).

So please, when you are looking for a new notebook with the intent to run a Linux distro on it, give some consideration to the vendors actually selling Linux machines, with support, as first class citizens. If you cannot find one that meets your needs, so be it, but don't go out of your way to buy a Windows notebook and hope it can run a Linux distro flawlessly.

Original posting was here.

2014/08/17

Q3 2014 Ideas Summary

Approaching what is likely to be a necessary hiring season, I'm going to collect my ideas for startups in a nice, concise format for the sake of pitching. I believe I have written articles on these topics in the past, so they are also more comprehensive references. I'm ranking these by a combination of feasibility, practicality, and profitability - consider the global maxima convergence of all three to be the ranking criteria.

1. Distributed Patronage Funded Media

Ranked first because it is a straightforward profit center, and I can easily plan out the technical details. The only complexity is getting initial users - I believe in the idea enough that given a starter userbase it would explode. Getting the starters would be problematic.

First stage is a QML based device-pervasive application that enables the distribution and consumption of creative content. For utility's sake, we would want to integrate as many third party services as possible (YouTube embeds, Flickr / Imgur integration, etc), but the core competency is that all users would passively use torrents in the background to seed and leech this content. This alleviates the need for huge amounts of money to start up with petabytes of storage and network bandwidth to do a media startup. It also means you never run into the modern YouTube problem - where you accumulate a petabyte of data an hour, and your datacenters grow out of control to contain it all.
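The torrent mechanics underpinning that are standard: content is split into fixed-size pieces, each piece is hashed, and peers verify every downloaded piece against the published hashes before re-seeding it, so no single seeder has to be trusted. A minimal illustration of that piece-hash scheme, not tied to any real torrent library:

```python
# Minimal illustration of BitTorrent-style piece hashing: split content
# into fixed-size pieces and hash each one, so peers can verify chunks
# received from untrusted seeders before passing them on.
import hashlib

PIECE_SIZE = 16  # real torrents use 16 KiB and up; tiny here for the demo

def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE):
    """Hash each fixed-size piece of the content (the 'published' manifest)."""
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    return [hashlib.sha1(p).hexdigest() for p in pieces]

def verify_piece(piece: bytes, expected_hash: str) -> bool:
    """A leecher checks a received piece against the published hash."""
    return hashlib.sha1(piece).hexdigest() == expected_hash

published = piece_hashes(b"some creative-commons licensed video data")
# A peer verifies each piece it downloads before seeding it onward.
assert verify_piece(b"some creative-co", published[0])
assert not verify_piece(b"tampered piece!!", published[0])
```

This is why the swarm model works for a media startup: the platform only has to publish and index small hash manifests, while the users' machines carry the actual bytes.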

Second stage is a webview to consume and upload content into this swarm. This requires enough money to subsidize it though, since there is no way to implement torrent based technologies in a browser, so you need to severely cripple the bandwidth of web users to keep costs under control. But it is functionally mandatory to have a website version of this project just to compete with the entrenched parties involved.

The primary revenue source is integrated patronage. The system takes 5% (subject to change) of all donations to content creators, and the layout is such that users are presented constantly with ways to fund the creators they like. If an advertising company approached us down the line on favorable terms, I see no reason not to let users choose to put ads in their videos and on their pages, but the default will be none.
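The split itself is trivial arithmetic, sketched here only to make the model concrete, using the 5% figure from above (which, as noted, is subject to change):

```python
# The platform keeps 5% of each donation (figure from the post, subject
# to change); the remainder goes to the content creator.
PLATFORM_CUT = 0.05

def split_donation(amount_cents: int) -> tuple:
    """Return (platform_cents, creator_cents), working in integer cents."""
    platform = round(amount_cents * PLATFORM_CUT)
    return platform, amount_cents - platform

assert split_donation(1000) == (50, 950)   # $10 donation -> $0.50 / $9.50
```

Working in integer cents and deriving the creator's share by subtraction guarantees the two sides always sum to the original donation, with no rounding leakage.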

To avoid the copyright fiasco, the terms of use are simply that any uploaded content must be relicensed under a Creative Commons license - specifically CC Attribution-ShareAlike - and the uploader must have relicensing rights; i.e., they are recognized IP owners. This sidesteps the copyright morass and guarantees all content on this project is copyleft, so anyone can reuse it on their own. Anyone who cannot reassign copyright to that license gets their content taken down.

The long term feasibility of this project hinges on adoption. I feel that a properly guided patronage system is revolutionary, to such a degree if I could get this project to take off it could approach YouTube and Facebook scale in a matter of years. If it grows fast enough, more centralization to spur adoption is always an option - no matter what, I want the distributed backend to stay an option because it decentralizes backing up the content swarm. I also expect that in the same way popular torrents see thousands of seeders, a legitimate option that enables seeding of liberated content would be huge.

To get that adoption, I imagine outreach is necessary to content creators to try it, and to get some short term exclusivity deals if possible. It is essential to get really good media works to start interest in the platform, or else it would wallow in obscurity on the edges of the Internet.

2. Continuously crowdfunded persistent world

This project is a combination of two concepts from recent years - patronage and game development - in a way I feel is much more mutually beneficial than the current progression towards super-alpha releases. Today, while many media categories are diversifying into Patreon, Subbable, et al, gaming remains rooted in Kickstarter campaigns that go radically over budget.

Well, the reason is obvious - they cannot properly budget how much a game will cost to develop when they are forecasting years of development time. While I do not see an easy way to combine patronage with fixed point release titles, a massively multiplayer game is a much different proposition.

I have drafts of a novel fantasy world that draws on Lovecraftian themes instead of Tolkienesque concepts. Quake was one of my favorite video games ever for its theming, and I think that atmosphere is dearly lacking in more modern titles. The most important distinction is the business model - this game would be entirely funded by patronage, with new content determined by what players want to pay for most. New end game content? If it gets funded first. More questing zones? Fund them. So on and so forth. It would also, as per my other projects, be completely free and open source under the GPL and CC-A-SA licenses for code and assets respectively.

This project is much grander than the former, but is poised to make significantly better margins. It basically has three components:

  • The website, providing all the traditional gamer features - character profiles, item and monster databases, guides, forums, etc. - plus the donation pages to fund content development, and infrastructure for users to upload, discuss, vote on, and patch their own in-engine creations, along with means to easily download and try usermade content.
  • The tools to build game assets, which (depending on the usability of existing tools) need to be usable by end users to produce content.
  • The game itself, which I would love to try building with Qt3D and QML for a highly modular interface and a portable engine that can easily go ultra-cross-platform with user-configurable QML UI elements.

Again, this is an industry-shattering idea - if it worked, it would change gaming, funding, etc forever. It would not need many users to be successful because there is no colossal upfront investment to build the game. There is still a larger investment than the previous project requires, because you do need the website, world design, and game in a usable initial state to give players a taste - but that is the glory of it.

Modern games, primarily MMOs, have huge upfront development costs to provide thousands of hours of content to players out of the gate - mainly because Ultima Online and EverQuest did it. This game would start minuscule: a single starting zone, one dungeon, for one race, from levels - say - 1 to 5. That means the art design required is minimal; it is just the development time to prepare the game. Since we are not subscription or cash-shop based, player retention is not as important in the long run - we just need players to find something they like, want more of, and be willing to put money where their mouth is. As long as the content is of sufficient quality, players would pay to get more. This enables constant evolution of the project as more players join: higher revenues enable more developers to work on the project, and you get a positive feedback loop where faster and more diverse content attracts more players to a free software title, and user contributions then captivate them to contribute back to see more of what they like.

3. Magma Language (and OS redesigns in general)

Low level languages suck. I start here, but this idea is broader than just a language. To start, the world is built on C, and it sucks, but it will take a startup to upset that, because large companies are too invested in the status quo. At some point you realize the technical debt of the modern personal computer is becoming preposterous - consider that the locale library in C++ has more members than the entire containers library.

So Magma is not just a language but an idea: we have 50 years of bloat and cruft on our personal computers, and we are right near the end of Moore's Law. A project started today to correct for that could see performance and productivity gains of orders of magnitude for years, continuing to improve technology when the silicon does not. On top of that, as more people are coding each day and code contributes more and more to global productivity, optimizing coding is more important than ever.

Rust is a language that tries to fix C++, but it is still like C++, which is kind of its weakness. The real weakness is that fundamentally all computers now run knee-deep in abstractions - faking cache, faking RAM speeds, faking disk IO, faking network IO, faking core count, faking execution order, and faking parallelism. Instead, I'd want to design an assembly language around heterogeneous computing, treat all volatile and non-volatile storage as cache to the network, and return to simpler electrical interconnect designs based around one bitwise communication protocol with state negotiation - you have either one or many parallel pipes, giving you a serial or parallel port. It is always bidirectional, and the clock rate determines the line bandwidth, power draw, and latency. That lets you have everything from 5 Hz blinker LEDs to 16 GHz, 256-pipe parallel buses to graphics cards in one architecture. Think of networking: you have a fundamental Ethernet layer that uses MAC addresses and is IP-agnostic, and you use packet encapsulation to carry higher level protocols inside. The same concept applies here - you can forsake timings and bandwidth to get the consistency and simplicity of packets, or you can do high bandwidth bidirectional bitstream communication on one line.

You take a modernized hardware architecture, add a modernized heterogeneous ISA, build a microkernel on top of it, and forsake the present device ecosystem beyond whatever adapters you could integrate (which would not really be hard - active USB converters would be simple, and this system would need Ethernet-layer compatibility to communicate with other systems anyway).

This one is last because it's ridiculous. Nobody would fund it, yet it is probably the most lucrative project here. On the backs of reinventing OpenGL a dozen times, and reinventing C a hundred, maybe it's time to rethink the foundations to better enable the developers of the future. I have barely scratched the surface of the insanity mandated by wholesale rebuilding computing from the ground up for modern paradigms and capabilities, but I still think it is important that someone do it. Plan 9 did it at the turn of the 90s, and while it never became popular, that may be because it didn't go far enough. It was close, though - the benefits almost outweighed the cost of transitioning away. Calling it a research project the whole time and never giving it a real chance doomed it from the start, but fixing computing is essential for the future. I'm talking about the hardware you want the first sentient AI running on.

All three of these ideas are things I am deeply passionate about, to the point where I would work for free on any of them because I believe in them that much (just give me an ownership stake). So as I progress through this season, I'm actively going to be looking for like-minded individuals to pursue some of this insanity with me. Maybe we could change the world.

Update: Just going to suffix on additional ideas.

4. The cryptocurrency

Bitcoin is great and all, but really it is a replacement for gold, the store of value - not dollars, the medium of exchange. Because Bitcoin and Litecoin have finite caps on the total number of coins generated, and that generation degrades over time, inflation only slows over time relative to the size of the monetary base. Then there are coins like Dogecoin and Peercoin that perpetually generate more coins at a constant rate - in Dogecoin's case, 6 billion coins a year; in Peercoin's case, 1% of coins every year. This means Dogecoin's inflation rate falls over time as the base grows larger, while Peercoin has constant inflation.

But the truth is the size of the monetary base is largely irrelevant as long as money is distributed well. Inflation and deflation on their own do not matter as much either. What matters is the objective of money, and the metric by which a successful currency is judged: monetary velocity - how fast money moves through the economy.

No cryptocurrency today addresses this - constant inflation or finite coins mean the algorithm is blind to how the currency is being used. Instead, what you want is an algorithm that adjusts the coin generation rate according to how fast money is moving: as velocity slows, you generate more coins to promote inflation and spur money transfer; as it speeds up, you generate fewer coins to avoid oversaturating the market while preserving the coin's value.
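As a rough sketch of what that adjustment rule could look like - the function name, constants, and clamping policy here are all mine, not taken from any existing coin - generation scales with how far observed velocity deviates from a target:

```python
def issuance_rate(base_rate, target_velocity, observed_velocity, sensitivity=0.5):
    """Scale coin generation against measured monetary velocity.

    When velocity drops below target, mint faster to spur spending;
    when it rises above target, mint slower to protect the coin's value.
    All names and constants here are illustrative, not from any real coin.
    """
    if observed_velocity <= 0:
        # Economy stalled: apply maximum stimulus.
        return base_rate * (1 + sensitivity)
    # ratio < 1 means money is moving more slowly than targeted.
    ratio = observed_velocity / target_velocity
    adjustment = sensitivity * (1 - ratio)
    # Clamp so the rate never goes negative or more than doubles per period.
    return base_rate * min(2.0, max(0.0, 1 + adjustment))
```

In a real coin the observed velocity would itself have to come from the blockchain (e.g. coin-days destroyed per block window), which is its own design problem.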

On top of that, you maintain the staples of crypto - proof of stake and work, fast transaction times, smaller blocks, etc. I'd also want to include a DNS system in the protocol - the ability to exchange with someone by name instead of by address. Just having a distributed DNS service could work, but integrating it into the blockchain would be novel as well.

2014/06/22

Tom's SteamOS Response

So I wrote a huge response on Tom's Hardware to a bunch of SteamOS questions and discussions, and I'm posting it here because it was a lot of work. That way I can just link back here during discussions...

----


Just clearing up some misconceptions in this thread.

[quote]Does the steam client runs steamos games?[/quote]

SteamOS is just a modified Debian whose default session puts Steam in fullscreen Big Picture mode. It is very close to unstable-branch Debian running Gnome; it just backports a bunch of kernel patches and DRM drivers to support newer hardware. It IS pure Linux - you can even use the Gnome desktop on the shipped images. It is just missing a huge chunk of the usual desktop features: root is locked, the provided repositories are missing a lot of stuff from upstream Debian, and there are very few default Gnome apps.

[quote]Silverstone sells a similar style case for those who want to DIY a steambox that still is similar in size. I am almost sure SilverStone makes the case as well because I saw some silverstone parts in the steam boxes they sent out. [/quote]


I wish that case came out a few months earlier - I would have built my current rig in it. It is just insanely good space utilization. I have not yet seen anyone try putting in a full water loop with a 240mm rad, which I really want to see - it should be able to fit.

[quote]Anyone tech savvy enough to even be aware of steam machines[/quote]

That audience isn't the target of the Steamboxes. The initiative will live or die on how they compete in stores when put up next to the Xbone and PS4. They are not in any way a desktop computer replacement or surrogate and shouldn't be treated as such. There are prebuilt Linux desktops for that purpose if you don't want to use Windows, but those come with comprehensive application suites and are designed for computer use, not console use like SteamOS.

[quote]3 major considerations[/quote]

If you want a thousand-dollar gaming computer, you buy a thousand-dollar gaming computer. DirectX 12 is not out yet, and will require games to explicitly support it - there is no free performance for any title already out. In addition, modern OpenGL can be pretty fast, but getting asynchronous, low-overhead behavior requires writing your draw calls in a way unlike any other API right now. We should also see OGL5 at some point, and AMD keeps saying they want to open Mantle up and bring it to Linux, though at this point they might as well talk about making pigs fly.

[quote]OSX -> Linux should be easy for game developers(it is already OpenGL most of the time.).[/quote]

OpenGL is an honest mess. I've tried porting GLES shader code onto the various desktops in the past, and the experience is anything but simple. For one, OSX only recently adopted GL 4.1, and a lot of that functionality is still broken because it's been undertested. So you need to target GL 3.3, in the same way a lot of games still have DirectX 9 versions for cards that don't support tessellation - the same problem exists in the GL space. The good news is that all Linux GPU drivers are now at least at 3.3 as well, so you can write new GL apps targeting 3.3 with 4.0+ extensions when available, such as tessellation shaders.
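The targeting logic I mean looks something like this sketch - pure decision logic with no GL calls, and the function and path names are hypothetical, though GL_ARB_tessellation_shader is the real extension string a driver would report:

```python
def pick_render_path(gl_version, extensions):
    """Pick a renderer for a driver reporting the given (major, minor)
    GL version and set of extension strings. Baseline is GL 3.3 core;
    the tessellation path is enabled when the driver is 4.0+ or exposes
    the ARB extension. Path names are illustrative."""
    if gl_version < (3, 3):
        raise RuntimeError("driver below the GL 3.3 baseline")
    if gl_version >= (4, 0) or "GL_ARB_tessellation_shader" in extensions:
        return "gl33_with_tessellation"
    return "gl33_baseline"
```

In a real engine the version tuple and extension set would come from glGetIntegerv(GL_MAJOR_VERSION/GL_MINOR_VERSION) and glGetStringi(GL_EXTENSIONS) after context creation.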

[quote]I wonder if they will stop releasing future Valve games on Windows when the OS is ready for download forcing PC owners to use their OS if they want to play their newest games?[/quote]

This is highly unlikely. In the general case, if you write your games to OGL 3.3+ and SDL, you can port them to Windows with a week's effort even on huge titles, assuming you architected the engine well and intentionally played to platform agnosticism. If anything, DirectX, .NET, COM, and native Windows API usage might see a decline, since they make a lot less sense for the average developer in the same way Mantle doesn't, but the problem has always been moving software away from the MS proprietary stack to open standards, not the other way around.

The legacy of the id game engines is that because they always used OpenGL, even on Windows, they have remained insanely portable. The same can be said for most Blizzard games, especially WoW, since they all have OGL backends. Most developers today that release on OSX and Windows are still using DirectX, just because that's what their main development studio's engineers know, while their porting studio knows OpenGL. It is kind of a waste of resources, but it is only recently that you could even attempt write-once-run-everywhere OpenGL - and even then it is just an attempt. Like I said, OGL across OSes and drivers is always a huge mess, and it is even worse on mobile with GLES in all the garbage blob Android drivers.


[quote]Here is what I want from Steam OS: I want to be able to use any controller or mouse and keyboard that I want, not the ridiculous looking Steam controller, I want Steam OS to use way less resources such as CPU, RAM and drive space vs Windows 7/8, I want to be able to use my AMD GPU, I want to be able to play every single game I own not just Linux games. If I can play the same Windows games on Steam - figure out a way to get it working like using something like Wine or some other virtual machine OR simply release a Steam OS version of those games which can be downloaded using the same key you used for your Windows game which would be the ideal way to go IMO - and those games run better because the OS is using less CPU and RAM overall due to less background processes and it equates to higher framerates and better performance of those games then I will immediately switch to Steam OS for gaming and use my laptop for everything else.[/quote]

Linux supports the PS3 and PS4 controllers, the 360 pad (but not the Xbone one yet), and USB standard keyboards and mice. Most gaming keyboards and mice work fine except for special function keys, because the vendors won't provide specialized drivers for them and the kernel ends up using the generic USB HID keyboard or mouse driver.

That works most of the time, though I do intentionally shop only for peripherals that support Linux. Seven-button mice and any keyboard that has media keys but no programmable keys will work great. I have a QuickFire Rapid Pro and a G500 mouse and both work flawlessly, besides the Logitech mouse 9 button being mapped to mouse 3 in the driver. Across a half dozen mice, DPI adjusters have always worked, because the devices modify their polling rate against the driver rather than through some proprietary interface.

I also only use Intel and AMD hardware because I'm active in the Mesa project. Catalyst on Linux is always kind of a mess, but the Mesa radeon driver has come very far in recent years, to the point where I recently played through Metro Last Light on high settings on my 7870 just fine, and it can run Civ 5, Witcher 2, Starcraft 2, WoW, etc well. The Metro performance is really great, games through Wine run at about 60-80% of Windows speeds, Witcher 2 was an awful port, and Civ 5 runs great - though it isn't a very framerate-sensitive title.

SteamOS support for that driver, though, is lackluster. They include Catalyst anyway, which is notoriously buggy but supports the latest OpenGL versions and usually gets better results on higher end hardware. That driver is nowhere near as good as Nvidia's, but on the flip side AMD actively supports the FOSS driver and Nvidia pretty much hates FOSS. I hope in the next few years AMD just deprecates Catalyst on Linux entirely in favor of the Gallium driver, and relegates Catalyst to support mode for enterprise customers in compute clusters (who are the primary drivers of its continued development on Linux anyway).

SteamOS has an install size of around a gigabyte, and different distros have different footprints. For example, I have a huge number of programs installed on Arch, but all my programs (including Steam, though not the games) and OS parts take up 11GB on disk, compared to a completely blank Windows 7 using around 20GB. If I were only running, say, Openbox and Steam, I could easily get my install size before games below a gigabyte.

Memory usage is also significantly lower, depending on the desktop. SteamOS uses xcompmgr instead of Gnome's Mutter as the window manager, but still has a lot of Gnome services using up memory for one reason or another. Best case, a hand-crafted installation could get memory below 500MB before games, where Windows always uses at least a gigabyte by itself, before Steam - which itself uses a hundred megs or so.

Performance-wise, besides the ability to strip out all the background services on Linux, OS overhead is a solved problem, and Windows doesn't have any real efficiencies there over Linux beyond what you gain by hand-optimizing your Linux kernel for your hardware. The basic fundamentals of an OS have not changed, and neither have their performance characteristics, for a good decade. The other real plus of Linux over Windows is that if a device is going to be supported, it will be supported in the kernel tree or not at all. No driver hunting, because there is almost never a third party driver outside the kernel tree - with a few exceptions, like some Broadcom chips only working with their old wl driver that you have to get from them or a software repo.

I'm really surprised Valve has not adopted Wine at all yet. I expect GOG to ship a lot of Wine-wrapped games this fall, though. I often have better experiences running games through Wine than through some of the crappy native ports (ahem, Witcher 2), because Wine has gotten really good at DirectX 9. Blizzard games mostly work out of the box - on purpose. I know that Blizzard titles, Neverwinter, Tera, and Rage all have engineers explicitly making sure they run well in Wine.

And any game you bought on Steam is installable to your account on any supported device, including Linux. I had Civ 5 on Windows years ago, and when it got released on Linux earlier this month it just showed up in the installable games list. No keys or anything needed - anything you buy on Steam (or Desura, or GOG, or Humble) you can download and run on any supported platform with that original purchase. This isn't console bullshit.

In reality though, game overhead mostly comes from the engines themselves and the graphics API, not so much the OS, besides the raw space and memory wastage of Windows. You would see that performance difference if Mantle were opened up, developed as a Gallium state tracker, and shipped supporting every GPU out there through that driver - that is where the free community Nvidia driver lives too. If the API can be implemented against the winsys driver interface they use, everyone could see the performance benefits. The problem is that Mantle is really designed to run "on metal", while Gallium is designed to provide a hardware interface that APIs can plug into - DirectX 9, OpenGL, OpenMAX, EGL, etc - so it may not see as much benefit. Though winsys itself is a really modern design around how hardware works today, so it is probably close to what Mantle looks like in the first place.

[quote]I expect Linux ports will still take a while to really pick up speed[/quote]

It is mostly waiting on publishers. Almost any indie game is on Linux already, Ubisoft has said they intend to bring new titles to the system, and Metro Last Light was Deep Silver's latest game - they brought it over and will likely support new titles as well (and I heard they are backporting 2033). Paradox is probably the largest publisher right now that is really great on Linux support - they have brought almost their whole catalog over. 2K, Ubisoft, Zenimax, EA, Activision, and Square have not launched a single Linux title yet, and that is where most people's complaints lie. But there is nothing we can really do to compel them. More likely they will just launch new titles on it for a while, and if it is popular they will backport classics.

[quote]I'm in for a linux box dual/triboot of android/steamos/some linux variant[/quote]

A lot of people are treating SteamOS as some distinct thing from desktop Linux, but it really is not. It is completely redundant to run both, besides the peace of mind of the driver support Valve expects in titles they ship. It is the same client on both; the SteamOS Steam client just starts at boot and runs Big Picture as the desktop. You can set up a desktop session for gaming at your login manager that does the same on any distro out there right now.

[quote]let me re-phrase I'd buy a linux dedicated machine and I'd download steam games and play them[/quote]

You can build your own, or buy a prebuilt machine from System76 or another company (ThinkPenguin, for one - Dell and HP also ship Linux computers) any time.

[quote] Small devs could make a lot more money if they could easily get their game to run everyone very quickly and have a HUGE TAM to sell to. No more need for EA etc. [/quote]

You can already run OGL games on Windows. The problem is that OGL 3 was slow to release, and developers en masse learned and transitioned to DirectX in the early 2000s. Now all the graphics engineers are one-trick ponies that sit high in big publishers and studios. Additionally, the Windows GPU drivers have a lot of atrophied OpenGL support because it isn't used as much as DirectX, so when a title that uses it comprehensively, like Rage, comes out, the driver vendors get caught with their pants down.

If I were a dev today, I would be pushing for OpenGL ES in the core engine - that way you could bring it to almost any computing device in existence, at least by the time the engine is done. OpenGL 4.3 requires symbol matching to GLES3, so your GLES3 code should run fine on any conforming driver. The problem is nobody is doing this, so all those driver code paths are raw and untested. And you would be betting that Apple will move from 4.1 to 4.3 soon (I'm actually unsure if Apple supports the ARB extension already or not; I know Mesa does, even though it's only at 3.3).
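The check an engine would make for that is small - here as a sketch, where the helper name is mine but GL_ARB_ES3_compatibility is the real ARB extension I'm referring to above (the one Mesa exposes even at GL 3.3):

```python
def gles3_runs_natively(gl_version, extensions):
    """Return True if a GLES3 code path should run unmodified on a
    desktop driver: either the driver is GL 4.3+, where the spec
    subsumes the ES 3.0 symbols, or it advertises the ES3
    compatibility extension on an older version. Sketch only."""
    return gl_version >= (4, 3) or "GL_ARB_ES3_compatibility" in extensions
```

So an engine written to GLES3 could probe this at startup and only fall back to a translated desktop-GL path when it returns False.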

You could still pull in extensions and pass the GLES context into desktop GL code to apply fancier volumetric shaders, tessellation, etc. The only real downside to that model is that GL draw calls before the ARB indirect functions of 4.1-4.3 are insanely slow. GLES 3.1 actually includes them - and compute shaders - but it is just too new right now. When I can eventually write for GLES 3.1 and get comprehensive support, I'll be in developer heaven and finally be willing to work professionally in the engine space. Right now I'm a hobbyist in Mesa, because the space is honestly a morass of awful overhead and obtuse tricks to get around awful APIs. If I had compute shaders and indirect draws I'd be set.

[quote]Im sick of not being able to manage multiple displays the way do on windows[/quote]

This is actually a pitfall of X, the current Linux display server. And it is a huge piece of shit that IS finally dying. Once KDE ships Wayland support this summer, a huge swathe of distros are liable to adopt it instead of X around Q4 2014 to Q1 2015, and that display server fixes the thirty-year mess while maintaining reasonable backwards compatibility.

The problem is the proprietary drivers are unlikely to support Wayland as soon as the Mesa drivers do, and SteamOS itself is certainly going to be using X for years, so those benefits might not be seen for a while from Valve's distro, or from Debian itself. Especially with Ubuntu jumping the shark with their own display server.

[quote]People will want to be able to actually use the computer(dualboot) to browse, do photoshop, edit video etc.[/quote]

Steam has a built-in browser, and SteamOS comes with Gnome and its stock browser as well. "Do Photoshop" is kind of a corner to get backed into, since Adobe doesn't ship it for Linux and there is absolutely nothing anyone can do about it. Video editing, though - there are a lot of good video editors on Linux. My choice is Kdenlive; as a Vegas user in a past life, I find it does as much or more in some areas and I really love the UI customizability. OpenShot is another big popular one that got Kickstarted recently.

And on the topic of Photoshop: Krita, Gimp, Inkscape, and Karbon exist. Your mileage may vary, but that is why there are options (note the last two are just vector drawing apps; the first two are general purpose artistry and painting).

[quote]As long as the (Mobo, Graphics, Keyboards, Mice, ect ect) vendors make driver/app support and other software companys make Linux apps like Team Speak or Vent for starters I would be into the Steam OS. But Id rather build my own system and dual boot so I can still use my investment in current Windows games i have already bought. Also in the future I want to be able to make my own system and not be stuck to SteamOS specific systems... [/quote]

Motherboard manufacturers are absolutely awful with Linux. None of them provide out of the box support, and none of them really contribute to the kernel in any way at all. If even one of them were proactive, I'd exclusively buy their products and shout their praises from the hilltops.

In terms of GPUs, all the major vendors are doing something. Intel is really big on their FOSS graphics stack, AMD has Gallium and Catalyst (yeah, two drivers - hopefully the former gets good enough to replace the latter soon), and Nvidia has a really up to date and performant copy of their Windows driver.

I talked about keyboards and mice earlier in this book long post...

TeamSpeak is available for Linux; Vent is not, but there is the reverse-engineered Mangler that works with Vent servers. Neither is recommended though, since Mumble exists, is FOSS, and supports Linux as a first class citizen with its in-game and positional audio features. There is also Skype, and several Jingle and SIP clients as well; Google supports their plugins on Linux, and WebRTC of course works.

Tangentially, I think dual booting is the wrong approach. If there is something keeping you on Windows, just keep using it. I did dual boot for a year, but barely used Ubuntu 8.04 at the time, because the only value-add was the lower power usage and snappier application loading. There will never be anything program-wise on Linux that is not available on Windows - all you get are a bunch of low level benefits like better filesystems such as btrfs, tiny install sizes, tons of customization and control, and the ease of use of package management. But if you are also running Windows, all those benefits are moot.

But nothing stops you from building your own Linux machine - you just have to be careful about parts, because while they are all tested on Windows, you have to be explicit about Linux support. The good news is Intel and AMD keep their chipsets and CPUs up to date, so as long as you get boards without external controllers for USB or SATA ports, everything should work great out of the box. Hard drives speak a standard protocol, so they all work no matter what - the only blunders I've ever had are with drives that advertise full disk encryption but note in the fine print that it only works with a TPM, which ties into the proprietary Microsoft stack. My 840 Pro has full disk encryption via the ATA password through EFI though, which works great.

And you are absolutely not stuck with SteamOS. Like I said, it is just their own Debian spin that launches the Steam client in Big Picture mode. It is the same Steam available in the Ubuntu, SUSE, Fedora, Arch, etc repositories. And it's the same underlying OS - a game that runs on one will run on all of them, barring old kernels and drivers; the latest versions across the board should all work the same.

[quote]For just about the same money you can get a windows machine and run literally everything. When there is no technical advantage (and mostly drawbacks) why do it? [/quote]

If you are budgeting a $500 custom-built machine, $100 for the Windows license is a huge chunk of your budget, and if the games you want run on Linux, then it seems like a waste of money. Most computer vendors get subsidized deals on the licenses, and often include bloatware to actually profit off the machine. You could bloatware-ify any Linux distro as well, though - even more so, since you can run custom software repos and inject ads into almost anything, since it's all FOSS.

I remarked on several technical advantages earlier, but the biggest reasons Valve is doing this are, one, the Windows Store and how Microsoft treated ARM in Windows 8 (locked down from the bootloader to the installable apps) scared them off; two, Windows 8 is an extreme flop; and three, by having control of the OS they can do things like ship an image that boots straight into Steam and runs in a living room like a console. Trying to get that on top of Windows 8 would be a nightmare, if it would even be doable, and then you have to consider the memory and storage overhead of carrying the entirety of Windows. They took a calculated risk that breaking compatibility was worth it.

[quote]30 fold from a nearly 0% market share is still low.

and no, I'm not trying to put down the idea of steam machines, I think it's awesome, but so far what have we seen? delayed OS launch, OS installation problems, OS optimization problems (almost all AAA titles have similar or lower FPS compared to Win8), and repeated controller delays. heck even Alienware's newest 'steam machine' will run Win8.1 out of the box. and NO, i'm not interested in buying a $500+ device just to play linux only titles.

at the end of the day, I expect steam machines to take off SLOWLY, with people who already own gaming PCs experimenting dual-booting Steam OS, and their market-share will grow ONLY if they perform well. [/quote]

Linux user results are around 1.2%, so a 30-fold increase would be pretty huge - if SteamOS were a third of Steam's install base, it would be a major market mover. I doubt that kind of adoption, especially in the short term, though - there are millions of Windows computers that have just Steam and maybe a game or two that count towards its totals. There is a lot of inertia in those numbers.

They are delaying the launch because big publishers are not moving quickly to it, mainly because it's a disruption - EA et al. probably have MS, Sony, and Nintendo divisions that develop engines and bring games to each platform, but SteamOS requires OpenGL and Unix developers, which they probably don't have unless they were already porting to OSX.

The optimization problems are case by case. No matter what, it is always really easy to make a bad port - see Dark Souls on Windows. Witcher 2 is kind of like that on Linux: if you try porting by just wrapping DirectX in a translation layer like Wine, you are going to have a bad time. Metro Last Light, Civ 5, Portal 2, Team Fortress, Left 4 Dead, Dota 2, Dungeon Defenders, Garry's Mod, etc all run really well for me, and are all competitive in fps with Windows - varying from 75% to 150% of the performance. It all comes down to the quality of the port.

Steam Machines compete with consoles. Hopefully the void left by none of the current crop being actually powerful gives Steam Machines room to prosper - the combination of deep sales and the ability to get a console that can actually do 1080p gaming for around $800 should be competitive.

... talk about walls of text.