2012/11/24

Ubuntu, mysql-server, apparmor, and symbolic links

If you can't start mysql-server on Ubuntu (at least version 12.10, where I encountered the error, but I imagine it might be pervasive), AppArmor might (will) be denying the daemon the ability to run, because mysqld resolves its directories to their absolute paths, following the symbolic links on major system directories (in my case, /var and /tmp).  AppArmor doesn't like that, since the mysql-server profile has no permissions on /mnt/data/var or /mnt/data/tmp - it only names the stock system directories.

This bug only happens if you symlink directories that AppArmor manages access to.  And it only happens because an application, for some dumb reason, resolves its operating directories to their canonical locations and issues file operations against those resolved paths, rather than just using the prescribed system paths (/var, /tmp) as given.

The solution is to edit /etc/apparmor.d/tunables/alias and add in aliases such as:

alias /tmp/ -> /mnt/data/tmp/,
alias /var/ -> /mnt/data/var/,
If you do this, AppArmor treats accesses to the real locations as references to the original directories, so the existing profile rules still apply (remember to reload AppArmor afterwards, e.g. sudo service apparmor reload, for the change to take effect).  It is advisable to do this with any default dir you symlink on an AppArmor-based system so that the security suite doesn't bitch about random applications using the resolved paths to get to the same folders.

2012/11/17

GPU Fan Speed Controller Project Notes

I'm going to be writing a GUI / cron job daemon for regulating the fan speeds of graphics cards running the proprietary Nvidia / AMD drivers.  Mainly because with Steam coming to Linux and the burgeoning interest in the platform, there really needs to be an easy way (a la Overdrive / EVGA Precision on Windows) to create a software fan curve, and the Nvidia X Server settings / amdcccle are insufficient for that purpose.  Fortunately, Nvidia and AMD provide means to access the gpu temperatures and fan controls through aticonfig and nvidia-settings.  So I'll just be writing a configurable GUI frontend, a config file, and a background cron daemon to monitor gpu temperatures and update fan speeds accordingly.  I'm basing this project off 2Karl's bash scripts that accomplish the same, but they are a little cumbersome to modify (no single point of reference for the names of the scripts, no table of fan speeds, etc).

I'm just writing down my game plan with this little project here:
  • Using Python 3 and PyQt, the latter because Qt is a nicer GUI framework than GTK at this point, and Python because this isn't processor intensive and doesn't need low level optimizations.
  • One Python GUI application (maybe gpu-fan-control) and one background process to watchdog the gpu (gpu-fan-monitor).  The GUI will write a configuration file, probably under ~/.gfancc/profileX.cfg, and the monitor will be set as a startup application that reads this file, monitors the gpu temperatures by piping output from aticonfig / nvidia-settings, and at a user-defined interval checks the temperature against thresholds to decide when to change fan speed.
  • The initial goal is to get the daemon and configuration running by hand, then write a nice GUI to set it up.
  • My primary GPU is a GTX 285, so I'll have this working with Nvidia first.  The configuration file should specify the gpu type, and the daemon should have some autocreate functionality in case of a missing configuration.
  • The GUI will probably start with just a few temperature:speed data points that the background process will turn into a simple mapping from temperature to fan speed.  If the temperature hasn't crossed a threshold, we shouldn't update the fan speed (to avoid calls into control software whose performance I can't directly influence).  A minimal sketch of that monitor loop follows this list.
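
Here is a minimal sketch of that loop, Nvidia side only.  Everything in it is an assumption rather than the final design: the fan curve is hard-coded instead of read from ~/.gfancc, and the nvidia-settings attribute names (GPUCoreTemp, GPUFanControlState, GPUCurrentFanSpeed) need checking against your driver version - the writable fan speed attribute in particular has changed names across driver releases.

#!/usr/bin/env python3
# Sketch of gpu-fan-monitor: poll the gpu temperature, pick a speed from a
# table of temperature:speed points, and only touch the fan when the target
# actually changes.  Attribute names and the curve are placeholders.
import subprocess
import time

POLL_SECONDS = 5
# "at or above this temperature, run the fan at this percent"
CURVE = [(0, 30), (50, 40), (65, 60), (75, 80), (85, 100)]

def read_temp():
    out = subprocess.check_output(
        ["nvidia-settings", "-q", "[gpu:0]/GPUCoreTemp", "-t"])
    return int(out.decode().strip())

def speed_for(temp):
    speed = CURVE[0][1]
    for threshold, s in CURVE:
        if temp >= threshold:
            speed = s
    return speed

def set_speed(speed):
    # GPUCurrentFanSpeed was the writable attribute on drivers of this era
    # (with Coolbits enabled); newer drivers renamed it GPUTargetFanSpeed.
    subprocess.call(["nvidia-settings",
                     "-a", "[gpu:0]/GPUFanControlState=1",
                     "-a", "[fan:0]/GPUCurrentFanSpeed=%d" % speed])

last = None
while True:
    temp = read_temp()
    target = speed_for(temp)
    if target != last:
        set_speed(target)
        last = target
    time.sleep(POLL_SECONDS)
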
So I'll be putting this up on github soon.  I'm hoping to use it as a way to get into Qt and packaging on Linux, and it seems like something that's needed with gaming coming to Linux.

2012/11/15

Gaming Rants 2: Why WoW Went Downhill

As a 5 year WoW veteran with over 365 days played across 3 mains (rogue, druid, and priest) I quit in patch 4.0 after raiding for a few months because the game had honestly turned into complete crap and I was wasting my time on it.  Here's why.

  1. Progression as a whole went to shit in Wrath, and stayed awful in Cataclysm: Progression and goals in MMOs are so essential.  I stopped playing GW2 mainly because of this.  In Gaming Rants 1 on GW2, I outlined the reasons I kept playing WoW - when I ran out of them, I burned out and quit.  In PVE, they completely changed the progression system of classic and Burning Crusade, where each raid was a progression of the last, basically requiring some farming of the previous tier to progress much into the next due to gear and skill requirements, to a system where each major content patch reset gear and progression and made each raid the singular end game pve experience.  In the raid world, instead of having tiers of progression from MC to BWL to AQ / Naxx, you would just have ToC, or Firelands, or ICC, or Dragon Soul.  And that is so boring because you are always in the last dungeon, you are always a few bosses away from done, and you never have sights left to see.  4.0 was almost a breath of fresh air with 3 raids, but the real missing aspect is progression where you go from instance to instance (they are "trying" that in 5.0, but it looks like it's failing - I'll get into that).  You are nearly done with one raid, and you can look forward to another to get your feet wet in.  So good.  And it meant the people who did get to the end felt really good, because it meant they were the best of the best.  They were hardcore, they had commitment, and everyone else, including me, could look up to them, want to be like them, want to achieve that.  Today?  Everyone has end game gear, they all look the same, and the difference between an idiot with raid finder loot and a hardcore raider is probably that the hardcore raider transmogged BC / Classic gear over their new crap.  Wonder why.
  2. They also killed PVP progression: I would have loved to get into Rated BGs, but never had the "crew" to do it.  So it never happened.  They sounded ok, but without reasonable balance (another issue) they would have always been FOTM anyway.  Without negative rating, they were just grinds.  Back in my day, you went from 1500 to 2300 rating and fought progressively harder opponents.  It meant arenas were actually a ladder, rather than fake rating on top of hidden rating on top of personal rating on top of team rating on top of rating rating.  The personal rating and MMR systems ruined arenas for me because there was no progress anymore - and specifically, MMR forces you to a rating and actively keeps you there.  Progressing becomes hard after you play a bunch, and if you switch teammates your MMR is almost permanently fixed.  That is such crap.  Arena participation dived from usually over a hundred gladiator teams in each bg in BC to a few dozen in Wrath and sometimes single digits in Cata.  And it was because the progression was gone, replaced with a 0 to 1500 grind to make the progress artificial.  Rating on gear was fine though, to gate pvp gear.  I think the ratings could have been lower - gear should never have been an "issue" progressing at any point, just a way to keep the percentages of who gets what gear nearly even between pvp and pve.
  3. Dungeon content went from challenge to crap: They made dungeons faceroll to facilitate the dungeon finder system, so there was no fun in doing the 5 man content anymore.  In classic and BC, there were incentives (rare drops, actually (gasp) useful 5 man loot like Blackhand's Breadth, badges, or reputation) that made going back to this content after getting raid gear enjoyable, because it is fun to crush your old challenges.  And the oldest 5 mans were legitimately difficult.  BC heroics were really hard until ZA and 2.4 badge gear trivialized them.  In 2.2?  Shiz was serious.  The normal modes weren't even a walk in the park - they required CC, resource management, kiting, and knowledge of fight mechanics.  That is funny today when any random new max level player can just chain queue heroics, spam a dps / heal / tank rotation, and expect badges to pop out at the end.  No challenge means no fun, and dungeons going to crap was symptomatic of the greater issues.
  4. The world became small: With raid finder, dungeon finder, portals to every city, and flying mounts, the game world went from massive and mysterious to mastered and boring.  The large game space is worth nothing if you never have to endure it or brave it.  Without guardless neutral towns, wars never broke out.  Everyone felt safe and pacified.  Terrain was designed to support player flows, rather than the classic MC walkways that would get saturated with war.
  5. Flying mounts and god-guards killed world PVP: I loved world pvp in classic.  Unknown odds, wars, assassinating - it was a roguey thing and I reveled in it.  It completely died in BC.  With a few exceptions - daily quest hubs, the Isle of Quel'Danas, Halaa, and Auchindoun - people would just be flying everywhere with no engagement with other players.  World PVP, in its raw and unsterilized ways, kept the game fresh.  In Classic, you could never be safe outside a capital city, and in contested zones many neutral towns weren't even safe in the early days - it was a truly dangerous world.  And danger and challenge make games fun.  Maybe not for casuals.  Maybe not for the "target demographic".  But they did, and do, for me - and I miss that.
  6. Destructive content kept the game flat and dull: When every expansion (and more recently, every patch) resets every player, not only does it demoralize the entire population by making what is often years' worth of work count for naught, it also means that content loses all value.  Nobody comes close to raiding the old raids anymore, and the old dungeons might be done once for an achievement.  There are dozens of dungeons and raids that are now dormant and empty because people race to max level - they are never a challenge anymore.  All those Cataclysm raids will now be relegated to the trash heap, with someone raiding them maybe for a transmog or achievement, and steamrolling them with a few friends.  What a waste of developer time.  If, instead of constantly resetting the field, new content had been injected where the playerbase was most concentrated (ex: when BC came out, Karazhan would have still been a 10 man, and would have acted as a bridge from UBRS to ZG / AQ20, Gruul / Mag / SSC would have been between ZA / AQ20 and MC, and TK could have been between BWL and AQ40 / Naxx, where the major bottlenecks were... - and then BT could have been past Naxx, and Hyjal could have bridged MC and BWL), then by now we would have dozens of raid instances, all of them "overlapping" in difficulty and gear (bigger raids would give higher level stuff for less effort, but you could still have really hard 10 or 20-25 mans giving top tier loot).  Rather than 3 Pandarian raids that are gated over 3 months because they can't make raid content fast enough, and that they are just going to toss out in 5.1 anyway with new heroics that completely supplant their loot.  And in 6.0 all these raids and dungeons go in the dust bin as people skip ahead to level 95.
  7. The sterile game world is soulless: In classic, caves existed that were empty, mountain peaks with nothing, glyphic signs referencing developers in random places, a sign in an "unreachable" zone that said it was under construction, with the entire world tree already built, entire regions of the game world not practically useful - the deep Silithid hives, Deadwind Pass, southern Blasted Lands besides Kazzak - but they existed to have something challenging in the world, waiting for players.  Same reason world bosses existed.  They should have had roaming ones, it would have been fantastic (they did have some!  some random elite packs would wander some zones, like the Fel Reaver, but I wanted raid bosses like that!).  The inconsistency in design and dangers of the unknown made the original game alive, like a real world, even when every mob stood in one place and aggroed at 20 yards.  It was so much more alive than the dumb scripting and cutscenes of today because now, your challenge is presented to you with a big sign and a raid guide page entry with a ready check, not in a random cave in a new zone.
  8. Itemization has become trite, dull, and bad: The reason I fell out of love with PVP was twofold - the introduction of flat damage reduction from resilience, and the removal of critical damage reduction.  The first basically meant spells and abilities would scale even more out of control, because they could just tweak the magic "players hit players for less" button and let the experience diverge ever further between pve and pvp.  The second is just bad balance and it promotes RNG.  They were trying to lessen RNG in Wrath, and then threw a curve ball and got rid of critical damage reduction - crits are the greatest RNG in WoW PVP, have been pretty much forever, and were pretty much the reason they added the stat in the first place - but you can see how the design direction changed.  They cared less about balance, skill, and enjoyment, and more about bigger numbers and Skinner boxes to maintain subscriptions.  And humorously, it failed - they are below their peak subs, they barely got back over 10 million with Pandaland, and I guarantee it will hit 8 million or less by 5.3, because people get tired of not having challenge or goals.  And heroic modes are the most obtuse "goals" ever - harder hitting bosses for larger numbers on the same items.  That is nothing next to new loot with original art, crazy itemization, and custom special effects and procs.  That was loot; what is there now is a mockery of progression.  The fact that gear has become extremely sanitized, and that players can reforge, level up, regem, etc. their items, makes item level just a statistic you want to get higher.  No more Onslaught Girdle being insanely good into Naxx even though it drops from the first raid, because it was itemized perfectly - no more Dragonspine Trophy being the best trinket for 3 tiers.  Itemization became flat and boring, and that makes loot and progression less interesting.  Arbitrary stat allocation made content fun because the value of a boss kill varied with the drops, beyond just who wants the new +5 ilevel pants.
  9. As an addendum to why gear + pvp suck, stamina is useless: In classic, stamina was used in place of secondary stats on high warlord gear to make it pvp gear - it made you more tanky and take longer to die.  You didn't become more efficient to heal, you just lived longer from 100 to 0.  Today, nobody stacks stamina because the stat has been devastatingly eclipsed by every other stat - mainly, the revamped primary stats that give bonuses to multiple other stats.  It is still 10 hp per 1 stamina, even when damage / healing / mitigation per stat point in every other class has only gone up every expansion, except maybe in Cataclysm.  Today it is so awful that nobody ever itemizes it anymore.  And that is a contributing reason pvp sucks, and why every patch balance is thrown through a blender - players gain tremendous damage and healing increases, often from gaining points in 3+ stats when each of them alone outpaces stamina, and they expect the singular stat of resilience to counteract it.  And when it doesn't, they have to rebalance vast swathes of classes (if they ever do; they usually don't and just let every arena season go to the dumps) because every time they add another tier of gear, resilience isn't enough to offset damage gains, healers get stronger to counteract the damage, but that means burst is stronger, people die faster, and they live shorter because stamina doesn't keep pace.  Above all else, the failure of player health pools to scale has always turned the game into crap.  The only time player health was ever acceptable was in BC, when base health was high, and stamina was high on pvp gear and scaled the best it ever has since arenas started (it didn't scale as well in classic without resilience offsetting other stats).  When health pools don't scale, the game gets progressively more zergy and ping-pong like, and it gets stale.
  10. The game is still pay to play: After buying 3 expansions, I don't get why they don't just make non-current content free to play.  They already let people play forever at level 20 - and I know hardware.  It costs them dirt to support players on their servers; they make runaway profits from the game and so little of it goes to maintaining the servers.  They could have tons of free to play players at lower levels, maybe on legacy servers.  Those players could be buying item shop loot like pets and what have you, but they are not.  The reason I couldn't keep playing was not just because the game turned bad, but because if I'm paying $15 a month for something, I expect to get my money's worth - and that required, for me, a truckload of dedication.  I didn't want to keep it up, but felt conflicted staying subscribed and effectively wasting my money if I wasn't playing the game.  So I did the obvious - I quit.  And I won't pay for almost any game anymore, because free to play is such an earnest budget proposition.  I have a large enough catalog, people are awesome enough to make plenty of mods and custom maps for what I have, and I have too little free time to want to pay money to waste my free time.  I'd probably play WoW again if it didn't cost a subscription, despite all its modern day failures and faults.
That's enough for now.  For what it's worth, the modern game is almost always better than its predecessor versions - I wouldn't really want to play classic again, combat is too slow and flat, mana is terrible on everything, and while itemization was better back then, stats themselves were boring with only crit and hit mattering.  Random resists on spells and abilities were awful, and it is good they were taken out.  Random low percentage procs also sucked, and they are not as common now (cough, 5 stack Taste for Blood, cough).  Really though, if you retrofitted the better gameplay decisions from recent expansions into classic or BC and balanced bosses accordingly, without giving every player the kitchen sink of spells so they can do everything, the game would probably be amazing.  (NO ROGUE AOE, NO HUNTER HEALING, NO MAGE PERSISTENT PETS... etc)

Gaming Rants 1: Why Guild Wars 2 Failed My Expectations

So I bought Guild Wars 2 after playing a beta weekend.  I was hoping it got... better... than what I saw in the beta, at higher levels.  I only hit ~15 in the beta, and ~25 on live.  The problem in both was that I just ended up grinding the same areas over and over trying to level up, didn't enjoy it, and just quit.

So here are my reasons for why I didn't like GW2.  Someone else might feel the complete opposite and have loved these features, but this really gets at what I want in a persistent world nowadays.

  1. Progression didn't matter: the gating system of leveling in traditional games was / is meant to give you a sense of progression of power.  In GW2, this never happens, because you get downleveled or upleveled anywhere you go.  You can never go back to the newbie area and one shot mobs, because you get turned into a level 5 when zoning in.  Immersion-wise, it basically implies you can go from killing world dragons to dying to a bunny.  It made me not care about getting anywhere in the game, so I... didn't.  Any MMO I play basically needs to give me a reason to keep playing... (cont'd)
  2. Personal character combat and playstyles were dull: I tried a bunch of classes up to at least level 10 - the warrior, guardian, engineer, necromancer, thief, and elementalist.  My highest level characters were my guardian on live at level 25 and my beta elementalist at 15.  All these classes had the exact same setup: 1 - 3 were damage abilities, spammed on cd and rarely ever requiring thought.  The last 2 were usually 20 - 30 second cds you also used on cd, because why not, they are only cooldown restricted.  6 was the heal that you also spammed on cd if you weren't at full hp, or if you took it for some other effect... you spammed it on cd.  7, 8, and 9 were usually the highest dps abilities you could get, which you spammed on cd.  And 10 was an "ultimate" that you would use as a get out of shit card.  One ability you don't spam.  Yet you would often spam it!  If it was an offensive cd, you spammed that shit like it was in style!  You never had to stand and cast a spell, you could cast while moving.  There were no spell interrupts, so you never had to worry about casting in someone's face.  The only CC effects were tiny stuns (tbh, long cc is dumb, so this is fine) that didn't have any timing element to them.  Without healers or tanks, you basically spammed damage into whatever was nearest you until it died or you died.  Some weapon sets on some classes could be "tanky" or "healy" in that the 2 - 5 spells would often be tank or healer centric, and you could get heal / tank 6 - 9 spells, but you could never actually "tank" or "heal" as a role, you were just a shitty damage dealer with tanky / healy utility.  So no matter how you sliced the game, you were spamming abilities on cd (maybe if you were micro-optimizing your playstyle you would stack debuffs to maximize dps, but in general it was just button spam till things die nonsense).  You had a dodge, but because of latency and the horrible animations (honestly, monsters flail around in idle more than they do when swinging) and no delay between animation start and effect firing, you basically just randomly rolled around hoping to avoid something.  In GW2's favor, you could usually roll through a projectile in flight to immune it, so I found the dodge actually useful versus enemies at range.
  3. PVP is a spamfest: With everyone being a dps class, nobody having legitimate damage "rotations" or ability priorities, and there never being a tradeoff or choice in how you engage in combat, it becomes a cluster fuck of throwing every ability you have into a pile of bodies, trying to get away, and then doing it all over again.  Since synergy was... limited, at best, you would usually only care to beat on the same thing as everyone else, hoping some debuff benefited you.  With so many arbitrary weapon combos, and pretty much every combo producing a different set of 1 - 5 spells, any one player has way too many raw abilities to memorize from other profession + weapon combos, but that gets into the next point...
  4. There is absolutely no diversity of choice in how you play: One weapon set per class was, in my playtime, always completely better than every other at some role.  A guardian / warrior not using a 2h sword for mobility / damage was dumb, an elementalist without a staff for aoe was bad, etc.  The only semblance of choice was when classes with weapon swaps would pick a secondary weapon to supplement their only true primary weapon choice, because they could either pick another weapon to have more faceroll damage ability spam by constantly weapon swapping, or use something tanky / healy in case they get focused and want to tank it / get health back.  Also the choice between melee or ranged weapons did matter.  But in pretty much every profession, one set would be definitively and mathematically better than everything else, and everyone just goes with that.  So all that weapon combo diversity goes out the window since something has to have raw numerical superiority at whatever you are doing, and since that is always trying to zerg damage into something, guess what wins out.
  5. Groups don't matter and players exist in a vacuum: Monsters scale off how many people are engaging them, so grouping up has no benefit besides getting unique loot per kill.  If you don't kill things faster, you aren't getting any benefit, but since xp isn't split, it doesn't really devalue grouping either.  It is a completely neutral proposition.  Better than Borderlands 2 in that regard!  Not better than WoW circa 2004, or freaking Super Mario Bros circa the 80s.  Adding more players should have tangible benefits, since it is hard enough to get people to play with you.  Without a trinity, or any class synergy (hell, something like affliction locks and shadow priests sharing shadow damage dot buffs is exactly the synergy GW2 is severely lacking - sure, almost every class has a debuff it provides on some weapon that buffs everyone else, but it is rarely up, barely a damage boost, and never coordinated since everything else is already inherently zergy), it feels like a single player game with other people.  There can be no really competitive and interesting pvp or pve for me, because I can't get into lone wolf gameplay - CoD instead of Team Fortress, or fighting games instead of coordinated, synergistic gameplay like WoW arena RMP.  I feel like persistent worlds require each player to be an incomplete package, or else the gameplay becomes boring and playing with friends loses its flavor - supporting the weaknesses of others with your strengths is what makes these games.
  6. The plot is obfuscated and inconsistent: By level 25 I still had no idea what was going on.  I was playing an Asura that hunted down a green armored man that killed some rookies, who was in league with an evil faction of Asura.  I then went into a set of forests and caves consisting of friendly, unnaturally smart trogg rip-offs and got bored.  I had some "epic" quest where I had to open a portal to a ruined city on an island that was overrun by the old dragon the main plot has you kill, but as a storyline the whole thing falls flat for a few reasons.  For one, the character I play is personified just enough that I don't feel like I'm in the story, but not enough for me to see my "character" as an agent in this world.  Also, it's an MMO; you will never have immersive characters with agency because you are too busy engaging with other people treating your avatar as you.  Strangely, SWTOR didn't have this problem, because the story had a consistent goal throughout that I could care about, the scale was sufficiently epic for me to engage with, and it came in discretized chunks where each ending brought new beginnings (just an example: the Sith Inquisitor's training -> foundations of the feud with Thanaton -> murder plot -> artifact hunt and power acquisition for 10 levels -> betrayal by master and Thanaton -> hunt for more artifacts and power -> kill Thanaton).  Each part led into the next, and made me care about going forward, because I always wanted to kill some jerk in the Sith at some point and looked forward to it.  The enemy of Guild Wars 2 was never put in my face for me to care about, and he never did anything I could personally want to kill him for, so I just didn't care.
  7. The streamlined experience makes it flat: Waypoints are completely contrived, and you can just fade in and out anywhere in the world whenever you want.  So the world has depth until you find the waypoints, and then you are pretty much done.  Since almost everything important is on the map, there is no exploration (with rare, honestly well done exceptions like the Lion's Heart cave and a few aerial jumping puzzles), and while some zones are thematically consistent, some are just too big and become a colorful mess of inconsistency.
  8. There are no long term objectives: In WoW, I started playing and saw Bloodfang Armor - and I wanted it so bad that I played for a year to get it.  By the time I finished the set, BC came out, and I wanted gladiator and a netherdrake so I played shaman / rogue with a great player in the first season as a rogue and got gladiator in 2s.  Then I wanted to be the best rogue on the server and in 2 seasons was nigh the undisputed best player left on Agamaggan, mainly because Gummi Bears and their entire rank 1 crew left.  In Wrath, I wanted to try playing GM, so I did.  After that I wanted to try pvping hardcore again, but never could get the players on the shitty server.  That pretty much eventually led to me quitting, combined with the reasons WoW fell apart for me.  Hey, another blog idea!
Those 8 are pretty much it though.  If the combat was fun, grouping was beneficial and engaging, and I had goals, I would have liked it more.  The last day I played it was basically just logging in to chat and attempt to engage with my only friend also playing near my level before just getting bored and quitting after gaining a level.  I had no reason to keep going, I had my entire skill bar and every weapon unlocked, and the gameplay wasn't fun enough to keep me going.  I could farm BGs in WoW for years or grind instances forever because I liked the way WoW played in the old days.  I just couldn't get into GW2 personally, so I gave up.

2012/11/14

Software Rants 5: Architecting a 21st Century OS

In multiple posts on this pile of cruft and silliness I've spoken about how I don't like the design of most modern operating systems.  I've even written some poorly construed ideas on the subject.  So how would I go about building something that has, comparatively, taken billions of man hours to create in a couple of cases?  (Although projects like Haiku make me feel a bit more optimistic.)

So I said microkernel.  This theoretical system has one goal: the minimization of kernel-space code - anything platform specific that doesn't directly tie into the ability to execute software should definitely be userspace.  Linux has the philosophy that one source archive needs to compile with a configuration to work on any system ever, from the lowly phone to a massive supercomputer cluster of thousands of nodes.  You will get tremendously different binaries and KO sets depending on your configuration.  The difference between a PPC64 supercomputer kernel with all necessary modules compiled in and a udev based, dynamically module-loading ARM kernel for phones makes them barely resemble each other - besides some basic tenets, the same command line arguments, the idea of loading an init payload, initializing devices, and using the same (similar?) memory management and scheduling algorithms (there are some builds of the kernel that use different ones...), the resulting systems will be using entirely different tracks through the source tree.

I disagree with that, in that those aren't the same software projects anymore, and keeping them in the same source tree is the exact opposite of the UNIX ideology of doing one thing right, well, and concisely.  To be fair, it does mean the kernel is ultraportable: you can get an upstream codebase, build it, and run it on practically anything, and with a source clone you can customize the configuration to suit almost any use case.

But there is also a reason Linux takes multiple levels of management and a ton of organization behind it - they have everything from device drivers to hardware translation layers to language documentation to binary blobs being merged into one program and it becomes impossible for any one person to understand or conceptualize such divergent tech.  I don't call myself smart, but I do think smart people are averse to complexity where possible, and this is definitely one of the places I see ample unneeded complexity.

So here is my proposition: a microkernel core, designed and written in L, that only handles virtual memory management, preemptive process scheduling ("virtual execution"), a virtual filesystem abstraction layer (so that aspects of this kernel can be mapped into the final root filesystem without awkward dependencies on user space filesystem managers), a virtual socket layer (for communication purposes - the sockets themselves would be managed in userspace, but the kernel would initialize this system so that some user space network daemon can manage socket communication, and the kernel itself will be using sockets as well, to communicate hardware state with the daemons it connects to!), and a hardware abstraction layer that allows user-space privileged daemons to take control of hardware components (disks, video devices, buses and usb hubs, etc, basically everything divergent from the RAM/CPU mix*).

*: I would be interested in exploring whether there is a way to have the memory controller in user space, such that the kernel could start and initialize itself using only processor resources, but it seems excessively complex... you can't even establish any other virtual systems without system memory to use.  It would have the same issue a filesystem host would have: the kernel would need to start up this daemon in advance, and then fall back to its own boot procedures.
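
To make that list concrete, here is a rough sketch of the entire kernel-visible surface as I picture it.  This is just a model written in Python for illustration (the real thing would be in L), and every interface and name here is something I'm making up on the spot, not a settled design:

# Model of the proposed microkernel's entire service surface.  Everything
# else (drivers, filesystems, network stacks) lives in userspace daemons
# that talk to these five pieces over the socket layer.

class VirtualMemory:
    def map(self, process, virtual_addr, length): ...       # back pages on demand
    def map_device(self, process, device_id): ...           # hand a device's memory range to its owning daemon

class Scheduler:
    def spawn(self, image, privilege_level): ...            # "virtual execution"
    def preempt(self): ...                                   # timer-driven context switch

class VirtualFileSystem:
    def mount(self, path, provider): ...                    # userspace filesystem daemons plug in here
    def open(self, path, mode, privilege_level): ...        # permission checks live at this layer

class VirtualSocketLayer:
    def create(self, owner): ...                             # kernel <-> daemon and daemon <-> daemon messaging
    def send(self, socket, message): ...

class HardwareAbstraction:
    def devices(self): ...                                   # the table handed over by firmware
    def claim(self, device_id, daemon): ...                  # exclusive control by a privileged daemon
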

The traditional boot philosophy is: power on -> firmware is loaded and initialized by the hardware -> firmware does primitive device detection and scans persistent storage buses for something with a recognizable partition table, then loads a bootloader from it -> the bootloader gets handed a "direct media interface table" (even though that term is Intel mumbo jumbo), with some hardware memory mapping used to expose devices.

UEFI is like this, with more complex device loading (including a 3d driver of some sort for the graphical setup screens) and the ability to scan not just partition tables but FAT32 filesystems for bootable executables.  It is pretty much an OS in and of itself considering how it behaves.

In my grand delusions, we could scratch the unnecessary parts - the important aspects of a boot cycle are device initialization, error detection, and searching for a payload.  Device initialization is already pretty well "passed on" to the resulting OS.  Error reporting is more complicated, because you are dealing in a world where the most you have access to may be some primitive bus signals to indicate problems, such as beep codes, error panels on the board, or keyboard signals to blink the ps2 port.  BIOSes and EFI boot procedure are obscured by platforms - each new chipset does things differently, merging more parts onto the cpu, or handling device channels differently.  In terms of payload searching, EFI actually does a really good job - given a boot device, and a system partition on it, load a binary.  No need for traditional bootloaders (which is nice).

On -> check for cpu / memory errors; on error, try to signal all devices with some special hardware failure signal and a payload error descriptor.  Devices would need to be designed to handle this signal if they have some way to display errors (vga, keyboards, sound systems), or drop it otherwise.  The expectation is that "on" means devices power on, and the independent units like network controllers come online at the same time and wait for the cpu to initialize them.

Check errors -> initialize devices.  Given no catastrophic errors, check if any other device has errors, and if not, build a table of devices and pick the payload device to boot from.  If a device has an error, broadcast another error payload signal to let anything capable alert the "outside" of the problem.  But don't stop booting if the error is recoverable, just mark the device as failed in the provided table.

Initialize devices -> payload.  You have a table of devices, and the firmware needs to know where it can find something to payload.  In terms of binary byte ordering, that is an open question - we would prefer big endian for readability if it doesn't incur a hardware complexity cost.  Almost nobody should ever be working with binary data representation at this scale anyway, but if we can do big endian without circuitry overhead, we should; otherwise, keep it simple stupid and use little endian.
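
Putting those three stages together, the whole flow is small enough to model in a few lines.  This is only a sketch - Python standing in for firmware code, with every structure and hook (self_test, signal_error, read_payload) invented for illustration:

# Model of the proposed firmware flow: power on -> error check -> device
# table -> payload.  Devices here are hypothetical objects.

def broadcast_error(devices, errors):
    # Send the hardware failure signal to anything that might display it
    # (vga, keyboard LEDs, speaker); devices without such a means just drop it.
    for dev in devices:
        dev.signal_error(errors)

def boot(cpu, memory, devices):
    # Stage 1: catastrophic checks.
    errors = cpu.self_test() + memory.self_test()
    if errors:
        broadcast_error(devices, errors)
        raise SystemExit("unrecoverable cpu / memory failure")

    # Stage 2: per-device checks.  Recoverable failures get recorded in the
    # table instead of stopping the boot.
    table = []
    for dev in devices:
        failed = bool(dev.self_test())
        table.append({"id": dev.id,
                      "memory_map_size": dev.memory_map_size,  # used later by the kernel's VMM
                      "failed": failed})
        if failed:
            broadcast_error(devices, [dev.id])

    # Stage 3: pick a healthy bootable device and hand its payload the table
    # (the "integrated bootloader" - no separate bootloader stage at all).
    boot_dev = next(d for d, entry in zip(devices, table)
                    if d.bootable and not entry["failed"])
    kernel = boot_dev.read_payload()
    kernel.start(device_table=table)
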

Since we have effectively an integrated bootloader, we need some very simple file system just to store binaries.  Now here's a point of contention - have complex filesystem analysis machinery that can read our "stock" FS type (which would absolutely be in the btrfs vein of COW, auto-compressing, snapshotted filesystems), or have a discrete filesystem just for the bootloader to read off.  We need to think about MBRs here - and logical and physical sector sizes.

I want to propose variable sized sectors, the same way there are variable sized memory pages - 4K memory pages and 4K disk sectors are great defaults, but you can always use larger contiguous blocks of... both.  In both cases, small page and sector sizes impose bookkeeping overhead on the managers of these spaces, while larger sizes impose overhead of their own when boundaries are not nicely met.

For one, traditional storage media just can't have variable sized physical sectors.  Having different logical sectors seems silly because it is a great simplification of work to have 4K sector sizes in both cases.  That will continue to work well for some time, and a sufficiently smart operating system can optimize sector utilization to minimize wasted space.  That is a device driver problem, though.

In terms of hardware memory pages, I still think swapping has value even if traditional desktops don't need it anymore - too many problems just require tremendous amounts of memory to work with beyond the bounds of traditional computing concepts, and we like embedded systems with low memory.  Even if you could theoretically implement something akin to paging with file system abstractions (writing to and from disk once you approach the physical memory limit) having the option there has proven to be worth it.

So page sizes - we don't want too many sizes, and we want them to scale nicely if possible.  This would require research and insight I don't possess, but you definitely want to support variable sized pages.

So we assume disks have 4K sectors, pages are at least 4K, and we may or may not have a dedicated partition standard for bootable binaries with some specialized file system.  We will need a disk partition table, and if we're mandating 4K sectors, we have 4K bytes to store info on the device.  I like how GPT has a backup copy of itself, so we want one of those, so it's really 8K bytes total, the first and last sectors.  In terms of sector addressing, I'm starting to think about 48 bit as an option - the overhead of 64 bit just for 64 zettabytes seems unnecessary when 48 bits gives an exabyte of storage.  Currently, the largest storage device is approximately 4 TB, up from last year's 3 TB, the year before that 2.5 TB, the year before that 2, the year before that 1.5, etc.  So if we go with a terabyte a year (which is about right for the last 5 years) we have a few decades before this becomes an issue, and we can just add in a 64 bit addressable 2.0 version anyway since we want userspace drivers.

Similarly, I'm not entirely sold on pure 64 bit cpus either.  The real big argument for large memory cpu sets is that server farms with shared memory need to address all that space, but you still get 256 TiB (about 281 terabytes) on 48 bits.  I'd probably make the 1.0 of this system 48 bit, and make sure the language and instruction set are inherently valid converting to 64 and maybe even 80 / 96 bit.  This is actually really easy in the conceptualization of L, because you would have pointers as objects, and their size would be compile specific to the platform pointer size.  Integers are decoupled from words, so you can create ints of all sizes from 1 to ~16 bytes (I don't see why you wouldn't implement 128 bit integers on a modern architecture).  This also brings up the potential to just do what Intel does - 64 bit pointers over a 48 bit implemented virtual address space - but there is a translation in there that adds unnecessary complexity in my opinion.
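
For the record, the arithmetic behind both of those width choices is easy to check (assuming 4 KiB sectors on the disk side and plain byte addressing on the CPU side):

# Capacity math for the addressing discussion above, assuming 4 KiB sectors.
KiB, TiB, EiB, ZiB = 2**10, 2**40, 2**60, 2**70
SECTOR = 4 * KiB

print(SECTOR)                      # 4096 bytes available in one partition table sector
print((2**48 * SECTOR) / EiB)      # 1.0  -> 48-bit sector addresses reach 1 EiB
print((2**64 * SECTOR) / ZiB)      # 64.0 -> 64-bit sector addresses reach 64 ZiB
print(2**48 / TiB)                 # 256.0 -> 48-bit byte addressing (the CPU case) is 256 TiB
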

So 48 bit CPU, 48 bit sector addresses, 4k standard pages and sectors.  Big endian unless there is a complexity or performance hit, in which case we just use little.  We want point to point memory rather than dual channel, and I already talked about CPU architecture earlier - having a heterogeneous collection of registers, alus, fpus, simd cores, etc to run dedicated parallel instructions.  This way the traditionally discrete gpu and cpu cores (even on a shared die) can be more tightly integrated.  You could also use one single ram, and not have to reserve it for the gpu cores.  As long as the instruction set is designed around supporting SIMD instructions for the parallel processing cores, we should be sound.

So we have our payload, we run it, and we set up a process scheduler that I will look into more in the future (really, this is the kind of decision that takes a ton of reading up on, to figure out the best preemptive scheduler for purpose, but CFS I guess is the industry standard).  We have the virtual memory, and we need some way for the kernel to initialize physical hardware virtual memory mappings.

So we don't want the kernel explicitly dealing with device management, but it needs to initialize device memory, so we can just add that into the device table the firmware provides - one of the hardware signals can be for the memory map size.  The kernel can then map memory addresses at this time to the given table's memory requirements, and when an application accesses a device's hooks, it also gets control of the virtual page table referring to that device.

So scheduler and memory are up.  We want a virtual file system now - no hardware folders are even initialized yet, but we can provide a virtual hardware file system for device access.  This would be an elevated privilege folder - as an abstraction, device control servers open devices for writing to take exclusive control over them, and the writable file is their memory map.  I proposed a file system layout earlier, so here we are talking about something akin to /System/Hardware/*.  The VFS would populate a node per hardware device provided, the folder would require level 1 permissions to access, and once a device server has control of the virtual memory map with the device opened for writing, the only thing other servers can do is read it.

So this virtual file server needs the concept of permissions at the kernel level - we want executables running to have a level of privilege, beyond the bounds of kernel vs user mode processor execution state - this is a software privilege, where level 0 is the kernel, level 1 is init and systemwide access services and daemons, and has a "pure" view of the file system.  Level 2 would be the usual run mode of programs - restricted access to system essential files, restricted views of the file system, and each application would have device privileges specific to it given to it by a level 1 service.

Some examples - the vfs, socket layer, memory mapping, and scheduler operate in level 0 kernel mode.  A gpu driver, device controller, usb host, file system daemon, dhcp host, smb server, or virtual machine service would run at level 1.  You want two levels of indirection here in most cases - a sound server to manage the audio device servers running, a display server to manage the display adapters running, a network server to manage the networking devices, etc.  Access to these servers is restricted in the vfs and vss through executable privileges, probably by a level 1 executor server that wraps kernel level execution behavior.  Basically, the kernel sets up permissions that the executor server manages, since level 2 permissions can't access level 0 directly.  Level 3 is the "pure sandbox" - it would be started by level 2 programs (like a VM), has no device access directly, only has its own restricted view of the vfs, and by default has no write permissions outside its execution context.  You could thus host users from level 2 session managers (maybe run by an administrator) and they would be unable to manipulate the outer system by design.

So we have 4 permission levels right now, and you could theoretically add more for deeper levels of virtualization.  A virtual machine could thus just be a level 2 program that pipes commands from level 4 devices through the virtual memory map of a level 3 kernel into the level 1 devices above it.  Very slick, I think.
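
A toy model of those privilege levels, plus the exclusive-claim rule on /System/Hardware nodes from above - again Python standing in for the real thing, with every name invented:

# Toy model of the software privilege levels and the device-claim rule.
KERNEL, SYSTEM, USER, SANDBOX = 0, 1, 2, 3   # levels 0..3 as described above

class DeviceNode:
    """A node under /System/Hardware/*: one writer (the owning daemon), readers otherwise."""
    def __init__(self, name):
        self.name = name
        self.owner = None            # the daemon holding the write handle / memory map

    def open(self, process, mode):
        if process.level > SYSTEM:
            raise PermissionError("only level 0 / 1 may touch hardware nodes")
        if mode == "w":
            if self.owner is not None:
                raise PermissionError("%s already claimed by %s" % (self.name, self.owner.name))
            self.owner = process     # exclusive control: the writable file *is* the memory map
        return self                  # readers just get to observe device state

class Process:
    def __init__(self, name, level):
        self.name, self.level = name, level

    def spawn(self, name):
        # A child is never more privileged than its parent, so level 2
        # programs (like a VM) start their children at level 3 (even less privileged).
        return Process(name, self.level + 1)

# Example: the display daemon (level 1) claims the gpu; a user app cannot.
gpu = DeviceNode("gpu0")
display_server = Process("display-server", SYSTEM)
gpu.open(display_server, "w")
app = Process("game", USER)
try:
    gpu.open(app, "r")
except PermissionError as err:
    print(err)                       # level 2 has to go through the display server instead
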

The other major revelation is the idea of display management.  In the absence of dedicated video hardware for a level 1 daemon to control, the video daemon could itself emulate a video server.  Or you could set up the userspace so the kernel is level 0, device controllers are level 1, device managers are level 2, and user applications are level 3, so that user applications never interface with devices directly but only through abstraction layers.  I actually like that somewhat more than the other model in some use cases.

And then of course the traditional server model can just run everything at level 1.  It isn't kernel mode, but it has device mapper access, so you can set up the traditional ultra-fast interconnects.  This alleviates the problem of FUSE and its ilk in Linux, because you don't have to bounce every operation through kernel hooks and pipe it back out to an unprivileged daemon - you inherit devices directly into user space.

So I'll talk about the device / manager services more next post.

2012/11/12

Politics Ranting 4: Ideologies of a pragmatic Libertarian

So I consider myself Libertarian.  By and large, I want the government out of business and life, and really think history teaches us by repeat example that power corrupts, political bureaucracy is slow, inefficient, and often fails, and that the best democracies involve fewer people rather than more.  I made other posts on this blog about why the US political system is shyte, and how to circumvent some of these failings through the economics of money and people.

But I don't agree with the entire Libertarian platform.  Minimizing the fed to the smallest possible degree is an absolute must, because of any level of government it least represents the will of its constituents.  States are not that much better - they are just smaller pools of representation, so they more accurately reflect the views of their people.  But they still have some pretty dramatic dissent across them, and a great example would be my home state of Pennsylvania.

This year, PA elected Obama.  They elected quite a few Democrats to Harrisburg too, and they reelected Bob Casey Jr.  However, my county of Berks reelected a Republican to the House by an overwhelming margin, because I live in the boonies hick country where half the people are Mennonite and the other half are old and racist.  This includes Reading though, which like any large concentration of people is decidedly "blue" (even though I reiterate my disdain for categorizing along the black and white axis of American politics).


I'm just going to go out on a limb and say my area is pretty "conservative" as a whole, even with a population center of liberals.  If they had exclusive say over their own politics, ethnic minorities probably would be enslaved again if a few layers of government and constitutions didn't forbid it.  Meanwhile, if Philly is as liberal as it votes, they would have a socialist state where private business did not exist.

So there is some pretty complex disparity between regions in a state.  Especially an American state - in my proposal, I'd rather have organic states redrawn every census that contain around a million people each, whereas modern PA would be 12 of these.  But that definitely isn't small enough - while I definitely think a better system of government would make people inherently more mobile and not as tied down to areas against their ideologies like they are now, you can't expect everyone to have the capital to move away from a state containing a million bodies whenever they don't agree with the going majority viewpoint.

So I advocate local politics and enough social mobility to let people get out of areas they disagree with.  I said it before, but it is against the libertarian platform because they want states' rights, not local rights.  They assume that the original 13 colonies were the "right" size, and in many ways don't even question it.  Even though one of the largest of the original colonies at the first census was... (gasp) Pennsylvania, at 430 thousand.  A third the population of modern Philadelphia.

And that gets to my point - the American system of governance does not facilitate population growth organically.  Jefferson wanted the constitution rewritten every generation because he knew they couldn't predict everything.  This is absolutely one of those things, and it is in my opinion the root cause of almost everything wrong in modern America.  You have too few representatives ruling over too many people everywhere.  Under my proposed system, you would have regions of ~10,000 people (1/6 the population of first-census Rhode Island, the least populated state at the time!) ruled by a council of 10, writing laws for themselves by their own beliefs.  One of those 10 gets elected to represent them at the state level.  So a council of 9 directly votes on laws and selects leaders of local departments (police chief, chairmen of a local hospital, boss engineer, fire chief, etc).  Those 10 should know their people.  Personally.  Each one on that council should know, on a personal level, the ~1k people they represent, and the people should have selected them for their humanity, not their spin.  They can directly question people in their district about matters of law, and with so few people the average person can sit in on discussions and make their voice heard.

You can't do that with a congress any larger.  10k people is about the size of the largest high schools, and stuffing that many people into an auditorium is... troublesome.  But doable.  You can get that entire community together at once.  You couldn't do it at almost any larger scale.  And they can select their own laws, their own policies, and what works can attract new people to move in and create a more concentrated sect of communities sharing opinion and law.

So that is the states-are-best policy out the window.  The other major Libertarian stance I dislike is the idea that government can't run anything.  The principal thing to consider, from my perspective, on this topic is cooperation versus competition.  Any business will be inherently competitive, and that is meant to lower costs and cut corners to maximize efficiency.  Any cooperative venture is vulnerable to inertia, bloating, and stagnation.

I don't think utilities and infrastructure can be effectively done in a competitive way.  And I don't mean buses - those aren't really infrastructure.  You can buy a bunch of buses and sell seats without the tremendous opportunity costs associated with laying roads, because roads are a public utility right now.  Private roads would be systemically infeasible, because whoever gets roads "to market" first surmounts the opportunity costs, and any competitor would need to dedicate tremendous capital to "catch up".  Especially with limited land to cover with roads, and the probability that once an established player enters the market, they would influence regional politics to block any new entry into the market by arguing against consuming more land for roads.

Electricity is similar.  Power companies produce juice and sell it on the market, and people can buy from any distributor even if they aren't physically using that distributor's power, because the lines don't distinguish who is putting in or taking out juice, just how much enters or exits at any one point.  The lines themselves are maintained publicly though, because if they weren't, the entry costs of "setting up shop" by running a duplicate set of power lines would again be prohibitive, and again, competitors would manipulate government into artificially limiting the market.

The internet is so horribly shit in this country and Canada for exactly the above reasons.  As Verizon lays fiber, they corner the market.  They own the wire, and nobody can ever hope to overcome that opportunity cost once Verizon is in the market.  The potential for profit is much lower when there is anything but a monopoly in such a circumstance, costs are too prohibitive, and governments are too easily manipulated into denying new entries into the utilities market.

Same thing with public water, and public rail.  And it is the same reason private airlines work.  If companies share the "lines", the system can work; if they need their own wires, it doesn't.  If cable lines are public, the system is open to new players.  If they are not, the cost of entry is too high and too easily manipulated by the entrenched player, and they can pull all the strings they want if you try to "share" their line.  It is in their best interest to keep you out of the game and themselves in a monopoly position.

I just don't see a system where you can have private utility lines, roads, sewage, or rail.  The redundant reproduction of similar infrastructure is prohibitively costly, and any player that owns the line controls the market.  It is a guaranteed monopoly.  Internet is a strange circumstance, because there are a few more factors that really ruin everything for us besides just the lines (and is a reason why we had dozens of dial-up providers in the 90s and nothing since).

First, servers and routing stations are expensive.  Like the cost of laying line, adding new players to a market dramatically reduces potential profits through competition, and setting up new routing farms in new areas that already have some providers is something no business will touch because it presents too much risk and too limited profit (it requires giving more speed away at lower prices to entice entrenched users to switch).  So building routing centers is another prohibitive cost.

Second, wireless spectrum is restricted and limited.  There is no open spectrum to set up your own 3g tower on; you either need to own it, get your pockets cleaned out by someone who owns the spectrum, or get sued into the dirt.  It doesn't help that only the 300 MHz - 3000 MHz range hits the sweet spot in the tradeoff of data rate against signal range.  We need this entire spectrum reset and opened to anyone that wants it, and we need to let the market organically manage it rather than having an overbearing fed hand it out like candy to whoever funded the most recent campaign whenever spectrum crops up.

Third, the wires are still prohibitively expensive.  Fiber runs to 75% of America would revolutionize everything, and as long as you don't target the prohibitively remote populations that make figures like "to lay fiber to every American would cost 15k a person!" 'legitimate', it is perfectly affordable as a public works project.

Fourth, ICANN and the other routing and network back-ends are in many places private, so you need to pay them off and be in their good graces.  The in crowd doesn't want to mess with new players in the game; it is pretty closed off to new entries.  So even if you can get the servers and the fiber to the home, you still need to bargain with the maintainers of the rest of the infrastructure to let you hook in.  Good luck with that.

So internet, above most other things, really needs some public lines.  And routers.  The server farms and the infrastructure need to be public, and unlike other industries, this is something hard to do wrong.  You only lay new cable every once in a while - provided you can overcome the hurdle of public repair crews being shit and lazy because they get paid lump sums or are unionized, so that damage gets repaired in a reasonable time frame (including in the data centers), and you keep this stuff well funded so the routers don't become oversaturated and the lines are updated regularly (which to be honest hasn't happened yet in the public space... nobody ever reinvests in roads or rail or water or power after they have something with a semblance of working in place...).

But it is better than private enterprise gaining a near instant monopoly and stifling innovation every time.  The infrastructure markets are just too biased to be effectively competitive.  They aren't competitive by nature, and infrastructure is part of what separates us from animals - no animal has evolved wheels, because anything that spent its time laying roads for others would be selected against, since it spent all that time helping others and benefiting the species rather than benefiting itself.  Infrastructure puts us one step beyond birds building nests.  And it is an inherently cooperative thing that makes us all better off, because the labors of a few benefit us all in the long run, and you can't make that competitive or else nature already would have.

But in my system, local government could make this decision.  It need not be a federal mandate.  If an area wants to have shitty internet, it could.  The only problem I see is that if one area wants good public internet, roads, and power, but all the neighbors are shoddy privately owned industries that try to suck the one region's dollars dry when it wants to pass through their neighborhoods, there needs to be some arbitration so the locality providing well for its people need not suffer its neighbors' indignities.  10k people should be enough in theory, and you at least produce some competition over which neighbor a local area should pass through to access the broader market, but I see room for corruption and exploitation whenever you cross local borders.

Maybe that could fall under "inter-region commerce" and interstate lines could fall under interstate commerce, so that the next level up has to manage these connections.  Because then the majority would want their regions to have free passage through everyone else, so they would all agree to open borders and lines of infrastructure.

Other issues, like healthcare systems, I feel would be handled organically.  If something isn't as interconnected across borders, each area can take its own approach, and whatever works will spread and become popular.  The people that want direct payment for medical service can move somewhere doing that, the people that like insurance can move there, and people that want single payer can move somewhere doing that.  Among those systems, someone will probably be doing it "better" and the good ideas would spread.

Like I said in my ideal society post, taxes would be a solved problem.  I'll talk more about my tax theory later.

So in summary: small local governments writing the majority of law, so that good ideas are spread and adopted and experimentation occurs everywhere.  People move where the ideas agree with them, and every census, as these regions expand and grow into more and more localities, they take over their regional and state governments, and the wider area adopts similar policies at higher levels.

The only real weakness I see is changes in the global environment turning what was once great policy for the worse.  If the nation adopts a free trade agreement as a whole that is beneficial for all at some point, it can only be changed at the national level by revoking it and waiting for a change of ideals to trickle back up to the national forum.  While it is a very good system of checks and balances, it requires a (hopefully) slow federal legislature to revoke the law, then for states to revoke their own older policies on the matter, then for localities to change theirs, and then for new policy to be voted back up.  It would be a slow process.  It is good that it is slow, though, and I figure it could be done quickly if necessary (an overwhelming federal vote tears it apart overnight, the next day states dismantle it, and localities change it the day after), so you can change a broken law in three days, and maybe eventually the changed system trickles back up as the new standard, which is again resistant to immediate change.

2012/11/10

UEFI: The Good, the Bad, and the Stupid

After a week working on this HTPC build, which uses the Asrock AMI UEFI firmware, I have developed some stronger opinions about UEFI than I had before, now that I have some hands-on experience.

The Good:
  • GPT is a much needed replacement for the old msdos partition table.  None of the other partition tables caught on, and it took a strong concerted effort to get us to move away from that archaic mess (4 primary partitions, oh the humor).  It supports enough primary partitions to never have to think about logical ones again, it keeps a redundant backup copy of the partition table at the end of the disk in case the front one becomes corrupted or damaged so partitions can be recovered (see the sketch after this list), and it doesn't have an unnecessary mess of bootloader space resting inside it, which means Grub can't stuff its bloated mass into the partition table as well.
  • The traditional model of execution - firmware -> payload a disk -> load a bootloader to load an OS on that disk -> execute the OS payload - is kind of dumb.  UEFI eliminates a chunk of that work by making the firmware partition aware and letting it payload an executable directly.  EFISTUB Linux kernels or Windows 7+ can act as UEFI executables.
  • They use file extensions!  EFI payload binaries are .efi files.  Good, someone has some static typing sense.
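To make the redundant header point concrete, here is a minimal Python sketch (my own illustration, not anything from the spec tooling) that reads the primary GPT header at LBA 1 and prints where the backup copy lives.  It assumes a 512-byte-sector disk at /dev/sda and needs root:

import struct

SECTOR = 512
with open("/dev/sda", "rb") as disk:
    disk.seek(1 * SECTOR)                # the primary GPT header sits at LBA 1
    header = disk.read(92)               # the standard header is 92 bytes
    signature = header[0:8]              # should be b"EFI PART" on a GPT disk
    backup_lba = struct.unpack_from("<Q", header, 32)[0]   # LBA of the redundant header copy
    print("signature:", signature)
    print("backup header at LBA", backup_lba)
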
The Bad:
  • Secure boot.  The given reason behind it is to protect against malignant bootable executables replacing bootloaders or kernel payloads to run malicious software.  In reality, modern operating systems already are (and if yours isn't, it should be - your fault for installing a crummy OS) very protective of their bootloaders and essential system files.  It requires root permissions on Linux, and a bunch of security prompts on Windows, to alter the device contents holding kernels and bootloaders.  In practice, secure boot is just a wall to walled-garden the space of executable programs on UEFI equipped devices.  Especially because fucking Microsoft is the certificate authority that keys EFI binaries.  Come the fuck on.  If it had been an independent consortium (given the track record for independent key-signing sources isn't that great either... Verisign, ugh) it might have looked less obviously like Microsoft trying to lock down consumer hardware.
  • There is effectively no cross-platform, inherent support for backwards compatibility with msdos disks or AHCI boot options, and UEFI doesn't provide a boot device selection menu by default, just the EFI shell.
  • The EFI shell is dumb for a plethora of reasons.  One, you don't need a freaking shell on top of a configurable firmware interface.  You don't need arbitrary executables in the firmware.  You don't need a dozen utility applications that are useless when every operating system provides its own.  People are inertial and keep using what they know best; even if having an EFI memtest, prime95, meminfo, cpuinfo, etc. would be nice in theory, the shell was not well thought out and is mostly a rip of Windows PowerShell.  Hint: terminals in NT based OSes (and DOS ones too) were shit.  What you want is hardware configuration options in an ncurses-like interface, as every BIOS before had, plus the ability to select from all reachable EFI binaries what to run.  That would have been great.
  • EFI still doesn't solve the age old problem of device initialization.  Every modern OS still has to re-probe every bus and device hub to figure out what is available, even after the POST already found everything and handled it.  The EFI firmware must have its own video driver somewhere, since it drives almost any graphics output there is.  But the NT and Linux kernels will always use their own drivers instead.  The real problem is that the firmware on the mobo is a small, fixed-size, flashable ROM, whereas the OS kernels load device drivers from disk, since they can only situationally flash the board ROM.  In a perfect world, the board firmware would provide the device drivers to the OS, and those could be updated from firmware or OS without having to reflash the whole of the onboard memory.  We have great toggle NAND now; we should put that on boards instead.  That way, the firmware and operating systems could get drivers from one place, and the OS could use alternatives if it really wanted to.  (Note: I am a hypocrite, because I was just talking about inertia, and transferring the entire architecture of kernel mode drivers from being on disk to being a payload of EFI firmware is ridiculously complex.)  My point is that it is very dumb that the firmware will initialize almost every device with its own drivers (USB ports, PS/2 ports, networking, and video devices at least, because it needs to probe disks, find input devices, download its own updates over DHCP networks, and display the firmware interface) and then the OS will also use its own device drivers for the same hardware that was already initialized.

The Stupid: 
  • Graphical interfaces.  While not an inherent EFI standard, every major mobo manufacturer got the bright idea to at least quadruple firmware size to support vector graphics, gifs (I'd imagine?), transparency, fidelity, and a bunch of other eye candy features to give users GUI programs for firmware.  I am a stick in the mud here - the EFI configuration utilities don't add any real value to the experience.  The old ncurses style BIOS utilities that required keyboard navigation had the exact same usability, sans the mouse clicking these UEFI firmwares do, without all the graphics crap making payloading take microseconds longer than is necessary.
  • Secure Boot.  Because it's not just bad, it's stupid, because Microsoft will never get away with their master plan of locking down OEM hardware.
  • FAT32 being the EFI system partition standard.  It is an old piece of crap that is the product of Microsoft's filesystem ingenuity.  They should have developed a specialized, simple, low overhead filesystem for the EFI system partition that was an open standard.  We should have been using FAT16 (or another 16 bit sector pointer scheme), not FAT32, but Microsoft requires FAT32 to install Windows 7 in EFI mode, so there is the dumb overhead of 32 bit sector pointers on partitions that should never be over 256mb in size in any practical use case.
Overall, I'd still say EFI is an improvement as long as you never turn on Secure Boot (or mandate it... oh, Microsoft on ARM; even my Transformer tablet is more open, providing me a bootloader unlocker, and they never had to).  The idea of loading binaries instead of bootloaders is a great step forward in my book.

2012/11/06

A Guide to Asrock AMI UEFI FM2 Overclocking Options

I spent a while messing with various settings in the UEFI Asrock is using for Trinity APUs (I imagine the AM3 setup is similar).  Here is a summary explanation of what each feature does, in the hope it may one day be helpful to someone.

OC Tweaker
  • EZ OC Mode:  either manually configure overclock options or choose preset overclocks.  The presets, from my experience, are overzealous with voltage and reset RAM preferences.  They are designed to be "safe", but be careful using the highest presets, because they put the voltage at 1.4 plus an offset of .2, which can put you over safe operating voltages under load, especially if you use load line calibration, which can skew the voltage even higher.  That can get really dangerous in terms of processor burnout.  So I recommend going manual and tweaking individual settings.
  • Overclock Mode: manually tweak the fsb frequency, or leave it default and allow spread spectrum.  Spread spectrum slightly modulates the clock to cut electromagnetic interference on the various system buses off the southbridge, and overclocking the PCI bus is often dangerous because it can fry add-in cards or cause hardware transfer rate errors, since it also affects the SATA and USB interfaces.  You never need to overclock the PCI bus, and spread spectrum is fine at rated clock rates, so leave overclock mode and spread spectrum on auto.
  • AMD Turbo Core Technology: enables the CPU boost feature when only one core is maxed out.  Even if you do a hard overclock, you might want to try this, because it only becomes active when the other cores are underutilized, and the heat generated by one core going about 400mhz faster usually won't kill a system.  I enable it and take the aforementioned ~400mhz boost over the full-utilization clock.
  • APM Management: an enhanced version of cool'n'quiet that supports per-core downclocking and undervolting in a more granular and sophisticated way.  If you use cool'n'quiet, you want this on.  The tradeoff is that with underused cores powered down, revving them back up isn't instantaneous, and a lot of people have misplaced fears of the voltage shifts wearing out the chip faster.  Modern chips are much better designed for undervolting to save power, and the only time to heed caution here is if you have an absurd Vcore.  Leave it on to save juice.
  • Multipliers: You want this manual, else you are not really overclocking.  The goal is to get the base clock multiplier as high as possible while staying stable and with manageable heat and voltage.  One unmentioned aspect in most cpu stories is that if you downclock a processor enough, you can drastically cut the wattage by dropping the voltage to match - if you took an A10 that usually runs at 1.325 to 1.375 volts down to 1.25 or 1.2 volts and found a stable clock rate (probably around 3ghz), you could easily see wattage and heat output cut to around 60 - 65 watts, maybe 90 watts under load, compared to the stock 100 watts idle and 125 under load (see the rough calculation after this list).
  • Vcore and offset: Boosting voltage boosts overclock headroom for stable clocks, but also boosts heat and power consumption.  You want these as low as possible for a given clock rate while keeping the system stable at its maximum frequency.  Generally, the default configuration is overgenerous, and it should be tweaked through torture testing to find the lowest stable vcore at your desired clock rate.  You should actively try to adjust the offset rather than the vcore, because most power management functions operate as a fraction of vcore, so raising the base vcore reduces power efficiency.
  • CPU NB Frequency: this is the northbridge data rate.  You want this as high as possible while remaining stable, for maximum bandwidth between the memory and cpu, and the best way to know if it is excessive is if you get hardware errors on memory transactions that don't show up in memtest.  In my case, I could trigger total system failure at 2600mhz when running SIMD instructions off the gpu, because those are usually more bridge intensive than CPU based memory operations and want a lot more bandwidth.  Unlike some platforms, you can't explicitly set the northbridge or cpu base frequency, which is really a good thing, because both are integrally tied to a lot of underlying parts that can easily break if tweaked.
  • CPU NB/GFX Voltage: The gpu cores and northbridge have their own voltage.  As per usual, you want this as low as possible while maintaining a stable system with a given nb overclock and gfx overclock.
  • APU Load-line calibration:  This is used in modern cpus to offset vdroop.  Only use load line calibration if your voltage isn't dangerously high - traditionally, your given voltage is a fixed limit that the processor won't pass, but may dip below (droop) especially under constant heavy load.  LLC will up voltage to offset droop, which means your rated voltage isn't your limit anymore, but you have less chance of voltage dropoff under load.  So use LLC at low overclocks but avoid it with higher ones where the voltage spike it causes puts the cpu over safe limits.
  • GFX Engine Clock: This is the integrated gpu clock rate.  Like the cpu clock rate, you want this as high as possible without overvolting too much on the northbridge voltage.  I found that the gpu barely heats up compared to the generic cpu cores, and can go quite a bit higher than stock as a result, but that is also due to the northbridge voltage being lower than the cpu voltage.  Treat this independently of the cpu settings in most cases.
  • Memory Profile: JEDEC profiles are mandated for DDR and are base profiles meant to work on anything of a certain DRAM spec.  You want to use the XMP profile as the template the manufacturer meant the RAM to run at.  In my case, my Corsair Vengeance sticks were giving the wrong XMP tRAS value and I needed to tweak it from 30 to 27.  Try your memory's given ratings first; tightening the CAS and other latencies past spec usually isn't worth trying.
  • DRAM Frequency: has preset frequencies that go with the standard.  All Piledriver based chips are rated at 1866, but they all support overclocks up to at least 2400.  I had 2133 Corsair RAM, so I set this to the rated speed of the RAM; do the same for yours.  It is always a tiered thing, and the only way to really fine tune dram frequencies is to overclock the northbridge base frequency, which carries a lot of risk even where it is possible.  My P6T can, but my Pro4-M can't.  You might, for laughs, try putting RAM modules rated for something like 1866 at 2133, but most likely that will error out in memory tests, especially if you don't adjust the latencies and voltages upwards to compensate.
  • DRAM Voltage:  Part of the XMP spec is the rated voltage.  You want this as low as possible without memory errors.  Undervolting memory usually isn't worth much, because memory modules don't generate much heat (anymore, at least; old DDR2 and DDR ram was much hotter due to higher voltages), so stock rates are usually sufficient unless you are a real power nutter.  If you get memory errors in memtest or SIMD transactions that aren't related to northbridge frequencies and voltages, it might be an issue with your RAM voltage.  For example, those Vengeance sticks I was talking about would error out at a 2400mhz northbridge at the stock 1.5 volts, but 1.505 volts was sufficient to avoid those errors.
  • APU PCIE Voltage & SB Voltage: Keep these constant at 1.208 volts and 1.1 volts respectively.  You don't overclock your PCI or SB buses, so don't overvolt them unless you have a really crappy power supply with bad rails.  And if that's happening, you have bigger problems anyway.  So don't touch these unless you have explicit SATA or PCI device failures, voltage drooping, or performance degradation from too little power on these lines.  And like I said, that really shouldn't happen.
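As a rough sanity check on the downclock-and-undervolt numbers in the Multipliers note above: dynamic power scales roughly with frequency times voltage squared, so the savings compound.  This is back-of-envelope only, and the clock and voltage figures below are the A10 guesses from that note, not measurements:

stock_clock, stock_vcore = 3.8, 1.35     # GHz and volts, roughly stock for an A10
new_clock, new_vcore = 3.0, 1.20         # the downclock/undervolt guess from above

# dynamic power ~ frequency * Vcore^2
scale = (new_clock / stock_clock) * (new_vcore / stock_vcore) ** 2
print("rough dynamic power vs stock: {:.0%}".format(scale))                 # ~62%
print("a 125 watt load figure becomes about {:.0f} watts".format(125 * scale))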

Advanced 
Advanced CPU Configuration:  
  • Core C6 Mod: Enables the lowest power state of the processor (deep sleep).  It lets S3 suspend shut cores completely down at 0v rather than leaving the processor in a low power halted state.  Leave it on; modern systems support it fine and it isn't a source of instability.  If you need to turn this off, your motherboard or cpu is pretty much broken.
  • Cool'n'Quiet: The precursor to APM; it also supports memory downvolting.  If you use APM you want this on, and if you don't use APM you might still want it, because it is less aggressive.  AMD has supported CnQ for four processor generations now and it is a very mature power saving standard that won't burn out your chip by alternating voltages.
  • SVM: The AMD hardware virtualization support.  Keep it on; it allows you to run hardware accelerated virtual OSes and doesn't cost anything.  The only reason this option exists is to let you prevent virtualized guest operating systems from running (a quick way to check it from Linux is sketched after this list).
  • CPU Thermal Throttle:  This is the fraction of power to take the processor to when in S1 ram suspend mode.  You want this as low as possible without causing instability, and lower rates can make resuming slower because it has to restore more voltage difference.   Since most systems use S3 suspend as the default mode (power down everything except ram, instead of S1 keep everything on in low power mode and store everything in ram) you don't need to worry about this much, and auto is just fine.
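If you want to confirm from the OS side that SVM actually made it through the firmware, the flag shows up in /proc/cpuinfo on Linux.  A trivial check (my own snippet, nothing board-specific):

# AMD-V shows up as the 'svm' flag in /proc/cpuinfo when the firmware exposes it
with open("/proc/cpuinfo") as f:
    flag_lines = [line for line in f if line.startswith("flags")]

has_svm = bool(flag_lines) and "svm" in flag_lines[0].split()
print("AMD-V (SVM) available:", has_svm)
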
North Bridge Configuration: 
  • Primary Graphics Adapter: If you have a discrete card and are not using hybrid crossfire, you want this pointed at your discrete card instead of the integrated gpu.  There is practically no use case for leaving the integrated gpu as primary alongside a discrete card, except maybe in laptops where you power off the discrete card unless you have something graphically intensive.  But modern GPUs have really good power saving modes anyway.  You won't need to change this unless you know you need to change this.
  • Share Memory: Amount of system ram to allocate to the integrated GPU.  The way integrated GPUs currently work is that they deny the OS access to a portion of installed RAM and use it as a substitute VRAM.  Since RAM is cheap, set this to the max with any APU unless you are running a system with little RAM (why?).
  • Onboard HDMI HD Audio: Leave this on to support the HDMI port also sending digital audio.  Duh.  You want this if you plug in a tv or display with speakers via HDMI, but there really is no reason to turn it off.  If you don't have HDMI audio capable hardware, it just won't play anything.  Turning this off just gets rid of the audio device the OS sees for HDMI audio.
  • DVI Function: DVI cables can operate in an emulated HDMI mode.  So if you use an HDMI converter on the DVI port, use HDMI mode, and if you use a regular DVI cable, use Dual Link. 
South Bridge Configuration: These are case by case and self-explanatory options for the most part.  The options I have are for the front panel audio controller, the lan controller, and led controls.  Enable or disable features, mostly.

Storage Configuration:
  •  SATA Controller: You probably have hard disks connected.  You want the SATA controller on.  You might have an ATA controller here if your mobo supports it, turn it off if you don't have an ATA disk (the old slow crap from the 90s).
  • SATA Mode: Traditionally you have 3 choices here: hardware raid, ahci, and ide.  IDE is legacy and slow, ahci is the modern mode for multiple discrete disks and is probably what you want, and if you want to set up a RAID you should know what you are doing in advance.  AHCI provides a lot of benefits for monitoring and performance.
  • AMD AHCI BIOS ROM: This option loads a rom image containing legacy AHCI tables into ram, for backwards support, before starting anything.  Most modern operating systems don't use or care about these, but if you need them they are there (and you would find out elsewhere if you do).  You can try running it to see if you get more sensors out of it, but most OSes can detect AHCI 2 sensor data without the rom load or the extra boot time.
  • SATA IDE Combined Mode: Allows your "last" sata ports to act like a combined IDE port for PATA devices if you use a translation cable.  Almost certainly you don't need this because you don't have PATA devices, and if you did, you would have gotten a mobo with a PATA port.
  • Hard Disk SMART: If the AHCI ROM is enabled, you always have smart data available.  If you don't use the rom, turn this on to enable smart monitoring.  Smart is the ATA mechanism that gives you important diagnostics and testing for hardware integrity (a quick way to verify it from the OS is sketched below).
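To check from a running Linux system that SMART is actually reporting, smartmontools can ask the drive for its health status.  A minimal wrapper (assumes smartctl is installed and that /dev/sda is the disk you care about):

import subprocess

# ask the drive for its SMART health summary; look for
# "SMART overall-health self-assessment test result: PASSED" in the output
out = subprocess.check_output(["smartctl", "-H", "/dev/sda"])
print(out.decode())
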
Super IO Configuration: usually these are the enables / disables for the backplate ports.  The only things to worry about are IRQs and addresses.  IRQs are interrupt request lines; you want them all on so that the devices you use can raise interrupts as intended.  Addresses are the hardware locations where these devices are referenced; make sure they don't overlap (if your UEFI even allows it).

ACPI Configuration:
  • Suspend to RAM: I only have auto here.  You want this on.  If given the option, you want S3 suspend, it uses less power, turns off fans and lights, and most modern OSes support it natively.  S1 suspend is an ACPI power state that only downclocks all the buses and keeps everything running in a halt state, which is less efficient.
  • Check Ready Bit: Hilariously, this might be the only setting I am almost completely in the dark about, along with the rest of the internet.  The manual for my mobo is extremely useful with the description of "Use this item to enable or disable the feature Check Ready Bit."  Research says it might cause issues with ssds during s3 resume, and it has no practical benefit it seems, so disable it.  I didn't have issues with it, but I can't find a single benefit to this ACPI feature, so turn it off anyway.
  • Restore on AC Power Loss: You probably don't want this on unless you are trying to have an always up server.  If you are losing power repeatedly your system will thank you for not auto-restarting every time it regains juice and loses it again.
  • Power Ons: Lets you power on the system with various devices.  Should be self explanatory.
  • ACPI HPET table: The high precision event timer is the really useful periodic interrupt ticker on ACPI boards that lets the system keep time and synchronization much better than through software.  Turn it on.
USB Configuration: Self explanatory.  Turn on what you need and enable legacy if you need it for something that uses it.

Network Configuration: Deceptive options - they relate to the firmware software, not the motherboard network hardware per se.  These configure how the firmware updates itself from within setup.  Almost every router uses DHCP for NAT, so yours probably does too.  PPPoE is less used and you'd know if you have it.

So those are the options on the Asrock FM2 Pro4-M for an overclocker, in detail.  Hope this might be useful to someone.  Message me if you see any errors (I did some hazy research on some more vague terminology).

Addendum Update Notes:
  • Load Line Calibration effectively can not be disabled for me.  The four settings - default, 1/2 vcore, 1/2 nb vcore, and 1/2 vcore + 1/2 nb vcore - seem to only correlate to the voltage used as the LLC reference.  Intuition says that 1/2 nb vcore should be LLCing the northbridge, but that setting will overvolt the CPU cores as well.  Effectively, default is 1/2 vcore, and I think 1/2 vcore + 1/2 nb vcore just picks the greater of the two as its LLC reference.  This means you can take the actual CPU frequency really low (I had it down to 1.2, giving an idle voltage of 0.85 volts at stock, where the voltage would still ramp up to 1.45 under load due to LLC).  It doesn't seem to allow the chip to go under 1.45 under load.  Any overclocks past ~1.325 will cause the maximum voltage limit to rise, so that a 1.325 voltage with a .1 offset will run around 1.6 volts (which is getting pretty dangerous).
  • APM and Cool'n'Quiet love constantly changing C-states even under load.  Voltage and frequency can be pretty variable.  I still leave them on, because by now the overhead is almost nonexistent (they are honestly downclocking in between instructions at this point, if the pipeline indicates a lull, I think), and as long as I'm only pushing 1.6v under load I'm not that worried.
  • The northbridge and GPU can also take really low voltages at stock.  I got them down to 1.125 and the minimum 1.194 respectively without any instability, but there might be some LLC going on there too at low thresholds (especially if you have nb vcore llc on, but I can't live test the northbridge voltage).
  • The recommendation is still to set the core voltage as low as possible without causing the cpu to be unstable at its lowest cstate, but also consider that LLC will always force the cores back up quite a bit.
  • Firmware 1.8 enabled 1 and 2 gigabyte video memory sizes.  I'm sticking with 1gb, since this thing runs off a 1080p screen, and 8 gigs of ram leaves a lot of room for an HTPC machine anyway.  It also lets you force EFI mode by disabling the CSM in the ACPI settings.

2012/11/05

Dumb Sounds and Smart Screens, and Connectivity Standards

Edit: I just want to preface this blog, after rereading it, by saying I do know you can send an analog or digital signal over most cables, since they are just electric pulses any way you slice it.  One encodes data in the continuously varying level of the signal, the other in discrete symbols.  I contrast them starkly a lot, and I mean it in terms of the practical utilization of cable standards (for example, TN phone cable is really crappy digital cable since it has terribly low bandwidth).

Today there are quite a few "standard" connections for a wide variety of technology.  Most of them boil down to analog or digital, though.  For example, 3.5mm audio is almost a hundred years old as a design and is just a varying voltage representing the height of a sound wave.  Digital signals are barely different - they are still electrical signals over a wire - but they are interpreted with a discrete principle: the presence or absence of charge indicates their state.  It is why spdif requires a sound card on the receiving end to re-encode the signal as an analog output, while 3.5mm audio is just the raw sound signal.  For the same reasons, an analog signal becomes distorted over distance if not modulated, while digital signals can go longer distances (some ethernet standards can go kilometers).  And you can tell the difference because all the digital standards are measured in bitrates, while analog cabling is always fixed to some interpretation standard (analog video off coax or rca cable, or off vga, for example).  A toy example of the split is sketched below.
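Here is a toy illustration of that split (purely my own example, with an absurdly low sample count on purpose): the "analog" side is a voltage at every instant, the digital side is the same wave chopped into discrete numbers that get shipped and reconstructed at the other end.

import math

SAMPLES_PER_CYCLE = 8                            # absurdly low, just to keep the output readable
analog = lambda t: math.sin(2 * math.pi * t)     # idealized continuous voltage
samples = [round(analog(n / SAMPLES_PER_CYCLE), 2) for n in range(SAMPLES_PER_CYCLE)]
print(samples)   # roughly what a digital link like spdif ships; a 3.5mm jack ships the raw wave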

One really silly reality is that modern computers almost always use analog modular sound and digital video via hdmi or dvi.  For some dumb reason, dvi has some backwards compatibility with vga and supports analog signals on some of its pins at some low video resolution modes.  Clock rates play a big part in cable distance in the digital world; dvi runs around 400mhz on copper, so it is limited to a short distance compared to the often fiber based, long range 550mhz ethernet lines, though those carry less data rate.

The big deal in the end is the tradeoff of bandwidth against distance traveled in digital land.  The same thing happens in the wireless world, except with smaller numbers, because of much higher environmental interference in the air.  But I don't get why we can't just have standard digital interconnects that span the spectrum of uses they really translate to.  What we want is the greatest stable distance with the lowest power draw, transferring the maximum data rate at the lowest latency.  Each of those four quantifies some use case, and each deserves a specialization - really, we have that already.  Distance is coax, power draw is ethernet cat (which blurs with distance, since they are synonymous in many ways), bandwidth is hdmi / dvi (6 gigabit, but ethernet is getting competitive, so like I said, lines are blurring), and latency is covered by most of them (they are all mostly low latency - usb has higher latency since it does byte conversion on all transfer nodes).

In honesty, usb sucks.  It has a really short distance before distorting, its real-world bandwidth is modest (USB 2 tops out at 480mbit, and even usb3 rarely gets near its 5 gigabit signaling rate), and it has higher latency because it is a translation layer.  Ethernet is already packet based, it just doesn't provide power.  Is it really that expensive for a digital packet interconnect standard to carry power?  If so, why does SAS work at 6 gigabit (albeit a SAS or SATA cable has a much shorter range limit than any of the above)?

You really end up with two use cases then: maximize bandwidth with minimal losses of distance, power, and latency; and maximize distance with minimal losses of bandwidth, power, and latency.  You want low latency and power draw, and preferably the ability to provide 5v or some such power over the line (albeit that, by the example of SAS and USB, really costs distance, so a long distance interconnect is almost guaranteed not to also carry power due to noise problems, and shielding the two layers would be expensive).

So what do we end up with?  I'd say something like SAS + ethernet.  And cat ethernet cable is really starting to pique my interest, since we are now talking about 100 gigabit ethernet, with room for terabit.  The bandwidth on these cables is huge, the maximum range is huge as well without signal degradation, and all it takes is intelligent enough tvs and monitors to decode packets.  Really, it isn't hard for ethernet packet systems to just ask newly connected devices what they are and provide accordingly.  The only real issue would be a network adapter thinking it's on a router when it's on a display, and you get noise on your screen, or vice versa.  It just means you need good handshakes.

I guess that also validates the reasons for having divergent systems  in place for each device.  Even though usb blurs them all since it seems to not care what goes over the line as much.  Ethernet controllers want packet data, hdmi and dvi want display packets, usb wants device packets, etc.

I really started thinking about this because usb is a train wreck.  USB device drivers are needlessly complex, slow, and in general suck.  They all draw from one device controller's bandwidth and power, they have terrible transfer rates and distance, and seem to only be a standard for the sake of it.  They were first to market with plug and play powered ubiquitous devices and won, and I am really surprised nothing is competing.  Because USB is shit.

And then there is the split of digital displays (vga is too restrictive a medium, which is interesting in and of itself - with 4k resolution on the horizon, it is really interesting how analog can't support that) with analog audio.  Well, hdmi carries digital audio, but the idea of digital display data being decoded by screens while headphones and speakers get raw audio sine waves seems like an undue disparity.  Then again, they do represent fundamentally different things - after all, photons are particles and pixels are inherently discrete and digital, but sound is a wave and is inherently continuous and analog.

So it is an interesting thought experiment, at the least.  Still can't believe we can't do better than 3.5mm modulated voltage copper wires for sound transfer, though, in a hundred years.  Sound seems like it was always an easily solved problem, so why the hell can't ALSA get hardware mixing right?

2012/11/04

Software Rants 4: The 21st Century Terminal

I can't use Vi or Emacs because both of them have their own keybind sets that can fill an entire textbook and I don't have 3 years to learn this stuff.  I just want to write code.  Shouldn't be that complicated.  Same thing with shell script (if ['""'''"]; then fi needs to be shot out back).  Same thing with html and javascript too, really.  They are all forced standards that grew out of much simpler systems that worked much better in their conception as something less shitty and less kitchen sink than they are today.

So I like zsh over bash for no really apparent reason because I don't use any of its built in features, but probably because it is not the ubiquitous default.  Assumed defaults are dangerous and often show the greatest cracks in the paint for improvement.  So here is my ignorant rant about terminals.

TTYs are garbage.  The fact that X is run on TTY7 makes me feel nauseous, because 99% of Linux users never end up using Ctrl-Alt-F1 through F6 to understand what the default terminals are for.  The TTYs use their own graphics layers, font scaling, smoothing, resolution and such, because they are completely disconnected from the modern graphical environment built off X or Wayland or whatever your graphics stack wants to be.  They add significant undue complexity because they stand starkly apart from what anyone expects in a "modern" os.  Terminal emulators are just X frames designed to trick bash, zsh, etc into thinking they are running in a tty environment.  If you have a faulty system you might get dumped to a tty with some random program running in place of whatever your chosen sh might be, but you don't have anything wrapping your terminal to guard you; you just have a terminal, some daemons (NOT NORMAL PROGRAMS, NO, YOU NEED TO INTERACT WITH THOSE THROUGH ARBITRARY INIT CONTROLS!) and your shell, and anything you run with it.  Maybe you fork something.  I wonder what tty7 would look like with all the standard pipes it deals with.

The root problem is a disconnect between a 3d graphical environment and a display environment.  Two decades ago, nobody had the 3d stuff and assumed that VGA was the be all end all of display technology.  The modern kernel has kernel modules for nvidia, radeon, fglrx, intel, or nouveau to act as display controllers, but not necessarily 3d rendering engines.  They don't inherently provide opengl hooks - those come from X - but the entire stack looks like a big mess of multiple entry points into what is in the end the same hardware device.

In my theoretical perfect OS, you assume the presence of something that can run some standard graphics ABI library.  Even the simplest terminal runs on "hardware accelerated" (aka gpu bound) code that the system can just run over an abstracted pipe layer if there is no dedicated graphics hardware (but by bacon, I'd mandate some kind of heterogeneous processing node that supports small, massively parallel SIMD cores and large pipeline, highly cached, general big cores on the same die in the same ALU and FPU soup).  You would have a tiny microkernel of CPU scheduler, memory controller, virtual memory manager, and hardware device hooks.  It payload-runs something provided to it at boot, be it a network boot host controller script, an init framework, or a shell executable.  If your init doesn't set up the hardware controllers for the various devices the OS can see but doesn't overburden itself to manage (because in the end, all that hardware is being provided by firmware as virtual memory maps anyway, and you can just hook those into user space applications that manage them), then that hardware just never does anything.  A kernel need be no more than "hey, the cpu is running, I'm scheduling executing code, maintain a page table, maintain a process table, schedule the execution of stuff; here is the rest of the devices the firmware gave me to play with, someone else can have them".

So you have an init framework.  Its job would probably be simple too: review the kernel device hooks, find the device drivers for the hardware, and bring them together in matrimony.  Along the way, you start a virtual file system server, device handlers for physical devices, and device controllers for any actual hardware storage devices.  The VFS handles translating file system calls into device lookups or other applications.  You would probably also have a socket server that everything else relies on to make RPC calls across the system, and you can use the VFS to reference these sockets as files.  Because everything is a file (a toy example of that idea is below).
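As a toy illustration of sockets living in the VFS as files: a Unix domain socket gets a filesystem path like anything else, and the RPC layer is just bytes over it.  The path here is made up for the example.

import os
import socket

SOCK_PATH = "/tmp/init-rpc.sock"        # hypothetical path, just for the example
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)                  # after bind, the socket is visible in the filesystem
server.listen(1)
print("rpc endpoint lives at", SOCK_PATH, "- exists as a file:", os.path.exists(SOCK_PATH))
server.close()
os.unlink(SOCK_PATH)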

But you have your kernel of memory management and process scheduling, your device controllers, and then some abstraction layer servers that act as arbiters of device control, like the vfs server, a socket server, a display server, a sound server, an input server.  And once init is done, it could launch either a shell or a desktop manager of some kind.  Or nothing - it could just let the system sleep.  Or maybe you payload prime95 at a low execution priority to saturate the cores when idle.  This shouldn't be rocket science; it should be a configuration file in some standardized serialization format (probably like json) that just reads devices : discover { graphics : radeon, default : auto } (default auto would just deduce the controller for a device given kernel info about it, etc), controllers : {vfs : some-userspace-vfs-host, display : xorg, network : network-manager}, payload : /bin/bash, or /bin/gdm, or something (see the sketch below).  Not hard, just some specialized tags like xml has, and documentation wouldn't be hard.
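A hypothetical rendering of that exact config as json, plus the trivial amount of code an init framework would need to read it (the names and paths are the made-up ones from above, nothing real):

import json

CONFIG = """
{
  "devices":     { "discover": { "graphics": "radeon", "default": "auto" } },
  "controllers": { "vfs": "some-userspace-vfs-host",
                   "display": "xorg",
                   "network": "network-manager" },
  "payload":     "/bin/bash"
}
"""

conf = json.loads(CONFIG)
print("payload to exec once controllers are up:", conf["payload"])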

Only hard part would be that the init framework would need to figure out where and how to start a vfs server without actually having it running.  Maybe that also belongs in the kernel, it seems important.  Virtual file system, virtual memory, virtual execution seem like consistent concepts in kernels.

Anywho, your payload of whatever is now running with some kind of display controller available.  In linux-space, opengl would be an assumed trait, not some tacked-on X sugar.  And your terminal could be graphically greater than just bitmap glyphs on some pixel grid.

One of the greatest weaknesses in a modern terminal is the difficulty of translating an idea of computation into a tangible thing happening.  Zsh has some primitive recommendations about what you typed if it can't figure it out, but that is still nothing big.  I'm talking eclipse-style completion: as you type, you get a list of available completion terms (not just tab-cycle-and-hope-for-the-best completion), with composed menus overlaying the terminal showing options as you type.  That alone is huge, but reactive search would also be nice; it already exists in some terminal apps (top, for example, live-reduces input search terms nicely) but the base terminal itself should be thinking about what you want.  A rough sketch of the idea is below.
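For what it's worth, the as-you-type completion idea can be prototyped in userspace; one way in Python is the prompt_toolkit library, which pops a completion menu under the cursor while you type.  A minimal sketch (the command list is just an arbitrary example):

from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter

# a menu of candidate words appears as you type, no tab press required
commands = WordCompleter(["git", "grep", "gzip", "gcc", "gdb", "gdm"])
line = prompt("> ", completer=commands, complete_while_typing=True)
print("you typed:", line)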

And none of this really screams "oh noes, the embedded world is ending", because on a 20mm die at 22nm you can stick 25 million transistors, more than a Cell processor.  That thing runs a freaking PS3.  That is a tenth of some of the smallest desktop fab standards today, which have 10 to 100 times the transistors.  By the time this theoretical system could be vertically integrated, if I got a hundred billion dollars tomorrow to do it, standard fab tech would be at most 6 - 10 nm and be at least 10 - 20x more space efficient.  So a 5mm chip could outperform an i7 920.  A quick check of that scaling arithmetic is below.
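A back-of-envelope check on the scaling guess, under the idealized assumption that transistor density goes with the inverse square of the feature size (real processes never scale quite that cleanly):

# density gain relative to 22nm, assuming ideal area scaling with feature size
for future_nm in (10, 6):
    density_gain = (22 / future_nm) ** 2
    print("{} nm vs 22 nm: ~{:.0f}x the transistors per area".format(future_nm, density_gain))
# prints ~5x at 10 nm and ~13x at 6 nm, the same ballpark as the 10 - 20x guess above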

Eventually you can't argue against sane defaults.  In the end, you can just set graphics : false, and not load a graphics driver.  You can skip any device host you don't want anyway in this theoretical init system.  Of course you would use socket activation on the socket server (might end up sticking that in the kernel... oh dears).

Terminals being dumb text interfaces is getting old.  We can do better.  We should do better.  I'm too dumb to do things the hard way!