2012/10/27

Trinity Build Notes

 Random observations, in no particular order:

Observations 1: Saturday Oct 27th, 22:00 hours.  Day after build, running off a live usb stick.
  • NCIX took forever to process and ship the order because something was missing (?); packaging was good, though.
  • The Arc Mini case has shitty rubber feet - they are just glued on, and one popped off when I rotated the case on carpet.  Gonna glue them back on with something beefier.
  • The Arc Mini's rubber grommeted cable management holes are flimsy.  They work so far, though.
  • The Arc Mini has very little clearance behind the mobo tray for cables.  They fit, though, and the side panel isn't warping; seems sturdy.
  • The Arc Mini has really nice detachable dust filters.  Might want to get another 2 fans (one front, one top) since the stock ones are already really silent.
  • Amazing temps out of this thing so far. After 2 hours of prime95 torture at stock frequencies the CPU peaked at 41c, and averaged between 39 and 40.  Board never got hotter than 27c, and ambient is 26c.  Overclocking will be minimal, but I'll do some since the thermals are so low.
  • Hard to tell whether the PSU voltages / wattages are good.  lm-sensors returns rubbish because the ISA adapter support on the ASRock board is crap.  Don't really care; the board was cheap and is fit for purpose.
  • A Hyper 212 Evo will overlap DIMM slot A1 on this board.  If I get another two fans, I'll step the total up to 4, two of them slim (probably 2x 120mm, though I might try a 140 on the side panel - the heatsink overlaps some of that fan mount), so I can put a slim fan on the pull side of the heatsink and still have room for all 4 DIMMs.  I could also get low profile DIMMs; the Corsair Dominator ones have crappy cheap plastic heatsinks that take up a ton of room.  If I get more RAM, definitely go low profile.
  • Fan control doesn't even matter on this case (it came with a fan-speed knob on a backplate that fits a spare expansion slot at the back); the fans are silent any way you slice it.  It is somewhat annoying that the knob runs off 4-pin power and thus can't be driven by firmware / OS fan control.  The board only has 2 more fan headers anyway, and one of them is "CPU2", which I imagine is meant for dual fan CPU coolers.  The single other header is near the front.  A dual fan cooler configuration would suck, because the mobo only supports one extra fan, and those are the fans I'd most want speed controlled, since they would make the most obvious noise (again, if there were any).
  • The bundled CPU cooler was silly.  It would probably be pushing 60c under load instead of 40 and be loud as expletives, because it only has something like a 60mm fan.
  • Cable management could have gone better: the backplate fan controller wires just lie on the PSU, and the mobo headers all jut straight out of the board vertically, so every cable bends sharply to reach the cable management slots.
  • The hot swap drive bays are nice.  They aren't "really" hot swap, since you still need to mount the drives, and there is nothing holding the cabling behind them in place, so if you unplug and remove one you have to keep track of the wires somehow.  But they are significantly better than my old Armor case's rigid metal drive cage, where every drive gets screwed into the cage.
  • The memory is so far running solid at stock frequencies.  Hoping to OC it to at least 2200; data shows it really ups APU performance.  Don't really want a voltage hike, though - DDR3 is finicky over 1.5V.
  • The ASRock UEFI BIOS is OK and sucks simultaneously.  This is my first real hard look at UEFI: the mouse control is convoluted and unnecessary, and because it uses a full graphics stack rather than a plain text terminal, it takes noticeably longer to start and feels sluggish, presumably because it doesn't have very sophisticated graphics drivers.  It is the "future" though, so might as well deal with it.
  • The UEFI Shell is still awful.  The syntax is convoluted and arbitrary; given that they had a few decades' worth of shell experience to draw on, they really bombed on it imo.
I'll keep writing crap on here to summarize thoughts on this thing as I go along.

2012/10/24

Political Ranting Part 3: How My Republic Would Work

One week from the 2012 elections, yadda yadda.  So a lot of people are yelling about jobs and crap that the federal government of a republic has no business dealing with.  After watching the 3rd party debates, it seems obvious (well, duh?) that no major presidential candidate or party acting on the federal stage today recognizes the fundamental flaws in the way the American system is implemented.  And it isn't something shallow like first past the post; it is deeper than that, and it requires some reflection on history.

Way back when this country was founded, it was populated by a few million people.  As things originally worked, white men got to vote every year or two in local, state, and federal elections, and most of those votes were for local people taking on local governmental duties: the sheriff, a judge, the mayor of the town, the town council, and so on.  Each state had its own way of choosing its regional governance - the state legislature and the governor - and the only federal thing any average citizen had to worry about was a vote for their local House representative.  The House was interesting in that it originally had a variable number of members, and the only rule in the Constitution was that you could have at most one member per 30,000 people.

The modern House seats one representative per ~710,000 people (not necessarily voters, just census residents).  Back when the Constitution was written, the House had 65 members.  The 1790 census, per Wikipedia, counted approximately 4 million people in the country at the time - about one representative per ~62k people.  The House is now fixed at 435 members, and one representative "represents" over 10 times as many people.  The problem here should be obvious: as individual representatives represent more and more people, they lose connection to their constituency and see them as statistics and numbers rather than individuals.  Your vote and voice matter less in the House because you are encompassed in a larger group.  So back in the dawning years of the nation, you could expect to have a larger voice in the House, simply because there was more representation per person when there were fewer people in general.

That isn't really the big problem here, though.  Back then, you only voted for a House representative.  Your state would appoint senators and members of the electoral college to elect the president.  You didn't need to worry about those - and today I would say that would be so much better than what we have.  In the worst case, California, you have 2 senators representing almost 38 million people.  The electoral college system and the tremendous stupidity of the average voter mean that only around 8 states matter in the presidency, and the recent staged parody of a debate season only shows how dumb voters are for swinging between two politicians with established political careers - as senator-then-president and as governor, respectively - and taking their words as meaning anything when we have years of their actions to refer to.

It doesn't help that the original purpose of the electoral college, and of the allocation of Senate seats, was to appease states that at the time were small and were acting as independent nations.  It was a concession made not for liberty or good government but, like the 3/5ths compromise, just to get a functional federal government in the first place.  Today, these mechanisms harm us greatly, and the direct election of these people makes it so that you can't expect them to have any concept of who they represent.  No wonder they only hear the voices of extremely wealthy special interests and the campaign donors that line their coffers - they can't possibly even begin to comprehend how many people they "represent" when it numbers in the millions.  And no wonder nobody can compete with the two party system, because the amount of money it takes to reach that many people and voice positions is astronomical.

The problem is simply one of scale and size, because in the end, politics is dealing with human beings.  Your average citizen does not have the time (hell, I'm unemployed and it took me around 30 hours to research the major ticket placements I get to vote on in PA this year and compare the candidates for each - candidates who really like obscuring their positions on whatever it is they are being elected for, I wonder why) to pick individuals for dozens of positions every election cycle.  They don't have the time to wade through the bullshit rhetoric heaped upon them all the time.  No wonder we have parties and spin dominating discourse rather than intellect, reason, and sound policy.  You have individuals voting for way too many things, so of course they eventually break down and vote party lines.  You have first past the post elections, so of course you get two party systems that are easily corrupted and exploited by wealthy interests, who can just funnel money into one of two places and expect wide reaching political benefit because the parties act as hive minds (especially the Republican party - as it goes further and further right and gets more extremist, only the core remains and it seems to speak in one voice now - but the Democrats are just as spineless and bought out by their own special interests).

You have politicians representing way too many people directly, who do horrendous things to secure reelections, who are funded by massive corporate dollars funneled through their parties to maintain control, and who can never fathom conceptualizing all the people they represent as individuals rather than statistics.  You have outdated mechanisms of election designed around appeasing disparate foreign powers into consolidating into one nation rather than being around individual liberty and voice.

So unlike everyone else bitching about this problem, here is my solution.  I imagine it is gravely flawed and shows off my ignorance in droves, but it is better to be loud and ignorant than to be ignorant and in denial of it.

1.  People need to be electing as few people as possible, but have as much influence on those elected as possible.  Your voice should be heard, respected, and considered, and it should have implications on every level of politics regardless of viewpoint.  You can kill two birds with one stone here - use CGP Grey's (amazingly smart guy) binary partitioning scheme on the population until you are dividing it into roughly equal segments of somewhere between 1000 and 1500 people.  I'd argue that is about the "average" limit a normal person can hope to actually relate to their constituency on a human, personal level, without being too much overhead per person (if you were electing a representative per 10 people, supporting 10% of the population as politicians would be infeasible; you want to minimize the economic overhead of governance).  So every 1000 - 1500 people elect (now here is something equally radical) someone for a two year term, with no chance of reelection, as a sitting representative on a local council of 10.  The council controls local law, and levies a local monetary transaction / consumption tax - that is, a tax on the exchange of money between parties.  You could probably make it, nationally, the burden of the receiver to pay a percentage of money received as a local transaction tax, and make it constitutional law that localities can only tax in this way, in one direction, while they control the tax rate.  Unlike our current broken state sales tax code, it would apply to any money coming into the pockets of residents of a locality (note, these localities are approximately 10k to 15k people), and citizens are expected to pay a percentage of the money they receive.

That includes wages, it includes physical property sales, it includes capital gains, inheritance, and Christmas cards.  No exceptions.  When money one person has goes to someone else, a percentage is expected to be given as local tax.  Of course you could never process all of these and make sure grandparents don't give their grandchildren $50 without the kids paying their reception tax on it.  Money between private citizens, I'd imagine, would go untaxed, because you just can't feasibly tax that.  Money involving groups would be taxed.  And groups are an important concept in my entire political ideology, because at the end of the day, most of the work of government is to protect individuals from groups.

So the transfer of wealth through business (including estates) would be taxed locally, in that a percentage of money you receive from any such source must be paid annually to your local council.

Here is the beauty.  That accounts for a lot of money.  And if you have sufficient social mobility, people will move to areas with lower tax rates on purpose, and some will pay higher taxes for government services they like.  You could easily have a 1% tax of this kind pay all of a local government's dues.  Of course, wealthy neighborhoods will probably have a very tiny tax rate of this kind.  Good on them!  It is their local tax, and if their local government makes good decisions that attract people to live there, those good ideas grow and spread.  It is an organic trial and error of policy and taxation that lets people move to where they agree with the ideologies of the area.  And since it is at such a small scale, the influence of individuals on their local councils is very high, so good ideas can be heard and attempted.  And it doesn't cost millions to get a message across to a constituency.

This is a lot like city states.  They were tight knit, closed off communities with their own policies.  People would move between them, toward the prosperous ones they agreed with.  The only weaknesses of city states were disputes between them, civil liberties within them (small groups are very prone to mob rule and witch hunting), and defense (because a small group of people has a hard time fighting off a big one).

You can solve most of those issues today though, just through progress, innovation, and technology.  You still have a nation instead of independent localities, so you have a constitution and higher courts.  You absolutely have codified national law on the topic of civil liberties, and the abused and discriminated against can appeal to higher courts than the local one for protection and justice.  You still have a national military for protection of any locality, and you would expect the same public outrage about abuse of such a military (... heh, like that actually happens when the US military acts barbaric abroad).  You have instantaneous communication - nobody is isolated anymore.  And economics are global and globalized, so you don't have traditional issues like food shortages or price disputes, since everything is on an international market anyway.

Disputes between localities are a state issue.  Plain and simple.  The system I propose is built like a pyramid, in many ways like the original Constitution envisioned, but today we have all politics operating at the top and having wide reaching implications for everyone, while local and state governments barely write laws yet exert more influence over the actual lives of citizens just in how they fund road repairs and schools and tax property.  You want tax law to be national and flat, and you want local policy experimentation to find out what works before you force anything upon everyone with no choice in the matter.

And choice is important.  If you can be socially mobile, you can go where you like the policies of the state or local government, rather than be stuck with nonsense like NCLB, the Patriot Act, or the NDAA.  If you don't like Obamacare, under such a system it would be a local law, not a national one, and healthcare could be based in regions (or states... getting to that).

So how does state and federal law work here?  If each locality has a council of 10, that council elects 1 member from amongst themselves when they are first elected (so state representatives are still directly elected officials, just chosen by the council members once they convene and reach a consensus).  And since there are no reelections, it would be really hard to rig a state elector body to be predominantly of one political viewpoint unless that reflects the viewpoints of the people at large - which is an emergent behavior of this system.  As people organically move to like minded ideological areas, the higher levels of governance surrounding them would start adopting the common law they all agree on, and you can expect larger like minded communities to emerge beyond just the local level, over time, slowly, and with lots of room for objection and redirection if policies don't pan out.

So you have a "state" of another council of 100, which represents 1 million to 1.5 million (mathematically, if localities have a normal distribution [which doesn't happen because you are bilaterally splitting the population anyway, but this is just a thought experiment now - you would actual have very closely sized localities everywhere under proper mathematical partitioning] of 1k to 1.5k people, as you compound additions of these distributions you get statistically much more likely to be at the median of 1.25k people with very low variance).  These states have a few restrictions - they can only tax at up to 1.5x the lowest local tax rate, and they can't borrow money (localities can, because they might need to perform local damage control - that is fine, it is a small enough group to handle variability of finances, but states and federal governments can't borrow and perpetually spend deficit without significant bad economic ramifications from inflation, and I'll talk about currency and exchange in a later post).  So the idea is that you would have a state war chest, or a finance campaign from citizens to fund things they want that aren't a part of the tax code, or when the state needs emergency funds, they could just run a donation drive.   This is much more sound economics, and much more "free", since it means individuals and constituents would be funding their own state policies that are beyond the tax code.

Now, I don't know if I would want state fundraising to be state-resident only, or open it up to the fed, or allow arbitrary contributors.  Once you start adding arbitrary money flows into the mix, you add a lot of potential corruption, and can easily end up with state dependence on federal dollars.  But if you only let a state's own citizens contribute to its budget, and it runs into a desperate need for money it can't raise even with citizen involvement... honestly, that is still the better way to do things.  States need to show the restraint not to overspend, and if they don't, they reap the consequences of their actions and the citizens can rebuke them appropriately.

Since state councils are fixed at a large size, you can imagine discourse is less fluid but more open to floor debate.  That is necessary: state councils can't personally comprehend the people they represent, much like current politicians can't, so any policy at the state level needs to be slow and laborious to get through.  And since representatives come up from localities, you aren't going to have arbitrary party lines drawn, so there would be much more contention over policy.

The other important state-level requirement is that any law or policy change under consideration must already be approved or in place in at least half of the localities the state oversees, and fewer than a third of localities can be in direct opposition to it through their own policies or legislation.  Meaning at least 51% support and at most 32% opposition.  That makes it hard to get laws passed at the state level, which is intentional.

After states, you have a federal level, which isn't of fixed size and is composed of 3 representatives per state (meaning states are actually councils of 997, ensuring no tied votes, just as localities have 9 per council so they can't tie either).  Federal law, like state law, requires that at least 66% of states support it and fewer than 25% oppose it.  This makes federal law extremely hard to pass, on purpose.  If you took the current USA at 310 million people, you would have 744 federal council members.  Which is bigger than the current Congress, but not by much, and their influence is very restricted.
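
To sanity check those scale claims, here is a quick back-of-the-envelope sketch in C++ (my own arithmetic, using the midpoint locality size of 1,250 people; it lands within a few seats of the 744 figure, which assumes states of exactly 1.25 million):

#include <iostream>

int main() {
    // Rough tier math for the scheme above; all figures are the essay's
    // own assumptions, not census-grade numbers.
    const double population         = 310e6;  // ~2012 US population
    const double peoplePerLocality  = 1250;   // midpoint of the 1,000 - 1,500 range
    const int    localitiesPerState = 997;    // odd, so state votes can't tie

    double localities     = population / peoplePerLocality;   // ~248,000 local councils
    double states         = localities / localitiesPerState;  // ~249 states
    double federalCouncil = states * 3;                       // ~747 federal members

    std::cout << "localities: " << localities
              << "  states: " << states
              << "  federal council: " << federalCouncil << "\n";
}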

Each local, state, and federal tier picks citizens from whatever jurisdiction it controls (other than its own members) to fill the judicial, executive, and managerial roles.  I would imagine the first month of each election cycle might be consumed just by generating a full government, and until positions are filled the predecessor stays on - but it can't be held off indefinitely, so there would need to be some constitutional mandate that councils fill government vacancies within 30 days of the departure of the previous office holder, or some such.  Offices, unlike elected officials, could probably be renewed, so in effect the councils are the bosses of every other government job and every 2 years can replace those they find ill suited for their work.  You could also let councils "fire" people with a voting majority and replace them immediately.  Electing or choosing new personnel to fill roles is only required once every 2 years, though.

So I'll stop there.  It gives the general idea I have.  People elect only a local councilman, but that councilman may end up serving on the national council.  They can't be reelected, so every 2 years every 1 - 1.5k people pick someone new to play councilman.  With that few people, you aren't voting on party or rhetoric, but on people you know in your community from experience.  You wouldn't need campaign ads or any such nonsense; someone who wants to run for council could just go door to door, or the locality could just hold a town hall debate between the people who want to run.  It is the most organic election system I could imagine, and if you layer the higher levels of bureaucracy on top of the local councils, you allow for maximum legislative experimentation with the most fairness for everyone, as long as people are mobile enough to move to where the ideologies agree with them.  You want somewhere for every political viewpoint to go and live the way they want, with the minimum of executive and state overhead and mandate that isn't already agreed upon and in law by the localities.

The federal government still controls the military, and the states probably control the police, so the problems of protection are still dealt with.  State and federal governments can only tax at 1.5x the highest rate of their constituent bodies, so taxes are really tight, on purpose.  Federal and state governments can't deficit borrow, and must finance any post-tax policies through fund raisers and donation drives.  Since the fed controls the military, it still has the ability to take "extreme immediate action" when needed, but that really is the only essential role of the federal government in a sound republic, in my book.

2012/10/23

AMD Rant about CPUs and whatnot

 This whole wall of text was a blithering comment I wrote on HN about AMD.  The original thread is here.

I don't buy this story at all (that AMD's time is running out), mainly because AMD never could fight Intel in a straight up fight.  AMD is at least an order of magnitude smaller a company than Intel - so much so that Intel spent more money on R&D last year (http://newsroom.intel.com/community/intel_newsroom/blog/2011...) than AMD makes in total revenue (http://phx.corporate-ir.net/phoenix.zhtml?c=74093&p=irol...).


That isn't even about monopolistic business practices, decisions, or market forces. You are comparing two companies operating on effectively different planes of existence. Intel owns the instruction set, has the most advanced silicon fabs in the world (and still makes their chips in house) and spends more on R&D than AMD even makes. And all Intel does is make CPUs.
Meanwhile, AMD bought ATI and took a tremendous gamble on APUs.  They are just starting to mature their APU line with Trinity in the last few weeks, and are still reeling from integrating two large companies like that.  They had to sell off their own fabs, and couldn't even make their most recent generation of GPUs at GlobalFoundries because it isn't keeping up anymore.  On the graphics front, the 7000 series cards (from my objective viewpoint) basically crushed Nvidia for the first time in a while.  They were first to market, as a result didn't have major shortages, and cut prices at the appropriate times to keep their products competitive.  It took Nvidia almost half a year after AMD to get their GPU line out, and at competitive prices their chips are almost exclusively OpenGL / graphics devices: they get beaten in GPGPU work by the old 500 series and easily by the 7000 series, because Nvidia went for many limited-pipeline cores instead of the more generic cores in the 500 and 7000 series that were better at arbitrary GPU compute tasks.

So they are doing really well in graphics, and their APUs are really good graphics chips too.  The only flaw in AMD right now is that they are floundering on the CPU front as badly as Nvidia did with their graphics line.  Their CPUs eat power, they are effectively 1.5 generations of fab tech behind, and the Bulldozer architecture is weak in floating point and serial operations.

That doesn't ruin a company.  Hopefully next year is the year they really start moving forward, because I really think AMD is the company to finally merge GPU and CPU components into some kind of register / pipeline / ALU soup that could really revolutionize the industry (imagine SIMD instruction extensions to x64 that behave like OpenCL parallel operations, with the normal processor cores working on register ranges and vectors like a GPU, rather than just having a discrete GPU and CPU on one die).

Even barring that kind of pipe dream, Steamroller is shaping up to be sound. It finally gets a die shrink AMD desperately needs to stay competitive, if only to 28nm, and finally puts GCN into their APU graphics instead of the 6000 series era VLIW architecture.

They can't really stand up and fight Intel head on anymore, because Intel got on the ball again, and their cpus are crushing AMD in a lot of use cases, especially power usage. But AMD still has significantly better graphics, and are leveraging it, and they are finally getting over the ATI growing pains, so I'd wager they are still in the game, if only barely. They have a lot of potential still.
Footnote: I really think the market is a big reason AMD is falling behind.  The Ultrabook campaign is stealing wealthy PC buyers from them, and that is where chip makers get the majority of their profits (look at the high end mobile i7 chips selling for a thousand bucks).  Desktop sales are abysmal outside OEM systems and businesses.  Intel wins at getting business contracts by size alone; they just have more reach.  Desktop enthusiasts can bank on AMD being a cost effective platform, but the wow factor lies in Intel chips, even at the premium, so Intel steals that market too.  AMD doesn't even do well in the cheap HTPC market because their chips burn so much power.  They are at a crossroads where all their target markets are either becoming obsolete or slipping away, and not because they have bad products, but because perception of them and their influence keep getting worse.

Right now, AMD is really strong in the mid range.  Mid range laptops with a Trinity APU are really good and extremely cost effective (I had a friend buy an A8 based Toshiba because it was $500 cheaper than a comparable Intel machine that could run League of Legends).  Piledriver is good enough in the desktop space to recommend one of the 4 or 6 core variants to friends looking for a budget PC gaming experience, because with a proper overclock they are pretty much more than enough for anything major.  But AMD has (from my experience) a bad image right now as a dying company and a maker of budget goods, even when their GPUs kick butt and their desktop CPUs can (at least according to the recent Phoronix Piledriver FX benchmark) hold their ground against even Intel's best Ivy Bridge offerings in some cases, at almost half the price.

2012/10/16

Software Rants 3: Exploring L

I talked about L in my post about my dream operating system.  The idea was that you could take 3 decades of low level programming experience and write a language that maintains the efficiencies of C and its ilk without having the unintuitive syntax and backwards compatibility woes.

One big thing is that this is entirely opinionated.  C++11 will be the de-facto low level large scale language for many projects going forward, and my complaints about it are never that it doesn't fit its purpose.  I'll make my case by example: in C++11, rvalue references are denoted T&&, since T& was already taken for references and &thing is the address of something.  << and >> are overloaded as pipe operations by <iostream>, which is something you can do for any type (operator overloading).  I won't even decry that behavior, since it has its uses - lots of people like how many languages use += as a concatenation operator on strings instead of "App".concat("le") or some other object syntax.
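
For the curious, a tiny self-contained sketch of the two quirks I just named (nothing here beyond stock C++11; Thing and take are made-up names):

#include <iostream>
#include <string>

struct Thing {
    std::string name;
};

// Overloading << so a Thing can be streamed like anything else - the same
// mechanism <iostream> uses for its own pipe-looking operators.
std::ostream& operator<<(std::ostream& out, const Thing& t) {
    return out << "Thing(" << t.name << ")";
}

// T& binds to lvalues (named objects), T&& binds to rvalues (temporaries),
// even though & already means "address of" elsewhere in the language.
void take(const Thing& t) { std::cout << "lvalue: " << t << "\n"; }
void take(Thing&& t)      { std::cout << "rvalue: " << t << "\n"; }

int main() {
    Thing a{"apple"};
    take(a);               // picks the Thing& overload
    take(Thing{"pear"});   // picks the Thing&& overload

    std::string s = "App";
    s += "le";             // += as string concatenation, as mentioned above
    std::cout << s << "\n";
}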

So what do you want out of a newfangled low level language?  First off, you want all the productivity benefits of the low level abstractions we have developed and taken to heart in recent decades.  Here is a list of my personal preferences:
  • Templates - I hate writing C code without them.  They are my killer C++ feature.  I can't write generic functions without them, and since C doesn't have classes (which means it doesn't have polymorphism) I can't use virtual overloads through inheritance. 
  • Classes, and more importantly, polymorphism - you can emulate classes pretty reasonably in C with just structs and function pointers.  Access modifiers aren't inherent to classes either, and you can use namespaces as a replacement.  The real benefit of classes is polymorphism and virtual function lookup.  Now, an important quality in any low level language is to not hide implementation, so virtualizing a function absolutely needs to be a user decision, as it is in C++.
  • Access modification - private / public / protected, and all that.  And not with C++'s hackneyed label-style public: syntax, but with C# / Java style per-declaration scope specifiers.  You might still have a way to declare a whole region of public or private data members, but it would need dedicated syntax, because access specifiers are nothing like block labels, and that is just another obtuse corner of C++.
  • Objects from the ground up - being able to treat anything as an object as desired, and understanding that low level objects are nothing more than a package of data members and functions.  I feel this is something sorely lacking in every low level language, down to the syntax to allow something like (15l).toString().  One of the greatest weaknesses of even Java is the negligence in treating primitives and objects harmoniously (reference vs copy passing, Integer and int being separate types that you can nonetheless coerce into one another implicitly, etc).
  • Function objects - in keeping with objects from the ground up, you want your functions to be regular objects just like primitives or collections of either.
  • Pointers as their own type, preferably templated - *int is not a hard concept, it is a hard syntax.  C and C++ are usually the only exposure most programmers get to memory addressing, and their pointer syntax is a completely alien scheme of glyphs modifying the meaning of a reference to a primitive.  Instead of *int, I would absolutely much rather see pointer<int>.  I want to elaborate on this, so let's have some examples of the idea:

C++:

#include <iostream>
using namespace std;

int main() {
    int *foo, **bar, zoo = 5;
    foo = &zoo;
    bar = &foo;            // bar holds the address of foo
    cout << foo << "\n";   // the address of zoo
    cout << *foo << "\n";  // value of zoo (5)
    cout << bar << "\n";   // another address (of foo this time)
    cout << *bar << "\n";  // address of zoo
    cout << zoo << "\n";   // the integer
    foo = (int *)5000;     // point foo at an arbitrary address (needs a cast to compile)
    cout << foo << "\n";   // memory address 5000
    cout << *foo << "\n";  // segfault most likely, the page at 5000 probably isn't mapped
}

L:

reference<int> foo; // compile time error if pointing to a non-integer
reference genericfoo; // can point to anything, ambiguous contents
reference<reference<int>> bar; // compile time error if not pointing to an int pointer
reference<reference> genericbar; // can point to any pointer, since it is generic
int zoo = 5;

foo = reference(zoo) or zoo.reference() // the OO way is to bind a reference function to the Object type, but that obscures implementation, since it acts as if you are calling a function - in truth it is a compiler reduction to the address of zoo, so something like "reference zoo" (akin to "return zoo") might be more appropriate: it doesn't obscure purpose, and it doesn't keep the glyphic overload of &

bar = reference(foo) // hey look, consistency!

//using python print syntax, just for brevity, I'd imagine the real L would require a reference to the stdout pipe to write to
print(foo) // address
print(genericfoo) //another address
print(foo.val or val()?) // print contents...

foo.val seems more appropriate, since it is a templated data member.  Due to explicit templating, you know it returns an int; if it were generic, it would return an Object.

That brings up another interesting train of thought.  I wonder if there would be a way to unify the concepts of typename and the globally inherited Object? 
Traditionally, typename t fulfills nearly the same role as referencing an Object, except that typename is a compile time generic resolved by explicit instantiation, while Object is always generic because it is resolved at runtime.  Actually, that seems silly - an Object is a type, but a type is not an object, it is a name.  So types are names and objects are named things.  So the type / object distinction is still useful.

Anywho, the point of this pointer business is that if you use a ground up language designed around bundling complex functionality into classes, with all the benefits of templates and polymorphism involved, it seems silly not to wrap a lot of the more obscure behavior, like pointers, into objects of their own.

Some other ideas I'm just footnoting here:

I like the idea of using : instead of = for "equality".  JavaScript maps use the syntax {foo:5, bar:'bacon'}, which is the traditional colon-as-"is" reading.  The equals sign really does imply equality, and although treating "glyph name equals contents" as a mathematically sound statement also works, it kind of borks the concept of equality in an environment where you already have explicit equality in the form of ==.  Also, in classic C and C++ the colon is really underutilized, because it only implies block declaration.  Since namespaces and blocks are effectively the same thing (except traditional C blocks don't denote namespace containment - the point is that both represent contiguous regions), the colon isn't carrying much value there anyway.

Maybe just stick with = for backwards compatibility.  But just as a brain tease, which of the following makes more sense?

foo is 5
foo equals 5

I'd argue the first, because you are making the claim that the integer foo is 5, rather than the awkward statement-not-question that integer foo equals 5 (when you are already using the interrogative form, ==, elsewhere).

2012/10/12

Software Rants 2: Useful File Abstractions

Since my plans to explore Plan 9 came crashing down when the live cd wouldn't even boot (in 386 mode even!) I'm going to tackle my thoughts on what Plan 9 is all about - files.  And distributed systems.  But mostly files.

Since I can't actually boot the thing, I'm going off videos I watched, tutorials I read on the site, and a professor's PDF beginner's manual for Plan 9.

So the system is a monolithic kernel, where the kernel controls the filesystem because the filesystem is so integral to the running of everything.  Device drivers live there too, and are exposed as file systems, so I could rant about how microkernels are great, but this ain't necessarily the time.  The Plan 9 kernel is less interesting than what you can do with it.

So I think a procfs /proc is one of the best things ever.  Exposing devices, hardware interfaces, and writable / readable buffers to interact with is amazing.  One of the best things Linux ripped from Plan 9 was the idea behind procfs.  Being able to do system calls through files (since you are already blocking on reads / writes) is great.
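
Just to show how mundane the idea is in practice on Linux, here is a tiny sketch that treats process state as a plain text file (using /proc/self/status, which any modern Linux provides):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Process state is just a text file; grab the process name and
    // resident memory lines out of it like you would from any file.
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
        if (line.rfind("Name:", 0) == 0 || line.rfind("VmRSS:", 0) == 0)
            std::cout << line << "\n";
}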

The /net is also amazing.  Exposing sockets and the TCP and UDP protocols through the filesystem is just the natural way to do things.  You already read and write to sockets and streams; having them exist in the VFS seems obvious, but only Plan 9 does it.  Which is sad, because the way Plan 9 does it is amazing.  Open a new connection on port 80?  Then a file shows up under /net/tcp for it.  Since each application and user has its own namespace view of the filesystem, you don't have name conflicts.  This has a downside I'd imagine, though, since debugging an application whose files are not obviously exposed to the user through other programs might be problematic.  It seems like a trivial problem to solve, since the procfs could just provide mount points for the file systems of other running programs, if the program and OS give that application permission.
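
From what I've read, the dial sequence is roughly the following - I can't test it since the thing won't boot for me, so treat the paths and control messages as my recollection of the docs, and the C++ purely as an illustration of the file operations involved (Plan 9 itself would use C):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // 1. Read the clone file; it hands back the number of a fresh
    //    connection directory, e.g. "4" -> /net/tcp/4/
    std::fstream clone("/net/tcp/clone");
    std::string n;
    clone >> n;

    // 2. Writing a control message to the same descriptor (it doubles as
    //    the new connection's ctl file) asks the stack to connect out.
    clone << "connect 192.0.2.1!80" << std::flush;

    // 3. The byte stream itself is just another file.
    std::fstream data("/net/tcp/" + n + "/data");
    data << "GET / HTTP/1.0\r\n\r\n" << std::flush;

    std::string line;
    while (std::getline(data, line))
        std::cout << line << "\n";
}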

So that comes to what I think are useful file abstractions - files on disks, obviously.  File systems to abstract physical disks and partitions from one another, absolutely.  Hardware information as live generated files to profile the environment under proc, delightful. The idea behind /net, having files for sockets, ports, and protocols.  Astonishing.  Hardware devices, such as the sound card, as read and writable devices (maybe even providing their own file systems to control system calls and behavior) revolutionary.

Interprocess communication through file systems also works nicely with namespaces.  A process can request to share some portion of the visible file space of another process, and they can interact privately through files.  You can open sockets to one another too.  9P is amazing because it doesn't care whether your file request is local or remote - which I feel is an abstraction sorely lacking in every modern OS.  We already call the localhost 127.0.0.1 (such an arbitrary IP) and already have some processes opening socket connections to localhost for interprocess communication.  I'd figure almost anything is better than arbitrary protocols like D-Bus or more generalized RPCs.  Sharing serialized text data is really easy over a network or a file system, but harder over a signaling scheme.  Signals are another interesting prospect, because you could provide process signaling folders, where you write to a signal file and the operating system delivers a kernel signal to the process in question with the payload you wrote.  That is amazing.
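
To be clear, nothing like this exists today - but the interface I'm imagining would be about this small (the /proc/1234/signal path below is entirely made up):

#include <fstream>

int main() {
    // Hypothetical: if the OS exposed a per-process signal file, sending a
    // signal plus a payload to pid 1234 would just be a write.
    std::ofstream sig("/proc/1234/signal");   // made-up path, does not exist
    sig << "SIGTERM please wrap up and flush your logs";
}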

The idea of having distributed components of the operating system in Plan 9 is also neat.  I'm only saying neat here, because the fact that they all exist in kernel space, with the OS as a giant sludge of unnecessary vulnerability, is a little off putting - why the hell is rio, the window manager, running in the OS?  All a kernel needs to do is provide abstractions.  It abstracts processes by allowing for scheduled execution.  It provides virtual memory so applications can exist in their own sandboxes.  It provides means to hook and interact with hardware devices (or in Linux's case, just handles the drivers itself and provides software devices to interact with).  That really should be it.  If you have something that starts, runs a scheduler, starts a memory manager, and provides privileged access to control and interact with the devices available to it, you have all a kernel needs to do.

Nothing stops a sufficiently well designed kernel from letting its init payload hook userspace software drivers for very low level hardware components, given certain privileges.  I really think that the problem with kernels, security, and a lot of other modern computer issues is the failure to ever conceive a really sound execution privilege hierarchy.  Linux has group permissions, but the kernel isn't providing per-device privileges for access or manipulation.

This isn't about file system privileges - groups cover that fairly well, albeit the creation of groups being restricted to root seems obtuse.  It is about device and interface privileges: allowing some programs to hook into devices, and restricting what others can interact with.  Maybe some executable, made by a privileged user, shouldn't have read access to certain folders?  Maybe it shouldn't have write access?  Maybe you don't want it to see the printer as a device file?  That kind of extremely fine grained per-process execution control is lacking, at least in any concise way, and that limits the security model of the modern operating environment significantly.  Plan 9 and its process namespaces do an amazing lot toward solving that problem, since you can decide who gets access to what folders and files based on the creator's desires and its own privileges.

So Plan 9 is awesome, but Plan 9 is ugly.  It scraps the C standard in many ways, yet still uses C for everything.  It doesn't embrace static file typing, which I really feel would make an everything-in-the-filesystem approach much more palatable to people, if they could easily tell what any given file is supposed to be just from the file extension.  The ability to host pieces of the OS as servers is great, but many things shouldn't be running in such a privileged state when it isn't necessary, and, like I argued, many things can be limited to userspace just fine.

An exploration of Plan 9, circa October 2012

I was toying around in my Arch VM and decided to take it up to the next level, so I got an ISO of Plan 9 off the main site (using the October 5th image - a cron job recompiles the ISO nightly... off a Plan 9 server hosting the sources online!  The web interface is downright ugly, but hey, look at that potential from something that shares no code with any of the big 3 (BSD, Linux, NT) kernels and stacks!).  I always was fascinated by the abstractions of Plan 9, and the features it provides by distributing its core components the way it does.  Microkernels are also nice.

Where Unix comes off to me as "Hey, these storage devices and files are cool, let's try making an OS using them", Plan 9 comes off as "Let's push the file system based operating environment to the limit".  So I'm going to use this post to detail any notable bumps in the road and observations made running Plan 9 under qemu-kvm.

So bump in the road number one is that the plan9.iso image doesn't boot as a SCSI or IDE cdrom in qemu or VirtualBox.  In qemu it does a terminal dump of something before failing (in SCSI mode SeaBIOS doesn't even recognize it as bootable media), and in VirtualBox it gets to a 1 / 2 "run plan9 live or install it" selection, then barfs some nonsense characters and freezes.  So this post is on hiatus.

2012/10/10

An Exploration of Human Potential Part 2: Enter the Matrix?

Brain-machine interfaces are much closer to reality than anyone wants to give them credit for - it isn't an exploration of potential to talk about mind controlled robotic limbs or body parts, because we are already there.  It may not be pervasive in the market due to the backlash of the status quo to resist change, but we have brain interface devices that can control new limbs. The real revolution will be outside the physical space, and in the digital one.

2.  We will all be jacked into a digital space sooner than you may think.  Modern brain interface devices are getting less and less intrusive by the year, by leaps and bounds, while simultaneously becoming more accurate and able to interpret more and more complex electrical signals given off by the various lobes of gray matter occupying the space between one's ears.  The brain is not inherently hard.  It runs off electricity, and each node is effectively a processor with a few megahertz of performance.  The system has some inherent inefficiencies because memories are stored in a neuron that specialized for that purpose but could have been specialized for something else, like processing.  It is a massively parallel machine with crappy device compatibility.

We can definitely get over the compatibility problems.  I see videos all the time of insanely smart people tearing apart cabling to make some frankencable that connects a USB line into a 3.5mm audio jack, or some other bizarre mix that should never work but by genius and luck does.  There are examples of reading what a cat sees out as images, and we have brain controlled robotic limbs already.  The ability to emulate and interpret the electrical signaling of the body is rapidly becoming a reality all around us.

So what are the implications?  Once we are able to "jack-in" to cyberspace, such that through software we are given false signaling to make us perceive and interact with alternate realities, I would imagine many would never leave.  They could create anything they wanted - change their own personal dream land to whatever they wanted.  In a later part I will talk about replacing the various fleshy parts that won't be as necessary, but I wouldn't be surprised if many people just forfeit their mortal forms and exist solely in cyberspace as errant AI.  It is definitely a step beyond just faking sensory information and interpreting it, but it isn't an impossible problem, especially post-singularity when computational power to emulate keeps getting larger.

You can't emulate the whole brain with electricity, though.  A large part of what makes living things less predictable and more errant in their patterns is the presence of hormones influencing cell behavior.  And chemical interactions can be much more volatile than the flow of electrons, which I think explains a lot of the more unexplainable aspects of cognition.  No idea how you emulate those, unless the AIs are just subjected to randomized behavior alteration.  We do have very good sources of randomness to draw on - atomic oscillation, crystal harmonics, etc.

I would love to go on a rant about how we still lack any evidence for randomness being a "thing" - almost anything random today is just something we can't accurately predict yet.  Things that were once considered random, from the weather to migration routes to rock formation, have been shown to be aspects of nature and components of the physical interactions of the universe.  I am almost certain the rest of the universe's "randomness" is explainable in similar ways, albeit with orders of magnitude more complexity.

So people get a virtual, real cyberspace that we can connect into directly through the brain stem, and emulate anything (electronic, at least) that we want.  I figure our 3D printers could manufacture the molecules that compose the hormones that influence our more errant behavior, so you could maybe even emulate those on a live brain.  You don't even need the hormones per se for the digitized mind - they are just randomly generated, situational behavior alterations.  Hopefully a simple base here leads to emergent behavior, because otherwise we are all very boring and predictable in the end.  But given that we already get emergent behavior out of today's boring stateful AI, I remain optimistic.

2012/10/07

Static Typing as a Religion

Typing is a pervasive concept throughout software.  You want to describe data - and types are the "what" to a name's "who".  Having a very clear definition of "what" for everything you deal with makes interactions simpler and purposes more obvious.

If / when I develop with other people involved, I love statically typed languages, because they mean that my code is easier for others to comprehend, and that I can more easily debug someone else's code.  The barrier to entry with static types is lower, even if the "power" of boxing means you can do some crazy things in dynamic languages.  I have a natural aversion to "crazy" things, mainly because I am both not very smart and not very good at development.  So static typing makes things easier.

But statically typed languages are also more performant.  So they are easier to understand and debug, and they perform better; the only reasons left to use a dynamic language are the lack of boilerplate for type conversions and the power of boxing as a language feature.
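
A trivial sketch of the "easier to comprehend and debug" point - the made-up wordCount function below documents its own inputs and outputs, and the compiler refuses nonsense calls before the program ever runs:

#include <iostream>
#include <string>

// The signature alone says what comes in and what goes out - handy when
// you are reading or debugging someone else's code.
int wordCount(const std::string& text) {
    int count = 0;
    bool inWord = false;
    for (char c : text) {
        bool space = (c == ' ');
        if (!space && !inWord) ++count;  // a new word starts here
        inWord = !space;
    }
    return count;
}

int main() {
    std::cout << wordCount("static types as documentation") << "\n";  // 4
    // wordCount(42);  // rejected at compile time, not at 3am in production
}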

Now, that is really the end of the programmatic static typing blog.  I want to talk more about static typing elsewhere.

Windows does a lot wrong.  Its user model is awful, its VFS is crap, it has terrible abstractions of hardware, etc.  But one thing it does right is that it statically types every file in the system.  The Unix way of having lots of files without static typing (hint: file extensions) leads to a lot of ambiguity in figuring out what something is meant to do.  Windows doesn't need an executable bit, because Windows executables are .exe.  You could actually put a file extension on Linux binaries - .elf, for Executable and Linkable Format.  Linux doesn't traditionally do this because Unix didn't use many file extensions.

There are a lot of latent extensionless files floating around, especially on the software side of the traditional Unix ecosystem - MAKEFILE, README, CHANGELOG, etc.  But these are ambiguous files from an external viewer, and it requires more information to determine what it is meant to be.  The default is just to treat it like a text file, and hope it matches your default encoding, which nowadays is UTF8.  And if it doesn't?  I guess you get gibberish. 

In Linux space, a lot of files have their type distinguished by metadata in either the filesystem or the file header.  You hope that it exists or makes sense, because the name of the file itself doesn't give you any clues.  If you don't recognize the metadata syntax of a file, you might misinterpret it as something else.  And when you don't even know whether it is supposed to be read in some encoding or as raw binary, you have little assurance to stand on.
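
This is basically the game file(1) has to play.  A minimal sketch of the header-sniffing approach (the two magic numbers below are the real ELF and PNG signatures; everything else here is a toy):

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Without an extension, the best you can do is read the first few bytes
// and compare them against known magic numbers.
std::string guessType(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> head(8, 0);
    in.read(reinterpret_cast<char*>(head.data()), head.size());

    if (head[0] == 0x7F && head[1] == 'E' && head[2] == 'L' && head[3] == 'F')
        return "ELF executable";
    if (head[0] == 0x89 && head[1] == 'P' && head[2] == 'N' && head[3] == 'G')
        return "PNG image";
    return "no idea - treat it as text and hope the encoding matches";
}

int main() {
    std::cout << guessType("/bin/ls") << "\n";   // "ELF executable" on Linux
}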

Of course, I can rename a .txt to .jpg or vice versa and get gibberish.  And I can, in software, try boxing FLOAT_MAX into an int and get an overflow.  Every sound security policy can be broken.  But when you don't make assumptions, you maintain more clarity of purpose.  If I ever designed a file system, I would absolutely mandate that all files be statically typed with something akin to a file extension (I would argue, though, that the traditional foo.bar syntax is unintuitive, since a dot or decimal point traditionally means either a property of something or a fractional component; it doesn't naturally mean "type of").  I'd rather see it called foo:bar - a colon is a much better sigil for the job.  And I would go so far as to mandate it: do what Windows does, and if a file is extensionless / typeless, bitch about it.  Because you shouldn't be making assumptions.

2012/10/02

Filesystems Ranting

I've been playing around for a few days learning the various virtualization solutions on Linux; after some past experience with qemu, I finally jumped on the VirtualBox bandwagon.  I had really been missing a lot of features!  I plan to try out KVM since it is kernel based and promises good performance, and using anything from Sun, now that Oracle is decimating all their OSS projects, leaves me hesitant.

Anywho, virtualization is awesome, hard, and way beyond my intellect in general.  A lot of the meta elements of software blow my mind at their complexity, and what is effectively a machine code JIT when not running the same instruction set definitely is up there.  I want to complain a bit about the commonplace Linux VFS structure, because it makes my head hurt. 

Let us start with the greatest insult to the world - usr.  The only consolation when I look at my root directory is that usr would make a great mount point for user space applications, if opt weren't already claimed by half of commercial software.  But that is just the worst offender; let us look at the myriad of folders in my / on my Ubuntu 12.04 partition (I'll switch to Arch soon, but I wasn't that hardcore 2 months ago).

bin, dev, boot, etc, cdrom, home, lib, lib32, lib64, lost+found, media, mnt, opt, proc, root, run, sbin, selinux, srv, sys, tmp, usr, var.  The whole usr and bin / sbin debacle is explained elsewhere in more respectable vocabulary, so I won't even get into it, but it is stupid.  If the world made sense, we would have system wide applications and user wide applications as the only application subdivisions needed.  We don't, so we have bin, sbin, usr/bin, usr/sbin, usr/share/bin, opt, and some crap that installs itself in bizarre places like ~/.app.  So that sucks.

cdrom is rubbish because a CD drive should be a device that you treat like a freaking device and mount like a device.  It should be under /mnt or /media.  Those two suck as well, because mnt is supposed to be for device mounts that are not part of the OS FS, and /media is supposed to be for removable media.  Too bad 12.04 defaults to treating everything as freaking removable media, including SAS and SATA drives.  That is mostly Gnome's fault though, since Ubuntu is using its hardware detection defaults.

Now, Windows for comparison isn't much better.  Having the top level directory be all devices, with special behavior at the top level like it has, makes it a leaky abstraction to no end.  The beauty of Linux is that it can mount arbitrary devices - network servers, USB sticks, RAID arrays - on mount points and abstract away the pains of D:\\ and C:\\.  My E: drive in Windows, for example, is /media/Storage in Ubuntu.  Which is slick.  The non-slick part is that devices (not dev, that is different!) are by default mounted in media, but Ubuntu keeps mnt around (and I do too, it is more grammatically accurate than media for SATA drives) for posterity's sake.

Speaking of posterity, bin and sbin are rubbish everywhere.  A superuser-bin, or whatever you want to call it, is absurd, because every Linux FS is advanced enough to have per-file permissions, which is what you *really* want for applications.  The abstraction of two folders, one for everyone and one for root, only works until you want tiers of privileged application users, and then it breaks down.  Also, most distros put sbin on the user path anyway, even though users can't run anything in there, so that they know blkid and fdisk require su permissions - which defeats the supposed benefit of keeping a bunch of binaries a user can never run out of the way in the first place.
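
The per-file permission data is already sitting right there; a quick sketch with plain POSIX stat(2) (path chosen arbitrarily, Linux / Unix only):

#include <sys/stat.h>
#include <iostream>

int main() {
    // The filesystem already records, per file, who may execute it -
    // which makes a separate root-only binaries folder redundant.
    struct stat st;
    if (stat("/sbin/fdisk", &st) != 0) return 1;

    std::cout << "owner uid:       " << st.st_uid << "\n";
    std::cout << "owner can exec:  " << ((st.st_mode & S_IXUSR) != 0) << "\n";
    std::cout << "group can exec:  " << ((st.st_mode & S_IXGRP) != 0) << "\n";
    std::cout << "others can exec: " << ((st.st_mode & S_IXOTH) != 0) << "\n";
}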

lib, lib32, and lib64 are just hilarious.  Really.  var is another interesting one, since "variable data" goes there.  When I traced the folder for all its files, pretty much all of them were under /var/lib or were logs.  So application data was being stored there.  Seems dumb to dedicate a top level directory to it.  Just a reminder: usr has every one of these in duplicate, because who wants to make sense.  I would imagine applications would rather store logs and variable data per-user, with some top level system default for multi-user applications, but not in the root directory.

lost+found is really, really nutter.  A directory just for lost file recovery.  That is always there.  You couldn't have /media/lost+found or something; it needs its own top level directory that never gets used, because ext4 doesn't blow chunks as much anymore and anything lost at this point is hardware failure.  Which is good.

Anywho, I'll propose what I wish I had as my top level -

users
system
applications
devices
network

The end.  Users can contain root or superuser, and any other users created.  I'd even wager you could get rid of applications and put it under /users/all/applications or something: applications installed for all users, with execution permissions and write access to their own folders only (so they can modify their own state, but the user running them can't arbitrarily modify an application's configuration if it doesn't want them to).  Devices would be physically attached media - unmounted physical drives, USB sticks, CD drives - that is, all the storage devices that are not part of the file system at boot and that get auto-mounted (if they are mounted at all).  Network would be an abstracted file system, like proc, that contains network devices.  People like seeing those.  Might even put those under devices and be really pretty about it.  System would be a bulk folder containing most of what goes in / right now - I could imagine system being something like:

status (renamed proc)
dev (the old /dev raw device files)
lib
log
var
tmp
bin
sbin
boot
cfg
headers (the old /usr/include)
dump (alternate lost+found)

Of course I'd love to see Plan 9 style system calls in here too.  The point is, right now we have the equivalent of the M$ Windows folder splayed over a bunch of other important concepts.  Nothing stops you from doing the traditional VFS thing and mounting different parts of the filesystem across different disks.  The only downside here is that everything needs to be able to see inside the system folder (you'd want to hide it in a file browser from your average joe user), but if you don't put scary important stuff directly in /system you should be fine - you can still restrict viewing of the subdirectories appropriately (ie, everyone can read / write files they create in tmp, var, and log, but not other applications' stuff unless they have super permissions).  lib would hold libraries of course, but I would imagine we could be sane and have a file nomenclature for libs that makes sense, like glibc-x86-64-3.332.so, the way the kernel is named.

Let's talk about lib a bit more.  For one, if I ever implemented this FS, it would be on something running 64 bit anyway.  You still have a glut of 32 bit applications without 64 bit versions (it isn't as much of a problem under Linux, since most stuff is FOSS and you can recompile it in 64 bit anyway), but there are enough programs to make it an issue.  Of course, they all expect /lib or /usr/lib or /usr/share/lib and 32 bit libraries anyway, so you can stick whatever old-nomenclature .so files they use in /system/lib and let the linker deal with the conflict.  I can't justify having separate 32 and 64 bit folders when 32 bit should be going by the wayside soon and you can just deal with some old style libraries cluttering up your libs.

Anywho, the idea is that you abstract away the operating system internals from the file system so average joe doesn't get scared when they open / and see everything.  Hell, you could skip the whole /system thing and put the internals at the top level - it is better than the usr mess we have now.

Of course, I also need to mention that what is in place now works, has worked for years, and was the product of years of iteration on old and new concepts.  The mess is only superficial, because nobody is supposed to even be looking at / anymore - you want your average joe to think ~ is the world and that devices magically show up as they attach them.

I'd still argue that is dumb, because files and file hierarchies are some of the best abstractions in computing and trying to hide them makes people dumber; but to each their own - when I have my own distro and influence, I'll shove people around :P