2013/11/14

Software Rants 17: Two Project Ideas built around optional subscriptions

With the debacle over Youtube switching to Google Plus comments and effectively ruining conversations on the site, plus the recent announcement of Warlords of Draenor, I want to get down my own ideas for how to create two completely independent systems that share the same business model.

I'd love to work on one of these. I think both of them could change the world for the better in big ways, so in the absurdly unlikely event someone reading this has the capital to fund this stuff, contact me.

First, the premise - modern transaction-based business models are broken, especially on the Internet. You are either bound at the hip to advertisers (Youtube), abusing the Skinner box and micropayments (mobile gaming), or not actually making money (Tumblr, Twitter). In principle, you won't get people to pay to see or use your site directly (a good thing - your marginal cost per user is nothing, so the marginal cost to use should be nothing). Some people are having success on Kickstarter and its ilk, but that model is proving weak at funding long-running projects: even after collecting 1000% of your asking price in donations, you still run out of money, because you are trying to use crowd funding as venture capital. To make matters worse, even though upwards of millions of people directly paid you to create a product - people you are not beholden to at all, since they don't own your company or have any investment return prospects besides the product you are creating - you often still expect to lean on copyright to make proceeds off the product after it is finished, e.g. selling a Kickstarted game on Steam.

I understand why that happens - if you tried to actually crowdfund the entire development cost of a modern gaming title, it would not fit in a budget of a million or two bucks, but more like ten to twenty million. The most successful campaigns barely scrape $10 million, and those are hardware projects.

But I think the problem there is the payment window. You run a month-long campaign and then go dark to additional funding avenues afterwards, at least in public - you might use the Kickstarter's success to pitch to investors, but not the general public.

Likewise, the average-joe content creator - be it indie games, music, video, or art - is shackled either to per-user commissions for their work or, pretty much exclusively, to advertising giant Google. Many webcomics use non-AdSense advertising, but that almost never breaks even against the time investment.

I see the same solution to both problems - instead of asking for money once, or targeting individual contributors, or being beholden to advertising as a surrogate revenue source, you constantly collect from a broad pool of users in variable amounts.

Media Platform built around micropayment subscriptions

Today, a lot of YouTubers are having success with Subbable and Patreon, services that let your audience pay you a fixed amount they set on a monthly or per-content basis. Twitch uses a similar system with its $5-a-month subscriptions, but that is very limited in scope compared to what I'm going to talk about.

The idea is that a very small fraction of your audience will often have the disposable funds to pay you for your work, and will do so to keep you producing. As long as you have enough income to justify your content creation, the people "freeloading" off the pervasive information penetration of the Internet are just a catalyst to attract more voluntary subscriptions. It is also important to let people volunteer as much money, as often or as infrequently as they like, because the amount someone might offer can vary tremendously.

You can also integrate "prizes" into such a system for one-time or recurring payments - e.g. every $20 gets you a commission from an artist, every $50 gets you a song, or $1000 buys a 30 second animation.

Rather than focus a platform on one type of media (video, audio, games, art, text), I'd propose a model where each user has a feed of content they have created in any form - with means to batch-consume a user's content of a given type, the ability to tag or categorize it into your own personal collections, and reddit-style comments and "forums".

As a result, each user is their own effective "subreddit" of posts they create, and only they can submit to their own personal page. You would have three sorting options: rating, max rating, and date. Rating behaves like reddit or Google+ comments, degrading over time so that newer popular content rises to the top. Max rating is the raw vote total, and date sorts by oldest or newest.
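
As a rough illustration, here is a minimal sketch in Python of what the decaying "rating" sort could look like - the formula and half-life are my own assumptions, not a spec:

import time

HALF_LIFE_HOURS = 24.0  # assumed decay rate; tuning it would be a site-level knob

def rating_score(up, down, posted_at, now=None):
    """'Rating' sort: raw vote total damped by the age of the post."""
    now = time.time() if now is None else now
    age_hours = max(0.0, (now - posted_at) / 3600.0)
    return (up - down) * 0.5 ** (age_hours / HALF_LIFE_HOURS)

def max_rating_score(up, down):
    """'Max rating' sort: the raw vote total, no decay."""
    return up - down

posts = [
    {"title": "old hit", "up": 500, "down": 20, "posted": time.time() - 96 * 3600},
    {"title": "new riser", "up": 60, "down": 5, "posted": time.time() - 2 * 3600},
]
posts.sort(key=lambda p: rating_score(p["up"], p["down"], p["posted"]), reverse=True)
print([p["title"] for p in posts])  # the newer post outranks the older, bigger one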


You can also have public forums in this system that behave just like normal subreddits on reddit - a user creates one by unique name, and can control its usage by assigning moderators, etc. They behave exactly like user pages, which also let you password-lock them or make individual entries private or password protected.

Your subscriptions feed is just like reddit's home page: the sum of all the users and general groups you follow. You would also be able to create, submit, or share meta-posts and meta-groups - collections of users or forums treated as one forum, or collections of submissions treated as a single submission - imitating music albums, TV show seasons, and photo albums, aka playlists and meta-reddits.

This site would use minimal advertising, potentially none - the funding source is user content. Depending on some beta testing, users might be able to accept payments immediately, or might need to hit a certain submission and comment threshold first. Users could restrict content pages to subscribers or donors only, but one of the principles of this kind of site would be a TOS requirement that all content posted be under a permissive license, with the default being Creative Commons Attribution-ShareAlike.

The goal is to let uploaded content be distributed by users. To minimize costs, I would propose an optional desktop client that shares the bandwidth load over the BitTorrent protocol - as users view content they cache it locally and seed it, reducing server costs. Cached content would have expirations - i.e. the user sets a maximum local storage limit, and whenever that limit is reached the oldest content not being seeded is purged. Uploaded content should not be able to be restricted back into proprietary, non-open usage, but we would want to mandate attribution.
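
A minimal sketch of the client-side cache policy just described, with the eviction rule and sizes as assumptions: when the local store exceeds the user's cap, the oldest items that are no longer being seeded get purged first.

from collections import namedtuple

CachedItem = namedtuple("CachedItem", "content_id size_bytes fetched_at seeding")

def purge_cache(items, max_bytes):
    """Drop the oldest non-seeding items until the cache fits under max_bytes.

    Items still being seeded are kept as long as possible so they keep
    contributing upload bandwidth to the swarm.
    """
    kept = list(items)
    total = sum(i.size_bytes for i in kept)
    # Oldest first, preferring items that are not actively seeded.
    for item in sorted(kept, key=lambda i: (i.seeding, i.fetched_at)):
        if total <= max_bytes:
            break
        kept.remove(item)
        total -= item.size_bytes
    return kept

cache = [
    CachedItem("video-a", 700_000_000, fetched_at=1, seeding=False),
    CachedItem("song-b", 80_000_000, fetched_at=2, seeding=True),
    CachedItem("video-c", 400_000_000, fetched_at=3, seeding=False),
]
print([i.content_id for i in purge_cache(cache, max_bytes=500_000_000)])  # drops video-a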

I would need a more thorough legal evaluation of licenses for this purpose, because I'm not sure the CC-A-SA license guarantees that derivative works must remain openly redistributable.

Back to funding - I'm open to traditional dollar-based funding models, but I imagine using bitcoin as the primary funding tool would help the platform evolve. We wouldn't act as an exchange, but I would definitely integrate with Coinbase at the least, and possibly other exchanges, especially so people in other countries have ways to move fiat into BTC for use on this site. Each user would get an account-bound wallet we maintain; they can move funds in and out using web tools and deposit to a BTC address we give them.

Like I said, we could also have fiat money accounts and the like, but that gets very complicated. It would probably use PayPal, Amazon Payments, or some other authority that has the legal power to deal in that tremendous mess. Users would get to choose varying ways to pay for works or user content creation - you can donate on a timed basis, i.e. daily, weekly, monthly, or annually. You could donate per-content, and can filter by tags, a title filter, content type, or all content produced, with daily, weekly, monthly, or annual limits. You can do a one-time donation of any amount, and like I said earlier the content creator can list a set of donor benefits that are either recurring or one-time, with either per-user or general limits on availability.
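
To make those pledge options concrete, here is a hypothetical data model in Python - the field names are mine, not a spec:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RecurringPledge:
    creator: str
    amount_btc: float
    interval: str  # "daily" | "weekly" | "monthly" | "annually"

@dataclass
class PerContentPledge:
    creator: str
    amount_btc: float
    content_types: List[str] = field(default_factory=list)  # e.g. ["video", "audio"]
    tags: List[str] = field(default_factory=list)
    title_filter: Optional[str] = None
    monthly_cap_btc: Optional[float] = None  # stop charging once the cap is hit

@dataclass
class OneTimeDonation:
    creator: str
    amount_btc: float
    reward_id: Optional[str] = None  # an optional listed donor benefit

# A subscriber could hold a mix of all three:
pledges = [
    RecurringPledge("artist42", 0.005, "monthly"),
    PerContentPledge("artist42", 0.001, content_types=["video"], monthly_cap_btc=0.01),
    OneTimeDonation("musician7", 0.05, reward_id="thirty-second-animation"),
]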

To help enable this, nothing stops a later iteration of this product from finding, say, a t-shirt company and offering donor-incentive shirt designs on-site with seamless integration. E.g. like this animation by insert-X-author? The donate button is right on the video, and the shirt is one of the choices. If you have money on site it is literally a one-click buy, assuming your shipping info is already on record; otherwise you have to enter that. If you have no on-site money, we can provide third party payment services including BitPay. Users would have an option to make their site BTC address (or a personal one they want to list) public so you can donate from alternative BTC services.

I want to mention I'm not a big fan of the deflationary nature of bitcoin as an exchange currency, and if any competitive inflationary cryptobuck came along I'd want to be an early adopter. People will more willingly donate and spend on and off the site if their money is becoming worth less over time.

As with any good site, it would require a comprehensive and effective search engine for content - recommendations, filtered searches by content type, tag, user type, user name, rating, etc. I don't think one sentence, however, is sufficient to detail the complexity of a well implemented search engine, so we might depend on something like duckduckgo on the backend.

It would all, of course, be completely FOSS, and the revenue is from transaction fees. Since you would expect the site to be operating with huge amounts of money transfer in bitcoin, you could use a very low transaction fee of like 1%.

This way the success of users means the success of the site. You have to leverage the convenience of on-site funding over using an off-site service like Patreon or Subbable, and there would be a balancing act: attracting content producers with favorable margins in the first place, while recognizing that the convenience of on-site payments could let us take margins above what other such services get from transactions of this nature.

2. Collaborative MMO

The MMO space is horribly stagnant. I have an idea for a potential organic game and business model based off the media site above - in theory, nothing prevents the same code base being used or it even being within one company with two interacting teams. 

I won't go into details on game mechanics here, because I've talked about how I'd want to design this theoretical MMO in other posts on this blog. The series is kind of on hiatus, because I keep going back to software rants whenever I want to talk game design, but I'll work on that.

The principal issue with most modern MMO games is content - the investment to create an initial set, the habit of making leveling content and group content independently, the inability to release it rapidly enough to keep up with player hunger, especially in monthly subscription games, and the habit of invalidating past work.

To address budgetary concerns, this game would not have a long development time. In traditional MMO terms, if we wanted the final product to be 60 levels, we might release with only 5 levels worth of zones and content done. Then, as we finish more zones, we push them to a beta realm for testing and then live for people to experience, with constant releases of small amounts of content - which works because this game would not be subscription based (we will get to that).

The business model is the direct funding of content creation using a reddit-like platform - and both users and the business itself can do it. In theory we could make this an entire funding platform independent of this origin MMO: a user submits a content creation proposal and sets up a fixed or variable funding campaign, with potential donor incentives. This ties back into the media content site, because you use the same backend payment and rewards systems, and that site already lets a user create a funding project for an eventual creation rather than donations on a recurring basis.

What we do here is use that system on a macroscopic scale in the context of an MMO - users can create submissions on a "requests" forum, where top ranked posts can be reviewed for potential funding projects, and the development house itself will create independent listings of different projects to fund. What gets funded gets made, so users can pick and choose the content they want to be developed.

You might see, say, in WoW terms again, a new raid for 100k, a new dungeon for 5k, a new zone for 50k, a class review for 10k, a new specialization for 30k, new monster models for 15k, new skins for 3k, etc.
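
A rough sketch of how such a funding listing could be modeled, using the WoW-style numbers above (all names and amounts are illustrative):

from dataclasses import dataclass, field

@dataclass
class ContentCampaign:
    title: str
    goal_usd: int
    pledged_usd: int = 0
    backers: dict = field(default_factory=dict)  # user -> total pledged

    def pledge(self, user, amount):
        self.backers[user] = self.backers.get(user, 0) + amount
        self.pledged_usd += amount

    @property
    def funded(self):
        return self.pledged_usd >= self.goal_usd

# Studio-posted listings; users pick what gets built by funding it.
listings = [
    ContentCampaign("New raid", 100_000),
    ContentCampaign("New dungeon", 5_000),
    ContentCampaign("Class review", 10_000),
]
listings[1].pledge("casual_mom", 20)
listings[1].pledge("extreme_raider", 5_000)
print([(c.title, c.funded) for c in listings])  # only the dungeon is funded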

You could have donor mounts and perks as well, in the same micropayments system in place in many MMOs now.

The real divider is that this game, like the original site, would be entirely FOSS. All models, sounds, music, textures, etc. would be under CC Attribution-ShareAlike or AGPL3 / GPL3. This lets others, if they want, host their own servers. In the end, as a business, we don't care, because our funding isn't in trying to sell the finished product, but in creating it, and in getting paid to host our own servers. It doesn't affect our bottom line at all for someone to use third party servers, especially because our servers are free and are where the new content momentum would be.

Because of the openness, we would also have a forum for user-created models, animations, textures, and sound effects - users would upvote them, and by submitting content there you have to accept a TOS that says you give it to us under the CC-A-SA or A/GPL3 licenses (the latter if it includes code).

Just like on the media site, we could distribute game resources over a bittorrent network. The one thing I would like, but don't think is feasible, is having users host their own servers - there is no way to verify the game is unmodified; because the code would be open, they could just spoof any authentication measures. This means we need to self-host the official game servers to avoid cheating.

I think it could be wildly successful and popular - the natural popularity and hunger for content would accelerate funding, enabling the development of more content to make the game better and attract new users. Because the model isn't pay to win - it isn't pay for pretty much anything, with the exception of any pay-for cosmetics the users fund in the first place - you can expect a huge playerbase with no difficulty actually balancing the game. You could create hardcore content for the extreme raider and casual content for the 1-hour-a-day full time mom, because both can contribute to funding whatever they want to see in the game. It is literally putting your money where your mouth is.

Since this would be a company, we would be able to deny feature requests, and we would be the only ones who establish the funding campaigns for content added to the official game.

Another way to entice user participation would be a model where, once a model or sound or such is produced, verified working, and wasn't pre-funded Kickstarter-style, we could run a funding campaign that pays the user who created it and gets the asset put into the game.

Of course, the user could skip such a funding goal if they just want to give away their work. But it is an alternate way to get them paid to create, without the risk of not getting what you asked for.


I want to also mention this need not be an MMO - you could create any game this way. Make a single player FPS with one level for free, and have such infrastructure in place to incentivize the funding of additional content. You could do this with almost any game, it just seems an MMO would work best due to its inherent networked nature and the lack of a model to make an open MMO.


In general, I'd want to found a business to implement both ideas under one codebase and roof - they tie together very well. Such a platform could enable an entire generation of media creation and sharing, and by enforcing open license principles it could keep a generation of media free while paying content creators.

If anyone reads this and finds it interesting and would go so far as to be interested in trying to make either a reality, contact me! (yeah right)

2013/09/17

Software Rants 16: Binary Firmware and Trade Secrets

Almost a year ago, I built my grandparents a new PC with an A10-5800K processor, on the premise that since AMD is hurting and contributes a FOSS driver, they deserve some support.

A year later, my grandparents are still regularly using Catalyst on their SUSE install, because whenever I plug in their living room TV via HDMI, the open driver fails and I get two black screens, or the TV shows color bars until I mode-set the normal desktop monitor, and then we are back to all black.

Looking forward, in future computer builds I am evaluating parts not for features or performance or reliability, but for whether they avoid binary blobs running in kernel space, doing whatever the hell they want with no way to audit their behavior.

The problem is there is nobody on my side in this. Intel's chipsets are completely proprietary, with no open source support for them at all. Their processors are obfuscated trade secrets, with opcodes to load encrypted firmware blobs that modify the microcode to do whatever they want at runtime, and their network controllers have binary blob firmware that might be broadcasting who knows what.

Sadly, though, they are the only vendor with an actual free GPU driver. No firmware blobs, no bullshit, a relatively open specification. They don't support Gallium, but really, that's their call. They waste their time, and everyone else's, by not using one stack, but at least they are trying.

However, on the other side of the fence, you have AMD - they participate in coreboot, and their CPUs are still a trade secret proprietary mess (though not as bad as Intel's), but they use proprietary firmware blobs with their GPUs, and thus you can't reimplement their driver without reverse engineering that crap.

When it comes to processors in general, they are all bullshit trade secret messes. There is not one open-spec CPU; even the Loongson chip from China licenses MIPS. So if you are stuck on some proprietary garbage, you might as well use the x86 chips, since at least Intel isn't profiteering off of IP law with their CPU design (AMD's contract with them is pennies on the chips they make).

So why can't I buy an AMD chipset with an Intel APU? I want open firmware across the board, because I want an open system where I can look at exactly how everything works and tweak it to my desire. But there is no way to build a system without someone restricting my freedom with my own hardware, and it sucks.

Meanwhile, Nvidia is off in la-la land, their proprietary blob graphics cards can blow me, but Nouveau is more open than AMD chips, since the reverse engineered firmware is foss! And they have open source drivers for their Tegra APUs, and they just license ARM cores.

I hope they consider making a Tegra 5+ based NUC, because I'd be interested in that. Kepler graphics on a FOSS driver + firmware, on a chipset with coreboot, and ARMv8 cores I could live with. But right now all our options suck. If I had the connections - I definitely have the desire - I'd make an open hardware company. But you wouldn't want to found it in the US, because patent and trademark trolling would crush you before you got off the ground.










2013/09/13

The Future of the Computing Platform

I'm in love with NUCs. Even though I usually despise the Intel marketing terminology (because they always trademark it, and because it usually sounds silly like ultrabook) in this case they nailed it on the head.

Firstly, the desktop. Or more specifically, I'd call this the class without an internal battery, because size is the topic. ATX and even micro ATX mainboards are now complete overkill. The only reason they even exist is graphics cards - and only the most insane users run multi-card configurations (note - the difficulty with multi-card configurations probably stems from how hackneyed the entire platform is, but ranting about hardware architecture is something I have already done, and something I will probably do again in the future). Regardless, if you discount the graphics card slots, what can you really put in a PCI slot anymore? Let's list them:

  •  Discrete Audio Cards: Only for audiophiles. And if you care, just get a mainboard with beefier integrated audio. Asus even has a Z87 board with a discrete-class audio solution embedded into the board proper. You don't need a coprocessor because PCM to onboard analog signaling is dirt cheap, and you often end up using digital audio out anyway, in which case you can often just transport an AAC or PCM stream. Note: S/PDIF fuckers, get Opus support into digital audio standards yesterday and junk all that other crap.
  •  TV Tuner Cards: One, TV is dying; two, there are USB tuners. When I built my grandparents' rig I got them an internal PCI tuner under the assumption it gets better quality. In hindsight, probably not. On the first point, broadcast television is already going the way of the Dodo, and I would never build a new system around converting DVB-T or analog signals to MPEG-2.
  •  Raid Cards: The integrated raid controllers on most mid range motherboards are sufficient for 3 - 5 disk raid 5, and really, if you are even thinking of consumer raid this is where you are looking. Servers already have entirely different pcb form factors anyway, so you can keep dedicated pcie raid cards there just fine. Additionally, raid 0 with ssds has no performance improvement (maybe sata express will change that in a higher per channel bandwidth world) but I doubt it, because principally ssds already have radical random reads and writes. This means you might as well raid 1 with ssds rather than raid 5, and SSDs aren't like mechanical disks where one memory sector going bad kills the drive. You just don't need that redundancy class even in a homebrew home server. And again, if you do, the integrated raid is often good enough.
  •  PCIE SSDs: These have a claim to fame in how sata 6gbps is at its limit and the only way to get more bandwidth in consumer hardware right now is pci express lanes. However, the interconnect isn't really targeting block devices, and the big 16x slots are overkill for 1GB/s sequential read ssds.
  •  Network Switches: This is one that is harder to replace because a four way splitter has peak bandwidth requirements of 4gb/s. It isn't a block device so you can't stick it on sata 6gbps, even though that would make sense. Though, again, who is building a system with a network switch? I intend to, but I'm weird. Niche markets shouldn't dictate consumer standards.
  •  Port Expansion: i.e. more USB3 ports on a hub card. Like integrated audio, this should also be integrated to satisfactory levels. If you need more, you are a niche.
 Overall, there are just not many cases where 99% of people would want an expansion card *besides* for a gpu.  Likewise, this isn't 2005 anymore, and you aren't likely to upgrade your processor but keep the same motherboard. You pack a processor, mobo, and set of ram into a system, and leave it like that. You might get more ram (ie, a second set if you only bought one) but even that is rare, and you could have just bought double capacity from the start.

I predict a new world of form factors - for the classical enthusiast class, a small mITX-sized board taking up to 2 SODIMM slots and a socket in the 20 - 30mm range rather than 35 - 50 (since large dies are overheating anyway), mPCIe and mSATA completely replacing standard PCIe slots, and 2.5" mechanical drives becoming standard under-the-board connected disks. The only exception is GPUs, which won't be able to easily migrate from high bandwidth 16x lanes, with space for heatsinks and dedicated fans running on 12V, to PCIe 1x at 3.3V with no fan options unless the device makes room. But APUs are now proving themselves very capable, and unified memory is a huge benefit for simpler implementation. And all 3 big players have APUs (Nvidia's are just ARM based).

Even smaller than that, I expect soldered boards of combined PCB + CPU + memory to become common, since you rarely upgrade any of those parts, and because all 3 are always mandatory in any build. It enables smaller units when you don't need to worry about standard ports and sockets.

The enthusiast market won't go away, but likewise these big behemoth motherboards are also dinosaurs. There are connectors for these smaller form factors, and NUCs are paving the way, although the combination of case and pcb isn't great. What you really want is a case and a motherboard standard with a simpler power connection architecture (so you could have an external or internal power brick). But they are the future, and it will be small.



2013/08/18

Software Rants 15: The Window Tree

I've had a barrel of fun (sarcasm) with X recently, involving multi-seat, multi-head, multi-gpu - just in general, multiples of things you can have multiples of, but most of the time don't, so the implementations of such things in X are lacking at best, utterly broken at worst.

I also am becoming quite frustrated with openSUSE, trying to fix various graphical and interface glitches to get a working multihead system for my grandparents.

But I look towards Wayland, and while I appreciate the slimming, I have to be worried when things like minimizing, input, and network transport are hacks on top of a 1.0 core that already shipped. It reeks of a repeat of the X behavior that led to the mess we have now.

So I want to talk about how I, a random user with no experience in writing a display protocol or server, would implement a more modern incarnation.

First is to identify the parts of the system involved. This might have been a shortcoming in Wayland - the necessary parts were not carved out in advance, so they needed to be tacked on after the fact. You can represent this theoretical system as an hourglass, with two trees on either side of a central management framework. In Linux terms, this would be through DRI and mode setting, but the principle is that you must map virtual concepts like desktops and windows (and more) onto physical devices, and do so in a fluid, organic, interoperable, hotpluggable fashion. This might be one of Wayland's greatest weaknesses, in how its construction doesn't lend itself to using an arbitrary protocol as a display pipe.

You would have a collection of display sinks - physical screens, first and foremost, but also projectors, recorders, a remote display server, cameras to record from, etc. They are all presented as screens - you can read a screen with the necessary permissions (through the display server). To write a screen, you must also use the display server. You can orient these screens in a myriad of ways - disjoint desktops running in separate sessions - and you might have disparate servers, each one managing separate displays, with inter-server display connectivity achieved through either the general wide-band network transport (rdp, udp, etc) or a lower latency / overhead local interconnect (dbus). Servers claim ownership of the displays they manage, and are thus a lower level implementation of this technology than a userspace server like X or even the partially kernel-implemented Wayland - it supplants the need for redundant display stacks. Right now virtual terminals are not managed by a display server but by the kernel itself; in this implementation, virtual terminals would just be another possible desktop provided by the display server.

Obviously, this server needs EGL and hardware acceleration where possible, or falls back to llvmpipe. The system needs to target maximal acceleration when available, account for disparate compute resources, and not assume the state of its execution environment at all - this means you could have variable numbers of processors with heterogeneous compute performance, bandwidth, and latency, and an arbitrary number of (hot pluggable) acceleration devices (accessible through DRI or GL) that may, or may not, be capable of symmetric bulk processing of workloads. Multiple devices can't be assumed to be clones, and while you should correlate and optimize for displays made available through certain acceleration devices (think PCI GPUs with display outs vs the onboard outs vs USB converters vs a laptop where the outs are bridged vs a server where the outs are on another host), you need to be open to accelerating in one place and outputting in another, as long as it is the most optimal utilization of resources.

So this isn't actually a display server at all; it is the abolition of age-old assumptions about the state of an operating computer system that prevent the advancement of the overall state of kernel-managed graphics. Tangentially, this relates to my Altimit conceptualizations - in my idealized merged firmware / OS model, where drivers are shared and not needlessly replicated between firmware and payload, the firmware would initialize the core components of this display server model and use a standardized minimum set of accelerated APIs to present the booting environment on all available output displays (you wouldn't see network ones until you could initialize that whole stack, for example). Once the payload is done, the OS can reallocate displays according to saved preferences. But the same server would be running all the way through - using the same acceleration drivers, same output protocols, same memory mapping of the port sinks.

Sadly, we aren't there yet, so we can't get that kind of unified video (or unified everything as the general case). Instead, we look towards what we can do with what we have now - we can accept that the firmware will use its own world of display management and refactor our world (the kernel and beyond) to use one single stack for everything.

So once you have this server running, you need to correlate virtual desktops and terminals to displays. The traditional TTY model is a good analogy here - when the server starts (as a part of the kernel), it would initialize a configured number of VTs allocated to displays in a configured way (e.g. tty1 -> screen0, tty2 -> screen1 which clones to remote screen rscreen3, tty4 -> recorder0 which captures the output, tty5 -> screen2 which clones to recorder1, etc). tty6 could be unassociated, and on screen0, with its associated input device keyboard, you could switch terminals like you do now. You could have the same terminal opened on multiple displays, where instead of having a display-side clone, you have a window-side clone (i.e. not all output to, say, screen0 and 1 is cloned, but tty15 outputs to both of them as simultaneous display sinking).
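
For illustration, that mapping could be expressed as a plain configuration table - this is just a sketch in Python, reusing the example names above:

# Hypothetical boot-time configuration mapping virtual terminals to display
# sinks. Names follow the examples above and are illustrative only.
vt_config = {
    "tty1": ["screen0"],
    "tty2": ["screen1", "rscreen3"],   # cloned to a remote screen
    "tty4": ["recorder0"],             # output captured rather than displayed
    "tty5": ["screen2", "recorder1"],  # shown locally and recorded
    "tty6": [],                        # unassociated until switched to
    "tty15": ["screen0", "screen1"],   # window-side clone: one VT, two sinks
}

def sinks_for(vt):
    """Resolve the display sinks a virtual terminal should render to."""
    return vt_config.get(vt, [])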

A window manager would start and register itself with the server (probably over dbus) as a full screen application and request either the default screen or a specific screen - recalling that screens need not be physical, but could also be virtual, as is the case in network transport or multidisplay desktops. It is provided information about the screen it is rendering to, such as the size, the DPI, brightness, contrast, refresh, etc - with some of these optionally configurable over the protocol. This window manager may also request the ability to see or write to other screens beyond its immediate parent, and the server can manage access permissions accordingly per application.

On that desktop (which to the server is a window occupying a screen as a "full screen controlling application", akin to most implementations of a full screen application), whenever it spawns new windowed processes, it allocates them as its own child windows. You get a tree of windows, starting with a root full screen application, which is bound to displays (singular or plural) to render to. It could also be bound to a null display, or to no display at all - in the former, you render to nothing; in the latter, you enter a freeze state where the entire window tree is suspended under the assumption you will rebind that application later.
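
Here is a minimal sketch of that window tree in Python, assuming nothing about the real protocol: a root full screen application bound to zero or more screens, where binding to nothing suspends the whole subtree.

class Window:
    """One node in the window tree; the root of a tree is a full screen app."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.screens = []        # display sinks this (root) window is bound to
        self.suspended = False
        if parent:
            parent.children.append(self)

    def spawn(self, name):
        return Window(name, parent=self)

    def bind(self, screens):
        """Bind to real or virtual screens; binding to nothing freezes the subtree."""
        self.screens = list(screens)
        self._set_suspended(not self.screens)

    def _set_suspended(self, value):
        self.suspended = value
        for child in self.children:
            child._set_suspended(value)

# A desktop shell as the root, with ordinary app windows as children:
kwin = Window("kwin")
kwin.bind(["vscreen0"])
editor = kwin.spawn("editor")
terminal = kwin.spawn("terminal")
kwin.bind([])                    # unbound: everything underneath suspends
print(editor.suspended, terminal.suspended)   # True True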

In this sense, a program running in full screen on a display, and a desktop window manager, are acting the same way - they are spawned, communicate with the central server as being a full screen application running on some screen, and assume control. If you run a full screen application from a desktop environment, it might halt the desktop itself, or more likely it moves it to the null screen where it can recognize internally it isn't rendering and thus stop.

I think it would actually require some deeper analysis if you even want an application to be able to unbind from displays at all - additionally, you often have many applications in a minimized state, but you want to give a full screen application ownership of the display it runs on (or do you?) so you would need to create virtual null screens dynamically for any application entering a hidden state.

Permissions, though, are important. You can introduce good security into this model - peers can't view one another, but parents can control their children. You can request information about your parent (be it the server itself, or a window manager) and get information about the screen you are running on only with the necessary permissions to do so. Your average application should just care about the virtual window it is given, and support notification of when its window changes (is hidden, resized, closed, or maybe even when it enters a maximized state, or is obscured but still presented). Any window can spawn its own children windows, to a depth and density limit (to prevent a windowed application from assaulting the system with forking) set by the parent, which is set by its parent, and so on, up to a display manager limit on how much depth and breadth of windows any full screen application may take.

The full screen application paradigm supports traditional application switching in resource-constrained environments - when you take a screen from some other full screen application, the display server will usually place it in a suspended state until you finish / close, or until a fixed timer limit on your ownership expires (a lot like preemptive multitasking, but with screens) and control is returned. Permissions are server side and cascade through children, and while they can be diminished, raising them requires first class windows with privilege ascension.

You can also bind any other output device to an application's context. If you only want sound playing out of one desktop environment, you can control hardware allocation server-side accordingly. Same with inputs - keyboards, mice, touchscreens, motion tracking, etc. - can all be treated as input, be it digital keycoding, vector motion, or stream based (like a webcam or UDP input), and assigned to whatever window you want, from the display server itself (which delegates all events to all fullscreens). Or you can bind them to a focus: in the same way you have default screens, you can have a default window according to focus, and delegate events into it (at the application level, this would be managed by the window manager).

You could also present a lot of this functionality through the filesystem - say, sys/wm, where screens/ corresponds to the physical and virtual screens in use (similar to hard drives, or network transports, or audio sinks), and sys/wm/displays is where the fullscreen parents reside, such as displays/kwin, or displays/doomsday, or displays/openbox. These are simultaneously writable as parents and browsable as directories of their own children windows, assuming adequate permissions in the browser. You could write to another window to communicate with it over the protocol, or you could write to your own window as if writing to your framebuffer object. Since the protocol's initial state is always to commune with one's direct parent, you can request permissions to view, read, or write your peers, corresponding to knowing their existence, viewing their state, and communicating with them over the protocol. As a solution to the subwindow tearing problem, the server understands that movements of a parent are recursive to its children, such that moving, say, one window 5px means a displacement of all children by 5px, and a corresponding notification to each window that it has moved, and that it moved due to a parent's movement.
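
Building on the tree sketch above, the recursive-move rule could look like this - again just an illustration with made-up names:

class Node:
    """Minimal window node carrying only a position and children."""
    def __init__(self, name, x=0, y=0):
        self.name, self.x, self.y = name, x, y
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def move(self, dx, dy, cause="direct"):
        self.x += dx
        self.y += dy
        print(f"{self.name} moved by ({dx},{dy}) cause={cause}")
        for child in self.children:
            # Children are displaced too, and told the move came from the parent.
            child.move(dx, dy, cause="parent")

root = Node("app-window")
toolbar = root.add(Node("toolbar"))
canvas = root.add(Node("canvas", y=20))
root.move(5, 0)   # drags the toolbar and canvas along, notifying each of them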

The means by which you write your framebuffer are not the display server's problem - you could use EGL, or just write the pixels (in a format established at window creation) between the server and the process. Access to acceleration hardware, while visible to the display server, is a separate permissions model, probably based off the user permissions of the executing process rather than a separate permissions hierarchy per binary.

In practice, the workflow would be as follows: the system would boot, and udev would resolve all display devices and establish them in /sys/wm/screens. This would include udev network transport displays, usb adapted displays, virtual displays (cloned windows, virtual screens combined from multiple other screens in an established orientation, duplicate outputs to one physical screen as overlays) and devices that are related to screens and visual devices like cameras, or in the future holographics, or even something far out like an imaging protocol to transmit scenes to the brain.

Because output screens are abstracted, the display manager starts after udev's initial resolution pass and uses configuration files to create the default virtual screens. It doesn't spawn any windowing applications, though.

After this step, virtual terminals can be spawned and assigned screens (usually the default screen, which in the absence of other configuration is just a virtual screen spanning all physical screens that has the various physical properties of the lowest common denominator of feature support amongst displays). In modern analogy, you would probably spawn VTs with tty1 running on vscreen0, and 2-XX in suspend states, ready to do a full screen switch when the display server intercepts certain keycodes from any input device.

Then you could spawn a window manager, like kwin, which would communicate with this display server and do a full screen swap with its own display configuration - by default, it would also claim vscreen0, and swap out tty1 to suspend. It would request all input delegation available, and run its own windows - probably kdm as a window encompassing its entire display, while it internally manages stopping its render loops. It would spawn windows like plasma-desktop, which occupy space on its window frame that it assigns over this standard protocol. plasma-desktop can have elevated permissions to view its peer windows (like kdm) and has lax spawn permissions (so you can create thousands of windows on a desktop, with thousands of their own children, without hitting any limits). If you run a full screen application from plasma-desktop, it can request a switch with the display server on the default screen or its current screen, or find out what screens are available (within its per-app permissions) to claim. If it claims kwin's screen, kwin would be swapped into a suspend state, which cascades to all its children windows. Maybe it also had permissions and spawned an overlay screen on vscreen0, and forked a separate full screen application of the form knotify or some such, which would continue running after vscreen0 was taken by another full screen application - and since it has overlay precedence (set in configuration), notifications could pop up on vscreen0 without vblank flipping or tearing server-side.

Wayland is great, but I feel it might not be handling the genericism a next generation display server needs well enough. Its input handling is "what we need" not "what might be necessary" which might prompt its obsolescence one day. I'd rather sacrifice some performance now to cover all our future bases and make a beautiful sensible solution than to optimize for the now case and forsake the future.

Also, I'd like the niche use cases (mirroring a virtual 1920x1080 60Hz display onto a hundred screens as a window, or capturing a webcam as a screen to send over Telepathy, or having 3 DEs running on 3 screens with 3 independent focuses and input device associations between touchscreens, gamepads, keyboards, mice, trackpads, etc) to work magically - to fit in as easily as the standard use case (one display that doesn't hotplug, one focus on one window manager with one keyboard and one tracking device).


















2013/08/01

Reddit Rants 2: So I wrote a book for a reddit comment

http://www.reddit.com/r/planbshow/comments/1je0xj/regulation_dooms_bitcoin_plan_b_17_bitcoin_podcast/cbe7mv1

So I really like the planb show! I guess I like debating macroeconomics. I can't post the entire conversation here because it's a back and forth. 2500 words on the second reply, though!

2013/07/27

Software Rants 14: Queue Disciplines

Rabbit hole time! After getting my TL-WDR3600 router to replace a crappy Verizon router/DSL modem combo, in preparation for my switch to Cablevision internet (for soon, ye of 150 KB/s down and 45 KB/s up, ye days be numbered), I have dived headfirst into the realm of OpenWrt and the myriad of peculiarities that make up IEEE 802.

My most recent confrontation was over outbound queueing - I found my experience using my (constantly bottlenecked) DSL connection pitiful in terms of how well page loads were performing and how responsive websites were under load, so I investigated.

I found a pitiful amount of documentation besides the tc-X man pages on the queue algorithms the kernel supports. I was actually reading the source pages (here are my picks of interest). 

So of course I go right for the shiny new thing, codel. It is a part of the 3.3 kernel buffer bloat tuning. It has to be better than the generic fifo queue, right? The qos package of luci in openwrt always uses hfsc, for example, so it requires elbow grease and an ssh connection to get fq_codel running.

Well, not really. It is just ssh root@router.lan tc qdisc add dev eth0.2 root fq_codel. But it is the thought that counts.


What did make me happy was the reinforcement of my purchase decision by the Atheros driver (ath71xx) being one of the few kernel 3.3 supported BQL drivers. So that was good. It is currently running on my wan connection, we'll see how it works.

What I found interesting was that apparently the networking in Linux is a real clusterfuck. Who would have known. The bufferbloat problem from a few years ago was, and still is, serious business. And according to documentation 802.11n drivers are much, much worse than just ethernet switches.


It was an educational process, though. CoDel is a near-stateless, near-bufferless, classless queue discipline that is supposed to handle network variability well and work out of the box, which is exactly what the next generation of network routing algorithms needs. And if it works well, I hope it takes over the world, because FIFO queues are so 2002.

2013/07/15

ffmpeg syntax to extract audio from an mp4

ffmpeg -i <inputfile.mp4> -acodec copy <outfile.aac/.mp3>

Just keeping this on file. I need it way too often and always forget it. Note that -acodec copy copies the audio stream without re-encoding, so the output extension has to match the source codec (usually .aac for an mp4); to actually get an mp3 you would re-encode with something like -acodec libmp3lame instead.

2013/06/29

Software Rants 13: Python Build Systems

So after delving into pcsx2 for a week, and having the wild ride of a mid-sized CMake project, I can officially say that any language that makes conditionals require a repetition of the initial statement is dumb as hell. But CMake demonstrates a more substantial problem - domain languages that leak, a lot.

Building software is a complex task. You want to call external programs, perform a wide variety of repetitious tasks, do checking, verifying, and on top of that you need to be able to keep track of changes to minimize time to build.

Interestingly, that last point leads me to a tangent - there are 3 technologies that are treated pretty much independently of one another but overlap a lot here. Source control, build management, and packaging all involve the manipulation of a code base and its outputs. Source control does a good job managing changes, build systems create conditional products for circumstance, and packagers prepare the software for deployment. 

I think it would be interesting if a build system took advantage of the presence of the other two dependencies of a useful large software project - maybe using git staging to track changes in the build repository. Maybe the build system can prepare packages directly, rather than having an independent packaging framework - after all, you need to recompile most of the time anyway.

But that is beside the point. The topic is build systems - in particular, waf. Qmake is too domain specific and has the exact same issues as make, cmake, autotools, etc - they all start out as domain languages that mutate into borderline Turing-complete languages because their domain is hugely broad and complex, and has only grown more complex over time. This is why I love the idea of Python-based build systems - though at the same time, it occurs to me most Python features go unused in a build system and just waste processor cycles too.


But I think building is the perfect domain for scripting languages - Python might be slow, but I couldn't care less considering how pretty it is. However, my engagements with waf have made me ask some questions - why does it break traditional pythonic software development wholesale, from bundling the library with the source distribution to expecting fixed-name wscript files that provide functions taking some wildcard argument that acts really magical?

What you really want is to write proj.py and use traditional pythonic coding practices with a build system library, probably from PyPI. You download the library and do an import buildsystem, or from buildsystem import builder, or something - rather than pigeonholing yourself into a two-decade-old philosophy of fixed-name, extensionless files in every directory.

Here is an example I'd like to write in this theoretical build system covering pretty much every aspect off the top of my head:

# You can play waf and just stick the builder.py file with the project,
# without any of the extensionless fixed name nonsense.
import builder
from builder import find_packages, gcc, clang
from sys import platform

subdirs = ('sources', 'include', ('subproj', 'subbuilder.py'))
name = 'superproj'
version = '1.0.0'
args = (('install', 'inst'),)  # (option name, value init() hands back when it is passed)
pkg_names = ('sdl', 'qt5', 'cpack')
pkgs, utils, libs = {}, [], []

builder.lib_search_path += ('/lib', '/usr/lib', '/usr/local/lib', '~/.lib', '/usr/lib32', '/usr/lib64', './lib')

# Start here and parse the arguments (including the optional specifiers in args); a lot of the builder
# global members can be initialized with this function via default arguments.
todo = builder.init('.', opt=args, build_dir='../build')

if todo == 'configure':
  # builder packages are an internal class, providing libraries, versioning, descriptions, and headers.
  # when you call your compiler, you can supply packages to compile with.
  pkgs.update(builder.find_packages(pkg_names))
  pkgs.update(find_packages('kde4'))
  utils += builder.find_progs('gcc', 'ld', 'cpp', 'moc')
  # Find a library by name; it will do a case insensitive search for any library file of system descript,
  # like libpulseaudio.so.0.6 or pulseaudio.dll. It would cache found libraries and not repeat
  # itself on subsequent builds.
  libs += builder.find_lib('pulseaudio')
  otherFunction()  # plain python - call whatever helpers you like
  builder.recurse(subdirs)
elif todo == 'build':
  # You can get environments for various languages from the builder, supplying them with packages.
  cpp = builder.env.cpp
  py = builder.env.py
  qt = builder.env.qt  # for moc support

  # you can set build dependencies on targets, so if the builder can find these in the project tree
  # it builds them first
  builder.depends('subproj', 'libproj')

  # builder would be aware of sys.platform
  if platform == 'linux':  # linux building
    qt.srcs('main.cpp', 'main.moc')
    qt.include('global.hpp')
    qt.pkgs = pkgs['qt5']
    qt.jobs = 8  # or the .compile syntax
    qt.cc = gcc  # set the compiler
    qt.args = ('-wstring',)
    # qt.compile would always run the MOC
    qt.compile(jobs=8, cc=gcc, args=qt.args + ('-O2', '-pthread'), warn=gcc.warn.all, out='verbose')
    # at this point, you have your .o files generated and dropped in your builder.build_dir directory.
    builder.recurse(subdirs, 'build')
  elif platform == 'darwin':  # osx building
    pass
  elif platform == 'win32':  # windows building
    pass
elif todo == 'link':
  pass  # do linking
elif todo == 'install':
  pass  # install locally
elif todo == 'pack':
  pass  # package for installation, maybe using cpack

Basically, you have a library that enables building locally, and you use it in a procedural order of operations, rather than defining black box functions you want some builder program to run. There could also be prepared build objects you could get from such a library - say, builder.preprocess(builder.defaults.qt) would supply an object that handles whatever operation is being invoked (so you would use it regardless of which part of your script is running) to do the boilerplate for your chosen platform.

I imagine it could go as far as to include anything from defaults.vsp to defaults.django or defaults.cpp or defaults.android. It would search on configure, include on build, and package on pack all the peripheral libraries complementing the choice development platform in one entry line.

The principal concerns with such a schema are mainly performance. You want a dependency build graph in place so you know what you can build in parallel (besides inherently using nprocs forked programs to parse each directory independently, where the root script starts the process - so you need builder.init() in any script that is meant to start a project build, but if you recurse into a subproject that calls that function, it doesn't do anything a second time).

You would want to support a lot of ways to deduce changes: besides hashes, you could use filesystem modification dates, or maybe even git staging and version differences (i.e. a file that doesn't match the current commit is assumed changed). You would cache the results afterwards. By default you would probably use all available means, and the user could turn some of them off for speedups at the cost of potential redundant recompilation (e.g. if you move a file, its modification date changes and the old cache entry is invalidated, but if it hashes the same it is assumed to be the same file moved and isn't recompiled).
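
As a sketch of that layered change check (assumed behavior, not what waf or scons actually do): trust the cheap modification-time test first, and fall back to hashing so a moved-but-identical file isn't rebuilt.

import hashlib
import os

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def needs_rebuild(path, cache):
    """cache maps path -> (mtime, sha256) recorded by the previous build."""
    mtime = os.path.getmtime(path)
    old = cache.get(path)
    if old is not None and old[0] == mtime:
        return False                  # cheap test: same mtime, assume unchanged
    digest = file_hash(path)
    cache[path] = (mtime, digest)     # refresh the cache entry for the next run
    # Only rebuild if the content actually differs (a moved file hashes the same).
    return old is None or digest != old[1]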

If you support build environments, you can support radically different languages. I just think there are some shortcomings in both scons and waf that prevent them from truly taking advantage of their pythonic nature, and using all the paradigms available to python is one of them, I feel.

2013/06/24

Magma Rants 5: Imports, Modules, and Contexts

One of the largest issues in almost any language is the trifecta of imports, packaging, and versioning. For Magma, I want it to be a well thought out design that enables portable, compartmentalized code, interoperability between code, and the ability to import both precompiled and compiled object code.

First, we inherit the nomenclature of import <parent>:<child>, where internally referencing such a module is through the defined <parent>:<child> namespacing. Imports are filesystem searched, first locally (with a compiler limited depth, blacklist, and whitelist available) then on the systems import and library paths. You can never define a full pathname import to a static filesystem object with the import clause, but the internal plumbing in std:module includes the necessary woodwork to do raw module loading.

The traditional textual headers and binary libraries process still works. You don't want to bloat deployment libraries with development headers, though if possible I'd make it an option. Magma APIs, with the file suffix of .mapi, are the primary way to provide an abstract view of a library implementation.

In general practice though, we want to avoid the duplication of work in writing headers and source files for every part of a program, to speed up compile times. This is mostly a build system problem, in that you want to keep a historic versioning of each module (via hash), so if a module's hash changes you know to recompile it. This means you should write APIs for libraries or externalized code - which is what a C++ header really should be for.

In addition, an API only describes public member data - you don't need to describe the memory layout of an object in an API so that the compiler can resolve how to allocate address space; you just specify the public accessors. When you compile a shared object, the public accessors are placed in a forward table that a linker just reads out. Note that since a library can contain multiple API declarations in one binary, the format also has a reference table to the API indexing arrays.

The workflow becomes one of importing APIs where needed, and using compiler flags and environment variables to search for and import the library implementing that API. One interesting prospect might be to go the other way - to require compiled libraries to be named the same as their APIs, and to have one API point to one binary library with one allocator table. It would mean a lot of smaller libraries, but that actually makes some sense. It also means you don't need a separate linker declaration, because any imported API will have a corresponding (for the linker's sake) compiled binary of the same name in the library search path.

I really like that approach - it also introduces the possibility of delayed linking, so that a library isn't linked in until it is accessed, akin to how memory pages work in virtual memory. You could also have asynchronous linking, where accessing the library's faculties before it is pulled into memory causes a lock. Maybe an OS feature?

As a thought experiment I'm going to document what I think are all the various flaws in modern shared object implementations and how to fix them in Altimit / Magma:

  • You need headers to a library to compile with, and a completely foreign binary linkable library or statically included library to link in at build or run time.
  • You need to describe the complete layout of accessible objects and functions in a definition of a struct or class, so that the compiler knows the final size of an object type.
  • You need to make sure the header inclusions and library search path contain the desired files, even on disparate runtime environments.
  • Symbol tables in binaries can be large and cumbersome to link at runtime and can create sizable load times.

2013/06/06

Magma Rants 4: Containers and Glyphs

Containers are the most pervasive core aspect of any language's long term success. In Magma, since () denotes scope blocks (and can be named), and [] is only used for template declarations, {} and [] are available standalone to act as container literals like in Python. [] is an std:array, the primitive statically sized array of homogeneous elements. If it has multiple types in one array, it uses std:var as the array type, using the natural from[object] conversion available in var if it is a user defined type, or an overridden more precise conversion function.

{X, X, X} is for unique sets, and {(X,Y),(X,Y)} is for maps. In the same line of thinking, the language tries to find a common conversion type these objects fit in (note: the compiler won't trace the polymorphic inheritance tree to try to find a common ancestor) and casts them, or throws them in vars. The indexing hash functions for sets and maps that determine uniqueness are well defined for std types and you can implement your own as a template override of std:hash[T](T), which needs to return a uint.

Python (since I love Python) also includes the immutable list type () as a tuple, but since Magma's [] is already a static contiguous std:array and not an std:dynArray, there is no performance benefit. Note that, like everything in Magma, [] and {} are implied const and can be declared muta {} or muta [] to construct a mutable version.

One of the principal goals I had in thinking about Magma is that a majority of languages overload and obfuscate the implication of glyph patterns, which makes compilation time consuming in a complex parser, since syntax is very situational depending on the surrounding text in a source file. Additionally, any time a language uses multiple sequential glyphs to represent some simple concept (equality as ==, scope as ::, // for comments), I feel it has failed to properly balance glyph allocation and behavior. Albeit, in the current documentation on Magma, I'm using == for logical equality because I turned = back into the assignment operator instead of :, solely because += is way too baked into my brain to see +: and not think it is strange, and it allowed me to use : for scope and . for property access (which are different, Java).

In conceptualizing Magma, I drafted out all the available glyphs on a standard keyboard and assigned them to language functions. As a result, glyphs like $ became available as substitutes for the named-return constructs of other languages, and function declarations became more obvious because you declare a return type in a function definition (or fn).

2013/06/05

Magma Rants 3: Powerful variance and generics

Magma uses the same compile-time template checking that C++ uses - templates (defined with square braces [] in class and function definitions). The distinction between polymorphism and templates is, I feel, still valuable, and unlike Go, I don't see anything inherently wrong with natively compiled templates in the C++ vein - if a type used with a template doesn't support, at compile time, the functions and typing the template uses, it is a compiler error. The implementation will try to coerce types using the generic object functions object.to[T](T) and object.from[T](T); if either direction is defined (because either class could define the conversion to the other type), the cast is done. This avoids the ambiguity of dynamic casting in C++, because there is a well-defined set of potential casts for every object, and the only difference between static_cast and dynamic_cast is whether the casts themselves are implemented as const or not. Const casting still exists but requires the "undefined" context to allow undefined behavior (i.e., mutating a passed-in const object can be very bad). Const cast is found in std:undefined:ConstCast().
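
The compile-time check is the same behavior C++ templates already give you; as a minimal sketch (plain C++, with a hypothetical to<T>() member standing in for Magma's object.to[T]()), instantiating generic code with a type that lacks the conversion simply fails to compile:

    #include <string>

    // A hypothetical to<T>() member standing in for Magma's object.to[T]() hook.
    struct Celsius {
        double degrees;
        template <typename T> T to() const;   // specialized per target type
    };

    template <>
    std::string Celsius::to<std::string>() const {
        return std::to_string(degrees) + " C";
    }

    // Generic code usable with any T that provides to<std::string>(); using a
    // type that doesn't is a compile-time error, never a runtime surprise.
    template <typename T>
    std::string describe(const T& value) {
        return value.template to<std::string>();
    }

    int main() {
        Celsius c{21.5};
        std::string s = describe(c);   // fine: the conversion exists
        // describe(42);               // would not compile: int has no to<std::string>()
        return s.empty() ? 1 : 0;
    }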

From the other direction, Magma contains std:var, which is the Stratos autoboxing container type. It is used pervasively as a stand-in for traditional polymorphic [Object] passing, because you can retrieve the object from a var with compile-time guarantees and type safety, and var includes a lot of additional casting functionality for strings and numbers not found in native Magma casts. If you have a heterogeneous collection, you almost always want the contents to be vars, unless you have a shared common restricting ancestor that denotes a subset of objects. You can still query the type of a var, and it delegates all interactions besides the ? operator to the contained object. If you really need to call the contained object's own query() / ?, call var:get[T]() first and call it on the result - the same pattern applies to reaching the contained object's own get function.
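
C++'s std::any is a reasonable stand-in for illustrating the var workflow (an analogue, not the actual std:var API): box heterogeneous values, query the held type, and extract with a typed get.

    #include <any>
    #include <iostream>
    #include <string>
    #include <typeinfo>
    #include <vector>

    int main() {
        // A heterogeneous collection of boxed values - roughly a list of std:var.
        std::vector<std::any> bag;
        bag.push_back(42);
        bag.push_back(std::string("hello"));

        for (const auto& item : bag) {
            // Query the held type, then extract with a typed get - the analogue
            // of checking a var's type and calling var:get[T]().
            if (item.type() == typeid(int))
                std::cout << std::any_cast<int>(item) << '\n';
            else if (item.type() == typeid(std::string))
                std::cout << std::any_cast<std::string>(item) << '\n';
        }
        return 0;
    }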

Magma also has the auto keyword as a type specifier that deduces a type from the rvalue of an assignment, in the same way C++ does. The type is statically deduced from the rvalue expression and parsed as such.
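
For reference, this is exactly the C++ behavior being borrowed - the type is deduced from the right-hand side at compile time:

    #include <map>
    #include <string>

    int main() {
        auto count = 3;                      // deduced as int from the rvalue
        auto label = std::string("three");   // deduced as std::string
        std::map<std::string, int> tally{{label, count}};
        for (auto it = tally.begin(); it != tally.end(); ++it)   // deduced iterator type
            it->second += 1;
        return tally[label] == 4 ? 0 : 1;
    }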

2013/06/02

Magma Rants 2: Low level abstractions in the base language

One thing I don't like in modern low-level language design is how easily you hang yourself on aged constructs from the 70s, like jumps, and how valuable glyphs like the colon are consumed to maintain a very niche feature. In response, Magma's base language is just the core components of modern language paradigms, leaving out a lot of traditional behavior that goes rarely used and can easily play gotcha on a developer. Here is a collection of traditionally core language features in C and its ilk that are available only under the System context:
  • std:bit contains bitwise operations (bitwise AND and OR, shift left and right, bitwise negation). Many std classes like bitfield and flags - and the compiler, in the presence of multiple same-scope booleans - will use bitwise arithmetic and bit operations, but they aren't user facing because a user rarely needs them. The traditional "or" flags syntax of FLAG1 | FLAG2 | FLAG3 is instead a function of the flags type's addition, in the form FLAG1 + FLAG2 + FLAG3, and subtracting a flag removes it from the bitfield (a rough sketch of such a flags type follows this list).
  • Importing std:flow enables (as a compiler feature) goto, continue, break, and label. They take the form of functions: std:goto(label), std:label(name), std:continue(), and std:break().
  • std:ptr contains the raw pointer construct ptr[T], and the alloc() and free() functions.
  • std:asm introduces the asm(TYPE) {} control block to inline assembly instructions in Magma code.
The base language is thus memory safe, doesn't enable bit overflow of variables, and has consistent control flow. This functionality is still available for those who need it, but excluded out of consideration for any large project that wants to avoid the debugging nightmares that emerge from using these low-level tools.
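
As promised above, here is a rough sketch (ordinary C++ with a made-up Flags class, not Magma syntax) of a flags type that exposes + and - instead of raw | and &, using bitwise operations internally exactly as described:

    #include <cstdint>

    // A tiny flags wrapper: bitwise arithmetic stays inside the class, while
    // callers combine flags with + and remove them with -.
    class Flags {
        std::uint32_t bits;
    public:
        constexpr explicit Flags(std::uint32_t b = 0) : bits(b) {}
        constexpr Flags operator+(Flags other) const { return Flags{bits | other.bits}; }
        constexpr Flags operator-(Flags other) const { return Flags{bits & ~other.bits}; }
        constexpr bool has(Flags other) const { return (bits & other.bits) == other.bits; }
    };

    constexpr Flags FLAG1{1u << 0}, FLAG2{1u << 1}, FLAG3{1u << 2};

    int main() {
        constexpr Flags opts = FLAG1 + FLAG2 + FLAG3 - FLAG2;
        static_assert(opts.has(FLAG1) && opts.has(FLAG3) && !opts.has(FLAG2), "flag math");
        return 0;
    }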

2013/05/25

Game Rants 3: Neverwinter: Part 3: Endgame

Endgame is when you hit 60, right after all the leveling content becomes extremely hard to solo with the stock level 15 companions (yay). I put in the majority of my time here before running out of things to do and quitting.

Part 3: The End Game
Pros:
  • Lots of dungeons to do via the inclusion of epic difficulty.
  • PVP gear is easy to obtain, so it doesn't devolve into "full pvp geared player crushes noob" for too long.
  • Zone events become very much worth doing as soon as you aren't farming mobs to get enchantments. This gives you some modicum of reason to go back to the last 3 zones.
  • Astral diamond acquisition becomes solely a game of playing the auction house (also a con).
  • Combat is dynamic, if limited. PVP engagements can be skillful, but they can also be painfully RNG (so also a con).
  • Professions and worshiping are long run, low commit engagements to keep people logging in.
  • Gearscore requirements on epics mean you won't get undergeared players ruining runs.
Cons:
  • Weapon / Armor enchants are purely pay to win. With 1% success rates, a Greater Tenebrous enchant adding upwards of thousands of DPS to your output (more than any blue -> epic upgrade), and the extreme rarity of any weapon or armor slot enchantment, you end up with absurd AD prices on the AH that normal players can't realistically afford.
  • In PVE, there are only 5 mans. The queue system still doesn't insist on a healer (even though they are mandatory), so you pretty much have to make premades or spend a hella long time queuing, getting borked group compositions, and requeueing. Gauntlgrym should be out this month and fix this, so it isn't that bad.
  • What is bad is that every single PVE encounter, with a number of exceptions I could count on my fingers, is exactly the same, even at epic difficulty. Don't stand in red areas, deal with constant add spawns, with boatloads of adds at 75 / 50 / 25%. Repeat. That is almost every single boss fight. The PVE is very flat in that it is mechanically rote.
  • Like in every other freaking MMO ever, health does not scale on gear relative to the raw damage increases from weapon upgrades. This means that, unlike PVP at any other level, max level PVP in epics is very zergy, with opportunities to one-shot players. This never happens while leveling because the ratio of base health to damage on green or blue weapons stays in check, but at max level it only gets worse, since only select gear gives minute max health increases while weapon damage (and the raw bulk of other DPS stats combined) increases burst and DPS significantly each tier.
  • Mounts completely break PVP. This is really bad at 60, where you end up doing a lot of bgs, and keep cursing at the pay to win nonsense that is 110% speed mounts. They give you a tremendous competitive advantage when you can backcap over and over all game and nobody can keep up with you because you spent real money on a mount to win bgs.
  • By max level, players obtain absurd amounts of CC, and there is no DR on any of it. As a consequence, almost any 2 v 1 engagement ends with one player unable to move while dying in seconds. The existence of latency makes it completely random whether someone's dodge ability makes them immune to some spell now or 5 seconds from now. People get so many evade rolls they can just spam them until they get away. The lack of any choice paradigm in ability usage means you just unload your encounters, maybe a daily, run away, recharge, repeat. You have no reason to stay engaged in a fight, especially if you unloaded on someone who hasn't been able to retaliate yet.
  • Gearscore requirements on epics are very stringent and, I feel, excessive - most of these dungeons have no enrage timers, and the only requirement is that people avoid damage and don't get killed. And avoiding damage doesn't come any quicker with more gear. The gearscore requirements in effect artificially gate content by making you run lower level dungeons for gear, not because you need it to defeat bosses, but because the game says you have to before it lets you attempt those bosses.
  • Converting astral diamonds from zen isn't that bad. I'd prefer Cryptic make its money off a "gold" selling industry (since, in effect, astral diamonds are the gold of NW). What is bad is that every single PVE dungeon drop is BOE, which means you can pay-to-win your way into a full set of the best of anything for real-life money. This breaks the morale of any possible competitive PVE scene and makes progression pointless.
  • Astral diamond prices in-game do not reflect the reality of AD acquisition rates. Nightmare lockbox horses, which are a tiny chance to get and cost $1.50 a key to unlock, sell for as much as 80% mount speed training plus half of 110% speed training, and give you both, plus the mount. The combined cost of both mount skills purely through AD, not even considering the real-economy valuation of zen to AD, puts just getting mount training at almost 3 times the price of buying a 110% mount that gives you the skill. This is pervasive with anything that costs diamonds - respecs, scrolls, etc. - all cost at least an order of magnitude more than they should. This disconnect cheapens the experience and dissuades players.
  • Likewise, zen prices are absurd and completely out of hand. $40 companions, $50 mounts, $10 single bags on one character. Outrageous, and if Cryptic doesn't make fistfuls of money on NW, it is because they priced the item shop so poorly it alienated a huge portion of the playerbase from buying anything. Not only did they give away all the work (the leveling content) for free and make it fun, but the end game doesn't keep you engaged for long, and you can walk away, just like in SWTOR, taking the bulk of their effort with you, with no incentive to pay for anything. That is a recipe for disaster.
  • Aggro is still broken for clerics, who will instantly pull aggro doing anything and then never lose it. Tanks are optional, but healers are not.
So overall, the PVE is predictable and dull, without any raid content yet. The dracolich fight in Castle Never is, surprisingly, killing adds again. Go figure. PVP is the only real challenge, and that is entirely pay to win between buying Tenebrous plus armor / weapon enchants, and the mounts. Damage outscales HP and control abilities are too abundant, so you get two-shot without being able to play. Clerics have limited PVP utility - they can use their blue shields to prevent you from taking almost any damage, but they require pay-to-win levels of itemization to keep someone alive through healing.

The real problem is that there is no carrot. Tier 2 gear might take a long time to get farming epics (heroics) but you don't even need it to clear anything. If not for the gearscore requirements, you could do every instance in blues (unless your dps is so low you can't keep the add waves down) because there are no enrage mechanics, nothing that has to be tanked, and clerics pull all the aggro anyway.

You can get full pvp gear in a day, realize it requires absurd pay to win nonsense to get the epic enchants, and quit.

There was a lot of potential ruined by the pay-to-win aspects and an excessive focus on the leveling process. But once you have seen the zones, you are pretty much done - there is no alternative progression path, and the story and zones are linear. So you play to max, play a few days, do the dungeons and PVP, and you are completely finished. Which is never a good thing for an MMO.

I'm hoping added content and balancing make Neverwinter good again, but some very fundamental mechanics - spammable immunity dodges, 8 buttons max, aggro mechanics that don't focus on tanks holding threat, the ability to dodge-spam your way past almost all damage, the lack of CC DR, and an excess of CC at that - add up to some pretty big flaws that keep the game from reaching its potential.

Because the setting is great. The atmosphere is ok (it doesn't come close to Baldur's Gate quality atmosphere, but that had a lot more work put into characterization - with the lack of NPC persistence in NW, and your companions just being dumb meat bricks, you have no overarching engagement besides this distant threat of Valindra).

I'd like to see a refactoring of player abilities to prompt more choice paradigm behavior - making in-combat decisions on what to do beyond "hit the button and get rooted or not" and beyond picking a daily to use. The best MMO mechanics evolve from players having to choose between offense and defense, burst or sustained, etc., in combat - not just outside it when you pick your spells. The cooldown-based usage and obvious situations for encounter powers cheapen the gameplay. The ability for everyone to avoid so much damage and go immune to effects so often cheapens role specialization. Every dpser gets some AOE (even if the rogue's AOE sucks), some control, some utility, etc. - it makes them jacks of all trades where you should have aggressive specialization and dependence on other players to form bonds of engagement. Especially in D&D, where you classically needed a bunch of different roles to realistically combat a wide array of threats.

It was the most fun MMO I've played since SWTOR, so props. But it is still full of holes I can't wait to see filled (I'd love to become a lead developer on an MMO some day to put my ideas to the test).

2013/05/21

Magma Rants 1: Introduction to the Language

I've been doing a bit of a thought experiment recently, around the idea of programming languages. I imagine the vast majority of programmers do this, so I don't think I'm special here, and while I would love for this idea to go somewhere, the pressing need for a sustainable consistent influx of cash is more important than solving big problems and fixing the world.

I have a manifesto writeup going on in a Google Doc, found here. I regularly add sections over time; the real pain is when I redo parts of the spec because I don't like the way something is petering out. Here are some examples of that:


  • Initially, I liked the idea of using colons for assignment, such that in constructors you would use foo : 5, in variables you would use float bar : 3.5, etc. I gave up on this for two reasons. One: even though the precedent of using an equals sign for assignment isn't really mathematically accurate (and I really liked the idea of not needing the terrible == for equality), some operations like +: and -: just look ugly with colons attached. Blame my great aesthetic design mindset. Two: dropping it opens up the colon as another dedicated glyph, and lets me separate property access (.) from scope (:), which I prefer.
The purpose of Magma is to solve a problem - one I hope most languages aim to solve - by accepting a reality of native language design: you want a kitchen sink you can write firmware or a kernel in, but you also want to actually build working projects with it. I think my approach is a step towards solving this, in that the Magma specification outlines multiple effective compiler specifications, with different error conditions and warnings depending on the chosen context. Contexts are compiler flags specifying what kind of binary you are constructing, and the compiler builds each library and executable in its own context, recorded as an optional header in its metadata specification (depending, of course, on whether you compile to, say, LLVM or some newer binary format for a new architecture). The default context is the application context, but the language specification has multiple contexts:
  • system - allows raw bitwise operations, raw shifts, raw pointer manipulation, jumps, and pointer assignment to integers, and casting between the various integer types (pointers, ints, fixed-width chars) won't produce compiler warnings or errors (a rough sketch of the kind of code this permits follows this list). You can also use asm: blocks to write inline assembly. This context also outright disables exception handling and any bounds safety checks in standard library classes. It is meant for kernels and firmware, and should be used sparingly - even a kernel proper should have most of its libraries written in another context. It is also a good idea to isolate system context code in its own libraries or binaries, independent of the bulk of a main application, as a sort of "danger zone".
  • lib - compiles libraries instead of executable binaries, with no main function. By default, uses the app context, and aliases applib. You can create libraries in other contexts by just suffixing the name with lib, such as systemlib.
  • app - The full standard library is available, but you can't use raw shifts, jumps, pointers (use refs) or std:bit (bool and std:flags will still use bitwise internally just fine). 
  • web - Targets the Fissure intermediary language, enabling binary web applications. The full ramifications of this context need to be ironed out through trial and error - it has no file system access, no access to the networking stack besides the convenience http send receive layer, etc.
Besides compiling binaries in various contexts, for security purposes Magma apps need to specify (in their metadata, or embedded) the various std parts they use and what system resources they access (files, network, contacts, accounts, 3D video, audio, etc.) so they can request permissions in a mandatory access control environment seamlessly.
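
To make the system/app split concrete, here is the sort of code - ordinary C++ with GCC-style inline assembly, not Magma - that only something like the system context would accept, and that an app context would reject outright:

    #include <cstdint>
    #include <cstdio>

    int main() {
        int value = 42;

        // Pointer <-> integer punning: the kind of cast an app-context compiler
        // would reject, but a system-context build would allow without complaint.
        std::uintptr_t raw = reinterpret_cast<std::uintptr_t>(&value);
        int* alias = reinterpret_cast<int*>(raw);

        // Inline assembly (GCC/Clang syntax): also system-context-only territory.
    #if defined(__GNUC__) && defined(__x86_64__)
        asm volatile("nop");
    #endif

        std::printf("%d @ %p\n", *alias, static_cast<void*>(alias));
        return 0;
    }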

The broad objective is to recognize 5 things:
  1. Programmers like familiar syntax. If you can get one syntax across an entire stack of languages for various purposes (which is what Magma / Fissure / Stratos are intended to be), you can significantly streamline the time investment for new developers to pick up the entire technology paradigm.
  2. People like readable code. Magma not only tries to achieve minimal glyphic overhead and readable code, it can also be written whitespace-significant or not (using traditional curly braces and semicolons) to enable choice.
  3. People like choice. Choice of paradigm, choice of library, and contexts enable a choice of warning states.
  4. Times are changing. Heterogeneous computing is going to be huge and massively important, and no modern language is going to have an easy time tacking on easy to use SIMD functionality the way Magma will with std:simd functions, parallel profiling in the compilation stage, etc.
  5. Build files suck. Qmake, CMake, etc. - Scons is really neat, but Python (lacking contexts) is hard to shape into a nice syntax for compilation instructions. Enter the Stratos build context, the dialect of Stratos used to write .stob files to build Magma binaries. I mean, build systems collectively blow, and none of the modern languages (Go, Rust, etc.) have an immediate solution. Make syntax is completely alien to imperative languages (just like shell is completely alien), and the raw effort to learn them all is absurd and unacceptable for new developers in the coming years, at least to me.
I think we can do better than what we have, go beyond C++, D, and even Go and Rust, and really recognize the need for a native language you can write anything in while keeping it simple. Contexts, I like to think, make this a lot easier than trying to kitchen-sink everything into one compiler state and hoping people don't break stuff with premature optimizations in inline assembler.

The point of this blog series isn't to write the manifesto, but to brainstorm aspects of the language ideas I have by writing them down.

2013/05/12

Game Rants 3: Neverwinter: Part 2: The Mid Game

The mid game starts in Blacklake and ends in the Northdark. It spans from level 5 to 60, which means it is the super-majority of the leveling content. As it turns out, that is most of what the game is, and that is what I'm touching on here.


Part 2: The Mid Game
Pros:
  • There are options to leveling - foundries, quests, dungeons, skirmishes, and pvp all award xp, so you can level any way you want (in theory).
  • Zones have directed stories - you start near the entrance, progress through quest hubs, and eventually reach a dungeon at the end. Every zone has a corresponding big bad in a dungeon to kill, and all these dungeons have epic level 60 versions too.
  • Powers are obtained quickly early on and peter out at higher levels to 2 - 3 new abilities every 10 levels. This means you get the majority of your ability choices quickly and have actual skill choices early on.
  • Most abilities are useful for something, at least on the TR, CR, and Cleric in my experience. There are a few standout "garbage" powers that are useless, but they are few and far between. In most circumstances, the powers you run are situational, which is good for the choice paradigm.
  • Quest dialog and most mobs have voice overs, which help immersion.
  • The graphics are great, and the game runs exceptionally well on Arch (which is where I'm running it from).
  • Music is great.
  • There really isn't much pressure in the pay-to-win direction while leveling. The leveling process is really fast, so XP boosts are not necessary, and you can easily hit max level just doing the leveling quest content. You can't usually afford a mount as soon as you hit level 20, but you almost always can by 23, and the general uselessness of gold makes it easy to justify the purchase. Bag space is the biggest offender, but you do get 2 bags (one at 10 and one at 30) that somewhat offset the bag pains, and proper inventory management lets you do entire quest hubs, visit the vendor, and repeat ad nauseam without really feeling a space crunch.
  • PVP auto levels you to the X9th level in a bracket, so there is no level imbalance. Ability and gear imbalance can contribute somewhat to the experience, but each battle gives half a level and you can get max level pretty quickly through it, so it is pretty balanced leveling pvp. I feel like they learned well from both the failings of the no-balancing WoW and the everyone-to-max level TOR.
  • Beyond the visuals, the environments are amazing. Being in the crater of a volcano, climbing a massive Ice Giant's pick, or battling gray wolves beneath a giant flaming wolf-carved mountain all present epic landscapes. The look is great and really conveys a strong atmosphere. A real standout was the Chasm, which got progressively more corrupted the deeper you went.
Cons:
  • There are very few consistent characters and no overarching story arc in the leveling content. That means there is very little engagement with any zone's story, because you know the characters are fleeting.
  • You don't change the world through your actions. You go places, kill monsters, they will still be there, Helm's Hold is still under demon control, Icespire is still covered in giants, etc - there is no real phasing, so the game doesn't change as you accomplish things, cheapening the engagement even more.
  • Quests rarely wrap up in the field and almost always require running back to town. This is offset by having multiple progressive objectives in a single quest, but the need to go back to the Protector's Enclave to turn in a completion quest after each zone accentuates the issue - even though you do go back for a reason (usually when you finish a zone, there is another one to progress to).
  • While the zones looked awesome, very few environments were manipulable or changed during progression. It was a very static world - besides the mobs standing around waiting to fight and the quest NPCs idling in camps, the world itself is fairly unchanging.
  • There is a lot of "free" content here - the leveling process is still dragged out, and these zones were all complex works of art that took a lot of effort, but they still reek of being unnecessary.
  • With the foundry nerf, the only ways to level now are quests and PVP. Dungeons and skirmishes give awful XP per run for the time commitment, and the gear you get is so fleeting they often aren't worth doing outside the fun of seeing them the first time (which is really fun!). Foundries are almost never worth doing anymore, ever, which I feel hurts one of the game's best aspects.
  • When leveling up, powers have very nebulous descriptions, and a huge component of how viable an ability is is its animation - how long it is, where it goes, etc. You can't figure this information out on your own, and with respecs costing real money, there is no easy way besides lots of out-of-game research to figure out how to distribute your limited power points.
  • Dungeon roles are questionable. Guardians aren't really needed because most boss mobs spawn tons of adds that a guardian can't aoe tank, barely do any direct damage, and often swing so slowly anyone else can dodge them. Clerics are absolutely mandatory for almost anything past the Crypts, but the dungeon queue system will stick 5 dps in a doomed group instead of requiring a cleric (at least).
  • Aggro is very broken. For the most part, it is a combination of distance to target and damage done, but guardians can't outaggro healing aggro, which seems to apply from any distance. This means most mobs can't be pulled off a healing cleric, which makes combat a one dimensional kite rather than a coordinated utilization of classes filling roles.
  • This might be a bug more so than a hard negative, but the group travel mechanics are very annoying. You can't transport between solo player zones and dungeons while in a group, and it forces you to wait for your party to go almost anywhere. I feel like the entire mechanic is a pointless holdover from D&D proper, and letting people zone into places on their own wouldn't hurt.
The biggest issue with the mid game is the needlessness of it all - the story is too static to be deeply engrossing besides a few rare characters like the lovers that show up in both the Plague Tower and deep Chasm. Because your actions don't have a lasting impact on the world, and the overarching story of catching Valindra takes a backseat after the tutorial and rarely pops up even in passing except in engagements in the city or in the Ebon Downs, the plot is all over the place and leaves players wanting.

This contributes to a greater sense that way too much development time and effort was put into this leveling content - from voiced over quests, to well realized zones, to all the different monster models and animations, a lot of this seems like a poor allocation of resources when launching a f2p MMO - people will level once, experience this content once, maybe twice if they level the single alt the game lets you roll without paying money, and then they expect a repeatable end game to keep them playing.

And f2p depends as much as any model on its persistent players to bring in new players who spend money, and to consistently buy new trinkets themselves as they hit the store. Your hardcore audience is your best source of spending, but you don't win them over with a lot of well designed questing zones, because they do those once and never come back.

I feel like the game would have done significantly better on a slashed budget with level 20 as the cap and 3 power and feat points per level, rather than the level 60 cap and all these excessive leveling zones (and let's be honest, the Plague Tower quest chain would have been a great place to end at a level cap, then add a few levels at a time as new zones are introduced, with early access to those zones for a few days - and, of course, deleveling players who go into them and gain higher levels, so they don't get a power advantage in PVE or PVP until everyone can enter). All these zones are massive developer sinks that absolutely took lots of development time and will produce very little return on investment, both in player time engaged in them and in income as a result of them. Like I said earlier in this post, there is little incentive to spend money while leveling (the lockboxes, I feel, are a good exception - even though the box keys are radically overpriced, $1.50 a key is much more reasonable for most players to shell out on a whim).

This is even more pronounced with a neutered end game - and, in praise of Cryptic, the leveling content is not short, but it isn't excessively long either. It might wear on some players, but it won't on most, and the ~60 hours to hit max level finishing the quest content (which many players might not even touch while PVPing) is acceptable. My argument against the quantity of it is directed at Cryptic's bottom line - they spent a lot of time making these beautiful zones with forgettable one-off plot threads that few people will pay money to experience, since they gave it all away for free. Kind of like how TOR gave away the best part of that game by making it f2p.

Another issue is the routes to 60 - any group content gives awful XP per run (I feel like completing a dungeon should give at least an entire level in XP, and a skirmish at least half, given the time commitment and the awful XP returns from just killing mobs), but it also doesn't give enough gear per run or per hour to justify doing it between quests. If you follow the actual progression the devs laid out in the questing content - run a zone, reach the end, do a dungeon, and then move on to another zone - you will outlevel zones in no time and end up unable to queue for dungeons whose quests you only just picked up at the end of some zones. Nerfing XP gains isn't an effective fix either, because some people just want to get to end game, and making them grind their way there, even with solid quest content, isn't making them happy paying customers any sooner.

The foundries were fun and interesting apart from the normal quest content. They would mix a lot less fighting, or a lot harder solo content, into an otherwise monotonous zoned questing experience, and if not for the exploits around knocking mobs off platforms or farming ogres, they would have been a great complement to the leveling experience. In the next part, I'll go into why the foundries are now completely useless beyond the fun they provide (and remember, games are about having fun - and I can't forget to mention I only write this much crap because Neverwinter is, at the end of the day, quite fun. Flawed, which is what I'm getting to, but still fun).