When I was active in the GNOME project, it was routine to spend an hour or two a day of my free time working on whatever needed doing - writing user docs, helping manage bug reports, chatting on IRC with my pals, compiling the latest builds in the background. I loved the people (still do!) and I loved the work. And I loved the software, too. I believe in the political and ethical notions of freedom and sharing that free software embodies. And I also prefer the freedom and flexibility to control my own data and computational destiny that comes with the Linux platform and the software stack that rides on top of it.
When I decided a couple of years ago to step down from my formal GNOME obligations so I could use my free time to start working on a book project, I expected that I would still be able to kick in a bit of free time to update user docs, triage bug reports, and help with some of the grunt work. But I quickly realized that if you can't spare the minimum amount of time and effort required to keep the latest versions of the software stack built, there's no way to make useful contributions to the development effort. That bar is quite high - too high for me given my new self-imposed obligations.
As a user, I remained (and remain) enthusiastic. But as the time when I was actively hacking recedes, the distance between the software stack on my computer and the latest interesting development work grows. The solution, of course, is simple - upgrade! Between Fedora and Ubuntu, this ought to be a relatively straightforward proposition. But at this point you have to remember that I am not really a hacker. To make the contributions I did, I had to learn some modicum of command line skills. But every time I had to do a serious system upgrade during my active days, I faced a chore and a crap shoot. When it went smoothly, great. When it didn't, I had to spend hours Googling and asking on IRC how to manually change the ".frabinator_conf" file, and how to get the "gurglebarger" module to load before the "bergenhafter" module, and why won't it talk to my printer? Or worse, why won't it boot? So when I stopped hacking, I stopped doing full system upgrades, because I'm supposed to be writing now, not hacking config files by hand.
For the way that I write, Emacs is still a beloved tool, and Gnumeric, even frozen in time in 2003, still meets pretty much all my number-crunching needs. Libxml - god love Daniel Veillard, one of my heroes - always builds no matter what the calcified state of my stack. Increasingly, though, all the cool new software is out of my reach as the kernel and the stack advance away from me. Occasionally I'll see some cool new gadget I want to try. I'll download it and try to compile it, and it'll tell me that I need to upgrade the frabinator. Sorry, no go. Gotta get back to writing.
The purpose of this screed is not to plead for y'all to stop working on the stack. That would miss the whole point of the power and joy of free software. The people who own the pieces of the stack love it, and want to make it better, and are doing amazing things with it that benefit the users who get the latest version. But people like me are always going to be left behind.
Here's my test: Global warming->1750->September 30->Hoover Dam->Colorado River Compact.
(Note that you seem to need to enter the precise URL string from the Wikipedia entry, e.g. "Kevin_Bacon".)
Someone needs to do a Kevin Bacon-style analysis of the linkages within Wikipedia as a body of knowledge: how many links separate any two given ideas within Wikipedia? And what sort of idea/knowledge clusters emerge?
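For anyone who wants to try this computationally, here's a minimal sketch using R's igraph package. The toy link table below is mine, not real Wikipedia data - a real analysis would have to parse the link graph out of a database dump.

```r
library(igraph)

# Toy article-to-article link table standing in for the real link graph
links <- data.frame(
  from = c("Global warming", "1750", "September 30", "Hoover Dam"),
  to   = c("1750", "September 30", "Hoover Dam", "Colorado River Compact")
)

g <- graph_from_data_frame(links, directed = TRUE)

# Shortest number of links separating two articles
distances(g, v = "Global warming", to = "Colorado River Compact",
          mode = "out")
```

On this toy graph the answer is four hops - exactly the chain in my test above.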
Green dots are daily "visits," as defined by Webalizer's default - essentially all requests from the same IP address separated by less than 30 minutes count as a single "visit." (This means that people poking Inkstain for its RSS feed more frequently than every 30 minutes, like p.g.o does, show up as a small number of visits despite accounting for a large number of hits.)
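For the curious, here's a rough sketch of that visit-counting rule in R. The column names and toy data are mine - Webalizer itself does this internally while parsing the server's access log.

```r
# Toy access log: one row per hit, parsed-from-Apache-log in real life
hits <- data.frame(
  ip   = c("1.1.1.1", "1.1.1.1", "1.1.1.1", "2.2.2.2"),
  time = as.POSIXct("2005-09-01 00:00", tz = "UTC") + c(0, 10, 120, 5) * 60
)

# A new "visit" starts when the IP changes, or when more than
# gap_minutes elapse since that IP's previous request
count_visits <- function(hits, gap_minutes = 30) {
  hits   <- hits[order(hits$ip, hits$time), ]
  new_ip <- c(TRUE, hits$ip[-1] != hits$ip[-nrow(hits)])
  gap    <- c(Inf, as.numeric(diff(hits$time), units = "mins"))
  sum(new_ip | gap > gap_minutes)
}

count_visits(hits)  # 3: two visits from the first IP, one from the second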
The black line is a 7-day rolling average, and the red dotted line is a simple linear regression - not really statistically meaningful, but hey - this is a blog, not science!
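In R, both overlays are only a couple of lines. A sketch with made-up data (the visit numbers below are fake, standing in for the real Webalizer counts):

```r
# Fake daily visit counts with a mild upward trend plus noise
set.seed(42)
days   <- 1:180
visits <- round(100 + 0.3 * days + rnorm(180, sd = 15))

# 7-day rolling average, computed as a centered moving mean
rolling7 <- stats::filter(visits, rep(1 / 7, 7), sides = 2)

# Simple linear trend - decoration, not inference
trend <- lm(visits ~ days)

plot(days, visits, col = "green", pch = 16, xlab = "Day", ylab = "Visits")
lines(days, rolling7, col = "black", lwd = 2)
abline(trend, col = "red", lty = 2)
```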
I'm looking for some reasonable way of sorting out how many of the hits/visits are coming from bots. If one assumes they're well-behaved and are therefore asking for "robots.txt," then they account for something like 10 percent at most. But I've got to think it's substantially higher than that.
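One crude way to get that lower bound in R, assuming a data frame of hits with hypothetical ip and path columns:

```r
# Toy hit log; in practice, parsed from the server's access log
reqs <- data.frame(
  ip   = c("1.1.1.1", "1.1.1.1", "2.2.2.2", "3.3.3.3", "3.3.3.3"),
  path = c("/robots.txt", "/index.html", "/index.html",
           "/robots.txt", "/feed.xml")
)

# Any IP that ever fetched robots.txt is presumed to be a crawler.
# Stealthy bots never ask, so this is strictly a lower bound.
bot_ips   <- unique(reqs$ip[reqs$path == "/robots.txt"])
bot_share <- mean(reqs$ip %in% bot_ips)
round(100 * bot_share, 1)  # percent of hits from well-behaved bots
```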
I am happy to report that "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.10) Gecko/2005" is the most common browser here, suggesting that despite my increasingly tenuous connection to the free software world as anything other than a user, the prototypical Inkstain readers are still geeks with too much time on their hands. And that perennial favorite, "Google games," remains the most common search string by which people find their way here.
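The tally behind that claim is a one-liner once the User-Agent strings are in a vector (the agents vector here is a hypothetical stand-in for the parsed log):

```r
# Hypothetical vector of User-Agent strings pulled from the access log
agents <- c("Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.10) Gecko/2005",
            "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.10) Gecko/2005",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)")

# Most common browser: sort the frequency table, take the top entry
head(sort(table(agents), decreasing = TRUE), 1)
```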
(Stats done with R.)
The climateaudit guys, critics of the hockey stick, have done their analysis in R, a nice and freely available (GPL) statistical tool. The new reanalysis by Wahl and Ammann also uses R, and both groups have published their code.
So I've been playing a bit with R, which is a fun toolkit.
He isn't sure he wants the public to find out about his research. He says this, even though his work would probably be of interest to many people, and could be useful to far more. The problem, he told me, is that if too many people find out what he has done and realize its value, some of them may start using it for illegal purposes. He doesn't want that kind of trouble.