After vanishing off the radar for a few days, I can now unveil groundstation.
Born of a discussion at work about how all existing issue tracking and project management solutions pretty much suck, and of my natural flair for finding the most awkward way imaginable to do something, I started building a framework for decentralised issue management, based on gossip protocols (full points to @wolfeidau for planting that seed in my head) and utilising git’s object store as a backend.
Right now all it can do is propagate git objects over the local network via broadcast discovery, but the TODO list is pretty significant. I’m planning to implement:
Verification of changesets based on RSA/ECDSA cryptography (choosing those algorithms because of their ubiquity among developers)
Implementation of an arbiter node, making broadcast discovery an effective local transport but not the only way to “sync” your events
Better control over “channel” subscription to avoid the obvious DoS attacks that are trivial to exploit with the current implementation
Better support for event-based propagation, instead of the polling that stands currently
… and if there’s time maybe a frontend or something so you can actually interact with it. Who can say.
You can try it out right now if you’re ok with not syncing any git objects other than blobs (the only primitive type that I’m planning to use in my tracker).
Clone the sources down on a few local machines on the same subnet.
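For a sense of what the broadcast discovery step involves, here’s a conceptual sketch in Python. The message format, port number, and function names are all my own invention for illustration, not groundstation’s actual protocol.

```python
# Conceptual sketch of LAN broadcast discovery of git objects.
# Everything here (wire format, port, names) is hypothetical.
import json
import socket

DISCOVERY_PORT = 9999  # hypothetical port


def encode_announce(node_id, object_ids):
    """Pack a 'here are the git object SHAs I hold' announcement."""
    return json.dumps({"node": node_id, "objects": sorted(object_ids)}).encode()


def decode_announce(payload):
    """Unpack a peer's announcement into (node_id, set of SHAs)."""
    msg = json.loads(payload.decode())
    return msg["node"], set(msg["objects"])


def broadcast_announce(node_id, object_ids, port=DISCOVERY_PORT):
    """Shout our object list at the local subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(encode_announce(node_id, object_ids), ("<broadcast>", port))
    sock.close()


def missing_objects(theirs, ours):
    """Objects a peer advertised that we don't have in our store yet."""
    return theirs - ours
```

Each node periodically announces what it holds; peers diff that against their own object store and fetch whatever they’re missing, which is roughly what polling-based gossip amounts to.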
As I alluded to in yesterday’s post, I used doko to do some GPS lookups for mapgit.
After briefly playing with it once lars sent it through, it looked like a much more flexible solution (and one that’d work on more platforms than just OS X, meaning I was no longer entirely reliant on a Mac like I had been).
I haven’t got much to report, beyond having implemented file-backed caching, resolution strategies, some privacy stuff (limiting precision), and support for timeouts throughout.
Get doko 0.2.0 from its Bitbucket repo, and again thanks to lars for spending the time on writing it!
Leaving work on Friday to take a few days off, I had a few ideas for projects I’d like to realise, rather than relaxing and drinking like most do at this time of year.
At some point on the first night, it occurred to me that a ridiculous but potentially achievable feat might be to build a thing for every day I’m off work. I’ve already missed that goal, but keeping in spirit, I’ll be posting things as they come off the ranks. I’m starting late, and I’ve already gotten a few to a point where they’re worth talking about, so hopefully this will be enough filler to allow me to post daily until I go back to work. Which brings us to…
A while ago I started geotagging my commits with a post-commit hook and a tool called whereami. I silently collected some (massively skewed, due to an oversight on my part) data for a while, and then remembered about it recently when lars, whom I work with, kindly offered to do some plotting magics in R.
He came up with this:
Which ultimately inspired me to start looking into this again and produce a more general solution.
I built http://mapgit.com in reasonably short order the next day. Right now it’s basically just a thin layer around redis which allows you to upload “commit, location” pairs and have them transparently stored in redis.
Before the break is over I intend to build on top of the GitHub API, to allow selecting all the locations (both as a set and as a distribution) for a given rev-list or branch (fetched directly from GitHub’s API or passed straight into mapgit), and to export them in a format that R likes for easy plotting.
So far it’s quite barebones, but I’m happy with how it turned out for only a few hours invested.
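To show the shape of the thing, here’s a sketch of the sort of thin redis layer described. The key scheme, class names, and methods are my guesses, not mapgit’s code; the in-memory client is a stand-in so the sketch runs without a redis server.

```python
# Sketch of a thin commit->location layer over redis hashes.
# Key scheme and names are hypothetical, not mapgit's actual code.


class DictHashClient:
    """In-memory stand-in for redis's hash commands; swap in
    redis.StrictRedis for the real thing."""

    def __init__(self):
        self._hashes = {}

    def hset(self, key, field, value):
        self._hashes.setdefault(key, {})[field] = value

    def hgetall(self, key):
        return dict(self._hashes.get(key, {}))


class CommitLocationStore:
    """Stores commit -> location pairs as one redis hash per repository."""

    def __init__(self, client, repo):
        self.client = client
        self.key = "mapgit:%s" % repo  # hypothetical key scheme

    def record(self, sha, lat, lon):
        self.client.hset(self.key, sha, "%.6f,%.6f" % (lat, lon))

    def locations(self):
        """All known commit locations for this repo, as {sha: (lat, lon)}."""
        raw = self.client.hgetall(self.key)
        return {sha: tuple(float(part) for part in value.split(","))
                for sha, value in raw.items()}
```

One hash per repo keeps a whole rev-list’s worth of locations fetchable in a single HGETALL, which suits the “dump everything out for R” use case.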
Recently, I was working on a web application in which we were passing PHP session IDs around to emulate users. As a result, the app experienced severe slowness on page load as well as in a few other places. Upon investigation, it turned out this slowness was a result of the sessions blocking each other as they were passed around. All of the following refers to PHP’s default session handling functionality.
PHP’s internal session handling mechanisms put a lock on the session file to prevent different scripts from overwriting session data. Unless you explicitly release the lock with session_write_close(), scripts accessing the same session are queued (in FIFO order) and must wait for all earlier scripts to terminate. This becomes especially problematic when one script calls another with PHP cURL: the first script waits for the second to return before releasing its lock, and the second won’t start until the first releases its lock. The result is pages that sit there doing very little until something times out.
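To make the blocking concrete, here’s a Python analogue: `threading.Lock` stands in for PHP’s per-session file lock, and releasing it early mimics calling session_write_close() before the slow work. This is an illustration of the locking behaviour, not PHP’s implementation.

```python
# Python analogue of PHP's session file lock. A threading.Lock plays
# the role of the lock on the session file.
import threading
import time

session_lock = threading.Lock()


def page(done, work_seconds=0.2, release_early=False):
    """One 'script' touching the session. With release_early=True it
    mimics calling session_write_close() before the slow part (say, a
    cURL subrequest); otherwise the lock is held until the script ends."""
    session_lock.acquire()            # session_start() takes the lock
    if release_early:
        session_lock.release()        # session_write_close()
    time.sleep(work_seconds)          # the slow part of the page
    if not release_early:
        session_lock.release()        # implicit release at script end
    done.append(time.time())
```

Run two of these concurrently and the lock-holding version serialises them (roughly double the wall time), while the release-early version lets them overlap, which is exactly the difference session_write_close() made to our page loads.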
After I found session_write_close() and patched the code, I decided to dig a bit deeper into how this function works. The comments on php.net suggested you could have overwriting problems when it’s not used correctly (read: trying to write to the session after closing the write), so I pulled together a bit of code to test those effects.
The following three files, pulled together after my testing, best describe the situation involving session write locks, their interaction with session_write_close(), and the effect on $_SESSION.
First is our main file, session_locker.php, which as the name suggests can lock the session when session_write_close() is commented out. This is our work-horse, making the requests, outputting the results, and finally giving session_locker.php’s view of $_SESSION.
Our second file is session_user.php, which takes a session ID (with no validation :trollface: ) and becomes that ‘user.’ Since sessions are handled with cookies, this is an easy way to ‘proxy’ a user internally, though not necessarily the recommended implementation. Here we set the session up, output our initial view of $_SESSION, then make some modifications and output $_SESSION again.
And our last file is very basic, just a stand alone $_SESSION viewer. Hence the overly clever name, session_viewer.php.
When we visit session_locker.php, it sets a few $_SESSION variables, then closes the write lock, and sets one more before requesting that session_user.php be run. The output order at the bottom is important in showcasing the overwriting issues, and is as follows:
We can see here that the ‘second file’ output, from session_user.php being called by cURL, sees the $_SESSION variables set by session_locker.php minus otest, which was set after the write lock was closed. It then sets its own variables (including overwriting otest) and returns. We then let session_locker.php account for its actions and tell us what it thinks $_SESSION is. Its version of $_SESSION has the old otest value and is entirely missing stuff (the variable named stuff, that is, not random things). As a result of closing the session write lock, session_locker.php ends up with an outdated and unbound version of $_SESSION (essentially moving the entire thing to the local scope, rather than the superglobal). Using session_viewer.php, we can see the true value of $_SESSION, and that it matches session_user.php’s.
PHP won’t error out when you try to write to the session after closing it, nor will it simply ignore the call. It will actually modify the $_SESSION variable, which will work perfectly for the rest of that script’s execution. Other scripts, running concurrently or thereafter (anything outside the script where that assignment was made), won’t see the update. Based on this behavior, I’d assume the variable is only changed in memory, but never pushed to the session file for saved state. This could certainly be a debugging gotcha, as your code would dump out the right value, but it wouldn’t persist anywhere else.
Psych0tik would like to introduce everyone to our newest node in our IRC network, Magikarp.psych0tik.net. After frequent network instability we decided to discontinue using Natalya as our IRC hub, followed by Storm coming down temporarily due to changes in Samurai’s life. For the past few months we’ve been running on just Mudkipz, but with this week’s development we’re re-expanding the network with the addition of a new hub located in Amsterdam. In addition, Magikarp will also be hosting our IRC services daemon; the services will be unavailable for a short period while the databases are transferred. This is all still somewhat in testing, so don’t be alarmed if things go up and down. Report it and a staffer should be on the problem shortly.
Magikarp will be accessible in the normal way at irc.psych0tik.net, over IPv6 or IPv4, and as always SSL only on port 6697.
The IRC network will be temporarily running off of mudkipz.psych0tik.net while samurai makes some modifications to his infrastructure. The DNS name irc.psych0tik.net will be directed to mudkipz on the morning of Tuesday August 21, 2012.
This is part of a project to update the overall infrastructure of the psych0tik IRC network. The new server is running an updated ircd, and your existing IRC accounts are still valid. Mudkipz will be reachable over IPv4, and over IPv6 as soon as the firewall rules are amended to allow it.