This year I’ve decided to embark on a quest to explore sleep, and the way I experience it personally. It’s something I’ve struggled with for as long as I can remember, and I figure that by gathering more data points I can at least start to unwind this puzzle.
A few days ago I was considering an experiment, and this post serves as a braindump to flesh out the idea, as well as hopefully a place to gather feedback and ideally hear from someone who’s tried something like this before.
The basic premise is that for a week I’d like to lose my circadian rhythm entirely: sleep when I feel like it, wake up naturally, and only inspect the data afterwards.
This has some immediately obvious flaws, the first being that if I’m to lose all perception of time I’d have to black out my house and not go outside. I suspect this can be overcome by not deliberately structuring my life around anything specific.
In more detail, I plan to pull the clock off the devices I own and out of the interfaces of the software I’m likely to use in that time (my system taskbar, irssi and tmux all show timestamps), and use my cli twitter client twat to tweet like a madman, giving me timestamped data points to look at afterwards.
There’s some planning and preliminary research to be done, but I’m hoping that at the very least it’ll be interesting, even if it serves only as a cautionary tale to others.
In my last post I unveiled groundstation, a supremely pre-beta cut of a tool I’m building to automagically sync objects in several git repos with any and all nearby peers. Up until tonight, I had been testing with two laptops connected to the same wireless network (more or less the use case I envisage).
This evening, I had only my laptop with me: but “Not to worry,” I thought, “I’ll just light up my dev VM!” At work we use vagrant to light up on-demand VMs, bootstrap them with babushka and get on with it. We use some trickery in the vagrant-dns gem to make the VM addressable from the host, with vagrant taking care of NAT for us.
Which is where things got interesting. groundstation uses UDP broadcast to find its peers, which WILL penetrate most NAT configurations, but with the caveat that the source address will be rewritten. In this instance it was rewritten to my external IP address, causing my daemon to attempt to connect to its “peer” and sync its objects: with itself.
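A minimal sketch of the underlying problem and one fix (all names here are mine, not groundstation’s actual implementation): if peers embed a node ID in the announcement payload and filter on that, a NAT-rewritten source address can’t trick a daemon into syncing with itself.

```python
import socket
import uuid

PORT = 9999                  # hypothetical discovery port
NODE_ID = uuid.uuid4().hex   # identify ourselves by ID, not by source address

def make_announcement(node_id):
    return ("HELLO %s" % node_id).encode()

def parse_announcement(data):
    # Returns the announcing peer's node ID, or None for malformed packets.
    parts = data.decode(errors="replace").split()
    if len(parts) == 2 and parts[0] == "HELLO":
        return parts[1]
    return None

def should_connect(data, own_id):
    # The packet's source address may have been rewritten by NAT, so decide
    # using the node ID carried in the payload instead of the origin address.
    peer_id = parse_announcement(data)
    return peer_id is not None and peer_id != own_id

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(make_announcement(NODE_ID), ("255.255.255.255", PORT))
```

The point is simply that identity lives in the payload, so a daemon receiving its own (address-rewritten) broadcast drops it instead of connecting.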
After vanishing off the radar for a few days, I can now unveil groundstation.
Born of a discussion at work about how all existing solutions to issue tracking and project management pretty much suck, and my natural flair for finding the most awkward way imaginable to do something, I started building a framework for decentralised issue management, based on gossip protocols (full points to @wolfeidau for planting that seed in my head) and utilising git’s object store as a backend.
Right now all it can do is propagate git objects over the local network via broadcast discovery, but the TODO list is pretty significant. I’m planning to implement:
Verification of changesets based on RSA/ECDSA cryptography (choosing those algorithms because of their ubiquity with developers)
Implementation of an arbiter node making the broadcast discovery an effective local means, but not the only way to “sync” your events
Better control over “channel” subscription to avoid the obvious DoS attacks that are trivial to exploit with the current implementation
Better support for event based propagation, instead of polling as currently stands
… and if there’s time maybe a frontend or something so you can actually interact with it. Who can say.
You can try it out right now if you’re ok with not syncing any git objects other than blobs (the only primitive type that I’m planning to use in my tracker).
Clone the sources down on a few local machines on the same subnet.
As I alluded to in yesterday’s post, I used doko to do some GPS lookups for mapgit.
After playing with it briefly once lars sent it through, it looked like a much more flexible solution (and one that’d work on more platforms than just OSX, meaning I wasn’t entirely reliant on a Mac like I had been).
I haven’t got much to report, beyond having implemented file-backed caching, resolution strategies, some privacy stuff (limiting precision) and support for timeouts throughout.
Get doko 0.2.0 from its bitbucket repo, and again thanks to lars for spending the time on writing it!
Leaving work on Friday to take a few days off, rather than relaxing and drinking like most at this time of year, I had a few ideas for projects I’d like to realise.
At some point on the first night, it occurred to me that a ridiculous but potentially achievable feat might be to build a thing for every day I’m off work. I’ve already missed that goal, but keeping in spirit, I’ll be posting things as they come off the ranks. I’m starting late, and I’ve already gotten a few to a point where they’re worth talking about, so hopefully this will be enough filler to allow me to post daily until I go back to work. Which brings us to…
A while ago I started geotagging my commits with a post-commit hook and a tool called whereami. I silently collected some data for a while (massively skewed, due to an oversight on my part), and then remembered about it recently when lars, whom I work with, kindly offered to do some plotting magics in R.
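For a rough idea of what such a hook can look like, here’s a sketch of a `.git/hooks/post-commit` script, not my actual hook. It assumes whereami prints lines like `Latitude: -33.8` and `Longitude: 151.2`; the real tool’s output format may differ, and the CSV path is my invention.

```python
#!/usr/bin/env python
# Hypothetical post-commit hook: append (commit, lat, lon) rows to a CSV.
import csv
import subprocess

def parse_location(output):
    # Pull latitude/longitude out of whereami-style "Key: value" lines.
    fields = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return fields["latitude"], fields["longitude"]

def record_commit(log_path, sha, lat, lon):
    # Append a row per commit; the file accumulates into a dataset for R.
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([sha, lat, lon])

if __name__ == "__main__":
    sha = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()
    lat, lon = parse_location(subprocess.check_output(["whereami"]).decode())
    record_commit(".git/commit-locations.csv", sha, lat, lon)
```

Dropped into `.git/hooks/post-commit` and made executable, this runs silently after every commit, which is roughly how the data collection described above worked.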
He came up with this:
Which ultimately inspired me to start looking into this again and produce a more general solution.
I built http://mapgit.com in reasonably short order the next day. Right now it’s basically just a thin layer around redis which allows you to upload commit/location pairs and have them transparently stored.
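A sketch of what a layer like that might look like (the key names and schema are my guesses, not mapgit’s actual layout): one redis hash per repo, keyed by commit SHA, with `"lat,lon"` strings as values.

```python
# Hypothetical mapgit-style storage layer over redis.
# With redis-py, `client` would be a redis.Redis() instance; anything
# exposing hset/hget with the same semantics works.

def store_location(client, repo, sha, lat, lon):
    client.hset("mapgit:%s" % repo, sha, "%s,%s" % (lat, lon))

def fetch_location(client, repo, sha):
    value = client.hget("mapgit:%s" % repo, sha)
    if value is None:
        return None
    if isinstance(value, bytes):        # redis-py returns bytes by default
        value = value.decode()
    lat, lon = value.split(",")
    return float(lat), float(lon)
```

Using a hash per repo keeps the whole location set for a branch one `HGETALL` away, which suits the “export everything for R” use described below.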
Before the break is over I intend to build on top of the github API to allow for selecting all the locations (both as a set and as a distribution) for a given rev-list or branch, fetched directly from github’s API or passed straight into mapgit, and exporting them in a format that R likes for easy plotting.
So far it’s quite barebones, but I’m happy with how it turned out for only a few hours invested.
Recently, I was working on a web application in which we were passing PHP session IDs around to emulate users. As a result, the app experienced severe slowness on page load as well as in a few other places. Upon investigation, it turned out this slowness was a result of the sessions blocking each other as they were passed around. All of the following refers to PHP’s default session handling functionality.
PHP’s internal session handling mechanisms put a lock on the session file to prevent different scripts from overwriting session data. Unless you explicitly release the lock with session_write_close(), scripts accessing the same session are queued (in FIFO order) and must wait for all earlier scripts to terminate. This becomes especially problematic when one script calls another with PHP cURL. In that case, the first script waits for the second to return before releasing its lock, and the second won’t start until the first releases its lock: a deadlock, resulting in pages that do very little.
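A minimal sketch of the fix (the file name and session variable are illustrative, not the exact code from the app): release the lock before making the sub-request, so the called script can acquire it.

```php
<?php
// Sketch only: release the session lock before a cURL sub-request that
// will itself need the same session. Without session_write_close(),
// the two scripts deadlock on the session file's lock.
session_start();
$_SESSION['user'] = 'example';

// Flush the session data to disk and release the lock.
session_write_close();

// Now the sub-request can open the session without waiting on us.
$ch = curl_init('http://localhost/session_user.php');
curl_setopt($ch, CURLOPT_COOKIE, session_name() . '=' . session_id());
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
```

The trade-off, explored below, is that anything written to $_SESSION after the close is no longer persisted.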
After I found session_write_close(), and patched the code, I decided to dig a bit deeper into how this function worked. The comments on php.net suggested you could have overwriting problems when it isn’t used correctly (read: trying to write to the session after closing the write lock), so I pulled together a bit of code to test those effects.
The following three files were pulled together after my testing to best describe the situation involving session write locks, their interaction with session_write_close(), and the effect on $_SESSION.
First is our main file, session_locker.php, which as the name suggests can lock the session when session_write_close() is commented out. This is our workhorse, making the requests, outputting the results, and finally giving session_locker.php’s view of $_SESSION.
Our second file is session_user.php, which takes a session ID (with no validation :trollface: ) and becomes that ‘user.’ Since sessions are handled with cookies, this is an easy way to ‘proxy’ a user internally, though not necessarily the recommended implementation. Here we set the session up, output our initial view of $_SESSION, then make some modifications and output $_SESSION again.
And our last file is very basic, just a stand alone $_SESSION viewer. Hence the overly clever name, session_viewer.php.
When we visit session_locker.php, it sets a few $_SESSION variables, then closes the write lock, and sets one more before requesting that session_user.php be run. The output order at the bottom is important in showcasing the overwriting issues.
We can see here that the ‘second file’ output, session_user.php, being called by cURL, sees the $_SESSION variables set by session_locker.php minus otest, as that was set after the write lock was closed. It then sets its own variables, including overwriting otest, and returns. We then let session_locker.php account for its actions and tell us what it thinks $_SESSION is. Its version of $_SESSION has the old otest value and is entirely missing stuff (the variable named stuff, that is, not random things). As a result of closing the session write lock, session_locker.php ends up with an outdated and unbound version of $_SESSION (essentially moving the entire thing to the local scope, rather than a superglobal). Using session_viewer.php, we can see what the true value of $_SESSION is, and that it matches session_user.php’s view.
PHP won’t error out when you try to write to the session after closing it, nor will it simply ignore the call. It will actually modify the $_SESSION variable, which will work perfectly for the rest of that script’s execution. Other scripts, running concurrently or thereafter (anything outside of the script where that assignment was made), won’t see the update. Based on this behavior, I’d assume the variable is only changed in memory, and never pushed to the session file for saved state. This could certainly be a debugging gotcha, as your code would dump out the right value, but it wouldn’t persist anywhere else.