[eroft final] Tabomancy 2.0

For my final, I worked on improving my tabomancy Chrome extension, which gives you a daily fortune based on how many tabs you have open.

I didn’t quite get it to work perfectly, and there’s a lot more that could be done with the writing as well. But the main technical change I made was having the fortune show up every time you create a new tab in your browser, rather than having to click on the little extension icon. You only get one fortune a day, and when your fortune isn’t “ready” yet, it shows you the image above. It now calculates your fortune after you have opened 6 new tabs (and it doesn’t matter if you open them all in a row, if you closed some, or how much time has passed between opening them). At that moment, it counts how many tabs you have open and gives you your fortune.
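The counting logic can be sketched like this (a hypothetical simplification: the function and variable names here are made up, and the real extension code differs):

```javascript
// Hypothetical sketch of the trigger logic, not the actual extension code.
const TABS_BEFORE_FORTUNE = 6;

// Pure helper: given how many new tabs have been created today and
// whether a fortune was already shown, decide if it's fortune time.
function shouldTellFortune(tabsCreatedToday, fortuneGivenToday) {
  return !fortuneGivenToday && tabsCreatedToday >= TABS_BEFORE_FORTUNE;
}

// In a real extension, a background script would listen for new tabs:
//
//   chrome.tabs.onCreated.addListener(() => { /* increment a counter */ });
//
// and, once shouldTellFortune() returns true, count the currently open
// tabs with chrome.tabs.query({}) and display the fortune.

console.log(shouldTellFortune(5, false)); // not ready yet
console.log(shouldTellFortune(6, false)); // fortune time
console.log(shouldTellFortune(9, true));  // already told today
```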

I think this definitely makes it a little more interesting than last time. Previously, you just had to click on the little extension icon at the top of your Chrome window. But now the fortune happens at a somewhat random moment, so it feels less in the control of the person getting their fortune read. That makes it (slightly) less game-able, which may mean that it’s a more “natural” reading.

There would definitely be a lot more technical and conceptual work to be done for this project to be really good, or to be a Chrome extension that might be good enough for someone to actually want to use. It doesn’t take into consideration people’s different tab habits, and the logic is really simplistic (fewer tabs = good, more tabs = bad). I haven’t quite figured out if that’s a good thing or a bad thing.


[eroft week 4] writing with location data

Around 6 months ago, I turned on my personal location tracking data in Google Maps on my phone, so I can see where it says I’ve been. You can download the data in a json file that looks like this:

I like the idea that all the places I have been on earth—when and where I went—could be translated into a secret message.

So I decided to see if I could transform my location data into writing. I got a list of English words from here, and then wrote a (janky) script to take the last digit of each value of each location object, and then find the corresponding word in the list.

I also added some logic to add in punctuation based on whether or not the resulting number was divisible by certain numbers.
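The scheme can be sketched roughly like this (a hypothetical simplification: the real script, word list, and punctuation rules differ, and the tiny word list here is just a stand-in):

```javascript
// Hypothetical sketch of the translation scheme. Take the last digit of
// each numeric value in a location record, use it to pick a word from a
// list, and append punctuation when the digit is divisible by certain numbers.

const WORDS = ["aa", "abaci", "aback", "abacus", "abaft",
               "abalone", "abandon", "abase", "abash", "abate"]; // stand-in list

function wordsFromLocation(record) {
  return Object.values(record)
    .filter((v) => typeof v === "number")
    .map((v) => {
      const digit = Math.abs(Math.trunc(v)) % 10; // last digit of the value
      let word = WORDS[digit % WORDS.length];
      if (digit !== 0 && digit % 3 === 0) word += ","; // divisibility → punctuation
      if (digit !== 0 && digit % 5 === 0) word += "?";
      return word;
    })
    .join(" ");
}

// A record shaped loosely like Google's location history export (simplified):
console.log(wordsFromLocation({
  timestampMs: 1474821667000,
  latitudeE7: 407052915,
  longitudeE7: -739890590,
  accuracy: 22,
}));
```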

At first, I played with a bit of randomness, but I realized that for this exercise, it was better to make sure the script was written so that the results would be the same each time. I think that contributes to the idea that the places I’ve been communicate a specific message. (And if anyone else input their own Google location data, they would get their own single, consistent result.) So I took out the randomness.

The results and code are here.

But here are some snippets I like:


retrievals singling shmooze begun cruck awarded anabranch crickets microlith antiprohibition, froghopper aleurone?

arbitrament advisees impulsing unadvised notching cargoes.

veggieburger auriculas waveband apperceptions treasurable areola rhizobium economist hipster appending locusts skirting bequeather débutantes abruptest karat!


adulterants, aphrodisiacs apologetically discomposure manhattan.

gitterns fictionist balladist!

tittered biotechnology tricorns, mulcting spadefish assassination disbelievers pushier?


infuse teargassing badge!

serval aerifies seasonally, sarcophagus arsonists?


[eroft week 3] daily fortune: tabomancy

I made a really simple fortune teller, which does what I’m calling “tabomancy.” Basically, it’s a Chrome extension you can click on to get your daily fortune:

The way it determines whether or not you are going to have a good day is by looking at how many tabs you have open in your window. The more tabs you have open, the worse your day is going to be.
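The core mapping might look something like this (a hypothetical version: the fortune text and thresholds in my actual extension are different):

```javascript
// Hypothetical fortune mapping: fewer tabs, better fortune.
function fortuneFor(tabCount) {
  if (tabCount <= 2)  return "A calm, focused day lies ahead.";
  if (tabCount <= 6)  return "A busy but manageable day.";
  if (tabCount <= 12) return "Careful: you are juggling too much.";
  return "A cluttered mind forecasts a stormy day.";
}

console.log(fortuneFor(1));
console.log(fortuneFor(20));
```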

This is obviously silly, but I also sort of believe that it might be maybe kind of…real. Having a ton of tabs open usually means that there’s something stressful going on in your life, and you’re trying to balance a lot of things. If you only have one or two tabs open, it just feels more relaxed.

Of course I’m just imposing my own tech superstition/ritual onto this — I tend to close all my tabs when I’m feeling too cluttered, and it has a sort of cleansing effect. This isn’t true for everyone, but it might be true for enough people that tabomancy could actually work.

Code here.

[eroft week 2] Pop Rocks Oracle Deck

I decided to make an oracle deck out of different flavors of Pop Rocks, obscured in generic envelopes:

The deck is shuffled, and then the querent picks an envelope.

The reader then opens the envelope and pours some of the Pop Rocks into the querent’s palm.

The querent then eats the Pop Rocks, and the reader prompts them to think about how the Pop Rocks taste, how they feel, whether the experience feels good or bad, etc.

I think this is pretty similar to a traditional physical oracle deck, and the envelopes are pretty easy stand-ins for cards. But I wanted to use Pop Rocks because they’re such an evocative candy, and it’s hard not to have a reaction to eating them. (I also think the different colors and flavors contribute to the experience.)

[eroft week 1] 24 hours in (zelda) nature meditation

My electronic ritual is standing on a mountain in the video game The Legend of Zelda: Breath of the Wild for a full 24-hour day in-game. Besides being a transparent ploy to play video games for homework, I made this ritual as a way to experience the effects of meditating in nature without having to actually be in nature.

It took 24 minutes to run through 24 hours in the game, starting at 10:00 pm. I had my character, Link, stand on a lush mountain, and I looked around by moving the game camera, without moving Link at all.

I watched the moon move through the clouds, and then I watched the sun rise from over the sea. At night, it was cold enough to see Link’s breath. The wind blew grass and other small things around, and the colors and the light changed throughout the day.

It’s the first time I’ve sat so long intentionally in a video game without moving my character or otherwise “playing” it as intended, but it did somewhat succeed in creating a feeling of being in nature, more so than if I were focused on running around killing monsters or fulfilling a quest. It also got pretty boring after a while in spite of its beauty, which is in line with what I feel when I sit in one place in real nature as well.

There was an interesting effect of feeling like I was actually in two places at once — I listened to the sounds of the crickets and the wind in the game, but also the sound of cars honking outside my real window. When I’m normally playing a videogame, I don’t feel like I’m “in” two places – I just block out whatever else is going on. But just sitting quietly, a little bored, allowed me to hear both things at once.


[understanding networks week 5] packets and mysteries

This week, we were to analyze traffic on our networks at home using Wireshark. So to begin, I had Wireshark capture one minute of packet activity on my wifi network at home.

In just 60 seconds, it captured 6894 packets! Seems like a lot, given that I probably browsed one or two sites in that minute, but I guess that reveals how much is going on in the background when I think I’m not even doing much. The total protocol counts were:

DNS: 65
IGMPv2: 2
QUIC: 113
SSDP: 41
STP: 33
TCP: 4809
TLSv1.2: 1805
UDP: 21

As you can see, most of it was TCP. I am not entirely sure what was causing these – I did a whois lookup on some of the IPs and saw some familiar names: Amazon, Verizon, Facebook (which I don’t actually have, so it must have been Instagram…). Still, I wondered whether Tweetdeck (a desktop Twitter client) was the source of such a high number of packets, but I wasn’t sure how to confirm it.

Another experiment I did was to check on the packets coming to and from my Raspberry Pi sitting in my living room.

My pi takes a picture of one of my house plants every morning and tweets it out from @grow_slow at 10:17 am. The only other thing it does is reboot itself at 10:00 am. So I turned on Wireshark and filtered for the pi’s IP…


And just as I suspected, it didn’t do much just sitting there. So I tried ssh-ing in from my laptop to see what would come up.

It still surprises me how many packets are required just for one ssh login. I also then tried logging in via FTP:


Something I didn’t expect was that Wireshark revealed my username and password in plain text when I used FTP. (As you can see, I haven’t changed them from the defaults, oops. But to any potential hackers reading: I’m changing them!!)

Because my python program on my pi is set to tweet at 10:17 am, I waited until then, expecting to see some packets, but…nothing showed up, even though the tweet successfully posted. In fact, the only things that would show up were these packets, which occurred every few minutes:

I also found that my laptop sent a packet to the same IP with the same protocol. From reading a bit about the Internet Group Management Protocol (IGMP), it sounds like it’s a way for routers to forward the same IP packets to a number of hosts within a network. My guess is that both my pi and my laptop are telling the router that they’re available for multicast?

One last random curious thing I found: when I was ssh-ing into my pi from my laptop, I noticed that it was sending packets while I was typing on the command line, not just when I submitted a command, which is not what I expected. In retrospect it makes sense: an interactive SSH session sends each keystroke to the server (which echoes it back), not just whole commands.

My understanding still feels very fuzzy, and I don’t know why I didn’t see any packets coming to or from my pi when I run the program that tweets. I think it’ll take me a little more time and research to feel like I’m starting to really understand this.

[understanding networks week 2] Traceroute Commute


(See the project here.)

For me, the most fascinating part of learning the traceroute command this week was the idea that each IP address the packets move through is tied to a physical place in the world. It was interesting to see the common network providers in the sites I regularly visit, but I became mostly curious about the locations they were associated with.

I decided to trace three websites of places that I have regularly commuted to in real life, including ITP (Greenwich Village), and my two most recent jobs at The New York Times (Times Square) and Kickstarter (Greenpoint). All three of these places are located in New York City, but running a traceroute shows that the packets bounced around to more far-reaching places in order to get to their respective websites.

I started by running the traceroute command in my command line:


I then used Maxmind to find the associated coordinates and other info based on IP addresses:


I then put all that data in a CSV file, which I converted to JSON to make the data easier to work with.
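That conversion step can be sketched like this (a hypothetical simplification: my actual field names and script differ, and this naive parser assumes no quoted commas in the CSV):

```javascript
// Hypothetical sketch of the CSV-to-JSON step. Each row holds one
// traceroute hop plus the info looked up from Maxmind.
function csvToJson(csv) {
  const [headerLine, ...rows] = csv.trim().split("\n");
  const headers = headerLine.split(",");
  return rows.map((row) => {
    const cells = row.split(","); // naive split: assumes no quoted commas
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
}

const hops = csvToJson(
`hop,ip,lat,lng
1,192.168.1.1,40.69,-73.99
2,167.206.245.29,40.75,-73.98`
);
console.log(hops[1].ip); // 167.206.245.29
```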

From there, I used the Google Maps API to build a little page that would find a streetview location for every IP address passed through during the traceroute for itp.nyu.edu, kickstarter.com and nytimes.com.
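One way to get an image per hop is the Street View Static API, which lets you build an image URL from a coordinate pair (my page used the Google Maps JavaScript API, so this is just a sketch of the idea; "YOUR_KEY" is a placeholder):

```javascript
// Build a Street View Static API image URL for a coordinate pair.
function streetViewUrl(lat, lng, key = "YOUR_KEY") {
  const params = new URLSearchParams({
    size: "600x400",
    location: `${lat},${lng}`,
    key,
  });
  return `https://maps.googleapis.com/maps/api/streetview?${params}`;
}

console.log(streetViewUrl(40.6934, -73.9867));
```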

The result is a sort of internet “commute” for the places that I have physically commuted to.


Of course these places aren’t exactly “accurate,” but it’s interesting, for example, that the last stops for both nytimes.com and kickstarter.com are in Seattle, probably because Amazon is there.

One note: I think it’s kind of funny how much more time my real world commute takes than this cyber commute, in spite of ostensibly shorter distances.

Anyway, this was a fun exercise in making the internet feel a little more physical.

The project is here, and the code is below.


[wdwod week 4] tsa claims, an attempt

I didn’t finish the assignment this week, sadly. I was trying to work with this TSA Claims Data, and spent a while manipulating the data so that it would sum the total claims per airline. I got it to look like this (which involved figuring out how to change NaN to 0):

Screenshot 2016-04-21 10.07.48

But I couldn’t figure out how to then extract each element in the code. I’m sure this is a simple problem but I didn’t have time to fix it and make a visualization this week. 😦
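For the record, the aggregation I was going for looks roughly like this when sketched in plain JavaScript (the field names are made up, and my actual attempt used different tooling):

```javascript
// Hypothetical sketch: sum claim amounts per airline, treating
// missing amounts (NaN) as 0 before summing.
function totalClaimsPerAirline(claims) {
  const totals = {};
  for (const c of claims) {
    const amount = Number.isNaN(c.amount) ? 0 : c.amount;
    totals[c.airline] = (totals[c.airline] || 0) + amount;
  }
  return totals;
}

const sample = [
  { airline: "Delta", amount: 120.5 },
  { airline: "Delta", amount: NaN },
  { airline: "United", amount: 75 },
];
console.log(totalClaimsPerAirline(sample));
```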



[web dev with open data week 3] cheaters by religiousness

Screenshot 2016-04-13 22.14.00

(See the visualization here.)

This week we dove a little into D3.js (which made manipulating SVG quite a bit easier). I decided to use this very old dataset about extramarital affairs. There were a number of different things in the data, but I looked specifically at how religious people reported themselves to be, and what percentage of people with the same levels of religiousness had cheated on their spouse.

Originally, I had visualized it by the number of people in each category, but since there was a disparity between how many people were in each religiousness group, I thought showing the percentage would be more interesting.

But I also didn’t want to completely lose the info about the sheer number of people in each category. (For example, the “somewhat religious” group was much larger than the “anti religious” group.) So I added an effect so that when you hover over a bar, it changes color and tells you how many people were in that category.
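The per-group percentage calculation behind the bars can be sketched like this (a hypothetical version: the field names are made up, and the real dataset and code differ):

```javascript
// Hypothetical sketch: for each religiousness level, compute what
// percentage of people in that group reported an affair.
function cheatingRateByReligiousness(people) {
  const groups = {};
  for (const p of people) {
    const g = groups[p.religiousness] ||
      (groups[p.religiousness] = { total: 0, cheated: 0 });
    g.total += 1;
    if (p.hadAffair) g.cheated += 1;
  }
  for (const level in groups) {
    groups[level].percent = (100 * groups[level].cheated) / groups[level].total;
  }
  return groups;
}

const sample = [
  { religiousness: "somewhat", hadAffair: true },
  { religiousness: "somewhat", hadAffair: false },
  { religiousness: "anti", hadAffair: true },
];
console.log(cheatingRateByReligiousness(sample));
```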

Screenshot 2016-04-13 22.15.28Screenshot 2016-04-13 22.15.44

(The screenshots above remove the cursor for some reason but you get the idea.)

One note is that I realize “cheaters” might not necessarily be an accurate word for what’s going on. There’s some nuance in what exactly “extramarital affair” means, and it doesn’t always mean that someone is going behind their spouse’s back. I’m making some assumptions about this 1969 survey, though – that the people are referring to cheating and not ethical non-monogamy.

Anyway! You can check it out here. Code is also below.

We were also asked to take a look at I Quant NY this week. I looked specifically at this post called “Parking Immunity? Diplomats Owe NYC $16 Million in Unpaid Parking Tickets. A Closer Look at the Worst Offenders.” The writer looked at a dataset showing unpaid parking tickets in NYC, and found something interesting — license plates of diplomats seemed to rack up the highest number of unpaid tickets.

I thought this was a clever insight gleaned from the data, but I am curious to know how Ben manipulated the data to get the info he ended up with. He writes:

Whats not so cool is the fact that the City loaded many rows of the data in there twice accidentally.  That meant there were multiple rows with the same ticket number and conflicting outstanding debt amounts.  Though I understand that data errors happen, I don’t understand how the City can keep putting out data sets with no ownership and no effective way to send in fixes.  A city who cares about the usability of its Open Data can do better.

He then says he “cleaned up” the data. But I wish I knew more about what he did to change it, and how he knew for sure that the duplicates were accidents.


[live web week 10] final project idea: kitchencam


For my final project, I’m interested in creating something that will allow for random people on the internet to control one particular thing in real life. Before I talk specifically about my project idea, I’ll talk a bit about my references.


The Telegarden


The Telegarden is an art installation that allows web users to view and interact with a remote garden filled with living plants. Members can plant, water, and monitor the progress of seedlings via the tender movements of an industrial robot arm.

Twitch Plays Pokemon


Twitch Plays Pokemon was an experiment that allowed for internet users to collaboratively play a game of Pokemon Red by controlling what happened in the game via chat.



Jennicam

Jennifer Ringley was one of the first “lifecasters,” using a webcam to take a photo of her bedroom every 15 minutes for years. It was a way for anyone on the internet to see what was happening in one person’s daily life.


What I want to do is set up a webcam in my kitchen. I’ll build a webpage, where anyone can go and see what’s currently happening in my kitchen, but they will have to click a button that takes a picture first.

I think this is an interesting idea because it’s a little different from typical “lifecasting” or livestreaming. It’s easy to be voyeuristic on the internet, just by stumbling across anyone’s personal livestream. But because the user has to click a button to capture a photo and see what’s happening, the user is a little more complicit in the voyeurism.

Personally, I’m not interested in being particularly exhibitionistic (in the bedroom style of Jennicam), but I figured getting to look into someone’s kitchen at any point in the day feels somewhat intimate. You can learn a lot by seeing what someone does in the kitchen – seeing how they make their coffee, whether they’re cooking or microwaving leftover takeout, why they’re eating cereal at midnight or drinking at noon.


I’m not 100% sure how I’ll build this out, but I imagine I’ll use a Raspberry Pi or something similar for the webcam and use socket servers to connect to a webpage. We’ll see…!