[web dev with open data week 3] cheaters by religiousness

Screenshot 2016-04-13 22.14.00

(See the visualization here.)

This week we dove a little into D3.js (which made manipulating SVG quite a bit easier). I decided to use this very old dataset about extramarital affairs. There were a number of different variables in the data, but I looked specifically at how religious people reported themselves to be, and what percentage of people at each level of religiousness had cheated on their spouse.

Originally, I had visualized the data by the number of people in each category, but since there was a disparity between how many people were in each religiousness group, I thought showing the percentage would be more interesting.
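The grouping step looks roughly like this, sketched in plain JavaScript (the row shape and field names below are hypothetical, made up for illustration rather than taken from the dataset's actual columns):

```javascript
// Sketch: for each self-reported religiousness level, compute what
// percentage of respondents reported an affair, keeping the raw
// count too. The rows below are fake illustrative data.
const rows = [
  { religiousness: "somewhat religious", hadAffair: true },
  { religiousness: "somewhat religious", hadAffair: false },
  { religiousness: "somewhat religious", hadAffair: false },
  { religiousness: "anti religious", hadAffair: true },
];

function percentByGroup(data) {
  const groups = new Map();
  for (const { religiousness, hadAffair } of data) {
    const g = groups.get(religiousness) || { total: 0, affairs: 0 };
    g.total += 1;
    if (hadAffair) g.affairs += 1;
    groups.set(religiousness, g);
  }
  // Emit { group, total, percent } objects; `total` is the raw
  // group size, which the hover effect can display.
  return [...groups.entries()].map(([group, g]) => ({
    group,
    total: g.total,
    percent: (100 * g.affairs) / g.total,
  }));
}
```

Each bar's height then comes from `percent`, while `total` is held onto for the hover tooltip.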

But I also didn’t want to completely lose the information about the sheer number of people in each category (for example, the “somewhat religious” group was much larger than the “anti religious” group). So I added an effect: when you hover over a bar, it changes color and tells you how many people were in that category.

Screenshot 2016-04-13 22.15.28Screenshot 2016-04-13 22.15.44

(The screenshots above don’t capture the cursor for some reason, but you get the idea.)

One note is that I realize “cheaters” might not necessarily be an accurate word for what’s going on. There’s some nuance in what exactly “extramarital affair” means, and it doesn’t always mean that someone is going behind the back of their spouse without their knowledge. I’m making some assumptions about this 1969 survey, though – that the people are referring to cheating and not ethical non-monogamy.

Anyway! You can check it out here. Code is also below.

We were also asked to take a look at I Quant NY this week. I looked specifically at this post called “Parking Immunity? Diplomats Owe NYC $16 Million in Unpaid Parking Tickets. A Closer Look at the Worst Offenders.” The writer looked at a dataset showing unpaid parking tickets in NYC, and found something interesting — license plates of diplomats seemed to rack up the highest number of unpaid tickets.

I thought this was a clever insight gleaned from the data, but I am curious to know how Ben manipulated the data to get the info he ended up with. He writes:

Whats not so cool is the fact that the City loaded many rows of the data in there twice accidentally. That meant there were multiple rows with the same ticket number and conflicting outstanding debt amounts. Though I understand that data errors happen, I don’t understand how the City can keep putting out data sets with no ownership and no effective way to send in fixes. A city who cares about the usability of its Open Data can do better.

He then says he “cleaned up” the data. But I wish I knew more about what he did to change it, and how he knew for sure that the duplicates were accidents.

https://gist.github.com/nicolehe/ca7ac4a19fb44ab732837b0c3fc6e367.js

[web dev with open data week 2] internet users

Screenshot 2016-04-07 13.53.30

(See the page here.)

This week, I made a little visualization about how many people have access to the internet around the world. It’s built sort of jankily with SVG. The data is from Gapminder, showing how many people out of 100 had access to the internet in 2011.

[arcade week 3] the golden age of videogames

Screenshot 2016-02-17 16.43.38

We were to play three games from this list this week, and I went with three that I’d never heard of: BurgerTime, Elevator Action, and Track & Field.

BurgerTime was my favorite of the three. It’s sort of Pac Man-esque, in that you run around the screen avoiding enemies. But instead of collecting stuff, you’re stepping over the burger parts to make full burgers that cascade down the screen. The silly theme of the game appealed to me, and the objective was pretty straightforward once you began. I found it to be rather soothing to play for some reason – it just felt satisfying to complete a burger.

Screenshot 2016-02-17 16.47.54

The next one I tried was Elevator Action. I like the look of this game and was intrigued by the idea of an entire game where you ride an elevator, but I found it sort of confusing to play. I wasn’t sure what my goal was, or how to interact with the different elements of the game. I bet playing it on a real cabinet rather than a poorly-explained emulator online would help, but I didn’t get into it enough to be hooked into trying again and again once I died.

Screenshot 2016-02-17 18.13.59

The last one I played was Track & Field – the hurdles, specifically. This is the only game of the three that simulates a real-world game (sports), so it was interesting for that reason. There’s something different about simulating something that exists in “real life,” because you come in with preconceived notions about how it’s supposed to work in the game.

I figured the hurdles game would be relatively easy, because all I had to do was jump at the right moment. But of course, like all arcade games, it wasn’t.

While I also enjoyed the aesthetics of this game, it didn’t hook me either, but for the opposite reason that Elevator Action didn’t: I felt like I understood how the game worked, and it was almost boring for that reason. It simulated something I understand in real life, but didn’t surprise me with anything new (in contrast with something like QWOP, of course).

Now I think I’m going to play more BurgerTime.

[live web week 3] a broken drawing thing

Screenshot 2016-02-16 22.26.00

Things did not go well for me this week for this assignment. I had a lot of trouble getting this collaborative drawing page to work, and it still basically doesn’t, even after a fair amount of work. But I figured I’ll just document my code sorrows for now and sort it out at a later time.

My goal was to let two people collaboratively draw on one canvas in different colors, with neither person’s mouse affecting the other’s drawing. This proved harder than I thought it would be. I did manage to get the drawing to happen only while the person is clicking (though it seems wonky), and to randomize the color with each page reload.

I made two separate canvases so that each person’s drawing would not affect the other’s. I wanted to layer the canvases on top of each other, but I couldn’t get that to work, so I just put them side by side. Now what happens is that you draw in the left box, and it shows up in the other person’s right box, in your color.

I also made an erase button that should wipe out your own box, but for some reason the image pops back up once you start drawing again.

On the plus side, I do feel like I’m starting to get a hang of how sockets work. I’m far from mastering it, but diving in this week has made me feel like I understand it way more than I did before.

Below is my extremely dysfunctional code.

https://gist.github.com/nicolehe/23415ff7b992d527f89e.js

[video and sound week 3] How to fly a kite storyboard

We’ve now moved on to the video portion of video and sound, and our first assignment is to draw a storyboard for a short video. I’m working with Melody and Aaron.

We decided to make a “How to Fly a Kite” video, though that may certainly change depending on how successful we are at flying a kite when it comes time to shoot. If that happens, the theme will likely be slightly different…

For now, we broke it up into three sections: 1) Get a kite, 2) Get the kite in the air, and 3) Keep the kite in the air.

Below are the storyboards we made. (Apologies for the lack of drawing ability in our entire group!)

IMG_0991

IMG_0992

IMG_0993

[icm week 3] Allergic to Love v. 1

Screenshot 2015-09-24 03.21.50

[TLDR: Here’s my (unfinished but finished-enough-for-this-assignment) game, Allergic to Love.]

Oh man, this was a rough one. The beginning was good: Esther and I partnered up to start this project. I really wanted to make a game, so we went down the path of trying to build something that involved shooting at objects that moved on the screen.

Together we got as far as making things fall down, making things shoot up, and coming up with an interim solution for making objects disappear as they got hit. Esther did some pretty awesome work figuring out how to use arrays and create functions, and then we took that code and built separate things on top of it.

I quickly realized that what I wanted to do this week was beyond the scope of what we’ve learned so far in class, so it was quite difficult to sort out some specific things. Once I came up with the theme of the game, it took me a really long time to figure out how to make all the falling objects in the shape of hearts. After some experimenting with for loops, making functions, arrays and bezier curves, I got it!
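The basic idea can be sketched as a pure function. This is one common way to draw a heart from two cubic bezier curves, not necessarily the exact control points I used:

```javascript
// Returns the control points for a heart built from two cubic
// beziers, one per half. (x, y) is the notch at the top of the
// heart; the halves mirror each other across the vertical line
// through x and meet at the bottom tip (x, y + size).
function heartControlPoints(x, y, size) {
  const left = [x - size / 2, y - size / 2, x - size, y + size / 3, x, y + size];
  const right = [x + size, y + size / 3, x + size / 2, y - size / 2, x, y];
  return { left, right };
}

// In a p5.js draw loop this could be used as:
//   const { left, right } = heartControlPoints(x, y, s);
//   beginShape();
//   vertex(x, y);
//   bezierVertex(...left);
//   bezierVertex(...right);
//   endShape(CLOSE);
```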

Screenshot 2015-09-23 19.55.00

This was very exciting. I started adding things like making them fall from different heights and at different speeds. Some of the trickiest stuff to figure out was how to make things happen in the win state and lose state. I ended up having to add a bunch of Boolean arrays. It also started to come together aesthetically.

Screenshot 2015-09-24 03.20.04

I added some fun things, like making the girl’s skin turn green every time a heart hit the ground. (She’s allergic, after all.)

//if the heart hits the ground
//(>= instead of == so hearts falling more than one pixel
//per frame can't skip past the ground line)
    if (hearts.y[i] >= height) {
      if (hearts.hitTheGround[i] === false) {
        hearts.onGround++;
        //skin changes color every time a heart lands
        skin.r = skin.r - 10;
        skin.g = skin.g + 5;
        skin.b = skin.b + 5;
        hearts.clr[i] = 0; //hearts turn black
      }
      hearts.hitTheGround[i] = true;
    }

I also had some crazy mishaps experimenting with the gradient that frankly look way cooler than what I was going for.

Screenshot 2015-09-24 00.55.23

There’s still a lot I want to do, and the code is incredibly unfinished and messy. But it’s almost 4 am and I got this thing in a playable state, so I guess now is as good a time as any to stop. And even though I’ve been playing it all evening, it’s pretty hard! I feel like there’s still a lot to improve on here, but this will have to do for version 1.

Screenshot 2015-09-24 03.38.15

Check out Allergic to Love, v.1!

My (extremely messy) code is below.

Continue reading “[icm week 3] Allergic to Love v. 1”

[pcomp week 3] The Lonely But Tender Ghost (digital and analog inputs, digital outputs)

IMG_0156

This week we learned how to program the Arduino to take inputs from our sensors and program them to make stuff happen.

I went to the Soft Lab workshop on Friday, where I learned how to sew a simple button, so I used that in the first example of alternating LEDs with a switch:

The fun part was using analog sensors to change the brightness of LEDs — I wired up a force sensor and a photocell to control two different LEDs on the breadboard.

I had a ton of ideas for our assignment to do something creative with these sensors this week, many of which sounded great in my mind but in reality were all varying degrees of unfeasible for the time being. One thing that stuck with me — newly inspired by the Soft Lab — was the idea of doing something with a doll or plushie. My goal was to make a plushie that gave you the sense that it had feelings.

I decided to go with a force sensitive resistor. The idea was that I’d make a plushie with LED eyes that would change color depending on how hard you squeezed it.

Here’s the circuit I built on the breadboard:

The map() function was really helpful for me to turn the input from the sensor into three different states, which I could then turn into colors. I learned how to use an RGB LED with the help of this example from Adafruit, and I ended up using the setColor() function written in that sketch in my final code.
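The gist of that sensor-to-color step, sketched in JavaScript for illustration (the thresholds and state names are assumptions, not the values from my actual Arduino sketch):

```javascript
// Same formula as Arduino's map(); Math.floor stands in for integer
// truncation, which matches for the non-negative values used here.
function map(x, inMin, inMax, outMin, outMax) {
  return Math.floor(((x - inMin) * (outMax - outMin)) / (inMax - inMin) + outMin);
}

function constrain(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi);
}

function squeezeColor(reading) {
  // Collapse a 0-1023 analog reading into three states:
  // 0 = not squeezed (white), 1 = tender squeeze (green),
  // 2 = squeezed too hard (red). For the homemade FSR described
  // below, which starts around 750, the input range would be
  // (750, 1023) instead of (0, 1023).
  const state = constrain(map(reading, 0, 1023, 0, 3), 0, 2);
  return ["white", "green", "red"][state];
}
```

Each state would then be handed to setColor() to light the RGB LED.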

IMG_0163

The next step was to make my plushie!

 

IMG_0151 IMG_0153

I realized that my original plan to sew two RGB LEDs into fabric as eyes was actually extraordinarily complicated, so I just made the light separate from the plushie and went with the next best thing: googly eyes.

I built my own force sensitive resistor with some conductive material and Velostat, and sewed it all up in some felt to make my little ghost plushie. I noticed that the input values from the commercial FSR ranged pretty accurately from 0 to 1023, but my homemade FSR started at around 750 rather than 0. I adjusted the variable in my code to accommodate this, and it worked perfectly well.

I decided to call him the Lonely But Tender Ghost. In his normal state, the light is white. When you squeeze him tenderly, the light turns green. If you squeeze him too hard the light turns red. 😦

This is just a basic first project, but hopefully later on I can further explore building an object that makes you feel like it’s expressing human feelings, perhaps creating sympathy or empathy in you, the user.

My full Arduino code is below.

Continue reading “[pcomp week 3] The Lonely But Tender Ghost (digital and analog inputs, digital outputs)”

[pcomp week 3] Design meets disability reading & observation

from the Alternative Limb Project
from the Alternative Limb Project

This week’s reading was Graham Pullin’s Design Meets Disability, discussing both objects explicitly used to counteract a disability, like prosthetics, as well as objects used by people of all abilities that have varying levels of inclusiveness. Glasses are cited as an example of successful design for disability, to the point that people don’t consider poor eyesight a disability because glasses have transitioned from being medical devices to fashion accessories. This reminds me of Norman’s phrase, “Attractive things work better.”

I appreciate this perspective in the context of physical computing. If we’re designing for the human body, it’s important to take into consideration the ways in which people’s bodies and abilities are different, and to not take any particular ability for granted. I think it’s neat to see examples of things designed specifically for, say, wheelchair users, but also to see products that keep different preferences of usage in mind (a clicking sound and sensation, for example.)

(A small note on the examples: it was fun to see Nick’s Bricolo because we used to work together at my old job before ITP!)

——-

For my observation assignment, I decided to watch people use the entrance to the subway. More specifically, I watched them use the Metrocard slider that collects their fare.

(Photo: a person swipes a MetroCard at a New York subway station. Bilgin S. Sasmaz/Anadolu Agency/Getty Images)

According to the MTA, people swipe in about 1.7 billion times a year. That’s a lot! I’ve probably done it a thousand times myself.

That said, it’s certainly not perfect. My assumption is that people who are accustomed to the system (which way to swipe, and the specific speed at which to swipe) can move through in about three seconds with no problem. But I predict that tourists, anyone with insufficient fare or some other Metrocard problem, and people who move too slowly will have trouble with the machine.

I watched people use the machines at Union Square because there’s a lot of activity there, from locals and tourists alike.

I noticed that the people using the machines generally fell into three groups:

  1. Confident and experienced users who got through with no problem
  2. Confused users who had problems, likely tourists
  3. Confident users who had a problem with their card

The first group was the majority of users, who moved through the system quickly. The second group usually approached the machines slowly and often in groups, and would swipe too slowly or too quickly, receiving the “Swipe card again at this turnstile” message. They would try again and again until it worked, which usually took more like 10 or 15 seconds.

The third group actually ran into the most trouble. Experienced, confident people moved forward at the speed of someone who would get through in a couple of seconds, but were halted abruptly if the card didn’t work. Sometimes they would almost run into the turnstile because their momentum carried them forward. Other times they nearly collided with the people behind them, especially if they had to turn back to refill their card.

In the case of insufficient fare, people had to go back to the machines to refill them, which could take up to a few minutes.

Developing the correct motion to swipe in a way that the machine understands is a skill that improves with practice. This is probably one reason why most other (and more modern) subway systems around the world use a tapping system, which seems to be easier for anyone using the machine, even if they’ve never done it before.

The insufficient-fare problem seems harder to solve. It’s not that riders aren’t informed of how much fare is left (it’s on the display when you swipe in), but that people forget they need to refill, even if they knew they had run out at the end of their last ride. The real issue is when riders are notified: ideally it would be when they walk into the station, not when they’re already at the turnstile.

A shorter term solution might be to design the space around the turnstiles in such a way that people can quickly exit the turnstile area if they need to, so it’s not a crowded bottleneck.