Work has kept me busy this year, but the past few months were slightly more of a challenge than usual when I was thrust into the world of front-end development. All of our front-end engineers were gone, updates were needed, and I like learning new things, so that was that. We have two React apps at work: one slightly older that uses Flux, another a bit newer using Redux. They were each configured by different people and demonstrated many different ways of doing things. After a few weeks of writing production code and quite a few frustrated IMs to friends, I reached a point where I was feeling good about my time with JS, React, and front-end development as a whole.
This is not a story about that, though.
I had it in my head that configuring Webpack, React, Redux, etc. from scratch was a pain in the ass. Since the setup portion was the area where I felt least confident, I thought I would do some research, find the best React starter repo, and use that as a template. After all, a popular project would probably reflect the current state of the art and include all kinds of helpful things that I and my former co-workers might not have known about!
I did some research. I wanted something that already had React 15 and the modern versions of React-Router and Redux. I wanted it to preferably already have a test framework set up. It had to use webpack, naturally, have SASS support, and hot reloading. These requirements didn’t seem too demanding, but finding the right library proved a little tough. There’s a lot of old stuff out there, a lot of libraries using old versions or missing pieces. Finally, I settled on an extremely active React starter project that had all the right versions, the right config, the right number of Github stars, the right number of contributors and people responding to issues. Sweet! I cloned, I copied into a new repo, I got to work.
Things were weird right away. The sprawl was insane. Unreal. Every conceivable tactic to split this project into extra files had been employed. Each route was split into pieces, some routes had dedicated components, others had dedicated containers, others… those are just routes, you have to find the rest of those pieces. The webpack config was split and split and split. Always quick to assume that I’m just a primitive Ruby engineer who doesn’t understand the ways of these sophisticated front-end professionals, I put my head down, refactored it a bit to make it more sensible to me, and pushed through.
It worked for a little while. Sure, every time I had to add a new route or component, I felt like I was jumping through flaming hoops that were also screaming at me and dancing around and covered in spikes, but… I just had to adapt to their modular approach! This was a “fractal” file organization, broken up by features. (Forget the fact that today’s unshared, feature-specific code and assets are tomorrow’s reusable time-savers, and that this reeked of premature optimization.) I spent more and more time. I wrote some cool code. Things started coming together.
Somewhere along the line, I simplified the routes, components, and containers to look more like one of my apps at work. It felt less sophisticated but no matter how much I read, I couldn’t find any evidence that anyone else was actually using this library’s approach to file organization. Oh yeah, also, my huge production apps at work looked nothing like this and they were doing fine. Oh, and no repo that I had ever looked at was organized like this. I again considered that maybe there was something wrong with this library’s approach.
Those flaming hoops that we had to jump through to make things work? There were no hoops at this point, just fire. We had to find a way to just jump through the fire and make things work.
After a lot of stress, we got everything working. The app was styled, everything looked good!
Finally – finally! – things seemed pretty stable. I thought everything could be left alone for a little while.
But then I read about this cool new library: webpack-dashboard. It made my webpack config look all… pretty, ‘n stuff! I wanted to use it. I started following the documentation aaaaand…
Everything broke. Everything fucking broke. I could not figure out where to even start making the simple changes required to make this thing work.
That was the end! Fuck this shit! I decided to find another library to use as a baseline for the project. I’d rip my code out, configure the stuff I didn’t want to configure before, and know that I had something predictable.
A few weeks after I started the new project, create-react-app was released. It was super barebones – no Redux, no Router, no tests, no SASS – but fuck it! I’d figure out the rest!
I cloned it, I ejected (cause I’m a control freak who likes to see more of his dependencies), and guess what jumped out at me immediately?
It was so. fucking. simple. There were so few files. It was so predictable. There were a few clearly configurable options, but it was all totally declarative, easy to expand, easy to reason about. I quickly wired up everything I needed, ported over my code, and had it working. Since then, there have probably been a dozen little tweaks I had to make to my webpack or test config and it never fucking surprises me or breaks. I keep an eye on the project’s repo to see what changes they’re making and if I see something I like, I follow their lead.
WHAT’S MY FUCKING POINT, THEN?
I’ve got a few.
My first point is that complicated code is not necessarily healthy code. Libraries that aim to be all things to all people run the risk of becoming amalgamations of many organizations’ infrastructures, not examples of how any one organization would ever handle their infrastructure! The first React starter library I used was more of a tech demo (“LOOK AT ALL THE CRAZY SHIT YOU CAN DO WITH REACT!”) than a reasonable approach to project organization.
Finally, remember that at the end of the day, doing things “right” by the standards of a community (of fucking strangers, who gives a shit?) should be secondary to feeling productive and shipping. Don’t feel obligated to change tech or put up with patterns that just do not seem to be sticking if it’s getting in the way of your work. I should have punted on the first library right away.
And that’s that. I’ve been happy as a clam since then, coding and shipping. My project is getting pretty intense and I’m eyeing TypeScript as a way to make things more reliable. I branched and started the conversion but noticed a lot of weirdness around the different approaches to obtaining type defs, so once I spend some time configuring my project for global typings and… WAIT A MINUTE. NOT THIS AGAIN. I’M WAITING FOR 2.0.
This situation is sad for many reasons. Among them, it’s a reminder that many people view others in very black and white, static terms, where reflection, discussion, and growth are impossible once any connection to “shady” ideas or people has been made. They see Death in June’s Douglas P as someone who, despite owning his mistakes as the actions of a young person trying to find his way in the 80s, will be forever tainted. Worse still, they see someone like Caïna’s Andy Curtis-Brignell as equally untouchable for agreeing that Douglas could grow up and should not be punished for mistakes he is trying to move past. These are the same people who reject reformed skinheads from participating, or who object to bands, even ones containing minorities, whose members have been photographed wearing shirts that they find questionable.
Call me crazy, but I think that the most powerful advocates for social change in metal are people with connections to things that make us uncomfortable. I have more than a few friends who had terrible opinions when they were young but grew out of them. The keys to this transformation: the opportunity to relate to people with different backgrounds and shared interests. The time and place to realize that their attitudes just don’t make sense.
Nothing erodes racist attitudes like finding common ground and building connections with different types of people. People who go through these transformations understand how others end up where they were. Their inclusion in the underground makes the statement that it’s possible to change. Their presence is crucial, even if what they contribute in the fight against extremism happens slowly and without any deliberate actions. The wholesale rejection of individuals who at some point engaged in actions we find questionable makes it harder, sometimes impossible, to reach those who need common ground the most. This goes for folks who grow out of terrible perspectives and those of us who are willing to accept change in others.
But back to Caïna and this specific event, Andy is exactly the type of person we need more of in the underground. Very few black metal artists are willing to make strong statements against discrimination – he is one of those few. He probably also has a surprising number of listeners who disagree with his views, which means that his music can often present opportunities for communication between people who would otherwise have trouble finding common ground. To prohibit his music and his fans from participating in an event because he chooses to believe that people can change, that art can unite, is a sad, sad decision with profound implications.
I think about this whole thing and wonder if there is someone somewhere who actually does have right-leaning tendencies and will take this as one more reason to feel persecuted and isolated. They’ll say that this is more “SJW PC nonsense,” they’ll be more willing to side with actual skinheads when they are kicked off of a show for actually being shitheads, they’ll make shittier friends, and they’ll fall deeper into this pit. I’m not saying that this one incident is the start of a slippery slope – I don’t see how it could be – but it contributes to a worldview that is best fought through inclusion and dialogue, not exclusion and shame.
A post the other day on /r/ruby asked about the reasoning behind the recommendation that instance variables be accessed through methods instead of directly throughout code where they are in scope. Most of the responses dealt with the fact that this helps make your code more maintainable, which is true, but I think there’s a more significant reason: it adds data to your class’s public interface, which encourages healthy design habits and forces some introspection.
In a carefully designed class, changes to the interface are taken seriously. Before you add, remove, or change a method, you should ask some questions, the most important of which (in my opinion) are, “Is this method reasonable, given the stated purpose of this class?” and “Is this method something that users can trust to be there and work properly in future releases of this library?” When you expose an instance variable as a method, you implicitly answer both of those questions in the affirmative, so think carefully about whether those answers hold for that instance variable!
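To make that concrete, here’s a minimal sketch (a hypothetical `Report` class, not from the Reddit thread or any real codebase) of the difference between promoting an instance variable into the public interface and keeping it as an implementation detail:

```ruby
# Hypothetical example: exposing an instance variable through a method
# makes it part of the class's public interface, so it deserves the
# same scrutiny as any other method.
class Report
  # A promise to users: `title` will be there and will keep working
  # in future releases.
  attr_reader :title

  def initialize(title)
    @title = title
  end

  private

  # @generated_at is an implementation detail, so its accessor stays
  # private instead of joining the public API.
  def generated_at
    @generated_at ||= Time.now
  end
end
```

`Report.new("Q3").title` is fair game for callers; `generated_at` is not, and a `NoMethodError` reminds them of that.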
When you find yourself with an instance variable that shouldn’t be exposed as part of the public API, consider that you might have a class with too much responsibility or an instance variable that you don’t need. Unneeded instance variables often come from doing this:
```ruby
def my_process
  get_foo
  use_foo_here
  use_foo_again
  return_value
end

def get_foo
  @foo = ObjectLookup.find_by(quality: 'foo')
end

def use_foo_here
  @bar = @foo + 10
end

def use_foo_again
  @baz = @bar + 20
end

def return_value
  @bar > 40 ? @bar : @foo
end
```
…when something like this is healthier in every way:
```ruby
def my_process
  foo = get_foo
  bar = plus_10(foo)
  baz = plus_20(bar)
  return_value(foo, baz)
end

def get_foo
  ObjectLookup.find_by(quality: 'foo')
end

def plus_10(obj)
  obj + 10
end

def plus_20(obj)
  obj + 20
end

def return_value(starting, ending)
  ending > 40 ? ending : starting
end
```
Yes, there are times when you want to reuse an expensive query multiple times internally within an object and exposing it to the public API would be inappropriate. That’s fine, just mark that method private. But that should be your last resort, not an excuse for extra data. Changes to the public API are an opportunity to question your approach and improve the quality of your code, so face them head on!
Since 2011, this blog has been powered by WordPress. As of today, it’s entirely static and generated by Jekyll. Isn’t it funny how things have changed?
WordPress is a very cool product. It does so many things so well and has managed to dominate such a huge portion of the web that it demands respect. I can sum up my feelings about it pretty easily:
- The templates and plugins system is great, mostly.
- The basic WYSIWYG editor is fantastic until you discover Markdown.
- It’s easy to update your installation and your plugins, except when it’s not.
- Update recommendations help keep you safe, as long as you check frequently.
In other words, its strong points are really strong but its weak points – its risky plugins, the horror that is troubleshooting when something breaks, the need to constantly be on top of security as a result of its huge API – are pretty big for someone who just wants a place to talk shit every few weeks or months.
Sometime last year, we launched an official site for Neo4j.rb, Neo4jrb.io, and decided to host through Github.io and use Middleman. It’s been wonderful. My big takeaways from a year-ish of working with a statically generated blog:
- Local dev and Git deployment > remote Admin page. It makes writing as convenient as coding.
- Vim and Markdown > WYSIWYG editor. It makes writing as fast as coding.
- We essentially don’t have to worry about security.
- Modifying HTML and CSS > modifying themes.
It was kind of a no-brainer. I wanted something to help me write more often and this seemed like a good excuse, so I took the plunge. Even though Github.io is free, I decided to self-host because I like having an SFTP server and space to experiment. A search revealed a WordPress plugin to migrate content straight to Jekyll, so that made the platform decision easy. Ultimately, migration was simple enough:
- Basic new server setup: nginx, users, updates, etc.
- Export old content using this plugin
- Install rbenv, Ruby, Jekyll and related gems
- Install git and configure post-receive hook, mostly described here
- Tweaks to the migrated blog: styling, fixing some things the migration didn’t get right, installing/configuring jekyll-archives
It took most of a day when all was said and done, but none of it was what I would describe as “hard.” There are still some styling tweaks to make but I’m happy with it, overall. The real hard part comes next: actually writing more often.
I like Air Jordan 1s. A lot. It's a new thing, especially since I hadn't owned a pair of shoes that wasn't all black since 1998, probably, but I'm enjoying this opportunity to step outside of my comfort zone and try something different.
Did you know that "sneakerbots" are a thing? Cause they are a thing. Limited edition sneakers are announced ahead of time and, like any limited, premium item, there is a market around buying and selling them, so, naturally, a market has emerged around the very means to buy them in the first place. I watched a few YouTube videos and from what I can tell, "sneakerbots" look like applications that watch the clock and then zoom through the checkout process as soon as the shoes go on sale. The kicker is that few of them are free. Buyers often pay $25-$75 for a license that claims to work with all of the big retailers' websites.
I've been reading /r/sneakers a lot lately and there seems to be some misunderstanding about how they work. In particular, it seems like a lot of people have this idea that sneakerbots are some magic tools that guarantee a successful purchase, but the truth is that they are limited by the web sites' servers' abilities to process requests. This, ultimately, is the greatest weakness of these things because it means that no matter how good your script is, you're always at the mercy of the server's ability to handle traffic. At the same time, it stands to reason that traffic will ramp up in the milliseconds after a pair goes on sale, so the faster you can hit submit, the better your chance of completing a purchase.
Regardless of whether paid sneakerbots are worth it, the whole thing bugs me. It's fucked up that most new releases are only accessible through eBay at 3x or more their original price. It's fucked up that I can buy new releases from literally hundreds of different auctions for $500+ but I've not once seen a pair on someone's feet in Manhattan or Brooklyn. These new releases are going straight from the shelves to eBay and I'm guessing that most will never see concrete. The paid sneakerbot market adds insult to injury, exploiting the desire to catch a drop without being able to guarantee a sale. The videos I saw looked shady as shit: you're installing some closed source software built anonymously and designed to exploit shopping carts. It has access to your browser, your account, probably your credit card info. Fuck everything about that.
Enter FSB, my open-source sneaker buyer in the form of a Chrome extension. I had been looking for an excuse to learn a bit more about extensions anyway and if I can give an alternative to paid shady software, that's cool. (I guess it'd be better if it was a bit easier to use, but whatever.) My current version is shitty: it targets Footlocker's mobile site, it only works with shoes, and there might be a bug that's keeping the checkout button from being clicked. (In fairness, it's a bug caused by Footlocker's mobile site, a mismatched cert name that stops JS execution.) I threw it together in a bit more than a day and when I tried to use it the morning of the Shattered Backboards release, Footlocker's shitty servers were a show-stopper. At the end of this post, I'll talk about how it could be improved and what it would take to build a better sneakerbot. You can find the Github repo at https://github.com/subvertallchris/fsb.
I'll walk through some of the things that took time to figure out.
Start by reviewing the docs on extension setup; I won't cover that here since there are plenty of good tutorials out there.
My manifest.json likely demands more permissions than it needs. It uses a non-persistent background page to maintain state, listen for events, and fire off new pieces of the process as events occur.
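For context, a manifest with that general shape might look something like this. This is a hypothetical sketch, not FSB's actual manifest; the file names, match patterns, and permissions are placeholders:

```json
{
  "manifest_version": 2,
  "name": "FSB",
  "version": "0.1.0",
  "permissions": ["tabs", "storage", "*://*.footlocker.com/*"],
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },
  "content_scripts": [
    {
      "matches": ["*://*.footlocker.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

`"persistent": false` is what makes the background page non-persistent (an "event page" in Chrome's terms): Chrome can unload it when idle, so long-lived state may need to go through the storage API.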
The cart and checkout content scripts both act as registration scripts. They ping the background page so it can record their tab ids and, if needed, start a process when it discovers them.
Chrome's extensions API makes it very easy to manipulate CSS and start scripts, but if you want to, say, get the value of a field from a form and use it somewhere else, or make the clicking of a button on tab 1 cause a button on tab 2 to be clicked, it can be tricky. background.js contains the bulk of the code that solves this problem and it feels a lot like how we use RabbitMQ to kick off actions on servers at work.
Evented Programming for Dummies
Most programmers, myself included, typically approach tasks from a procedural perspective. We think in terms of sequence: first you do A, then B, then C, and so on. In the Ruby language, we have something like this:
```ruby
def a_method
  do_this
  then_this
  finally_this
end

def do_this
  # some things happen
end

def then_this
  # some more things happen
end

def finally_this
  # final things happen
end
```
An evented model decouples one action from another. Instead of defining the sequence, you create independent actions that announce their behavior through messages, or events. Each action can listen for and respond to others. Think of it as, "When Section A completes, it announces, 'SECTION_A_DONE'." Anything that is listening for "SECTION_A_DONE" will start working on its own. Section A neither knows nor cares what happens after it and those actions that start don't care what fired the message, they just know that it's their turn to do something. We also do away with the idea of a central function that ensures that Section A flows into Section B. At my day job, we use a server called RabbitMQ to manage this process and some programming languages, like Go, have messaging baked in as core concepts.
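To make the contrast concrete, here's the example above restructured around messages, using a tiny, hypothetical in-process listener registry. `on` and `emit` are made-up helpers; a real system would lean on something like RabbitMQ:

```ruby
# A tiny, hypothetical in-process message bus, just to illustrate the
# idea. `on` registers a listener; `emit` announces an event.
LISTENERS = Hash.new { |hash, key| hash[key] = [] }

def on(event, &handler)
  LISTENERS[event] << handler
end

def emit(event)
  LISTENERS[event].each(&:call)
end

# Each step announces completion instead of calling the next step.
def do_this
  # some things happen
  emit('DO_THIS_DONE')
end

# This listener neither knows nor cares who emitted the message.
on('DO_THIS_DONE') do
  # some more things happen
  emit('THEN_THIS_DONE')
end

on('THEN_THIS_DONE') do
  # final things happen
end

# No central function sequences the steps; the messages do.
do_this
```

Once it's wired up this way, finishing `do_this` kicks off everything else without any method ever naming its successor.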
With FSB, I had a few different tasks happening independently of one another: I needed a cart open, I needed something else watching a timer and prepared to POST a form, I needed to listen for a checkout and complete the purchase when the cart was populated. There are some procedural elements to it but approaching it from an evented perspective let me think in terms of state, not process. This worked well with Chrome's Extensions API. Tabs are able to send messages to the persistent background page, and the background page can send messages back to tabs, but they can't communicate directly, so purely procedural programming wasn't possible. Messages were at the heart of the most complicated parts of the sneaker-buying event, but they also let me compartmentalize the process in a way that was easier to manage than it would have been otherwise, so building entirely around messages felt right.
Another nice feature of the background page: it can also inject arbitrary scripts into a page. These scripts execute in the context of the tab, but with an important limitation: depending on how the site's scripts are defined, certain functions may not be visible. For instance, a page might have jQuery loaded, but my injected scripts might not be able to see it.
On FSB's Operation...
FSB targets footlocker.com's mobile site. Mobile sites tend to load faster than their desktop equivalents, so this seemed like the thing to pursue. I also noticed there's no CSRF token required, so you can add an item to your mobile cart from anywhere on the site without having to load the product page first! Dumb. Sadly, the mobile and desktop sites share a bit of code that causes some JS errors that screwed me up later on, but whatever...
So, with all that in mind, take a look at the code. It's very straightforward. I suggest installing the Chrome apps and extensions developer tool so you can monitor the background page. Examine each message received and sent through background.js and you'll see how the process works. Each message injects a script, the script performs a task and sends a message back to the background page, which is received and starts another event. Messages can contain data, and message receipt can return a callback to the sender. This let me capture data from an active tab, send it to the background page for collection, and/or request information from elsewhere. In all cases, the background page listens and responds to requests.
This really is the pattern repeated throughout the whole process: script is executed, tab sends message that script has started, background page returns data or instructions based on the message sent from the tab, tab receives message and processes the script's body, tab sends message when complete, the process completes. Call and response.
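Since Ruby is my home turf, here's that call-and-response shape simulated in plain Ruby rather than the extension's actual JavaScript. Every class, method, and message name below is made up for illustration:

```ruby
# Simulating the pattern: the background page listens for messages
# from tabs and answers each one with data or instructions.
class BackgroundPage
  def initialize
    @tabs = {}
  end

  # Tabs ping the background page so it can record their tab ids.
  def register_tab(tab_id, tab)
    @tabs[tab_id] = tab
  end

  # Plays the role of Chrome's message listener plus its response
  # callback: receive a message, reply with data or instructions.
  def on_message(tab_id, message)
    case message
    when :script_started
      { instruction: :fill_cart, size: 10.5 }
    when :script_complete
      { instruction: :done }
    end
  end
end

class Tab
  def initialize(id, background)
    @id = id
    @background = background
    @background.register_tab(id, self)
  end

  # Script is executed, tab announces it, background returns
  # instructions, tab processes the script's body, tab reports back.
  def run_script
    response = @background.on_message(@id, :script_started)
    # ...do the work using response[:instruction] and response[:size]...
    @background.on_message(@id, :script_complete)
  end
end
```

In FSB the same back-and-forth happens through Chrome's messaging APIs instead of direct method calls, but the call-and-response rhythm is identical.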
The readme at https://github.com/subvertallchris/fsb walks through operation, but I suggest you monitor each page through Chrome's console throughout the process, the background page included. Each step announces its status. It WILL try to complete the purchase right now, so if you don't want that, comment out the click here. On the final page, the inspector revealed some SSL problems: it looked like a requested resource didn't match its cert, so scripts would sometimes stop running. I'm sure there's a workaround, but since my focus was on learning the extension, not being guaranteed a pair of sneakers, I didn't obsess over this detail.
There are two key areas that need work.
First, the UI of the page tab sucks. If I was trying to release this, I'd want something a bit friendlier, with a date/time picker and better size selection. I'd also want a more robust management console so you could see what was scheduled and what was active.
Second, it needs to make multiple simultaneous attempts to work around the weakness of Footlocker's servers. You'd need to manage the global state as well as the state of each tab.
I don't plan on going any further with this unless someone wants to help out. It was a fun project and I'm happy for the experience but I'm not looking to get any deeper into this nonsense. The money I've been spending on shoes is bad enough but I value my time even more and there are enough demands on that already!