Friday, April 17, 2015

The 0.1x Programmer is not a Myth


People that claim 10x programmers are a myth haven't spent enough time with terrible programmers.


Thanks to @kmoir, today I watched @jacobian's very interesting PyCon 2015 Keynote:
Jacob asserts that because most human abilities fall on a normal distribution, programming ability must as well. He makes a great analogy between marathon running ability and programming ability.

He illustrates his point with this chart:


I wholeheartedly agree that the mythical perception negatively affects a beginner's confidence and that it is hurting our industry pipeline. I completely disagree that programmers lie on a normal distribution. Like Jacob, I have no data to support my claims; however, I'll be honest and say so, rather than presenting my claims as fact.

First, I want to introduce the three pillars I think someone needs to be a successful programmer (these probably make you successful at most things): hard work, a desire to improve, and intelligence. I posit that these qualities are mutable, but generally it is better to find someone that already has high values of at least two of these qualities. For example, if you're trying to hire a programmer for an esoteric language without much of a community, you might look for someone with intelligence and a desire to improve. If you need someone to pore over thousands of lines of COBOL looking for two-digit dates, you might optimize your hiring for hard-workers; they might not need to know the deep inner workings of the systems they're modifying. They might just need to expand all the records that contain a date.

There exist a lot of hard-working, intelligent programmers. There are unfortunately not as many with a desire to improve. A friend once told me you can have X years of experience, or 1 year of experience X times. (I'm sure that quote is attributable to someone famous.) I think this is extremely common in programming.

I believe that programmers have programming ability on a distribution more like this (enjoy my professional illustration):
However, I also believe that programmers have job-acquisition ability on a normal distribution:
I don't think there is any correlation between these graphs. I've seen plenty of programmers that have great job-acquisition ability, but poor programming ability. I've also seen plenty of the inverse. 

Burying the Lede

If you assume that my anecdotal graphs above are true, then there are probably a lot of bad programmers employed today. In Jacob's talk, he mentions that the US government projects a 1.5 million gap between available jobs and available programmers by 2020. Thus, companies have a high incentive to hope that a bad programmer will become better. In my experience, most companies that employ bad programmers don't know they have bad programmers, and thus are unlikely to fire them, so bad programmers have little incentive to improve. In my 15 years of professional experience, the only times I've seen a bad programmer fired were for insubordination or other issues unrelated to ability.

TLDR Again

10x programmers are probably extremely rare. It is 0.1x programmers that are common. I've interviewed lots of them. I've worked directly with lots of them in my 6 years as a consultant and 4 years suffering in large midwestern corporations. I've also worked with some really good programmers. I've never worked with someone that I'd classify as a 10x programmer, but I've met a few 2x programmers.


In Jacob's talk, he mentions that he didn't invent django, but rather just happened to be an early user. He further claims to be a mediocre programmer without presenting any evidence. I think Jacob is simply undervaluing himself. I'm not part of the python community at all. I don't follow @jacobian on twitter. I could be completely wrong about the people that he's worked with, but based on a quick googling, he's only worked at The Lawrence Journal-World (with the inventors of django) and Heroku. I assume that means he never spent 2008 hacking on a massive EJB-based single-use custom web framework from 2002 for an insurance company. I'm not holding that up as a badge of honor or martyrdom, but as evidence of a vastly different experience.

For the Beginners

I agree with Jacob that 10x expectations are unrealistic and are intimidating beginners away from the field. However, a 10x difference in ability is real. We need to communicate to beginners not that they must be 10x badasses, but that most programmers are 0.1x clock-punchers who don't have much desire to improve. Beginners have an amazing opportunity to quickly surpass people with lots more experience. This doesn't require working 60 hours/week every week, though one might be required occasionally. It requires a desire to investigate the root causes of problems instead of inserting Thread.sleep(1) in your code when you have a race condition. It requires automating a database upgrade process instead of manually babysitting it every weekend so you can get easy overtime pay. It requires optimizing your build instead of accepting that it takes 2 minutes to redeploy your webapp after a simple JSP change. (These are all real-life 0.1x programmer examples.)
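To make the Thread.sleep(1) example concrete, here's a minimal Java sketch (the class, thread counts, and numbers are mine, purely for illustration). The 0.1x fix sprinkles sleeps around a plain int and hopes the threads interleave kindly; the real fix removes the race entirely by making the shared update atomic and waiting with join, so the result is correct on every run:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterExample {
    // The 0.1x fix would be: plain `int counter` plus Thread.sleep(1)
    // scattered around, hoping the increments don't collide.
    // The real fix: make the increment atomic so the race cannot happen.
    static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.incrementAndGet(); // atomic; no sleep needed
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for completion instead of sleeping and hoping
        }
        System.out.println(counter.get()); // 4 threads x 10,000 = 40000, every time
    }
}
```

With the sleep "fix" and a bare int, the printed total would come up short nondeterministically; this version is deterministic.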


In case you think I spent my past professional life in an awful gulag, I didn't at all. There were definitely some low points, but I had lots of opportunities to learn a vast array of different technologies, and I got the opportunity to work full time on iOS starting in 2009, which I'm still doing today.

Thursday, March 5, 2015

Mozilla Should Build an Android Permissions Delegator

Earlier today, I had a thought:
Of course, this wasn't novel:

However, this doesn't have to be an idea only used for evil. What if there existed a trusted app that exposed all permissions available to other apps at runtime? Mozilla seems like a good candidate to make this. They're a trusted organization dedicated to privacy and openness and they have the technical expertise to pull it off. Let's pretend one exists called Mozilla Services.

For end users, this could greatly improve the Android experience. Let's use Facebook (which I don't have installed because it requests WAY too many permissions) as an example:

Instead of this crazy long permissions list, Facebook could request only the ability to connect to the internet. (This permission is so common that Google hides it from the default list.) Then, when Facebook wants to do something evil like listen to my microphone when I post an update, it could do so at runtime by sending an Intent to the Mozilla Services App. Mozilla Services could keep an internal list of permissions granted to various apps (which users could revoke at any time). If Facebook hasn't been approved to use the microphone, Mozilla Services could pop up a dialog asking me to allow Facebook to use the microphone. Mozilla Services could then use the microphone and delegate access to the bitstream to Facebook. If I later decide I don't trust Facebook with microphone access, I could open Mozilla Services and remove Facebook's permission.
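The grant-tracking half of this is simple bookkeeping. Here's a hedged sketch in plain Java (every class, method, and package name is invented for illustration; the real thing would sit behind Android Intents and permission dialogs): a revocable map from app to granted permissions, which is all "Mozilla Services" would need to consult before delegating access or prompting the user.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical grant bookkeeping for a "Mozilla Services"-style delegator.
public class PermissionRegistry {
    private final Map<String, Set<String>> grants = new HashMap<>();

    // Record that the user approved this app for this permission.
    public void grant(String app, String permission) {
        grants.computeIfAbsent(app, k -> new HashSet<>()).add(permission);
    }

    // Checked when an app's Intent asks for a capability; if this returns
    // false, the real app would pop a dialog asking the user to approve.
    public boolean isGranted(String app, String permission) {
        return grants.getOrDefault(app, Collections.<String>emptySet()).contains(permission);
    }

    // The user can revoke any grant at any time.
    public void revoke(String app, String permission) {
        Set<String> s = grants.get(app);
        if (s != null) s.remove(permission);
    }

    public static void main(String[] args) {
        PermissionRegistry registry = new PermissionRegistry();
        System.out.println(registry.isGranted("com.facebook.katana", "RECORD_AUDIO"));
        registry.grant("com.facebook.katana", "RECORD_AUDIO");
        System.out.println(registry.isGranted("com.facebook.katana", "RECORD_AUDIO"));
        registry.revoke("com.facebook.katana", "RECORD_AUDIO");
        System.out.println(registry.isGranted("com.facebook.katana", "RECORD_AUDIO"));
    }
}
```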

Of course, Facebook will never adopt this scenario because it already has over 1 billion installs even with all the permissions it requires. However, some smaller app like Wonder Workshop's Path for Dash, which doesn't yet have 500 installs, might want to reduce friction for users, so it could use Mozilla Services to set up its Bluetooth connection.

Maybe you're way more influential than me and can convince Mozilla to build this. They're hiring Android/iOS engineers...

Monday, February 2, 2015

Towards an Ideal Cocoapod Project Structure

The Situation

I recently returned to iOS after a year-long detour into Android. On my last iOS project, I used cocoapods as my dependency manager, but this was bolted onto an existing codebase. I'm starting fresh now, and I want to find a canonical project structure for a cocoapod (think AFNetworking, CocoaLumberjack, etc).

The Problem

I don't want to have duplicate configuration between an Xcode project and a podspec. Ideally, cocoapods would configure a project with which I can do daily development.

The Solution

Development Pods! Specifically, create a demo project for your cocoapod, and use the demo project's Podfile to import your cocoapod into Xcode:

Using the files from a local path.
If you would like to develop a Pod in tandem with its client project you can use the path option.
pod 'MyPod', :path => 'MyPod/MyPod'
Using this option CocoaPods will assume the given folder to be the root of the Pod and will link the files directly from there in the Pods project. This means that your edits will persist to CocoaPods installations.

I created a Single-View iOS App, which yielded a project structure like this:

The podspec is a supporting file, so it belongs in MyPod/MyPod with MyPod-Info.plist and friends. Now, if I want to modify my cocoapod's configuration by adding an AFNetworking dependency, I only need to change my podspec and run pod install.
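For completeness, here's a hypothetical MyPod.podspec sketch (every name, URL, and version below is invented for illustration; AFNetworking 2.x was current at the time). The dependency line at the bottom is the only thing that changes before re-running pod install:

```ruby
# Hypothetical MyPod.podspec -- all names, URLs, and versions are illustrative.
Pod::Spec.new do |s|
  s.name         = 'MyPod'
  s.version      = '0.0.1'
  s.summary      = 'A demo pod.'
  s.homepage     = 'https://example.com/MyPod'
  s.license      = 'MIT'
  s.author       = { 'Me' => 'me@example.com' }
  s.source       = { :git => 'https://example.com/MyPod.git', :tag => s.version.to_s }
  s.source_files = 'MyPod/MyPod/**/*.{h,m}'
  s.platform     = :ios, '7.0'

  # The only change needed to pick up AFNetworking; then `pod install`
  # from the demo project regenerates the Pods project.
  s.dependency 'AFNetworking', '~> 2.0'
end
```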

Saturday, September 28, 2013

Cocoapods - ARGV - Returns Options as a Hash, OCMock only for Tests target.

Tonight I actually made my cocoapods objc port use cocoapods. Most of the point of this adventure is for me to deeply learn cocoapods. I initially wanted to replace some manually written stubs in my first ARGV test with OCMock, but OCMock doesn't support stubbing -[NSObject description], so I didn't actually need it. I imagine I'll need OCMock at some point, and it was also a good exercise in learning how to set up cocoapods dependencies only for specific targets, so I left it in.
I also tried implementing the "returns options as a hash" spec, but ran out of time. Ben Chatelain suggested I use FSArgumentParser, but I don't think that's going to work. However, FSArgumentParser depends upon CoreParse, and that looks like it'll work out great. I'll probably use FSArgumentParser as an example of how to use CoreParse for my purposes, so I think Ben's suggestion will pay many dividends. I love knowing smart people.

Friday, September 27, 2013

Cocoapods adventures

We've started using Cocoapods at work, and it seems like a really great system. However, it is written in ruby, and I hypothesize that it would get more participation from the community if it were written in Objective-C. Also, since Cocoapods is a specification as well as a system, it would benefit from a separate (though in this case not clean-room) implementation. Thus, I'm going to rewrite Cocoapods in Objective-C and blog about my journey along the way.
Thankfully, Cocoapods looks pretty modular, and it looks like it has good tests. I'm starting with CLAide, which is Cocoapods' command-line aide. This has RSpec tests, which I know nothing about, but I can generally figure out what they're doing just by reading the text.
Tonight, I just wanted to start something. I wasn't too concerned about crazy progress. I simply cloned CLAide and figured out how to run the existing RSpec tests. I found the .travis.yml file with a script command that looked promising, so I tried running that, and installed required gems (bundler, rspec) until it worked.
Next, I just converted the first two tests in the first spec (argv_spec.rb). I'm doing at least one commit for each test I convert. Also, I'm keeping all this work on a separate branch. Eventually, I'll just have a parallel Foundation command-line tool project that I'll keep up to date.
Tomorrow, I'll tackle the first real piece of parsing necessary. Hopefully, I'll have ARGV converted by Sunday night.

Saturday, June 15, 2013

All the smart kids are writing blog posts about exceptions

Since all the smart kids are writing blog posts about exceptions, I figure if I write one, I might be considered smart by association. :)

I favor automation in all things. I will happily supply metadata with my code that will allow it to be automatically verified to be correct, or at least more correct than it could be without the metadata. This colors my viewpoint on exceptions.

This all started because of Craig Buchek's tweet:
I prefer code that enumerates exactly what it will return and how it might fail via the type system. I used to prefer checked exceptions everywhere for this, but I've since changed to just returning either a good value or an error.
Old Java:
class Foo {
  Foo foo() throws FooException {
    if (Math.random() < .5) {
      throw new FooException();
    } else {
      return new Foo();
    }
  }

  Foo bar(Foo foo) throws FooException {
    if (Math.random() < .5) {
      throw new FooException();
    } else {
      return foo;
    }
  }

  String baz() throws BazException {
    try {
      Foo foo = foo();
      Foo barFoo = bar(foo);
      return "I ran the gauntlet with " + barFoo;
    } catch (FooException f) {
      throw new BazException(f);
    }
  }
}

With this code, you can capture all your handling for common errors in one place, and because FooException is a checked exception, the compiler will happily tell you if you forget to handle your error cases. Yay!

However, errors are data just like everything else, and they don't need to be shunted off into a separate world. One might assume that if we remove exceptions, we'll end up with a lot of annoying if checks:
class Foo {
  Either<Foo, FooException> foo() {
    if (Math.random() < .5) {
      return new Either<>(new FooException());
    } else {
      return new Either<>(new Foo());
    }
  }

  Either<Foo, FooException> bar(Foo foo) {
    if (Math.random() < .5) {
      return new Either<>(new FooException());
    } else {
      return new Either<>(foo);
    }
  }

  Either<String, BazException> baz() {
    Either<Foo, FooException> eitherFooOrFooException = foo();
    if (eitherFooOrFooException.hasLeft()) {
      Either<Foo, FooException> eitherBarFooOrFooException = bar(eitherFooOrFooException.left());
      if (eitherBarFooOrFooException.hasLeft()) {
        return new Either<>("I ran the gauntlet with " + eitherBarFooOrFooException.left());
      } else {
        return new Either<>(new BazException(eitherBarFooOrFooException.right()));
      }
    } else {
      return new Either<>(new BazException(eitherFooOrFooException.right()));
    }
  }
}

It doesn't have to be that way. Jessica Kerr describes how in detail. However, Java idioms and language limitations do make these functional styles difficult. So, if you're in Java-land, I won't look down upon you for sticking with Exceptions, as long as they're checked.
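To give a flavor of what that functional style buys you, here's a sketch of the same baz() pipeline written as a flat chain (the Result type below is my own toy, not anything built into Java; the "foo"/"bar" strings are stand-ins for real values):

```java
import java.util.function.Function;

// A tiny hand-rolled success-or-error type, invented for illustration.
final class Result<T> {
    private final T value;
    private final Exception error;

    private Result(T value, Exception error) { this.value = value; this.error = error; }
    static <T> Result<T> ok(T value) { return new Result<>(value, null); }
    static <T> Result<T> err(Exception e) { return new Result<>(null, e); }

    // Run the next step only if this step succeeded; otherwise carry the error along.
    <U> Result<U> flatMap(Function<T, Result<U>> next) {
        return error == null ? next.apply(value) : Result.err(error);
    }

    <U> Result<U> map(Function<T, U> f) {
        return flatMap(v -> Result.ok(f.apply(v)));
    }

    T orElse(T fallback) { return error == null ? value : fallback; }
}

public class Gauntlet {
    static Result<String> foo() { return Result.ok("foo"); }
    static Result<String> bar(String s) { return Result.ok(s + "+bar"); }

    // The same pipeline as baz(), but as a flat chain: no nested ifs,
    // and the first error short-circuits the rest automatically.
    static Result<String> baz() {
        return foo()
            .flatMap(Gauntlet::bar)
            .map(s -> "I ran the gauntlet with " + s);
    }

    public static void main(String[] args) {
        System.out.println(baz().orElse("failed"));
    }
}
```

The error plumbing lives in flatMap once, instead of being restated at every call site.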

Saturday, December 1, 2012

Objective-C Does Not Belong Outside of Mobile

The power of Objective-C comes from its tight and nearly seamless integration with C. The power of C comes from its focus on efficiency and performance and its quasi-portability. Both of these qualities are essential for mobile development. Mobile devices run on a number of different platforms with different binary types (Android uses ELF, while iOS uses Mach-O) and generally have tight clock and memory constraints. Given these constraints, C is a great choice.

However, C doesn't support object-oriented niceties that were fashionable 10-15 years ago, and it doesn't support functional niceties that are en vogue now. Objective-C does a passable job at the former and is trying to get better at the latter with the help of ReactiveCocoa and friends.

So, given that Objective-C minimally satisfies the requirements of a modern language, should we extend it to the web? I don't think so. The web doesn't operate under the same CPU and memory constraints, so we can use some of that power to make developers more efficient. In cases where bottlenecks occur, most programming environments still support C development. Objective-C may make sense there. However, most developers aren't building Facebook, and Facebook has even figured out a way to make its code more efficient while harnessing the power of its army of highly skilled PHP developers.

Kevin Lawler makes a case for Objective-C on the server. I'll break down my disagreements with his article below. I'll use Java as a counterexample since it is currently the dominant player and is widely derided as an awful language. If Objective-C can't beat Java, it doesn't stand a chance against ruby, python, javascript, clojure, scala, or any other server-side cult hits.

Objective-C is not a joy:

In the past few years, quietly, almost invisibly, Apple has transformed its Objective-C language into the best language available. I have been working with Objective-C since the release of the iPhone App Store in 2008. In that time Objective-C has evolved from a clunky, boilerplate-heavy language, into a tight, efficient joy. It is an amazing tool. Anything that I would not write in C I would want to write in Objective-C, were support available.
No. Objective-C is not an efficient use of developer time, nor is it a joy. There is still a ton of boilerplate present. I'm sure there are many other things I hate about Objective-C the language, but these are just a few things I came up with off the top of my head.
  • I must declare both an @interface and @implementation for every class.
  • The compiler doesn't support circular references; you must forward-declare classes and protocols that are circularly referenced with @class and @protocol.
  • There is no support for namespaces, leading to names like kBluetoothAMPManagerCreatePhysicalLinkResponseAMPDisconnectedPhysicalLinkRequestReceived
  • There are no private methods. There is no way to be sure that a subclass won't accidentally override one of my internal methods simply by reusing one of my names.
  • No type variables (generics)
  • Can't add nil to Foundation collections
  • No support for inner classes
  • It is a superset of C, so everything you hate about C goes here too.

A paragraph full of fail:

To understand the opportunity facing Objective-C it will help to summarize where Java fails. The original promise of Java was that an application written once would compile and deploy on any architecture. Ignoring that this is false, web shops don't use Java for this reason. Platform inconsistency is an issue for almost no one, and it was never an issue to port correct C/C++ code, universal compatibility being the original promise of C as well. This promise however spurred the creation of the JVM, which was Java's first mistake. The JVM is a nonsense abstraction over the assembly and UNIX system layer. Now code runs through an additional layer, which can be slow, and system interactions must be translated through an otherwise pointless, just-for-this-purpose Java API. In the ideal case, this API replicates the entirety of the UNIX system layer in Java-ese, obscuring any helpful C idioms or UNIX-system knowledge in the process, and creating a pointless set of new knowledge to understand. In the less than ideal case, the API fails to implement system-level functionality and creates a barrier between the application and the machine.
So much is wrong with this paragraph.
Ignoring that this is false, web shops don't use Java for this reason.
There are certainly edge cases, but this is far from false. Java is very much write-once run everywhere. More importantly, it is compile once, link and run everywhere. I'll grant you that some Java libraries have crazy mavenized builds that can take way too long to run, but that isn't Java's fault. Code that you write in Java will compile the same way on any machine and run basically the same way on any other machine, regardless of endianness, instruction width, or operating system. Web shops depend on this to easily consume libraries from all over the Java ecosystem. I invite Mr. Lawler to look at the amount of platform-specific code in the average Apache Jakarta project. I'll bet it is effectively zero.
Platform inconsistency is an issue for almost no one, and it was never an issue to port correct C/C++ code, universal compatibility being the original promise of C as well.
I'm hardly a C badass, but I'm pretty good at Java. I'm not at all confident in my ability to write correct platform-independent C code. Thankfully, I only target iOS, and I only build with Xcode, just like every other iOS developer out there. Nearly all iOS developers build with Clang, and most are on one of a few key versions. I can't imagine trying to build a reusable library that would compile and link with all the different C/C++ compilers/linkers and run on nearly any system.

Even if we ignore the need to write universally compatible code, I would have to invest great amounts of time in consuming any library that wasn't written specifically for my platform. If I wanted to use Apache commons-codec, but it was written on 32-bit Windows, I'd be very skeptical about consuming it on 64-bit Mac without a thorough review. I have no such concern about Java. I can't believe it is 2012, and I have to make that argument.
This promise however spurred the creation of the JVM, which was Java's first mistake. The JVM is a nonsense abstraction over the assembly and UNIX system layer.
No. The JVM is a portable UNIX system layer that runs everywhere. The JVM brought UNIX to Windows, and its portable bytecode has enabled an amazing ecosystem of languages, most of which have the power to interoperate.
Now code runs through an additional layer, which can be slow, and system interactions must be translated through an otherwise pointless, just-for-this-purpose Java API.
I guess Mr. Lawler has never heard of the Hotspot JVM's amazing inlining technologies in its JIT compiler. Also, he links to a stackoverflow answer that basically says Java is not slow at all, though the answer does list some downsides with respect to memory and startup time, both of which aren't issues for the web.

A few more nits to pick:

Garbage collection makes execution (and memory usage) unpredictable. You cannot postpone garbage collection forever. The more critical the execution, the more you want to postpone garbage collection, but the longer you postpone garbage collection, the more of a problem it will eventually be. This is a disaster for applications that need to scale.
There is truth here, but Azul's Zing basically blows it all away.
Oracle now owns Java and is a hostile entity. Java is done. Its future as a product is finished.
Although Oracle decimated the JCP, they have been a terrific steward of Java's feature set. They actually shipped Java 7, which greatly improved performance and includes many new features (including lots of missing UNIX APIs).
As an aside, tying any new language to the Oracle JVM is destined to be a mistake, for reasons previously mentioned.
The JVM is a great place to run a new language. JRuby applications saw free performance gains of up to 30% just from moving from Java 6 to Java 7.
In practice, object-oriented programming lets large teams of competent programmers build usable software. The same is rarely said for functional languages. In cases where functional language applications do succeed, they are often treated as prototypes and rewritten.
I can point you to a whole army of people that would disagree with that. I know of 3 different companies just in St. Louis that build very big systems with functional platforms. Of course, this is only anecdotal evidence, but I'm sure functional languages are taking off if a small town like St. Louis has such a good showing. Of course, twitter famously uses functional languages for many things, including processing its massive logs.

The lines that caused me to write this blog post:

5. Xcode is an excellent IDE, with tolerably good git support
This line just killed me. Xcode is the biggest piece of shit modern IDE I've ever used. I outlined my hatred in a presentation I gave to Lambda Lounge. If Mr. Lawler thinks it is excellent, he's clearly never used Eclipse JDT or IntelliJ IDEA, and he's never been amazed at swapping hot code at runtime with JRebel. He must not have ever wanted to use a structural code formatter, either. I'm sure he doesn't care about creating plugins or modifying the tools he uses. I doubt he wants a transparent bug reporting system.
IDE auto-completion works wonderfully.
No, it doesn't. It just barely works. If there are any compilation errors in the class, the autocomplete fails. It stupidly suggests methods from all over the various C APIs that I've never used and doubt would ever be applicable to my code. It never prioritizes local variables or the methods I use most often. "NSS" doesn't complete to "NSString"; it completes to "NSStream". "NSLo" becomes "NSLoadedClasses".
The library import process is less tedious than in Java.
I think Mr. Lawler is trolling me. Xcode has no auto-import at all. In Eclipse, if I type "Set", "import java.util.Set;" is automatically added at the top. I would love this feature in Xcode. If I want a library in Xcode, I have to use the mouse to navigate through 5 modal operations.

Still Good For Mobile

Again, mobile development operates under a different set of constraints. We need C on mobile, and we need a modern superset of C to build mobile applications. Objective-C is a decent choice. As I outlined above, it has warts, but they are tolerable given the constrained environment of mobile. Thanks for journeying with me on my insane screed.